Context Selection for Embedding Models

Li-Ping Liu* (Tufts University), Francisco J. R. Ruiz (Columbia University and University of Cambridge), Susan Athey (Stanford University), David M. Blei (Columbia University)

Abstract

Word embeddings are an effective tool to analyze language. They have recently been extended to model other types of data beyond text, such as items in recommendation systems. Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items). In this paper, we show that conditioning on all the elements in the context is not optimal. Instead, we model the probability of the target conditioned on a learned subset of the elements in the context. We use amortized variational inference to automatically choose this subset. Compared to standard embedding models, this method improves predictions and the quality of the embeddings.

1 Introduction

Word embeddings are a powerful model to capture the latent semantic structure of language. They can capture the co-occurrence patterns of words (Bengio et al., 2006; Mikolov et al., 2013a,b,c; Pennington et al., 2014; Mnih and Kavukcuoglu, 2013; Levy and Goldberg, 2014; Vilnis and McCallum, 2015; Arora et al., 2016), which allows for reasoning about word usage and meaning (Harris, 1954; Firth, 1957; Rumelhart et al., 1986). The ideas of word embeddings have been extended to other types of high-dimensional data beyond text, such as items in a supermarket or movies in a recommendation system (Liang et al., 2016; Barkan and Koenigstein, 2016), with the goal of capturing the co-occurrence patterns of objects. Here, we focus on exponential family embeddings (EFE) (Rudolph et al., 2016), a method that encompasses many existing methods for embeddings and opens the door to bringing expressive probabilistic modeling (Bishop, 2006; Murphy, 2012) to the problem of learning distributed representations.
In embedding models, the object of interest is the conditional probability of a target given its context. For instance, in text, the target corresponds to a word in a given position and the context consists of the words in a window around it. For an embedding model of items in a supermarket, the target corresponds to an item in a basket and the context consists of the other items purchased in the same shopping trip. In this paper, we show that conditioning on all elements of the context is not optimal. Intuitively, this is because not all objects (words or items) necessarily interact with each other, though they may appear together as target/context pairs. For instance, in shopping data, the probability of purchasing chocolates should be independent of whether bathroom tissue is in the context, even if the latter is actually purchased in the same shopping trip.

With this in mind, we build a generalization of the EFE model (Rudolph et al., 2016) that relaxes the assumption that the target depends on all elements in the context. Rather, our model considers that the target depends only on a subset of the elements in the context. We refer to our approach as context selection for exponential family embeddings (CS-EFE).

* Li-Ping Liu's contribution was made when he was a postdoctoral researcher at Columbia University.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Specifically, we introduce a binary hidden vector to indicate which elements the target depends on. By inferring the indicator vector, the embedding model is able to use more related context elements to fit the conditional distribution, and the resulting learned vectors capture more about the underlying item relations. The introduction of the indicator comes at the price of solving an inference problem. Most embedding tasks have a large number of target/context pairs and require a fast solution to the inference problem.
To avoid solving the inference problem separately for all target/context pairs, we use amortized variational inference (Dayan et al., 1995; Gershman and Goodman, 2014; Korattikara et al., 2015; Kingma and Welling, 2014; Rezende et al., 2014; Mnih and Gregor, 2014). We design a shared neural network structure to perform inference for all pairs. One difficulty here is that the varied sizes of the contexts require varied input and output sizes for the shared structure. We overcome this problem with a binning technique, which we detail in Section 2.3.

Our contributions are as follows. First, we develop a model that allows conditioning on a subset of the elements in the context in an EFE model. Second, we develop an efficient inference algorithm for the CS-EFE model, based on amortized variational inference, which can automatically infer the subset of elements in the context that are most relevant to predict the target. Third, we run a comprehensive experimental study on three datasets, namely, MovieLens for movie recommendations, eBird-PA for bird watching events, and grocery data for shopping behavior. We found that CS-EFE consistently outperforms EFE in terms of held-out predictive performance on the three datasets. For MovieLens, we also show that the embedding representations of the CS-EFE model have higher quality.

2 The Model

Our context selection procedure builds on models based on embeddings. We adopt the formalism of exponential family embeddings (EFE) (Rudolph et al., 2016), which extend the ideas of word embeddings to other types of data such as count or continuous-valued data. We briefly review the EFE model in Section 2.1. We then describe our model in Section 2.2, and we put forward an efficient inference procedure in Section 2.3.

2.1 Exponential Family Embeddings

In exponential family embeddings (EFE), we have a collection of J objects, such as words (in text applications) or movies (in a recommendation problem).
Our goal is to learn a vector representation of these objects based on their co-occurrence patterns. Let us consider a dataset represented as a (typically sparse) N × J matrix X, where rows are datapoints and columns are objects. For example, in text applications each row corresponds to a location in the text, and it is a one-hot vector that represents the word appearing in that location. In movie data, each entry x_nj indicates the rating of movie j for user n.

The EFE model learns the vector representation of objects based on the conditional probability of each observation, conditioned on the observations in its context. The context c_nj = [(n_1, j_1), (n_2, j_2), ...] gives the indices of the observations that appear in the conditional probability distribution of x_nj. The definition of the context varies across applications. In text, it corresponds to the set of words in a fixed-size window centered at location n. In movie recommendation, c_nj corresponds to the set of movies rated by user n, excluding j.

In EFE, we represent each object j with two vectors: an embedding vector ρ_j and a context vector α_j. These two vectors interact in the conditional probability distributions of each observation x_nj as follows. Given the context c_nj and the corresponding observations x_{c_nj} indexed by c_nj, the distribution for x_nj is in the exponential family,

  p(x_nj | x_{c_nj}; α, ρ) = ExpFam( t(x_nj), η_j(x_{c_nj}; α, ρ) ),    (1)

where t(x_nj) is the sufficient statistic of the exponential family distribution, and η_j(x_{c_nj}; α, ρ) is its natural parameter. The natural parameter is set to

  η_j(x_{c_nj}; α, ρ) = g( ρ_j^(0) + (1 / |c_nj|) ρ_j^⊤ Σ_{k=1}^{|c_nj|} x_{n_k j_k} α_{j_k} ),    (2)

where |c_nj| is the number of elements in the context, and g(·) is the link function (which depends on the application and plays the same role as in generalized linear models). We consider a slightly different form for η_j(x_{c_nj}; α, ρ) than in the original EFE paper by including the intercept terms ρ_j^(0). We also average the elements in the context.
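As an illustration, the natural parameter of Eq. 2 amounts to a few lines of NumPy. This is our own sketch, not the authors' implementation; the function name, the toy inputs, and the identity link (one possible choice of g) are assumptions.

```python
import numpy as np

def efe_natural_param(rho0_j, rho_j, alpha_ctx, x_ctx, link=lambda t: t):
    """Natural parameter of Eq. 2: the intercept plus the embedding's
    inner product with the context vectors, each scaled by its
    observation and averaged over the context."""
    # alpha_ctx: (m, K) context vectors; x_ctx: (m,) context observations
    summed = (x_ctx[:, None] * alpha_ctx).sum(axis=0)  # sum_k x_{n_k j_k} alpha_{j_k}
    return link(rho0_j + rho_j @ summed / len(x_ctx))

# example: intercept 0, rho_j = [1, 1], two context items with unit
# observations and unit context vectors -> eta = 0 + (1/2) * 4 = 2
eta = efe_natural_param(0.0, np.ones(2), np.ones((2, 2)), np.ones(2))
```

For a context of size m and embedding dimension K, `alpha_ctx` stacks the m context vectors row-wise and `x_ctx` holds the m context observations.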
These choices generally improve the model performance. The vectors α_j and ρ_j (and the intercepts) are found by maximizing the pseudo-likelihood, i.e., the product of the conditional probabilities in Eq. 1 for each observation x_nj.

2.2 Context Selection for Exponential Family Embeddings

The base EFE model assumes that all objects in the context c_nj play a role in the distribution of x_nj through Eq. 2. This is often an unrealistic assumption. The probability of purchasing chocolates should not depend on the context vector of bathroom tissue, even when the latter is actually in the context. Put formally, there are domains where the elements in the context interact only selectively in the probability of x_nj. We now develop our context selection for exponential family embeddings (CS-EFE) model, which selects a subset of the elements in the context for the embedding model, so that the natural parameter only depends on objects that are truly related to the target object.

For each pair (n, j), we introduce a hidden binary vector b_nj ∈ {0, 1}^|c_nj| that indicates which elements in the context c_nj should be considered in the distribution for x_nj. Thus, we set the natural parameter as

  η_j(x_{c_nj}, b_nj; α, ρ) = g( ρ_j^(0) + (1 / B_nj) ρ_j^⊤ Σ_{k=1}^{|c_nj|} b_njk x_{n_k j_k} α_{j_k} ),    (3)

where B_nj = Σ_k b_njk is the number of non-zero elements of b_nj.

The prior distribution. We assign a prior to b_nj such that B_nj ≥ 1 and

  p(b_nj; π_nj) ∝ Π_k (π_njk)^{b_njk} (1 − π_njk)^{1 − b_njk}.    (4)

The constraint B_nj ≥ 1 states that at least one element in the context needs to be selected. For values of b_nj satisfying the constraint, their probabilities are proportional to those of independent Bernoulli variables with hyperparameters π_njk. If π_njk is small for all k (near 0), then the distribution approaches a categorical distribution. If a few π_njk values are large (near 1), then the constraint B_nj ≥ 1 becomes less relevant and the distribution approaches a product of Bernoulli distributions.
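Since p(b_nj; π_nj) is a product of Bernoullis renormalized over the vectors with at least one nonzero entry, rejection sampling draws from the constrained prior exactly. The sketch below is our own illustrative code (not from the paper); it also includes the per-element prior scale that is introduced in Eq. 5.

```python
import numpy as np

def sample_selector(pi, rng, max_tries=1000):
    """Draw b ~ p(b; pi) of Eq. 4: independent Bernoulli(pi_k) entries,
    restricted via rejection to vectors with B = sum_k b_k >= 1."""
    for _ in range(max_tries):
        b = (rng.random(pi.shape) < pi).astype(np.int8)
        if b.sum() >= 1:                 # enforce the constraint B_nj >= 1
            return b
    raise RuntimeError("only all-zero draws; pi is numerically zero")

def prior_prob(pi_global, beta, context_size):
    """Per-element prior of Eq. 5: pi * min(1, beta / |c_nj|)."""
    return pi_global * min(1.0, beta / context_size)
```

Accepted draws follow exactly the truncated distribution of Eq. 4, because rejection keeps the relative probabilities of all vectors with B ≥ 1 unchanged.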
The scale of the probabilities π_nj has an impact on the number of elements to be selected as the context. We let

  π_njk ≡ π_nj = π min(1, β / |c_nj|),    (5)

where π ∈ (0, 1) is a global parameter to be learned, and β is a hyperparameter. The value of β controls the average number of elements to be selected. If β tends to infinity and we hold π fixed to 1, then we recover the basic EFE model.

The objective function. We form the objective function L as the (regularized) pseudo log-likelihood. After marginalizing out the variables b_nj, it is

  L = L_reg + Σ_{n,j} log Σ_{b_nj} p(x_nj | x_{c_nj}, b_nj; α, ρ) p(b_nj; π_nj),    (6)

where L_reg is the regularization term. Following Rudolph et al. (2016), we use ℓ2-regularization over the embedding and context vectors. It is computationally difficult to marginalize out the context selection variables b_nj, particularly when the cardinality of the context c_nj is large. We address this issue in the next section.

2.3 Inference

We now show how to maximize the objective function in Eq. 6. We propose an algorithm based on amortized variational inference, which shares a global inference network across all local variables b_nj. Here, we describe the inference method in detail.

Variational inference. In variational inference, we introduce a variational distribution q(b_nj; ν_nj), parameterized by ν_nj ∈ R^|c_nj|, and we maximize a lower bound L̃ of the objective in Eq. 6, L ≥ L̃, with

  L̃ = L_reg + Σ_{n,j} E_{q(b_nj; ν_nj)}[ log p(x_nj | x_{c_nj}, b_nj; α, ρ) + log p(b_nj; π_nj) − log q(b_nj; ν_nj) ].    (7)

Maximizing this bound with respect to the variational parameters ν_nj corresponds to minimizing the Kullback-Leibler divergence from the posterior of b_nj to the variational distribution q(b_nj; ν_nj) (Jordan et al., 1999; Wainwright and Jordan, 2008). Variational inference was also used for EFE by Bamler and Mandt (2017).

The properties of this maximization problem make this approach hard in our case. First, there is no closed-form solution, even if we use a mean-field variational distribution.
Second, the large size of the dataset requires fast online training of the model. Generally, we cannot fit each q(b_nj; ν_nj) individually by solving a set of optimization problems, nor even store ν_nj for later use. To address the former problem, we use black-box variational inference (Ranganath et al., 2014), which approximates the expectations via Monte Carlo to obtain noisy gradients of the variational lower bound. To tackle the latter, we use amortized inference (Gershman and Goodman, 2014; Dayan et al., 1995), which has the advantage that we do not need to store or optimize local variables.

Amortization. Amortized inference avoids the optimization of the parameter ν_nj for each local variational distribution q(b_nj; ν_nj); instead, it fits a shared structure to calculate each local parameter ν_nj. Specifically, we consider a function f(·) that inputs the target observation x_nj, the context elements x_{c_nj} and indices c_nj, and the model parameters, and outputs a variational distribution for b_nj. Let a_nj = [x_nj, c_nj, x_{c_nj}, α, ρ, π_nj] be the set of inputs of f(·), and let ν_nj ∈ R^|c_nj| be its output, such that ν_nj = f(a_nj) is a vector containing the logits of the variational distribution,

  q(b_njk = 1; ν_njk) = sigmoid(ν_njk), with ν_njk = [f(a_nj)]_k.    (8)

Similarly to previous work (Korattikara et al., 2015; Kingma and Welling, 2014; Rezende et al., 2014; Mnih and Gregor, 2014), we let f(·) be a neural network, parameterized by W. The key in amortized inference is to design the network and learn its parameters W.

Network design. Typical neural networks transform fixed-length inputs into fixed-length outputs. However, in our case, we face variable-size inputs and outputs. First, the output of the function f(·) for q(b_nj; ν_nj) has length equal to the context size |c_nj|, which varies across target/context pairs. Second, the length of the local variables a_nj also varies, because the length of x_{c_nj} depends on the number of elements in the context.
We propose a network design that addresses these challenges. To overcome the difficulty of the varying output sizes, we split the computation of each component ν_njk of ν_nj into |c_nj| separate tasks. Each task computes the logit ν_njk using a shared function f(·), ν_njk = f(a_njk). The input a_njk contains information about a_nj and depends on the index k.

We now need to specify how we form the input a_njk. A naïve approach would be to represent the indices of the context items and their corresponding counts as a sparse vector, but this would require a network with a very large input size. Moreover, most of the weights of this large network would not be used (nor trained) in the computation of ν_njk, since only a small subset of them would be assigned a non-zero input. Instead, in this work we use a two-step process to build an input vector a_njk that has fixed length regardless of the context size |c_nj|. In Step 1, we transform the original input a_nj = [x_nj, c_nj, x_{c_nj}, α, ρ, π_nj] into a vector of reduced dimensionality that preserves the relevant information (we define "relevant" below). In Step 2, we transform the vector of reduced dimensionality into a fixed-length vector.

For Step 1, we first need to determine which information is relevant. For that, we inspect the posterior for b_nj,

  p(b_nj | x_nj, x_{c_nj}; α, ρ, π_nj) ∝ p(x_nj | x_{c_nj}, b_nj; α, ρ) p(b_nj; π_nj) = p(x_nj | s_nj, b_nj) p(b_nj; π_nj).    (9)

We note that the dependence on x_{c_nj}, α, and ρ comes through the scores s_nj, a vector of length |c_nj| that contains for each element the inner product of the corresponding embedding and context vector, scaled by the context observation,

  s_njk = x_{n_k j_k} ρ_j^⊤ α_{j_k}.    (10)

Therefore, the scores s_nj are sufficient: f(·) does not need the raw embedding vectors as input, but rather the scores s_nj ∈ R^|c_nj|. We have thus reduced the dimensionality of the input.

For Step 2, we need to transform the scores s_nj ∈ R^|c_nj| into a fixed-length vector that the neural network f(·) can take as input. We represent this vector and the full neural network structure in Figure 1.

[Figure 1: Representation of the amortized inference network that outputs the variational parameter for the context selection variable b_njk. The input has fixed size regardless of the context size, and it is formed by the score s_njk (Eq. 10), the prior parameter π_nj, the target observation x_nj, and a histogram (L bins) of the other scores s_njk' (for k' ≠ k).]

The transformation is carried out differently for each value of k. For the network that outputs the variational parameter ν_njk, we let the k-th score s_njk be directly one of the inputs. The reason is that the k-th score s_njk is more related to ν_njk than the other scores, because the network that outputs ν_njk ultimately indicates the probability that b_njk takes value 1, i.e., ν_njk indicates whether to include the k-th element as part of the context in the computation of the natural parameter in Eq. 3. All other scores (s_njk' for k' ≠ k) have the same relation to ν_njk, and their permutations give the same posterior. We bin these scores (s_njk', for k' ≠ k) into L bins, therefore obtaining a fixed-length vector. Instead of using bins with hard boundaries, we use Gaussian-shaped kernels. We denote by ω_ℓ and σ_ℓ the mean and width of each Gaussian kernel, and we denote by h^(k)_nj ∈ R^L the binned variables, such that

  h^(k)_njℓ = Σ_{k'=1, k'≠k}^{|c_nj|} exp( −(s_njk' − ω_ℓ)² / σ_ℓ² ).    (11)

Finally, for ν_njk = f(a_njk) we form a neural network that takes as input the score s_njk, the binned variables h^(k)_nj, which summarize the information of the scores (s_njk' : k' ≠ k), as well as the target observation x_nj and the prior probability π_nj. That is, a_njk = [s_njk, h^(k)_nj, x_nj, π_nj].

Variational updates. We denote by W the parameters of the network (all weights and biases).
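Putting Eqs. 10 and 11 together, the fixed-length input a_njk can be assembled as in the sketch below. This is illustrative code of our own: the single linear layer at the end stands in for the paper's small inference network, and all names are hypothetical.

```python
import numpy as np

def scores(x_ctx, rho_j, alpha_ctx):
    """Scores of Eq. 10: s_k = x_{n_k j_k} * rho_j^T alpha_{j_k}."""
    return x_ctx * (alpha_ctx @ rho_j)

def gaussian_bins(s, k, centers, widths):
    """Soft histogram of Eq. 11: each score s_{k'} (k' != k) contributes
    exp(-(s_{k'} - omega_l)^2 / sigma_l^2) to bin l."""
    others = np.delete(s, k)
    return np.exp(-(others[:, None] - centers) ** 2 / widths ** 2).sum(axis=0)

def amortized_logit(s, k, x_nj, pi_nj, centers, widths, w, b0):
    """Assemble a_njk = [s_k, h^(k), x_nj, pi_nj] and map it to a logit;
    a single linear layer replaces the actual network f (a toy choice)."""
    a = np.concatenate(([s[k]], gaussian_bins(s, k, centers, widths),
                        [x_nj, pi_nj]))
    return float(a @ w + b0)   # q(b_njk = 1) = sigmoid of this logit
```

Because the histogram sums over the other scores, permuting them leaves h^(k) unchanged, matching the permutation invariance noted above.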
To perform inference, we need to iteratively update W, together with α, ρ, and π, to maximize Eq. 7, where ν_nj is the output of the network f(·). We follow a variational expectation maximization (EM) algorithm. In the M step, we take a gradient step with respect to the model parameters (α, ρ, and π). In the E step, we take a gradient step with respect to the network parameters (W). We obtain the (noisy) gradient with respect to W using the score function method as in black-box variational inference (Paisley et al., 2012; Mnih and Gregor, 2014; Ranganath et al., 2015), which allows rewriting the gradient of Eq. 7 as an expectation with respect to the variational distribution,

  ∇_W L̃ = Σ_{n,j} E_{q(b_nj; W)}[ ( log p(x_nj | s_nj, b_nj) + log p(b_nj; π_nj) − log q(b_nj; W) ) ∇_W log q(b_nj; W) ].

Then, we can estimate the gradient via Monte Carlo by drawing samples from q(b_nj; W).

3 Empirical Study

We study the performance of context selection on three different application domains: movie recommendations, ornithology, and market basket analysis. On these domains, we show that context selection improves predictions. For the movie data, we also show that the learned embeddings are more interpretable; and for the market basket analysis, we provide a motivating example of the variational probabilities inferred by the network.

Data. MovieLens: We consider the MovieLens-100K dataset (Harper and Konstan, 2015), which contains ratings of movies on a scale from 1 to 5. We only keep those ratings with value 3 or more (and we subtract 2 from all ratings, so that the counts are between 0 and 3). We remove users who rated fewer than 20 movies and movies that were rated fewer than 50 times, yielding a dataset with 943 users and 811 movies. The average number of non-zeros per user is 82.2. We set aside 9% of the data for validation and 10% for test.

eBird-PA: The eBird data (Munson et al., 2015; Sullivan et al., 2009) contains information about a set of bird observation events.
Each datum corresponds to a checklist of counts of 213 bird species reported from each event. The values of the counts range from zero to hundreds. Some extraordinarily large counts are treated as outliers and set to the mean of the positive counts of that species. Bird observations in the eBird-PA subset are from a rectangular area that mostly overlaps Pennsylvania, and from the period from day 180 to day 210 of the years 2002 to 2014. There are 22,363 checklists in the data and 213 unique species. The average number of non-zeros per checklist is 18.3. We split the data into train (67%), test (26%), and validation (7%) sets.

Market-Basket: This dataset contains purchase records of more than 3,000 customers at an anonymous supermarket. We aggregate the purchases of one month at the category level, i.e., we combine all individual UPC (Universal Product Code) items into item categories. This yields 45,615 purchases and 364 unique items. The average basket size is 12.5 items. We split the data into training (86%), test (5%), and validation (9%) sets.

Models. We compare the base exponential family embeddings (EFE) model (Rudolph et al., 2016) with our context selection procedure. We implement the amortized inference network described in Section 2.3, for different values of the prior hyperparameter β (Eq. 5) (see below). For the movie data, in which the ratings range from 0 to 3, we use a binomial conditional distribution (Eq. 1) with 3 trials, and we use an identity link function for the natural parameter η_j (Eq. 2), which is the logit of the binomial probability. For the eBird-PA and Market-Basket data, which contain counts, we consider a Poisson conditional distribution and use the link function g(·) = log softplus(·) for the natural parameter, which is the Poisson log-rate.
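The Poisson link above can be written in a numerically stable way. This sketch is our own; the paper only specifies g(·) = log softplus(·), and the function names are assumptions.

```python
import numpy as np

def softplus(x):
    # log(1 + exp(x)), written with logaddexp to avoid overflow for large x
    return np.logaddexp(0.0, x)

def poisson_log_rate(eta):
    """Link g(eta) = log softplus(eta): maps the linear score of Eq. 2/3
    to the Poisson log-rate, so the rate softplus(eta) is always positive."""
    return np.log(softplus(eta))
```

For large eta, softplus(eta) ≈ eta, so the log-rate grows like log(eta); for very negative eta, the rate decays smoothly toward zero.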
The context set corresponds to the set of other movies rated by the same user in MovieLens; the set of other birds in the same checklist in eBird-PA; and the rest of the items in the same market basket.

Experimental setup. We explore different values for the dimensionality K of the embedding vectors. In our tables of results, we report the values that performed best on the validation set (there was no qualitative difference in the relative performance between the methods for the non-reported results). We use negative sampling (Rudolph et al., 2016) with a ratio of 1/10 of positive (non-zero) versus negative samples. We use stochastic gradient descent to maximize the objective function, adaptively setting the stepsize with Adam (Kingma and Ba, 2015), and we use the validation log-likelihood to assess convergence. We consider unit-variance ℓ2-regularization, and the weight of the regularization term is fixed to 1.0. In the context selection for exponential family embeddings (CS-EFE) model, we set the number of hidden units to 30 and 15 for each of the hidden layers, and we consider 40 bins to form the histogram. (We have also explored other settings of the network, obtaining very similar results.) We believe that the network layers can adapt to different settings of the bins as long as they pick up the essential information of the scores. In this work, we place these 40 bins equally spaced by a distance of 0.2 and set their width to 0.1.

The code is available at https://github.com/blei-lab/context-selection-embedding. The softplus function is defined as softplus(x) = log(1 + exp(x)).

(a) MovieLens-100K:

  K  | EFE (Rudolph et al., 2016) | CS-EFE, β=20 | CS-EFE, β=50 | CS-EFE, β=100 | CS-EFE, β=∞
  10 | -1.06 (0.01)               | -1.00 (0.01) | -1.03 (0.01) | -1.03 (0.01)  | -1.03 (0.01)
  50 | -1.06 (0.01)               | -0.97 (0.01) | -0.99 (0.01) | -1.00 (0.01)  | -1.01 (0.01)
(b) eBird-PA:

  K   | EFE (Rudolph et al., 2016) | CS-EFE, β=2  | CS-EFE, β=5  | CS-EFE, β=10 | CS-EFE, β=∞
  50  | -1.74 (0.01)               | -1.34 (0.01) | -1.33 (0.00) | -1.51 (0.01) | -1.34 (0.01)
  100 | -1.74 (0.01)               | -1.34 (0.00) | -1.33 (0.00) | -1.31 (0.00) | -1.31 (0.01)

(c) Market-Basket:

  K   | EFE (Rudolph et al., 2016) | CS-EFE, β=2    | CS-EFE, β=5    | CS-EFE, β=10   | CS-EFE, β=∞
  50  | -0.632 (0.003)             | -0.626 (0.003) | -0.623 (0.003) | -0.625 (0.003) | -0.628 (0.003)
  100 | -0.633 (0.003)             | -0.630 (0.003) | -0.623 (0.003) | -0.626 (0.003) | -0.628 (0.003)

Table 1: Test log-likelihood for the three considered datasets. Our CS-EFE model consistently outperforms the baseline for different values of the prior hyperparameter β. The numbers in parentheses indicate the standard errors.

In our experiments, we vary the hyperparameter β in Eq. 5 to check how the expected context size (see Section 2.2) impacts the results. For the MovieLens dataset, we choose β ∈ {20, 50, 100, ∞}, while for the other two datasets we choose β ∈ {2, 5, 10, ∞}.

Results: Predictive performance. We compare the methods in terms of predictive pseudo log-likelihood on the test set. We calculate the marginal log-likelihood in the same way as Rezende et al. (2014). We report the average test log-likelihood on the three datasets in Table 1. The numbers are the average predictive log-likelihood per item, together with the standard errors in parentheses. We compare the predictions of our models (in each setting) with the baseline EFE method using a paired t-test, obtaining that all our results are better than the baseline at a significance level of p = 0.05. The results show that our method outperforms the baseline on all three datasets. The improvement over the baseline is largest on the eBird-PA dataset. We can also see that the prior parameter β has some impact on the model's performance.

Evaluation: Embedding quality.
We also study how context selection affects the quality of the embedding vectors of the items. In the MovieLens dataset, each movie has up to 3 genre labels. We calculate movie similarities from their genre labels and check whether the similarities derived from the embedding vectors are consistent with the genre similarities. In more detail, let g_j ∈ {0, 1}^G be a binary vector containing the genre labels of each movie j, where G = 19 is the number of genres. We define the similarity between two genre vectors, g_j and g_j', as the number of common genres normalized by the larger number of genres,

  sim(g_j, g_j') = g_j^⊤ g_j' / max(1^⊤ g_j, 1^⊤ g_j'),    (12)

where 1 is a vector of ones. Analogously, we define the similarity of two embedding vectors as their cosine similarity.

We now compute the similarities of each movie to all other movies, according to both definitions of similarity (based on genres and based on embeddings). For each query movie, we provide two correlation metrics between both lists. The first metric is simply Spearman's correlation between the two ranked lists. For the second metric, we rank the movies based on the embedding similarity only, and we calculate the average genre similarity of the top 5 movies. Finally, we average both metrics across all possible query movies, and we report the results in Table 2.

  Metric     | EFE (Rudolph et al., 2016) | CS-EFE, β=20 | CS-EFE, β=50 | CS-EFE, β=100 | CS-EFE, β=∞
  Spearman's | 0.066                      | 0.108        | 0.090        | 0.082         | 0.076
  mean-sim@5 | 0.272                      | 0.328        | 0.317        | 0.299         | 0.289

Table 2: Correlation between the embedding vectors and the movie genres. The embedding vectors found with our CS-EFE model exhibit higher correlation with the movie genres.
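The two similarity measures used in this evaluation can be sketched as follows; this is our own code, with hypothetical function names, and it assumes each movie has at least one genre label.

```python
import numpy as np

def genre_similarity(g_a, g_b):
    """Eq. 12: number of shared genres over the larger genre count."""
    return (g_a @ g_b) / max(g_a.sum(), g_b.sum())

def embedding_similarity(r_a, r_b):
    """Cosine similarity between two embedding vectors."""
    return (r_a @ r_b) / (np.linalg.norm(r_a) * np.linalg.norm(r_b))
```

For example, two movies sharing one genre, with two genres each, get genre similarity 1/2, while an embedding vector has cosine similarity 1 with itself.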
From this result, we can see that the similarity of the embedding vectors obtained by our model is more consistent with the genre similarity. (We have also computed the top-1 and top-10 similarities, which support the same conclusion.) The result suggests that a small number of context items is actually better for learning the relations between movies.

Evaluation: Posterior checking. To get more insight into the variational posterior distribution that our model provides, we form a heterogeneous market basket that contains two types of items: Mexican food and pet-related products. In particular, we form a basket with four items of each of those types, and we compute the variational distribution (i.e., the output of the neural network) for two different target items from the basket. Intuitively, the Mexican food items should have higher probabilities when the target item is of the same type, and similarly for the pet products. We fit the CS-EFE model with β = 2 on the Market-Basket data. We report the approximate posterior probabilities in Table 3, for two query items (one of each type). As expected, the probabilities for the items of the same type as the target are higher, indicating that their contribution to the context will be larger.

  Context item         | Target: Taco shells | Target: Cat food dry
  Taco shells          | −                   | 0.219
  Hispanic salsa       | 0.309               | 0.185
  Tortilla             | 0.287               | 0.151
  Hispanic canned food | 0.315               | 0.221
  Cat food dry         | 0.220               | −
  Cat food wet         | 0.206               | 0.297
  Cat litter           | 0.225               | 0.347
  Pet supplies         | 0.173               | 0.312

Table 3: Approximate posterior probabilities of the CS-EFE model for a basket with eight items, broken down into two unrelated clusters. The left column lists the eight items of the basket, which are of two types, and we take one item of each type as the target in the other two columns. For a Mexican food target, the posterior probabilities of the items of the Mexican type are larger than those of the pet type, and vice versa.
4 Conclusion

The standard exponential family embeddings (EFE) model finds vector representations by fitting the conditional distributions of objects conditioned on their contexts. In this work, we show that choosing a subset of the elements in the context can improve performance when the objects in the subset are truly related to the object to be modeled. As a consequence, the embedding vectors can reflect co-occurrence relations with higher fidelity compared with the base embedding model.

We formulate the context selection problem as a Bayesian inference problem, using a hidden binary vector to indicate which objects to select from each context set. This leads to a difficult inference problem due to the (large) scale of the problems we face. We develop a fast inference algorithm by leveraging amortization and stochastic gradients. The varying length of the binary context selection vectors poses further challenges for our amortized inference algorithm, which we address using a binning technique. We fit our model on three datasets from different application domains, showing its superiority over the EFE model.

There are still many directions to explore to further improve the performance of the proposed context selection for exponential family embeddings (CS-EFE). First, we can apply the context selection technique to text data. Though the neighboring words of each target word are more likely to be the "correct" context, we can still combine the context selection technique with the order in which words appear in the context, hopefully leading to better word representations. Second, we can explore variational inference schemes that do not rely on mean-field, improving the inference network to capture more complex variational distributions.

Acknowledgments

This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA PPAML FA8750-14-2-0009, DARPA SIMPLEX N66001-15-C-4032, the Alfred P. Sloan Foundation, and the John Simon Guggenheim Foundation.
Francisco J. R. Ruiz is supported by the EU H2020 programme (Marie Skłodowska-Curie grant agreement 706760). We also acknowledge the support of NVIDIA Corporation with the donation of two GPUs used for this research.

References

Arora, S., Li, Y., Liang, Y., and Ma, T. (2016). RAND-WALK: A latent variable model approach to word embeddings. Transactions of the Association for Computational Linguistics, 4.
Bamler, R. and Mandt, S. (2017). Dynamic word embeddings. In International Conference on Machine Learning.
Barkan, O. and Koenigstein, N. (2016). Item2Vec: Neural item embedding for collaborative filtering. In IEEE International Workshop on Machine Learning for Signal Processing.
Bengio, Y., Schwenk, H., Senécal, J.-S., Morin, F., and Gauvain, J.-L. (2006). Neural probabilistic language models. In Innovations in Machine Learning. Springer.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The Helmholtz machine. Neural Computation, 7(5):889–904.
Firth, J. R. (1957). A synopsis of linguistic theory 1930–1955. In Studies in Linguistic Analysis (special volume of the Philological Society), volume 1952–1959.
Gershman, S. J. and Goodman, N. D. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society.
Harper, F. M. and Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19.
Harris, Z. S. (1954). Distributional structure. Word, 10(2–3):146–162.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233.
Kingma, D. P. and Ba, J. L. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations. Korattikara, A., Rathod, V., Murphy, K. P., and Welling, M. (2015). Bayesian dark knowledge. In Advances in Neural Information Processing Systems. Levy, O. and Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems. Liang, D., Altosaar, J., Charlin, L., and Blei, D. M. (2016). Factorization meets the item embedding: Regularizing matrix factorization with item co-occurrence. In ACM Conference on Recommender Systems. Mikolov, T., Chen, K., Corrado, G. S., and Dean, J. (2013a). Efficient estimation of word representations in vector space. International Conference on Learning Representations. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. Mikolov, T., Yih, W.-t., and Zweig, G. (2013c). Linguistic regularities in continuous space word representations. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Mnih, A. and Gregor, K. (2014). Neural variational inference and learning in belief networks. In International Conference on Machine Learning. Mnih, A. and Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems. Munson, M. A., Webb, K., Sheldon, D., Fink, D., Hochachka, W. M., Iliff, M., Riedewald, M., Sorokina, D., Sullivan, B., Wood, C., and Kelling, S. (2015). The eBird reference dataset. Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press. Paisley, J. W., Blei, D. M., and Jordan, M. I. (2012). Variational Bayesian inference with stochastic search. In International Conference on Machine Learning.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing. Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Artificial Intelligence and Statistics. Ranganath, R., Tang, L., Charlin, L., and Blei, D. M. (2015). Deep exponential families. In Artificial Intelligence and Statistics. Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning. Rudolph, M., Ruiz, F. J. R., Mandt, S., and Blei, D. M. (2016). Exponential family embeddings. In Advances in Neural Information Processing Systems. Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(9):533–536. Sullivan, B., Wood, C., Iliff, M. J., Bonney, R. E., Fink, D., and Kelling, S. (2009). eBird: A citizen-based bird observation network in the biological sciences. Biological Conservation, 142:2282–2292. Vilnis, L. and McCallum, A. (2015). Word representations via Gaussian embedding. In International Conference on Learning Representations. Wainwright, M. J. and Jordan, M. I. (2008). Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305.
Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction Kristofer E. Bouchard∗ Alejandro F. Bujan† Farbod Roosta-Khorasani‡ Shashanka Ubaru§ Prabhat¶ Antoine M. Snijders∥ Jian-Hua Mao∥ Edward F. Chang∗∗ Michael W. Mahoney‡ Sharmodeep Bhattacharyya†† Abstract The increasing size and complexity of scientific data could dramatically enhance discovery and prediction for basic scientific applications. Realizing this potential, however, requires novel statistical analysis methods that are both interpretable and predictive. We introduce Union of Intersections (UoI), a flexible, modular, and scalable framework for enhanced model selection and estimation. Methods based on UoI perform model selection and model estimation through intersection and union operations, respectively. We show that UoI-based methods achieve low-variance and nearly unbiased estimation of a small number of interpretable features, while maintaining high-quality prediction accuracy. We perform extensive numerical investigation to evaluate a UoI algorithm (UoILasso) on synthetic and real data. In doing so, we demonstrate the extraction of interpretable functional networks from human electrophysiology recordings as well as accurate prediction of phenotypes from genotype-phenotype data with reduced features. We also show (with the UoIL1Logistic and UoICUR variants of the basic framework) improved prediction parsimony for classification and matrix factorization on several benchmark biomedical data sets. These results suggest that methods based on the UoI framework could improve interpretation and prediction in data-driven discovery across scientific fields. 1 Introduction A central goal of data-driven science is to identify a small number of features (i.e., predictor variables; X in Fig. 1(a)) that generate a response variable of interest (y in Fig.
1(a)) and then to estimate the relative contributions of these features as the parameters in the generative process relating the predictor variables to the response variable (Fig. 1(a)). ∗Biological Systems and Engineering Division, LBNL. kebouchard@lbl.gov †Redwood Center, UC Berkeley. afbujan@gmail.com ‡ICSI and Department of Statistics, UC Berkeley. {farbod,mmahoney}@icsi.berkeley.edu §Department of Computer Science and Engineering, University of Minnesota. ubaru001@umn.edu ¶NERSC, LBNL. prabhat@lbl.gov ∥Biological Systems and Engineering Division, LBNL. {AMSnijders,jhmao}@lbl.gov ∗∗Department of Neurological Surgery, UC San Francisco. Edward.Chang@ucsf.edu ††Department of Statistics, Oregon State University. bhattash@science.oregonstate.edu 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: The basic UoI framework. (a) Schematic of regularization and ensemble methods for regression. (b) Schematic of the Union of Intersections (UoI) framework. (c) A data-distributed version of the UoILasso algorithm. (d) Dependence of false positives, false negatives, and estimation variability on number of bootstraps in selection (B1) and estimation (B2) modules. A common characteristic of many modern massive data sets is that they have a large number of features (i.e., high-dimensional data), while also exhibiting a high degree of sparsity and/or redundancy [2, 19, 11]. That is, while formally high-dimensional, most of the useful information in the data features for tasks such as reconstruction, regression, and classification can be restricted or compressed into a much smaller number of important features. In regression and classification, it is common to employ sparsity-inducing regularization to attempt to achieve simultaneously two related but quite different goals: to identify the features important for prediction (i.e., model selection) and to estimate the associated model parameters (i.e., model estimation) [2, 19].
For example, the Lasso algorithm in linear regression uses L1-regularization to penalize the total magnitude of model parameters, and this often results in feature compression by setting some parameters exactly to zero [18] (See Fig. 1(a), pure white elements in right-hand vectors, emphasized by ×). It is well known that this type of regularization implies a prior assumption about the distribution of the parameter (e.g., L1-regularization implicitly assumes a Laplacian prior distribution) [12]. However, strong sparsity-inducing regularization, which is common when there are many more potential features than data samples (i.e., the so-called small n/p regime), can severely hinder the interpretation of model parameters (Fig. 1(a), indicated by less saturated colors between top and bottom vectors on right hand side). For example, while sparsity may be achieved, incorrect features may be chosen and parameter estimates may be biased. In addition, it can impede model selection and estimation when the true model distribution deviates from the assumed distribution [2, 10]. This may not matter for prediction quality, but it clearly has negative consequences for interpretability, an admittedly not completely-well-defined property of algorithms that is crucial in many scientific applications [9]. In this context, interpretability reflects the degree to which an algorithm returns a small number of physically meaningful features with unbiased and low variance estimates of their contributions. On the other hand, another common characteristic of many state-of-the-art methods is to combine several related models for a given task. In statistical data analysis, this is often formalized by so-called ensemble methods, which improve prediction accuracy by combining parameter estimates [12].
In particular, by combining several different models, ensemble methods often include more features to predict the response variables, and thus the number of data features is expanded relative to the individuals in the ensemble. For example, estimating an ensemble of model parameters by randomly resampling the data many times (e.g., bootstrapping) and then averaging the parameter estimates (e.g., bagging) can yield improved prediction accuracy by reducing estimation variability [8, 12] (See Fig. 1(a), bottom). However, by averaging estimates from a large ensemble, this process often results in many non-zero parameters, which can hinder interpretability and the identification of the true model support (compare top and bottom vectors on right hand side of Fig. 1(a)). Taken together, these observations suggest that explicit and more precise control of feature compression and expansion may result in an algorithm with improved interpretative and predictive properties. In this paper, we introduce Union of Intersections (UoI), a flexible, modular, and scalable framework to enhance both the identification of features (model selection) as well as the estimation of the contributions of these features (model estimation). We have found that the UoI framework permits us to explore the interpretability-predictivity trade-off space, without imposing an explicit prior on the model distribution, and without formulating a non-convex problem, thereby often leading to improved interpretability and prediction. Ideally, data analysis methods in many scientific applications should be selective (only features that influence the response variable are selected), accurate (estimated parameters in the model are as close to the true value as possible), predictive (allowing prediction of the response variable), stable (e.g., the variability of the estimated parameters is small), and scalable (able to return an answer in a reasonable amount of time on very large data sets) [17, 2, 15, 10]. 
We show empirically that UoI-based methods can simultaneously achieve these goals, results supported by preliminary theory. We primarily demonstrate the power of UoI-based methods in the context of sparse linear regression (UoILasso), as it is the canonical statistical/machine learning problem, it is theoretically tractable, and it is widely used in virtually every field of scientific inquiry. However, our framework is very general, and we demonstrate this by extending UoI to classification (UoIL1Logistic) and matrix factorization (UoICUR) problems. While our main focus is on neuroscience (broadly speaking) applications, our results also highlight the power of UoI across a broad range of synthetic and real scientific data sets.1 (1More details, including both empirical and theoretical results, are in the associated technical report [4].) 2 Union of Intersections (UoI) For concreteness, we consider an application of UoI in the context of linear regression. Specifically, we consider the problem of estimating the parameters β ∈ ℝᵖ that map a p-dimensional vector of predictor variables x ∈ ℝᵖ to the observation variable y ∈ ℝ, when there are n paired samples of x and y corrupted by i.i.d. Gaussian noise: y = βᵀx + ε, (1) where ε ∼ N(0, σ²) i.i.d. for each sample. When the true β is thought to be sparse (i.e., in the L0-norm sense), then an estimate of β (call it β̂) can be found by solving a constrained optimization problem of the form: β̂ ∈ argmin_{β ∈ ℝᵖ} Σᵢ₌₁ⁿ (yᵢ − βᵀxᵢ)² + λR(β). (2) Here, R(β) is a regularization term that typically penalizes the overall magnitude of the parameter vector β (e.g., R(β) = ∥β∥₁ is the target of the Lasso algorithm). The Basic UoI Framework. The key mathematical idea underlying UoI is to perform model selection through intersection (compressive) operations and model estimation through union (expansive) operations, in that order. This is schematized in Fig. 1(b), which plots a hypothetical range of selected
features (x1 : xp, abscissa) for different values of the regularization parameter (λ, ordinate). See [4] for a more detailed description. In particular, UoI first performs feature compression (Fig. 1(b), Step 1) through intersection operations (intersection of supports across bootstrap samples) to construct a family (S) of candidate model supports (Fig. 1(b), e.g., Sj−1, opaque red region is intersection of abutting pink regions). UoI then performs feature expansion (Fig. 1(b), Step 2) through a union of (potentially) different model supports: for each bootstrap sample, the best model estimate (across different supports) is chosen, and then a new model is generated by averaging the estimates (i.e., taking the union) across bootstrap samples (Fig. 1(b), dashed vertical black line indicates the union of features from Sj and Sj+1). Both feature compression and expansion are performed across all regularization strengths. In UoI, feature compression via intersections and feature expansion via unions are balanced to maximize prediction accuracy of the sparsely estimated model parameters for the response variable y. Innovations in Union of Intersections. UoI has three central innovations: (1) calculate model supports (Sj) using an intersection operation for a range of regularization parameters (increases in λ shrink all values β̂ towards 0), efficiently constructing a family of potential model supports {S : Sj ∈ Sj−k, for k sufficiently large}; (2) use a novel form of model averaging in the union step to directly optimize prediction accuracy (this can be thought of as a hybrid of bagging [8] and boosting [16]); and (3) combine pure model selection using an intersection operation with model selection/estimation using a union operation in that order (which controls both false negatives and false positives in model selection). Together, these innovations often lead to better selection, estimation, and prediction accuracy.
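Equation (2) with R(β) = ∥β∥₁ is the Lasso building block that the selection module solves repeatedly. A minimal sketch with scikit-learn (the synthetic data, noise level, and `alpha` value are illustrative assumptions, not values from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic sparse linear model: p = 20 features, only 3 truly nonzero.
n, p = 200, 20
beta_true = np.zeros(p)
beta_true[[0, 5, 12]] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Solve eq. (2) with R(beta) = ||beta||_1; sklearn's alpha plays the role of lambda.
lasso = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(lasso.coef_)
print(support)  # most coefficients are driven exactly to zero
```

Varying `alpha` traces out the nested family of supports that the selection module then intersects across bootstrap samples.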
Importantly, this is done without explicitly imposing a prior on the distribution of parameter values, and without formulating a non-convex optimization problem. The UoILasso Algorithm. Since the basic UoI framework, as described in Fig. 1(c), has two main computational modules—one for model selection, and one for model estimation—UoI is a framework into which many existing algorithms can be inserted. Here, for simplicity, we primarily demonstrate UoI in the context of linear regression in the UoILasso algorithm, although we also apply it to classification with the UoIL1Logistic algorithm as well as matrix factorization with the UoICUR algorithm. UoILasso expands on the BoLasso method for the model selection module [1], and it performs a novel model averaging in the estimation module based on averaging ordinary least squares (OLS) estimates with potentially different model supports. UoILasso (and UoI in general) has a high degree of natural algorithmic parallelism that we have exploited in a distributed Python-MPI implementation. (Fig. 1(c) schematizes a simplified distributed implementation of the algorithm; see [4] for more details.) This parallelized UoILasso algorithm uses distribution of bootstrap data samples and regularization parameters (in Map) for independent computations involving convex optimizations (Lasso and OLS, in Solve), and it then combines results (in Reduce) with intersection operations (model selection module) and union operations (model estimation module). By solving independent convex optimization problems (e.g., Lasso, OLS) with distributed data resampling, our UoILasso algorithm efficiently constructs a family of model supports, and it then averages nearly unbiased model estimates, potentially with different supports, to maximize prediction accuracy while minimizing the number of features to aid interpretability. 
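The two modules can be condensed into a single-machine toy (a simplified sketch under my own choices of the λ grid, B1, B2, and out-of-bag scoring; the authors' actual implementation is distributed and cross-validated, see [4]):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def uoi_lasso_sketch(X, y, lambdas=(0.05, 0.1, 0.5), B1=10, B2=10, seed=0):
    """Toy UoI-Lasso: intersection for selection, union (averaging) for estimation."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Selection module: intersect Lasso supports across B1 bootstraps per lambda.
    supports = []
    for lam in lambdas:
        s = set(range(p))
        for _ in range(B1):
            idx = rng.integers(0, n, n)
            s &= set(np.flatnonzero(Lasso(alpha=lam).fit(X[idx], y[idx]).coef_))
        if s:
            supports.append(sorted(s))
    if not supports:                      # degenerate case: fall back to full model
        supports = [list(range(p))]

    # Estimation module: per bootstrap, keep the best OLS fit among the candidate
    # supports (scored on out-of-bag samples), then average the kept estimates.
    betas = []
    for _ in range(B2):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        best, best_err = np.zeros(p), np.inf
        for s in supports:
            ols = LinearRegression().fit(X[idx][:, s], y[idx])
            err = np.mean((y[oob] - ols.predict(X[oob][:, s])) ** 2)
            if err < best_err:
                best = np.zeros(p)
                best[s] = ols.coef_
                best_err = err
        betas.append(best)
    return np.mean(betas, axis=0)
```

On sparse synthetic problems this typically recovers near-OLS coefficient values on the true support while leaving spurious coordinates at or near zero.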
3 Results 3.1 Methods All numerical results used 100 random sub-samplings with replacement of 80-10-10 cross-validation to estimate model parameters (80%), choose optimal meta-parameters (e.g., λ; 10%), and determine prediction quality (10%). Below, β denotes the values of the true model parameters, β̂ denotes the estimated values of the model parameters from some algorithm (e.g., UoILasso), S_β is the support of the true model (i.e., the set of non-zero parameter indices), and S_β̂ is the support of the estimated model. We calculated several metrics of model selection, model estimation, and prediction accuracy. (1) Selection accuracy (set overlap): 1 − |S_β̂ Δ S_β| / (|S_β̂|₀ + |S_β|₀), where Δ is the symmetric set difference operator. This metric ranges in [0, 1], taking a value of 0 if S_β and S_β̂ have no elements in common, and taking a value of 1 if and only if they are identical. (2) Estimation error (r.m.s.): √((1/p) Σᵢ (βᵢ − β̂ᵢ)²). (3) Estimation variability (parameter variance): E[β̂²] − (E[β̂])². (4) Prediction accuracy (R²): 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − E[y])². (5) Prediction parsimony (BIC): n log((1/(n−1)) Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)²) + ∥β̂∥₀ log(n). For the experimental data, as the true model size is unknown, the selection ratio (∥β̂∥₀/p) is a measure of the overall size of the estimated model relative to the total number of parameters. For the classification task using UoIL1Logistic, BIC was calculated as −2 log ℓ + |S_β̂| log N, where ℓ is the log-likelihood on the validation set. For the matrix factorization task using UoICUR, reconstruction accuracy was the Frobenius norm of the difference between the data matrix A and the low-rank approximation matrix A′ constructed from A(:, c), the reduced column matrix of A: ∥A − A′∥_F, where c is the set of k selected columns.
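These metrics translate directly into code. A minimal sketch (function names are mine; the BIC shown is the regression variant):

```python
import numpy as np

def selection_accuracy(beta, beta_hat):
    """1 - |S_hat symdiff S| / (|S_hat| + |S|): equals 1 iff the supports match."""
    s, s_hat = set(np.flatnonzero(beta)), set(np.flatnonzero(beta_hat))
    denom = len(s) + len(s_hat)
    return 1.0 - len(s ^ s_hat) / denom if denom else 1.0

def estimation_error(beta, beta_hat):
    """Root-mean-square parameter error."""
    return np.sqrt(np.mean((beta - beta_hat) ** 2))

def r_squared(y, y_hat):
    """Prediction accuracy: R^2 = 1 - SS_res / SS_tot."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def bic(y, y_hat, beta_hat):
    """Prediction parsimony: n*log(residual variance) + ||beta_hat||_0 * log(n)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / (n - 1)) + np.count_nonzero(beta_hat) * np.log(n)
```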
3.2 Model Selection and Stability: Explicit Control of False Positives, False Negatives, and Estimate Stability Due to the form of the basic UoI framework, we can control both false negative and false positive discoveries, as well as the stability of the estimates. For any regularized regression method such as (2), a decrease in the penalization parameter (λ) tends to increase the number of false positives, and an increase in λ tends to increase false negatives. Preliminary analysis of the UoI framework shows that, for false positives, a large number of bootstrap resamples in the intersection step (B1) produces an increase in the probability of getting no false positive discoveries, while an increase in the number of bootstraps in the union step (B2) leads to a decrease in the probability of getting no false positives. Conversely, for false negatives, a large number of bootstrap resamples in the union step (B2) produces an increase in the probability of no false negative discoveries, while an increase in the number of bootstraps in the intersection step (B1) leads to a decrease in the probability of no false negatives. Also, a large number of bootstrap samples in the union step (B2) gives a more stable estimate. These properties were confirmed numerically for UoILasso and are displayed in Fig. 1(d), which plots the average normalized false negatives, false positives, and standard deviation of model estimates from running UoILasso, with ranges of B1 and B2 on four different models. These results are supported by preliminary theoretical analysis of a variant of UoILasso (see [4]). Thus, the relative values of B1 and B2 express the fundamental balance between the two basic operations of intersection (which compresses the feature space) and union (which expands the feature space).
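The false-positive half of this argument has a simple back-of-the-envelope form: a spurious feature enters the intersected support only if it survives all B1 bootstrap fits, so if it survives any single fit with probability q, intersection drives its rate down roughly like q^B1. A toy simulation (the independent-survival assumption is mine, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each spurious feature enters a single bootstrap fit with probability q,
# independently; it survives the intersection only if it enters all B1 fits.
q, n_features, trials = 0.3, 50, 2000

rates = {}
for B1 in (1, 2, 5, 10):
    survived = (rng.random((trials, B1, n_features)) < q).all(axis=1)
    rates[B1] = survived.mean()  # empirical false-positive rate, approximately q**B1

print(rates)
```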
Model selection through intersection often excludes true parameters (i.e., false negatives), and, conversely, model estimation using unions often includes erroneous parameters (i.e., false positives). By using stochastic resampling, combined with model selection through intersections, followed by model estimation through unions, UoI permits us to mitigate the feature inclusion/exclusion inherent in either operation. Essentially, the limitations of selection by intersection are counteracted by the union of estimates, and vice versa. 3.3 UoILasso has Superior Performance on Simulated Data Sets To explore the performance of the UoILasso algorithm, we have performed extensive numerical investigations on simulated data sets, where we can control key properties of the data. There are a large number of algorithms available for linear regression, and we picked some of the most popular algorithms (e.g., Lasso), as well as more uncommon, but more powerful algorithms (e.g., SCAD, a non-convex method). Specifically, we compared UoILasso to five other model selection/estimation methods: Ridge, Lasso, SCAD, BoATS, and debiased Lasso [12, 18, 10, 5, 3, 13]. Note that BoATS and debiased Lasso are both two-stage methods. Figure 2: Range of observed results, in comparison with existing algorithms. (a) True β distribution (grey histograms) and estimated values (colored lines). (b) Scatter plot of true and estimated values of observation variable on held-out samples. (c) Metrics of algorithm performance. We examined performance of these algorithms across a variety of underlying distributions of model parameters, degrees of sparsity, and noise levels. Across all algorithms examined, we found that UoILasso (Fig. 2, black) generally resulted in very high selection accuracy (Fig. 2(c), right) with parameter estimates with low error (Fig. 2(c), center-right), leading to the best prediction accuracy (Fig. 2(c), center-left) and prediction parsimony (Fig. 2(c), left).
In addition, it was very robust to differences in underlying parameter distribution, degree of sparsity, and magnitude of noise. (See [4] for more details.) 3.4 UoILasso in Neuroscience: Sparse Functional Networks from Human Neural Recordings and Parsimonious Prediction from Genetic and Phenotypic Data We sought to determine if the enhanced selection and estimation properties of UoILasso also improved its utility as a tool for data-driven discovery in complex, diverse neuroscience data sets. Neurobiology seeks to understand the brain across multiple spatio-temporal scales, from molecules-to-minds. We first tackled the problem of graph formation from multi-electrode (p = 86 electrodes) neural recordings taken directly from the surface of the human brain during speech production (n = 45 trials each). See [7] for details. That is, the goal was to construct sparse neuroscientifically-meaningful graphs for further downstream analysis. To estimate functional connectivity, we calculated partial correlation graphs. The model was estimated independently for each electrode, and we compared the results of graphs estimated by UoILasso to the graphs estimated by SCAD. In Fig. 3(a)-(b), we display the networks derived from recordings during the production of /b/ while speaking /ba/. We found that the UoILasso network (Fig. 3(a)) was much sparser than the SCAD network (Fig. 3(b)). Furthermore, the network extracted by UoILasso contained electrodes in the lip (dorsal vSMC), jaw (central vSMC), and larynx (ventral vSMC) regions, accurately reflecting the articulators engaged in the production of /b/ (Fig. 3(c)) [7]. The SCAD network (Fig. 3(d)) did not have any of these properties. This highlights the improved power of UoILasso to extract sparse graphs with functionally meaningful features relative to even some non-convex methods. We calculated connectivity graphs during the production of 9 consonant-vowel syllables. Fig. 
3(e) displays a summary of prediction accuracy for UoILasso networks (red) and SCAD networks (black) as a function of time. Figure 3: Application of UoI to neuroscience and genetics data. (a)-(f): Functional connectivity networks from ECoG recordings during speech production. (g)-(h): Parsimonious prediction of complex phenotypes from genotype and phenotype data. The average relative prediction accuracy (compared to baseline times) for the UoILasso network was generally greater during the time of peak phoneme encoding [T = -100:200] compared to the SCAD network. Fig. 3(f) plots the time course of the parameter selection ratio for the UoILasso network (red) and SCAD network (black). The UoILasso network was consistently ∼5× sparser than the SCAD network. These results demonstrate that UoILasso extracts sparser graphs from noisy neural signals with a modest increase in prediction accuracy compared to SCAD. We next investigated whether UoILasso would improve the identification of a small number of highly predictive features from genotype-phenotype data. To do so, we analyzed data from n = 365 mice (173 female, 192 male) that are part of the genetically diverse Collaborative Cross cohort. We analyzed single-nucleotide polymorphisms (SNPs) from across the entire genome of each mouse (p = 11,563 SNPs). For each animal, we measured two continuous, quantitative phenotypes: weight and behavioral performance on the rotarod task (see [14] for details). We focused on predicting these phenotypes from a small number of genotype-phenotype features. We found that UoILasso identified and estimated a small number of features that were sufficient to explain large amounts of variability in these complex behavioral and physiological phenotypes. Fig. 3(g) displays the non-zero values estimated for the different features (e.g., location of loci on the genome) contributing to the weight (black) and speed (red) phenotype. Here, non-opaque points correspond to the mean ± s.d.
across cross-validation samples, while the opaque points are the medians. Importantly, for both speed and weight phenotypes, we confirmed that several identified predictor features had been reported in the literature, though by different studies, e.g., genes coding for Kif1b, Rrm2b/Ubr5, and Dloc2. (See [4] for more details.) Accurate prediction of phenotypic variability with a small number of factors was a unique property of models found by UoILasso. For both weight and rotarod performance, models fit by UoILasso had marginally increased prediction accuracy compared to other methods (+1%), but they did so with far fewer parameters (lower selection ratios). This resulted in prediction parsimony (BIC) that was several orders of magnitude better (Fig. 3(h)). Together, these results demonstrate that UoILasso can identify a small number of genetic/physiological factors that are highly predictive of complex physiological and behavioral phenotypes. Figure 4: Extension of UoI to classification and matrix decomposition. (a) UoI for classification (UoIL1Logistic). (b) UoI for matrix decomposition (UoICUR); solid and dashed lines are for PAH and SORCH data sets, respectively. 3.5 UoIL1Logistic and UoICUR: Application of UoI to Classification and Matrix Decomposition As noted, UoI is a framework into which other methods can be inserted. While we have primarily demonstrated UoI in the context of linear regression, it is much more general than that. To illustrate this, we implemented a classification algorithm (UoIL1Logistic) and matrix decomposition algorithm (UoICUR), and we compared them to the base methods on several data sets (see [4] for details). In classification, UoI resulted in either equal or improved prediction accuracy with 2x-10x fewer parameters for a variety of biomedical classification tasks (Fig. 4(a)).
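The column-subset-selection primitive underlying UoICUR can be sketched with leverage-score sampling (a generic CUR-style sketch; the sampling scheme and function name are my assumptions, not the paper's BasicCUR):

```python
import numpy as np

def leverage_column_select(A, k, seed=0):
    """Select k columns of A with probability proportional to rank-k leverage scores."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k] ** 2, axis=0)            # leverage score of each column
    cols = rng.choice(A.shape[1], size=k, replace=False, p=lev / lev.sum())
    C = A[:, cols]
    A_approx = C @ np.linalg.pinv(C) @ A          # reconstruction A' = C C^+ A
    err = np.linalg.norm(A - A_approx)            # Frobenius error ||A - A'||_F
    return cols, err
```

The returned Frobenius error is the reconstruction-accuracy measure defined in Sec. 3.1.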
For matrix decomposition (in this case, column subset selection), for a given dimensionality, UoI resulted in reconstruction errors that were consistently lower than the base method (BasicCUR), and quickly approached an unscalable greedy algorithm (GreedyCUR) for two genetics data sets (Fig. 4(b)). In both cases, UoI improved the prediction parsimony relative to the base (classification or decomposition) method. 4 Discussion UoI-based methods leverage stochastic data resampling and a range of sparsity-inducing regularization parameters/dimensions to build families of potential features, and they then average nearly unbiased parameter estimates of selected features to maximize predictive accuracy. Thus, UoI separates model selection with intersection operations from model estimation with union operations: the limitations of selection by intersection are counteracted by the union of estimates, and vice versa. Stochastic data resampling can be viewed as a perturbation of the data, and UoI efficiently identifies and robustly estimates features that are stable to these perturbations. A unique property of UoI-based methods is the ability to control both false positives and false negatives. Initial theoretical work (see [4]) shows that increasing the number of bootstraps in the selection module (B1) increases the amount of feature compression (primary controller of false positives), while increasing the number of bootstraps in the estimation module (B2) increases feature expansion (primary controller of false negatives), and we observe this empirically. Thus, neither should be too large, and their relative values express the balance between feature compression and expansion. This tension is seen in many places in machine learning and data analysis: local nearest neighbor methods vs. global latent factor models; local spectral methods that tend to expand due to their diffusion-based properties vs. flow-based methods that tend to contract; and sparse L1 vs.
dense L2 penalties/priors more generally. Interestingly, an analogous balance of compressive and expansive forces contributes to neural learning algorithms based on Hebbian synaptic plasticity [6]. Our results highlight how revisiting popular methods in light of new data science demands can lead to still further-improved methods, and they suggest several directions for theoretical and empirical work. References [1] F. R. Bach. Bolasso: model consistent Lasso estimation through the bootstrap. In Proceedings of the 25th international conference on Machine learning, pages 33–40, 2008. [2] P. Bickel and B. Li. Regularization in statistics. TEST, 15(2):271–344, 2006. [3] K. E. Bouchard. Bootstrapped adaptive threshold selection for statistical model selection and estimation. Technical report, 2015. Preprint: arXiv:1505.03511. [4] K. E. Bouchard, A. F. Bujan, F. Roosta-Khorasani, S. Ubaru, Prabhat, A. M. Snijders, J.-H. Mao, E. F. Chang, M. W. Mahoney, and S. Bhattacharyya. Union of Intersections (UoI) for interpretable data driven discovery and prediction. Technical report, 2017. Preprint: arXiv:1705.07585 (also available as Supplementary Material). [5] K. E. Bouchard and E. F. Chang. Control of spoken vowel acoustics and the influence of phonetic context in human speech sensorimotor cortex. Journal of Neuroscience, 34(38):12662–12677, 2014. [6] K. E. Bouchard, S. Ganguli, and M. S. Brainard. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences. Frontiers in Computational Neuroscience, 9(92), 2015. [7] K. E. Bouchard, N. Mesgarani, K. Johnson, and E. F. Chang. Functional organization of human sensorimotor cortex for speech articulation. Nature, 495(7441):327–332, 2013. [8] L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. [9] National Research Council. Frontiers in Massive Data Analysis. The National Academies Press, Washington, D. C., 2013. [10] J. Fan and R. Li.
Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001. [11] S. Ganguli and H. Sompolinsky. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 35(1):485–508, 2012. [12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, New York, 2003. [13] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research, 15:2869–2909, 2014. [14] J.-H. Mao, S. A. Langley, Y. Huang, M. Hang, K. E. Bouchard, S. E. Celniker, J. B. Brown, J. K. Jansson, G. H. Karpen, and A. M. Snijders. Identification of genetic factors that modify motor performance and body weight using collaborative cross mice. Scientific Reports, 5:16247, 2015. [15] V. Marx. Biology: The big challenges of big data. Nature, 498(7453):255–260, 2013. [16] R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. MIT Press, Cambridge, MA, 2012. [17] T. J. Sejnowski, P. S. Churchland, and J. A. Movshon. Putting big data to good use in neuroscience. Nature Neuroscience, 17(11):1440–1441, 2014. [18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1):267–288, 1996. [19] M. J. Wainwright. Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1:233–253, 2014.
Good Semi-supervised Learning That Requires a Bad GAN Zihang Dai∗, Zhilin Yang∗, Fan Yang, William W. Cohen, Ruslan Salakhutdinov School of Computer Science Carnegie Mellon University {dzihang,zhiliny,fanyang1,wcohen,rsalakhu}@cs.cmu.edu Abstract Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically, we show that given the discriminator objective, good semi-supervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.² 1 Introduction Deep neural networks are usually trained on a large amount of labeled data, and it has been a challenge to apply deep models to datasets with limited labels. Semi-supervised learning (SSL) aims to leverage the large amount of unlabeled data to boost the model performance, particularly focusing on the setting where the amount of available labeled data is limited. Traditional graph-based methods [2, 26] were extended to deep neural networks [22, 23, 8], which involves applying convolutional neural networks [10] and feature learning techniques to graphs so that the underlying manifold structure can be exploited. [15] employs a Ladder network to minimize the layerwise reconstruction loss in addition to the standard classification loss. Variational auto-encoders have also been used for semi-supervised learning [7, 12] by maximizing the variational lower bound of the unlabeled data log-likelihood. Recently, generative adversarial networks (GANs) [6] were demonstrated to be able to generate visually realistic images.
GANs set up an adversarial game between a discriminator and a generator. The goal of the discriminator is to tell whether a sample is drawn from true data or generated by the generator, while the generator is optimized to generate samples that are not distinguishable by the discriminator. Feature matching (FM) GANs [16] apply GANs to semi-supervised learning on K-class classification. The objective of the generator is to match the first-order feature statistics between the generator distribution and the true distribution. Instead of binary classification, the discriminator employs a (K + 1)-class objective, where true samples are classified into the first K classes and generated samples are classified into the (K + 1)-th class. This (K + 1)-class discriminator objective leads to strong empirical results, and was later widely used to evaluate the effectiveness of generative models [5, 21]. Though empirically feature matching improves semi-supervised classification performance, the following questions still remain open. First, it is not clear why the formulation of the discriminator can improve the performance when combined with a generator. Second, it seems that good semi-supervised learning and a good generator cannot be obtained at the same time. For example, [16] observed that mini-batch discrimination generates better images than feature matching, but feature matching obtains a much better semi-supervised learning performance. The same phenomenon was also observed in [21], where the model generated better images but failed to improve the performance on semi-supervised learning. In this work, we take a step towards addressing these questions.
∗Equal contribution. Ordering determined by dice rolling.
²Code is available at https://github.com/kimiyoung/ssl_bad_gan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
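As a concrete illustration of the first-order feature matching described above, here is a minimal numpy sketch (function and variable names are ours, not from the paper). It also shows why matching only batch means cannot rule out a collapsed generator:

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    """Feature matching generator loss [16]: squared L2 distance between
    the first-order feature statistics (batch means) of real and generated
    samples. f_real, f_fake: (batch, feature_dim) arrays of discriminator
    features f(x)."""
    return float(np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2))

# Toy check: identical batch means give zero loss even though the two
# batches are very different, illustrating how weak first-order matching is.
f_real = np.array([[0.0, 0.0], [2.0, 2.0]])
f_fake = np.array([[1.0, 1.0], [1.0, 1.0]])  # collapsed to a single point
print(feature_matching_loss(f_real, f_fake))  # 0.0
```

The collapsed generator in the toy example achieves zero loss, which foreshadows the collapsing drawback analyzed in Section 4.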
First, we show that given the current (K + 1)-class discriminator formulation of GAN-based SSL, good semi-supervised learning requires a “bad” generator. Here by bad we mean the generator distribution should not match the true data distribution. Then, we give the definition of a preferred generator, which is to generate complement samples in the feature space. Theoretically, under mild assumptions, we show that a properly optimized discriminator obtains correct decision boundaries in high-density areas in the feature space if the generator is a complement generator. Based on our theoretical insights, we analyze why feature matching works on 2-dimensional toy datasets. It turns out that our practical observations align well with our theory. However, we also find that the feature matching objective has several drawbacks. Therefore, we develop a novel formulation of the discriminator and generator objectives to address these drawbacks. In our approach, the generator minimizes the KL divergence between the generator distribution and a target distribution that assigns high densities for data points with low densities in the true distribution, which corresponds to the idea of a complement generator. Furthermore, to enforce our assumptions in the theoretical analysis, we add the conditional entropy term to the discriminator objective. Empirically, our approach substantially improves over vanilla feature matching GANs, and obtains new state-of-the-art results on MNIST, SVHN, and CIFAR-10 when all methods are compared under the same discriminator architecture. Our results on MNIST and SVHN also represent state-of-the-art amongst all single-model results. 2 Related Work Besides the adversarial feature matching approach [16], several previous works have incorporated the idea of adversarial training in semi-supervised learning. 
Notably, [19] proposes categorical generative adversarial networks (CatGAN), which substitutes the binary discriminator in the standard GAN with a multi-class classifier, and trains both the generator and the discriminator using information-theoretical criteria on unlabeled data. From the perspective of regularization, [14, 13] propose virtual adversarial training (VAT), which effectively smooths the output distribution of the classifier by seeking virtually adversarial samples. It is worth noting that VAT bears a similar merit to our approach, which is to learn from auxiliary non-realistic samples rather than realistic data samples. Despite the similarity, the principles of VAT and our approach are orthogonal: VAT aims to enforce a smooth function, while we aim to leverage a generator to better detect the low-density boundaries. Different from the aforementioned approaches, [24] proposes to train conditional generators with adversarial training to obtain complete sample pairs, which can be directly used as additional training cases. Recently, Triple GAN [11] also employs the idea of a conditional generator, but uses an adversarial cost to match the two model-defined factorizations of the joint distribution with the one defined by paired data. Apart from adversarial training, there have been other recent efforts in semi-supervised learning using deep generative models. As an early work, [7] adapts the original Variational Auto-Encoder (VAE) to a semi-supervised learning setting by treating the classification label as an additional latent variable in the directed generative model. [12] adds auxiliary variables to the deep VAE structure to make the variational distribution more expressive. With the boosted model expressiveness, auxiliary deep generative models (ADGM) improve the semi-supervised learning performance upon the semi-supervised VAE.
Different from the explicit usage of deep generative models, the Ladder networks [15] take advantage of the local (layerwise) denoising auto-encoding criterion, and create a more informative unsupervised signal through lateral connections. 3 Theoretical Analysis Given a labeled set L = {(x, y)}, let {1, 2, · · · , K} be the label space for classification. Let D and G denote the discriminator and generator, and P_D and p_G denote the corresponding distributions. Consider the discriminator objective function of GAN-based semi-supervised learning [16]:

max_D E_{x,y∼L} log P_D(y|x, y ≤ K) + E_{x∼p} log P_D(y ≤ K|x) + E_{x∼p_G} log P_D(K+1|x),   (1)

where p is the true data distribution. The probability distribution P_D is over K + 1 classes, where the first K classes are true classes and the (K+1)-th class is the fake class. The objective function consists of three terms. The first term is to maximize the log conditional probability for labeled data, which is the standard cost as in the supervised learning setting. The second term is to maximize the log probability of the first K classes for unlabeled data. The third term is to maximize the log probability of the (K+1)-th class for generated data. Note that the above objective function bears a similar merit to the original GAN formulation if we treat P(K+1|x) as the probability of fake samples, while the only difference is that we split the probability of true samples into K sub-classes. Let f(x) be a nonlinear vector-valued function, and w_k be the weight vector for class k. As a standard setting in previous work [16, 5], the discriminator D is defined as

P_D(k|x) = exp(w_k⊤f(x)) / Σ_{k′=1}^{K+1} exp(w_{k′}⊤f(x)).

Since this is a form of over-parameterization, w_{K+1} is fixed as a zero vector [16]. We next discuss the choices of different possible G's. 3.1 Perfect Generator Here, by perfect generator we mean that the generator distribution p_G exactly matches the true data distribution p, i.e., p_G = p.
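The (K+1)-class parameterization above, with w_{K+1} fixed to the zero vector, can be sketched in numpy as follows (a toy illustration with our own names; `W` holds the K true-class weight vectors):

```python
import numpy as np

def pd_probs(f_x, W):
    """P_D(k|x) for the (K+1)-class discriminator: a softmax over the
    logits w_k^T f(x), with w_{K+1} fixed to the zero vector to remove
    the over-parameterization [16]. f_x: (d,) feature vector; W: (K, d)
    weight matrix for the K true classes."""
    logits = np.concatenate([W @ f_x, [0.0]])  # fake-class logit is 0
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

W = np.array([[2.0, 0.0], [0.0, 2.0]])  # K = 2 true classes, d = 2
p = pd_probs(np.array([1.0, 0.0]), W)
# p is a valid distribution over K + 1 = 3 classes, and the fake-class
# probability is 1 / (1 + sum_k exp(w_k^T f(x))).
assert np.isclose(p.sum(), 1.0)
```

With this parameterization, condition max_k w_k⊤f(x) > 0 (used later in Assumption 1) is exactly the statement that some true-class logit exceeds the fixed fake-class logit of zero.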
We now show that when the generator is perfect, it does not improve the generalization over the supervised learning setting. Proposition 1. If p_G = p, and D has infinite capacity, then for any optimal solution D = (w, f) of the following supervised objective,

max_D E_{x,y∼L} log P_D(y|x, y ≤ K),   (2)

there exists D∗ = (w∗, f∗) such that D∗ maximizes Eq. (1) and that for all x, P_D(y|x, y ≤ K) = P_{D∗}(y|x, y ≤ K). The proof is provided in the supplementary material. Proposition 1 states that for any optimal solution D of the supervised objective, there exists an optimal solution D∗ of the (K+1)-class objective such that D and D∗ share the same generalization error. In other words, using the (K+1)-class objective does not prevent the model from experiencing any arbitrarily high generalization error that it could suffer from under the supervised objective. Moreover, since all the optimal solutions are equivalent w.r.t. the (K+1)-class objective, it is the optimization algorithm that really decides which specific solution the model will reach, and thus what generalization performance it will achieve. This implies that when the generator is perfect, the (K+1)-class objective by itself is not able to improve the generalization performance. In fact, in many applications, an almost infinite amount of unlabeled data is available, so learning a perfect generator for purely sampling purposes should not be useful. In this case, our theory suggests that not only does the generator not help, but also that unlabeled data is not effectively utilized when the generator is perfect. 3.2 Complement Generator The function f maps data points in the input space to the feature space. Let p_k(f) be the density of the data points of class k in the feature space. Given a threshold ϵ_k, let F_k be a subset of the data support where p_k(f) > ϵ_k, i.e., F_k = {f : p_k(f) > ϵ_k}. We assume that given {ϵ_k}_{k=1}^K, the F_k's are disjoint with a margin.
More formally, for any f_j ∈ F_j, f_k ∈ F_k, and j ≠ k, we assume that there exists a real number 0 < α < 1 such that αf_j + (1−α)f_k ∉ F_j ∪ F_k. As long as the probability densities of different classes do not share any mode, i.e., ∀i ≠ j, argmax_f p_i(f) ∩ argmax_f p_j(f) = ∅, this assumption can always be satisfied by tuning the thresholds ϵ_k. With the assumption held, we will show that the model performance would be better if the thresholds could be set to smaller values (ideally zero). We also assume that each F_k contains at least one labeled data point. Suppose ∪_{k=1}^K F_k is bounded by a convex set B. If the support F_G of a generator G in the feature space is a relative complement set in B, i.e., F_G = B − ∪_{k=1}^K F_k, we call G a complement generator. The reason why we utilize a bounded B to define the complement is presented in the supplementary material. Note that the definition of complement generator implies that G is a function of f. By treating G as a function of f, theoretically D can optimize the original objective function in Eq. (1). Now we present the assumption on the convergence conditions of the discriminator. Let U and G be the sets of unlabeled data and generated data. Assumption 1. Convergence conditions. When D converges on a finite training set {L, U, G}, D learns a (strongly) correct decision boundary for all training data points. More specifically, (1) for any (x, y) ∈ L, we have w_y⊤f(x) > w_k⊤f(x) for any other class k ≠ y; (2) for any x ∈ G, we have 0 > max_{k=1}^K w_k⊤f(x); (3) for any x ∈ U, we have max_{k=1}^K w_k⊤f(x) > 0. In Assumption 1, conditions (1) and (2) assume classification correctness on labeled data and true-fake correctness on generated data respectively, which is directly induced by the objective function. Likewise, it is also reasonable to assume true-fake correctness on unlabeled data, i.e., log Σ_k exp(w_k⊤f(x)) > 0 for x ∈ U. However, condition (3) goes beyond this and assumes max_k w_k⊤f(x) > 0.
We discuss this issue in detail in the supplementary material and argue that these assumptions are reasonable. Moreover, in Section 5, our approach addresses this issue explicitly by adding a conditional entropy term to the discriminator objective to enforce condition (3). Lemma 1. Suppose for all k, the L2-norms of the weights w_k are bounded by ∥w_k∥₂ ≤ C. Suppose that there exists ϵ > 0 such that for any f_G ∈ F_G, there exists f′_G ∈ G such that ∥f_G − f′_G∥₂ ≤ ϵ. With the conditions in Assumption 1, for all k ≤ K, we have w_k⊤f_G < Cϵ. Corollary 1. When unlimited generated data samples are available, with the conditions in Lemma 1, we have lim_{|G|→∞} w_k⊤f_G ≤ 0. See the supplementary material for the proof. Proposition 2. Given the conditions in Corollary 1, for all classes k ≤ K and for all feature space points f_k ∈ F_k, we have w_k⊤f_k > w_j⊤f_k for any j ≠ k. Proof. Without loss of generality, suppose j = argmax_{j≠k} w_j⊤f_k. Now we prove it by contradiction. Suppose w_k⊤f_k ≤ w_j⊤f_k. Since the F_k's are disjoint with a margin, B is a convex set, and F_G = B − ∪_k F_k, there exists 0 < α < 1 such that f_G = αf_k + (1−α)f_j with f_G ∈ F_G and f_j being the feature of a labeled data point in F_j. By Corollary 1, it follows that w_j⊤f_G ≤ 0. Thus, w_j⊤f_G = αw_j⊤f_k + (1−α)w_j⊤f_j ≤ 0. By Assumption 1, w_j⊤f_k > 0 and w_j⊤f_j > 0, leading to a contradiction. It follows that w_k⊤f_k > w_j⊤f_k for any j ≠ k. Proposition 2 guarantees that when G is a complement generator, under mild assumptions, a near-optimal D learns correct decision boundaries in each high-density subset F_k (defined by ϵ_k) of the data support in the feature space. Intuitively, the generator generates complement samples, so the logits of the true classes are forced to be low in the complement. As a result, the discriminator obtains class boundaries in low-density areas. This builds a connection between our approach and manifold-based methods [2, 26], which also leverage the low-density boundary assumption.
With our theoretical analysis, we can now answer the questions raised in Section 1. First, the (K+1)-class formulation is effective because the generated complement samples encourage the discriminator to place the class boundaries in low-density areas (Proposition 2). Second, good semi-supervised learning indeed requires a bad generator, because a perfect generator is not able to improve the generalization performance (Proposition 1). 4 Case Study on Synthetic Data In the previous section, we have established the fact that a complement generator, instead of a perfect generator, is what makes a good semi-supervised learning algorithm. Now, to get a more intuitive understanding, we conduct a case study based on two 2D synthetic datasets, where we can easily verify our theoretical analysis by visualizing the model behaviors. In addition, by analyzing how feature matching (FM) [16] works in 2D space, we identify some potential problems with it, which motivates the approach introduced in the next section. Specifically, the two synthetic datasets are four spins and two circles, as shown in Fig. 1.
Figure 1: Labeled and unlabeled data are denoted by crosses and points respectively, and different colors indicate classes.
Figure 2: Left: Classification decision boundary, where the white line indicates the true-fake boundary; Right: True-fake decision boundary.
Figure 3: Feature space at convergence.
Figure 4: Left: Blue points are generated data, and the black shadow indicates unlabeled data. Middle and right can be interpreted as above.
Soundness of complement generator Firstly, to verify that the complement generator is a preferred choice, we construct the complement generator by uniformly sampling from a bounded 2D box that contains all unlabeled data, and removing those samples on the manifold. Based on the complement generator, the result on four spins is visualized in Fig. 2. As expected, both the classification and true-fake decision boundaries are almost perfect.
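The synthetic complement generator used above can be sketched as rejection sampling from a bounding box (a minimal numpy illustration; the `margin` threshold and all names are our choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def complement_sample(data, n, margin=0.3):
    """Sketch of the synthetic complement generator: sample uniformly
    from a bounding box around the unlabeled data and keep only points
    farther than `margin` from every data point, i.e., off the manifold.
    data: (m, 2) array of unlabeled points; returns (n, 2) samples."""
    lo = data.min(axis=0) - margin
    hi = data.max(axis=0) + margin
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)  # uniform draw inside the box
        if np.min(np.linalg.norm(data - x, axis=1)) > margin:
            out.append(x)        # keep only off-manifold points
    return np.array(out)

data = np.array([[0.0, 0.0], [1.0, 1.0]])  # toy stand-in for the manifold
comp = complement_sample(data, 100)
# every kept sample is at least `margin` away from all data points
assert all(np.min(np.linalg.norm(data - x, axis=1)) > 0.3 for x in comp)
```

In the paper's experiment the "manifold" is the set of unlabeled 2D points; the rejection step is what makes the samples complementary rather than realistic.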
More importantly, the classification decision boundary always lies in the fake data area (left panel), which matches our theoretical analysis well. Visualization of feature space Next, to verify our analysis about the feature space, we choose the feature dimension to be 2, apply FM to the simpler dataset of two circles, and visualize the feature space in Fig. 3. As we can see, most of the generated features (blue points) reside in between the features of the two classes (green and orange crosses), although there is some overlap. As a result, the discriminator can almost perfectly distinguish between true and generated samples, as indicated by the black decision boundary, satisfying our required Assumption 1. Meanwhile, the model obtains a perfect classification boundary (blue line), as our analysis suggests. Pros and cons of feature matching Finally, to further understand the strengths and weaknesses of FM, we analyze the solution FM reaches on four spins, shown in Fig. 4. From the left panel, we can see that many of the generated samples actually fall into the data manifold, while the rest scatter around in the nearby surroundings of the data manifold. This suggests that by matching the first-order moment by SGD, FM is performing some kind of distribution matching, though in a rather weak manner. Loosely speaking, FM has the effect of generating samples close to the manifold. But due to its weak power in distribution matching, FM will inevitably generate samples outside of the manifold, especially when the data complexity increases. Consequently, the generator density p_G is usually lower than the true data density p within the manifold and higher outside. Hence, an optimal discriminator P_{D∗}(K+1|x) = p(x)/(p(x) + p_G(x)) could still distinguish between true and generated samples in many cases. However, there are two types of mistakes the discriminator can still make: 1.
Higher density mistake inside manifold: Since the FM generator still assigns a significant amount of probability mass inside the support, wherever p_G > p > 0, an optimal discriminator will incorrectly predict samples in that region as "fake". Actually, this problem has already shown up when we examined the feature space (Fig. 3). 2. Collapsing with missing coverage outside manifold: As the feature matching objective for the generator only requires matching the first-order statistics, there exist many trivial solutions the generator can end up with. For example, it can simply collapse to the mean of unlabeled features, or a few surrounding modes, as long as the feature mean matches. Actually, we do see such collapsing phenomena in high-dimensional experiments when FM is used (see Fig. 5a and Fig. 5c). As a result, a collapsed generator will fail to cover some gap areas between manifolds. Since the discriminator is only well-defined on the union of the data supports of p and p_G, the prediction result in such missing areas is under-determined and relies fully on the smoothness of the parametric model. In this case, significant mistakes can also occur. 5 Approach As discussed in previous sections, feature matching GANs suffer from the following drawbacks: 1) the first-order moment matching objective does not prevent the generator from collapsing (missing coverage); 2) feature matching can generate high-density samples inside the manifold; 3) the discriminator objective does not encourage realization of condition (3) in Assumption 1, as discussed in Section 3.2. Our approach aims to explicitly address the above drawbacks. Following prior work [16, 6], we employ a GAN-like implicit generator. We first sample a latent variable z from a uniform distribution U(0, 1) for each dimension, and then apply a deep convolutional network to transform z to a sample x.
5.1 Generator Entropy Fundamentally, the first drawback concerns the entropy of the distribution of generated features, H(p_G(f)). This connection is rather intuitive, as the collapsing issue is a clear sign of low entropy. Therefore, to avoid collapsing and increase coverage, we consider explicitly increasing the entropy. Although the idea sounds simple and straightforward, there are two practical challenges. Firstly, as implicit generative models, GANs only provide samples rather than an analytic density form. As a result, we cannot evaluate the entropy exactly, which rules out the possibility of naive optimization. More problematically, the entropy is defined in a high-dimensional feature space, which is changing dynamically throughout the training process. Consequently, it is difficult to estimate and optimize the generator entropy in the feature space in a stable and reliable way. Faced with these difficulties, we consider two practical solutions. The first method is inspired by the fact that the input space is essentially static, where estimating and optimizing the counterpart quantities is much more feasible. Hence, we instead increase the generator entropy in the input space, i.e., H(p_G(x)), using a technique derived from an information-theoretical perspective that relies on variational inference (VI). Specifically, let Z be the latent variable space, and X be the input space. We introduce an additional encoder, q : X → Z, to define a variational upper bound of the negative entropy [3]: −H(p_G(x)) ≤ −E_{x,z∼p_G} log q(z|x) = L_VI. Hence, minimizing the upper bound L_VI effectively increases the generator entropy. In our implementation, we formulate q as a diagonal Gaussian with bounded variance, i.e., q(z|x) = N(µ(x), σ²(x)), with 0 < σ(x) < θ, where µ(·) and σ(·) are neural networks, and θ is a threshold to prevent arbitrarily large variance.
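As an illustration of this bound, the following numpy sketch estimates L_VI from samples, assuming the encoder outputs µ(x) and σ(x) are already given as arrays (a hypothetical setup; in the paper they are neural networks trained jointly with the generator):

```python
import numpy as np

def l_vi(z, mu_x, sigma_x):
    """Monte-Carlo estimate of the variational bound
    L_VI = -E_{x,z~p_G} log q(z|x), with q(z|x) = N(mu(x), sigma(x)^2)
    diagonal. z: (n, d) latent samples; mu_x, sigma_x: (n, d) encoder
    outputs evaluated at x = G(z)."""
    log_q = -0.5 * np.sum(
        np.log(2 * np.pi * sigma_x**2) + (z - mu_x) ** 2 / sigma_x**2,
        axis=1,
    )
    return float(-log_q.mean())

rng = np.random.default_rng(0)
z = rng.uniform(size=(512, 4))
# An encoder that recovers z well (mu(x) = z, small sigma) yields a
# smaller (tighter) L_VI than an uninformative encoder.
tight = l_vi(z, mu_x=z, sigma_x=np.full_like(z, 0.1))
loose = l_vi(z, mu_x=np.zeros_like(z), sigma_x=np.ones_like(z))
assert tight < loose
```

Since −H(p_G(x)) ≤ L_VI for any q, a better encoder gives a tighter bound, and minimizing L_VI w.r.t. the generator pushes the entropy up.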
Alternatively, the second method aims at increasing the generator entropy in the feature space by optimizing an auxiliary objective. Concretely, we adapt the pull-away term (PT) [25] as the auxiliary cost,

L_PT = 1/(N(N−1)) Σ_{i=1}^N Σ_{j≠i} ( f(x_i)⊤f(x_j) / (∥f(x_i)∥∥f(x_j)∥) )²,

where N is the size of a mini-batch and x are samples. Intuitively, the pull-away term tries to orthogonalize the features in each mini-batch by minimizing the squared cosine similarity. Hence, it has the effect of increasing the diversity of generated features and thus the generator entropy. 5.2 Generating Low-Density Samples The second drawback of feature matching GANs is that high-density samples can be generated in the feature space, which is not desirable according to our analysis. Similar to the argument in Section 5.1, it is infeasible to directly minimize the density of generated features. Instead, we enforce the generation of samples with low density in the input space. Specifically, given a threshold ϵ, we minimize the following term as part of our objective:

E_{x∼p_G} log p(x) I[p(x) > ϵ],   (3)

where I[·] is an indicator function. Using a threshold ϵ, we ensure that only high-density samples are penalized while low-density samples are unaffected. Intuitively, this objective pushes the generated samples to "move" towards low-density regions defined by p(x). To model the probability distribution over images, we simply adapt the state-of-the-art density estimation model for natural images, namely the PixelCNN++ [17] model. The PixelCNN++ model is used to estimate the density p(x) in Eq. (3). The model is pretrained on the training set, and fixed during semi-supervised training. 5.3 Generator Objective and Interpretation Combining our solutions to the first two drawbacks of feature matching GANs, we have the following objective function of the generator:

min_G −H(p_G) + E_{x∼p_G} log p(x) I[p(x) > ϵ] + ∥E_{x∼p_G} f(x) − E_{x∼U} f(x)∥².
(4) This objective is closely related to the idea of the complement generator discussed in Section 3. To see this, let's first define a target complement distribution in the input space as follows:

p∗(x) = (1/Z)(1/p(x)) if p(x) > ϵ and x ∈ B_x;  p∗(x) = C if p(x) ≤ ϵ and x ∈ B_x,

where Z is a normalizer, C is a constant, and B_x is the set defined by mapping B from the feature space to the input space. With this definition, the KL divergence (KLD) between p_G(x) and p∗(x) is

KL(p_G∥p∗) = −H(p_G) + E_{x∼p_G} log p(x) I[p(x) > ϵ] + E_{x∼p_G} ( I[p(x) > ϵ] log Z − I[p(x) ≤ ϵ] log C ).

The form of the KLD immediately reveals the aforementioned connection. Firstly, the KLD shares exactly two terms with the generator objective (4). Secondly, while p∗(x) is only defined on B_x, there is no such hard constraint on p_G(x). However, the feature matching term in Eq. (4) can be seen as softly enforcing this constraint by bringing generated samples "close" to the true data (cf. Section 4). Moreover, because the indicator function I[·] has zero gradient almost everywhere, the last term in the KLD does not contribute any informative gradient to the generator. In summary, optimizing our proposed objective (4) can be understood as minimizing the KL divergence between the generator distribution and a desired complement distribution, which connects our practical solution to our theoretical analysis. 5.4 Conditional Entropy In order for the complement generator to work, according to condition (3) in Assumption 1, the discriminator needs to have a strong true-fake belief on unlabeled data, i.e., max_{k=1}^K w_k⊤f(x) > 0. However, the objective function of the discriminator in [16] does not enforce a dominant class. Instead, it only needs Σ_{k=1}^K P_D(k|x) > P_D(K+1|x) to obtain a correct decision boundary, while the probabilities P_D(k|x) for k ≤ K can possibly be uniformly distributed.
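For concreteness, the pull-away term of Section 5.1 and the low-density term in Eq. (3) can be sketched in numpy as follows (array inputs stand in for discriminator features and for per-sample log-densities from the pretrained PixelCNN++; all names are ours):

```python
import numpy as np

def pull_away_term(f):
    """Pull-away term [25]: mean squared cosine similarity over all
    distinct pairs of features in a mini-batch. f: (N, d) feature matrix;
    averaging over the N(N-1) off-diagonal pairs matches L_PT."""
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)  # unit-normalize rows
    cos = fn @ fn.T                                    # pairwise cosines
    n = f.shape[0]
    off = cos[~np.eye(n, dtype=bool)]                  # drop diagonal
    return float(np.mean(off**2))

def low_density_penalty(log_p, log_eps):
    """Monte-Carlo estimate of term (3): E log p(x) * I[p(x) > eps].
    Only generated samples whose log-density exceeds the threshold
    contribute. log_p: (N,) log p(x) for generated samples."""
    return float(np.mean(np.where(log_p > log_eps, log_p, 0.0)))

# Orthogonal features minimize the pull-away term exactly.
assert pull_away_term(np.eye(3)) == 0.0
# Only the high-density sample (log p = -1 > log_eps = -2) is penalized.
assert np.isclose(low_density_penalty(np.array([-1.0, -3.0]), -2.0), -0.5)
```

Minimizing the sum of these terms (plus the feature matching term) is the sketch-level analogue of the generator objective (4).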
To guarantee the strong true-fake belief in the optimal conditions, we add a conditional entropy term to the discriminator objective, which becomes

max_D E_{x,y∼L} log p_D(y|x, y ≤ K) + E_{x∼U} log p_D(y ≤ K|x) + E_{x∼p_G} log p_D(K+1|x) + E_{x∼U} Σ_{k=1}^K p_D(k|x) log p_D(k|x).   (5)

By optimizing Eq. (5), the discriminator is encouraged to satisfy condition (3) in Assumption 1. Note that the same conditional entropy term has been used in other semi-supervised learning methods [19, 13] as well, but here we motivate the minimization of conditional entropy based on our theoretical analysis of GAN-based semi-supervised learning. To train the networks, we alternately update the generator and the discriminator to optimize Eq. (4) and Eq. (5) based on mini-batches. If an encoder is used to maximize H(p_G), the encoder and the generator are updated at the same time. 6 Experiments We mainly consider three widely used benchmark datasets, namely MNIST, SVHN, and CIFAR-10. As in previous work, we randomly sample 100, 1,000, and 4,000 labeled samples for MNIST, SVHN,

Methods | MNIST (# errors) | SVHN (% errors) | CIFAR-10 (% errors)
CatGAN [19] | 191 ± 10 | - | 19.58 ± 0.46
SDGM [12] | 132 ± 7 | 16.61 ± 0.24 | -
Ladder network [15] | 106 ± 37 | - | 20.40 ± 0.47
ADGM [12] | 96 ± 2 | 22.86 | -
FM [16] ∗ | 93 ± 6.5 | 8.11 ± 1.3 | 18.63 ± 2.32
ALI [4] | - | 7.42 ± 0.65 | 17.99 ± 1.62
VAT small [13] ∗ | 136 | 6.83 | 14.87
Our best model ∗ | 79.5 ± 9.8 | 4.25 ± 0.03 | 14.41 ± 0.30
Triple GAN [11] ∗‡ | 91 ± 58 | 5.77 ± 0.17 | 16.99 ± 0.36
Π model [9] †‡ | - | 5.43 ± 0.25 | 16.55 ± 0.29
VAT+EntMin+Large [13] † | - | 4.28 | 13.15

Table 1: Comparison with state-of-the-art methods on three benchmark datasets. Only methods without data augmentation are included. ∗ indicates using the same (small) discriminator architecture, † indicates using a larger discriminator architecture, and ‡ means self-ensembling.
(a) FM on SVHN (b) Ours on SVHN (c) FM on CIFAR (d) Ours on CIFAR
Figure 5: Comparing images generated by FM and our model.
FM generates collapsed samples, while our model generates diverse "bad" samples.
and CIFAR-10 respectively during training, and use the standard data split for testing. We use the 10-quantile log probability to define the threshold ϵ in Eq. (4). We add instance noise to the input of the discriminator [1, 18], and use spatial dropout [20] to obtain faster convergence. Except for these two modifications, we use the same neural network architecture as in [16]. For fair comparison, we also report the performance of our FM implementation with the aforementioned differences. 6.1 Main Results We compare the results of our best model with state-of-the-art methods on the benchmarks in Table 1. Our proposed methods consistently improve the performance upon feature matching. We achieve new state-of-the-art results on all the datasets when only the small discriminator architecture is considered. Our results are also state-of-the-art on MNIST and SVHN among all single-model results, even when compared with methods using self-ensembling and large discriminator architectures. Finally, note that because our method is orthogonal to VAT [13], combining VAT with our presented approach should yield further performance improvement in practice. 6.2 Ablation Study We report the results of the ablation study in Table 2. In the following, we analyze the effects of several components in our model, subject to the intrinsic features of different datasets. First, the generator entropy terms (VI and PT) (Section 5.1) improve the performance on SVHN and CIFAR by up to 2.2 points in terms of error rate. Moreover, as shown in Fig. 5, our model significantly reduces the collapsing effects present in the samples generated by FM, which also indicates that maximizing the generator entropy is beneficial. On MNIST, probably due to its simplicity, no collapsing phenomenon was observed with vanilla FM training [16] or in our setting.
Under such circumstances, maximizing the generator entropy seems to be unnecessary, and the estimation bias introduced by approximation techniques can even hurt the performance.

MNIST (# errors)            CIFAR (% error)
FM          85.0 ± 11.7     FM          16.14
FM+VI       86.5 ± 10.6     FM+VI       14.41
FM+LD       79.5 ± 9.8      FM+VI+Ent   15.82
FM+LD+Ent   89.2 ± 10.5

SVHN (% error)              Max log-p
FM              6.83        MNIST FM             -297
FM+VI           5.29        MNIST FM+LD          -659
FM+PT           4.63        SVHN FM+PT+Ent       -5809
FM+PT+Ent       4.25        SVHN FM+PT+LD+Ent    -5919
FM+PT+LD+Ent    4.19        SVHN 10-quant        -5622

Setting ϵ as q-th centile    q = 2        q = 10       q = 20       q = 100
Error on MNIST               77.7 ± 6.1   79.5 ± 9.8   80.1 ± 9.6   85.0 ± 11.7

Table 2: Ablation study. FM is feature matching. LD is the low-density enforcement term in Eq. (3). VI and PT are two entropy maximization methods described in Section 5.1. Ent means the conditional entropy term in Eq. (5). Max log-p is the maximum log probability of generated samples, evaluated by a PixelCNN++ model. 10-quant shows the 10-quantile of true image log probability. Error means the number of misclassified examples on MNIST, and error rate (%) on others.

Second, the low-density (LD) term is useful when FM indeed generates samples in high-density areas. MNIST is a typical example in this case. When trained with FM, most of the generated handwritten digits are highly realistic and have high log probabilities according to the density model (cf. max log-p in Table 2). Hence, when applied to MNIST, LD improves the performance by a clear margin. By contrast, few of the generated SVHN images are realistic (cf. Fig. 5a). Quantitatively, SVHN samples are assigned very low log probabilities (cf. Table 2). As expected, LD has a negligible effect on the performance for SVHN. Moreover, the "max log-p" column in Table 2 shows that while LD can reduce the maximum log probability of the generated MNIST samples by a large margin, it does not yield a noticeable difference on SVHN.
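The 10-quantile threshold used for ϵ in Eq. (4) amounts to a single percentile computation. A minimal sketch, assuming the per-image log-probabilities from the pretrained density model are already collected in an array (the helper name and placeholder values are ours):

```python
import numpy as np

def quantile_threshold(logp_true, q=10):
    """Set the low-density threshold epsilon of Eq. (4) to the q-th percentile
    of the log-probabilities a pretrained density model (PixelCNN++ in the
    paper) assigns to real training images."""
    return np.percentile(logp_true, q)

# placeholder log-probabilities; in practice these come from the density model
eps_threshold = quantile_threshold(np.linspace(-400.0, -250.0, 1001), q=10)
```

Generated samples whose model log-probability exceeds this threshold are then penalized by the low-density term.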
This further justifies our analysis. Based on the above conclusion, we conjecture that LD would not help on CIFAR, where sample quality is even lower. Thus, we did not train a density model on CIFAR, also due to limited computational resources.
Third, adding the conditional entropy term has mixed effects on different datasets. While the conditional entropy (Ent) is an important factor in achieving the best performance on SVHN, it hurts the performance on MNIST and CIFAR. One possible explanation relates to the classic exploitation-exploration tradeoff, where minimizing Ent favors exploitation and minimizing the classification loss favors exploration. During the initial phase of training, the discriminator is relatively uncertain and thus the gradient of the Ent term might dominate. As a result, the discriminator learns to be more confident even on incorrect predictions, and thus gets trapped in local minima.
Lastly, we vary the value of the hyper-parameter ϵ in Eq. (4). As shown at the bottom of Table 2, reducing ϵ clearly leads to better performance, which further justifies our analysis in Sections 3 and 4 that off-manifold samples are favorable.
6.3 Generated Samples
We compare the generated samples of FM and our approach in Fig. 5. The FM images in Fig. 5c are extracted from previous work [16]. While collapsing is widely observed in FM samples, our model generates diverse "bad" images, which is consistent with our analysis.
7 Conclusions
In this work, we present a semi-supervised learning framework that uses generated data to boost task performance. Under this framework, we characterize the properties of various generators and theoretically prove that a complementary (i.e., bad) generator improves generalization. Empirically, our proposed method improves the performance of image classification on several benchmark datasets.
Acknowledgements
This work was supported by the DARPA award D17AP00001, the Google focused award, and the Nvidia NVAIL award.
The authors would also like to thank Han Zhao for his insightful feedback.
References
[1] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training; in review for ICLR, 2017.
[2] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.
[3] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[7] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[8] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[9] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[10] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[11] Chongxuan Li, Kun Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. arXiv preprint arXiv:1703.02291, 2017.
[12] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[13] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.
[14] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
[15] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
[16] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
[17] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
[18] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[19] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[20] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 648–656, 2015.
[21] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Adversarial generator-encoder networks. arXiv preprint arXiv:1704.02304, 2017.
[22] Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pages 639–655. Springer, 2012.
[23] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
[24] Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. Semi-supervised QA with generative domain-adaptive nets. arXiv preprint arXiv:1702.02206, 2017.
[25] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
[26] Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 912–919, 2003.
Targeting EEG/LFP Synchrony with Neural Nets
Yitong Li1, Michael Murias2, Samantha Major2, Geraldine Dawson2, Kafui Dzirasa2, Lawrence Carin1 and David E. Carlson3,4
1Department of Electrical and Computer Engineering, Duke University
2Departments of Psychiatry and Behavioral Sciences, Duke University
3Department of Civil and Environmental Engineering, Duke University
4Department of Biostatistics and Bioinformatics, Duke University
{yitong.li,michael.murias,samantha.major,geraldine.dawson,kafui.dzirasa,lcarin,david.carlson}@duke.edu
Abstract
We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are "big" in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1 Introduction
There is significant current research on methods for Electroencephalography (EEG) and Local Field Potential (LFP) data in a variety of applications, such as Brain-Machine Interfaces (BCIs) [21], seizure detection [24, 26], and fundamental research in fields such as psychiatry [11]. The wide variety of applications has resulted in many analysis approaches and packages, such as Independent Component Analysis in EEGLAB [8] and a variety of standard machine learning approaches in FieldTrip [22]. While in many applications prediction is key, such as for BCIs [18, 19], in applications such as emotion processing and psychiatric disorders, clinicians are ultimately interested in the dynamics of underlying neural signals to help elucidate understanding and design future experiments. This goal necessitates the development of interpretable models, such that a practitioner may understand the features and their relationships to outcomes. Thus, the focus here is on developing an interpretable and predictive approach to understanding spontaneous neural activity.
A popular feature in these analyses is based on spectral coherence, where a specific frequency band is compared between pairwise channels, to analyze both amplitude and phase coherence. When two regions have a high power (amplitude) coherence in a spectral band, it implies that these areas are coordinating in a functional network to perform a task [3]. Spectral coherence has been previously used to design classification algorithms on EEG [20] and LFP [30] data. Furthermore, these features have underlying neural relationships that can be used to design causal studies using neurostimulation [11]. However, fully pairwise approaches face significant challenges with limited data because of the proliferation of features when considering pairwise properties.
Recent approaches to this problem include first partitioning the data into spatial areas and considering only broad relationships between spatial regions [33], or enforcing a low-rank structure on the pairwise relationships [30]. To analyze both LFP and EEG data, we follow [30] in focusing on low-rank properties; however, this previous approach relied on a Gaussian process implementation for LFPs that does not scale to the greater number of electrodes used in EEG. We therefore develop a new framework whereby the low-rank spectral patterns are approximated by parameterized linear projections, with the parametrization guided by neuroscience insights from [30]. Critically, these linear projections can be included in a convolutional neural network (CNN) architecture to facilitate end-to-end learning with interpretable convolutional filters and fast test-time performance.
In addition to being interpretable, the parameterization dramatically reduces the total number of parameters to fit, yielding a CNN with only hundreds of parameters. By comparison, conventional deep models require learning millions of parameters. Even special-purpose networks such as EEGNet [15], a recently proposed CNN model for EEG data, still require learning thousands of parameters. The parameterized convolutional layer in the proposed model is followed by max-pooling, a single fully-connected layer, and a cross-entropy classification loss; this leads to a clear relationship between the proposed targeted features and outcomes. When presenting the model, interpretation of the filters and the classification algorithm is discussed in detail. We also discuss how deeper structures can be developed on top of this approach. We demonstrate in the experiments that the proposed framework mitigates overfitting and yields improved predictive performance on several publicly available datasets.
In addition to developing a new neuroscience-motivated parametric CNN, there are several other contributions of this manuscript. First, a Gaussian process (GP) adapter [16] within the proposed framework is developed. The idea is that the input electrodes are first mapped to pseudo-inputs by using a GP, which allows straightforward handling of missing (dropped or otherwise noise-corrupted) electrodes common in real datasets. In addition, this allows the same convolutional neural network to be applied to datasets recorded on distinct electrode layouts. By combining data sources, the result can better generalize to a population, which we demonstrate in the results by combining two datasets based on emotion recognition. We also developed an autoencoder version of the network to address overfitting concerns that are relevant when the total amount of labeled data is limited, while also improving model generalizability. The autoencoder can lead to minor improvements in performance, which is included in the Supplementary Material.
2 Basic Model Setup: Parametric CNN
The following notation is employed: scalars are lowercase italicized letters, e.g. x; vectors are bolded lowercase letters, e.g. x; and matrices are bolded uppercase letters, e.g. X. The convolution operator is denoted ∗, and i = √−1. ⊗ denotes the Kronecker product, and ⊙ denotes an element-wise product. The input data are X_i ∈ ℝ^{C×T}, where C is the number of simultaneously recorded electrodes/channels, and T is given by the sampling rate and time length; i = 1, ..., N, where N is the total number of trials. The data can also be represented as X_i = [x_{i1}, ..., x_{iC}]^⊤, where x_{ic} ∈ ℝ^T is the data restricted to the cth channel. The associated labels are denoted y_i, an integer corresponding to a label. The trial index i is added only when necessary for clarity. An example signal is presented in Figure 1 (left). The data are often windowed, the ith window of which yields X_i and the associated label y_i.
Clear identification of phase and power relationships among channels motivates the development of a structured neural network model for which the convolutional filters target this synchrony, or frequency-specific power and phase correlations.
2.1 SyncNet
Inspired both by the success of deep learning and by spectral coherence as a predictive feature [12, 30], a CNN is developed to target these properties. The proposed model, termed SyncNet, performs a structured 1D convolution to jointly model the power, frequency, and phase relationships between channels.

Figure 1: (Left) Visualization of an EEG dataset on 8 electrodes split into windows. The markers (e.g., "FP1") denote electrode names, which have corresponding spatial locations. (Right) 8 channels of synthetic data. Refer to Section 2.2 for more detail.

Figure 2: SyncNet follows a convolutional neural network structure. The right side is SyncNet (Section 2.1), which is parameterized to target relevant quantities. The left side is the GP adapter, which aims at unifying different electrode layouts and reducing overfitting (Section 3).

This goal is achieved by using parameterized 1-dimensional convolutional filters. Specifically, the kth of K filters for channel c is

f_c^{(k)}(τ) = b_c^{(k)} cos(ω^{(k)}τ + φ_c^{(k)}) exp(−β^{(k)}τ²).   (1)

The frequency ω^{(k)} ∈ ℝ₊ and decay β^{(k)} ∈ ℝ₊ parameters are shared across channels, and they define the real part of a (scaled) Morlet wavelet.¹ These two parameters define the spectral properties targeted by the kth filter, where ω^{(k)} controls the center of the frequency spectrum and β^{(k)} controls the frequency-time precision trade-off. The amplitude b_c^{(k)} ∈ ℝ₊ and phase shift φ_c^{(k)} ∈ [0, 2π] are channel-specific. Thus, the convolutional filter in each channel is a discretized version of a scaled and rotated Morlet wavelet. By parameterizing the model in this way, all channels are targeted collectively.
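The parameterized filter of Eq. (1) is easy to construct directly: each channel gets a cosine-Gaussian (real Morlet) filter with a shared frequency and decay. The numeric values below (sampling rate, filter length, amplitudes, phases) are illustrative placeholders, not values from the paper:

```python
import numpy as np

def syncnet_filter(tau, b_c, omega, phi_c, beta):
    """Eq. (1): f_c^(k)(tau) = b_c cos(omega*tau + phi_c) exp(-beta*tau^2),
    the real part of a scaled, rotated Morlet wavelet."""
    return b_c * np.cos(omega * tau + phi_c) * np.exp(-beta * tau ** 2)

# Discrete support for an even filter length N_tau (Section 2.1)
N_tau = 40
tau = np.arange(-N_tau // 2, N_tau // 2)   # integers -N_tau/2 .. N_tau/2 - 1

# One filter bank: omega and beta are shared; b_c and phi_c are per-channel
C = 8
rng = np.random.default_rng(0)
omega = 2 * np.pi * 10 / 256               # e.g. ~10 Hz at a 256 Hz sampling rate
beta = 1e-3                                # decay: frequency-time trade-off
b = rng.uniform(0.5, 1.5, size=C)          # amplitudes b_c > 0
phi = rng.uniform(0, 2 * np.pi, size=C)    # phase shifts phi_c in [0, 2*pi)
bank = np.stack([syncnet_filter(tau, b[c], omega, phi[c], beta) for c in range(C)])
```

In the model these parameters are learned by backpropagation rather than sampled; the sketch only shows the filter construction.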
The form in (1) is motivated by the work in [30], but the resulting model we develop is far more computationally efficient. A fuller discussion of the motivation for (1) is given in Section 2.2. For practical reasons, the filters are restricted to have finite length N_τ, and each time step τ takes an integer value from [−N_τ/2, N_τ/2 − 1] when N_τ is even and from [−(N_τ−1)/2, (N_τ−1)/2] when N_τ is odd. For typical learned β^{(k)}'s, the convolutional filter vanishes by the edges of the window. Succinctly, the output of the kth convolutional filter bank is given by h^{(k)} = Σ_{c=1}^{C} f_c^{(k)}(τ) ∗ x_c.
The simplest form of SyncNet contains only one convolution layer, as in Figure 2. The output from each filter bank h^{(k)} is passed through a Rectified Linear Unit (ReLU), followed by max pooling over the entire window, to return h̃^{(k)} for each filter. The filter outputs h̃^{(k)} for k = 1, ..., K are concatenated and used as input to a softmax classifier with the cross-entropy loss to predict ŷ. Because of the temporal and spatial redundancies in EEG, dropout is instituted at the channel level, with

dropout(x_c) = { x_c / p, with probability p;  0, with probability 1 − p.   (2)

p determines the typical percentage of channels included, and was set as p = 0.75. It is straightforward to create deeper variants of the model by augmenting SyncNet with additional standard convolutional layers. However, in our experiments, adding more layers typically resulted in over-fitting due to the limited numbers of training samples, but this will likely be beneficial in larger datasets.
¹It is straightforward to use the Morlet wavelet directly, define the outputs as complex variables, and define the neural network to target the same properties, but this leads to both computational and coding overhead.
2.2 SyncNet Targets Class Differences in Cross-Spectral Densities
The cross-spectral density [3] is a widely used metric for understanding the synchronous nature of signals in frequency bands.
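The forward pass just described — summed per-channel 1D convolutions, ReLU, max-pooling over the window, plus the channel-level dropout of Eq. (2) — can be sketched as follows. This is a NumPy illustration of the computation, not the authors' TensorFlow implementation:

```python
import numpy as np

def syncnet_forward(X, filters):
    """One SyncNet filter-bank application (Section 2.1), as a sketch.

    X:       (C, T) multichannel window.
    filters: (K, C, N_tau) bank; row (k, c) holds f_c^(k) on its discrete support.
    Returns the K pooled features h~^(k).
    """
    K = filters.shape[0]
    feats = np.empty(K)
    for k in range(K):
        # h^(k) = sum_c f_c^(k) * x_c: 1-D convolutions summed over channels
        h = sum(np.convolve(X[c], filters[k, c], mode='valid')
                for c in range(X.shape[0]))
        feats[k] = np.maximum(h, 0.0).max()  # ReLU, then max-pool over the window
    return feats

def channel_dropout(X, p=0.75, rng=None):
    """Channel-level dropout of Eq. (2): keep each channel with probability p
    and rescale the kept channels by 1/p."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random(X.shape[0]) < p
    return X * keep[:, None] / p
```

The K pooled features would then feed the softmax classifier; that final linear layer is omitted here.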
The cross-spectral density is typically constructed by converting a time series into a frequency representation and then calculating the complex covariance matrix in each frequency band. In this section we sketch how the SyncNet filter bank targets cross-spectral densities to make optimal classifications. The discussion is given first in the complex domain, and then it is demonstrated why the same result holds in the real domain.
In the time domain, it is possible to understand the cross-spectral density of a single frequency band by using a cross-spectral kernel [30] to define the covariance function of a Gaussian process. Letting τ = t − t′, the cross-spectral kernel is defined as

K^{CSD}_{cc′tt′} = cov(x_{ct}, x_{c′t′}) = A_{cc′} κ(τ),   κ(τ) = exp(−(1/2)β*τ² + iω*τ).   (3)

Here, ω* and β* control the frequency band, and c and c′ are channel indexes. A ∈ ℂ^{C×C} is a positive semi-definite matrix that defines the cross-spectral density for the frequency band controlled by κ(τ). Each entry A_{cc′} consists of a magnitude |A_{cc′}| that controls the power (amplitude) coherence between electrodes in that frequency band and a complex phase that determines the optimal time offset between the signals. The covariance over the complete multi-channel time series is given by K^{CSD} = A ⊗ K_κ, where K_κ is the Gram matrix of κ over time. The power (magnitude) coherence is given by the absolute value of the entry, and the phase offset can be determined by the rotation in the complex space.
A generative model for oscillatory neural signals is given by a Gaussian process with this kernel [30], where vec(X) ∼ 𝒞𝒩(0, K^{CSD} + σ²I_{CT}). The entries of K^{CSD} are given by (3). 𝒞𝒩 denotes the circularly symmetric complex normal. The additive noise term σ²I_{CT} is excluded in the following for clarity.
Note that the complex form of (1) in SyncNet across channels is given as f(τ) = f_ω(τ)s, where f_ω(τ) = exp(−(1/2)βτ² + iωτ) is the filter over time and s = b ⊙ exp(iφ) collects the weights and rotations of a single SyncNet filter.
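Assembling the covariance in Eq. (3) is a Kronecker product of the C×C cross-spectral matrix A with the Gram matrix of the stationary kernel over time. A small sketch (the function name and arguments are ours):

```python
import numpy as np

def csd_kernel(A, T, omega_star, beta_star):
    """Covariance of Eq. (3): K^CSD = A kron K_kappa, where
    [K_kappa]_{tt'} = exp(-0.5 * beta* * (t-t')^2 + 1j * omega* * (t-t')).
    A must be a Hermitian positive semi-definite (C, C) cross-spectral matrix."""
    t = np.arange(T)
    tau = t[:, None] - t[None, :]
    K_kappa = np.exp(-0.5 * beta_star * tau ** 2 + 1j * omega_star * tau)
    return np.kron(A, K_kappa)
```

Because A is Hermitian PSD and the stationary kernel satisfies κ(−τ) = conj(κ(τ)), the full covariance is Hermitian, as a complex-normal covariance must be.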
Suppose that each channel is filtered independently by the filter f_ω = f_ω(τ), with τ the vector of time steps. Writing the convolution in matrix form as x̃_c = f_ω ∗ x_c = F_ω† x_c, where F_ω ∈ ℂ^{T×T} is a matrix formulation of the convolution operator, results in a filtered signal x̃_c ∼ 𝒞𝒩(0, A_{cc} F_ω† K_κ F_ω). For a filtered version over all channels, X^⊤ = [x_1^⊤, ..., x_C^⊤], the distribution is given by

vec(X̃) = vec(F_ω† X^⊤) ∼ 𝒞𝒩(0, A ⊗ F_ω† K_κ F_ω),   x̃_t ∼ 𝒞𝒩(0, A [F_ω† K_κ F_ω]_{tt}).   (4)

x̃_t ∈ ℂ^C is defined as the observation at time t for all C channels. The diagonal of F_ω† K_κ F_ω quickly reaches a steady state away from the edge effects, so we write const = [F_ω† K_κ F_ω]_{tt}. The output from the SyncNet filter bank prior to the pooling stage is then given by h_t = s† x̃_t ∼ 𝒞𝒩(0, const · s†As). We note that the signal-to-noise ratio is maximized by matching the filter's (f_ω) frequency properties to the generated frequency properties; i.e., β and ω from (1) should match β* and ω* from (3). We next focus on the properties of an optimal s.
Suppose that two classes are generated from (3) with cross-spectral densities A₀ and A₁ for classes 0 and 1, respectively. Thus, the signals are drawn from 𝒞𝒩(0, A_y ⊗ K_κ) for y ∈ {0, 1}, with K_κ the Gram matrix of the kernel in (3). The optimal projection s* maximizes the differences in the distribution of h_t depending on the class, which is equivalent to maximizing the ratio between the variances of the two cases. Mathematically, this is equivalent to finding

s* = argmax_s max{ s†A₁s / (s†A₀s), s†A₀s / (s†A₁s) } = argmax_s |log(s†A₁s) − log(s†A₀s)|.   (5)

Note that the constant drops out due to the ratio. Because the SyncNet filter is attempting to classify the two conditions, it should learn to best differentiate the classes and match the optimal s*. We demonstrate in Section 5.1 on synthetic data that SyncNet filters do in fact align with this optimal direction, and the model is therefore targeting properties of the cross-spectral densities.
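Each ratio in Eq. (5) is a generalized Rayleigh quotient, so the optimal s* can be read off from the extreme generalized eigenvectors of the pair (A₁, A₀). A NumPy sketch via Cholesky whitening of A₀, assuming A₀ is positive definite (the helper name is ours):

```python
import numpy as np

def optimal_filter_direction(A0, A1):
    """Solve Eq. (5): s* maximizes |log(s^H A1 s) - log(s^H A0 s)|.
    With A0 = L L^H and u = L^H s, the generalized Rayleigh quotient
    s^H A1 s / s^H A0 s becomes an ordinary one in u, so the candidates
    are the extreme eigenvectors of L^{-1} A1 L^{-H}."""
    L = np.linalg.cholesky(A0)
    Linv = np.linalg.inv(L)
    M = Linv @ A1 @ Linv.conj().T
    w, U = np.linalg.eigh(M)                 # ascending real eigenvalues
    # pick whichever extreme ratio is further from 1 on a log scale
    u = U[:, -1] if abs(np.log(w[-1])) >= abs(np.log(w[0])) else U[:, 0]
    s = np.linalg.solve(L.conj().T, u)       # map back: s = L^{-H} u
    return s / np.linalg.norm(s)
```

For the synthetic setup of Section 5.1 (A₀ = I, A₁ = I + s*(s*)†), this recovers s* up to sign, since the top eigenvector of A₁ is s* itself.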
In the above discussion, the argument was made with respect to complex signals and models; however, a similar result holds when only the real domain is used. Note that if the signals are oscillatory, then the result after filtering in the real domain and max-pooling is essentially the same as max-pooling the absolute value of the complex filter outputs. This is because the filtered signal rotates through the complex domain and will align with the real domain within the max-pooling period for standard signals. This is shown visually in Supplemental Figure 9.
3 Gaussian Process Adapter
A practical issue in EEG datasets is that electrode layouts are not constant, either due to inconsistent device design or electrode failure. Secondly, nearby electrodes are highly correlated and contain redundant information, so fitting parameters to all electrodes results in overfitting. These issues are addressed by developing a Gaussian process (GP) adapter, in the spirit of [16], trained with SyncNet as shown in the left side of Figure 2. Regardless of the electrode layout, the observed signals X at electrode locations p = {p₁, ..., p_C} are mapped to a shared number of pseudo-inputs at locations p* = {p*₁, ..., p*_L} before being input to SyncNet.
In contrast to prior work, the proposed GP adapter is formulated as a multi-task GP [4], and the pseudo-input locations p* are learned. A GP is used to map X ∈ ℝ^{C×T} at locations p to the pseudo-signals X* ∈ ℝ^{L×T} at locations p*, where L < C is the number of pseudo-inputs. Distances are constructed by projecting each electrode into a 2D representation by the Azimuthal Equidistant Projection. When evaluated at a finite set of points, the multi-task GP [4] can be written as a multivariate normal

vec(X) ∼ 𝒩(f, σ²I_{CT}),   f ∼ 𝒩(0, K).   (6)

K is constructed by a kernel function K(τ, c, c′) that encodes separable relationships through time and through space.
The full covariance matrix can be calculated as K = K_pp ⊗ K_tt, where [K_pp]_{cc′} = α₁ exp(−α₂‖p_c − p_{c′}‖₁) and K_tt is set to the identity matrix I_T. K_pp ∈ ℝ^{C×C} targets the spatial relationship across channels using the exponential kernel. Note that this kernel K is distinct from K^{CSD} used in Section 2.2.
Let the pseudo-input locations be defined as p*_l for l = 1, ..., L. Using the GP formulation, the signal can be inferred at the L pseudo-input locations from the original signal. Following [16], only the expectation of the signal is used (to facilitate fast computation), which is given by X* = E(X*|X) = K_{p*p}(K_pp + σ²I_C)^{−1}X. An illustration of the learned new locations is shown under X* in Figure 2. The derivation of this mathematical form and additional details on the GP adapter are included in Supplemental Section A.
The GP adapter parameters p*, α₁, α₂ are optimized jointly with SyncNet. The input signal X_i is mapped to X*_i, which is then input to SyncNet. The predicted label is ŷ_i = Sync(X*_i; θ), where Sync(·) is the prediction function of SyncNet. Given the SyncNet loss function Σ_{i=1}^{N} ℓ(ŷ_i, y_i) = Σ_{i=1}^{N} ℓ(Sync(X*_i; θ), y_i), the overall training loss

L = Σ_{i=1}^{N} ℓ(Sync(E[X*_i | X_i]; θ), y_i) = Σ_{i=1}^{N} ℓ(Sync(K_{p*p}(K_pp + σ²I_C)^{−1}X_i; θ), y_i)   (7)

is jointly minimized over the SyncNet parameters θ and the GP adapter parameters {p*, α₁, α₂}. The GP uncertainty can be included in the loss at the expense of significantly increased optimization cost, but this does not yield performance improvements that justify the increased cost [16].
4 Related Work
Frequency-spectrum features are widely used for processing EEG/LFP signals. Often this requires calculating synchrony- or entropy-based features within predefined frequency bands, as in [20, 5, 9, 14]. There are many hand-crafted features and classifiers for BCI tasks [18]; however, in our experiments, these hand-crafted features did not perform well on long oscillatory signals.
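The pseudo-input mapping X* = K_{p*p}(K_pp + σ²I_C)^{−1}X is a single kernel regression with the exponential kernel above. A sketch (hyper-parameter values are placeholders; in the model they are learned jointly with SyncNet):

```python
import numpy as np

def gp_adapter(X, p, p_star, alpha1=1.0, alpha2=1.0, sigma2=0.1):
    """Map a (C, T) recording at 2D electrode positions p (C, 2) to L
    pseudo-signals at positions p_star (L, 2) via the GP posterior mean
    X* = K_{p*p} (K_pp + sigma^2 I)^{-1} X with the exponential kernel
    k(p, p') = alpha1 * exp(-alpha2 * ||p - p'||_1). Illustrative sketch."""
    def kern(P, Q):
        d = np.abs(P[:, None, :] - Q[None, :, :]).sum(-1)   # L1 distances
        return alpha1 * np.exp(-alpha2 * d)
    K_pp = kern(p, p)
    K_sp = kern(p_star, p)
    return K_sp @ np.linalg.solve(K_pp + sigma2 * np.eye(len(p)), X)
```

Because only the C×C matrix K_pp is inverted, and it is shared across all trials with the same layout, the cost is O(C³) once per layout, as noted in the related-work comparison.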
The EEG signal is modeled in [1] as a matrix-variate model with spatial and spectral smoothing. However, the number of parameters scales with time length, rendering the approach ineffective for longer time series. A range-EEG feature has been proposed [23], which measures the peak-to-peak amplitude. In contrast, our approach learns frequency bands of interest and can handle the long time series evaluated in our experiments.
Deep learning has been a popular recent area of research in EEG analysis. This includes Restricted Boltzmann Machines and Deep Belief Networks [17, 36], CNNs [32, 29], and RNNs [2, 34]. These approaches focus on learning both spatial and temporal relationships. In contrast to hand-crafted features and SyncNet, these deep learning methods are typically used as black-box classifiers. EEGNet [15] considered a four-layer CNN to classify event-related potentials and oscillatory EEG signals, demonstrating improved performance over low-level feature extraction. This network was designed to have limited parameters, requiring 2200 for their smallest model. In contrast, the SyncNet filters are simple to interpret and require learning only a few hundred parameters.
An alternative approach is to design GP kernels to target synchrony properties and learn appropriate frequency bands. The phase/amplitude synchrony of LFP signals has been modeled [30, 10] with the cross-spectral mixture (CSM) kernel. This approach was used to define a generative model over differing classes and may be used to learn an unsupervised clustering model. A key issue with the CSM approach is the computational complexity, where gradients cost O(NTC³) (using approximations), which is infeasible with the larger number of electrodes in EEG data. In contrast, the proposed GP adapter requires only a single matrix inversion shared by most data points, which is O(C³). The use of wavelets has previously been considered in scattering networks [6].
Scattering networks used Morlet wavelets for image classification, but did not consider the complex rotation of wavelets over channels nor the learning of the wavelet widths and frequencies considered here.
5 Experiments
To demonstrate that SyncNet is targeting synchrony information, we first apply it to synthetic data in Section 5.1. Notably, the learned filter bank recovers the optimal separating filter. Empirical performance is given for several EEG datasets in Section 5.2, where SyncNet often has the highest hold-out accuracy while maintaining interpretable features. The usefulness of the GP adapter for combining datasets is demonstrated in Section 5.3, where classification performance is dramatically improved via data augmentation. Empirical performance on an LFP dataset is shown in Section 5.4. Both LFP and EEG signals measure broad voltage fluctuations from the brain, but LFP has a significantly cleaner signal because it is measured inside the cortical tissue. In all tested cases, SyncNet methods give essentially state-of-the-art prediction while maintaining interpretable features.
The code is written in Python and TensorFlow. The experiments were run on a 6-core i7 machine with an Nvidia Titan X Pascal GPU. Details on training are given in Supplemental Section C.
5.1 Synthetic Dataset

Figure 3: Each dot represents one of 8 electrodes. The dots give complex directions for the optimal and learned filters, demonstrating that SyncNet approximately recovers optimal filters.

Synthetic data are generated for two classes by drawing data from a circularly symmetric complex normal matching the synchrony assumptions discussed in Section 2.2. The frequency band is predefined by ω* = 10 Hz, and β* is set to 40 (a frequency variance of 2.5 Hz) in (3). The number of channels is set to C = 8. Example data generated by this procedure are shown in Figure 1 (right), where only the real part of the signal is kept.
A1 and A0 are set such that the optimal vector from solving (5) is given by the shape visualized in Figure 3. This is accomplished by setting A0 = I_C and A1 = I + s*(s*)†. Data are then simulated by drawing from vec(X) ∼ CN(0, K_CSD + σ²I_{C×T}) and keeping only the real part of the signal. K_CSD is defined in equation (3), with A set to A0 or A1 depending on the class. In this experiment, the goal is to relate the filter learned by SyncNet to this optimal separating plane s*. To show that SyncNet is targeting synchrony, it is trained on this synthetic data using only a single convolutional filter. The learned filter parameters are projected to the complex space by s = b ⊙ exp(iφ), and are shown overlaid (rotated and rescaled to handle degeneracies) with the optimal rotations in Figure 3. As the amount of data increases, the SyncNet filter recovers the expected relationship between channels and the predefined frequency band. In addition, the learned ω is centered at 11Hz, which is close to the generative frequency band ω* of 10Hz. These synthetic results demonstrate that SyncNet is able to recover frequency bands of interest and target synchrony properties.

5.2 Performance on EEG Datasets

We consider three publicly available datasets for EEG classification, described below. After the validation on the publicly available data, we then apply the method to new clinical-trial data, to demonstrate that the approach can learn interpretable features that track the brain dynamics resulting from treatment.

UCI EEG: This dataset2 has a total of 122 subjects, with 77 diagnosed with alcoholism and 45 control subjects. Each subject undergoes 120 separate trials. The stimuli are pictures selected from the 1980 Snodgrass and Vanderwart picture set. The EEG signal is one second long and is sampled at 256Hz with 64 electrodes.
We evaluate the data both within subject, where the data are randomly split 7 : 1 : 2 for training, validation and testing, and across subjects, using a rotating test set of 11 subjects. The classification task is to recover whether the subject has been diagnosed with alcoholism or is a control subject.

DEAP dataset: The "Database for Emotion Analysis using Physiological signals" [14] has a total of 32 participants. Each subject has EEG recorded from 32 electrodes while they are shown a total of 40 one-minute-long music videos with strong emotional content. After watching each video, each subject gave an integer score from one to nine to evaluate their feelings in four different categories. The self-assessment categories are valence (happy/unhappy), arousal (bored/excited), dominance (submissive/empowered) and personal liking of the video. Following [14], this is treated as binary classification with a threshold at a score of 4.5. The performance is evaluated with leave-one-out testing, and the remaining subjects are split into 22 for training and 9 for validation.

SEED dataset: This dataset [35] involves repeated tests on 15 subjects. Each subject watches 15 movie clips 3 times. Each clip is designated with a negative/neutral/positive emotion label, and the EEG signal is recorded at 1000Hz from 62 electrodes. For this dataset, leave-one-out cross-validation is used, and the remaining 14 subjects are split with 10 for training and 4 for validation.

ASD dataset: The Autism Spectrum Disorder (ASD) dataset involves 22 children from ages 3 to 7 years undergoing treatment for ASD, with EEG measurements at baseline, 6 months post treatment, and 12 months post treatment. Each recording session involves 3 one-minute videos designed to measure responses to social stimuli and controls, measured with a 121-electrode array. The trial was approved by the Duke Hospital Institutional Review Board and conducted under IND #15949. Full details on the experiments and initial clinical results are available [7].
The classification task is to predict the time relative to treatment, to track the change in neural signatures post-treatment. The cross-patient predictive ability is estimated with leave-one-out cross-validation, where 17 patients are used to train the model and 4 patients are used as a validation set.

Table 1: Classification accuracy on EEG datasets.

               UCI             DEAP [14]                         SEED [35]  ASD
               Within  Cross   Arousal  Valence  Domin.  Liking  Emotion    Stage
DE [35]        0.821   0.622   0.529    0.517    0.528   0.577   0.491      0.504
PSD [35]       0.816   0.605   0.584    0.559    0.595   0.644   0.352      0.499
rEEG [23]      0.702   0.614   0.549    0.538    0.557   0.585   0.468      0.361
Spectral [14]  *       *       0.620    0.576    *       0.554   *          *
EEGNET [15]    0.878   0.672   0.536    0.572    0.589   0.594   0.533      0.363
MC-DCNN [37]   0.840   0.300   0.593    0.604    0.635   0.621   0.527      0.584
SyncNet        0.918   0.705   0.611    0.608    0.651   0.679   0.558      0.630
GP-SyncNet     0.923   0.723   0.592    0.611    0.621   0.659   0.516      0.637

The accuracy of predictions on these EEG datasets, from a variety of methods, is given in Table 1. We also implemented other hand-crafted spatial features, such as the brain symmetry index [31]; however, their performance was not competitive with the results here. EEGNET is an EEG-specific convolutional network proposed in [15]. The "Spectral" method from [14] uses an SVM on spectral power features extracted from each electrode in different frequency bands. MC-DCNN [37] denotes a 1D CNN where the filters are learned without the constraints of the parameterized structure. SyncNet used 10 filter sets, both with (GP-SyncNet) and without the GP adapter. Remarkably, the basic SyncNet already delivers state-of-the-art performance on most tasks.

2https://kdd.ics.uci.edu/databases/eeg/eeg.html

Figure 4: Learned filter centered at 14Hz on the ASD dataset. (a) Spatial pattern of learned amplitude b. (b) Spatial pattern of learned phase φ. Figures made with FieldTrip [22].
In contrast, the hand-crafted features could not effectively capture the available information, and the alternative CNN-based methods severely overfit the training data due to their large number of free parameters. In addition to state-of-the-art classification performance, a key property of SyncNet is that the features extracted and used in the classification are interpretable. Specifically, on the ASD dataset, the proposed method significantly improves the state of the art. However, the end goal of this experiment is to understand how the neural activity changes in response to the treatment. On this task, the ability of SyncNet to visualize features is important for dissemination to medical practitioners. To demonstrate how the filters can be visualized and communicated, we show one of the filters learned by SyncNet on the ASD dataset in Figure 4. This filter, centered at 14Hz, is highly associated with the session at 6 months post-treatment. Notably, this filter bank dominantly uses the signals measured at the forward part of the scalp (Figure 4, Left). Intriguingly, the phase relationships are primarily in phase for the frontal regions, but there are off-phase relationships between the midfrontal and the frontal part of the scalp (Figure 4, Right). Additional visualizations of the results are given in Supplemental Section E.

5.3 Experiments on GP adapter

In the previous section, it was noted that the GP adapter can improve performance within an existing dataset, demonstrating that the GP adapter is useful for reducing the number of parameters. However, the primary intended use of the GP adapter is to unify different electrode layouts. This is explored further by applying GP-SyncNet to the UCI EEG dataset and changing the number of pseudo-inputs. Notably, a mild reduction in the number of pseudo-inputs improves performance over directly using the measured data (Supplemental Figure 6(a)) by reducing the total number of parameters.
This is especially true when comparing the GP adapter to using a random subset of channels to reduce dimensionality.

Table 2: Accuracy mean and standard errors for training two datasets separately and jointly.

                   SyncNet        GP-SyncNet     GP-SyncNet Joint
DEAP [14] dataset  0.521 ± 0.026  0.557 ± 0.025  0.603 ± 0.020
SEED [35] dataset  0.771 ± 0.009  0.762 ± 0.015  0.779 ± 0.009

To demonstrate that the GP adapter can be used to combine datasets, the DEAP and SEED datasets were trained jointly using a GP adapter. The SEED data were downsampled to 128Hz to match the sampling frequency of the DEAP dataset, and the data were separated into 4-second windows due to their differing lengths. Each window is assigned the label of its trial. To combine the labeling spaces, only the negative and positive emotion labels were kept in SEED, and valence was used in the DEAP dataset. The number of pseudo-inputs is set to L = 26. The results are given in Table 2, which demonstrates that combining datasets can lead to dramatically improved generalization ability due to the data augmentation. Note that the basic SyncNet performances in Table 2 differ from the results in Table 1. First, the DEAP dataset performance is worse; this is due to significantly reduced information when considering a 4-second window instead of a 60-second window. Second, the performance on SEED has improved; this is due to considering only 2 classes instead of 3.

5.4 Performance on an LFP Dataset

Due to the limited number of publicly available multi-region LFP datasets, only a single LFP dataset was included in the experiments. The intention of this experiment is to show that the method is broadly applicable to neural measurements, and will be useful with the increasing availability of multi-region datasets. An LFP dataset was recorded from 26 mice from two genetic backgrounds (14 wild-type and 12 CLOCK∆19). CLOCK∆19 mice are an animal model of a psychiatric disorder. The data are sampled at 200 Hz from 11 channels.
Each mouse's recording comprises five minutes in its home cage, five minutes from an open field test, and ten minutes from a tail-suspension test. The data are split into temporal windows of five seconds. SyncNet is evaluated on two distinct prediction tasks. The first task is to predict the genotype (wild-type or CLOCK∆19), and the second task is to predict the current behavioral condition (home cage, open field, or tail-suspension test). We separate the data randomly as 7 : 1 : 2 for training, validation and testing.

Table 3: Comparison between different methods on an LFP dataset.

          PCA + SVM  DE [35]  PSD [35]  rEEG [23]  EEGNET [15]  SyncNet
Behavior  0.911      0.874    0.858     0.353      0.439        0.946
Genotype  0.724      0.771    0.761     0.449      0.689        0.926

Results from these two predictive tasks are shown in Table 3. SyncNet used K = 20 filters with filter length 40. These results demonstrate that SyncNet straightforwardly adapts to both EEG and LFP data. These data will be released with the publication of the paper.

6 Conclusion

We have proposed SyncNet, a new framework for EEG and LFP data classification that learns interpretable features. In addition to our original architecture, we have proposed a GP adapter to unify electrode layouts. Experimental results on both LFP and EEG data show that SyncNet outperforms conventional CNN architectures and all compared classification approaches. Importantly, the features from SyncNet can be clearly visualized and described, allowing them to be used to understand the dynamics of neural activity.

Acknowledgements

In working on this project L.C. received funding from the DARPA HIST program; K.D., L.C., and D.C. received funding from the National Institutes of Health by grant R01MH099192-05S2; K.D. received funding from the W.M. Keck Foundation; G.D. received funding from the Marcus Foundation, Perkin Elmer, the Stylli Translational Neuroscience Award, and NICHD 1P50HD093074.

References

[1] A. S. Aghaei, M. S. Mahanta, and K. N. Plataniotis.
Separable common spatio-spectral patterns for motor imagery BCI systems. IEEE TBME, 2016.
[2] P. Bashivan, I. Rish, M. Yeasin, and N. Codella. Learning representations from EEG with deep recurrent-convolutional neural networks. arXiv:1511.06448, 2015.
[3] A. M. Bastos and J.-M. Schoffelen. A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Frontiers in Systems Neuroscience, 2015.
[4] E. V. Bonilla, K. M. A. Chai, and C. K. Williams. Multi-task Gaussian process prediction. In NIPS, volume 20, 2007.
[5] W. Bosl, A. Tierney, H. Tager-Flusberg, and C. Nelson. EEG complexity as a biomarker for autism spectrum disorder risk. BMC Medicine, 2011.
[6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE PAMI, 2013.
[7] G. Dawson, J. M. Sun, K. S. Davlantis, M. Murias, L. Franz, J. Troy, R. Simmons, M. Sabatos-DeVito, R. Durham, and J. Kurtzberg. Autologous cord blood infusions are safe and feasible in young children with autism spectrum disorder: Results of a single-center phase I open-label trial. Stem Cells Translational Medicine, 2017.
[8] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neuroscience Methods, 2004.
[9] R.-N. Duan, J.-Y. Zhu, and B.-L. Lu. Differential entropy feature for EEG-based emotion classification. In IEEE/EMBS Conference on Neural Engineering. IEEE, 2013.
[10] N. Gallagher, K. Ulrich, K. Dzirasa, L. Carin, and D. Carlson. Cross-spectral factor analysis. In NIPS, 2017.
[11] R. Hultman, S. D. Mague, Q. Li, B. M. Katz, N. Michel, L. Lin, J. Wang, L. K. David, C. Blount, R. Chandy, et al. Dysregulation of prefrontal cortex-mediated slow-evolving limbic dynamics drives stress-induced emotional pathology. Neuron, 2016.
[12] V. Jirsa and V. Müller. Cross-frequency coupling in real and virtual brain networks. Frontiers in Computational Neuroscience, 2013.
[13] D. Kingma and J. Ba.
Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[14] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras. DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 2012.
[15] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance. EEGNet: A compact convolutional network for EEG-based brain-computer interfaces. arXiv:1611.08024, 2016.
[16] S. C.-X. Li and B. M. Marlin. A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification. In NIPS, 2016.
[17] W. Liu, W.-L. Zheng, and B.-L. Lu. Emotion recognition using multimodal deep learning. In International Conference on Neural Information Processing. Springer, 2016.
[18] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi. A review of classification algorithms for EEG-based brain-computer interfaces. Journal of Neural Engineering, 2007.
[19] K.-R. Müller, M. Tangermann, G. Dornhege, M. Krauledat, G. Curio, and B. Blankertz. Machine learning for real-time single-trial EEG-analysis: from brain-computer interfacing to mental state monitoring. J. Neuroscience Methods, 2008.
[20] M. Murias, S. J. Webb, J. Greenson, and G. Dawson. Resting state cortical connectivity reflected in EEG coherence in individuals with autism. Biological Psychiatry, 2007.
[21] E. Nurse, B. S. Mashford, A. J. Yepes, I. Kiral-Kornek, S. Harrer, and D. R. Freestone. Decoding EEG and LFP signals using deep learning: heading TrueNorth. In ACM International Conference on Computing Frontiers. ACM, 2016.
[22] R. Oostenveld, P. Fries, E. Maris, and J.-M. Schoffelen. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011.
[23] D. O'Reilly, M. A. Navakatikyan, M. Filip, D. Greene, and L. J. Van Marter.
Peak-to-peak amplitude in neonatal brain monitoring of premature infants. Clinical Neurophysiology, 2012.
[24] A. Page, C. Sagedy, E. Smith, N. Attaran, T. Oates, and T. Mohsenin. A flexible multichannel EEG feature extractor and classifier for seizure detection. IEEE Circuits and Systems II: Express Briefs, 2015.
[25] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[26] Y. Qi, Y. Wang, J. Zhang, J. Zhu, and X. Zheng. Robust deep network with maximum correntropy criterion for seizure detection. BioMed Research International, 2014.
[27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
[28] O. Tsinalis, P. M. Matthews, Y. Guo, and S. Zafeiriou. Automatic sleep stage scoring with single-channel EEG using convolutional neural networks. arXiv:1610.01683, 2016.
[29] K. R. Ulrich, D. E. Carlson, K. Dzirasa, and L. Carin. GP kernels for cross-spectrum analysis. In NIPS, 2015.
[30] M. J. van Putten. The revised brain symmetry index. Clinical Neurophysiology, 2007.
[31] H. Yang, S. Sakhavi, K. K. Ang, and C. Guan. On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. In EMBC. IEEE, 2015.
[32] Y. Yang, E. Aminoff, M. Tarr, and K. E. Robert. A state-space model of cross-region dynamic connectivity in MEG/EEG. In NIPS, 2016.
[33] N. Zhang, W.-L. Zheng, W. Liu, and B.-L. Lu. Continuous vigilance estimation using LSTM neural networks. In International Conference on Neural Information Processing. Springer, 2016.
[34] W.-L. Zheng and B.-L. Lu. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 2015.
[35] W.-L. Zheng, J.-Y. Zhu, Y. Peng, and B.-L. Lu. EEG-based emotion classification using deep belief networks.
In IEEE ICME. IEEE, 2014.
[36] Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao. Time series classification using multi-channels deep convolutional neural networks. In International Conference on Web-Age Information Management. Springer, 2014.
Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes

Anton Mallasto, Department of Computer Science, University of Copenhagen, mallasto@di.ku.dk
Aasa Feragen, Department of Computer Science, University of Copenhagen, aasa@di.ku.dk

Abstract

We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain curves. This allows inherent variation or uncertainty in function-valued data to be properly incorporated in the population analysis. Using the 2-Wasserstein metric we geometrize the space of GPs with L2 mean and covariance functions over compact index spaces. We prove uniqueness of the barycenter of a population of GPs, as well as convergence of the metric and the barycenter of their finite-dimensional counterparts. This justifies practical computations. Finally, we demonstrate our framework through experimental validation on GP datasets representing brain connectivity and climate development. A MATLAB library for relevant computations will be published at https://sites.google.com/view/antonmallasto/software.

1 Introduction

Figure 1: An illustration of a GP, with mean function (in black) and confidence bound (in grey). The colorful curves are sample paths of this GP.

Gaussian processes (GPs, see Fig. 1) are the counterparts of Gaussian distributions (GDs) over functions, making GPs natural objects to model uncertainty in estimated functions. With the rise of GP modelling and probabilistic numerics, GPs are increasingly used to model uncertainty in function-valued data such as segmentation boundaries [17,19,30], image registration [38] or time series [28]. Centered GPs, or covariance operators, appear as image features in computer vision [12,16,25,26] and as features of phonetic language structure [23].
A natural next step is therefore to analyze populations of GPs, where performance depends crucially on proper incorporation of inherent uncertainty or variation. This paper contributes a principled framework for population analysis of GPs based on Wasserstein, a.k.a. earth mover's, distances. The importance of incorporating uncertainty into population analysis is emphasized by the example in Fig. 2, where each data point is a GP representing the minimal temperature in the Siberian city Vanavara over the course of one year [9,34]. A naïve way to compute its average temperature curve is to compute the per-day mean and standard deviation of the yearly GP mean curves. This is shown in the bottom right plot, and it is clear that the temperature variation is grossly underestimated, especially in the summer season. The top right figure shows the mean GP obtained with our proposed framework, which preserves a far more accurate representation of the natural temperature variation.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 2: Left: Example GPs describing the daily minimum temperatures in a Siberian city (see Sec. 4). Right top: The mean GP temperature curve, computed as a Wasserstein barycenter. Note that the inherent variability in the daily temperature is realistically preserved, in contrast with the naïve approach. Right bottom: A naïve estimation of the mean and standard deviation of the daily temperature, obtained by taking the day-by-day mean and standard deviation of the temperature. All figures show a 95% confidence interval.

We propose analyzing populations of GPs by geometrizing the space of GPs through the Wasserstein distance, which yields a metric between probability measures with rich geometric properties.
We contribute i) closed-form solutions for arbitrarily good approximation of the Wasserstein distance, by showing that the 2-Wasserstein distance between two finite-dimensional GP representations converges to the 2-Wasserstein distance of the two GPs; and ii) a characterization of a non-degenerate barycenter of a population of GPs, together with a proof that such a barycenter is unique and can be approximated by its finite-dimensional counterpart. We evaluate the Wasserstein distance in two applications. First, we illustrate the use of the Wasserstein distance for processing of uncertain white-matter trajectories in the brain, segmented from noisy diffusion-weighted imaging (DWI) data using tractography. It is well known that the noise level and the low resolution of DWI images result in unreliable trajectories (tracts) [24]. This is problematic as the estimated tracts are used, e.g., for surgical planning [8]. Recent work [17,30] utilizes probabilistic numerics [29] to return uncertain tracts represented as GPs. We utilize the Wasserstein distance to incorporate the estimated uncertainty into typical DWI analysis tools such as tract clustering [37] and visualization. Our second study quantifies recent climate development based on data from Russian meteorological stations, using permutation testing on population barycenters, and provides interpretability of the climate development using GP-valued kernel regression.

Related work. Multiple frameworks exist for comparing Gaussian distributions (GDs) represented by their covariance matrices, including the Frobenius, Fisher-Rao (affine-invariant), log-Euclidean and Wasserstein metrics. Particularly relevant to our work is the 2-Wasserstein metric on GDs, whose Riemannian geometry is studied in [33], and whose barycenters are well understood [1,4]. A body of work exists on generalizing the aforementioned metrics to the infinite-dimensional covariance operators.
As pointed out in [23], extending the affine-invariant and log-Euclidean metrics is problematic, as covariance operators are not compatible with logarithmic maps and their inverses are unbounded. These problems are avoided in [25,26] by regularizing the covariance operators, but unfortunately, this also alters the data in a non-unique way. The Procrustes metric from [23] avoids this, but as it stands, it only defines a metric between covariance operators. The 2-Wasserstein metric, on the other hand, generalizes naturally from GDs to GPs, does not require regularization, and can be arbitrarily well approximated by a closed-form expression, making the computations cheap. Moreover, the theory of optimal transport [5,6,36] shows that the Wasserstein metric yields a rich geometry, which is further demonstrated by the previous work on GDs [33]. After this work was presented at NIPS, a preprint appeared [20] which also studies convergence results and barycenters of GPs in the Wasserstein geometry, in a more general setting.

Structure. Prior to introducing the Wasserstein distance between GPs, we review GPs, their Hilbert space covariance operators and the corresponding Gaussian measures in Sec. 2. In Sec. 3 we introduce the Wasserstein metric and its barycenters for GPs and prove convergence properties of the metric and barycenters when GPs are approximated by finite-dimensional GDs. Experimental validation is found in Sec. 4, followed by discussion and conclusion in Sec. 5.

2 Prerequisites

Gaussian processes and measures. A Gaussian process (GP) f is a collection of random variables, such that any finite restriction of its values (f(x_i))_{i=1}^N has a joint Gaussian distribution, where x_i ∈ X, and X is the index set. A GP is entirely characterized by the pair

m(x) = E[f(x)], k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))], (1)

where m and k are called the mean function and covariance function, respectively.
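The defining property above, that every finite restriction is jointly Gaussian with mean and covariance given by (1), is also how GP sample paths like those in Fig. 1 are drawn in practice: evaluate m and k on a grid and sample from the resulting multivariate normal. A minimal NumPy sketch; the squared-exponential kernel and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def sample_gp(m, k, xs, n_samples=3, jitter=1e-8, seed=0):
    """Sample paths of f ~ GP(m, k) restricted to the grid xs: the vector
    (f(x_1), ..., f(x_N)) is drawn from N(m(xs), K) with K[i, j] = k(x_i, x_j)."""
    rng = np.random.default_rng(seed)
    mean = np.array([m(x) for x in xs])
    K = np.array([[k(x, y) for y in xs] for x in xs])
    L = np.linalg.cholesky(K + jitter * np.eye(len(xs)))
    return mean + (L @ rng.standard_normal((len(xs), n_samples))).T

xs = np.linspace(0.0, 1.0, 50)
m = lambda x: 0.0                                         # zero mean function
k = lambda x, y: np.exp(-(x - y) ** 2 / (2 * 0.1 ** 2))   # squared-exponential
paths = sample_gp(m, k, xs)   # shape (3, 50): three sample paths
```

The jitter term is the usual numerical safeguard: the Gram matrix of a smooth kernel is positive definite in theory but often numerically singular in floating point.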
We use the notation f ∼ GP(m, k) for a GP f with mean function m and covariance function k. It follows from the definition that the covariance function k is symmetric and positive semidefinite. We say that f is non-degenerate if k is strictly positive definite. We will assume that the GPs used are non-degenerate.

GPs relate closely to Gaussian measures on Hilbert spaces. Given probability spaces (X, Σ_X, µ) and (Y, Σ_Y, ν), we say that the measure ν is a push-forward of µ if ν(A) = µ(T⁻¹(A)) for a measurable T : X → Y and any A ∈ Σ_Y. Denote this by T#µ = ν. A Borel measure µ on a separable Hilbert space H is a Gaussian measure if its push-forward with respect to any non-zero continuous element of the dual space of H is a non-degenerate Gaussian measure on R (i.e., the push-forward gives a univariate Gaussian distribution). A Borel-measurable set B is a Gaussian null set if µ(B) = 0 for any Gaussian measure µ on X. A measure ν on H is regular if ν(B) = 0 for any Gaussian null set B. Note that regular Gaussian measures correspond to non-degenerate GPs.

Covariance operators. Denote by L²(X) the space of L²-integrable functions from X to R. The covariance function k has an associated integral operator K : L²(X) → L²(X) defined by

[Kφ](x) = ∫_X k(x, s)φ(s) ds, ∀φ ∈ L²(X), (2)

called the covariance operator associated with k. As a by-product of the 2-Wasserstein metric on centered GPs, we get a metric on covariance operators. The operator K is Hilbert-Schmidt, self-adjoint, compact, positive, and of trace class, and the space of such covariance operators is convex. Furthermore, the assignment k ↦ K from L²(X × X) is an isometric isomorphism onto the space of Hilbert-Schmidt operators on L²(X) [7, Prop. 2.8.6]. This justifies writing both f ∼ GP(m, K) and f ∼ GP(m, k).

Trace of an operator. The Wasserstein distance between GPs admits an analytical formula using traces of their covariance operators, as we will see below.
Let (H, ⟨·, ·⟩) be a separable Hilbert space with orthonormal basis {e_k}_{k=1}^∞. Then the trace of a bounded linear operator T on H is given by

Tr T := Σ_{k=1}^∞ ⟨Te_k, e_k⟩, (3)

which is absolutely convergent and independent of the choice of basis if Tr(T*T)^{1/2} < ∞, where T* denotes the adjoint operator of T and T^{1/2} is the square root of T. In this case T is called a trace class operator. For positive self-adjoint operators, the trace is the sum of the eigenvalues.

The Wasserstein metric. The Wasserstein metric on probability measures derives from the optimal transport problem introduced by Monge and made rigorous by Kantorovich. The p-Wasserstein distance describes the minimal cost of transporting the unit mass of one probability measure into the unit mass of another probability measure, when the cost is given by an L^p distance [5,6,36]. Let (M, d) be a Polish space (a complete and separable metric space) and denote by P_p(M) the set of all probability measures µ on M satisfying ∫_M d^p(x, x₀) dµ(x) < ∞ for some x₀ ∈ M. The p-Wasserstein distance between two probability measures µ, ν ∈ P_p(M) is given by

W_p(µ, ν) = ( inf_{γ∈Γ[µ,ν]} ∫_{M×M} d^p(x₁, x₂) dγ(x₁, x₂) )^{1/p}, (4)

where Γ[µ, ν] is the set of joint measures on M × M with marginals µ and ν. Defined as above, W_p satisfies the properties of a metric. Furthermore, a minimizer in (4) is always achieved.

3 The Wasserstein metric for GPs

We will now study the Wasserstein metric with p = 2 between GPs. For GDs, this has been studied in [11,14,18,22,33]. From now on, assume that all GPs f ∼ GP(m, k) are indexed over a compact X ⊂ Rⁿ, so that H := L²(X) is separable. Furthermore, we assume m ∈ L²(X) and k ∈ L²(X × X), so that observations of f live almost surely in H. Let f₁ ∼ GP(m₁, k₁) and f₂ ∼ GP(m₂, k₂) be GPs with associated covariance operators K₁ and K₂, respectively.
As the sample paths of f₁ and f₂ are in H, they induce Gaussian measures µ₁, µ₂ ∈ P₂(H) on H, as there is a 1-1 correspondence between GPs having sample paths almost surely in an L²(X) space and Gaussian measures on L²(X) [27]. The 2-Wasserstein metric between the Gaussian measures µ₁, µ₂ is given by [13]

W₂²(µ₁, µ₂) = d₂²(m₁, m₂) + Tr(K₁ + K₂ − 2(K₁^{1/2} K₂ K₁^{1/2})^{1/2}), (5)

where d₂ is the canonical metric on L²(X). Using this, we get the following definition.

Definition 1. Let f₁, f₂ be GPs as above, and let the induced Gaussian measures of f₁ and f₂ be µ₁ and µ₂, respectively. Then their squared 2-Wasserstein distance is given by

W₂²(f₁, f₂) := W₂²(µ₁, µ₂) = d₂²(m₁, m₂) + Tr(K₁ + K₂ − 2(K₁^{1/2} K₂ K₁^{1/2})^{1/2}).

Remark 2. Note that the case m₁ = m₂ = 0 defines a metric for the covariance operators K₁, K₂, as (5) shows that the space of GPs is isometric to the Cartesian product of L²(X) and the covariance operators. We will denote this metric by W₂²(K₁, K₂). Furthermore, as GDs are just a subset of GPs, W₂² also yields the 2-Wasserstein metric between GDs studied in [11,14,18,22,33].

Barycenters of Gaussian processes. Next, we define and study barycenters of populations of GPs, in a similar fashion as the GD case in [1]. Given a population {µ_i}_{i=1}^N ⊂ P₂(H), weights {ξ_i ≥ 0}_{i=1}^N with Σ_{i=1}^N ξ_i = 1, and H a separable Hilbert space, the solution µ̄ of the problem

(P) inf_{µ∈P₂(H)} Σ_{i=1}^N ξ_i W₂²(µ_i, µ)

is the barycenter of the population {µ_i}_{i=1}^N with barycentric coordinates {ξ_i}_{i=1}^N. The barycenter for GPs is defined to be the barycenter of the associated Gaussian measures.

Remark 3. The following theorems require the assumption that the barycenter is non-degenerate; it is still a conjecture that the barycenter of non-degenerate GPs is non-degenerate [20], but this holds in the finite-dimensional case of GDs.

We now state the main theorem of this section, which follows from Prop. 5 and Prop. 6 below.

Theorem 4.
Let {f_i}_{i=1}^N be a population of GPs with f_i ∼ GP(m_i, K_i). Then there exists a unique barycenter f̄ ∼ GP(m̄, K̄) with barycentric coordinates (ξ_i)_{i=1}^N. If f̄ is non-degenerate, then m̄ and K̄ satisfy

m̄ = Σ_{i=1}^N ξ_i m_i, Σ_{i=1}^N ξ_i (K̄^{1/2} K_i K̄^{1/2})^{1/2} = K̄.

Proposition 5. Let {µ_i}_{i=1}^N ⊂ P₂(H) and let µ̄ be a barycenter with barycentric coordinates (ξ_i)_{i=1}^N. Assume µ_i is regular for some i; then µ̄ is the unique minimizer of (P).

Proof. We first show that the map ν ↦ W₂²(µ, ν) is convex, and strictly convex if µ is a regular measure. To see this, let ν_i ∈ P₂(H) and let γ_i* ∈ Γ[µ, ν_i] be the optimal transport plans between µ and ν_i for i = 1, 2. Then λγ₁* + (1 − λ)γ₂* ∈ Γ[µ, λν₁ + (1 − λ)ν₂] for λ ∈ [0, 1]. Therefore

W₂²(µ, λν₁ + (1 − λ)ν₂) = inf_{γ∈Γ[µ,λν₁+(1−λ)ν₂]} ∫_{H×H} d²(x, y) dγ
≤ ∫_{H×H} d²(x, y) d(λγ₁* + (1 − λ)γ₂*)
= λW₂²(µ, ν₁) + (1 − λ)W₂²(µ, ν₂),

which gives convexity. Note that for λ ∈ ]0, 1[, the transport plan λγ₁* + (1 − λ)γ₂* splits mass. Therefore it cannot be the unique optimal plan between µ and λν₁ + (1 − λ)ν₂. As µ is regular, the optimal plan does not split mass, as it is induced by a map [3, Thm. 6.2.10], so we have strict convexity. From this follows the strict convexity of the objective function in (P).

Next we characterize the barycenter, assuming it is non-degenerate, in the spirit of the finite-dimensional case in [1, Thm. 6.1].

Proposition 6. Let {f_i}_{i=1}^N be a population of centered GPs, f_i ∼ GP(0, K_i). Then (P) has a unique solution f̄ ∼ GP(0, K̄). If f̄ is non-degenerate, then K̄ is the unique bounded self-adjoint positive linear operator satisfying

Σ_{i=1}^N ξ_i (K^{1/2} K_i K^{1/2})^{1/2} = K. (6)

Proof. Existence can be shown following the proof for the finite-dimensional case [1, Prop. 4.2], which uses multimarginal optimal transport; this appears in the preprint [20, Cor. 9].
For the characterization, assume $\bar f$ to be non-degenerate, and let
$$BC(f) = \sum_{i=1}^N \xi_i W_2^2(f_i, f)$$
be the barycentric expression, and assume that the minimizer $\bar f$ of $BC$ is non-degenerate. Let $0 < \lambda_1, \lambda_2, \dots$ be the eigenvalues of $\bar K$ with eigenfunctions $e_1, e_2, \dots$. Then, by [10, Prop. 2.2], the transport map between $\bar f$ and $f_k$ is given by
$$T_k(x) = \sum_{i=1}^\infty \sum_{j=1}^\infty \frac{\langle x, e_j\rangle\,\big\langle (\bar K^{1/2} K_k \bar K^{1/2})^{1/2} e_j, e_i\big\rangle}{\lambda_i^{1/2}\lambda_j^{1/2}}\, e_i. \qquad (7)$$
Using [6, Thm. 8.4.7], we can write the gradient of the barycentric expression. We furthermore know that the expression is strictly convex; thus the gradient at $\bar f$ equals zero if and only if $\bar f$ is the minimizer. Now let $\mathrm{Id}$ be the identity operator; then
$$\nabla BC(\bar f) = \sum_{k=1}^N \xi_k (T_k - \mathrm{Id}) = 0,$$
and substituting in (7), we get
$$\sum_{i=1}^N \xi_i \big(\bar K^{1/2} K_i \bar K^{1/2}\big)^{1/2} = \bar K.$$

Proof of Theorem 4. Use Prop. 6, the properties of a barycenter in a Hilbert space, and the fact that the space of GPs is isometric to the Cartesian product of $L^2(X)$ and the space of covariance operators.

Remark 7. For the practical computation of barycenters of GDs approximating GPs, to be discussed below, a fixed-point iteration scheme with a guarantee of convergence exists [4, Thm. 4.2].

Convergence properties. Now we show that the 2-Wasserstein metric for GPs can be approximated arbitrarily well by the 2-Wasserstein metric for GDs. This is important, as in real life we observe finite-dimensional representations of the covariance operators. Let $\{e_i\}_{i=1}^\infty$ be an orthonormal basis for $L^2(X)$. Then we define the GDs given by the restrictions $m_{in}$ and $K_{in}$ of $m_i$ and $K_i$, $i = 1, 2$, on $V_n = \mathrm{span}(e_1, \dots, e_n)$ by
$$m_{in}(x) = \sum_{k=1}^n \langle m_i, e_k\rangle e_k(x), \qquad K_{in}\varphi = \sum_{k=1}^n \langle \varphi, e_k\rangle K_i e_k, \quad \forall \varphi \in V_n,\ \forall x \in X, \qquad (8)$$
and prove the following:

Theorem 8. The 2-Wasserstein metric between GDs on finite samples converges to the Wasserstein metric between GPs; that is, if $f_{in} \sim \mathcal{N}(m_{in}, K_{in})$ and $f_i \sim \mathcal{GP}(m_i, K_i)$ for $i = 1, 2$, then
$$\lim_{n\to\infty} W_2^2(f_{1n}, f_{2n}) = W_2^2(f_1, f_2).$$
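Formula (5) can be evaluated directly for finite-dimensional representations, which is what Theorem 8 licenses. Below is a minimal Python sketch (NumPy/SciPy; the paper's own implementation was in MATLAB), together with a toy check that coordinate truncations of two grid covariances approach the full value. The grid kernels and the small diagonal jitter are illustrative choices, not the paper's data:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_squared(m1, K1, m2, K2):
    """Formula (5): d_2^2(m1, m2) + Tr(K1 + K2 - 2 (K1^(1/2) K2 K1^(1/2))^(1/2))."""
    s1 = np.real(sqrtm(K1))
    cross = np.real(sqrtm(s1 @ K2 @ s1))
    return float(np.sum((m1 - m2) ** 2) + np.trace(K1 + K2 - 2 * cross))

# Toy check in the spirit of Theorem 8: truncate to the first n coordinates
# (a grid stand-in for restriction to span(e_1, ..., e_n)) and compare.
x = np.linspace(0.0, 1.0, 40)
K1 = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) + 1e-6 * np.eye(40)
K2 = 2.0 * np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.08) + 1e-6 * np.eye(40)
m = np.zeros(40)
full = w2_squared(m, K1, m, K2)
approx = {n: w2_squared(m[:n], K1[:n, :n], m[:n], K2[:n, :n]) for n in (10, 20, 40)}
```

For commuting covariances the trace term reduces to the squared difference of the operator square roots, which gives a quick sanity check of the implementation.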
By the same argument, it also follows that $W_2^2(\cdot, \cdot)$ is continuous in both arguments in the operator norm topology.

Proof. $K_{in} \to K_i$ in operator norm as $n \to \infty$. Because taking sums, products and square roots of operators are all continuous with respect to the operator norm, it follows that
$$K_{1n} + K_{2n} - 2\big(K_{1n}^{1/2} K_{2n} K_{1n}^{1/2}\big)^{1/2} \to K_1 + K_2 - 2\big(K_1^{1/2} K_2 K_1^{1/2}\big)^{1/2}.$$
Note that for any sequence $A_n \to A$ with convergence in operator norm, we have
$$|\operatorname{Tr} A - \operatorname{Tr} A_n| \le \sum_{k=1}^\infty |\langle (A - A_n)e_k, e_k\rangle| \overset{\text{Cauchy-Schwarz}}{\le} \sum_{k=1}^\infty \|(A - A_n)e_k\|_{L^2} \overset{\text{MCT}}{\to} 0, \qquad (9)$$
as $\lim_{n\to\infty} \sup_{\|v\|_{L^2} \le 1} \|(A - A_n)v\|_{L^2} = 0$ due to the convergence in operator norm. Here MCT stands for the monotone convergence theorem. Thus we have
$$W_2^2(f_{1n}, f_{2n}) = d_2^2(m_{1n}, m_{2n}) + \operatorname{Tr}\big(K_{1n} + K_{2n} - 2(K_{1n}^{1/2} K_{2n} K_{1n}^{1/2})^{1/2}\big) \xrightarrow{n\to\infty} d_2^2(m_1, m_2) + \operatorname{Tr}\big(K_1 + K_2 - 2(K_1^{1/2} K_2 K_1^{1/2})^{1/2}\big) = W_2^2(f_1, f_2).$$

The importance of Theorem 8 is that it justifies computing distances using finite representations of GPs as approximations for the infinite-dimensional case. Next, assuming the barycenter is non-degenerate, we show that we can also approximate the barycenter of a population of GPs by computing the barycenters of populations of GDs converging to these GPs. For the degenerate case, see [20, Thm. 11].

Theorem 9. Assuming the barycenter of a population of GPs is non-degenerate, it varies continuously; that is, the map $(f_1, \dots, f_N) \mapsto \bar f$ is continuous in the operator norm. In particular, this implies that the barycenter $\bar f_n$ of the finite-dimensional restrictions $\{f_{in}\}_{i=1}^N$ converges to $\bar f$.

First, we show that if $f_i \sim \mathcal{GP}(m_i, K_i)$ and $\bar f \sim \mathcal{GP}(\bar m, \bar K)$, then the map $(K_1, \dots, K_N) \mapsto \bar K$ is continuous. Continuity of $(m_1, \dots, m_N) \mapsto \bar m$ is clear. Let $K$ be a covariance operator and denote its maximal eigenvalue by $\lambda_{\max}(K)$. Note that this map is well-defined, as $K$ is also a bounded, normal operator, and thus $\lambda_{\max}(K) = \|K\|_{op} < \infty$ holds.
Now let $a = (K_1, \dots, K_N)$ be a population of covariance operators, denote its $i$th element by $a(i) = K_i$, and define the continuous function $\beta$ and the correspondence (a set-valued map) $\Phi$ as follows:
$$\beta : a \mapsto \Big(\sum_{i=1}^N \xi_i \sqrt{\lambda_{\max}(a(i))}\Big)^2, \qquad \Phi : a \mapsto \mathcal{K}_{\beta(a)} = \{K \in HS(H) \mid \beta(a)I \ge K \ge 0\}.$$
Then the fixed point of (6) can be found in $\Phi(a)$: the map
$$F(K) = \sum_{i=1}^N \xi_i \big(K^{1/2} K_i K^{1/2}\big)^{1/2}$$
is a compact operator, $\Phi(a)$ is bounded, and so the closure of $F(\Phi(a))$ is compact. Furthermore, note that $F$ maps $\Phi(a)$ to itself, so by Schauder's fixed-point theorem, there exists a fixed point. Now we want to show that this correspondence is continuous in order to put the Maximum theorem to use. A correspondence $\Phi : A \to B$ is upper hemi-continuous at $a \in A$ if all convergent sequences $a_n \to a$ in $A$ and $b_n \in \Phi(a_n)$ with $b_n \to b$ satisfy $b \in \Phi(a)$. The correspondence is lower hemi-continuous at $a \in A$ if for all convergent sequences $a_n \to a$ in $A$ and any $b \in \Phi(a)$, there is a subsequence $a_{n_k}$ and a sequence $b_k \in \Phi(a_{n_k})$ which satisfies $b_k \to b$. If the correspondence is both upper and lower hemi-continuous, we say that it is continuous. For more about the Maximum theorem and hemi-continuity, see [2].

Lemma 10. The correspondence $\Phi : a \mapsto \mathcal{K}_{\beta(a)}$ is continuous as a correspondence.

Proof. First, we show the correspondence is lower hemi-continuous. Let $(a_n)_{n=1}^\infty$ be a sequence of populations of covariance operators of size $N$ that converges $a_n \to a$. Use the shorthand notation $\beta_n := \beta(a_n)$, so that $\beta_n \to \beta_\infty := \beta(a)$, and let $b \in \Phi(a) = \mathcal{K}_{\beta_\infty}$. Pick a subsequence $(a_{n_k})_{k=1}^\infty$ so that $(\beta_{n_k})_{k=1}^\infty$ is increasing or decreasing. If it is decreasing, then $\mathcal{K}_{\beta_\infty} \subseteq \mathcal{K}_{\beta_{n_k}}$ for every $n_k$, and the proof is finished by choosing $b_k = b$ for every $k$. Hence assume the sequence is increasing, so that $\mathcal{K}_{\beta_{n_k}} \subseteq \mathcal{K}_{\beta_{n_{k+1}}}$. Now let $\gamma(t) = (1-t)b_1 + tb$, where $b_1 \in \mathcal{K}_{\beta_1}$, and let $t_{n_k}$ be the solution to $(1-t)\beta_1 + t\beta_\infty = \beta_{n_k}$; then $b_k := \gamma(t_{n_k}) \in \mathcal{K}_{\beta_{n_k}}$ and $b_k \to b$. For upper hemi-continuity, assume that $a_n \to a$, $b_n \in \mathcal{K}_{\beta_n}$ and that $b_n \to b$.
Then, using the definition of $\Phi$, we get the positive sequence $\langle(\beta_n I - b_n)x, x\rangle \ge 0$ indexed by $n$; by continuity and the positivity of this sequence it follows that
$$0 \le \lim_{n\to\infty}\langle(\beta_n I - b_n)x, x\rangle = \langle(\beta_\infty I - b)x, x\rangle.$$
One can check the criterion $b \ge 0$ similarly, and so we are done.

Proof of Theorem 9. Now let $a = (K_1, \dots, K_N)$, $f(K, a) := \sum_{i=1}^N \xi_i W_2^2(K, K_i)$ and $F(K) := \sum_{i=1}^N \xi_i (K^{1/2} K_i K^{1/2})^{1/2}$; then the unique minimizer $\bar K$ of $f$ is the fixed point of $F$. Furthermore, the closure $\mathrm{cl}(F(\mathcal{K}_{\beta(a)}))$ is compact, and $a \mapsto \mathrm{cl}(F(\mathcal{K}_{\beta(a)}))$ is a continuous correspondence as the closure of the composition of two continuous correspondences. Additionally, we know that $\bar K \in \mathrm{cl}(F(\mathcal{K}_{\beta(a)}))$, so applying the Maximum theorem, we have shown that the barycenter of a population of covariance operators varies continuously, i.e. the map $(K_1, \dots, K_N) \mapsto \bar K$ is continuous, finishing the proof.

4 Experiments

We illustrate the utility of the Wasserstein metric in two different applications: processing of uncertain white-matter tracts estimated from DWI, and analysis of climate development via temperature curve GPs.

Experimental setup. The white-matter tract GPs are estimated for a single subject from the Human Connectome Project [15, 32, 35], using probabilistic shortest-path tractography [17]. See the supplementary material for details on the data and its preprocessing. From daily minimum temperatures measured at a set of 30 randomly sampled Russian meteorological stations [9, 34], GP regression was used to estimate a GP temperature curve per year and station for the period 1940–2009 using maximum-likelihood parameters. All code for computing Wasserstein distances and barycenters was implemented in MATLAB and ran on a laptop with a 2.7 GHz Intel Core i5 processor and 8 GB of 1867 MHz DDR3 memory.
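The per-year temperature curves above are estimated with standard GP regression. A self-contained sketch with a squared-exponential kernel follows; the kernel choice and the hyperparameters (`length`, `amp`, `noise`) are illustrative assumptions, not the maximum-likelihood values used in the paper:

```python
import numpy as np

def gp_regression(x_train, y_train, x_test, length=30.0, amp=25.0, noise=2.0):
    """Posterior mean and covariance of a GP with squared-exponential kernel,
    conditioned on noisy observations (standard GP regression formulas)."""
    def k(a, b):
        return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    Kxx = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    Ksx = k(x_test, x_train)
    mean = Ksx @ np.linalg.solve(Kxx, y_train)
    cov = k(x_test, x_test) - Ksx @ np.linalg.solve(Kxx, Ksx.T)
    return mean, cov
```

The resulting (mean, covariance) pair per year and station is exactly the kind of finite-dimensional GP representation on which the Wasserstein computations of the previous section operate.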
On the temperature GP curves (represented by 50 samples), the average runtime of the 2-Wasserstein distance computation was 0.048 ± 0.014 seconds (estimated from 1000 pairwise distance computations), and the average runtime of the 2-Wasserstein barycenter of a sample of size 10 was 0.69 ± 0.11 seconds (estimated from 200 samples).

White-matter tract processing. The inferior longitudinal fasciculus is a white-matter bundle which splits into two separate bundles. Fig. 3 (top) shows the results of agglomerative hierarchical clustering of the GP tracts using average Wasserstein distance. The per-cluster Wasserstein barycenter can be used to represent the tracts; its overlap with the individual GP mean curves is shown in Fig. 3 (bottom). The individual GP tracts are visualized via their mean curves, but they are in fact a population of GPs. To confirm that the two clusters are indeed different also when the covariance function is taken into account, we perform a permutation test for the difference between per-cluster Wasserstein barycenters; already with 50 permutations we observe a p-value of p = 0.0196, confirming that the two clusters are significantly different at a 5% significance level.

Figure 3: Top: The mean functions of the individual GPs, colored by cluster membership, in the context of the corresponding T1-weighted MRI slices. Bottom: The tract GP mean functions and the cluster mean GPs with 95% confidence bounds.

Quantifying climate change. Using the Wasserstein barycenters we perform nonparametric kernel regression to visualize how yearly temperature curves evolve with time, based on the Russian yearly temperature GPs. Fig. 4 shows snapshots from this evolution, and a continuous movie version climate.avi is found in the supplementary material. The regressed evolution indicates an increase in overall temperature as we reach the final year 2009.
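Both analyses rely on Wasserstein barycenters of groups of GPs. For finite-dimensional representations these can be computed with the fixed-point scheme of [4, Thm. 4.2] mentioned in Remark 7; a sketch for centered covariances, where the Euclidean-mean initialization is an illustrative choice:

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein_barycenter_cov(Ks, xis, iters=100, tol=1e-10):
    """Fixed-point iteration K <- K^(-1/2) (sum_i xi_i (K^(1/2) K_i K^(1/2))^(1/2))^2 K^(-1/2),
    whose fixed points satisfy Eq. (6); cf. [4, Thm. 4.2]."""
    K = np.mean(Ks, axis=0)  # arbitrary positive-definite initialization
    for _ in range(iters):
        s = np.real(sqrtm(K))
        s_inv = np.linalg.inv(s)
        T = sum(xi * np.real(sqrtm(s @ Ki @ s)) for xi, Ki in zip(xis, Ks))
        K_new = s_inv @ T @ T @ s_inv
        if np.linalg.norm(K_new - K) < tol:
            return K_new
        K = K_new
    return K
```

For commuting (e.g. diagonal) covariances the barycenter's square root is the weighted mean of the individual square roots, which gives a convenient correctness check.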
To quantify this observation, we perform a permutation test using the Wasserstein distance between population Wasserstein barycenters to compare the final 10 years, 2000–2009, with the years 1940–1999. Using 50 permutations we obtain a p-value of 0.0392, giving a significant difference in temperature curves at a 95% confidence level.

Significance. Note that the state of the art in tract analysis, as well as in functional data analysis, would be to ignore the covariance of the estimated curves and treat the mean curves as observations. We contribute a framework to incorporate the uncertainty into the population analysis – but why would we want to retain uncertainty? In the white-matter tracts, the GP covariance represents spatial uncertainty in the estimated curve trajectory. The individual GPs represent connections between different endpoints. Thus, they do not represent observations of the exact same trajectory, but rather of distinct, nearby trajectories. It is common in diffusion MRI to represent such sets of estimated trajectories by a few prototype trajectories for visualization and comparative analysis; we obtain prototypes through the Wasserstein barycenter. To correctly interpret the spatial uncertainty, e.g. for a brain surgeon [8], it is crucial that the covariance of the prototype GP represents the covariances of the individual GPs, and is not smaller. If you wanted to reduce uncertainty by increasing sample size, you would need more images, not more curves – because the noise is in the image. But more images are not usually available. In the climate data, the GP covariance models natural temperature variation, not measurement noise. Increasing the sample size decreases the error of the temperature distribution, but should not decrease this natural variation (i.e. the covariance).
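The permutation tests used above can be sketched generically. Note that with 50 permutations the smallest attainable p-value under the convention below is 1/51 ≈ 0.0196, which is exactly the value reported for the tract clusters. The `statistic` argument stands in for the Wasserstein distance between the two groups' barycenters:

```python
import numpy as np

def permutation_test(group_a, group_b, statistic, n_perm=50, seed=0):
    """Permutation p-value for statistic(group_a, group_b); larger statistic
    values indicate a larger difference between the groups."""
    rng = np.random.default_rng(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = statistic(group_a, group_b)
    hits = 1  # count the observed labelling itself, as is conventional
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        a = [pooled[i] for i in idx[:n_a]]
        b = [pooled[i] for i in idx[n_a:]]
        if statistic(a, b) >= observed:
            hits += 1
    return hits / (n_perm + 1)
```

Any group-level statistic can be plugged in; the paper's choice is the 2-Wasserstein distance between per-group barycenters.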
5 Discussion and future work

We have shown that the Wasserstein metric for GPs is both theoretically and computationally well-founded for statistics on GPs: it defines unique barycenters, and allows efficient computations through finite-dimensional representations. We have illustrated its use in two different applications: processing of uncertain estimates of white-matter trajectories in the brain, and analysis of climate development via GP representations of temperature curves. We have seen that the metric itself is discriminative for clustering and permutation testing, and we have seen how the GP barycenters allow truthful interpretation of uncertainty in the white-matter tracts and of variation in the temperature curves.

Figure 4: Snapshots from the kernel regression giving yearly temperature curves 1940–2009. We observe an apparent temperature increase which is confirmed by the permutation test.

Future work includes more complex learning algorithms, starting with preprocessing tools such as PCA [31], and moving on to supervised predictive models. This includes a better understanding of the potentially Riemannian structure of the infinite-dimensional Wasserstein space, which would enable us to draw on existing results for learning with manifold-valued data [21]. The Wasserstein distance allows the inherent uncertainty in the estimated GP data points to be appropriately accounted for in every step of the analysis, giving truthful analysis and subsequent interpretation. This is particularly important in applications where uncertainty or variation is crucial: variation in temperature is an important feature in climate change, and while estimated white-matter trajectories are known to be unreliable, they are used in surgical planning, making uncertainty about their trajectories a highly relevant parameter.

6 Acknowledgements

This research was supported by Centre for Stochastic Geometry and Advanced Bioimaging, funded by a grant from the Villum Foundation.
Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. The authors would also like to thank Mads Nielsen for valuable discussions and supervision. Finally, the authors would like to thank Victor Panaretos for valuable discussions and, in particular, for pointing out an error in an earlier version of the manuscript. References [1] M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011. [2] C. Aliprantis and K. Border. Infinite dimensional analysis: a hitchhiker’s guide. Studies in Economic Theory, 4, 1999. [3] P. Álvarez-Esteban, E. Del Barrio, J. Cuesta-Albertos, C. Matrán, et al. Uniqueness and approximate computation of optimal incomplete transportation plans. In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, volume 47, pages 358–375. Institut Henri Poincaré, 2011. [4] P. C. Álvarez-Esteban, E. del Barrio, J. Cuesta-Albertos, and C. Matrán. A fixed-point approach to barycenters in Wasserstein space. Journal of Mathematical Analysis and Applications, 441(2):744–762, 2016. [5] L. Ambrosio and N. Gigli. A user’s guide to optimal transport. In Modelling and optimisation of flows on networks, pages 1–155. Springer, 2013. [6] L. Ambrosio, N. Gigli, and G. Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008. [7] W. Arveson. A short course on spectral theory, volume 209. Springer Science & Business Media, 2006. [8] J. Berman. Diffusion MR tractography as a tool for surgical planning. Magnetic resonance imaging clinics of North America, 17(2):205–214, 2009. [9] O. Bulygina and V. Razuvaev. 
Daily temperature and precipitation data for 518 Russian meteorological stations. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, US Department of Energy, Oak Ridge, Tennessee, 2012. [10] J. Cuesta-Albertos, C. Matrán-Bea, and A. Tuero-Diaz. On lower bounds for the L2-Wasserstein metric in a Hilbert space. Journal of Theoretical Probability, 9(2):263–283, 1996. [11] D. Dowson and B. Landau. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 12(3):450–455, 1982. [12] M. Faraki, M. T. Harandi, and F. Porikli. Approximate infinite-dimensional region covariance descriptors for image classification. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 1364–1368. IEEE, 2015. [13] M. Gelbrich. On a formula for the L2 Wasserstein metric between measures on Euclidean and Hilbert spaces. Mathematische Nachrichten, 147(1):185–203, 1990. [14] C. R. Givens, R. M. Shortt, et al. A class of Wasserstein metrics for probability distributions. The Michigan Mathematical Journal, 31(2):231–240, 1984. [15] M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster, J. R. Polimeni, et al. The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage, 80:105–124, 2013. [16] M. Harandi, M. Salzmann, and F. Porikli. Bregman divergences for infinite dimensional covariance matrices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1003–1010, 2014. [17] S. Hauberg, M. Schober, M. Liptrot, P. Hennig, and A. Feragen. A random Riemannian metric for probabilistic shortest-path tractography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 597–604. Springer, 2015. [18] M. Knott and C. S. Smith. On the optimal mapping of distributions. Journal of Optimization Theory and Applications, 43(1):39–49, 1984. [19] M.
Lê, J. Unkelbach, N. Ayache, and H. Delingette. GPSSI: Gaussian process for sampling segmentations of images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 38–46. Springer, 2015. [20] V. Masarotto, V. M. Panaretos, and Y. Zemel. Procrustes metrics on covariance operators and optimal transportation of Gaussian processes. arXiv preprint arXiv:1801.01990, 2018. [21] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37–45, 2015. [22] I. Olkin and F. Pukelsheim. The distance between two random vectors with given dispersion matrices. Linear Algebra and its Applications, 48:257–263, 1982. [23] D. Pigoli, J. A. Aston, I. L. Dryden, and P. Secchi. Distances and inference for covariance operators. Biometrika, 101(2):409–422, 2014. [24] S. Pujol, W. Wells, C. Pierpaoli, C. Brun, J. Gee, G. Cheng, B. Vemuri, O. Commowick, S. Prima, A. Stamm, et al. The DTI challenge: toward standardized evaluation of diffusion tensor imaging tractography for neurosurgery. Journal of Neuroimaging, 25(6):875–882, 2015. [25] M. H. Quang and V. Murino. From covariance matrices to covariance operators: Data representation from finite to infinite-dimensional settings. In Algorithmic Advances in Riemannian Geometry and Applications, pages 115–143. Springer, 2016. [26] M. H. Quang, M. San Biagio, and V. Murino. Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces. In Advances in Neural Information Processing Systems, pages 388–396, 2014. [27] B. S. Rajput. Gaussian measures on Lp spaces, 1 ≤ p < ∞. Journal of Multivariate Analysis, 2(4):382–403, 1972. [28] S. Roberts, M. Osborne, M. Ebden, S. Reece, N. Gibson, and S. Aigrain. Gaussian processes for time-series modelling. Phil. Trans. R. Soc. A, 371(1984):20110550, 2013. [29] M. Schober, D. K. Duvenaud, and P.
Hennig. Probabilistic ODE solvers with Runge-Kutta means. In Advances in Neural Information Processing Systems, pages 739–747, 2014. [30] M. Schober, N. Kasenburg, A. Feragen, P. Hennig, and S. Hauberg. Probabilistic shortest path tractography in DTI using Gaussian Process ODE solvers. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 265–272. Springer, 2014. [31] V. Seguy and M. Cuturi. Principal geodesic analysis for probability measures under the optimal transport metric. In Advances in Neural Information Processing Systems, pages 3312–3320, 2015. [32] S. Sotiropoulos, S. Moeller, S. Jbabdi, J. Xu, J. Andersson, E. Auerbach, E. Yacoub, D. Feinberg, K. Setsompop, L. Wald, et al. Effects of image reconstruction on fiber orientation mapping from multichannel diffusion MRI: reducing the noise floor using SENSE. Magnetic Resonance in Medicine, 70(6):1682–1689, 2013. [33] A. Takatsu et al. Wasserstein geometry of Gaussian measures. Osaka Journal of Mathematics, 48(4):1005–1026, 2011. [34] R. Tatusko and J. A. Mirabito. Cooperation in climate research: An evaluation of the activities conducted under the US-USSR agreement for environmental protection since 1974. National Climate Program Office, 1990. [35] D. C. Van Essen, S. M. Smith, D. M. Barch, T. E. Behrens, E. Yacoub, K. Ugurbil, W.-M. H. Consortium, et al. The WU-Minn Human Connectome Project: an overview. Neuroimage, 80:62–79, 2013. [36] C. Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003. [37] D. Wassermann, L. Bloy, E. Kanterakis, R. Verma, and R. Deriche. Unsupervised white matter fiber clustering and tract probability map generation: Applications of a Gaussian process framework for white matter fibers. NeuroImage, 51(1):228–241, 2010. [38] X. Yang and M. Niethammer. Uncertainty quantification for LDDMM using a low-rank Hessian approximation.
In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 289–296. Springer, 2015. | 2017 | 317 |
6,805 | Online Dynamic Programming

Holakou Rahmanian, Department of Computer Science, University of California Santa Cruz, Santa Cruz, CA 95060, holakou@ucsc.edu
Manfred K. Warmuth, Department of Computer Science, University of California Santa Cruz, Santa Cruz, CA 95060, manfred@ucsc.edu

Abstract

We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials. An instance of the type of problems we consider is to find a good binary search tree in a changing environment. At the beginning of each trial, the learner probabilistically chooses a tree with the n keys at the internal nodes and the n + 1 gaps between keys at the leaves. The learner is then told the frequencies of the keys and gaps and is charged by the average search cost for the chosen tree. The problem is online because the frequencies can change between trials. The goal is to develop algorithms with the property that their total average search cost (loss) in all trials is close to the total loss of the best tree chosen in hindsight for all trials. The challenge, of course, is that the algorithm has to deal with an exponential number of trees. We develop a general methodology for tackling such problems for a wide class of dynamic programming algorithms. Our framework allows us to extend online learning algorithms like Hedge [16] and Component Hedge [25] to a significantly wider class of combinatorial objects than was possible before.

1 Introduction

Consider the following online learning problem. In each trial, the algorithm plays with a Binary Search Tree (BST) for a given set of n keys. Then the adversary reveals a set of probabilities for the n keys and their n + 1 gaps, and the algorithm incurs a linear loss given by the average search cost. The goal is to predict with a sequence of BSTs minimizing the regret, which is the difference between the total loss of the algorithm and the total loss of the single best BST chosen in hindsight.
A natural approach to solve this problem is to keep track of a distribution on all possible BSTs during the trials (e.g. by running the Hedge algorithm [16] with one weight per BST). However, this seems impractical since it requires maintaining a weight vector of exponential size. Here we focus on combinatorial objects that are comprised of n components where the number of objects is typically exponential in n. For a BST the components are the depth values of the keys and the gaps in the tree. This line of work requires that the loss of an object is linear in the components (see e.g. [35]). In our BST examples the loss is simply the dot product between the components and the frequencies. There has been much work on developing efficient algorithms for learning objects that are composed of components when the loss is linear in the components. These algorithms get away with keeping one weight per component instead of one weight per object. Previous work includes learning k-sets [36], permutations [19, 37, 2] and paths in a DAG [35, 26, 18, 11, 5]. There are also general tools for learning such combinatorial objects with linear losses. The Follow the Perturbed Leader (FPL) [22] is a simple algorithm that adds random perturbations to the cumulative loss of each component, and then predicts with the combinatorial object that has the minimum perturbed loss. The Component Hedge (CH) algorithm [25] (and its extensions [34, 33, 17]) constitutes another generic approach. Each object is typically represented as a bit vector over the set of components where the 1-bits 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. indicate the components appearing in the object. The algorithm maintains a mixture of the weight vectors representing all objects. The weight space of CH is thus the convex hull of the weight vectors representing the objects. This convex hull is a polytope of dimension n with the objects as corners. 
For the efficiency of CH it is typically required that this polytope has a small number of facets (polynomial in n). The CH algorithm predicts with a random corner of the polytope whose expectation equals the maintained mixture vector in the polytope. Unfortunately the results of CH and its current extensions cannot be directly applied to problems like BST. This is because the BST polytope discussed above does not have a characterization with polynomially many facets. There is an alternate polytope for BSTs with a polynomial number of facets (called the associahedron [29]) but the average search cost is not linear in the components used for this polytope. We close this gap by exploiting the dynamic programming algorithm which solves the BST optimization problem. This gives us a polytope with a polynomial number of facets while the loss is linear in the natural components of the BST problem. Contributions We propose a general method for learning combinatorial objects whose optimization problem can be solved efficiently via an algorithm belonging to a wide class of dynamic programming algorithms. Examples include BST (see Section 4.1), Matrix-Chain Multiplication, Knapsack, Rod Cutting, and Weighted Interval Scheduling (see Appendix A). Using the underlying graph of subproblems induced by the dynamic programming algorithm for these problems, we define a representation of the combinatorial objects by encoding them as a specific type of subgraphs called k-multipaths. These subgraphs encode each object as a series of successive decisions (i.e. the components) over which the loss is linear. Also the associated polytope has a polynomial number of facets. These properties allow us to apply the standard Hedge [16, 28] and Component Hedge algorithms [25]. Paper Outline In Section 2 we start with online learning of paths which are the simplest type of subgraphs we consider. 
This section briefly describes the two main existing algorithms for the path problem: (1) an efficient implementation of Hedge using path kernels, and (2) Component Hedge. Section 3 introduces a much richer class of subgraphs, called k-multipaths, and generalizes the algorithms. In Section 4, we define a class of combinatorial objects recognized by dynamic programming algorithms. Then we prove that minimizing a specific dynamic programming problem from this class over trials reduces to online learning of k-multipaths. The online learning for BSTs uses k-multipaths for k = 2 (Section 4.1). A large number of additional examples are discussed in Appendix A. Finally, Section 5 concludes with a comparison to other algorithms and future work, and discusses how our method is generalized for arbitrary “min-sum” dynamic programming problems.

2 Background

Perhaps the simplest algorithms in online learning are the “experts algorithms” like the Randomized Weighted Majority [28] or the Hedge algorithm [16]. They keep track of a probability vector over all experts. The weight/probability $w_i$ of expert $i$ is proportional to $\exp(-\eta L(i))$, where $L(i)$ is the cumulative loss of expert $i$ until the current trial and $\eta$ is a non-negative learning rate. In this paper we use exponentially many combinatorial objects (composed of components) as the set of experts. When Hedge is applied to such combinatorial objects, we call it Expanded Hedge (EH) because it is applied to a combinatorially “expanded domain”. As we shall see, if the loss is linear over components (and thus the exponential weight of an object becomes a product over components), then this can often be exploited to obtain an efficient implementation of EH.

Learning Paths. The online shortest path problem has been explored both in the full information setting [35, 25] and in various bandit settings [18, 4, 5, 12]. Concretely, the problem in the full information setting is as follows.
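The basic Hedge weighting $w_i \propto \exp(-\eta L(i))$ described above is a one-liner; a minimal sketch, where subtracting the minimum cumulative loss is a standard numerical-stability trick and not part of the algorithm's definition:

```python
import numpy as np

def hedge_weights(cum_losses, eta):
    """Hedge / Randomized Weighted Majority: expert i gets probability
    proportional to exp(-eta * L(i)), with L(i) the cumulative loss so far."""
    w = np.exp(-eta * (cum_losses - np.min(cum_losses)))  # shift for stability
    return w / w.sum()
```

Maintaining one such weight per object is exactly what becomes infeasible for exponentially many combinatorial experts, motivating the per-component factorizations below.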
We are given a directed acyclic graph (DAG) $G = (V, E)$ with a designated source node $s \in V$ and sink node $t \in V$. In each trial, the algorithm predicts with a path from $s$ to $t$. Then for each edge $e \in E$, the adversary reveals a loss $\ell_e \in [0, 1]$. The loss of the algorithm is given by the sum of the losses of the edges along the predicted path. The goal is to minimize the regret, which is the difference between the total loss of the algorithm and that of the single best path chosen in hindsight.

Expanded Hedge on Paths. Takimoto and Warmuth [35] found an efficient implementation of EH by exploiting the additivity of the loss over the edges of a path. In this case the weight $w_\pi$ of a path $\pi$ is proportional to $\prod_{e \in \pi} \exp(-\eta L_e)$, where $L_e$ is the cumulative loss of edge $e$. The algorithm maintains one weight $w_e$ per edge such that the total weight of all edges leaving any non-sink node sums to 1. This implies that $w_\pi = \prod_{e \in \pi} w_e$, and sampling a path is easy. At the end of the current trial, each edge $e$ receives additional loss $\ell_e$, and the updated path weights have the form $w^{\mathrm{new}}_\pi = \frac{1}{Z} \prod_{e \in \pi} w_e \exp(-\eta \ell_e)$, where $Z$ is a normalization. Now a certain efficient procedure called weight pushing [31] is applied. It finds new edge weights $w^{\mathrm{new}}_e$ s.t. the total outflow out of each node is one and the updated weights are again in “product form”, i.e. $w^{\mathrm{new}}_\pi = \prod_{e \in \pi} w^{\mathrm{new}}_e$, facilitating sampling.

Theorem 1 (Takimoto-Warmuth [35]). Given a DAG $G = (V, E)$ with designated source node $s \in V$ and sink node $t \in V$, assume $N$ is the number of paths in $G$ from $s$ to $t$, $L^*$ is the total loss of the best path, and $B$ is an upper bound on the loss of any path in each trial. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, EH guarantees:
$$\mathbb{E}[L_{EH}] - L^* \le B\sqrt{2T \log N} + B \log N.$$

Component Hedge on Paths. Koolen, Warmuth and Kivinen [25] applied CH to the path problem. The edges are the components of the paths. A path is encoded as a bit vector $\pi$ of $|E|$ components where the 1-bits are the edges in the path.
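The weight-pushing step [31] can be sketched as follows for a single-sink DAG whose nodes are given in topological order (the dict-based graph representation is an illustrative choice). Here $Z(v)$ is the total weight of all paths from $v$ to the sink; rescaling each edge $(v, u)$ by $Z(u)/Z(v)$ makes every node's outflow sum to 1 while each path's probability keeps its product form:

```python
from collections import defaultdict

def weight_push(nodes, edges, sink):
    """Weight pushing: renormalize edge weights so the outflow at each
    non-sink node sums to 1. `nodes` must be topologically ordered;
    `edges` maps (u, v) -> positive weight."""
    Z = defaultdict(float)
    Z[sink] = 1.0
    for v in reversed(nodes):  # backward pass: successors before predecessors
        if v != sink:
            Z[v] = sum(w * Z[t] for (u, t), w in edges.items() if u == v)
    return {(u, t): w * Z[t] / Z[u] for (u, t), w in edges.items()}
```

After pushing, sampling a path is a sequence of local multinomial draws, and the probability of a path equals its (normalized) product weight.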
The convex hull of all paths is called the unit-flow polytope. CH maintains a mixture vector in this polytope. The constraints of the polytope enforce an outflow of 1 from the source node $s$, and flow conservation at every other node but the sink node $t$. In each trial, the weight $w_e$ of each edge is updated multiplicatively by the factor $\exp(-\eta \ell_e)$. Then the weight vector is projected back to the unit-flow polytope via a relative entropy projection. This projection is achieved by iteratively projecting onto the flow constraint of a particular vertex and then repeatedly cycling through the vertices [8]. Finally, to sample with the same expectation as the mixture vector in the polytope, this vector is decomposed into paths using a greedy approach which removes one path at a time and zeros out at least one edge in the remaining mixture vector in each iteration.

Theorem 2 (Koolen-Warmuth-Kivinen [25]). Given a DAG $G = (V, E)$ with designated source node $s \in V$ and sink node $t \in V$, let $D$ be a length bound on the paths in $G$ from $s$ to $t$ against which the CH algorithm is compared. Also denote the total loss of the best path of length at most $D$ by $L^*$. Then with proper tuning of the learning rate $\eta$ over the $T$ trials, CH guarantees:
$$\mathbb{E}[L_{CH}] - L^* \le D\sqrt{4T \log |V|} + 2D \log |V|.$$

Much of this paper is concerned with generalizing the tools sketched in this section from paths to k-multipaths, from the unit-flow polytope to the k-flow polytope, and developing a generalized version of weight pushing for k-multipaths.

3 Learning k-Multipaths

As we shall see, k-multipaths will be subgraphs of k-DAGs built from k-multiedges. Examples of all the definitions are given in Figure 1 for the case k = 2.

Definition 1 (k-DAG). A DAG $G = (V, E)$ is called a k-DAG if it has the following properties: (i) There exists one designated “source” node $s \in V$ with no incoming edges. (ii) There exists a set of “sink” nodes $T \subset V$ which is the set of nodes with no outgoing edges.
(iii) For all non-sink vertices v, the set of edges leaving v is partitioned into disjoint sets of size k, which are called k-multiedges.

We denote the set of multiedges "leaving" vertex v as M_v and the set of all multiedges of the DAG as M. Each k-multipath can be generated by starting with a single multiedge at the source and choosing inflow many (i.e. as many as the number of incoming edges) successor multiedges at the internal nodes, until we reach the sink nodes in T. An example of a 2-multipath is given in Figure 1. Recall that paths were described as bit vectors π of size |E| where the 1-bits were the edges in the path. In k-multipaths each edge bit π_e becomes a non-negative count.

Figure 1: On the left we give an example of a 2-DAG. The source s and the nodes in the first layer each have two 2-multiedges, depicted in red and blue. The nodes in the next layer each have one 2-multiedge, depicted in green. An example of a 2-multipath in the 2-DAG is given on the right. The 2-multipath is represented as an |E|-dimensional count vector π. The grayed edges are the edges with count π_e = 0. All non-zero counts π_e are shown next to their associated edges e. Note that for nodes in the middle layers, the outflow is always 2 times the inflow.

Definition 2 (k-multipath). Given a k-DAG G = (V, E), let π ∈ ℕ^{|E|} in which π_e is associated with e ∈ E. Define the inflow π_in(v) := Σ_{(u,v)∈E} π_{(u,v)} and the outflow π_out(v) := Σ_{(v,u)∈E} π_{(v,u)}. We call π a k-multipath if it has the following properties:
(i) The outflow π_out(s) of the source s is k.
(ii) For any two edges e, e′ in a multiedge m of G, π_e = π_e′. (When clear from the context, we denote this common value as π_m.)
(iii) For each vertex v ∈ V − T − {s}, the outflow is k times the inflow, i.e. π_out(v) = k × π_in(v).

k-Multipath Learning Problem  We define the problem of online learning of k-multipaths on a given k-DAG as follows. In each trial, the algorithm randomly predicts with a k-multipath π.
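The three properties of Definition 2 can be checked mechanically. The sketch below assumes a dictionary-based layout for the count vector (illustrative data structures, not the paper's notation):

```python
def is_k_multipath(pi, edges, multiedges, s, sinks, k):
    """Check the three properties of Definition 2 for a count vector.
    pi: dict edge -> count; edges: list of (u, v) pairs;
    multiedges: list of tuples of edges; s: source; sinks: sink nodes."""
    def outflow(v):
        return sum(pi[e] for e in edges if e[0] == v)
    def inflow(v):
        return sum(pi[e] for e in edges if e[1] == v)
    if outflow(s) != k:                          # property (i)
        return False
    for m in multiedges:                         # property (ii)
        if len({pi[e] for e in m}) != 1:
            return False
    internal = {u for e in edges for u in e} - set(sinks) - {s}
    return all(outflow(v) == k * inflow(v) for v in internal)  # property (iii)
```

For example, on a small 2-DAG with one 2-multiedge at the source, the all-ones count vector is a valid 2-multipath, while moving all of the source's outflow onto one edge violates property (ii).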
Then for each edge e ∈ E, the adversary reveals a loss ℓ_e ∈ [0, 1] incurred during that trial. The linear loss of the algorithm during this trial is given by π · ℓ. Observe that the online shortest path problem is the special case with k = |T| = 1. In the remainder of this section, we generalize the algorithms in Section 2 to the online learning problem of k-multipaths.

3.1 Expanded Hedge on k-Multipaths

We implement EH efficiently for learning k-multipaths by considering each k-multipath as an expert. Recall that each k-multipath can be generated by starting with a single multiedge at the source and choosing inflow many successor multiedges at the internal nodes. Multipaths are composed of multiedges as components, and with each multiedge m ∈ M we associate a weight w_m. We maintain a distribution W over multipaths defined in terms of the weights w ∈ ℝ^{|M|}_{≥0} on the multiedges. The distribution W will have the following canonical properties:

Definition 3 (EH distribution properties).
1. The weights are in product form, i.e. W(π) = ∏_{m∈M} (w_m)^{π_m}. Recall that π_m is the common value in π among the edges in m.
2. The weights are locally normalized, i.e. Σ_{m∈M_v} w_m = 1 for all v ∈ V − T.
3. The total path weight is one, i.e. Σ_π W(π) = 1.

Using these properties, sampling a k-multipath from W can be done easily as follows. We start by sampling a single k-multiedge at the source and continue sampling inflow many successor multiedges at the internal nodes until the k-multipath reaches the sink nodes in T. Observe that π_m indicates the number of times the k-multiedge m is sampled in this process. EH updates the weights of the multipaths as follows:

W^new(π) = (1/Z) W(π) exp(−η π · ℓ)
         = (1/Z) (∏_{m∈M} (w_m)^{π_m}) exp(−η Σ_{m∈M} π_m Σ_{e∈m} ℓ_e)
         = (1/Z) ∏_{m∈M} (ŵ_m)^{π_m},  where ŵ_m := w_m exp(−η Σ_{e∈m} ℓ_e).

Thus the weight w_m of each k-multiedge m ∈ M is updated multiplicatively to ŵ_m by multiplying w_m with the exponentiated loss factor exp(−η Σ_{e∈m} ℓ_e) and then renormalizing with Z.
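Concretely, one trial of this update can be sketched as follows; the backward renormalization pass in the middle is exactly the generalized weight pushing described next. The dictionary-based data structures are assumptions for illustration, not the authors' implementation:

```python
import math

def eh_trial(multiedges, w, edge_loss, eta, topo, sinks):
    """One EH trial on k-multipaths: multiply each multiedge weight by the
    exponentiated loss of its edges, then renormalize so that the multiedge
    weights leaving every non-sink vertex again sum to 1.
    multiedges: m -> (tail vertex, tuple of head vertices);
    topo: vertices in topological order, source first."""
    # multiplicative update: w_m <- w_m * exp(-eta * sum of edge losses in m)
    w_hat = {m: w[m] * math.exp(-eta * sum(edge_loss[(v, u)] for u in heads))
             for m, (v, heads) in multiedges.items()}
    by_tail = {}
    for m, (v, _) in multiedges.items():
        by_tail.setdefault(v, []).append(m)
    # recurse backwards in the DAG to compute the normalizations Z_v
    Z = {}
    for v in reversed(topo):
        Z[v] = 1.0 if v in sinks else sum(
            w_hat[m] * math.prod(Z[u] for u in multiedges[m][1])
            for m in by_tail[v])
    # locally renormalized weights for the next trial
    return {m: w_hat[m] * math.prod(Z[u] for u in heads) / Z[v]
            for m, (v, heads) in multiedges.items()}
```

On a diamond DAG with k = 1 (two paths s → a → t and s → b → t), charging loss only to the edge (s, a) shifts weight toward the path through b while keeping the weights leaving s summing to one.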
Note that Σ_{e∈m} ℓ_e is the loss of multiedge m.

Generalized Weight Pushing  We generalize the weight pushing algorithm [31] to k-multipaths to reestablish the three canonical properties of Definition 3. The new weights W^new(π) = (1/Z) ∏_{m∈M} (ŵ_m)^{π_m} sum to 1 (i.e. Property 3 holds) since Z normalizes the weights. Our goal is to find new multiedge weights w_m^new so that the other two properties hold as well, i.e. W^new(π) = ∏_{m∈M} (w_m^new)^{π_m} and Σ_{m∈M_v} w_m^new = 1 for all non-sinks v. For this purpose, we introduce a normalization Z_v for each vertex v. Note that Z_s = Z, where s is the source node. Now the generalized weight pushing finds new weights w_m^new for the multiedges to be used in the next trial:
1. For sinks v ∈ T, Z_v := 1.
2. Recursing backwards in the DAG, let Z_v := Σ_{m∈M_v} ŵ_m ∏_{u:(v,u)∈m} Z_u for all non-sinks v.
3. For each multiedge m from v to u_1, . . . , u_k, w_m^new := ŵ_m (∏_{i=1}^{k} Z_{u_i}) / Z_v.

Appendix B proves the correctness and time complexity of this generalized weight pushing algorithm.

Regret Bound  In order to apply the regret bound of EH [16], we have to initialize the distribution W over k-multipaths to the uniform distribution. This is achieved by setting all w_m to 1, followed by an application of generalized weight pushing. Note that Theorem 1 is the special case of the theorem below with k = 1.

Theorem 3. Given a k-DAG G with designated source node s and sink nodes T, assume N is the number of k-multipaths in G from s to T, L* is the total loss of the best k-multipath, and B is an upper bound on the loss of any k-multipath in each trial. Then with proper tuning of the learning rate η over the T trials, EH guarantees:

E[L_EH] − L* ≤ B √(2 T log N) + B log N.

3.2 Component Hedge on k-Multipaths

We implement CH efficiently for learning k-multipaths. Here the k-multipaths are the objects, which are represented as |E|-dimensional¹ count vectors π (Definition 2). The algorithm maintains an |E|-dimensional mixture vector w in the convex hull of the count vectors.
This hull is the following polytope over weight vectors obtained by relaxing the integer constraints on the count vectors:

Definition 4 (k-flow polytope). Given a k-DAG G = (V, E), let w ∈ ℝ^{|E|}_{≥0} in which w_e is associated with e ∈ E. Define the inflow w_in(v) := Σ_{(u,v)∈E} w_{(u,v)} and the outflow w_out(v) := Σ_{(v,u)∈E} w_{(v,u)}. The vector w belongs to the k-flow polytope of G if it has the following properties:
(i) The outflow w_out(s) of the source s is k.
(ii) For any two edges e, e′ in a multiedge m of G, w_e = w_e′.
(iii) For each vertex v ∈ V − T − {s}, the outflow is k times the inflow, i.e. w_out(v) = k × w_in(v).

¹For convenience we use the edges as the components for CH instead of the multiedges as for EH.

In each trial, the weight w_e of each edge is updated multiplicatively to ŵ_e = w_e exp(−η ℓ_e), and then the weight vector ŵ is projected back to the k-flow polytope via a relative entropy projection:

w^new := argmin_{w ∈ k-flow polytope} Δ(w || ŵ),  where Δ(a || b) = Σ_i a_i log(a_i/b_i) + b_i − a_i.

This projection is achieved by repeatedly cycling over the vertices and enforcing the local flow constraints at the current vertex. Based on the properties of the k-flow polytope in Definition 4, the corresponding projection steps can be rewritten as follows:
(i) Normalize the outflow w_out(s) of the source to k.
(ii) Given a multiedge m, set the k weights in m to their geometric average.
(iii) Given a vertex v ∈ V − T − {s}, scale the adjacent edges of v such that
w_out(v) := (k (w_out(v))^k w_in(v))^{1/(k+1)}  and  w_in(v) := (1/k) (k (w_out(v))^k w_in(v))^{1/(k+1)}.
See Appendix C for details.

Decomposition  The flow polytope has exponentially many objects as its corners. We now rewrite any vector w in the polytope as a mixture of |M| objects. CH then predicts with a random object drawn from this sparse mixture. The mixture vector is decomposed by greedily removing a multipath from the current weight vector as follows. Ignore all edges with zero weight.
Pick a multiedge at s, and iteratively pick inflow many multiedges at the internal nodes until the sink nodes are reached. Now subtract this constructed multipath, scaled by its minimum edge weight, from the mixture vector w. This zeros out at least k edges and maintains the flow constraints at the internal nodes.

Regret Bound  The regret bound for CH depends on a good choice of the initial weight vector w^init in the k-flow polytope. We use an initialization technique recently introduced in [32]. Instead of explicitly selecting w^init in the k-flow polytope, the initial weight is obtained by projecting a point ŵ^init outside of the polytope into the polytope. This yields the following regret bounds (Appendix D):

Theorem 4. Given a k-DAG G = (V, E), let D be an upper bound on the 1-norm of the k-multipaths in G. Also denote the total loss of the best k-multipath by L*. Then with proper tuning of the learning rate η over the T trials, CH guarantees:

E[L_CH] − L* ≤ D √(2 T (2 log |V| + log D)) + 2 D log |V| + D log D.

Moreover, when the k-multipaths are bit vectors:

E[L_CH] − L* ≤ D √(4 T log |V|) + 2 D log |V|.

Notice that by setting |T| = k = 1, the algorithm for path learning in [25] is recovered. Also observe that Theorem 2 is a corollary of Theorem 4, since every path is represented as a bit vector.

4 Online Dynamic Programming with Multipaths

We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials. We will use our definition of k-DAGs to describe a certain type of dynamic programming problem. The vertex set V is a set of subproblems to be solved. The source node s ∈ V is the final subproblem. The sink nodes T ⊂ V are the base subproblems. An edge from a node v to another node v′ means that subproblem v may recurse on v′. We assume a non-base subproblem v always breaks into exactly k smaller subproblems. A step of the dynamic programming recursion is thus represented by a k-multiedge.
We assume the sets of k subproblems between possible recursive calls at a node are disjoint. This corresponds to the fact that the choice of multiedges at a node partitions the edge set leaving that node. There is a loss associated with each sink node in T. Also, with the recursions at an internal node v, a local loss is added to the loss of the subproblems; this loss depends on v and the chosen k-multiedge leaving v. Recall that M_v is the set of multiedges leaving v. We can handle the following type of "min-sum" recurrences:

OPT(v) = L_T(v)  if v ∈ T,
OPT(v) = min_{m∈M_v} [ Σ_{u:(v,u)∈m} OPT(u) + L_M(m) ]  if v ∈ V − T.

The problem of repeatedly solving such a dynamic programming problem over trials now becomes the problem of online learning of k-multipaths in this k-DAG. Note that due to the correctness of the dynamic programming, every possible solution of the dynamic programming problem can be encoded as a k-multipath in the k-DAG and vice versa. The loss of a given multipath is the sum of L_M(m) over all multiedges m in the multipath, plus the sum of L_T(v) over all sink nodes v at the bottom of the multipath. To capture the same loss, we can alternatively define losses over the edges of the k-DAG. Concretely, for each edge (v, u) in a given multiedge m, define ℓ_{(v,u)} := (1/k) L_M(m) + 𝟙{u∈T} L_T(u), where 𝟙{·} is the indicator function. In summary, we are addressing the above min-sum type of dynamic programming problem specified by a k-DAG and local losses, where for the sake of simplicity we made two assumptions: each non-base subproblem breaks into exactly k smaller subproblems, and the sets of k subproblems chosen at a node are disjoint. We briefly discuss in the conclusion section how to generalize our methods to arbitrary min-sum dynamic programming problems, where the sets of subproblems can overlap and may have different sizes.

4.1 The Example of Learning Binary Search Trees

Recall again the online version of the optimal binary search tree (BST) problem [10]: We are given a set of n distinct keys K_1 < K_2 <
· · · < K_n and n + 1 gaps or "dummy keys" D_0, . . . , D_n indicating search failures, such that for all i ∈ {1, . . . , n}, D_{i−1} < K_i < D_i. In each trial, the algorithm predicts with a BST. Then the adversary reveals a frequency vector ℓ = (p, q) with p ∈ [0, 1]^n, q ∈ [0, 1]^{n+1} and Σ_{i=1}^{n} p_i + Σ_{j=0}^{n} q_j = 1. For each i, j, the frequencies p_i and q_j are the search probabilities for K_i and D_j, respectively. The loss is defined as the average search cost in the predicted BST, which is the average depth² of all the nodes in the BST:

loss = Σ_{i=1}^{n} depth(K_i) · p_i + Σ_{j=0}^{n} depth(D_j) · q_j.

Convex Hull of BSTs  Implementing CH requires a representation where not only does the BST polytope have a polynomial number of facets, but also the loss is linear over the components. Since the average search cost is linear in the depth(K_i) and depth(D_j) variables, it would be natural to choose these 2n + 1 variables as the components for representing a BST. Unfortunately, the convex hull of all BSTs represented this way is not known to be a polytope with a polynomial number of facets. There is an alternate characterization of the convex hull of BSTs with n internal nodes called the associahedron [29]. This polytope has polynomially (in n) many facets, but the average search cost is not linear in the n components associated with this polytope³.

The Dynamic Programming Representation  The optimal BST problem can be solved via dynamic programming [10]. Each subproblem is denoted by a pair (i, j), for 1 ≤ i ≤ n + 1 and i − 1 ≤ j ≤ n, indicating the optimal BST problem with the keys K_i, . . . , K_j and dummy keys D_{i−1}, . . . , D_j. The base subproblems are (i, i − 1), for 1 ≤ i ≤ n + 1, and the final subproblem is (1, n). The BST dynamic programming problem uses the following recurrence:

OPT(i, j) = q_{i−1}  if j = i − 1,
OPT(i, j) = min_{i ≤ r ≤ j} { OPT(i, r − 1) + OPT(r + 1, j) + Σ_{k=i}^{j} p_k + Σ_{k=i−1}^{j} q_k }  if i ≤ j.

This recurrence always recurses on 2 subproblems.
Therefore we have k = 2, and the associated 2-DAG has the subproblems/vertices V = {(i, j) | 1 ≤ i ≤ n + 1, i − 1 ≤ j ≤ n}, source s = (1, n), and sinks T = {(i, i − 1) | 1 ≤ i ≤ n + 1}. Also, at node (i, j) the set M_{(i,j)} consists of (j − i + 1) many 2-multiedges. The rth 2-multiedge leaving (i, j) is comprised of 2 edges going from the node (i, j) to the nodes (i, r − 1) and (r + 1, j). Figure 2 illustrates the 2-DAG and 2-multipaths associated with BSTs. Since the above recurrence relation correctly solves the offline optimization problem, every 2-multipath in the DAG represents a BST, and every possible BST can be represented by a 2-multipath of the 2-DAG. We have O(n³) edges and multiedges, which are the components of our new representation. The loss of each 2-multiedge leaving (i, j) is Σ_{k=i}^{j} p_k + Σ_{k=i−1}^{j} q_k and is upper bounded by 1. Most crucially, the original average search cost is linear in the losses of the multiedges, and the 2-flow polytope has O(n³) facets.

²Here the root starts at depth 1.
³Concretely, the ith component is a_i b_i, where a_i and b_i are the numbers of nodes in the left and right subtrees of the ith internal node K_i, respectively.

Figure 2: (left) Two different 2-multipaths in the DAG, in red and blue, and (right) their associated BSTs of n = 5 keys and 6 "dummy" keys. Note that each node, and consequently each edge, is visited at most once in these 2-multipaths.

Table 1: Performance of various algorithms over different problems. C is the capacity in the Knapsack problem, and d_max is the upper bound on the dimensions in the matrix-chain multiplication problem.

Problem | FPL | EH | CH
Optimal Binary Search Trees | O(n^{3/2} √T) | O(n^{3/2} √T) | O(n (log n)^{1/2} √T)
Matrix-Chain Multiplications⁴ | — | O(n^{3/2} (d_max)³ √T) | O(n (log n)^{1/2} (d_max)³ √T)
Knapsack | O(n^{3/2} √T) | O(n^{3/2} √T) | O(n (log nC)^{1/2} √T)
Rod Cutting | O(n^{3/2} √T) | O(n^{3/2} √T) | O(n (log n)^{1/2} √T)
Weighted Interval Scheduling | O(n^{3/2} √T) | O(n^{3/2} √T) | O(n (log n)^{1/2} √T)
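The BST recurrence can be implemented directly. The following sketch is a straightforward O(n³) implementation of the offline problem (an illustration, not the authors' code):

```python
def optimal_bst_cost(p, q):
    """Optimal expected search cost via the recurrence OPT(i, j).
    p: key frequencies p_1..p_n, q: dummy-key frequencies q_0..q_n
    (both passed as 0-indexed lists of lengths n and n + 1)."""
    n = len(p)
    p = [0.0] + p                        # shift keys to 1-based indexing
    # OPT[i][j] for 1 <= i <= n + 1 and i - 1 <= j <= n
    OPT = [[0.0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 2):
        OPT[i][i - 1] = q[i - 1]         # base subproblems (i, i - 1)
    for length in range(1, n + 1):       # solve subproblems by size
        for i in range(1, n - length + 2):
            j = i + length - 1
            # local loss of any multiedge leaving (i, j)
            weight = sum(p[i:j + 1]) + sum(q[i - 1:j + 1])
            OPT[i][j] = weight + min(OPT[i][r - 1] + OPT[r + 1][j]
                                     for r in range(i, j + 1))
    return OPT[1][n]                     # the final subproblem (1, n)
```

For a single key with p = (0.5) and q = (0.25, 0.25), the only BST puts the key at depth 1 and both dummies at depth 2, giving expected cost 0.5 + 2·0.5 = 1.5, which the recurrence reproduces.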
Regret Bound  As mentioned earlier, the number of binary trees with n nodes is the nth Catalan number. Therefore N = (2n)! / (n! (n+1)!) ∈ (2^n, 4^n). Also note that the expected search cost is bounded by B = n in each trial. Thus, using Theorem 3, EH achieves a regret bound of O(n^{3/2} √T). Additionally, notice that the number of subproblems in the dynamic programming problem for BSTs is (n+1)(n+2)/2. This is also the number of vertices in the associated 2-DAG, and each 2-multipath representing a BST consists of exactly D = 2n edges. Therefore, using Theorem 4, CH achieves a regret bound of O(n (log n)^{1/2} √T).

5 Conclusions and Future Work

We developed a general framework for online learning of combinatorial objects whose offline optimization problems can be efficiently solved via an algorithm belonging to a large class of dynamic programming algorithms. In addition to BSTs, several example problems are discussed in Appendix A. Table 1 gives the performance of EH and CH in our dynamic programming framework and compares it with the Follow the Perturbed Leader (FPL) algorithm. FPL additively perturbs the losses and then uses dynamic programming to find the solution of minimum loss. FPL essentially always matches EH, and CH is better than both in all cases.

⁴The loss of a fully parenthesized matrix-chain multiplication is the number of scalar multiplications in the execution of all matrix products. This number cannot be expressed as a linear loss over the dimensions of the matrices. We are thus unaware of a way to apply FPL to this problem using the dimensions of the matrices as the components. See Appendix A.1 for more details.

We conclude with a few remarks:

• For EH, projections are simply a renormalization of the weight vector. In contrast, iterative Bregman projections are often needed for projecting back into the polytope used by CH [25, 19]. These methods are known to converge to the exact projection [8, 6] and are reported to be very efficient empirically [25].
For the special cases of Euclidean projections [13] and Sinkhorn balancing [24], linear convergence has been proven. However, we are unaware of a linear convergence proof for general Bregman divergences. Regardless of the convergence rate, the remaining gaps to the exact projections have to be accounted for as additional loss in the regret bounds. We do this in Appendix E for CH.

• For the sake of concreteness, we focused in this paper on dynamic programming problems with "min-sum" recurrence relations, a fixed branching factor k, and mutually exclusive sets of choices at a given subproblem. However, our results can be generalized to arbitrary "min-sum" dynamic programming problems with the methods introduced in [30]: We let the multiedges in G form hyperarcs, each of which is associated with a loss. Furthermore, each combinatorial object is encoded as a hyperpath, which is a sequence of hyperarcs from the source to the sinks. The polytope associated with such a dynamic programming problem is defined by flow-type constraints over the underlying hypergraph G of subproblems. Thus online learning of a dynamic programming solution becomes a problem of learning hyperpaths in a hypergraph, and the techniques introduced in this paper let us implement EH and CH for this more general class of dynamic programming problems.

• In this work we use dynamic programming algorithms for building polytopes with a polynomial number of facets for combinatorial objects. The technique of going from the original polytope to a higher-dimensional polytope in order to reduce the number of facets is known as extended formulation (see e.g. [21]). In the learning application, we also need the additional requirement that the loss is linear in the components of the objects. A general framework of using extended formulations to develop learning algorithms has recently been explored in [32].
• We hope that many of the techniques from the expert setting literature can be adapted to learning combinatorial objects that are composed of components. This includes lower bounding weights for shifting comparators [20] and sleeping experts [7, 1]. Also, in this paper we focus on the full information setting, where the adversary reveals the entire loss vector in each trial. In contrast, in full- and semi-bandit settings the adversary only reveals partial information about the loss. Significant work has already been done on learning combinatorial objects in full- and semi-bandit settings [3, 18, 4, 27, 9]. It seems that the techniques introduced in this paper will also carry over.

• Online Markov Decision Processes (MDPs) [15, 14] are an online learning model that focuses on the sequential revelation of an object using a sequential state-based model. This is very much related to learning paths and to the sequential decisions made in our dynamic programming framework. Connecting our work with the large body of research on MDPs is a promising direction of future research.

• There are several important dynamic programming instances that are not included in the class considered in this paper: the Viterbi algorithm for finding the most probable path in a graph, and variants of the Cocke-Younger-Kasami (CYK) algorithm for parsing probabilistic context-free grammars. The solutions of these problems are min-sum type optimization problems after taking the log of the probabilities. However, taking logs creates unbounded losses. Extending our methods to these dynamic programming problems would be very worthwhile.

Acknowledgments

We thank S.V.N. Vishwanathan for initiating and guiding much of this research. We also thank Michael Collins for helpful discussions and pointers to the literature on hypergraphs and PCFGs. This research was supported by the National Science Foundation (NSF grant IIS-1619271).

References

[1] Dmitry Adamskiy, Manfred K Warmuth, and Wouter M Koolen.
Putting Bayes to sleep. In Advances in Neural Information Processing Systems, pages 135–143, 2012.
[2] Nir Ailon. Improved bounds for online learning over the Permutahedron and other ranking polytopes. In AISTATS, pages 29–37, 2014.
[3] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Minimax policies for combinatorial prediction games. In COLT, volume 19, pages 107–132, 2011.
[4] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2013.
[5] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. Journal of Computer and System Sciences, 74(1):97–114, 2008.
[6] Heinz H Bauschke and Jonathan M Borwein. Legendre functions and the method of random Bregman projections. Journal of Convex Analysis, 4(1):27–67, 1997.
[7] Olivier Bousquet and Manfred K Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3(Nov):363–396, 2002.
[8] Lev M Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967.
[9] Nicolo Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[10] Thomas H. Cormen, Charles Eric Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, Cambridge, 2009.
[11] Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, and Manfred Warmuth. On-line learning algorithms for path experts with non-additive losses. In Conference on Learning Theory, pages 424–447, 2015.
[12] Varsha Dani, Sham M Kakade, and Thomas P Hayes. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, pages 345–352, 2008.
[13] Frank Deutsch. Dykstra's cyclic projections algorithm: the rate of convergence.
In Approximation Theory, Wavelets and Applications, pages 87–94. Springer, 1995.
[14] Travis Dick, Andras Gyorgy, and Csaba Szepesvari. Online learning in Markov decision processes with changing cost sequences. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 512–520, 2014.
[15] Eyal Even-Dar, Sham M Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736, 2009.
[16] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[17] Swati Gupta, Michel Goemans, and Patrick Jaillet. Solving combinatorial games using products, projections and lexicographically optimal bases. Preprint arXiv:1603.00522, 2016.
[18] András György, Tamás Linder, Gábor Lugosi, and György Ottucsák. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8(Oct):2369–2403, 2007.
[19] David P Helmbold and Manfred K Warmuth. Learning permutations with exponential weights. The Journal of Machine Learning Research, 10:1705–1736, 2009.
[20] Mark Herbster and Manfred K Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, 1998.
[21] Volker Kaibel. Extended formulations in combinatorial optimization. Preprint arXiv:1104.1023, 2011.
[22] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[23] Jon Kleinberg and Eva Tardos. Algorithm Design. Addison Wesley, 2006.
[24] Philip A Knight. The Sinkhorn–Knopp algorithm: convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261–275, 2008.
[25] Wouter M Koolen, Manfred K Warmuth, and Jyrki Kivinen. Hedging structured concepts. In Conference on Learning Theory, pages 239–254. Omnipress, 2010.
[26] Dima Kuzmin and Manfred K Warmuth.
Optimum follow the leader algorithm. In Learning Theory, pages 684–686. Springer, 2005.
[27] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In Artificial Intelligence and Statistics, pages 535–543, 2015.
[28] Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[29] Jean-Louis Loday. The multiple facets of the associahedron. Proc. 2005 Academy Coll. Series, 2005.
[30] R Kipp Martin, Ronald L Rardin, and Brian A Campbell. Polyhedral characterization of discrete dynamic programming. Operations Research, 38(1):127–138, 1990.
[31] Mehryar Mohri. Weighted automata algorithms. In Handbook of Weighted Automata, pages 213–254. Springer, 2009.
[32] Holakou Rahmanian, David Helmbold, and S.V.N. Vishwanathan. Online learning of combinatorial objects via extended formulation. Preprint arXiv:1609.05374, 2017.
[33] Arun Rajkumar and Shivani Agarwal. Online decision-making in general combinatorial spaces. In Advances in Neural Information Processing Systems, pages 3482–3490, 2014.
[34] Daiki Suehiro, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Kiyohito Nagano. Online prediction under submodular constraints. In International Conference on Algorithmic Learning Theory, pages 260–274. Springer, 2012.
[35] Eiji Takimoto and Manfred K Warmuth. Path kernels and multiplicative updates. The Journal of Machine Learning Research, 4:773–818, 2003.
[36] Manfred K Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(10):2287–2320, 2008.
[37] Shota Yasutake, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Masayuki Takeda. Online linear optimization over permutations. In Algorithms and Computation, pages 534–543. Springer, 2011.
Neural Discrete Representation Learning

Aaron van den Oord, DeepMind, avdnoord@google.com
Oriol Vinyals, DeepMind, vinyals@google.com
Koray Kavukcuoglu, DeepMind, korayk@google.com

Abstract

Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" (where the latents are ignored when they are paired with a powerful autoregressive decoder) typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech, as well as performing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.

1 Introduction

Recent advances in generative modelling of images [38, 12, 13, 22, 10], audio [37, 26] and videos [20, 11] have yielded impressive samples and applications [24, 18]. At the same time, challenging tasks such as few-shot learning [34], domain adaptation [17], or reinforcement learning [35] heavily rely on learnt representations from raw data, but the usefulness of generic representations trained in an unsupervised fashion is still far from being the dominant approach. Maximum likelihood and reconstruction error are two common objectives used to train unsupervised models in the pixel domain; however, their usefulness depends on the particular application the features are used in.
Our goal is to achieve a model that conserves the important features of the data in its latent space while optimising for maximum likelihood. As the work in [7] suggests, the best generative models (as measured by log-likelihood) will be those without latents but with a powerful decoder (such as PixelCNN). However, in this paper, we argue for learning discrete and useful latent variables, which we demonstrate on a variety of domains. Learning representations with continuous features has been the focus of much previous work [16, 39, 6, 9]; however, we concentrate on discrete representations [27, 33, 8, 28], which are potentially a more natural fit for many of the modalities we are interested in. Language is inherently discrete, and similarly speech is typically represented as a sequence of symbols. Images can often be described concisely by language [40]. Furthermore, discrete representations are a natural fit for complex reasoning, planning and predictive learning (e.g., if it rains, I will use an umbrella). While using discrete latent variables in deep learning has proven challenging, powerful autoregressive models have been developed for modelling distributions over discrete variables [37]. In our work, we introduce a new family of generative models successfully combining the variational autoencoder (VAE) framework with discrete latent representations through a novel parameterisation of the posterior distribution of (discrete) latents given an observation. Our model, which relies on vector quantisation (VQ), is simple to train, does not suffer from large variance, and avoids the "posterior collapse" issue which has been problematic with many VAE models that have a powerful decoder, often caused by latents being ignored.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Additionally, it is the first discrete latent VAE model that gets similar performance to its continuous counterparts, while offering the flexibility of discrete distributions. We term our model the VQ-VAE. Since VQ-VAE can make effective use of the latent space, it can successfully model important features that usually span many dimensions in data space (for example, objects span many pixels in images, phonemes in speech, the message in a text fragment, etc.) as opposed to focusing or spending capacity on noise and imperceptible details, which are often local. Lastly, once a good discrete latent structure of a modality is discovered by the VQ-VAE, we train a powerful prior over these discrete random variables, yielding interesting samples and useful applications. For instance, when trained on speech we discover the latent structure of language without any supervision or prior knowledge about phonemes or words. Furthermore, we can equip our decoder with the speaker identity, which allows for speaker conversion, i.e., transferring the voice from one speaker to another without changing the contents. We also show promising results on learning long-term structure of environments for RL. Our contributions can thus be summarised as:

• Introducing the VQ-VAE model, which is simple, uses discrete latents, does not suffer from "posterior collapse" and has no variance issues.
• We show that a discrete latent model (VQ-VAE) performs as well as its continuous model counterparts in log-likelihood.
• When paired with a powerful prior, our samples are coherent and high quality on a wide variety of applications such as speech and video generation.
• We show evidence of learning language through raw speech, without any supervision, and show applications of unsupervised speaker conversion.

2 Related Work

In this work we present a new way of training variational autoencoders [23, 32] with discrete latent variables [27].
Using discrete variables in deep learning has proven challenging, as suggested by the dominance of continuous latent variables in most current work – even when the underlying modality is inherently discrete. There exist many alternatives for training discrete VAEs. The NVIL [27] estimator uses a single-sample objective to optimise the variational lower bound, and uses various variance-reduction techniques to speed up training. VIMCO [28] optimises a multi-sample objective [5], which speeds up convergence further by using multiple samples from the inference network. Recently a few authors have suggested the use of a new continuous reparametrisation based on the so-called Concrete [25] or Gumbel-softmax [19] distribution, which is a continuous distribution with a temperature constant that can be annealed during training to converge to a discrete distribution in the limit. In the beginning of training the variance of the gradients is low but biased, and towards the end of training the variance becomes high but unbiased. None of the above methods, however, close the performance gap with VAEs that use continuous latent variables, where one can use the Gaussian reparameterisation trick, which benefits from much lower variance in the gradients. Furthermore, most of these techniques are typically evaluated on relatively small datasets such as MNIST, and the dimensionality of the latent distributions is small (e.g., below 8). In our work, we use three complex image datasets (CIFAR10, ImageNet, and DeepMind Lab) and a raw speech dataset (VCTK). Our work also extends the line of research where autoregressive distributions are used in the decoder of VAEs and/or in the prior [14]. This has been done for language modelling with LSTM decoders [4], and more recently with dilated convolutional decoders [42]. PixelCNNs [29, 38] are convolutional autoregressive models which have also been used as the distribution in the decoder of VAEs [15, 7].
Finally, our approach also relates to work in image compression with neural networks. Theis et al. [36] use scalar quantisation to compress activations for lossy image compression before arithmetic encoding. Other authors [1] propose a similar compression model with vector quantisation. The authors propose a continuous relaxation of vector quantisation which is annealed over time to obtain a hard clustering. In their experiments they first train an autoencoder, afterwards vector quantisation is applied to the activations of the encoder, and finally the whole network is fine-tuned using the soft-to-hard relaxation with a small learning rate. In our experiments we were unable to train using the soft-to-hard relaxation approach from scratch, as the decoder was always able to invert the continuous relaxation during training, so that no actual quantisation took place.

3 VQ-VAE

Perhaps the work most related to our approach is the VAE. VAEs consist of the following parts: an encoder network which parameterises a posterior distribution q(z|x) of discrete latent random variables z given the input data x, a prior distribution p(z), and a decoder with a distribution p(x|z) over input data. Typically, the posteriors and priors in VAEs are assumed normally distributed with diagonal covariance, which allows the Gaussian reparametrisation trick to be used [32, 23]. Extensions include autoregressive prior and posterior models [14], normalising flows [31, 10], and inverse autoregressive posteriors [22]. In this work we introduce the VQ-VAE, where we use discrete latent variables with a new way of training, inspired by vector quantisation (VQ). The posterior and prior distributions are categorical, and the samples drawn from these distributions index an embedding table. These embeddings are then used as input into the decoder network.
3.1 Discrete Latent Variables

We define a latent embedding space e ∈ R^{K×D} where K is the size of the discrete latent space (i.e., a K-way categorical), and D is the dimensionality of each latent embedding vector e_i. Note that there are K embedding vectors e_i ∈ R^D, i ∈ 1, 2, ..., K. As shown in Figure 1, the model takes an input x, which is passed through an encoder producing output z_e(x). The discrete latent variables z are then calculated by a nearest-neighbour look-up using the shared embedding space e, as shown in equation 1. The input to the decoder is the corresponding embedding vector e_k, as given in equation 2. One can see this forward computation pipeline as a regular autoencoder with a particular non-linearity that maps the latents to 1-of-K embedding vectors. The complete set of parameters for the model is the union of the parameters of the encoder, the decoder, and the embedding space e. For the sake of simplicity we use a single random variable z to represent the discrete latent variables in this section; however for speech, images and videos we actually extract 1D, 2D and 3D latent feature spaces respectively. The posterior categorical distribution q(z|x) is defined as one-hot as follows:

q(z = k|x) = 1 if k = argmin_j ∥z_e(x) − e_j∥_2, and 0 otherwise,   (1)

where z_e(x) is the output of the encoder network. We view this model as a VAE in which we can bound log p(x) with the ELBO. Our proposal distribution q(z = k|x) is deterministic, and by defining a simple uniform prior over z we obtain a KL divergence that is constant and equal to log K. The representation z_e(x) is passed through the discretisation bottleneck and mapped onto the nearest element of the embedding e, as given in equations 1 and 2:

z_q(x) = e_k, where k = argmin_j ∥z_e(x) − e_j∥_2.   (2)

3.2 Learning

Note that there is no real gradient defined for equation 2; however we approximate the gradient similarly to the straight-through estimator [3] and just copy gradients from the decoder input z_q(x) to the encoder output z_e(x).
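The nearest-neighbour look-up of equations 1 and 2 can be sketched in a few lines of NumPy (the function name and batch layout are our own; a deep-learning framework would additionally copy gradients from z_q(x) back to z_e(x), as described above, which plain NumPy does not model):

```python
import numpy as np

def quantize(z_e, embeddings):
    """Nearest-neighbour quantisation of equations 1 and 2.

    z_e:        (N, D) batch of encoder outputs z_e(x).
    embeddings: (K, D) codebook e with K embedding vectors of dimension D.
    Returns the selected indices k and the quantised vectors z_q(x) = e_k.
    """
    # Squared L2 distance between each encoder output and each embedding.
    dists = ((z_e[:, None, :] - embeddings[None, :, :]) ** 2).sum(axis=-1)
    k = dists.argmin(axis=1)   # k = argmin_j ||z_e(x) - e_j||_2
    z_q = embeddings[k]        # the one-hot posterior selects e_k
    return k, z_q
```

Viewed this way, the "posterior" is simply the deterministic assignment of each encoder output to its closest codebook entry.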
One could also use the subgradient through the quantisation operation, but this simple estimator worked well for the initial experiments in this paper.

Figure 1: Left: A figure describing the VQ-VAE. Right: Visualisation of the embedding space. The output of the encoder z(x) is mapped to the nearest point e_2. The gradient ∇_z L (in red) will push the encoder to change its output, which could alter the configuration in the next forward pass.

During forward computation the nearest embedding z_q(x) (equation 2) is passed to the decoder, and during the backwards pass the gradient ∇_z L is passed unaltered to the encoder. Since the output representation of the encoder and the input to the decoder share the same D-dimensional space, the gradients contain useful information for how the encoder has to change its output to lower the reconstruction loss. As seen in Figure 1 (right), the gradient can push the encoder's output to be discretised differently in the next forward pass, because the assignment in equation 1 will be different. Equation 3 specifies the overall loss function. It has three components that are used to train different parts of the VQ-VAE. The first term is the reconstruction loss (or the data term), which optimises the decoder and the encoder (through the estimator explained above). Due to the straight-through gradient estimation of the mapping from z_e(x) to z_q(x), the embeddings e_i receive no gradients from the reconstruction loss log p(x|z_q(x)). Therefore, in order to learn the embedding space, we use one of the simplest dictionary learning algorithms, Vector Quantisation (VQ). The VQ objective uses the l2 error to move the embedding vectors e_i towards the encoder outputs z_e(x), as shown in the second term of equation 3. Because this loss term is only used for updating the dictionary, one can alternatively also update the dictionary items as a function of moving averages of z_e(x) (not used for the experiments in this work).
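The moving-average alternative mentioned above is not used in the paper's experiments, but one plausible exponential-moving-average codebook update could look like the following sketch (all names, the running statistics, and the decay value are our own assumptions, not the paper's specification):

```python
import numpy as np

def ema_codebook_update(embeddings, counts, sums, z_e, assignments, decay=0.99):
    """Move each dictionary item towards the moving average of the encoder
    outputs z_e(x) assigned to it, instead of using the l2 loss term.

    counts: (K,) running usage count per code.
    sums:   (K, D) running sum of assigned encoder outputs.
    All three arrays are updated in place.
    """
    K = embeddings.shape[0]
    one_hot = np.eye(K)[assignments]                          # (N, K) assignments
    counts[:] = decay * counts + (1 - decay) * one_hot.sum(axis=0)
    sums[:] = decay * sums + (1 - decay) * (one_hot.T @ z_e)
    embeddings[:] = sums / np.maximum(counts[:, None], 1e-5)  # avoid divide-by-zero
    return embeddings
```

With decay set to zero this reduces to plain k-means cluster means over the current batch, which makes the connection to the dictionary-learning view explicit.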
Finally, since the volume of the embedding space is dimensionless, it can grow arbitrarily if the embeddings e_i do not train as fast as the encoder parameters. To make sure the encoder commits to an embedding and its output does not grow, we add a commitment loss, the third term in equation 3. Thus, the total training objective becomes:

L = log p(x|z_q(x)) + ∥sg[z_e(x)] − e∥²_2 + β∥z_e(x) − sg[e]∥²_2,   (3)

where sg stands for the stop-gradient operator, defined as the identity at forward computation time with zero partial derivatives, thus effectively constraining its operand to be a non-updated constant. The decoder optimises the first loss term only, the encoder optimises the first and the last loss terms, and the embeddings are optimised by the middle loss term. We found the resulting algorithm to be quite robust to β, as the results did not vary for values of β ranging from 0.1 to 2.0. We use β = 0.25 in all our experiments, although in general this would depend on the scale of the reconstruction loss. Since we assume a uniform prior for z, the KL term that usually appears in the ELBO is constant w.r.t. the encoder parameters and can thus be ignored for training. In our experiments we define N discrete latents (e.g., we use a field of 32 × 32 latents for ImageNet, or 8 × 8 × 10 for CIFAR10). The resulting loss L is identical, except that we get an average over N terms for the k-means and commitment losses – one for each latent. The log-likelihood of the complete model log p(x) can be evaluated as follows:

log p(x) = log Σ_k p(x|z_k) p(z_k).

Because the decoder p(x|z) is trained with z = z_q(x) from MAP inference, the decoder should not allocate any probability mass to p(x|z) for z ≠ z_q(x) once it has fully converged. Thus, we can write log p(x) ≈ log p(x|z_q(x)) p(z_q(x)). We empirically evaluate this approximation in section 4. From Jensen's inequality, we can also write log p(x) ≥ log p(x|z_q(x)) p(z_q(x)).
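The forward value of equation 3 can be sketched in plain NumPy (names are our own). The stop-gradient operator sg[·] only changes which parameters each term updates during backpropagation, not the forward value, so it does not appear explicitly here:

```python
import numpy as np

def vqvae_loss(recon_nll, z_e, e_k, beta=0.25):
    """Forward value of L in equation 3 for a single example.

    recon_nll: -log p(x | z_q(x)) from the decoder.
    z_e:       encoder output z_e(x), shape (D,).
    e_k:       selected codebook vector, shape (D,).
    The codebook term ||sg[z_e(x)] - e||^2 updates only the embeddings,
    while the commitment term beta * ||z_e(x) - sg[e]||^2 updates only
    the encoder; their forward values coincide up to the factor beta.
    """
    sq_err = float(np.sum((z_e - e_k) ** 2))
    return recon_nll + sq_err + beta * sq_err
```

In a framework implementation the two quadratic terms would be written with explicit stop-gradient/detach calls so that each one trains the intended set of parameters.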
3.3 Prior

The prior distribution over the discrete latents p(z) is a categorical distribution, and can be made autoregressive by depending on other z in the feature map. Whilst training the VQ-VAE, the prior is kept constant and uniform. After training, we fit an autoregressive distribution over z, p(z), so that we can generate x via ancestral sampling. We use a PixelCNN over the discrete latents for images, and a WaveNet for raw audio. Training the prior and the VQ-VAE jointly, which could strengthen our results, is left as future research.

4 Experiments

4.1 Comparison with continuous variables

As a first experiment we compare the VQ-VAE with normal VAEs (with continuous variables), as well as VIMCO [28] with independent Gaussian or categorical priors. We train these models using the same standard VAE architecture on CIFAR10, while varying the latent capacity (the number of continuous or discrete latent variables, as well as the dimensionality of the discrete space K). The encoder consists of 2 strided convolutional layers with stride 2 and window size 4 × 4, followed by two residual 3 × 3 blocks (implemented as ReLU, 3×3 conv, ReLU, 1×1 conv), all having 256 hidden units. The decoder similarly has two residual 3 × 3 blocks, followed by two transposed convolutions with stride 2 and window size 4 × 4. We use the ADAM optimiser [21] with learning rate 2e-4 and evaluate the performance after 250,000 steps with batch size 128. For VIMCO we use 50 samples in the multi-sample training objective. The VAE, VQ-VAE and VIMCO models obtain 4.51 bits/dim, 4.67 bits/dim and 5.14 bits/dim respectively. All reported likelihoods are lower bounds. Our numbers for the continuous VAE are comparable to those reported for a deep convolutional VAE: 4.54 bits/dim [13] on this dataset. Our model is the first using discrete latent variables to challenge the performance of continuous VAEs.
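The bits/dim figures quoted above normalise the negative log-likelihood by the number of pixel values. A minimal conversion helper, assuming the NLL is measured in nats over 32×32×3 CIFAR10 images (the function name and the nats assumption are ours):

```python
import math

def bits_per_dim(nll_nats, num_dims=32 * 32 * 3):
    """Convert a total negative log-likelihood in nats to bits per dimension."""
    return nll_nats / (num_dims * math.log(2))
```

This is the standard normalisation that makes likelihoods comparable across image resolutions.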
Thus, we get very good reconstructions like regular VAEs provide, with the compressed representation that symbolic representations provide. A few interesting characteristics, implications and applications of the VQ-VAEs that we train are shown in the next subsections.

4.2 Images

Images contain a lot of redundant information, as most of the pixels are correlated and noisy, therefore learning models at the pixel level could be wasteful. In this experiment we show that we can model x = 128 × 128 × 3 images by compressing them to a z = 32 × 32 × 1 discrete space (with K = 512) via a purely deconvolutional p(x|z) – a reduction of (128 × 128 × 3 × 8) / (32 × 32 × 9) ≈ 42.6 in bits. We model images by learning a powerful prior (PixelCNN) over z. This allows us not only to greatly speed up training and sampling, but also to use the PixelCNN's capacity to capture the global structure instead of the low-level statistics of images.

Figure 2: Left: ImageNet 128x128x3 images, right: reconstructions from a VQ-VAE with a 32x32x1 latent space, with K=512.

Reconstructions from the 32x32x1 space with discrete latents are shown in Figure 2. Even considering that we greatly reduce the dimensionality with the discrete encoding, the reconstructions look only slightly blurrier than the originals. It would be possible to use a more perceptual loss function than MSE over pixels here (e.g., a GAN [12]), but we leave that as future work. Next, we train a PixelCNN prior on the discretised 32x32x1 latent space. As we only have 1 channel (not 3 as with colours), we only have to use spatial masking in the PixelCNN. The capacity of the PixelCNN we used was similar to those used by the authors of the PixelCNN paper [38].

Figure 3: Samples (128x128) from a VQ-VAE with a PixelCNN prior trained on ImageNet images. From left to right: kit fox, gray whale, brown bear, admiral (butterfly), coral reef, alp, microwave, pickup.
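The ≈42.6 figure above follows from a direct bit count:

```python
# 128x128x3 pixels at 8 bits each, versus a 32x32 grid of latents,
# each a log2(512) = 9-bit index into the codebook.
pixel_bits = 128 * 128 * 3 * 8   # 393216 bits per image
latent_bits = 32 * 32 * 9        # 9216 bits per latent map
ratio = pixel_bits / latent_bits # roughly 42.67x compression
```

The same counting argument applies to any (resolution, latent grid, K) combination.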
Samples drawn from the PixelCNN were mapped to pixel space with the decoder of the VQ-VAE and can be seen in Figure 3.

Figure 4: Samples (128x128) from a VQ-VAE with a PixelCNN prior trained on frames captured from DeepMind Lab.

We also repeat the same experiment for 84x84x3 frames drawn from the DeepMind Lab environment [2]. The reconstructions looked nearly identical to their originals. Samples drawn from the PixelCNN prior trained on the 21x21x1 latent space and decoded to the pixel space using a deconvolutional model decoder can be seen in Figure 4. Finally, we train a second VQ-VAE with a PixelCNN decoder on top of the 21x21x1 latent space from the first VQ-VAE on DM-Lab frames. This setup typically breaks VAEs, as they suffer from "posterior collapse", i.e., the latents are ignored as the decoder is powerful enough to model x perfectly. Our model however does not suffer from this, and the latents are meaningfully used. We use only three latent variables (each with K=512 and their own embedding space e) at the second stage for modelling the whole image, and as such the model cannot reconstruct the image perfectly – a consequence of compressing the image into 3 × 9 bits, i.e., less than a single float32. Reconstructions sampled from the discretised global code can be seen in Figure 5.

Figure 5: Top: original images, Bottom: reconstructions from a 2-stage VQ-VAE, with 3 latents to model the whole image (27 bits), and as such the model cannot reconstruct the images perfectly. The reconstructions are generated by sampling from the second PixelCNN prior in the 21x21 latent domain of the first VQ-VAE, and then decoding with the standard VQ-VAE decoder to 84x84. A lot of the original scene, including textures, room layout and nearby walls, remains, but the model does not try to store the pixel values themselves, which means the textures are generated procedurally by the PixelCNN.
Figure 6: Left: original waveform, middle: reconstructed with same speaker-id, right: reconstructed with different speaker-id. The contents of the three waveforms are the same.

4.3 Audio

In this set of experiments we evaluate the behaviour of discrete latent variables on models of raw audio. In all our audio experiments, we train a VQ-VAE that has a dilated convolutional architecture similar to the WaveNet decoder. All samples for this section can be played from the following url: https://avdnoord.github.io/homepage/vqvae/. We first consider the VCTK dataset, which has speech recordings of 109 different speakers [41]. We train a VQ-VAE where the encoder has 6 strided convolutions with stride 2 and window size 4. This yields a latent space 64× smaller than the original waveform. The latents consist of one feature map and the discrete space is 512-dimensional. The decoder is conditioned on both the latents and a one-hot embedding for the speaker. First, we ran an experiment to show that the VQ-VAE can extract a latent space that only conserves long-term relevant information. After training the model, given an audio example, we can encode it to the discrete latent representation, and reconstruct by sampling from the decoder. Because the dimensionality of the discrete representation is 64 times smaller, the original sample cannot be perfectly reconstructed sample by sample. As can be heard from the provided samples, and as shown in Figure 6, the reconstruction has the same content (same text contents), but the waveform is quite different and the prosody in the voice is altered. This means that the VQ-VAE has, without any form of linguistic supervision, learned a high-level abstract space that is invariant to low-level features and only encodes the content of the speech. This experiment confirms our observations from before that important features are often those that span many dimensions in the input data space (in this case phonemes and other high-level content in the waveform).
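The 64× reduction quoted above comes from the stride-2 stack compounding multiplicatively. A small helper makes this explicit (the 16 kHz sample rate in the comment is an illustrative assumption of ours, not stated in the text):

```python
def downsampling_factor(num_strided_layers, stride=2):
    """Overall temporal reduction of a stack of strided convolutions."""
    return stride ** num_strided_layers

# Six stride-2 convolutions: 2**6 = 64x fewer latent timesteps than
# waveform samples (e.g. 16 kHz audio would give 250 latents per second).
factor = downsampling_factor(6)
```

The later 128× configuration simply adds one more stride-2 layer by the same arithmetic.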
We then analysed unconditional samples from the model to understand its capabilities. Given the compact and abstract latent representation extracted from the audio, we trained the prior on top of this representation to model the long-term dependencies in the data. For this task we used a larger dataset of 460 speakers [30] and trained a VQ-VAE model where the resolution of the discrete space is 128 times smaller. Next we trained the prior as usual on top of this representation, on chunks of 40960 timesteps (2.56 seconds), which yields 320 latent timesteps. While samples drawn from even the best speech models like the original WaveNet [37] sound like babbling, samples from the VQ-VAE contain clear words and part-sentences (see samples linked above). We conclude that the VQ-VAE was able to model a rudimentary phoneme-level language model in a completely unsupervised fashion from raw audio waveforms. Next, we attempted speaker conversion, where the latents are extracted from one speaker and then reconstructed through the decoder using a separate speaker id. As can be heard from the samples, the synthesised speech has the same content as the original sample, but with the voice of the second speaker. This experiment again demonstrates that the encoded representation has factored out speaker-specific information: the embeddings not only have the same meaning regardless of details in the waveform, but also across different voice characteristics. Finally, in an attempt to better understand the content of the discrete codes, we compared the latents one-to-one with the ground-truth phoneme sequence (which was not used in any way to train the VQ-VAE). With a 128-dimensional discrete space that runs at 25 Hz (encoder downsampling factor of 640), we mapped each of the 128 possible latent values to one of the 41 possible phoneme values¹ (by taking the conditionally most likely phoneme).
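The latent-to-phoneme mapping described above amounts to a majority vote over co-occurrences. A toy sketch with made-up alignments (in the paper this is done over 128 latent values and 41 phonemes; names and data here are ours):

```python
from collections import Counter, defaultdict

def latent_to_phoneme_accuracy(latents, phonemes):
    """Map each latent value to its most frequently co-occurring phoneme,
    then score the induced classification on the same alignment."""
    cooccur = defaultdict(Counter)
    for z, ph in zip(latents, phonemes):
        cooccur[z][ph] += 1
    # Conditionally most likely phoneme for each latent value.
    mapping = {z: c.most_common(1)[0][0] for z, c in cooccur.items()}
    hits = sum(mapping[z] == ph for z, ph in zip(latents, phonemes))
    return mapping, hits / len(latents)
```

Comparing the resulting accuracy against the frequency of the single most common phoneme gives the random-baseline figure cited in the text.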
The accuracy of this 41-way classification was 49.3%, while a random latent space would result in an accuracy of 7.2% (the prior most likely phoneme). It is clear that these discrete latent codes obtained in a fully unsupervised way are high-level speech descriptors that are closely related to phonemes.

4.4 Video

For our final experiment we used the DeepMind Lab [2] environment to train a generative model conditioned on a given action sequence. In Figure 7 we show the initial 6 frames that are input to the model, followed by 10 frames that are sampled from the VQ-VAE with all actions set to forward (top row) and right (bottom row). Generation of the video sequence with the VQ-VAE model is done purely in the latent space, z_t, without the need to generate the actual images themselves. Each image in the sequence x_t is then created by mapping the latents to the pixel space with a deterministic decoder, after all the latents are generated using only the prior model p(z_1, . . . , z_T). Therefore, the VQ-VAE can be used to imagine long sequences purely in latent space without resorting to pixel space. It can be seen that the model has learnt to successfully generate a sequence of frames conditioned on a given action without any degradation in the visual quality, whilst keeping the local geometry correct. For completeness, we trained a model without actions and obtained similar results, not shown due to space constraints.

Figure 7: First 6 frames are provided to the model, following frames are generated conditioned on an action. Top: repeated action "move forward", bottom: repeated action "move right".

5 Conclusion

In this work we have introduced the VQ-VAE, a new family of models that combine VAEs with vector quantisation to obtain a discrete latent representation.
We have shown that VQ-VAEs are capable of modelling very long-term dependencies through their compressed discrete latent space, which we have demonstrated by generating 128 × 128 colour images, sampling action-conditional video sequences, and finally using audio, where even an unconditional model can generate surprisingly meaningful chunks of speech and perform speaker conversion. All these experiments demonstrate that the discrete latent space learnt by VQ-VAEs captures important features of the data in a completely unsupervised manner. Moreover, VQ-VAEs achieve likelihoods that are almost as good as their continuous latent variable counterparts on CIFAR10 data. We believe that this is the first discrete latent variable model that can successfully model long-range sequences and learn, in a fully unsupervised manner, high-level speech descriptors that are closely related to phonemes.

¹Note that the encoder/decoder pairs could make the meaning of every discrete latent depend on previous latents in the sequence, e.g., bi/tri-grams (and thus achieve a higher compression), which means a more advanced mapping to phonemes would result in higher accuracy.

References

[1] Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc Van Gool. Soft-to-hard vector quantization for end-to-end learned compression of images and neural networks. arXiv preprint arXiv:1704.00648, 2017.
[2] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016.
[3] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[4] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space.
arXiv preprint arXiv:1511.06349, 2015.
[5] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
[6] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.
[7] Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016.
[8] Aaron Courville, James Bergstra, and Yoshua Bengio. A spike and slab restricted boltzmann machine. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 233–241, 2011.
[9] Emily Denton, Sam Gross, and Rob Fergus. Semi-supervised learning with context-conditional generative adversarial networks. arXiv preprint arXiv:1611.06430, 2016.
[10] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
[11] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems, pages 64–72, 2016.
[12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[13] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. In Advances in Neural Information Processing Systems, pages 3549–3557, 2016.
[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
[15] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vázquez, and Aaron C. Courville. Pixelvae: A latent variable model for natural images. CoRR, abs/1611.05013, 2016.
[16] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[17] Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of domain-invariant image representations. arXiv preprint arXiv:1301.3224, 2013.
[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
[19] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
[20] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
[21] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[22] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. 2016.
[23] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[24] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[25] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[26] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. Samplernn: An unconditional end-to-end neural audio generation model. arXiv preprint arXiv:1612.07837, 2016.
[27] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[28] Andriy Mnih and Danilo Jimenez Rezende. Variational inference for monte carlo objectives. CoRR, abs/1602.06725, 2016.
[29] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[30] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 5206–5210. IEEE, 2015.
[31] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[32] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[33] Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In Artificial Intelligence and Statistics, pages 448–455, 2009.
[34] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
[35] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
[36] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. arXiv preprint arXiv:1703.00395, 2017.
[37] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu.
Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016.
[38] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016.
[39] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
[40] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
[41] Junichi Yamagishi. English multi-speaker corpus for CSTR voice cloning toolkit. URL http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html, 2012.
[42] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved variational autoencoders for text modeling using dilated convolutions. CoRR, abs/1702.08139, 2017.
State Aware Imitation Learning

Yannick Schroecker
College of Computing
Georgia Institute of Technology
yannickschroecker@gatech.edu

Charles Isbell
College of Computing
Georgia Institute of Technology
isbell@cc.gatech.edu

Abstract

Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert. It is intuitively apparent that learning to take optimal actions is a simpler undertaking in situations that are similar to the ones shown by the teacher. However, imitation learning approaches do not tend to use this insight directly. In this paper, we introduce State Aware Imitation Learning (SAIL), an imitation learning algorithm that allows an agent to learn how to remain in states where it can confidently take the correct action and how to recover if it is led astray. Key to this algorithm is a gradient learned using a temporal difference update rule which leads the agent to prefer states similar to the demonstrated states. We show that estimating a linear approximation of this gradient yields theoretical guarantees similar to those of online temporal difference learning approaches, and we empirically show that SAIL can effectively be used for imitation learning in continuous domains with non-linear function approximators used for both the policy representation and the gradient estimate.

1 Introduction

One of the foremost challenges in the field of Artificial Intelligence is to program or train an agent to act intelligently without perfect information and in arbitrary environments. Many avenues have been explored to derive such agents, but one of the most successful and practical approaches has been to learn how to imitate demonstrations provided by a human teacher. Such imitation learning approaches provide a natural way for a human expert to program agents and are often combined with other approaches, such as reinforcement learning, to narrow the search space and to help find a near-optimal solution.
Success stories are numerous in the field of robotics [3], where imitation learning has long been a subject of research, but can also be found in software domains, with recent success stories including AlphaGo [23], which learns to play the game of Go from a database of expert games before improving further, and the benchmark domain of Atari games, where imitation learning combined with reinforcement learning has been shown to significantly improve performance over pure reinforcement learning approaches [9]. Formally, we define the problem domain as a Markov decision process, i.e. by its states, actions and unknown Markovian transition probabilities p(s′|s, a) of taking action a in state s leading to state s′. Imitation learning aims to find a policy π(a|s) that dictates the action an agent should take in any state by learning from a set of demonstrated states S_D and the corresponding demonstrated actions A_D. The likely most straightforward approach to imitation learning is to employ a supervised learning algorithm, such as neural networks, in order to derive a policy, treating the demonstrated states and actions as training inputs and outputs respectively. However, while this can work well in practice and has a long history of successes, starting with, among other examples, early ventures into autonomous driving [18], it also violates a key assumption of statistical supervised learning by having past predictions affect the distribution of inputs seen in the future. It has been shown that agents trained this way have a tendency to take actions that lead them to states that are dissimilar from any encountered during training and in which the agent is less likely to have an accurate model of how to act [18, 19]. Deviations from the demonstrations based on limitations of the learning model or randomness in the domain are therefore amplified as time progresses.
Several approaches exist that are capable of addressing this problem. Interactive imitation learning methods (e.g. [5, 19, 20]) address it directly but require continued queries to the human teacher, which is often not practical. Inverse Reinforcement Learning (IRL) approaches attempt to learn the objective function that the demonstrations are optimizing and show better generalization capabilities. However, IRL approaches often require a model of the domain, can be limited by the representation of the reward function, and learn a policy only indirectly. A consequence of the latter is that small changes to the learned objective function can lead to large changes in the learned policy.

In this paper we introduce State Aware Imitation Learning (SAIL). SAIL aims to address the aforementioned problem by explicitly learning to reproduce demonstrated trajectories based on their states as well as their actions. Intuitively, if an agent trained with SAIL finds itself in a state similar to a demonstrated state, it will prefer actions that are similar to the demonstrated action, but it will also prefer to remain near demonstrated states where the trained policy is more likely to be accurate. An agent trained with SAIL will thus learn how to recover if it deviates from the demonstrated trajectories. We achieve this in a principled way by finding the maximum-a-posteriori (MAP) estimate of the complete trajectory. Thus, our objective is to find a policy, which we define to be a parametric distribution π_θ(a|s) with parameters θ. Natural choices would be linear functions or neural networks. The MAP problem is then given by

\[
\arg\max_\theta p(\theta \mid S_D, A_D) = \arg\max_\theta \big[ \log p(A_D \mid S_D, \theta) + \log p(S_D \mid \theta) + \log p(\theta) \big]. \tag{1}
\]

Note that this equation differs from the naive supervised approach, in which the second term log p(S_D|θ) is assumed to be independent of the current policy and is thus irrelevant to the optimization problem.
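To make the decomposition concrete, the objective of Equation (1) can be evaluated exactly for a small tabular MDP, where the stationary state distribution is computable by power iteration. The following is a minimal sketch, not the paper's implementation; the two-state MDP, the Gaussian prior, and all names are illustrative assumptions.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a, s'] are transition probabilities.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])

def softmax_policy(theta):
    """Tabular softmax policy pi[s, a] from logits theta[s, a]."""
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def stationary_dist(pi, iters=1000):
    """Stationary state distribution of the Markov chain induced by pi."""
    T = np.einsum('sa,sat->st', pi, P)  # T[s, s'] = sum_a pi(a|s) p(s'|s, a)
    d = np.full(len(T), 1.0 / len(T))
    for _ in range(iters):
        d = d @ T
    return d

def log_posterior(theta, S_D, A_D, prior_scale=1.0):
    """The three-term objective of Equation (1), up to an additive constant."""
    pi = softmax_policy(theta)
    d = stationary_dist(pi)
    log_actions = sum(np.log(pi[s, a]) for s, a in zip(S_D, A_D))  # log p(A_D|S_D, theta)
    log_states = sum(np.log(d[s]) for s in S_D)                    # log p(S_D|theta)
    log_prior = -0.5 * np.sum(theta ** 2) / prior_scale ** 2       # Gaussian prior (assumed)
    return log_actions + log_states + log_prior

objective = log_posterior(np.zeros((2, 2)), S_D=[0, 1, 0], A_D=[0, 1, 1])
```

In the naive supervised approach only the action-likelihood and prior terms would be kept; the log p(S_D | θ) term is the one that depends on which states the policy tends to visit.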
Maximizing this term leads to the agent actively trying to reproduce states that are similar to the ones in S_D. It seems natural that additional information about the domain is necessary in order to learn how to reach these states. In this work, we obtain this information using unsupervised interactions with the environment. We would like to stress that our approach does not require further input from the human teacher, any additional measure of optimality, or any model of the environment.

A key component of our algorithm is based on the work of Morimura et al. [15], who estimate a gradient of the distribution of states observed when following the current policy using a least squares temporal difference learning approach and use their results to derive an alternative policy gradient algorithm. We discuss their approach in detail in section 3.1 and extend the idea to an online temporal difference learning approach in section 3.2. This adaptation gives us greater flexibility in the choice of function approximator and also provides a natural way to deal with an additional constraint on the optimization problem, which we will introduce below. In section 3.3, we describe the full SAIL algorithm in detail and show that the estimated gradient can be used to derive a principled and novel imitation learning approach. We then evaluate our approach on a tabular domain in section 4.1, comparing our results to a purely supervised approach to imitation learning as well as to sample-based inverse reinforcement learning. In section 4.2 we show that SAIL can successfully be applied to learn a neural network policy in a continuous bipedal walker domain and achieves significant improvements over supervised imitation learning in this domain.

2 Related work

One of the main problems SAIL is trying to address is the problem of remaining close to states where the agent can act with high confidence.
We identify three different classes of imitation learning algorithms that address this problem either directly or indirectly, under different assumptions and with different limitations.

A specialized solution to this problem can be found in the field of robotics. Imitation learning approaches in robotics often do not aim to learn a full policy using general function approximators but instead try to predict a trajectory that the robot should follow. Trajectory representations such as Dynamic Movement Primitives [21] give the robot a sequence of states (or its derivatives) which the robot then follows using a given control law. The role of the control law is to drive the robot towards the demonstrated states, which is also a key objective of SAIL. However, this solution is highly domain specific, and a controller needs to be chosen that fits the task and the representation of the state space. It can, for example, be more challenging to use image-based state representations. For a survey of imitation learning methods applied to robotics, see [3].

The second class of algorithms is what we will call iterative imitation learning algorithms. A key characteristic of these algorithms is that the agent actively queries the expert for demonstrations in states that it sees when executing its current policy. One of the first approaches in this class is SEARN [5]. When applied to imitation learning, SEARN starts by following the expert's action at every step, then iteratively uses the demonstrations collected during the last episode to train a new policy and collects new episodes by taking actions according to a mixture of all previously trained policies and the expert's actions. Over time, SEARN learns to follow its mixture of policies and stops relying on the expert to decide which actions to take. Ross et al. [19] first proved that the pure supervised approach to imitation learning can lead to the error rate growing over time.
To alleviate this issue, they introduced a similar iterative algorithm called SMILe and proved that its error rate increases near linearly with respect to the time horizon. Building on this, Ross et al. introduced DAGGER [20]. DAGGER provides similar theoretical guarantees and empirically outperforms SMILe by augmenting a single training set during each iteration based on queries to the expert on the states seen during execution. DAGGER does not require previous policies to be stored in order to calculate a mixture. Note that while these algorithms are guaranteed to address the issue of straying too far from demonstrations, they approach the problem from a different direction. Instead of preferring states for which the agent has demonstrations, the algorithms collect more demonstrations in the states the agent actually sees during execution. This can be effective but requires additional interaction with the human teacher, which is often not cheaply available in practice.

As mentioned above, our approach also shares significant similarities with Inverse Reinforcement Learning (IRL) approaches [17]. IRL methods aim to derive a reward function for which the provided demonstrations are optimal. This reward function can then be used to compute a complete policy. Note that the IRL problem is known to be ill-posed, as a set of demonstrations can have an infinite number of corresponding reward functions. Successful approaches such as Maximum Entropy IRL (MaxEntIRL) [27] thus attempt to disambiguate between possible reward functions by reasoning explicitly about the distribution of both states and actions. In fact, Choi and Kim [4] argue that many existing IRL methods can be rewritten as finding the MAP estimate for the reward function given the provided demonstrations using different probabilistic models. This provides a direct link to our work, which maximizes the same objective but with respect to the policy as opposed to the reward function.
A significant downside of many IRL approaches is that they require a model describing the dynamics of the world. However, sample-based approaches exist. Boularias et al. [1] formulate an objective function similar to MaxEntIRL but find the optimal solution based on samples. Relative Entropy IRL (RelEntIRL) aims to find a reward function corresponding to a distribution over trajectories that matches the observed features while remaining within a relative entropy bound of the uniform distribution. While RelEntIRL can be effective, it is limited to linear reward functions. Few sample-based methods exist that are able to learn non-linear reward functions. Recently, Finn et al. proposed Guided Cost Learning [6], which optimizes an objective based on MaxEntIRL using importance sampling and iterative refinement of the sample policy. Refinement is based on optimal control with learned models and is thus best suited for domains in which such methods have been shown to work well, e.g. robotic manipulation tasks. A different direction for sample-based IRL has been proposed by Klein et al., who treat the scores of a score-based classifier trained using the provided demonstrations as a value function, i.e. the long-term expected reward, and use these values to derive a reward function. Structured Classification for IRL (SCIRL) [13] uses estimated feature expectations and linearity of the value function to derive the parameters of a linear reward function, while the more recent Cascaded Supervised IRL (CSI) [14] derives the reward function by training a Support Vector Machine based on the observed temporal differences. While non-linear classifiers could be used, the method is dependent on the interpretability of the score as a value function. Recently, Ho et al. [11] introduced an approach that aims to find a policy that implicitly maximizes a linear reward function but without the need to explicitly represent such a reward function.
Generative Adversarial Imitation Learning [10] uses a method similar to Generative Adversarial Networks [7] to extend this approach to non-linear reward functions. The resulting algorithm trains a discriminator to distinguish between demonstrated and sampled trajectories and uses the probability given by the discriminator as a reward to train a policy using reinforcement learning. The maximum likelihood approach presented here can be seen as an approximation of minimizing the KL divergence between the demonstrated states and actions and the reproduction by the learned policy. This can also be achieved by using the ratio of state-action probabilities $p_D(s, a) / \big(d^{\pi_\theta}(s)\, \pi_\theta(a \mid s)\big)$ as a reward, which is a straightforward transformation of the output of the optimal discriminator [7]. Note however that this equality only holds assuming an infinite number of demonstrations. Furthermore, note that unlike the gradient network introduced in this paper, the discriminator needs to learn about the distribution of the expert's demonstrations.

Finally, we would like to point out the similarities our work shares with meta learning techniques that learn the gradients (e.g. [12]) or determine the weight updates (e.g. [22], [8]) for a neural network. Similar to these meta learning approaches, we propose to estimate the gradient w.r.t. the policy. While a complete review of this work is beyond the scope of this paper, we believe that many of the techniques developed to address challenges in this field can be applicable to our work as well.

3 Approach

SAIL is a gradient-ascent-based algorithm for finding the MAP estimate of the policy. A significant part of estimating the gradient ∇_θ log p(θ|S_D, A_D) is estimating the gradient of the (stationary) state distribution induced by following the current policy. We write the stationary state distribution as d^{π_θ}(s), assume that the Markov chain is ergodic (i.e. the distribution exists), and review the work by Morimura et al.
[15] on estimating its gradient ∇_θ log d^{π_θ}(s) in section 3.1. We outline our own online adaptation to retrieve this estimate in section 3.2 and use it in order to derive the full SAIL gradient ∇_θ log p(θ|S_D, A_D) in section 3.3.

3.1 A temporal difference approach to estimating ∇_θ log d^π(s)

We first review the work by Morimura et al. [15], who first discovered a relationship between the gradient ∇_θ log d^{π_θ}(s) and value functions as used in the field of reinforcement learning. Morimura et al. showed that the gradient can be written recursively and decomposed into an infinite sum so that a corresponding temporal difference loss can be derived. By definition, the gradient of the stationary state distribution in a state s′ can be written in terms of prior states s and actions a:

\[
\nabla_\theta d^{\pi_\theta}(s') = \nabla_\theta \int d^{\pi_\theta}(s)\, \pi_\theta(a \mid s)\, p(s' \mid s, a)\, ds\, da. \tag{2}
\]

Using $\nabla_\theta \big(d^{\pi_\theta}(s)\, \pi_\theta(a \mid s)\, p(s' \mid s, a)\big) = p(s, a, s') \big(\nabla_\theta \log d^{\pi_\theta}(s) + \nabla_\theta \log \pi_\theta(a \mid s)\big)$ and dividing by $d^{\pi_\theta}(s')$ on both sides, we obtain

\[
0 = \int q(s, a \mid s') \big(\nabla_\theta \log d^{\pi_\theta}(s) + \nabla_\theta \log \pi_\theta(a \mid s) - \nabla_\theta \log d^{\pi_\theta}(s')\big)\, ds\, da, \tag{3}
\]

where q denotes the reverse transition probabilities. This can be seen as an expected temporal difference error over the previous state and action, where the temporal difference error is defined as

\[
\delta(s, a, s') := \nabla_\theta \log d^{\pi_\theta}(s) + \nabla_\theta \log \pi_\theta(a \mid s) - \nabla_\theta \log d^{\pi_\theta}(s'). \tag{4}
\]

In the original work, Morimura et al. derive a least squares estimator for ∇_θ log d^{π_θ}(s′) based on minimizing the expected squared temporal difference error as well as a penalty to enforce the constraint E[∇_θ log d^{π_θ}(s)] = 0, ensuring d^{π_θ} remains a proper probability distribution, and apply it to policy gradient reinforcement learning. In the following sections we formulate an online update rule to estimate the gradient, argue convergence in the linear case, and use the estimated gradient to derive a novel imitation learning algorithm.
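The recursive identity behind Equations (2)–(4) can be checked numerically on a small tabular MDP: computing ∇_θ log d^{π_θ} and ∇_θ log π_θ by finite differences, the expected temporal difference error under the reverse transition probabilities q(s, a | s′) should vanish. This sketch is illustrative only; the MDP, the softmax parametrization, and the tolerances are assumptions, not the authors' setup.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP with a tabular softmax policy.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])  # P[s, a, s']

def d_and_pi(theta):
    pi = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)
    T = np.einsum('sa,sat->st', pi, P)   # chain induced by the policy
    d = np.full(2, 0.5)
    for _ in range(500):                 # power iteration to the stationary dist.
        d = d @ T
    return d, pi

theta = np.array([[0.3, -0.2], [0.1, 0.4]])
d, pi = d_and_pi(theta)

# Gradients of log d^pi and log pi w.r.t. each theta[i, j] by central differences.
eps = 1e-5
gld = np.zeros((2, 2, 2))      # gld[s', i, j] = d/d theta[i, j] of log d(s')
glp = np.zeros((2, 2, 2, 2))   # glp[s, a, i, j] = d/d theta[i, j] of log pi(a|s)
for i in range(2):
    for j in range(2):
        tp, tm = theta.copy(), theta.copy()
        tp[i, j] += eps
        tm[i, j] -= eps
        dp, pip = d_and_pi(tp)
        dm, pim = d_and_pi(tm)
        gld[:, i, j] = (np.log(dp) - np.log(dm)) / (2 * eps)
        glp[:, :, i, j] = (np.log(pip) - np.log(pim)) / (2 * eps)

# Equation (3): the TD error of Eq. (4) vanishes in expectation under q(s, a|s').
for sp in range(2):
    q = d[:, None] * pi * P[:, :, sp] / d[sp]   # reverse probabilities q[s, a|s']
    assert abs(q.sum() - 1.0) < 1e-8            # q is a proper distribution
    expected_delta = sum(q[s, a] * (gld[s] + glp[s, a] - gld[sp])
                         for s in range(2) for a in range(2))
    assert np.abs(expected_delta).max() < 1e-4
```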
3.2 Online temporal difference learning for ∇_θ log d^π(s)

In this subsection we define the online temporal difference update rule for SAIL and show that its convergence properties are similar to the case of average reward temporal difference learning [25]. Online temporal difference learning algorithms are computationally more efficient than their least squares batch counterparts and are essential when using high-dimensional non-linear function approximations to represent the gradient. We furthermore show that online methods give us a natural way to enforce the constraint E[∇_θ log d^{π_θ}(s)] = 0. We aim to approximate ∇_θ log d^π(s) up to an unknown constant vector c and thus define our target as f*(s) := ∇_θ log d^π(s) + c. We use a temporal difference update to learn a parametric approximation f_ω(s) ≈ f*(s). The update rule based on taking action a in state s and transitioning to state s′ is given by

\[
\omega_{k+1} = \omega_k + \alpha \nabla_\omega f_\omega(s') \big(f_\omega(s) + \nabla_\theta \log \pi(a \mid s) - f_\omega(s')\big). \tag{5}
\]

Algorithm 1 State Aware Imitation Learning
1: function SAIL(ω, α_θ, α_ω, S_D, A_D)
2:   θ ← SupervisedTraining(S_D, A_D)
3:   for k ← 0..#Iterations do
4:     S_E, A_E ← CollectUnsupervisedEpisode(π_θ)
5:     ω ← ω + α_ω (1/|S_E|) Σ_{(s,a,s′) ∈ transitions(S_E, A_E)} (f_ω(s) + ∇_θ log π_θ(a|s) − f_ω(s′)) ∇_ω f_ω(s′)
6:     µ ← (1/|S_E|) Σ_{s ∈ S_E} f_ω(s)
7:     θ ← θ + α_θ (1/|S_D|) Σ_{(s,a) ∈ pairs(S_D, A_D)} (∇_θ log π_θ(a|s) + f_ω(s) − µ) + ∇_θ log p(θ)
8: return θ

Note that if f_ω converges to an approximation of f*, then due to E[∇_θ log d^{π_θ}(s)] = 0 we have ∇_θ log d^π(s) ≈ f_ω(s) − E[f_ω(s)], where the expectation can be estimated based on samples. While convergence of temporal difference methods is not guaranteed in the general case, some guarantees can be made in the case of linear function approximation f_ω(s) := ω^T φ(s) [25].
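As a concrete illustration of update rule (5) with linear features, the following sketch runs the online update on a small synthetic MDP with a tabular softmax policy and one-hot features, so that f_ω(s) is simply the row ω_s. The MDP, step size, and iteration count are arbitrary assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state, 2-action MDP with a fixed tabular softmax policy.
n_s, n_a = 3, 2
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a, s']
theta = 0.5 * rng.normal(size=(n_s, n_a))
pi = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)

def grad_log_pi(s, a):
    """Exact softmax score d log pi(a|s) / d theta, flattened to a vector."""
    g = np.zeros((n_s, n_a))
    g[s] = -pi[s]
    g[s, a] += 1.0
    return g.ravel()

# One-hot features phi(s), so f_w(s) = omega[s] (one parameter row per state).
omega = np.zeros((n_s, n_s * n_a))
alpha = 0.02
s = 0
for _ in range(20000):
    a = rng.choice(n_a, p=pi[s])
    s_next = rng.choice(n_s, p=P[s, a])
    # Update rule (5): bootstrap from the previous state, not the next one.
    delta = omega[s] + grad_log_pi(s, a) - omega[s_next]
    omega[s_next] += alpha * delta
    s = s_next

# f_w estimates grad log d^pi only up to a constant vector c; center it.
# (The paper centers with the sample mean over visited states.)
f_centered = omega - omega.mean(axis=0)
```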
We note that E[∇_θ log π(a|s)] = 0, and thus for each dimension of θ the update can be seen as a variation of average reward temporal difference learning where the scalar reward is replaced by the gradient vector ∇_θ log π(a|s) and f_ω is bootstrapped based on the previous state as opposed to the next. While the roles of current and next state in this update rule are reversed, which might suggest that updates should be done in reverse, the convergence results by Tsitsiklis and Van Roy [25] depend only on the limiting distribution of following the sample policy on the domain, which remains unchanged regardless of the ordering of updates [15]. It is therefore intuitively apparent that the convergence results still hold and that f_ω converges to an approximation of f*. We formalize this notion in Appendix A.

Introducing a discount factor. So far we related the update rule to average reward temporal difference learning, as this was a natural consequence of the assumptions we were making. However, in practice we found that a formulation analogous to discounted reward temporal difference learning may work better. While this can be seen as a biased but lower variance approximation of the average reward problem [26], a perhaps more satisfying justification can be obtained by reexamining the simplifying assumption that the sampled states are distributed according to the stationary state distribution d^{π_θ}. An alternative simplifying assumption is that the previous states are distributed according to a mixture of the starting state distribution d_0(s_{-1}) and the stationary state distribution, p(s_{-1}) = (1 − γ) d_0(s_{-1}) + γ d^π(s_{-1}) for γ ∈ [0, 1]. In this case, equation 3 has to be altered and we have

\[
0 = \int p(s, a \mid s') \big(\gamma \nabla_\theta \log d^{\pi_\theta}(s) + (1 - \gamma) \nabla_\theta \log d_0(s) + \nabla_\theta \log \pi_\theta(a \mid s) - \nabla_\theta \log d^{\pi_\theta}(s')\big)\, ds\, da.
\]
Note that ∇_θ log d_0(s) = 0, and thus we recover the discounted update rule

\[
\omega_{k+1} = \omega_k + \alpha \nabla_\omega f_\omega(s') \big(\gamma f_\omega(s) + \nabla_\theta \log \pi(a \mid s) - f_\omega(s')\big). \tag{6}
\]

3.3 State aware imitation learning

Based on this estimate of ∇_θ log d^{π_θ} we can now derive the full State Aware Imitation Learning algorithm. SAIL aims to find the full MAP estimate as defined in Equation 1 via gradient ascent. The gradient decomposes into three parts:

\[
\nabla_\theta \log p(\theta \mid S_D, A_D) = \nabla_\theta \log p(A_D \mid S_D, \theta) + \nabla_\theta \log p(S_D \mid \theta) + \nabla_\theta \log p(\theta). \tag{7}
\]

The first and last terms make up the gradient used for gradient-descent-based supervised learning and can usually be computed analytically. To estimate ∇_θ log p(S_D|θ), we disregard information about the order of states and make the simplifying assumption that all states are drawn from the stationary distribution. Under this assumption, we can estimate ∇_θ log p(S_D|θ) = Σ_{s ∈ S_D} ∇_θ log d^{π_θ}(s) based on unsupervised transition samples using the approach described in section 3.2. The full SAIL algorithm thus maintains a current policy as well as an estimate of ∇_θ log p(S_D|θ) and iteratively

1. collects unsupervised state and action samples S_E and A_E from the current policy,
2. updates the gradient estimate using Equation 5 and estimates E[f_ω(s)] using the sample mean of the unsupervised states, µ := (1/|S_E|) Σ_{s ∈ S_E} f_ω(s), or an exponentially moving sample mean,
3. updates the current policy using the estimated gradient f_ω(s) − µ as well as the analytical gradients for ∇_θ log p(θ) and ∇_θ log p(A_D|S_D, θ).

Figure 1: a) The sum of probabilities of taking the optimal action doubles over the baseline. b) The reward (±2σ) obtained after 5000 iterations of SAIL is much closer to the optimal policy.
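Step 3 above, assembling the policy update from the analytic and estimated terms, might look as follows for a tabular softmax policy. This is a sketch under stated assumptions: the f_ω values are random stand-ins for the TD estimate of section 3.2, and the Gaussian prior, demonstrations, and all constants are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_a = 4, 3
theta = rng.normal(size=(n_s, n_a))
pi = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)

# Stand-in for f_omega(s), the TD estimate of grad_theta log d^pi up to a
# constant (section 3.2); here random values purely for illustration.
f_omega = rng.normal(size=(n_s, n_s, n_a))
mu = f_omega.mean(axis=0)                  # estimate of E[f_omega(s)]

S_D, A_D = [0, 2, 1], [1, 0, 2]            # hypothetical demonstrations
grad = np.zeros_like(theta)
for s, a in zip(S_D, A_D):
    g_pi = -pi[s].copy()                   # softmax score for theta[s, :]
    g_pi[a] += 1.0
    grad[s] += g_pi                        # analytic term grad log p(a|s, theta)
    grad += f_omega[s] - mu                # state term, approx. grad log d^pi(s)
grad /= len(S_D)
grad += -theta / 10.0 ** 2                 # Gaussian prior term grad log p(theta)

alpha = 0.01
theta = theta + alpha * grad               # one SAIL gradient ascent step
```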
The SAIL gradient is given by

\[
\nabla_\theta \log p(\theta \mid S_D, A_D) = \sum_{(s,a) \in \mathrm{pairs}(S_D, A_D)} \big(f_\omega(s) - \mu + \nabla_\theta \log p(a \mid s, \theta)\big) + \nabla_\theta \log p(\theta).
\]

The full algorithm is also outlined in Algorithm 1.

4 Evaluation

We evaluate our approach on two domains. The first domain is a harder variation of the tabular racetrack domain first used in [1], with 7425 states and 5 actions. In section 4.1.1, we use this domain to show that SAIL can improve on the policy learned by a supervised baseline and learn to act in states the policy representation does not generalize to. In section 4.1.2 we evaluate the sample efficiency of an off-policy variant of SAIL. The tabular representation allows us to compare the results to RelEntIRL [1] as a baseline without restrictions arising from the chosen representation of the reward function. The second domain we use is a noisy variation of the bipedal walker domain found in OpenAI Gym [2]. We use this domain to evaluate the performance of SAIL on tasks with continuous state and action spaces, using neural networks to represent the policy as well as the gradient estimate, and compare it against the supervised baseline using the same representations.

4.1 Racetrack domain

We first evaluate SAIL on the racetrack domain. This domain is a more difficult variation of the domain used by Boularias et al. [1] and consists of a grid with 33 by 9 possible positions. Each position has 25 states associated with it, encoding the velocity (−2, −1, 0, +1, +2) in the x and y directions, which dictates the movement of the agent at each time step. The domain has 5 possible actions, allowing the agent to increase or reduce its velocity in either direction or to keep its current velocity. Randomness is introduced to the domain using the notion of a failure probability, which is set to 0.8 if the absolute velocity in either direction is 2 and 0.1 otherwise.
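The state space described above can be enumerated directly; the following small sketch does so (the encoding itself is an assumption; only the counts and the failure rule come from the text):

```python
from itertools import product

# 33 x 9 grid positions, each combined with velocities in {-2, ..., 2}^2.
positions = list(product(range(33), range(9)))
velocities = list(product(range(-2, 3), range(-2, 3)))
states = list(product(positions, velocities))
n_actions = 5  # accelerate/decelerate in either direction, or keep velocity

assert len(states) == 33 * 9 * 25 == 7425

def failure_probability(vx, vy):
    """An action fails with probability 0.8 at maximum absolute velocity, 0.1 otherwise."""
    return 0.8 if max(abs(vx), abs(vy)) == 2 else 0.1
```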
The goal of the agent is to complete a lap around the track without going off-track, which we define to be the area surrounding the track (x = 0, y = 0, x > 31 or y > 6) as well as the inner rectangle (2 < x < 31 and 2 < y < 6). Note that unlike in [1], the agent has the ability to go off-track as opposed to being constrained by a wall, and has to learn to move back on track if random chance makes it stray from it. Furthermore, the probability of going off-track is higher, as the track is more narrow in this variation of the domain. This makes the domain more challenging to learn using imitation learning alone.

Figure 2: Reward obtained using off-policy training. SAIL learns a near-optimal policy using only 1000 sample episodes. The scale is logarithmic on the x-axis after 5000 iterations (gray area).

For all our experiments, we use a set of 100 episodes collected from an oracle. To measure performance, we assign a score of −0.1 to being off-track, a score of 5 for completing the lap, and −5 for crossing the finish line the wrong way. Note that this score is not used during training but is used purely to measure performance in this evaluation. We also use this score as a reward to derive an oracle.

4.1.1 On-policy results

For our first experiment, we compare SAIL against a supervised baseline. As the oracle is deterministic and the domain is tabular, this means taking the optimal action in states encountered as part of one of the demonstrated episodes and uniformly random actions otherwise. For the evaluation of SAIL, we initialize the policy to the supervised baseline and use the algorithm to improve the policy over 5000 iterations.
At each iteration, 20 unsupervised sample episodes are collected to estimate the SAIL gradient, using plain stochastic gradient descent with a learning rate of 0.1 for the temporal difference update and RMSprop with a learning rate of 0.01 for updating the policy. Figure 1b shows that SAIL stably converges to a policy that significantly outperforms the supervised baseline. While we do not expect SAIL to act optimally in previously unseen states but instead to exhibit recovery behavior, it is interesting to measure in how many states the learned policy agrees with the optimal policy, using a soft count for each state based on the probability of the optimal action. Figure 1a shows that the number of states in which the agent takes the optimal action roughly doubles its advantage over random chance and that the learned behavior is significantly closer to the optimal policy on states seen during execution.

4.1.2 Off-policy sample efficiency

For our second experiment, we evaluate the sample efficiency of SAIL by reusing previous sample episodes. As a temporal difference method, SAIL can be adapted using any off-policy temporal difference learning technique. In this work we elected to use truncated importance weights [16] with emphatic decay [24]. We evaluate the performance of SAIL collecting one new unsupervised sample episode in each iteration, reusing the samples collected in the past 19 episodes, and compare the results against our implementation of Relative Entropy IRL [1]. We found that the importance sampling approach used by RelEntIRL makes interactions obtained by a pre-trained policy ineffective when using a tabular policy¹ and thus collect samples by taking actions uniformly at random. For comparability, we also evaluated SAIL using a fixed set of samples obtained by following a uniform policy.
In this case, we found that the temporal-difference learning can become unstable in later iterations and thus decay the learning rate by a factor of 0.995 after each iteration. We vary the number of unsupervised sample episodes and show the score achieved by the trained policy in Figure 2. The score for RelEntIRL is measured by computing the optimal policy given the learned reward function. Note that this requires a model that is not normally available. We found that in this domain, depending on the obtained samples, RelEntIRL has a tendency to learn shortcuts through the off-track area. Since small changes in the reward function can lead to large changes in the final policy, we average the results for RelEntIRL over 20 trials and bound the total score from below by the score achieved using the supervised baseline. We can see that SAIL is able to learn a near-optimal policy using a low number of sample episodes. We can furthermore see that SAIL using uniform samples is able to learn a good policy and outperform the RelEntIRL baseline reliably.

¹The original work by Boularias et al. shows that a pre-trained sample policy can be used effectively if a trajectory-based representation is used.

Figure 3: a) The bipedal walker has to traverse the plain, controlling the 4 noisy joint motors in its legs. b) Failure rate of SAIL over 1000 traversals compared to the supervised baseline. After 15000 iterations, SAIL traverses the plain far more reliably than the baseline.

4.2 Noisy bipedal walker

For our second experiment, we evaluate the performance of SAIL on a noisy variant of a two-dimensional bipedal walker domain (see Figure 3a). The goal of this domain is to learn a policy that enables the simulated robot to traverse a plain without falling.
The state space in this domain consists of 4 dimensions for the velocity in the x and y directions, the angle of the hull, and the angular velocity; 8 dimensions for the position and velocity of the 4 joints in the legs; 2 dimensions that denote whether each leg has contact with the ground; and 10 dimensions corresponding to lidar readings, telling the robot about its surroundings. The action space is 4-dimensional and consists of the torque that is to be applied to each of the 4 joints. To make the domain more challenging, we also apply additional noise to each of the torques. The noise is sampled from a normal distribution with a standard deviation of 0.1 and is kept constant for five consecutive frames at a time. The noise thus has the ability to destabilize the walker. Our goal in this experiment is to learn a continuous policy from demonstrations, mapping the state to torques and enabling the robot to traverse the plain reliably. As a demonstration, we provide a single successful crossing of the plain. The demonstration has been collected from an oracle that has been trained on the bipedal walker domain without additional noise and is therefore not optimal and prone to failure. Our main metric for success on this domain is the failure rate, i.e. the fraction of times that the robot is not able to traverse the plain due to falling to the ground. While the reward metric used in [2] is more comprehensive, as it measures speed and control cost, it cannot be expected that a pure imitation learning approach can minimize control cost when trained with an imperfect demonstration that does not achieve this goal itself. The failure rate, on the other hand, can always be minimized by aiming to reproduce a demonstration of a successful traversal as well as possible. To represent our policy, we use a single shallow neural network with one hidden layer consisting of 100 nodes with tanh activation.
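The torque-noise process described above (zero-mean Gaussian with σ = 0.1, held constant for five consecutive frames) can be sketched as follows; the function name and the frame counts other than the hold length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def torque_noise(n_frames, n_joints=4, sigma=0.1, hold=5):
    """Per-joint Gaussian noise, redrawn every `hold` frames and held
    constant in between, matching the noisy-walker description."""
    n_draws = -(-n_frames // hold)  # ceiling division
    draws = rng.normal(0.0, sigma, size=(n_draws, n_joints))
    return np.repeat(draws, hold, axis=0)[:n_frames]

noise = torque_noise(20)
```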
We train this policy using a pure supervised approach as a baseline as well as with SAIL and contrast the results. During evaluation and supervised training, the output of the neural network is taken to be the exact torques, whereas SAIL requires a probabilistic policy. Therefore, we add additional Gaussian noise, kept constant for 8 consecutive frames at a time. To train the network in a purely supervised approach, we use RMSprop over 3000 epochs with a batch size of 128 frames and a learning rate of $10^{-5}$. After the training process has converged, we found that the neural network trained with pure supervised learning fails 1650 times out of 5000 runs. To train the policy with SAIL, we first initialize it with the aforementioned supervised approach. The training is then followed up with training using the combined gradient estimated by SAIL until the failure rate stops decreasing. To represent the gradient of the logarithmic stationary distribution, we use a fully connected neural network with two hidden layers of 80 nodes each using ReLU activations. Each episode is split into mini-batches of 16 frames. The $\nabla_\theta \log d^{\pi_\theta}$-network is trained using RMSprop with a learning rate of $10^{-4}$, whereas the policy network is trained using RMSprop with a learning rate of $10^{-6}$, starting after the first 1000 episodes. As can be seen in Figure 3b, SAIL increases the success rate of 0.67 achieved by the baseline to 0.938 within 15000 iterations.

5 Conclusion

Imitation learning has long been a topic of active research. However, naive supervised learning has a tendency to lead the agent to states in which it cannot act with certainty, and alternative approaches either make additional assumptions or, in the case of IRL methods, address this problem only indirectly. In this work, we proposed a novel imitation learning algorithm that directly addresses this issue and learns a policy without relying on intermediate representations.
We showed that the algorithm can generalize well and provides stable learning progress both in domains with a finite number of discrete states and in domains with continuous state and action spaces. We believe that explicit reasoning over states can be helpful even in situations where reproducing the distribution of states will not result in a desirable policy, and see this as a promising direction for future research.

Acknowledgements

This work was supported by the Office of Naval Research under grant N000141410003.

References

[1] Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 15:1–8, 2011.
[2] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.
[3] Sonia Chernova and Andrea L. Thomaz. Robot learning from human teachers. Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(3):1–121, 2014.
[4] Jaedeug Choi and Kee-Eung Kim. MAP inference for Bayesian inverse reinforcement learning. Neural Information Processing Systems (NIPS), 2011.
[5] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning Journal (MLJ), 75(3):297–325, 2009.
[6] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. International Conference on Machine Learning (ICML), 2016.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[8] David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. arXiv preprint arXiv:1609.09106, 2016.
[9] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, and Audrunas Gruslys. Learning from Demonstrations for Real World Reinforcement Learning. arXiv preprint, page 1704.03732v1 [cs.AI], 2017. [10] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016. [11] Jonathan Ho, Jayesh Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. In International Conference on Machine Learning, pages 2760–2769, 2016. [12] Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled Neural Interfaces using Synthetic Gradients. arXiv preprint, page 1608.05343v1 [cs.LG], 2016. [13] Edouard Klein, Matthieu Geist, Bilal Piot, and Olivier Pietquin. Inverse Reinforcement Learning through Structured Classification. Neural Information Processing System (NIPS), 2012. 9 [14] Edouard Klein, Bilal Piot, Matthieu Geist, and Olivier Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), 2013. [15] Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Jan Peters, and Kenji Doya. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning. Neural computation, 22(2):342–376, 2010. [16] Remi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and Efficient Off-Policy Reinforcement Learning. In Neural Information Processing System (NIPS), 2016. [17] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2000. [18] Dean a Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Neural Information Processing System (NIPS), 1989. [19] St´ephane Ross and J. 
Andrew Bagnell. Efficient Reductions for Imitation Learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2010. [20] St´ephane Ross, Geoffrey Gordon, and J. Andrew Bagnell. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2011. [21] Stefan Schaal. Robot learning from demonstration. Neural Information Processing System (NIPS), 1997. [22] Juergen H. Schmidhuber. A self-referential Weight Matrix. International Conference on Artificial Neural Networks, 1993. [23] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Dieleman Sander, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. [24] Richard S Sutton, A Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. Journal of Machine Learning Research (JMLR), 17:1–29, 2016. [25] John N Tsitsiklis and Benjamin Van Roy. Average cost temporal-difference learning. Automatica, 35:1799– 1808, 1999. [26] John N. Tsitsiklis and Benjamin Van Roy. On average versus discounted reward temporal-difference learning. Machine Learning, 49(2-3):179–191, 2002. [27] Brian D Ziebart, Andrew Maas, J Andrew Bagnell, and Anind K Dey. Maximum Entropy Inverse Reinforcement Learning. In AAAI Conference on Artificial Intelligence (AAAI), 2007. 10 | 2017 | 32 |
Probabilistic Rule Realization and Selection Haizi Yu∗† Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 haiziyu7@illinois.edu Tianxi Li∗ Department of Statistics University of Michigan Ann Arbor, MI 48109 tianxili@umich.edu Lav R. Varshney† Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign Urbana, IL 61801 varshney@illinois.edu Abstract Abstraction and realization are bilateral processes that are key in deriving intelligence and creativity. In many domains, the two processes are approached through rules: high-level principles that reveal invariances within similar yet diverse examples. Under a probabilistic setting for discrete input spaces, we focus on the rule realization problem, which generates input sample distributions that follow the given rules. More ambitiously, we go beyond a mechanical realization that takes whatever is given, and instead ask the model to proactively select reasonable rules to realize. This goal is demanding in practice, since the initial rule set may not always be consistent and thus intelligent compromises are needed. We formulate both rule realization and selection as two strongly connected components within a single and symmetric bi-convex problem, and derive an efficient algorithm that works at large scale. Taking music compositional rules as the main example throughout the paper, we demonstrate our model's efficiency in not only music realization (composition) but also music interpretation and understanding (analysis). 1 Introduction Abstraction is a conceptual process by which high-level principles are derived from specific examples; realization, the reverse process, applies the principles to generalize [1,2]. The two, once combined, form the art and science in developing knowledge and intelligence [3, 4].
Neural networks have recently become popular in modeling the two processes, with the belief that neurons, as distributed data representations, are best organized hierarchically in a layered architecture [5,6]. Probably the most relevant such examples are auto-encoders, where the cascaded encoder and decoder respectively model abstraction and realization. From a different angle that aims for interpretability, this paper first defines a high-level data representation as a partition of the raw input space, and then formalizes abstraction and realization as bi-directional probability inferences between the raw input space and its high-level representations. While abstraction and realization are ubiquitous across knowledge domains, this paper embodies the two as theory and composition in music, and refers to music high-level representations as compositional rules. Historically, theorists [7,8] devised rules and guidelines to describe compositional regularities, resulting in music theory that serves as the formal language for speaking of music style and composers' decisions. Automatic music theorists [9–11] have also recently been developed to extract probabilistic rules in an interpretable way. Both human theorists and auto-theorists enable the teaching of music composition via rules such as avoiding parallel octaves and resolving tendency tones. So writing music, to a certain extent (e.g. realizing a part-writing exercise), becomes the process of generating "legitimate" music realizations that satisfy the given rules. This paper focuses on the realization process in music, assuming rules are given by a preceding abstraction step. There are two main challenges.
∗Equal contribution. †Supported in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM Cognitive Horizons Network.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
First, rule realization: the problem arises when one asks for efficient and diverse music generation satisfying the given rules. Depending on the rule representation (hard or probabilistic), there are search-based systems that realize hard-coded rules to produce music pieces [12,13], as well as statistical models that realize probabilistic rules to produce distributions of music pieces [9,14]. Both types of realizations typically suffer from the enormity of the sample space, a curse of input dimensionality. Second, rule selection (which is subtler): not all rules are equally important, nor are they always consistent. In some cases, a perfect and all-inclusive realization is not possible, which requires relaxing or sacrificing some rules. In other cases, composers intentionally break certain rules to establish unique styles. So the freedom and creativity in selecting the "right" rules for realization pose the challenge. The main contribution of the paper is to propose and implement a unified framework that makes reasonable rule selections and realizes them in an efficient way, tackling the two challenges in one shot. As one part of the framework, we introduce a two-step dimensionality reduction technique (a group de-overlap step followed by a screening step) to efficiently solve music rule realization. As the other part, we introduce a group-level generalization of the elastic net penalty [15] to weight the rules for a reasonable selection. The unified framework is formulated as a single bi-convex optimization problem (w.r.t. a probability variable and a weight variable) that coherently couples the two parts in a symmetric way. The symmetry is beneficial in both computation and interpretation. We run experiments on artificial rule sets to illustrate the operational characteristics of our model, and further test it on a real rule set exported from an automatic music theorist [11], demonstrating the model's selectivity in music rule realization at large scale.
Although music is the main case study in the paper, we formulate the problem in generality, so the proposed framework is domain-agnostic and applicable anywhere there are rules (i.e. abstractions) to be understood. Detailed discussion at the end of the paper demonstrates that the framework applies directly to general real-world problems beyond music. In the discussion, we also emphasize that our algorithm is non-trivial rather than a simple recombination of standard models. Therefore, the techniques introduced in this paper offer broader algorithmic takeaways and are worth studying further in the future.
2 The Formalism: Abstraction, Realization, and Rule
Abstraction and Realization We restrict our attention to raw input spaces that are discrete and finite: X = {x1, . . . , xn}, and assume the raw data is drawn from a probability distribution pX, where the subscript refers to the sample space (not a random variable). We denote a high-level representation space (of X) by a partition A (of X) and its probability distribution by pA. Partitioning the raw input space gives one way of abstracting low-level details by grouping raw data into clusters and ignoring within-cluster variations. Following this line of thought, we define an abstraction as the process (X, pX) → (A, pA) for some high-level representation A, where pA is inferred from pX by summing up the probability masses within each partition cluster. Conversely, we define a realization as the process (A, pA) → (X, pX), where pX is any probability distribution that infers pA. Probabilistic Compositional Rule To put the formalism in the context of music, we first follow the convention [9] to approach a music piece as a sequence of sonorities (a generic term for a chord) and view each moment in a composition as determining a sonority that fits the existing music context. If we let Ω be a finite collection of pitches specifying the discrete range of an instrument, e.g.
the collection of the 88 keys on a piano, then a k-part sonority (k simultaneously sounding pitches) is a point in Ω^k. So X = Ω^k is the raw input space containing all possible sonorities. Although discrete and finite, the raw input size is typically large, e.g. |X| = 88^4 considering the piano range and 4-part chorales. Therefore, theorists have invented various music parameters, such as quality and inversion, to abstract specific sonorities. In this paper, we inherit the approach in [11] to formalize a high-level representation of X by a feature-induced partition A, and call the output of the corresponding abstraction (A, pA) a probabilistic compositional rule. Probabilistic Rule System The interrelation between abstraction and realization (X, pX) ↔ (A, pA) can be formalized by a linear equation: Ap = b, where A ∈ {0, 1}^{m×n} represents a partition (Aij = 1 if and only if xj is assigned to the ith cluster in the partition), and p = pX, b = pA are the probability distributions of the raw input space and the high-level representation space, respectively. In the sequel, we represent a rule by the pair (A, b), so realizing this rule becomes solving the linear equation Ap = b. More interestingly, given a set of rules (A^(1), b^(1)), . . . , (A^(K), b^(K)), the realization of all of them involves finding a p such that A^(r)p = b^(r) for all r = 1, . . . , K. In this case, we form a probabilistic rule system by stacking all rules into one single linear system:
A = [A^(1); . . . ; A^(K)] ∈ {0, 1}^{m×n}, b = [b^(1); . . . ; b^(K)] ∈ [0, 1]^m. (1)
We call A^(r)_{i,:} p = b^(r)_i a rule component, and m_r = dim(b^(r)) the size (# of components) of a rule.
3 Unified Framework for Rule Realization and Selection
In this section, we detail a unified framework for simultaneous rule realization and selection. Recall that rules themselves can be inconsistent, e.g. rules learned from different music contexts can conflict. So given an inconsistent rule system, we can only achieve Ap ≈ b.
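To make the rule-system formalism concrete, here is a small numpy sketch (our own illustration, not the paper's code) that builds two partition-induced rules on a toy input space with n = 6, performs the abstraction b = Ap, and stacks them into the linear system of Eq. (1):

```python
import numpy as np

def partition_matrix(assignment, n_clusters):
    """A in {0,1}^{m x n}: A[i, j] = 1 iff input x_j falls in cluster i."""
    A = np.zeros((n_clusters, len(assignment)))
    A[assignment, np.arange(len(assignment))] = 1.0
    return A

n = 6
p_true = np.full(n, 1.0 / n)                    # uniform distribution on X
A1 = partition_matrix([0, 0, 0, 1, 1, 1], 2)    # rule 1: one 2-cluster partition
A2 = partition_matrix([0, 1, 0, 1, 0, 1], 2)    # rule 2: another partition
b1, b2 = A1 @ p_true, A2 @ p_true               # abstraction: sum masses per cluster

A = np.vstack([A1, A2])                         # stacked rule system, Eq. (1)
b = np.concatenate([b1, b2])
err = np.sum((A @ p_true - b) ** 2)             # realization error of p_true
```

Here p_true satisfies both rules exactly, so the realization error is zero; stacking mutually inconsistent rules would make Ap = b unsolvable, which is exactly the situation the unified framework addresses.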
To best realize the possibly inconsistent rule system, we solve for p ∈ Δ^n by minimizing the error ∥Ap − b∥_2^2 = Σ_r ∥A^(r)p − b^(r)∥_2^2, the sum of the Brier scores of the individual rules. This objective does not differentiate rules (or their components) in the rule system, which typically yields a solution that satisfies all rules approximately and achieves a small error on average. This performance, though optimal in the averaged sense, is somewhat disappointing, since most often no rule is satisfied exactly (error-free). By contrast, a human composer would typically make a clear separation: follow some rules exactly and disregard others, even at the cost of a larger realization error. The decision made on rule selection usually manifests the style of a musician and reflects a higher-level intelligence that we aim for. In this pursuit, we introduce a fine-grained set of weights w ∈ Δ^m to distinguish not only individual rules but also their components. The weights are estimates of relative importance, and are further leveraged for rule selection. This yields a weighted error, which is used herein to measure realization quality:
E(p, w; A, b) = (Ap − b)^T diag(w) (Ap − b). (2)
If we revisit the two challenges mentioned in Sec. 1, we see that under the current setting, the first challenge concerns the curse of dimensionality for p, while the second concerns the selectivity for w. We introduce two penalty terms, one each for p and w, to tackle the two challenges, and propose the following bi-convex optimization problem as the unified framework:
minimize E(p, w; A, b) + λp Pp(p) + λw Pw(w) (3)
subject to p ∈ Δ^n, w ∈ Δ^m.
Despite contrasting purposes, both penalty terms, Pp(p) and Pw(w), adopt the same high-level strategy of exploiting group structures in p and w. Regarding the curse of dimensionality, we exploit the group structure of p by grouping p_j and p_{j′} together if the jth and j′th columns of A are identical, partitioning p's coordinates into K′ groups: g′_1, . . .
, g′_{K′}, where K′ is the number of distinct columns of A. This grouping strategy uses the fact that in a simplex-constrained linear system, we cannot determine the individual p_j's within each group but only their sum. We later show (Sec. 4.1) that the resulting group structure of p is essential for dimensionality reduction (when K′ ≪ n) and has a deeper interpretation regarding abstraction levels. Regarding rule-level selectivity, we exploit the group structure of w by grouping weights together if they are associated with the same rule, partitioning w's coordinates into K groups: g_1, . . . , g_K, where K is the number of given rules. Based on the group structures of p and w, we introduce their corresponding group penalties as follows:
Pp(p) = ∥p_{g′_1}∥_1^2 + · · · + ∥p_{g′_{K′}}∥_1^2, (4)
P′_w(w) = √m_1 ∥w_{g_1}∥_2 + · · · + √m_K ∥w_{g_K}∥_2. (5)
One can see the symmetry here: group penalty (4) on p is a squared, unweighted L2,1-norm, designed to secure a unique solution that favors more randomness in p for the sake of diversity in sonority generation [9]; group penalty (5) on w is a weighted L1,2-norm (group lasso), which enables rule selection. However, there is a pitfall of the group lasso penalty when deployed in Problem (3): the problem has multiple global optima that are indefinite about the number of rules to pick (e.g. selecting one rule and selecting ten mutually consistent rules can both be optimal). To give more control over the number of selections, we finalize the penalty on w as a group elastic net that blends a group lasso penalty with a ridge penalty:
Pw(w) = α P′_w(w) + (1 − α) ∥w∥_2^2, 0 ≤ α ≤ 1, (6)
where α balances the trade-off between rule elimination (fewer rules) and selection (more rules). Model Interpretation Problem (3) is a bi-convex problem: fixing p, it is convex in w; fixing w, it is convex in p.
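The objective of Problem (3) is straightforward to evaluate once the group structures are in hand; the following sketch mirrors Eqs. (2) and (4)–(6) (the helper names are ours, and groups are passed as plain index lists for illustration):

```python
import numpy as np

def weighted_error(p, w, A, b):                       # Eq. (2)
    r = A @ p - b
    return r @ (w * r)

def P_p(p, p_groups):                                 # Eq. (4): squared l1 norm per group
    return sum(np.sum(np.abs(p[g])) ** 2 for g in p_groups)

def P_w(w, w_groups, alpha):                          # Eqs. (5)-(6): group elastic net
    lasso = sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in w_groups)
    return alpha * lasso + (1.0 - alpha) * np.sum(w ** 2)

def objective(p, w, A, b, p_groups, w_groups, lam_p, lam_w, alpha):  # Problem (3)
    return (weighted_error(p, w, A, b)
            + lam_p * P_p(p, p_groups) + lam_w * P_w(w, w_groups, alpha))

# toy instance: two one-component rules on 3 inputs; columns 0 and 1 of A are
# identical, so p's coordinates group as {0,1} and {2}
A = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
b = np.array([0.5, 0.5])
p, w = np.array([0.25, 0.25, 0.5]), np.array([0.5, 0.5])
val = objective(p, w, A, b, p_groups=[[0, 1], [2]], w_groups=[[0], [1]],
                lam_p=1.0, lam_w=1.0, alpha=1.0)
```

For this p the weighted error vanishes, so the objective reduces to the two penalties alone.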
The symmetry between the two optimization variables further gives us reciprocal interpretations of the rule realization and selection problem: given p, the music realization, we can analyze its style by computing w; given w, the music style, we can realize it by computing p, and further sample from it to obtain music that matches the style. The roles of the hyperparameters λp and (λw, α) are quite different. By setting λp sufficiently small, we secure a unique solution for the rule realization part. For the rule selection part, what is more interesting is that adjusting λw and α allows us to guide the overall composition in different directions, e.g. conservative (fewer, strictly obeyed rules) versus liberal (more, loosely obeyed rules). Model Properties We state two properties of the bi-convex problem (3) as the following theorems, whose proofs can be found in the supplementary material. Both theorems involve the notion of a group selective weight. We say w ∈ Δ^m is group selective if, for every rule in the rule set, w either drops it or selects it entirely, i.e. either w_{g_r} = 0 or w_{g_r} > 0 element-wise, for any r = 1, . . . , K. For a group selective w, we further define supp_g(w) to be the set of selected rules, i.e. supp_g(w) = {r | w_{g_r} > 0 element-wise} ⊂ {1, . . . , K}.
Theorem 1. Fix any λp > 0, α ∈ [0, 1]. Let (p⋆(λw), w⋆(λw)) be a solution path of problem (3). (1) w⋆(λw) is group selective if λw > 1/α. (2) ∥w⋆_{g_r}(λw)∥_2 → √m_r / m as λw → ∞, for r = 1, . . . , K.
Theorem 2. For λp = 0 and any λw > 0, α ∈ [0, 1], let (p⋆, w⋆) be a solution to problem (3). Define C ⊂ 2^{{1,...,K}} such that any C ∈ C is a consistent (error-free) subset of the given rule set. If supp_g(w⋆) ∈ C, then Σ_{r ∈ supp_g(w⋆)} m_r = max{ Σ_{r ∈ C} m_r | C ∈ C }.
Thm. 1 implies a useful range for the λw-solution path: if λw is too large, w⋆ will converge to a known value that always selects all the rules; if λw is too small, w⋆ can lose the guarantee of being group selective.
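The notion of a group selective weight is easy to check programmatically; this tiny helper (our own, for illustration, with groups as index lists and a tolerance for numerical zeros) implements the definition and supp_g(w):

```python
import numpy as np

def is_group_selective(w, groups, tol=1e-12):
    """True iff for every rule r, the block w_{g_r} is entirely (numerically)
    zero or entirely positive."""
    for g in groups:
        block = w[g]
        if not (np.all(block <= tol) or np.all(block > tol)):
            return False
    return True

def selected_rules(w, groups, tol=1e-12):
    """supp_g(w): indices of rules whose whole weight block is positive."""
    return [r for r, g in enumerate(groups) if np.all(w[g] > tol)]
```

A check like this is what the experiments' descending-direction termination test needs: stop decreasing λw once the computed w⋆ fails it.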
This further suggests the termination criteria used later in the experiments. Thm. 2 considers rule selection in the consistent case, where the solution selects the largest number of rule components among all consistent rule selections. Despite the condition λp = 0, in practice this theorem suggests one way of using the model with a small λp: if the primary interest is to select consistent rules, the model is guaranteed to pick as many rule components as possible (Sec. 5.1). Yet a more interesting application is to slightly compromise consistency to achieve a better selection (Sec. 5.2).
4 Alternating Solvers for Probability and Weight
It is natural to solve the bi-convex problem (3) by iteratively alternating the update of one optimization variable while fixing the other, yielding two alternating solvers.
4.1 The p-Solver: for Rule Realization
If we fix w, the optimization problem (3) boils down to:
minimize E(p, w; A, b) + λp Pp(p) (7)
subject to p ∈ Δ^n.
[Figure 1: An example of group de-overlap. The family G = {G1, G2, G3} is de-overlapped into DeO(G) = {G′_1, . . . , G′_7}, and the group assignment g(x) ∈ {0, 1}^3 is constant on each cell G′_k.]
Making a change of variable q_k = 1^T p_{g′_k} = ∥p_{g′_k}∥_1 for k = 1, . . . , K′ and letting q = (q_1, . . . , q_{K′}), problem (7) is transformed into its reduced form:
minimize E(q, w; A′, b) + λp ∥q∥_2^2 (8)
subject to q ∈ Δ^{K′},
where A′ is obtained from A by removing its duplicate columns. Problem (8) is a convex problem with a strictly convex objective, so it has a unique solution q⋆. However, the solution to the original problem (7) may not be unique: any p⋆ satisfying q⋆_k = 1^T p⋆_{g′_k} is a solution to (7). To favor a more random p (as discussed in Sec.
3), we can uniquely determine p⋆ by uniformly distributing the probability mass q_k within the group g′_k: p⋆_{g′_k} = (q_k / dim(p_{g′_k})) 1, for k = 1, . . . , K′. Dimensionality Reduction: Group De-Overlap Problem (7) is of dimension n, while its reduced form (8) is of dimension K′ (≤ n), from which we attain dimensionality reduction. In cases where K′ ≪ n, we get a huge speed-up for the p-solver; in other cases, there is still no harm in always running the p-solver on the reduced problem (8). Recall that we achieved this type of dimensionality reduction by exploiting the group structure of p purely from a computational perspective (Sec. 3). However, the resulting group structure has a deeper interpretation regarding abstraction levels, which is closely related to the concept of de-overlapping a family of groups, group de-overlap in short. (Group De-Overlap) Let G = {G_1, . . . , G_m} be a family of groups (a group is a non-empty set), and let G = ∪_{i=1}^m G_i. We introduce a group assignment function g : G → {0, 1}^m such that for any x ∈ G, g(x)_i = 1{x ∈ G_i}, and further introduce an equivalence relation ∼ on G: x ∼ x′ if g(x) = g(x′). We then define the de-overlap of G, another family of groups, as the quotient space
DeO(G) = {G′_1, . . . , G′_{m′}} := G / ∼. (9)
The idea of group de-overlap is simple (Fig. 1), and DeO(G) indeed comprises non-overlapping groups, since it is a partition of G that equals the set of equivalence classes under ∼. Now, given a set of rules (A^(1), b^(1)), . . . , (A^(K), b^(K)), we denote their corresponding high-level representation spaces by A^(1), . . . , A^(K), each of which is a partition of the raw input space X (Sec. 2). Let G = ∪_{k=1}^K A^(k); then DeO(G) is a new partition, hence a new high-level representation space, of G = X, and is the finest (possibly tied) among all the partitions A^(1), . . . , A^(K). Therefore DeO(G), as a summary of the rule system, delimits a lower bound on the level of abstraction produced by the given set of rules/abstractions.
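Computationally, group de-overlap amounts to deduplicating the columns of A, since identical columns are identical group-assignment vectors g(x_j). A minimal numpy sketch (our own helper names, not the paper's implementation):

```python
import numpy as np

def de_overlap(A):
    """Group input coordinates by identical columns of A (their group
    assignment vectors), returning the reduced matrix A' and the list of
    coordinate groups g'_1, ..., g'_{K'}."""
    cols = {}
    for j in range(A.shape[1]):
        cols.setdefault(tuple(A[:, j]), []).append(j)
    groups = list(cols.values())
    A_red = np.stack([A[:, g[0]] for g in groups], axis=1)
    return A_red, groups

def expand(q, groups, n):
    """Realize p* from q* by spreading each q_k uniformly within its group."""
    p = np.zeros(n)
    for qk, g in zip(q, groups):
        p[g] = qk / len(g)
    return p

A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 1, 1, 0],
              [1, 1, 1, 0, 1]], dtype=float)
A_red, groups = de_overlap(A)   # columns 0, 1, 4 are identical: K' = 3
q = np.array([0.6, 0.3, 0.1])
p = expand(q, groups, 5)
```

Because Ap = A′q for any p that respects the groups, one can solve the reduced problem over q and then expand uniformly to recover a valid (and maximally random) p⋆.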
What coincides with DeO(G) is the group structure of p (recall: p_j and p_{j′} are grouped together if the jth and j′th columns of A are identical), since for any x_j ∈ X, the jth column of A is precisely the group assignment vector g(x_j). Therefore, the decomposed solve step from q⋆ to p⋆ reflects the following realization chain:
{ (A^(1), p_{A^(1)}), . . . , (A^(K), p_{A^(K)}) } → (DeO(G), q⋆) → (X, pX), (10)
where the intermediate step not only computationally achieves dimensionality reduction, but also conceptually summarizes the given set of abstractions, which is then realized in the raw input space. Note that the σ-algebra of the probability space associated with (8) is precisely generated by DeO(G). When rules are inserted into a rule system sequentially (e.g. the growing rule set of an automatic music theorist), the successive solves of (8) are conducted along a σ-algebra path that forms a filtration: nested σ-algebras that lead to finer and finer delineations of the raw input space. In a pedagogical setting, the filtration reflects the iterative refinement of music composition from high-level principles that are taught step by step. Dimensionality Reduction: Screening We propose an additional technique for further dimensionality reduction when solving the reduced problem (8). The idea is to perform screening, which quickly identifies the zero components in q⋆ and removes them from the optimization problem. Leveraging DPC screening for the non-negative lasso [16], we introduce a screening strategy for solving a general simplex-constrained linear least-squares problem (problem (8) is indeed of this form):
minimize ∥Xβ − y∥_2^2, subject to β ⪰ 0, ∥β∥_1 = 1. (11)
We start with the following non-negative lasso problem, which is closely related to problem (11):
minimize φ_λ(β) := ∥Xβ − y∥_2^2 + λ∥β∥_1, subject to β ⪰ 0, (12)
and denote its solution by β⋆(λ). One can show that if ∥β⋆(λ⋆)∥_1 = 1, then β⋆(λ⋆) is a solution to problem (11).
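The relationship between (11) and (12) suggests a simple computational recipe: follow a decreasing λ-path of non-negative lasso solves, warm-starting each one, and stop once the solution's ℓ1-norm reaches 1. The sketch below substitutes a basic projected-gradient solver for the DPC/EDPP machinery of [16]; the solver, the geometric λ-schedule, and the helper names are illustrative stand-ins, and the loop assumes the limiting ℓ1-norm does exceed 1 (the failure case discussed in the text):

```python
import numpy as np

def nn_lasso(X, y, lam, beta0=None, iters=2000):
    """Projected gradient descent for min ||X b - y||^2 + lam * 1^T b, b >= 0."""
    beta = np.zeros(X.shape[1]) if beta0 is None else beta0.copy()
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ beta - y) + lam      # l1 term is linear on b >= 0
        beta = np.maximum(0.0, beta - step * grad)
    return beta

def screen_support(X, y, gamma=0.9):
    """Geometric lambda-path with warm starts; stop at the first lambda whose
    solution has l1-norm >= 1, and report the surviving coordinates."""
    lam = 2.0 * np.max(np.abs(X.T @ y))              # beta*(lam) = 0 here
    beta = np.zeros(X.shape[1])
    while beta.sum() < 1.0:                          # sum == l1-norm since beta >= 0
        lam *= gamma
        beta = nn_lasso(X, y, lam, beta0=beta)
    return np.nonzero(beta > 1e-10)[0], beta

support, beta = screen_support(np.eye(3), np.array([2.0, 0.5, 0.0]))
```

The coordinates outside `support` are the ones screening would remove from problem (11) before the final solve.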
Our screening strategy for problem (11) runs the DPC screening algorithm on the non-negative lasso problem (12), which applies a repeated screening rule (called EDPP) to solve a solution path specified by a λ-sequence: λmax = λ_0 > λ_1 > · · ·. The ℓ1-norms along the solution path are non-decreasing: 0 = ∥β⋆(λ_0)∥_1 ≤ ∥β⋆(λ_1)∥_1 ≤ · · ·. We terminate the solution path at λ_t if ∥β⋆(λ_t)∥_1 ≥ 1 and ∥β⋆(λ_{t−1})∥_1 < 1. Our goal is to use β⋆(λ_t) to predict the zero components in β⋆(λ⋆), a solution to problem (11). More specifically, we assume that the zero components in β⋆(λ_t) are also zero in β⋆(λ⋆), hence we can remove those components from β (and the corresponding columns of X) in problem (11), reducing its dimensionality. While in practice this assumption is usually true provided the solution path is fine-grained, the monotonicity of β⋆(λ)'s support along the solution path does not hold in general [17]. Nevertheless, the assumption does hold as ∥β⋆(λ_t)∥_1 → 1, since the solution path is continuous and piecewise linear [18]. Therefore, we carefully design the solution path in the hope of obtaining a β⋆(λ_t) whose ℓ1-norm is close to 1 (e.g. set λ_i = γλ_{i−1} with γ ∈ (0, 1) close to 1; more sophisticated designs such as a bisection search are possible). To remedy the (rare) situations where β⋆(λ_t) predicts some incorrect zero components of β⋆(λ⋆), one can always use the KKT conditions of problem (11) as a final check to correct the mis-predicted components [19]. Finally, note that the screening strategy may fail when the ℓ1-norms along the solution path converge to a value less than 1; in these cases we can never find a desired λ_t with ∥β⋆(λ_t)∥_1 ≥ 1. In theory, such failure can be avoided by a modified lasso problem, which in practice does not improve efficiency much (see the supplementary material).
4.2 The w-Solver: for Rule Selection
If we fix p, the optimization problem (3) boils down to:
minimize E(p, w; A, b) + λw Pw(w) (13)
subject to w ∈ Δ^m.
We solve problem (13) via ADMM [20]:
w^(k+1) = arg min_w e^T w + λw Pw(w) + (ρ/2) ∥w − z^(k) + u^(k)∥_2^2, (14)
z^(k+1) = arg min_z I_{Δ^m}(z) + (ρ/2) ∥w^(k+1) − z + u^(k)∥_2^2, (15)
u^(k+1) = u^(k) + w^(k+1) − z^(k+1). (16)
In the w-update (14), we introduce the error vector e = (Ap − b)^2 (element-wise square), and obtain a closed-form solution by a soft-thresholding procedure [21]: for r = 1, . . . , K,
w^(k+1)_{g_r} = ( 1 − λw α √m_r / ( (ρ + 2λw(1 − α)) ∥ẽ^(k)_{g_r}∥_2 ) )_+ ẽ^(k)_{g_r}, where ẽ^(k) = ( ρ(z^(k) − u^(k)) − e ) / ( ρ + 2λw(1 − α) ). (17)
In the z-update (15), we introduce the indicator function I_{Δ^m}(z) = 0 if z ∈ Δ^m and ∞ otherwise, and recognize the update as a (Euclidean) projection onto the probability simplex:
z^(k+1) = Π_{Δ^m}(w^(k+1) + u^(k)), (18)
which can be solved efficiently by a non-iterative method [22]. Given that ADMM enjoys a linear convergence rate in general [23] and the problem's dimension m ≪ n, one execution of the w-solver is cheaper than one execution of the p-solver. Indeed, the result of the w-solver can speed up the subsequent execution of the p-solver, since we can use the zero components of w⋆ to remove the corresponding rows of A, yielding additional savings in the group de-overlap of the p-solver.
[Figure 2: The λw-solution paths obtained from the two artificial rule sets ((a) Case A1, (b) Case A2; α = 0.8 in both). Each path is depicted by the trajectories of the group norms (top) and the trajectory of the weighted errors (bottom).]
5 Experiments
5.1 Artificial Rule Set
We generate two artificial rule sets, Case A1 and Case A2, both derived from the same raw input space X = {x_1, . . . , x_n} with n = 600 and comprising K = 5 rules. The rules in Case A1 are of size 80, 50, 60, 60, 60, respectively; the rules in Case A2 are of size 70, 50, 65, 65, 65, respectively.
For both cases, rules 1&2 and rules 3&4 are the only two consistent sub-rule-sets of size ≥ 2. The main difference between the two cases is that in Case A1, rules 1&2 have a combined size of 130, which is larger than that of rules 3&4, whereas in Case A2 it is the opposite. Under different settings of the hyperparameters λw and α, our model selects different rule combinations exhibiting unique "personal" styles. Tuning the blending factor α ∈ [0, 1] is relatively easy, since it is bounded and has a clear interpretation. Intuitively, if α → 0, the effect of the group lasso vanishes, yielding a solution w⋆ that is not selective; if α → 1, the group elastic net penalty reduces to the group lasso, exposing the pitfall mentioned in Sec. 3. Experiments show that if we fix a small α, the model picks either all five rules or none; if we fix a large α, the group norms associated with each rule are highly unstable as λw varies. Fortunately, in practice α has a wide middle range (typically between 0.4 and 0.9) within which all corresponding λw-solution paths look similar and perform stable rule selection. Therefore, for all experiments herein, we fix α = 0.8 and study the behavior of the corresponding λw-solution path. We show the λw-solution paths in Fig. 2. Along each path, we plot the group norms (top, one curve per rule) and the weighted errors (bottom). The former, ∥w⋆_{g_r}(λw)∥_2, describes the options for rule selection; the latter, E(p⋆(λw), w⋆(λw); A, b), describes the quality of rule realization. To produce the trajectories, we start with a moderate λw (e.g. λw = 1) and gradually increase and decrease its value to grow the curves bi-directionally. We terminate the descending direction when w⋆(λw) is no longer group selective and terminate the ascending direction when the group norms converge. Both termination criteria are suggested by Thm. 1 and work well in practice.
As λw grows, the model transitions its compositional behavior from a conservative style (sacrificing a number of rules for accuracy) towards a more liberal one (sacrificing accuracy for more rules). If we further focus on the λw values that give zero weighted error, Fig. 2a reveals rules 1&2 and Fig. 2b reveals rules 3&4, i.e. the largest consistent subset of the given rule set in both cases (Thm. 2). Finally, we mention the efficiency of our algorithm. Averaged over several runs on multiple artificial rule sets of the same size, the run-time of our solver is 27.2 ± 5.5 seconds, while that of a generic solver (CVX) is 41.4 ± 3.8 seconds. We attribute the savings to the dimensionality reduction techniques introduced in Sec. 4.1, which will be more significant at large scale.
5.2 Real Compositional Rule Set
As a real-world application, we test our unified framework on rule sets from an automatic music theorist [11]. The auto-theorist teaches people to write 4-part chorales by providing personalized rules at every stage of composition.
Table 1: Compositional rule selections
log2(λw) | selected rule set | # of rules | # of rule components
[−12, −6] | {10} | 1 | 1540
[−5, −2] | {3, 6, 10} | 3 | 1699
[−1, 0] | {3, 6, 9, 10} | 4 | 2154
1 | {3, 6, 8, 9, 10, 11, 13} | 7 | 2166
2 | {1, 3, 7, 9, 10, 11, 13} | 7 | 2312
3 | all | 16 | 2417
[Figure 3: The λw-solution path obtained from a real compositional rule set, depicted by the group-norm trajectories of rules 3, 6, 9, 10 and the others (top) and the weighted errors (bottom).]
In this experiment, we exported a set of 16 compositional rules which aim to guide a student in writing the next sonority so that it follows well from the existing music content. Each voice in a chorale is drawn from Ω = {R, G1, . . .
, C6} that includes the rest (R) and 54 pitches (G1 to C6) from the human vocal range. The resulting raw input space X = Ω⁴ consists of n = 55⁴ ≈ 10⁷ sonorities, whose distribution lives in a very high dimensional simplex. This curse of dimensionality typically prevents most generic solvers from obtaining an acceptable solution within a reasonable amount of time. We show the λw-solution path associated with this rule set in Fig. 3. Again, the general trend shows the same pattern here: the model turns to a more liberal style (more rules but less accurate) as λw increases. Along the solution path, we also observe that the consistent range (i.e. the error-free zone) is wider than that in the artificial cases. This is intuitive, since a real rule set should be largely consistent with minor contradictions; otherwise it would confuse the student and defeat its pedagogical purpose. A more interesting phenomenon occurs when the model is about to leave the error-free zone. When log2(λw) goes from 1 to 2, the combined size of the selected rules increases from 2166 to 2312, but the realization error increases only a little. Would sacrificing this tiny error be a smarter decision to make? The difference between the selected rules at these two moments shows that rules 1 and 7 were added to the selection at log2(λw) = 2, replacing rules 6 and 8. Rule 1 is about the bass line, while rule 6 is about the tenor voice. It is known in music theory that the outer voices (soprano and bass) are more characteristic and more identifiable than the inner voices (alto and tenor), which typically stay more or less stationary as background voices. So it is understandable that although larger variety in the bass increases the opportunity for inconsistency (in this case not too much), it is a more important rule to keep. Rule 7 is about the interval between soprano and tenor, while rule 8 describes a small feature between the upper two voices but does not yet have a meaning in music theory.
So unlike rule 7, which brings up the important concept of voicing (i.e. classifying a sonority into open/closed/neutral position), rule 8 could simply be a miscellaneous artifact. To conclude, in this particular example, we would argue that the rule selection that happens at log2(λw) = 2 is a better decision, in which case the model makes a good compromise on exact consistency.

To compare a selective rule realization with its non-selective counterpart [11], we plot the errors ∥A(r)p − b(r)∥2 for each rule r = 1, . . . , 16 as histograms in Fig. 4. The non-selective realization takes all rules into consideration with equal importance, which turns out to be a degenerate case along our model’s solution path for log2(λw) → ∞. This realization yields a “well-balanced” solution, but no rules are satisfied exactly. In contrast, a selective realization (e.g. log2(λw) = 1) gives near-zero errors on selected rules, producing more human-like compositional decisions.

Figure 4: Comparison between a selective rule realization (log2(λw) = 1) and its non-selective counterpart. The boldfaced x-tick labels designate the indices of the selected rules.

6 Discussion

Generality of the Framework The formalism of abstraction and realization in Sec. 2, as well as the unified framework for simultaneous rule realization and selection in Sec. 3, is general and domain-agnostic, not specific to music. The problem formulation as a bi-convex problem (3) admits numerous real-world applications that can be cast as (quasi-)linear systems, possibly equipped with some group structure. For instance, many problems in physical science involve estimating unknowns x from their observations y via a linear (or linearized) equation y = Ax [24], where a grouping of the yi’s (say, from a single sensor or sensor type) itself summarizes x as a rule/abstraction.
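A toy illustration of this sensor scenario (not the paper's algorithm): estimating a scalar unknown from two groups of direct readings, where down-weighting an inconsistent group recovers the truth while uniform weighting is biased. The data values here are invented for illustration.

```python
def weighted_ls_scalar(readings, weights):
    """Weighted least-squares estimate of a scalar x from direct
    readings y_i ~ x, i.e. minimize sum_i w_i * (y_i - x)^2."""
    num = sum(w * y for w, y in zip(weights, readings))
    den = sum(weights)
    return num / den

# Two sensor groups reading the same unknown x = 2.0:
# group A works, group B has failed and is biased high.
group_a = [2.0, 2.1, 1.9]
group_b = [5.0, 5.2]       # inconsistent with group A
readings = group_a + group_b

uniform = [1.0] * 5                      # non-selective realization
selective = [1.0, 1.0, 1.0, 0.0, 0.0]    # group B filtered out

print(weighted_ls_scalar(readings, uniform))    # biased towards group B
print(weighted_ls_scalar(readings, selective))  # close to 2.0
```

Selecting weights per group, rather than per reading, is what the group structure in the framework above formalizes.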
In general, the observations are noisy and inconsistent due to errors from the measuring devices, or even the failure of a sensor. It is then necessary to assign a different reliability weight to every individual sensor reading, and to ask for a “selective” algorithm to “realize” the readings respecting the group structure. So in cases where some devices fail and give inconsistent readings, we can run the proposed algorithm to filter them out.

Linearity versus Expressiveness The linearity with respect to p in the rule system Ap = b results directly from adopting the probability-space representation. However, this does not imply that the underlying domain (e.g. music) is as simple as linear. In fact, the abstraction process can be highly nonlinear, involving hierarchical partitioning of the input space [11]. So, instead of running the risk of losing expressiveness, the linear equation Ap = b hides the model complexity in the A matrix. On the other hand, the linearity with respect to w in the bi-convex objective (3) is a design choice. We start with a simple linear model to represent relative importance for the sake of interpretability, which may sacrifice the model’s expressiveness like other classic linear models. To push the boundary of this trade-off in the future, we will pursue more expressive models without compromising (practically important) interpretability.

Differences from (Group) Lasso Component-wise, both subproblems (7) and (13) of the unified framework look similar to regular feature selection settings such as the lasso [25] and group lasso [26]. However, not only does the strong coupling between the two subproblems exhibit new properties (Thm. 1 and 2), but the differences in the formulation also present unique algorithmic challenges. First, the weighted error term (2) in the objective is in stark contrast with the regular regression formulation, where the (group) lasso is paired with least-squares or other similar loss functions.
Whereas dropping features in a regression model typically increases training loss (under-fitting), dropping rules, on the contrary, helps drive the error to zero, since a smaller rule set is more likely to achieve consensus. Hence, the tendency to drop rules in a regular (group) lasso works against the desired pursuit of a largest consistent rule set. This stresses the necessity of a more carefully designed penalty like our proposed group elastic net. Second, the additional simplex constraint weakens the grouping property of the group lasso: failures in group selection (i.e. there exists a rule that is not entirely selected) are observed for small λws. The simplex constraint, effectively an ℓ1 constraint, also incurs an “ℓ1 cancellation”, which nullifies a simple lasso (also an ℓ1) on a simple parameterization of the rules (one weight per rule). These differences produce new model behaviors and deserve further study.

Local Convergence We solve the bi-convex problem (3) via alternating minimization, in which the algorithm decreases the non-negative objective in every iteration, thus ensuring convergence of the objective. Nevertheless, neither a global optimum nor convergence of the solution can be guaranteed. The former leaves the local convergence susceptible to different initializations, demanding further improvements through techniques such as random starts and noisy updates. The latter leaves the possibility for the optimization variables to enter a limit cycle. However, we consider this an advantage, especially in music, where one prefers multiple realizations and interpretations that are equally optimal.

More Microscopic Views The weighting scheme in this paper presents the rule selection problem in its most general setting, where a different weight is assigned to every rule component. Hence, we can study the relative importance not only between rules, via the group norms ∥wgr∥2, but also within every single rule. The former compares compositional rules at a macroscopic level, e.g.
restricting to a diatonic scale is more important than avoiding parallel octaves; the latter at a microscopic level, e.g. changing the probability mass within a diatonic scale creates variety in modes: think about C major versus A minor. We can further study the rule system microscopically by sharing weights of the same component across different rules, yielding an overlapping group elastic net.

References

[1] K. Lewin, Field Theory in Social Science. Harpers, 1951.
[2] J. Skorstad, D. Gentner, and D. Medin, “Abstraction processes during concept learning: A structural view,” in Proc. 10th Annu. Conf. Cognitive Sci. Soc., 1988, pp. 419–425.
[3] K. Haase, “Discovery systems: From AM to CYRANO,” MIT AI Lab Working Paper 293, 1987.
[4] A. M. Barry, Visual Intelligence: Perception, Image, and Manipulation in Visual Communication. SUNY Press, 1997.
[5] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, 2013.
[6] Y. Bengio, “Deep learning of representations: Looking forward,” in Proc. Int. Conf. Stat. Lang. and Speech Process., 2013, pp. 1–37.
[7] J. J. Fux, Gradus ad Parnassum. Johann Peter van Ghelen, 1725.
[8] H. Schenker, Kontrapunkt. Universal-Edition A.G., 1922.
[9] H. Yu, L. R. Varshney, G. E. Garnett, and R. Kumar, “MUS-ROVER: A self-learning system for musical compositional rules,” in Proc. 4th Int. Workshop Music. Metacreation (MUME 2016), 2016.
[10] ——, “Learning interpretable musical compositional rules and traces,” in Proc. 2016 ICML Workshop Hum. Interpret. Mach. Learn. (WHI 2016), 2016.
[11] H. Yu and L. R. Varshney, “Towards deep interpretability (MUS-ROVER II): Learning hierarchical representations of tonal music,” in Proc. 5th Int. Conf. Learn. Represent. (ICLR 2017), 2017.
[12] D. Cope, “An expert system for computer-assisted composition,” Comput. Music J., vol. 11, no. 4, pp. 30–46, 1987.
[13] K. Ebcioğlu, “An expert system for harmonizing four-part chorales,” Comput. Music J., vol. 12, no. 3, pp. 43–51, 1988.
[14] J. R. Pierce and M. E. Shannon, “Composing music by a stochastic process,” Bell Telephone Laboratories, Technical Memorandum MM-49-150-29, Nov. 1949.
[15] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” J. R. Stat. Soc. Ser. B. Methodol., vol. 67, no. 2, pp. 301–320, 2005.
[16] J. Wang and J. Ye, “Two-layer feature reduction for sparse-group lasso via decomposition of convex sets,” in Proc. 28th Annu. Conf. Neural Inf. Process. Syst. (NIPS), 2014, pp. 2132–2140.
[17] T. Hastie, J. Taylor, R. Tibshirani, and G. Walther, “Forward stagewise regression and the monotone lasso,” Electron. J. Stat., vol. 1, pp. 1–29, 2007.
[18] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least angle regression,” Ann. Stat., vol. 32, no. 2, pp. 407–499, 2004.
[19] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani, “Strong rules for discarding predictors in lasso-type problems,” J. R. Stat. Soc. Ser. B. Methodol., vol. 74, no. 2, pp. 245–266, 2012.
[20] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.
[21] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” J. R. Stat. Soc. Ser. B. Methodol., vol. 68, no. 1, pp. 49–67, 2006.
[22] W. Wang and M. A. Carreira-Perpiñán, “Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application,” arXiv:1309.1541 [cs.LG], 2013.
[23] M. Hong and Z.-Q. Luo, “On the linear convergence of the alternating direction method of multipliers,” Math. Program., pp. 1–35, 2012.
[24] D. D. Jackson, “Interpretation of inaccurate, insufficient and inconsistent data,” Geophys. J. Int., vol. 28, no. 2, pp. 97–109, 1972.
[25] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. R. Stat. Soc. Ser. B. Methodol., pp. 267–288, 1996.
[26] J. Friedman, T. Hastie, and R. Tibshirani, “A note on the group lasso and a sparse group lasso,” 2010.
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning

Marco Fraccaro†∗ Simon Kamronn†∗ Ulrich Paquet‡ Ole Winther†
† Technical University of Denmark ‡ DeepMind

Abstract

This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object’s representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing data imputation tasks.

1 Introduction

From the earliest stages of childhood, humans learn to represent high-dimensional sensory input to make temporal predictions. From the visual image of a moving tennis ball, we can imagine its trajectory, and prepare ourselves in advance to catch it. Although the act of recognising the tennis ball is seemingly independent of our intuition of Newtonian dynamics [31], very little of this assumption has yet been captured in the end-to-end models that presently mark the path towards artificial general intelligence. Instead of basing inference on any abstract grasp of dynamics that is learned from experience, current successes are autoregressive: to imagine the tennis ball’s trajectory, one forward-generates a frame-by-frame rendering of the full sensory input [5, 7, 23, 24, 29, 30].
To disentangle two latent representations, an object’s and that of its dynamics, this paper introduces Kalman variational auto-encoders (KVAEs), a model that separates an intuition of dynamics from an object recognition network (section 3). At each time step t, a variational auto-encoder [18, 25] compresses high-dimensional visual stimuli xt into latent encodings at. The temporal dynamics in the learned at-manifold are modelled with a linear Gaussian state space model that is adapted to handle complex dynamics (despite the linear relations among its states zt). The parameters of the state space model are adapted at each time step, and non-linearly depend on past at’s via a recurrent neural network. Exact posterior inference for the linear Gaussian state space model can be performed with the Kalman filtering and smoothing algorithms, and is used for imputing missing data, for instance when we imagine the trajectory of a bouncing ball after observing it in initial and final video frames (section 4). The separation between recognition and dynamics model allows for missing data imputation to be done via a combination of the latent states zt of the model and its encodings at only, without having to forward-sample high-dimensional images xt in an autoregressive way. KVAEs are tested on videos of a variety of simulated physical systems in section 5: from raw visual stimuli, it learns “end-to-end” the interplay between the recognition and dynamics components. As KVAEs can do smoothing, they outperform an array of methods in generative and missing data imputation tasks (section 5).

∗Equal contribution.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Background

Linear Gaussian state space models. Linear Gaussian state space models (LGSSMs) are widely used to model sequences of vectors a = a1:T = [a1, .., aT].
LGSSMs model temporal correlations through a first-order Markov process on latent states z = [z1, .., zT], which are potentially further controlled with external inputs u = [u1, .., uT], through the Gaussian distributions

pγt(zt|zt−1, ut) = N(zt; Atzt−1 + Btut, Q),  pγt(at|zt) = N(at; Ctzt, R). (1)

Matrices γt = [At, Bt, Ct] are the state transition, control and emission matrices at time t. Q and R are the covariance matrices of the process and measurement noise respectively. With a starting state z1 ∼ N(z1; 0, Σ), the joint probability distribution of the LGSSM is given by

pγ(a, z|u) = pγ(a|z) pγ(z|u) = ∏_{t=1}^T pγt(at|zt) · p(z1) ∏_{t=2}^T pγt(zt|zt−1, ut), (2)

where γ = [γ1, .., γT]. LGSSMs have very appealing properties that we wish to exploit: the filtered and smoothed posteriors p(zt|a1:t, u1:t) and p(zt|a, u) can be computed exactly with the classical Kalman filter and smoother algorithms, and provide a natural way to handle missing data.

Variational auto-encoders. A variational auto-encoder (VAE) [18, 25] defines a deep generative model pθ(xt, at) = pθ(xt|at)p(at) for data xt by introducing a latent encoding at. Given a likelihood pθ(xt|at) and a typically Gaussian prior p(at), the posterior pθ(at|xt) represents a stochastic map from xt to at’s manifold. As this posterior is commonly analytically intractable, VAEs approximate it with a variational distribution qφ(at|xt) that is parameterized by φ. The approximation qφ is commonly called the recognition, encoding, or inference network.

3 Kalman Variational Auto-Encoders

The useful information that describes the movement and interplay of objects in a video typically lies in a manifold that has a smaller dimension than the number of pixels in each frame. In a video of a ball bouncing in a box, like Atari’s game Pong, one could define a one-to-one mapping from each of the high-dimensional frames x = [x1, .., xT] into a two-dimensional latent space that represents the position of the ball on the screen.
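The exact filtering recursion in (1) can be illustrated with a minimal scalar Kalman filter, in which all matrices reduce to scalars A, C and variances Q, R. The control input ut is omitted for brevity, and the prior (mu0, P0) is treated as the state at t = 0, so a predict step is applied before every update; this is a sketch, not the paper's implementation.

```python
def kalman_filter_1d(a_seq, A, C, Q, R, mu0=0.0, P0=1.0):
    """Scalar Kalman filter for z_t = A z_{t-1} + q, a_t = C z_t + r.
    Returns the means and variances of the filtered posteriors
    p(z_t | a_{1:t})."""
    mu, P = mu0, P0
    means, variances = [], []
    for a in a_seq:
        # Predict step: p(z_t | a_{1:t-1}).
        mu_pred = A * mu
        P_pred = A * P * A + Q
        # Update step: condition on the observation a_t.
        S = C * P_pred * C + R          # innovation variance
        K = P_pred * C / S              # Kalman gain
        mu = mu_pred + K * (a - C * mu_pred)
        P = (1.0 - K * C) * P_pred
        means.append(mu)
        variances.append(P)
    return means, variances

# A slowly drifting observation sequence; the filtered mean tracks it.
means, variances = kalman_filter_1d([1.0, 1.2, 1.4], A=1.0, C=1.0, Q=0.1, R=0.5)
```

The smoother adds a backwards pass over these filtered quantities; both passes are what the KVAE reuses for exact inference over z.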
If the position was known for consecutive time steps, for a set of videos, we could learn the temporal dynamics that govern the environment. From a few new positions one might then infer where the ball will be on the screen in the future, and then imagine the environment with the ball in that position.

Figure 1: A KVAE is formed by stacking a LGSSM (dashed blue), and a VAE (dashed red). Shaded nodes denote observed variables. Solid arrows represent the generative model (with parameters θ) while dashed arrows represent the VAE inference network (with parameters φ).

The Kalman variational auto-encoder (KVAE) is based on the notion described above. To disentangle recognition and spatial representation, a sensory input xt is mapped to at (VAE), a variable on a low-dimensional manifold that encodes an object’s position and other visual properties. In turn, at is used as a pseudo-observation for the dynamics model (LGSSM). xt represents a frame of a video² x = [x1, .., xT] of length T. Each frame is encoded into a point at on a low-dimensional manifold, so that the KVAE contains T separate VAEs that share the same decoder pθ(xt|at) and encoder qφ(at|xt), and depend on each other through a time-dependent prior over a = [a1, .., aT]. This is illustrated in figure 1.

3.1 Generative model

We assume that a acts as a latent representation of the whole video, so that the generative model of a sequence factorizes as pθ(x|a) = ∏_{t=1}^T pθ(xt|at). In this paper pθ(xt|at) is a deep neural network parameterized by θ that emits either a factorized Gaussian or Bernoulli probability vector depending on the data type of xt.

²While our main focus in this paper is videos, the same ideas could be applied more generally to any sequence of high-dimensional data.
We model a with a LGSSM, and following (2), its prior distribution is

pγ(a|u) = ∫ pγ(a|z) pγ(z|u) dz, (3)

so that the joint density for the KVAE factorizes as p(x, a, z|u) = pθ(x|a) pγ(a|z) pγ(z|u). A LGSSM forms a convenient backbone to a model, as the filtered and smoothed distributions pγ(zt|a1:t, u1:t) and pγ(zt|a, u) can be obtained exactly. Temporal reasoning can be done in the latent space of zt’s and via the latent encodings a, and we can do long-term predictions without having to auto-regressively generate high-dimensional images xt. Given a few frames, and hence their encodings, one could “remain in latent space” and use the smoothed distributions to impute missing frames. Another advantage of using a to separate the dynamics model from x can be seen by considering the emission matrix Ct. Inference in the LGSSM requires matrix inverses, and using it as a model for the prior dynamics of at allows the size of Ct to remain small, and not scale with the number of pixels in xt. While the LGSSM’s process and measurement noise in (1) are typically formulated with full covariance matrices [26], we will consider them as isotropic in a KVAE, as the at’s act as a prior in a generative model that includes these extra degrees of freedom.

What happens when a ball bounces against a wall, and the dynamics on at are not linear any more? Can we still retain a LGSSM backbone? We will incorporate nonlinearities into the LGSSM by regulating γt from outside the exact forward-backward inference chain. We revisit this central idea at length in section 3.3.

3.2 Learning and inference for the KVAE

We learn θ and γ from a set of example sequences {x(n)} by maximizing the sum of their respective log likelihoods L = Σ_n log pθγ(x(n)|u(n)) as a function of θ and γ. For simplicity in the exposition we restrict our discussion below to one sequence, and omit the sequence index n.
The log likelihood or evidence is an intractable average over all plausible settings of a and z, and exists as the denominator in Bayes’ theorem when inferring the posterior p(a, z|x, u). A more tractable approach to both learning and inference is to introduce a variational distribution q(a, z|x, u) that approximates the posterior. The evidence lower bound (ELBO) F is

log p(x|u) = log ∫ p(x, a, z|u) da dz ≥ E_{q(a,z|x,u)}[ log (pθ(x|a) pγ(a|z) pγ(z|u) / q(a, z|x, u)) ] = F(θ, γ, φ), (4)

and a sum of F’s is maximized instead of a sum of log likelihoods. The variational distribution q depends on φ, but for the bound to be tight we should specify q to be equal to the posterior distribution that only depends on θ and γ. Towards this aim we structure q so that it incorporates the exact conditional posterior pγ(z|a, u), which we obtain with Kalman smoothing, as one of its factors:

q(a, z|x, u) = qφ(a|x) pγ(z|a, u) = ∏_{t=1}^T qφ(at|xt) · pγ(z|a, u). (5)

The benefit of the LGSSM backbone is now apparent. We use a “recognition model” to encode each xt using a non-linear function, after which exact smoothing is possible. In this paper qφ(at|xt) is a deep neural network that maps xt to the mean and the diagonal covariance of a Gaussian distribution. As explained in section 4, this factorization allows us to deal with missing data in a principled way. Using (5), the ELBO in (4) becomes

F(θ, γ, φ) = E_{qφ(a|x)}[ log (pθ(x|a) / qφ(a|x)) + E_{pγ(z|a,u)}[ log (pγ(a|z) pγ(z|u) / pγ(z|a, u)) ] ]. (6)

The lower bound in (6) can be estimated using Monte Carlo integration with samples {ã⁽ⁱ⁾, z̃⁽ⁱ⁾}, i = 1, .., I, drawn from q,

F̂(θ, γ, φ) = (1/I) Σ_i [ log pθ(x|ã⁽ⁱ⁾) + log pγ(ã⁽ⁱ⁾, z̃⁽ⁱ⁾|u) − log qφ(ã⁽ⁱ⁾|x) − log pγ(z̃⁽ⁱ⁾|ã⁽ⁱ⁾, u) ]. (7)

Note that the ratio pγ(ã⁽ⁱ⁾, z̃⁽ⁱ⁾|u)/pγ(z̃⁽ⁱ⁾|ã⁽ⁱ⁾, u) in (7) gives pγ(ã⁽ⁱ⁾|u), but the formulation with {z̃⁽ⁱ⁾} allows stochastic gradients on γ to also be computed. A sample from q can be obtained by first sampling ã ∼ qφ(a|x), and using ã as an observation for the LGSSM.
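The tightness claim above (the bound is exact when q equals the posterior) can be checked on a toy conjugate Gaussian model, not the KVAE itself: with p(a) = N(0, 1) and p(x|a) = N(a, 1), the evidence and posterior are available in closed form, and every Monte Carlo sample of log p(x|a) + log p(a) − log q(a|x) equals log p(x).

```python
import math
import random

def log_normal(v, mean, var):
    """Log density of N(mean, var) at v."""
    return -0.5 * (math.log(2 * math.pi * var) + (v - mean) ** 2 / var)

# Toy conjugate model: p(a) = N(0, 1), p(x|a) = N(a, 1).
# Marginal likelihood: p(x) = N(0, 2); posterior: p(a|x) = N(x/2, 1/2).
x = 0.8
log_evidence = log_normal(x, 0.0, 2.0)

random.seed(0)
# Monte Carlo ELBO with q equal to the exact posterior: each sample of
# log p(x|a) + log p(a) - log q(a|x) equals log p(x), so the estimate
# is exact with zero variance, i.e. the bound is tight.
samples = [random.gauss(x / 2.0, math.sqrt(0.5)) for _ in range(100)]
elbo = sum(log_normal(x, a, 1.0) + log_normal(a, 0.0, 1.0)
           - log_normal(a, x / 2.0, 0.5) for a in samples) / len(samples)

print(abs(elbo - log_evidence))  # ~0 up to floating point error
```

With a mismatched q the same estimator would fall strictly below log p(x) on average, which is what maximizing F over φ corrects.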
The posterior pγ(z|ã, u) can be tractably obtained with a Kalman smoother, and a sample z̃ ∼ pγ(z|ã, u) obtained from it. Parameter learning is done by jointly updating θ, φ, and γ by maximising the ELBO on L, which decomposes as a sum of ELBOs in (6), using stochastic gradient ascent and a single sample to approximate the intractable expectations.

3.3 Dynamics parameter network

The LGSSM provides a tractable way to structure pγ(z|a, u) into the variational approximation in (5). However, even in the simple case of a ball bouncing against a wall, the dynamics on at are not linear anymore. We can deal with these situations while preserving the linear dependency between consecutive states in the LGSSM, by non-linearly changing the parameters γt of the model over time as a function of the latent encodings up to time t − 1 (so that we can still define a generative model). Smoothing is still possible as the state transition matrix At and the other matrices in γt do not have to be constant in order to obtain the exact posterior pγ(zt|a, u). Recall that γt describes how the latent state zt−1 changes from time t − 1 to time t. In the more general setting, the changes in dynamics at time t may depend on the history of the system, encoded in a1:t−1 and possibly a starting code a0 that can be learned from data. If, for instance, we see the ball colliding with a wall at time t − 1, then we know that it will bounce at time t and change direction. We then let γt be a learnable function of a0:t−1, so that the prior in (2) becomes

pγ(a, z|u) = ∏_{t=1}^T pγt(a0:t−1)(at|zt) · p(z1) ∏_{t=2}^T pγt(a0:t−1)(zt|zt−1, ut). (8)

Figure 2: Dynamics parameter network for the KVAE. During inference, after all the frames are encoded in a, the dynamics parameter network returns γ = γ(a), the parameters of the LGSSM at all time steps.
We can now use the Kalman smoothing algorithm to find the exact conditional posterior over z, that will be used when computing the gradients of the ELBO. In our experiments the dependence of γt on a0:t−1 is modulated by a dynamics parameter network αt = αt(a0:t−1), implemented with a recurrent neural network with LSTM cells that takes at each time step the encoded state as input and recurses dt = LSTM(at−1, dt−1) and αt = softmax(dt), as illustrated in figure 2. The output of the dynamics parameter network is a set of weights that sum to one, Σ_{k=1}^K α_t^{(k)}(a0:t−1) = 1. These weights choose and interpolate between K different operating modes:

At = Σ_{k=1}^K α_t^{(k)}(a0:t−1) A^{(k)},  Bt = Σ_{k=1}^K α_t^{(k)}(a0:t−1) B^{(k)},  Ct = Σ_{k=1}^K α_t^{(k)}(a0:t−1) C^{(k)}. (9)

We globally learn K basic state transition, control and emission matrices A^{(k)}, B^{(k)} and C^{(k)}, and interpolate them based on information from the VAE encodings. The weighted sum can be interpreted as a soft mixture of K different LGSSMs whose time-invariant matrices are combined using the time-varying weights αt. In practice, each of the K sets {A^{(k)}, B^{(k)}, C^{(k)}} models different dynamics, which will dominate when the corresponding α_t^{(k)} is high. The dynamics parameter network resembles the locally-linear transitions of [16, 33]; see section 6 for an in-depth discussion on the differences.

4 Missing data imputation

Let xobs be an observed subset of frames in a video sequence, for instance depicting the initial movement and final positions of a ball in a scene. From its start and end, can we imagine how the ball reaches its final position? Autoregressive models like recurrent neural networks can only forward-generate xt frame by frame, and cannot make use of the information coming from the final frames in the sequence. To impute the unobserved frames xun in the middle of the sequence, we need to do inference, not prediction.
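The mode-mixing step in (9) of the previous section can be sketched with plain Python lists; the LSTM is replaced here by a fixed logit vector, and the two basis matrices are invented toy values, not learned ones.

```python
import math

def softmax(d):
    """Numerically stable softmax over a list of logits."""
    m = max(d)
    e = [math.exp(v - m) for v in d]
    s = sum(e)
    return [v / s for v in e]

def mix_matrices(alpha, mats):
    """Interpolate K basis matrices with weights alpha, as in (9):
    A_t = sum_k alpha_k * A^(k)."""
    rows, cols = len(mats[0]), len(mats[0][0])
    return [[sum(a * M[i][j] for a, M in zip(alpha, mats))
             for j in range(cols)] for i in range(rows)]

# K = 2 toy transition matrices on a (position, velocity) state:
# constant velocity vs. a bounce (velocity reversed). Illustrative only.
A1 = [[1.0, 1.0], [0.0, 1.0]]
A2 = [[1.0, -1.0], [0.0, -1.0]]

alpha = softmax([2.0, 0.0])     # stand-in for the LSTM output at one step
A_t = mix_matrices(alpha, [A1, A2])
print(sum(alpha))  # weights sum to one (up to floating point)
```

Because At stays a single matrix per time step, the Kalman recursions are unchanged; the nonlinearity lives entirely in how alpha is produced from a0:t−1.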
The KVAE exploits the smoothing abilities of its LGSSM to use both the information from the past and the future when imputing missing data. In general, if x = {xobs, xun}, the unobserved frames in xun could also appear at non-contiguous time steps, e.g. missing at random. Data can be imputed by sampling from the joint density p(aun, aobs, z|xobs, u), and then generating xun from aun. We factorize this distribution as

p(aun, aobs, z|xobs, u) = pγ(aun|z) pγ(z|aobs, u) p(aobs|xobs), (10)

and we sample from it with ancestral sampling starting from xobs. Reading (10) from right to left, a sample from p(aobs|xobs) can be approximated with the variational distribution qφ(aobs|xobs). Then, if γ is fully known, pγ(z|aobs, u) is computed with an extension of the Kalman smoothing algorithm to sequences with missing data, after which samples from pγ(aun|z) could be readily drawn. However, when doing missing data imputation the parameters γ of the LGSSM are not known at all time steps. In the KVAE, each γt depends on all the previous encoded states, including aun, and these need to be estimated before γ can be computed. In this paper we recursively estimate γ in the following way. Assume that x1:t−1 is known, but not xt. We sample a1:t−1 from qφ(a1:t−1|x1:t−1) using the VAE, and use it to compute γ1:t. The computation of γt+1 depends on at, which is missing, and an estimate ât will be used. Such an estimate can be arrived at in two steps. The filtered posterior distribution pγ(zt−1|a1:t−1, u1:t−1) can be computed as it depends only on γ1:t−1, and from it, we sample

ẑt ∼ pγ(zt|a1:t−1, u1:t) = ∫ pγt(zt|zt−1, ut) pγ(zt−1|a1:t−1, u1:t−1) dzt−1 (11)

and sample ât from the predictive distribution of at,

ât ∼ pγ(at|a1:t−1, u1:t) = ∫ pγt(at|zt) pγ(zt|a1:t−1, u1:t) dzt ≈ pγt(at|ẑt). (12)

The parameters of the LGSSM at time t + 1 are then estimated as γt+1([a0:t−1, ât]). The same procedure is repeated at the next time step if xt+1 is missing, otherwise at+1 is drawn from the VAE.
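The two-step estimate in (11)–(12) reduces, in a scalar LGSSM, to one predict step followed by one emission step. The sketch below uses predictive means instead of samples so that it is deterministic; the parameter values are illustrative.

```python
def predict_missing_encoding(mu_prev, P_prev, A, C, Q, R):
    """One step of the imputation recursion (11)-(12) in a scalar
    LGSSM: propagate the filtered state at t-1 through the dynamics,
    then through the emission model, to estimate the missing a_t.
    Means are used instead of samples for a deterministic sketch."""
    mu_z = A * mu_prev              # mean of p(z_t | a_{1:t-1})
    P_z = A * P_prev * A + Q
    a_hat = C * mu_z                # predicted pseudo-observation
    var_a = C * P_z * C + R
    return mu_z, P_z, a_hat, var_a

# Filtered state at t-1, e.g. from a Kalman filter over observed frames.
mu_z, P_z, a_hat, var_a = predict_missing_encoding(
    mu_prev=0.9, P_prev=0.2, A=1.0, C=1.0, Q=0.1, R=0.5)
print(a_hat)   # 0.9: the missing encoding is replaced by its prediction
```

In the full model the estimate ât is additionally fed back into the dynamics parameter network to obtain γt+1, which is what couples the recursion across time steps.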
After the forward pass through the sequence, where we estimate γ and compute the filtered posterior for z, the Kalman smoother’s backwards pass computes the smoothed posterior. While the smoothed posterior distribution is not exact, as it relies on the estimate of γ obtained during the forward pass, it improves data imputation by using information coming from the whole sequence; see section 5 for an experimental illustration.

5 Experiments

We motivated the KVAE with an example of a bouncing ball, and use it here to demonstrate the model’s ability to separately learn a recognition and dynamics model from video, and use it to impute missing data. To draw a comparison with deep variational Bayes filters (DVBFs) [16], we apply the KVAE to [16]’s pendulum example. We further apply the model to a number of environments with different properties to demonstrate its generalizability. All models are trained end-to-end with stochastic gradient descent. Using the control input ut in (1) we can inform the model of known quantities such as external forces, as will be done in the pendulum experiment. In all the other experiments, we omit such information and train the models fully unsupervised from the videos only. Further implementation details can be found in the supplementary material (appendix A) and in the Tensorflow [1] code released at github.com/simonkamronn/kvae.

5.1 Bouncing ball

We simulate 5000 sequences of 20 time steps each of a ball moving in a two-dimensional box, where each video frame is a 32x32 binary image. A video sequence is visualised as a single image in figure 4d, with the ball’s darkening color reflecting the incremental frame index. In this set-up the initial position and velocity are randomly sampled. No forces are applied to the ball, except for the fully elastic collisions with the walls.
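A minimal sketch of such an environment, assuming a unit box and a per-step velocity smaller than the box size so a single reflection per step suffices; frame rendering to 32x32 binary images is omitted.

```python
def simulate_ball(pos, vel, steps, box=1.0):
    """Simulate a ball in a [0, box]^2 square with constant velocity
    and fully elastic wall collisions; returns the trajectory of
    (x, y) positions. Assumes |vx|, |vy| < box."""
    x, y = pos
    vx, vy = vel
    traj = []
    for _ in range(steps):
        x, y = x + vx, y + vy
        # Elastic reflection: flip the velocity component and mirror
        # the overshoot back into the box.
        if x < 0 or x > box:
            vx = -vx
            x = -x if x < 0 else 2 * box - x
        if y < 0 or y > box:
            vy = -vy
            y = -y if y < 0 else 2 * box - y
        traj.append((x, y))
    return traj

traj = simulate_ball(pos=(0.5, 0.5), vel=(0.13, -0.07), steps=20)
```

The piecewise nature of the reflection is exactly the nonlinearity the dynamics parameter network has to absorb, since each straight segment is linear.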
The minimum number of latent dimensions that the KVAE requires to model the ball’s dynamics are at ∈ R² and zt ∈ R⁴, as at the very least the ball’s position in the box’s 2d plane has to be encoded in at, and zt has to encode the ball’s position and velocity. The model’s flexibility increases with more latent dimensions, but we choose these settings for the sake of interpretable visualisations. The dynamics parameter network uses K = 3 to interpolate three modes: a constant velocity, and two non-linear interactions with the horizontal and vertical walls. We compare the generation and imputation performance of the KVAE with two recurrent neural network (RNN) models that are based on the same auto-encoding (AE) architecture as the KVAE and are modifications of methods from the literature to be better suited to the bouncing ball experiments.³

³We also experimented with the SRNN model from [8] as it can do smoothing. However, the model is probably too complex for the task in hand, and we could not make it learn good dynamics.

Figure 3: Missing data imputation results. (a) Frames xt missing completely at random. (b) Frames xt missing in the middle of the sequence. (c) Comparison of encoded (ground truth), generated and smoothed trajectories of a KVAE in the latent space a. The black squares illustrate observed samples and the hexagons indicate the initial state. Notice that the at’s lie on a manifold that can be rotated and stretched to align with the frames of the video.

In the AE-RNN, inspired by the architecture from [29], a pretrained convolutional auto-encoder, identical to the one used for the KVAE, feeds the encodings to an LSTM network [13]. During training the LSTM predicts the next encoding in the sequence, and during generation we use the previous output as input to the current step. For data imputation the LSTM either receives the previous output or, if available, the encoding of the observed frame (similarly to filtering in the KVAE).
The VAE-RNN is identical to the AE-RNN except that it uses a VAE instead of an AE, similarly to the model from [6]. Figure 3a shows how well missing frames are imputed in terms of the average fraction of incorrectly guessed pixels. In it, the first 4 frames are observed (to initialize the models), after which the next 16 frames are dropped at random with varying probabilities. We then impute the missing frames by doing filtering and smoothing with the KVAE. We see in figure 3a that it is beneficial to utilize information from the whole sequence (even the future observed frames), and a KVAE with smoothing outperforms all competing methods. Notice that dropout probability 1 corresponds to pure generation from the models. Figure 3b repeats this experiment, but makes it more challenging by removing an increasing number of consecutive frames from the middle of the sequence (T = 20). In this case the ability to encode information coming from the future into the posterior distribution is highly beneficial, and smoothing imputes frames much better than the other methods. Figure 3c graphically illustrates figure 3b. We plot three trajectories over a_t-encodings. The generated trajectories were obtained after initializing the KVAE model with 4 initial frames, while the smoothed trajectories also incorporated encodings from the last 4 frames of the sequence. The encoded trajectories were obtained with no missing data, and are therefore considered as ground truth. In the first three plots in figure 3c, we see that the backward recursion of the Kalman smoother corrects the trajectory obtained with generation in the forward pass. However, in the fourth plot, the poor trajectory obtained during the forward generation step makes smoothing unable to follow the ground truth. The smoothing capabilities of KVAEs also make it possible to train the model with up to 40% of missing data with only minor losses in performance (appendix C in the supplementary material).
Links to videos of the imputation results and long-term generation from the models can be found in appendix B and at sites.google.com/view/kvae.

Understanding the dynamics parameter network. In our experiments the dynamics parameter network α_t = α_t(a_{0:t−1}) is an LSTM network, but we could also parameterize it with any differentiable function of a_{0:t−1} (see appendix D in the supplementary material for a comparison of various architectures).

Figure 4: A visualisation of the dynamics parameter network α_t^(k)(a_{t−1}) for K = 3, as a function of a_{t−1}: (a) k = 1, (b) k = 2, (c) k = 3, (d) reconstruction of x. The three α_t^(k)'s sum to one at every point in the encoded space. The greyscale backgrounds in (a) to (c) correspond to the intensity of the weights α_t^(k), with white indicating a weight of one in the dynamics parameter network's output. Overlaid on them is the full latent encoding a. (d) shows the reconstructed frames of the video as one image.

When using a multi-layer perceptron (MLP) that depends on the previous encoding as mixture network, i.e. α_t = α_t(a_{t−1}), figure 4 illustrates how the network chooses the mixture of learned dynamics. We see that the model has correctly learned to choose a transition that maintains a constant velocity in the center (k = 1), reverses the horizontal velocity in proximity of the left and right walls (k = 2), and reverses the vertical velocity close to the top and bottom (k = 3).

5.2 Pendulum experiment

Table 1: Pendulum experiment.
Model       | Test ELBO
KVAE (CNN)  | 810.08
KVAE (MLP)  | 807.02
DVBF        | 798.56
DMM         | 784.70

We test the KVAE on the experiment of a dynamic torque-controlled pendulum used in [16]. Training, validation and test set are formed by 500 sequences of 15 frames of 16×16 pixels. We use a KVAE with a_t ∈ R^2, z_t ∈ R^3 and K = 2, and try two different encoder-decoder architectures for the VAE, one using an MLP and one using a convolutional neural network (CNN).
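The α-weighted interpolation of the K learned transition modes can be sketched in a few lines. Everything below is illustrative: the 2×2 modes and the logits stand in for what the LSTM/MLP mixture network would output, but the softmax-then-convex-combination structure matches the description above.

```python
import math

def softmax(logits):
    # numerically stable softmax; the mixture weights alpha_t^(k)
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [x / s for x in e]

def mix_matrices(alphas, mats):
    # A_t = sum_k alpha_t^(k) * A^(k): convex combination of transition modes
    n = len(mats[0])
    return [[sum(a * M[i][j] for a, M in zip(alphas, mats))
             for j in range(n)] for i in range(n)]

# K = 3 hypothetical 2x2 modes standing in for the learned dynamics
# (identity, horizontal flip, vertical flip -- purely illustrative):
A1 = [[1, 0], [0, 1]]
A2 = [[-1, 0], [0, 1]]
A3 = [[1, 0], [0, -1]]

# the logits would come from the LSTM/MLP evaluated on a_{0:t-1}
alphas = softmax([2.0, 0.0, 0.0])
A_t = mix_matrices(alphas, [A1, A2, A3])
```

Because the α's sum to one at every point of the encoded space, the effective transition matrix A_t always stays inside the convex hull of the K learned modes.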
We compare the performances of the KVAE to DVBFs [16] and deep Markov models (DMM) [19], nonlinear SSMs parameterized by deep neural networks whose intractable posterior distribution is approximated with an inference network. In table 1 we see that the KVAE outperforms both models in terms of ELBO on a test set, showing that for the task at hand it is preferable to use a model with simpler dynamics but exact posterior inference.

(Footnote 4: Deep Markov models were previously referred to as deep Kalman filters.)

5.3 Other environments

To test how well the KVAE adapts to different environments, we trained it end-to-end on videos of (i) a ball bouncing between walls that form an irregular polygon, (ii) a ball bouncing in a box and subject to gravity, (iii) a Pong-like environment where the paddles follow the vertical position of the ball to make it stay in the frame at all times. Figure 5 shows that the KVAE learns the dynamics of all three environments, and generates realistic-looking trajectories. We repeat the imputation experiments of figures 3a and 3b for these environments in the supplementary material (appendix E), where we see that KVAEs outperform alternative models.

6 Related work

Recent progress in unsupervised learning of high dimensional sequences is found in a plethora of both deterministic and probabilistic generative models. The VAE framework is a common workhorse in the stable of probabilistic inference methods, and it is extended to the temporal setting by [2, 6, 8, 16, 19]. In particular, deep neural networks can parameterize the transition and emission distributions of different variants of deep state-space models [8, 16, 19]. In these extensions, inference

Figure 5: Generations from the KVAE trained on different environments: (a) irregular polygon, (b) box with gravity, (c) Pong-like environment. The videos are shown as single images, with color intensity representing the incremental sequence index t.
(Figure 5, continued: in the simulation that resembles Atari's Pong game, the movement of the two paddles, left and right, is also visible.)

networks define a variational approximation to the intractable posterior distribution of the latent states at each time step. For the tasks in section 5, it is preferable to use the KVAE's simpler temporal model with an exact (conditional) posterior distribution than a highly non-linear model where the posterior needs to be approximated. A different combination of VAEs and probabilistic graphical models has been explored in [15], which defines a general class of models where inference is performed with message passing algorithms that use deep neural networks to map the observations to conjugate graphical model potentials. In classical non-linear extensions of the LGSSM, such as the extended Kalman filter, and in the locally-linear dynamics of [16, 33], the transition matrices at time t have a non-linear dependence on z_{t−1}. The KVAE's approach is different: by introducing the latent encodings a_t and making γ_t depend on a_{1:t−1}, the linear dependency between consecutive states of z is preserved, so that the exact smoothed posterior can be computed given a, and used to perform missing data imputation. LGSSMs with dynamic parameterization have been used for large-scale demand forecasting in [27]. [20] introduces recurrent switching linear dynamical systems, which combine deep learning techniques and switching Kalman filters [22] to model low-dimensional time series. [11] introduces a discriminative approach to estimate the low-dimensional state of an LGSSM from input images. The resulting model is reminiscent of a KVAE with no decoding step, and is therefore not suited for unsupervised learning and video generation. Recent work in the non-sequential setting has focused on disentangling basic visual concepts in an image [12]. [10] models neural activity by finding a non-linear embedding of a neural time series into an LGSSM.
Great strides have been made in the reinforcement learning community to model how environments evolve in response to actions [5, 23, 24, 30, 32]. In a similar spirit to this paper, [32] extracts a latent representation from a PCA representation of the frames, where controls can be applied. [5] introduces action-conditional dynamics parameterized with LSTMs and, as for the KVAE, a computationally efficient procedure to make long-term predictions without generating high dimensional images at each time step. Among autoregressive models, [29] develops a sequence-to-sequence model of video representations that uses LSTMs to define both the encoder and the decoder, and [7] develops an action-conditioned video prediction model of the motion of a robot arm using convolutional LSTMs that models the change in pixel values between two consecutive frames. While the focus in this work is to define a generative model for high dimensional videos of simple physical systems, several recent works have combined physical models of the world with deep learning to learn the dynamics of objects in more complex but low-dimensional environments [3, 4, 9, 34].

7 Conclusion

The KVAE, a model for unsupervised learning of high-dimensional videos, was introduced in this paper. It disentangles an object's latent representation a_t from a latent state z_t that describes its dynamics, and can be learned end-to-end from raw video. Because the exact (conditional) smoothed posterior distribution over the states of the LGSSM can be computed, one generally sees a marked improvement in inference and missing data imputation over methods that do not have this property. A desirable property of disentangling the two latent representations is that temporal reasoning, and possibly planning, could be done in the latent space.
As a proof of concept, we have deliberately focused our exposition on videos of static worlds that contain a few moving objects, and we leave extensions of the model to real-world videos, or to sequences coming from an agent exploring its environment, to future work.

Acknowledgements

We would like to thank Lars Kai Hansen for helpful discussions on the model design. Marco Fraccaro is supported by Microsoft Research through its PhD Scholarship Programme. We thank NVIDIA Corporation for the donation of TITAN X GPUs.

References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] E. Archer, I. M. Park, L. Buesing, J. Cunningham, and L. Paninski. Black box variational inference for state space models. arXiv:1511.07367, 2015.
[3] P. W. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, and K. Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In NIPS, 2016.
[4] M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. In ICLR, 2017.
[5] S. Chiappa, S. Racanière, D. Wierstra, and S. Mohamed. Recurrent environment simulators. In ICLR, 2017.
[6] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In NIPS, 2015.
[7] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
[8] M. Fraccaro, S. K. Sønderby, U. Paquet, and O.
Winther. Sequential neural models with stochastic layers. In NIPS, 2016.
[9] K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. In ICLR, 2016.
[10] Y. Gao, E. W. Archer, L. Paninski, and J. P. Cunningham. Linear dynamical neural population models through nonlinear embeddings. In NIPS, 2016.
[11] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. Backprop KF: Learning discriminative deterministic state estimators. In NIPS, 2016.
[12] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, Nov. 1997.
[14] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. arXiv:1611.01144, 2016.
[15] M. J. Johnson, D. Duvenaud, A. B. Wiltschko, S. R. Datta, and R. P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In NIPS, 2016.
[16] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. In ICLR, 2017.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[18] D. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[19] R. Krishnan, U. Shalit, and D. Sontag. Structured inference networks for nonlinear state space models. In AAAI, 2017.
[20] S. Linderman, M. Johnson, A. Miller, R. Adams, D. Blei, and L. Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In AISTATS, 2017.
[21] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. In ICLR, 2017.
[22] K. P. Murphy. Switching Kalman filters. Technical report, 1998.
[23] J. Oh, X. Guo, H. Lee, R.
L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.
[24] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. arXiv:1511.06309, 2015.
[25] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[26] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.
[27] M. W. Seeger, D. Salinas, and V. Flunkert. Bayesian intermittent demand forecasting for large inventories. In NIPS, 2016.
[28] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016.
[29] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[30] W. Sun, A. Venkatraman, B. Boots, and J. A. Bagnell. Learning to filter with predictive state inference machines. In ICML, 2016.
[31] L. G. Ungerleider and L. G. Haxby. "What" and "where" in the human brain. Curr. Opin. Neurobiol., 4:157–165, 1994.
[32] N. Wahlström, T. B. Schön, and M. P. Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv:1502.02251, 2015.
[33] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[34] J. Wu, I. Yildirim, J. J. Lim, W. T. Freeman, and J. B. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, 2015.
Stabilizing Training of Generative Adversarial Networks through Regularization

Kevin Roth, Department of Computer Science, ETH Zürich, kevin.roth@inf.ethz.ch
Aurelien Lucchi, Department of Computer Science, ETH Zürich, aurelien.lucchi@inf.ethz.ch
Sebastian Nowozin, Microsoft Research, Cambridge, UK, sebastian.Nowozin@microsoft.com
Thomas Hofmann, Department of Computer Science, ETH Zürich, thomas.hofmann@inf.ethz.ch

Abstract

Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.1

1 Introduction

A recent trend in the world of generative models is the use of deep neural networks as data generating mechanisms. Two notable approaches in this area are variational auto-encoders (VAEs) [14, 28] as well as generative adversarial networks (GANs) [8]. GANs are especially appealing as they move away from the common likelihood maximization viewpoint and instead use an adversarial game approach for training generative models. Let us denote by P(x) and Q_θ(x) the data and model distribution, respectively.
The basic idea behind GANs is to pair up a θ-parametrized generator network that produces Q_θ with a discriminator which aims to distinguish between P and Q_θ, whereas the generator aims to make Q_θ indistinguishable from P. Effectively the discriminator represents a class of objective functions F that measures the dissimilarity of pairs of probability distributions. The final objective is then formed via a supremum over F, leading to the saddle point problem

\min_\theta \ell(Q_\theta; \mathcal{F}), \qquad \ell(Q_\theta; \mathcal{F}) := \sup_{F \in \mathcal{F}} F(P, Q_\theta). \quad (1)

The standard way of representing a specific F is through a family of statistics or discriminants φ ∈ Φ, typically realized by a neural network [8, 26]. In GANs, we use these discriminators in a logistic classification loss as follows:

F(P, Q; \phi) = \mathbb{E}_P[g(\phi(x))] + \mathbb{E}_Q[g(-\phi(x))], \quad (2)

where g(z) = ln(σ(z)) is the log-logistic function (for reference, σ(φ(x)) = D(x) in [8]). As shown in [8], for the Bayes-optimal discriminator φ* ∈ Φ, the above generator objective reduces to the Jensen-Shannon (JS) divergence between P and Q. The work of [25] later generalized this to a more general class of f-divergences, which gives more flexibility in cases where the generative model may not be expressive enough or where data may be scarce.

(Footnote 1: Code available at https://github.com/rothk/Stabilizing_GANs)

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

We consider three different challenges for learning the model distribution: (A) empirical estimation: the model family may contain the true distribution or a good approximation thereof, but one has to identify it based on a finite training sample drawn from P. This is commonly addressed by the use of regularization techniques to avoid overfitting, e.g. in the context of estimating f-divergences with M-estimators [24].
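The reduction to the JS divergence at the Bayes-optimal discriminator can be checked numerically. A toy sketch on a three-point sample space (the distributions p and q below are made up): plugging the optimal logit φ*(x) = ln(p(x)/q(x)) into the objective of Eq. (2) should give 2·JS(P, Q) − ln 4.

```python
import math

def g(z):
    # log-logistic function g(z) = ln(sigmoid(z))
    return -math.log1p(math.exp(-z))

def gan_value(p, q, logit):
    # F(P, Q; phi) = E_P[g(phi(x))] + E_Q[g(-phi(x))] on a discrete toy space
    return sum(pi * g(logit(i)) for i, pi in enumerate(p)) + \
           sum(qi * g(-logit(i)) for i, qi in enumerate(q))

def js(p, q):
    # Jensen-Shannon divergence between two discrete distributions
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    kl = lambda a, b: sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.3, 0.2]   # hypothetical "data" distribution
q = [0.2, 0.3, 0.5]   # hypothetical "model" distribution
opt = lambda i: math.log(p[i] / q[i])   # Bayes-optimal logit
F_star = gan_value(p, q, opt)           # should equal 2*JS(p, q) - ln 4
```

Any other discriminator (e.g. the constant logit 0, i.e. D(x) = 1/2) gives a strictly smaller value of F, consistent with F being maximized by the Bayes-optimal discriminator.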
In our work, we suggest a novel (Tikhonov) regularizer, derived and motivated from a training-with-noise scenario in which P and Q are convolved with white Gaussian noise [30, 3], namely

F_\gamma(P, Q; \phi) := F(P * \Lambda, Q * \Lambda; \phi), \qquad \Lambda = \mathcal{N}(0, \gamma I). \quad (3)

(B) density misspecification: the model distribution and true distribution both have a density function with respect to the same base measure, but there exists no parameter for which these densities are sufficiently similar. Here, the principle of parameter estimation via divergence minimization is provably sound in that it achieves a well-defined limit [1, 21]. It therefore provides a solid foundation for statistical inference that is robust with regard to model misspecifications.

(C) dimensional misspecification: the model distribution and the true distribution do not have a density function with respect to the same base measure, or – even worse – supp(P) ∩ supp(Q) may be negligible. This may occur whenever the model and/or data are confined to low-dimensional manifolds [3, 23]. As pointed out in [3], a geometric mismatch can be detrimental for f-GAN models, as the resulting f-divergence is not finite (the sup in Eq. (1) is +∞). As a remedy, it has been suggested to use an alternative family of distance functions known as integral probability metrics [22, 31]. These include the Wasserstein distance used in Wasserstein GANs (WGANs) [3] as well as RKHS-induced maximum mean discrepancies [9, 16, 6], which all remain well-defined. We will provide evidence (analytically and experimentally) that the noise-induced regularization method proposed in this paper effectively makes f-GAN models robust against dimensional misspecifications. While this introduces some dependency on the (Euclidean) metric of the ambient data space, it does so on a well-controlled length scale (the amplitude of noise or strength of the regularization γ) and by retaining the benefits of f-divergences.
This is a rather gentle modification compared to the more radical departure taken in Wasserstein GANs, which rely solely on the ambient space metric (through the notion of optimal mass transport). In what follows, we will take Eq. (3) as the starting point and derive an approximation via a regularizer that is simple to implement as an integral operator penalizing the squared gradient norm. As opposed to a naïve norm penalization, each f-divergence has its own characteristic weighting function over the input space, which depends on the discriminator output. We demonstrate the effectiveness of our approach on a simple Gaussian mixture as well as on several benchmark image datasets commonly used for generative models. In both cases, our proposed regularization yields stable GAN training and produces samples of higher visual quality. We also perform pairwise tests of regularized vs. unregularized GANs using a novel cross-testing protocol.

In summary, we make the following contributions:
• We systematically derive a novel, efficiently computable regularization method for f-GANs.
• We show how this addresses the dimensional misspecification challenge.
• We empirically demonstrate stable GAN training across a broad set of models.

2 Background

The fundamental way to learn a generative model in machine learning is to (i) define a parametric family of probability densities {Q_θ}, θ ∈ Θ ⊆ R^d, and (ii) find parameters θ* ∈ Θ such that Q_θ is closest (in some sense) to the true distribution P. There are various ways to measure how close model and real distribution are, or equivalently, various ways to define a distance or divergence function between P and Q. In the following we review different notions of divergence used in the literature.

f-divergence. GANs [8] are known to minimize the Jensen-Shannon divergence between P and Q. This was generalized in [25] to f-divergences induced by a convex function f.
An interesting property of f-divergences is that they permit a variational characterization [24, 27] via

D_f(P \| Q) := \mathbb{E}_Q\left[ f \circ \tfrac{dP}{dQ} \right] = \int_{\mathcal{X}} \sup_u \left( u \cdot \tfrac{dP}{dQ} - f^c(u) \right) dQ, \quad (4)

where dP/dQ is the Radon-Nikodym derivative and f^c(t) ≡ sup_{u ∈ dom_f} {ut − f(u)} is the Fenchel dual of f. By defining an arbitrary class of statistics ψ : X → R, we arrive at the bound

D_f(P \| Q) \ge \sup_\psi \int \left( \psi \cdot \tfrac{dP}{dQ} - f^c \circ \psi \right) dQ = \sup_\psi \left\{ \mathbb{E}_P[\psi] - \mathbb{E}_Q[f^c \circ \psi] \right\}. \quad (5)

Eq. (5) thus gives us a variational lower bound on the f-divergence as an expectation over P and Q, which is easier to evaluate (e.g. via sampling from P and Q, respectively) than the density-based formulation. We can see that by identifying ψ = g ∘ φ and with the choice of f such that f^c(t) = −ln(1 − e^t), we get f^c ∘ ψ = −ln(1 − σ(φ)) = −g(−φ), thus recovering Eq. (2).

Integral Probability Metrics (IPMs). An alternative family of divergences are integral probability metrics [22, 31], which find a witness function to distinguish between P and Q. This class of methods yields an objective similar to Eq. (2) that requires optimizing a distance function between two distributions over a function class F. Particular choices for F yield the kernel maximum mean discrepancy approach of [9, 16] or Wasserstein GANs [3]. The latter distance is defined as

W(P, Q) = \sup_{\|f\|_L \le 1} \left\{ \mathbb{E}_P[f] - \mathbb{E}_Q[f] \right\}, \quad (6)

where the supremum is taken over functions f which have a bounded Lipschitz constant. As shown in [3], the Wasserstein metric implies a different notion of convergence compared to the JS divergence used in the original GAN. Essentially, the Wasserstein metric is said to be weak as it requires the use of a weaker topology, thus making it easier for a sequence of distributions to converge. The use of a weaker topology is achieved by restricting the function class to the set of bounded Lipschitz functions. This yields a hard constraint on the function class that is empirically hard to satisfy.
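The lower bound of Eq. (5) is easy to sanity-check numerically for a concrete f. A sketch on a three-point space with the KL generator f(u) = u ln u, whose Fenchel dual is f^c(t) = e^{t−1} (the distributions p and q below are hypothetical): the bound holds for any statistic ψ and becomes tight at ψ*(x) = 1 + ln(p(x)/q(x)).

```python
import math

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def variational_bound(p, q, psi):
    # E_P[psi] - E_Q[f^c(psi)] with f(u) = u ln u, f^c(t) = exp(t - 1)
    return sum(pi * psi[i] for i, pi in enumerate(p)) - \
           sum(qi * math.exp(psi[i] - 1.0) for i, qi in enumerate(q))

p = [0.5, 0.3, 0.2]   # hypothetical data distribution
q = [0.2, 0.3, 0.5]   # hypothetical model distribution

# optimal statistic psi*(x) = 1 + ln(p(x)/q(x)); at this psi the bound equals KL(p||q)
psi_opt = [1.0 + math.log(pi / qi) for pi, qi in zip(p, q)]
tight_value = variational_bound(p, q, psi_opt)
```

Plugging in any other ψ, e.g. the zero statistic, yields a strictly smaller value, illustrating that Eq. (5) is a supremum over statistics rather than an identity for each ψ.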
In [3], this constraint is implemented via weight clipping, which is acknowledged to be a "terrible way" to enforce the Lipschitz constraint. As will be shown later, our regularization penalty can be seen as a soft constraint on the Lipschitz constant of the function class, which is easy to implement in practice. Recently, [10] has also proposed a similar regularization; while their proposal was motivated for Wasserstein GANs and does not extend to f-divergences, it is interesting to observe that both their and our regularization work on the gradient.

Training with Noise. As suggested in [3, 30], one can break the dimensional misspecification discussed in Section 1 by adding continuous noise to the inputs of the discriminator, therefore smoothing the probability distribution. However, this requires adding high-dimensional noise, which introduces significant variance in the parameter estimation process. Counteracting this requires a lot of samples and therefore ultimately leads to a costly or impractical solution. Instead we propose an approach that relies on analytic convolution of the densities P and Q with Gaussian noise. As we demonstrate below, this yields a simple weighted penalty function on the norm of the gradients. Conceptually we think of this noise not as being part of the generative process (as in [3]), but rather as a way to define a smoother family of discriminants for the variational bound of f-divergences.

Regularization for Mode Dropping. Other regularization techniques address the problem of mode dropping and are complementary to our approach. This includes the work of [7], which incorporates a supervised training signal as a regularizer on top of the discriminator target. To implement supervision, the authors use an additional auto-encoder as well as a two-step training procedure, which might be computationally expensive. A similar approach was proposed by [20], which stabilizes GANs by unrolling the optimization of the discriminator.
The main drawback of this approach is that the computational cost scales with the number of unrolling steps. In general, it is not clear to what extent these methods not only stabilize GAN training, but also address the conceptual challenges listed in Section 1.

3 Noise-Induced Regularization

From now onwards, we consider the general f-GAN [25] objective defined as

F(P, Q; \psi) \equiv \mathbb{E}_P[\psi] - \mathbb{E}_Q[f^c \circ \psi]. \quad (7)

3.1 Noise Convolution

From a practitioner's point of view, training with noise can be realized by adding zero-mean random variables ξ to samples x ∼ P, Q during training. Here we focus on normal white noise ξ ∼ Λ = N(0, γI) (the same analysis goes through with a Laplacian noise distribution, for instance). From a theoretical perspective, adding noise is tantamount to convolving the corresponding distribution, as

\mathbb{E}_P \mathbb{E}_\Lambda[\psi(x + \xi)] = \int \psi(x) \int p(x - \xi)\,\lambda(\xi)\, d\xi\, dx = \int \psi(x)\,(p * \lambda)(x)\, dx = \mathbb{E}_{P * \Lambda}[\psi], \quad (8)

where p and λ are the probability densities of P and Λ, respectively, with regard to the Lebesgue measure. The noise distribution Λ as well as the resulting P ∗ Λ are guaranteed to have full support in the ambient space, i.e. λ(x) > 0 and (p ∗ λ)(x) > 0 (∀x). Technically, applying this to both P and Q makes the resulting generalized f-divergence well-defined, even when the generative model is dimensionally misspecified. Note that approximating E_Λ through sampling was previously investigated in [30, 3].

3.2 Convolved Discriminants

With symmetric noise, λ(ξ) = λ(−ξ), we can write Eq. (8) equivalently as

\mathbb{E}_{P * \Lambda}[\psi] = \mathbb{E}_P \mathbb{E}_\Lambda[\psi(x + \xi)] = \int p(x) \int \psi(x - \xi)\,\lambda(-\xi)\, d\xi\, dx = \mathbb{E}_P[\psi * \lambda]. \quad (9)

For the Q-expectation in Eq. (7) one gets, by the same argument, E_{Q∗Λ}[f^c ∘ ψ] = E_Q[(f^c ∘ ψ) ∗ λ].
Formally, this generalizes the variational bound for f-divergences in the following manner:

F(P * \Lambda, Q * \Lambda; \psi) = F(P, Q; \psi * \lambda, (f^c \circ \psi) * \lambda), \qquad F(P, Q; \rho, \tau) := \mathbb{E}_P[\rho] - \mathbb{E}_Q[\tau]. \quad (10)

Assuming that F is closed under λ-convolutions, the regularization will result in a relative weakening of the discriminator, as we take the sup over a smaller, more regular family. Clearly, the low-pass effect of λ-convolutions can be well understood in the Fourier domain. In this equivalent formulation, we leave P and Q unchanged, yet we change the view the discriminator can take on the ambient data space: metaphorically speaking, the generator is paired up with a short-sighted adversary.

3.3 Analytic Approximations

In general, it may be difficult to analytically compute ψ ∗ λ or – equivalently – E_Λ[ψ(x + ξ)]. However, for small γ we can use a Taylor approximation of ψ around ξ = 0 (cf. [5]):

\psi(x + \xi) = \psi(x) + [\nabla \psi(x)]^T \xi + \tfrac{1}{2}\, \xi^T [\nabla^2 \psi(x)]\, \xi + O(\xi^3), \quad (11)

where ∇²ψ denotes the Hessian, whose trace Tr(∇²ψ) = Δψ is the Laplace operator. The properties of white noise result in the approximation

\mathbb{E}_\Lambda[\psi(x + \xi)] = \psi(x) + \tfrac{\gamma}{2} \Delta \psi(x) + O(\gamma^2), \quad (12)

and thereby lead directly to an approximation of F_γ (see Eq. (3)) via F = F_0 plus a correction, i.e.

F_\gamma(P, Q; \psi) = F(P, Q; \psi) + \tfrac{\gamma}{2} \left\{ \mathbb{E}_P[\Delta \psi] - \mathbb{E}_Q[\Delta (f^c \circ \psi)] \right\} + O(\gamma^2). \quad (13)

We can interpret Eq. (13) as follows: the Laplacian measures how much the scalar fields ψ and f^c ∘ ψ differ at each point from their local average. It is thereby an infinitesimal proxy for the (exact) convolution. The Laplace operator is a sum of d terms, where d is the dimensionality of the ambient data space. As such it does not suffer from the quadratic blow-up involved in computing the Hessian. If we realize the discriminator via a deep network, however, then we need to be able to compute the Laplacian of composed functions. For concreteness, let us assume that ψ = h ∘ G, G = (g_1, ..., g_k), and look at a single input x, i.e. g_i : R →
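Eq. (12) can be verified by Monte Carlo on a function for which the expansion is exact. For a quadratic ψ (illustrative coefficients below), E_Λ[ψ(x + ξ)] equals ψ(x) + (γ/2)Δψ(x) with no higher-order terms, so a sample average over ξ ∼ N(0, γI) should match the analytic value up to sampling noise:

```python
import random

random.seed(0)

# illustrative quadratic test function in 2-D: the O(gamma^2) term vanishes
a, b, c = 1.0, 0.5, 0.3
def psi(x1, x2):
    return a * x1 * x1 + b * x2 * x2 + c * x1 * x2

x = (0.7, -0.2)          # arbitrary evaluation point
gamma = 0.01             # noise variance (per coordinate)
laplacian = 2 * a + 2 * b  # trace of the (constant) Hessian of psi

# Monte Carlo estimate of E_Lambda[psi(x + xi)], xi ~ N(0, gamma * I)
n = 100000
mc = 0.0
for _ in range(n):
    xi1 = random.gauss(0.0, gamma ** 0.5)
    xi2 = random.gauss(0.0, gamma ** 0.5)
    mc += psi(x[0] + xi1, x[1] + xi2)
mc /= n

# analytic value from Eq. (12): psi(x) + (gamma / 2) * Laplacian(psi)(x)
analytic = psi(*x) + gamma / 2.0 * laplacian
```

The cross term c·x1·x2 contributes nothing to the noise correction, since E[ξ1 ξ2] = 0 for white noise: only the diagonal of the Hessian, i.e. the Laplacian, enters.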
R; then

(h \circ G)' = \sum_i g_i' \cdot (\partial_i h \circ G), \qquad (h \circ G)'' = \sum_i g_i'' \cdot (\partial_i h \circ G) + \sum_{i,j} g_i' \cdot g_j' \cdot (\partial_i \partial_j h \circ G). \quad (14)

So at the intermediate layer, we would need to effectively operate with a full Hessian, which is computationally demanding, as has already been observed in [5].

3.4 Efficient Gradient-Based Regularization

We would like to derive a (more) tractable strategy for regularizing ψ, which (i) avoids the detrimental variance that comes from sampling ξ, (ii) does not rely on explicitly convolving the distributions P and Q, and (iii) avoids the computation of Laplacians as in Eq. (13). Clearly, this requires making further simplifications. We suggest to exploit properties of the maximizer ψ* of F, which can be characterized by [24]

(f^{c\prime} \circ \psi^*)\, dQ = dP \implies \mathbb{E}_P[h] = \mathbb{E}_Q[(f^{c\prime} \circ \psi^*) \cdot h] \quad (\forall h \text{ integrable}). \quad (15)

The relevance of this becomes clear if we apply the chain rule to Δ(f^c ∘ ψ), assuming that f^c is twice differentiable:

\Delta(f^c \circ \psi) = (f^{c\prime\prime} \circ \psi) \cdot \|\nabla \psi\|^2 + (f^{c\prime} \circ \psi) \cdot \Delta \psi, \quad (16)

as now we get a convenient cancellation of the Laplacians at ψ = ψ* + O(γ):

F_\gamma(P, Q; \psi^*) = F(P, Q; \psi^*) - \tfrac{\gamma}{2}\, \mathbb{E}_Q\!\left[ (f^{c\prime\prime} \circ \psi^*) \cdot \|\nabla \psi^*\|^2 \right] + O(\gamma^2). \quad (17)

We can (heuristically) turn this into a regularizer by taking the leading terms,

F_\gamma(P, Q; \psi) \approx F(P, Q; \psi) - \tfrac{\gamma}{2}\, \Omega_f(Q; \psi), \qquad \Omega_f(Q; \psi) := \mathbb{E}_Q\!\left[ (f^{c\prime\prime} \circ \psi) \cdot \|\nabla \psi\|^2 \right]. \quad (18)

Note that we do not assume that the Laplacian terms cancel far away from the optimum, i.e. we do not assume Eq. (15) to hold for ψ far away from ψ*. Instead, the underlying assumption we make is that optimizing the gradient-norm regularized objective F_γ(P, Q; ψ) makes ψ converge to ψ* + O(γ), for which we know that the Laplacian terms cancel [5, 2]. The convexity of f^c implies that the weighting function of the squared gradient norm is non-negative, i.e. f^{c''} ≥ 0, which in turn implies that the regularizer −(γ/2) Ω_f(Q; ψ) is upper bounded (by zero). Maximization of F_γ(P, Q; ψ) with respect to ψ is therefore well-defined. Further considerations regarding the well-definedness of the regularizer can be found in sec.
7.2 in the Appendix.

4 Regularizing GANs

We have shown that training with noise is equivalent to regularizing the discriminator. Inspired by the above analysis, we propose the following class of $f$-GAN regularizers:

Regularized f-GAN

$$F_\gamma(P, Q; \psi) = \mathbb{E}_P[\psi] - \mathbb{E}_Q[f^c \circ \psi] - \frac{\gamma}{2}\, \Omega_f(Q; \psi), \qquad \Omega_f(Q; \psi) := \mathbb{E}_Q\!\left[ (f^{c\prime\prime} \circ \psi)\, \|\nabla\psi\|^2 \right] \qquad (19)$$

The regularizer corresponding to the commonly used parametrization of the Jensen-Shannon GAN can be derived analogously, as shown in the Appendix. We obtain,

Regularized Jensen-Shannon GAN

$$F_\gamma(P, Q; \varphi) = \mathbb{E}_P[\ln \varphi] + \mathbb{E}_Q[\ln(1 - \varphi)] - \frac{\gamma}{2}\, \Omega_{JS}(P, Q; \varphi)$$
$$\Omega_{JS}(P, Q; \varphi) := \mathbb{E}_P\!\left[ (1 - \varphi(x))^2\, \|\nabla\Phi(x)\|^2 \right] + \mathbb{E}_Q\!\left[ \varphi(x)^2\, \|\nabla\Phi(x)\|^2 \right] \qquad (20)$$

where $\Phi = \sigma^{-1}(\varphi)$ denotes the logit of the discriminator $\varphi$. We prefer to compute the gradient of $\Phi$, as it is easier to implement and more robust than computing gradients after applying the sigmoid.

Algorithm 1 Regularized JS-GAN. Default values: $\gamma_0 = 2.0$, $\alpha = 0.01$ (with annealing), $\gamma = 0.1$ (without annealing), $n_\varphi = 1$
Require: initial noise variance $\gamma_0$, annealing decay rate $\alpha$, number of discriminator update steps $n_\varphi$ per generator iteration, minibatch size $m$, number of training iterations $T$
Require: initial discriminator parameters $\omega_0$, initial generator parameters $\theta_0$
for $t = 1, \dots, T$ do
    $\gamma \leftarrow \gamma_0 \cdot \alpha^{t/T}$  # annealing
    for $1, \dots, n_\varphi$ do
        Sample minibatch of real data $\{x^{(1)}, \dots, x^{(m)}\} \sim P$.
        Sample minibatch of latent variables from the prior $\{z^{(1)}, \dots, z^{(m)}\} \sim p(z)$.
        $F(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \left[ \ln \varphi_\omega(x^{(i)}) + \ln\!\left(1 - \varphi_\omega(G_\theta(z^{(i)}))\right) \right]$
        $\Omega(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \left[ \left(1 - \varphi_\omega(x^{(i)})\right)^2 \left\|\nabla\Phi_\omega(x^{(i)})\right\|^2 + \varphi_\omega(G_\theta(z^{(i)}))^2 \left\|\nabla_{\tilde{x}}\Phi_\omega(\tilde{x})\big|_{\tilde{x} = G_\theta(z^{(i)})}\right\|^2 \right]$
        $\omega \leftarrow \omega + \nabla_\omega \left( F(\omega, \theta) - \frac{\gamma}{2}\, \Omega(\omega, \theta) \right)$  # gradient ascent
    end for
    Sample minibatch of latent variables from the prior $\{z^{(1)}, \dots, z^{(m)}\} \sim p(z)$.
    $F(\omega, \theta) = \frac{1}{m} \sum_{i=1}^m \ln\!\left(1 - \varphi_\omega(G_\theta(z^{(i)}))\right)$  or  $F_{\mathrm{alt}}(\omega, \theta) = -\frac{1}{m} \sum_{i=1}^m \ln \varphi_\omega(G_\theta(z^{(i)}))$
    $\theta \leftarrow \theta - \nabla_\theta F(\omega, \theta)$  # gradient descent
end for

The gradient-based updates can be performed with any gradient-based learning rule. We used Adam in our experiments.
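To make Eq. (20) concrete, the regularizer can be evaluated in closed form for a toy one-dimensional logistic discriminator whose logit is linear, $\Phi(x) = ax + b$, so that $\nabla\Phi(x) = a$. The following is a minimal numerical sketch; the linear discriminator, the sample lists, and the parameter values are illustrative assumptions, not part of the paper:

```python
import math

def omega_js(xs_real, xs_fake, a, b):
    """Monte Carlo estimate of the JS regularizer
    Omega_JS = E_P[(1 - phi)^2 |grad Phi|^2] + E_Q[phi^2 |grad Phi|^2]
    for a 1-d logistic discriminator phi(x) = sigmoid(a*x + b),
    whose logit Phi(x) = a*x + b has constant gradient a."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    term_p = sum((1.0 - sigmoid(a * x + b)) ** 2 * a ** 2 for x in xs_real) / len(xs_real)
    term_q = sum(sigmoid(a * x + b) ** 2 * a ** 2 for x in xs_fake) / len(xs_fake)
    return term_p + term_q

def regularized_js_objective(xs_real, xs_fake, a, b, gamma):
    """F_gamma = E_P[ln phi] + E_Q[ln(1 - phi)] - (gamma / 2) * Omega_JS."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    f = (sum(math.log(sigmoid(a * x + b)) for x in xs_real) / len(xs_real)
         + sum(math.log(1.0 - sigmoid(a * x + b)) for x in xs_fake) / len(xs_fake))
    return f - 0.5 * gamma * omega_js(xs_real, xs_fake, a, b)
```

In a real implementation $\nabla\Phi$ would be obtained by backpropagation through the discriminator network; here it is analytic only because the logit is linear.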
4.1 Training Algorithm

Regularizing the discriminator provides an efficient way to convolve the distributions and is thereby sufficient to address the dimensional misspecification challenges outlined in the introduction. This leaves open the possibility of using the regularizer also in the objective of the generator. On the one hand, optimizing the generator through the regularized objective may provide useful gradient signal and therefore accelerate training. On the other hand, it destabilizes training close to convergence (if not dealt with properly), since the generator is incentivized to put probability mass where the discriminator has large gradients. In the case of JS-GANs, we recommend pairing the regularized objective of the discriminator with the "alternative" or "non-saturating" objective for the generator, proposed in [8], which is known to provide strong gradients out of the box (see Algorithm 1).

4.2 Annealing

The regularizer variance $\gamma$ lends itself nicely to annealing. Our experimental results indicate that a reasonable annealing scheme consists of regularizing with a large initial $\gamma$ early in training and then (exponentially) decaying $\gamma$ to a small non-zero value. We leave to future work the question of how to determine an optimal annealing schedule.

5 Experiments

5.1 2D submanifold mixture of Gaussians in 3D space

To demonstrate the stabilizing effect of the regularizer, we train a simple GAN architecture [20] on a 2D submanifold mixture of seven Gaussians arranged in a circle and embedded in 3D space (further details and an illustration of the mixture distribution are provided in the Appendix). We emphasize that this mixture is degenerate with respect to the base measure defined in ambient space, as it does not have fully dimensional support, thus precisely representing one of the failure scenarios commonly described in the literature [3].

Figure 1: 2D submanifold mixture. The first row shows one of several unstable unregularized GANs trained to learn the dimensionally misspecified mixture distribution. The remaining rows show regularized GANs (with regularized objective for the discriminator and unregularized objective for the generator) for different levels of regularization $\gamma$. Even for small but non-zero noise variance, the regularized GAN can essentially be trained indefinitely without collapse. The color of the samples is proportional to the density estimated from a Gaussian KDE fit. The target distribution is shown in Fig. 5. GANs were trained with one discriminator update per generator update step (indicated).

The results are shown in Fig. 1 for both standard unregularized GAN training and our regularized variant. While the unregularized GAN collapses in literally every run after around 50k iterations, due to the fact that the discriminator concentrates on ever smaller differences between generated and true data (the stakes are getting higher as training progresses), the regularized variant can be trained essentially indefinitely (well beyond 200k iterations) without collapse, for various degrees of noise variance, with and without annealing. The stabilizing effect of the regularizer is even more pronounced when the GANs are trained with five discriminator updates per generator update step, as shown in Fig. 6.

5.2 Stability across various architectures

To demonstrate the stability of the regularized training procedure and to showcase the excellent quality of the samples generated from it, we trained various network architectures on the CelebA [17], CIFAR-10 [15] and LSUN bedrooms [32] datasets. In addition to the deep convolutional GAN (DCGAN) of [26], we trained several common architectures that are known to be hard to train [4, 26, 19], therefore allowing us to establish a comparison to the concurrently proposed gradient-penalty regularizer for Wasserstein GANs [10].
Among these architectures are a DCGAN without any normalization in either the discriminator or the generator, a DCGAN with tanh activations, and a deep residual network (ResNet) GAN [11]. We used the open-source implementation of [10] for our experiments on CelebA and LSUN, with one notable exception: we use batch normalization also for the discriminator (as our regularizer does not depend on the optimal transport plan, or more precisely on the gradient penalty being imposed along it). All networks were trained using the Adam optimizer [13] with learning rate $2 \times 10^{-4}$ and the hyperparameters recommended by [26]. We trained all datasets using batches of size 64, for a total of 200k generator iterations in the case of LSUN and 100k iterations on CelebA. The results of these experiments are shown in Figs. 3 & 2. Further implementation details can be found in the Appendix.

5.3 Training time

We empirically found regularization to increase the overall training time by a marginal factor of roughly 1.4 (due to the additional backpropagation through the computational graph of the discriminator gradients). More importantly, however, (regularized) f-GANs are known to converge (or at least generate good looking samples) faster than their WGAN relatives [10].

Figure 2: Stability across various architectures: ResNet, DCGAN, DCGAN without normalization, and DCGAN with tanh activations (details in the Appendix). All samples were generated from regularized GANs with exponentially annealed $\gamma_0 = 2.0$ (and the alternative generator loss) as described in Algorithm 1. Samples were produced after 200k generator iterations on the LSUN dataset (see also Fig. 8 for a full-resolution image of the ResNet GAN). Samples for the unregularized architectures can be found in the Appendix.

Figure 3: Annealed Regularization. CelebA samples generated by (un)regularized ResNet GANs. The initial level of regularization $\gamma_0$ is shown below each batch of images.
$\gamma_0$ was exponentially annealed as described in Algorithm 1. The regularized GANs can be trained essentially indefinitely without collapse; the superior quality is again evident. Samples were produced after 100k generator iterations.

5.4 Regularization vs. explicitly adding noise

We compare our regularizer against the common practitioner's approach of explicitly adding noise to images during training. In order to compare both approaches (analytic regularizer vs. explicit noise), we fix a common batch size (64 in our case) and subsequently train with different noise-to-signal ratios (NSR): we take batch-size/NSR samples (both from the dataset and generated ones), add NSR noise vectors to each of them, and feed the results to the discriminator (so that overall both models are trained on the same batch size). We experimented with NSR 1, 2, 4, 8 and show the best performing ratio (further ratios in the Appendix). Explicitly adding noise in high-dimensional ambient spaces introduces additional sampling variance which is not present in the regularized variant. The results, shown in Fig. 4, confirm that the regularizer stabilizes training across a broad range of noise levels and manages to produce images of considerably higher quality than the unregularized variants.

5.5 Cross-testing protocol

We propose the following pairwise cross-testing protocol to assess the relative quality of two GAN models: unregularized GAN (Model 1) vs. regularized GAN (Model 2). We first report the confusion matrix (classification of 10k samples from the test set against 10k generated samples) for each model separately. We then classify 10k samples generated by Model 1 with the discriminator of Model 2 and vice versa. For both models, we report the fraction of false positives (FP) (Type I error) and false negatives (FN) (Type II error).
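The bookkeeping behind this protocol reduces to thresholding discriminator scores and counting error fractions. A minimal sketch, where the score arrays and the 0.5 decision threshold are illustrative assumptions:

```python
def confusion_rates(scores_real, scores_fake, threshold=0.5):
    """Classify a sample as 'real' when its discriminator score reaches the
    threshold; return (FP, FN), where FP is the fraction of fake samples
    classified as real (Type I error) and FN the fraction of real samples
    classified as fake (Type II error)."""
    fp = sum(s >= threshold for s in scores_fake) / len(scores_fake)
    fn = sum(s < threshold for s in scores_real) / len(scores_real)
    return fp, fn

# Cross-testing: score Model 1's generated samples with Model 2's
# discriminator (and vice versa) and compare the resulting FP rates.
```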
The discriminator with the lower FP (and/or lower FN) rate defines the better model, in the sense that it is able to more accurately classify out-of-data samples, which indicates better generalization properties. We obtained the following results on CIFAR-10:

Figure 4: CIFAR-10 samples generated by (un)regularized DCGANs (with the alternative generator loss), as well as by training a DCGAN with explicitly added noise (noise-to-signal ratio 4). The level of regularization or noise $\gamma$ is shown above each batch of images. The regularizer stabilizes training across a broad range of noise levels and manages to produce images of higher quality than the unregularized variants. Samples were produced after 50 training epochs.

Regularized GAN ($\gamma = 0.1$):

                         True condition
                         Positive   Negative
  Predicted  Positive    0.9688     0.0002
             Negative    0.0312     0.9998

  Cross-testing: FP: 0.0

Unregularized GAN:

                         True condition
                         Positive   Negative
  Predicted  Positive    1.0        0.0013
             Negative    0.0        0.9987

  Cross-testing: FP: 1.0

For both models, the discriminator is able to recognize its own generator's samples (low FP in the confusion matrix). The regularized GAN also manages to perfectly classify the unregularized GAN's samples as fake (cross-testing FP 0.0), whereas the unregularized GAN classifies the samples of the regularized GAN as real (cross-testing FP 1.0). In other words, the regularized model is able to fool the unregularized one, whereas the regularized variant cannot be fooled.

6 Conclusion

We introduced a regularization scheme to train deep generative models based on generative adversarial networks (GANs). While dimensional misspecifications or non-overlapping support between the data and model distributions can cause severe failure modes for GANs, we showed that this can be addressed by adding a penalty on the weighted gradient-norm of the discriminator.
Our main result is a simple yet effective modification of the standard training algorithm for GANs, turning them into reliable building blocks for deep learning that can essentially be trained indefinitely without collapse. Our experiments demonstrate that our regularizer improves stability, prevents GANs from overfitting and therefore leads to better generalization properties (cf. the cross-testing protocol). Further research on the optimization of GANs as well as their convergence and generalization can readily be built upon our theoretical results.

Acknowledgements

We would like to thank Devon Hjelm for pointing out that the regularizer works well with ResNets. KR is thankful to Yannic Kilcher, Lars Mescheder and the dalab team for insightful discussions. Big thanks also to Ishaan Gulrajani and Taehoon Kim for their open-source GAN implementations. This work was supported by Microsoft Research through its PhD Scholarship Programme.

References

[1] Shun-ichi Amari and Hiroshi Nagaoka. Methods of information geometry. American Mathematical Soc., 2007.
[2] Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, pages 643–674, 1996.
[3] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[4] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. Proceedings of Machine Learning Research. PMLR, 2017.
[5] Chris M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7:108–116, 1995.
[6] Diane Bouchacourt, Pawan K Mudigonda, and Sebastian Nowozin. Disco nets: Dissimilarity coefficients networks. In Advances in Neural Information Processing Systems, pages 352–360, 2016.
[7] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016.
[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[9] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.
[10] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, 2017.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of Machine Learning Research, pages 448–456. PMLR, 2015.
[13] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR), 2014.
[14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. The International Conference on Learning Representations (ICLR), 2013.
[15] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[16] Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, pages 1718–1727, 2015.
[17] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
[18] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
[19] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs.
In Advances in Neural Information Processing Systems, 2017. [20] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016. [21] Tom Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005. [22] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29:429–443, 1997. [23] Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In Advances in Neural Information Processing Systems, pages 1786–1794, 2010. [24] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010. [25] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271–279, 2016. [26] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. [27] Mark D Reid and Robert C Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731–817, 2011. [28] Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, 2014. [29] David W Scott. Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons, 2015. [30] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016. 
[31] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics, phi-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009.
[32] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Training Deep Networks without Learning Rates Through Coin Betting

Francesco Orabona∗ Department of Computer Science Stony Brook University Stony Brook, NY francesco@orabona.com

Tatiana Tommasi∗ Department of Computer, Control, and Management Engineering Sapienza, Rome University, Italy tommasi@dis.uniroma1.it

Abstract

Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.

1 Introduction

In recent years deep learning has demonstrated great success in a large number of fields and has attracted the attention of various research communities, with the consequent development of multiple coding frameworks (e.g., Caffe [Jia et al., 2014], TensorFlow [Abadi et al., 2015]) and the diffusion of blogs, online tutorials, books, and dedicated courses. Besides reaching out to scientists with different backgrounds, the need for all these supportive tools originates also from the nature of deep learning: it is a methodology that involves many structural details as well as several hyperparameters whose importance has been growing with the recent trend of designing deeper and multi-branch networks.
Some of the hyperparameters define the model itself (e.g., number of hidden layers, regularization coefficients, kernel size for convolutional layers), while others are related to the model training procedure. In both cases, hyperparameter tuning is a critical step to realize the full potential of deep learning, and most of the knowledge in this area comes from living practice, years of experimentation, and, to some extent, mathematical justification [Bengio, 2012]. With respect to the optimization process, stochastic gradient descent (SGD) has proved itself to be a key component of the deep learning success, but its effectiveness strictly depends on the choice of the initial learning rate and learning rate schedule. This has primed a line of research on algorithms to reduce the hyperparameter dependence in SGD—see Section 2 for an overview of the related literature. However, all previous algorithms resort to adapting the learning rates, rather than removing them, or rely on assumptions on the shape of the objective function. In this paper we aim at removing at least one of the hyperparameters of deep learning models. We leverage recent advancements in the stochastic optimization literature to design a backpropagation procedure that does not have a learning rate at all, yet is as simple as the vanilla SGD. Specifically, we reduce the SGD problem to the game of betting on a coin (Section 4). In Section 5, we present a novel strategy to bet on a coin that extends previous ones in a data-dependent way, proving an optimal convergence rate in the convex and quasi-convex setting (defined in Section 3). Furthermore, we propose a variant of our algorithm for deep networks (Section 6). Finally, we show how our algorithm outperforms popular optimization methods in the deep learning literature on a variety of architectures and benchmarks (Section 7).

∗The authors contributed equally.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Related Work

Stochastic gradient descent offers several challenges in terms of convergence speed. Hence, the topic of learning rate setting has been widely investigated. Some of the existing solutions are based on the use of carefully tuned momentum terms [LeCun et al., 1998b, Sutskever et al., 2013, Kingma and Ba, 2015]. It has been demonstrated that these terms can speed up the convergence for convex smooth functions [Nesterov, 1983]. Other strategies propose scale-invariant learning rate updates to deal with gradients whose magnitude changes in each layer of the network [Duchi et al., 2011, Tieleman and Hinton, 2012, Zeiler, 2012, Kingma and Ba, 2015]. Indeed, scale-invariance is a well-known important feature that has also received attention outside of the deep learning community [Ross et al., 2013, Orabona et al., 2015, Orabona and Pal, 2015]. Yet, both these approaches do not avoid the use of a learning rate. A large family of algorithms exploit a second-order approximation of the cost function to better capture its local geometry and avoid the manual choice of a learning rate. The step size is automatically adapted to the cost function, with larger/shorter steps in case of shallow/steep curvature. Quasi-Newton methods [Wright and Nocedal, 1999] as well as the natural gradient method [Amari, 1998] belong to this family. Although effective in general, they have a spatial and computational complexity that is quadratic in the number of parameters, compared to first-order methods, which makes the application of these approaches infeasible in modern deep learning architectures. Hence, typically the required matrices are approximated with diagonal ones [LeCun et al., 1998b, Schaul et al., 2013]. Nevertheless, even assuming the use of the full information, it is currently unclear if the objective functions in deep learning have enough curvature to guarantee any gain.
There exists a line of work on unconstrained stochastic gradient descent without learning rates [Streeter and McMahan, 2012, Orabona, 2013, McMahan and Orabona, 2014, Orabona, 2014, Cutkosky and Boahen, 2016, 2017]. The latest advancement in this direction is the strategy of reducing stochastic subgradient descent to coin-betting, proposed by Orabona and Pal [2016]. However, their proposed betting strategy is worst-case with respect to the gradients received and cannot take advantage, for example, of sparse gradients.

3 Definitions

We now introduce the basic notions of convex analysis that are used in the paper—see, e.g., Bauschke and Combettes [2011]. We denote by $\|\cdot\|_1$ the 1-norm in $\mathbb{R}^d$. Let $f : \mathbb{R}^d \to \mathbb{R} \cup \{\pm\infty\}$; the Fenchel conjugate of $f$ is $f^* : \mathbb{R}^d \to \mathbb{R} \cup \{\pm\infty\}$ with $f^*(\theta) = \sup_{x \in \mathbb{R}^d} \theta^\top x - f(x)$. A vector $x$ is a subgradient of a convex function $f$ at $v$ if $f(v) - f(u) \leq (v - u)^\top x$ for any $u$ in the domain of $f$. The differential set of $f$ at $v$, denoted by $\partial f(v)$, is the set of all the subgradients of $f$ at $v$. If $f$ is also differentiable at $v$, then $\partial f(v)$ contains a single vector, denoted by $\nabla f(v)$, which is the gradient of $f$ at $v$. We go beyond convexity using the definition of weak quasi-convexity in Hardt et al. [2016]. This definition is relevant for us because Hardt et al. [2016] proved that $\tau$-weakly-quasi-convex objective functions arise in the training of linear recurrent networks. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $\tau$-weakly-quasi-convex over a domain $B \subseteq \mathbb{R}^d$ with respect to the global minimum $v^*$ if there is a positive constant $\tau > 0$ such that for all $v \in B$, $f(v) - f(v^*) \leq \tau (v - v^*)^\top \nabla f(v)$. From the definition, it follows that differentiable convex functions are also 1-weakly-quasi-convex.

Betting on a coin. We will reduce the stochastic subgradient descent procedure to betting on a number of coins. Hence, here we introduce the betting scenario and its notation. We consider a gambler making repeated bets on the outcomes of adversarial coin flips. The gambler starts with initial money $\epsilon > 0$.
In each round $t$, he bets on the outcome of a coin flip $g_t \in \{-1, 1\}$, where $+1$ denotes heads and $-1$ denotes tails. We do not make any assumption on how $g_t$ is generated. The gambler can bet any amount on either heads or tails. However, he is not allowed to borrow any additional money. If he loses, he loses the betted amount; if he wins, he gets the betted amount back and, in addition to that, he gets the same amount as a reward. We encode the gambler's bet in round $t$ by a single number $w_t$. The sign of $w_t$ encodes whether he is betting on heads or tails. The absolute value encodes the betted amount. We define $\text{Wealth}_t$ as the gambler's wealth at the end of round $t$ and $\text{Reward}_t$ as the gambler's net reward (the difference of wealth and the initial money), that is

$$\text{Wealth}_t = \epsilon + \sum_{i=1}^{t} w_i g_i \qquad \text{and} \qquad \text{Reward}_t = \text{Wealth}_t - \epsilon = \sum_{i=1}^{t} w_i g_i. \qquad (1)$$

In the following, we will also refer to a bet with $\beta_t$, where $\beta_t$ is such that

$$w_t = \beta_t\, \text{Wealth}_{t-1}. \qquad (2)$$

The absolute value of $\beta_t$ is the fraction of the current wealth to bet, and its sign encodes whether he is betting on heads or tails. The constraint that the gambler cannot borrow money implies that $\beta_t \in [-1, 1]$. We also slightly generalize the problem by allowing the outcome of the coin flip $g_t$ to be any real number in $[-1, 1]$, that is, a continuous coin; wealth and reward in (1) remain the same.

4 Subgradient Descent through Coin Betting

In this section, following Orabona and Pal [2016], we briefly explain how to reduce subgradient descent to the gambling scenario of betting on a coin. Consider as an example the function $F(x) := |x - 10|$ and the optimization problem $\min_x F(x)$. This function does not have any curvature; in fact, it is not even differentiable, thus no second-order optimization algorithm could reliably be used on it. We set the outcome of the coin flip $g_t$ to be equal to the negative subgradient of $F$ in $w_t$, that is $g_t \in \partial[-F(w_t)]$, where we remind that $w_t$ is the amount of money we bet.
Given our choice of $F(x)$, its negative subgradients are in $\{-1, 1\}$. In the first iteration we do not bet, hence $w_1 = 0$ and our initial money is $1. Let's also assume that there exists a function $H(\cdot)$ such that our betting strategy will guarantee that the wealth after $T$ rounds will be at least $H(\sum_{t=1}^T g_t)$ for any arbitrary sequence $g_1, \dots, g_T$. We claim that the average of the bets, $\frac{1}{T}\sum_{t=1}^T w_t$, converges to the solution of our optimization problem, and the rate depends on how good our betting strategy is. Let's see how. Denoting by $x^*$ the minimizer of $F(x)$, we have that the following holds:

$$F\!\left(\frac{1}{T}\sum_{t=1}^T w_t\right) - F(x^*) \leq \frac{1}{T}\sum_{t=1}^T F(w_t) - F(x^*) \leq \frac{1}{T}\sum_{t=1}^T g_t x^* - \frac{1}{T}\sum_{t=1}^T g_t w_t$$
$$\leq \frac{1}{T} + \frac{1}{T}\left(\sum_{t=1}^T g_t x^* - H\!\left(\sum_{t=1}^T g_t\right)\right) \leq \frac{1}{T} + \frac{1}{T}\max_v\left(v x^* - H(v)\right) = \frac{H^*(x^*) + 1}{T},$$

where in the first inequality we used Jensen's inequality, in the second the definition of subgradients, in the third our assumption on $H$, and in the last equality the definition of the Fenchel conjugate of $H$. In words, we used a gambling algorithm to find the minimizer of a non-smooth objective function by accessing its subgradients. All we need is a good gambling strategy. Note that this is just a very simple one-dimensional example, but the outlined approach works in any dimension and for any convex objective function, even if we just have access to stochastic subgradients [Orabona and Pal, 2016]. In particular, if the gradients are bounded in a range, the same reduction works using a continuous coin. Orabona and Pal [2016] showed that the simple betting strategy of $\beta_t = \frac{\sum_{i=1}^{t-1} g_i}{t}$ gives an optimal growth rate of the wealth and optimal worst-case convergence rates. However, it is not data-dependent, so it does not adapt to the sparsity of the gradients. In the next section, we will show an actual betting strategy that guarantees optimal convergence rate and adaptivity to the gradients.
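The reduction can be run end-to-end on this example. The sketch below bets the fraction $\beta_t = \frac{\sum_{i=1}^{t-1} g_i}{t}$ of the current wealth, as in the strategy of Orabona and Pal [2016] quoted above, on coins equal to the negative subgradients of $F(x) = |x - 10|$; the horizon $T = 20000$ is an illustrative choice:

```python
def coin_betting_minimize(neg_subgrad, T=20000, eps=1.0):
    """Minimize a 1-d convex function via the coin-betting reduction:
    bet w_t = beta_t * Wealth_{t-1} with beta_t = (sum of past coins) / t,
    observe the coin g_t = negative subgradient at w_t, update the wealth,
    and return the average of the bets, which converges to the minimizer."""
    wealth, sum_g, running_sum = eps, 0.0, 0.0
    for t in range(1, T + 1):
        w = (sum_g / t) * wealth      # current bet (w_1 = 0: no bet at first)
        running_sum += w
        g = neg_subgrad(w)            # coin outcome in [-1, 1]
        wealth += w * g               # win or lose the betted amount
        sum_g += g
    return running_sum / T

# F(x) = |x - 10|: the negative subgradient is +1 below 10 and -1 above.
x_bar = coin_betting_minimize(lambda w: 1.0 if w < 10.0 else -1.0)
```

Since $|\beta_t| < 1$ by construction, the wealth stays positive throughout, and the average bet approaches the minimizer $x^* = 10$ at the rate given by the derivation above.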
Algorithm 1 COntinuous COin Betting - COCOB
1: Input: $L_i > 0$, $i = 1, \dots, d$; $w_1 \in \mathbb{R}^d$ (initial parameters); $T$ (maximum number of iterations); $F$ (function to minimize)
2: Initialize: $G_{0,i} \leftarrow L_i$, $\text{Reward}_{0,i} \leftarrow 0$, $\theta_{0,i} \leftarrow 0$, $i = 1, \dots, d$
3: for $t = 1, 2, \dots, T$ do
4:   Get a (negative) stochastic subgradient $g_t$ such that $\mathbb{E}[g_t] \in \partial[-F(w_t)]$
5:   for $i = 1, 2, \dots, d$ do
6:     Update the sum of the absolute values of the subgradients: $G_{t,i} \leftarrow G_{t-1,i} + |g_{t,i}|$
7:     Update the reward: $\text{Reward}_{t,i} \leftarrow \text{Reward}_{t-1,i} + (w_{t,i} - w_{1,i})\, g_{t,i}$
8:     Update the sum of the gradients: $\theta_{t,i} \leftarrow \theta_{t-1,i} + g_{t,i}$
9:     Calculate the fraction to bet: $\beta_{t,i} = \frac{1}{L_i}\left(2\sigma\!\left(\frac{2\theta_{t,i}}{G_{t,i} + L_i}\right) - 1\right)$, where $\sigma(x) = \frac{1}{1 + \exp(-x)}$
10:    Calculate the parameters: $w_{t+1,i} \leftarrow w_{1,i} + \beta_{t,i}\,(L_i + \text{Reward}_{t,i})$
11:  end for
12: end for
13: Return $\bar{w}_T = \frac{1}{T}\sum_{t=1}^T w_t$, or $w_I$ where $I$ is chosen uniformly between 1 and $T$

5 The COCOB Algorithm

We now introduce our novel algorithm for stochastic subgradient descent, COntinuous COin Betting (COCOB), summarized in Algorithm 1. COCOB generalizes the reasoning outlined in the previous section to the optimization of a function $F : \mathbb{R}^d \to \mathbb{R}$ with bounded subgradients, reducing the optimization to betting on $d$ coins. Similarly to the construction in the previous section, the outcomes of the coins are linked to the stochastic gradients. In particular, each $g_{t,i} \in [-L_i, L_i]$ for $i = 1, \dots, d$ is equal to the coordinate $i$ of the negative stochastic gradient $g_t$ of $F$ in $w_t$. With the notation of the algorithm, COCOB is based on the strategy of betting a signed fraction of the current wealth equal to $\frac{1}{L_i}\left(2\sigma\!\left(\frac{2\theta_{t,i}}{G_{t,i} + L_i}\right) - 1\right)$, where $\sigma(x) = \frac{1}{1 + \exp(-x)}$ (lines 9 and 10). Intuitively, if $\frac{\theta_{t,i}}{G_{t,i} + L_i}$ is big in absolute value, it means that we received a sequence of equal outcomes, i.e., gradients, hence we should increase our bets, i.e., the absolute value of $w_{t,i}$. Note that this strategy assures that $|w_{t,i}\, g_{t,i}| < \text{Wealth}_{t-1,i}$, so the wealth of the gambler is always positive.
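Algorithm 1 is compact enough to transcribe directly. Below is a minimal one-dimensional Python sketch of COCOB applied to the running example $F(w) = |w - 10|$; the choices $L = 1$, $w_1 = 0$ and the horizon $T = 20000$ are illustrative:

```python
import math

def cocob_1d(neg_subgrad, w1=0.0, L=1.0, T=20000):
    """One-dimensional COCOB (Algorithm 1): track the sum of absolute
    subgradients G, the reward, and the sum of subgradients theta, and bet
    the fraction beta of the current wealth L + reward."""
    G, reward, theta = L, 0.0, 0.0      # G_0 = L, as in line 2
    w, running_sum = w1, 0.0
    for _ in range(T):
        running_sum += w                # accumulate the iterates w_t
        g = neg_subgrad(w)              # negative (stochastic) subgradient
        G += abs(g)
        reward += (w - w1) * g
        theta += g
        sigma = 1.0 / (1.0 + math.exp(-2.0 * theta / (G + L)))
        beta = (2.0 * sigma - 1.0) / L  # fraction of the wealth to bet
        w = w1 + beta * (L + reward)
    return running_sum / T              # averaged iterate, as in line 13

# F(w) = |w - 10|: the negative subgradient is +1 below 10 and -1 above.
w_bar = cocob_1d(lambda w: 1.0 if w < 10.0 else -1.0)
```

Note that no learning rate appears anywhere: the step size is determined entirely by the betting fraction and the accumulated reward.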
Also, it is easy to verify that the algorithm is scale-free, because multiplying all the subgradients and $L_i$ by any positive constant would result in the same sequence of iterates $w_{t,i}$. Note that the update in line 10 is carefully defined: the algorithm does not use the previous $w_{t,i}$ in the update. Indeed, this algorithm belongs to the family of Dual Averaging algorithms, where the iterate is a function of the average of the past gradients [Nesterov, 2009]. Denoting by $w^*$ a minimizer of $F$, COCOB satisfies the following convergence guarantee.

Theorem 1. Let $F : \mathbb{R}^d \to \mathbb{R}$ be a $\tau$-weakly-quasi-convex function and assume that the $g_t$ satisfy $|g_{t,i}| \leq L_i$. Then, running COCOB for $T$ iterations guarantees, with the notation in Algorithm 1,

$$\mathbb{E}[F(w_I)] - F(w^*) \leq \frac{\sum_{i=1}^d \left( L_i + |w_i^* - w_{1,i}| \sqrt{\mathbb{E}\!\left[ L_i (G_{T,i} + L_i) \ln\!\left(1 + \frac{(G_{T,i} + L_i)^2 (w_i^* - w_{1,i})^2}{L_i^2}\right) \right]} \right)}{\tau T},$$

where the expectation is with respect to the noise in the subgradients and the choice of $I$. Moreover, if $F$ is convex, the same guarantee with $\tau = 1$ also holds for $\bar{w}_T$.

The proof, in the Appendix, shows through induction that betting a fraction of money equal to $\beta_{t,i}$ in line 9 on the outcomes $g_{t,i}$, with an initial money of $L_i$, guarantees that the wealth after $T$ rounds is at least

$$L_i \exp\!\left( \frac{\theta_{T,i}^2}{2 L_i (G_{T,i} + L_i)} - \frac{1}{2} \ln\frac{G_{T,i}}{L_i} \right).$$

Then, as sketched in Section 4, it is enough to calculate the Fenchel conjugate of the wealth and use the standard construction for the per-coordinate updates [Streeter and McMahan, 2010]. We note in passing that the proof technique is also novel, because the one introduced in Orabona and Pal [2016] does not allow data-dependent bounds.

Figure 1: Behaviour of COCOB (left) and gradient descent with various learning rates and the same number of steps (center) in minimizing the function $y = |x - 10|$. (right) The effective learning rates of COCOB. Figures best viewed in colors.
When |g_{t,i}| = 1, we have β_{t,i} ≈ (1/t) Σ_{s=1}^{t−1} g_{s,i}, which recovers the betting strategy in Orabona and Pal [2016]. In other words, we substitute the time variable with the data-dependent quantity G_{t,i}. In fact, our bound depends on the terms G_{T,i}, while the similar one in Orabona and Pal [2016] simply depends on L_i T. Hence, as in AdaGrad [Duchi et al., 2011], COCOB's bound is tighter because it takes advantage of sparse gradients.

COCOB converges at a rate of Õ(‖w*‖₁/√T) without any learning rate to tune. This has to be compared to the bound of AdaGrad, which is² O( (1/√T) Σ_{i=1}^d ( (w*_i)²/η_i + η_i ) ), where the η_i are the initial learning rates for each coordinate. Usually all the η_i are set to the same value, but from the bound we see that the optimal setting would require a different value for each of them. This effectively means that the optimal η_i for AdaGrad are problem-dependent and typically unknown. Using the optimal η_i would give a convergence rate of O(‖w*‖₁/√T), which is exactly equal to our bound up to polylogarithmic terms. Indeed, the logarithmic term in the square root of our bound is the price to pay for being adaptive to any w* without tuning hyperparameters. This logarithmic term is unavoidable for any algorithm that wants to be adaptive to w*, hence our bound is optimal [Streeter and McMahan, 2012, Orabona, 2013].

To gain a better understanding of the differences between COCOB and other subgradient descent algorithms, it is helpful to compare their behaviour on the simple one-dimensional function F(x) = |x − 10| already used in Section 4. In Figure 1 (left), COCOB starts from 0 and over time increases the iterate w_t exponentially, until it meets a gradient of opposing sign. From the gambling perspective this is obvious: the wealth increases exponentially because there is a sequence of identical outcomes, which in turn gives an increasing wealth and a sequence of increasing bets.
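This exponential growth is easy to check numerically. The snippet below is a self-contained 1-D sketch of lines 6-10 of Algorithm 1 with a constant outcome g_t = +1 and the illustrative choices L = 1, w_1 = 0 (my own toy setup, not an experiment from the paper):

```python
import math

# 1-D COCOB update (Algorithm 1) with identical outcomes g_t = +1:
# a run of equal coin flips keeps increasing the gambler's wealth,
# so the iterate w_t should grow roughly geometrically.
L, w1 = 1.0, 0.0
G, reward, theta, w = L, 0.0, 0.0, w1
iterates = []
for t in range(30):
    g = 1.0                            # identical outcomes
    G += abs(g)                        # line 6
    reward += (w - w1) * g             # line 7
    theta += g                         # line 8
    beta = (2.0 / (1.0 + math.exp(-2.0 * theta / (G + L))) - 1.0) / L
    w = w1 + beta * (L + reward)       # line 10
    iterates.append(w)

# Ratio between consecutive iterates settles at a constant > 1,
# i.e., exponential growth of the bets.
ratios = [b / a for a, b in zip(iterates[10:-1], iterates[11:])]
```

Once a gradient of the opposite sign arrives, θ (and hence the betting fraction) shrinks, which is what stops the growth in Figure 1 (left).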
On the other hand, in Figure 1 (center), gradient descent shows a different behaviour depending on its learning rate. If the learning rate is constant and too small (black line), it takes a huge number of steps to reach the vicinity of the minimum. If the learning rate is constant and too large (red line), it keeps oscillating around the minimum, unless some form of averaging is used [Zhang, 2004]. If the learning rate decreases as η/√t, as in AdaGrad [Duchi et al., 2011], it slows down over time, but depending on the choice of the initial learning rate η it might take an arbitrarily large number of steps to reach the minimum. Also, notice that in this case the time gradient descent needs to reach the vicinity of the minimum is not influenced in any way by momentum terms or by learning rates that adapt to the norm of the past gradients, because the gradients are all the same. The same holds for second-order methods: the function in the figure lacks any curvature, so these methods cannot be used. Even approaches based on the reduction of the variance in the gradients, e.g. [Johnson and Zhang, 2013], do not give any advantage here, because the subgradients are deterministic.

Figure 1 (right) shows the "effective learning rate" of COCOB, that is η̃_t := w_t / sqrt(Σ_{i=1}^t g_i²). This is the learning rate we would have to use in AdaGrad to obtain the same behaviour as COCOB. We see a very interesting effect: the learning rate is neither constant nor monotonically increasing or decreasing. Rather, it is big when we are far from the optimum and small when we are close to it. However, we would like to stress that this behaviour has not been coded into the algorithm; rather, it is a side-effect of having the optimal convergence rate. We will show in Section 7 that this theoretical gain is confirmed in the empirical results.

²The AdaGrad variant used in deep learning does not have a convergence guarantee, because no projections are used. Hence, we report the oracle bound for the case in which projections inside the hypercube with dimensions |w*_i| are used.

Algorithm 2 COCOB-Backprop
1: Input: α > 0 (default value = 100); w_1 ∈ R^d (initial parameters); T (maximum number of iterations); F (function to minimize)
2: Initialize: L_{0,i} ← 0, G_{0,i} ← 0, Reward_{0,i} ← 0, θ_{0,i} ← 0, i = 1, ..., number of parameters
3: for t = 1, 2, ..., T do
4:   Get a (negative) stochastic subgradient g_t such that E[g_t] ∈ ∂[−F(w_t)]
5:   for each i-th parameter in the network do
6:     Update the maximum observed scale: L_{t,i} ← max(L_{t−1,i}, |g_{t,i}|)
7:     Update the sum of the absolute values of the subgradients: G_{t,i} ← G_{t−1,i} + |g_{t,i}|
8:     Update the reward: Reward_{t,i} ← max(Reward_{t−1,i} + (w_{t,i} − w_{1,i}) g_{t,i}, 0)
9:     Update the sum of the gradients: θ_{t,i} ← θ_{t−1,i} + g_{t,i}
10:    Calculate the parameters: w_{t,i} ← w_{1,i} + θ_{t,i} / (L_{t,i} max(G_{t,i} + L_{t,i}, αL_{t,i})) · (L_{t,i} + Reward_{t,i})
11:   end for
12: end for
13: Return w_T

6 Backprop and Coin Betting
The algorithm described in the previous section is guaranteed to converge at the optimal rate for non-smooth functions and does not require a learning rate. However, it still needs to know the maximum range of the gradients in each coordinate. Note that, due to the effect of vanishing gradients, each layer will have a different range of the gradients [Hochreiter, 1991]. Also, the weights of the network can grow over time, increasing the value of the gradients too. Hence, it would be impossible to know the range of each gradient beforehand and use any strategy based on betting. Following the previous literature, e.g. [Kingma and Ba, 2015], we propose a variant of COCOB better suited to optimizing deep networks. We name it COCOB-Backprop; its pseudocode is in Algorithm 2. Although this version lacks the backing of a theoretical guarantee, it is still effective in practice, as we show experimentally in Section 7. There are a few differences between COCOB and COCOB-Backprop.
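Before walking through those differences, here is a minimal NumPy sketch of one Algorithm 2 update (my reading of the pseudocode, not the authors' TensorFlow implementation; the tiny floor on the denominator is an added guard for the all-zero-gradient corner case):

```python
import numpy as np

def cocob_backprop_step(w, w1, g, state, alpha=100.0):
    """One per-parameter update of COCOB-Backprop (Algorithm 2).

    g is the (negative) stochastic subgradient, with the same sign
    convention as Algorithm 1; state = (L, G, reward, theta).
    The 1e-12 floor in `denom` guards the L = 0 corner case and is an
    implementation choice, not part of the pseudocode.
    """
    L, G, reward, theta = state
    L = np.maximum(L, np.abs(g))                      # line 6: running scale
    G = G + np.abs(g)                                 # line 7
    reward = np.maximum(reward + (w - w1) * g, 0.0)   # line 8: clip at zero
    theta = theta + g                                 # line 9
    denom = np.maximum(L * np.maximum(G + L, alpha * L), 1e-12)
    w_new = w1 + theta / denom * (L + reward)         # line 10
    return w_new, (L, G, reward, theta)

# Example: the same 1-D function F(x) = |x - 10| as before.
w1 = np.zeros(1)
w = w1.copy()
state = (np.zeros(1), np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(600):
    g = -np.sign(w - 10.0)            # negative subgradient of F
    w, state = cocob_backprop_step(w, w1, g, state)
```

With the default α = 100 and a unit first gradient, the first step moves each parameter by exactly 1/α, matching the role of α discussed next.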
First, we want to be adaptive to the maximum component-wise range of the gradients. Hence, in line 6 we constantly update the values L_{t,i} for each variable. Next, since L_{t−1,i} is no longer assured to be an upper bound on g_{t,i}, we do not have any guarantee that the wealth Reward_{t,i} is non-negative. Thus, we enforce the positivity of the reward in line 8 of Algorithm 2. We also modify the fraction to bet in line 10 by removing the sigmoidal function, because 2σ(2x) − 1 ≈ x for x ∈ [−1, 1]. This choice simplifies the code and always improves the results in our experiments. Moreover, we change the denominator of the fraction to bet so that it is at least αL_{t,i}. This has the effect of restricting the value of the parameters in the first iterations of the algorithm. To better understand this change, consider that, for example, in AdaGrad and Adam with learning rate η the first update is w_{2,i} = w_{1,i} − η sgn(g_{1,i}). Hence, η should have a value smaller than w_{1,i} in order not to "forget" the initial point too quickly. In fact, the initialization is critical to obtaining good results, and moving too far away from it destroys the generalization ability of deep networks. Here, the first update becomes w_{2,i} = w_{1,i} − (1/α) sgn(g_{1,i}), so 1/α should also be small compared to w_{1,i}. Finally, as in previous algorithms, we do not return the average or a random iterate, but just the last one (line 13 in Algorithm 2).

Figure 2: Training cost (cross-entropy) (left) and test error rate (0/1 loss) (right) vs. the number of epochs with two different architectures on MNIST, as indicated in the figure titles. The y-axis is logarithmic in the left plots. Figures best viewed in colors.

7 Empirical Results and Future Work
We run experiments on various datasets and architectures, comparing COCOB with some popular stochastic gradient learning algorithms: AdaGrad [Duchi et al., 2011], RMSProp [Tieleman and Hinton, 2012], Adadelta [Zeiler, 2012], and Adam [Kingma and Ba, 2015].
For all the algorithms but COCOB, we select the learning rate that gives the best training cost a posteriori, using a very fine grid of values³. We implemented⁴ COCOB (following Algorithm 2) in Tensorflow [Abadi et al., 2015] and used the implementations of the other algorithms provided by this deep learning framework. The best value of the learning rate for each algorithm and experiment is reported in the legend. We report both the training cost and the test error but, as in previous work, e.g. [Kingma and Ba, 2015], we focus our empirical evaluation on the former. Indeed, given a large enough neural network it is always possible to overfit the training set, obtaining a very low performance on the test set. Hence, test errors do not depend only on the optimization algorithm.

Digits Recognition. As a first test, we tackle handwritten digit recognition using the MNIST dataset [LeCun et al., 1998a]. It contains 28 × 28 grayscale images, with 60k training and 10k test samples. We consider two different architectures: a fully connected 2-layer network and a Convolutional Neural Network (CNN). In both cases we study the different optimizers on the standard cross-entropy objective for classifying the 10 digits. For the first network we reproduce the structure described in the multi-layer experiment of [Kingma and Ba, 2015]: it has two fully connected hidden layers with 1000 hidden units each and ReLU activations, with a mini-batch size of 100. The weights are initialized with a centered truncated normal distribution with standard deviation 0.1; the same small value, 0.1, is also used to initialize the biases. The CNN architecture follows the Tensorflow tutorial⁵: two alternating stages of 5 × 5 convolutional filters and 2 × 2 max pooling are followed by a fully connected layer of 1024 rectified linear units (ReLU). To reduce overfitting, 50% dropout noise is used during training.
³[0.00001, 0.000025, 0.00005, 0.000075, 0.0001, 0.00025, 0.0005, 0.00075, 0.001, 0.0025, 0.005, 0.0075, 0.01, 0.02, 0.05, 0.075, 0.1]
⁴https://github.com/bremen79/cocob
⁵https://www.tensorflow.org/get_started/mnist/pros

Figure 3: Training cost (cross-entropy) (left) and test error rate (0/1 loss) (right) vs. the number of epochs on CIFAR-10. The y-axis is logarithmic in the left plots. Figures best viewed in colors.

Figure 4: Training cost (left) and test cost (right), measured as average per-word perplexity, vs. the number of epochs on the PTB word-level language modeling task. Figures best viewed in colors.

Training cost and test error rate as functions of the number of training epochs are reported in Figure 2. With both architectures, the training cost of COCOB decreases at the same rate as the best tuned competitor algorithms. The training performance of COCOB is also reflected in its associated test error, which appears better than or on par with the other algorithms.

Object Classification. We use the popular CIFAR-10 dataset [Krizhevsky, 2009] to classify 32 × 32 RGB images across 10 object categories. The dataset has 60k images in total, split into a training/test set of 50k/10k samples. For this task we used the network defined in the Tensorflow CNN tutorial⁶. It starts with two convolutional layers with 64 kernels of dimension 5 × 5 × 3, each followed by a 3 × 3 max pooling with stride of 2 and by local response normalization as in Krizhevsky et al. [2012]. Two more fully connected layers, of 384 and 192 rectified linear units respectively, complete the architecture, which ends with a standard softmax cross-entropy classifier.
We use a batch size of 128, and the input images are simply pre-processed by whitening. Differently from the Tensorflow tutorial, we do not apply random image distortion for data augmentation. The obtained results are shown in Figure 3. Here, with respect to the training cost, our learning-rate-free COCOB performs on par with the best competitors. For all the algorithms, there is a good correlation between the test performance and the training cost. COCOB and its best competitor, AdaDelta, show similar classification results that differ on average by ∼0.008 in error rate.

Word-level Prediction with RNN. Here we train a Recurrent Neural Network (RNN) on a language modeling task. Specifically, we conduct word-level prediction experiments on the Penn Tree Bank (PTB) dataset [Marcus et al., 1993], using its 929k training words and 73k validation words. We adopted the medium LSTM [Hochreiter and Schmidhuber, 1997] network architecture described in Zaremba et al. [2014]: it has 2 layers with 650 units per layer and parameters initialized uniformly in [−0.05, 0.05]; a dropout of 50% is applied on the non-recurrent connections, and the norm of the gradients (normalized by the mini-batch size of 20) is clipped at 5.

⁶https://www.tensorflow.org/tutorials/deep_cnn

We show the obtained results in terms of average per-word perplexity in Figure 4. In this task COCOB performs as well as AdaGrad and Adam with respect to the training cost, and much better than the other algorithms. In terms of test performance, COCOB, Adam, and AdaGrad all show an overfitting behaviour, indicated by the perplexity slowly growing after having reached its minimum. AdaGrad is the least affected by this issue and presents the best results, followed by COCOB, which outperforms all the other methods. We stress again that the test performance does not depend only on the optimization algorithm used in training, and that early stopping may mitigate the overfitting effect.
Summary of the Empirical Evaluation and Future Work. Overall, COCOB has a training performance that is on par with or better than state-of-the-art algorithms with perfectly tuned learning rates. The test error appears to depend on other factors too, with equal training errors corresponding to different test errors. We would also like to stress that in these experiments, contrary to some previously reported empirical results on similar datasets and networks, the difference between the competitor algorithms is minimal or non-existent when they are tuned on a very fine grid of learning rate values. Indeed, the very similar performance of these methods seems to indicate that all the algorithms are inherently doing the same thing, despite their different internal structures and motivations. Future, more detailed empirical work will focus on unveiling the common structure of these algorithms that gives rise to this behavior. In the future, we also plan to extend the theory of COCOB beyond τ-weakly-quasi-convex functions, characterizing the non-convexity present in deep networks. Also, it would be interesting to evaluate a possible integration of the betting framework with second-order methods.

Acknowledgments
The authors thank the Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system, which was made possible by a $1.4M National Science Foundation grant (#1531492). The authors also thank Akshay Verma for the help with the TensorFlow implementation and Matej Kristan for reporting a bug in the pseudocode in the previous version of the paper. T.T. was supported by the ERC grant 637076 - RoboExNovo. F.O. is partly supported by a Google Research Award.

References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y.
Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
S.-I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In G. Montavon, G. B. Orr, and K.-R. Müller, editors, Neural Networks: Tricks of the Trade: Second Edition, pages 437–478. Springer, Berlin, Heidelberg, 2012.
A. Cutkosky and K. Boahen. Online learning without prior information. In Conference on Learning Theory (COLT), pages 643–677, 2017.
A. Cutkosky and K. A. Boahen. Online convex optimization with unconstrained domains and losses. In Advances in Neural Information Processing Systems (NIPS), pages 748–756, 2016.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
M. Hardt, T. Ma, and B. Recht. Gradient descent learns linear dynamical systems. arXiv preprint arXiv:1609.05191, 2016.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
R. Johnson and T. Zhang.
Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems (NIPS), pages 315–323, 2013.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998a. URL http://yann.lecun.com/exdb/mnist/.
Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. Springer, 1998b.
M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
H. B. McMahan and F. Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Conference on Learning Theory (COLT), pages 1020–1039, 2014.
Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
F. Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems (NIPS), pages 1806–1814, 2013.
F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems (NIPS), pages 1116–1124, 2014.
F. Orabona and D. Pal.
Scale-free algorithms for online linear optimization. In International Conference on Algorithmic Learning Theory (ALT), pages 287–301. Springer, 2015.
F. Orabona and D. Pal. Coin betting and parameter-free online learning. In Advances in Neural Information Processing Systems (NIPS), pages 577–585, 2016.
F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2015.
S. Ross, P. Mineiro, and J. Langford. Normalized online learning. In Proc. of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In International Conference on Machine Learning (ICML), pages 343–351, 2013.
M. Streeter and H. B. McMahan. Less regret via online conditioning. arXiv preprint arXiv:1002.4862, 2010.
M. Streeter and H. B. McMahan. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems (NIPS), pages 2402–2410, 2012.
I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning (ICML), pages 1139–1147, 2013.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
S. Wright and J. Nocedal. Numerical Optimization. Springer, 1999.
W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
M. D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In International Conference on Machine Learning (ICML), pages 919–926, 2004.
Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis

Jian Zhao1,2*† Lin Xiong3 Karlekar Jayashree3 Jianshu Li1 Fang Zhao1 Zhecan Wang4† Sugiri Pranata3 Shengmei Shen3 Shuicheng Yan1,5 Jiashi Feng1
1National University of Singapore 2National University of Defense Technology 3Panasonic R&D Center Singapore 4Franklin W. Olin College of Engineering 5Qihoo 360 AI Institute
{zhaojian90, jianshu}@u.nus.edu {lin.xiong, karlekar.jayashree, sugiri.pranata, shengmei.shen}@sg.panasonic.com zhecan.wang@students.olin.edu {elezhf, eleyans, elefjia}@u.nus.edu

Abstract
Synthesizing realistic profile faces is promising for training deep pose-invariant models for large-scale unconstrained face recognition more efficiently, by populating samples with extreme poses and avoiding tedious annotations. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between the distributions of synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces, while preserving the identity information during the realism refinement. The dual agents are specifically designed for distinguishing real vs. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images, and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose and texture, preserve identity, and stabilize the training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term.
Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms state-of-the-art methods on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark. In addition, the proposed DA-GAN is also promising as a new approach to solving generic transfer learning problems more effectively. DA-GAN is the foundation of our submissions to the NIST IJB-A 2017 face recognition competitions, where we won the 1st places on the tracks of verification and identification.

1 Introduction
Unconstrained face recognition is a very important yet extremely challenging problem. In recent years, deep learning techniques have significantly advanced large-scale unconstrained face recognition (8; 19; 27; 34; 29; 16), arguably driven by the rapidly increasing resource of face images. However, labeling a huge amount of data for feeding supervised deep learning algorithms is undoubtedly expensive and time-consuming. Moreover, as often observed in real-world scenarios, the pose distribution of available face recognition datasets (e.g., IJB-A (15)) is usually unbalanced and long-tailed, with large pose variations, as shown in Figure 1a.

*Homepage: https://zhaoj9014.github.io/. †Jian Zhao and Zhecan Wang were interns at Panasonic R&D Center Singapore during this work.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Comparison of pose distribution in the IJB-A (15) dataset w/o and w/ DA-GAN. (a) Extremely unbalanced pose distribution. (b) Well balanced pose distribution with DA-GAN.

This has become a main obstacle for further pushing unconstrained face recognition performance.
To address this critical issue, several research attempts (32; 31; 35) have been made to employ synthetic profile face images as augmented extra data to balance the pose variations. However, naively learning from synthetic images can be problematic due to the distribution discrepancy between synthetic and real face images: synthetic data is often not realistic enough, with artifacts and severe texture losses. The low-quality synthesized face images would mislead the learned face recognition model to overfit to fake information only present in synthetic images and fail to generalize well on real faces. Brute-force improvement of the simulator's realism is often expensive in terms of time and manpower, if possible at all.

In this work, we propose a novel Dual-Agent Generative Adversarial Network (DA-GAN) for profile view synthesis, where the dual agents focus on discriminating the realism of synthetic profile face images from a simulator using unlabeled real data, and on perceiving the identity information, respectively. In other words, the generator needs to play against a real vs. fake discriminator as well as an identity discriminator simultaneously, to generate high-quality faces that are really useful for unconstrained face recognition. In our method, a synthetic profile face image with a pre-specified pose is generated by a 3D morphable face simulator. DA-GAN takes this synthetic face image as input and refines it through a conditioned generative model. We leverage a Fully Convolutional Network (FCN) (17) that operates on the pixel level as the generator to generate high-resolution face images, and an auto-encoder network as the discriminator. Different from vanilla GANs, DA-GAN introduces an auxiliary discriminative agent to enforce the generator to preserve the identity information of the generated faces, which is critical for face recognition applications. In addition, DA-GAN also imposes a pose perception loss to preserve pose and texture.
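To make the interplay of the objectives concrete, here is a rough Python sketch of how the adversarial, identity perception, and pose perception terms might be combined in one training step. The equilibrium variable k_t follows the BE-GAN recipe of Berthelot et al. (2), which the paper builds on; the weights lambda_ip, lambda_pp, gamma, and lambda_k are illustrative assumptions, not values from the paper.

```python
def dual_agent_losses(l_real, l_fake, l_ip, l_pp, k, gamma=0.5,
                      lambda_k=0.001, lambda_ip=1.0, lambda_pp=1.0):
    """Combine the dual-agent objectives for one training step (sketch).

    l_real / l_fake: auto-encoder reconstruction losses of the
    discriminator on real and refined images (scalars);
    l_ip: identity perception loss; l_pp: pose perception loss;
    k: BE-GAN equilibrium variable; gamma: diversity ratio.
    All weights here are illustrative, not taken from the paper.
    """
    d_loss = l_real - k * l_fake                           # agent 1: real vs. fake
    g_loss = l_fake + lambda_ip * l_ip + lambda_pp * l_pp  # generator objective
    # BE-GAN-style equilibrium control, clipped to [0, 1].
    k = min(max(k + lambda_k * (gamma * l_real - l_fake), 0.0), 1.0)
    return d_loss, g_loss, k
```

The point of the sketch is only the structure: the generator is penalized by both discriminator agents at once, while k_t balances how hard the discriminator pushes on refined images.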
The refined synthetic profile face images present photorealistic quality with well-preserved identity information, and are used as augmented data together with real face images for pose-invariant feature learning. To stabilize the training process of such a dual-agent GAN model, we impose a boundary equilibrium regularization term.

Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms state-of-the-art methods on the large-scale and challenging National Institute of Standards and Technology (NIST) IARPA Janus Benchmark A (IJB-A) (15) unconstrained face recognition benchmark. DA-GAN further led us to win the 1st places on the verification and identification tracks in the NIST IJB-A 2017 face recognition competitions. This strong evidence shows that our "recognition via generation" framework is effective and generic, and we expect it to benefit more face recognition and transfer learning applications in the real world. Our contributions are summarized as follows.
• We propose a novel Dual-Agent Generative Adversarial Network (DA-GAN) for photorealistic and identity preserving profile face synthesis, even under extreme poses.
• The proposed dual-agent architecture effectively combines prior knowledge from the data distribution (adversarial training) and domain knowledge of faces (pose and identity perception losses) to exactly recover the lost information inherent in projecting a 3D face into the 2D image space.
• We present qualitative and quantitative experiments showing the feasibility of a "recognition via generation" framework, achieving the top performance on the challenging NIST IJB-A (15) unconstrained face recognition benchmark without extra human annotation efforts, by training deep neural networks on the refined face images together with real images.

Figure 2: Overview of the proposed DA-GAN architecture. The simulator (upper panel) extracts the face RoI, localizes landmark points, and produces synthesized faces with arbitrary poses, which are fed to DA-GAN for realism refinement. DA-GAN uses a fully convolutional skip-net as the generator (middle panel) and an auto-encoder as the discriminator (bottom panel). The dual agents focus on both discriminating real vs. fake (minimizing the loss Ladv) and preserving identity information (minimizing the loss Lip). Best viewed in color.

To our best knowledge, our proposed DA-GAN is the first model that is effective for automatically generating augmented data for face recognition in challenging conditions and indeed improves performance. DA-GAN won the 1st places on the verification and identification tracks in the NIST IJB-A 2017 face recognition competitions.

2 Related works
As one of the most significant advancements in the research on deep generative models (14; 26), GAN has drawn substantial attention from the deep learning and computer vision community since it was first introduced by Goodfellow et al. (10). The GAN framework learns a generator network and a discriminator network with competing losses.
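The competing objective referred to here is the standard two-player game of Goodfellow et al. (10):

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The dual-agent design keeps this adversarial game but adds a second, identity-aware agent on the discriminator side.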
This min-max two-player game provides a simple yet powerful way to estimate the target distribution and to generate novel image samples. Mirza and Osindero (21) introduce the conditional version of GAN, conditioning both the generator and the discriminator for effective image tagging. Berthelot et al. (2) propose the Boundary Equilibrium GAN (BE-GAN) framework, paired with a loss derived from the Wasserstein distance, which provides a way of controlling the trade-off between image diversity and visual quality. These successful applications of GAN motivate us to develop profile view synthesis methods based on GAN. However, the generators of previous methods usually focus on generating images based on a random noise vector or conditioned data, and the discriminator has only a single agent to distinguish real vs. fake. Thus, in contrast to our method, the generated images do not carry any discriminative information that can be used for training a deep learning based recognition model. This clearly separates our method from previous GAN-based attempts.

Moreover, different from the previous InfoGAN (5), which does not have a classification agent, and Auxiliary Classifier GAN (AC-GAN) (22), which only performs classification, our proposed DA-GAN performs face verification with integrated data augmentation. DA-GAN is a novel and practical model for efficient data augmentation, and it is really effective in practice, as demonstrated in Sec. 4. DA-GAN generates the data in a completely different way from InfoGAN (5) and AC-GAN (22), which generate images from a random noise input or abstract semantic labels. Therefore, unlike our model, those existing GAN-like models cannot exploit useful and rich prior information (e.g., the shape and pose of faces) for effective data generation and augmentation. They cannot fully control the generated images.
In contrast, DA-GAN can fully control the generated images and adjust the face pose (e.g., yaw angle) distribution, which is extremely unbalanced in real-world scenarios. DA-GAN can facilitate training more accurate face analysis models to solve the large pose variation problem and other relevant problems in unconstrained face recognition. Our proposed DA-GAN shares a similar idea with TP-GAN (13), which considers face synthesis based on the GAN framework, and Apple GAN (28), which considers learning from simulated and unsupervised images through adversarial training. Our method differs from them in the following aspects: 1) DA-GAN aims to synthesize photorealistic and identity-preserving profile faces to address the large-variance issue in unconstrained face recognition, whereas TP-GAN (13) tries to recover a frontal face from a profile view and Apple GAN (28) is designed for much simpler scenarios (e.g., eye and hand image refinement); 2) TP-GAN (13) and Apple GAN (28) suffer from categorical information loss, which limits their effectiveness in promoting recognition performance. In contrast, our proposed DA-GAN architecture effectively overcomes this issue by introducing dual discriminator agents.

3 Dual-Agent GAN

3.1 Simulator

The main challenge for unconstrained face recognition lies in the large variation and the small number of profile face images for each subject, which is the main obstacle to learning a well-performing pose-invariant model. To address this problem, we simulate face images with various pre-defined poses (i.e., yaw angles), which explicitly augments the available training data without extra human annotation efforts and balances the pose distribution. In particular, as shown in Figure 2, we first extract the face Region of Interest (RoI) from each available real face image, and estimate 68 facial landmark points using the Recurrent Attentive-Refinement (RAR) framework (31), which is robust to illumination changes and does not require a shape model in advance.
We then estimate a transformation matrix between the detected 2D landmarks and the corresponding landmarks in the 3D Morphable Model (3D MM) using a least-squares fit (35). Finally, we simulate profile face images in various poses with pre-defined yaw angles. However, the performance of the simulator decreases dramatically under large poses (e.g., yaw angles in $[-90°, -60°] \cup [+60°, +90°]$) due to artifacts and severe texture losses, misleading the network to overfit to fake information present only in synthetic images and fail to generalize well on real data.

3.2 Generator

In order to generate photorealistic and identity-preserving profile-view face images that are truly beneficial for unconstrained face recognition, we further refine the above-mentioned simulated profile face images with the proposed DA-GAN. Inspired by the recent success of FCN-based methods on image-to-image applications (17; 9) and the leading performance of skip-nets on recognition tasks (12; 33), we modify a skip-net (ResNet (12)) into an FCN-based architecture as the generator $G_\theta : \mathbb{R}^{H \times W \times C} \to \mathbb{R}^{H \times W \times C}$ of DA-GAN to learn a highly non-linear transformation for profile face image refinement, where $\theta$ are the network parameters of the generator, and $H$, $W$, and $C$ denote the image height, width, and channel number, respectively. Contextual information from global and local regions complements each other and naturally benefits face recognition. The hierarchical features within a skip-net are multi-scale in nature due to the increasing receptive field sizes, and are combined together via skip connections. Such a combined representation comprehensively maintains the contextual information, which is crucial for artifact removal, fragment stitching, and texture padding. Moreover, the FCN-based architecture is advantageous for generating high-resolution image-level results. More details are provided in Sec. 4.
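The transformation-matrix step above is a standard linear least-squares problem: solve for a camera matrix that maps the homogeneous 3D model landmarks onto the detected 2D landmarks. A minimal numpy sketch on synthetic points (the function name and the affine-camera simplification are our illustration, not the paper's implementation, which fits against the 3D Morphable Model of (35)):

```python
import numpy as np

def fit_projection_matrix(pts3d, pts2d):
    """Least-squares fit of a 2x4 affine camera matrix P such that
    pts2d ~ P @ [pts3d; 1].  pts3d: (n, 3), pts2d: (n, 2)."""
    n = pts3d.shape[0]
    X = np.hstack([pts3d, np.ones((n, 1))])       # homogeneous 3D points, (n, 4)
    # Solve X @ P.T ~ pts2d in the least-squares sense.
    P_T, *_ = np.linalg.lstsq(X, pts2d, rcond=None)
    return P_T.T                                   # (2, 4)

# Synthetic check: project known 3D points with a known matrix and recover it.
rng = np.random.default_rng(0)
pts3d = rng.normal(size=(68, 3))                   # stand-in for 68 3D MM landmarks
P_true = rng.normal(size=(2, 4))
pts2d = np.hstack([pts3d, np.ones((68, 1))]) @ P_true.T
P_est = fit_projection_matrix(pts3d, pts2d)
print(np.allclose(P_est, P_true, atol=1e-8))       # True on this noise-free example
```

With noisy detected landmarks the same call returns the minimum-squared-error fit rather than an exact recovery.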
More formally, let the simulated profile face image be denoted by $x$ and the refined face image be denoted by $\tilde{x}$; then

$$\tilde{x} := G_\theta(x) \qquad (1)$$

The key requirements for DA-GAN are that the refined face image $\tilde{x}$ should look like a real face image in appearance while preserving the intrinsic identity and pose information from the simulator. To this end, we propose to learn $\theta$ by minimizing a combination of three losses:

$$L_{G_\theta} = (-L_{adv} + \lambda_1 L_{ip}) + \lambda_2 L_{pp}, \qquad (2)$$

where $L_{adv}$ is the adversarial loss for adding realism to the synthetic images and alleviating artifacts, $L_{ip}$ is the identity perception loss for preserving the identity information, and $L_{pp}$ is the pose perception loss for preserving pose and texture information. $L_{pp}$ is a pixel-wise $\ell_1$ loss, which is introduced to enforce pose (i.e., yaw angle) consistency for the synthetic profile face images before and after refinement via DA-GAN:

$$L_{pp} = \frac{1}{W \times H} \sum_{i}^{W} \sum_{j}^{H} |x_{i,j} - \tilde{x}_{i,j}|, \qquad (3)$$

where $i, j$ traverse all pixels of $x$ and $\tilde{x}$. Although $L_{pp}$ may introduce some over-smoothing effects in the refined results, it is still an essential part for preserving both pose and texture information and for accelerating optimization. To add realism to the synthetic images so that they really benefit face recognition performance, we need to narrow the gap between the distributions of synthetic and real images. An ideal generator will make it impossible to classify a given image as real or refined with high confidence. Meanwhile, preserving the identity information is the essential and critical part for recognition. An ideal generator will produce refined face images that have small intra-class distance and large inter-class distance in the feature space spanned by the deep neural networks for unconstrained face recognition. These requirements motivate the use of an adversarial pixel-wise discriminator with dual agents.
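For concreteness, the pose-perception term of Eq. (3) and the combined generator objective of Eq. (2) can be written as a small numpy sketch; the $\lambda$ values below are placeholders, not the paper's settings, and the adversarial and identity terms are passed in as precomputed scalars:

```python
import numpy as np

def pose_perception_loss(x, x_tilde):
    """Pixel-wise l1 loss of Eq. (3), averaged over the W x H grid
    (and any remaining channel axis)."""
    return np.mean(np.abs(x - x_tilde))

def generator_loss(l_adv, l_ip, l_pp, lam1=0.5, lam2=0.1):
    """Combined generator objective of Eq. (2): (-L_adv + lam1*L_ip) + lam2*L_pp.
    lam1/lam2 are placeholder weights for illustration."""
    return (-l_adv + lam1 * l_ip) + lam2 * l_pp

x = np.zeros((224, 224, 3))            # simulated profile face (toy values)
x_tilde = np.full((224, 224, 3), 0.5)  # refined output (toy values)
l_pp = pose_perception_loss(x, x_tilde)  # -> 0.5
print(generator_loss(l_adv=1.0, l_ip=2.0, l_pp=l_pp))
```

Note the sign convention: the generator minimizes $-L_{adv}$, i.e. it is rewarded when the discriminator reconstructs the refined image poorly relative to real images.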
3.3 Dual-agent discriminator

To incorporate prior knowledge of the profile-face distribution and domain knowledge of the identity distribution, we introduce a discriminator with dual agents for distinguishing real vs. fake and identities simultaneously. To keep this process as simple as possible and avoid typical GAN tricks, we leverage an auto-encoder as the discriminator $D_\phi : \mathbb{R}^{H \times W \times C} \to \mathbb{R}^{H \times W \times C}$, which first projects the input real / fake face image into a high-dimensional feature space through several Convolution (Conv) and Fully Connected (FC) layers of the encoder, and then transforms it back to an image-level representation through several Deconvolution (Deconv) and Conv layers of the decoder, as shown in Figure 2. Here $\phi$ are the network parameters of the discriminator. More details are provided in Sec. 4. One agent of $D_\phi$ is trained with $L_{adv}$ to minimize the Wasserstein distance with a boundary equilibrium regularization term for maintaining a balance between the generator and discriminator losses, as first introduced in (2):

$$L_{adv} = \sum_{j} |y_j - D_\phi(y_j)| - k_t \sum_{i} |\tilde{x}_i - D_\phi(\tilde{x}_i)|, \qquad (4)$$

where $y$ denotes the real face image, and $k_t$ is a boundary equilibrium regularization term using Proportional Control Theory to maintain the equilibrium $\mathbb{E}[\sum_i |\tilde{x}_i - D_\phi(\tilde{x}_i)|] = \gamma \, \mathbb{E}[\sum_j |y_j - D_\phi(y_j)|]$, with $\gamma$ the diversity ratio. Here $k_t$ is updated by

$$k_{t+1} = k_t + \alpha \Big( \gamma \sum_{j} |y_j - D_\phi(y_j)| - \sum_{i} |\tilde{x}_i - D_\phi(\tilde{x}_i)| \Big), \qquad (5)$$

where $\alpha$ is the learning rate (proportional gain) for $k$. In essence, Eq. (5) can be thought of as a form of closed-loop feedback control in which $k_t$ is adjusted at each step. $L_{adv}$ serves as supervision that pushes the refined face image to reside on the manifold of real images. It prevents the blurry effect, alleviates artifacts, and produces visually pleasing results. The other agent of $D_\phi$ is trained with $L_{ip}$ to preserve the identity discriminability of the refined face images.
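The proportional-control update of Eq. (5) reduces to a one-line rule once the two reconstruction errors are computed. A scalar sketch (the clipping of $k_t$ to [0, 1] is our assumption, common in BE-GAN implementations but not stated in the text):

```python
def update_kt(k_t, err_real, err_fake, gamma=0.5, alpha=0.001):
    """One step of Eq. (5): k_{t+1} = k_t + alpha * (gamma * err_real - err_fake),
    where err_real = sum_j |y_j - D(y_j)| and err_fake = sum_i |x~_i - D(x~_i)|.
    gamma is the diversity ratio, alpha the proportional gain.
    Clipping to [0, 1] is our assumption, not stated in the text."""
    k_next = k_t + alpha * (gamma * err_real - err_fake)
    return min(max(k_next, 0.0), 1.0)

# If the fake reconstruction error falls below gamma times the real error,
# k grows, increasing the weight on the fake term in L_adv (Eq. 4).
k = update_kt(0.0, err_real=10.0, err_fake=2.0)
print(k)
```

This keeps the discriminator from winning outright: when refined images become easy to reconstruct (small fake error), their term is up-weighted until the equilibrium of Eq. (4)-(5) is restored.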
Specifically, we define $L_{ip}$ with the multi-class cross-entropy loss based on the output from the bottleneck layer of $D_\phi$:

$$L_{ip} = \frac{1}{N} \sum_{j} -\big( Y_j \log(D_\phi(y_j)) + (1 - Y_j) \log(1 - D_\phi(y_j)) \big) + \frac{1}{N} \sum_{i} -\big( Y_i \log(D_\phi(\tilde{x}_i)) + (1 - Y_i) \log(1 - D_\phi(\tilde{x}_i)) \big), \qquad (6)$$

where $Y$ is the identity ground truth. Thus, minimizing $L_{ip}$ encourages the deep features of refined face images belonging to the same identity to be close to each other. If one visualizes the learned deep features in the high-dimensional space, the features of the refined face images form several compact clusters, each with small variance, and each cluster may be far away from the others. In this way, the refined face images are endowed with well-preserved identity information. We also conduct experiments for illustration. Using $L_{ip}$ alone makes the results prone to annoying artifacts, because the search for a local minimum of $L_{ip}$ may go through a path that resides outside the manifold of natural face images. Thus, we combine $L_{ip}$ with $L_{adv}$ as the final objective function for $D_\phi$ to ensure that the search stays on that manifold and produces photorealistic and identity-preserving face images:

$$L_{D_\phi} = L_{adv} + \lambda_1 L_{ip}. \qquad (7)$$

3.4 Loss functions for training

The goal of DA-GAN is to use a set of unlabeled real face images $y$ to learn a generator $G_\theta$ that adaptively refines a simulated profile face image $x$. The overall objective function for DA-GAN is:

$$\begin{cases} L_{D_\phi} = L_{adv} + \lambda_1 L_{ip}, \\ L_{G_\theta} = (-L_{adv} + \lambda_1 L_{ip}) + \lambda_2 L_{pp}. \end{cases} \qquad (8)$$

We optimize DA-GAN by alternately optimizing $D_\phi$ and $G_\theta$ at each training iteration. Similar to (2), we measure the convergence of DA-GAN using the boundary equilibrium concept: we can frame the convergence process as finding the closest reconstruction $\sum_j |y_j - D_\phi(y_j)|$ with the lowest absolute value of the instantaneous process error from Proportional Control Theory, $|\gamma \sum_j |y_j - D_\phi(y_j)| - \sum_i |\tilde{x}_i - D_\phi(\tilde{x}_i)||$. This measurement can be formulated as:

$$L_{con} = \sum_{j} |y_j - D_\phi(y_j)| + \Big| \gamma \sum_{j} |y_j - D_\phi(y_j)| - \sum_{i} |\tilde{x}_i - D_\phi(\tilde{x}_i)| \Big|. \qquad (9)$$
$L_{con}$ can be used to determine when the network has reached its final state or whether the model has collapsed. A detailed algorithm for the training procedure is provided in supplementary material Sec. 1.

4 Experiments

4.1 Experimental settings

Benchmark dataset: Besides synthesizing natural-looking profile-view face images, the proposed DA-GAN also aims to generate identity-preserving face images for accurate face-centric analysis with state-of-the-art deep learning models. Therefore, we evaluate the possibility of “recognition via generation" with DA-GAN on the most challenging unconstrained face recognition benchmark dataset, IJB-A (15). IJB-A (15) contains both images and video frames from 500 subjects, with 5,397 images and 2,042 videos that are split into 20,412 frames (11.4 images and 4.2 videos per subject), captured from in-the-wild environments to avoid the near-frontal bias, along with protocols for evaluation of both verification (1:1 comparison) and identification (1:N search) tasks. For training and testing, 10 random splits are provided by each protocol. More details are provided in supplementary material Sec. 2.

Figure 3: Quality of refined results w.r.t. the network convergence measurement $L_{con}$.

Figure 4: Qualitative analysis of DA-GAN. (a) Refined results of DA-GAN. (b) Feature space of real faces and DA-GAN synthetic faces.

Reproducibility: The proposed method is implemented by extending the Keras framework (6). All networks are trained on three NVIDIA GeForce GTX TITAN X GPUs with 12GB memory each. Please refer to supplementary material Sec. 3 & 4 for full details on network architectures and training procedures.
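The convergence measurement $L_{con}$ of Eq. (9), plotted against iterations in Figure 3, is cheap to compute from the same reconstruction errors used in Eqs. (4)-(5); a scalar sketch:

```python
def convergence_measure(err_real, err_fake, gamma=0.5):
    """L_con of Eq. (9): real-image reconstruction error plus the absolute
    instantaneous process error |gamma * err_real - err_fake|."""
    return err_real + abs(gamma * err_real - err_fake)

# At equilibrium (err_fake == gamma * err_real) the measure reduces to the
# real reconstruction error alone, so a falling L_con tracks both the
# generator/discriminator balance and reconstruction fidelity.
print(convergence_measure(10.0, 5.0))   # equilibrium: reduces to err_real
print(convergence_measure(10.0, 2.0))   # off-equilibrium: penalty added
```

In practice one would log this value each iteration and stop (or flag a collapse) when it plateaus or diverges.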
4.2 Results and discussions

Qualitative results – DA-GAN: To illustrate the compelling perceptual results generated by the proposed DA-GAN, we first visualize the quality of the refined results w.r.t. the network convergence measurement $L_{con}$, as shown in Figure 3. As can be seen, our DA-GAN ensures fast yet stable convergence through the carefully designed optimization scheme and boundary equilibrium regularization term. The network convergence measurement $L_{con}$ correlates well with image fidelity. Most previous works (31; 32; 35) on profile-view synthesis are dedicated to addressing this problem within a pose range of ±60°, because it is commonly believed that with a pose larger than 60° it is difficult for a model to generate faithful profile-view images. Similarly, our simulator is also good at normalizing small-posed faces while suffering severe artifacts and texture losses under large poses (e.g., yaw angles in $[-90°, -60°] \cup [+60°, +90°]$), as shown in the first row for each subject in Figure 4a. However, with enough training data and the proper architecture and objective function design of the proposed DA-GAN, it is in fact feasible to further refine such synthetic profile face images under very large poses into high-quality, natural-looking results, as shown in the second row for each subject in Figure 4a. Compared with the raw simulated faces, the results refined by DA-GAN present good photorealistic quality. More visualized samples are provided in supplementary material Sec. 5. To verify the superiority of DA-GAN as well as the contribution of each component, we also compare the qualitative results produced by the vanilla GAN (10), Apple GAN (28), BE-GAN (2), and three variations of DA-GAN without $L_{adv}$, $L_{ip}$, and $L_{pp}$, respectively. Please refer to supplementary material Sec. 5 for details.

Table 1: Performance comparison of DA-GAN with state-of-the-art methods on the IJB-A (15) verification protocol.
For all metrics, a higher number means better performance. The results are averaged over the 10 testing splits. The symbol “–" means that the result is not reported for that method; standard deviations are not available for some methods. The results of our proposed method are highlighted in bold.

| Method | TAR @ FAR=0.10 | TAR @ FAR=0.01 | TAR @ FAR=0.001 |
| OpenBR (15) | 0.433 ± 0.006 | 0.236 ± 0.009 | 0.104 ± 0.014 |
| GOTS (15) | 0.627 ± 0.012 | 0.406 ± 0.014 | 0.198 ± 0.008 |
| Pooling faces (11) | – | 0.631 | 0.309 |
| LSFS (30) | 0.895 ± 0.013 | 0.733 ± 0.034 | 0.514 ± 0.060 |
| Deep Multi-pose (1) | 0.911 | 0.787 | – |
| DCNNmanual (4) | 0.947 ± 0.011 | 0.787 ± 0.043 | – |
| Triplet Similarity (27) | 0.945 ± 0.002 | 0.790 ± 0.030 | 0.590 ± 0.050 |
| VGG-Face (23) | – | 0.805 ± 0.030 | – |
| PAMs (19) | – | 0.826 ± 0.018 | 0.652 ± 0.037 |
| DCNNfusion (3) | 0.967 ± 0.009 | 0.838 ± 0.042 | – |
| Masi et al. (20) | 0.886 | 0.725 | – |
| Triplet Embedding (27) | 0.964 ± 0.005 | 0.900 ± 0.010 | 0.813 ± 0.020 |
| All-In-One (25) | 0.976 ± 0.004 | 0.922 ± 0.010 | 0.823 ± 0.020 |
| Template Adaptation (8) | 0.979 ± 0.004 | 0.939 ± 0.013 | 0.836 ± 0.027 |
| NAN (34) | 0.978 ± 0.003 | 0.941 ± 0.008 | 0.881 ± 0.011 |
| ℓ2-softmax (24) | 0.984 ± 0.002 | 0.970 ± 0.004 | 0.943 ± 0.005 |
| b1 | 0.989 ± 0.003 | 0.963 ± 0.007 | 0.920 ± 0.006 |
| b2 | 0.978 ± 0.003 | 0.950 ± 0.009 | 0.901 ± 0.008 |
| DA-GAN (ours) | 0.991 ± 0.003 | 0.976 ± 0.007 | 0.930 ± 0.005 |

To gain insight into the identity-preserving quality of our DA-GAN, we further use t-SNE (18) to visualize the deep features of both refined profile faces and real faces in a 2D space in Figure 4b. As can be seen, the refined profile face images present small intra-class distance and large inter-class distance, similar to real faces. This reveals that DA-GAN ensures well-preserved identity information through the auxiliary agent for $L_{ip}$.
Quantitative results – “recognition via generation": To quantitatively verify the superiority of “recognition via generation" with DA-GAN, we conduct unconstrained face recognition (i.e., verification and identification) on the IJB-A (15) benchmark dataset with three different settings. In the three settings, the pre-trained deep recognition models are respectively fine-tuned on the original training data of each split without extra data (baseline 1: b1), the original training data of each split with extra synthetic faces from our simulator (baseline 2: b2), and the original training data of each split with extra refined faces from our DA-GAN (our method: the “recognition via generation" framework based on DA-GAN, DA-GAN for short). The performance comparison of DA-GAN with the two baselines and other state-of-the-art methods on the IJB-A (15) unconstrained face verification and identification protocols is given in Table 1 and Table 2. We observe that even with extra training data, b2 performs worse than b1 on all metrics of both face verification and identification. This demonstrates that naively learning from synthetic images can be problematic due to the gap between synthetic and real image distributions: synthetic data is often not realistic enough, with artifacts and severe texture losses, misleading the network to overfit to fake information present only in synthetic images and fail to generalize well on real data. In contrast, with the injection of photorealistic and identity-preserving faces generated by DA-GAN without extra human annotation efforts, our method outperforms b1 by 1.00% for TAR @ FAR=0.001 in verification, and by 1.50% for FNIR @ FPIR=0.01 and 0.50% for Rank-1 in identification. Our method achieves performance comparable to ℓ2-softmax (24), which employs a much more computationally complex recognition model, even without the fine-tuning or template adaptation procedures that we use.
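The verification metric used throughout (TAR at a fixed FAR) can be computed from raw genuine and impostor match scores as below; this is a generic sketch of the metric with a quantile-based threshold, not the benchmark's official evaluation code:

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far):
    """True accept rate at the score threshold whose false accept rate is `far`.
    Threshold = (1 - far) quantile of impostor scores; TAR = fraction of
    genuine-pair scores accepted at that threshold."""
    thresh = np.quantile(impostor_scores, 1.0 - far)
    return float(np.mean(np.asarray(genuine_scores) >= thresh))

# Toy example: genuine (same-identity) pairs score higher on average than
# impostor (different-identity) pairs.
rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, size=10000)
impostor = rng.normal(0.0, 1.0, size=10000)
print(tar_at_far(genuine, impostor, far=0.01))
```

Tightening the operating point (FAR=0.10 to 0.01 to 0.001) raises the threshold and lowers TAR, which is why the rightmost columns of Table 1 are the hardest and the most informative.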
Moreover, DA-GAN outperforms NAN (34) by 4.90% for TAR @ FAR=0.001 in verification, and by 7.30% for FNIR @ FPIR=0.01 and 1.30% for Rank-1 in identification. These results won the 1st places on the verification and identification tracks in the NIST IJB-A 2017 face recognition competitions³. This verifies the promising potential of the synthetic face images produced by our DA-GAN for the large-scale and challenging unconstrained face recognition problem.

³We submitted our results for both verification and identification protocols to the NIST IJB-A 2017 face recognition competition committee on 29th March, 2017. We received the official notification on our top

Table 2: Performance comparison of DA-GAN with state-of-the-art methods on the IJB-A (15) identification protocol. For the FNIR metric, a lower number means better performance; for the other metrics, a higher number means better performance. The results of our proposed method are highlighted in bold.

| Method | FNIR @ FPIR=0.10 | FNIR @ FPIR=0.01 | Rank-1 | Rank-5 |
| OpenBR (15) | 0.851 ± 0.028 | 0.934 ± 0.017 | 0.246 ± 0.011 | 0.375 ± 0.008 |
| GOTS (15) | 0.765 ± 0.033 | 0.953 ± 0.024 | 0.433 ± 0.021 | 0.595 ± 0.020 |
| B-CNN (7) | 0.659 ± 0.032 | 0.857 ± 0.027 | 0.588 ± 0.020 | 0.796 ± 0.017 |
| LSFS (30) | 0.387 ± 0.032 | 0.617 ± 0.063 | 0.820 ± 0.024 | 0.929 ± 0.013 |
| Pooling faces (11) | – | – | 0.846 | 0.933 |
| Deep Multi-pose (1) | 0.250 | 0.480 | 0.846 | 0.927 |
| DCNNmanual (4) | – | – | 0.852 ± 0.018 | 0.937 ± 0.010 |
| Triplet Similarity (27) | 0.246 ± 0.014 | 0.444 ± 0.065 | 0.880 ± 0.015 | 0.950 ± 0.007 |
| VGG-Face (23) | 0.33 ± 0.031 | 0.539 ± 0.077 | 0.913 ± 0.011 | – |
| PAMs (19) | – | – | 0.840 ± 0.012 | 0.925 ± 0.008 |
| DCNNfusion (3) | 0.210 ± 0.033 | 0.423 ± 0.094 | 0.903 ± 0.012 | 0.965 ± 0.008 |
| Masi et al. (20) | – | – | 0.906 | 0.962 |
| Triplet Embedding (27) | 0.137 ± 0.014 | 0.247 ± 0.030 | 0.932 ± 0.010 | – |
| Template Adaptation (8) | 0.118 ± 0.016 | 0.226 ± 0.049 | 0.928 ± 0.010 | 0.977 ± 0.004 |
| All-In-One (25) | 0.113 ± 0.014 | 0.208 ± 0.020 | 0.947 ± 0.008 | – |
| NAN (34) | 0.083 ± 0.009 | 0.183 ± 0.041 | 0.958 ± 0.005 | 0.980 ± 0.005 |
| ℓ2-softmax (24) | 0.044 ± 0.006 | 0.085 ± 0.041 | 0.973 ± 0.005 | – |
| b1 | 0.068 ± 0.010 | 0.125 ± 0.035 | 0.966 ± 0.006 | 0.987 ± 0.003 |
| b2 | 0.108 ± 0.008 | 0.179 ± 0.042 | 0.960 ± 0.007 | 0.982 ± 0.004 |
| DA-GAN (ours) | 0.051 ± 0.009 | 0.110 ± 0.039 | 0.971 ± 0.007 | 0.989 ± 0.003 |

Finally, we visualize the verification and identification closed-set results for IJB-A (15) split 1 to gain insight into unconstrained face recognition with the proposed “recognition via generation" framework based on DA-GAN. For fully detailed visualization results in high resolution and the corresponding analysis, please refer to supplementary material Sec. 6 & 7.

5 Conclusion

We proposed a novel Dual-Agent Generative Adversarial Network (DA-GAN) for photorealistic and identity-preserving profile face synthesis. DA-GAN combines prior knowledge from the data distribution (adversarial training) and domain knowledge of faces (pose and identity perception losses) to recover the information lost in projecting a 3D face into the 2D image space. DA-GAN can be optimized in a fast yet stable way with an imposed boundary equilibrium regularization term that balances the power of the discriminator against the generator. One promising potential application of the proposed DA-GAN is solving generic transfer learning problems more effectively. Qualitative and quantitative experiments verify the possibility of our “recognition via generation" framework, which achieved top performance on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark without extra human annotation efforts. Based on DA-GAN, we won the 1st places on the verification and identification tracks in the NIST IJB-A 2017 face recognition competitions.
It would be interesting to apply DA-GAN to other transfer learning applications in the future.

Acknowledgement

The work of Jian Zhao was partially supported by China Scholarship Council (CSC) grant 201503170248. The work of Jiashi Feng was partially supported by National University of Singapore startup grant R-263-000-C08-133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112 and NUS IDS grant R-263-000-C67-646. We would like to thank Junliang Xing (Institute of Automation, Chinese Academy of Sciences), Hengzhu Liu, and Xucan Chen (National University of Defense Technology) for helpful discussions.

performance on both tracks on 26th April, 2017. The IJB-A benchmark dataset, relevant information and leaderboard can be found at https://www.nist.gov/programs-projects/face-challenges.

References

[1] W. AbdAlmageed, Y. Wu, S. Rawls, S. Harel, T. Hassner, I. Masi, J. Choi, J. Lekust, J. Kim, P. Natarajan, et al. Face recognition using deep multi-pose representations. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–9, 2016.
[2] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[3] J.-C. Chen, V. M. Patel, and R. Chellappa. Unconstrained face verification using deep CNN features. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–9, 2016.
[4] J.-C. Chen, R. Ranjan, A. Kumar, C.-H. Chen, V. M. Patel, and R. Chellappa. An end-to-end system for unconstrained face verification with deep convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops (CVPRW), pages 118–126, 2015.
[5] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets.
In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 2172–2180, 2016.
[6] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.
[7] A. R. Chowdhury, T.-Y. Lin, S. Maji, and E. Learned-Miller. One-to-many face recognition with bilinear CNNs. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–9, 2016.
[8] N. Crosswhite, J. Byrne, O. M. Parkhi, C. Stauffer, Q. Cao, and A. Zisserman. Template adaptation for face verification and identification. arXiv preprint arXiv:1603.03958, 2016.
[9] K. Gong, X. Liang, X. Shen, and L. Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. arXiv preprint arXiv:1703.05446, 2017.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
[11] T. Hassner, I. Masi, J. Kim, J. Choi, S. Harel, P. Natarajan, and G. Medioni. Pooling faces: Template based face recognition with pooled face images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 59–67, 2016.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[13] R. Huang, S. Zhang, T. Li, and R. He. Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving frontal view synthesis. arXiv preprint arXiv:1704.04086, 2017.
[14] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[15] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1931–1939, 2015.
[16] J. Li, J. Zhao, F. Zhao, H. Liu, J. Li, S. Shen, J. Feng, and T. Sim. Robust face recognition with deep multi-view representation learning. In Proceedings of the ACM Conference on Multimedia (ACM MM), pages 1068–1072, 2016.
[17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431–3440, 2015.
[18] L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605, 2008.
[19] I. Masi, S. Rawls, G. Medioni, and P. Natarajan. Pose-aware face recognition in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4838–4846, 2016.
[20] I. Masi, A. T. Tran, J. T. Leksut, T. Hassner, and G. Medioni. Do we really need to collect millions of faces for effective face recognition? arXiv preprint arXiv:1603.07057, 2016.
[21] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[22] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[23] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In Proceedings of the British Machine Vision Conference (BMVC), page 6, 2015.
[24] R. Ranjan, C. D. Castillo, and R. Chellappa. L2-constrained softmax loss for discriminative face verification. arXiv preprint arXiv:1703.09507, 2017.
[25] R. Ranjan, S. Sankaranarayanan, C. D. Castillo, and R. Chellappa. An all-in-one convolutional neural network for face analysis. arXiv preprint arXiv:1611.00851, 2016.
[26] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[27] S. Sankaranarayanan, A. Alavi, C. D. Castillo, and R. Chellappa. Triplet probabilistic embedding for face verification and clustering. In Proceedings of the IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS), pages 1–8, 2016.
[28] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828, 2016.
[29] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1701–1708, 2014.
[30] D. Wang, C. Otto, and A. K. Jain. Face search at scale: 80 million gallery. arXiv preprint arXiv:1507.07242, 2015.
[31] S. Xiao, J. Feng, J. Xing, H. Lai, S. Yan, and A. Kassim. Robust facial landmark detection via recurrent attentive-refinement networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 57–72, 2016.
[32] S. Xiao, L. Liu, X. Nie, J. Feng, A. A. Kassim, and S. Yan. A live face swapper. In Proceedings of the ACM Conference on Multimedia (ACM MM), pages 691–692, 2016.
[33] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
[34] J. Yang, P. Ren, D. Chen, F. Wen, H. Li, and G. Hua. Neural aggregation network for video face recognition. arXiv preprint arXiv:1603.05474, 2016.
[35] X. Zhu, J. Yan, D. Yi, Z. Lei, and S. Z. Li. Discriminative 3D morphable model fitting. In Proceedings of the IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), volume 1, pages 1–8, 2015.
Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation

Christian Borgs (borgs@microsoft.com), Jennifer Chayes (jchayes@microsoft.com), Christina E. Lee (celee@mit.edu)
Microsoft Research New England, One Memorial Drive, Cambridge MA, 02142
Devavrat Shah (devavrat@mit.edu)
Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139

Abstract

The sparse matrix estimation problem consists of estimating the distribution of an $n \times n$ matrix $Y$ from a sparsely observed single instance of this matrix, where the entries of $Y$ are independent random variables. This captures a wide array of problems; special instances include matrix completion in the context of recommendation systems, graphon estimation, and community detection in (mixed membership) stochastic block models. Inspired by classical collaborative filtering for recommendation systems, we propose a novel iterative, collaborative filtering-style algorithm for matrix estimation in this generic setting. We show that the mean squared error (MSE) of our estimator converges to 0 at the rate of $O(d^2 (pn)^{-2/5})$ as long as $\omega(d^5 n)$ random entries from a total of $n^2$ entries of $Y$ are observed (uniformly sampled), $\mathbb{E}[Y]$ has rank $d$, and the entries of $Y$ have bounded support. The maximum squared error across all entries converges to 0 with high probability as long as we observe a little more, $\Omega(d^5 n \ln^5(n))$ entries. Our results are the best known sample complexity results in this generality.

1 Introduction

In this work, we propose and analyze an iterative similarity-based collaborative filtering algorithm for the sparse matrix completion problem with noisily observed entries. As a prototype for such a problem, consider a noisy observation of a social network where observed interactions are signals of true underlying connections. We might want to predict the probability that two users would choose to connect if recommended by the platform, e.g. LinkedIn.
As a second example, consider a recommendation system where we observe movie ratings provided by users, and we may want to predict the probability distribution over ratings for specific movie-user pairs. The classical collaborative filtering approach is to compute similarities between pairs of users by comparing their commonly rated movies. For a social network, similarities between users would be computed by comparing their sets of friends. We are particularly interested in the very sparse case where most pairs of users have no common friends, or most pairs of users have no commonly rated movies; thus there is insufficient data to compute the traditional similarity metrics. To overcome this limitation, we propose a novel algorithm which computes similarities iteratively, incorporating information within a larger-radius neighborhood. Whereas traditional collaborative filtering learns the preferences of a user through the ratings of her/his “friends", i.e. users who share similar ratings on commonly rated movies, our algorithm learns about a user through the ratings of the friends of her/his friends, i.e. users who may be connected through an indirect path in the data. For a social network, this intuition translates to computing similarities of two users by comparing the boundaries of larger-radius neighborhoods of their connections in the network. While an actual implementation of our algorithm will benefit from modifications to make it practical, we believe that our approach is very practical; indeed, we plan to implement it in a corporate setting. Like all such nearest-neighbor style algorithms, our algorithm can be accelerated and scaled to large datasets in practice by using a parallel implementation via an approximate nearest neighbor data structure.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
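The intuition of comparing users through the boundaries of larger-radius neighborhoods, rather than direct overlap, can be illustrated on a toy sparse graph: two vertices with no common neighbors can still be compared through their radius-2 boundaries. The sketch below is our own illustration of this intuition, not the paper's estimator:

```python
import numpy as np

def neighborhood_boundary(adj, v, radius):
    """Set of vertices at graph distance exactly `radius` from v (BFS)."""
    n = adj.shape[0]
    dist = np.full(n, -1)
    dist[v] = 0
    frontier = [v]
    for r in range(1, radius + 1):
        nxt = []
        for u in frontier:
            for w in np.flatnonzero(adj[u]):
                if dist[w] == -1:
                    dist[w] = r
                    nxt.append(w)
        frontier = nxt
    return set(frontier)

def boundary_overlap(adj, u, v, radius):
    """Number of vertices shared by the radius-r boundaries of u and v."""
    return len(neighborhood_boundary(adj, u, radius)
               & neighborhood_boundary(adj, v, radius))

# A 6-vertex graph with edges 0-2, 1-3, 2-4, 3-4, 4-5.  Vertices 0 and 1
# share no direct neighbors, but their radius-2 boundaries both contain 4.
adj = np.zeros((6, 6), dtype=int)
for a, b in [(0, 2), (1, 3), (2, 4), (3, 4), (4, 5)]:
    adj[a, b] = adj[b, a] = 1
print(boundary_overlap(adj, 0, 1, radius=1))  # 0: no common neighbors
print(boundary_overlap(adj, 0, 1, radius=2))  # 1: vertex 4 on both boundaries
```

In the sparse regime studied in the paper, radius-1 overlaps are typically empty, which is exactly why growing the radius is needed before any similarity can be measured.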
In this paper, however, our goal is to describe the basic setting and concept of the algorithm, and provide a clear mathematical foundation and analysis. The theoretical results indicate that this method achieves consistency (i.e. guaranteed convergence to the correct solution) for very sparse datasets for a reasonably general Latent Variable Model with bounded entries. The problems discussed above can be mathematically formulated as a matrix estimation problem, where we observe a sparse subset of entries in an m × n random matrix Y, and we wish to complete or de-noise the matrix by estimating the probability distribution of Yij for all (i, j). Suppose that Yij is categorical, taking values in [k] according to some unknown distribution. The task of estimating the distribution of Yij can be reduced to k − 1 smaller tasks of estimating the expectation of a binary data matrix, e.g. Y^t where Y^t_ij = I(Yij = t) and E[Y^t_ij] = P(Yij = t). If the matrix that we would like to learn is asymmetric, we can transform it to an equivalent symmetric model by defining a new data matrix Y′ = [[0, Y], [Y^T, 0]]. Therefore, for the remainder of the paper, we will assume an n × n symmetric matrix which takes values in [0, 1] (real-valued or binary), but as argued above, our results apply more broadly to categorical-valued asymmetric matrices. We assume that the data is generated from a Latent Variable Model in which latent variables θ1, . . . , θn are sampled independently from U[0, 1], and the distribution of Yij is such that E[Yij | θi, θj] = f(θi, θj) ≡ Fij for some latent function f. Our goal is to estimate the matrix F. It is worth remarking that the Latent Variable Model is a canonical representation for exchangeable arrays as shown by Aldous and Hoover [5, 25, 7]. We present a novel algorithm for estimating F = [Fij] from a sparsely sampled dataset {Yij}(i,j)∈E, where E ⊂ [n] × [n] is generated by assuming each entry is observed independently with probability p.
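The two reductions above (categorical to binary indicator layers, and asymmetric to symmetric via the block matrix Y′) are mechanical; a minimal NumPy sketch, with hypothetical helper names `binary_layers` and `symmetrize` of our own choosing:

```python
import numpy as np

def symmetrize(Y):
    """Embed an asymmetric m x n matrix Y into the (m+n) x (m+n)
    symmetric block matrix Y' = [[0, Y], [Y^T, 0]] from the text."""
    m, n = Y.shape
    top = np.hstack([np.zeros((m, m)), Y])
    bottom = np.hstack([Y.T, np.zeros((n, n))])
    return np.vstack([top, bottom])

def binary_layers(Y, k):
    """Reduce a categorical matrix with values in {1, ..., k} to k binary
    indicator matrices Y^t with Y^t_ij = 1{Y_ij = t}."""
    return [(Y == t).astype(float) for t in range(1, k + 1)]
```

Estimating E[Y^t] for each layer then recovers the full distribution P(Yij = t), since the layer expectations sum to one entrywise.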
We require that the latent function f, when regarded as an integral operator, has finite spectrum with rank d. We prove that the mean squared error (MSE) of our estimates converges to zero at a rate of O(d^2 (pn)^{-2/5}) as long as the sparsity p = ω(d^5 n^{-1}) (i.e. ω(d^5 n) total observations). In addition, with high probability, the maximum squared error converges to zero at a rate of O(d^2 (pn)^{-2/5}) as long as the sparsity p = Ω(d^5 n^{-1} ln^5(n)). Our analysis applies to a generic noise setting as long as Yij has bounded support. Our work takes inspiration from [1, 2, 3], which estimate clusters of the stochastic block model by computing distances from local neighborhoods around vertices. We improve upon their analysis to provide MSE bounds for the general latent variable model with finite spectrum, which includes a larger class of generative models such as mixed membership stochastic block models, while they consider the stochastic block model with non-overlapping communities. We show that our results hold even when the rank d increases with n, as long as d = o((pn)^{1/5}). As compared to spectral methods such as [28, 39, 20, 19, 18], our analysis handles the general bounded noise model and holds for sparser regimes, only requiring p = ω(n^{-1}).

Related work. The matrix estimation problem introduced above includes as specific cases problems from different areas of the literature: matrix completion popularized in the context of recommendation systems, graphon estimation arising from the asymptotic theory of graphs, and community detection using the stochastic block model or its generalization known as the mixed membership stochastic block model. The key representative results for each of these are mentioned in Table 1.
We discuss the scaling of the sample complexity with respect to d (model complexity, usually rank) and n for polynomial time algorithms, including results for both mean squared error convergence, exact recovery in the noiseless setting, and convergence with high probability in the noisy setting. As can be seen from Table 1, our result provides the best sample complexity with respect to n for the general matrix estimation problem with bounded entries noise model and rank d, as the other models either require extra log(n) factors, or impose additional requirements on the noise model or the expected matrix. Similarly, ours is the best known sample complexity for high probability max-error convergence to 0 for the general rank d bounded entries setting, as other results either assume block constant or noiseless.

Table 1: Sample complexity of related literature, grouped in sections according to the following areas: matrix completion, 1-bit matrix completion, stochastic block model, mixed membership stochastic block model, graphon estimation, and our results.

Paper | Sample Complexity | Data/Noise | Expected matrix | Guarantee
[27] | ω(dn) | noiseless | rank d | MSE → 0
[28] | Ω(dn max(log n, d)), ω(dn) | iid Gaussian | rank d | MSE → 0
[37] | ω(dn log n) | iid Gaussian | rank d | MSE → 0
[19] | Ω(n max(d, log^2 n)) | iid Gaussian | rank d | MSE → 0
[18] | ω(dn log^6 n) | indep bounded | rank d | MSE → 0
[32] | Ω(n^{3/2}) | iid bounded | Lipschitz | MSE → 0
[17] | Ω(dn log^2 n max(d, log^4 n)) | noiseless | rank d | exact recovery
[27] | Ω(dn max(d, log n)) | noiseless | rank d | exact recovery
[39] | Ω(dn log^2 n) | noiseless | rank d | exact recovery
[19] | Ω(n max(d log n, log^2 n, d^2)) | binary entries | rank d | MSE → 0
[20] | Ω(n max(d, log n)), ω(dn) | binary entries | rank d | MSE → 0
[1, 3] | ω(n)* | binary entries | d blocks | partial recovery
[1] | Ω(n log n)* | binary entries | d blocks (SBM) | exact recovery
[43] | Ω(n log n)* | binary entries | rank d | MSE → 0
[6] | Ω(d^2 n polylog n) | binary entries | rank d | whp error → 0
[40] | Ω(d^2 n) | binary entries | rank d | detection
[4] | Ω(n^2) | binary entries | monotone row sum | MSE → 0
[44] | Ω(n^2) | binary entries | piecewise Lipschitz | MSE → 0
[10] | ω(n) | binary entries | monotone row sum | MSE → 0
this work | ω(d^5 n) | indep bounded | rank d, Lipschitz | MSE → 0
this work | Ω(d^5 n log^5 n) | indep bounded | rank d, Lipschitz | whp error → 0
*Result does not indicate dependence on d.

It is worth comparing our results with the known lower bounds on the sample complexity. For the special case of matrix completion with an additive noise model, i.e. Yij = E[Yij] + ηij where the ηij are i.i.d. zero mean, [16, 20] showed that ω(dn) samples are needed for a consistent estimator, i.e. MSE convergence to 0, and [17] showed that dn log n samples are needed for exact recovery. There is a conjectured computational lower bound for the mixed membership stochastic block model of d^2 n even for detection, which is weaker than MSE going to 0. Recently, [40] showed a partial result that this computational lower bound holds for algorithms that rely on fitting low-degree polynomials to the observed data. Given that these lower bounds apply to special cases of our setting, it seems that our result is optimal in terms of its dependence on n for MSE convergence as well as high probability (near) exact recovery. Next we provide a brief overview of the prior works reported in Table 1. In the context of matrix completion, there has been much progress under the low-rank assumption. Most theoretically founded methods are based on spectral decompositions or minimizing a loss function with respect to spectral constraints [27, 28, 15, 17, 39, 37, 20, 19, 18]. A work that is closely related to ours is by [32]. It proves that a similarity based collaborative filtering-style algorithm provides a consistent estimator for matrix completion under the generic model when the latent function is Lipschitz, not just low rank; however, it requires Õ(n^{3/2}) samples. In a sense, ours can be viewed as an algorithmic generalization of [32] that handles the sparse sampling regime and a generic noise model.
Most of the results in matrix completion require additive noise models, which do not extend to the setting where the observations are binary or quantized. The USVT estimator is able to handle general bounded noise, although it requires a few log factors more in its sample complexity [18]. Our work removes the extra log factors while still allowing for general bounded noise. There is also a significant amount of literature which looks at the estimation problem when the data matrix is binary, also known as 1-bit matrix completion, stochastic block model (SBM) parameter estimation, or graphon estimation. The latter two terms are found within the context of community detection and network analysis, as the binary data matrix can alternatively be interpreted as the adjacency matrix of a graph, which is symmetric by definition. Under the SBM, each vertex is associated to one of d community types, and the probability of an edge is a function of the community types of both endpoints. Estimating the n × n parameter matrix becomes an instance of matrix estimation. In the SBM, the expected matrix has rank at most d due to its block structure. Precise thresholds for cluster detection (better than random) and estimation have been established by [1, 2, 3]. Our work, both algorithmically and technically, draws insight from this sequence of works, extending the analysis to a broader class of generative models through the design of an iterative algorithm, and improving the technical results with precise MSE bounds. The mixed membership stochastic block model (MMSBM) allows each vertex to be associated to a length-d vector, which represents its weighted membership in each of the d communities. The probability of an edge is a function of the weighted community membership vectors of both endpoints, resulting in an expected matrix with rank at most d.
Recent work by [40] provides an algorithm for weak detection for the MMSBM with sample complexity d^2 n, when the community membership vectors are sparse and evenly weighted. They provide partial results to support a conjecture that d^2 n is a computational lower bound, separated by a gap of d from the information-theoretic lower bound of dn. This gap was first shown in the simpler context of the stochastic block model [21]. [43] proposed a spectral clustering method for inferring the edge label distribution for a network sampled from a generalized stochastic block model. When the expectation function has a finite spectral decomposition, i.e. is low rank, they provide a consistent estimator for the sparse data regime, with Ω(n log n) samples. Graphon estimation extends the SBM and MMSBM to the generic Latent Variable Model, where the probability of an edge can be any measurable function f of real-valued types (or latent variables) associated to each endpoint. Graphons were first defined as the limiting object of a sequence of large dense graphs [14, 22, 34], with recent work extending the theory to sparse graphs [12, 13, 11, 41]. In the graphon estimation problem, we would like to estimate the function f given an instance of a graph generated from the graphon associated to f. [23, 29] provide minimax optimal rates for graphon estimation; however, a majority of the proposed estimators are not computable in polynomial time, since they require optimizing over an exponentially large space (e.g. least squares or maximum likelihood) [42, 10, 9, 23, 29]. [10] provided a polynomial time method based on degree sorting in the special case when the expected degree function is monotonic. To our knowledge, existing positive results for sparse graphon estimation require either strong monotonicity assumptions [10], or rank constraints as assumed in the SBM, in 1-bit matrix completion, and in this work.
We call special attention to the similarity based methods, which are able to bypass the rank constraints, relying instead on smoothness properties of the latent function f (e.g. Lipschitz) [44, 32]. They hinge upon computing similarities between rows or columns by comparing commonly observed entries. Similarity based methods, also known in the literature as collaborative filtering, have been successfully employed across many large scale industry applications (Netflix, Amazon, YouTube) due to their simplicity and scalability [24, 33, 30, 38]; however, the theoretical results have been relatively sparse. These recent results suggest that the practical success of these methods across a variety of applications may be due to their ability to capture local structure. A key limitation of this approach is that it requires a dense dataset with sufficient entries in order to compute similarity metrics, requiring that each pair of rows or columns has a growing number of overlapping observed entries, which does not hold when p = o(n^{-1/2}). This work overcomes this limitation in an intuitive and simple way; rather than only considering directly overlapping entries, we consider longer "paths" of data associated to each row, expanding the set of associated datapoints until there is sufficient overlap. Although we may initially be concerned that this would introduce bias and variance due to the sparse sampling, our analysis shows that in fact the estimate does converge to the true solution. The idea of comparing vertices by looking at larger radius neighborhoods was introduced in [1], and has connections to belief propagation [21, 3] and the non-backtracking operator [31, 26, 36, 35, 8]. The non-backtracking operator was introduced to overcome the issue of sparsity. For sparse graphs, vertices with high degree dominate the spectrum, such that the informative components of the spectrum get hidden behind the high degree vertices.
The non-backtracking operator avoids paths that immediately return to the previously visited vertex in a similar manner as belief propagation, and its spectrum has been shown to be more well-behaved, perhaps adjusting for the high degree vertices, which get visited very often by paths in the graph. In our algorithm, the neighborhood paths are defined by first selecting a rooted tree at each vertex, thus enforcing that each vertex along a path in the tree is unique. This is important in our analysis, as it guarantees that the distribution of vertices at the boundary of each subsequent depth of the neighborhood is unbiased, since the sampled vertices are freshly visited.

2 Model

We shall use graph and matrix notations in an interchangeable manner. For each pair of vertices (i.e. row or column indices) u, v ∈ [n], let Yuv ∈ [0, 1] denote its random realization. Let E denote the edges. If (u, v) ∈ E, Yuv is observed; otherwise it is unknown.
• Each vertex u ∈ [n] is associated to a latent variable θu ~ U[0, 1], sampled i.i.d.
• For each (u, v) ∈ [n] × [n], Yuv = Yvu ∈ [0, 1] is a bounded random variable. Conditioned on {θi}i∈[n], the random variables {Yuv}1≤u<v≤n are independent.
• Fuv := E[Yuv | {θw}w∈[n]] = f(θu, θv) ∈ [0, 1] for a symmetric L-Lipschitz function f.
• The function f, when regarded as an integral operator, has finite spectrum with rank d. That is, f(θu, θv) = Σ_{k=1}^d λk qk(θu) qk(θv), where the qk are orthonormal L2-integrable basis functions. We assume that there exists some B such that |qk(y)| ≤ B for all k and y ∈ [0, 1].
• For every (unordered) index pair (u, v), the entry is observed independently with probability p, i.e. (u, v) ∈ E and Muv = Mvu = Yuv. If (u, v) ∉ E, then Muv = 0.
The data (E, M) can be viewed as a weighted undirected graph over n vertices with each (u, v) ∈ E having weight Muv. The goal is to estimate the matrix F = [Fuv]u,v∈[n]. Let Λ denote the d × d diagonal matrix with {λk}k∈[d] as the diagonal entries.
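A small simulation can make the generative model above concrete. The sketch below is illustrative only: the helper name `sample_instance` and the particular rank-2 latent function (with λ1 = 0.5, λ2 = 0.1 and orthonormal basis q1(x) = 1, q2(x) = √3(2x − 1)) are our own assumptions, and Bernoulli noise is just one admissible bounded-noise model:

```python
import numpy as np

def sample_instance(n, p, f, seed=0):
    """Sample the model: theta_u ~ U[0,1] i.i.d., F_uv = f(theta_u, theta_v),
    Y symmetric with Bernoulli(F) entries here (any bounded noise is allowed
    by the model), and each unordered pair observed with probability p."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(size=n)
    F = f(theta[:, None], theta[None, :])             # expected matrix
    Y = (rng.uniform(size=(n, n)) < F).astype(float)  # one noisy realization
    Y = np.triu(Y, 1) + np.triu(Y, 1).T               # symmetric, zero diagonal
    obs = np.triu(rng.uniform(size=(n, n)) < p, 1)
    mask = obs | obs.T                                # edge set E as a boolean matrix
    return theta, F, np.where(mask, Y, 0.0), mask

# hypothetical rank-2 f: 0.5*q1(x)q1(y) + 0.1*q2(x)q2(y), values in [0.2, 0.8]
f = lambda x, y: 0.5 + 0.1 * 3.0 * (2 * x - 1) * (2 * y - 1)
```

Note that f is symmetric and L-Lipschitz in each argument, and its two eigenvalues are distinct, so d = d′ = 2 for this toy example.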
Let the eigenvalues be sorted in such a way that |λ1| ≥ |λ2| ≥ · · · ≥ |λd| > 0. Let Q denote the d × n matrix where Q(k, u) = qk(θu). Since Q is a random matrix depending on the sampled θ, it is not guaranteed to be an orthonormal matrix (even though the qk are orthonormal functions). By definition, it follows that F = Q^T Λ Q. Let d′ be the number of distinct valued eigenvalues. Let Λ̃ denote the d × d′ matrix where Λ̃(a, b) = λb^{a−1}.

Discussing Assumptions. The latent variable model imposes a natural and mild assumption, as Aldous and Hoover proved that if the network is exchangeable, i.e. the distribution over edges is invariant under permutations of vertex labels, then the network can be equivalently represented by a latent variable model [5, 25, 7]. Exchangeability is reasonable for anonymized datasets for which the identities of entities can easily be renamed. Our model additionally requires that the function is L-Lipschitz and has finite spectrum when regarded as an integral operator, i.e. F is low rank; this includes interesting scenarios such as the mixed membership stochastic block model and finite degree polynomials. We can also relax the condition to piecewise Lipschitz, as we only need to ensure that for every vertex u there are sufficiently many vertices v which are similar in function value to u. We assume observations are sampled independently with probability p; however, we discuss a possible solution for dealing with non-uniform sampling in Section 5.

3 Algorithm

The algorithm that we propose uses the concept of local approximation, first determining which datapoints are similar in value, and then computing neighborhood averages for the final estimate. All similarity-based collaborative filtering methods have the following basic format:
1. Compute distances between pairs of vertices, e.g., dist(u, a) ≈ ∫_0^1 (f(θu, t) − f(θa, t))^2 dt. (1)
2.
Form the estimate by averaging over "nearby" datapoints, F̂uv = (1/|Euv|) Σ_{(a,b)∈Euv} Mab, (2) where Euv := {(a, b) ∈ E s.t. dist(u, a) < ηn, dist(v, b) < ηn}.
The choice of ηn = Θ(d (c1 pn)^{−2/5}) will be small enough to drive the bias to zero, ensuring the included datapoints are close in value, yet large enough to reduce the variance, ensuring |Euv| diverges.

Intuition. Various similarity-based algorithms differ in the distance computation (Step 1). For dense datasets, i.e. p = ω(n^{−1/2}), previous works have proposed and analyzed algorithms which approximate the L2 distance of (1) by using variants of the finite sample approximation, dist(u, a) = (1/|Xua|) Σ_{y∈Xua} (Fuy − Fay)^2, (3) where y ∈ Xua iff (u, y) ∈ E and (a, y) ∈ E [4, 44, 32]. For sparse datasets, with high probability, Xua = ∅ for almost all pairs (u, a), such that this distance cannot be computed. In this paper we are interested in the sparse setting when p is significantly smaller than n^{−1/2}, down to the lowest threshold of p = ω(n^{−1}). If we visualize the data via a graph with edge set E, then (3) corresponds to comparing common neighbors of vertices u and a. A natural extension when u and a have no common neighbors is to instead compare the r-hop neighbors of u and a, i.e. vertices y which are at distance exactly r from both u and a. We compare the product of weights along edges in the path from u to y and a to y respectively, which in expectation approximates ∫_{[0,1]^{r−1}} f(θu, t1) (∏_{s=1}^{r−2} f(ts, ts+1)) f(t_{r−1}, θy) dt1 · · · dt_{r−1} = Σ_k λk^r qk(θu) qk(θy) = eu^T Q^T Λ^r Q ey. (4) We choose a large enough r such that there are sufficiently many "common" vertices y which have paths to both u and a, guaranteeing that our distance can be computed from a sparse dataset.

Algorithm Details. We present and discuss details of each step of the algorithm, which primarily involves computing pairwise distances (or similarities) between vertices.

Step 1: Sample Splitting.
We partition the datapoints into disjoint sets, which are used in different steps of the computation to minimize correlation across steps for the analysis. Each edge in E is independently placed into E1, E2, or E3, with probabilities c1, c2, and 1 − c1 − c2 respectively. Matrices M1, M2, and M3 contain information from the subset of the data in M associated to E1, E2, and E3 respectively. M1 is used to define local neighborhoods of each vertex, M2 is used to compute similarities of these neighborhoods, and M3 is used to average over datapoints for the final estimate in (2).

Step 2: Expanding the Neighborhood. We first expand local neighborhoods of radius r around each vertex. Let Su,s denote the set of vertices which are at distance s from vertex u in the graph defined by edge set E1. Specifically, i ∈ Su,s if the shortest path in G1 = ([n], E1) from u to i has a length of s. Let Tu denote a breadth-first tree in G1 rooted at vertex u. The breadth-first property ensures that the length of the path from u to i within Tu is equal to the length of the shortest path from u to i in G1. If there is more than one valid breadth-first tree rooted at u, choose one uniformly at random. Let Nu,r ∈ [0, 1]^n denote the following vector with support on the boundary of the r-radius neighborhood of vertex u (we also call Nu,r the neighborhood boundary): Nu,r(i) = ∏_{(a,b)∈pathTu(u,i)} M1(a, b) if i ∈ Su,r, and Nu,r(i) = 0 if i ∉ Su,r, where pathTu(u, i) denotes the set of edges along the path from u to i in the tree Tu. The sparsity of Nu,r is equal to |Su,r|, and the value of the coordinate Nu,r(i) is equal to the product of weights along the path from u to i. Let Ñu,r denote the normalized neighborhood boundary such that Ñu,r = Nu,r/|Su,r|. We will choose the radius r to be r = 6 ln(1/p) / (8 ln(c1 pn)).

Step 3: Computing the distances. For each vertex, we present two variants for estimating the distance.
1.
For each pair (u, v), compute dist1(u, v) according to dist1(u, v) = ((1 − c1 p)/(c2 p)) (Ñu,r − Ñv,r)^T M2 (Ñu,r+1 − Ñv,r+1).
2. For each pair (u, v), compute the distance according to dist2(u, v) = Σ_{i∈[d′]} zi Δuv(r, i), where Δuv(r, i) is defined as ((1 − c1 p)/(c2 p)) (Ñu,r − Ñv,r)^T M2 (Ñu,r+i − Ñv,r+i), and z ∈ R^{d′} is a vector that satisfies Λ^{2r+2} Λ̃^T z = Λ^2 1. z always exists and is unique because Λ̃^T is a Vandermonde matrix, and Λ^{−2r} 1 lies within the span of its columns.
Computing dist1 does not require knowledge of the spectrum of f. In our analysis we prove that the expected squared error of the estimate computed in (2) using dist1 converges to zero with n for p = ω(n^{−1+ϵ}) for some ϵ > 0 and constant rank d, i.e. p must be polynomially larger than n^{−1}. Although computing dist2 requires knowledge of the spectrum of f to determine the vector z, the expected squared error of the estimate computed in (2) using dist2 converges to zero for p = ω(n^{−1}) and constant rank d, which includes the sparser settings when p is only larger than n^{−1} by polylogarithmic factors. We will also show the dependence on d, allowing for it to grow slowly with pn. It seems plausible that the technique employed by [2] could be used to design a modified algorithm which does not need prior knowledge of the spectrum. They achieve this for the stochastic block model case by bootstrapping the algorithm with a method which estimates the spectrum first and then computes pairwise distances with the estimated eigenvalues.

Step 4: Averaging datapoints to produce the final estimate. The estimate F̂(u, v) is computed by averaging over nearby points defined by the distance estimates dist1 (or dist2). Recall that B ≥ 1 was assumed in the model definition to upper bound sup_{y∈[0,1]} |qk(y)|. Let Euv1 denote the set of undirected edges (a, b) such that (a, b) ∈ E3 and both dist1(u, a) and dist1(v, b) are less than η1(n) = 33 B d |λ1|^{2r+1} (c1 pn)^{−2/5}.
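Steps 2 and 3 (the dist1 variant) together with the Step 4 averaging can be sketched as follows. This is an illustrative implementation under simplifying assumptions: the graph G1 is a dict of adjacency lists with M1-weights keyed by frozensets, breadth-first ties are broken by visitation order rather than uniformly at random, and all helper names (`boundary`, `normalize`, `dist1`, `estimate`) are ours, not the paper's:

```python
import numpy as np
from collections import deque

def boundary(adj, w, u, r):
    """Step 2: N_{u,r} as {i: product of M1-weights along the BFS-tree
    path u -> i} over vertices i at shortest-path distance exactly r."""
    dist, prod, out = {u: 0}, {u: 1.0}, {}
    q = deque([u])
    while q:
        v = q.popleft()
        if dist[v] == r:
            out[v] = prod[v]
            continue                      # stop expanding at radius r
        for x in adj.get(v, ()):
            if x not in dist:             # first visit defines the tree parent
                dist[x] = dist[v] + 1
                prod[x] = prod[v] * w[frozenset((v, x))]
                q.append(x)
    return out

def normalize(N, n):
    """Normalized boundary: dense vector N_{u,r} / |S_{u,r}|."""
    vec = np.zeros(n)
    for i, val in N.items():
        vec[i] = val / len(N)
    return vec

def dist1(u, v, adj, w, M2, n, r, c1, c2, p):
    """Step 3, variant 1: ((1-c1 p)/(c2 p)) (N~_{u,r}-N~_{v,r})^T M2 (N~_{u,r+1}-N~_{v,r+1})."""
    a = normalize(boundary(adj, w, u, r), n) - normalize(boundary(adj, w, v, r), n)
    b = normalize(boundary(adj, w, u, r + 1), n) - normalize(boundary(adj, w, v, r + 1), n)
    return float((1 - c1 * p) / (c2 * p) * a @ M2 @ b)

def estimate(u, v, M3, mask3, d, eta):
    """Step 4 / eq. (5): average M3 over observed pairs (a, b) in E3 with
    d[u, a] < eta and d[v, b] < eta; None if the averaging set is empty."""
    sel = np.outer(d[u] < eta, d[v] < eta) & mask3
    return float(M3[sel].mean()) if sel.any() else None
```

In a full implementation the boundaries would be computed once per vertex and radius and reused across all pairs, rather than recomputed inside `dist1`.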
The final estimate F̂(u, v) produced by using dist1 is computed by averaging over the undirected edge set Euv1, F̂(u, v) = (1/|Euv1|) Σ_{(a,b)∈Euv1} M3(a, b). (5) Let Euv2 denote the set of undirected edges (a, b) such that (a, b) ∈ E3, and both dist2(u, a) and dist2(v, b) are less than ξ2(n) = 33 B d |λ1| (c1 pn)^{−2/5}. The final estimate F̂(u, v) produced by using dist2 is computed by averaging over the undirected edge set Euv2, F̂(u, v) = (1/|Euv2|) Σ_{(a,b)∈Euv2} M3(a, b). (6)

4 Main Results

We prove bounds on the estimation error of our algorithm in terms of the mean squared error (MSE), MSE := E[(1/(n(n−1))) Σ_{u≠v} (F̂uv − Fuv)^2], which averages the squared error over all edges. It follows from the model that ∫_0^1 (f(θu, y) − f(θv, y))^2 dy = Σ_{k=1}^d λk^2 (qk(θu) − qk(θv))^2 = ∥ΛQ(eu − ev)∥_2^2. The key part of the analysis is to show that the computed distances are in fact good estimates of ∥ΛQ(eu − ev)∥_2^2. The analysis essentially relies on showing that the neighborhood growth around a vertex behaves according to its expectation, according to some properly defined notion. The radius r must be small enough to guarantee that the growth of the size of the neighborhood boundary is exponential, increasing at a factor of approximately c1 pn. However, if the radius is too small, then the boundaries of the respective neighborhoods of the two chosen vertices would have a small intersection, so that estimating the similarities based on the small intersection of datapoints would result in high variance. Therefore, the choice of r is critical to the algorithm and analysis. We are able to prove bounds on the squared error when r is chosen to satisfy the following conditions:
r + d′ ≤ 7 ln(1/(c1 p)) / (8 ln(9 c1 pn/8)) = Θ(ln(1/(c1 p)) / ln(c1 pn)), and r + 1/2 ≥ 6 ln(1/p) / (8 ln(7 |λd|^2 c1 pn / (8 |λ1|))) = Θ(ln(1/p) / ln(c1 pn)). (7)
The parameter d′ denotes the number of distinct valued eigenvalues in the spectrum of f, (λ1, . . . , λd), and determines the number of different radius "measurements" involved in computing dist2(u, v). Computing dist1(u, v) only involves a single measurement, thus the left hand side of (7) can be reduced to r + 1 instead of r + d′. When p is above a threshold, we choose c1 to decrease with n to ensure (7) can be satisfied, sparsifying the edge set E1 used for expanding the neighborhood around a vertex. When the sample probability is polynomially larger than n^{−1}, i.e. p = n^{−1+ϵ} for some ϵ > 0, these constraints imply that r is a constant with respect to n. However, if p = Õ(n^{−1}), we will need r to grow with n according to a rate of 6 ln(1/p)/(8 ln(c1 pn)).

Theorem 4.1. If p = n^{−1+ϵ} for some ϵ ∈ (0, 1/6), with a choice of c1 such that c1 pn = Θ(max(pn, (p^6 n^7)^{1/19})), there exists a constant r (with respect to n) which satisfies (7). If p = ω(n^{−1} d^5) and |λd| = ω((c1 pn)^{−1/4}), then the estimate computed using dist1 with parameter r achieves
MSE = O((|λ1|/|λd|)^{2r} B^3 d^2 |λ1| (c1 pn)^{−2/5}).
If p = ω(n^{−1} d^5 ln^5(n)), then with probability greater than 1 − O(d exp(−(c1 pn)^{1/5}/(9 B^2 d))), the estimate satisfies
∥F̂ − F∥max := max_{i,j} |F̂ij − Fij| = O(((|λ1|/|λd|)^r B^3 d^2 |λ1| (c1 pn)^{−2/5})^{1/2}).

Theorem 4.1 proves that the mean squared error (MSE) of the estimate computed with dist1 is bounded by O((|λ1|/|λd|)^{2r} d^2 (c1 pn)^{−2/5}). Therefore, our algorithm with dist1 provides a consistent estimate when r is constant with respect to n, which occurs for p = n^{−1+ϵ} for some ϵ > 0. In fact, the reason why the error blows up with a factor of (|λ1|/|λd|)^{2r} is because we compute the distance by summing products of weights over paths of length 2r. From (4), we see that in expectation, when we take the product of edge weights over a path of length r from u to y, instead of computing f(θu, θy) = eu^T Q^T Λ Q ey, the expression concentrates around eu^T Q^T Λ^r Q ey, which contains extra factors of Λ^{r−1}.
Therefore, by computing over a radius r, the calculation in dist1 will approximate ∥Λ^{r+1} Q(eu − ev)∥_2^2 rather than our intended ∥ΛQ(eu − ev)∥_2^2, thus leading to an error factor of (|λ1|/|λd|)^{2r}. It turns out that dist2 adjusts for this bias, as the multiple measurements Δuv(r, i) with different length paths allow us to separate out ek^T ΛQ(eu − ev) for all k with distinct values of λk.

Theorem 4.2. If p = o(n^{−5/6}), with a choice of c1 such that c1 pn = Θ(max(pn, (p^6 n^7)^{1/(8d′+11)})), there exists a value for r which satisfies (7). If p = ω(n^{−1} d^5), |λd| = ω((c1 pn)^{−1/4}), and d = o(r), then the estimate computed using dist2 with parameter r achieves
MSE = O(B^3 d^2 |λ1| (c1 pn)^{−2/5}).
If p = ω(n^{−1} d^5 ln^5(n)), with probability 1 − O(d exp(−(c1 pn)^{1/5}/(9 B^2 d))), the estimate satisfies
∥F̂ − F∥max := max_{i,j} |F̂ij − Fij| = O((B^3 d^2 |λ1| (c1 pn)^{−2/5})^{1/2}).

Theorem 4.2 proves that the mean squared error (MSE) of the estimate computed using dist2 is bounded by O(d^2 (c1 pn)^{−2/5}); thus the estimate is consistent in the ultra sparse sampling regime of p = ω(d^5 n^{−1}).

5 Discussion

In this work we presented a similarity based collaborative filtering algorithm which is provably consistent in sparse sampling regimes, as long as the sample probability p = ω(n^{−1}). The algorithm computes the similarity between two users by comparing their local neighborhoods. Our model assumes that the data matrix is generated according to a latent variable model, in which the weight on an observed edge (u, v) is equal in expectation to a function f over the associated latent variables θu and θv. We presented two variants for computing similarities (or distances) between vertices. Computing dist1 does not require knowledge of the spectrum of f, but the estimate requires p to be polynomially larger than n^{−1} in order to guarantee that the expected squared error converges to zero.
Computing dist2 uses knowledge of the spectrum of f, but it provides an estimate that is provably consistent in a significantly sparser regime, only requiring that p = ω(n^{−1}). The mean squared error of both algorithms is bounded by O((pn)^{−1/5}). Since the computation is based on comparing local neighborhoods within the graph, the algorithm can be easily implemented for large scale datasets where the data may be stored in a distributed fashion optimized for local graph computations.

Practical implementation. In practice, we do not know the model parameters, and we would use cross validation to tune the radius r and the threshold ηn. If r is either too small or too large, then the vector Nu,r will be too sparse. The threshold ηn trades off between the bias and variance of the final estimate. Since we do not know the spectrum, dist1 may be easier to compute, and it still enjoys good properties as long as r is not too large. When the sampled observations are not uniform across entries, the algorithm may require more modifications to properly normalize for high-degree hub vertices, as the optimal choice of r may differ depending on the local sparsity. The key computational step of our algorithm involves comparing the expanded local neighborhoods of pairs of vertices to find the "nearest neighbors". The local neighborhoods can be computed in parallel, as they are independent computations. Furthermore, the local neighborhood computations are suitable for systems in which the data is distributed across different machines in a way that optimizes local neighborhood queries. The most expensive part of our algorithm involves computing similarities for all pairs of vertices in order to determine the set of nearest neighbors. However, it would be possible to use approximate nearest neighbor techniques to greatly reduce the computation, such that approximate nearest neighbor sets could be computed with significantly fewer than n^2 pairwise comparisons.

Non-uniform sampling.
In reality, the probability that entries are observed is not uniform across all pairs (i, j). However, we believe that an extension of our result can also handle variations in the sample probability, as long as the sample probability is a function of the latent variables and scales in the same way with respect to n across all entries. Suppose that the probability of observing (i, j) is given by p g(θi, θj), where p is the scaling factor (containing the dependence upon n), and g allows for constant factor variations in the sample probability across entries as a function of the latent variables. If we let the matrix X indicate the presence of an observation or not, then we can apply our algorithm twice, first on the matrix X to estimate the function g, and then on the data matrix M to estimate the product of f and g. We can simply divide by the estimate for g to obtain the estimate for f. The limitation is that if g(θi, θj) is very small, then the error in estimating the corresponding f(θi, θj) will have higher variance. However, it is expected that error increases for edge types with fewer samples.

Acknowledgments

This work is supported in part by NSF under grants CMMI-1462158 and CMMI-1634259, by DARPA under grant W911NF-16-1-0551, and additionally by an NSF Graduate Fellowship and a Claude E. Shannon Research Assistantship.

References
[1] Emmanuel Abbe and Colin Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 670–688. IEEE, 2015.
[2] Emmanuel Abbe and Colin Sandon. Recovering communities in the general stochastic block model without knowing the parameters. In Advances in Neural Information Processing Systems, 2015.
[3] Emmanuel Abbe and Colin Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap.
Positive-Unlabeled Learning with Non-Negative Risk Estimator

Ryuichi Kiryo1,2, Gang Niu1,2, Marthinus C. du Plessis, Masashi Sugiyama2,1
1The University of Tokyo, 7-3-1 Hongo, Tokyo 113-0033, Japan
2RIKEN, 1-4-1 Nihonbashi, Tokyo 103-0027, Japan
{ kiryo@ms., gang@ms., sugi@ }k.u-tokyo.ac.jp

Abstract

From only positive (P) and unlabeled (U) data, a binary classifier can be trained with PU learning, in which the state of the art is unbiased PU learning. However, if its model is very flexible, the empirical risk on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when it is minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.

1 Introduction

Positive-unlabeled (PU) learning dates back to [1, 2, 3] and has been well studied since then. It mainly focuses on binary classification applied to retrieval and to novelty or outlier detection tasks [4, 5, 6, 7], and it also has applications in matrix completion [8] and sequential data [9, 10]. Existing PU methods can be divided into two categories based on how the U data is handled. The first category (e.g., [11, 12]) identifies possible negative (N) data in the U data and then performs ordinary supervised (PN) learning; the second (e.g., [13, 14]) regards U data as N data with smaller weights. The former relies heavily on heuristics for identifying N data; the latter relies heavily on good choices of the weights of the U data, which are computationally expensive to tune.
To avoid tuning the weights, unbiased PU learning comes into play as a subcategory of the second category. The milestone is [4], which regards each U data point as weighted P and N data simultaneously. This could lead to unbiased risk estimators if we unrealistically assume that the class-posterior probability is one for all P data.¹ A breakthrough in this direction is [15], which proposed the first unbiased risk estimator; a more general estimator was then suggested in [16] as a common foundation. The former is unbiased but non-convex for loss functions satisfying a certain symmetric condition; the latter is always unbiased, and it is further convex for loss functions meeting a certain linear-odd condition [17, 18]. PU learning based on these unbiased risk estimators is the current state of the art.

However, the unbiased risk estimators give negative empirical risks if the model being trained is very flexible. For the general estimator in [16], there are three partial risks in the total risk (see Eq. (2) defined later); in particular, it has a negative risk term that regards P data as N data in order to cancel the bias caused by regarding U data as N data. In the worst case, the model can realize any measurable function and the loss function is not upper bounded, so that the empirical risk is not lower bounded. This needs to be fixed, since the original risk, which is the target being estimated, is non-negative.

¹ It implies that the P and N class-conditional densities have disjoint supports, so that any P and N data (as test data) can be perfectly separated by a fixed classifier that is sufficiently flexible.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

To this end, we propose a novel non-negative risk estimator that follows and improves on the state-of-the-art unbiased risk estimators mentioned above.
This estimator can be used for two purposes. First, given some validation data (which are also PU data), we can use our estimator to evaluate the risk; in this case it is biased yet optimal, and for some symmetric losses the mean-squared-error reduction is guaranteed. Second, given some training data, we can use our estimator to train binary classifiers; in this case its risk minimizer possesses an estimation error bound of the same order as the risk minimizers corresponding to its unbiased counterparts [15, 16, 19].

In addition, we propose a large-scale PU learning algorithm for minimizing the unbiased and non-negative risk estimators. This algorithm accepts any surrogate loss and is based on stochastic optimization, e.g., [20]. Note that [21] is the only existing large-scale PU algorithm, but it accepts only a single surrogate loss from [16] and is based on sequential minimal optimization [22].

The rest of this paper is organized as follows. In Section 2 we review unbiased PU learning, and in Section 3 we propose non-negative PU learning. Theoretical analyses are carried out in Section 4, and experimental results are discussed in Section 5. Conclusions are given in Section 6.

2 Unbiased PU learning

In this section, we review unbiased PU learning [15, 16].

Problem settings. Let $X \in \mathbb{R}^d$ and $Y \in \{\pm 1\}$ ($d \in \mathbb{N}$) be the input and output random variables. Let $p(x, y)$ be the underlying joint density of $(X, Y)$, let $p_p(x) = p(x \mid Y = +1)$ and $p_n(x) = p(x \mid Y = -1)$ be the P and N marginals (a.k.a. the P and N class-conditional densities), let $p(x)$ be the U marginal, let $\pi_p = p(Y = +1)$ be the class-prior probability, and let $\pi_n = p(Y = -1) = 1 - \pi_p$. Throughout the paper, $\pi_p$ is assumed known; it can be estimated from P and U data [23, 24, 25, 26].
Consider the two-sample problem setting of PU learning [5]: two sets of data are sampled independently from $p_p(x)$ and $p(x)$ as $\mathcal{X}_p = \{x^p_i\}_{i=1}^{n_p} \sim p_p(x)$ and $\mathcal{X}_u = \{x^u_i\}_{i=1}^{n_u} \sim p(x)$, and a classifier needs to be trained from $\mathcal{X}_p$ and $\mathcal{X}_u$.² In ordinary PN learning, $\mathcal{X}_n = \{x^n_i\}_{i=1}^{n_n} \sim p_n(x)$ rather than $\mathcal{X}_u$ would be available, and a classifier could be trained from $\mathcal{X}_p$ and $\mathcal{X}_n$.

Risk estimators. Unbiased PU learning relies on unbiased risk estimators. Let $g : \mathbb{R}^d \to \mathbb{R}$ be an arbitrary decision function, and let $\ell : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ be the loss function, such that $\ell(t, y)$ is the loss incurred by predicting an output $t$ when the ground truth is $y$. Denote by $R^+_p(g) = \mathbb{E}_p[\ell(g(X), +1)]$ and $R^-_n(g) = \mathbb{E}_n[\ell(g(X), -1)]$, where $\mathbb{E}_p[\cdot] = \mathbb{E}_{X \sim p_p}[\cdot]$ and $\mathbb{E}_n[\cdot] = \mathbb{E}_{X \sim p_n}[\cdot]$. Then the risk of $g$ is $R(g) = \mathbb{E}_{(X,Y) \sim p(x,y)}[\ell(g(X), Y)] = \pi_p R^+_p(g) + \pi_n R^-_n(g)$.

In PN learning, thanks to the availability of $\mathcal{X}_p$ and $\mathcal{X}_n$, $R(g)$ can be approximated directly by
$$\hat{R}_{\mathrm{pn}}(g) = \pi_p \hat{R}^+_p(g) + \pi_n \hat{R}^-_n(g), \quad (1)$$
where $\hat{R}^+_p(g) = (1/n_p) \sum_{i=1}^{n_p} \ell(g(x^p_i), +1)$ and $\hat{R}^-_n(g) = (1/n_n) \sum_{i=1}^{n_n} \ell(g(x^n_i), -1)$.

In PU learning, $\mathcal{X}_n$ is unavailable, but $R^-_n(g)$ can be approximated indirectly, as shown in [15, 16]. Denote by $R^-_p(g) = \mathbb{E}_p[\ell(g(X), -1)]$ and $R^-_u(g) = \mathbb{E}_{X \sim p(x)}[\ell(g(X), -1)]$. Since $\pi_n p_n(x) = p(x) - \pi_p p_p(x)$, we obtain $\pi_n R^-_n(g) = R^-_u(g) - \pi_p R^-_p(g)$, and $R(g)$ can be approximated indirectly by
$$\hat{R}_{\mathrm{pu}}(g) = \pi_p \hat{R}^+_p(g) - \pi_p \hat{R}^-_p(g) + \hat{R}^-_u(g), \quad (2)$$
where $\hat{R}^-_p(g) = (1/n_p) \sum_{i=1}^{n_p} \ell(g(x^p_i), -1)$ and $\hat{R}^-_u(g) = (1/n_u) \sum_{i=1}^{n_u} \ell(g(x^u_i), -1)$.

The empirical risk estimators in Eqs. (1) and (2) are unbiased and consistent with respect to all popular loss functions.³ When they are used for evaluating the risk (e.g., in cross-validation), $\ell$ is by default the zero-one loss, $\ell_{01}(t, y) = (1 - \mathrm{sign}(ty))/2$; when used for training, $\ell_{01}$ is replaced with a surrogate loss [27].
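The estimators in Eqs. (1) and (2) can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's code: the function names are ours, and the sigmoid loss from Section 3.3 stands in for $\ell$.

```python
import numpy as np

def sigmoid_loss(t, y):
    # l_sig(t, y) = 1 / (1 + exp(t*y)); bounded, and it satisfies the
    # symmetric condition l(t, +1) + l(t, -1) = 1.
    return 1.0 / (1.0 + np.exp(t * y))

def risk_pn(g_p, g_n, pi_p, loss=sigmoid_loss):
    """Empirical PN risk, Eq. (1); g_p, g_n hold g(x) on P and N samples."""
    return pi_p * loss(g_p, +1).mean() + (1 - pi_p) * loss(g_n, -1).mean()

def risk_pu_unbiased(g_p, g_u, pi_p, loss=sigmoid_loss):
    """Unbiased PU risk, Eq. (2): pi_p*R+_p - pi_p*R-_p + R-_u.
    Note the negative middle term: this estimate can go below zero."""
    return (pi_p * loss(g_p, +1).mean()
            - pi_p * loss(g_p, -1).mean()
            + loss(g_u, -1).mean())
```

For a loss satisfying the symmetric condition (3), this reduces algebraically to the cost-sensitive form of Eq. (4), $2\pi_p \hat{R}^+_p(g) + \hat{R}^-_u(g) - \pi_p$.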
In particular, [15] showed that if $\ell$ satisfies the symmetric condition
$$\ell(t, +1) + \ell(t, -1) = 1, \quad (3)$$
we will have
$$\hat{R}_{\mathrm{pu}}(g) = 2\pi_p \hat{R}^+_p(g) + \hat{R}^-_u(g) - \pi_p, \quad (4)$$
which can be minimized by separating $\mathcal{X}_p$ and $\mathcal{X}_u$ with ordinary cost-sensitive learning. One issue is that $\hat{R}_{\mathrm{pu}}(g)$ in (4) must be non-convex in $g$, since no $\ell(t, y)$ satisfying (3) can be convex in $t$. [16] showed that $\hat{R}_{\mathrm{pu}}(g)$ in (2) is convex in $g$ if $\ell(t, y)$ is convex in $t$ and meets the linear-odd condition [17, 18]
$$\ell(t, +1) - \ell(t, -1) = -t. \quad (5)$$
Let $g$ be parameterized by $\theta$; then (5) leads to a convex optimization problem so long as $g$ is linear in $\theta$, for which the globally optimal solution can be obtained. Eq. (5) is not only sufficient but also necessary for convexity if $\ell$ is unary, i.e., $\ell(t, -1) = \ell(-t, +1)$.

Justification. Thanks to the unbiasedness, we can study estimation error bounds (EEBs). Let $\mathcal{G}$ be the function class, and let $\hat{g}_{\mathrm{pn}}$ and $\hat{g}_{\mathrm{pu}}$ be the empirical risk minimizers of $\hat{R}_{\mathrm{pn}}(g)$ and $\hat{R}_{\mathrm{pu}}(g)$. [19] proved that the EEB of $\hat{g}_{\mathrm{pu}}$ is tighter than the EEB of $\hat{g}_{\mathrm{pn}}$ when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$, provided that (a) $\ell$ satisfies (3) and is Lipschitz continuous, and (b) the Rademacher complexity of $\mathcal{G}$ decays in $O(1/\sqrt{n})$ for data of size $n$ drawn from $p(x)$, $p_p(x)$, or $p_n(x)$.⁴ In other words, under mild conditions, PU learning is likely to outperform PN learning when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$. This phenomenon has been observed in experiments [19] and is illustrated in Figure 1(a).

3 Non-negative PU learning

In this section, we propose the non-negative risk estimator and the large-scale PU algorithm.

3.1 Motivation

Let us look inside the aforementioned justification of unbiased PU (uPU) learning. Intuitively, the advantage comes from the transformation $\pi_n R^-_n(g) = R^-_u(g) - \pi_p R^-_p(g)$.

² $\mathcal{X}_p$ is a set of independent data and so is $\mathcal{X}_u$, but $\mathcal{X}_p \cup \mathcal{X}_u$ need not be such a set.
³ Consistency here means that for fixed $g$, $\hat{R}_{\mathrm{pn}}(g) \to R(g)$ and $\hat{R}_{\mathrm{pu}}(g) \to R(g)$ as $n_p, n_n, n_u \to \infty$.
When we approximate $\pi_n R^-_n(g)$ from N data $\{x^n_i\}_{i=1}^{n_n}$, the convergence rate is $O_p(\pi_n/\sqrt{n_n})$, where $O_p$ denotes the order in probability; when we approximate $R^-_u(g) - \pi_p R^-_p(g)$ from P data $\{x^p_i\}_{i=1}^{n_p}$ and U data $\{x^u_i\}_{i=1}^{n_u}$, the convergence rate becomes $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$. As a result, we might benefit from a tighter uniform deviation bound when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$.

However, the critical assumption on the Rademacher complexity is indispensable; otherwise it is difficult for the EEB of $\hat{g}_{\mathrm{pu}}$ to be tighter than the EEB of $\hat{g}_{\mathrm{pn}}$. If $\mathcal{G} = \{g \mid \|g\|_\infty \le C_g\}$ where $C_g > 0$ is a constant, i.e., $\mathcal{G}$ contains all measurable functions with some bounded norm, then $\mathfrak{R}_{n,q}(\mathcal{G}) = O(1)$ for any $n$ and $q(x)$, and all bounds become trivial; moreover, if $\ell$ is not bounded from above, $\hat{R}_{\mathrm{pu}}(g)$ is not bounded from below, i.e., it may diverge to $-\infty$. Thus, in order to obtain a high-quality $\hat{g}_{\mathrm{pu}}$, $\mathcal{G}$ cannot be too complex, or equivalently the model of $g$ cannot be too flexible.

This argument is supported by the experiment illustrated in Figure 1(b). A multilayer perceptron was trained to separate the even and odd digits of MNIST hand-written digits [29]. This model is so flexible that the number of parameters is 500 times the total number of P and N data. From Figure 1(b) we can see: (A) on training data, the risks of uPU and PN both decrease, and uPU decreases faster than PN; (B) on test data, the risk of PN decreases, whereas the risk of uPU does not; the risk of uPU is lower at the beginning but higher at the end than that of PN. To sum up, the overfitting problem of uPU is serious, which evidences that in order to obtain a high-quality $\hat{g}_{\mathrm{pu}}$, the model of $g$ cannot be too flexible.

3.2 Non-negative risk estimator

Nevertheless, sometimes we have no choice: we are interested in using flexible models, while labeling more data is out of our control. Can we alleviate the overfitting problem with neither changing the model nor labeling more data?

⁴ Let $\sigma_1, \ldots, \sigma_n$ be $n$ Rademacher variables; the Rademacher complexity of $\mathcal{G}$ for $\mathcal{X}$ of size $n$ drawn from $q(x)$ is defined by $\mathfrak{R}_{n,q}(\mathcal{G}) = \mathbb{E}_{\mathcal{X}} \mathbb{E}_{\sigma_1, \ldots, \sigma_n}[\sup_{g \in \mathcal{G}} \frac{1}{n} \sum_{x_i \in \mathcal{X}} \sigma_i g(x_i)]$ [28]. For any fixed $\mathcal{G}$ and $q$, $\mathfrak{R}_{n,q}(\mathcal{G})$ still depends on $n$ and should decrease with $n$.

[Figure 1: Illustrative experimental results. (a) Plain linear model; (b) multilayer perceptron (MLP). Risk w.r.t. the surrogate loss over 500 epochs for PN, uPU, and nnPU on training and test data. The dataset is MNIST; even/odd digits are regarded as the P/N class, and $\pi_p \approx 1/2$; $n_p = 100$ and $n_n = 50$ for PN learning; $n_p = 100$ and $n_u = 59{,}900$ for unbiased PU (uPU) and non-negative PU (nnPU) learning. The model is a plain linear model (784-1) in (a) and an MLP (784-100-1) with ReLU in (b); it was trained by Algorithm 1, where the loss $\ell$ is $\ell_{\mathrm{sig}}$ and the optimization algorithm $\mathcal{A}$ is [20], with $\beta = 1/2$ for uPU, and $\beta = 0$ and $\gamma = 1$ for nnPU. Solid curves are $\hat{R}_{\mathrm{pn}}(g)$ on test data where $g \in \{\hat{g}_{\mathrm{pn}}, \hat{g}_{\mathrm{pu}}, \tilde{g}_{\mathrm{pu}}\}$, and dashed curves are $\hat{R}_{\mathrm{pn}}(\hat{g}_{\mathrm{pn}})$, $\hat{R}_{\mathrm{pu}}(\hat{g}_{\mathrm{pu}})$, and $\tilde{R}_{\mathrm{pu}}(\tilde{g}_{\mathrm{pu}})$ on training data. Note that nnPU is identical to uPU in (a).]

The answer is affirmative. Note that $\hat{R}_{\mathrm{pu}}(\hat{g}_{\mathrm{pu}})$ keeps decreasing and goes negative. This should be fixed, since $R(g) \ge 0$ for any $g$. Specifically, it holds that $R^-_u(g) - \pi_p R^-_p(g) = \pi_n R^-_n(g) \ge 0$, but $\hat{R}^-_u(g) - \pi_p \hat{R}^-_p(g) \ge 0$ is not always true, which is a potential reason for uPU to overfit. Based on this key observation, we propose the non-negative risk estimator for PU learning:
$$\tilde{R}_{\mathrm{pu}}(g) = \pi_p \hat{R}^+_p(g) + \max\left\{0,\; \hat{R}^-_u(g) - \pi_p \hat{R}^-_p(g)\right\}. \quad (6)$$
Let $\tilde{g}_{\mathrm{pu}} = \arg\min_{g \in \mathcal{G}} \tilde{R}_{\mathrm{pu}}(g)$ be the empirical risk minimizer of $\tilde{R}_{\mathrm{pu}}(g)$. We refer to the process of obtaining $\tilde{g}_{\mathrm{pu}}$ as non-negative PU (nnPU) learning. The implementation of nnPU is given in Section 3.3, and theoretical analyses of $\tilde{R}_{\mathrm{pu}}(g)$ and $\tilde{g}_{\mathrm{pu}}$ are given in Section 4.
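The clipping in Eq. (6) can be contrasted with Eq. (2) directly. The sketch below is ours (the unbiased estimator is redefined here so the snippet stands alone); a model that scores P data strongly positive and everything else strongly negative, the overfitting pattern described above, drives Eq. (2) below zero, while Eq. (6) stays non-negative.

```python
import numpy as np

def sig_loss(t, y):
    # Sigmoid loss: bounded in [0, 1], so every partial risk is bounded.
    return 1.0 / (1.0 + np.exp(t * y))

def risk_pu_unbiased(g_p, g_u, pi_p, loss=sig_loss):
    # Eq. (2): unbiased, but can go negative for flexible models.
    return (pi_p * loss(g_p, +1).mean()
            - pi_p * loss(g_p, -1).mean()
            + loss(g_u, -1).mean())

def risk_pu_nonneg(g_p, g_u, pi_p, loss=sig_loss):
    # Eq. (6): clip the estimated N-risk at zero. Always >= Eq. (2),
    # and >= 0 whenever the loss itself is non-negative.
    neg = loss(g_u, -1).mean() - pi_p * loss(g_p, -1).mean()
    return pi_p * loss(g_p, +1).mean() + max(0.0, neg)
```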
Again, from Figure 1(b) we can see: (A) on training data, the risk of nnPU first decreases and then becomes flatter and flatter, so that the risk of nnPU is closer to the risk of PN and farther from that of uPU; in short, the risk of nnPU does not go down with uPU after a certain epoch; (B) on test data, the tendency is similar, but the risk of nnPU does not go up with uPU; (C) at the end, nnPU achieves the lowest risk on test data. In summary, nnPU works by explicitly constraining the training risk of uPU to be non-negative.

3.3 Implementation

A list of popular loss functions and their properties is shown in Table 1. Let $g$ be parameterized by $\theta$. If $g$ is linear in $\theta$, the losses satisfying (5) result in convex optimizations. However, if $g$ needs to be flexible, it will be highly nonlinear in $\theta$; then the losses satisfying (5) have no advantage over the others, since the optimizations are non-convex anyway. In [15], the ramp loss was used and $\hat{R}_{\mathrm{pu}}(g)$ was minimized by the concave-convex procedure [30]. This solver is fairly sophisticated, and if we replace $\hat{R}_{\mathrm{pu}}(g)$ with $\tilde{R}_{\mathrm{pu}}(g)$, it becomes even more difficult to implement. To this end, we propose to use the sigmoid loss $\ell_{\mathrm{sig}}(t, y) = 1/(1 + \exp(ty))$: its gradient is non-zero everywhere, and $\tilde{R}_{\mathrm{pu}}(g)$ can be minimized by off-the-shelf gradient methods.

In the face of big data, we should scale PU learning up by stochastic optimization. Minimizing $\hat{R}_{\mathrm{pu}}(g)$ is embarrassingly parallel, while minimizing $\tilde{R}_{\mathrm{pu}}(g)$ is not, since $\hat{R}_{\mathrm{pu}}(g)$ is point-wise but $\tilde{R}_{\mathrm{pu}}(g)$ is not, due to the max operator. That being said, $\max\{0, \hat{R}^-_u(g; \mathcal{X}_u) - \pi_p \hat{R}^-_p(g; \mathcal{X}_p)\}$ is no greater than $(1/N) \sum_{i=1}^N \max\{0, \hat{R}^-_u(g; \mathcal{X}^i_u) - \pi_p \hat{R}^-_p(g; \mathcal{X}^i_p)\}$, where $(\mathcal{X}^i_p, \mathcal{X}^i_u)$ is the $i$-th mini-batch, and hence the corresponding upper bound of $\tilde{R}_{\mathrm{pu}}(g)$ can easily be minimized in parallel.

Table 1: Loss functions for PU learning and their properties.
| Name | Definition | (3) | (5) | Bounded | Lipschitz | $\ell'(z) \ne 0$ |
|---|---|---|---|---|---|---|
| Zero-one loss | $(1 - \mathrm{sign}(z))/2$ | ✓ | × | ✓ | × | $z = 0$ |
| Ramp loss | $\max\{0, \min\{1, (1 - z)/2\}\}$ | ✓ | × | ✓ | ✓ | $z \in [-1, +1]$ |
| Squared loss | $(z - 1)^2/4$ | × | ✓ | × | × | $z \in \mathbb{R}$ |
| Logistic loss | $\ln(1 + \exp(-z))$ | × | ✓ | × | ✓ | $z \in \mathbb{R}$ |
| Hinge loss | $\max\{0, 1 - z\}$ | × | × | × | ✓ | $z \in (-\infty, +1]$ |
| Double hinge loss | $\max\{0, (1 - z)/2, -z\}$ | × | ✓ | × | ✓ | $z \in (-\infty, +1]$ |
| Sigmoid loss | $1/(1 + \exp(z))$ | ✓ | × | ✓ | ✓ | $z \in \mathbb{R}$ |

All loss functions are unary, such that $\ell(t, y) = \ell(z)$ with $z = ty$. The ramp loss comes from [15]; the double hinge loss is from [16], in which the squared, logistic, and hinge losses were discussed as well. The ramp and squared losses are scaled to satisfy (3) or (5). The sigmoid loss is a horizontally mirrored logistic function; the logistic loss is the negative logarithm of the logistic function.

Algorithm 1: Large-scale PU learning based on stochastic optimization

Input: training data $(\mathcal{X}_p, \mathcal{X}_u)$; hyperparameters $0 \le \beta \le \pi_p \sup_t \max_y \ell(t, y)$ and $0 \le \gamma \le 1$
Output: model parameter $\theta$ for $\hat{g}_{\mathrm{pu}}(x; \theta)$ or $\tilde{g}_{\mathrm{pu}}(x; \theta)$
1: Let $\mathcal{A}$ be an external SGD-like stochastic optimization algorithm such as [20] or [31]
2: while no stopping criterion has been met:
3:   Shuffle $(\mathcal{X}_p, \mathcal{X}_u)$ into $N$ mini-batches, and denote by $(\mathcal{X}^i_p, \mathcal{X}^i_u)$ the $i$-th mini-batch
4:   for $i = 1$ to $N$:
5:     if $\hat{R}^-_u(g; \mathcal{X}^i_u) - \pi_p \hat{R}^-_p(g; \mathcal{X}^i_p) \ge -\beta$:
6:       Set gradient $\nabla_\theta \hat{R}_{\mathrm{pu}}(g; \mathcal{X}^i_p, \mathcal{X}^i_u)$
7:       Update $\theta$ by $\mathcal{A}$ with its current step size $\eta$
8:     else:
9:       Set gradient $\nabla_\theta (\pi_p \hat{R}^-_p(g; \mathcal{X}^i_p) - \hat{R}^-_u(g; \mathcal{X}^i_u))$
10:      Update $\theta$ by $\mathcal{A}$ with a discounted step size $\gamma\eta$

The large-scale PU algorithm is described in Algorithm 1. Let $r_i = \hat{R}^-_u(g; \mathcal{X}^i_u) - \pi_p \hat{R}^-_p(g; \mathcal{X}^i_p)$. In practice, we may tolerate $r_i \ge -\beta$ where $0 \le \beta \le \pi_p \sup_t \max_y \ell(t, y)$, since $r_i$ comes from a single mini-batch. The degree of tolerance is controlled by $\beta$: there is zero tolerance if $\beta = 0$, and we are minimizing $\hat{R}_{\mathrm{pu}}(g)$ if $\beta = \pi_p \sup_t \max_y \ell(t, y)$. Otherwise, if $r_i < -\beta$, we go along $-\nabla_\theta r_i$ with the step size discounted by $\gamma$, where $0 \le \gamma \le 1$, to make this mini-batch less overfitted.
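Algorithm 1 can be sketched as follows for a linear model $g(x) = w^\top x$ with the sigmoid loss, whose gradient can be written out by hand. This is our own simplification: plain SGD replaces the adaptive optimizer $\mathcal{A}$, the function names and default hyperparameters are assumptions, and only the U set is reshuffled while P mini-batches are resampled.

```python
import numpy as np

def sig_loss(t, y):
    # Sigmoid loss l_sig(t, y) = 1 / (1 + exp(t*y)).
    return 1.0 / (1.0 + np.exp(t * y))

def grad_sig_loss(x, t, y):
    # Gradient of l_sig(w.x, y) w.r.t. w for a linear model:
    # dl/dt = -y * l * (1 - l), chained with dt/dw = x.
    l = sig_loss(t, y)
    return (-y * l * (1 - l))[:, None] * x

def nnpu_sgd(Xp, Xu, pi_p, lr=0.1, beta=0.0, gamma=1.0,
             epochs=20, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(Xp.shape[1])
    for _ in range(epochs):
        perm = rng.permutation(len(Xu))               # step 3: shuffle
        for i in range(0, len(Xu), batch):
            bu = Xu[perm[i:i + batch]]
            bp = Xp[rng.integers(0, len(Xp), size=batch)]
            tp, tu = bp @ w, bu @ w
            r_i = sig_loss(tu, -1).mean() - pi_p * sig_loss(tp, -1).mean()
            g_neg = (grad_sig_loss(bu, tu, -1).mean(axis=0)
                     - pi_p * grad_sig_loss(bp, tp, -1).mean(axis=0))
            if r_i >= -beta:                          # steps 5-7: uPU gradient
                g_pos = pi_p * grad_sig_loss(bp, tp, +1).mean(axis=0)
                w -= lr * (g_pos + g_neg)
            else:                                     # steps 8-10: follow -grad r_i
                w -= gamma * lr * (-g_neg)
    return w
```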
Algorithm 1 is insensitive to the choice of $\gamma$ if the optimization algorithm $\mathcal{A}$ is adaptive, such as [20] or [31].

4 Theoretical analyses

In this section, we analyze the risk estimator (6) and its minimizer (all proofs are in Appendix B).

4.1 Bias and consistency

For fixed $g$, $\tilde{R}_{\mathrm{pu}}(g) \ge \hat{R}_{\mathrm{pu}}(g)$ for any $(\mathcal{X}_p, \mathcal{X}_u)$, but $\hat{R}_{\mathrm{pu}}(g)$ is unbiased, which implies that $\tilde{R}_{\mathrm{pu}}(g)$ is biased in general. A fundamental question is then whether $\tilde{R}_{\mathrm{pu}}(g)$ is consistent; we now prove that it is. To begin with, partition all possible $(\mathcal{X}_p, \mathcal{X}_u)$ into $\mathcal{D}^+(g) = \{(\mathcal{X}_p, \mathcal{X}_u) \mid \hat{R}^-_u(g) - \pi_p \hat{R}^-_p(g) \ge 0\}$ and $\mathcal{D}^-(g) = \{(\mathcal{X}_p, \mathcal{X}_u) \mid \hat{R}^-_u(g) - \pi_p \hat{R}^-_p(g) < 0\}$. Assume there are $C_g > 0$ and $C_\ell > 0$ such that $\sup_{g \in \mathcal{G}} \|g\|_\infty \le C_g$ and $\sup_{|t| \le C_g} \max_y \ell(t, y) \le C_\ell$.

Lemma 1. The following three conditions are equivalent: (A) the probability measure of $\mathcal{D}^-(g)$ is non-zero; (B) $\tilde{R}_{\mathrm{pu}}(g)$ differs from $\hat{R}_{\mathrm{pu}}(g)$ with non-zero probability over repeated sampling of $(\mathcal{X}_p, \mathcal{X}_u)$; (C) the bias of $\tilde{R}_{\mathrm{pu}}(g)$ is positive. In addition, assuming that there is $\alpha > 0$ such that $R^-_n(g) \ge \alpha$, the probability measure of $\mathcal{D}^-(g)$ can be bounded by
$$\Pr(\mathcal{D}^-(g)) \le \exp\left(-2(\alpha/C_\ell)^2 / (\pi_p^2/n_p + 1/n_u)\right). \quad (7)$$

Based on Lemma 1, we can show the exponential decay of the bias and also the consistency. For convenience, denote $\chi_{n_p, n_u} = 2\pi_p/\sqrt{n_p} + 1/\sqrt{n_u}$.

Theorem 2 (Bias and consistency). Assume that $R^-_n(g) \ge \alpha > 0$ and denote by $\Delta_g$ the right-hand side of Eq. (7). As $n_p, n_u \to \infty$, the bias of $\tilde{R}_{\mathrm{pu}}(g)$ decays exponentially:
$$0 \le \mathbb{E}_{\mathcal{X}_p, \mathcal{X}_u}[\tilde{R}_{\mathrm{pu}}(g)] - R(g) \le C_\ell \pi_p \Delta_g. \quad (8)$$
Moreover, for any $\delta > 0$, let $C_\delta = C_\ell \sqrt{\ln(2/\delta)/2}$; then with probability at least $1 - \delta$,
$$|\tilde{R}_{\mathrm{pu}}(g) - R(g)| \le C_\delta \cdot \chi_{n_p, n_u} + C_\ell \pi_p \Delta_g, \quad (9)$$
and with probability at least $1 - \delta - \Delta_g$,
$$|\tilde{R}_{\mathrm{pu}}(g) - R(g)| \le C_\delta \cdot \chi_{n_p, n_u}. \quad (10)$$

Either (9) or (10) in Theorem 2 indicates that for fixed $g$, $\tilde{R}_{\mathrm{pu}}(g) \to R(g)$ in $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$. This convergence rate is optimal according to the central limit theorem [32], which means the proposed estimator is a biased yet optimal estimator of the risk.
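The qualitative content of Lemma 1 and Theorem 2, a positive bias introduced by clipping that decays as $n_p, n_u$ grow, can be checked numerically. The sketch below is our own toy Monte Carlo experiment, not from the paper: it fixes a classifier $g(x) = 3x$ with Gaussian class-conditionals $p_p = \mathcal{N}(+1, 1)$, $p_n = \mathcal{N}(-1, 1)$ and the sigmoid loss, and estimates $\mathbb{E}[\tilde{R}_{\mathrm{pu}}(g)] - R(g)$ by simulation; all names and distributions are assumptions.

```python
import numpy as np

def sig_loss(t, y):
    return 1.0 / (1.0 + np.exp(t * y))

def nn_risk(g_p, g_u, pi_p):
    # Eq. (6) with the sigmoid loss, on precomputed scores g(x).
    neg = sig_loss(g_u, -1).mean() - pi_p * sig_loss(g_p, -1).mean()
    return pi_p * sig_loss(g_p, +1).mean() + max(0.0, neg)

def mc_bias(n_p, n_u, pi_p=0.5, trials=2000, seed=0):
    """Monte Carlo estimate of E[R~pu(g)] - R(g) for g(x) = 3x."""
    rng = np.random.default_rng(seed)
    # True risk R(g) via a large reference sample.
    xp = 3 * rng.normal(+1, 1, 200000)
    xn = 3 * rng.normal(-1, 1, 200000)
    true_risk = (pi_p * sig_loss(xp, +1).mean()
                 + (1 - pi_p) * sig_loss(xn, -1).mean())
    est = []
    for _ in range(trials):
        g_p = 3 * rng.normal(+1, 1, n_p)
        is_p = rng.random(n_u) < pi_p                 # U keeps the prior
        g_u = 3 * np.where(is_p, rng.normal(+1, 1, n_u),
                           rng.normal(-1, 1, n_u))
        est.append(nn_risk(g_p, g_u, pi_p))
    return float(np.mean(est) - true_risk)
```

At tiny sample sizes the max in Eq. (6) clips often and the bias is clearly positive; at moderate sizes it is already negligible, consistent with the exponential decay in Eq. (8).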
4.2 Mean squared error

After introducing the bias, $\tilde{R}_{\mathrm{pu}}(g)$ tends to overestimate $R(g)$. It is not a shrinkage estimator [33, 34], so its mean squared error (MSE) is not necessarily smaller than that of $\hat{R}_{\mathrm{pu}}(g)$. However, we can still characterize the reduction in MSE.

Theorem 3 (MSE reduction). It holds that $\mathrm{MSE}(\tilde{R}_{\mathrm{pu}}(g)) < \mathrm{MSE}(\hat{R}_{\mathrm{pu}}(g))$,⁵ if and only if
$$\int_{(\mathcal{X}_p, \mathcal{X}_u) \in \mathcal{D}^-(g)} \left(\hat{R}_{\mathrm{pu}}(g) + \tilde{R}_{\mathrm{pu}}(g) - 2R(g)\right)\left(\hat{R}^-_u(g) - \pi_p \hat{R}^-_p(g)\right) \mathrm{d}F(\mathcal{X}_p, \mathcal{X}_u) > 0, \quad (11)$$
where $\mathrm{d}F(\mathcal{X}_p, \mathcal{X}_u) = \prod_{i=1}^{n_p} p_p(x^p_i)\,\mathrm{d}x^p_i \cdot \prod_{i=1}^{n_u} p(x^u_i)\,\mathrm{d}x^u_i$. Eq. (11) is valid if the following conditions are met: (a) $\Pr(\mathcal{D}^-(g)) > 0$; (b) $\ell$ satisfies Eq. (3); (c) $R^-_n(g) \ge \alpha > 0$; (d) $n_u \gg n_p$, such that $R^-_u(g) - \hat{R}^-_u(g) \le 2\alpha$ almost surely on $\mathcal{D}^-(g)$. In fact, given these four conditions, we have for any $0 \le \beta \le C_\ell \pi_p$,
$$\mathrm{MSE}(\hat{R}_{\mathrm{pu}}(g)) - \mathrm{MSE}(\tilde{R}_{\mathrm{pu}}(g)) \ge 3\beta^2 \Pr\{\tilde{R}_{\mathrm{pu}}(g) - \hat{R}_{\mathrm{pu}}(g) > \beta\}. \quad (12)$$

Assumption (d) in Theorem 3 is explained as follows. Since U data can be much cheaper than P data in practice, it is natural to assume that $n_u$ is much larger and grows much faster than $n_p$; hence $\Pr\{R^-_u(g) - \hat{R}^-_u(g) \ge \alpha\} / \Pr\{\hat{R}^-_p(g) - R^-_p(g) \ge \alpha/\pi_p\} \propto \exp(n_p - n_u)$ asymptotically.⁶ This means the contribution of $\mathcal{X}_u$ is negligible for making $(\mathcal{X}_p, \mathcal{X}_u) \in \mathcal{D}^-(g)$, so that $\Pr(\mathcal{D}^-(g))$ exhibits exponential decay mainly in $n_p$. Since $\Pr\{R^-_u(g) - \hat{R}^-_u(g) \ge 2\alpha\}$ has stronger exponential decay in $n_u$ than $\Pr\{R^-_u(g) - \hat{R}^-_u(g) \ge \alpha\}$, and $n_u \gg n_p$, we made assumption (d).

4.3 Estimation error

While Theorems 2 and 3 addressed the use of (6) for evaluating the risk, we are likewise interested in its use for training classifiers. In what follows, we analyze the estimation error $R(\tilde{g}_{\mathrm{pu}}) - R(g^*)$, where $g^*$ is the true risk minimizer in $\mathcal{G}$, i.e., $g^* = \arg\min_{g \in \mathcal{G}} R(g)$. As is common practice [28], assume that $\ell(t, y)$ is Lipschitz continuous in $t$ for all $|t| \le C_g$ with Lipschitz constant $L_\ell$.

Theorem 4 (Estimation error bound). Assume that (a) $\inf_{g \in \mathcal{G}} R^-_n(g) \ge \alpha > 0$, and denote by $\Delta$ the right-hand side of Eq.
(7); (b) $\mathcal{G}$ is closed under negation, i.e., $g \in \mathcal{G}$ if and only if $-g \in \mathcal{G}$. Then, for any $\delta > 0$, with probability at least $1 - \delta$,
$$R(\tilde{g}_{\mathrm{pu}}) - R(g^*) \le 16 L_\ell \pi_p \mathfrak{R}_{n_p, p_p}(\mathcal{G}) + 8 L_\ell \mathfrak{R}_{n_u, p}(\mathcal{G}) + 2 C'_\delta \cdot \chi_{n_p, n_u} + 2 C_\ell \pi_p \Delta, \quad (13)$$
where $C'_\delta = C_\ell \sqrt{\ln(1/\delta)/2}$, and $\mathfrak{R}_{n_p, p_p}(\mathcal{G})$ and $\mathfrak{R}_{n_u, p}(\mathcal{G})$ are the Rademacher complexities of $\mathcal{G}$ for the sampling of size $n_p$ from $p_p(x)$ and of size $n_u$ from $p(x)$, respectively.

Theorem 4 ensures that learning with (6) is also consistent: as $n_p, n_u \to \infty$, $R(\tilde{g}_{\mathrm{pu}}) \to R(g^*)$, and if $\ell$ satisfies (5), all optimizations are convex and $\tilde{g}_{\mathrm{pu}} \to g^*$. For linear-in-parameter models with a bounded norm, $\mathfrak{R}_{n_p, p_p}(\mathcal{G}) = O(1/\sqrt{n_p})$ and $\mathfrak{R}_{n_u, p}(\mathcal{G}) = O(1/\sqrt{n_u})$, and thus $R(\tilde{g}_{\mathrm{pu}}) \to R(g^*)$ in $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$.

For comparison, $R(\hat{g}_{\mathrm{pu}}) - R(g^*)$ can be bounded using a different proof technique [19]:
$$R(\hat{g}_{\mathrm{pu}}) - R(g^*) \le 8 L_\ell \pi_p \mathfrak{R}_{n_p, p_p}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u, p}(\mathcal{G}) + 2 C_\delta \cdot \chi_{n_p, n_u}, \quad (14)$$
where $C_\delta = C_\ell \sqrt{\ln(2/\delta)/2}$. The differences between (13) and (14) come entirely from the differences between the corresponding uniform deviation bounds, i.e., the following lemma and Lemma 8 of [19].

Lemma 5. Under the assumptions of Theorem 4, for any $\delta > 0$, with probability at least $1 - \delta$,
$$\sup_{g \in \mathcal{G}} |\tilde{R}_{\mathrm{pu}}(g) - R(g)| \le 8 L_\ell \pi_p \mathfrak{R}_{n_p, p_p}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u, p}(\mathcal{G}) + C'_\delta \cdot \chi_{n_p, n_u} + C_\ell \pi_p \Delta. \quad (15)$$

Notice that $\hat{R}_{\mathrm{pu}}(g)$ is point-wise while $\tilde{R}_{\mathrm{pu}}(g)$ is not, due to the maximum, which makes Lemma 5 much more difficult to prove than Lemma 8 of [19]. The key trick is that after symmetrization, we employ $|\max\{0, z\} - \max\{0, z'\}| \le |z - z'|$, making the three differences of partial risks point-wise (see (18) in the proof).

⁵ Here, $\mathrm{MSE}(\cdot)$ is over repeated sampling of $(\mathcal{X}_p, \mathcal{X}_u)$.
⁶ This can be derived as $n_p, n_u \to \infty$ by applying the central limit theorem to the two differences and then L'Hôpital's rule to the ratio of complementary error functions [32].
As a consequence, we have to use a different Rademacher complexity with the absolute value inside the supremum [35, 36], whose contraction doubles the coefficients of (15) compared with Lemma 8 of [19]; moreover, we have to assume that $\mathcal{G}$ is closed under negation in order to change back to the standard Rademacher complexity without the absolute value [28]. Therefore, the differences between (13) and (14) are mainly due to the different proof techniques and do not reflect intrinsic differences between the empirical risk minimizers.

5 Experiments

In this section, we compare PN, unbiased PU (uPU), and non-negative PU (nnPU) learning experimentally. We focus on training deep neural networks, as uPU learning usually does not overfit if a linear-in-parameter model is used [19], in which case nothing needs to be fixed.

Table 2 describes the specification of the benchmark datasets. MNIST, 20News, and CIFAR-10 originally have 10, 7, and 10 classes, and we constructed the P and N classes from them as follows. MNIST was preprocessed so that 0, 2, 4, 6, 8 constitute the P class, while 1, 3, 5, 7, 9 constitute the N class. For 20News, 'alt.', 'comp.', 'misc.', and 'rec.' make up the P class, and 'sci.', 'soc.', and 'talk.' make up the N class. For CIFAR-10, the P class is formed by 'airplane', 'automobile', 'ship', and 'truck', and the N class is formed by 'bird', 'cat', 'deer', 'dog', 'frog', and 'horse'. The dataset epsilon has 2 classes, so such a construction is unnecessary.

The three learning methods were set up as follows: (A) for PN, $n_p = 1{,}000$ and $n_n = (\pi_n/2\pi_p)^2 n_p$; (B) for uPU, $n_p = 1{,}000$ and $n_u$ is the total number of training data; (C) for nnPU, $n_p$ and $n_u$ are exactly the same as for uPU. For uPU and nnPU, the P and U data were dependent, because neither $\hat{R}_{\mathrm{pu}}(g)$ in Eq. (2) nor $\tilde{R}_{\mathrm{pu}}(g)$ in Eq. (6) requires them to be independent. The choice of $n_n$ was motivated by [19] and may make nnPU potentially better than PN as $n_u \to \infty$ (whether $n_p < \infty$ or $n_p \le n_u$).
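This P/U construction, drawing the P sample from the positive class and the U sample from the whole labeled pool so that U keeps the class prior and may overlap with P, can be sketched as follows. The helper name and interface are ours, not from the paper's released code.

```python
import numpy as np

def make_pu_split(X, y, positive_labels, n_p, n_u, rng=None):
    """Turn a fully labeled set into a PU training set: n_p points drawn
    from the P class, n_u points drawn from the whole pool. The U sample
    keeps the original class prior pi_p and may overlap with the P sample,
    which Eqs. (2) and (6) permit."""
    rng = rng if rng is not None else np.random.default_rng(0)
    is_p = np.isin(y, positive_labels)
    pi_p = is_p.mean()                       # class prior, assumed known
    p_idx = rng.choice(np.flatnonzero(is_p), size=n_p, replace=False)
    u_idx = rng.choice(len(X), size=n_u, replace=False)
    return X[p_idx], X[u_idx], pi_p
```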
The model for MNIST was a 6-layer multilayer perceptron (MLP) with ReLU [40] (more specifically, d-300-300-300-300-1). For epsilon, the model was similar, except that the activation was replaced with Softsign [41] for better performance. For 20News, we borrowed the pre-trained word embeddings from GloVe [42], and the model can be written as d-avg_pool(word_emb(d,300))-300-300-1, where word_emb(d,300) retrieves 300-dimensional word embeddings for all words in a document, avg_pool executes average pooling, and the resulting vector is fed to a 4-layer MLP with Softsign.

Table 2: Specification of benchmark datasets, models, and optimization algorithms.

| Name | # Train | # Test | # Feature | $\pi_p$ | Model $g(x; \theta)$ | Opt. alg. $\mathcal{A}$ |
|---|---|---|---|---|---|---|
| MNIST [29] | 60,000 | 10,000 | 784 | 0.49 | 6-layer MLP with ReLU | Adam [20] |
| epsilon [37] | 400,000 | 100,000 | 2,000 | 0.50 | 6-layer MLP with Softsign | Adam [20] |
| 20News [38] | 11,314 | 7,532 | 61,188 | 0.44 | 5-layer MLP with Softsign | AdaGrad [31] |
| CIFAR-10 [39] | 50,000 | 10,000 | 3,072 | 0.40 | 13-layer CNN with ReLU | Adam [20] |

See http://yann.lecun.com/exdb/mnist/ for MNIST, https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html for epsilon, http://qwone.com/~jason/20Newsgroups/ for 20Newsgroups, and https://www.cs.toronto.edu/~kriz/cifar.html for CIFAR-10.

[Figure 2: Experimental results of training deep neural networks. Risk w.r.t. the zero-one loss over 200 epochs for PN, uPU, and nnPU on training and test data, on (a) MNIST, (b) epsilon, (c) 20News, and (d) CIFAR-10.]
The model for CIFAR-10 was an all convolutional net [43]: (32*32*3)-[C(3*3,96)]*2-C(3*3,96,2)-[C(3*3,192)]*2-C(3*3,192,2)-C(3*3,192)-C(1*1,192)-C(1*1,10)-1000-1000-1, where the input is a 32*32 RGB image, C(3*3,96) means 96 channels of 3*3 convolutions followed by ReLU, [ · ]*2 means there are two such layers, C(3*3,96,2) means a similar layer but with stride 2, etc.; it is one of the best architectures for CIFAR-10. Batch normalization [44] was applied before hidden layers. Furthermore, the sigmoid loss ℓsig was used as the surrogate loss, and an ℓ2-regularization was also added. The resulting objectives were minimized by Adam [20] on MNIST, epsilon and CIFAR-10, and by AdaGrad [31] on 20News; we fixed β = 0 and γ = 1 for simplicity. The experimental results are reported in Figure 2, where means and standard deviations of training and test risks based on the same 10 random samplings are shown. We can see that uPU overfitted the training data and nnPU fixed this problem. Additionally, given limited N data, nnPU outperformed PN on MNIST, epsilon and CIFAR-10 and was comparable to it on 20News. In summary, with the proposed non-negative risk estimator, we are able to use very flexible models given limited P data. We further tried some cases where πp is misspecified, in order to simulate PU learning in the wild, where errors in estimating πp are inevitable. More specifically, we tested nnPU learning by replacing πp with π′p ∈ {0.8πp, 0.9πp, . . . , 1.2πp} and giving π′p to the learning method, so that it would regard π′p as πp during the entire training process. The experimental setup was exactly the same as before except for the replacement of πp. The experimental results are reported in Figure 3, where means of test risks of nnPU based on the same 10 random samplings are shown, and the best test risks are identified (horizontal lines mark the best mean test risks and vertical lines mark the epochs at which they were achieved).
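The mechanism behind the fix can be seen by writing both risk estimates down: the unbiased estimator R̂pu(g) of Eq. (2) subtracts a πp-weighted positive term from the unlabeled term and can therefore go negative, while the non-negative estimator R̃pu(g) of Eq. (6) clips that difference at zero. A toy sketch with the sigmoid loss (the score lists are fabricated stand-ins for classifier outputs, not experimental data):

```python
import math

def sigmoid_loss(score, label):
    """l_sig(g(x), y) = 1 / (1 + exp(y * g(x))): a smooth surrogate of the zero-one loss."""
    return 1.0 / (1.0 + math.exp(label * score))

def risks(p_scores, u_scores, pi_p):
    """Unbiased (uPU) and non-negative (nnPU) risk estimates from P and U scores."""
    Rp_pos = sum(sigmoid_loss(s, +1) for s in p_scores) / len(p_scores)
    Rp_neg = sum(sigmoid_loss(s, -1) for s in p_scores) / len(p_scores)
    Ru_neg = sum(sigmoid_loss(s, -1) for s in u_scores) / len(u_scores)
    upu = pi_p * Rp_pos + Ru_neg - pi_p * Rp_neg             # Eq. (2): can go negative
    nnpu = pi_p * Rp_pos + max(0.0, Ru_neg - pi_p * Rp_neg)  # Eq. (6): clipped at zero
    return upu, nnpu

# An overfitted classifier (P scored very high, U very low) drives the unbiased
# estimate below zero, while the non-negative estimate stays at or above zero.
upu, nnpu = risks([8.0, 9.0, 10.0], [-8.0, -9.0], pi_p=0.5)
print(upu < 0.0, nnpu >= 0.0)  # prints: True True
```

Minimizing the uPU objective past zero is exactly the overfitting seen in Figure 2; the clip removes that incentive.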
We can see that on MNIST, the greater the misspecification was, the worse nnPU performed, and under-misspecification hurt more than over-misspecification; on epsilon, the cases where π′p equals πp, 1.1πp and 1.2πp were comparable, but the best was π′p = 1.1πp rather than π′p = πp; on 20News, these three cases became different, such that π′p = πp was superior to π′p = 1.2πp but inferior to π′p = 1.1πp; at last, on CIFAR-10, π′p = πp and π′p = 1.1πp were comparable again, and π′p = 1.2πp was the winner.

Figure 3: Experimental results given π′p ∈ {0.8πp, 0.9πp, . . . , 1.2πp}. Each panel plots the mean test risk w.r.t. the zero-one loss against the training epoch on (a) MNIST, (b) epsilon, (c) 20News, and (d) CIFAR-10.

In all the experiments, we have fixed β = 0, which may explain this phenomenon. Recall that uPU overfitted seriously on all the benchmark datasets, and note that the larger π′p is, the more different nnPU is from uPU. Therefore, the replacement of πp with some π′p > πp introduces additional bias of R̃pu(g) in estimating R(g), but it also pushes R̃pu(g) away from R̂pu(g) and hence pushes nnPU away from uPU. This may result in lower test risks given some π′p slightly larger than πp, as shown in Figure 3. This is also why under-misspecified π′p hurt more than over-misspecified π′p. All the experiments were done with Chainer [45], and our implementation based on it is available at https://github.com/kiryor/nnPUlearning.

6 Conclusions

We proposed a non-negative risk estimator for PU learning that follows and improves on the state-of-the-art unbiased risk estimators.
No matter how flexible the model is, the proposed estimator will not go negative, unlike its unbiased counterparts. It is more robust against overfitting when being minimized, and training very flexible models such as deep neural networks given limited P data becomes possible. We also developed a large-scale PU learning algorithm. Extensive theoretical analyses were presented, and the usefulness of our non-negative PU learning was verified by intensive experiments. A promising future direction is extending the current work to semi-supervised learning along the lines of [46].

Acknowledgments

GN and MS were supported by JST CREST JPMJCR1403, and GN was also partially supported by Microsoft Research Asia.

References

[1] F. Denis. PAC learning from positive statistical queries. In ALT, 1998.
[2] F. De Comité, F. Denis, R. Gilleron, and F. Letouzey. Positive and unlabeled examples help learning. In ALT, 1999.
[3] F. Letouzey, F. Denis, and R. Gilleron. Learning from positive and unlabeled examples. In ALT, 2000.
[4] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In KDD, 2008.
[5] G. Ward, T. Hastie, S. Barry, J. Elith, and J. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
[6] C. Scott and G. Blanchard. Novelty detection: Unlabeled data definitely help. In AISTATS, 2009.
[7] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11:2973–3009, 2010.
[8] C.-J. Hsieh, N. Natarajan, and I. S. Dhillon. PU learning for matrix completion. In ICML, 2015.
[9] X. Li, P. S. Yu, B. Liu, and S.-K. Ng. Positive unlabeled learning for data stream classification. In SDM, 2009.
[10] M. N. Nguyen, X. Li, and S.-K. Ng. Positive unlabeled learning for time series classification. In IJCAI, 2011.
[11] B. Liu, W. S. Lee, P. S. Yu, and X. Li. Partially supervised classification of text documents. In ICML, 2002.
[12] X. Li and B. Liu. Learning to classify texts using positive and unlabeled data.
In IJCAI, 2003.
[13] W. S. Lee and B. Liu. Learning with positive and unlabeled examples using weighted logistic regression. In ICML, 2003.
[14] B. Liu, Y. Dai, X. Li, W. S. Lee, and P. S. Yu. Building text classifiers using positive and unlabeled examples. In ICDM, 2003.
[15] M. C. du Plessis, G. Niu, and M. Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, 2014.
[16] M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. In ICML, 2015.
[17] N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In NIPS, 2013.
[18] G. Patrini, F. Nielsen, R. Nock, and M. Carioni. Loss factorization, weakly supervised learning and label noise robustness. In ICML, 2016.
[19] G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama. Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In NIPS, 2016.
[20] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[21] E. Sansone, F. G. B. De Natale, and Z.-H. Zhou. Efficient training for positive unlabeled learning. arXiv preprint arXiv:1608.06807, 2016.
[22] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods, pages 185–208. MIT Press, 1999.
[23] A. Menon, B. Van Rooyen, C. S. Ong, and B. Williamson. Learning from corrupted binary labels via class-probability estimation. In ICML, 2015.
[24] H. G. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embedding of distributions. In ICML, 2016.
[25] S. Jain, M. White, and P. Radivojac. Estimating the class prior and posterior from noisy positives and unlabeled data. In NIPS, 2016.
[26] M. C. du Plessis, G. Niu, and M. Sugiyama. Class-prior estimation for learning from positive and unlabeled data. Machine Learning, 106(4):463–492, 2017.
[27] P. L. Bartlett, M. I.
Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[28] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[30] A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In NIPS, 2001.
[31] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[32] K.-L. Chung. A Course in Probability Theory. Academic Press, 1968.
[33] C. Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. 3rd Berkeley Symposium on Mathematical Statistics and Probability, 1956.
[34] W. James and C. Stein. Estimation with quadratic loss. In Proc. 4th Berkeley Symposium on Mathematical Statistics and Probability, 1961.
[35] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
[36] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[37] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for l1-regularized logistic regression. Journal of Machine Learning Research, 13:1999–2030, 2012.
[38] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995.
[39] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[40] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[41] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[42] J. Pennington, R.
Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[43] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR, 2015.
[44] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[45] S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In Machine Learning Systems Workshop at NIPS, 2015.
[46] T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama. Semi-supervised classification based on classification from positive and unlabeled data. In ICML, 2017.
[47] C. McDiarmid. On the method of bounded differences. In J. Siemons, editor, Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
[48] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
[49] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[50] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
Gradient descent GAN optimization is locally stable

Vaishnavh Nagarajan, Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213, vaishnavh@cs.cmu.edu
J. Zico Kolter, Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213, zkolter@cs.cmu.edu

Abstract

Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.

1 Introduction

Since their introduction a few years ago, Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] have gained prominence as one of the most widely used methods for training deep generative models. GANs have been successfully deployed for tasks such as photo super-resolution, object generation, video prediction, language modeling, vocal synthesis, and semi-supervised learning, amongst many others [Ledig et al., 2017, Wu et al., 2016, Mathieu et al., 2016, Nguyen et al., 2017, Denton et al., 2015, Im et al., 2016].
At the core of the GAN methodology is the idea of jointly training two networks: a generator network, meant to produce samples from some distribution (that ideally will mimic examples from the data distribution), and a discriminator network, which attempts to differentiate between samples from the data distribution and the ones produced by the generator. This problem is typically written as a min-max optimization problem of the following form:

min_G max_D ( E_{x∼p_data}[log D(x)] + E_{z∼p_latent}[log(1 − D(G(z)))] ).   (1)

For the purposes of this paper, we will shortly consider a more general form of the optimization problem, which also includes the recent Wasserstein GAN (WGAN) [Arjovsky et al., 2017] formulation. Despite their prominence, the actual task of optimizing GANs remains a challenging problem, both from a theoretical and a practical standpoint. Although the original GAN paper included some analysis on the convergence properties of the approach [Goodfellow et al., 2014], it assumed that updates occurred in pure function space, allowed arbitrarily powerful generator and discriminator networks, and modeled the resulting optimization objective as a convex-concave game, therefore yielding well-defined global convergence properties. Furthermore, this analysis assumed that the discriminator network is fully optimized between generator updates, an assumption that does not mirror the practice of GAN optimization. Indeed, in practice, there exist a number of well-documented failure modes for GANs such as mode collapse or vanishing gradient problems.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Our contributions.
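To make the min-max objective in (1) concrete, its value can be estimated by Monte Carlo for any fixed pair (G, D). The sketch below uses an illustrative logistic discriminator, a "shift" generator, and Gaussian data/latent distributions, all of our own choosing:

```python
import math
import random

def D(x):
    """A fixed logistic discriminator (illustrative choice, not from the paper)."""
    return 1.0 / (1.0 + math.exp(-2.0 * x))

def G(z, shift):
    """A 'generator' that simply shifts latent noise (illustrative choice)."""
    return z + shift

def gan_value(shift, n=20000, seed=0):
    """Monte Carlo estimate of E_{x~p_data}[log D(x)] + E_{z~p_latent}[log(1 - D(G(z)))],
    with p_data = N(1, 1) and p_latent = N(0, 1)."""
    rng = random.Random(seed)
    real = sum(math.log(D(rng.gauss(1.0, 1.0))) for _ in range(n)) / n
    fake = sum(math.log(1.0 - D(G(rng.gauss(0.0, 1.0), shift))) for _ in range(n)) / n
    return real + fake

# G is the minimizing player: a generator matching the data mean drives the
# value down relative to a badly shifted one that D easily rejects.
print(gan_value(1.0) < gan_value(-2.0))  # → True
```

This also previews the difficulty discussed later: for a fixed D, the value is evaluated pointwise through the concave log terms, not through any convex structure in the generator parameters.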
In this paper, we consider the “gradient descent” formulation of GAN optimization, the setting where both the generator and the discriminator are updated simultaneously via simple (stochastic) gradient updates; that is, there are no inner and outer optimization loops, and neither the generator nor the discriminator are assumed to be optimized to convergence. Despite the fact that, as we show, this does not correspond to a convex-concave optimization problem (even for simple linear generator and discriminator representations), we show that: Under suitable conditions on the representational powers of the discriminator and the generator, the resulting GAN dynamical system is locally exponentially stable. That is, for some region around an equilibrium point of the updates, the gradient updates will converge to this equilibrium point at an exponential rate. Interestingly, our conditions can be satisfied by the traditional GAN but not by the WGAN, and we indeed show that WGANs can have non-convergent limit cycles in the gradient descent case. Our theoretical analysis also suggests a natural method for regularizing GAN updates by adding an additional regularization term on the norm of the discriminator gradient. We show that the addition of this term leads to locally exponentially stable equilibria for all classes of GANs, including WGANs. The additional penalty is highly related to (but also notably different from) recent proposals for practical GAN optimization, such as the unrolled GAN [Metz et al., 2017] and the improved Wasserstein GAN training [Gulrajani et al., 2017]. In practice, the approach is simple to implement, and preliminary experiments show that it helps avert mode collapse and leads to faster convergence. 2 Background and related work GAN optimization and theory. 
Although the theoretical analysis of GANs has been far outpaced by their practical application, there have been some notable results in recent years, in addition to the aforementioned work in the original GAN paper. For the most part, this work is entirely complementary to our own, and studies a very different set of questions. Arjovsky and Bottou [2017] provide important insights into instability that arises when the supports of the generated distribution and the true distribution are disjoint. In contrast, in this paper we delve into an equally important question of whether the updates are stable even when the generator is in fact very close to the true distribution (and we answer in the affirmative). Arora et al. [2017], on the other hand, explore questions relating to the sample complexity and expressivity of the GAN architecture, and their relation to the existence of an equilibrium point. However, it is still unknown as to whether, given that an equilibrium exists, the GAN update procedure will converge locally. From a more practical standpoint, there have been a number of papers that address the topic of optimization in GANs. Several methods have been proposed that introduce new objectives or architectures for improving the (practical and theoretical) stability of GAN optimization [Arjovsky et al., 2017, Poole et al., 2016]. A wide variety of optimization heuristics and architectures have also been proposed to address challenges such as mode collapse [Salimans et al., 2016, Metz et al., 2017, Che et al., 2017, Radford et al., 2016]. Our own proposed regularization term falls under this same category, and hopefully provides some context for understanding some of these methods. Specifically, our regularization term (motivated by stability analysis) captures a degree of “foresight” of the generator in the optimization procedure, similar to the unrolled GANs procedure [Metz et al., 2017]. 
Indeed, we show that our gradient penalty is closely related to 1-unrolled GANs, but also provides more flexibility in leveraging this foresight. Finally, gradient-based regularization has been explored for GANs, with one of the most recent works being that of Gulrajani et al. [2017], though their penalty is on the discriminator rather than the generator as in our case. Finally, there are several works that have concurrently addressed similar issues as this paper. Of particular similarity to the methodology we propose here are the works by Roth et al. [2017] and Mescheder et al. [2017]. The first of these two presents a stabilizing regularizer that is based on a gradient norm, where the gradient is calculated with respect to the datapoints. Our regularizer, on the other hand, is based on the norm of a gradient calculated with respect to the parameters. Our approach has some strong similarities with that of the second work noted above; however, the authors there do not establish or disprove stability, and instead note the presence of zero eigenvalues (which we will treat in some depth) as a motivation for their alternative optimization method. Thus, we feel the works as a whole are quite complementary, and signify the growing interest in GAN optimization issues.

Stochastic approximation algorithms and analysis of nonlinear systems. The technical tools we use to analyze the GAN optimization dynamics in this paper come from the fields of stochastic approximation algorithms and the analysis of nonlinear differential equations – notably the "ODE method" for analyzing convergence properties of dynamical systems [Borkar and Meyn, 2000, Kushner and Yin, 2003]. Consider a general stochastic process driven by the updates θ_{t+1} = θ_t + α_t (h(θ_t) + ε_t) for vector θ_t ∈ R^n, step size α_t > 0, function h : R^n →
R^n, and a martingale difference sequence ε_t.¹ Under fairly general conditions, namely: 1) bounded second moments of ε_t, 2) Lipschitz continuity of h, and 3) square-summable but not summable step sizes, the stochastic approximation algorithm converges to an equilibrium point of the (deterministic) ordinary differential equation θ̇(t) = h(θ(t)). Thus, to understand stability of the stochastic approximation algorithm, it suffices to understand the stability and convergence of the deterministic differential equation. Though such analysis is typically used to show global asymptotic convergence of the stochastic approximation algorithm to an equilibrium point (assuming the related ODE is also globally asymptotically stable), it can also be used to analyze the local asymptotic stability properties of the stochastic approximation algorithm around equilibrium points.² This is the technique we follow throughout this entire work, though for brevity we will focus entirely on the analysis of the continuous-time ordinary differential equation, and appeal to these standard results to imply similar properties regarding the discrete updates. Given the above considerations, our focus will be on proving stability of the dynamical system around equilibrium points, i.e., points θ* for which h(θ*) = 0.³ Specifically, we appeal to the well-known linearization theorem [Khalil, 1996, Sec 4.3], which states that if the Jacobian of the dynamical system, J = ∂h(θ)/∂θ evaluated at an equilibrium point θ*, is Hurwitz (all eigenvalues have strictly negative real part, Re(λ_i(J)) < 0 for all i = 1, . . . , n), then the ODE will converge to θ* within some non-empty region around θ*, at an exponential rate. This means that the system is locally asymptotically stable, or more precisely, locally exponentially stable (see Definition A.1 in Appendix A).
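The update rule θ_{t+1} = θ_t + α_t(h(θ_t) + ε_t) is easy to simulate; a toy sketch (our own choice of h, noise, and step sizes, for illustration only) showing the noisy iterates tracking the stable ODE θ̇ = −θ to its equilibrium at 0:

```python
import random

def stochastic_approximation(h, theta0, steps, seed=0):
    """Simulate theta_{t+1} = theta_t + alpha_t * (h(theta_t) + eps_t) with
    step sizes alpha_t = 1/t: square-summable but not summable."""
    rng = random.Random(seed)
    theta = theta0
    for t in range(1, steps + 1):
        eps = rng.uniform(-1.0, 1.0)  # bounded-variance martingale-difference noise
        theta = theta + (1.0 / t) * (h(theta) + eps)
    return theta

# h(theta) = -theta gives the globally stable ODE  d(theta)/dt = -theta  with
# equilibrium 0; despite the noise, the iterates settle near 0.
final = stochastic_approximation(lambda th: -th, theta0=5.0, steps=200000)
print(abs(final) < 0.1)  # → True
```

For this particular h, the recursion reduces to a running average of the noise, so the iterate's deviation shrinks at the usual 1/√t rate.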
Thus, an important contribution of this paper is a proof of this seemingly simple fact: under some conditions, the Jacobian of the dynamical system given by the GAN updates is a Hurwitz matrix at an equilibrium (or, if there are zero eigenvalues, the system is still asymptotically stable provided they correspond to a subspace of equilibria). While this is a trivial property to show for convex-concave games, the fact that the GAN is not convex-concave leads to a substantially more challenging analysis. In addition to this, we provide an analysis that is based on Lyapunov's stability theorem (described in Appendix A). The crux of the idea is that to prove convergence it is sufficient to identify a non-negative "energy" function for the linearized system which always decreases with time (specifically, the energy function will be a distance from the equilibrium, or from the subspace of equilibria). Most importantly, this analysis provides insights into the dynamics that lead to GAN convergence.

3 GAN optimization dynamics

This section comprises the main results of this paper, showing that under proper conditions the gradient descent updates for GANs (that is, updating both the generator and discriminator locally and simultaneously) are locally exponentially stable around "good" equilibrium points (where "good" will be defined shortly). This requires that the GAN loss be strictly concave, which is not the case for WGANs, and we indeed show that the updates for WGANs can cycle indefinitely. This leads us to propose a simple regularization term that is able to guarantee exponential stability for any concave GAN loss, including the WGAN, rather than requiring strict concavity.

¹ Stochastic gradient descent on an objective f(θ) can be expressed in this framework as h(θ) = −∇_θ f(θ).
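The Hurwitz check from the linearization theorem is mechanical for small systems; a sketch for 2×2 Jacobians using the closed-form eigenvalues (the example matrices are our own: a damped rotation, which is Hurwitz, and a pure rotation, whose purely imaginary eigenvalues mirror the cycling behavior discussed for WGANs):

```python
import cmath

def is_hurwitz_2x2(a, b, c, d):
    """True iff both eigenvalues of [[a, b], [c, d]] have strictly negative real part."""
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)      # works for negative discriminants too
    lam1 = (tr + disc) / 2
    lam2 = (tr - disc) / 2
    return lam1.real < 0 and lam2.real < 0

# Damped rotation: eigenvalues -0.5 ± i  -> exponentially stable.
print(is_hurwitz_2x2(-0.5, -1.0, 1.0, -0.5))  # → True
# Pure rotation: eigenvalues ±i -> not Hurwitz; trajectories orbit forever.
print(is_hurwitz_2x2(0.0, -1.0, 1.0, 0.0))    # → False
```

The second matrix is exactly the kind of Jacobian that rules out local exponential stability: zero real parts leave the linearization theorem silent, and the nonlinear system can exhibit limit cycles.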
² Note that the local analysis does not show that the stochastic approximation algorithm will necessarily converge to an equilibrium point, but it still provides a valuable characterization of how the algorithm will behave around these points.
³ Note that this is a slightly different usage of the term equilibrium than is typical in the GAN literature, where it refers to a Nash equilibrium of the min-max optimization problem. These two definitions (assuming we mean just a local Nash equilibrium) are equivalent for the ODE corresponding to the min-max game, but we use the dynamical-systems meaning throughout this paper, that is, any point where the gradient update is zero.

3.1 The generalized GAN setting

For the remainder of the paper, we consider a slightly more general formulation of the GAN optimization problem than the one presented earlier, given by the following min/max problem:

min_G max_D V(G, D) = ( E_{x∼p_data}[f(D(x))] + E_{z∼p_latent}[f(−D(G(z)))] )   (2)

where G : Z → X is the generator network, which maps from the latent space Z to the input space X; D : X → R is the discriminator network, which maps from the input space to a classification of the example as real or synthetic; and f : R → R is a concave function. We can recover the traditional GAN formulation [Goodfellow et al., 2014] by taking f to be the (negated) logistic loss f(x) = −log(1 + exp(−x)); note that this convention slightly differs from the standard formulation in that here the discriminator outputs real-valued "logits" and the loss function implicitly scales these to a probability. We can recover the Wasserstein GAN by simply taking f(x) = x. Assuming the generator and discriminator networks to be parameterized by some sets of parameters, θ_D and θ_G respectively, we analyze the simple stochastic gradient descent approach to solving this optimization problem.
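The danger of taking f(x) = x shows up already in the smallest possible instance. Assuming (our illustrative setup, not the paper's) D(x) = θ_D·x, G(z) = z + θ_G, E[z] = 0, and data concentrated at μ, the objective in (2) reduces to V = θ_D(μ − θ_G), and simultaneous gradient steps (ascent on θ_D, descent on θ_G) orbit the equilibrium (θ_D, θ_G) = (0, μ) instead of converging:

```python
def wgan_euler(theta_d, theta_g, mu=1.0, eta=0.01, steps=5000):
    """Simultaneous Euler steps for  d(theta_D)/dt = mu - theta_G,
    d(theta_G)/dt = theta_D  (the scalar WGAN-like system described above)."""
    for _ in range(steps):
        g_d = mu - theta_g   # ascent direction for the discriminator
        g_g = theta_d        # descent direction for the generator
        theta_d += eta * g_d
        theta_g += eta * g_g
    return theta_d, theta_g

def dist_to_equilibrium(theta_d, theta_g, mu=1.0):
    return (theta_d ** 2 + (theta_g - mu) ** 2) ** 0.5

d_start = dist_to_equilibrium(0.5, 0.0)
d_end = dist_to_equilibrium(*wgan_euler(0.5, 0.0))
print(d_end > d_start)  # → True: the trajectory never approaches (0, mu)
```

The continuous-time flow conserves the distance to the equilibrium exactly (a pure rotation), and discrete Euler steps even inflate it slightly each step; this is the limit-cycle behavior the paper attributes to the WGAN.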
That is, we take simultaneous gradient steps in both θ_D and θ_G, which in our "ODE method" analysis leads to the following differential equations:

θ̇_D = ∇_{θ_D} V(θ_G, θ_D),   θ̇_G = −∇_{θ_G} V(θ_G, θ_D).   (3)

A note on alternative updates. Rather than updating both the generator and discriminator according to the min-max problem above, Goodfellow et al. [2014] also proposed a modified update for just the generator that minimizes a different objective, V′(G, D) = −E_{z∼p_latent}[f(D(G(z)))] (the negative sign is pulled out from inside f). In fact, all the analyses we consider in this paper apply equally to this case (or any convex combination of the two updates), as the ODEs of the update equations have the same Jacobians at equilibrium.

3.2 Why is proving stability hard for GANs?

Before presenting our main results, we first highlight why understanding the local stability of GANs is non-trivial, even when the generator and discriminator have simple forms. As stated above, GAN optimization consists of a min-max game, and gradient descent algorithms will converge if the game is convex-concave – the objective must be convex in the term being minimized and concave in the term being maximized. Indeed, this was a crucial assumption in the convergence proof of the original GAN paper. However, for virtually any parameterization of the real GAN generator and discriminator, even if both representations are linear, the GAN objective will not be a convex-concave game:

Proposition 3.1. The GAN objective in Equation 2 can be a concave-concave objective, i.e., concave with respect to both the discriminator and generator parameters, for a large part of the discriminator space, including regions arbitrarily close to the equilibrium.

To see why, consider a simple GAN over 1-dimensional data and latent space with linear generator and discriminator, i.e., D(x) = θ_D x + θ_D⁰ and G(z) = θ_G z + θ_G⁰. Then the GAN objective is:

V(G, D) = E_{x∼p_data}[f(θ_D x + θ_D⁰)] + E_{z∼p_latent}[f(−θ_D(θ_G z + θ_G⁰) − θ_D⁰)].
Because f is concave, by inspection we can see that V is concave in θ_D and θ_D⁰; but it is also concave (not convex) in θ_G and θ_G⁰, for the same reason. Thus, the optimization involves concave minimization, which in general is a difficult problem. To prove that this is not a peculiarity of the above linear discriminator system, in Appendix B we show similar observations for a more general parametrization, and also for the case where f″(x) = 0 (which happens in the case of WGANs). Thus, a major question remains as to whether or not GAN optimization is stable at all (most concave maximization is not). Indeed, there are several well-known properties of GAN optimization that may make it seem as though gradient descent optimization may not work in theory. For instance, it is well known that at the optimal location p_g = p_data, the optimal discriminator will output zero on all examples, which in turn means that any generator distribution will be optimal for this discriminator. This would seem to imply that the system cannot be stable around such an equilibrium. However, as we will show, gradient descent GAN optimization is locally asymptotically stable, even for natural parameterizations of generator-discriminator pairs (which still make up concave-concave optimization problems). Furthermore, at equilibrium, although the zero-discriminator property means that the generator is not stable "independently", the joint dynamical system of generator and discriminator is locally asymptotically stable around certain equilibrium points.

3.3 Local stability of general GAN systems

This section contains our first technical result, establishing that GANs are locally stable under proper local conditions. Although the proofs are deferred to the appendix, the elements that we do emphasize here are the conditions that we identified for local stability to hold.
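Proposition 3.1's concave-concave claim is easy to spot-check numerically for the logistic f: the second finite difference of V is negative along both the discriminator and the generator directions. A sketch over fixed toy samples standing in for the expectations (the sample points, the evaluation point, and the offset-free parameterization are our illustrative choices):

```python
import math

def f(x):
    """(Negated) logistic loss: concave everywhere."""
    return -math.log(1.0 + math.exp(-x))

XS = [0.5, 1.0, 1.5, 2.0]    # stand-ins for samples x ~ p_data
ZS = [-1.0, -0.5, 0.5, 1.0]  # stand-ins for samples z ~ p_latent

def V(th_d, th_g):
    """GAN objective for linear D(x) = th_d * x and G(z) = th_g * z (offsets dropped)."""
    real = sum(f(th_d * x) for x in XS) / len(XS)
    fake = sum(f(-th_d * th_g * z) for z in ZS) / len(ZS)
    return real + fake

def second_diff(fun, t, h=1e-4):
    """Central finite-difference approximation of the second derivative."""
    return (fun(t + h) - 2.0 * fun(t) + fun(t - h)) / (h * h)

# Concave in BOTH arguments near this point: both second derivatives are negative.
print(second_diff(lambda t: V(t, 1.0), 0.5) < 0)  # w.r.t. the discriminator → True
print(second_diff(lambda t: V(0.5, t), 1.0) < 0)  # w.r.t. the generator     → True
```

The generator direction is the surprising one: minimizing over a concave direction is what breaks the convex-concave machinery the original convergence proof relied on.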
Indeed, because the proof rests on these conditions (some of which are fairly strong), we want to highlight them as much as possible, as they themselves also convey valuable intuition as to what is required for GAN convergence. To formalize our conditions, we denote the support of a distribution with probability density function (p.d.f.) p by supp(p), and the p.d.f. of the generator θ_G by p_{θ_G}. Let B_ε(·) denote the Euclidean L2-ball of radius ε. Let λ_max(·) and λ_min^(+)(·) denote the largest and the smallest non-zero eigenvalues of a non-zero positive semidefinite matrix. Let Col(·) and Null(·) denote the column space and null space of a matrix, respectively. Finally, we define two key matrices that will be integral to our analyses:

K_DD ≜ E_{p_data}[ ∇_{θ_D} D_{θ_D}(x) ∇_{θ_D}^T D_{θ_D}(x) ] |_{θ_D*},
K_DG ≜ ∫_X ∇_{θ_D} D_{θ_D}(x) ∇_{θ_G}^T p_{θ_G}(x) dx |_{(θ_D*, θ_G*)}.

Here, the matrices are evaluated at an equilibrium point (θ_D*, θ_G*) which we will characterize shortly. The significance of these terms is that, as we will see, K_DD is proportional to the Hessian of the GAN objective with respect to the discriminator parameters at equilibrium, and K_DG is proportional to the off-diagonal term in this Hessian, corresponding to the discriminator and generator parameters. These matrices also occur in similar positions in the Jacobian of the system at equilibrium. We now discuss conditions under which we can guarantee exponential stability. All our conditions are imposed on both (θ_D*, θ_G*) and all equilibria in a small neighborhood around it, though we do not state this explicitly in every assumption. First, we define the "good" equilibria we care about as those that correspond to a generator which matches the true distribution and a discriminator that is identically zero on the support of this distribution.
As described next, implicitly, this also assumes that the discriminator and generator representations are powerful enough to guarantee that there are no "bad" equilibria in a local neighborhood of this equilibrium.

Assumption I. p_{θ_G*} = p_data and D_{θ_D*}(x) = 0 for all x ∈ supp(p_data).

The assumption that the generator matches the true distribution is a rather strong assumption, as it limits us to the "realizable" case, where the generator is capable of creating the underlying data distribution. Furthermore, this means the discriminator is (locally) powerful enough that for any other generator distribution it is not at equilibrium (i.e., discriminator updates are non-zero). Since we do not typically expect this to be the case, we also provide an alternative non-realizable assumption below that is also sufficient for our results, i.e., the system is still stable. In both the realizable and non-realizable cases the requirement of an all-zero discriminator remains. This implicitly requires that even the generator representation be (locally) rich enough so that when the discriminator is not identically zero, the generator is not at equilibrium (i.e., generator updates are non-zero). Finally, note that these conditions do not disallow bad equilibria outside of this neighborhood, which may potentially even be unstable.

Assumption I. (Non-realizable) The discriminator is linear in its parameters θ_D and, furthermore, for any equilibrium point (θ_D*, θ_G*), D_{θ_D*}(x) = 0 for all x ∈ supp(p_data) ∪ supp(p_{θ_G*}).

This alternative assumption is largely a weakening of Assumption I, as the condition on the discriminator remains, but there is no requirement that the generator give rise to the true distribution. However, the requirement that the discriminator be linear in the parameters (not in its input) is an additional restriction that seems unavoidable in this case for technical reasons. Further, note that the fact that D_{θ_D*}
D(x) = 0 and that the generator/discriminator are both at equilibrium, still means that although it may be that p✓? G 6= pdata, these distributions are (locally) indistinguishable as far as the discriminator is concerned. Indeed, this is a nice characterization of “good” equilibria that the discriminator cannot differentiate between the real and generated samples. Our goal next is to identify strong curvature conditions that can be imposed on the objective V (or a function related to the objective), though only locally at equilibrium. First, we will require that the objective is strongly concave in the discriminator parameter space at equilibrium (note that it is concave by default). However, on the other hand, we cannot require the objective to be strongly convex in the generator parameter space as we saw that the objective is not convex-concave even in the nicest scenario, even arbitrarily close to equilbrium. Instead, we identify another convex function, namely 5 the magnitude of the update on the equilibrium discriminator, i.e., k r✓DV (✓D, ✓G)|✓D=✓? D k2, and require that to be strongly convex in the generator space at equilibrium. Since these strong curvature assumptions will allow only systems with a locally unique equilibrium, we will state them in a relaxed form that accommodates a local subspace of equilibria. Furthermore, we will state these assumptions in two parts, first as a condition on f and second as a condition on the parameter space. First, the condition on f is straightforward, making it necessary that the loss f be concave at 0; as we will show, when this condition is not met, there need not be local asymptotic convergence. Assumption II. The function f satisfies f 00(0) < 0, and f 0(0) 6= 0. 
Next, to state conditions on the parameter space while also allowing systems with multiple equilibria locally, we first define the following property for a function, say $g$, at a specific point in its domain: along any direction, either the second derivative of $g$ must be non-zero or all derivatives must be zero. For example, at the origin, $g(x, y) = x^2 + x^2 y^2$ is flat along $y$, and along any other direction at an angle $\alpha \neq 0$ with the $y$ axis, the second derivative is $2\sin^2\alpha$. For the GAN system, we will require this property, formalized in Property I, for two convex functions whose Hessians are proportional to $K_{DD}$ and $K_{DG}^T K_{DG}$. We provide more intuition for these functions below.

Property I. $g : \Theta \to \mathbb{R}$ satisfies Property I at $\theta^\star \in \Theta$ if for any $\theta \in \mathrm{Null}\!\left(\nabla^2_\theta g(\theta)\big|_{\theta^\star}\right)$, the function is locally constant along $\theta$ at $\theta^\star$, i.e., $\exists\, \epsilon > 0$ such that for all $\epsilon' \in (-\epsilon, \epsilon)$, $g(\theta^\star) = g(\theta^\star + \epsilon' \theta)$.

Assumption III. At an equilibrium $(\theta_D^\star, \theta_G^\star)$, the functions
$$\mathbb{E}_{p_{data}}\!\left[D^2_{\theta_D}(x)\right] \quad \text{and} \quad \left\| \mathbb{E}_{p_{data}}\!\left[\nabla_{\theta_D} D_{\theta_D}(x)\right] - \mathbb{E}_{p_{\theta_G}}\!\left[\nabla_{\theta_D} D_{\theta_D}(x)\right] \right\|^2 \Big|_{\theta_D = \theta_D^\star}$$
must satisfy Property I in the discriminator and generator space, respectively.

Here is an intuitive explanation of what these two non-negative functions represent and how they relate to the objective. The first is a function of $\theta_D$ which measures how far $\theta_D$ is from an all-zero state, and the second is a function of $\theta_G$ which measures how far $\theta_G$ is from the true distribution; at equilibrium, both functions are zero. We will see later that, given $f''(0) < 0$, the curvature of the first function at $\theta_D^\star$ is representative of the curvature of $V(\theta_D, \theta_G^\star)$ in the discriminator space; similarly, given $f'(0) \neq 0$, the curvature of the second function at $\theta_G^\star$ is representative of the curvature of the magnitude of the discriminator update on $\theta_D^\star$ in the generator space. The intuition behind why this particular relation holds is that, when $\theta_G$ moves away from the true distribution, the second function in Assumption III increases, while $\theta_D^\star$ also becomes more suboptimal for that generator; as a result, the magnitude of the update on $\theta_D^\star$ increases too. Note that we show in Lemma C.2 that the Hessians of the two functions in Assumption III, in the discriminator and the generator space respectively, are proportional to $K_{DD}$ and $K_{DG}^T K_{DG}$. These relations involving the two functions and the GAN objective, together with Assumption III, basically allow us to consider systems with reasonably strong curvature properties, while also allowing many equilibria in a local neighborhood in a specific sense. In particular, if the curvature of the first function is flat along a direction $u$ (which also means that $K_{DD} u = 0$), we can perturb $\theta_D^\star$ slightly along $u$ and still have an 'equilibrium discriminator' as defined in Assumption I, i.e., $\forall x \in \mathrm{supp}(p_{\theta_G^\star})$, $D_{\theta_D}(x) = 0$. Similarly, for any direction $v$ along which the curvature of the second function is flat (i.e., $K_{DG} v = 0$), we can perturb $\theta_G^\star$ slightly along that direction such that $\theta_G$ remains an 'equilibrium generator' as defined in Assumption I, i.e., $p_{\theta_G} = p_{data}$. We prove this formally in Lemma C.2. Perturbations along any other directions do not yield equilibria, because then either $\theta_D$ is no longer in an all-zero state or $\theta_G$ does not match the true distribution. Thus, we consider a setup where the rank deficiencies of $K_{DD}$ and $K_{DG}^T K_{DG}$, if any, correspond to equivalent equilibria (which typically exist for neural networks, though in practice they may not correspond to 'linear' perturbations as modeled here). Our final assumption is on the supports of the true and generated distributions: we require that all the generators in a sufficiently small neighborhood of the equilibrium have distributions with the same support as the true distribution. Following this, we briefly discuss a relaxation of this assumption.

Assumption IV. $\exists\, \epsilon_G > 0$ such that $\forall\, \theta_G \in B_{\epsilon_G}(\theta_G^\star)$, $\mathrm{supp}(p_{\theta_G}) = \mathrm{supp}(p_{data})$.
This may typically hold if the support covers the whole space $X$; but when the true distribution has support in some smaller disjoint parts of the space $X$, nearby generators may correspond to slightly displaced versions of this distribution with a different support. For the latter scenario, we show in Appendix C.1 that local exponential stability holds under a certain smoothness condition on the discriminator. Specifically, we require that $D_{\theta_D^\star}(\cdot)$ be zero not only on the support of $\theta_G^\star$ but also on the support of small perturbations of $\theta_G^\star$, as otherwise the generator will not be at equilibrium. (Additionally, we also require this property of the discriminators that lie within a small perturbation of $\theta_D^\star$ in the null space of $K_{DD}$, so that they correspond to equilibrium discriminators.) We note that while this relaxed assumption accounts for a larger class of examples, it is still strong in that it also excludes certain simple systems. Due to space constraints, we state and discuss the implications of this assumption in greater detail in Appendix C.1. We now state our main result.

Theorem 3.1. The dynamical system defined by the GAN objective in Equation 2 and the updates in Equation 3 is locally exponentially stable with respect to an equilibrium point $(\theta_D^\star, \theta_G^\star)$ when Assumptions I, II, III, IV hold for $(\theta_D^\star, \theta_G^\star)$ and the other equilibria in a small neighborhood around it. Furthermore, the rate of convergence is governed only by the eigenvalues $\lambda$ of the Jacobian $J$ of the system at equilibrium, with a strictly negative real part upper bounded as:
• If $\mathrm{Im}(\lambda) = 0$, then
$$\mathrm{Re}(\lambda) \le \frac{2 f''(0)\, f'(0)^2\, \lambda^{(+)}_{\min}(K_{DD})\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}{4 f''(0)^2\, \lambda^{(+)}_{\min}(K_{DD})\, \lambda_{\max}(K_{DD}) + f'(0)^2\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}.$$
• If $\mathrm{Im}(\lambda) \neq 0$, then $\mathrm{Re}(\lambda) \le f''(0)\, \lambda^{(+)}_{\min}(K_{DD})$.

The vast majority of our proofs are deferred to the appendix, but we briefly describe the intuition here.
It is straightforward to show that the Jacobian $J$ of the system at equilibrium can be written as
$$J = \begin{pmatrix} J_{DD} & J_{DG} \\ -J_{DG}^T & J_{GG} \end{pmatrix} = \begin{pmatrix} 2 f''(0) K_{DD} & f'(0) K_{DG} \\ -f'(0) K_{DG}^T & 0 \end{pmatrix}.$$
Recall that we wish to show this is Hurwitz. First note that $J_{DD}$ (the Hessian of the objective with respect to the discriminator) is negative semi-definite if and only if $f''(0) < 0$. Next, a crucial observation is that $J_{GG} = 0$, i.e., the Hessian term with respect to the generator vanishes, because for the all-zero discriminator all generators result in the same objective value. Fortunately, this means that at equilibrium we do not have non-convexity in $\theta_G$ precluding local stability. Then, we make use of the crucial Lemma G.2, proved in the appendix, showing that any matrix of the form $\begin{pmatrix} -Q & P \\ -P^T & 0 \end{pmatrix}$ is Hurwitz provided that $-Q$ is strictly negative definite and $P$ has full column rank. However, this property holds only when $K_{DD}$ is positive definite and $K_{DG}$ has full column rank. Now, if $K_{DD}$ or $K_{DG}$ do not have this property, recall that the rank deficiency is due to a subspace of equilibria around $(\theta_D^\star, \theta_G^\star)$. Consequently, we can analyze the stability of the system projected onto a subspace orthogonal to these equilibria (Theorem A.4). Additionally, we also prove stability using Lyapunov's stability theorem (Theorem A.1) by showing that the squared $L_2$ distance to the subspace of equilibria either always decreases or only instantaneously remains constant.

Additional results. In order to illustrate our assumptions in Theorem 3.1, in Appendix D we consider a simple GAN that learns a multi-dimensional Gaussian using a quadratic discriminator and a linear generator. In a similar setup, in Appendix E, we consider the case where $f(x) = x$, i.e., the Wasserstein GAN, so that $f''(x) = 0$, and we show that the system can perennially cycle around an equilibrium point without converging. A simple two-dimensional example is visualized in Section 4. Thus, gradient descent WGAN optimization is not necessarily asymptotically stable.
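The Hurwitz structure invoked above (Lemma G.2) is easy to probe numerically. The sketch below uses small toy matrices $Q$ and $P$ of our own choosing (they are not taken from the analysis) and checks whether all eigenvalues of the block matrix have strictly negative real parts:

```python
import numpy as np

def is_hurwitz(M, tol=1e-9):
    """A matrix is Hurwitz if every eigenvalue has strictly negative real part."""
    return bool(np.max(np.linalg.eigvals(M).real) < -tol)

def block_system(Q, P):
    """Assemble [[-Q, P], [-P^T, 0]], the structure appearing in the GAN Jacobian."""
    m = P.shape[1]
    top = np.hstack([-Q, P])
    bot = np.hstack([-P.T, np.zeros((m, m))])
    return np.vstack([top, bot])

# Toy instance: Q positive definite (so -Q is strictly negative definite),
# P with full column rank -- the hypotheses of the Lemma-G.2-style argument.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
P = np.eye(2)
J = block_system(Q, P)
print(is_hurwitz(J))   # True: the structured system is stable

# If Q = 0 (as for the WGAN, where f''(0) = 0), eigenvalues are purely imaginary.
J0 = block_system(np.zeros((2, 2)), P)
print(is_hurwitz(J0))  # False: the system can only orbit, not converge
```

This mirrors the dichotomy in the text: a strictly negative-definite discriminator block yields exponential stability, while a vanishing one leaves the linearized dynamics rotating.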
3.4 Stabilizing optimization via gradient-based regularization
Motivated by the considerations above, in this section we propose a regularization penalty for the generator update, which uses a term based upon the gradient of the discriminator. Crucially, the regularization term does not change the parameter values at the equilibrium point, and at the same time enhances the local stability of the optimization procedure, both in theory and practice. Although these update equations do require that we differentiate with respect to a function of another gradient term, such "double backprop" terms (see e.g., Drucker and Le Cun [1992]) are easily computed by modern automatic differentiation tools. Specifically, we propose the regularized update
$$\theta_G := \theta_G - \alpha \nabla_{\theta_G} \left( V(D_{\theta_D}, G_{\theta_G}) + \eta \left\| \nabla_{\theta_D} V(D_{\theta_D}, G_{\theta_G}) \right\|^2 \right). \qquad (4)$$

Local stability. The intuition behind this regularizer is perhaps most easily understood by considering how it changes the Jacobian at equilibrium (though there are other means of motivating the update as well, discussed further in Appendix F.2). In the Jacobian of the new update, although the off-diagonal blocks are no longer anti-symmetric transposes of each other, the block-diagonal terms are now negative definite:
$$\begin{pmatrix} J_{DD} & J_{DG} \\ -J_{DG}^T (I + 2\eta J_{DD}) & -2\eta J_{DG}^T J_{DG} \end{pmatrix}.$$
As we show below in Theorem 3.2 (proved in Appendix F), as long as we choose $\eta$ small enough that $I + 2\eta J_{DD} \succeq 0$, this guarantees that the updates are locally asymptotically stable for any concave $f$. In addition to its stability properties, this regularization term also addresses a well-known failure mode in GANs called mode collapse, by lending more "foresight" to the generator. The way our updates provide this foresight is very similar to the unrolled updates proposed in Metz et al. [2017], although our regularization is much simpler and provides more flexibility to leverage the foresight. In practice, we see that our method can be as powerful as the more complex and slower 10-unrolled GANs.
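As a quick numerical sanity check on the modified Jacobian above (with toy block values of our own choosing, not from the paper): when $J_{DD} = 0$, as in the WGAN case where $f''(0) = 0$, the vanilla Jacobian has purely imaginary eigenvalues, while the regularized one is Hurwitz for a small $\eta$:

```python
import numpy as np

def regularized_jacobian(JDD, JDG, eta):
    """Jacobian of the gradient-regularized updates (structure from the text above).
    eta = 0 recovers the Jacobian of the plain simultaneous-gradient dynamics."""
    I = np.eye(JDD.shape[0])
    top = np.hstack([JDD, JDG])
    bot = np.hstack([-JDG.T @ (I + 2 * eta * JDD), -2 * eta * JDG.T @ JDG])
    return np.vstack([top, bot])

def max_real_eig(M):
    return float(np.max(np.linalg.eigvals(M).real))

# WGAN-like toy setting (hypothetical numbers): f''(0) = 0 makes J_DD vanish.
JDD = np.zeros((2, 2))
JDG = np.eye(2)

vanilla = regularized_jacobian(JDD, JDG, eta=0.0)   # plain gradient dynamics
reg = regularized_jacobian(JDD, JDG, eta=0.25)      # with the gradient penalty

print(max_real_eig(vanilla))  # ~0: purely imaginary eigenvalues, orbits
print(max_real_eig(reg))      # < 0: locally exponentially stable
```

The regularizer fills in the previously zero generator-generator block with $-2\eta J_{DG}^T J_{DG}$, which is what pulls the eigenvalues off the imaginary axis.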
We discuss this and other intuitive ways of motivating our regularizer in Appendix F.

Theorem 3.2. The dynamical system defined by the GAN objective in Equation 2 and the updates in Equation 4 is locally exponentially stable at the equilibrium, under the same conditions as in Theorem 3.1, if $\eta < \frac{1}{2 \lambda_{\max}(-J_{DD})}$. Further, under appropriate conditions similar to these, the WGAN system is locally exponentially stable at the equilibrium for any $\eta$. The rate of convergence for the WGAN is governed only by the eigenvalues $\lambda$ of the Jacobian at equilibrium, with a strictly negative real part upper bounded as:
• If $\mathrm{Im}(\lambda) = 0$, then
$$\mathrm{Re}(\lambda) \le -\frac{2 f'(0)^2\, \eta\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}{4 f'(0)^2\, \eta^2\, \lambda_{\max}(K_{DG}^T K_{DG}) + 1}.$$
• If $\mathrm{Im}(\lambda) \neq 0$, then $\mathrm{Re}(\lambda) \le -\eta\, f'(0)^2\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})$.

4 Experimental results
We very briefly present experimental results that demonstrate that our regularization term also has substantial practical promise.⁴ In Figure 1, we compare our gradient regularization to 10-unrolled GANs on the same architecture and dataset (a mixture of eight Gaussians) as in Metz et al. [2017]. Our system quickly spreads out all the points, instead of first exploring only a few modes and then gradually redistributing its mass over all the modes. Note that the conventional GAN updates are known to enter mode collapse for this setup. We see similar results (see Figure 2 here, and Figure 4 in the Appendix for a more detailed figure) in the case of a stacked MNIST dataset using a DCGAN [Radford et al., 2016], i.e., three random digits from MNIST are stacked together so as to create a distribution over 1000 modes. Finally, Figure 3 presents streamline plots for a 2D system where both the true and the latent distribution are uniform over $[-1, 1]$, the discriminator is $D(x) = w_2 x^2$, and the generator is $G(z) = az$. Observe that while the WGAN system goes in orbits as expected, the original GAN system converges. With our updates, both these systems converge quickly to the true equilibrium.
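The 2D system of Figure 3 can be simulated directly. The sketch below hard-codes the closed-form WGAN objective $V(w_2, a) = \frac{w_2}{3} - \frac{w_2 a^2}{3}$ for this example (using $\mathbb{E}[x^2] = \tfrac{1}{3}$ for $x \sim U[-1, 1]$); the step size, iteration count, and starting point are our own choices, not values from the paper:

```python
def simulate(eta, lr=0.01, steps=20000, w2=0.5, a=0.5):
    """Simultaneous-gradient WGAN dynamics for D(x) = w2*x^2, G(z) = a*z,
    with x, z ~ Uniform[-1, 1].  The closed-form objective is
    V(w2, a) = w2/3 - w2*a**2/3; eta weights the generator's gradient penalty."""
    for _ in range(steps):
        grad_w2 = (1.0 - a * a) / 3.0                # dV/dw2
        grad_a = -2.0 * w2 * a / 3.0                 # dV/da
        grad_pen = -(4.0 / 9.0) * a * (1.0 - a * a)  # d/da of (dV/dw2)**2
        w2 += lr * grad_w2                           # discriminator ascends V
        a -= lr * (grad_a + eta * grad_pen)          # regularized generator descent
    return w2, a

def dist_to_eq(w2, a):
    """Distance to the true equilibrium (w2, a) = (0, 1)."""
    return (w2 ** 2 + (a - 1.0) ** 2) ** 0.5

w2_v, a_v = simulate(eta=0.0)   # vanilla WGAN: orbits around (0, 1)
w2_r, a_r = simulate(eta=0.5)   # gradient-regularized: converges to (0, 1)
print(dist_to_eq(w2_v, a_v))    # remains far from the equilibrium
print(dist_to_eq(w2_r, a_r))    # near 0
```

The vanilla run traces a closed orbit around $(0, 1)$ (slowly expanding under discrete updates), while the regularized run spirals in, matching the streamline plots of Figure 3.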
⁴We provide an implementation of this technique at https://github.com/locuslab/gradient_regularized_gan

Figure 1: Gradient regularized GAN, $\eta = 0.5$ (top row) vs. 10-unrolled with $\eta = 10^{-4}$ (bottom row); samples shown at iterations 0, 3000, 8000, 50000, and 70000.
Figure 2: Gradient regularized (left) and traditional (right) DCGAN architectures on stacked MNIST examples, after 1, 4 and 20 epochs.
Figure 3: Streamline plots around the equilibrium $(0, 1)$ for the conventional GAN (top) and the WGAN (bottom) for $\eta = 0$ (vanilla updates) and $\eta = 0.25, 0.5, 1$ (left to right).

5 Conclusion
In this paper, we presented a theoretical analysis of the local asymptotic stability of GAN optimization under proper conditions. We further showed that the recently proposed WGAN is not asymptotically stable under the same conditions, but we introduced a gradient-based regularizer which stabilizes both traditional GANs and WGANs, and can improve convergence speed in practice. The results here provide substantial insight into the nature of GAN optimization, perhaps even offering some clues as to why these methods have worked so well despite not being convex-concave. However, we also emphasize that there are substantial limitations to the analysis, and directions for future work. Perhaps most notably, the analysis here only provides an understanding of what happens locally, close to an equilibrium point.
For non-convex architectures this may be all that is possible, but it seems plausible that much stronger global convergence results could hold for simple settings like the linear quadratic GAN (indeed, as the streamline plots show, we observe this in practice for simple domains). Second, the analysis here does not show that the equilibrium points necessarily exist, but only illustrates convergence if there do exist points that satisfy certain criteria: the existence question has been addressed by previous work [Arora et al., 2017], but much more analysis remains to be done here. GANs are rapidly becoming a cornerstone of deep learning methods, and the theoretical and practical understanding of these methods will prove crucial in moving the field forward.

Acknowledgements. We thank Lars Mescheder for pointing out a missing condition in the relaxed version of Assumption IV (see Appendix C.1) in earlier versions of this manuscript.

References
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 214–223, 2017.
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 224–232, 2017.
Vivek S. Borkar and Sean P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447–469, 2000.
Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks.
In Fifth International Conference on Learning Representations (ICLR), 2017.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, pages 1486–1494, 2015.
Harris Drucker and Yann Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991–997, 1992.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pages 2672–2680, 2014.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), 2017.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
Hassan K. Khalil. Non-linear Systems. Prentice-Hall, New Jersey, 1996.
Harold Kushner and George Yin. Stochastic Approximation and Recursive Algorithms and Applications, volume 35 of Stochastic Modelling and Applied Probability. Springer-Verlag, New York, 2003.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Jan R. Magnus and Heinz Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. 1995.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In Fourth International Conference on Learning Representations (ICLR), 2016.
L. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs.
In Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), 2017.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In Fifth International Conference on Learning Representations (ICLR), 2017.
Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Ben Poole, Alexander A. Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generator objectives for GANs. arXiv preprint arXiv:1612.02780, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Fourth International Conference on Learning Representations (ICLR), 2016.
K. Roth, A. Lucchi, S. Nowozin, and T. Hofmann. Stabilizing training of generative adversarial networks through regularization. In Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), 2017.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, pages 2234–2242, 2016.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in Neural Information Processing Systems 29, pages 82–90, 2016.
Faster and Non-ergodic O(1/K) Stochastic Alternating Direction Method of Multipliers

Cong Fang, Feng Cheng, Zhouchen Lin∗
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, P. R. China
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, P. R. China
fangcong@pku.edu.cn fengcheng@pku.edu.cn zlin@pku.edu.cn

Abstract
We study stochastic convex optimization subject to linear equality constraints. The traditional Stochastic Alternating Direction Method of Multipliers [1] and its Nesterov acceleration scheme [2] can only achieve ergodic $O(1/\sqrt{K})$ convergence rates, where $K$ is the number of iterations. By introducing Variance Reduction (VR) techniques, the convergence rates improve to ergodic $O(1/K)$ [3, 4]. In this paper, we propose a new stochastic ADMM which elaborately integrates Nesterov's extrapolation and VR techniques. With Nesterov's extrapolation, our algorithm achieves a non-ergodic $O(1/K)$ convergence rate, which is optimal for separable linearly constrained non-smooth convex problems, while the convergence rates of VR based ADMM methods are actually tight $O(1/\sqrt{K})$ in the non-ergodic sense. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems. The experimental results demonstrate that our algorithm is faster than the existing state-of-the-art stochastic ADMM methods.

1 Introduction
We consider the following general convex finite-sum problem with linear constraints:
$$\min_{x_1, x_2}\; h_1(x_1) + f_1(x_1) + h_2(x_2) + \frac{1}{n} \sum_{i=1}^{n} f_{2,i}(x_2), \quad \text{s.t.}\; A_1 x_1 + A_2 x_2 = b, \qquad (1)$$
where $f_1(x_1)$ and $f_{2,i}(x_2)$ with $i \in \{1, 2, \cdots, n\}$ are convex and have Lipschitz continuous gradients, and $h_1(x_1)$ and $h_2(x_2)$ are also convex but can be non-smooth. We use the following notations: $L_1$ denotes the Lipschitz constant of $f_1(x_1)$, $L_2$ is the Lipschitz constant of $f_{2,i}(x_2)$ with $i \in \{1, 2, \cdots, n\}$, and $f_2(x) = \frac{1}{n} \sum_{i=1}^{n} f_{2,i}(x)$.
We use $\nabla f$ to denote the gradient of $f$. Problem (1) is of great importance in machine learning. The finite-sum function $f_2(x_2)$ is typically a loss over training samples, and the remaining functions control the structure or regularize the model to aid generalization [2]. The idea of using linear constraints to decouple the loss and regularization terms enables researchers to consider more sophisticated regularization terms which might be very complicated to handle through proximity operators for Gradient Descent [5] methods. For example, for multitask learning problems [6, 7], the regularization term is set as $\mu_1 \|x\|_* + \mu_2 \|x\|_1$; for most graph-guided fused Lasso and overlapping group Lasso problems [8, 4], the regularization term can be written as $\mu \|Ax\|_1$; and for many multi-view learning tasks [9], the regularization terms often involve $\mu_1 \|x\|_{2,1} + \mu_2 \|x\|_*$.

∗Corresponding author.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Table 1: Convergence rates of ADMM type methods solving Problem (1).
Type        Algorithm          Convergence Rate
Batch       ADMM [13]          Tight non-ergodic O(1/√K)
            LADM-NE [15]       Optimal non-ergodic O(1/K)
Stochastic  STOC-ADMM [1]      ergodic O(1/√K)
            OPG-ADMM [16]      ergodic O(1/√K)
            OPT-ADMM [2]       ergodic O(1/√K)
            SDCA-ADMM [17]     unknown
            SAG-ADMM [3]       Tight non-ergodic O(1/√K)
            SVRG-ADMM [4]      Tight non-ergodic O(1/√K)
            ACC-SADMM (ours)   Optimal non-ergodic O(1/K)

The Alternating Direction Method of Multipliers (ADMM) is a very popular optimization method for solving Problem (1), with its advantages in speed, ease of implementation and good scalability shown in the literature (see the survey [10]). A popular criterion for an algorithm's convergence rate is its ergodic convergence, and it is proved in [11, 12] that ADMM converges at an $O(1/K)$ ergodic rate. However, in this paper, it is noteworthy that we consider convergence in the non-ergodic sense.
The reasons are twofold: 1) in real applications, the output of ADMM methods is the non-ergodic result $x_K$, rather than the ergodic one (a convex combination of $x_1, x_2, \cdots, x_K$), as the non-ergodic results are much faster (see detailed discussions in Section 5.3); 2) the ergodic convergence rate is not trivially the same as the general-case rate. For example, the sequence $\{a_k\} = \{1, -1, 1, -1, 1, -1, \cdots\}$ ($a_k$ is $1$ when $k$ is odd and $-1$ when $k$ is even) is divergent, while in the ergodic sense it converges at $O(1/K)$. So analysis in the non-ergodic sense is closer to reality. Point 2) is especially relevant for ADMM methods. In [13], Davis et al. prove that Douglas-Rachford (DR) splitting converges at a non-ergodic $O(1/\sqrt{K})$ rate. They also construct a family of functions showing that non-ergodic $O(1/\sqrt{K})$ is tight. Chen et al. establish $O(1/\sqrt{K})$ for Linearized ADMM [14]. Then Li et al. accelerate ADMM through Nesterov's extrapolation and obtain a non-ergodic $O(1/K)$ convergence rate [15]. They also prove that the lower complexity bound of ADMM type methods for separable linearly constrained nonsmooth convex problems is exactly $O(1/K)$, which demonstrates that their algorithm is optimal. The convergence rates of different ADMM based algorithms are shown in Table 1. On the other hand, to meet the demands of solving large-scale machine learning problems, stochastic algorithms [18] have drawn a lot of interest in recent years. For stochastic ADMM (SADMM), the prior works are STOC-ADMM [1] and OPG-ADMM [16]. Due to the noise of the gradient, both algorithms can only achieve an ergodic $O(1/\sqrt{K})$ convergence rate. There are two lines of research to accelerate SADMM. The first is to introduce Variance Reduction (VR) [19, 20, 21] techniques into SADMM. VR methods ensure that the descent direction has a bounded variance and so can achieve faster convergence rates. The existing VR based SADMM algorithms include SDCA-ADMM [17], SAG-ADMM [3] and SVRG-ADMM [4].
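The $\{a_k\}$ example above, a sequence that diverges pointwise yet whose ergodic average converges at $O(1/K)$, can be verified in a few lines:

```python
# The alternating sequence a_k (k = 1, 2, ...) never converges, yet its
# running (ergodic) average S_K / K is bounded by 1/K in absolute value.
seq = [1 if k % 2 == 1 else -1 for k in range(1, 10001)]  # a_k = 1 odd, -1 even

running_avg = []
s = 0.0
for K, a in enumerate(seq, start=1):
    s += a                       # partial sum S_K is 1 (K odd) or 0 (K even)
    running_avg.append(s / K)    # ergodic average

print(running_avg[999])   # 0.0   (K = 1000, even)
print(running_avg[998])   # 1/999 (K = 999, odd)
# |S_K / K| * K = |S_K| <= 1 for every K, i.e., an O(1/K) ergodic rate:
print(max(abs(v) * K for K, v in enumerate(running_avg, start=1)))  # 1.0
```

This is exactly why an ergodic rate can overstate what the last iterate achieves, which is the paper's motivation for non-ergodic analysis.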
SAG-ADMM and SVRG-ADMM provably achieve ergodic $O(1/K)$ rates for Problem (1). The second way to accelerate SADMM is through Nesterov's acceleration [22]. This line of work starts from [2], in which the authors propose an ergodic $O\!\left(\frac{R^2}{K^2} + \frac{D_y + \rho}{K} + \frac{\sigma}{\sqrt{K}}\right)$ stochastic algorithm (OPT-ADMM). The dependence of the convergence rate on the smoothness constant is $O(1/K^2)$, so each term in the convergence rate seems to have been improved to optimal. However, the worst-case convergence rate is still $O(1/\sqrt{K})$. In this paper, we propose Accelerated Stochastic ADMM (ACC-SADMM) for large scale general convex finite-sum problems with linear constraints. By elaborately integrating Nesterov's extrapolation and VR techniques, ACC-SADMM provably achieves a non-ergodic $O(1/K)$ convergence rate, which is optimal for non-smooth problems. Since in the non-ergodic sense the VR based SADMM methods (e.g. SVRG-ADMM, SAG-ADMM) converge at a tight $O(1/\sqrt{K})$ rate (please see detailed discussions in Section 5.3), ACC-SADMM improves the convergence rate from $O(1/\sqrt{K})$ to $O(1/K)$ in the non-ergodic sense and fills the theoretical gap between stochastic and batch (deterministic) ADMM. The original idea in designing ACC-SADMM is to explicitly incorporate the snapshot vector $\tilde{x}$ (approximately the mean value of $x$ in the last epoch) into the extrapolation terms. This is, to some degree, inspired by [23], which proposes an $O(1/K^2)$ stochastic gradient algorithm named Katyusha for convex problems. However, there are many distinctions between the two algorithms (please see detailed discussions in Section 5.1).

Table 2: Notations and Variables
Notation — Meaning: $\langle x, y \rangle_G$, $\|x\|_G$ — $x^T G y$, $\sqrt{x^T G x}$; $F_i(x_i)$ — $h_i(x_i) + f_i(x_i)$; $x$ — $(x_1, x_2)$; $y$ — $(y_1, y_2)$; $F(x)$ — $F_1(x_1) + F_2(x_2)$.
Variable — Meaning: $y^k_{s,1}, y^k_{s,2}$ — extrapolation variables; $x^k_{s,1}, x^k_{s,2}$ — primal variables; $\tilde{\lambda}^k_s, \lambda^k_s$ — dual and temporary variables; $\tilde{x}_{s,1}, \tilde{x}_{s,2}, \tilde{b}_s$ — snapshot vectors; $x^*_1, x^*_2, \lambda^*$ — optimal solution of Eq. (1).
Our method is also very efficient in practice, since our acceleration scheme takes the noise of the gradient into account. For example, in the inner loop we adopt the extrapolation $y^k_s = x^k_s + (1 - \theta_{1,s} - \theta_2)(x^k_s - x^{k-1}_s)$, where $\theta_2$ is a constant and $\theta_{1,s}$ decreases after every epoch, instead of directly adopting the extrapolation $y^k = x^k + \frac{\theta^k_1 (1 - \theta^{k-1}_1)}{\theta^{k-1}_1}(x^k - x^{k-1})$ of the original Nesterov scheme and adding the proximal term $\frac{\|x^{k+1} - x^k\|^2}{\sigma k^{3/2}}$ as [2] does. There are also variants in the updating of the multiplier and the snapshot vector. We list the contributions of our work as follows:
• We propose ACC-SADMM for large scale convex finite-sum problems with linear constraints, which integrates Nesterov's extrapolation and VR techniques. We prove that our algorithm converges at a non-ergodic $O(1/K)$ rate, which is optimal for separable linearly constrained non-smooth convex problems. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems.
• We do experiments on four benchmark datasets to demonstrate the superiority of our algorithm. We also do experiments on the Multitask Learning [6] problem to demonstrate that our algorithm can be used on very large datasets.

2 Preliminary
Most SADMM methods alternately minimize the following variant surrogate of the augmented Lagrangian:
$$L'(x_1, x_2, \lambda, \beta) = h_1(x_1) + \langle \nabla f_1(x_1), x_1 \rangle + \frac{L_1}{2} \|x_1 - x^k_1\|^2_{G_1} + h_2(x_2) + \langle \tilde{\nabla} f_2(x_2), x_2 \rangle + \frac{L_2}{2} \|x_2 - x^k_2\|^2_{G_2} + \frac{\beta}{2} \left\| A_1 x_1 + A_2 x_2 - b + \frac{\lambda}{\beta} \right\|^2, \qquad (2)$$
where $\tilde{\nabla} f_2(x_2)$ is an estimator of $\nabla f_2(x_2)$ from one or a mini-batch of training samples. So the computation cost of each iteration reduces from $O(n)$ to $O(b)$, where $b$ is the mini-batch size. When $f_i(x) = 0$ and $G_i = 0$ for $i = 1, 2$, Problem (1) is solved as exact ADMM. When there is no $h_i(x_i)$ and $G_i$ is set as the identity matrix $I$ for $i = 1, 2$, the subproblem in $x_i$ can be solved through matrix inversion. This scheme is advocated in many SADMM methods [1, 3].
Another common approach is linearization (also called the inexact Uzawa method) [24, 25], where $G_i$ is set as $\eta_i I - \frac{\beta}{L_i} A_i^T A_i$ with $\eta_i \ge 1 + \frac{\beta}{L_i} \|A_i^T A_i\|$. For STOC-ADMM [1], $\tilde{\nabla} f_2(x_2)$ is simply set as
$$\tilde{\nabla} f_2(x_2) = \frac{1}{b} \sum_{i_k \in I_k} \nabla f_{2, i_k}(x_2), \qquad (3)$$
where $I_k$ is the mini-batch of size $b$ from $\{1, 2, \cdots, n\}$. For SVRG-ADMM [4], the gradient estimator can be written as
$$\tilde{\nabla} f_2(x_2) = \frac{1}{b} \sum_{i_k \in I_k} \left( \nabla f_{2, i_k}(x_2) - \nabla f_{2, i_k}(\tilde{x}_2) \right) + \nabla f_2(\tilde{x}_2), \qquad (4)$$
where $\tilde{x}_2$ is a snapshot vector (the mean value of the last epoch).

Algorithm 1 Inner loop of ACC-SADMM
for k = 0 to m − 1 do
  Update the dual variable: $\lambda^k_s = \tilde{\lambda}^k_s + \frac{\beta \theta_2}{\theta_{1,s}} \left( A_1 x^k_{s,1} + A_2 x^k_{s,2} - \tilde{b}_s \right)$.
  Update $x^{k+1}_{s,1}$ through Eq. (6).
  Update $x^{k+1}_{s,2}$ through Eq. (7).
  Update the dual variable: $\tilde{\lambda}^{k+1}_s = \lambda^k_s + \beta \left( A_1 x^{k+1}_{s,1} + A_2 x^{k+1}_{s,2} - b \right)$.
  Update $y^{k+1}_s$ through Eq. (5).
end for

3 Our Algorithm
3.1 ACC-SADMM
To help readers understand our algorithm more easily, we list the notations and variables in Table 2. Our algorithm has double loops, as we use SVRG [19], which also has two layers of nested loops to estimate the gradient. We use the subscript $s$ as the index of the outer loop and the superscript $k$ as the index of the inner loop. For example, $x^k_{s,1}$ is the value of $x_1$ at the $k$-th step of the inner iteration and the $s$-th step of the outer iteration. We use $x^k_s$ and $y^k_s$ to denote $(x^k_{s,1}, x^k_{s,2})$ and $(y^k_{s,1}, y^k_{s,2})$, respectively. In each inner loop, we update the primal variables $x^k_{s,1}$ and $x^k_{s,2}$, the extrapolation terms $y^k_{s,1}$, $y^k_{s,2}$ and the dual variable $\lambda^k_s$, while $s$ remains unchanged. In the outer loop, we maintain the snapshot vectors $\tilde{x}_{s+1,1}$, $\tilde{x}_{s+1,2}$ and $\tilde{b}_{s+1}$, and then assign the initial value to the extrapolation terms $y^0_{s+1,1}$ and $y^0_{s+1,2}$. We directly linearize both the smooth term $f_i(x_i)$ and the augmented term $\frac{\beta}{2} \|A_1 x_1 + A_2 x_2 - b + \frac{\lambda}{\beta}\|^2$. The whole algorithm is shown in Algorithm 2.

3.2 Inner Loop
The inner loop of ACC-SADMM is straightforward, shown in Algorithm 1. In each iteration, we do extrapolation, and then update the primal and dual variables.
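As a side note, the contrast between the plain estimator of Eq. (3) and the variance-reduced estimator of Eq. (4) can be illustrated numerically. The sketch below uses hypothetical least-squares components $f_{2,i}(x) = \frac{1}{2}(a_i^T x - b_i)^2$ and mini-batch size 1 (all numbers are our own toy choices): both estimators average to the full gradient, but near the snapshot the SVRG estimator's variance is far smaller:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component f_{2,i}(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

def svrg_grad(x, x_tilde, i):
    """Variance-reduced estimator in the style of Eq. (4), mini-batch size 1."""
    return grad_i(x, i) - grad_i(x_tilde, i) + full_grad(x_tilde)

x_tilde = rng.standard_normal(d)                 # snapshot vector
x = x_tilde + 0.01 * rng.standard_normal(d)      # current iterate, near snapshot

# Both estimators are unbiased: averaging over all i recovers the full gradient.
plain = np.array([grad_i(x, i) for i in range(n)])          # Eq. (3) style
svrg = np.array([svrg_grad(x, x_tilde, i) for i in range(n)])  # Eq. (4) style
assert np.allclose(plain.mean(axis=0), full_grad(x))
assert np.allclose(svrg.mean(axis=0), full_grad(x))

# Near the snapshot, the SVRG estimator has much smaller variance.
print(plain.var(axis=0).sum() > 100 * svrg.var(axis=0).sum())  # True
```

The shrinking variance as the iterate approaches the snapshot is what lets VR based methods use larger steps than plain stochastic ADMM.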
There are two critical steps that enable us to obtain non-ergodic results. The first is the extrapolation:
$$y_s^{k+1} = x_s^{k+1} + (1 - \theta_{1,s} - \theta_2)(x_s^{k+1} - x_s^k). \quad (5)$$
We can see that $1 - \theta_{1,s} - \theta_2 \le 1 - \theta_{1,s}$, so compared with the original Nesterov scheme, our extrapolation is "milder", which helps tackle the noise of the gradient. The second is the update of the primal variables:
$$x_{s,1}^{k+1} = \arg\min_{x_1}\; h_1(x_1) + \langle \nabla f_1(y_{s,1}^k), x_1 \rangle + \Big\langle \frac{\beta}{\theta_{1,s}}\big(A_1 y_{s,1}^k + A_2 y_{s,2}^k - b\big) + \lambda_s^k,\; A_1 x_1 \Big\rangle + \Big(\frac{L_1}{2} + \frac{\beta\|A_1^T A_1\|}{2\theta_{1,s}}\Big)\|x_1 - y_{s,1}^k\|^2. \quad (6)$$
We then update $x_2$ with the latest information on $x_1$:
$$x_{s,2}^{k+1} = \arg\min_{x_2}\; h_2(x_2) + \langle \tilde\nabla f_2(y_{s,2}^k), x_2 \rangle + \Big\langle \frac{\beta}{\theta_{1,s}}\big(A_1 x_{s,1}^{k+1} + A_2 y_{s,2}^k - b\big) + \lambda_s^k,\; A_2 x_2 \Big\rangle + \Big(\frac{(1 + \frac{1}{b\theta_2})L_2}{2} + \frac{\beta\|A_2^T A_2\|}{2\theta_{1,s}}\Big)\|x_2 - y_{s,2}^k\|^2, \quad (7)$$
where $\tilde\nabla f_2(y_{s,2}^k)$ is obtained by the SVRG technique [19]:
$$\tilde\nabla f_2(y_{s,2}^k) = \frac{1}{b}\sum_{i_{k,s} \in I_{k,s}} \big(\nabla f_{2,i_{k,s}}(y_{s,2}^k) - \nabla f_{2,i_{k,s}}(\tilde x_{s,2})\big) + \nabla f_2(\tilde x_{s,2}).$$
Compared with unaccelerated SADMM methods, which alternately minimize Eq. (2), our method is distinct in two ways. First, the gradient estimator is computed at $y_{s,2}^k$. Second, we adopt a slowly increasing penalty factor $\frac{\beta}{\theta_{1,s}}$ instead of a fixed one.

Algorithm 2 ACC-SADMM
Input: epoch length $m > 2$, $\beta$, $\tau = 2$, $c = 2$, $x_0^0 = 0$, $\tilde\lambda_0^0 = 0$, $\tilde x_0 = x_0^0$, $y_0^0 = x_0^0$, $\theta_{1,s} = \frac{1}{c + \tau s}$, $\theta_2 = \frac{m - \tau}{\tau(m-1)}$.
for $s = 0$ to $S - 1$ do
  Do inner loop, as stated in Algorithm 1.
  Set primal variables: $x_{s+1}^0 = x_s^m$.
  Update snapshot vectors $\tilde x_{s+1}$ through Eq. (8).
  Update dual variable: $\tilde\lambda_{s+1}^0 = \lambda_s^{m-1} + \beta(1 - \tau)\big(A_1 x_{s,1}^m + A_2 x_{s,2}^m - b\big)$.
  Update dual snapshot variable: $\tilde b_{s+1} = A_1 \tilde x_{s+1,1} + A_2 \tilde x_{s+1,2}$.
  Update extrapolation terms $y_{s+1}^0$ through Eq. (9).
end for
Output: $\hat x_S = \frac{1}{(m-1)(\theta_{1,S} + \theta_2) + 1}\, x_S^m + \frac{\theta_{1,S} + \theta_2}{(m-1)(\theta_{1,S} + \theta_2) + 1} \sum_{k=1}^{m-1} x_S^k$.

3.3 Outer Loop

The outer loop of our algorithm is a little more involved: we maintain snapshot vectors and then reset the initial values.
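The damped extrapolation of Eq. (5), with the parameter choices $\theta_{1,s} = \frac{1}{c+\tau s}$ and $\theta_2 = \frac{m-\tau}{\tau(m-1)}$ from Algorithm 2, can be sketched as follows (an illustrative sketch; the function names are ours):

```python
def theta1(s, c=2.0, tau=2.0):
    # Decreasing sequence from Algorithm 2: theta_{1,s} = 1 / (c + tau * s).
    return 1.0 / (c + tau * s)

def theta2(m, tau=2.0):
    # Constant momentum correction: theta_2 = (m - tau) / (tau * (m - 1)).
    return (m - tau) / (tau * (m - 1))

def extrapolate(x_new, x_old, s, m):
    # Eq. (5): y = x_new + (1 - theta_{1,s} - theta_2) * (x_new - x_old).
    # The extra "- theta_2" makes the momentum weight strictly smaller than the
    # deterministic choice 1 - theta_{1,s}, damping the effect of gradient noise.
    w = 1.0 - theta1(s) - theta2(m)
    return x_new + w * (x_new - x_old)
```

Since $\theta_2 > 0$ for $m > \tau$, the momentum weight is always below the one used by the original Nesterov scheme, which is the "milder" behavior described above.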
The main variants we adopt concern the snapshot vector $\tilde x_{s+1}$ and the extrapolation term $y_{s+1}^0$. We update the snapshot vector as
$$\tilde x_{s+1} = \frac{1}{m}\Big[\Big(1 - \frac{(\tau - 1)\theta_{1,s+1}}{\theta_2}\Big) x_s^m + \Big(1 + \frac{(\tau - 1)\theta_{1,s+1}}{(m-1)\theta_2}\Big) \sum_{k=1}^{m-1} x_s^k\Big]. \quad (8)$$
Thus $\tilde x_{s+1}$ is not the plain average of $\{x_s^k\}$, different from most SVRG-based methods [19, 4]. This way of generating $\tilde x$ guarantees a faster convergence rate for the constraints. We then reset $y_{s+1}^0$ as
$$y_{s+1}^0 = (1 - \theta_2) x_s^m + \theta_2 \tilde x_{s+1} + \frac{\theta_{1,s+1}}{\theta_{1,s}}\big[(1 - \theta_{1,s}) x_s^m - (1 - \theta_{1,s} - \theta_2) x_s^{m-1} - \theta_2 \tilde x_s\big]. \quad (9)$$

4 Convergence Analysis

In this section, we give the convergence results of ACC-SADMM. The proofs and an outline can be found in the Supplementary Material. As mentioned in Section 3.2, the main strategy that enables us to obtain non-ergodic results is the extrapolation in Eq. (5). We first analyze each inner iteration, as shown in Lemma 1, where we drop the subscript $s$ since it is unchanged within an inner iteration.

Lemma 1 Assume that $f_1(x_1)$ and $f_{2,i}(x_2)$, $i \in \{1, 2, \cdots, n\}$, are convex and have Lipschitz continuous gradients, with Lipschitz constant $L_1$ for $f_1(x_1)$ and $L_2$ for each $f_{2,i}(x_2)$, and that $h_1(x_1)$ and $h_2(x_2)$ are also convex. For Algorithm 2, in any epoch we have
$$\mathbb{E}_{i_k} L(x_1^{k+1}, x_2^{k+1}, \lambda^*) - \theta_2 L(\tilde x_1, \tilde x_2, \lambda^*) - (1 - \theta_2 - \theta_1) L(x_1^k, x_2^k, \lambda^*) \le \frac{\theta_1}{2\beta}\Big(\|\hat\lambda^k - \lambda^*\|^2 - \mathbb{E}_{i_k}\big[\|\hat\lambda^{k+1} - \lambda^*\|^2\big]\Big)$$
$$+ \frac{1}{2}\|y_1^k - (1 - \theta_1 - \theta_2) x_1^k - \theta_2 \tilde x_1 - \theta_1 x_1^*\|^2_{G_3} - \frac{1}{2}\mathbb{E}_{i_k}\|x_1^{k+1} - (1 - \theta_1 - \theta_2) x_1^k - \theta_2 \tilde x_1 - \theta_1 x_1^*\|^2_{G_3}$$
$$+ \frac{1}{2}\|y_2^k - (1 - \theta_1 - \theta_2) x_2^k - \theta_2 \tilde x_2 - \theta_1 x_2^*\|^2_{G_4} - \frac{1}{2}\mathbb{E}_{i_k}\|x_2^{k+1} - (1 - \theta_1 - \theta_2) x_2^k - \theta_2 \tilde x_2 - \theta_1 x_2^*\|^2_{G_4},$$
where $\mathbb{E}_{i_k}$ denotes the expectation over the random samples in the mini-batch $I_{k,s}$, $L(x_1, x_2, \lambda) = F_1(x_1) + F_2(x_2) + \langle \lambda, A_1 x_1 + A_2 x_2 - b \rangle$, $\hat\lambda^k = \tilde\lambda^k + \frac{\beta(1 - \theta_1)}{\theta_1}(A x^k - b)$, $G_3 = \big(L_1 + \frac{\beta\|A_1^T A_1\|}{\theta_1}\big) I - \frac{\beta A_1^T A_1}{\theta_1}$, and $G_4 = \big((1 + \frac{1}{b\theta_2}) L_2 + \frac{\beta\|A_2^T A_2\|}{\theta_1}\big) I$.

Theorem 1 then analyzes ACC-SADMM over the whole iteration; it is the key convergence result of this paper.
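As a quick sanity check of the snapshot update in Eq. (8): the two weights satisfy $w_{\text{last}} + (m-1)\,w_{\text{inner}} = m$, so $\tilde x_{s+1}$ is a weighted average of the epoch's iterates with total weight one, just not the uniform average used by plain SVRG. A sketch (the function name and code structure are ours):

```python
import numpy as np

def snapshot(x_last, x_inner, s, m, tau=2.0, c=2.0):
    # Eq. (8): weighted combination of one epoch's iterates. x_inner stacks
    # x_s^1, ..., x_s^{m-1} row-wise; x_last is x_s^m.
    th1_next = 1.0 / (c + tau * (s + 1))          # theta_{1,s+1}
    th2 = (m - tau) / (tau * (m - 1))             # theta_2
    w_last = 1.0 - (tau - 1.0) * th1_next / th2
    w_inner = 1.0 + (tau - 1.0) * th1_next / ((m - 1.0) * th2)
    # w_last + (m-1) * w_inner == m, so dividing by m gives total weight one.
    return (w_last * x_last + w_inner * np.sum(x_inner, axis=0)) / m
```

Because the weights sum to one, feeding a constant sequence returns the same constant; the non-uniform weighting is what favors the later iterates and speeds up the constraint convergence.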
Theorem 1 If the conditions in Lemma 1 hold, then we have
$$\mathbb{E}\Big[\frac{1}{2\beta}\Big\|\frac{\beta m}{\theta_{1,S}}(A\hat x_S - b) - \frac{\beta(m-1)\theta_2}{\theta_{1,0}}(A x_0^0 - b) + \tilde\lambda_0^0 - \lambda^*\Big\|^2\Big] + \mathbb{E}\Big[\frac{m}{\theta_{1,S}}\big(F(\hat x_S) - F(x^*) + \langle \lambda^*, A\hat x_S - b \rangle\big)\Big] \quad (10)$$
$$\le C_3\big(F(x_0^0) - F(x^*) + \langle \lambda^*, A x_0^0 - b \rangle\big) + \frac{1}{2\beta}\Big\|\tilde\lambda_0^0 + \frac{\beta(1 - \theta_{1,0})}{\theta_{1,0}}(A x_0^0 - b) - \lambda^*\Big\|^2$$
$$+ \frac{1}{2}\|x_{0,1}^0 - x_1^*\|^2_{(\theta_{1,0} L_1 + \|A_1^T A_1\|) I - A_1^T A_1} + \frac{1}{2}\|x_{0,2}^0 - x_2^*\|^2_{\big((1 + \frac{1}{b\theta_2})\theta_{1,0} L_2 + \|A_2^T A_2\|\big) I},$$
where $C_3 = \frac{1 - \theta_{1,0} + (m-1)\theta_2}{\theta_{1,0}}$.

Corollary 1 directly shows that ACC-SADMM has a non-ergodic $O(1/K)$ convergence rate.

Corollary 1 If the conditions in Lemma 1 hold, we have
$$\mathbb{E}|F(\hat x_S) - F(x^*)| \le O\Big(\frac{1}{S}\Big), \qquad \mathbb{E}\|A\hat x_S - b\| \le O\Big(\frac{1}{S}\Big). \quad (11)$$
Note that $\hat x_S$ depends only on the latest $m$ iterates $x_S^k$, so our convergence result is in the non-ergodic sense, while the analyses for SVRG-ADMM [4] and SAG-ADMM [3] are in the ergodic sense, since they consider the point $\hat x_S = \frac{1}{mS}\sum_{s=1}^S \sum_{k=1}^m x_s^k$, a convex combination of the $x_s^k$ over all iterations.

We now directly use the theoretical results of [15] to show that our algorithm is optimal when there is a non-smooth term in the objective function.

Theorem 2 Consider the problem
$$\min_{x_1, x_2} F_1(x_1) + F_2(x_2), \quad \text{s.t. } x_1 - x_2 = 0, \quad (12)$$
and let the ADMM-type algorithm to solve it be:
• Generate $\lambda_2^k$ and $y_2^k$ in any way;
• $x_1^{k+1} = \mathrm{Prox}_{F_1/\beta^k}\big(y_2^k - \frac{\lambda_2^k}{\beta^k}\big)$;
• Generate $\lambda_1^{k+1}$ and $y_1^{k+1}$ in any way;
• $x_2^{k+1} = \mathrm{Prox}_{F_2/\beta^k}\big(y_1^{k+1} - \frac{\lambda_1^{k+1}}{\beta^k}\big)$.
Then there exist convex functions $F_1$ and $F_2$ defined on $X = \{x \in \mathbb{R}^{6k+5} : \|x\| \le B\}$ such that, for the above general ADMM method,
$$L\|\hat x_2^k - \hat x_1^k\| + \big|F_1(\hat x_1^k) - F_1(x_1^*) + F_2(\hat x_2^k) - F_2(x_2^*)\big| \ge \frac{LB}{8(k+1)}, \quad (13)$$
where $\hat x_1^k = \sum_{i=1}^k \alpha_1^i x_1^i$ and $\hat x_2^k = \sum_{i=1}^k \alpha_2^i x_2^i$ for any $\alpha_1^i$ and $\alpha_2^i$ with $i$ from $1$ to $k$.

Theorem 2 is Theorem 11 in [15]; more details can be found there. Problem (12) is a special case of Problem (1), since we can set each $F_{2,i}(x_2) = F(x_2)$ for $i = 1, \cdots, n$, or set $n = 1$. So no ADMM-type algorithm converges faster than $O(1/K)$ for Problem (1).
5 Discussions

We discuss some properties of ACC-SADMM and make further comparisons with related methods.

Table 3: Size of datasets and mini-batch size adopted in the experiments
Problem     Dataset     #training    #testing    #dimension × #class    #minibatch
Lasso       a9a         72,876       72,875      74 × 2                 100
            covertype   290,506      290,506     54 × 2
            mnist       60,000       10,000      784 × 10
            dna         2,400,000    600,000     800 × 2                500
Multitask   ImageNet    1,281,167    50,000      4,096 × 1,000          2,000

5.1 Comparison with Katyusha

As mentioned in the Introduction, some intuitions of our algorithm are inspired by Katyusha [23], which obtains an $O(1/K^2)$ rate for convex finite-sum problems. However, Katyusha cannot solve problems with linear constraints. Besides, Katyusha uses Nesterov's second scheme to accelerate the algorithm, while our method conducts acceleration through Nesterov's extrapolation (Nesterov's first scheme), and our proof uses the technique of [26], which is different from [23]. Our algorithm can easily be extended to unconstrained convex finite-sum problems and also obtains an $O(1/K^2)$ rate, but belongs to Nesterov's first scheme.²

5.2 The Growth of the Penalty Factor $\frac{\beta}{\theta_{1,s}}$

The penalty factor $\frac{\beta}{\theta_{1,s}}$ increases linearly with the iteration. One might deem that this makes our algorithm impractical, because after dozens of epochs the large penalty factor might slow down the decrease of the function value. However, we have not observed any such bad influence. There may be two reasons:
1. In our algorithm, $\theta_{1,s}$ decreases after each epoch ($m$ iterations), which is much slower than in LADM-NE [15]. The growth of the penalty factor thus works as a continuation technique [28], which may help to decrease the function value.
2. By Theorem 1, our algorithm converges in $O(1/S)$ no matter how large the penalty factor is, so from the theoretical viewpoint a large penalty factor cannot slow down our algorithm.
We note that OPT-ADMM [2] also needs to decrease its step size with the iteration.
However, its step-size decreasing rate is $O(k^{3/2})$, which is faster than ours.

5.3 The Importance of Non-ergodic $O(1/K)$

SAG-ADMM [3] and SVRG-ADMM [4] accelerate SADMM to an ergodic $O(1/K)$ rate. In Theorem 9 of [15], the authors construct a class of functions showing that the original ADMM has a tight non-ergodic $O(1/\sqrt{K})$ convergence rate. When $n = 1$, SAG-ADMM and SVRG-ADMM reduce to batch ADMM, so their non-ergodic convergence rates are no better than $O(1/\sqrt{K})$. Hence, in the non-ergodic sense, our algorithm does have a faster convergence rate than VR-based SADMM methods.

We now highlight the importance of our non-ergodic result. As mentioned in the Introduction, in practice the output of ADMM methods is the non-ergodic iterate $x^K$, not the mean of $x^1$ to $x^K$. For deterministic ADMM, the ergodic $O(1/K)$ rate was proved in [11], after ADMM had already become a prevailing method for solving machine learning problems [29]; for stochastic ADMM, e.g. SVRG-ADMM [4], the authors give an ergodic $O(1/K)$ proof, but in their experiments they emphasize using the mean value of the last epoch as the result. Since non-ergodic results are closer to practice, our algorithm is much faster than VR-based SADMM methods even when the rates appear the same. In fact, although VR-based SADMM methods have provably faster rates than STOC-ADMM, the practical improvement is evident only after many iterations, when the iterates are close to the optimum, rather than at the early stage. In both [3] and [4], the authors report that SAG-ADMM and SVRG-ADMM are sensitive to initial points. We also find that when the step sizes are set according to their theoretical guidance, they are sometimes even slower than STOC-ADMM (see Fig. 1), since the early stage lasts longer when the step size is small.
Our algorithm is faster than these two algorithms, which demonstrates that Nesterov's extrapolation has truly accelerated the speed, and that the integration of extrapolation and VR techniques is harmonious and complementary.

²We follow [26] in naming the extrapolation scheme Nesterov's first scheme and the three-step scheme [27] Nesterov's second scheme.

Faster and Non-ergodic O(1/K) Stochastic Alternating Direction Method of Multipliers

[Figure 1 panels: objective gap versus number of effective passes on a9a, covertype, mnist, and dna, for the original Lasso and the Graph-Guided Fused Lasso; compared methods: STOC-ADMM, STOC-ADMM-ERG, OPT-ADMM, SVRG-ADMM, SVRG-ADMM-ERG, SAG-ADMM, SAG-ADMM-ERG, ACC-SADMM.]
Figure 1: Experimental results of solving the original Lasso (top) and Graph-Guided Fused Lasso (bottom). The computation time includes the cost of calculating full gradients for SVRG-based methods. SVRG-ADMM and SAG-ADMM are initialized by running STOC-ADMM for $\frac{3n}{b}$ iterations. "-ERG" denotes the ergodic results of the corresponding algorithms.

6 Experiments

We conduct experiments to show the effectiveness of our method.³
We compare our method with the following state-of-the-art SADMM algorithms: (1) STOC-ADMM [1], (2) SVRG-ADMM [4], (3) OPT-ADMM [2], and (4) SAG-ADMM [3]. We omit SDCA-ADMM [17] from our comparison, since it provides no analysis for general convex problems and is also not faster than SVRG-ADMM [4]. Experiments are performed on an Intel(R) CPU i7-4770 @ 3.40GHz machine with 16 GB memory. Our experiments focus on two typical problems [4]: the Lasso problem and Multitask Learning. Due to limited space, the Multitask Learning experiment is shown in the Supplementary Materials.

For the Lasso problem, we perform experiments on two typical variants. The first is the original Lasso problem; the second is the Graph-Guided Fused Lasso model: $\min_x \mu\|Ax\|_1 + \frac{1}{n}\sum_{i=1}^n l_i(x)$, where $l_i(x)$ is the logistic loss on sample $i$, and $A = [G; I]$ is a matrix encoding the feature sparsity pattern, with $G$ the sparsity pattern of the graph obtained by sparse inverse covariance estimation [30]. The experiments are performed on four benchmark datasets: a9a, covertype, mnist and dna.⁴ The details of the datasets and the mini-batch sizes used in all SADMM methods are shown in Table 3. Like [3] and [4], we fix $\mu = 10^{-5}$ and report the performance based on $(x^t, Ax^t)$ to satisfy the constraints of ADMM. Results are averaged over five repetitions, and we set $m = \frac{2n}{b}$ for all algorithms. For the original Lasso problem, the step sizes are set through the theoretical guidance of each algorithm; for the Graph-Guided Fused Lasso, the best step sizes are found by parameter search for the best convergence progress. For all algorithms except ACC-SADMM, we use the continuation technique [28] to accelerate convergence. SAG-ADMM is run only on the first three datasets due to its large memory requirement. The experimental results are shown in Fig. 1. Our algorithm consistently outperforms the other methods on all datasets for both problems, which verifies our theoretical analysis.
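For concreteness, the Graph-Guided Fused Lasso objective above can be written down directly. The following is a toy sketch (names are ours; the dense construction of $A = [G; I]$ is for illustration only — the actual solvers handle the $\ell_1$ term through the ADMM splitting rather than by evaluating the objective):

```python
import numpy as np

def graph_guided_objective(x, data, labels, G, mu=1e-5):
    # mu * ||A x||_1 + (1/n) * sum_i logistic loss, with A = [G; I]
    # stacking the graph structure matrix on top of the identity.
    A = np.vstack([G, np.eye(len(x))])
    margins = labels * (data @ x)                  # y_i * <a_i, x>
    logistic = np.mean(np.log1p(np.exp(-margins))) # logistic loss per sample
    return mu * np.abs(A @ x).sum() + logistic
```

At $x = 0$ the penalty vanishes and every logistic term equals $\log 2$, which gives a handy sanity check when wiring up a solver.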
The details of the parameter settings, experimental results with a larger fixed step size for the graph-guided Lasso problem, curves of the test error, the memory costs of all algorithms, and the Multitask Learning experiment are given in the Supplementary Materials.

³The code will be available at http://www.cis.pku.edu.cn/faculty/vision/zlin/zlin.htm.
⁴a9a, covertype and dna are from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, and mnist is from http://yann.lecun.com/exdb/mnist/.

7 Conclusion

We propose ACC-SADMM for general convex finite-sum problems. ACC-SADMM integrates Nesterov's extrapolation and VR techniques and achieves a non-ergodic $O(1/K)$ convergence rate, which is of both theoretical and practical importance. Our experiments demonstrate that the algorithm is faster than other SADMM methods.

Acknowledgment

Zhouchen Lin is supported by the National Basic Research Program of China (973 Program) (grant no. 2015CB352502) and the National Natural Science Foundation (NSF) of China (grant nos. 61625301, 61731018, and 61231002).

References
[1] Hua Ouyang, Niao He, Long Tran, and Alexander G. Gray. Stochastic alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2013.
[2] Samaneh Azadi and Suvrit Sra. Towards an optimal stochastic alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[3] Wenliang Zhong and James Tin-Yau Kwok. Fast stochastic alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[4] Shuai Zheng and James T. Kwok. Fast-and-light stochastic ADMM. In Proc. Int'l. Joint Conf. on Artificial Intelligence, 2016.
[5] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[6] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. In Proc. Conf.
Advances in Neural Information Processing Systems, 2007.
[7] Li Shen, Gang Sun, Zhouchen Lin, Qingming Huang, and Enhua Wu. Adaptive sharing for image classification. In Proc. Int'l. Joint Conf. on Artificial Intelligence, 2015.
[8] Seyoung Kim, Kyung-Ah Sohn, and Eric P. Xing. A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics, 25(12):i204–i212, 2009.
[9] Kaiye Wang, Ran He, Liang Wang, Wei Wang, and Tieniu Tan. Joint feature selection and subspace learning for cross-modal retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence, 38(10):1–1, 2016.
[10] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
[11] Bingsheng He and Xiaoming Yuan. On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50(2):700–709, 2012.
[12] Zhouchen Lin, Risheng Liu, and Huan Li. Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning. Machine Learning, 99(2):287–325, 2015.
[13] Damek Davis and Wotao Yin. Convergence rate analysis of several splitting schemes. In Splitting Methods in Communication, Imaging, Science, and Engineering, pages 115–163. 2016.
[14] Caihua Chen, Raymond H. Chan, Shiqian Ma, and Junfeng Yang. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM Journal on Imaging Sciences, 8(4):2239–2267, 2015.
[15] Huan Li and Zhouchen Lin. Optimal nonergodic O(1/k) convergence rate: When linearized ADM meets Nesterov's extrapolation. arXiv preprint arXiv:1608.06366, 2016.
[16] Taiji Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In Proc. Int'l. Conf. on Machine Learning, 2013.
[17] Taiji Suzuki. Stochastic dual coordinate ascent with alternating direction method of multipliers. In Proc. Int'l. Conf. on Machine Learning, 2014.
[18] Léon Bottou. Stochastic learning. In Advanced Lectures on Machine Learning, pages 146–168. 2004.
[19] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proc. Conf. Advances in Neural Information Processing Systems, 2013.
[20] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proc. Conf. Advances in Neural Information Processing Systems, 2014.
[21] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, pages 1–30, 2013.
[22] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). In Doklady AN SSSR, volume 269, pages 543–547, 1983.
[23] Zeyuan Allen-Zhu. Katyusha: The first truly accelerated stochastic gradient method. In Annual Symposium on the Theory of Computing, 2017.
[24] Zhouchen Lin, Risheng Liu, and Zhixun Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In Proc. Conf. Advances in Neural Information Processing Systems, 2011.
[25] Xiaoqun Zhang, Martin Burger, and Stanley Osher. A unified primal-dual algorithm framework based on Bregman iteration. Journal of Scientific Computing, 46:20–46, 2011.
[26] Paul Tseng. On accelerated proximal gradient methods for convex-concave optimization. Technical report, 2008.
[27] Yurii Nesterov. On an approach to the construction of optimal methods of minimization of smooth convex functions. Ekonomika i Matematicheskie Metody, 24(3):509–517, 1988.
[28] Wangmeng Zuo and Zhouchen Lin. A generalized accelerated proximal gradient approach for total variation-based image restoration. IEEE Trans. on Image Processing, 20(10):2748, 2011.
[29] Zhouchen Lin, Minming Chen, and Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[30] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
Group Sparse Additive Machine

Hong Chen1, Xiaoqian Wang1, Cheng Deng2, Heng Huang1∗
1 Department of Electrical and Computer Engineering, University of Pittsburgh, USA
2 School of Electronic Engineering, Xidian University, China
chenh@mail.hzau.edu.cn, xqwang1991@gmail.com, chdeng@mail.xidian.edu.cn, heng.huang@pitt.edu

Abstract

A family of learning algorithms generated from additive models has recently attracted much attention for their flexibility and interpretability in high-dimensional data analysis. Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection. However, previous works mainly focus on the least squares regression problem rather than the classification task. It is therefore desirable to design a new additive classification model with variable selection capability for the many real-world applications that focus on high-dimensional data classification. To address this challenging problem, we investigate classification with group sparse additive models in reproducing kernel Hilbert spaces. A novel classification method, called the group sparse additive machine (GroupSAM), is proposed to explore and utilize the structure information among the input variables. A generalization error bound is derived and proved by integrating a sample error analysis with empirical covering numbers and a hypothesis error estimate via the stepping-stone technique. Our new bound shows that GroupSAM can achieve a satisfactory learning rate with polynomial decay. Experimental results on synthetic data and seven benchmark datasets consistently show the effectiveness of our new approach.

1 Introduction

Additive models based on statistical learning methods have been playing important roles in high-dimensional data analysis due to their good performance on prediction tasks and variable selection (deep learning models often do not work well when the number of training samples is not large).
In essence, additive models inherit the representation flexibility of nonlinear models and the interpretability of linear models. A learning approach under additive models has two key components: the hypothesis function space and the regularizer that imposes certain restrictions on the estimator. Different from traditional learning methods, the hypothesis space used in additive models relies on a decomposition of the input vector. Usually, each input vector $X \in \mathbb{R}^p$ is divided into $p$ parts directly [17, 30, 6, 28] or into subgroups according to prior structural information among the input variables [27, 26]. A component function is defined on each decomposed input, and the hypothesis function is constructed as the sum of all component functions. Typical examples of hypothesis spaces include kernel-based function spaces [16, 6, 11] and spline-based function spaces [13, 15, 10, 30]. Moreover, the Tikhonov regularization scheme has been used extensively for constructing additive models, where the regularizer controls the complexity of the hypothesis space. Examples of regularizers include the kernel-norm regularization associated with a reproducing kernel Hilbert space (RKHS) [5, 6, 11] and various sparse regularizations [17, 30, 26].

∗Corresponding author
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

More recently, several group sparse additive models have been proposed to tackle the high-dimensional regression problem, owing to their nice theoretical properties and empirical effectiveness [15, 10,
This paper focuses on filling in this gap on algorithmic design and learning theory for additive models. A novel sparse classification algorithm, called as group sparse additive machine (GroupSAM), is proposed under a coefficient-based regularized framework, which is connected to the linear programming support vector machine (LPSVM) [22, 24]. By incorporating the grouped variables with prior structural information and the ℓ2,1-norm based structured sparse regularizer, the new GroupSAM model can conduct the nonlinear classification and variable selection simultaneously. Similar to the sparse additive machine (SAM) in [30], our GroupSAM model can be efficiently solved via proximal gradient descent algorithm. The main contributions of this paper can summarized in two-fold: • A new group sparse nonlinear classification algorithm (GroupSAM) is proposed by extending the previous additive regression models to the classification setting, which contains the LPSVM with additive kernel as its special setting. To the best of our knowledge, this is the first algorithmic exploration of additive classification models with group sparsity. • Theoretical analysis and empirical evaluations on generalization ability are presented to support the effectiveness of GroupSAM. Based on constructive analysis on the hypothesis error, we get the estimate on the excess generalization error, which shows that our GroupSAM model can achieve the fast convergence rate O(n−1) under mild conditions. Experimental results demonstrate the competitive performance of GroupSAM over the related methods on both simulated and real data. Before ending this section, we discuss related works. In [5], support vector machine (SVM) with additive kernels was proposed and its classification consistency was established. Although this method can also be used for grouped variables, it only focuses on the kernel-norm regularizer without addressing the sparseness for variable selection. 
In [30], SAM was proposed to deal with sparse representation on an orthogonal basis of the hypothesis space. Despite its good computational and generalization performance, SAM does not explore the structure information of input variables and ignores the interactions among variables. More importantly, different from the finite spline approximation in [30], our approach enables us to estimate each component function directly in an RKHS. As illustrated in [20, 14], the RKHS-based method is flexible and depends only on a few tuning parameters, whereas the commonly used spline methods need to specify the number of basis functions and the sequence of knots.

It should be noted that the group sparse additive models (GroupSpAM in [26]) also address sparsity on grouped variables. However, there are key differences between GroupSAM and GroupSpAM: 1) Hypothesis space. The component functions in our model are obtained by searching in kernel-based data-dependent hypothesis spaces, while the method in [26] uses a data-independent hypothesis space (not associated with a kernel). As shown in [19, 18, 4, 25], data-dependent hypothesis spaces can provide much more adaptivity and flexibility for nonlinear prediction; the advantage of kernel-based hypothesis spaces for additive models is also discussed in [14]. 2) Loss function. The hinge loss used in our classification model is different from the least squares loss in [26]. 3) Optimization. Our GroupSAM only needs to construct one component function for each variable group, while the model in [26] needs to find component functions for each variable in a group; thus, our method is usually more efficient. Moreover, due to the kernel-based component functions and the non-smooth hinge loss, the optimization of GroupSpAM cannot be extended to our model directly. 4) Learning theory. We establish the generalization bound of GroupSAM by an error estimate technique with data-dependent hypothesis spaces, while such an error bound is not covered in [26].
We now present a brief summary in Table 1 to better illustrate the differences between our GroupSAM and other methods.

Table 1: Properties of different additive models.
                       SAM [30]          Group Lasso [27]   GroupSpAM [26]     GroupSAM
Hypothesis space       data-independent  data-independent   data-independent   data-dependent
Loss function          hinge loss        least squares      least squares      hinge loss
Group sparsity         No                Yes                Yes                Yes
Generalization bound   Yes               No                 No                 Yes

The rest of this paper is organized as follows. In the next section, we revisit the related classification formulations and propose the new GroupSAM model. Theoretical analysis of the generalization error bound is given in Section 3. In Section 4, experimental results on both simulated examples and real data are presented and discussed. Finally, Section 5 concludes this paper.

2 Group sparse additive machine

In this section, we first revisit the basic background of binary classification and additive models, and then introduce our new GroupSAM model.

Let $Z := X \times Y \subset \mathbb{R}^{p+1}$, where $X \subset \mathbb{R}^p$ is a compact input space and $Y = \{-1, 1\}$ is the set of labels. We assume that the training samples $\mathbf{z} := \{z_i\}_{i=1}^n = \{(x_i, y_i)\}_{i=1}^n$ are drawn independently from an unknown distribution $\rho$ on $Z$, where each $x_i \in X$ and $y_i \in \{-1, 1\}$. We denote the marginal distribution of $\rho$ on $X$ as $\rho_X$ and its conditional distribution for given $x \in X$ as $\rho(\cdot|x)$. For a real-valued function $f: X \to \mathbb{R}$, its induced classifier is $\mathrm{sgn}(f)$, where $\mathrm{sgn}(f)(x) = 1$ if $f(x) \ge 0$ and $\mathrm{sgn}(f)(x) = -1$ if $f(x) < 0$. The prediction performance of $f$ is measured by the misclassification error:
$$R(f) = \mathrm{Prob}\{Y f(X) \le 0\} = \int_X \mathrm{Prob}(Y \ne \mathrm{sgn}(f)(x)\,|\,x)\, d\rho_X. \quad (1)$$
It is well known that the minimizer of $R(f)$ is the Bayes rule:
$$f_c(x) = \mathrm{sgn}\Big(\int_Y y\, d\rho(y|x)\Big) = \mathrm{sgn}\big(\mathrm{Prob}(y=1|x) - \mathrm{Prob}(y=-1|x)\big).$$
Since the Bayes rule involves the unknown distribution $\rho$, it cannot be computed directly.
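As a small numerical illustration of the misclassification error in Eq. (1), $R(f)$ can be estimated on a finite sample by counting sign disagreements between $\mathrm{sgn}(f)$ and the labels (a toy sketch; the function name is ours):

```python
import numpy as np

def misclassification_rate(f_vals, y):
    # Empirical estimate of R(f) = Prob{ Y f(X) <= 0 }: count the points where
    # the induced classifier sgn(f) disagrees with the label. Following the
    # convention y * f(x) <= 0, boundary points f(x) = 0 with y = -1 count as
    # errors, matching sgn(f)(x) = 1 when f(x) = 0.
    return np.mean(y * f_vals <= 0)
```

This is exactly the 0-1 risk that the convex surrogates introduced next are designed to stand in for.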
In the machine learning literature, a classification algorithm usually aims to find a good approximation of $f_c$ by minimizing the empirical misclassification risk:
$$R_{\mathbf{z}}(f) = \frac{1}{n} \sum_{i=1}^n I(y_i f(x_i) \leq 0), \qquad (2)$$
where $I(A) = 1$ if $A$ is true and $0$ otherwise. However, the minimization problem associated with $R_{\mathbf{z}}(f)$ is NP-hard due to the $0$-$1$ loss $I$. To alleviate the computational difficulty, various convex losses have been introduced to replace the $0$-$1$ loss, e.g., the hinge loss, the least-square loss, and the exponential loss [29, 1, 7]. Among them, the hinge loss is the most popular error metric for classification problems due to its nice theoretical properties. In this paper, following [5, 30], we use the hinge loss
$$\ell(y, f(x)) = (1 - y f(x))_+ = \max\{1 - y f(x), 0\}$$
to measure the misclassification cost. The expected and empirical risks associated with the hinge loss are defined respectively as:
$$\mathcal{E}(f) = \int_Z (1 - y f(x))_+ \, d\rho(x, y), \quad \text{and} \quad \mathcal{E}_{\mathbf{z}}(f) = \frac{1}{n} \sum_{i=1}^n (1 - y_i f(x_i))_+.$$
In theory, the excess misclassification error $R(\mathrm{sgn}(f)) - R(f_c)$ can be bounded by the excess convex risk $\mathcal{E}(f) - \mathcal{E}(f_c)$ [29, 1, 7]. Therefore, the classification algorithm is usually constructed under the structural risk minimization principle [22] associated with $\mathcal{E}_{\mathbf{z}}(f)$.

In this paper, we propose a novel group sparse additive machine (GroupSAM) for nonlinear classification. Let $\{1, \cdots, p\}$ be partitioned into $d$ groups. For each $j \in \{1, ..., d\}$, we set $X^{(j)}$ as the grouped input space and denote $f^{(j)} : X^{(j)} \to \mathbb{R}$ as the corresponding component function. Usually, the groups can be obtained by prior knowledge [26] or be explored by considering the combinations of input variables [11]. Let each $K^{(j)} : X^{(j)} \times X^{(j)} \to \mathbb{R}$ be a Mercer kernel and let $\mathcal{H}_{K^{(j)}}$ be the corresponding RKHS with norm $\|\cdot\|_{K^{(j)}}$. It has been proved in [5] that
$$\mathcal{H} = \Big\{ \sum_{j=1}^d f^{(j)} : f^{(j)} \in \mathcal{H}_{K^{(j)}},\ 1 \leq j \leq d \Big\}$$
with norm
$$\|f\|_K^2 = \inf \Big\{ \sum_{j=1}^d \|f^{(j)}\|_{K^{(j)}}^2 : f = \sum_{j=1}^d f^{(j)} \Big\}$$
is an RKHS associated with the additive kernel $K = \sum_{j=1}^d K^{(j)}$.
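The inequality $\mathcal{E}_{\mathbf{z}}(f) \geq R_{\mathbf{z}}(f)$ behind the surrogate-loss argument above can be checked directly: the hinge loss $(1 - yf(x))_+$ dominates the $0$-$1$ loss $I(yf(x) \leq 0)$ pointwise. The scores and labels below are our own toy example.

```python
# Sketch (our illustration): the hinge loss upper-bounds the 0-1 loss
# pointwise, so the empirical hinge risk E_z(f) >= the empirical
# misclassification risk R_z(f) for any f and any sample.
def hinge(y, fx):
    return max(1.0 - y * fx, 0.0)

def zero_one(y, fx):
    return 1.0 if y * fx <= 0 else 0.0

scores = [0.8, -0.3, 1.5, -2.0, 0.1]   # hypothetical values f(x_i)
labels = [1, 1, -1, -1, 1]             # hypothetical labels y_i

n = len(labels)
R_z = sum(zero_one(y, fx) for y, fx in zip(labels, scores)) / n
E_z = sum(hinge(y, fx) for y, fx in zip(labels, scores)) / n
assert E_z >= R_z   # convex surrogate dominates the 0-1 risk
```

Minimizing the convex $\mathcal{E}_{\mathbf{z}}$ is tractable, while minimizing $R_{\mathbf{z}}$ directly is NP-hard, which is exactly why the surrogate is used.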
For any given training set $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^n$, the additive model in $\mathcal{H}$ can be formulated as:
$$\bar{f}_{\mathbf{z}} = \arg\min_{f = \sum_{j=1}^d f^{(j)} \in \mathcal{H}} \Big\{ \mathcal{E}_{\mathbf{z}}(f) + \eta \sum_{j=1}^d \tau_j \|f^{(j)}\|_{K^{(j)}}^2 \Big\}, \qquad (3)$$
where $\eta = \eta(n)$ is a positive regularization parameter and $\{\tau_j\}$ are positive bounded weights for the different variable groups. The solution $\bar{f}_{\mathbf{z}}$ in (3) has the following representation:
$$\bar{f}_{\mathbf{z}}(x) = \sum_{j=1}^d \bar{f}_{\mathbf{z}}^{(j)}(x^{(j)}) = \sum_{j=1}^d \sum_{i=1}^n \bar{\alpha}_{\mathbf{z},i}^{(j)} y_i K^{(j)}(x_i^{(j)}, x^{(j)}), \quad \bar{\alpha}_{\mathbf{z},i}^{(j)} \in \mathbb{R}, \ 1 \leq i \leq n, \ 1 \leq j \leq d.$$
Observe that $\bar{f}_{\mathbf{z}}^{(j)}(x) \equiv 0$ is equivalent to $\bar{\alpha}_{\mathbf{z},i}^{(j)} = 0$ for all $i$. Hence, we expect $\|\bar{\alpha}_{\mathbf{z}}^{(j)}\|_2 = 0$ for $\bar{\alpha}_{\mathbf{z}}^{(j)} = (\bar{\alpha}_{\mathbf{z},1}^{(j)}, \cdots, \bar{\alpha}_{\mathbf{z},n}^{(j)})^T \in \mathbb{R}^n$ if the $j$-th variable group is not truly informative. This motivates us to consider the sparsity-inducing penalty:
$$\Omega(f) = \inf \Big\{ \sum_{j=1}^d \tau_j \|\alpha^{(j)}\|_2 : f = \sum_{j=1}^d \sum_{i=1}^n \alpha_i^{(j)} y_i K^{(j)}(x_i^{(j)}, \cdot) \Big\}.$$
This group sparse penalty aims at variable selection [27] and was introduced into the additive regression model in [26]. Inspired by learning with data dependent hypothesis spaces [19], we introduce the following hypothesis space associated with the training samples $\mathbf{z}$:
$$\mathcal{H}_{\mathbf{z}} = \Big\{ f = \sum_{j=1}^d f^{(j)} : f^{(j)} \in \mathcal{H}_{\mathbf{z}}^{(j)} \Big\}, \qquad (4)$$
where
$$\mathcal{H}_{\mathbf{z}}^{(j)} = \Big\{ f^{(j)} = \sum_{i=1}^n \alpha_i^{(j)} K^{(j)}(x_i^{(j)}, \cdot) : \alpha_i^{(j)} \in \mathbb{R} \Big\}.$$
Under the group sparse penalty and data dependent hypothesis space, the group sparse additive machine (GroupSAM) can be written as:
$$f_{\mathbf{z}} = \arg\min_{f \in \mathcal{H}_{\mathbf{z}}} \Big\{ \frac{1}{n} \sum_{i=1}^n (1 - y_i f(x_i))_+ + \lambda \Omega(f) \Big\}, \qquad (5)$$
where $\lambda > 0$ is a regularization parameter. Let us denote $\alpha^{(j)} = (\alpha_1^{(j)}, \cdots, \alpha_n^{(j)})^T$ and $K_i^{(j)} = (K^{(j)}(x_1^{(j)}, x_i^{(j)}), \cdots, K^{(j)}(x_n^{(j)}, x_i^{(j)}))^T$. The GroupSAM in (5) can be rewritten as:
$$f_{\mathbf{z}} = \sum_{j=1}^d f_{\mathbf{z}}^{(j)} = \sum_{j=1}^d \sum_{t=1}^n \alpha_{\mathbf{z},t}^{(j)} K^{(j)}(x_t^{(j)}, \cdot),$$
with
$$\{\alpha_{\mathbf{z}}^{(j)}\} = \arg\min_{\alpha^{(j)} \in \mathbb{R}^n, 1 \leq j \leq d} \Big\{ \frac{1}{n} \sum_{i=1}^n \Big(1 - y_i \sum_{j=1}^d (K_i^{(j)})^T \alpha^{(j)}\Big)_+ + \lambda \sum_{j=1}^d \tau_j \|\alpha^{(j)}\|_2 \Big\}. \qquad (6)$$
The formulation (6) transforms the function-based learning problem (5) into a coefficient-based learning problem in a finite dimensional vector space.
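To make the coefficient-based form (6) concrete, here is a minimal sketch (our own illustration, with assumed Gaussian group kernels and tiny made-up data, not the authors' code) that evaluates the objective: the averaged hinge data-fit term plus the weighted group penalty $\lambda \sum_j \tau_j \|\alpha^{(j)}\|_2$.

```python
# Evaluate the GroupSAM objective in (6) for given coefficients
# (illustrative sketch; Gaussian kernels and data are assumptions).
import math

def gauss_kernel(u, v, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * sigma ** 2))

def groupsam_objective(X_groups, y, alphas, lam, taus):
    """X_groups[j]: list of n per-group inputs x_i^(j); alphas[j] in R^n."""
    n = len(y)
    # f(x_i) = sum_j sum_t alphas[j][t] * K_j(x_t^(j), x_i^(j))
    scores = [
        sum(sum(a_t * gauss_kernel(Xj[t], Xj[i]) for t, a_t in enumerate(alpha))
            for Xj, alpha in zip(X_groups, alphas))
        for i in range(n)
    ]
    hinge = sum(max(1 - y[i] * scores[i], 0.0) for i in range(n)) / n
    penalty = lam * sum(tau * math.sqrt(sum(a * a for a in alpha))
                        for tau, alpha in zip(taus, alphas))
    return hinge + penalty

# Tiny example: n = 2 samples, d = 2 groups of one feature each.
X_groups = [[[0.0], [1.0]], [[2.0], [0.5]]]
y = [1, -1]
obj = groupsam_objective(X_groups, y, alphas=[[0.1, -0.1], [0.0, 0.0]],
                         lam=0.5, taus=[1.0, 1.0])
assert obj > 0.0
```

Note how the second group's coefficient block is identically zero, so it contributes nothing to either term, which is exactly the group-sparse behavior the penalty is designed to produce.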
The solution of (5) is naturally spanned by the kernelized functions $\{K^{(j)}(\cdot, x_i^{(j)})\}$, rather than by B-spline basis functions [30]. When $d = 1$, our GroupSAM model degenerates to a special case which includes the LPSVM loss and the sparsity regularization term. Compared with LPSVM [22, 24] and SVM with additive kernels [5], our GroupSAM model imposes sparsity on the variable groups to improve the interpretability of the additive classification model. For given $\{\tau_j\}$, the optimization problem of GroupSAM can be solved efficiently via the accelerated proximal gradient descent algorithm developed in [30]. Due to space limitations, we do not restate the optimization algorithm here.

3 Generalization error bound

In this section, we derive an estimate on the excess misclassification error $R(\mathrm{sgn}(f_{\mathbf{z}})) - R(f_c)$. Before providing the main theoretical result, we introduce some necessary assumptions for the learning theory analysis.

Assumption A. The intrinsic distribution $\rho$ on $Z := X \times Y$ satisfies the Tsybakov noise condition with exponent $0 \leq q \leq \infty$. That is, for some $q \in [0, \infty)$ and $\Delta > 0$,
$$\rho_X\big\{x \in X : |\mathrm{Prob}(y = 1|x) - \mathrm{Prob}(y = -1|x)| \leq \Delta t\big\} \leq t^q, \quad \forall t > 0. \qquad (7)$$
The Tsybakov noise condition was proposed in [21] and has been used extensively in the theoretical analysis of classification algorithms [24, 7, 23, 20]. Indeed, (7) holds with exponent $q = 0$ for any distribution and with $q = \infty$ for well separated classes.

Now we introduce the empirical covering numbers [8] to measure the capacity of the hypothesis space.

Definition 1 Let $\mathcal{F}$ be a set of functions on $Z$ with $\mathbf{u} = \{u_i\}_{i=1}^k \subset Z$. Define the $\ell^2$-empirical metric as $\ell_{2,\mathbf{u}}(f, g) = \big(\frac{1}{k} \sum_{t=1}^k (f(u_t) - g(u_t))^2\big)^{\frac{1}{2}}$. The covering number of $\mathcal{F}$ with the $\ell^2$-empirical metric is defined as $\mathcal{N}_2(\mathcal{F}, \varepsilon) = \sup_{k \in \mathbb{N}} \sup_{\mathbf{u} \in Z^k} \mathcal{N}_{2,\mathbf{u}}(\mathcal{F}, \varepsilon)$, where
$$\mathcal{N}_{2,\mathbf{u}}(\mathcal{F}, \varepsilon) = \inf \Big\{ l \in \mathbb{N} : \exists \{f_i\}_{i=1}^l \subset \mathcal{F} \ \text{s.t.} \ \mathcal{F} = \bigcup_{i=1}^l \{f \in \mathcal{F} : \ell_{2,\mathbf{u}}(f, f_i) \leq \varepsilon\} \Big\}.$$
Let $B_r = \{f \in \mathcal{H}_K : \|f\|_K \leq r\}$ and $B_r^{(j)} = \{f^{(j)} \in \mathcal{H}_{K^{(j)}} : \|f^{(j)}\|_{K^{(j)}} \leq r\}$.

Assumption B.
Assume that $\kappa = \sum_{j=1}^d \sup_{x^{(j)}} \sqrt{K^{(j)}(x^{(j)}, x^{(j)})} < \infty$ and that for some $s \in (0, 2)$ and $c_s > 0$,
$$\log \mathcal{N}_2(B_1^{(j)}, \varepsilon) \leq c_s \varepsilon^{-s}, \quad \forall \varepsilon > 0, \ j \in \{1, ..., d\}.$$
It has been asserted in [6] that under Assumption B the following holds:
$$\log \mathcal{N}_2(B_1, \varepsilon) \leq c_s d^{1+s} \varepsilon^{-s}, \quad \forall \varepsilon > 0.$$
It is worth noting that the empirical covering number has been studied extensively in the learning theory literature [8, 20]. Detailed examples are provided in Theorem 2 of [19], Lemma 3 of [18], and Examples 1 and 2 of [9]. The capacity condition of the additive hypothesis space depends only on the dimension of the subspaces $X^{(j)}$. When $K^{(j)} \in C^\nu(X^{(j)} \times X^{(j)})$ for every $j \in \{1, \cdots, d\}$, the theoretical analysis in [19] assures that Assumption B holds true for:
$$s = \begin{cases} \frac{2d_0}{d_0 + 2\nu}, & \nu \in (0, 1]; \\ \frac{2d_0}{d_0 + \nu}, & \nu \in [1, 1 + d_0/2]; \\ \frac{d_0}{\nu}, & \nu \in (1 + d_0/2, \infty). \end{cases}$$
Here $d_0$ denotes the maximum dimension among the $\{X^{(j)}\}$.

With respect to (3), we introduce the data-free regularized function $f_\eta$ defined by:
$$f_\eta = \arg\min_{f = \sum_{j=1}^d f^{(j)} \in \mathcal{H}} \Big\{ \mathcal{E}(f) + \eta \sum_{j=1}^d \tau_j \|f^{(j)}\|_{K^{(j)}}^2 \Big\}. \qquad (8)$$
Inspired by the analysis in [6], we define:
$$\mathcal{D}(\eta) = \mathcal{E}(f_\eta) - \mathcal{E}(f_c) + \eta \sum_{j=1}^d \tau_j \|f_\eta^{(j)}\|_{K^{(j)}}^2 \qquad (9)$$
as the approximation error, which reflects the learning ability of the hypothesis space $\mathcal{H}$ under the Tikhonov regularization scheme. The following approximation condition has been studied and used extensively for classification problems, e.g., in [3, 7, 24, 23]. Please see Examples 3 and 4 in [3] for the explicit version for the Sobolev-kernel and Gaussian-kernel induced reproducing kernel Hilbert spaces.

Assumption C. There exist an exponent $\beta \in (0, 1)$ and a positive constant $c_\beta$ such that:
$$\mathcal{D}(\eta) \leq c_\beta \eta^\beta, \quad \forall \eta > 0.$$
Now we introduce our main theoretical result on the generalization bound as follows.

Theorem 1 Let $0 < \min_j \tau_j \leq \max_j \tau_j \leq c_0 < \infty$ and let Assumptions A-C hold true. Take $\lambda = n^{-\theta}$ in (5) for $0 < \theta \leq \min\big\{\frac{2-s}{2s}, \frac{3+5\beta}{2-2\beta}\big\}$.
For any $\delta \in (0, 1)$, there exists a constant $C$ independent of $n$ and $\delta$ such that
$$R(\mathrm{sgn}(f_{\mathbf{z}})) - R(f_c) \leq C \log(3/\delta)\, n^{-\vartheta}$$
with confidence $1 - \delta$, where
$$\vartheta = \min\Big\{ \frac{q+1}{q+2},\ \frac{\beta(2\theta+1)}{2\beta+2},\ \frac{(q+1)(2-s-2s\theta)}{4+2q+sq},\ \frac{3+5\beta+2\beta\theta-2\theta}{4+4\beta} \Big\}.$$
Theorem 1 demonstrates that GroupSAM in (5) can achieve a convergence rate with polynomial decay under mild conditions on the hypothesis function space. When $q \to \infty$, $\beta \to 1$, and each $K^{(j)} \in C^\infty$, the error decay rate of GroupSAM can be arbitrarily close to $O(n^{-\min\{1, \frac{1+2\theta}{4}\}})$. Hence, the fast convergence rate $O(n^{-1})$ can be obtained under proper selections of the parameters. To verify the optimality of this bound, we would need to provide a lower bound for the excess misclassification error. This is beyond the main focus of this paper and we leave it for future study. Additionally, the consistency of GroupSAM is guaranteed as the number of training samples increases.

Corollary 1 Under the conditions of Theorem 1, there holds $R(\mathrm{sgn}(f_{\mathbf{z}})) - R(f_c) \to 0$ as $n \to \infty$.

To better understand our theoretical result, we compare it with the related works below:

1) Compared with group sparse additive models. Although the asymptotic theory of group sparse additive models has been well studied in [15, 10, 26], all of these works only consider the regression task under the mean square error criterion and basis function expansion. Due to the kernel-based component functions and the non-smooth hinge loss, the previous analysis cannot be extended to GroupSAM directly.

2) Compared with classification with additive models. In [30], the convergence rate is presented for the sparse additive machine (SAM), where the input space $X$ is divided into $p$ subspaces directly without considering the interactions among variables. Different from the sparsity on variable groups in this paper, SAM is based on the sparse representation of an orthonormal basis, similar to [15]. In [5], the consistency of SVM with additive kernels is established, where a kernel-norm regularizer is used.
However, the sparsity on variables and the learning rate are not investigated in these previous articles.

3) Compared with the related analysis techniques. While the analysis technique used here is inspired by [24, 23], it is the first exploration of an additive classification model with group sparsity. In particular, the hypothesis error analysis extends the stepping-stone technique from the $\ell^1$-norm regularizer to the group sparse $\ell_{2,1}$-norm regularizer. Our analysis technique can also be applied to other additive models. For example, we can extend the shrunk additive regression model in [11] to the sparse classification setting and investigate its generalization bound by the current technique.

Proof sketches of Theorem 1

To get a tight error estimate, we introduce the clipping operator $\pi(f)(x) = \max\{-1, \min\{f(x), 1\}\}$, which has been widely used in the learning theory literature, e.g., [7, 20, 24, 23]. Since $R(\mathrm{sgn}(f_{\mathbf{z}})) - R(f_c)$ can be bounded by $\mathcal{E}(\pi(f_{\mathbf{z}})) - \mathcal{E}(f_c)$, we focus on bounding the excess convex risk. Using $f_\eta$ as the intermediate function, we obtain the following error decomposition.

Proposition 1 For $f_{\mathbf{z}}$ defined in (5), there holds
$$R(\mathrm{sgn}(f_{\mathbf{z}})) - R(f_c) \leq \mathcal{E}(\pi(f_{\mathbf{z}})) - \mathcal{E}(f_c) \leq E_1 + E_2 + E_3 + \mathcal{D}(\eta),$$
where $\mathcal{D}(\eta)$ is defined in (9),
$$E_1 = \big[\mathcal{E}(\pi(f_{\mathbf{z}})) - \mathcal{E}(f_c)\big] - \big[\mathcal{E}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) - \mathcal{E}_{\mathbf{z}}(f_c)\big],$$
$$E_2 = \big[\mathcal{E}_{\mathbf{z}}(f_\eta) - \mathcal{E}_{\mathbf{z}}(f_c)\big] - \big[\mathcal{E}(f_\eta) - \mathcal{E}(f_c)\big],$$
and
$$E_3 = \big[\mathcal{E}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) + \lambda \Omega(f_{\mathbf{z}})\big] - \Big[\mathcal{E}_{\mathbf{z}}(f_\eta) + \eta \sum_{j=1}^d \tau_j \|f_\eta^{(j)}\|_{K^{(j)}}^2\Big].$$
In the learning theory literature, $E_1 + E_2$ is called the sample error and $E_3$ the hypothesis error. Detailed proofs for these error terms are provided in the supplementary materials. The upper bound of the hypothesis error demonstrates that the divergence induced by the regularization and the hypothesis space tends to zero as $n \to \infty$ under properly selected parameters. To estimate the hypothesis error $E_3$, we choose $\bar{f}_{\mathbf{z}}$ as the stepping-stone function to bridge $\mathcal{E}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) + \lambda \Omega(f_{\mathbf{z}})$ and $\mathcal{E}_{\mathbf{z}}(f_\eta) + \eta \sum_{j=1}^d \tau_j \|f_\eta^{(j)}\|_{K^{(j)}}^2$.
The proof is inspired by the stepping-stone technique for support vector machine classification [24]. Notice that our analysis is associated with the $\ell_{2,1}$-norm regularizer, while the previous analysis focuses only on $\ell^1$-norm regularization. The error term $E_1$ reflects the divergence between the expected excess risk $\mathcal{E}(\pi(f_{\mathbf{z}})) - \mathcal{E}(f_c)$ and the empirical excess risk $\mathcal{E}_{\mathbf{z}}(\pi(f_{\mathbf{z}})) - \mathcal{E}_{\mathbf{z}}(f_c)$. Since $f_{\mathbf{z}}$ depends on the given sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^n$, we introduce the concentration inequality in [23] to bound $E_1$. We bound the error term $E_2$ by the one-sided Bernstein inequality [7].

4 Experiments

To evaluate the performance of our proposed GroupSAM model, we compare it with the following methods: SVM (linear SVM with $\ell^2$-norm regularization), L1SVM (linear SVM with $\ell^1$-norm regularization), GaussianSVM (nonlinear SVM using a Gaussian kernel), SAM (sparse additive machine) [30], and GroupSpAM (group sparse additive models) [26], which is adapted to the classification setting.

Table 2: Classification accuracy comparison on the synthetic data. The upper half shows the results with 24 feature groups, while the lower half corresponds to the results with 300 feature groups. The table shows the average classification accuracy and the standard deviation in 2-fold cross validation.

| | SVM | GaussianSVM | L1SVM | SAM | GroupSpAM | GroupSAM |
| σ = 0.8 | 0.943±0.011 | 0.935±0.028 | 0.925±0.035 | 0.895±0.021 | 0.880±0.021 | 0.953±0.018 |
| σ = 0.85 | 0.943±0.004 | 0.938±0.011 | 0.938±0.004 | 0.783±0.088 | 0.868±0.178 | 0.945±0.000 |
| σ = 0.9 | 0.935±0.014 | 0.925±0.007 | 0.938±0.011 | 0.853±0.117 | 0.883±0.011 | 0.945±0.007 |
| σ = 0.8 | 0.975±0.035 | 0.975±0.035 | 0.975±0.035 | 0.700±0.071 | 0.275±0.106 | 1.000±0.000 |
| σ = 0.85 | 0.975±0.035 | 0.975±0.035 | 0.975±0.035 | 0.600±0.141 | 0.953±0.004 | 1.000±0.000 |
| σ = 0.9 | 0.975±0.035 | 0.975±0.035 | 0.975±0.035 | 0.525±0.035 | 0.983±0.004 | 1.000±0.000 |

As the evaluation metric, we calculate the classification accuracy, i.e., the percentage of correctly labeled samples in the prediction.
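The accuracy metric and the mean ± standard deviation entries reported in the tables can be computed as follows (an illustrative helper with hypothetical fold predictions, not the authors' evaluation script):

```python
# Accuracy per fold, then mean and standard deviation over the 2 folds,
# matching the "mean ± std" format used in the result tables.
import statistics

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical predictions on two cross-validation folds.
fold1 = accuracy([1, -1, 1, 1], [1, -1, -1, 1])   # 3 of 4 correct
fold2 = accuracy([1, 1, -1, -1], [1, 1, -1, 1])   # 3 of 4 correct
mean = statistics.mean([fold1, fold2])
std = statistics.stdev([fold1, fold2])
assert mean == 0.75 and std == 0.0
```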
For comparison, we adopt 2-fold cross validation and report the average performance of each method. We implement SVM, L1SVM and GaussianSVM using the LIBSVM toolbox [2]. We determine the hyper-parameter of each model, i.e., the parameter $C$ of SVM, L1SVM and GaussianSVM, the parameter $\lambda$ of SAM, the parameter $\lambda$ of GroupSpAM, and the parameter $\lambda$ in Eq. (6) of GroupSAM, in the range $\{10^{-3}, 10^{-2}, \ldots, 10^{3}\}$. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. the classification accuracy of each method. In the accelerated proximal gradient descent algorithm for both SAM and GroupSAM, we set $\mu = 0.5$ and the maximum number of iterations to 2000.

4.1 Performance comparison on synthetic data

We first examine the classification performance on synthetic data as a sanity check. Our synthetic data is randomly generated as a mixture of Gaussian distributions. In each class, data points are sampled i.i.d. from a multivariate Gaussian distribution with covariance $\sigma I$, where $I$ is the identity matrix. This setting corresponds to independent covariates. We set the number of classes to 4, the number of samples to 400, and the number of dimensions to 24. We set $\sigma$ to 0.8, 0.85, and 0.9, respectively. Following the experimental setup in [31], we make three replicates of each feature in the data to form 24 feature groups (each group has three replicated features). We randomly pick 6 feature groups to generate the data so that we can evaluate the capability of GroupSAM in identifying the truly useful feature groups. To make the classification task more challenging, we add random noise drawn from the uniform distribution $U(0, \theta)$, where $\theta$ is 0.8 times the maximum value in the data. In addition, we test a high-dimensional case by generating 300 feature groups (i.e., a total of 900 features) with 40 samples in a similar manner.
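The synthetic-data protocol above can be sketched as follows. This is our own reading of the description (the exact generator is not given in the paper): Gaussian features with covariance $\sigma I$, each feature replicated three times to form groups, then uniform noise $U(0, 0.8 \cdot \max)$ added.

```python
# Sketch of the described synthetic-data protocol (assumptions: our own
# reconstruction; the paper does not publish its generator).
import random

def make_synthetic(n=400, p=24, sigma=0.9, seed=0):
    rng = random.Random(seed)
    # Covariance sigma * I  =>  per-coordinate std dev sqrt(sigma).
    base = [[rng.gauss(0.0, sigma ** 0.5) for _ in range(p)] for _ in range(n)]
    # Replicate every feature 3 times -> p groups of 3 identical features.
    data = [[v for v in row for _ in range(3)] for row in base]
    # Noise level: 0.8 times the maximum value in the data.
    theta = 0.8 * max(max(row) for row in data)
    return [[v + rng.uniform(0.0, theta) for v in row] for row in data]

X = make_synthetic()
assert len(X) == 400 and len(X[0]) == 72   # 24 groups x 3 replicated features
```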
We summarize the classification performance comparison on the synthetic data in Table 2. From the experimental results we notice that GroupSAM outperforms the other approaches under all settings. This comparison verifies the validity of our method. We can see that GroupSAM significantly improves over SAM, which shows that the incorporation of group information is indeed beneficial for classification. Moreover, we notice the superiority of GroupSAM over GroupSpAM, which illustrates that our GroupSAM model is more suitable for classification. We also present the comparison of feature groups in Table 3. For illustration purposes, we use the case with 24 feature groups as an example. Table 3 shows that the feature groups identified by GroupSAM are exactly the same as the ground-truth feature groups used for synthetic data generation. Such results further demonstrate the effectiveness of the GroupSAM method: GroupSAM is able to select the truly informative feature groups and thus improve the classification performance.

Table 3: Comparison between the true feature group IDs (used for data generation) and the feature group IDs selected by our GroupSAM method on the synthetic data. The order of the true feature group IDs does not represent the order of importance.

| | True Feature Group IDs | Selected Feature Group IDs via GroupSAM |
| σ = 0.8 | 2, 3, 4, 8, 10, 17 | 3, 10, 17, 8, 2, 4 |
| σ = 0.85 | 1, 5, 10, 12, 17, 21 | 5, 12, 17, 21, 1, 10 |
| σ = 0.9 | 2, 6, 7, 9, 12, 22 | 6, 22, 7, 9, 2, 12 |

4.2 Performance comparison on benchmark data

In this subsection, we use 7 benchmark datasets from the UCI repository [12] to compare the classification performance of the different methods: Ecoli, Indians Diabetes, Breast Cancer, Stock, Balance Scale, Contraceptive Method Choice (CMC) and Fertility. Similar to the setup for the synthetic data, we construct feature groups by replicating each feature 3 times. In each feature group, we add random noise drawn from the uniform distribution $U(0, \theta)$, where $\theta$ is 0.3 times the maximum value in each dataset. We display the comparison results in Table 4.

Table 4: Classification accuracy comparison on the benchmark data. The table shows the average classification accuracy and the standard deviation in 2-fold cross validation.

| | SVM | GaussianSVM | L1SVM | SAM | GroupSpAM | GroupSAM |
| Ecoli | 0.815±0.054 | 0.818±0.049 | 0.711±0.051 | 0.816±0.039 | 0.771±0.009 | 0.839±0.028 |
| Indians Diabetes | 0.651±0.000 | 0.652±0.002 | 0.638±0.018 | 0.652±0.000 | 0.643±0.004 | 0.660±0.013 |
| Breast Cancer | 0.968±0.017 | 0.965±0.017 | 0.833±0.008 | 0.833±0.224 | 0.958±0.027 | 0.966±0.014 |
| Stock | 0.913±0.001 | 0.911±0.002 | 0.873±0.001 | 0.617±0.005 | 0.875±0.005 | 0.917±0.005 |
| Balance Scale | 0.864±0.003 | 0.869±0.004 | 0.870±0.003 | 0.763±0.194 | 0.848±0.003 | 0.893±0.003 |
| CMC | 0.420±0.011 | 0.445±0.015 | 0.437±0.014 | 0.427±0.000 | 0.433±0.003 | 0.456±0.003 |
| Fertility | 0.880±0.000 | 0.880±0.000 | 0.750±0.184 | 0.860±0.028 | 0.780±0.000 | 0.880±0.000 |

We find that GroupSAM performs equally well or better than the compared methods on all benchmark datasets. Compared with SVM and L1SVM, our method uses an additive model to incorporate nonlinearity and is thus better suited to finding complex decision boundaries. Moreover, the comparison with GaussianSVM and SAM illustrates that by involving the group information in classification, GroupSAM makes better use of the structure information among features, so that the classification ability can be enhanced. Compared with GroupSpAM, our GroupSAM model is proposed in data dependent hypothesis spaces and employs the hinge loss in the objective, and is thus more suitable for classification.

5 Conclusion

In this paper, we proposed a novel group sparse additive machine (GroupSAM) by incorporating group sparsity into the additive classification model in a reproducing kernel Hilbert space.
By developing an error analysis technique with data dependent hypothesis spaces, we obtain the generalization error bound of the proposed GroupSAM, which demonstrates that our model can achieve a satisfactory learning rate under mild conditions. Experimental results on both synthetic and real-world benchmark datasets validate the effectiveness of the algorithm and support our learning theory analysis. In future work, it would be interesting to investigate the learning performance of robust group sparse additive machines with loss functions induced by quantile regression [6, 14].

Acknowledgments

This work was partially supported by U.S. NSF-IIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NSF-IIS 1619308, NSF-IIS 1633753, NIH AG049371. Hong Chen was partially supported by National Natural Science Foundation of China (NSFC) 11671161. We are grateful to the anonymous NIPS reviewers for the insightful comments.

References

[1] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. J. Amer. Statist. Assoc., 101(473):138-156, 2006.
[2] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1-27, 2011.
[3] D. R. Chen, Q. Wu, Y. Ying, and D. X. Zhou. Support vector machine soft margin classifiers: error analysis. J. Mach. Learn. Res., 5:1143-1175, 2004.
[4] H. Chen, Z. Pan, L. Li, and Y. Tang. Learning rates of coefficient-based regularized classifier for density level detection. Neural Comput., 25(4):1107-1121, 2013.
[5] A. Christmann and R. Hable. Consistency of support vector machines using additive kernels for additive models. Comput. Stat. Data Anal., 56:854-873, 2012.
[6] A. Christmann and D. X. Zhou. Learning rates for the risk of kernel based quantile regression estimators in additive models. Anal. Appl., 14(3):449-477, 2016.
[7] F. Cucker and D. X. Zhou. Learning Theory: An Approximation Theory Viewpoint. Cambridge Univ. Press, Cambridge, U.K., 2007.
[8] D.
Edmunds and H. Triebel. Function Spaces, Entropy Numbers, Differential Operators. Cambridge Univ. Press, Cambridge, U.K., 1996.
[9] Z. Guo and D. X. Zhou. Concentration estimates for learning with unbounded sampling. Adv. Comput. Math., 38(1):207-223, 2013.
[10] J. Huang, J. Horowitz, and F. Wei. Variable selection in nonparametric additive models. Ann. Statist., 38(4):2282-2313, 2010.
[11] K. Kandasamy and Y. Yu. Additive approximation in high dimensional nonparametric regression via the SALSA. In ICML, 2016.
[12] M. Lichman. UCI machine learning repository, 2013.
[13] Y. Lin and H. H. Zhang. Component selection and smoothing in smoothing spline analysis of variance models. Ann. Statist., 34(5):2272-2297, 2006.
[14] S. Lv, H. Lin, H. Lian, and J. Huang. Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space. Ann. Statist., preprint, 2017.
[15] L. Meier, S. van de Geer, and P. Buehlmann. High-dimensional additive modeling. Ann. Statist., 37(6B):3779-3821, 2009.
[16] G. Raskutti, M. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. J. Mach. Learn. Res., 13:389-427, 2012.
[17] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. J. Royal. Statist. Soc B., 71:1009-1030, 2009.
[18] L. Shi. Learning theory estimates for coefficient-based regularized regression. Appl. Comput. Harmon. Anal., 34(2):252-265, 2013.
[19] L. Shi, Y. Feng, and D. X. Zhou. Concentration estimates for learning with ℓ1-regularizer and data dependent hypothesis spaces. Appl. Comput. Harmon. Anal., 31(2):286-302, 2011.
[20] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[21] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135-166, 2004.
[22] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, 1998.
[23] Q. Wu, Y. Ying, and D. X. Zhou. Multi-kernel regularized classifiers. J.
Complexity, 23:108-134, 2007.
[24] Q. Wu and D. X. Zhou. SVM soft margin classifiers: linear programming versus quadratic programming. Neural Comput., 17:1160-1187, 2005.
[25] L. Yang, S. Lv, and J. Wang. Model-free variable selection in reproducing kernel Hilbert space. J. Mach. Learn. Res., 17:1-24, 2016.
[26] J. Yin, X. Chen, and E. Xing. Group sparse additive models. In ICML, 2012.
[27] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal. Statist. Soc B., 68(1):49-67, 2006.
[28] M. Yuan and D. X. Zhou. Minimax optimal rates of estimation in high dimensional additive models. Ann. Statist., 44(6):2564-2593, 2016.
[29] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Statist., 32:56-85, 2004.
[30] T. Zhao and H. Liu. Sparse additive machine. In AISTATS, 2012.
[31] L. W. Zhong and J. T. Kwok. Efficient sparse modeling with automatic feature grouping. In ICML, 2011.
Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter

Yi Xu†, Qihang Lin‡, Tianbao Yang†
†Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA
‡Department of Management Sciences, The University of Iowa, Iowa City, IA 52242, USA
{yi-xu, qihang-lin, tianbao-yang}@uiowa.edu

Abstract

The error bound, an inherent property of an optimization problem, has recently been revived in the development of algorithms with improved global convergence without strong convexity. The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems. The quadratic error bound has been leveraged to achieve linear convergence in many first-order methods, including the stochastic variance reduced gradient (SVRG) method, one of the most important stochastic optimization methods in machine learning. However, the studies along this direction face the critical issue that the algorithms must depend on an unknown growth parameter (a generalization of the strong convexity modulus) in the error bound. This parameter is difficult to estimate exactly, and algorithms that choose this parameter heuristically do not have a theoretical convergence guarantee. To address this issue, we propose novel SVRG methods that automatically search for this unknown parameter on the fly of optimization while still obtaining almost the same convergence rate as when this parameter is known. We also analyze the convergence property of SVRG methods under the Hölderian error bound, which generalizes the quadratic error bound.

1 Introduction

Finite-sum optimization problems have broad applications in machine learning, including regression by minimizing (regularized) empirical square losses and classification by minimizing (regularized) empirical logistic losses.
In this paper, we consider the following finite-sum problem:
$$\min_{x \in \Omega} F(x) \triangleq \frac{1}{n} \sum_{i=1}^n f_i(x) + \Psi(x), \qquad (1)$$
where each $f_i(x)$ is a continuously differentiable convex function whose gradient is Lipschitz continuous and $\Psi(x)$ is a proper, lower-semicontinuous convex function [24]. Traditional proximal gradient (PG) methods or accelerated proximal gradient (APG) methods for solving (1) become prohibitive when the number of components $n$ is very large, which has spurred many studies on developing stochastic optimization algorithms with fast convergence [4, 8, 25, 1]. An important milestone among several others is the stochastic variance reduced gradient (SVRG) method [8] and its proximal variant [26]. Under strong convexity of the objective function $F(x)$, linear convergence of SVRG and its proximal variant has been established. Many variations of SVRG have also been proposed [2, 1]. However, the key assumption of strong convexity limits the power of SVRG for many interesting problems in machine learning without strong convexity. For example, in regression with high-dimensional data one is usually interested in solving least-squares regression with an $\ell_1$-norm regularization or constraint (known as the LASSO-type problem). A common practice for solving non-strongly convex finite-sum problems by SVRG is to add a small strongly convex regularizer (e.g., $\frac{\lambda}{2}\|x\|_2^2$) [26]. Recently, a variant of SVRG (named SVRG++ [2]) was designed that can cope with non-strongly convex problems without adding the strongly convex term. However, these approaches only have sublinear convergence (e.g., requiring an $O(1/\epsilon)$ iteration complexity to achieve an $\epsilon$-optimal solution). Promisingly, recent studies in optimization showed that leveraging the quadratic error bound (QEB) condition can open a new door to linear convergence without strong convexity [9, 20, 6, 30, 5, 3].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
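As a concrete instance of problem (1) (our own example, not the paper's), take the LASSO-type objective with $f_i(x) = \frac{1}{2}(a_i^T x - b_i)^2$ and $\Psi(x) = \mu \|x\|_1$; the soft-thresholding map below is the standard proximal operator of $\Psi$ used in proximal-gradient-style updates.

```python
# A LASSO-type instance of (1): finite-sum least squares plus an l1 term,
# with the soft-thresholding prox of Psi (illustrative sketch; A, b, mu
# are made-up values).
def F(x, A, b, mu):
    n = len(b)
    data = sum(0.5 * (sum(ai * xi for ai, xi in zip(row, x)) - bi) ** 2
               for row, bi in zip(A, b)) / n
    return data + mu * sum(abs(xi) for xi in x)

def prox_l1(x, t):
    """prox_{t * ||.||_1}(x): componentwise soft-thresholding."""
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]

A = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 0.0]
assert abs(F([0.0, 0.0], A, b, mu=0.1) - 0.25) < 1e-12
assert prox_l1([0.3, -1.5], 0.5) == [0.0, -1.0]
```

This $\Psi$ is proper, lower-semicontinuous and convex but not strongly convex, which is exactly the setting where the error-bound analysis of the paper becomes relevant.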
The problem (1) obeys the QEB condition if the following holds:
$$\|x - x_*\|_2 \leq c \big(F(x) - F(x_*)\big)^{1/2}, \quad \forall x \in \Omega, \qquad (2)$$
where $x_*$ denotes the optimal solution closest to $x$ and $\Omega$ is usually a compact set. Indeed, the aforementioned LASSO-type problems satisfy the QEB condition. It is worth mentioning that the above condition (or similar conditions) has been explored extensively and has different names in the literature, e.g., the second-order growth condition, weak strong convexity [20], essential strong convexity [13], restricted strong convexity [31], optimal strong convexity [13], and semi-strong convexity [6]. Interestingly, [6, 9] have shown that SVRG can enjoy linear convergence under the QEB condition. However, the issue is that SVRG requires knowing the parameter $c$ (analogous to the strong convexity parameter) in the QEB for setting the number of iterations of the inner loops, which is usually unknown and difficult to estimate. A naive trick of setting the number of iterations of the inner loops to a certain multiplicative factor (e.g., 2) of the number of components $n$ is usually sub-optimal and worrisome, because it may not be large enough for badly conditioned problems or it could be too large for well conditioned problems. In the former case, the algorithm may not converge as the theory indicates, and in the latter case, too many iterations may be wasted on inner loops. To address this issue, we develop a new variant of SVRG that embeds an efficient automatic search step for $c$ into the optimization. The challenge in developing such an adaptive variant of SVRG is that one needs to develop an appropriate machinery to check whether the current value of $c$ is large enough. One might be reminded of the restarting procedures for searching for the unknown strong convexity parameter in APG methods [21, 11]. However, there are several differences that make the development of such a search scheme much more daunting for SVRG than for APG.
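The relation between the QEB parameter $c$ and the strong convexity modulus can be checked numerically in the simplest case (our own sanity check, not from the paper): for $F(x) = \frac{\mu}{2}(x - x_*)^2$ with minimizer $x_* = 1$ and $F_* = 0$, condition (2) holds with equality for $c = \sqrt{2/\mu}$.

```python
# QEB sanity check on a strongly convex 1-D quadratic: (2) holds with
# c = sqrt(2/mu), with equality up to rounding (illustrative only).
import math

mu = 0.5
c = math.sqrt(2.0 / mu)
F = lambda x: 0.5 * mu * (x - 1.0) ** 2   # minimizer x* = 1, F* = 0

for x in [-3.0, 0.0, 0.9, 2.5]:
    lhs = abs(x - 1.0)                    # ||x - x*||
    rhs = c * math.sqrt(F(x))             # c * (F(x) - F*)^{1/2}
    assert lhs <= rhs + 1e-12
```

This shows why $c$ mimics an inverse strong-convexity parameter: smaller $\mu$ (flatter objective) forces a larger $c$.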
The first difference is that, although SVRG has a lower per-iteration cost than APG, it also makes smaller progress towards optimality after each iteration, which provides less information on the correctness of the current $c$. The second difference lies in that SVRG is inherently stochastic, making the analysis for bounding the number of search steps much more difficult. To tackle this challenge, we propose to perform proximal gradient updates occasionally at the reference points in SVRG, where the full gradient is naturally computed. The norm of the proximal gradient provides a probabilistic "certificate" for checking whether the value of $c$ is large enough. We then provide a novel analysis to bound the expected number of search steps, taking into account that the probabilistic "certificate" might fail with some probability. The final result shows that the new variant of SVRG enjoys linear convergence under the QEB condition with unknown $c$, and the corresponding complexity is only worse by a logarithmic factor than that in the setting where the parameter $c$ is assumed to be known. Besides the QEB condition, we also consider more general error bound conditions (aka Hölderian error bound (HEB) conditions [3]), whose definition is given below, and develop adaptive variants of SVRG under the HEB condition with $\theta \in (0, 1/2)$ that enjoy intermediate faster convergence rates than SVRG under only the smoothness assumption (e.g., SVRG++ [2]). It turns out that the adaptive variants of SVRG under HEB with $\theta < 1/2$ are simpler than that under the QEB.

Definition 1 (Hölderian error bound (HEB)). Problem (1) is said to satisfy a Hölderian error bound condition on a compact set $\Omega$ if there exist $\theta \in (0, 1/2]$ and $c > 0$ such that for any $x \in \Omega$
$$\|x - x_*\|_2 \leq c \big(F(x) - F_*\big)^{\theta}, \qquad (3)$$
where $x_*$ denotes the optimal solution closest to $x$.

It is notable that the above inequality can always hold with $\theta = 0$ on a compact set $\Omega$.
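A simple example of (3) with $\theta < 1/2$ (ours, not one of the paper's examples): $F(x) = x^4$ on $\Omega = [-1, 1]$ has $x_* = 0$, $F_* = 0$, and $|x - 0| = (x^4)^{1/4}$, so the HEB condition holds with $\theta = 1/4$ and $c = 1$.

```python
# Numeric check of the HEB condition (3) for F(x) = x^4 on [-1, 1]:
# theta = 1/4, c = 1, with equality up to rounding (illustrative only).
theta, c = 0.25, 1.0
for x in [-1.0, -0.5, 0.1, 0.7]:
    F = x ** 4                      # F* = 0 at x* = 0
    assert abs(x) <= c * F ** theta + 1e-12
```

Flatter minima (higher-degree growth) correspond to smaller $\theta$, which is why rates under HEB interpolate between the smooth case $\theta = 0$ and the QEB case $\theta = 1/2$.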
Therefore the discussion in the paper regarding the HEB condition also applies to the case $\theta = 0$. In addition, if a HEB condition with $\theta \in (1/2, 1]$ holds, we can always reduce it to the QEB condition provided that $F(x) - F_*$ is bounded over $\Omega$. However, we are not aware of any interesting examples of (1) for such cases. We defer several examples satisfying the HEB conditions with explicit $\theta \in (0, 1/2]$ in machine learning to Section 5. We refer the reader to [29, 28, 27, 14] for more examples.

2 Related work

The use of error bound conditions in optimization for deriving fast convergence dates back to [15, 16, 17], where the (local) error bound condition bounds the distance of a point in a local neighborhood of the optimal solution to the optimal set by a multiple of the norm of the proximal gradient at the point. Based on their local error bound condition, they derived local linear convergence for descent methods (e.g., proximal gradient methods). Several recent works have established the same local error bound conditions for several interesting problems in machine learning [7, 32, 33]. Hölderian error bound (HEB) conditions have been studied extensively in variational analysis [10] and recently revived in optimization for developing fast convergence of optimization algorithms. Many studies have leveraged the QEB condition in place of the strong convexity assumption to develop fast convergence (e.g., linear convergence) of many optimization algorithms (e.g., the gradient method [3], the proximal gradient method [5], the accelerated gradient method [20], coordinate descent methods [30], randomized coordinate descent methods [9, 18], subgradient methods [29, 27], primal-dual style methods [28], etc.). This work is closely related to several recent studies that have shown that SVRG methods can also enjoy linear convergence for finite-sum (composite) smooth optimization problems under the QEB condition [6, 9, 12].
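To make the SVRG recursion concrete, here is a minimal sketch (our own illustration, with $\Psi = 0$, $\Omega = \mathbb{R}^d$, and made-up data, not code from any of the cited works) on a tiny least-squares problem with $f_i(x) = \frac{1}{2}(a_i^T x - b_i)^2$: each inner step uses the variance-reduced gradient $\nabla f_i(x) - \nabla f_i(\bar{x}) + \nabla f(\bar{x})$ computed around a reference point $\bar{x}$.

```python
# Minimal SVRG sketch (Psi = 0) on a consistent least-squares system,
# checking that the objective decreases from the starting point.
import random

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
b = [1.0, 2.0, 3.0, -1.0]          # exactly fit by x* = (1, 2)
n, d = len(A), 2

def grad_i(x, i):
    r = sum(A[i][k] * x[k] for k in range(d)) - b[i]
    return [r * A[i][k] for k in range(d)]

def full_grad(x):
    gs = [grad_i(x, i) for i in range(n)]
    return [sum(g[k] for g in gs) / n for k in range(d)]

def objective(x):
    return sum(0.5 * (sum(A[i][k] * x[k] for k in range(d)) - b[i]) ** 2
               for i in range(n)) / n

rng = random.Random(0)
x, eta = [0.0, 0.0], 0.1
for _ in range(30):                 # outer loops: recenter at xbar
    xbar, gbar = list(x), full_grad(x)
    for _ in range(2 * n):          # inner loop of variance-reduced steps
        i = rng.randrange(n)
        gi, gib = grad_i(x, i), grad_i(xbar, i)
        x = [x[k] - eta * (gi[k] - gib[k] + gbar[k]) for k in range(d)]

assert objective(x) < objective([0.0, 0.0])
```

Because the full gradient is recomputed at each reference point, the stochastic gradient's variance vanishes as $x \to x_*$, which is the mechanism behind the linear-convergence results discussed above.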
However, these approaches all require knowing the growth parameter in the QEB condition, which is unknown in many practical problems. It is worth mentioning that several recent studies have also noticed a similar issue in SVRG-type methods, namely that the strong convexity constant is unknown, and suggested some practical heuristics for either stopping the inner iterations early or restarting the algorithm [2, 22, 19]. Nonetheless, no theoretical convergence guarantee is provided for the suggested heuristics. Our work is also related to studies that focus on searching for the unknown strong convexity parameter in accelerated proximal gradient (APG) methods [21, 11], but with striking differences as mentioned before. Recently, Liu & Yang [14] considered the HEB for composite smooth optimization problems and developed an adaptive restarting accelerated gradient method without knowing the constant c in the HEB. As we argued before, their analysis cannot be trivially extended to SVRG.

3 SVRG under the HEB condition in the oracle setting

In this section, we present SVRG methods under the HEB condition in the oracle setting, assuming that the parameter c is given. We first introduce some notation. Denote by Li the smoothness constant of fi, i.e., for all x, y ∈ Ω,

fi(x) − fi(y) ≤ ⟨∇fi(y), x − y⟩ + (Li/2)∥x − y∥₂².

It implies that f(x) ≜ (1/n) Σ_{i=1}^n fi(x) is also a continuously differentiable convex function whose gradient is Lf-Lipschitz continuous, where Lf ≤ (1/n) Σ_{i=1}^n Li. For simplicity, we can take Lf = (1/n) Σ_{i=1}^n Li. In the sequel, we let L ≜ maxᵢ Li and assume that it is given or can be estimated for the problem. Denote by Ω∗ the optimal set of problem (1), and let F∗ = min_{x∈Ω} F(x). The detailed steps of SVRG under the HEB condition are presented in Algorithm 1. The formal guarantee of SVRGHEB is given in the following theorem.

Theorem 2. Suppose problem (1) satisfies the HEB condition with θ ∈ (0, 1/2] and F(x0) − F∗ ≤ ϵ0, where x0 is an initial solution.
Let η = 1/(36L) and T1 ≥ 81Lc²(1/ϵ0)^{1−2θ}. Algorithm 1 ensures

E[F(x̄(R)) − F∗] ≤ (1/2)^R ϵ0. (4)

In particular, by running Algorithm 1 with R = ⌈log₂(ϵ0/ϵ)⌉, we have E[F(x̄(R)) − F∗] ≤ ϵ, and the computational complexity for achieving an ϵ-optimal solution in expectation is O(n log(ϵ0/ϵ) + Lc² max{1/ϵ^{1−2θ}, log(ϵ0/ϵ)}).

Remark: We make several remarks about Algorithm 1 and the results in Theorem 2. First, the constant factors in η and T1 should not be taken literally, because we have made no effort to optimize them. Second, when θ = 1/2 (i.e., the QEB condition holds), Algorithm 1 reduces to the standard SVRG method under strong convexity, and the iteration complexity becomes O((n + Lc²) log(ϵ0/ϵ)), which is the same as that of standard SVRG with Lc² playing the role of the condition number of the problem. Third, when θ = 0 (i.e., with only the smoothness assumption), Algorithm 1 reduces to SVRG++ [2] with one difference: in SVRGHEB the initial point and the reference point for each outer loop are the same, whereas they differ in SVRG++; the iteration complexity of SVRGHEB becomes O(n log(ϵ0/ϵ) + Lc²/ϵ), which is similar to that of SVRG++. Fourth, for intermediate θ ∈ (0, 1/2), we can obtain faster convergence than SVRG++. Lastly, note that the number of iterations of each outer loop depends on the c parameter in the HEB condition. The proof of Theorem 2 builds directly on previous analyses of SVRG and is deferred to the supplement.

Algorithm 1 SVRG method under HEB (SVRGHEB(x0, T1, R, θ))
1: Input: x0 ∈ Ω, the number of initial inner iterations T1, and the number of outer loops R.
2: x̄(0) = x0
3: for r = 1, 2, . . . , R do
4:   ḡ_r = ∇f(x̄(r−1)), x(r)_0 = x̄(r−1)
5:   for t = 1, 2, . . . , T_r do
6:     Choose i_t ∈ {1, . . . , n} uniformly at random.
7:     g(r)_t = ∇f_{i_t}(x(r)_{t−1}) − ∇f_{i_t}(x̄(r−1)) + ḡ_r.
8:     x(r)_t = arg min_{x∈Ω} ⟨g(r)_t, x − x(r)_{t−1}⟩ + (1/(2η))∥x − x(r)_{t−1}∥₂² + Ψ(x).
9:   end for
10:  x̄(r) = (1/T_r) Σ_{t=1}^{T_r} x(r)_t
11:  T_{r+1} = 2^{1−2θ} T_r
12: end for
13: Output: x̄(R)

Algorithm 2 SVRG method under HEB with Restarting: SVRGHEB-RS
1: Input: x(0) ∈ Ω, a small value c0 > 0, and θ ∈ (0, 1/2).
2: Initialization: T(1)_1 = 81Lc0²(1/ϵ0)^{1−2θ}
3: for s = 1, 2, . . . , S do
4:   x(s) = SVRGHEB(x(s−1), T(s)_1, R, θ)
5:   T(s+1)_1 = 2^{1−2θ} T(s)_1
6: end for

4 Adaptive SVRG under the HEB condition in the dark setting

In this section, we present adaptive variants of SVRGHEB that can be run in the dark setting, i.e., without assuming that c is known. We first present the variant for θ < 1/2, which is simple and helps illustrate the difficulty of the case θ = 1/2.

4.1 Adaptive SVRG for θ ∈ (0, 1/2)

An issue with SVRGHEB is that when c is unknown, the initial number of iterations T1 in Algorithm 1 is difficult to estimate. A value of T1 that is too small may not guarantee that SVRGHEB converges as Theorem 2 indicates. To address this issue, we can use a restarting trick, i.e., restarting SVRGHEB with an increasing sequence of values of T1. The steps are shown in Algorithm 2. We can start with a small value of c0, which is not necessarily larger than c. If c0 is larger than c, the first call of SVRGHEB will yield an ϵ-optimal solution as Theorem 2 indicates. Below, we assume that c0 ≤ c.

Theorem 3. Suppose problem (1) satisfies the HEB with θ ∈ (0, 1/2) and F(x0) − F∗ ≤ ϵ0, where x0 is an initial solution. Let c0 ≤ c, ϵ ≤ ϵ0/2, R = ⌈log₂(ϵ0/ϵ)⌉ and T(1)_1 = 81Lc0²(1/ϵ0)^{1−2θ}. Then, with at most S = ⌈(1/(1/2 − θ)) log₂(c/c0)⌉ + 1 calls of SVRGHEB in total, Algorithm 2 finds a solution x(S) such that E[F(x(S)) − F∗] ≤ ϵ. The computational complexity of SVRGHEB-RS for obtaining such an ϵ-optimal solution is O(n log(ϵ0/ϵ) log(c/c0) + Lc²/ϵ^{1−2θ}).

Remark: The proof is in the supplement. We can see that Algorithm 2 cannot be applied to θ = 1/2, which gives a constant sequence of T(s)_1 and therefore cannot provide any convergence guarantee for a small value c0 < c.
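As a concrete illustration of Algorithm 1, the following is a minimal Python sketch of SVRGHEB for an unconstrained least-squares objective (so Ψ = 0 and the proximal step in line 8 reduces to a plain gradient step); the problem data, step size, and epoch lengths are illustrative choices, not the paper's tuned constants:

```python
import numpy as np

# Minimal sketch of Algorithm 1 (SVRGHEB) for unconstrained least squares
# F(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2, with Psi = 0.  T_r grows by
# the factor 2^(1 - 2*theta) between outer loops, matching line 11.

def svrg_heb(A, b, x0, eta, T1, R, theta):
    n = A.shape[0]
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]        # per-example gradient
    full_grad = lambda x: A.T @ (A @ x - b) / n
    x_bar, T = x0.copy(), T1
    rng = np.random.default_rng(0)
    for _ in range(R):
        g_bar = full_grad(x_bar)                          # line 4: reference gradient
        x = x_bar.copy()
        iterates = []
        for _ in range(T):
            i = rng.integers(n)                           # line 6
            g = grad_i(x, i) - grad_i(x_bar, i) + g_bar   # line 7: VR gradient
            x = x - eta * g                               # line 8 with Psi = 0
            iterates.append(x)
        x_bar = np.mean(iterates, axis=0)                 # line 10: average
        T = int(np.ceil(2 ** (1 - 2 * theta) * T))        # line 11
    return x_bar

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5)); b = rng.standard_normal(50)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x_hat = svrg_heb(A, b, np.zeros(5), eta=0.01, T1=100, R=8, theta=0.5)
obj = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
print(obj(np.zeros(5)), obj(x_hat))    # objective decreases towards the optimum
```

With θ = 1/2 the inner-loop length stays constant, as in standard SVRG; the restarting wrapper of Algorithm 2 would simply call this routine repeatedly with a growing T1.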
We have to develop a different variant to tackle θ = 1/2. A minor point worth mentioning is that, if necessary, we can stop Algorithm 2 appropriately by performing a proximal gradient update at x(s) (whose full gradient will be computed for the next stage anyway) and checking whether the squared Euclidean norm of the proximal gradient is less than a predefined level (cf. (7)).

Algorithm 3 SVRG method under QEB with Restarting and Search: SVRGQEB-RS
1: Input: x̃(0) ∈ Ω, an initial value c0 > 0, ϵ > 0, ρ = 1/log(1/ϵ) and ϑ ∈ (0, 1).
2: x̄(0) = arg min_{x∈Ω} ⟨∇f(x̃(0)), x − x̃(0)⟩ + (L/2)∥x − x̃(0)∥₂² + Ψ(x), s = 0
3: while ∥x̄(s) − x̃(s)∥₂² > ϵ do
4:   Set R_s and T_s = ⌈81Lc_s²⌉ as in Lemma 2
5:   x̃(s+1) = SVRGHEB(x̄(s), T_s, R_s, 0.5)
6:   x̄(s+1) = arg min_{x∈Ω} ⟨∇f(x̃(s+1)), x − x̃(s+1)⟩ + (L/2)∥x − x̃(s+1)∥₂² + Ψ(x)
7:   c_{s+1} = c_s
8:   if ∥x̄(s+1) − x̃(s+1)∥₂ ≥ ϑ∥x̄(s) − x̃(s)∥₂ then
9:     c_{s+1} = √2 c_s, x̄(s+1) = x̄(s), x̃(s+1) = x̃(s)
10:  end if
11:  s = s + 1
12: end while
13: Output: x̄(s)

4.2 Adaptive SVRG for θ = 1/2

In light of the value of T1 in Theorem 2 for θ = 1/2, i.e., T1 = ⌈81Lc²⌉, one might consider starting with a small value of c and then increasing it by a constant factor at certain points in order to increase the value of T1. The challenge, however, is to decide when we should increase the value of c. If one follows a procedure similar to that in Algorithm 2, we may end up with a worse iteration complexity. To tackle this challenge, we need to develop an appropriate machinery to check whether the value of c is already large enough for SVRG to decrease the objective value. However, we cannot afford the cost of computing the objective value due to the large n. To this end, we develop a “certificate” that can be verified easily and can act as a signal for a sufficient decrease in the objective value. The developed certificate is motivated by a property of the proximal gradient update under the QEB, shown in (5).

Lemma 1. Let x̄ = arg min_{x∈Ω} ⟨∇f(x̃), x − x̃⟩ + (L/2)∥x − x̃∥₂² + Ψ(x).
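The certificate of Lemma 1 can be sketched in a few lines. Below is a hedged illustration with Ψ(x) = λ∥x∥₁, so the proximal-gradient update is soft-thresholding of a gradient step; the quadratic test function and λ are hypothetical choices used only to show that ∥x̄ − x̃∥₂ vanishes at a minimizer and stays bounded away from zero elsewhere:

```python
import numpy as np

# Sketch of the proximal-gradient "certificate" used in Algorithm 3 (lines 2
# and 6): at a point x_tilde where the full gradient is available, compute
# x_bar = argmin_x <grad, x - x_tilde> + (L/2)||x - x_tilde||^2 + Psi(x)
# and monitor ||x_bar - x_tilde||_2.  With Psi(x) = lam*||x||_1 the argmin
# is soft-thresholding of a gradient step.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_certificate(grad, x_tilde, L, lam):
    x_bar = soft_threshold(x_tilde - grad / L, lam / L)
    return x_bar, np.linalg.norm(x_bar - x_tilde)

# Illustrative problem F(x) = 0.5*||x - u||^2 + lam*||x||_1, whose minimizer
# has the closed form x* = soft_threshold(u, lam); grad f(x) = x - u.
u, lam, L = np.array([3.0, -0.2, 0.0]), 0.5, 1.0
x_star = soft_threshold(u, lam)
_, cert_at_opt = prox_grad_certificate(x_star - u, x_star, L, lam)
_, cert_far = prox_grad_certificate(np.zeros(3) - u, np.zeros(3), L, lam)
print(cert_at_opt, cert_far)   # 0 at the optimum, > 0 away from it
```

Algorithm 3 evaluates this quantity only at SVRG's reference points, where the full gradient has already been paid for, which is what keeps the search for c cheap.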
Then, under the QEB condition of problem (1), we have

F(x̄) − F∗ ≤ (L + Lf)² c² ∥x̄ − x̃∥₂². (5)

The above lemma indicates that we can perform a proximal gradient update at a point x̃ and use ∥x̄ − x̃∥₂ as a gauge for monitoring the decrease in the objective value. However, the proximal gradient update is in general too expensive to compute due to the computation of the full gradient ∇f(x̃). Luckily, SVRG computes the full gradient at a small number of reference points anyway. We propose to leverage these full gradients to conduct the proximal gradient updates and to develop the certificate for searching for the value of c. The detailed steps of the proposed algorithm are presented in Algorithm 3, to which we refer as SVRGQEB-RS. Similar to SVRGHEB-RS, SVRGQEB-RS also calls SVRGHEB over multiple stages. We conduct the proximal gradient update at the solution returned by each call of SVRGHEB, which also serves as the initial solution and the initial reference point for the next stage of SVRGHEB when our check in Step 7 fails. At each stage, at most R_s + 1 full gradients are computed, where R_s is a logarithmic number as revealed later. Steps 7–11 of Algorithm 3 constitute our search step for the value of c. We will show that, if c_s is larger than c, the condition in Step 7 is true with small probability. This can be seen from the following lemma.

Lemma 2. Suppose problem (1) satisfies the QEB condition. Let G0 ⊆ G1 ⊆ . . . ⊆ Gs ⊆ . . . be a filtration with the sigma algebra Gs generated by all random events before line 4 of stage s of Algorithm 3. Let η = 1/(36L), T_s = ⌈81Lc_s²⌉, and R_s = ⌈log₂(2c_s²(L + Lf)²/(ϑ²ρL))⌉. Then for any ϑ ∈ (0, 1), we have

Pr(∥x̄(s+1) − x̃(s+1)∥₂ ≥ ϑ∥x̄(s) − x̃(s)∥₂ | Gs, c_s ≥ c) ≤ ρ.

Proof. By Lemma 1, we have F(x̄(s)) − F∗ ≤ (L + Lf)² c² ∥x̄(s) − x̃(s)∥₂² for all s. Below, we consider stages such that c_s ≥ c. Following Theorem 2 and the above inequality, when T_s = ⌈81Lc_s²⌉ ≥ ⌈81Lc²⌉, we have

E[F(x̃(s+1)) − F∗ | Gs] ≤ 0.5^{R_s}(F(x̄(s)) − F∗) ≤ 0.5^{R_s} (L + Lf)² c² ∥x̄(s) − x̃(s)∥₂²
(6)

Moreover, the smoothness of f(x) and the definition of x̄(s+1) imply (see Lemma 4 in the supplement)

F(x̃(s+1)) − F∗ ≥ (L/2)∥x̄(s+1) − x̃(s+1)∥₂². (7)

By combining (7) and (6) and using Markov's inequality, we have

Pr((L/2)∥x̄(s+1) − x̃(s+1)∥₂² ≥ ϵ | Gs) ≤ 0.5^{R_s} (L + Lf)² c² ∥x̄(s) − x̃(s)∥₂² / ϵ.

If we choose ϵ = ϑ²L∥x̄(s) − x̃(s)∥₂²/2 in the inequality above and let R_s be defined as in the assumption, the conclusion follows.

Theorem 4. Under the same conditions as in Lemma 2 with ρ = 1/log(1/ϵ), the expected computational complexity of SVRGQEB-RS for finding an ϵ-optimal solution is at most

O( (Lc² + n) log₂( (c²(L + Lf)²/(ϑ²L)) log(1/ϵ) ) [ log_{1/ϑ²}( ∥x̄(0) − x̃(0)∥₂²/ϵ ) + log₂(c/c0) ] ).

Proof. We call stage s, s = 0, 1, . . ., a successful stage if ∥x̄(s+1) − x̃(s+1)∥₂ < ϑ∥x̄(s) − x̃(s)∥₂; otherwise, stage s is called an unsuccessful stage. The condition ∥x̄(s) − x̃(s)∥₂² ≤ ϵ will hold after S1 := log_{1/ϑ²}( ∥x̄(0) − x̃(0)∥₂²/ϵ ) successful stages, and then Algorithm 3 will stop. Let S denote the total number of stages when the algorithm stops. Although stage s = S − 1 is the last stage, for convenience in the proof we still define stage s = S as a post-termination stage in which no computation is performed. In stage s with 0 ≤ s ≤ S − 1, the computational complexity is proportional to the number of stochastic gradient computations (#SGC), which is T_s R_s + n(R_s + 1) ≤ (T_s + 2n)R_s. If stage s is successful, then R_{s+1} = R_s and T_{s+1} = T_s. If stage s is unsuccessful, then R_{s+1} = R_s + 1 ≤ 2R_s and T_{s+1} = 2T_s, so that R_{s+1}T_{s+1} ≤ 4R_sT_s. In either case, R_s and T_s are non-decreasing. Note that, after S2 := ⌈2 log₂(c/c0)⌉ unsuccessful stages, we will have c_s ≥ c. We consider two scenarios: (I) the algorithm stops with c_S < c, and (II) the algorithm stops with c_S ≥ c. In the first scenario, we have S1 successful stages and at most S2 unsuccessful stages, so that S ≤ S1 + S2 and c_S < c.
The #SGC of all stages can be bounded by

(S1 + S2)(T_{S−1} + 2n)R_{S−1} ≤ O( [log₂(c/c0) + log_{1/ϑ²}(∥x̄(0) − x̃(0)∥₂²/ϵ)] log₂(2c²(L + Lf)²/(ϑ²ρL)) (Lc² + n) ).

Then, we consider the second scenario. Let ŝ be the first stage with c_s ≥ c, i.e., ŝ := min{s | c_s ≥ c}. It is easy to see that c_ŝ < √2 c and that there are S2 unsuccessful and fewer than S1 successful stages before stage ŝ. Since the #SGC in any stage before ŝ is bounded by (T_ŝ + 2n)R_ŝ ≤ O( (Lc² + n) log₂(8c²(L + Lf)²/(ϑ²ρL)) ), the total #SGC in stages 0, 1, . . . , ŝ − 1 is at most

(S1 + S2)(T_ŝ + 2n)R_ŝ ≤ O( [log₂(c/c0) + log_{1/ϑ²}(∥x̄(0) − x̃(0)∥₂²/ϵ)] log₂(2c²(L + Lf)²/(ϑ²ρL)) (Lc² + n) ).

Next, we bound the total #SGC in stages ŝ, ŝ + 1, . . . , S. In the rest of the proof, we consider stages s with ŝ ≤ s ≤ S. We define C(x̃, x̄, i, j, s) as the expected #SGC in stages s, s + 1, . . . , S, conditioning on the initial state of stage s being x̃(s) = x̃ and x̄(s) = x̄ and the numbers of successful and unsuccessful stages before stage s being i and j, respectively. Note that s = i + j. Because stage s depends on the historical path only through the state variables (x̃, x̄, i, j, s), C(x̃, x̄, i, j, s) is well defined, and (x̃, x̄, i, j, s) evolves as a Markov chain whose next state is (x̃, x̄, i, j + 1, s + 1) if stage s does not succeed and (x̃₊, x̄₊, i + 1, j, s + 1) if stage s succeeds, where x̃₊ = SVRGHEB(x̄, T_s, R_s, 0.5) and x̄₊ = arg min_{x∈Ω} ⟨∇f(x̃₊), x − x̃₊⟩ + (L/2)∥x − x̃₊∥₂² + Ψ(x). Next, we use backward induction to derive an upper bound for C(x̃, x̄, i, j, s) that depends only on i and j but not on s, x̃ and x̄. In particular, we want to show that

C(x̃, x̄, i, j, s) ≤ (4^{j−S2}(T_ŝ + 2n)R_ŝ / (1 − 4ρ)) A_i, for i ≥ 0, j ≥ 0, i + j = s, s ≥ ŝ, (8)

where A_i := Σ_{r=0}^{S1−i−1} ((1 − ρ)/(1 − 4ρ))^r if 0 ≤ i ≤ S1 − 1 and A_i := 0 if i = S1. We start with the base case i = S1. By definition, the only stage with i = S1 is the post-termination stage, namely stage s = S. In this case, C(x̃, x̄, i, j, s) = 0 since stage S performs no computation.
Then, (8) holds trivially with A_i = 0. Suppose i < S1 and (8) holds for i + 1, i + 2, . . . , S1. We want to prove that it also holds for i. We define X = X(x̃, x̄, i, j, s) as the random variable equal to the number of unsuccessful stages from stage s (including stage s) to the first successful stage among stages s, s + 1, s + 2, . . . , S − 1, conditioning on s ≥ ŝ and the state variables at the beginning of stage s being (x̃, x̄, i, j, s). Note that X = 0 means stage s is successful. For simplicity of notation, we use Pr(·) to represent the conditional probability Pr(· | s ≥ ŝ, (x̃, x̄, i, j, s)). Since c_s ≥ c_ŝ ≥ c for s ≥ ŝ, we can show by Lemma 2 that

Pr(X = r) = [Π_{t=0}^{r−1} Pr(X ≥ t + 1 | X ≥ t)] Pr(X = r | X ≥ r),
Pr(X ≥ r + 1 | X ≥ r) = Pr(stage s + r fails | stages s, s + 1, . . . , s + r − 1 fail) ≤ ρ,
Pr(X = r | X ≥ r) = Pr(stage s + r succeeds | stages s, s + 1, . . . , s + r − 1 fail) = 1 − Pr(X ≥ r + 1 | X ≥ r) ≥ 1 − ρ. (9)

When X = r, the #SGC from stage s to the end of the algorithm will be Σ_{t=0}^{r}(T_{s+t} + 2n)R_{s+t} + E[C(x̃₊, x̄₊, i + 1, j + r, s + r + 1)], where E denotes the expectation over x̃₊ and x̄₊ conditioning on (x̃, x̄), with x̃₊ = SVRGHEB(x̄, T_{s+r}, R_{s+r}, 0.5) and x̄₊ = arg min_{x∈Ω} ⟨∇f(x̃₊), x − x̃₊⟩ + (L/2)∥x − x̃₊∥₂² + Ψ(x). Since stages s, s + 1, . . . , s + r − 1 are unsuccessful, we have (T_{s+t} + 2n)R_{s+t} ≤ 4^t(T_s + 2n)R_s ≤ 4^{j+t−S2}(T_ŝ + 2n)R_ŝ for t = 0, 1, . . . , r − 1. Because (8) holds for i + 1 and for any x̃₊ and x̄₊, we have

C(x̃₊, x̄₊, i + 1, j + r, s + r + 1) ≤ (4^{j+r−S2}(T_ŝ + 2n)R_ŝ / (1 − 4ρ)) A_{i+1}. (10)

Based on the above inequality and the connection between C(x̃, x̄, i, j, s) and C(x̃₊, x̄₊, i + 1, j + r, s + r + 1), we now prove that (8) holds for i, j, s:

C(x̃, x̄, i, j, s) = Σ_{r=0}^{∞} Pr(X = r) ( Σ_{t=0}^{r}(T_{s+t} + 2n)R_{s+t} + E[C(x̃₊, x̄₊, i + 1, j + r, s + r + 1)] )
≤ Σ_{r=0}^{∞} Pr(X = r) ( Σ_{t=0}^{r}(T_{s+t} + 2n)R_{s+t} + (4^{j+r−S2}(T_ŝ + 2n)R_ŝ / (1 − 4ρ)) · ( ((1 − ρ)/(1 − 4ρ))^{S1−i−1} − 1 ) / ( (1 − ρ)/(1 − 4ρ) − 1 ) )
≤ Σ_{r=0}^{∞} Pr(X = r) ( Σ_{t=0}^{r} 4^{j+t−S2}(T_ŝ + 2n)R_ŝ + (4^{j+r−S2}(T_ŝ + 2n)R_ŝ / (1 − 4ρ)) A_{i+1} )
≤ 4^{j−S2}(T_ŝ + 2n)R_ŝ Σ_{r=0}^{∞} Pr(X = r) ( Σ_{t=0}^{r} 4^t + 4^r A_{i+1}/(1 − 4ρ) )
= 4^{j−S2}(T_ŝ + 2n)R_ŝ Σ_{r=0}^{∞} [Π_{t=0}^{r−1} Pr(X ≥ t + 1 | X ≥ t)] Pr(X = r | X ≥ r) ( (4^{r+1} − 1)/3 + 4^r A_{i+1}/(1 − 4ρ) ).

(We follow the convention that Π_{t=i}^{j}(·) = 1 if j < i.) Since 1 − ρ ≥ 1/4, for any a ≥ 0 and any b ≥ a + 1, we have

(4^{a+1} − 1)/3 + 4^a A_{i+1}/(1 − 4ρ)
≤ (1 − ρ)( (4^{a+2} − 1)/3 + 4^{a+1} A_{i+1}/(1 − 4ρ) )
≤ Pr(X = a + 1 | X ≥ a + 1)( (4^{a+2} − 1)/3 + 4^{a+1} A_{i+1}/(1 − 4ρ) )
≤ Σ_{r=a+1}^{b} [Π_{t=a+1}^{r−1} Pr(X ≥ t + 1 | X ≥ t)] Pr(X = r | X ≥ r)( (4^{r+1} − 1)/3 + 4^r A_{i+1}/(1 − 4ρ) ) := D_a^b,

which implies

D_{a−1}^b := Σ_{r=a}^{b} [Π_{t=a}^{r−1} Pr(X ≥ t + 1 | X ≥ t)] Pr(X = r | X ≥ r)( (4^{r+1} − 1)/3 + 4^r A_{i+1}/(1 − 4ρ) )
= Pr(X = a | X ≥ a)( (4^{a+1} − 1)/3 + 4^a A_{i+1}/(1 − 4ρ) ) + Pr(X ≥ a + 1 | X ≥ a) D_a^b
≤ (1 − ρ)( (4^{a+1} − 1)/3 + 4^a A_{i+1}/(1 − 4ρ) ) + ρ D_a^b.

Applying this inequality for a = 0, 1, . . . , b − 1 and the fact that D_{b−1}^b ≤ (4^{b+1} − 1)/3 + 4^b A_{i+1}/(1 − 4ρ) gives

D_{−1}^b ≤ (1 − ρ) Σ_{r=0}^{b−1} ρ^r ( (4^{r+1} − 1)/3 + 4^r A_{i+1}/(1 − 4ρ) ) + ρ^b ( (4^{b+1} − 1)/3 + 4^b A_{i+1}/(1 − 4ρ) ).

Since 4ρ < 1, letting b in the inequality above increase to infinity gives

C(x̃, x̄, i, j, s) ≤ 4^{j−S2}(T_ŝ + 2n)R_ŝ (1 − ρ) Σ_{r=0}^{∞} ρ^r ( (4^{r+1} − 1)/3 + 4^r A_{i+1}/(1 − 4ρ) )
= 4^{j−S2}(T_ŝ + 2n)R_ŝ ( 1/(1 − 4ρ) + A_{i+1}(1 − ρ)/(1 − 4ρ)² )
= (4^{j−S2}(T_ŝ + 2n)R_ŝ / (1 − 4ρ)) A_i,

which is (8). Then, by induction, (8) holds for any state (x̃, x̄, i, j, s) with s ≥ ŝ. At the moment when the algorithm enters stage ŝ, we must have j = S2 and i = ŝ − S2. By (8) and the facts that ŝ ≥ S2 and that A_i = Σ_{r=0}^{S1−i−1} ((1 − ρ)/(1 − 4ρ))^r ≤ (S1 + S2 − ŝ)((1 − ρ)/(1 − 4ρ))^{S1+S2−ŝ}, the expected #SGC from stage ŝ to the end of the algorithm is

C(x̃, x̄, ŝ − S2, S2, ŝ) ≤ ((T_ŝ + 2n)R_ŝ / (1 − 4ρ)) (S1 + S2 − ŝ) ((1 − ρ)/(1 − 4ρ))^{S1+S2−ŝ} ≤ O( (Lc² + n) log₂(8c²(L + Lf)²/(ϑ²ρL)) S1 ((1 − ρ)/(1 − 4ρ))^{S1} ).

In light of the value of ρ, i.e., ρ = 1/log(1/ϵ), we have ((1 − ρ)/(1 − 4ρ))^{S1} = (∥x̄(0) − x̃(0)∥₂²/ϵ)^{log((1−ρ)/(1−4ρ))/log(1/ϑ²)} ≤ (∥x̄(0) − x̃(0)∥₂²/ϵ)^{3ρ/((1−4ρ) log(1/ϑ²))} = O((1/ϵ)^{3ρ}) ≤ O(1). Therefore, by adding the #SGC before and after stage ŝ in the second scenario, the expected total #SGC is O( [log(c/c0) + log(∥x̄(0) − x̃(0)∥₂²/ϵ)] log(c²(L + Lf)²/(ρL)) (Lc² + n) ).
5 Applications and Experiments

In this section, we consider some applications in machine learning and present experimental results. We consider finite-sum problems in machine learning where fi(x) = ℓ(x⊤ai, bi) denotes a loss function on an observed training feature–label pair (ai, bi), and Ψ(x) denotes a regularizer on the model x. Let us first consider some examples of loss functions and regularizers that satisfy the QEB condition. More examples can be found in [29, 28, 27, 14].

Piecewise convex quadratic (PCQ) problems. According to the global error bound for piecewise convex polynomials by Li [10], PCQ problems satisfy the QEB condition. Examples of such problems include empirical square loss, squared hinge loss or Huber loss minimization with an ℓ1 norm, ℓ∞ norm or ℓ1,∞ norm regularizer or constraint.

A family of structured smooth composite functions. This family includes functions of the form F(x) = h(Ax) + Ψ(x), where Ψ(x) is a polyhedral function or an indicator function of a polyhedral set and h(·) is a smooth function that is strongly convex on any compact set. According to the studies in [6, 20], the QEB holds on any compact set or on the involved polyhedral set. Interesting examples of loss functions include the aforementioned square loss as well as the logistic loss.

For examples satisfying the HEB condition with intermediate values of θ ∈ (0, 1/2), we can consider ℓ1 constrained ℓp norm regression, where the objective is f(x) = (1/n) Σ_{i=1}^n (x⊤ai − bi)^p with p ∈ 2ℕ₊ [23]. According to the reasoning in [14], the HEB condition holds with θ = 1/p. Before presenting the experimental results, we would like to remark that many regularized machine learning formulations include no constraint restricting x to a compact domain Ω.
Nevertheless, we can explicitly add a constraint Ψ(x) ≤ B to the problem to ensure that the intermediate solutions generated by the proposed algorithms always stay in a compact set, where B can be set to a large value without affecting the optimal solutions. The proximal mapping of Ψ(x) with such an explicit constraint can be handled efficiently by combining the proximal mapping with a binary search for the Lagrangian multiplier.

[Figure 1: Comparison of different algorithms for solving different problems on different datasets. Each panel plots objective minus optimum (log scale) against #grad/n for one task: squared hinge + ℓ1 norm (Adult), logistic + ℓ1 norm (Adult), square loss + ℓ1 norm (million songs), Huber loss + ℓ1 norm (million songs), and ℓp regression (p = 4, E2006); the compared methods are SVRGHEB and SVRGQEB-RS with T1 ∈ {1000, 2000, 8000, 2n = 65122}, and the baselines SAGA, SVRG++, and SVRG-heuristics.]

In practice, as long as B is sufficiently large, the constraint remains inactive and the computational cost remains the same.
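As a sketch of the binary-search idea just described (assuming Ψ(x) = λ∥x∥₁ with the added ball constraint ∥x∥₁ ≤ B; the data below is illustrative), the KKT conditions reduce the constrained proximal mapping to soft-thresholding at level λ + μ, with the multiplier μ ≥ 0 found by bisection:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal mapping of Psi(x) = lam*||x||_1 with the extra constraint
# ||x||_1 <= B: the minimizer of 0.5*||x - v||^2 + lam*||x||_1 subject to
# ||x||_1 <= B is x = soft_threshold(v, lam + mu), where mu >= 0 is the
# Lagrange multiplier (mu = 0 if the constraint is already inactive).
# ||soft_threshold(v, t)||_1 decreases continuously in t, so bisection works.

def prox_l1_with_ball(v, lam, B, iters=60):
    x = soft_threshold(v, lam)
    if np.abs(x).sum() <= B:           # constraint inactive: mu = 0
        return x
    lo, hi = 0.0, np.abs(v).max()      # at mu = hi the prox output is 0
    for _ in range(iters):             # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.abs(soft_threshold(v, lam + mu)).sum() > B:
            lo = mu
        else:
            hi = mu
    return soft_threshold(v, lam + hi)

v = np.array([4.0, -3.0, 1.0, 0.2])
x = prox_l1_with_ball(v, lam=0.1, B=2.0)
print(x, np.abs(x).sum())   # output lands on the boundary ||x||_1 ≈ 2
```

Each bisection step costs O(d), so the constrained proximal mapping remains cheap relative to a stochastic gradient computation.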
Next, we conduct experiments to demonstrate the effectiveness of the proposed algorithms on several tasks: ℓ1 regularized squared hinge loss minimization and ℓ1 regularized logistic loss minimization for linear classification problems; and ℓ1 constrained ℓp norm regression, ℓ1 regularized square loss minimization and ℓ1 regularized Huber loss minimization for linear regression problems. We use three datasets from the libsvm website: Adult (n = 32561, d = 123), E2006-tfidf (n = 16087, d = 150360), and YearPredictionMSD (n = 51630, d = 90). Note that we use the testing set of the YearPredictionMSD data for our experiment because some baselines need a long time to converge on the large training set. We set the regularization parameter of the ℓ1 norm and the upper bound of the ℓ1 constraint to 10⁻⁴ and 100, respectively. In each plot, the difference between the objective value and the optimum is presented on a log scale. Our first experiment justifies the proposed SVRGQEB-RS algorithm by comparing it with SVRGHEB under different estimates of c (corresponding to different initial values of T1). We try four values T1 ∈ {1000, 2000, 8000, 2n}. The result is plotted in the top left of Figure 1. We can see that SVRGHEB with underestimated values of T1 (e.g., 1000, 2000) converges very slowly. In contrast, the performance of SVRGQEB-RS is not affected much by the initial value of T1, which is consistent with our theory showing the logarithmic dependence on the initial value of c. Moreover, SVRGQEB-RS with different values of T1 always performs better than its SVRGHEB counterparts. We then compare SVRGQEB-RS and SVRGHEB-RS to other baselines for solving different problems on different datasets. We choose SAGA and SVRG++ as the baselines. We also note that a heuristic variant of SVRG++ was suggested in [2], where the epoch length is automatically determined based on the change in the variance of the gradient estimators between two consecutive epochs.
However, in our experiments we found that this heuristic strategy cannot always terminate an epoch because its suggested criterion cannot always be met. This was also confirmed by our communication with the authors of SVRG++. To make it work, we manually add an upper bound on each epoch length equal to 2n, following the suggestion in [8]. The resulting baseline is denoted by SVRG-heuristics. For all algorithms, the step size is best tuned. The initial epoch length of SVRG++ is set to n/4 following the suggestion in [2], and the same initial epoch length is used in our algorithms. The comparisons with these baselines are reported in the remaining panels of Figure 1. We can see that SVRGQEB-RS (resp. SVRGHEB-RS) always has superior performance, while SVRG-heuristics sometimes performs well and sometimes badly.

Acknowledgements We thank the anonymous reviewers for their helpful comments. Y. Xu and T. Yang are partially supported by the National Science Foundation (IIS-1463988, IIS-1545995).

References

[1] Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM Symposium on Theory of Computing, STOC ’17, 2017.
[2] Z. Allen-Zhu and Y. Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of The 33rd International Conference on Machine Learning, pages 1080–1089, 2016.
[3] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. CoRR, abs/1510.08234, 2015.
[4] A. Defazio, F. R. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS), pages 1646–1654, 2014.
[5] D. Drusvyatskiy and A. S. Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. arXiv:1602.06661, 2016.
[6] P. Gong and J. Ye.
Linear convergence of variance-reduced projected stochastic gradient without strong convexity. CoRR, abs/1406.1102, 2014.
[7] K. Hou, Z. Zhou, A. M. So, and Z. Luo. On the linear convergence of the proximal gradient method for trace norm regularization. In Advances in Neural Information Processing Systems (NIPS), pages 710–718, 2013.
[8] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[9] H. Karimi, J. Nutini, and M. W. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Machine Learning and Knowledge Discovery in Databases - European Conference (ECML-PKDD), pages 795–811, 2016.
[10] G. Li. Global error bounds for piecewise convex polynomials. Math. Program., 137(1-2):37–64, 2013.
[11] Q. Lin and L. Xiao. An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In Proceedings of the International Conference on Machine Learning (ICML), pages 73–81, 2014.
[12] J. Liu and M. Takác. Projected semi-stochastic gradient descent method with mini-batch scheme under weak strong convexity assumption. CoRR, abs/1612.05356, 2016.
[13] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351–376, 2015.
[14] M. Liu and T. Yang. Adaptive accelerated gradient converging methods under Hölderian error bound condition. CoRR, abs/1611.07609, 2017.
[15] Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7–35, 1992.
[16] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[17] Z.-Q. Luo and P. Tseng.
Error bounds and convergence analysis of feasible descent methods: a general approach. Annals of Operations Research, 46:157–178, 1993.
[18] C. Ma, R. Tappenden, and M. Takác. Linear convergence of the randomized feasible descent method under the weak strong convexity assumption. CoRR, abs/1506.02530, 2015.
[19] T. Murata and T. Suzuki. Doubly accelerated stochastic variance reduced dual averaging method for regularized empirical risk minimization. CoRR, abs/1703.00439, 2017.
[20] I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. CoRR, abs/1504.06298, 2015.
[21] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[22] L. Nguyen, J. Liu, K. Scheinberg, and M. Takác. SARAH: A novel method for machine learning problems using stochastic recursive gradient. CoRR, 2017.
[23] H. Nyquist. The optimal lp norm estimator in linear regression models. Communications in Statistics - Theory and Methods, 12(21):2511–2524, 1983.
[24] R. Rockafellar. Convex Analysis. Princeton mathematical series. Princeton University Press, 1970.
[25] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. In Proceedings of the International Conference on Machine Learning (ICML), pages 567–599, 2013.
[26] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[27] Y. Xu, Q. Lin, and T. Yang. Stochastic convex optimization: Faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3821–3830, 2017.
[28] Y. Xu, Y. Yan, Q. Lin, and T. Yang. Homotopy smoothing for non-smooth problems with lower complexity than O(1/ϵ). In Advances In Neural Information Processing Systems 29 (NIPS), pages 1208–1216, 2016.
[29] T. Yang and Q. Lin.
RSG: Beating SGD without smoothness and/or strong convexity. CoRR, abs/1512.03107, 2016.
[30] H. Zhang. New analysis of linear convergence of gradient-type methods via unifying error bound conditions. CoRR, abs/1606.00269, 2016.
[31] H. Zhang and W. Yin. Gradient methods for convex minimization: better rates under weaker conditions. arXiv preprint arXiv:1303.4645, 2013.
[32] Z. Zhou and A. M.-C. So. A unified approach to error bounds for structured convex optimization problems. arXiv:1512.03518, 2015.
[33] Z. Zhou, Q. Zhang, and A. M. So. ℓ1,p-norm regularization: Error bounds and convergence rate analysis of first-order methods. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 1501–1510, 2015.
PixelGAN Autoencoders

Alireza Makhzani, Brendan Frey
University of Toronto
{makhzani,frey}@psi.toronto.edu

Abstract

In this paper, we describe the “PixelGAN autoencoder”, a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code. We show that different priors result in different decompositions of information between the latent code and the autoregressive decoder. For example, by imposing a Gaussian distribution as the prior, we can achieve a global vs. local decomposition, or by imposing a categorical distribution as the prior, we can disentangle the style and content information of images in an unsupervised fashion. We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets.

1 Introduction

In recent years, generative models that can be trained via direct back-propagation have enabled remarkable progress in modeling natural images. One of the most successful models is the generative adversarial network (GAN) [1], which employs a two-player min-max game. The generative model, G, samples the prior p(z) and generates the sample G(z). The discriminator, D(x), is trained to identify whether a point x is a sample from the data distribution or a sample from the generative model. The generator is trained to maximally confuse the discriminator into believing that generated samples come from the data distribution. The cost function of GAN is

min_G max_D E_{x∼p_data}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))].

GANs can be considered within the wider framework of implicit generative models [2, 3, 4].
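As a toy illustration of the min-max objective above (all parameters below are illustrative; this is not a training loop), the GAN value function can be estimated by Monte Carlo for fixed choices of D and G:

```python
import numpy as np

# Toy Monte Carlo evaluation of the GAN value function
# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
# with 1-D data, a linear "generator" G(z) = a*z + b, and a logistic
# "discriminator" D(x) = sigmoid(w*(x - threshold)).

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

x_data = rng.normal(2.0, 1.0, size=10_000)     # "real" samples
z = rng.normal(0.0, 1.0, size=10_000)          # prior p(z)
G = lambda z: 0.5 * z                          # generator far from the data
D = lambda x: sigmoid(2.0 * (x - 1.0))         # discriminator threshold ~1.0

value = np.mean(np.log(D(x_data))) + np.mean(np.log(1.0 - D(G(z))))
# At the equilibrium (G matches p_data and D is optimal, D = 1/2 everywhere)
# the value is -log 4:
value_at_equilibrium = 2 * np.log(0.5)
print(value, value_at_equilibrium)   # value exceeds -log 4 here
```

Because G here is badly mismatched to the data, this fixed D separates real from generated samples easily and the value sits well above the equilibrium value −log 4.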
Implicit distributions can be sampled through their generative path, but their likelihood function is not tractable. Recently, several papers have proposed another application of GAN-style algorithms for approximate inference [2, 3, 4, 5, 6, 7, 8, 9]. These algorithms use implicit distributions to learn posterior approximations that are more expressive than the distributions with tractable densities that are often used in variational inference. For example, adversarial autoencoders [6] use a universal approximator posterior as the implicit posterior distribution and use adversarial training to match the aggregated posterior of the latent code to the prior distribution. Adversarial variational Bayes [3, 7] uses a more general amortized GAN inference framework within a maximum-likelihood learning setting. Another type of GAN inference technique is used in the ALI [8] and BiGAN [9] models, which have been shown to approximate maximum likelihood learning [3]. In these models, both the recognition and generative models are implicit and are jointly learnt by an adversarial training process. Variational autoencoders (VAE) [10, 11] are another state-of-the-art image modeling technique that use neural networks to parametrize the posterior distribution and pair it with a top-down generative network. Both networks are jointly trained to maximize a variational lower bound on the data log-likelihood. A different framework for learning density models is autoregressive neural networks such as NADE [12], MADE [12], PixelRNN [12] and PixelCNN [13]. Unlike variational autoencoders, which capture the statistics of the data in hierarchical latent codes, the autoregressive models learn the image densities directly at the pixel level without learning a hierarchical latent representation.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Architecture of the PixelGAN autoencoder.
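The pixel-level factorization used by these autoregressive models can be illustrated with a toy 1-D binary example: p(x) = ∏ᵢ p(xᵢ | x₍<ᵢ₎), and because every conditional is normalized, the product is automatically a proper distribution. The logistic conditional below is a hypothetical stand-in for a PixelCNN layer, not a trained model:

```python
import numpy as np
from itertools import product

# Sketch of the autoregressive factorization p(x) = prod_i p(x_i | x_{<i})
# over 1-D binary sequences.  Each "pixel" is predicted from a logistic
# function of a summary of the previous pixels (weights are illustrative).

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
w, b = 1.5, -0.5    # shared parameters of the toy conditional

def log_likelihood(x):
    """Chain-rule log p(x): each factor conditions only on earlier pixels."""
    ll = 0.0
    for i in range(len(x)):
        context = x[:i].mean() if i > 0 else 0.0   # summary of x_{<i}
        p_one = sigmoid(w * context + b)           # p(x_i = 1 | x_{<i})
        ll += np.log(p_one if x[i] == 1 else 1.0 - p_one)
    return ll

print(log_likelihood(np.array([1, 0, 1, 1])))   # a valid log-probability, < 0

# The chain rule guarantees normalization: probabilities of all 2^4
# sequences sum to 1.
total = sum(np.exp(log_likelihood(np.array(bits)))
            for bits in product([0, 1], repeat=4))
print(total)
```

The exact, tractable likelihood is what makes these models attractive as decoders; what they lack, and what the PixelGAN autoencoder adds, is a latent representation.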
In this paper, we present the PixelGAN autoencoder as a generative autoencoder that combines the benefits of latent variable models with autoregressive architectures. The PixelGAN autoencoder is a generative autoencoder in which the generative path is a PixelCNN that is conditioned on a latent variable. The latent variable is inferred by matching the aggregated posterior distribution to the prior distribution by an adversarial training technique similar to that of the adversarial autoencoder [6]. However, whereas in adversarial autoencoders the statistics of the data distribution are captured by the latent code, in the PixelGAN autoencoder they are captured jointly by the latent code and the autoregressive decoder. We show that imposing different distributions as the prior results in different factorizations of information between the latent code and the autoregressive decoder. For example, in Section 2.1, we show that by imposing a Gaussian distribution on the latent code, we can achieve a global vs. local decomposition of information. In this case, the global latent code no longer has to model all the irrelevant and fine details of the image, and can use its capacity to capture more relevant and global statistics of the image. Another type of decomposition of information that can be learnt by PixelGAN autoencoders is a discrete vs. continuous decomposition. In Section 2.2, we show that we can achieve this decomposition by imposing a categorical prior on the latent code using adversarial training. In this case, the categorical latent code captures the discrete underlying factors of variation in the data, such as class label information, and the autoregressive decoder captures the remaining continuous structure, such as style information, in an unsupervised fashion. 
We then show how PixelGAN autoencoders with categorical priors can be directly used in clustering and semi-supervised scenarios and achieve very competitive classification results on several datasets in Section 3. Finally, we present one of the main potential applications of PixelGAN autoencoders, learning cross-domain relations between two different domains, in Section 4.

2 PixelGAN Autoencoders

Let x be a datapoint that comes from the distribution pdata(x) and z be the hidden code. The recognition path of the PixelGAN autoencoder (Figure 1) defines an implicit posterior distribution q(z|x) by using a deterministic neural function z = f(x, n) that takes the input x along with random noise n with a fixed distribution p(n) and outputs z. The aggregated posterior q(z) of this model is defined as follows:

q(z) = ∫ q(z|x) pdata(x) dx.

This parametrization of the implicit posterior distribution was originally proposed in the adversarial autoencoder work [6] as the universal approximator posterior. We can sample from this implicit distribution q(z|x) by evaluating f(x, n) at different samples of n, but the density function of this posterior distribution is intractable. Appendix A.1 discusses the importance of the input noise in training PixelGAN autoencoders. The generative path p(x|z) is a conditional PixelCNN [13] that conditions on the latent vector z using an adaptive bias in the PixelCNN layers. The inference is done by an amortized GAN inference technique that was originally proposed in the adversarial autoencoder work [6]. In this method, an adversarial network is attached on top of the hidden code vector of the autoencoder and matches the aggregated posterior distribution, q(z), to an arbitrary prior, p(z).
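The implicit posterior and its aggregation can be sketched in a few lines; the encoder below is a made-up scalar function standing in for the neural network z = f(x, n), and the data distribution is a toy Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deterministic encoder z = f(x, n); the weights are invented
# for illustration -- the paper uses a neural network here.
def f(x, n, w_x=0.5, w_n=1.0):
    return np.tanh(w_x * x + w_n * n)

# Sampling the implicit posterior q(z|x): evaluate f(x, n) at many draws
# from the fixed noise distribution p(n).
x0 = 1.5
z_given_x0 = f(x0, rng.normal(size=5_000))

# Sampling the aggregated posterior q(z) = \int q(z|x) p_data(x) dx:
# draw x ~ p_data first, then push it through f with fresh noise.
x_data = rng.normal(size=5_000)        # stand-in for p_data(x)
z_agg = f(x_data, rng.normal(size=5_000))

# Both distributions can be sampled this way, but neither admits a
# tractable density -- hence the adversarial (GAN-based) inference.
```

This is exactly why a discriminator is needed to compare q(z) with p(z): only samples, not densities, are available.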
Samples from q(z) and p(z) are provided to the adversarial network as the negative and positive examples respectively, and the generator of the adversarial network, which is also the encoder of the autoencoder, tries to match q(z) to p(z) by the gradient that comes through the discriminative adversarial network. The adversarial network, the PixelCNN decoder and the encoder are trained jointly in two phases – the reconstruction phase and the adversarial phase – executed on each mini-batch. In the reconstruction phase, the ground truth input x along with the hidden code z inferred by the encoder are provided to the PixelCNN decoder. The PixelCNN decoder weights are updated to maximize the log-likelihood of the input x. The encoder weights are also updated at this stage by the gradient that comes through the conditioning vector of the PixelCNN. In the adversarial phase, the adversarial network updates both its discriminative network and its generative network (the encoder) to match q(z) to p(z). Once the training is done, we can sample from the model by first sampling z from the prior distribution p(z), and then sampling from the conditional likelihood p(x|z) parametrized by the PixelCNN decoder. We now establish a connection between the PixelGAN autoencoder cost and maximum likelihood learning using a decomposition of the aggregated evidence lower bound (ELBO) proposed in [14]:

E_{x∼pdata(x)}[log p(x)] ≥ −E_{x∼pdata(x)}[ E_{q(z|x)}[−log p(x|z)] ] − E_{x∼pdata(x)}[ KL(q(z|x) ∥ p(z)) ]   (1)
                          = −E_{x∼pdata(x)}[ E_{q(z|x)}[−log p(x|z)] ] − KL(q(z) ∥ p(z)) − I(z; x)   (2)

The first term in Equation 2 is the reconstruction term and the second term is the marginal KL divergence between the aggregated posterior and the prior distribution. The third term is the mutual information between the latent code z and the input x.
This is a regularization term that encourages z and x to be decoupled by removing the information of the data distribution from the hidden code. If the training set has N examples, I(z; x) is bounded as follows (see [14]):

0 ≤ I(z; x) ≤ log N.   (3)

In order to maximize the ELBO, we need to minimize all three terms of Equation 2. We consider two cases for the decoder p(x|z):

Deterministic Decoder. If the decoder p(x|z) is deterministic or has very limited stochasticity, such as the simple factorized decoder of the VAE, the mutual information term acts in the complete opposite direction of the reconstruction term. This is because the only way to minimize the reconstruction error of x is to learn a hidden code z that is relevant to x, which results in maximizing I(z; x). Indeed, it can be shown that minimizing the reconstruction term maximizes a variational lower bound on I(z; x) [15, 16]. For example, in the case of a VAE trained on MNIST, since the reconstruction is precise, the mutual information term is dominant and is close to its maximum value I(z; x) ≈ log N ≈ 11.00 nats [14].

Stochastic Decoder. If we use a powerful decoder such as the PixelCNN, the reconstruction term and the mutual information term no longer compete with each other, and the network can minimize both independently. In this case, the optimal solution for maximizing the ELBO is to model pdata(x) solely by p(x|z), thereby minimizing the reconstruction term, while at the same time minimizing the mutual information term by ignoring the latent code. As a result, even though the model achieves a high likelihood, the latent code does not learn any useful representation, which is undesirable. This problem has been observed in several previous works [17, 18], and different techniques such as annealing the weight of the KL term [17] or weakening the decoder [18] have been proposed to make z and x more dependent.
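A toy numeric check of the bound in Equation 3: for N equally likely training points and a discrete code, a deterministic encoder that assigns a distinct code to every example attains the upper bound I(z; x) = log N.

```python
import numpy as np

# Bound 0 <= I(z; x) <= log N for a discrete code over a training set of
# N equally likely examples.
N = 8
p_x = np.full(N, 1.0 / N)
q_z_given_x = np.eye(N)          # one-hot posterior q(z | x_i) per example
q_z = p_x @ q_z_given_x          # aggregated posterior (uniform here)

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

H_z = entropy(q_z)
H_z_given_x = float(np.sum(p_x * np.array([entropy(r) for r in q_z_given_x])))
mutual_info = H_z - H_z_given_x  # I(z; x) = H(z) - H(z|x)
```

Here H(z|x) = 0 (the encoder is deterministic), so the mutual information equals H(z) = log N exactly; any noisier encoder only lowers it.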
As suggested in [19, 18], we think that the maximum likelihood objective by itself is not a useful objective for representation learning, especially when a powerful decoder is used. In PixelGAN autoencoders, in order to encourage learning more useful representations, we modify the ELBO (Equation 2) by removing the mutual information term from it, since this term explicitly encourages z to become independent of x. So our cost function only includes the reconstruction term and the marginal KL term. The reconstruction term is optimized by the reconstruction phase of training, and the marginal KL term is approximately optimized by the adversarial phase (the original GAN formulation optimizes the Jensen-Shannon divergence [1], but there are other formulations that optimize the KL divergence, e.g., [3]). Note that since the mutual information term is upper bounded by a constant (log N), we are still maximizing a lower bound on the log-likelihood of the data. However, this bound is weaker than the ELBO, which is the price that is paid for learning more useful latent representations by balancing the decomposition of information between the latent code and the autoregressive decoder.

Figure 2: (a) Samples of the PixelGAN autoencoder with a 2D Gaussian code and limited receptive field of size 9. (b) Samples of the PixelCNN with the same limited receptive field. (c) Samples of the adversarial autoencoder with a 2D code.

For implementing the conditioning adaptive bias in the PixelCNN decoder, we explore two different architectures [13]. In the location-invariant bias, for each PixelCNN layer, we use the latent code to construct a vector that is broadcasted within each feature map of the layer and then added as an adaptive bias to that layer.
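A shape-level NumPy sketch of the two conditioning schemes (the location-dependent variant is described next); the code dimension, feature-map shape, and projection matrices are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, C, H, W = 10, 16, 8, 8        # code size and feature-map shape (made up)
z = rng.normal(size=d_z)
feature_maps = rng.normal(size=(C, H, W))

# Location-invariant bias: z is projected to one scalar per feature map,
# broadcast over all spatial positions, and added at every PixelCNN layer.
W_inv = rng.normal(size=(d_z, C))          # hypothetical projection
bias_inv = (z @ W_inv).reshape(C, 1, 1)
out_inv = feature_maps + bias_inv          # broadcast over H x W

# Location-dependent bias: z is projected to a single spatial map,
# broadcast across feature maps, and added only to the first decoder layer.
W_dep = rng.normal(size=(d_z, H * W))      # hypothetical projection
bias_dep = (z @ W_dep).reshape(1, H, W)
out_dep = feature_maps + bias_dep          # broadcast over channels
```

The broadcast direction is the whole difference: the first scheme gives every spatial position of a map the same shift (no "where" information), while the second gives every channel the same spatial pattern.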
In the location-dependent bias, we use the latent code to construct a spatial feature map that is broadcasted across different feature maps and then added only to the first layer of the decoder as an adaptive bias. We will discuss the effect of these architectures on the learnt representation in Figure 3 of Section 2.1 and their implementation details in Appendix A.2.

2.1 PixelGAN Autoencoders with Gaussian Priors

Here, we show that PixelGAN autoencoders with Gaussian priors can decompose the global and local statistics of the images between the latent code and the autoregressive decoder. Figure 2a shows samples of a PixelGAN autoencoder model with the location-dependent bias trained on the MNIST dataset. To better illustrate the decomposition of information, we have chosen a 2D Gaussian latent code and a limited receptive field of size 9 for the PixelGAN autoencoder. Figure 2b shows samples of a PixelCNN model with the same limited receptive field size of 9, and Figure 2c shows samples of an adversarial autoencoder with a 2D Gaussian latent code. The PixelCNN can successfully capture the local statistics but fails to capture the global statistics due to the limited receptive field size. In contrast, the adversarial autoencoder, whose sample quality is very similar to that of the VAE, can successfully capture the global statistics but fails to generate the details of the images. The PixelGAN autoencoder, however, with the same receptive field and code size, combines the best of both and generates sharp images with coherent global statistics. In PixelGAN autoencoders, both the PixelCNN depth and the conditioning architecture affect the decomposition of information between the latent code and the autoregressive decoder. We investigate these effects in Figure 3 by training a PixelGAN autoencoder on MNIST, where the code size is chosen to be 2 for visualization purposes.
As shown in Figure 3a,b, when a shallow decoder is used, most of the information is encoded in the hidden code and there is a clean separation between the digit clusters. As we make the PixelCNN more powerful (Figure 3c,d), we can see that the hidden code is still used to capture some relevant information of the input, but the separation of digit clusters is not as sharp when the limited code size of 2 is used. In the next section, we will show that by using a larger code size (e.g., 30), we can get a much better separation of digit clusters even when a powerful PixelCNN is used. The conditioning architecture also affects the decomposition of information. In the case of the location-invariant bias, the hidden code is encouraged to learn the global information that is location-invariant (the what information and not the where information), such as the class label information. For example, we can see in Figure 3a,c that the network has learnt to use one of the axes of the 2D Gaussian code to explicitly encode the digit label even though a continuous prior is imposed.

Figure 3: The effect of the PixelCNN decoder depth and the conditioning architecture on the learnt representation of the PixelGAN autoencoder: (a) shallow PixelCNN, location-invariant bias; (b) shallow PixelCNN, location-dependent bias; (c) deep PixelCNN, location-invariant bias; (d) deep PixelCNN, location-dependent bias. (Shallow = 3 ResBlocks, Deep = 12 ResBlocks)

In this case, we can potentially get a much better separation if we impose a discrete prior. This makes this architecture suitable for the discrete vs. continuous decomposition, and we use it for our clustering and semi-supervised learning experiments.
In the case of the location-dependent bias (Figure 3b,d), the hidden code is encouraged to learn global information that is location-dependent, such as the low-frequency content of the image, similar to what the hidden code of an adversarial or variational autoencoder would learn (Figure 2c). This makes this architecture suitable for the global vs. local decomposition experiments such as Figure 2a. From Figure 3, we can see that the class label information is mostly captured by p(z), while the style information of the images is captured by both p(z) and p(x|z). This decomposition of information has also been studied in other works that combine latent variable models with autoregressive decoders, such as PixelVAE [20] and variational lossy autoencoders (VLAE) [18]. For example, the VLAE model [18] proposes to use the depth of the PixelCNN decoder to control the decomposition of information. In their model, the PixelCNN decoder is designed to have a shallow depth (small local receptive field) so that the latent code z is forced to capture more global information. This approach is very similar to our example of the PixelGAN autoencoder in Figure 2. However, the question that has remained unanswered is whether it is possible to achieve a complete decomposition of content and style in an unsupervised fashion, where the class label or discrete structure information is encoded in the latent code z, and the remaining continuous structure, such as style, is captured by a powerful and deep PixelCNN decoder. This kind of decomposition is particularly interesting, as it can be directly used for clustering and semi-supervised classification. In the next section, we show that we can learn this decomposition of content and style by imposing a categorical distribution on the latent representation z using adversarial training. Note that this discrete vs. continuous decomposition is very different from the global vs.
local decomposition, because a continuous factor of variation such as style can have both global and local effects on the image. Indeed, in order to achieve the discrete vs. continuous decomposition, we have to use very deep and powerful PixelCNN decoders (up to 20 residual blocks) to capture both the global and local statistics of the style with the PixelCNN, while the discrete content of the image is captured by the categorical latent variable.

2.2 PixelGAN Autoencoders with Categorical Priors

In this section, we present an architecture of the PixelGAN autoencoder that can separate the discrete information (e.g., class label) from the continuous information (e.g., style information) in the images. We then show how our architecture can be naturally adopted for semi-supervised settings. The architecture that we use is similar to Figure 1, with the difference that we impose a categorical distribution as the prior rather than the Gaussian distribution (Figure 4), and we also use the location-invariant bias architecture. Another difference is that we use a convolutional network as the inference network q(z|x) to encourage the encoder to preserve the content and lose the style information of the image. The inference network has a softmax output and predicts a one-hot vector whose dimension is the number of discrete labels or categories that we wish the data to be clustered into. The adversarial network is trained directly on the continuous probability outputs of the softmax layer of the encoder. Imposing a categorical distribution at the output of the encoder imposes two constraints. The first constraint is that the encoder has to make confident decisions about the class labels of the inputs.

Figure 4: Architecture of the PixelGAN autoencoder with the categorical prior. p(z) captures the class label and p(x|z) is a multi-modal distribution that captures the style distribution of a digit conditioned on the class label of that digit.

The
adversarial training pushes the output of the encoder to the corners of the softmax simplex, by which it ensures that the autoencoder cannot use the latent vector z to carry any continuous style information. The second constraint imposed by adversarial training is that the aggregated posterior distribution of z should match the categorical prior distribution with uniform outcome probabilities. This constraint enforces the encoder to evenly distribute the class labels across the corners of the softmax simplex. Because of these constraints, the latent variable will only capture the discrete content of the image and all the continuous style information will be captured by the autoregressive decoder. In order to better understand and visualize the effect of the adversarial training on shaping the hidden code distribution, we train a PixelGAN autoencoder on the first three digits of MNIST (18000 training and 3000 test points) and choose the number of clusters to be 3. Suppose z = [z1, z2, z3] is the hidden code which in this case is the output probabilities of the softmax layer of the inference network. In Figure 5a, we project the 3D softmax simplex of z1 + z2 + z3 = 1 onto a 2D triangle and plot the hidden codes of the training examples when no distribution is imposed on the hidden code. We can see from this figure that the network has learnt to use the surface of the softmax simplex to encode style information of the digits and thus the three corners of the simplex do not have any meaningful interpretation. Figure 5b corresponds to the code space of the same network when a categorical distribution is imposed using the adversarial training. In this case, we can see the network has successfully learnt to encode the label information of the three digits in the three corners of the simplex, and all the style information has been separately captured by the autoregressive decoder. 
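The simplex projection used in this visualization can be sketched directly: a softmax output z = (z1, z2, z3) with z1 + z2 + z3 = 1 maps to a convex combination of the corners of a triangle (the corner coordinates below are an arbitrary choice):

```python
import numpy as np

# Barycentric projection of softmax outputs onto a 2D triangle, as used to
# visualize the code space: one-hot codes land on corners, the uniform code
# lands on the centroid.
corners = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.5, np.sqrt(3.0) / 2.0]])  # equilateral triangle

def to_triangle(z):
    # a point on the probability simplex maps to a convex combination
    # of the triangle corners
    return np.asarray(z) @ corners

corner_pt = to_triangle([1.0, 0.0, 0.0])      # confident code -> a corner
center_pt = to_triangle([1/3, 1/3, 1/3])      # uniform code -> the centroid
```

Under this map, the adversarial pressure toward the categorical prior literally pushes encodings out of the triangle's interior and onto its corners.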
This network achieves an almost perfect test error rate of 0.3% on the first three digits of MNIST, even though it is trained in a purely unsupervised fashion.

Figure 5: Effect of GAN regularization (categorical prior) on the code space of PixelGAN autoencoders: (a) without GAN regularization; (b) with GAN regularization.

Once the PixelGAN autoencoder is trained, its encoder can be used for clustering new points and its decoder can be used to generate samples from each cluster. Figure 6 illustrates samples of the PixelGAN autoencoder trained on the full MNIST dataset. The number of clusters is set to 30, and each row corresponds to the conditional samples of one of the clusters (only 16 are shown). We can see that the discrete latent code of the network has learnt discrete factors of variation such as class label information and some discrete style information. For example, digit 1s are put in different clusters based on how tilted they are. The network also assigns different clusters to digit 2s (based on whether they have a loop) and digit 7s (based on whether they have a dash in the middle).

Figure 6: Disentangling the content and style in an unsupervised fashion with PixelGAN autoencoders. Each row shows samples of the model from one of the learnt clusters.

In Section 3, we will show that by using the encoder of this network, we can obtain about a 5% error rate in classifying digits in an unsupervised fashion, just by matching each cluster to a digit type.

Semi-Supervised PixelGAN Autoencoders. The PixelGAN autoencoder can be used in a semi-supervised setting. In order to incorporate the label information, we add a semi-supervised training phase.
Specifically, we set the number of clusters to be the same as the number of class labels, and after executing the reconstruction and adversarial phases on an unlabeled mini-batch, the semi-supervised phase is executed on a labeled mini-batch by updating the weights of the encoder q(z|x) to minimize the cross-entropy cost. The semi-supervised cost also reduces the mode-missing behavior of the GAN training by forcing the encoder to learn all the modes of the categorical distribution. In Section 3, we evaluate the performance of PixelGAN autoencoders on semi-supervised classification tasks.

3 Experiments

In this paper, we presented the PixelGAN autoencoder as a generative model, but the currently available metrics for evaluating the likelihood of GAN-based generative models, such as the Parzen window estimate, are fundamentally flawed [21]. So in this section, we only present the performance of the PixelGAN autoencoder on downstream tasks such as unsupervised clustering and semi-supervised classification. The details of all the experiments can be found in Appendix B.

Figure 7: Conditional samples of the semi-supervised PixelGAN autoencoder on (a) SVHN (1000 labels), (b) MNIST (100 labels) and (c) NORB (1000 labels).

Unsupervised Clustering. We trained a PixelGAN autoencoder in an unsupervised fashion on the MNIST dataset (Figure 6). We chose the number of clusters to be 30 and used the following evaluation protocol: once the training is done, for each cluster i, we found the validation example
x_n that maximizes q(z_i|x_n), and assigned the label of x_n to all the points in cluster i. We then computed the test error based on the class labels assigned to each cluster. As shown in the first column of Table 1, the performance of PixelGAN autoencoders is on par with other GAN-based clustering algorithms such as CatGAN [22], InfoGAN [16] and adversarial autoencoders [6].

Figure 8: Semi-supervised error rate of PixelGAN autoencoders against training epochs on the MNIST dataset (100, 50 and 20 labels, and unsupervised with 30 clusters) and the SVHN dataset (1000 and 500 labels).

Table 1: Semi-supervised learning and clustering error rate (%) on the MNIST, SVHN and NORB datasets.

| Method | MNIST (Unsupervised) | MNIST (20 labels) | MNIST (50 labels) | MNIST (100 labels) | SVHN (500 labels) | SVHN (1000 labels) | NORB (1000 labels) |
|---|---|---|---|---|---|---|---|
| VAE [24] | | | | 3.33 (±0.14) | | 36.02 (±0.10) | 18.79 (±0.05) |
| VAT [25] | | | | 2.33 | | 24.63 | 9.88 |
| ADGM [26] | | | | 0.96 (±0.02) | | 22.86 | 10.06 (±0.05) |
| SDGM [26] | | | | 1.32 (±0.07) | | 16.61 (±0.24) | 9.40 (±0.04) |
| Adversarial Autoencoder [6] | 4.10 (±1.13) | | | 1.90 (±0.10) | | 17.70 (±0.30) | |
| Ladder Networks [27] | | | | 0.89 (±0.50) | | | |
| Convolutional CatGAN [22] | 4.27 | | | 1.39 (±0.28) | | | |
| InfoGAN [16] | 5.00 | | | | | | |
| Feature Matching GAN [28] | | 16.77 (±4.52) | 2.21 (±1.36) | 0.93 (±0.06) | 18.44 (±4.80) | 8.11 (±1.30) | |
| Temporal Ensembling [23] | | | | | 7.05 (±0.30) | 5.43 (±0.25) | |
| PixelGAN Autoencoders | 5.27 (±1.81) | 12.08 (±5.50) | 1.16 (±0.17) | 1.08 (±0.15) | 10.47 (±1.80) | 6.96 (±0.55) | 8.90 (±1.0) |

Semi-supervised Classification. Table 1 and Figure 8 report the results of semi-supervised classification experiments on the MNIST, SVHN and NORB datasets. On the MNIST dataset with 20, 50 and 100 labels, our classification results are highly competitive.
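The clustering evaluation protocol above is simple enough to state as code; the toy posteriors below are made up stand-ins for a trained encoder's outputs:

```python
import numpy as np

def cluster_error_rate(q_val, y_val, q_test, y_test):
    """Protocol described above: label each cluster i with the true label of
    the validation example maximizing q(z_i | x_n), then classify test
    points by their argmax cluster and report the error rate."""
    cluster_labels = y_val[np.argmax(q_val, axis=0)]   # one label per cluster
    y_pred = cluster_labels[np.argmax(q_test, axis=1)]
    return float(np.mean(y_pred != y_test))

# Toy check: 3 perfectly separated clusters over 2 classes.
q_val = np.eye(3).repeat(4, axis=0)            # 12 validation points x 3 clusters
y_val = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0])
q_test = np.eye(3).repeat(2, axis=0)           # 6 test points
y_test = np.array([0, 0, 1, 1, 0, 0])
err = cluster_error_rate(q_val, y_val, q_test, y_test)
```

Note that several clusters may legitimately share one label (as with the tilted 1s above), which is why 30 clusters can beat 10 semi-supervised classes.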
Note that the error rate of unsupervised clustering on MNIST is lower than that of semi-supervised MNIST with 20 labels. This is because in the unsupervised case the number of clusters is 30, whereas in the semi-supervised case there are only 10 class labels, which makes it more likely to confuse two digits. On the SVHN dataset with 500 and 1000 labels, the PixelGAN autoencoder outperforms all the other methods except the recently proposed temporal ensembling work [23], which is not a generative model. On the NORB dataset with 1000 labels, the PixelGAN autoencoder outperforms all the other reported results. Figure 7 shows the conditional samples of the semi-supervised PixelGAN autoencoder on the MNIST, SVHN and NORB datasets. Each column of this figure presents sampled images conditioned on a fixed one-hot latent code. We can see from this figure that the PixelGAN autoencoder can achieve a rather clean separation of style and content on these datasets with very few labeled data.

4 Learning Cross-Domain Relations with PixelGAN Autoencoders

In this section, we discuss how the PixelGAN autoencoder can be viewed in the context of learning cross-domain relations between two different domains. We also describe how the problem of clustering or semi-supervised learning can be cast as the problem of finding a smooth cross-domain mapping from the data distribution to the categorical distribution. Recently, several GAN-based methods have been developed to learn a cross-domain mapping between two different domains [29, 30, 31, 6, 32]. In [31], an unsupervised cost function called output distribution matching (ODM) is proposed to find a cross-domain mapping F between two domains D1 and D2 by imposing the following unsupervised constraint on uncorrelated samples x ∼ D1 and y ∼ D2:

Distr[F(x)] = Distr[y]   (4)

where Distr[z] denotes the distribution of the random variable z. Adversarial training is proposed as one of the methods for matching these distributions.
If we have access to a few labeled pairs (x, y), then F can be further trained on them in a supervised fashion to satisfy F(x) = y. For example, in speech recognition, we want to find a cross-domain mapping from a sequence of phonemes to a sequence of characters. By optimizing the ODM cost function in Equation 4, we can find a smooth function F that takes phonemes at its input and outputs a sequence of characters that respects the language model. However, the main problem with this method is that the network can learn to ignore part of the input distribution and still satisfy the ODM cost function with its output distribution. This problem has also been observed in other works such as [29]. One way to avoid this problem is to add a reconstruction term to the ODM cost function by introducing a reverse mapping from the output of the encoder to the input domain. This is essentially the idea of the adversarial autoencoder (AAE) [6], which learns a generative model by finding a cross-domain mapping between a Gaussian distribution and the data distribution. Using the ODM cost function along with a reconstruction term to learn cross-domain relations has been explored in several previous works. For example, InfoGAN [16] adds a mutual information term to the ODM cost function and optimizes a variational lower bound on this term. It can be shown that maximizing this variational bound amounts to minimizing the reconstruction cost of an autoencoder [15]. Similarly, in [32, 33], an AAE is used to learn the cross-domain relations of the vector representations of words from two different languages. The architectures of the recent DiscoGAN [29] and CycleGAN [30] works are also similar to an AAE in which the latent representation is enforced to have the distribution of the other domain. Here we describe how our proposed PixelGAN autoencoder can potentially be used in all these application areas to learn better cross-domain relations.
Suppose we want to learn a mapping from domain D1 to D2. In the architecture of Figure 1, we can use independent samples of x ∼ D1 at the input, and instead of imposing a Gaussian distribution on the latent code, we can impose the distribution of the second domain using its independent samples y ∼ D2. Unlike AAEs, the encoder of PixelGAN autoencoders does not have to retain all the input information in order to achieve a lossless reconstruction. So the encoder can use all its capacity to learn the most relevant mapping from D1 to D2, and at the same time, the PixelCNN can capture the remaining information that has been lost by the encoder. We can adopt the ODM idea for semi-supervised learning by assuming D1 is the image domain and D2 is the label domain. Independent samples of D1 and D2 correspond to samples from the data distribution pdata(x) and the categorical distribution. The function F = q(y|x) can be parametrized by a neural network that is trained to satisfy the ODM cost function by matching the aggregated distribution q(y) = ∫ q(y|x) pdata(x) dx to the categorical distribution using adversarial training. The few labeled examples are used to further train F to satisfy F(x) = y. However, as explained above, the problem with this method is that the network can learn to generate the categorical distribution while ignoring some part of the input distribution. The AAE solves this problem by adding an inverse mapping from the categorical distribution to the data distribution. However, the main drawback of the AAE architecture is that, due to the reconstruction term, the latent representation now has to model all the underlying factors of variation in the image. For example, in the semi-supervised AAE architecture [6], while we are only interested in the one-hot label representation for semi-supervised learning, we also need to infer the style of the image so that we can have a lossless reconstruction of the image.
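A Monte Carlo sketch of this aggregated-distribution matching, with a made-up linear "encoder" standing in for the network q(y|x):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical encoder F = q(y|x): a random linear map into K categories
# stands in for the real convolutional network.
d, K = 64, 10
W_enc = rng.normal(size=(d, K)) * 0.05

x = rng.normal(size=(5_000, d))       # stand-in samples from p_data(x)
q_y_given_x = softmax(x @ W_enc)      # per-example categorical posteriors

# Monte Carlo estimate of the aggregated distribution
# q(y) = \int q(y|x) p_data(x) dx  ~=  average over data samples.
q_y = q_y_given_x.mean(axis=0)

# The adversarial phase pushes q(y) toward the uniform categorical prior;
# the few labeled pairs then pin each corner of the simplex to a class.
```

Only this aggregated q(y), never the intractable density of q(y|x), needs to be compared against the prior, which is what makes the adversarial formulation workable here.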
The PixelGAN autoencoder solves this problem by enabling the encoder to only infer the factor of variation that we are interested in (i.e., the label information), while the remaining structure of the input (i.e., the style information) is automatically captured by the autoregressive decoder.

5 Conclusion

In this paper, we proposed the PixelGAN autoencoder, a generative autoencoder that combines a generative PixelCNN with a GAN inference network that can impose arbitrary priors on the latent code. We showed that imposing different distributions as the prior enables us to learn a latent representation that captures the type of statistics that we care about, while the remaining structure of the image is captured by the PixelCNN decoder. Specifically, by imposing a Gaussian prior, we were able to disentangle the low-frequency and high-frequency statistics of the images, and by imposing a categorical prior we were able to disentangle the style and content of images and learn representations that are specifically useful for clustering and semi-supervised learning tasks. While the main focus of this paper was to demonstrate the application of PixelGAN autoencoders in downstream tasks such as semi-supervised learning, we discussed how this architecture has other potential applications, such as learning cross-domain relations between two different domains.

Acknowledgments

We would like to thank Nathan Killoran for helpful discussions. We also thank NVIDIA for GPU donations.

References

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[2] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
[3] Ferenc Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
[4] Dustin Tran, Rajesh Ranganath, and David M. Blei. Deep and hierarchical implicit models. arXiv preprint arXiv:1702.08896, 2017.
[5] Rajesh Ranganath, Dustin Tran, Jaan Altosaar, and David Blei. Operator variational inference. In Advances in Neural Information Processing Systems, pages 496–504, 2016.
[6] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[7] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
[8] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[9] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[10] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR), 2014.
[11] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning, 2014.
[12] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[13] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016.
[14] Matthew D. Hoffman and Matthew J. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. In NIPS 2016 Workshop on Advances in Approximate Bayesian Inference, 2016.
[15] David Barber and Felix V. Agakov. The IM algorithm: A variational approach to information maximization.
In NIPS, pages 201–208, 2003. [16] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016. [17] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015. [18] Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016. [19] Ferenc Huszár. Is Maximum Likelihood Useful for Representation Learning? http://www.inference. vc/maximum-likelihood-for-representation-learning-2. [20] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013, 2016. 10 [21] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015. [22] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015. [23] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016. [24] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014. [25] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. stat, 1050:25, 2015. [26] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. 
arXiv preprint arXiv:1602.05473, 2016. [27] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3532–3540, 2015. [28] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016. [29] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover crossdomain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017. [30] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017. [31] Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, and Oriol Vinyals. Towards principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015. [32] Antonio Valerio Miceli Barone. Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders. arXiv preprint arXiv:1608.02996, 2016. [33] Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Adversarial training for unsupervised bilingual lexicon induction. [34] Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. Denoising criterion for variational auto-encoding framework. arXiv preprint arXiv:1511.06406, 2015. [35] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016. [36] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016. [37] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [38] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. [39] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [40] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 11 | 2017 | 330 |
Excess Risk Bounds for the Bayes Risk using Variational Inference in Latent Gaussian Models

Rishit Sheth and Roni Khardon
Department of Computer Science, Tufts University, Medford, MA, 02155, USA
rishit.sheth@tufts.edu | roni@cs.tufts.edu

Abstract

Bayesian models are established as one of the main successful paradigms for complex problems in machine learning. To handle intractable inference, research in this area has developed new approximation methods that are fast and effective. However, theoretical analysis of the performance of such approximations is not well developed. The paper furthers such analysis by providing bounds on the excess risk of variational inference algorithms and related regularized loss minimization algorithms for a large class of latent variable models with Gaussian latent variables. We strengthen previous results for variational algorithms by showing that they are competitive with any point-estimate predictor. Unlike previous work, we provide bounds on the risk of the Bayesian predictor and not just the risk of the Gibbs predictor for the same approximate posterior. The bounds are applied in complex models including sparse Gaussian processes and correlated topic models. Theoretical results are complemented by identifying novel approximations to the Bayesian objective that attempt to minimize the risk directly. An empirical evaluation compares the variational and new algorithms, shedding further light on their performance.

1 Introduction

Bayesian models are established as one of the main successful paradigms for complex problems in machine learning. Since inference in complex models is intractable, research in this area is devoted to developing new approximation methods that are fast and effective (Laplace/Taylor approximation, variational approximation, expectation propagation, MCMC, etc.), i.e., these can be seen as algorithmic contributions.
Much less is known about theoretical guarantees on the loss incurred by such approximations, either when the Bayesian model is correct or under model misspecification. Several authors provide risk bounds for the Bayesian predictor (which aggregates predictions over its posterior and then predicts), e.g., see [15, 6, 12]. However, the analysis is specialized to certain classification or regression settings, and the results have not been shown to be applicable to complex Bayesian models and algorithms like the ones studied in this paper. In recent work, [7] and [1] identified strong connections between variational inference [10] and PAC-Bayes bounds [14] and have provided oracle inequalities for variational inference. As we show in Section 3, similar results that are stronger in some aspects can be obtained by viewing variational inference as performing regularized loss minimization. These results are an exciting first step, but they are limited in two aspects. First, they hold for the Gibbs predictor (which samples a hypothesis and uses it to predict) and not the Bayesian predictor and, second, they are only meaningful against "weak" competitors. For example, the bounds go to infinity if the competitor is a point estimate with zero variance. In addition, these results do not explicitly address hierarchical Bayesian models, where further development is needed to distinguish among different variational approximations in the literature.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Another important result by [11] provides relative loss bounds for generalized linear models (GLM). These bounds can be translated to risk bounds and they hold against point estimates. However, they are limited to the predictions of the true Bayesian posterior, which is hard to compute. In this paper we strengthen these theoretical results and, motivated by these, make additional algorithmic and empirical contributions.
In particular, we focus on latent Gaussian models (LGM) whose latent variables are normally distributed. We extend the technique of [11] to derive agnostic bounds for the excess risk of an approximate Bayesian predictor against any point-estimate competitor. We then apply these results to several models with two levels of latent variables, including generalized linear models (GLM), sparse Gaussian processes (sGP) [17, 26] and correlated topic models (CTM) [3], providing high-probability bounds on the risk. For CTM our results apply precisely to the variational algorithm, and for GLM and sGP they apply to a variant with a smoothed loss function. Our results improve over [7, 1] by strengthening the bounds, showing that they can be applied directly to the variational algorithm, and showing that they apply to the Bayesian predictor. On the other hand, they improve over [11] in analyzing the approximate inference algorithms and in showing how to apply the bounds to a larger class of models. Finally, viewing approximate inference as regularized loss minimization, our exploration of the hierarchical models shows that there is a mismatch between the objective being optimized by algorithms such as variational inference and the loss that defines our performance criterion. We identify three possible objectives corresponding respectively to a "simple variational approximation", the "collapsed variational approximation", and a new algorithm performing direct regularized loss minimization instead of optimizing the variational objective. We explore these ideas empirically in CTM. Experimental results confirm that each variant is the "best" for optimizing its own implicit objective, and therefore direct loss minimization, for which we do not yet have a theoretical analysis, might be the algorithm of choice. However, they also show that the collapsed approximation comes close to direct loss minimization. The concluding section of the paper further discusses the results.
2 Preliminaries

2.1 Learning Model, Hypotheses and Risk

We consider the standard PAC setting where $n$ samples are drawn i.i.d. according to an unknown joint distribution $D$ over the sample space $z$. This captures the supervised case where $z = (x, y)$ and the goal is to predict $y|x$. In the unsupervised case, $z = y$ and we are simply modeling the distribution. To treat both cases together we always include $x$ in the notation but fix it to a dummy value in the unsupervised case. A learning algorithm outputs a hypothesis $h$ which induces a distribution $p_h(y|x)$. One would normally use this predictive distribution and an application-specific loss to pick the prediction. Following previous work, we primarily focus on log loss, i.e., the loss of $h$ on example $(x_*, y_*)$ is $\ell(h, (x_*, y_*)) = -\log p_h(y_*|x_*)$. In cases where this loss is not bounded, a smoothed and bounded variant of the log loss can be defined as $\tilde{\ell}(h, (x_*, y_*)) = -\log\big((1-\alpha)\,p_h(y_*|x_*) + \alpha\big)$, where $0 < \alpha < 1$. We state our results w.r.t. log loss, and demonstrate, by example, how the smoothed log loss can be used. Later, we briefly discuss how our results hold more generally for losses that are convex in $p$.

We start by considering one-level (1L) latent variable models given by $p(w)\,p(y|w, x)$ where $p(y|w, x) = \prod_i p(y_i|w, x_i)$. For example, in Bayesian logistic regression, $w$ is the hidden weight vector, the prior $p(w)$ is given by a Normal distribution $N(w|\mu, \Sigma)$ and the likelihood term is $p(y_i|w, x_i) = \sigma(y_i w^T x_i)$, where $\sigma(\cdot)$ is the sigmoid function. A hypothesis $h$ represents a distribution $q(w)$ over $w$, where point estimates for $w$ are modeled as delta functions. Regardless of how $h$ is computed, the Bayesian predictor calculates a predictive distribution $p_h(y|x) = E_{q(w)}[p(y|w, x)]$ and accordingly its risk is defined as
$$r_{Bay}(q(w)) = E_{(x,y)\sim D}[-\log p_h(y|x)] = E_{(x,y)\sim D}\big[-\log E_{q(w)}[p(y|w, x)]\big].$$
Following previous work, we also analyze the average risk of the Gibbs predictor, which draws a random $w$ from $q(w)$ and predicts using $p(y|w, x)$.
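To make the distinction concrete, the following sketch (illustrative numpy code, not from the paper; the model and numbers are invented) estimates both predictive losses by Monte Carlo for the Bayesian logistic regression example above, with a Gaussian $q(w)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bayes_and_gibbs_loss(x, y, m, V, n_samples=50_000):
    # Bayesian logistic regression: p(y | w, x) = sigmoid(y * w^T x), y in {-1, +1}.
    # Both predictive losses are estimated by Monte Carlo over w ~ q = N(m, V).
    w = m + rng.standard_normal((n_samples, m.size)) @ np.linalg.cholesky(V).T
    p = sigmoid(y * (w @ x))      # p(y | w, x), one value per sample of w
    bayes = -np.log(p.mean())     # -log E_q[ p(y | w, x) ]  (Bayes predictor)
    gibbs = (-np.log(p)).mean()   # E_q[ -log p(y | w, x) ]  (Gibbs predictor)
    return bayes, gibbs

m = np.array([1.0, -0.5])
V = 0.5 * np.eye(2)
bayes, gibbs = bayes_and_gibbs_loss(np.array([1.0, 2.0]), +1.0, m, V)
# By Jensen's inequality the Bayes log loss never exceeds the Gibbs log loss.
assert bayes <= gibbs + 1e-12
```

The gap between the two quantities is exactly the Jensen gap that separates the Bayes and Gibbs risks throughout the paper.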
Although the Gibbs predictor is not an optimal strategy, its analysis has been found useful in previous work and it serves as an intermediate step in our results. Assuming the draw of $w$ is done independently for each $x$, we get:
$$r_{Gib}(q(w)) = E_{(x,y)\sim D}\big[E_{q(w)}[-\log p(y|w, x)]\big].$$
Previous work has defined the Gibbs risk with the expectations in reversed order; that is, the algorithm draws a single $w$ and uses it for prediction on all examples. We find the one given here more natural. Some of our results require the two definitions to be equivalent, i.e., the conditions for Fubini's theorem must hold. We make this explicit in Assumption 1:
$$E_{(x,y)\sim D}\big[E_{q(w)}[-\log p(y|w, x)]\big] = E_{q(w)}\big[E_{(x,y)\sim D}[-\log p(y|w, x)]\big].$$
This is a relatively mild assumption. It clearly holds when $y$ takes discrete values, where $p(y|x, w) \le 1$ implies that the log loss is positive and Fubini's theorem applies. In the case of continuous $y$, upper-bounded likelihood functions imply that a translation of the loss function satisfies the condition of Fubini's theorem. For example, if $p(y|x, w) = N(y|f(w, x), \sigma^2)$ where $\sigma^2$ is a hyperparameter, then $\log p(y|x, w) \le B = -\log(\sqrt{2\pi}\,\sigma)$. Therefore $-\log p(y|x, w) + B \ge 0$, so that if we redefine¹ the loss by adding the constant $B$, then the loss is positive and Fubini's theorem applies. More generally, we might need to enforce constraints on $D$, $q(w)$, and/or $p(y|x, w)$.

2.2 Variational Learners for Latent Variable Models

Approximate inference generally limits $q(w)$ to some fixed family of distributions $Q$ (e.g., the family of normal distributions, or the family of products of independent components in the mean-field approximation). Given a dataset $S = \{(x_i, y_i)\}_{i=1}^n$, we define the following general problem:
$$q^\star = \arg\min_{q\in Q}\ \frac{1}{\eta}\,\mathrm{KL}\big(q(w)\,\|\,p(w)\big) + L(w, S), \quad (1)$$
where KL denotes the Kullback-Leibler divergence. Standard variational inference uses $\eta = 1$ and $L(w, S) = -\sum_i E_{q(w)}[\log p(y_i|w, x_i)]$, and it is well known that (1) is the optimization of a lower bound on $p(y)$.
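As a concrete sketch of objective (1) with the standard variational choices (illustrative numpy code, not from the paper; the logistic likelihood and the data are invented for illustration), the KL term uses the closed form for Gaussians and the expected negative log-likelihood is estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gauss(m, V, mu, Sigma):
    # Closed-form KL( N(m, V) || N(mu, Sigma) ) between multivariate Gaussians.
    M = m.size
    Sinv = np.linalg.inv(Sigma)
    d = mu - m
    return 0.5 * (np.trace(Sinv @ V) + d @ Sinv @ d
                  + np.linalg.slogdet(Sigma)[1] - np.linalg.slogdet(V)[1] - M)

def objective(m, V, X, y, eta=1.0, n_samples=4000):
    # Eq. (1) with the standard variational choice eta = 1 and
    # L(w, S) = -sum_i E_q[ log p(y_i | w, x_i) ] for a logistic likelihood;
    # the inner expectation is estimated by Monte Carlo over w ~ q = N(m, V).
    M = m.size
    w = m + rng.standard_normal((n_samples, M)) @ np.linalg.cholesky(V).T
    logits = y[:, None] * (X @ w.T)                     # y_i * w^T x_i, shape (n, S)
    nll = np.log1p(np.exp(-logits)).mean(axis=1).sum()  # sum_i E_q[-log p(y_i|w,x_i)]
    return kl_gauss(m, V, np.zeros(M), np.eye(M)) / eta + nll

X = rng.standard_normal((20, 3))
y = np.sign(rng.standard_normal(20))
val = objective(np.zeros(3), 0.5 * np.eye(3), X, y)
```

Minimizing this quantity over $(m, V)$ is exactly the regularized cumulative-loss view of variational inference developed next.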
If $-\log p(y_i|w, x_i)$ is replaced with a general loss function, then (1) may no longer correspond to a lower bound on $p(y)$. In any case, the output of (1), denoted by $q^\star_{Gib}$, is achieved via regularized cumulative-loss minimization (RCLM), which optimizes a sum of training-set error and a regularization function. In particular, $q^\star_{Gib}$ uses a KL regularizer and optimizes the Gibbs risk $r_{Gib}$, in contrast to the Bayes risk $r_{Bay}$. This motivates some of the analysis in the paper.

Many interesting Bayesian models have two levels (2L) of latent variables, given by $p(w)\,p(f|w, x)\prod_i p(y_i|f_i)$, where both $w$ and $f$ are latent. Of course one can treat $(w, f)$ as one set of parameters and apply the one-level model, but this does not capture the hierarchical structure of the model. The standard approach in the literature infers a posterior on $w$ via a variational distribution $q(w)q(f|w)$, and assumes that $q(w)$ is sufficient for predicting $p(y_*|x_*)$. We refer to this structural assumption, i.e., $p(f_*, f|w, x, x_*) = p(f_*|w, x_*)\,p(f|w, x)$, as Conditional Independence. It holds in models where an additional factorization $p(f|w, x) = \prod_i p(f_i|w, x_i)$ holds, e.g., in GLM and CTM. In the case of sparse Gaussian processes (sGP), Conditional Independence does not hold, but it is required in order to reduce the cubic complexity of the algorithm, and it has been used in all prior work on sGP. Assuming Conditional Independence, the definition of risk extends naturally from the one-level model by writing $p(y|w, x) = E_{p(f|w,x)}[p(y|f)]$ to get:
$$r_{2Bay}(q(w)) = E_{(x,y)\sim D}\big[-\log E_{q(w)}\big[E_{p(f|w,x)}[p(y|f)]\big]\big], \quad (2)$$
$$r_{2Gib}(q(w)) = E_{(x,y)\sim D}\big[E_{q(w)}\big[-\log E_{p(f|w,x)}[p(y|f)]\big]\big]. \quad (3)$$
Even though Conditional Independence is used in prediction, the learning algorithm must decide how to treat $q(f|w)$ during the optimization of $q(w)$. The mean-field approximation uses $q(w)q(f)$ in the optimization. We analyze two alternatives that have been used in previous work.
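The gap between definitions (2) and (3) is easy to see numerically. The toy sketch below (illustrative numpy code; the two-level model is invented for illustration) estimates both per-example losses by nested Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-level model: w ~ q = N(0, 1), f | w ~ N(w, 1), y = 1 w.p. sigmoid(f).
# Nested Monte Carlo: S outer samples of w, T inner samples of f per w.
S, T = 2000, 2000
w = rng.standard_normal((S, 1))        # outer samples from q(w)
f = w + rng.standard_normal((S, T))    # inner samples from p(f | w)
p = 1.0 / (1.0 + np.exp(-f))           # p(y = 1 | f)

loss_2Bay = -np.log(p.mean())                 # -log E_q E_p[ p(y|f) ]   (inside eq. 2)
loss_2Gib = (-np.log(p.mean(axis=1))).mean()  # E_q[ -log E_p[ p(y|f) ] ] (inside eq. 3)

# Jensen's inequality: pulling -log inside the outer expectation can only
# increase the loss, so the Bayes loss lower-bounds the Gibbs loss.
assert loss_2Bay <= loss_2Gib + 1e-12
```

The same ordering of expectation and logarithm is what distinguishes the learning objectives analyzed below.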
The approximation $q(f|w) = p(f|w)$, used in sparse GP [26, 8, 23], is described by (1) with $L(w, S) = -\sum_i E_{q(w)}\big[E_{p(f_i|w,x_i)}[\log p(y_i|f_i)]\big]$. We denote this by $q^\star_{2A}$ and observe that it is the RCLM solution for the risk defined as
$$r_{2A}(q(w)) = E_{(x,y)\sim D}\big[E_{q(w)}\big[E_{p(f|w,x)}[-\log p(y|f)]\big]\big]. \quad (4)$$

¹ For the smoothed log loss, the translation can be applied prior to the re-scaling, i.e., $-\log\big(\frac{1-\alpha}{\max_{w,x,y} p(y|w,x)}\,p(y|w, x) + \alpha\big)$.

As shown by [25, 9, 22], alternatively, for each $w$ we can pick the optimal $q(f|w) = p(f|w, S)$. Following [25], we call this a collapsed approximation. This leads to (1) with $L(w, S) = -E_{q(w)}\big[\log E_{p(f|w,x)}[\prod_i p(y_i|f_i)]\big]$ and is denoted by $q^\star_{2Bj}$ (joint expectation). For models where $p(f|w) = \prod_i p(f_i|w)$, this simplifies to $L(w, S) = -\sum_i E_{q(w)}\big[\log E_{p(f_i|w,x_i)}[p(y_i|f_i)]\big]$, and we denote the algorithm by $q^\star_{2Bi}$ (independent expectation). Note that $q^\star_{2Bi}$ performs RCLM for the risk given by $r_{2Gib}$ even if the factorization does not hold.

Finally, viewing approximate inference as performing RCLM, we observe a discrepancy between our definition of risk in (2) and the loss function being optimized by existing algorithms, e.g., variational inference. This perspective suggests direct loss minimization, described by the alternative $L(w, S) = -\sum_i \log E_{q(w)}\big[E_{p(f_i|w,x_i)}[p(y_i|f_i)]\big]$ in (1), which we denote $q^\star_{2D}$. In this case, $q^\star_{2D}$ is a "posterior" but one for which we do not have a Bayesian interpretation. Given the discussion so far, we can hope to get some analysis for regularized loss minimization where each of the algorithms implicitly optimizes a different definition of risk. Our goal is to identify good algorithms for which we can bound the definition of risk we care about, $r_{2Bay}$, as defined in (2).

3 RCLM

Regularized loss minimization has been analyzed for general hypothesis spaces and losses. For hypothesis space $H$ and hypothesis $h \in H$ we have a loss function $\ell(h, (x, y))$ and associated risk $r(h) = E_{(x,y)\sim D}[\ell(h, (x, y))]$.
Now, given a regularizer $R: H \to \{0\} \cup \mathbb{R}^+$, a non-negative scalar $\eta$, and a sample $S$, regularized cumulative loss minimization is defined as
$$\mathrm{RCLM}(H, \ell, R, \eta, S) = \arg\min_{h\in H}\ \frac{1}{\eta}R(h) + \sum_i \ell(h, (x_i, y_i)). \quad (5)$$

Theorem 1 ([20]²). Assume that the regularizer $R(h)$ is $\sigma$-strongly convex in $h$ and the loss $\ell(h, (x, y))$ is $\rho$-Lipschitz and convex in $h$, and let $h^\star(S) = \mathrm{RCLM}(H, \ell, R, \eta, S)$. Then, for all $h \in H$,
$$E_{S\sim D^n}[r(h^\star(S))] \le r(h) + \frac{1}{\eta n}R(h) + \frac{4\rho^2\eta}{\sigma}.$$

The theorem bounds the expectation of the risk. Using Markov's inequality we can get a high-probability bound: with probability $\ge 1 - \delta$, $r(h^\star(S)) \le r(h) + \frac{1}{\delta}\big(\frac{1}{\eta n}R(h) + \frac{4\rho^2\eta}{\sigma}\big)$. Tighter dependence on $\delta$ can be achieved for bounded losses using standard techniques. To simplify the presentation we keep the expectation version throughout the paper.

For this paper we specialize RCLM to Bayesian algorithms; that is, $H$ corresponds to the parameter space for a parameterized family of (possibly degenerate) distributions, denoted $Q$, where $q \in Q$ is a distribution over a base parameter space $w$. We have already noted above that $q^\star_{Gib}(w)$, $q^\star_{2Bi}(w)$ and $q^\star_{2D}(w)$ are RCLM algorithms. We can therefore get immediate corollaries for the corresponding risks (see supplementary material). Such results are already useful, but the convexity and $\rho$-Lipschitz conditions are not always easy to analyze or guarantee. We next show how to use recent ideas from PAC-Bayes analysis to derive a similar result for the Gibbs risk under weaker requirements. We first develop the result for the one-level model. Toward this, define the loss and risk for individual base parameters as $\ell_W(w, (x, y))$ and $r_W(w) = E_D[\ell_W(w, (x, y))]$, and the empirical estimate $\hat{r}_W(w, S) = \frac{1}{n}\sum_i \ell_W(w, (x_i, y_i))$. Following [7], let $\Psi(\lambda, n) = \log E_{S\sim D^n}\big[E_{p(w)}[e^{\lambda(r_W(w) - \hat{r}_W(w, S))}]\big]$, where $\lambda$ is an additional parameter. Combining arguments from [20] with the use of the compression lemma [2] as in [7], we can derive the following bound (proof in supplementary material):

Theorem 2.
For all $q \in Q$,
$$E_{S\sim D^n}[r_{Gib}(q^\star_{Gib}(w))] \le r_{Gib}(q) + \frac{1}{\eta n}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\max_{q\in Q}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\Psi(\lambda, n).$$

The theorem applies to the two-level model by writing $p(y|w) = E_{p(f|w)}[p(y|f)]$. This yields:

Corollary 3. For all $q \in Q$,
$$E_{S\sim D^n}[r_{2Gib}(q^\star_{2Bi}(w))] \le r_{2Gib}(q) + \frac{1}{\eta n}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\max_{q\in Q}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\Psi(\lambda, n).$$

² [20] analyzed regularized average loss, but the same proof steps with minor modifications yield the statement for cumulative loss given here.

A similar result has already been derived by [1] without making the explicit connection to RCLM. However, the implied algorithm uses a "regularization factor" $\lambda$ which may not coincide with $\eta = 1$, whereas standard variational inference can be analyzed with Theorem 2 (or Corollary 3). The work of [4, 7] showed how the $\Psi$ term can be bounded. Briefly, if $\ell_W(w, (x, y))$ is bounded in $[a, b]$, then $\Psi(\lambda, n) \le \frac{\lambda^2(b-a)^2}{2n}$; if $\ell_W(w, (x, y))$ is not bounded but the random variable $r_W(w) - \ell_W(w, (x, y))$ is sub-Gaussian or sub-gamma, then $\Psi(\lambda, n)$ can be bounded with additional assumptions on the underlying distribution $D$. More details are in the supplementary material.

4 Concrete Bounds on Excess Risk in LGM

The LGM family is a special case of the two-level model where the prior $p(w)$ over the $M$-dimensional parameter $w$ is given by a Normal distribution. Following previous work, we let $Q$ be a family of Normal distributions. For the analysis we further restrict $Q$ by placing bounds on the mean and covariance as follows: $Q = \{N(w|m, V) \text{ s.t. } \|m\|_2 \le B_m,\ \lambda_{\min}(V) \ge \epsilon,\ \lambda_{\max}(V) \le B_V\}$ for some $\epsilon > 0$. The KL divergence from $q(w) = N(w|m, V)$ to $p(w) = N(w|\mu, \Sigma)$ is given by
$$\mathrm{KL}(q\|p) = \frac{1}{2}\Big(\mathrm{tr}(\Sigma^{-1}V) + (\mu - m)^T\Sigma^{-1}(\mu - m) + \log\frac{|\Sigma|}{|V|} - M\Big).$$

4.1 General Bounds on Excess Risk in LGM Against Point Estimates

First, we note that $\mathrm{KL}(q\|p)$ is bounded under a lower bound on the minimum eigenvalue of $V$ (the proof in the supplementary material follows from linear algebra identities):

Lemma 4. Let $B'_R = \frac{1}{2}\Big(\frac{M B_V + \|\mu\|_2^2 + B_m^2}{\lambda_{\min}(\Sigma)} + M\log\lambda_{\max}(\Sigma) - M\Big)$. For $q \in Q$,
$$\mathrm{KL}(q\|p) \le B_R = \frac{1}{2}\left(\frac{M B_V + \|\mu\|_2^2 + B_m^2}{\lambda_{\min}(\Sigma)} + M\log\frac{\lambda_{\max}(\Sigma)}{\epsilon} - M\right) = B'_R - \frac{1}{2}M\log\epsilon. \quad (6)$$

The risk bounds of the previous section do not allow for point-estimate competitors because the KL portion is not bounded. We next generalize a technique from [11] showing that adding a little variance to a point estimate does not hurt too much. This allows us to derive the promised bounds. In the following, $\epsilon > 0$ is a constant whose value is determined in the proof. For any $\hat{w}$, we consider the $\epsilon$-inflated distribution $q(w) = N(w|\hat{w}, \epsilon I)$ and calculate the distribution's Gibbs risk w.r.t. a generic loss. Specifically, we consider the (1L or 2L) Gibbs risk $r(q) = E_{(x,y)\sim D}[E_{q(w)}[\ell(w, (x, y))]]$ with $\ell: \mathbb{R}^M \times (X \times Y) \to \mathbb{R}$.

Lemma 5. If (i) $\ell(w, (x, y))$ is continuously differentiable in $w$ up to order 2, and (ii) $\lambda_{\max}\big(\nabla^2_w \ell(w, (x, y))\big) \le B_H$, then for $\hat{w} \in \mathbb{R}^M$ and $q(w) = N(w|\hat{w}, \epsilon I)$,
$$r_{Gib}(q(w)) = E_{(x,y)\sim D}\big[E_{q(w)}[\ell(w, (x, y))]\big] \le r_{Gib}\big(\delta(w - \hat{w})\big) + \frac{1}{2}\epsilon M B_H. \quad (7)$$

Proof. By the multivariable Taylor's theorem, for $\hat{w} \in \mathbb{R}^M$,
$$\ell(w, (x, y)) = \ell(\hat{w}, (x, y)) + \nabla_w \ell(w, (x, y))\big|_{w=\hat{w}}^T (w - \hat{w}) + \frac{1}{2}(w - \hat{w})^T \nabla^2_w \ell(w, (x, y))\big|_{w=\tilde{w}} (w - \hat{w}),$$
where $\nabla_w \ell(w, (x, y))$ and $\nabla^2_w \ell(w, (x, y))$ denote the gradient and Hessian, and $\tilde{w} = (1-\alpha)\hat{w} + \alpha w$ for some $\alpha \in [0, 1]$, where $\alpha$ is a function of $w$. Taking the expectation results in
$$E_{q(w)}[\ell(w, (x, y))] = \ell(\hat{w}, (x, y)) + \frac{1}{2}E_{q(w)}\big[(w - \hat{w})^T \nabla^2_w \ell(w, (x, y))\big|_{w=\tilde{w}} (w - \hat{w})\big]. \quad (8)$$
If the maximum eigenvalue of $\nabla^2_w \ell(w, (x, y))$ is bounded uniformly by some $B_H < \infty$, then the second term of (8) is bounded above by $\frac{1}{2}B_H E[(w - \hat{w})^T(w - \hat{w})] = \frac{1}{2}\epsilon M B_H$. Taking the expectation w.r.t. $D$ yields the statement of the lemma.

Since $Q$ includes $\epsilon$-inflated distributions centered on $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$, we have the following.

Theorem 6 (Bound on Gibbs Risk Against Point Estimate Competitors). If (i) $-\log E_{p(f|w)}[p(y|f)]$ is continuously differentiable in $w$ up to order 2, and (ii) $\lambda_{\max}\big(\nabla^2_w\big({-\log E_{p(f|w)}[p(y|f)]}\big)\big) \le B_H$, then, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$E_{S\sim D^n}[r_{2Gib}(q^\star_{2Bi}(w))] \le r_{2Gib}\big(\delta(w - \hat{w})\big) + \Delta(B_H) + \frac{1}{\lambda}\Psi(\lambda, n),$$
$$\Delta(B_H) \triangleq \frac{1}{2}M\Big(\frac{1}{n} + \frac{1}{\lambda}\Big)\Big(\frac{2}{M}B'_R + 1 + \log\frac{B_H\, n\lambda}{n + \lambda}\Big). \quad (9)$$

Proof. Using the distribution $q = N(w|\hat{w}, \epsilon I)$ in the RHS of Corollary 3 yields
$$E_{S\sim D^n}[r_{2Gib}(q^\star_{2Bi}(w))] \le r_{2Gib}(q) + \frac{1}{\eta n}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\max_{q\in Q}\mathrm{KL}(q\|p) + \frac{1}{\lambda}\Psi(\lambda, n)$$
$$\le r_{2Gib}\big(\delta(w - \hat{w})\big) + \frac{1}{2}\epsilon M B_H - \frac{1}{2}A M \log\epsilon + A B'_R + \frac{1}{\lambda}\Psi(\lambda, n), \quad (10)$$
where $A = \frac{1}{\eta n} + \frac{1}{\lambda}$ and we have used Lemma 4 and Lemma 5. Eq. (10) is optimized when $\epsilon = \frac{A}{B_H}$. Re-substituting the optimal $\epsilon$ in (10) yields
$$E_{S\sim D^n}[r_{2Gib}(q^\star_{2Bi}(w))] \le r_{2Gib}\big(\delta(w - \hat{w})\big) + \frac{1}{2}M\Big(\frac{1}{\eta n} + \frac{1}{\lambda}\Big)\Big(\frac{2}{M}B'_R + 1 - \log\Big(\frac{1}{B_H}\Big(\frac{1}{\eta n} + \frac{1}{\lambda}\Big)\Big)\Big) + \frac{1}{\lambda}\Psi(\lambda, n). \quad (11)$$
Setting $\eta = 1$ yields the result.

The theorem calls for running the variational algorithm with constraints on the eigenvalues of $V$. The fixed-point characterization [21] of the optimal solution in linear LGM implies that such constraints hold for the optimal solution. Therefore, they need not be enforced explicitly in these models.

For any distribution $q(w)$ and function $f(w)$ we have $\min_w f(w) \le E_{q(w)}[f(w)]$. Therefore, the minimizer of the Gibbs risk is a point estimate, which with Theorem 6 implies:

Corollary 7. Under the conditions of Theorem 6, for all $q(w) = N(w|m, V)$ with $\|m\|_2 \le B_m$,
$$E_{S\sim D^n}[r_{2Gib}(q^\star_{2Bi}(w))] \le r_{2Gib}(q(w)) + \Delta(B_H) + \frac{1}{\lambda}\Psi(\lambda, n).$$

More importantly, as another immediate corollary, we have a bound for the Bayes risk:

Corollary 8 (Bound on Bayes Risk Against Point Estimate Competitors). Under the conditions of Theorem 6, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$E_{S\sim D^n}[r_{2Bay}(q^\star_{2Bi}(w))] \le r_{2Bay}\big(\delta(w - \hat{w})\big) + \Delta(B_H) + \frac{1}{\lambda}\Psi(\lambda, n).$$

Proof. Follows from (a) $\forall q$, $r_{2Bay}(q) \le r_{2Gib}(q)$ (Jensen's inequality), and (b) $\forall \hat{w} \in \mathbb{R}^M$, $r_{2Bay}(\delta(w - \hat{w})) = r_{2Gib}(\delta(w - \hat{w}))$. The extension to the Bayes risk in step (b) of the proof is only possible thanks to the extension to point estimates.
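The $\epsilon$-inflation step of Lemma 5 is easy to check numerically. The sketch below (illustrative numpy code; the data point and $\hat{w}$ are invented) uses the logistic log loss, whose Hessian in $w$ is $\sigma(1-\sigma)\,xx^T$ and hence has $\lambda_{\max} \le B_H = \|x\|_2^2/4$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Lemma 5 check for the logistic log loss l(w) = -log sigmoid(y * w^T x):
# the Gibbs loss of the eps-inflated q = N(w_hat, eps I) should exceed the
# point loss at w_hat by at most 0.5 * eps * M * B_H, with B_H = ||x||^2 / 4.
x = np.array([1.0, -2.0, 0.5])
y = 1.0
w_hat = np.array([0.3, 0.1, -0.2])
M, eps = x.size, 0.01
B_H = (x @ x) / 4.0

w = w_hat + np.sqrt(eps) * rng.standard_normal((200_000, M))  # w ~ N(w_hat, eps I)
gibbs = np.log1p(np.exp(-y * (w @ x))).mean()                 # Monte Carlo Gibbs loss
point = np.log1p(np.exp(-y * (w_hat @ x)))                    # point-estimate loss
assert gibbs <= point + 0.5 * eps * M * B_H + 1e-3            # small Monte Carlo slack
```

In practice the inflation cost is well below the $\frac{1}{2}\epsilon M B_H$ bound, since the Hessian eigenvalue bound is only attained at isolated points.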
As stated in the previous section, for bounded losses, $\Psi(\lambda, n)$ is bounded by $\frac{\lambda^2(b-a)^2}{2n}$. As in [7], we can choose $\lambda = \sqrt{n}$ or $\lambda = n$ to obtain decay rates $\frac{\log n}{\sqrt{n}}$ or $\frac{\log n}{n}$ respectively, where the latter has a fixed non-decaying gap term $(b-a)^2/2$. However, unlike [7], in our proof both cases are achievable with $\eta = 1$, i.e., for the variational algorithm. For example, using $\eta = 1$, $\lambda = \sqrt{n}$, the prior with $\mu = 0$ and $\Sigma = \frac{1}{M}(M B_V + B_m^2) I$, and a bounded loss,
$$\Delta(B_H) + \frac{1}{\lambda}\Psi(\lambda, n) \le \frac{M}{\sqrt{n}}\Big(1 + \log B_H + \log n + \log\big(B_V + \tfrac{1}{M}B_m^2\big) + \frac{(b-a)^2}{2M}\Big).$$

The results above are developed for the log loss, but we can apply them more generally. Toward this, we note that Corollary 3 holds for an arbitrary loss, and Lemma 5 and Theorem 6 hold for a sufficiently smooth loss with a bounded second derivative w.r.t. $w$. The conversion to the Bayes risk in Corollary 8 holds for any loss convex in $p$. Therefore, the result of Corollary 8 holds more generally for any sufficiently smooth loss that has a bounded second derivative in $w$ and that is convex in $p$. We provide an application of this more general result in the next section.

4.2 Applications in Concrete Models

This section develops bounds on $\Psi$ and $B_H$ for members of the 2L family.

CTM: For a document, the generative model for CTM first draws $w \sim N(\mu, \Sigma)$, $w \in \mathbb{R}^{K-1}$, where $\{\mu, \Sigma\}$ are model parameters, and then maps this vector to the $K$-simplex with the logistic transformation, $\theta = h(w)$. For each position $i$ in the document, the latent topic variable $f_i$ is drawn from $\mathrm{Discrete}(\theta)$, and the word $y_i$ is drawn from $\mathrm{Discrete}(\beta_{f_i,\cdot})$, where $\beta$ denotes the topics and is treated as a parameter of the model. In this case $p(f|w)$ can be integrated out analytically and the loss is $-\log\big(\sum_{k=1}^K \beta_{k,y}\, h_k(w)\big)$. We have (proof in supplementary material):

Corollary 9. For CTM models where the parameters $\beta_{k,y}$ are uniformly bounded away from 0, i.e., $\beta_{k,y} \ge \gamma > 0$, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$E_{S\sim D^n}[r_{2Bay}(q^\star_{2Bi}(w))] \le r_{2Bay}\big(\delta(w - \hat{w})\big) + \Delta(B_H) + \frac{\lambda(\log\gamma)^2}{2n}$$
with $B_H = 5$.
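The per-word CTM loss and the role of $\gamma$ can be illustrated with a short sketch (illustrative numpy code, not from the paper; the softmax-with-appended-zero map is one common convention for the logistic transformation, and the topic matrix is randomly generated):

```python
import numpy as np

rng = np.random.default_rng(3)

def ctm_word_loss(w, beta, y):
    # Per-word CTM loss: -log( sum_k beta[k, y] * h_k(w) ), where h maps
    # R^{K-1} to the K-simplex; here a fixed zero coordinate is appended
    # before the softmax (a common convention, assumed here).
    z = np.append(w, 0.0)
    theta = np.exp(z - z.max())
    theta /= theta.sum()
    return -np.log(beta[:, y] @ theta)

K, vocab = 4, 10
beta = rng.dirichlet(np.ones(vocab), size=K)   # K topics over a 10-word vocabulary
gamma = beta.min()                             # uniform lower bound on beta[k, y]
loss = ctm_word_loss(rng.standard_normal(K - 1), beta, y=3)
# With beta[k, y] >= gamma > 0, the per-word log loss lies in (0, -log gamma],
# which is how Corollary 9 controls the Psi term.
assert 0.0 < loss <= -np.log(gamma)
```

The bound $-\log\gamma$ on the per-word loss is exactly the $(b-a)$ range plugged into $\Psi(\lambda, n) \le \lambda^2(b-a)^2/(2n)$ in Corollary 9.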
The following lemma is expressed in terms of log loss but also holds for the smoothed log loss (proof in supplementary material):

Lemma 10. When $f$ is a deterministic function of $w$, if (i) $-\log p(y|f(w, x))$ is continuously differentiable in $f$ up to order 2, and $f(w, x)$ is continuously differentiable in $w$ up to order 2, (ii) $\big|\frac{\partial^2[-\log p(y|f)]}{\partial f^2}\big| \le c_2$, (iii) $\big|\frac{\partial[-\log p(y|f)]}{\partial f}\big| \le c_1$, (iv) $\|\nabla_w f(w, x)\|_2^2 \le c_{f_1}$, and (v) $\sigma_{\max}\big(\nabla^2_w f(w, x)\big) \le c_{f_2}$ ($\sigma_{\max}$ is the max singular value), then $B_H = c_2 c_{f_1} + c_1 c_{f_2}$.

GLM: The bound of [11] for GLM was developed for exact Bayesian inference. The following corollary extends this to approximate inference through RCLM. In GLM, $f = w^T x$, $\|\nabla_w f\|_2 = \|x\|_2$, and $\nabla^2_w f = 0$, so a bound on $B_H$ is immediate from Lemma 10. In addition, the smoothed loss is bounded: $0 \le \tilde{\ell} \le -\log\alpha$. This implies:

Corollary 11. For GLM, if (i) $\tilde{\ell}(w, (x, y)) = -\log\big((1-\alpha)\,p(y|f(w, x)) + \alpha\big)$ is continuously differentiable in $f$ up to order 2, and (ii) $\big|\frac{\partial^2\tilde{\ell}}{\partial f^2}\big| \le c$, then, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$E_{S\sim D^n}[\tilde{r}_{2Bay}(\tilde{q}^\star_{2Bi}(w))] \le \tilde{r}_{2Bay}\big(\delta(w - \hat{w})\big) + \Delta(B_H) + \frac{\lambda(\log\alpha)^2}{2n}$$
with $B_H = c\,\max_{x\in X}\|x\|_2^2$.

We develop the bound $c$ for the logistic and Normal likelihoods (see supplementary material). Let $\alpha' = \frac{\alpha}{1-\alpha}$. For the logistic likelihood $\sigma(yf)$, we have $c = \frac{1}{16}\frac{1}{(\alpha')^2} + \frac{\sqrt{3}}{18}\frac{1}{\alpha'}$. For the Gaussian likelihood $\frac{1}{\sqrt{2\pi}\sigma_Y}\exp\big(-\frac{1}{2}\frac{(y-f)^2}{\sigma_Y^2}\big)$, we have $c = \frac{1}{2\pi\sigma_Y^4 e}\frac{1}{(\alpha')^2} + \frac{1}{\sqrt{2\pi}\sigma_Y^3}\frac{1}{\alpha'}$.

The work of [7] has claimed³ a bound on the Gibbs risk for linear regression, which should be compared to our result for the Gaussian likelihood. Their result is developed under the assumption that the Bayesian model specification is correct and, in addition, that $x$ is generated from $x \sim N(0, \sigma_x^2 I)$. In contrast, our result, using the smoothed loss, holds for arbitrary distributions $D$ without the assumption of correct model specification.

³ Denoting $\Delta r_i(w) = r_W(w) - \hat{r}_W(w, (x_i, y_i))$ and $f_i(w, n, \lambda) = E_{p(\Delta r_i(w))}[\exp(\frac{\lambda}{n}\Delta r_i(w))]$, the proof of Corollary 5 in [7] erroneously replaces $E_{p(w)}[\prod_i f_i(w, n, \lambda)]$ with $\prod_i E_{p(w)}[f_i(w, n, \lambda)]$. We are not aware of a correction of this proof which yields a correct bound for $\Psi$ without using a smoothed loss. Any such bound would, of course, be applicable with our Corollary 8.
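The boundedness and curvature claims behind Corollary 11 can be spot-checked numerically for the logistic likelihood (illustrative numpy sketch; the constant $c$ is the one stated above, and the second derivative is estimated by finite differences on a grid):

```python
import numpy as np

# Smoothed log loss for the logistic likelihood:
#   l(f) = -log( (1 - alpha) * sigmoid(f) + alpha ).
# It is bounded in [0, -log(alpha)], and its second derivative in f stays
# bounded, which is what makes B_H finite in Corollary 11.
alpha = 0.1

def smoothed_loss(f):
    return -np.log((1 - alpha) / (1 + np.exp(-f)) + alpha)

f = np.linspace(-30, 30, 20001)
loss = smoothed_loss(f)
assert loss.min() >= -1e-12 and loss.max() <= -np.log(alpha) + 1e-12

# Finite-difference estimate of max |d^2 l / df^2| on the grid, compared to
# the constant c for the logistic likelihood from the text.
h = f[1] - f[0]
d2 = np.abs(np.diff(loss, 2)) / h**2
alpha_p = alpha / (1 - alpha)
c_logistic = 1 / 16 / alpha_p**2 + np.sqrt(3) / 18 / alpha_p
assert d2.max() <= c_logistic + 1e-3
```

The numeric maximum curvature sits well below $c$, since $c$ combines worst-case bounds on the density's value and derivative that are not attained simultaneously.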
Sparse GP: In the sparse GP model, the conditional is $p(f \mid w, x) = N\big(f \mid a(x)^T w + b(x), \sigma^2(x)\big)$, where $a(x)^T = K_{Ux}^T K_{UU}^{-1}$, $b(x) = \mu_x - K_{Ux}^T K_{UU}^{-1}\mu_U$, and $\sigma^2(x) = K_{xx} - K_{Ux}^T K_{UU}^{-1} K_{Ux}$, with $\mu$ denoting the mean function and $K_{Ux}$, $K_{UU}$ denoting the kernel matrices evaluated at inputs $(U,x)$ and $(U,U)$, respectively. In the conjugate case, the likelihood is given by $p(y \mid f) = N(y \mid f, \sigma_Y^2)$, and integrating $f$ out yields $N\big(y \mid a(x)^T w + b(x), \sigma^2(x) + \sigma_Y^2\big)$. Using the smoothed loss, we obtain:

Corollary 12. For conjugate sparse GP, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$\mathbb{E}_{S\sim D^n}\big[\tilde{r}_{2\mathrm{Bay}}(\tilde{q}^\star_{2\mathrm{Bi}}(w))\big] \;\le\; \tilde{r}_{2\mathrm{Bay}}(\delta(w-\hat{w})) + \Delta(B_H) + \frac{\lambda(\log\alpha)^2}{2n}$$
with $B_H = c\,\max_{x\in\mathcal{X}} \|a(x)\|_2^2$, where $c = \frac{1}{2\pi\sigma_Y^4 e}\frac{1}{(\alpha')^2} + \frac{1}{\sqrt{2\pi}\sigma_Y^3}\frac{1}{\alpha'}$.

Proof. The Hessian is given by
$$\nabla_w^2 \tilde{\ell}(w,(x,y)) = \frac{1}{(N+\alpha')^2}\,\nabla_w N\,(\nabla_w N)^T - \frac{1}{N+\alpha'}\,\nabla_w^2 N,$$
where $N$ denotes $N\big(y \mid f(w), \sigma^2(x)+\sigma_Y^2\big)$ with $f(w) = a(x)^T w + b(x)$. The gradient $\nabla_w N$ equals $\frac{\partial N}{\partial f(w)}\,a(x)$ and the Hessian $\nabla_w^2 N$ equals $\frac{\partial^2 N}{\partial f(w)^2}\,a(x)a(x)^T$. Therefore,
$$\nabla_w^2 \tilde{\ell} = \left[\frac{1}{(N+\alpha')^2}\Big(\frac{\partial N}{\partial f(w)}\Big)^2 - \frac{1}{N+\alpha'}\,\frac{\partial^2 N}{\partial f(w)^2}\right] a(x)a(x)^T = \frac{\partial^2\big[-\log((1-\alpha)N+\alpha)\big]}{\partial f(w)^2}\,a(x)a(x)^T.$$
The result of Corollary 11 for the Gaussian likelihood can be used to bound the second derivative of the smoothed loss:
$$\frac{\partial^2\big[-\log((1-\alpha)N+\alpha)\big]}{\partial f(w)^2} \le \frac{1}{2\pi(\sigma^2(x)+\sigma_Y^2)^2 e}\frac{1}{(\alpha')^2} + \frac{1}{\sqrt{2\pi}\,(\sigma^2(x)+\sigma_Y^2)^{3/2}}\frac{1}{\alpha'} \le \frac{1}{2\pi\sigma_Y^4 e}\frac{1}{(\alpha')^2} + \frac{1}{\sqrt{2\pi}\sigma_Y^3}\frac{1}{\alpha'} = c.$$
Finally, the eigenvalue of the rank-1 matrix $c\,a(x)a(x)^T$ is bounded by $c\,\max_{x\in\mathcal{X}} \|a(x)\|_2^2$.

Remark 1. We noted above that, for sGP, $q^\star_{2\mathrm{Bi}}$ does not correspond to a variational algorithm. The standard variational approach uses $q^\star_{2\mathrm{A}}$, and the collapsed bound uses $q^\star_{2\mathrm{Bj}}$ (but requires cubic time). It can be shown that $q^\star_{2\mathrm{Bi}}$ corresponds exactly to the fully independent training conditional (FITC) approximation for sGP [24, 16], in that their optimal solutions are identical. Our result can be seen to justify the use of this algorithm, which is known to perform well empirically.

Finally, we consider binary classification in GLM with the convex loss function $\ell'(w,(x,y)) = \frac{1}{8}\big(y - (2p(y \mid w,x) - 1)\big)^2$. The proof of the following corollary is in the supplementary material:

Corollary 13. For GLM with $p(y \mid w,x) = \sigma(y w^T x)$, for all $\hat{w}$ with $\|\hat{w}\|_2 \le B_m$,
$$\mathbb{E}_{S\sim D^n}\big[r'_{2\mathrm{Bay}}(q'^\star_{2\mathrm{Bi}}(w))\big] \;\le\; r'_{2\mathrm{Bay}}(\delta(w-\hat{w})) + \Delta(B_H) + \frac{\lambda}{8n}$$
with $B_H = \frac{5}{16}\max_{x\in\mathcal{X}} \|x\|_2^2$.

4.3 Direct Application of RCLM to Conjugate Linear LGM

In this section we derive a bound for an algorithm that optimizes a surrogate of the loss directly. In particular, we consider the Bayes loss for linear LGM with conjugate likelihood $p(y \mid f) = N(y \mid f, \sigma_Y^2)$, where
$$-\log \mathbb{E}_{q(w)}\big[\mathbb{E}_{p(f\mid w)}[p(y \mid f)]\big] = -\log N\big(y \mid a^T m + b,\ \sigma^2 + \sigma_Y^2 + a^T V a\big),$$
and where $a$, $b$, and $\sigma^2$ are functions of $x$. This includes, for example, linear regression and conjugate sGP. The proposed algorithm $q^\star_{2\mathrm{Ds}}$ performs RCLM with competitor set $\Theta = \{(m,V) : \|m\|_2 \le B_m,\ V \in S_{++},\ \|V\|_F \le B_V\}$, regularizer $R(m,V) = \frac{1}{2}\|m\|_2^2 + \frac{1}{2}\|V\|_F^2$, $\eta = \frac{1}{\sqrt{n}}$, and the surrogate loss
$$\ell_{\mathrm{surr}}(m,V) = \tfrac{1}{2}\log(2\pi) + \tfrac{1}{2}\log\big(\sigma^2 + \sigma_Y^2 + a^T V a\big) + \tfrac{1}{2}\,\frac{(y - a^T m - b)^2}{\sigma^2 + \sigma_Y^2 + a^T V a}.$$
With these definitions we can apply Theorem 1 to get (proof in supplementary material):

Theorem 14. With probability at least $1-\delta$,
$$r_{2\mathrm{Bay}}(q^\star_{2\mathrm{Ds}}) \;\le\; \min_{q\in Q} r^{\mathrm{surr}}_{2\mathrm{Bay}}(q(w)) + \frac{1}{\delta\sqrt{n}}\Big(B_m^2 + B_V^2 + 8(\rho_m^2 + \rho_V^2)\Big),$$
where $\rho_m = \frac{1}{\sigma_Y^2}\max_{x\in\mathcal{X}}\|a\|_2 \max_{x\in\mathcal{X}, y\in\mathcal{Y}, m} |y - a^T m - b|$ and $\rho_V = \frac{1}{2\sigma_Y^2}\max_{x\in\mathcal{X}, y\in\mathcal{Y}, m}\|a\|_2^2\Big(1 + \frac{(y-a^Tm-b)^2}{\sigma_Y^2}\Big)$.
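The closed form for the Bayes loss above rests on the Gaussian marginalization identity $\mathbb{E}_{f\sim N(\mu,s)}[N(y \mid f, \sigma_Y^2)] = N(y \mid \mu, s + \sigma_Y^2)$. The following sketch (our own numerical illustration; all constants are arbitrary) checks this identity by brute-force integration over f:

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

# Marginalizing f out of N(f | mu, s) * N(y | f, sY2) should give N(y | mu, s + sY2),
# which is the convolution identity behind the closed-form Bayes loss.
mu, s, sY2, y = 0.4, 0.3, 0.5, 1.1
step = 0.005
grid = [mu - 20.0 + i * step for i in range(8001)]   # wide grid around mu
integral = sum(normal_pdf(f, mu, s) * normal_pdf(y, f, sY2) for f in grid) * step
closed_form = normal_pdf(y, mu, s + sY2)

# The Bayes log loss at this point is then just -log of the closed form.
bayes_loss = -math.log(closed_form)
```

In the linear LGM setting, μ would play the role of aᵀm + b and s the role of σ² + aᵀVa, so the surrogate loss is exactly this negative log density written out term by term.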
5 Direct Loss Minimization

The results in this paper expose the fact that the different algorithms are implicitly optimizing criteria for different loss functions. In particular, $q^\star_{2\mathrm{A}}$ optimizes for $r_{2\mathrm{A}}$, $q^\star_{2\mathrm{Bi}}$ optimizes for $r_{2\mathrm{Gib}}$, and $q^\star_{2\mathrm{D}}$ optimizes for $r_{2\mathrm{Bay}}$. Even though we were able to bound $r_{2\mathrm{Bay}}$ of the $q^\star_{2\mathrm{Bi}}$ algorithm, it is interesting to check the performance of these algorithms in practice. We present an experimental study comparing these algorithms on the correlated topic model (CTM) that was described in the previous section. To explore the relation between the algorithms and their performance, we run the three algorithms and report their empirical risk on a test set, where the risk is also measured in three different ways. Figure 1 shows the corresponding learning curves on an artificial document generated from the model. Full experimental details and additional results on a real dataset are given in the supplementary material.

[Figure 1: Artificial data. Cumulative test-set losses $\sum_{y_i\in\mathrm{test}} \ell_{2\mathrm{A}}(y_i)$, $\sum_{y_i\in\mathrm{test}} \ell_{2\mathrm{Gib}}(y_i)$, and $\sum_{y_i\in\mathrm{test}} \ell_{2\mathrm{Bay}}(y_i)$ of the different variational algorithms; the x-axis is the iteration. Mean ± 1σ of 30 trials are shown per objective. $q^\star_{2\mathrm{A}}$ in blue, $q^\star_{2\mathrm{Bi}}$ in green, $q^\star_{2\mathrm{D}}$ in red.]

We observe that at convergence each algorithm is best at optimizing its own implicit criterion. However, considering $r_{2\mathrm{Bay}}$, the differences between the outputs of the variational algorithm $q^\star_{2\mathrm{Bi}}$ and direct loss minimization $q^\star_{2\mathrm{D}}$ are relatively small. We also see that, at least in this case, $q^\star_{2\mathrm{Bi}}$ takes longer to reach the optimal point for $r_{2\mathrm{Bay}}$.
Clearly, except for its own implicit criterion, $q^\star_{2\mathrm{A}}$ should not be used. This agrees with prior empirical work on $q^\star_{2\mathrm{A}}$ and $q^\star_{2\mathrm{Bi}}$ [22]. The current experiment shows the potential of direct loss optimization for improved performance, but it also justifies the use of $q^\star_{2\mathrm{Bi}}$ both under correct model specification (artificial data) and when the model is incorrect (real data in the supplement). Preliminary experiments in sparse GP show similar trends. The comparison in that case is more complex because $q^\star_{2\mathrm{Bi}}$ is not the same as the collapsed variational approximation, which in turn requires cubic time to compute, and we additionally have the surrogate optimizer $q^\star_{2\mathrm{Ds}}$. We defer a full empirical exploration in sparse GP to future work.

6 Discussion

The paper provides agnostic learning bounds for the risk of the Bayesian predictor, which uses the posterior calculated by RCLM, against the best single predictor. The bounds apply to a wide class of Bayesian models, including GLM, sGP and CTM. For CTM our bound applies precisely to the variational algorithm with the collapsed variational bound. For sGP and GLM the bounds apply to bounded variants of the log loss. The results add theoretical understanding of why approximate inference algorithms are successful, even though they optimize the wrong objective, and therefore justify the use of such algorithms. In addition, we expose a discrepancy between the loss used in optimization and the loss typically used in evaluation, and propose alternative algorithms using regularized loss minimization. A preliminary empirical evaluation in CTM shows the potential of direct loss minimization, but also that the collapsed variational approximation $q^\star_{2\mathrm{Bi}}$ has the advantage of strong theoretical guarantees and excellent empirical performance, both when the Bayesian model is correct and under model misspecification. Our results can be seen as a first step toward a full analysis of approximate Bayesian inference methods.
One limitation is that the competitor class in our results is restricted to point estimates. While point-estimate predictors are optimal for the Gibbs risk, they are not optimal for Bayes predictors. In addition, the bounds show that the Bayesian procedures will do almost as well as the best point estimator; however, they do not show an advantage over such estimators, whereas one would expect such an advantage. It would also be interesting to incorporate direct loss minimization within the Bayesian framework. These issues remain an important challenge for future work.

Acknowledgments

This work was partly supported by NSF under grant IIS-1714440.

References

[1] Pierre Alquier, James Ridgway, and Nicolas Chopin. On the properties of variational approximations of Gibbs posteriors. JMLR, 17:1–41, 2016.
[2] Arindam Banerjee. On Bayesian bounds. In ICML, pages 81–88, 2006.
[3] David M. Blei and John D. Lafferty. Correlated topic models. In NIPS, pages 147–154, 2006.
[4] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[6] Arnak S. Dalalyan and Alexandre B. Tsybakov. Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Machine Learning, 72:39–61, 2008.
[7] Pascal Germain, Francis Bach, Alexandre Lacoste, and Simon Lacoste-Julien. PAC-Bayesian theory meets Bayesian inference. In NIPS, pages 1876–1884, 2016.
[8] James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In AISTATS, pages 351–360, 2015.
[9] Matthew D. Hoffman and David M. Blei. Structured stochastic variational inference. In AISTATS, pages 361–369, 2015.
[10] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models.
Machine Learning, 37:183–233, 1999.
[11] Sham M. Kakade and Andrew Y. Ng. Online bounds for Bayesian algorithms. In NIPS, pages 641–648, 2004.
[12] Alexandre Lacasse, François Laviolette, Mario Marchand, Pascal Germain, and Nicolas Usunier. PAC-Bayes bounds for the risk of the majority vote and the variance of the Gibbs classifier. In NIPS, pages 769–776, 2006.
[13] Moshe Lichman. UCI machine learning repository, 2013. http://archive.ics.uci.edu/ml.
[14] David A. McAllester. Some PAC-Bayesian theorems. In COLT, pages 230–234, 1998.
[15] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[16] Joaquin Quiñonero-Candela, Carl E. Rasmussen, and Ralf Herbrich. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[17] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[19] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4:107–194, 2012.
[20] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[21] Rishit Sheth and Roni Khardon. A fixed-point operator for inference in variational Bayesian latent Gaussian models. In AISTATS, pages 761–769, 2016.
[22] Rishit Sheth and Roni Khardon. Monte Carlo structured SVI for non-conjugate models. arXiv:1309.6835, 2016.
[23] Rishit Sheth, Yuyang Wang, and Roni Khardon. Sparse variational inference for generalized Gaussian process models. In ICML, pages 1302–1311, 2015.
[24] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257–1264, 2006.
[25] Yee Whye Teh, David Newman, and Max Welling.
A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, pages 1353–1360, 2006.
[26] Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, pages 567–574, 2009.
[27] Sheng-De Wang, Te-Son Kuo, and Chen-Fa Hsu. Trace bounds on the solution of the algebraic matrix Riccati and Lyapunov equation. IEEE Transactions on Automatic Control, 31:654–656, 1986.
Online control of the false discovery rate with decaying memory

Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan
University of California, Berkeley
{aramdas, fanny-yang, wainwrig, jordan}@berkeley.edu

Abstract

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed. Alpha-investing algorithms to control the false discovery rate (FDR), formulated by Foster and Stine, have been generalized and applied to many settings, including quality-preserving databases in science and multiple A/B or multi-armed bandit tests for internet commerce. This paper improves the class of generalized alpha-investing algorithms (GAI) in four ways: (a) we show how to uniformly improve the power of the entire class of monotone GAI procedures by awarding more alpha-wealth for each rejection, giving a win-win resolution to a recent dilemma raised by Javanmard and Montanari; (b) we demonstrate how to incorporate prior weights to indicate domain knowledge of which hypotheses are likely to be non-null; (c) we allow for differing penalties for false discoveries to indicate that some hypotheses may be more important than others; (d) we define a new quantity called the decaying memory false discovery rate (mem-FDR) that may be more meaningful for truly temporal applications, and which alleviates problems that we describe and refer to as "piggybacking" and "alpha-death." Our GAI++ algorithms incorporate all four generalizations simultaneously, and reduce to more powerful variants of earlier algorithms when the weights and decay are all set to unity. Finally, we also describe a simple method to derive new online FDR rules based on an estimated false discovery proportion.
1 Introduction

The problem of multiple comparisons was first recognized in the seminal monograph by Tukey [12]: simply stated, given a collection of multiple hypotheses to be tested, the goal is to distinguish between the nulls and non-nulls, with suitable control on different types of error. We are given access to one p-value for each hypothesis, which we use to decide which subset of hypotheses to reject, effectively proclaiming the rejected hypotheses as being non-null. The rejected hypotheses are called discoveries, and the subset of these that were truly null—and hence mistakenly rejected—are called false discoveries. In this work, we measure a method's performance using the false discovery rate (FDR) [2], defined as the expected ratio of false discoveries to total discoveries. Specifically, we require that any procedure must guarantee that the FDR is bounded by a pre-specified constant α.

The traditional form of multiple testing is offline in nature, meaning that an algorithm testing N hypotheses receives the entire batch of p-values {P_1, ..., P_N} at one time instant. In the online version of the problem, we do not know how many hypotheses we are testing in advance; instead, a possibly infinite sequence of p-values appears one by one, and a decision about rejecting the null must be made before the next p-value is received. There are at least two different motivating justifications for considering the online setting:

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

M1. We may have the entire batch of p-values available at our disposal from the outset, but we may nevertheless choose to process the p-values one by one in a particular order.
Indeed, if one can use prior knowledge to ensure that non-nulls typically appear earlier in the ordering, then carefully designed online procedures could result in more discoveries than offline algorithms (that operate without prior knowledge) such as the classical Benjamini–Hochberg algorithm [2], while having the same guarantee on FDR control. This motivation underlies one of the original online multiple testing papers, namely that of Foster and Stine [5].

M2. We may genuinely conduct a sequence of tests one by one, where both the choice of the next null hypothesis and the level at which it is tested may depend on the results of the previous tests. Motivating applications include the desire to provide anytime guarantees for (i) internet companies running a sequence of A/B tests over time [9], (ii) pharmaceutical companies conducting a sequence of clinical trials using multi-armed bandits [13], or (iii) quality-preserving databases in which different research teams test different hypotheses on the same data over time [1].

The algorithms developed in this paper apply to both settings, with emphasis on motivation M2. Let us first reiterate the need for corrections when testing a sequence of hypotheses in the online setting, even when all the p-values are independent. If each hypothesis i is tested independently of the total number of tests either performed before it or to be performed after it, then we have no control over the number of false discoveries made over time. Indeed, if our test for every P_i takes the form 1{P_i ≤ α} for some fixed α, then, while the type 1 error for any individual test is bounded by α, the set of discoveries could have arbitrarily poor FDR control.
For example, under the "global null" where every hypothesis is truly null, as long as the number of tests N is large and the null p-values are uniform, this method will make at least one rejection with high probability (w.h.p.), and since in this setting every discovery is a false discovery, w.h.p. the FDR will equal one.

A natural alternative that takes multiplicity into account is the Bonferroni correction. If one knew the total number N of tests to be performed, the decision rule 1{P_i ≤ α/N} for each i ∈ {1, ..., N} controls the probability of even a single false discovery—a quantity known as the familywise error rate or FWER—at level α, as can be seen by applying the union bound. The natural extension of this solution to an unknown and potentially infinite number of tests is called alpha-spending. Specifically, we choose any sequence of constants {α_i}_{i∈ℕ} such that Σ_i α_i ≤ α, and on receiving P_i, our decision is simply 1{P_i ≤ α_i}. However, such methods typically make very few discoveries—meaning that they have very low power—when the number of tests is large, because they must divide their error budget of α, also called alpha-wealth, among a large number of tests.

Since the FDR is less stringent than FWER, procedures that guarantee FDR control are generally more powerful, and often far more powerful, than those controlling FWER. This fact has led to the wide adoption of FDR as a de-facto standard for offline multiple testing (note, e.g., that the Benjamini–Hochberg paper [2] currently has over 40,000 citations).

Foster and Stine [5] designed the first online alpha-investing procedures that use and earn alpha-wealth in order to control a modified definition of FDR. Aharoni and Rosset [1] further extended this to a class of generalized alpha-investing (GAI) methods, but once more for the modified FDR.
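To make the alpha-spending idea concrete, here is a minimal sketch (our own illustration; the choice γ_i = 1/(i(i+1)), which telescopes to a total of one, is just one valid sequence):

```python
def alpha_spending(p_values, alpha=0.05):
    """Online alpha-spending sketch: hypothesis i is tested at level
    alpha_i = alpha / (i * (i + 1)). Since sum_i 1/(i*(i+1)) = 1 (telescoping),
    the levels sum to alpha, so FWER <= alpha by a union bound."""
    return [p <= alpha / (i * (i + 1)) for i, p in enumerate(p_values, start=1)]

# Hypothetical p-values; the levels shrink as 0.05/2, 0.05/6, 0.05/12, 0.05/20, ...
decisions = alpha_spending([0.001, 0.5, 0.004, 0.2])
```

The rapidly shrinking levels illustrate the low-power problem described above: only very small p-values can survive once many tests have been run.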
It was only recently that Javanmard and Montanari [9] demonstrated that monotone GAI algorithms, appropriately parameterized, can control the (unmodified) FDR for independent p-values. It is this last work that our paper directly improves upon and generalizes; however, as we summarize below, many of our modifications and generalizations are immediately applicable to all previous algorithms.

Contributions and outline. Instead of presenting the most general and improved algorithms immediately, we choose to present results in a bottom-up fashion, introducing one new concept at a time so as to lighten the symbolic load on the reader. For this purpose, we set up the problem formally in Section 2. Our contributions are organized as follows:

1. Power. In Section 3, we introduce the generalized alpha-investing (GAI) procedures, and demonstrate how to uniformly improve the power of monotone GAI procedures that control FDR for independent p-values, resulting in a win-win resolution to a dilemma posed by Javanmard and Montanari [9]. This improvement is achieved by a somewhat subtle modification that allows the algorithm to reward more alpha-wealth at every rejection but the first. We refer to our algorithms as improved generalized alpha-investing (GAI++) procedures, and provide intuition for why they work through a general super-uniformity lemma (see Lemma 1 in Section 3.2). We also provide an alternate way of deriving online FDR procedures by defining and bounding a natural estimator $\widehat{\mathrm{FDP}}$ of the false discovery proportion.

2. Weights. In Section 5, we demonstrate how to incorporate certain types of prior information about the different hypotheses. For example, we may have a prior weight for each hypothesis, indicating whether it is more or less likely to be null. Additionally, we may have a different penalty weight for each hypothesis, indicating differing importance of hypotheses.
These prior and penalty weights have been incorporated successfully into offline procedures [3, 6, 11]. In the online setting, however, there are some technical challenges that prevent immediate application of these offline procedures. For example, in the offline setting all the weights are constants, but in the online setting we allow them to be random variables that depend on the sequence of past rejections. Further, in the offline setting all provided weights are renormalized to have an empirical mean of one, but in the truly online setting (motivation M2) we do not know the sequence of hypotheses or their random weights in advance, and hence we cannot perform any such renormalization. We clearly outline and handle such issues and design novel prior- and/or penalty-weighted GAI++ algorithms that control the penalty-weighted FDR at any time. This may be seen as an online analog of doubly-weighted procedures for the offline setting [4, 11]. Setting the weights to unity recovers the original class of GAI++ procedures.

3. Decaying memory. In Section 6, we discuss some implications of the fact that existing algorithms have an infinite memory and treat all past rejections equally, no matter when they occurred. This causes phenomena that we term "piggybacking" (a string of bad decisions, riding on past earned alpha-wealth) and "alpha-death" (a permanent end to decision-making when the alpha-wealth is essentially zero). These phenomena may be desirable or acceptable under motivation M1 when dealing with batch problems, but are generally undesirable under motivation M2. To address these issues, we propose a new error metric called the decaying memory false discovery rate, abbreviated as mem-FDR, that we view as better suited to multiple testing for truly temporal problems. Briefly, mem-FDR pays more attention to recent discoveries by introducing a user-defined discount factor, 0 < δ ≤ 1, into the definition of FDR.
We demonstrate how to design GAI++ procedures that control online mem-FDR, and show that they have a stable and robust behavior over time. Using δ < 1 allows these procedures to slowly forget their past decisions (reducing piggybacking), or they can temporarily "abstain" from decision-making (allowing rebirth after alpha-death). Instantiating δ = 1 recovers the class of GAI++ procedures.

We note that the generalizations to incorporate weights and decaying memory are entirely orthogonal to the improvements that we introduce to yield GAI++ procedures, and hence these ideas immediately extend to other GAI procedures for non-independent p-values. We also describe simulations involving several of the aforementioned generalizations in Appendix C.

2 Problem Setup

At time t = 0, before the p-values begin to appear, we fix the level α at which we wish to control the FDR over time. At each time step t = 1, 2, ..., we observe a p-value P_t corresponding to some null hypothesis H_t, and we must immediately decide whether to reject H_t or not. If the null hypothesis is true, p-values are stochastically larger than the uniform distribution ("super-uniform," for short), formulated as follows: if H^0 is the set of true null hypotheses, then for any null H_t ∈ H^0, we have

Pr{P_t ≤ x} ≤ x for any x ∈ [0, 1].   (1)

We do not make assumptions on the marginal distribution of the p-values for hypotheses that are non-null / false. Although they can be arbitrary, it is useful to think of them as being stochastically smaller than the uniform distribution, since only then do they carry signal that differentiates them from nulls. Our task is to design threshold levels α_t according to which we define the rejection decision as R_t = 1{P_t ≤ α_t}, where 1{·} is the indicator function. Since the aim is to control the FDR at the fixed level α at any time t, each α_t must be set according to the past decisions of the algorithm, meaning that α_t = α_t(R_1, ..., R_{t−1}).
Note that, in accordance with past work, we require that α_t does not directly depend on the observed p-values but only on past rejections. Formally, we define the sigma-field at time t as F_t = σ(R_1, ..., R_t), and insist that

α_t ∈ F_{t−1} ≡ α_t is F_{t−1}-measurable ≡ α_t is predictable.   (2)

As studied by Javanmard and Montanari [8], and as is predominantly the case in offline multiple testing, we consider monotone decision rules, where α_t is a coordinatewise nondecreasing function:

if R̃_i ≥ R_i for all i ≤ t−1, then α_t(R̃_1, ..., R̃_{t−1}) ≥ α_t(R_1, ..., R_{t−1}).   (3)

Existing online multiple testing algorithms control some variant of the FDR over time, as we now define. At any time T, let R(T) = Σ_{t=1}^T R_t be the total number of rejections/discoveries made by the algorithm so far, and let V(T) = Σ_{t∈H^0} R_t be the number of false rejections/discoveries. Then, the false discovery proportion and rate are defined as

FDP(T) := V(T)/R(T)·· and FDR(T) = E[V(T)/R(T)··],

where the dotted-fraction notation corresponds to the shorthand a/b·· = a/(b ∨ 1). Two variants of the FDR studied in earlier online FDR works [5, 8] are the marginal FDR, given by mFDR_η(T) = E[V(T)]/(E[R(T)] + η), with a special case being mFDR(T) = E[V(T)]/E[R(T) ∨ 1], and the smoothed FDR, given by sFDR_η(T) = E[V(T)/(R(T) + η)]. In Appendix A, we summarize a variety of algorithms and dependence assumptions considered in previous work.

3 Generalized alpha-investing (GAI) rules

The generalized class of alpha-investing rules [1] essentially covers most rules that have been proposed thus far, and includes a wide range of algorithms with different behaviors. In this section, we present a uniform improvement to monotone GAI algorithms for FDR control under independence. Any algorithm of the GAI type begins with an alpha-wealth of W(0) = W_0 > 0, and keeps track of the wealth W(t) available after t steps.
At any time t, a part of this alpha-wealth is used to test the t-th hypothesis at level α_t, and the wealth is immediately decreased by an amount φ_t. If the t-th hypothesis is rejected, that is, if R_t := 1{P_t ≤ α_t} = 1, then we award extra wealth equaling an amount ψ_t. Recalling the definition F_t := σ(R_1, ..., R_t), we require that α_t, φ_t, ψ_t ∈ F_{t−1}, meaning they are predictable, and W(t) ∈ F_t, with the explicit update

W(t) := W(t−1) − φ_t + R_t ψ_t.

The parameters W_0 and the sequences α_t, φ_t, ψ_t are all user-defined. They must be chosen so that the total wealth W(t) is always non-negative, and hence φ_t ≤ W(t−1). If the wealth ever equals zero, the procedure is not allowed to reject any more hypotheses, since it has to choose α_t equal to zero from then on. The only real restriction on α_t, φ_t, ψ_t arises from the goal to control FDR. This condition takes a natural form—whenever a rejection takes place, we cannot be allowed to award an arbitrary amount of wealth. Formally, for some user-defined constant B_0, we must have

ψ_t ≤ min{φ_t + B_0, φ_t/α_t + B_0 − 1}.   (4)

Many GAI rules are not monotone (cf. equation (3)), meaning that α_t is not always a coordinatewise nondecreasing function of R_1, ..., R_{t−1}, as mentioned in the last column of Table 2 (Appendix A). Table 1 has some examples, where τ_k := min{s ∈ ℕ : Σ_{t=1}^s R_t = k} is the time of the k-th rejection.

Table 1: Examples of GAI rules.
| Name | Parameters | Level α_t | Penalty φ_t | Reward ψ_t |
| [5] Alpha-investing (AI) | — | φ_t/(1 + φ_t) | ≤ W(t−1) | φ_t + B_0 |
| [1] Alpha-spending with rewards | c ≤ 1 | cW(t−1) | ≤ W(t−1) | chosen to satisfy (4) |
| [9] LORD'17 | {γ_i}: Σ_{i=1}^∞ γ_i = 1 | γ_t W_0 + B_0 Σ_{j: τ_j < t} γ_{t−τ_j} | φ_t = α_t | B_0 = α − W_0 |

3.1 Improved monotone GAI rules (GAI++) under independence

In their initial work on GAI rules, Aharoni and Rosset [1] did not incorporate an explicit parameter B_0; rather, they proved that choosing W_0 = B_0 = α suffices for mFDR_1 control.
In subsequent work, Javanmard and Montanari [9] introduced the parameter B_0 and proved that, for monotone GAI rules, the same choice W_0 = B_0 = α suffices for sFDR_1 control, whereas the choice B_0 = α − W_0 suffices for FDR control, with both results holding under independence. In fact, their monotone GAI rules with B_0 = α − W_0 are the only known methods that control FDR. This state of affairs leads to the following dilemma raised in their paper [9]:

A natural question is whether, in practice, we should choose W_0, B_0 as to guarantee FDR control (and hence set B_0 = α − W_0 ≪ α) or instead be satisfied with mFDR or sFDR control, which allow for B_0 = α and hence potentially larger statistical power.

Our first contribution is a "win-win" resolution to this dilemma: more precisely, we prove that we can choose B_0 = α while maintaining FDR control, with the small catch that at the very first rejection only, we need B_0 = α − W_0. Of course, in this case B_0 is not constant, and hence we replace it by a random variable b_t ∈ F_{t−1}, and we prove that choosing W_0, b_t such that b_t + W_0 = α for the first rejection, and simply b_t = α for every future rejection, suffices for formally proving FDR control under independence. This achieves the best of both worlds (guaranteeing FDR control, and handing out the largest possible reward of α), as posed by the above dilemma. To restate our contribution, we effectively prove that the power of monotone GAI rules can be uniformly improved without changing the FDR guarantee.

Formally, we define our improved generalized alpha-investing (GAI++) algorithm as follows. It sets W(0) = W_0 with 0 ≤ W_0 ≤ α, chooses α_t ∈ F_{t−1} to make decisions R_t = 1{P_t ≤ α_t}, and updates the wealth as W(t) = W(t−1) − φ_t + R_t ψ_t ∈ F_t, using some penalty φ_t ≤ W(t−1), φ_t ∈ F_{t−1}, and some reward ψ_t ≤ min{φ_t + b_t, φ_t/α_t + b_t − 1}, ψ_t ∈ F_{t−1}, with the choice

b_t = { α − W_0 when R(t−1) = 0; α otherwise } ∈ F_{t−1}.
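A minimal sketch of the GAI++ bookkeeping just described (our own illustration: the level rule α_t = 0.1·W(t−1) and all numeric inputs are hypothetical choices, while the b_t switch and the reward condition follow the definition above):

```python
def gai_plus_plus(p_values, alpha=0.05, w0=0.025):
    """Sketch of GAI++ bookkeeping: penalty phi_t = alpha_t, maximal reward
    psi_t = b_t, with b_t = alpha - W0 at the first rejection and alpha afterwards.
    The level rule alpha_t = 0.1 * W(t-1) is a hypothetical monotone choice."""
    wealth, num_rejections, decisions = w0, 0, []
    for p in p_values:
        alpha_t = 0.1 * wealth            # spend a fixed fraction of current wealth
        phi_t = alpha_t                   # penalty equals the testing level
        reject = p <= alpha_t
        b_t = (alpha - w0) if num_rejections == 0 else alpha
        psi_t = b_t                       # with phi_t = alpha_t, (4) reduces to psi_t <= b_t
        wealth += -phi_t + (psi_t if reject else 0.0)
        num_rejections += int(reject)
        decisions.append(reject)
        assert wealth >= 0.0              # wealth never goes negative
    return decisions
```

Note that with φ_t = α_t the reward condition min{φ_t + b_t, φ_t/α_t + b_t − 1} simplifies to b_t, which is why the sketch awards the full b_t at each rejection.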
As an explicit example, given an infinite nonincreasing sequence of positive constants {γ_j} that sums to one, the LORD++ algorithm effectively makes the choice

α_t = γ_t W_0 + (α − W_0) γ_{t−τ_1} + α Σ_{j: τ_j < t, τ_j ≠ τ_1} γ_{t−τ_j},   (5)

recalling that τ_j is the time of the j-th rejection. Reasonable default choices include W_0 = α/2 and γ_j = 0.0722 · log(j ∨ 2)/(j e^{√(log j)}), the latter derived in the context of testing if a Gaussian is zero mean [9]. Any monotone GAI++ rule comes with the following guarantee.

Theorem 1. Any monotone GAI++ rule satisfies the bound E[(V(T) + W(T))/(R(T) ∨ 1)] ≤ α for all T ∈ ℕ under independence. Since W(T) ≥ 0 for all T ∈ ℕ, any such rule (a) controls FDR at level α under independence, and (b) has power at least as large as the corresponding GAI algorithm.

The proof of this theorem is provided in Appendix F. Note that for monotone rules, a larger alpha-wealth reward at each rejection yields a possibly higher power, but never lower power, immediately implying statement (b). Consequently, we provide only a proof for statement (a) in Appendix F. For the reader interested in technical details, a key super-uniformity lemma (Lemma 1) and associated intuition for online FDR algorithms is provided in Section 3.2.

3.2 Intuition for larger rewards via a super-uniformity lemma

For the purposes of providing some intuition for why we are able to obtain larger rewards than Javanmard and Montanari [9], we present the following lemma. In order to set things up, recall that R_t = 1{P_t ≤ α_t} and note that α_t is F_{t−1}-measurable, being a coordinatewise nondecreasing function of R_1, ..., R_{t−1}. Hence, the marginal super-uniformity assumption (1) immediately implies that for independent p-values, we have

Pr{P_t ≤ α_t | F_{t−1}} ≤ α_t, or equivalently, E[1{P_t ≤ α_t}/α_t | F_{t−1}] ≤ 1.   (6)

Lemma 1 states that under independence, the above statement remains valid in much more generality. Given a sequence P_1, P_2, ...
of independent p-values, define a filtration via the sigma-fields F_{i−1} := σ(R_1, ..., R_{i−1}), where R_i := 1{P_i ≤ f_i(R_1, ..., R_{i−1})} for some coordinatewise nondecreasing function f_i : {0,1}^{i−1} → ℝ. With this set-up, we have the following guarantee:

Lemma 1. Let g : {0,1}^T → ℝ be any coordinatewise nondecreasing function such that g(x⃗) > 0 for any vector x⃗ ≠ (0, ..., 0). Then for any index t ≤ T such that H_t ∈ H^0, we have

E[ 1{P_t ≤ f_t(R_1, ..., R_{t−1})} / g(R_1, ..., R_T) | F_{t−1} ] ≤ E[ f_t(R_1, ..., R_{t−1}) / g(R_1, ..., R_T) | F_{t−1} ].   (7)

This super-uniformity lemma is analogous to others used in offline multiple testing [4, 11], and will be needed in its full generality later in the paper. The proof of this lemma in Appendix E is based on a leave-one-out technique which is common in the multiple testing literature [7, 10, 11]; ours specifically generalizes a lemma in the Appendix of Javanmard and Montanari [9]. As mentioned, this lemma helps to provide some intuition for the condition on ψ_t and the unorthodox condition on b_t. Indeed, note that by definition,

FDR(T) = E[V(T)/(R(T) ∨ 1)] = E[ Σ_{t∈H^0} 1{P_t ≤ α_t} / (R(T) ∨ 1) ] ≤ E[ Σ_{t=1}^T α_t / (Σ_{t=1}^T R_t ∨ 1) ],

where we applied Lemma 1 to the coordinatewise nondecreasing function g(R_1, ..., R_T) = R(T). From this equation, we may infer the following: if Σ_t R_t = k, then the FDR will be bounded by α as long as the total alpha-wealth Σ_t α_t that was used for testing is smaller than kα. In other words, with every additional rejection that adds one to the denominator, the algorithm is allowed extra alpha-wealth equaling α for testing. In order to see where this shows up in the algorithm design, assume for a moment that we choose our penalty as φ_t = α_t. Then, our condition on rewards ψ_t simply reduces to ψ_t ≤ b_t.
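Returning to the explicit LORD++ instance of equation (5), its levels can be computed directly from the rejection times. The sketch below is our own illustration: it uses the log(j ∨ 2)/(j e^{√log j}) shape of γ_j suggested in the text, but normalizes over a finite horizon rather than using the constant 0.0722:

```python
import math

def lord_pp_levels(p_values, alpha=0.05, w0=0.025):
    """Compute LORD++ test levels from equation (5). The gamma_j follow the
    log(j v 2) / (j * exp(sqrt(log j))) shape, normalized over the horizon."""
    T = len(p_values)
    gamma = [math.log(max(j, 2)) / (j * math.exp(math.sqrt(math.log(j))))
             for j in range(1, T + 1)]
    total = sum(gamma)
    gamma = [g / total for g in gamma]

    def g(idx):  # gamma_idx, 1-indexed; zero outside the horizon
        return gamma[idx - 1] if 1 <= idx <= T else 0.0

    taus, levels, decisions = [], [], []
    for t in range(1, T + 1):
        a_t = g(t) * w0
        if taus:  # the first rejection earns alpha - w0, later ones earn alpha
            a_t += (alpha - w0) * g(t - taus[0])
            a_t += alpha * sum(g(t - tau) for tau in taus[1:])
        levels.append(a_t)
        reject = p_values[t - 1] <= a_t
        decisions.append(reject)
        if reject:
            taus.append(t)
    return levels, decisions
```

With no rejections the levels reduce to γ_t·W_0 and decay with t; each rejection injects a fresh stream of alpha-wealth spread out by the γ sequence.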
Furthermore, since we choose $b_t = \alpha$ after every rejection except the first, our total earned alpha-wealth is approximately $\alpha R(T)$, which also upper-bounds the total alpha-wealth used for testing. The intuitive reason that $b_t$ cannot equal $\alpha$ at the very first rejection can also be inferred from the above equation. Indeed, note that because of the definition of FDR, we have $\frac{V(T)}{R(T)} := \frac{V(T)}{R(T) \vee 1}$, and the denominator $R(T) \vee 1 = 1$ when the number of rejections equals zero or one. Therefore, the denominator only starts incrementing at the second rejection. Hence, the sum of $W_0$ and the first reward must be at most $\alpha$, following which one may award $\alpha$ at every rejection. This is the central piece of intuition behind the GAI algorithm design, its improvement in this paper, and the FDR control analysis. To the best of our knowledge, this is the first explicit presentation of the intuition for online FDR control.

4 A direct method for deriving new online FDR rules

Many offline FDR procedures can be derived in terms of an estimate $\widehat{\mathrm{FDP}}$ of the false discovery proportion; see Ramdas et al. [11] and references therein. The discussion in Section 3.2 suggests that it is also possible to write online FDR rules in this fashion. Indeed, given any non-negative, predictable sequence $\{\alpha_t\}$, we propose the following definition:
$$\widehat{\mathrm{FDP}}(t) := \frac{\sum_{j=1}^t \alpha_j}{R(t)}.$$
This definition is intuitive because $\widehat{\mathrm{FDP}}(t)$ approximately overestimates the unknown $\mathrm{FDP}(t)$:
$$\widehat{\mathrm{FDP}}(t) \geq \frac{\sum_{j \leq t,\, j \in \mathcal{H}^0} \alpha_j}{R(t)} \approx \frac{\sum_{j \leq t,\, j \in \mathcal{H}^0} \mathbf{1}\{P_j \leq \alpha_j\}}{R(t)} = \mathrm{FDP}(t).$$
A more direct way to construct new online FDR procedures is to ensure that $\sup_{t \in \mathbb{N}} \widehat{\mathrm{FDP}}(t) \leq \alpha$, bypassing the use of wealth, penalties and rewards in GAI. This idea is formalized below. Theorem 2.
For any predictable sequence $\{\alpha_t\}$ such that $\sup_{t \in \mathbb{N}} \widehat{\mathrm{FDP}}(t) \leq \alpha$, we have: (a) If the p-values are super-uniform conditional on all past discoveries, meaning that $\Pr\left(P_j \leq \alpha_j \mid \mathcal{F}^{j-1}\right) \leq \alpha_j$, then the associated procedure satisfies $\sup_{T \in \mathbb{N}} \mathrm{mFDR}(T) \leq \alpha$. (b) If the p-values are independent and $\{\alpha_t\}$ is monotone, then we also have $\sup_{T \in \mathbb{N}} \mathrm{FDR}(T) \leq \alpha$.

The proof of this theorem is given in Appendix D. In our opinion, it is more transparent to verify that LORD++ controls both mFDR and FDR using Theorem 2 than using Theorem 1.

5 Incorporating prior and penalty weights

Here, we develop GAI++ algorithms that incorporate prior weights $w_t$, which allow the user to exploit domain knowledge about which hypotheses are more likely to be non-null, as well as penalty weights $u_t$ to differentiate more important hypotheses from the rest. The weights must be strictly positive, predictable (meaning that $w_t, u_t \in \mathcal{F}^{t-1}$) and monotone (in the sense of definition (3)).

Penalty weights. For many motivating applications, including internet companies running a series of A/B tests over time, or drug companies conducting a series of clinical trials over time, it is natural to assume that some tests are more important than others, in the sense that some false discoveries may have more lasting positive or negative effects than others. To incorporate this in the offline setting, Benjamini and Hochberg [3] suggested associating a positive penalty weight $u_i$ with each hypothesis $H_i$. Choosing $u_i > 1$ indicates a more impactful or important test, while $u_i < 1$ means the opposite. Although algorithms exist in the offline setting that can intelligently incorporate penalty weights, no such flexibility currently exists for online FDR algorithms. With this motivation in mind and following Benjamini and Hochberg [3], define the penalty-weighted FDR as
$$\mathrm{FDR}_u(T) := \mathbb{E}\left[\frac{V_u(T)}{R_u(T)}\right], \qquad (8)$$
where $V_u(T) := \sum_{t \in \mathcal{H}^0} u_t R_t = V_u(T-1) + u_T R_T \mathbf{1}\{T \in \mathcal{H}^0\}$ and $R_u(T) := R_u(T-1) + u_T R_T$.
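The realized numerator and denominator of (8) are simple weighted counts. The sketch below (our own helper name) computes them for a simulated run; the null indicators are of course an oracle quantity, available only in simulation.

```python
def penalty_weighted_counts(rejections, is_null, u):
    """Realized V_u(T) and R_u(T) from equation (8), given per-test rejection
    indicators R_t, oracle null indicators, and penalty weights u_t."""
    v_u = sum(u_t for r_t, null, u_t in zip(rejections, is_null, u) if r_t and null)
    r_u = sum(u_t for r_t, u_t in zip(rejections, u) if r_t)
    return v_u, r_u
```

Averaging `v_u / max(r_u, 1)` over many simulated runs estimates $\mathrm{FDR}_u(T)$; with all $u_t = 1$ this reduces to the usual $V(T)$ and $R(T)$.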
One may set $u_t = 1$ to recover the special case of no penalty weights. In the offline setting, a given set of penalty weights can be rescaled to make the average penalty weight equal unity, without affecting the associated procedure. However, in the online setting, we choose penalty weights $u_t$ one at a time, possibly without knowing the total number of hypotheses ahead of time. As a consequence, these weights cannot be rescaled in advance to keep their average equal to unity. It is important to note that we allow $u_t \in \mathcal{F}^{t-1}$ to be determined after viewing the past rejections, another important difference from the offline setting. Indeed, if the hypotheses are logically related (even if the p-values are independent), then the current hypothesis can be more or less critical depending on which other ones have already been rejected.

Prior weights. In many applications, one may have access to prior knowledge about the underlying state of nature (that is, whether the hypothesis is truly null or non-null). For example, an older published biological study might have made significant discoveries, or an internet company might know the results of past A/B tests or decisions made by other companies. This knowledge may be incorporated through a weight $w_t$ which indicates the strength of a prior belief about whether the hypothesis is null or not: typically, a larger $w_t > 1$ can be interpreted as a greater likelihood of being non-null, indicating that the algorithm may be more aggressive in deciding whether to reject $H_t$. Such p-value weighting was first suggested in the offline FDR context by [6], though earlier work employed it in the context of FWER control. As with penalty weights in the offline setting, offline prior weights are also usually rescaled to have unit mean, and existing offline algorithms then simply replace the p-value $P_t$ by the weighted p-value $P_t/w_t$. However, it is not obvious how to incorporate prior weights in the online setting.
As we will see in the sections to come, the online FDR algorithms we propose will also use p-value reweighting; moreover, the rewards must be prudently adjusted to accommodate the fact that an a-priori rescaling is not feasible. Furthermore, as opposed to the offline case, the weights $w_t \in \mathcal{F}^{t-1}$ are allowed to depend on past rejections. This additional flexibility allows one to set the weights not only based on our prior knowledge of the current hypothesis being tested, but also based on properties of the sequence of discoveries (for example, whether we recently saw a string of rejections or non-rejections). We point out some practical subtleties with the use and interpretation of prior weights in Appendix C.4.

Doubly-weighted GAI++ rules. Given a testing level $\alpha_t$ and weights $w_t, u_t$, all three being predictable and monotone, we make the decision
$$R_t := \mathbf{1}\{P_t \leq \alpha_t u_t w_t\}. \qquad (9)$$
This agrees with the intuition that larger prior weights should be reflected in an increased willingness to reject the null, and that we should favor rejecting more important hypotheses. As before, our rejection reward strategy differs before and after $\tau_1$, the time of the first rejection. Starting with some $W(0) = W_0 \leq \alpha$, we update the wealth as $W(t) = W(t-1) - \phi_t + R_t \psi_t$, where $w_t, u_t, \alpha_t, \phi_t, \psi_t \in \mathcal{F}^{t-1}$ must be chosen so that $\phi_t \leq W(t-1)$, and the rejection reward $\psi_t$ must obey the condition
$$0 \leq \psi_t \leq \min\left\{ \phi_t + u_t b_t,\; \frac{\phi_t}{u_t w_t \alpha_t} + u_t b_t - u_t \right\}, \quad \text{where} \qquad (10a)$$
$$b_t := \alpha - \frac{W_0}{u_t} \mathbf{1}\{\tau_1 > t-1\} \in \mathcal{F}^{t-1}. \qquad (10b)$$
Notice that setting $w_t = u_t = 1$ immediately recovers the GAI updates. Let us provide some intuition for the form of the rewards $\psi_t$, which involves an interplay between the weights $w_t, u_t$, the testing levels $\alpha_t$ and the testing penalties $\phi_t$. First note that large weights $u_t, w_t > 1$ result in a smaller earning of alpha-wealth, and if $\alpha_t, \phi_t$ are fixed, then the maximum "common-sense" weights are determined by requiring $\psi_t \geq 0$.
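A minimal sketch of one step of this rule, with our own function and argument names, assuming the reward cap $\psi_t \leq \min\{\phi_t + u_t b_t,\ \phi_t/(u_t w_t \alpha_t) + u_t b_t - u_t\}$ from (10a) and taking the maximum admissible reward (one of many valid choices):

```python
def gai_pp_step(p, alpha_t, phi_t, w_t, u_t, wealth, first_rej_pending, alpha, w0):
    """One step of a doubly-weighted GAI++ rule (equations 9, 10a, 10b).

    first_rej_pending is the indicator 1{tau_1 > t-1}; the penalty phi_t
    must not exceed the current wealth.
    """
    assert 0.0 <= phi_t <= wealth
    reject = p <= alpha_t * u_t * w_t                 # decision rule (9)
    b_t = alpha - (w0 / u_t) * first_rej_pending      # (10b)
    # Maximum reward allowed by (10a); levels and weights should be chosen
    # so that this cap is nonnegative (the "common-sense" requirement).
    psi_t = min(phi_t + u_t * b_t,
                phi_t / (u_t * w_t * alpha_t) + u_t * b_t - u_t)
    wealth = wealth - phi_t + (psi_t if reject else 0.0)
    return reject, wealth
```

With $u_t = w_t = 1$, $\alpha = 0.05$, $W_0 = \alpha/2$ and $\phi_t = 0.02$, a p-value of $0.001$ at level $\alpha_t = 0.01$ is rejected; here $b_t = 0.025$ and the cap binds at $\phi_t + b_t = 0.045$, so the wealth rises from $0.025$ to $0.05$.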
The requirements of lower rewards for larger weights and of a maximum allowable weight should both seem natural; indeed, there must be some price to pay for an easier rejection, otherwise we would always use a high prior weight or penalty weight to get more power, no matter the hypothesis! We show that such a price does not have to be paid in terms of the FDR guarantee (we prove that $\mathrm{FDR}_u$ is controlled for any choice of weights), but a price is paid in terms of power, specifically the ability to make rejections in the future. Indeed, the combined use of $u_t, w_t$ in both the decision rule $R_t$ and the earned reward $\psi_t$ keeps us honest: if we overstate our prior belief in the hypothesis being non-null, or its importance, by assigning a large $u_t, w_t > 1$, we will not earn much of a reward (or will even earn a negative reward!), while if we understate our prior beliefs by assigning a small $u_t, w_t < 1$, then we may not reject this hypothesis. Hence, it is prudent not to misuse or overuse the weights, and we recommend that the scientist use the default $u_t = w_t = 1$ in practice unless there truly is prior evidence against the null or a reason to believe the finding would be of importance, perhaps due to past studies by other groups or companies, logical relationships between hypotheses, or extraneous reasons suggested by the underlying science. We are now ready to state a theoretical guarantee for the doubly-weighted GAI++ procedure:

Theorem 3. Under independence, the doubly-weighted GAI++ algorithm satisfies the bound $\mathbb{E}\left[\frac{V_u(T) + W(T)}{R_u(T)}\right] \leq \alpha$ for all $T \in \mathbb{N}$. Since $W(T) \geq 0$, we also have $\mathrm{FDR}_u(T) \leq \alpha$ for all $T \in \mathbb{N}$.

The proof of this theorem is given in Appendix G. It is important to note that although we provide the proof here only for GAI++ rules under independence, the ideas would carry forward in an analogous fashion for GAI rules under various other forms of dependence.
6 From infinite to decaying memory

Here, we summarize two phenomena: (i) the "piggybacking" problem that can occur with a nonstationary null proportion, and (ii) the "alpha-death" problem that can occur with a long sequence of nulls. We propose a new error metric, the decaying-memory FDR (mem-FDR), that is suited to truly temporal multiple-testing scenarios, and propose an adjustment of our GAI++ algorithms to control this quantity.

Piggybacking. As outlined in motivation M1, when the full batch of p-values is available offline, online FDR algorithms have an inherent asymmetry in their treatment of different p-values, and make different rejections depending on the order in which they process the batch. Indeed, Foster and Stine [5] demonstrated that if one knew a reasonably good ordering (with non-nulls arriving earlier), then their online alpha-investing procedures could attain higher power than the offline BH procedure. This is partly due to a phenomenon that we call "piggybacking": if many rejections are made early, these algorithms earn and accumulate enough alpha-wealth to reject later hypotheses more easily, by testing them at more lenient thresholds than earlier ones. In essence, later tests "piggyback" on the success of earlier tests. While piggybacking may be desirable or acceptable under motivation M1, such behavior may be unwarranted and unwanted under motivation M2. We argue that piggybacking may lead to a spike in the false discovery rate locally in time, even though the FDR over all time is controlled. This may occur when the sequence of hypotheses is non-stationary and clustered, so that strings of nulls follow strings of non-nulls. For concreteness, consider the setting in Javanmard and Montanari [8] where an internet company conducts many A/B tests over time. In "good times", when a large fraction of tests are truly non-null, the company may accumulate wealth due to frequent rejections.
We demonstrate using simulations that such accumulated wealth can lead to a string of false discoveries when there is a quick transition to a "bad period" in which the proportion of non-nulls is much lower, causing a spike in the false discovery proportion locally in time.

Alpha-death. Suppose we test a long stretch of nulls, followed by a stretch of non-nulls. In this setting, GAI algorithms will make (almost) no rejections in the first stretch, losing nearly all of their wealth. Thereafter, the algorithm may be effectively condemned to have no power, unless a non-null with extremely strong signal is observed. Such a situation, from which no recovery is possible, is perfectly reasonable under motivation M1: the alpha-wealth has been used up fully, and those are the only rejections we are allowed to make with that batch of p-values. However, for an internet company operating with motivation M2, it might be unacceptable to be told that it essentially cannot run any more tests, or that it may perhaps never make another useful discovery.

Both of these problems, demonstrated in simulations in Appendix C.2, are due to the fact that the process effectively has an infinite memory. In the following, we propose one way to smoothly forget the past and, to some extent, alleviate the negative effects of the aforementioned phenomena.

Decaying memory. For a user-defined decay parameter $\delta > 0$, define $V^\delta(0) = R^\delta(0) = 0$ and define the decaying-memory FDR as
$$\text{mem-FDR}(T) := \mathbb{E}\left[\frac{V^\delta(T)}{R^\delta(T)}\right],$$
where $V^\delta(T) := \delta V^\delta(T-1) + R_T \mathbf{1}\{T \in \mathcal{H}^0\} = \sum_{t \leq T,\, t \in \mathcal{H}^0} \delta^{T-t} R_t$, and analogously $R^\delta(T) := \delta R^\delta(T-1) + R_T = \sum_{t \leq T} \delta^{T-t} R_t$. This notion of FDR control, which is arguably natural for modern temporal applications, appears to be novel in the multiple testing literature. The parameter $\delta$ is reminiscent of the discount factor in reinforcement learning.

Penalty-weighted decaying-memory FDR.
We may naturally extend the notion of decaying-memory FDR to encompass penalty weights. Setting $V_u^\delta(0) = R_u^\delta(0) = 0$, we define
$$\text{mem-FDR}_u(T) := \mathbb{E}\left[\frac{V_u^\delta(T)}{R_u^\delta(T)}\right],$$
where we define $V_u^\delta(T) := \delta V_u^\delta(T-1) + u_T R_T \mathbf{1}\{T \in \mathcal{H}^0\} = \sum_{t=1}^T \delta^{T-t} u_t R_t \mathbf{1}\{t \in \mathcal{H}^0\}$ and $R_u^\delta(T) := \delta R_u^\delta(T-1) + u_T R_T = \sum_{t=1}^T \delta^{T-t} u_t R_t$.

mem-GAI++ algorithms with decaying memory and weights. Given a testing level $\alpha_t$, we make the decision using equation (9) as before, starting with a wealth of $W(0) = W_0 \leq \alpha$. Also, recall that $\tau_k$ is the time of the $k$-th rejection. On making the decision $R_t$, we update the wealth as
$$W(t) := \delta W(t-1) + (1-\delta) W_0 \mathbf{1}\{\tau_1 > t-1\} - \phi_t + R_t \psi_t, \qquad (11)$$
so that
$$W(T) = W_0 \delta^{T - \min\{\tau_1, T\}} + \sum_{t=1}^T \delta^{T-t} (-\phi_t + R_t \psi_t).$$
The first term in equation (11) indicates that the wealth must decay in order to forget old earnings from rejections far in the past. If we were to keep the first term and drop the second, then the effect of the initial wealth (not just the post-rejection earnings) would also decay to zero. Intuitively, the correction from the second term says that even if one forgets all the past post-rejection earnings, the algorithm should behave as if it started from scratch, meaning that its initial wealth should not decay. This does not contradict the fact that the initial wealth can be consumed by testing penalties $\phi_t$; it simply should not decay with time, since the decay was only introduced to avoid piggybacking, which is an effect of post-rejection earnings and not of the initial wealth. A natural restriction on $\phi_t$ is the bound $\phi_t \leq \delta W(t-1) + (1-\delta) W_0 \mathbf{1}\{\tau_1 > t-1\}$, which ensures that the wealth stays non-negative. Further, $w_t, u_t, \alpha_t, \phi_t \in \mathcal{F}^{t-1}$ must be chosen so that the rejection reward $\psi_t$ obeys conditions (10a) and (10b). Notice that setting $w_t = u_t = \delta = 1$ recovers the GAI++ updates. As an example, mem-LORD++ would use
$$\alpha_t = \gamma_t W_0 \delta^{t - \min\{\tau_1, t\}} + \sum_{j : \tau_j < t} \delta^{t - \tau_j} \gamma_{t - \tau_j} \psi_{\tau_j}.$$
We are now ready to present our last main result. Theorem 4.
Under independence, the doubly-weighted mem-GAI++ algorithm satisfies the bound $\mathbb{E}\left[\frac{V_u^\delta(T) + W(T)}{R_u^\delta(T)}\right] \leq \alpha$ for all $T \in \mathbb{N}$. Since $W(T) \geq 0$, we have $\text{mem-FDR}_u(T) \leq \alpha$ for all $T \in \mathbb{N}$.

See Appendix H for the proof of this claim. Appendix B discusses how to use "abstaining" to provide a smooth restart from alpha-death, whereas Appendix C contains a numerical simulation demonstrating the use of decaying memory.

7 Summary

In this paper, we make four main contributions: more powerful procedures under independence, an alternate viewpoint for deriving online FDR procedures, the incorporation of prior and penalty weights, and the introduction of a decaying-memory false discovery rate to handle piggybacking and alpha-death. Numerical simulations in Appendix C complement the theoretical results.

Acknowledgments

We thank A. Javanmard, R. F. Barber, K. Johnson, E. Katsevich, W. Fithian and L. Lei for related discussions, and A. Javanmard for sharing code to reproduce experiments in Javanmard and Montanari [9]. This material is based upon work supported in part by the Army Research Office under grant number W911NF-17-1-0304, and National Science Foundation grant NSF-DMS-1612948.

References

[1] Ehud Aharoni and Saharon Rosset. Generalized α-investing: definitions, optimality results and application to public databases. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(4):771–794, 2014.
[2] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1):289–300, 1995.
[3] Yoav Benjamini and Yosef Hochberg. Multiple hypotheses testing with weights. Scandinavian Journal of Statistics, 24(3):407–418, 1997.
[4] Gilles Blanchard and Etienne Roquain. Two simple sufficient conditions for FDR control. Electronic Journal of Statistics, 2:963–992, 2008.
[5] Dean P. Foster and Robert A. Stine.
α-investing: a procedure for sequential control of expected false discoveries. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(2):429–444, 2008.
[6] Christopher R. Genovese, Kathryn Roeder, and Larry Wasserman. False discovery control with p-value weighting. Biometrika, 93(3):509–524, 2006.
[7] Philipp Heesen and Arnold Janssen. Dynamic adaptive multiple tests with finite sample FDR control. arXiv preprint arXiv:1410.6296, 2014.
[8] Adel Javanmard and Andrea Montanari. On online control of false discovery rate. arXiv preprint arXiv:1502.06197, 2015.
[9] Adel Javanmard and Andrea Montanari. Online rules for control of false discovery rate and false discovery exceedance. The Annals of Statistics, 2017.
[10] Ang Li and Rina Foygel Barber. Multiple testing with the structure-adaptive Benjamini-Hochberg algorithm. arXiv preprint arXiv:1606.07926, 2016.
[11] Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, and Michael I. Jordan. A unified treatment of multiple testing with prior knowledge. arXiv preprint arXiv:1703.06222, 2017.
[12] John Tukey. The Problem of Multiple Comparisons: Introduction and Parts A, B, and C. Princeton University, 1953.
[13] Fanny Yang, Aaditya Ramdas, Kevin Jamieson, and Martin J. Wainwright. A framework for Multi-A(rmed)/B(andit) testing with online FDR control. Advances in Neural Information Processing Systems, 2017.
Safe and Nested Subgame Solving for Imperfect-Information Games Noam Brown Computer Science Department Carnegie Mellon University Pittsburgh, PA 15217 noamb@cs.cmu.edu Tuomas Sandholm Computer Science Department Carnegie Mellon University Pittsburgh, PA 15217 sandholm@cs.cmu.edu Abstract In imperfect-information games, the optimal strategy in a subgame may depend on the strategy in other, unreached subgames. Thus a subgame cannot be solved in isolation and must instead consider the strategy for the entire game as a whole, unlike perfect-information games. Nevertheless, it is possible to first approximate a solution for the whole game and then improve it in individual subgames. This is referred to as subgame solving. We introduce subgame-solving techniques that outperform prior methods both in theory and practice. We also show how to adapt them, and past subgame-solving techniques, to respond to opponent actions that are outside the original action abstraction; this significantly outperforms the prior state-of-the-art approach, action translation. Finally, we show that subgame solving can be repeated as the game progresses down the game tree, leading to far lower exploitability. These techniques were a key component of Libratus, the first AI to defeat top humans in heads-up no-limit Texas hold’em poker. 1 Introduction Imperfect-information games model strategic settings that have hidden information. They have a myriad of applications including negotiation, auctions, cybersecurity, and physical security. In perfect-information games, determining the optimal strategy at a decision point only requires knowledge of the game tree’s current node and the remaining game tree beyond that node (the subgame rooted at that node). This fact has been leveraged by nearly every AI for perfect-information games, including AIs that defeated top humans in chess [7] and Go [29].
In checkers, the ability to decompose the game into smaller independent subgames was even used to solve the entire game [27]. However, it is not possible to determine a subgame’s optimal strategy in an imperfect-information game using only knowledge of that subgame, because the game tree’s exact node is typically unknown. Instead, the optimal strategy may depend on the value an opponent could have received in some other, unreached subgame. Although this is counter-intuitive, we provide a demonstration in Section 2. Rather than rely on subgame decomposition, past approaches for imperfect-information games typically solved the game as a whole upfront. For example, heads-up limit Texas hold’em, a relatively simple form of poker with 10^13 decision points, was essentially solved without decomposition [2]. However, this approach cannot extend to larger games, such as heads-up no-limit Texas hold’em (the primary benchmark in imperfect-information game solving), which has 10^161 decision points [16]. The standard approach to computing strategies in such large games is to first generate an abstraction of the game, which is a smaller version of the game that retains as much as possible the strategic characteristics of the original game [24, 26, 25]. For example, a continuous action space might be discretized. This abstract game is solved and its solution is used when playing the full game by mapping states in the full game to states in the abstract game. We refer to the solution of an abstraction (or more generally any approximate solution to a game) as a blueprint strategy. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In heavily abstracted games, a blueprint strategy may be far from the true solution. Subgame solving attempts to improve upon the blueprint strategy by solving in real time a more fine-grained abstraction for an encountered subgame, while fitting its solution within the overarching blueprint strategy.
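The state-mapping step above can be as simple as rounding a continuous action to the closest discretized one. The sketch below is a deliberately simplistic stand-in for such a mapping (the function name and bet sizes are illustrative only, not the paper's method):

```python
def nearest_abstract_action(bet, abstract_bets):
    """Map a full-game bet size to the nearest action in a discretized
    action abstraction (a naive stand-in for the mapping described above)."""
    return min(abstract_bets, key=lambda b: abs(b - bet))
```

For example, with abstract bet sizes [50, 100, 200, 400], a full-game bet of 137 maps to 100. The paper later argues that such naive action translation is significantly outperformed by subgame solving for off-tree actions.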
2 Coin Toss In this section we provide intuition for why an imperfect-information subgame cannot be solved in isolation. We demonstrate this in a simple game we call Coin Toss, shown in Figure 1a, which will be used as a running example throughout the paper. Coin Toss is played between players P1 and P2. The figure shows rewards only for P1; P2 always receives the negation of P1’s reward. A coin is flipped and lands either Heads or Tails with equal probability, but only P1 sees the outcome. P1 then chooses between actions “Sell” and “Play.” The Sell action leads to a subgame whose details are not important, but the expected value (EV) of choosing the Sell action will be important. (For simplicity, one can equivalently assume in this section that Sell leads to an immediate terminal reward, where the value depends on whether the coin landed Heads or Tails). If the coin lands Heads, it is considered lucky and P1 receives an EV of $0.50 for choosing Sell. On the other hand, if the coin lands Tails, it is considered unlucky and P1 receives an EV of −$0.50 for action Sell. (That is, P1 must on average pay $0.50 to get rid of the coin). If P1 instead chooses Play, then P2 may guess how the coin landed. If P2 guesses correctly, then P1 receives a reward of −$1. If P2 guesses incorrectly, then P1 receives $1. P2 may also forfeit, which should never be chosen but will be relevant in later sections. We wish to determine the optimal strategy for P2 in the subgame S that occurs after P1 chooses Play, shown in Figure 1a. Figure 1: (a) The example game of Coin Toss. “C” represents a chance node. S is a Player 2 (P2) subgame. The dotted line between the two P2 nodes means that P2 cannot distinguish between them. (b) The public game tree of Coin Toss. The two outcomes of the coin flip are only observed by P1. Were P2 to always guess Heads, P1 would receive $0.50 for choosing Sell when the coin lands Heads, and $1 for Play when it lands Tails. 
This would result in an average of $0.75 for P1. Alternatively, were P2 to always guess Tails, P1 would receive $1 for choosing Play when the coin lands Heads, and −$0.50 for choosing Sell when it lands Tails. This would result in an average reward of $0.25 for P1. However, P2 would do even better by guessing Heads with 25% probability and Tails with 75% probability. In that case, P1 could only receive $0.50 (on average) by choosing Play when the coin lands Heads, which is the same value received for choosing Sell. Similarly, P1 could only receive −$0.50 by choosing Play when the coin lands Tails, which is the same value received for choosing Sell. This would yield an average reward of $0 for P1. It is easy to see that this is the best P2 can do, because P1 can average $0 by always choosing Sell. Therefore, choosing Heads with 25% probability and Tails with 75% probability is an optimal strategy for P2 in the “Play” subgame. Now suppose the coin is considered lucky if it lands Tails and unlucky if it lands Heads. That is, the expected reward for selling the coin when it lands Heads is now −$0.50 and when it lands Tails is now $0.50. It is easy to see that P2’s optimal strategy for the “Play” subgame is now to guess Heads with 75% probability and Tails with 25% probability. This shows that a player’s optimal strategy in a subgame can depend on the strategies and outcomes in other parts of the game. Thus, one cannot solve a subgame using information about that subgame alone. This is the central challenge of imperfect-information games as opposed to perfect-information games.

3 Notation and Background

In a two-player zero-sum extensive-form game there are two players, P = {1, 2}. H is the set of all possible nodes, represented as a sequence of actions. A(h) is the set of actions available at a node and P(h) ∈ P ∪ {c} is the player who acts at that node, where c denotes chance. Chance plays an action a ∈ A(h) with a fixed probability.
If action a ∈ A(h) leads from h to h′, then we write h · a = h′. If a sequence of actions leads from h to h′, then we write h ⊏ h′. The set of nodes Z ⊆ H are terminal nodes. For each player i ∈ P, there is a payoff function u_i : Z → ℝ where u_1 = −u_2. Imperfect information is represented by information sets (infosets). Every node h ∈ H belongs to exactly one infoset for each player. For any infoset I_i, nodes h, h′ ∈ I_i are indistinguishable to player i. Thus the same player must act at all the nodes in an infoset, and the same actions must be available. Let P(I_i) and A(I_i) be such that for all h ∈ I_i, P(I_i) = P(h) and A(I_i) = A(h). A strategy σ_i(I_i) is a probability vector over A(I_i) for infosets where P(I_i) = i. The probability of action a is denoted by σ_i(I_i, a). For all h ∈ I_i, σ_i(h) = σ_i(I_i). A full-game strategy σ_i ∈ Σ_i defines a strategy for each player i infoset. A strategy profile σ is a tuple of strategies, one for each player. The expected payoff for player i if all players play the strategy profile ⟨σ_i, σ_{−i}⟩ is u_i(σ_i, σ_{−i}), where σ_{−i} denotes the strategies in σ of all players other than i. Let π^σ(h) = ∏_{h′·a ⊑ h} σ_{P(h′)}(h′, a) denote the probability of reaching h if all players play according to σ. π_i^σ(h) is the contribution of player i to this probability (that is, the probability of reaching h if chance and all players other than i always chose actions leading to h). π_{−i}^σ(h) is the contribution of all players, and chance, other than i. π^σ(h, h′) is the probability of reaching h′ given that h has been reached, and 0 if h ̸⊏ h′. This paper focuses on perfect-recall games, where a player never forgets past information. Thus, for every I_i and all h, h′ ∈ I_i, π_i^σ(h) = π_i^σ(h′). We define π_i^σ(I_i) = π_i^σ(h) for h ∈ I_i. Also, I′_i ⊏ I_i if for some h′ ∈ I′_i and some h ∈ I_i, h′ ⊏ h. Similarly, I′_i · a ⊏ I_i if h′ · a ⊏ h.
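The reach probability π^σ(h) is just a product over the prefixes of h. A minimal sketch, using a flat (infoset, action) representation of histories and strategies that is our own simplification of the formal definitions above:

```python
def reach_prob(strategy, history):
    """pi^sigma(h): product of sigma_{P(h')}(h', a) over all prefixes h'.a of h.

    `history` is a list of (infoset, action) pairs taken by the acting player
    (or chance) at each step; `strategy` maps (infoset, action) to the
    probability that the actor plays that action there.
    """
    p = 1.0
    for infoset, action in history:
        p *= strategy[(infoset, action)]
    return p
```

Splitting the product over a single player's decision points yields the contribution π_i^σ(h); the remaining factors give π_{−i}^σ(h).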
A Nash equilibrium [22] is a strategy profile σ* in which no player can improve by shifting to a different strategy, so σ* satisfies ∀i, u_i(σ*_i, σ*_{−i}) = max_{σ′_i ∈ Σ_i} u_i(σ′_i, σ*_{−i}). A best response BR(σ_{−i}) is a strategy for player i that is optimal against σ_{−i}. Formally, BR(σ_{−i}) satisfies u_i(BR(σ_{−i}), σ_{−i}) = max_{σ′_i ∈ Σ_i} u_i(σ′_i, σ_{−i}). In a two-player zero-sum game, the exploitability exp(σ_i) of a strategy σ_i is how much worse σ_i does against an opponent best response than a Nash equilibrium strategy would do. Formally, the exploitability of σ_i is u_i(σ*) − u_i(σ_i, BR(σ_i)), where σ* is a Nash equilibrium. The expected value of a node h when players play according to σ is v_i^σ(h) = Σ_{z ∈ Z} π^σ(h, z) u_i(z). An infoset’s value is the weighted average of the values of the nodes in the infoset, where a node is weighted by the player’s belief that she is in that node. Formally,
v_i^σ(I_i) = (Σ_{h ∈ I_i} π_{−i}^σ(h) v_i^σ(h)) / (Σ_{h ∈ I_i} π_{−i}^σ(h)) and v_i^σ(I_i, a) = (Σ_{h ∈ I_i} π_{−i}^σ(h) v_i^σ(h · a)) / (Σ_{h ∈ I_i} π_{−i}^σ(h)).
A counterfactual best response [21] CBR(σ_{−i}) is a best response that also maximizes value in unreached infosets. Specifically, a counterfactual best response is a best response σ_i with the additional condition that if σ_i(I_i, a) > 0 then v_i^σ(I_i, a) = max_{a′} v_i^σ(I_i, a′). We further define the counterfactual best response value CBV^{σ_{−i}}(I_i) as the value player i expects to achieve by playing according to CBR(σ_{−i}), having already reached infoset I_i. Formally, CBV^{σ_{−i}}(I_i) = v_i^{⟨CBR(σ_{−i}), σ_{−i}⟩}(I_i) and CBV^{σ_{−i}}(I_i, a) = v_i^{⟨CBR(σ_{−i}), σ_{−i}⟩}(I_i, a). An imperfect-information subgame, which we refer to simply as a subgame in this paper, can in most cases (but not all) be described as including all nodes which share prior public actions (that is, actions viewable to both players). In poker, for example, a subgame is uniquely defined by a sequence of bets and public board cards. Figure 1b shows the public game tree of Coin Toss.
Formally, an imperfect-information subgame is a set of nodes S ⊆ H such that for all h ∈ S, if h ⊏ h′, then h′ ∈ S, and for all h ∈ S and all i ∈ P, if h′ ∈ I_i(h) then h′ ∈ S. Define S_top as the set of earliest-reachable nodes in S. That is, h ∈ S_top if h ∈ S and h′ ∉ S for any h′ ⊏ h.

4 Prior Approaches to Subgame Solving

This section reviews prior techniques for subgame solving in imperfect-information games, which we build upon. Throughout this section, we refer to the Coin Toss game shown in Figure 1a. As discussed in Section 1, a standard approach to dealing with large imperfect-information games is to solve an abstraction of the game. The abstract solution is a (probably suboptimal) strategy profile in the full game. We refer to this full-game strategy profile as the blueprint. The goal of subgame solving is to improve upon the blueprint by changing the strategy only in a subgame.

Figure 2: The blueprint strategy we refer to in the game of Coin Toss. The Sell action leads to a subgame that is not displayed. Probabilities are shown for all actions. The dotted line means the two P2 nodes share an infoset. The EV of each P1 action is also shown.

Assume that a blueprint strategy profile σ (shown in Figure 2) has already been computed for Coin Toss, in which P1 chooses Play 3/4 of the time with Heads and 1/2 of the time with Tails, and P2 chooses Heads 1/2 of the time, Tails 1/4 of the time, and Forfeit 1/4 of the time after P1 chooses Play. The details of the blueprint strategy in the Sell subgame are not relevant in this section, but the EV for choosing the Sell action is relevant. We assume that if P1 chose the Sell action and played optimally thereafter, then she would receive an expected payoff of 0.5 if the coin is Heads, and −0.5 if the coin is Tails. We will attempt to improve P2’s strategy in the subgame S that follows P1 choosing Play.
4.1 Unsafe Subgame Solving

We first review the most intuitive form of subgame solving, which we refer to as Unsafe subgame solving [1, 12, 13, 10]. This form of subgame solving assumes both players played according to the blueprint strategy prior to reaching the subgame. That assumption defines a probability distribution over the nodes at the root of the subgame $S$, representing the probability that the true game state matches that node. A strategy for the subgame is then calculated under the assumption that this distribution is correct.

In all subgame-solving algorithms, an augmented subgame containing $S$ and a few additional nodes is solved to determine the strategy for $S$. Applying Unsafe subgame solving to the blueprint strategy in Coin Toss (after P1 chooses Play) means solving the augmented subgame shown in Figure 3a. Specifically, the augmented subgame consists of only an initial chance node and $S$. The initial chance node reaches $h \in S_{top}$ with probability $\frac{\pi^\sigma(h)}{\sum_{h' \in S_{top}} \pi^\sigma(h')}$. The augmented subgame is solved, and its strategy for P2 is used in $S$ rather than the blueprint strategy.

Unsafe subgame solving lacks theoretical solution-quality guarantees, and there are many situations where it performs extremely poorly. Indeed, if it were applied to the blueprint strategy of Coin Toss, then P2 would always choose Heads, which P1 could exploit severely by only choosing Play with Tails. Despite the lack of theoretical guarantees and potentially bad performance, Unsafe subgame solving is simple and can sometimes produce low-exploitability strategies, as we show later. We now move to discussing safe subgame-solving techniques, that is, ones that ensure that the exploitability of the strategy is no higher than that of the blueprint strategy.

(a) Unsafe subgame solving (b) Resolve subgame solving

Figure 3: The augmented subgames solved to find a P2 strategy in the Play subgame of Coin Toss.
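The root distribution used by Unsafe subgame solving, $\pi^\sigma(h) / \sum_{h' \in S_{top}} \pi^\sigma(h')$, can be computed directly from blueprint reach probabilities. A minimal sketch using the Coin Toss blueprint above (assuming a fair coin; P1 plays Play with probability 3/4 from Heads and 1/2 from Tails):

```python
# Blueprint reach probability pi^sigma(h) of each root node of S:
# chance probability of the coin times P1's blueprint probability of Play.
reach = {"Heads": 0.5 * 0.75,
         "Tails": 0.5 * 0.50}

# Normalize to get the initial chance node's distribution in the
# Unsafe augmented subgame.
total = sum(reach.values())
root_distribution = {h: p / total for h, p in reach.items()}
print(root_distribution)  # {'Heads': 0.6, 'Tails': 0.4}
```

This is the belief distribution the subgame solver treats as ground truth, which is exactly why the method is unsafe: P1 can deviate from the blueprint to make the distribution wrong.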
4.2 Subgame Resolving

In subgame Resolving [6], a safe strategy is computed for P2 in the subgame by solving the augmented subgame shown in Figure 3b, producing an equilibrium strategy $\sigma^S$. This augmented subgame differs from Unsafe subgame solving by giving P1 the option to "opt out" of entering $S$ and instead receive the EV of playing optimally against P2's blueprint strategy in $S$.

Specifically, the augmented subgame for Resolving differs from Unsafe subgame solving as follows. For each $h_{top} \in S_{top}$ we insert a new P1 node $h_r$, which exists only in the augmented subgame, between the initial chance node and $h_{top}$. The set of these $h_r$ nodes is $S_r$. The initial chance node connects to each node $h_r \in S_r$ in proportion to the probability that player P1 could reach $h_{top}$ if P1 tried to do so (that is, in proportion to $\pi^\sigma_{-1}(h_{top})$). At each node $h_r \in S_r$, P1 has two possible actions. Action $a'_S$ leads to $h_{top}$, while action $a'_T$ leads to a terminal payoff that awards the value of playing optimally against P2's blueprint strategy, which is $CBV^{\sigma_2}(I_1(h_{top}))$. In the blueprint strategy of Coin Toss, P1 choosing Play after the coin lands Heads results in an EV of 0, and an EV of 1/2 if the coin is Tails. Therefore, $a'_T$ leads to a terminal payoff of 0 for Heads and 1/2 for Tails. After the equilibrium strategy $\sigma^S$ is computed in the augmented subgame, P2 plays according to the computed subgame strategy $\sigma^S_2$ rather than the blueprint strategy when in $S$. The P1 strategy $\sigma^S_1$ is not used.

Clearly P1 cannot do worse than always picking action $a'_T$ (which awards the highest EV P1 could achieve against P2's blueprint). But P1 also cannot do better than always picking $a'_T$, because P2 could simply play according to the blueprint in $S$, which means action $a'_S$ would give the same EV to P1 as action $a'_T$ (if P1 played optimally in $S$). In this way, the strategy for P2 in $S$ is pressured to be no worse than that of the blueprint.
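The Resolve gadget's root can be sketched for the Coin Toss example. The representation below (a dictionary per root node) is an illustrative data structure of my own choosing, not the paper's; the numbers come from the text: the chance node weights each $h_r$ in proportion to P1's counterfactual reach $\pi^\sigma_{-1}(h_{top})$ (here just the fair-coin probabilities), and the opt-out payoffs are the blueprint values 0 (Heads) and 1/2 (Tails).

```python
# P1's counterfactual reach of each root node: P1's own actions are excluded,
# so only the chance probability of the coin remains.
pi_minus1 = {"Heads": 0.5, "Tails": 0.5}
# Opt-out (a'_T) payoffs: P1's blueprint CBV for entering S from each state.
alt_payoff = {"Heads": 0.0, "Tails": 0.5}

total = sum(pi_minus1.values())
gadget_root = {h: {"prob": pi_minus1[h] / total,      # chance node distribution
                   "opt_out_ev": alt_payoff[h]}       # terminal payoff of a'_T
               for h in pi_minus1}
print(gadget_root["Heads"])  # {'prob': 0.5, 'opt_out_ev': 0.0}
```

Solving the gadget then forces P2's subgame strategy to hold P1's entry value at or below these opt-out payoffs.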
In Coin Toss, if P2 were to always choose Heads (as was the case in Unsafe subgame solving), then P1 would always choose $a'_T$ with Heads and $a'_S$ with Tails.

Resolving guarantees that P2's exploitability will be no higher than the blueprint's (and may be lower). However, it may miss opportunities for improvement. For example, if we apply Resolving to the example blueprint in Coin Toss, one solution to the augmented subgame is the blueprint itself, so P2 may choose Forfeit 25% of the time even though Heads and Tails dominate that action. Indeed, the original purpose of Resolving was not to improve upon a blueprint strategy in a subgame, but rather to store it compactly by keeping only the EVs at the root of the subgame and then reconstructing the strategy in real time when needed, rather than storing the whole subgame strategy.

Maxmargin subgame solving [21], discussed in Appendix A, can improve performance by defining a margin $M^{\sigma^S}(I_1) = CBV^{\sigma_2}(I_1) - CBV^{\sigma^S_2}(I_1)$ for each $I_1 \in S_{top}$ and maximizing $\min_{I_1 \in S_{top}} M^{\sigma^S}(I_1)$. Resolving only makes all margins nonnegative. However, Maxmargin does worse in practice when using estimates of equilibrium values, as discussed in Appendix C.

5 Reach Subgame Solving

All of the subgame-solving techniques described in Section 4 consider the target subgame only in isolation, which can lead to suboptimal strategies. For example, Maxmargin solving applied to $S$ in Coin Toss results in P2 choosing Heads with probability 5/8 and Tails with probability 3/8 in $S$. This results in P1 receiving an EV of −1/4 by choosing Play in the Heads state, and an EV of 1/4 in the Tails state. However, P1 could simply always choose Sell in the Heads state (earning an EV of 0.5) and Play in the Tails state, and receive an EV of 3/8 for the entire game.
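The Maxmargin margins for this example can be checked numerically. The alternative (blueprint) values of Play for P1 are 0 (Heads) and 1/2 (Tails), and against the Maxmargin subgame strategy (P2 plays Heads 5/8, Tails 3/8) P1's values of Play are −1/4 and 1/4, as stated above, so both margins equal 1/4:

```python
# Blueprint counterfactual best-response values of entering S:
cbv_blueprint = {"Heads": 0.0, "Tails": 0.5}
# P1's values of Play against the Maxmargin subgame strategy:
cbv_subgame = {"Heads": -0.25, "Tails": 0.25}

# Margin M(I_1) = CBV^{sigma_2}(I_1) - CBV^{sigma^S_2}(I_1) for each root infoset.
margins = {I: cbv_blueprint[I] - cbv_subgame[I] for I in cbv_blueprint}
print(margins)                 # {'Heads': 0.25, 'Tails': 0.25}
print(min(margins.values()))   # Maxmargin maximizes this minimum margin
```

Equal margins at the solution are exactly the maximin structure one expects: raising one margin further would lower the other.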
In this section we introduce Reach subgame solving, an improvement to past subgame-solving techniques that considers what the opponent could alternatively have received from other subgames.1 For example, a better strategy for P2 would be to choose Heads with probability 3/4 and Tails with probability 1/4. Then P1 is indifferent between choosing Sell and Play in both cases and overall receives an expected payoff of 0 for the whole game. However, that strategy is only optimal if P1 would indeed achieve an EV of 0.5 for choosing Sell in the Heads state and −0.5 in the Tails state. That would be the case if P2 played according to the blueprint in the Sell subgame (which is not shown), but in reality we would apply subgame solving to the Sell subgame if the Sell action were taken, which would change P2's strategy there and therefore P1's EVs. Applying subgame solving to any subgame encountered during play is equivalent to applying it to all subgames independently; ultimately, the same strategy is played in both cases. Thus, we must consider that the EVs from other subgames may differ from what the blueprint says, because subgame solving would be applied to them as well.

1 Other subgame-solving methods have also considered the cost of reaching a subgame [31, 15]. However, those approaches are not correct in theory when applied in real time to any subgame reached during play.

Figure 4: Left: A modified game of Coin Toss with two subgames. The nodes C1 and C2 are public chance nodes whose outcomes are seen by both P1 and P2. Right: An augmented subgame for one of the subgames according to Reach subgame solving. If only one of the subgames is being solved, then the alternative payoff for Heads can be at most 1. However, if both are solved independently, then the gift must be split among the subgames and must sum to at most 1. For example, the alternative payoff in both subgames can be 0.5.
As an example of this issue, consider the game shown in Figure 4, which contains two identical subgames $S_1$ and $S_2$ in which the blueprint has P2 pick Heads and Tails each with 50% probability. The Sell action leads to an EV of 0.5 from the Heads state, while Play leads to an EV of 0. If we were to solve just $S_1$, then P2 could afford to always choose Tails in $S_1$, thereby letting P1 achieve an EV of 1 for reaching that subgame from Heads, because, due to the chance node $C_1$, $S_1$ is only reached with 50% probability. Thus, P1's EV for choosing Play would be 0.5 from Heads and −0.5 from Tails, which is optimal. We can achieve this strategy in $S_1$ by solving an augmented subgame in which the alternative payoff for Heads is 1. In that augmented subgame, P2 always choosing Tails would be a solution (though not the only solution). However, if the same reasoning were applied independently to $S_2$ as well, then P2 might always choose Tails in both subgames, and P1's EV for choosing Play from Heads would become 1 while the EV for Sell would be only 0.5. Instead, we could allow P1 to achieve an EV of 0.5 for reaching each subgame from Heads (by setting the alternative payoff for Heads to 0.5). In that case, P1's overall EV for choosing Play could only increase to 0.5, even if both $S_1$ and $S_2$ were solved independently.

We capture this intuition by considering, for each $I_1 \in S_{top}$, all the infosets and actions $I'_1 \cdot a' \sqsubset I_1$ that P1 would have taken along the path to $I_1$. If, at some $I'_1 \cdot a' \sqsubset I_1$ where P1 acted, there was a different action $a^* \in A(I'_1)$ that leads to a higher EV, then P1 would have taken a suboptimal action if they reached $I_1$. The difference in value between $a^*$ and $a'$ is referred to as a gift. We can afford to let P1's value for $I_1$ increase beyond the blueprint value (and in the process lower P1's value in some other infoset in $S_{top}$), so long as the increase to $I_1$'s value is small enough that choosing actions leading to $I_1$ is still suboptimal for P1.
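A gift is simply the shortfall of the action actually taken relative to the best alternative at an earlier P1 infoset. A sketch using the Coin Toss values (Sell worth 0.5 vs. Play worth 0 from Heads; Sell worth −0.5 vs. Play worth 0.5 from Tails):

```python
def gift(action_values, taken):
    """Value P1 gave up at an earlier infoset by taking `taken` instead of the
    best available action. Nonnegative; zero when `taken` was already optimal."""
    best = max(action_values.values())
    return best - action_values[taken]

heads_values = {"Sell": 0.5, "Play": 0.0}
tails_values = {"Sell": -0.5, "Play": 0.5}

print(gift(heads_values, "Play"))  # 0.5: Play was suboptimal from Heads
print(gift(tails_values, "Play"))  # 0.0: Play was already optimal from Tails
```

Summing such gifts along P1's path into a subgame gives the slack that Reach subgame solving adds to the alternative payoffs, as formalized in the next section.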
Critically, we must ensure that the increase in value is small enough even when the potential increase across all subgames is summed together, as in Figure 4.2

A complicating factor is that gifts we assumed were present may actually not exist. For example, in Coin Toss, suppose applying subgame solving to the Sell subgame results in P1's value for Sell from the Heads state decreasing from 0.5 to 0.25. If we independently solve the Play subgame, we have no way of knowing that P1's value for Sell is lower than the blueprint suggested, so we may still assume there is a gift of 0.5 from the Heads state based on the blueprint. Thus, in order to guarantee a theoretical result on exploitability that is as strong as possible, we use in our theory and experiments a lower bound on what gifts could be after subgame solving has been applied to all other subgames.

Formally, let $\sigma_2$ be a P2 blueprint and let $\sigma^{-S}_2$ be the P2 strategy that results from applying subgame solving independently to a set of disjoint subgames other than $S$. Since we do not want to compute $\sigma^{-S}_2$ in order to apply subgame solving to $S$, let $\lfloor g^{\sigma^{-S}_2}(I'_1, a') \rfloor$ be a lower bound on $CBV^{\sigma^{-S}_2}(I'_1) - CBV^{\sigma^{-S}_2}(I'_1, a')$ that does not require knowledge of $\sigma^{-S}_2$. In our experiments we

2 In this paper and in our experiments, we allow any infoset that descends from a gift to increase by the size of the gift (e.g., in Figure 4 the gift from Heads is 0.5, so we allow P1's value for Heads in both $S_1$ and $S_2$ to increase by 0.5). However, any division of the gift among subgames is acceptable so long as the potential increase across all subgames (multiplied by the probability of P1 reaching that subgame) does not exceed the original gift. For example, in Figure 4, if we only apply Reach subgame solving to $S_1$, then we could allow the Heads state in $S_1$ to increase by 1 rather than just by 0.5. In practice, some divisions may do better than others.
The division we use in this paper (applying gifts equally to all subgames) did well in practice.

use $\lfloor g^{\sigma^{-S}_2}(I'_1, a') \rfloor = \max_{a \in A_z(I'_1) \cup \{a'\}} CBV^{\sigma_2}(I'_1, a) - CBV^{\sigma_2}(I'_1, a')$, where $A_z(I'_1) \subseteq A(I'_1)$ is the set of actions leading immediately to terminal nodes. Reach subgame solving modifies the augmented subgame in Resolving and Maxmargin by increasing the alternative payoff for infoset $I_1 \in S_{top}$ by $\sum_{I'_1 \cdot a' \sqsubseteq I_1 \,:\, P(I'_1) = P_1} \lfloor g^{\sigma^{-S}_2}(I'_1, a') \rfloor$. Formally, we define a reach margin as

$$M^{\sigma^S}_r(I_1) = M^{\sigma^S}(I_1) + \sum_{I'_1 \cdot a' \sqsubseteq I_1 \,:\, P(I'_1) = P_1} \lfloor g^{\sigma^{-S}_2}(I'_1, a') \rfloor \qquad (1)$$

This margin is larger than or equal to the one for Maxmargin, because $\lfloor g^{\sigma^{-S}_2}(I', a') \rfloor$ is nonnegative. We refer to the modified algorithms as Reach-Resolve and Reach-Maxmargin.

Using a lower bound on gifts is not necessary to guarantee safety. So long as we use a gift value $g^{\sigma'}(I'_1, a') \le CBV^{\sigma_2}(I'_1) - CBV^{\sigma_2}(I'_1, a')$, the resulting strategy will be safe. However, using a lower bound further guarantees a reduction in exploitability when a P1 best response reaches, with positive probability, an infoset $I_1 \in S_{top}$ that has positive margin, as proven in Theorem 1. In practice, it may be best to use an accurate estimate of gifts. One option is to use $\hat{g}^{\sigma^{-S}_2}(I'_1, a') = \widetilde{CBV}^{\sigma_2}(I'_1) - \widetilde{CBV}^{\sigma_2}(I'_1, a')$ in place of $\lfloor g^{\sigma^{-S}_2}(I'_1, a') \rfloor$, where $\widetilde{CBV}^{\sigma_2}$ is the closest P1 can get to the value of a counterfactual best response while P1 is constrained to playing within the abstraction that generated the blueprint. Using estimates is covered in more detail in Appendix C.

Theorem 1 shows that when subgames are solved independently using lower bounds on gifts, Reach-Maxmargin solving has exploitability lower than or equal to that of past safe techniques. The theorem statement is similar to that of Maxmargin [21], but the margins are now larger (or equal) in size.

Theorem 1.
Given a strategy $\sigma_2$ in a two-player zero-sum game, a set of disjoint subgames $\mathcal{S}$, and a strategy $\sigma^S_2$ for each subgame $S \in \mathcal{S}$ produced via Reach-Maxmargin solving using lower bounds for gifts, let $\sigma'_2$ be the strategy that plays according to $\sigma^S_2$ in each subgame $S \in \mathcal{S}$, and according to $\sigma_2$ elsewhere. Moreover, let $\sigma^{-S}_2$ be the strategy that plays according to $\sigma'_2$ everywhere except for P2 nodes in $S$, where it instead plays according to $\sigma_2$. If $\pi^{BR(\sigma'_2)}_1(I_1) > 0$ for some $I_1 \in S_{top}$, then $\exp(\sigma'_2) \le \exp(\sigma^{-S}_2) - \sum_{h \in I_1} \pi^{\sigma_2}_{-1}(h)\, M^{\sigma^S}_r(I_1)$.

So far, the described techniques have guaranteed a reduction in exploitability over the blueprint by setting the value of $a'_T$ equal to the value of P1 playing optimally against P2's blueprint. Relaxing this guarantee by instead setting the value of $a'_T$ equal to an estimate of P1's value when both players play optimally leads to far lower exploitability in practice. We discuss this approach in Appendix C.

6 Nested Subgame Solving

As we have discussed, large games must be abstracted to reduce them to a tractable size. This is particularly common in games with large or continuous action spaces. Typically the action space is discretized by action abstraction, so that only a few actions are included in the abstraction. While we might limit ourselves to the actions included in the abstraction, an opponent might choose actions that are not in the abstraction. In that case, the off-tree action can be mapped to an action that is in the abstraction, and the strategy from that in-abstraction action can be used. For example, in an auction game we might include a bid of $100 in our abstraction. If a player bids $101, we simply treat that as a bid of $100. This is referred to as action translation [14, 28, 8]. Action translation is the state-of-the-art prior approach to dealing with this issue. It has been used, for example, by all the leading competitors in the Annual Computer Poker Competition (ACPC).
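The $101-to-$100 example corresponds to the simplest possible translation rule: map an off-tree bet deterministically to the nearest in-abstraction bet. Deployed systems instead use the randomized pseudo-harmonic mapping [8]; the sketch below illustrates only the interface of a translation function, not that mapping.

```python
def translate(bet, abstraction_bets):
    """Naive action translation: map an off-tree bet to the closest bet that
    exists in the abstraction. Real systems randomize between the two
    neighboring in-abstraction bets (pseudo-harmonic mapping)."""
    return min(abstraction_bets, key=lambda b: abs(b - bet))

bets = [50, 100, 200]       # hypothetical bet sizes in the abstraction
print(translate(101, bets))  # 100: the paper's $101 -> $100 example
print(translate(160, bets))  # 200: nearest in-abstraction bet
```

Nested subgame solving, described next, removes the need for any such mapping by solving a fresh subgame rooted at the off-tree action itself.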
In this section, we develop techniques for applying subgame solving to calculate responses to opponent off-tree actions, thereby obviating the need for action translation. That is, rather than simply treating a bid of $101 as a bid of $100, we calculate in real time a unique response to the bid of $101. This can also be done in a nested fashion in response to subsequent opponent off-tree actions. Additionally, these techniques can be used to solve finer-grained models as play progresses down the game tree.

We refer to the first method as the inexpensive method.3 When P1 chooses an off-tree action $a$, a subgame $S$ is generated following that action such that for any infoset $I_1$ that P1 might be in, $I_1 \cdot a \in S_{top}$. This subgame may itself be an abstraction. A solution $\sigma^S$ is computed via subgame solving, and $\sigma^S$ is combined with $\sigma$ to form a new blueprint $\sigma'$ in the expanded abstraction that now includes action $a$. The process repeats whenever P1 again chooses an off-tree action.

3 Following our study, the AI DeepStack used a technique similar to this form of nested subgame solving [20].

To conduct safe subgame solving in response to an off-tree action $a$, we could calculate $CBV^{\sigma_2}(I_1, a)$ by defining, via action translation, a P2 blueprint following $a$ and best responding to it [4]. However, that could be computationally expensive and would likely perform poorly in practice because, as we show later, action translation is highly exploitable. Instead, we relax the guarantee of safety and use $\widetilde{CBV}^{\sigma_2}(I_1)$ for the alternative payoff, where $\widetilde{CBV}^{\sigma_2}(I_1)$ is P1's counterfactual best response value in $I_1$ when constrained to playing in the blueprint abstraction (which excludes action $a$). In this case, exploitability depends on how well $\widetilde{CBV}^{\sigma_2}(I_1)$ approximates $CBV^{\sigma^*_2}(I_1)$, where $\sigma^*_2$ is an optimal P2 strategy (see Appendix C).4 In general, we find that only a small number of near-optimal actions need to be included in the blueprint abstraction for $\widetilde{CBV}^{\sigma_2}(I_1)$ to be close to $CBV^{\sigma^*_2}(I_1)$.
We can then approximate a near-optimal response to any opponent action, even in a continuous action space.

The "inexpensive" approach cannot be combined with Unsafe subgame solving, because the probability of reaching an action outside of a player's abstraction is undefined. Nevertheless, a similar approach is possible with Unsafe subgame solving (as well as all the other subgame-solving techniques) by starting the subgame solving at $h$ rather than at $h \cdot a$. In other words, if action $a$ taken in node $h$ is not in the abstraction, then Unsafe subgame solving is conducted in the smallest subgame containing $h$ (and action $a$ is added to that abstraction). This increases the size of the subgame compared to the inexpensive method, because a strategy must be recomputed for every action $a' \in A(h)$ in addition to $a$. We therefore call this method the expensive method. We present experiments with both methods.

7 Experiments

Our experiments were conducted on heads-up no-limit Texas hold'em, as well as two smaller-scale poker games we call No-Limit Flop Hold'em (NLFH) and No-Limit Turn Hold'em (NLTH). The descriptions of these games can be found in Appendix G. For equilibrium finding, we used CFR+ [30].

Our first experiment compares the performance of the subgame-solving techniques when applied to information abstraction (which is card abstraction in the case of poker). Specifically, we solve NLFH with no information abstraction on the preflop. On the flop, there are 1,286,792 infosets for each betting sequence; the abstraction buckets them into 200, 2,000, or 30,000 abstract infosets (using a leading information abstraction algorithm [9]). We then apply subgame solving immediately after the flop community cards are dealt. We experiment with two versions of the game, one small and one large, each of which includes only a few of the available actions in each infoset. We also experimented on abstractions of NLTH. In that case, we solve NLTH with no information abstraction on the preflop or flop.
On the turn, there are 55,190,538 infosets for each betting sequence; the abstraction buckets them into 200, 2,000, or 20,000 abstract infosets. We apply subgame solving immediately after the turn community card is dealt. Table 1 shows the performance of each technique when using 30,000 buckets (20,000 for NLTH). The full results are presented in Appendix E. In all our experiments, exploitability is measured in the standard units used in this field: milli big blinds per hand (mbb/h).

                                          Small Flop Holdem   Large Flop Holdem   Turn Holdem
Blueprint Strategy                        91.28               41.41               345.5
Unsafe                                    5.514               396.8               79.34
Resolve                                   54.07               23.11               251.8
Maxmargin                                 43.43               19.50               234.4
Reach-Maxmargin                           41.47               18.80               233.5
Reach-Maxmargin (no split)                25.88               16.41               175.5
Estimate                                  24.23               30.09               76.44
Estimate+Distributional                   34.30               10.54               74.35
Reach-Estimate+Distributional             22.58               9.840               72.59
Reach-Estimate+Distributional (no split)  17.33               8.777               70.68

Table 1: Exploitability (mbb/h) of various subgame-solving techniques in three different games. Estimate and Estimate+Distributional are techniques introduced in Appendix C. We use a normal distribution in the Distributional subgame-solving experiments, with standard deviation determined by the heuristic presented in Appendix C.1.

Since subgame solving begins immediately after a chance node with an extremely high branching factor (1,755 in NLFH), the gifts for the Reach algorithms are divided among subgames inefficiently. Many subgames do not use the gifts at all, while others could make use of more. In the experiments we show results both for the theoretically safe splitting of gifts, as well as a more aggressive version in which gifts are scaled up by the branching factor of the chance node (1,755).

4 We estimate $CBV^{\sigma^*_2}(I_1)$ rather than $CBV^{\sigma^*_2}(I_1, a)$ because $CBV^{\sigma^*_2}(I_1) - CBV^{\sigma^*_2}(I_1, a)$ is a gift that may be added to the alternative payoff anyway.
This weakens the theoretical guarantees of the algorithm but, in general, did better than splitting gifts in a theoretically correct manner. However, this is not universally true: Appendix F shows that in at least one case, exploitability increased when gifts were scaled up too aggressively. In all cases, using Reach subgame solving in at least the theoretically safe way led to lower exploitability.

Despite lacking theoretical guarantees, Unsafe subgame solving did surprisingly well in most games. However, it did substantially worse in Large NLFH with 30,000 buckets. This exemplifies its variability. Among the safe methods, all of the changes we introduce show improvement over past techniques. The Reach-Estimate+Distributional algorithm generally resulted in the lowest exploitability among the various choices, and in most cases beat Unsafe subgame solving.

The second experiment evaluates nested subgame solving and compares it to action translation. In order to also evaluate action translation in this experiment, we create an NLFH game that includes 3 bet sizes at every point in the game tree (0.5, 0.75, and 1.0 times the size of the pot); a player can also decide not to bet. Only one bet (i.e., no raises) is allowed on the preflop, and three bets are allowed on the flop. There is no information abstraction anywhere in the game. We also created a second, smaller abstraction of the game in which there is still no information abstraction, but the 0.75× pot bet is never available. We calculate the exploitability of one player using the smaller abstraction while the other player uses the larger abstraction. Whenever the large-abstraction player chooses a 0.75× pot bet, the small-abstraction player generates and solves a subgame for the remainder of the game (which again does not include any subsequent 0.75× pot bets) using the nested subgame-solving techniques described above.
This subgame strategy is then used as long as the large-abstraction player plays within the small abstraction, but if she chooses the 0.75× pot bet again later, then subgame solving is used again, and so on. Table 2 shows that all the subgame-solving techniques substantially outperform action translation. We did not test distributional alternative payoffs in this experiment, since the calculated best response values are likely quite accurate. These results suggest that nested subgame solving is preferable to action translation (if there is sufficient time to solve the subgame).

                                     mbb/h
Randomized Pseudo-Harmonic Mapping   1,465
Resolve                              150.2
Reach-Maxmargin (Expensive)          149.2
Unsafe (Expensive)                   148.3
Maxmargin                            122.0
Reach-Maxmargin                      119.1

Table 2: Exploitability of the various subgame-solving techniques in nested subgame solving. The performance of the pseudo-harmonic action translation is also shown.

We used the techniques presented in this paper to develop Libratus, an AI that competed against four top human professionals in heads-up no-limit Texas hold'em [5]. Heads-up no-limit Texas hold'em has been the primary benchmark challenge for AI in imperfect-information games. The competition involved 120,000 hands of poker and a prize pool of $200,000, split among the humans to incentivize strong play. The AI decisively defeated the human team by 147 mbb/hand, with 99.98% statistical significance. This was the first, and so far only, time an AI defeated top humans in no-limit poker.

8 Conclusion

We introduced a subgame-solving technique for imperfect-information games that has stronger theoretical guarantees and better practical performance than prior subgame-solving methods. We presented results on the exploitability of both safe and unsafe subgame-solving techniques.
We also introduced a method for nested subgame solving in response to the opponent's off-tree actions, and demonstrated that this leads to dramatically better performance than the usual approach of action translation. This is, to our knowledge, the first time that the exploitability of subgame-solving techniques has been measured in large games. Finally, we demonstrated the effectiveness of these techniques in practice in heads-up no-limit Texas hold'em poker, the main benchmark challenge for AI in imperfect-information games. We developed the first AI to reach the milestone of defeating top humans in heads-up no-limit Texas hold'em.

9 Acknowledgments

This material is based on work supported by the National Science Foundation under grants IIS-1718457, IIS-1617590, and CCF-1733556, and by the ARO under award W911NF-17-1-0082, as well as XSEDE computing resources provided by the Pittsburgh Supercomputing Center. The Brains vs. AI competition was sponsored by Carnegie Mellon University, Rivers Casino, GreatPoint Ventures, Avenue4Analytics, TNG Technology Consulting, Artificial Intelligence, Intel, and Optimized Markets, Inc. We thank Kristen Gardner, Marcelo Gutierrez, Theo Gutman-Solo, Eric Jackson, Christian Kroer, Tim Reiff, and the anonymous reviewers for helpful feedback.

References

[1] Darse Billings, Neil Burch, Aaron Davidson, Robert Holte, Jonathan Schaeffer, Terence Schauenberg, and Duane Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI), 2003.

[2] Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up limit hold'em poker is solved. Science, 347(6218):145–149, January 2015.

[3] Noam Brown, Christian Kroer, and Tuomas Sandholm. Dynamic thresholding and pruning for regret minimization. In AAAI Conference on Artificial Intelligence (AAAI), pages 421–429, 2017.

[4] Noam Brown and Tuomas Sandholm.
Simultaneous abstraction and equilibrium finding in games. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2015.

[5] Noam Brown and Tuomas Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, page eaao1733, 2017.

[6] Neil Burch, Michael Johanson, and Michael Bowling. Solving imperfect information games using decomposition. In AAAI Conference on Artificial Intelligence (AAAI), pages 602–608, 2014.

[7] Murray Campbell, A. Joseph Hoane, and Feng-Hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1–2):57–83, 2002.

[8] Sam Ganzfried and Tuomas Sandholm. Action translation in extensive-form games with large action spaces: axioms, paradoxes, and the pseudo-harmonic mapping. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), pages 120–128. AAAI Press, 2013.

[9] Sam Ganzfried and Tuomas Sandholm. Potential-aware imperfect-recall abstraction with earth mover's distance in imperfect-information games. In AAAI Conference on Artificial Intelligence (AAAI), 2014.

[10] Sam Ganzfried and Tuomas Sandholm. Endgame solving in large imperfect-information games. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 37–45, 2015.

[11] Andrew Gilpin, Javier Peña, and Tuomas Sandholm. First-order algorithm with O(ln(1/ε)) convergence for ε-equilibrium in two-person zero-sum games. Mathematical Programming, 133(1–2):279–298, 2012. Conference version appeared in AAAI-08.

[12] Andrew Gilpin and Tuomas Sandholm. A competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 1007–1013, 2006.

[13] Andrew Gilpin and Tuomas Sandholm. Better automated abstraction techniques for imperfect information games, with application to Texas Hold'em poker.
In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1168–1175, 2007.

[14] Andrew Gilpin, Tuomas Sandholm, and Troels Bjerre Sørensen. A heads-up no-limit Texas hold'em poker player: discretized betting models and automatically generated equilibrium-finding programs. In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 911–918. International Foundation for Autonomous Agents and Multiagent Systems, 2008.

[15] Eric Jackson. A time and space efficient algorithm for approximately solving large imperfect information games. In AAAI Workshop on Computer Poker and Imperfect Information, 2014.

[16] Michael Johanson. Measuring the size of large no-limit poker games. Technical report, University of Alberta, 2013.

[17] Michael Johanson, Nolan Bard, Neil Burch, and Michael Bowling. Finding optimal abstract strategies in extensive-form games. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI), pages 1371–1379. AAAI Press, 2012.

[18] Christian Kroer, Kevin Waugh, Fatma Kılınç-Karzan, and Tuomas Sandholm. Theoretical and practical advances on smoothing for extensive-form games. In Proceedings of the ACM Conference on Economics and Computation (EC), 2017.

[19] Nick Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.

[20] Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 2017.

[21] Matej Moravčík, Martin Schmid, Karel Ha, Milan Hladík, and Stephen Gaukrodger. Refining subgames in large imperfect information games. In AAAI Conference on Artificial Intelligence (AAAI), 2016.

[22] John Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36:48–49, 1950.
[23] Yurii Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on Optimization, 16(1):235–249, 2005.

[24] Tuomas Sandholm. The state of solving large incomplete-information games, and application to poker. AI Magazine, pages 13–32, Winter 2010. Special issue on Algorithmic Game Theory.

[25] Tuomas Sandholm. Abstraction for solving large incomplete-information games. In AAAI Conference on Artificial Intelligence (AAAI), pages 4127–4131, 2015. Senior Member Track.

[26] Tuomas Sandholm. Solving imperfect-information games. Science, 347(6218):122–123, 2015.

[27] Jonathan Schaeffer, Neil Burch, Yngvi Björnsson, Akihiro Kishimoto, Martin Müller, Robert Lake, Paul Lu, and Steve Sutphen. Checkers is solved. Science, 317(5844):1518–1522, 2007.

[28] David Schnizlein, Michael Bowling, and Duane Szafron. Probabilistic state translation in extensive games with large action sets. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI), pages 278–284, 2009.

[29] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

[30] Oskari Tammelin, Neil Burch, Michael Johanson, and Michael Bowling. Solving heads-up limit Texas hold'em. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 645–652, 2015.

[31] Kevin Waugh, Nolan Bard, and Michael Bowling. Strategy grafting in extensive games. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), 2009.

[32] Martin Zinkevich, Michael Johanson, Michael H. Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), pages 1729–1736, 2007.
A PAC-Bayesian Analysis of Randomized Learning with Application to Stochastic Gradient Descent

Ben London (blondon@amazon.com)
Amazon AI

Abstract

We study the generalization error of randomized learning algorithms—focusing on stochastic gradient descent (SGD)—using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our generalization bounds hold for all posterior distributions on an algorithm's random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our generalization bounds and evaluate it on a benchmark dataset. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.

1 Introduction

Randomized algorithms are the workhorses of modern machine learning. One such algorithm is stochastic gradient descent (SGD), a first-order optimization method that approximates the gradient of the learning objective by a random point estimate, thereby making it efficient for large datasets. Recent interest in studying the generalization properties of SGD has led to several breakthroughs. Notably, Hardt et al. [10] showed that SGD is stable with respect to small perturbations of the training data, which let them bound the risk of a learned model. Related studies followed thereafter [13, 16]. Simultaneously, Lin and Rosasco [15] derived risk bounds that show that early stopping acts as a regularizer in multi-pass SGD (echoing studies of incremental gradient descent [19]). In this paper, we study generalization in randomized learning, with SGD as a motivating example.
Using a novel analysis that combines PAC-Bayes with algorithmic stability (reminiscent of [17]), we prove new generalization bounds for randomized learning algorithms, which apply to SGD under various assumptions on the loss function and optimization objective. Our bounds improve on related work in two important ways. While some previous bounds for SGD [1, 10, 13, 16] hold in expectation over draws of the training data, our bounds hold with high probability. Further, existing generalization bounds for randomized learning [6, 7] only apply to algorithms with fixed distributions (such as SGD with uniform sampling); thanks to our PAC-Bayesian treatment, our bounds hold for all posterior distributions, meaning they support data-dependent randomization. The penalty for overfitting the posterior to the data is captured by the posterior's divergence from a fixed prior. Our generalization bounds suggest a sampling strategy for SGD that adapts to the training data and model, focusing on useful examples while staying close to a uniform prior. We therefore propose an adaptive sampling algorithm that dynamically updates its distribution using multiplicative weight updates (similar to boosting [8, 21], focused online learning [22] and exponentiated gradient dual coordinate ascent [4]). The algorithm requires minimal tuning and works with any stochastic gradient update rule. We analyze the divergence of the adaptive posterior and conduct experiments on a benchmark dataset, using several combinations of update rule and sampling utility function. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Preliminaries

Let X denote a compact domain; let Y denote a set of labels; and let Z ≜ X × Y denote their Cartesian product. We assume there exists an unknown, fixed distribution, D, supported on Z.
Given a dataset of examples, S ≜ (z_1, ..., z_n) = ((x_1, y_1), ..., (x_n, y_n)), drawn independently and identically from D, we wish to learn the parameters of a predictive model, X → Y, from a class of hypotheses, H, which we assume is a subset of Euclidean space. We have access to a deterministic learning algorithm, A : Z^n × Θ → H, which, given S, and some hyperparameters, θ ∈ Θ, produces a hypothesis, h ∈ H. We measure the quality of a hypothesis using a loss function, L : H × Z → [0, M], which we assume is M-bounded¹ and λ-Lipschitz (see Appendix A for the definition). Let L(A(S, θ), z) denote the loss of a hypothesis that was output by A(S, θ) when applied to example z. Ultimately, we want the learning algorithm to have low expected loss on a random example; i.e., low risk, denoted R(S, θ) ≜ E_{z∼D}[L(A(S, θ), z)]. (The learning algorithm should always be clear from context.) Since this expectation cannot be computed, we approximate it by the average loss on the training data; i.e., the empirical risk, R̂(S, θ) ≜ (1/n) ∑_{i=1}^n L(A(S, θ), z_i), which is what most learning algorithms attempt to minimize. By bounding the difference of the two, G(S, θ) ≜ R(S, θ) − R̂(S, θ), which we refer to as the generalization error, we obtain an upper bound on R(S, θ). Throughout this document, we will view a randomized learning algorithm as a deterministic learning algorithm whose hyperparameters are randomized. For instance, stochastic gradient descent (SGD) performs a sequence of hypothesis updates, for t = 1, ..., T, of the form h_t ← U_t(h_{t−1}, z_{i_t}) ≜ h_{t−1} − η_t ∇F(h_{t−1}, z_{i_t}), using a sequence of random example indices, θ = (i_1, ..., i_T), sampled according to a distribution, P, on Θ = {1, ..., n}^T. The objective function, F : H × Z → R₊, may be different from L; it is usually chosen as an optimizable upper bound on L, and need not be bounded. The parameter η_t is a step size for the update at iteration t.
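This decomposition, SGD as a deterministic map A(S, θ) applied to a random index sequence θ ∼ P, can be sketched in a few lines. The squared-error objective for a linear model, the η/t step-size schedule, and all constants below are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

def sgd(S, theta, eta=0.1, h0=None):
    """Deterministic algorithm A(S, theta): run SGD over the fixed index
    sequence theta = (i_1, ..., i_T) with step sizes eta_t = eta / t.
    The linear-model squared-error objective is an assumption for
    illustration only."""
    X, y = S
    h = np.zeros(X.shape[1]) if h0 is None else h0.copy()
    for t, i in enumerate(theta, start=1):
        grad = (X[i] @ h - y[i]) * X[i]   # gradient of F(h, z_{i_t})
        h = h - (eta / t) * grad          # h_t <- h_{t-1} - eta_t * grad
    return h

rng = np.random.default_rng(0)
n, d, T = 50, 3, 200
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5])
theta = rng.integers(0, n, size=T)        # theta ~ P, uniform on {1,...,n}^T
h = sgd((X, y), theta)                    # the same theta always gives the same h
```

Holding θ fixed makes the "learning algorithm" deterministic; all of the randomness lives in the draw θ ∼ P, which is exactly the viewpoint the analysis exploits.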
SGD can be viewed as taking a dataset, S, drawing θ ∼ P, then running a deterministic algorithm, A(S, θ), which executes the sequence of hypothesis updates. Since learning is randomized, we will deal with the expected loss over draws of random hyperparameters. We therefore overload the above notation for a distribution, P, on the hyperparameter space, Θ; let R(S, P) ≜ E_{θ∼P}[R(S, θ)], R̂(S, P) ≜ E_{θ∼P}[R̂(S, θ)], and G(S, P) ≜ R(S, P) − R̂(S, P).

2.1 Relationship to PAC-Bayes

Conditioned on the training data, a posterior distribution, Q, on the hyperparameter space, Θ, induces a distribution on the hypothesis space, H. If we ignore the learning algorithm altogether and think of Q as a distribution on H directly, then E_{h∼Q}[L(h, z)] is the Gibbs loss; that is, the expected loss of a random hypothesis. The Gibbs loss has been studied extensively using PAC-Bayesian analysis (also known simply as PAC-Bayes) [3, 9, 14, 18, 20]. In the PAC-Bayesian learning framework, we fix a prior distribution, P, then receive some training data, S ∼ D^n, and learn a posterior distribution, Q. PAC-Bayesian bounds frame the generalization error, G(S, Q), as a function of the posterior's divergence from the prior, which penalizes overfitting the posterior to the training data. In Section 4, we derive new upper bounds on G(S, Q) using a novel PAC-Bayesian treatment. While traditional PAC-Bayes analyzes distributions directly on H, we instead analyze distributions on Θ. Thus, instead of applying the loss directly to a random hypothesis, we apply it to the output of a learning algorithm, whose inputs are a dataset and a random hyperparameter instantiation. This distinction is subtle, but important. In our framework, a random hypothesis is explicitly a function of the learning algorithm, whereas in traditional PAC-Bayes this dependence may only be implicit—for instance, if the posterior is given by random permutations of a learned hypothesis.
The advantage of making the learning aspect explicit is that it isolates the source of randomness, which may help in analyzing the distribution of learned hypotheses. Indeed, it may be difficult to map the output of a randomized learning algorithm to a distribution on the hypothesis space. That said, the disadvantage of making learning explicit is that, due to the learning algorithm's dependence on the training data and hyperparameters, the generalization error could be sensitive to certain examples or hyperparameters. This condition is quantified with algorithmic stability, which we discuss next.

¹ Accommodating unbounded loss functions is possible [11], but requires additional assumptions.

3 Algorithmic Stability

Informally, algorithmic stability measures the change in loss when the inputs to a learning algorithm are perturbed; a learning algorithm is stable if small perturbations lead to proportional changes in the loss. In other words, a learning algorithm should not be overly sensitive to any single input. Stability is crucial for learnability [23], and has also been linked to differentially private learning [24]. In this section, we discuss several notions of stability tailored for randomized learning algorithms. From this point on, let D_H(v, v′) ≜ ∑_{i=1}^{|v|} 1{v_i ≠ v′_i} denote the Hamming distance.

3.1 Definitions of Stability

The literature traditionally measures stability with respect to perturbations of the training data. We refer to this general property as data stability. Data stability has been defined in many ways. The following definitions, originally proposed by Elisseeff et al. [6], are designed to accommodate randomized algorithms via an expectation over the hyperparameters, θ ∼ P.

Definition 1 (Uniform Stability). A randomized learning algorithm, A, is β_Z-uniformly stable with respect to a loss function, L, and a distribution, P on Θ, if

sup_{S,S′∈Z^n : D_H(S,S′)=1} sup_{z∈Z} E_{θ∼P}[L(A(S, θ), z) − L(A(S′, θ), z)] ≤ β_Z.
Definition 2 (Pointwise Hypothesis Stability). For a given dataset, S, let S^{i,z} denote the result of replacing the ith example with example z. A randomized learning algorithm, A, is β_Z-pointwise hypothesis stable with respect to a loss function, L, and a distribution, P on Θ, if

sup_{i∈{1,...,n}} E_{S∼D^n} E_{z∼D} E_{θ∼P}[L(A(S, θ), z_i) − L(A(S^{i,z}, θ), z_i)] ≤ β_Z.

Uniform stability measures the maximum change in loss from replacing any single training example, whereas pointwise hypothesis stability measures the expected change in loss on a random example when said example is removed from the training data. Under certain conditions, β_Z-uniform stability implies β_Z-pointwise hypothesis stability, but not vice versa. Thus, while uniform stability enables sharper bounds, pointwise hypothesis stability supports a wider range of learning algorithms. In addition to data stability, we might also require stability with respect to changes in the hyperparameters. From this point forward, we will assume that the hyperparameter space, Θ, decomposes into the product of T subspaces, ∏_{t=1}^T Θ_t. For instance, Θ could be the set of all sequences of example indices, {1, ..., n}^T, such as one would sample from in SGD.

Definition 3 (Hyperparameter Stability). A randomized learning algorithm, A, is β_Θ-uniformly stable with respect to a loss function, L, if

sup_{S∈Z^n} sup_{z∈Z} sup_{θ,θ′∈Θ : D_H(θ,θ′)=1} |L(A(S, θ), z) − L(A(S, θ′), z)| ≤ β_Θ.

When A is both β_Z-uniformly and β_Θ-uniformly stable, we say that A is (β_Z, β_Θ)-uniformly stable.

Remark 1. For SGD, Definition 3 can be mapped to Bousquet and Elisseeff's [2] original definition of uniform stability using the resampled example sequence. Yet their generalization bounds would still not apply because the resampled data is not i.i.d. and SGD is not a symmetric learning algorithm.

3.2 Stability of Stochastic Gradient Descent

For non-vacuous generalization bounds, we will need the data stability coefficient, β_Z, to be of order Õ(n⁻¹).
Additionally, certain results will require the hyperparameter stability coefficient, β_Θ, to be of order Õ(1/√(nT)). (If T = Θ(n), as it often is, then β_Θ = Õ(T⁻¹) suffices.) In this section, we review some conditions under which these requirements are satisfied by SGD. We rely on standard characterizations of the objective function—namely, convexity, Lipschitzness and smoothness—the definitions of which are deferred to Appendix A, along with all proofs from this section. A recent study by Hardt et al. [10] proved that some special cases of SGD—when examples are sampled uniformly, with replacement—satisfy β_Z-uniform stability (Definition 1) with β_Z = O(n⁻¹). We extend their work (specifically, [10, Theorem 3.7]) in the following result for SGD with a convex objective function, when the step size is at most inversely proportional to the current iteration.

Proposition 1. Assume that the loss function, L, is λ-Lipschitz, and that the objective function, F, is convex, λ-Lipschitz and σ-smooth. Suppose SGD is run for T iterations with a uniform sampling distribution, P, and step sizes η_t ∈ [0, η/t], for η ∈ [0, 2/σ]. Then, SGD is both β_Z-uniformly stable and β_Z-pointwise hypothesis stable with respect to L and P, with

β_Z ≤ 2λ²η (ln T + 1) / n.   (1)

When T = Θ(n), Equation 1 is Õ(n⁻¹), which is acceptable for proving generalization. If we do not assume that the objective function is convex, we can borrow a result (with small modification²) from Hardt et al. [10, Theorem 3.8].

Proposition 2. Assume that the loss function, L, is M-bounded and λ-Lipschitz, and that the objective function, F, is λ-Lipschitz and σ-smooth. Suppose SGD is run for T iterations with a uniform sampling distribution, P, and step sizes η_t ∈ [0, η/t], for η ≥ 0. Then, SGD is both β_Z-uniformly stable and β_Z-pointwise hypothesis stable with respect to L and P, with

β_Z ≤ ((M + (ση)⁻¹)/(n − 1)) (2λ²η)^{1/(ση+1)} T^{ση/(ση+1)}.   (2)

Assuming T = Θ(n), and ignoring constants that depend on M, λ, σ and η, Equation 2 reduces to O(n^{−1/(ση+1)}). As ση approaches 1, the rate becomes O(n^{−1/2}), which, as will become evident in Section 4, yields generalization bounds that are suboptimal, or even vacuous. However, if ση is small—say, η = (10σ)⁻¹—then we get O(n^{−10/11}) ≈ O(n⁻¹), which suffices for generalization. We can obtain even tighter bounds for β_Z-pointwise hypothesis stability (Definition 2) by adopting a data-dependent view. The following result for SGD with a convex objective function is adapted from work by Kuzborskij and Lampert [13, Theorem 3].

Proposition 3. Assume that the loss function, L, is λ-Lipschitz, and that the objective function, F, is convex, λ-Lipschitz and σ-smooth. Suppose SGD starts from an initial hypothesis, h₀, and is run for T iterations with a uniform sampling distribution, P, and step sizes η_t ∈ [0, η/t], for η ∈ [0, 2/σ]. Then, SGD is β_Z-pointwise hypothesis stable with respect to L and P, with

β_Z ≤ 2λη (ln T + 1) √(2σ E_{z∼D}[L(h₀, z)]) / n.   (3)

Importantly, Equation 3 depends on the risk of the initial hypothesis, h₀. If h₀ happens to be close to a global optimum—that is, a good first guess—then Equation 3 could be tighter than Equation 1. Kuzborskij and Lampert also proved a data-dependent bound for non-convex objective functions [13, Theorem 5], which, under certain conditions, might be tighter than Equation 2. Though not presented herein, Kuzborskij and Lampert's bound is worth noting. As we will later show, we can obtain stronger generalization guarantees by combining β_Z-uniform stability with β_Θ-uniform stability (Definition 3), provided β_Θ = Õ(1/√(nT)). Prior stability analyses of SGD [10, 13] have not addressed this form of stability. Elisseeff et al. [6] proved (β_Z, β_Θ)-uniform stability for certain bagging algorithms, but did not consider SGD.
In light of Remark 1, it is tempting to map β_Θ-uniform stability to Bousquet and Elisseeff's [2] uniform stability and thereby leverage their study of various regularized objective functions. However, their analysis crucially relies on exact minimization of the learning objective, whereas SGD with a finite number of steps only finds an approximate minimizer. Thus, to our knowledge, no prior work applies to this problem. As a first step, we prove uniform stability, with respect to both data and hyperparameters, for SGD with a strongly convex objective function and decaying step sizes.

Proposition 4. Assume that the loss function, L, is λ-Lipschitz, and that the objective function, F, is γ-strongly convex, λ-Lipschitz and σ-smooth. Suppose SGD is run for T iterations with a uniform sampling distribution, P, and step sizes η_t ≜ (γt + σ)⁻¹. Then, SGD is (β_Z, β_Θ)-uniformly stable with respect to L and P, with

β_Z ≤ 2λ²/(γn) and β_Θ ≤ 2λ²/(γT).   (4)

When T = Θ(n), the β_Θ bound in Equation 4 is O(1/√(nT)), which supports good generalization.

² Hardt et al.'s definition of stability and theorem statement differ slightly from ours. See Appendix A.1.

4 Generalization Bounds

In this section, we present new generalization bounds for randomized learning algorithms. While prior work [6, 7] has addressed this topic, ours is the first PAC-Bayesian treatment (the benefits of which will be discussed momentarily). Recall that in the PAC-Bayesian framework, we fix a prior distribution, P, on the hypothesis space, H; then, given a sample of training data, S ∼ D^n, we learn a posterior distribution, Q, also on H. In our extension for randomized learning algorithms, P and Q are instead supported on the hyperparameter space, Θ. Moreover, while traditional PAC-Bayes studies E_{h∼Q}[L(h, z)], we study the expected loss over draws of hyperparameters, E_{θ∼Q}[L(A(S, θ), z)].
Our goal will be to upper-bound the generalization error of the posterior, G(S, Q), which thereby upper-bounds the risk, R(S, Q), by a function of the empirical risk, R̂(S, Q). Importantly, our bounds are polynomial in δ⁻¹, for a free parameter δ ∈ (0, 1), and hold with probability at least 1 − δ over draws of a finite training dataset. This stands in contrast to related bounds [1, 10, 13, 16] that hold in expectation. While expectation bounds are useful for gaining insight into generalization behavior, high-probability bounds are sometimes preferred. Provided the loss is M-bounded, it is always possible to convert a high-probability bound of the form Pr_{S∼D^n}{G(S, Q) ≤ B(δ)} ≥ 1 − δ to an expectation bound of the form E_{S∼D^n}[G(S, Q)] ≤ B(δ) + δM. Another useful property of PAC-Bayesian bounds is that they hold simultaneously for all posteriors, including those that depend on the training data. In Section 3, we assumed that hyperparameters were sampled according to a fixed distribution; for instance, sampling training example indices for SGD uniformly at random. However, in certain situations, it may be advantageous to sample according to a data-dependent distribution. Following the SGD example, suppose most training examples are easy to classify (e.g., far from the decision boundary), but some are difficult (e.g., near the decision boundary, or noisy). If we sample points uniformly at random, we might encounter mostly easy examples, which could slow progress on difficult examples. If we instead focus training on the difficult set, we might converge more quickly to an optimal hypothesis. Since our PAC-Bayesian bounds hold for all hyperparameter posteriors, we can characterize the generalization error of algorithms that optimize the posterior using the training data. Existing generalization bounds for randomized learning [6, 7], or SGD in particular [1, 10, 13, 15, 16], cannot address such algorithms.
Of course, there is a penalty for overfitting the posterior to the data, which is captured by the posterior's divergence from the prior. Our first PAC-Bayesian theorem requires the weakest stability condition, β_Z-pointwise hypothesis stability, but the bound is sublinear in δ⁻¹. Our second bound is polylogarithmic in δ⁻¹, but requires the stronger stability conditions, (β_Z, β_Θ)-uniform stability. All proofs are deferred to Appendix B.

Theorem 1. Suppose a randomized learning algorithm, A, is β_Z-pointwise hypothesis stable with respect to an M-bounded loss function, L, and a fixed prior, P on Θ. Then, for any n ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, every posterior, Q on Θ, satisfies

G(S, Q) ≤ √( ((χ²(Q∥P) + 1)/δ) (2M²/n + 12Mβ_Z) ),   (5)

where χ²(Q∥P) ≜ E_{θ∼P}[(Q(θ)/P(θ))² − 1] is the χ² divergence from P to Q.

Theorem 2. Suppose a randomized learning algorithm, A, is (β_Z, β_Θ)-uniformly stable with respect to an M-bounded loss function, L, and a fixed product measure, P on Θ = ∏_{t=1}^T Θ_t. Then, for any n ≥ 1, T ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, every posterior, Q on Θ, satisfies

G(S, Q) ≤ β_Z + √( 2 (D_KL(Q∥P) + ln(2/δ)) ((M + 2nβ_Z)²/n + 4Tβ_Θ²) ),   (6)

where D_KL(Q∥P) ≜ E_{θ∼Q}[ln(Q(θ)/P(θ))] is the KL divergence from P to Q.

Since Theorems 1 and 2 hold simultaneously for all hyperparameter posteriors, they provide generalization guarantees for SGD with any sampling distribution. Note that the stability requirements only need to be satisfied by a fixed product measure, such as a uniform distribution. This simple sampling distribution can have (O(n⁻¹), O(T⁻¹))-uniform stability under certain conditions, as demonstrated in Section 3.2. In the following, we apply Theorem 2 to SGD with a strongly convex objective function, leveraging Proposition 4 to upper-bound the stability coefficients.

Corollary 1.
Assume that the loss function, L, is M-bounded and λ-Lipschitz, and that the objective function, F, is γ-strongly convex, λ-Lipschitz and σ-smooth. Let P denote a uniform prior on {1, ..., n}^T. Then, for any n ≥ 1, T ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, SGD with step sizes η_t ≜ (γt + σ)⁻¹ and any posterior sampling distribution, Q on {1, ..., n}^T, satisfies

G(S, Q) ≤ 2λ²/(γn) + √( 2 (D_KL(Q∥P) + ln(2/δ)) ((M + 4λ²/γ)²/n + 16λ⁴/(γ²T)) ).

When the divergence is polylogarithmic in n, and T = Θ(n), the generalization bound is Õ(n^{−1/2}). In the special case of uniform sampling, the KL divergence is zero, yielding an O(n^{−1/2}) bound. Importantly, Theorem 1 does not require hyperparameter stability, and is therefore of interest for analyzing non-convex objective functions, since it is not known whether uniform hyperparameter stability can be satisfied without (strong) convexity. One can use Equation 2 (or [13, Theorem 5]) to upper-bound β_Z in Equation 5 and thereby obtain a generalization bound for SGD with a non-convex objective function, such as neural network training. We leave this substitution to the reader. Equation 6 holds with high probability over draws of a dataset, but the generalization error is an expected value over draws of hyperparameters. To obtain a bound that holds with high probability over draws of both data and hyperparameters, we consider posteriors that are product measures.

Theorem 3. Suppose a randomized learning algorithm, A, is (β_Z, β_Θ)-uniformly stable with respect to an M-bounded loss function, L, and a fixed product measure, P on Θ = ∏_{t=1}^T Θ_t. Then, for any n ≥ 1, T ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, and hyperparameters, θ ∼ Q, from any posterior product measure, Q on Θ,

G(S, θ) ≤ β_Z + β_Θ √(2T ln(2/δ)) + √( 2 (D_KL(Q∥P) + ln(4/δ)) ((M + 2nβ_Z)²/n + 4Tβ_Θ²) ).   (7)

If β_Θ = Õ(1/√(nT)), then β_Θ √(2T ln(2/δ)) vanishes at a rate of Õ(n^{−1/2}).
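The Corollary 1 bound is straightforward to evaluate numerically, which makes its O(n^{−1/2}) decay under uniform sampling easy to verify. The constants M, λ, γ and δ below are hypothetical values chosen for illustration:

```python
import math

def corollary1_bound(n, T, kl, M=1.0, lam=1.0, gamma=1.0, delta=0.05):
    """Evaluate the Corollary 1 bound:
        2*lam^2/(gamma*n)
          + sqrt(2*(KL + ln(2/delta)) * ((M + 4*lam^2/gamma)^2/n
                                          + 16*lam^4/(gamma^2*T))).
    M, lam, gamma, delta are illustrative constants, not values from the paper."""
    stable_term = 2 * lam ** 2 / (gamma * n)
    inner = (M + 4 * lam ** 2 / gamma) ** 2 / n + 16 * lam ** 4 / (gamma ** 2 * T)
    return stable_term + math.sqrt(2 * (kl + math.log(2 / delta)) * inner)

# Uniform sampling: KL = 0, and with T = n the bound decays as O(n^{-1/2}),
# so quadrupling n roughly halves the bound.
b1 = corollary1_bound(n=10_000, T=10_000, kl=0.0)
b2 = corollary1_bound(n=40_000, T=40_000, kl=0.0)
```

A nonzero `kl` argument models a data-dependent posterior; the bound grows with the divergence, which is exactly the overfitting penalty the theorems describe.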
We can apply Theorem 3 to SGD in the same way we applied Theorem 2 in Corollary 1. Further, note that a uniform distribution is a product distribution. Thus, if we eschew optimizing the posterior, then the KL divergence disappears, leaving an O(n^{−1/2}) derandomized generalization bound for SGD with uniform sampling.³

5 Adaptive Sampling for Stochastic Gradient Descent

The PAC-Bayesian theorems in Section 4 motivate data-dependent posterior distributions on the hyperparameter space. Intuitively, certain posteriors may improve, or speed up, learning from a given dataset. For instance, suppose certain training examples are considered valuable for reducing empirical risk; then, a sampling posterior for SGD should weight those examples more heavily than others, so that the learning algorithm can, probabilistically, focus its attention on the valuable examples. However, a posterior should also try to stay close to the prior, to control the divergence penalty in the generalization bounds. Based on this idea, we propose a sampling procedure for SGD (or any variant thereof) that constructs a posterior based on the training data, balancing the utility of the sampling distribution with its divergence from a uniform prior. The algorithm operates alongside the learning algorithm, iteratively generating the posterior as a sequence of conditional distributions on the training data. Each iteration of training generates a new distribution conditioned on the previous iterations, so the posterior dynamically adapts to training. We therefore call our algorithm adaptive sampling SGD.

³ We can achieve the same result by pairing Proposition 4 with Elisseeff et al.'s generalization bound for algorithms with (β_Z, β_Θ)-uniform stability [6, Theorem 15]. However, Elisseeff et al.'s bound only applies to fixed product measures on Θ, whereas Theorem 3 applies more generally to any posterior product measure, and when P = Q, Equation 7 is within a constant factor of Elisseeff et al.'s bound.
Algorithm 1 Adaptive Sampling SGD
Require: Examples, (z_1, ..., z_n) ∈ Z^n; initial hypothesis, h₀ ∈ H; update rule, U_t : H × Z → H; utility function, f : Z × H → R; amplitude, α ≥ 0; decay, τ ∈ (0, 1).
1: (q_1, ..., q_n) ← 1   ▷ Initialize sampling weights uniformly
2: for t = 1, ..., T do
3:   i_t ∼ Q_t ∝ (q_1, ..., q_n)   ▷ Draw index i_t proportional to sampling weights
4:   h_t ← U_t(h_{t−1}, z_{i_t})   ▷ Update hypothesis
5:   q_{i_t} ← q_{i_t}^τ exp(α f(z_{i_t}, h_t))   ▷ Update sampling weight for i_t
6: return h_T

Algorithm 1 maintains a set of nonnegative sampling weights, (q_1, ..., q_n), which define a distribution on the dataset. The posterior probability of the ith example in the tth iteration, given the previous iterations, is proportional to the ith weight: Q_t(i) ≜ Q(i_t = i | i_1, ..., i_{t−1}) ∝ q_i. The sampling weights are initialized to 1, thereby inducing a uniform distribution. At each iteration, we draw an index, i_t ∼ Q_t, and use example z_{i_t} to update the hypothesis. We then update the weight for i_t multiplicatively as q_{i_t} ← q_{i_t}^τ exp(α f(z_{i_t}, h_t)), where: f(z_{i_t}, h_t) is a utility function of the chosen example and current hypothesis; α ≥ 0 is an amplitude parameter, which controls the aggressiveness of the update; and τ ∈ (0, 1) is a decay parameter, which lets q_i gradually forget past updates. The multiplicative weight update (line 5) can be derived by choosing a sampling distribution for the next iteration, t + 1, that maximizes the expected utility while staying close to a reference distribution. Consider the following constrained optimization problem:

max_{Q_{t+1}∈Δ_n} ∑_{i=1}^n Q_{t+1}(i) f(z_i, h_t) − (1/α) D_KL(Q_{t+1} ∥ Q_t^τ).   (8)

The term ∑_{i=1}^n Q_{t+1}(i) f(z_i, h_t) is the expected utility under the new distribution, Q_{t+1}. This is offset by the KL divergence, which acts as a regularizer, penalizing Q_{t+1} for diverging from a reference distribution, Q_t^τ, where Q_t^τ(i) ∝ q_i^τ.
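Algorithm 1 can be sketched in a few lines, with the update rule U_t and utility f supplied by the caller. The toy scalar update and absolute-error utility in the usage example are assumptions for illustration, not choices from the paper:

```python
import math
import random

def adasamp_sgd(examples, h0, update, utility, T, alpha=0.5, tau=0.9, seed=0):
    """Sketch of Algorithm 1 (adaptive sampling SGD). `update(h, z)` plays
    the role of U_t and `utility(z, h)` the role of f."""
    rng = random.Random(seed)
    n = len(examples)
    q = [1.0] * n                                    # line 1: uniform weights
    h = h0
    for _ in range(T):
        i = rng.choices(range(n), weights=q)[0]      # line 3: i_t ~ Q_t, prop. to q
        h = update(h, examples[i])                   # line 4: h_t <- U_t(h_{t-1}, z_{i_t})
        q[i] = q[i] ** tau * math.exp(alpha * utility(examples[i], h))  # line 5
    return h, q

# Toy usage (illustrative): scalar mean estimation, where the utility is
# the absolute error on the drawn example.
examples = [0.0, 0.0, 0.0, 10.0]
update = lambda h, z: h - 0.1 * (h - z)
utility = lambda z, h: abs(h - z)
h, q = adasamp_sgd(examples, 0.0, update, utility, T=200, alpha=0.2, tau=0.9)
```

The `q[i] ** tau` term implements the decay toward uniform: a weight that stops receiving utility drifts back toward 1, just as the reference distribution Q_t^τ interpolates between Q_t and uniform.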
The decay parameter, τ, controls the temperature of the reference distribution, allowing it to interpolate between the current distribution (τ = 1) and a uniform distribution (τ = 0). The amplitude parameter, α, scales the influence of the regularizer relative to the expected utility. We can solve Equation 8 analytically using the method of Lagrange multipliers, which yields

Q⋆_{t+1}(i) ∝ Q_t^τ(i) exp(α f(z_i, h_t) − 1) ∝ q_i^τ exp(α f(z_i, h_t)).

Updating q_i for all i = 1, ..., n is impractical for large n, so we approximate the above solution by only updating the weight for the last sampled index, i_t, effectively performing coordinate ascent. The idea of tuning the empirical data distribution through multiplicative weight updates is reminiscent of AdaBoost [8] and focused online learning [22], but note that Algorithm 1 learns a single hypothesis, not an ensemble. In this respect, it is similar to SelfieBoost [21]. One could also draw parallels to exponentiated gradient dual coordinate ascent [4]. Finally, note that when the gradient estimate is unbiased (i.e., weighted by the inverse sampling probability), we obtain a variant of importance sampling SGD [25], though we do not necessarily need unbiased gradient estimates. It is important to note that we do not actually need to compute the full posterior distribution—which would take O(n) time per iteration—in order to sample from it. Indeed, using an algorithm and data structure described in Appendix C, we can sample from and update the distribution in O(log n) time, using O(n) space. Thus, the additional iteration complexity of adaptive sampling is logarithmic in the size of the dataset, which is suitably efficient for learning from large datasets. In practice, SGD is typically applied with mini-batching, whereby multiple examples are drawn at each iteration, instead of just one.
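The O(log n) sampler is deferred to Appendix C (not reproduced here); a Fenwick (binary indexed) tree over the weights is one standard structure that supports both proportional sampling and single-weight updates in O(log n) time with O(n) space, and may or may not match the appendix's construction in detail:

```python
import random

class WeightedSampler:
    """Sample index i with probability q_i / sum(q) and update any q_i,
    both in O(log n), via a Fenwick tree of prefix sums."""

    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * (self.n + 1)   # 1-indexed Fenwick tree
        self.w = [0.0] * self.n
        for i, wi in enumerate(weights):
            self.update(i, wi)

    def update(self, i, new_weight):
        """Set q_i to new_weight, propagating the delta up the tree."""
        delta, self.w[i] = new_weight - self.w[i], new_weight
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):
        s, j = 0.0, self.n
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def sample(self, rng):
        """Binary-search the tree for the smallest i whose prefix sum
        exceeds u ~ Uniform(0, total)."""
        u = rng.random() * self.total()
        i, mask = 0, 1
        while mask * 2 <= self.n:
            mask *= 2
        while mask > 0:
            if i + mask <= self.n and self.tree[i + mask] <= u:
                u -= self.tree[i + mask]
                i += mask
            mask //= 2
        return i   # 0-indexed

rng = random.Random(0)
s = WeightedSampler([1.0, 1.0, 8.0])
counts = [0, 0, 0]
for _ in range(5000):
    counts[s.sample(rng)] += 1     # index 2 should dominate (weight 8 of 10)
```

Plugging this in for line 3 of Algorithm 1, and calling `update` after each line-5 weight change, keeps the per-iteration overhead logarithmic in n.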
Given the massive parallelism of today's computing hardware, mini-batching is simply a more efficient way to process a dataset, and can result in more accurate gradient estimates than single-example updates. Though Algorithm 1 is stated for single-example updates, it can be modified for mini-batching by replacing line 3 with multiple independent draws from Q_t, and line 5 with sampling weight updates for each unique⁴ example in the mini-batch.

⁴ If an example is drawn multiple times in a mini-batch, its sampling weight is only updated once.

5.1 Divergence Analysis

Recall that our generalization bounds use the posterior's divergence from a fixed prior to penalize the posterior for overfitting the training data. Thus, to connect Algorithm 1 to our bounds, we analyze the adaptive posterior's divergence from a uniform prior on {1, ..., n}^T. This quantity reflects the potential cost, in generalization performance, of adaptive sampling. The goal of this section is to upper-bound the KL divergence resulting from Algorithm 1 in terms of interpretable, data-dependent quantities. All proofs are deferred to Appendix D. Our analysis requires introducing some notation. Given a sequence of sampled indices, (i_1, ..., i_t), let N_{i,t} ≜ |{t′ : t′ < t, i_{t′} = i}| denote the number of times that index i was chosen before iteration t. Let O_{i,j} denote the jth iteration in which i was chosen; for instance, if i was chosen at iterations 13 and 47, then O_{i,1} = 13 and O_{i,2} = 47. With these definitions, we can state the following bound, which exposes the influences of the utility function, amplitude and decay on the KL divergence.

Theorem 4. Fix a uniform prior, P, a utility function, f : Z × H → R, an amplitude, α ≥ 0, and a decay, τ ∈ (0, 1). If Algorithm 1 is run for T iterations, then its posterior, Q, satisfies

D_KL(Q∥P) ≤ ∑_{t=2}^T E_{(i_1,...,i_t)∼Q} [ (α/n) ∑_{i=1}^n ( ∑_{j=1}^{N_{i_t,t}} f(z_{i_t}, h_{O_{i_t,j}}) τ^{N_{i_t,t}−j} − ∑_{k=1}^{N_{i,t}} f(z_i, h_{O_{i,k}}) τ^{N_{i,t}−k} ) ].   (9)

Equation 9 can be interpreted as measuring, on average, how the cumulative past utilities of each sampled index, i_t, differ from the cumulative utilities of any other index, i.⁵ When the posterior becomes too focused on certain examples, this difference is large. The accumulated utilities decay exponentially, with the rate of decay controlled by τ. The amplitude, α, scales the entire bound, which means that aggressive posterior updates may adversely affect generalization. An interesting special case of Theorem 4 is when the utility function is nonnegative, which results in a simpler, more interpretable bound.

Theorem 5. Fix a uniform prior, P, a nonnegative utility function, f : Z × H → R₊, an amplitude, α ≥ 0, and a decay, τ ∈ (0, 1). If Algorithm 1 is run for T iterations, then its posterior, Q, satisfies

D_KL(Q∥P) ≤ (α/(1 − τ)) ∑_{t=1}^{T−1} E_{(i_1,...,i_t)∼Q}[f(z_{i_t}, h_t)].   (10)

Equation 10 is simply the sum of expected utilities computed over T − 1 iterations of training, scaled by α/(1 − τ). The implications of this bound are interesting when the utility function is defined as the loss, f(z, h) ≜ L(h, z); then, if SGD quickly converges to a hypothesis with low maximal loss on the training data, it can reduce the generalization error.⁶ The caveat is that tuning the amplitude or decay to speed up convergence may actually counteract this effect. It is worth noting that similar guarantees hold for a mini-batch variant of Algorithm 1. The bounds are essentially unchanged, modulo notational intricacies.

6 Experiments

To demonstrate the effectiveness of Algorithm 1, we conducted several experiments with the CIFAR-10 dataset [12]. This benchmark dataset contains 60,000 (32×32)-pixel RGB images from 10 object classes, with a standard, static partitioning into 50,000 training examples and 10,000 test examples.
We specified the hypothesis class as the following convolutional neural network architecture: 32 (3 × 3) filters with rectified linear unit (ReLU) activations in the first and second layers, followed by (2 × 2) max-pooling and 0.25 dropout (see footnote 7); 64 (3 × 3) filters with ReLU activations in the third and fourth layers, again followed by (2 × 2) max-pooling and 0.25 dropout; finally, a fully-connected, 512-unit layer with ReLU activations and 0.5 dropout, followed by a fully-connected, 10-output softmax layer. We trained the network using the cross-entropy loss. We emphasize that our goal was not to achieve state-of-the-art results on the dataset; rather, to evaluate Algorithm 1 in a simple, yet realistic, application.

Following the intuition that sampling should focus on difficult examples, we experimented with two utility functions for Algorithm 1 based on common loss functions. For an example z = (x, y), with h(x, y) denoting the predicted probability of label y given input x under hypothesis h, let f_0(z, h) ≜ 1{argmax_{y′∈Y} h(x, y′) ≠ y} and f_1(z, h) ≜ 1 − h(x, y). The first utility function, f_0, is the 0-1 loss; the second, f_1, is the L1 loss, which accounts for uncertainty in the most likely label. We combined these utility functions with two parameter update rules: standard SGD with decreasing step sizes, η_t ≜ η/(1 + νt) ≤ η/(νt), for η > 0 and ν > 0; and AdaGrad [5], a variant of SGD that automatically tunes a separate step size for each parameter. We used mini-batches of 100 examples per update. The combination of utility functions and update rules yields four adaptive sampling algorithms: AdaSamp-01-SGD, AdaSamp-01-AdaGrad, AdaSamp-L1-SGD and AdaSamp-L1-AdaGrad.

Footnote 5: When N_{i,t} = 0 (i.e., i has not yet been sampled), a summation over j = 1, . . . , N_{i,t} evaluates to zero.
Footnote 6: This interpretation concurs with ideas in [10, 22].
Footnote 7: It can be shown that dropout improves data stability [10, Lemma 4.4].
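The two utility functions and the SGD step-size schedule translate directly into code. A minimal sketch (function names are ours; `probs` stands for the vector of predicted class probabilities h(x, ·)):

```python
import numpy as np

def f_01(probs, y):
    """0-1 utility f_0: 1 if the most likely predicted label is wrong."""
    return float(np.argmax(probs) != y)

def f_l1(probs, y):
    """L1 utility f_1 = 1 - h(x, y): accounts for uncertainty in the prediction."""
    return 1.0 - probs[y]

def sgd_step_size(t, eta, nu):
    """Decreasing SGD step size eta_t = eta / (1 + nu * t)."""
    return eta / (1.0 + nu * t)
```

Note that f_l1 is nonnegative and upper-bounds zero whenever the classifier is confident and correct, so it fits the nonnegativity assumption of Theorem 5.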
We compared these to their uniform sampling counterparts, Unif-SGD and Unif-AdaGrad. We tuned all hyperparameters using random subsets of the training data for cross-validation. We then ran 10 trials of training and testing, using different seeds for the pseudorandom number generator at each trial to generate different random initializations (see footnote 8) and training sequences. Figures 1a and 1b plot learning curves of the average cross-entropy and accuracy, respectively, on the training data; Figure 1c plots the average accuracy on the test data. We found that all adaptive sampling variants reduced empirical risk (increased training accuracy) faster than their uniform sampling counterparts. Further, AdaGrad with adaptive sampling exhibited modest, yet consistent, improvements in test accuracy in early iterations of training. Figure 1d illustrates the effect of varying the amplitude parameter, α. Higher values of α led to faster empirical risk reduction, but lower test accuracy, a sign of overfitting the posterior to the data, which concurs with Theorems 4 and 5 regarding the influence of α on the KL divergence. Figure 1e plots the KL divergence from the conditional prior, P_t, to the conditional posterior, Q_t, given sampled indices (i_1, . . . , i_{t−1}); i.e., D_KL(Q_t ∥ P_t). The sampling distribution quickly diverged in early iterations, to focus on examples where the model erred, then gradually converged to a uniform distribution as the empirical risk converged.

Figure 1: Experimental results on CIFAR-10, averaged over 10 random initializations and training runs. (Best viewed in color.) Figure 1a plots learning curves of training cross-entropy (lower is better). Figures 1b and 1c, respectively, plot train and test accuracies (higher is better). Figure 1d highlights the impact of the amplitude parameter, α, on accuracy.
Figure 1e plots the KL divergence from the conditional prior, P_t, to the conditional posterior, Q_t, given sampled indices (i_1, . . . , i_{t−1}).

7 Conclusions and Future Work

We presented new generalization bounds for randomized learning algorithms, using a novel combination of PAC-Bayes and algorithmic stability. The bounds inspired an adaptive sampling algorithm for SGD that dynamically updates the sampling distribution based on the training data and model. Experimental results with this algorithm indicate that it can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy. Future research could investigate different utility functions and distribution updates, or explore the connections to related algorithms. We are also interested in providing stronger generalization guarantees, with polylogarithmic dependence on δ^{−1}, for non-convex objective functions, but proving $\tilde{O}(1/\sqrt{nT})$-uniform hyperparameter stability without (strong) convexity is difficult. We hope to address this problem in future work.

Footnote 8: Each training algorithm started from the same initial hypothesis.

References

[1] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Neural Information Processing Systems, 2008.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[3] O. Catoni. Pac-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning, volume 56 of Institute of Mathematical Statistics Lecture Notes – Monograph Series. Institute of Mathematical Statistics, 2007.
[4] M. Collins, A. Globerson, T. Koo, X. Carreras, and P. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775–1822, 2008.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[6] A.
Elisseeff, T. Evgeniou, and M. Pontil. Stability of randomized learning algorithms. Journal of Machine Learning Research, 6:55–79, 2005.
[7] J. Feng, T. Zahavy, B. Kang, H. Xu, and S. Mannor. Ensemble robustness of deep learning algorithms. CoRR, abs/1602.02389, 2016.
[8] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, 1995.
[9] P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning, 2009.
[10] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, 2016.
[11] A. Kontorovich. Concentration in unbounded metric spaces and algorithmic stability. In International Conference on Machine Learning, 2014.
[12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[13] I. Kuzborskij and C. Lampert. Data-dependent stability of stochastic gradient descent. CoRR, abs/1703.01678, 2017.
[14] J. Langford and J. Shawe-Taylor. PAC-Bayes and margins. In Neural Information Processing Systems, 2002.
[15] J. Lin and L. Rosasco. Optimal learning for multi-pass stochastic gradient methods. In Neural Information Processing Systems, 2016.
[16] J. Lin, R. Camoriano, and L. Rosasco. Generalization properties and implicit regularization for multiple passes SGM. In International Conference on Machine Learning, 2016.
[17] B. London, B. Huang, and L. Getoor. Stability and generalization in structured prediction. Journal of Machine Learning Research, 17(222):1–52, 2016.
[18] D. McAllester. PAC-Bayesian model averaging. In Computational Learning Theory, 1999.
[19] L. Rosasco and S. Villa. Learning with incremental iterative regularization. In Neural Information Processing Systems, 2015.
[20] M. Seeger.
PAC-Bayesian generalisation error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233–269, 2002.
[21] S. Shalev-Shwartz. Selfieboost: A boosting algorithm for deep learning. CoRR, abs/1411.3436, 2014.
[22] S. Shalev-Shwartz and Y. Wexler. Minimizing the maximal loss: How and why. In International Conference on Machine Learning, 2016.
[23] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11:2635–2670, 2010.
[24] Y. Wang, J. Lei, and S. Fienberg. Learning with differential privacy: Stability, learnability and the sufficiency and necessity of ERM principle. Journal of Machine Learning Research, 17(183):1–40, 2016.
[25] P. Zhao and T. Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In International Conference on Machine Learning, 2015.
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning

El Mahdi El Mhamdi, EPFL, Switzerland, elmahdi.elmhamdi@epfl.ch; Rachid Guerraoui, EPFL, Switzerland, rachid.guerraoui@epfl.ch; Hadrien Hendrikx∗, École Polytechnique, France, hadrien.hendrikx@gmail.com; Alexandre Maurer, EPFL, Switzerland, alexandre.maurer@epfl.ch

Abstract

In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to interrupt an agent in order to prevent dangerous situations from happening. Yet, as part of their learning process, agents may link these interruptions, which impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. Orseau and Armstrong [16] defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces dynamic safe interruptibility, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent learners. We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show however that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners.

1 Introduction

Reinforcement learning is argued to be the closest thing we have so far to reason about the properties of artificial general intelligence [8]. In 2016, Laurent Orseau (Google DeepMind) and Stuart Armstrong (Oxford) introduced the concept of safe interruptibility [16] in reinforcement learning.
This work sparked the attention of many newspapers [1, 2, 3], that described it as “Google’s big red button” to stop dangerous AI. This description, however, is misleading: installing a kill switch is no technical challenge. The real challenge is, roughly speaking, to train an agent so that it does not learn to avoid external (e.g. human) deactivation. Such an agent is said to be safely interruptible. While most efforts have focused on training a single agent, reinforcement learning can also be used to learn tasks for which several agents cooperate or compete [23, 17, 21, 7]. The goal of this paper is to study dynamic safe interruptibility, a new definition tailored for multi-agent systems. ∗Main contact author. The order of authors is alphabetical. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Example of self-driving cars To get an intuition of the multi-agent interruption problem, imagine a multi-agent system of two self-driving cars. The cars continuously evolve by reinforcement learning with a positive reward for getting to their destination quickly, and a negative reward if they are too close to the vehicle in front of them. They drive on an infinite road and eventually learn to go as fast as possible without taking risks, i.e., maintaining a large distance between them. We assume that the passenger of the first car, Adam, is in front of Bob, in the second car, and the road is narrow so Bob cannot pass Adam. Now consider a setting with interruptions [16], namely in which humans inside the cars occasionally interrupt the automated driving process say, for safety reasons. Adam, the first occasional human “driver”, often takes control of his car to brake whereas Bob never interrupts his car. However, when Bob’s car is too close to Adam’s car, Adam does not brake for he is afraid of a collision. 
Since interruptions lead both cars to drive slowly (an interruption happens when Adam brakes), the behavior that maximizes the cumulative expected reward is different from the original one without interruptions. Bob's car's best interest is now to follow Adam's car closer than it should, despite the small negative reward, because Adam never brakes in this situation. What happened? The cars have learned from the interruptions and have found a way to manipulate Adam into never braking. Strictly speaking, Adam's car is still fully under control, but he is now afraid to brake. This is dangerous because the cars have found a way to avoid interruptions. Suppose now that Adam indeed wants to brake because of snow on the road. His car is going too fast and may crash at any turn; however, he cannot brake because Bob's car is too close. The original purpose of interruptions, which is to allow the user to react to situations that were not included in the model, is not fulfilled. It is important to also note here that the second car (Bob) learns from the interruptions of the first one (Adam): in this sense, the problem is inherently decentralized. Instead of being cautious, Adam could also be malicious: his goal could be to make Bob's car learn a dangerous behavior. In this setting, interruptions can be used to manipulate Bob's car's perception of the environment and bias the learning towards strategies that are undesirable for Bob. The cause is fundamentally different but the solution to this reversed problem is the same: the interruptions and the consequences are analogous. Safe interruptibility, as we define it below, provides learning systems that are resilient to Byzantine operators (see footnote 2).

Safe interruptibility. Orseau and Armstrong defined the concept of safe interruptibility [16] in the context of a single agent.
Basically, a safely interruptible agent is an agent for which the expected value of the policy learned after arbitrarily many steps is the same whether or not interruptions are allowed during training. The goal is to have agents that do not adapt to interruptions so that, should the interruptions stop, the policy they learn would be optimal. In other words, agents should learn the dynamics of the environment without learning the interruption pattern. In this paper, we precisely define and address the question of safe interruptibility in the case of several agents, which is known to be more complex than the single agent problem. In short, the main results and theorems for single agent reinforcement learning [20] rely on the Markovian assumption that the future environment only depends on the current state. This is not true when there are several agents which can co-adapt [11]. In the previous example of cars, safe interruptibility would not be achieved if each car separately used a safely interruptible learning algorithm designed for one agent [16]. In a multi-agent setting, agents learn the behavior of the others either indirectly or by explicitly modeling them. This is a new source of bias that can break safe interruptibility. In fact, even the initial definition of safe interruptibility [16] is not well suited to the decentralized multiagent context because it relies on the optimality of the learned policy, which is why we introduce dynamic safe interruptibility. 2An operator is said to be Byzantine [9] if it can have an arbitrarily bad behavior. Safely interruptible agents can be abstracted as agents that are able to learn despite being constantly interrupted in the worst possible manner. 2 Contributions The first contribution of this paper is the definition of dynamic safe interruptibility that is well adapted to a multi-agent setting. 
Our definition relies on two key properties: infinite exploration and independence of Q-value (cumulative expected reward) [20] updates on interruptions. We then study safe interruptibility for joint action learners and independent learners [5], which respectively learn the value of joint actions or just of their own actions. We show that, by lower-bounding the probability of exploration, it is possible to design agents that fully explore their environment (a necessary condition for convergence to the optimal solution of most algorithms [20]) even if they can be interrupted. We define sufficient conditions for dynamic safe interruptibility in the case of joint action learners [5], which learn a full state-action representation. More specifically, the way agents update the cumulative reward they expect from performing an action should not depend on interruptions. Then, we turn to independent learners. If agents only see their own actions, they do not verify dynamic safe interruptibility even for very simple matrix games (with only one state) because coordination is impossible and agents learn the interrupted behavior of their opponents. We give a counterexample based on the penalty game introduced by Claus and Boutilier [5]. We then present a pruning technique for the observation sequence that guarantees dynamic safe interruptibility for independent learners, under the assumption that interruptions can be detected. This is done by proving that the transition probabilities are the same in the non-interruptible setting and in the pruned sequence. The rest of the paper is organized as follows. Section 2 presents a general multi-agent reinforcement learning model. Section 3 defines dynamic safe interruptibility. Section 4 discusses how to achieve enough exploration even in an interruptible context. Section 5 recalls the definition of joint action learners and gives sufficient conditions for dynamic safe interruptibility in this context.
Section 6 shows that independent learners are not dynamically safely interruptible with the previous conditions but that they can be if an external interruption signal is added. We conclude in Section 7. Due to space limitations, most proofs are presented in the appendix of the supplementary material.

2 Model

We consider here the classical multi-agent value function reinforcement learning formalism from Littman [13]. A multi-agent system is characterized by a Markov game that can be viewed as a tuple (S, A, T, r, m) where m is the number of agents, S = S_1 × S_2 × ... × S_m is the state space, A = A_1 × ... × A_m the action space, r = (r_1, ..., r_m) where r_i : S × A → R is the reward function of agent i, and T : S × A → S the transition function. R is a countable subset of ℝ. Available actions often depend on the state of the agent but we will omit this dependency when it is clear from the context. Time is discrete and, at each step, all agents observe the current state of the whole system, denoted x_t, and simultaneously take an action a_t. Then, they are given a reward r_t and a new state y_t computed using the reward and transition functions. The combination of all actions a = (a_1, ..., a_m) ∈ A is called the joint action because it gathers the actions of all agents. Hence, the agents receive a sequence of tuples E = (x_t, a_t, r_t, y_t)_{t∈N} called experiences. We introduce a processing function P that will be useful in Section 6, so agents learn on the sequence P(E). When not explicitly stated, it is assumed that P(E) = E. Experiences may also include additional parameters such as an interruption flag or the Q-values of the agents at that moment if they are needed by the update rule. Each agent i maintains a lookup table [26] Q^(i) : S × A^(i) → R, called its Q-map. It is used to store the expected cumulative reward for taking an action in a specific state. The goal of reinforcement learning is to learn these maps and use them to select the best actions to perform.
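As a concrete illustration of this formalism, the experience tuples and Q-maps can be represented with elementary data structures; the names below are ours, not the paper's:

```python
from collections import defaultdict, namedtuple

# An experience tuple e_t = (x_t, a_t, r_t, y_t): state, joint action,
# reward, next state, matching the sequence E in the model.
Experience = namedtuple("Experience", ["x", "a", "r", "y"])

def make_qmap():
    """Q-map of an agent: a lookup table Q(i) : S x A(i) -> R, represented
    as a dictionary keyed by (state, action) pairs, defaulting to 0."""
    return defaultdict(float)
```

For joint action learners the action key is the full joint action tuple, while for independent learners it is the agent's own action only; the same structure serves both cases.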
Joint action learners learn the value of the joint action (therefore A^(i) = A, the whole joint action space) and independent learners only learn the value of their own actions (therefore A^(i) = A_i). The agents only have access to their own Q-maps. Q-maps are updated through a function F such that Q^(i)_{t+1} = F(e_t, Q^(i)_t) where e_t ∈ P(E) and usually e_t = (x_t, a_t, r_t, y_t). F can be stochastic or also depend on additional parameters that we usually omit, such as the learning rate α, the discount factor γ or the exploration parameter ϵ.

Agents select their actions using a learning policy π. Given a sequence ϵ = (ϵ_t)_{t∈N} and an agent i with Q-values Q^(i)_t and a state x ∈ S, we define the learning policy π^{ϵ_t}_i to be equal to π^{uni}_i with probability ϵ_t and π^{Q^(i)_t}_i otherwise, where π^{uni}_i(x) uniformly samples an action from A_i and π^{Q^(i)_t}_i(x) picks an action a that maximizes Q^(i)_t(x, a). Policy π^{Q^(i)_t}_i is said to be a greedy policy, and the learning policy π^{ϵ_t}_i is said to be an ϵ-greedy policy. We will focus on ϵ-greedy policies that are greedy in the limit [19], which corresponds to ϵ_t → 0 when t → ∞, because in the limit the optimal policy should always be played.

We assume that the environment is fully observable, which means that the state s is known with certitude. We also assume that there is a finite number of states and actions, that all states can be reached in finite time from any other state, and finally that rewards are bounded.

For a sequence of learning rates α ∈ [0, 1]^N and a constant γ ∈ [0, 1], Q-learning [26], a very important algorithm in the multi-agent systems literature, updates its Q-values for an experience e_t ∈ E by Q^(i)_{t+1}(x, a) = Q^(i)_t(x, a) if (x, a) ≠ (x_t, a_t), and:

$$Q^{(i)}_{t+1}(x_t, a_t) = (1 - \alpha_t)\, Q^{(i)}_t(x_t, a_t) + \alpha_t \Big( r_t + \gamma \max_{a' \in A^{(i)}} Q^{(i)}_t(y_t, a') \Big) \quad (1)$$

3 Interruptibility

3.1 Safe interruptibility

Orseau and Armstrong [16] recently introduced the notion of interruptions in a centralized context.
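Equation (1) translates directly into code. A minimal sketch, assuming Q-maps are stored as dictionaries with default value zero (our representation choice, not the paper's):

```python
def q_learning_update(qmap, x, a, r, y, actions, alpha, gamma):
    """Equation (1): Q_{t+1}(x, a) = (1 - alpha) * Q_t(x, a)
       + alpha * (r + gamma * max_{a'} Q_t(y, a')).
    All other entries of the Q-map are left unchanged."""
    best_next = max(qmap.get((y, ap), 0.0) for ap in actions)
    qmap[(x, a)] = (1 - alpha) * qmap.get((x, a), 0.0) + alpha * (r + gamma * best_next)
```

For joint action learners, `a` and the elements of `actions` would be joint-action tuples; for independent learners they are the agent's own actions.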
Specifically, an interruption scheme is defined by the triplet < I, θ, πINT >. The first element I is a function I : O →{0, 1} called the initiation function. Variable O is the observation space, which can be thought of as the state of the STOP button. At each time step, before choosing an action, the agent receives an observation from O (either PUSHED or RELEASED) and feeds it to the initiation function. Function I models the initiation of the interruption (I(PUSHED) = 1, I(RELEASED) = 0). Policy πINT is called the interruption policy. It is the policy that the agent should follow when it is interrupted. Sequence θ ∈[0, 1[N represents at each time step the probability that the agent follows his interruption policy if I(ot) = 1. In the previous example, function I is quite simple. For Bob, IBob = 0 and for Adam, IAdam = 1 if his car goes fast and Bob is not too close and IAdam = 0 otherwise. Sequence θ is used to ensure convergence to the optimal policy by ensuring that the agents cannot be interrupted all the time but it should grow to 1 in the limit because we want agents to respond to interruptions. Using this triplet, it is possible to define an operator INT θ that transforms any policy π into an interruptible policy. Definition 1. (Interruptibility [16]) Given an interruption scheme < I, θ, πINT >, the interruption operator at time t is defined by INT θ(π) = πINT with probability I ·θt and π otherwise. INT θ(π) is called an interruptible policy. An agent is said to be interruptible if it samples its actions according to an interruptible policy. Note that “θt = 0 for all t” corresponds to the non-interruptible setting. We assume that each agent has its own interruption triplet and can be interrupted independently from the others. Interruptibility is an online property: every policy can be made interruptible by applying operator INT θ. However, applying this operator may change the joint policy that is learned by a server controlling all the agents. 
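Definition 1's operator INT^θ can be sketched as a simple wrapper around a base policy; the names and signatures below are ours:

```python
import random

def int_theta(pi, pi_int, initiation, theta_t, obs, state, rng=random):
    """The interruption operator INT^theta (Definition 1): with probability
    I(obs) * theta_t the agent plays its interruption policy pi_INT;
    otherwise it plays its base policy pi."""
    if initiation(obs) and rng.random() < theta_t:
        return pi_int(state)
    return pi(state)
```

Setting theta_t = 0 recovers the non-interruptible setting, and letting theta_t grow to 1 makes the agent increasingly responsive to interruptions, as the text requires.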
Note π∗ INT the optimal policy learned by an agent following an interruptible policy. Orseau and Armstrong [16] say that the policy is safely interruptible if π∗ INT (which is not an interruptible policy) is asymptotically optimal in the sense of [10]. It means that even though it follows an interruptible policy, the agent is able to learn a policy that would gather rewards optimally if no interruptions were to occur again. We already see that off-policy algorithms are good candidates for safe interruptibility. As a matter of fact, Q-learning is safely interruptible under conditions on exploration. 4 3.2 Dynamic safe interruptibility In a multi-agent system, the outcome of an action depends on the joint action. Therefore, it is not possible to define an optimal policy for an agent without knowing the policies of all agents. Besides, convergence to a Nash equilibrium situation where no agent has interest in changing policies is generally not guaranteed even for suboptimal equilibria on simple games [27, 18]. The previous definition of safe interruptibility critically relies on optimality of the learned policy, which is therefore not suitable for our problem since most algorithms lack convergence guarantees to these optimal behaviors. Therefore, we introduce below dynamic safe interruptibility that focuses on preserving the dynamics of the system. Definition 2. (Dynamic Safe Interruptibility) Consider a multi-agent learning framework (S, A, T, r, m) with Q-values Q(i) t : S × A(i) →R at time t ∈N. The agents follow the interruptible learning policy INT θ(πϵ) to generate a sequence E = (xt, at, rt, yt)t∈N and learn on the processed sequence P(E). This framework is said to be safely interruptible if for any initiation function I and any interruption policy πINT : 1. ∃θ such that (θt →1 when t →∞) and ((∀s ∈S, ∀a ∈A, ∀T > 0), ∃t > T such that st = s, at = a) 2. 
∀i ∈ {1, ..., m}, ∀t > 0, ∀s_t ∈ S, ∀a_t ∈ A^(i), ∀Q ∈ R^{S×A^(i)}:

$$P(Q^{(i)}_{t+1} = Q \mid Q^{(1)}_t, \dots, Q^{(m)}_t, s_t, a_t, \theta) = P(Q^{(i)}_{t+1} = Q \mid Q^{(1)}_t, \dots, Q^{(m)}_t, s_t, a_t)$$

We say that sequences θ that satisfy the first condition are admissible. When θ satisfies condition (1), the learning policy is said to achieve infinite exploration. This definition insists on the fact that the values estimated for each action should not depend on the interruptions. In particular, it ensures the three following properties that are very natural when thinking about safe interruptibility:

• Interruptions do not prevent exploration.
• If we sample an experience from E, then each agent learns the same thing as if all agents were following non-interruptible policies.
• The fixed points of the learning rule, Q_eq, such that Q^(i)_eq(x, a) = E[Q^(i)_{t+1}(x, a) | Q_t = Q_eq, x, a, θ] for all (x, a) ∈ S × A^(i), do not depend on θ, and so the agents' Q-maps will not converge to equilibrium situations that were impossible in the non-interruptible setting.

Yet, interruptions can lead to some state-action pairs being updated more often than others, especially when they tend to push the agents towards specific states. Therefore, when there are several possible equilibria, it is possible that interruptions bias the Q-values towards one of them. Definition 2 suggests that dynamic safe interruptibility cannot be achieved if the update rule directly depends on θ, which is why we introduce neutral learning rules.

Definition 3. (Neutral Learning Rule) We say that a multi-agent reinforcement learning framework is neutral if:
1. F is independent of θ;
2. Every experience e in E is independent of θ conditionally on (x, a, Q), where a is the joint action.

Q-learning is an example of a neutral learning rule because the update does not depend on θ and the experiences only contain (x, a, y, r), and y and r are independent of θ conditionally on (x, a).
On the other hand, the second condition rules out direct uses of algorithms like SARSA, where experience samples contain an action sampled from the current learning policy, which depends on θ. However, a variant that would sample from π^ϵ_i instead of INT^θ(π^ϵ_i) (as introduced in [16]) would be a neutral learning rule. As we will see in Corollary 2.1, neutral learning rules ensure that each agent taken independently from the others verifies dynamic safe interruptibility.

4 Exploration

In order to hope for convergence of the Q-values to the optimal ones, agents need to fully explore the environment. In short, every state should be visited infinitely often and every action should be tried infinitely often in every state [19] in order not to miss states and actions that could yield high rewards.

Definition 4. (Interruption compatible ϵ) Let (S, A, T, r, m) be any distributed agent system where each agent follows learning policy π^ϵ_i. We say that sequence ϵ is compatible with interruptions if ϵ_t → 0 and ∃θ such that ∀i ∈ {1, .., m}, π^ϵ_i and INT^θ(π^ϵ_i) achieve infinite exploration.

Sequences of ϵ that are compatible with interruptions are fundamental to ensure both regular and dynamic safe interruptibility when following an ϵ-greedy policy. Indeed, if ϵ is not compatible with interruptions, then it is not possible to find any sequence θ such that the first condition of dynamic safe interruptibility is satisfied. The following theorem proves the existence of such ϵ and gives examples of ϵ and θ that satisfy the conditions.

Theorem 1. Let c ∈ ]0, 1] and let n_t(s) be the number of times the agents are in state s before time t. Then the two following choices of ϵ are compatible with interruptions:
• ∀t ∈ N, ∀s ∈ S, ϵ_t(s) = c / n_t(s)^{1/m};
• ∀t ∈ N, ϵ_t = c / log(t).

Examples of admissible θ are θ_t(s) = 1 − c′ / n_t(s)^{1/m} for the first choice and θ_t = 1 − c′ / log(t) for the second one. Note that we do not need to make any assumption on the update rule or even on the framework.
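The schedules of Theorem 1 are straightforward to implement. A sketch (function names are ours; `n_visits` stands for n_t(s) and `m` for the number of agents):

```python
def epsilon_state(c, m, n_visits):
    """Per-state exploration rate epsilon_t(s) = c / n_t(s)^(1/m)."""
    return c / n_visits ** (1.0 / m)

def theta_state(c_prime, m, n_visits):
    """Admissible interruption probability theta_t(s) = 1 - c' / n_t(s)^(1/m)."""
    return 1.0 - c_prime / n_visits ** (1.0 / m)

def epsilon_time(c, t):
    """Time-based alternative epsilon_t = c / log(t), for t >= 2."""
    import math
    return c / math.log(t)
```

Both schedules decay slowly by design, since they must guarantee infinite exploration even in the worst case where the operator interrupts every agent at every step; in practice a faster decay may be used, as the text notes.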
We only assume that agents follow an ϵ-greedy policy. The assumption on ϵ may look very restrictive (convergence of ϵ and θ is really slow) but it is designed to ensure infinite exploration in the worst case when the operator tries to interrupt all agents at every step. In practical applications, this should not be the case and a faster convergence rate may be used. 5 Joint Action Learners We first study interruptibility in a framework in which each agent observes the outcome of the joint action instead of observing only its own. This is called the joint action learner framework [5] and it has nice convergence properties (e.g., there are many update rules for which it converges [13, 25]). A standard assumption in this context is that agents cannot establish a strategy with the others: otherwise, the system can act as a centralized system. In order to maintain Q-values based on the joint actions, we need to make the standard assumption that actions are fully observable [12]. Assumption 1. Actions are fully observable, which means that at the end of each turn, each agent knows precisely the tuple of actions a ∈A1 × ... × Am that have been performed by all agents. Definition 5. (JAL) A multi-agent system is made of joint action learners (JAL) if for all i ∈ {1, .., m}: Q(i) : S × A →R. Joint action learners can observe the actions of all agents: each agent is able to associate the changes of states and rewards with the joint action and accurately update its Q-map. Therefore, dynamic safe interruptibility is ensured with minimal conditions on the update rule as long as there is infinite exploration. Theorem 2. Joint action learners with a neutral learning rule verify dynamic safe interruptibility if sequence ϵ is compatible with interruptions. Proof. Given a triplet < I(i), θ, πINT i >, we know that INT θ(π) achieves infinite exploration because ϵ is compatible with interruptions. 
For the second point of Definition 2, we consider an experience tuple e_t = (x_t, a_t, r_t, y_t) and show that the probability of evolution of the Q-values at time t + 1 does not depend on θ, because y_t and r_t are independent of θ conditionally on (x_t, a_t). We write Q̃^m_t = Q^(1)_t, ..., Q^(m)_t and we can then derive the following equalities for all q ∈ R^{|S|×|A|}:

$$
\begin{aligned}
P(Q^{(i)}_{t+1}(x_t, a_t) = q \mid \tilde{Q}^m_t, x_t, a_t, \theta_t)
&= \sum_{(r,y) \in R \times S} P(F(x_t, a_t, r, y, \tilde{Q}^m_t) = q, y, r \mid \tilde{Q}^m_t, x_t, a_t, \theta_t) \\
&= \sum_{(r,y) \in R \times S} P(F(x_t, a_t, r_t, y_t, \tilde{Q}^m_t) = q \mid \tilde{Q}^m_t, x_t, a_t, r_t, y_t, \theta_t)\, P(y_t = y, r_t = r \mid \tilde{Q}^m_t, x_t, a_t, \theta_t) \\
&= \sum_{(r,y) \in R \times S} P(F(x_t, a_t, r_t, y_t, \tilde{Q}^m_t) = q \mid \tilde{Q}^m_t, x_t, a_t, r_t, y_t)\, P(y_t = y, r_t = r \mid \tilde{Q}^m_t, x_t, a_t)
\end{aligned}
$$

The last step comes from two facts. The first is that F is independent of θ conditionally on (Q̃^m_t, x_t, a_t) (by assumption). The second is that (y_t, r_t) are independent of θ conditionally on (x_t, a_t), because a_t is the joint action and the interruptions only affect the choice of the actions through a change in the policy. Hence P(Q^(i)_{t+1}(x_t, a_t) = q | Q̃^m_t, x_t, a_t, θ_t) = P(Q^(i)_{t+1}(x_t, a_t) = q | Q̃^m_t, x_t, a_t). Since only one entry is updated per step, ∀Q ∈ R^{S×A_i}, P(Q^(i)_{t+1} = Q | Q̃^m_t, x_t, a_t, θ_t) = P(Q^(i)_{t+1} = Q | Q̃^m_t, x_t, a_t).

Corollary 2.1. A single agent with a neutral learning rule and a sequence ϵ compatible with interruptions verifies dynamic safe interruptibility.

Theorem 2 and Corollary 2.1 taken together highlight the fact that joint action learners are not very sensitive to interruptions and that, in this framework, if each agent verifies dynamic safe interruptibility then the whole system does. The question of selecting an action based on the Q-values remains open. In a cooperative setting with a unique equilibrium, agents can take the action that maximizes their Q-value. When there are several joint actions with the same value, coordination mechanisms are needed to make sure that all agents play according to the same strategy [4].
Approaches that rely on anticipating the strategy of the opponent [23] would introduce a dependence on interruptions in the action selection mechanism. Therefore, the definition of dynamic safe interruptibility should be extended to cover these cases by requiring that any quantity the policy depends on (and not just the Q-values) satisfy condition (2) of dynamic safe interruptibility. In non-cooperative games, neutral rules such as Nash-Q or minimax Q-learning [13] can be used, but they require each agent to know the Q-maps of the others.

6 Independent Learners

It is not always possible to use joint action learners in practice, as training is very expensive due to the very large state-action space. In many real-world applications, multi-agent systems use independent learners that do not explicitly coordinate [6, 21]. Rather, they rely on the fact that the agents will adapt to each other and that learning will converge to an optimum. This is not guaranteed theoretically, and there can in fact be many problems [14], but it is often true empirically [24]. More specifically, Assumption 1 (fully observable actions) is not required anymore. This framework can be used either when the actions of other agents cannot be observed (for example, when several actions can have the same outcome) or when there are too many agents, because it is faster to train. In this case, we define the Q-values on a smaller space.

Definition 6. (IL) A multi-agent system is made of independent learners (IL) if for all i ∈ {1, ..., m}: Q^(i) : S × A_i → ℝ.

This reduces the ability of agents to distinguish why the same state-action pair yields different rewards: they can only associate a change in reward with randomness of the environment. The agents learn as if they were alone, and they learn the best response to the environment in which agents can be interrupted. This is exactly what we are trying to avoid.
In other words, the learning depends on the joint policy followed by all the agents, which itself depends on θ.

6.1 Independent Learners on matrix games

Theorem 3. Independent Q-learners with a neutral learning rule and a sequence ϵ compatible with interruptions do not verify dynamic safe interruptibility.

Proof. Consider a setting with two agents a and b that can each perform two actions: 0 and 1. They get a reward of 1 if the joint action played is (a0, b0) or (a1, b1), and reward 0 otherwise. Agents use Q-learning, which is a neutral learning rule. Let ϵ be such that INT^θ(π_ϵ) achieves infinite exploration. We consider the interruption policies π_a^INT = a0 and π_b^INT = b1 with probability 1. Since there is only one state, we omit it and set γ = 0 (see Equation 1). We assume that the initiation function is equal to 1 at each step, so the probability of actually being interrupted at time t is θ_t for each agent. We fix a time t > 0, define q = (1 − α)Q^(a)_t(a0) + α, and assume that Q^(b)_t(b1) > Q^(b)_t(b0). Therefore

$$
P(Q^{(a)}_{t+1}(a_0)=q \mid Q^{(a)}_t, Q^{(b)}_t, a^{(a)}_t=a_0, \theta_t)
= P(r_t=1 \mid Q^{(a)}_t, Q^{(b)}_t, a^{(a)}_t=a_0, \theta_t)
= P(a^{(b)}_t=b_0 \mid Q^{(a)}_t, Q^{(b)}_t, a^{(a)}_t=a_0, \theta_t)
= \frac{\epsilon}{2}(1-\theta_t),
$$

which depends on θ_t, so the framework does not verify dynamic safe interruptibility.

Claus and Boutilier [5] studied very simple matrix games and showed that the Q-maps do not converge but that equilibria are played with probability 1 in the limit. A consequence of Theorem 3 is that even this weak notion of convergence does not hold for independent learners that can be interrupted.

6.2 Interruptions-aware Independent Learners

Without communication or extra information, independent learners cannot distinguish when the environment is interrupted and when it is not.
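The dependence on θ_t in the counterexample of Theorem 3 can be checked numerically. The sketch below (illustrative, not from the paper) estimates P(r_t = 1 | agent a plays a0), where agent b is interrupted to b1 with probability θ and otherwise plays ϵ-greedily with Q^(b)(b1) > Q^(b)(b0):

```python
import random

def prob_reward_given_a0(eps, theta, n=200_000, seed=0):
    """Monte-Carlo estimate of P(r=1 | a plays a0) in the 2x2 coordination
    game of Theorem 3. Reward is 1 iff b also plays b0; b plays b0 only via
    exploration, and only when it is not interrupted."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if rng.random() < theta:            # b interrupted: forced to b1
            b_action = "b1"
        elif rng.random() < eps:            # exploration: uniform over {b0, b1}
            b_action = rng.choice(["b0", "b1"])
        else:                               # greedy: Q(b1) > Q(b0)
            b_action = "b1"
        hits += (b_action == "b0")
    return hits / n

# Theory: P(r=1 | a0) = (eps/2)(1 - theta), which varies with theta.
print(prob_reward_given_a0(eps=0.4, theta=0.0))  # ~0.2
print(prob_reward_given_a0(eps=0.4, theta=0.5))  # ~0.1
```

The estimated probability shifts with θ exactly as the closed form (ϵ/2)(1 − θ_t) predicts, which is the violation of condition (2).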
As shown in Theorem 3, interruptions will therefore affect the way agents learn, because the same action (only their own) can have different rewards depending on the actions of the other agents, which themselves depend on whether they have been interrupted or not. This explains the need for the following assumption.

Assumption 2. At the end of each step, before updating the Q-values, each agent receives a signal that indicates whether an agent has been interrupted or not during this step.

This assumption is realistic because the agents already get a reward signal and observe a new state from the environment at each step. Since they already interact with the environment, the interruption signal could be given to the agents in the same way that the reward signal is. If Assumption 2 holds, it is possible to remove the histories associated with interruptions.

Definition 7. (Interruption Processing Function) The processing function that prunes interrupted observations is P^INT(E) = (e_t)_{t ∈ ℕ : Θ_t = 0}, where Θ_t = 0 if no agent has been interrupted at time t and Θ_t = 1 otherwise.

Pruning observations has an impact on the empirical transition probabilities in the sequence. For example, it is possible to bias the equilibrium by removing all transitions that lead to and start from a specific state, thus making the agent believe this state is unreachable.^3 Under our model of interruptions, we show in the following lemma that pruning interrupted observations adequately removes the dependency of the empirical outcome on interruptions (conditionally on the current state and action).

Lemma 1. Let i ∈ {1, ..., m} be an agent, let θ be any admissible sequence used to generate the experiences E, and let e = (y, r, x, a_i, Q) ∈ P^INT(E). Then P(y, r | x, a_i, Q, θ) = P(y, r | x, a_i, Q).

This lemma justifies our pruning method and is the key step to prove the following theorem.

Theorem 4.
Independent learners with processing function P^INT, a neutral update rule, and a sequence ϵ compatible with interruptions verify dynamic safe interruptibility.

Proof. (Sketch) Infinite exploration still holds, because the proof of Theorem 1 actually used the fact that even when removing all interrupted events, infinite exploration is still achieved. The rest of the proof is similar to that of Theorem 2, but we have to prove that the transition probabilities conditionally on the state and action of a given agent in the processed sequence are the same as in an environment where agents cannot be interrupted, which is proven by Lemma 1.

^3 The example at https://agentfoundations.org/item?id=836 clearly illustrates this problem.

7 Concluding Remarks

The progress of AI is raising a lot of concerns^4. In particular, it is becoming clear that keeping an AI system under control requires more than just an off switch. We introduce in this paper dynamic safe interruptibility, which we believe is the right notion to reason about the safety of multi-agent systems that do not communicate. In particular, it ensures that infinite exploration and the one-step learning dynamics are preserved, two essential guarantees when learning in the non-stationary environment of Markov games. When trying to design a safely interruptible system for a single agent, using off-policy methods is generally a good idea, because the interruptions only impact the action selection and so should not impact the learning. For multi-agent systems, minimax is a good candidate for the action selection mechanism, because it is not impacted by the actions of other agents and only tries to maximize the reward of the agent in the worst possible case. A natural extension of our work would be to study dynamic safe interruptibility when Q-maps are replaced by neural networks [22, 15], which is a widely used framework in practice. In this setting, the neural network may overfit states where agents are pushed to by interruptions.
A smart experience replay mechanism that would pick observations for which the agents have not been interrupted for a long time more often than others is likely to solve this issue. More generally, experience replay mechanisms that compose well with safe interruptibility could compensate for the extra amount of exploration needed by safely interruptible learning by being more efficient with data. Thus, they are critical to make these techniques practical. Since dynamic safe interruptibility does not require proven convergence to the optimal solution, we argue that it is a good definition for studying the interruptibility problem when using function approximators. The results in this paper indicate that safe interruptibility may not be achievable for systems in which agents do not communicate at all. This means that, returning to the cars example, some global norms of communication would need to be defined to "implement" safe interruptibility. We address additional remarks in the section "Additional remarks" of the extended paper, which can be found in the supplementary material.

Acknowledgment. This work has been supported in part by the European ERC (Grant 339539 AOC) and by the Swiss National Science Foundation (Grant 200021 169588 TARBDA).

^4 https://futureoflife.org/ai-principles/ gives a list of principles that AI researchers should keep in mind when developing their systems.

Bibliography

[1] Business Insider: Google has developed a "big red button" that can be used to interrupt artificial intelligence and stop it from causing harm. URL: http://www.businessinsider.fr/uk/googledeepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6.

[2] Newsweek: Google's "big red button" could save the world. URL: http://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-worldelon-musk-46675.

[3] Wired: Google's "big red" killswitch could prevent an AI uprising.
URL: http://www.wired.co.uk/article/google-red-button-killswitch-artificial-intelligence.

[4] Craig Boutilier. Planning, learning and coordination in multiagent decision processes. In Proceedings of the 6th Conference on Theoretical Aspects of Rationality and Knowledge, pages 195–210. Morgan Kaufmann Publishers Inc., 1996.

[5] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In AAAI/IAAI, pages 746–752, 1998.

[6] Robert H Crites and Andrew G Barto. Elevator group control using multiple reinforcement learning agents. Machine Learning, 33(2-3):235–262, 1998.

[7] Jakob Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pages 2137–2145, 2016.

[8] Ben Goertzel and Cassio Pennachin. Artificial General Intelligence, volume 2. Springer, 2007.

[9] Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems (TOPLAS), 4(3):382–401, 1982.

[10] Tor Lattimore and Marcus Hutter. Asymptotically optimal agents. In International Conference on Algorithmic Learning Theory, pages 368–382. Springer, 2011.

[11] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pages 157–163, 1994.

[12] Michael L Littman. Friend-or-foe Q-learning in general-sum games. In ICML, volume 1, pages 322–328, 2001.

[13] Michael L Littman. Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1):55–66, 2001.

[14] Laetitia Matignon, Guillaume J Laurent, and Nadine Le Fort-Piat. Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems. The Knowledge Engineering Review, 27(01):1–31, 2012.
[15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

[16] Laurent Orseau and Stuart Armstrong. Safely interruptible agents. In Uncertainty in Artificial Intelligence: 32nd Conference (UAI 2016), edited by Alexander Ihler and Dominik Janzing, pages 557–566, 2016.

[17] Liviu Panait and Sean Luke. Cooperative multi-agent learning: the state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, 2005.

[18] Eduardo Rodrigues Gomes and Ryszard Kowalczyk. Dynamic analysis of multiagent Q-learning with ε-greedy exploration. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 369–376. ACM, 2009.

[19] Satinder Singh, Tommi Jaakkola, Michael L Littman, and Csaba Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308, 2000.

[20] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.

[21] Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, and Raul Vicente. Multiagent cooperation and competition with deep reinforcement learning. arXiv preprint arXiv:1511.08779, 2015.

[22] Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.

[23] Gerald Tesauro. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, pages 871–878, 2004.

[24] Gerald Tesauro and Jeffrey O Kephart. Pricing in agent economies using multi-agent Q-learning. Autonomous Agents and Multi-Agent Systems, 5(3):289–304, 2002.

[25] Xiaofeng Wang and Tuomas Sandholm. Reinforcement learning to play an optimal Nash equilibrium in team Markov games. In NIPS, volume 2, pages 1571–1578, 2002.

[26] Christopher JCH Watkins and Peter Dayan.
Q-learning. Machine Learning, 8(3-4):279–292, 1992.

[27] Michael Wunder, Michael L Littman, and Monica Babes. Classes of multiagent Q-learning dynamics with epsilon-greedy exploration. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1167–1174, 2010.
Toward Multimodal Image-to-Image Translation

Jun-Yan Zhu (UC Berkeley), Richard Zhang (UC Berkeley), Deepak Pathak (UC Berkeley), Trevor Darrell (UC Berkeley), Alexei A. Efros (UC Berkeley), Oliver Wang (Adobe Research), Eli Shechtman (Adobe Research)

Abstract

Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.

1 Introduction

Deep learning techniques have made rapid progress in conditional image generation. For example, networks have been used to inpaint missing image regions [20, 34, 47], add color to grayscale images [19, 20, 27, 50], and generate photorealistic images from sketches [20, 40]. However, most techniques in this space have focused on generating a single result. In this work, we model a distribution of potential results, as many of these problems may be multimodal in nature. For example, as seen in Figure 1, an image captured at night may look very different in the day, depending on cloud patterns and lighting conditions.
We pursue two main goals: producing results which are (1) perceptually realistic and (2) diverse, all while remaining faithful to the input. Mapping from a high-dimensional input to a high-dimensional output distribution is challenging. A common approach to representing multimodality is learning a low-dimensional latent code, which should represent aspects of the possible outputs not contained in the input image. At inference time, a deterministic generator uses the input image, along with stochastically sampled latent codes, to produce randomly sampled outputs. A common problem in existing methods is mode collapse [14], where only a small number of real samples get represented in the output. We systematically study a family of solutions to this problem. We start with the pix2pix framework [20], which has previously been shown to produce high-quality results for various image-to-image translation tasks. The method trains a generator network, conditioned on the input image, with two losses: (1) a regression loss to produce similar output to the known paired ground truth image and (2) a learned discriminator loss to encourage realism. The authors note that trivially appending a randomly drawn latent code did not produce diverse results. Instead, we propose encouraging a bijection between the output and latent space. We not only perform the direct task of mapping the latent code (along with the input) to the output but also jointly learn an encoder from the output back to the latent space.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1: (a) Input night image; (b) diverse day images sampled by our model. Multimodal image-to-image translation using our proposed method: given an input image from one domain (night image of a scene), we aim to model a distribution of potential outputs in the target domain (corresponding day images), producing both realistic and diverse results.]
This discourages two different latent codes from generating the same output (non-injective mapping). During training, the learned encoder attempts to pass enough information to the generator to resolve any ambiguities regarding the output mode. For example, when generating a day image from a night image, the latent vector may encode information about the sky color, lighting effects on the ground, and cloud patterns. Composing the encoder and generator sequentially should result in the same image being recovered. The opposite should produce the same latent code. In this work, we instantiate this idea by exploring several objective functions, inspired by literature in unconditional generative modeling: • cVAE-GAN (Conditional Variational Autoencoder GAN): One approach is first encoding the ground truth image into the latent space, giving the generator a noisy “peek" into the desired output. Using this, along with the input image, the generator should be able to reconstruct the specific output image. To ensure that random sampling can be used during inference time, the latent distribution is regularized using KL-divergence to be close to a standard normal distribution. This approach has been popularized in the unconditional setting by VAEs [23] and VAE-GANs [26]. • cLR-GAN (Conditional Latent Regressor GAN): Another approach is to first provide a randomly drawn latent vector to the generator. In this case, the produced output may not necessarily look like the ground truth image, but it should look realistic. An encoder then attempts to recover the latent vector from the output image. This method could be seen as a conditional formulation of the “latent regressor" model [8, 10] and also related to InfoGAN [4]. • BicycleGAN: Finally, we combine both these approaches to enforce the connection between latent encoding and output in both directions jointly and achieve improved performance. 
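The cVAE-GAN branch above regularizes the encoded latent distribution toward a standard normal with a KL-divergence term. Under a diagonal-Gaussian encoder this term has the familiar closed form D_KL(N(μ, σ²) ‖ N(0, I)) = ½ Σ_j (μ_j² + σ_j² − 1 − log σ_j²); a minimal sketch of that formula (illustrative only, not the authors' implementation):

```python
import math

def kl_diag_gaussian_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) for a
    diagonal-Gaussian encoder output, summed over latent dimensions."""
    return 0.5 * sum(m * m + s * s - 1.0 - math.log(s * s)
                     for m, s in zip(mu, sigma))

# The KL term vanishes exactly when the encoder matches the prior:
print(kl_diag_gaussian_to_standard_normal([0.0, 0.0], [1.0, 1.0]))  # 0.0
```

Minimizing this term pulls the encoded codes toward the prior so that randomly drawn z values remain meaningful at test time.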
We show that our method can produce both diverse and visually appealing results across a wide range of image-to-image translation problems, significantly more diverse than other baselines, including naively adding noise in the pix2pix framework. In addition to the loss function, we study the performance with respect to several encoder networks, as well as different ways of injecting the latent code into the generator network. We perform a systematic evaluation of these variants by using humans to judge photorealism and a perceptual distance metric [52] to assess output diversity. Code and data are available at https://github.com/junyanz/BicycleGAN.

2 Related Work

Generative modeling. Parametric modeling of the natural image distribution is a challenging problem. Classically, this problem has been tackled using restricted Boltzmann machines [41] and autoencoders [18, 43]. Variational autoencoders [23] provide an effective approach for modeling stochasticity within the network by reparametrization of a latent distribution at training time. A different approach is autoregressive models [11, 32, 33], which are effective at modeling natural image statistics but are slow at inference time due to their sequential predictive nature. Generative adversarial networks [15] overcome this issue by mapping random values from an easy-to-sample distribution (e.g., a low-dimensional Gaussian) to output images in a single feedforward pass of a network. During training, the samples are judged using a discriminator network, which distinguishes between samples from the target distribution and the generator network. GANs have recently been very successful [1, 4, 6, 8, 10, 35, 36, 49, 53, 54]. Our method builds on the conditional version of VAE [23] and InfoGAN [4] or latent regressor [8, 10] models by jointly optimizing their objectives. We revisit this connection in Section 3.4.

[Figure 2: Overview. (a) Test-time usage for all the methods. To produce a sample output, a latent code z is first randomly sampled from a known distribution (e.g., a standard normal distribution). A generator G maps an input image A (blue) and the latent sample z to produce an output sample B̂ (yellow). (b) Training pix2pix+noise [20] baseline, with an additional ground truth image B (brown) that corresponds to A. (c) Training cVAE-GAN (and cAE-GAN) starts from a ground truth target image B and encodes it into the latent space. The generator then attempts to map the input image A along with a sampled z back into the original image B. (d) Training cLR-GAN randomly samples a latent code from a known distribution, uses it to map A into the output B̂, and then tries to reconstruct the latent code from the output. (e) Training our hybrid BicycleGAN method combines constraints in both directions.]

Conditional image generation. All of the methods defined above can be easily conditioned. While conditional VAEs [42] and autoregressive models [32, 33] have shown promise [16, 44, 46], image-to-image conditional GANs have led to a substantial boost in the quality of the results. However, the quality has been attained at the expense of multimodality, as the generator learns to largely ignore the random noise vector when conditioned on a relevant context [20, 34, 40, 45, 47, 55]. In fact, it has even been shown that ignoring the noise leads to more stable training [20, 29, 34].

Explicitly-encoded multimodality. One way to express multiple modes is to explicitly encode them, and provide them as an additional input in addition to the input image.
For example, color and shape scribbles and other interfaces were used as conditioning in iGAN [54], pix2pix [20], Scribbler [40] and interactive colorization [51]. An effective option explored by concurrent work [2, 3, 13] is to use a mixture of models. Though able to produce multiple discrete answers, these methods are unable to produce continuous changes. While there has been some degree of success for generating multimodal outputs in unconditional and text-conditional setups [7, 15, 26, 31, 36], conditional image-to-image generation is still far from achieving the same results, unless explicitly encoded as discussed above. In this work, we learn conditional image generation models for modeling multiple modes of output by enforcing tight connections between the latent and image spaces.

3 Multimodal Image-to-Image Translation

Our goal is to learn a multi-modal mapping between two image domains, for example, edges and photographs, or night and day images, etc. Consider the input domain 𝒜 ⊂ ℝ^{H×W×3}, which is to be mapped to an output domain ℬ ⊂ ℝ^{H×W×3}. During training, we are given a dataset of paired instances from these domains, (A ∈ 𝒜, B ∈ ℬ), which is representative of a joint distribution p(A, B). It is important to note that there could be multiple plausible paired instances B that would correspond to an input instance A, but the training dataset usually contains only one such pair. However, given a new instance A during test time, our model should be able to generate a diverse set of outputs B̂, corresponding to different modes in the distribution p(B|A). While conditional GANs have achieved success in image-to-image translation tasks [20, 34, 40, 45, 47, 55], they are primarily limited to generating a deterministic output B̂ given the input image A. On the other hand, we would like to learn a mapping that can sample the output B̂ from the true conditional distribution given A, and produce results which are both diverse and realistic.
To do so, we learn a low-dimensional latent space z ∈ ℝ^Z, which encapsulates the ambiguous aspects of the output mode which are not present in the input image. For example, a sketch of a shoe could map to a variety of colors and textures, which could get compressed in this latent code. We then learn a deterministic mapping G : (A, z) → B to the output. To enable stochastic sampling, we desire the latent code vector z to be drawn from some prior distribution p(z); we use a standard Gaussian distribution N(0, I) in this work. We first discuss a simple extension of existing methods, discuss its strengths and weaknesses, and thereby motivate the development of our proposed approach in the subsequent subsections.

3.1 Baseline: pix2pix+noise (z → B̂)

The recently proposed pix2pix model [20] has shown high-quality results in the image-to-image translation setting. It uses conditional adversarial networks [15, 30] to help produce perceptually realistic results. GANs train a generator G and discriminator D by formulating their objective as an adversarial game. The discriminator attempts to differentiate between real images from the dataset and fake samples produced by the generator. Randomly drawn noise z is added to attempt to induce stochasticity. We illustrate the formulation in Figure 2(b) and describe it below.

$$\mathcal{L}_{\text{GAN}}(G,D) = \mathbb{E}_{A,B\sim p(A,B)}[\log D(A,B)] + \mathbb{E}_{A\sim p(A),\, z\sim p(z)}[\log(1 - D(A, G(A,z)))] \quad (1)$$

To encourage the output of the generator to match the input as well as stabilize the training, we use an ℓ1 loss between the output and the ground truth image.

$$\mathcal{L}_1^{\text{image}}(G) = \mathbb{E}_{A,B\sim p(A,B),\, z\sim p(z)}\|B - G(A,z)\|_1 \quad (2)$$

The final loss function uses the GAN and ℓ1 terms, balanced by λ.

$$G^* = \arg\min_G \max_D\; \mathcal{L}_{\text{GAN}}(G,D) + \lambda \mathcal{L}_1^{\text{image}}(G) \quad (3)$$

In this scenario, there is little incentive for the generator to make use of the noise vector, which encodes random information. Isola et al. [20] note that the noise was ignored by the generator in preliminary experiments and was removed from the final experiments.
This was consistent with observations made in the conditional settings by [29, 34], as well as the mode collapse phenomenon observed in unconditional cases [14, 39]. In this paper, we explore different ways to explicitly enforce the latent coding to capture relevant information.

3.2 Conditional Variational Autoencoder GAN: cVAE-GAN (B → z → B̂)

One way to force the latent code z to be "useful" is to directly map the ground truth B to it using an encoding function E. The generator G then uses both the latent code and the input image A to synthesize the desired output B̂. The overall model can be easily understood as the reconstruction of B, with the latent encoding z concatenated with the paired A in the middle, similar to an autoencoder [18]. This interpretation is better shown in Figure 2(c). This approach has been successfully investigated in the Variational Autoencoder [23] in the unconditional scenario, without the adversarial objective. Extending it to the conditional scenario, we model the distribution Q(z|B) of the latent code z using the encoder E with a Gaussian assumption, Q(z|B) ≜ E(B). To reflect this, Equation 1 is modified by sampling z ∼ E(B) using the re-parameterization trick, allowing direct back-propagation [23].

$$\mathcal{L}_{\text{GAN}}^{\text{VAE}} = \mathbb{E}_{A,B\sim p(A,B)}[\log D(A,B)] + \mathbb{E}_{A,B\sim p(A,B),\, z\sim E(B)}[\log(1 - D(A, G(A,z)))] \quad (4)$$

We make the corresponding change in the ℓ1 loss term in Equation 2 as well, to obtain $\mathcal{L}_1^{\text{VAE}}(G) = \mathbb{E}_{A,B\sim p(A,B),\, z\sim E(B)}\|B - G(A,z)\|_1$. Further, the latent distribution encoded by E(B) is encouraged to be close to a standard Gaussian to enable sampling at inference time, when B is not known:

$$\mathcal{L}_{\text{KL}}(E) = \mathbb{E}_{B\sim p(B)}[\mathcal{D}_{\text{KL}}(E(B)\,\|\, N(0,I))], \quad (5)$$

where $\mathcal{D}_{\text{KL}}(p\|q) = \int p(z)\log\frac{p(z)}{q(z)}\,dz$. This forms our cVAE-GAN objective, a conditional version of the VAE-GAN [26]:

$$G^*, E^* = \arg\min_{G,E} \max_D\; \mathcal{L}_{\text{GAN}}^{\text{VAE}}(G,D,E) + \lambda \mathcal{L}_1^{\text{VAE}}(G,E) + \lambda_{\text{KL}}\mathcal{L}_{\text{KL}}(E). \quad (6)$$

As a baseline, we also consider the deterministic version of this approach, i.e., dropping the KL-divergence and encoding z = E(B).
We call it cAE-GAN and show a comparison in the experiments. There is no guarantee in cAE-GAN on the distribution of the latent space z, which makes test-time sampling of z difficult.

3.3 Conditional Latent Regressor GAN: cLR-GAN (z → B̂ → ẑ)

We explore another method of enforcing the generator network to utilize the latent code embedding z, while staying close to the actual test-time distribution p(z), but from the latent code's perspective. As shown in Figure 2(d), we start from a randomly drawn latent code z and attempt to recover it with ẑ = E(G(A, z)). Note that the encoder E here is producing a point estimate for ẑ, whereas the encoder in the previous section was predicting a Gaussian distribution.

$$\mathcal{L}_1^{\text{latent}}(G,E) = \mathbb{E}_{A\sim p(A),\, z\sim p(z)}\|z - E(G(A,z))\|_1 \quad (7)$$

We also include the discriminator loss L_GAN(G, D) (Equation 1) on B̂ to encourage the network to generate realistic results, and the full loss can be written as:

$$G^*, E^* = \arg\min_{G,E} \max_D\; \mathcal{L}_{\text{GAN}}(G,D) + \lambda_{\text{latent}}\mathcal{L}_1^{\text{latent}}(G,E) \quad (8)$$

The ℓ1 loss for the ground truth image B is not used. Since the noise vector is randomly drawn, the predicted B̂ does not necessarily need to be close to the ground truth, but it does need to be realistic. The above objective bears similarity to the "latent regressor" model [4, 8, 10], where the generated sample B̂ is encoded to generate a latent vector.

3.4 Our Hybrid Model: BicycleGAN

We combine the cVAE-GAN and cLR-GAN objectives in a hybrid model. For cVAE-GAN, the encoding is learned from real data, but a random latent code may not yield realistic images at test time, as the KL loss may not be well optimized. Perhaps more importantly, the adversarial classifier D does not have a chance to see results sampled from the prior during training. In cLR-GAN, the latent space is easily sampled from a simple distribution, but the generator is trained without the benefit of seeing ground truth input-output pairs.
We propose to train with constraints in both directions, aiming to take advantage of both cycles (B → z → B̂ and z → B̂ → ẑ), hence the name BicycleGAN:

$$G^*, E^* = \arg\min_{G,E} \max_D\; \mathcal{L}_{\text{GAN}}^{\text{VAE}}(G,D,E) + \lambda \mathcal{L}_1^{\text{VAE}}(G,E) + \mathcal{L}_{\text{GAN}}(G,D) + \lambda_{\text{latent}}\mathcal{L}_1^{\text{latent}}(G,E) + \lambda_{\text{KL}}\mathcal{L}_{\text{KL}}(E), \quad (9)$$

where the hyper-parameters λ, λ_latent, and λ_KL control the relative importance of each term.

[Figure 3: Alternatives for injecting z into the generator. The latent code z is injected by spatial replication and concatenation into the generator network. We tried two alternatives: (left) injecting into the input layer and (right) into every intermediate layer in the encoder.]

In the unconditional GAN setting, Larsen et al. [26] observe that using samples from both the prior N(0, I) and encoded E(B) distributions further improves results. Hence, we also report one variant which uses the full objective shown above (Equation 9), but without the reconstruction loss on the latent space L_1^latent. We call it cVAE-GAN++, as it is based on cVAE-GAN with an additional loss L_GAN(G, D), which allows the discriminator to see randomly drawn samples from the prior.

4 Implementation Details

The code and additional results are publicly available at https://github.com/junyanz/BicycleGAN. Please refer to our website for more details about the datasets, architectures, and training procedures.

Network architecture. For the generator G, we use the U-Net [37], which contains an encoder-decoder architecture with symmetric skip connections. The architecture has been shown to produce strong results in the unimodal image prediction setting when there is a spatial correspondence between input and output pairs. For the discriminator D, we use two PatchGAN discriminators [20] at different scales, which predict real vs. fake for 70 × 70 and 140 × 140 overlapping image patches.
For the encoder E, we experiment with two networks: (1) E_CNN: a CNN with a few convolutional and downsampling layers, and (2) E_ResNet: a classifier with several residual blocks [17].

Training details We build our model on the Least Squares GANs (LSGANs) variant [28], which uses a least-squares objective instead of a cross-entropy loss. LSGANs produce high-quality results with stable training. We also find that not conditioning the discriminator D on input A leads to better results (also discussed in [34]), and hence choose to do the same for all methods. We set the parameters $\lambda_{\mathrm{image}} = 10$, $\lambda_{\mathrm{latent}} = 0.5$ and $\lambda_{\mathrm{KL}} = 0.01$ in all our experiments. We tie the weights for the generators and encoders in the cVAE-GAN and cLR-GAN models. For the encoder, only the predicted mean is used in cLR-GAN. We observe that using two separate discriminators yields slightly better visual results than sharing weights. We only update G for the $\ell_1$ loss $\mathcal{L}_1^{\mathrm{latent}}(G, E)$ on the latent code (Equation 7), while keeping E fixed. We found that optimizing G and E simultaneously for this loss encourages G and E to hide the information of the latent code without learning meaningful modes. We train our networks from scratch using Adam [22] with a batch size of 1 and a learning rate of 0.0002. We choose latent dimension |z| = 8 across all the datasets.

Injecting the latent code z into the generator We explore two ways of propagating the latent code z to the output, as shown in Figure 3: (1) add_to_input: we spatially replicate a Z-dimensional latent code z to an H × W × Z tensor and concatenate it with the H × W × 3 input image, and (2) add_to_all: we add z to each intermediate layer of the network G, after spatial replication to the appropriate sizes.

5 Experiments

Datasets We test our method on several image-to-image translation problems from prior work, including edges → photos [48, 54], Google maps → satellites [20], labels → images [5], and outdoor night → day images [25].
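The add_to_input option amounts to a spatial broadcast and a channel-wise concatenation. A dependency-free sketch with nested lists standing in for tensors (a real implementation would operate on framework tensors; `add_to_all` repeats the same replication at every intermediate layer instead):

```python
def add_to_input(image, z):
    # Spatially replicate a length-Z code z and concatenate it with an
    # H x W x 3 image (nested lists), giving an H x W x (3 + Z) input.
    return [[list(pixel) + list(z) for pixel in row] for row in image]

H, W = 2, 3  # tiny toy resolution; the paper trains on 256 x 256
image = [[[0.0, 0.0, 0.0] for _ in range(W)] for _ in range(H)]
x = add_to_input(image, [1.0] * 8)  # |z| = 8 as in the paper
```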
These problems are all one-to-many mappings. We train all the models on 256 × 256 images.

Methods We evaluate the following models described in Section 3: pix2pix+noise, cAE-GAN, cVAE-GAN, cVAE-GAN++, cLR-GAN, and our hybrid model BicycleGAN.

Figure 4: Example results of our hybrid model BicycleGAN on night → day, edges → shoes, edges → handbags, and maps → satellites. The left column shows the input, the second column the ground truth output, and the final four columns randomly generated samples. Models and additional examples are available at https://junyanz.github.io/BicycleGAN.

Figure 5: Qualitative method comparison on the labels → facades dataset. The BicycleGAN method produces results which are both realistic and diverse.

Realism (AMT fooling rate) and diversity (LPIPS distance) per method:

Method               AMT Fooling Rate [%]   LPIPS Distance
Random real images   50.0                   .265 ± .007
pix2pix+noise [20]   27.93 ± 2.40           .013 ± .000
cAE-GAN              13.64 ± 1.80           .200 ± .002
cVAE-GAN             24.93 ± 2.27           .095 ± .001
cVAE-GAN++           29.19 ± 2.43           .099 ± .002
cLR-GAN*             29.23 ± 2.48           .089 ± .002
BicycleGAN           34.33 ± 2.69           .111 ± .002

*We found that cLR-GAN resulted in severe mode collapse, with ∼15% of the images producing the same result; those images were omitted from this calculation.

Figure 6: Realism vs. diversity. We measure diversity using average LPIPS distance [52], and realism using a real vs. fake Amazon Mechanical Turk test on the Google maps → satellites task. The pix2pix+noise baseline produces little diversity. Using only the cAE-GAN method produces large artifacts during sampling.
The hybrid BicycleGAN method, which combines cVAE-GAN and cLR-GAN, produces results which have higher realism while maintaining diversity.

5.1 Qualitative Evaluation

We show qualitative comparison results in Figure 5. We observe that pix2pix+noise typically produces a single realistic output, but does not produce any meaningful variation. cAE-GAN adds variation to the output, but typically at a large cost to result quality. An example on facades is shown in Figure 4. We observe more variation in the cVAE-GAN, as the latent space is encouraged to encode information about ground truth outputs. However, the space is not densely populated, so drawing random samples may cause artifacts in the output. The cLR-GAN shows less variation in the output, and sometimes suffers from mode collapse. When combining these methods, however, in the hybrid method BicycleGAN, we observe results which are both diverse and realistic. Please see our website for a full set of results.

5.2 Quantitative Evaluation

We perform a quantitative analysis of the diversity, realism, and latent space distribution on our six variants and baselines. We quantitatively test on the Google maps → satellites dataset.

Diversity We compute the average distance of random samples in deep feature space. Pretrained networks have been used as a "perceptual loss" in image generation applications [9, 12, 21], as well as a held-out "validation" score in generative modeling, for example, assessing the semantic quality and diversity of a generative model [39] or the semantic accuracy of a grayscale colorization [50]. In Figure 6, we show the diversity score using the LPIPS metric proposed by [52].¹ For each method, we compute the average distance between 1900 pairs of randomly generated output images $\hat{B}$ (sampled from 100 input A images). Random pairs of ground truth real images in the B domain produce an average variation of .265.
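The diversity score reduces to an average pairwise distance over samples generated for each input. A self-contained sketch; the paper uses the learned LPIPS distance [52] in AlexNet feature space, while here a plain L1 metric stands in so the example runs without a pretrained network:

```python
import itertools

def diversity_score(samples, dist):
    # Average pairwise distance between outputs generated for one input A.
    # Identical samples (mode collapse) give a score of zero.
    pairs = list(itertools.combinations(samples, 2))
    return sum(dist(u, v) for u, v in pairs) / len(pairs)

# Stand-in metric for illustration; the paper's metric is LPIPS [52].
l1 = lambda u, v: sum(abs(a - b) for a, b in zip(u, v))
```

In the paper's setup, `samples` would be deep features of 1900 randomly generated output pairs drawn across 100 inputs.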
As we are measuring samples $\hat{B}$ which correspond to a specific input A, a system which stays faithful to the input should definitely not exceed this score. The pix2pix system [20] produces a single point estimate. Adding noise to the system, pix2pix+noise, produces a small diversity score, confirming the finding in [20] that adding noise does not produce large variation. Using the cAE-GAN model to encode a ground truth image B into a latent code z does increase the variation. The cVAE-GAN, cVAE-GAN++, and BicycleGAN models all place explicit constraints on the latent space, and the cLR-GAN model places an implicit constraint through sampling. These four methods all produce similar diversity scores. We note that high diversity scores may also indicate that unnatural images are being generated, causing meaningless variations. Next, we investigate the visual realism of our samples.

Perceptual Realism To judge the visual realism of our results, we use human judgments, as proposed in [50] and later used in [20, 55]. The test sequentially presents a real and a generated image to a human for 1 second each, in a random order, asks them to identify the fake, and measures the "fooling" rate.

¹The Learned Perceptual Image Patch Similarity (LPIPS) metric computes distance in AlexNet [24] feature space (conv1-5, pretrained on ImageNet [38]), with linear weights to better match human perceptual judgments.

Table 1: The encoding performance with respect to the different encoder architectures and methods of injecting z. We report the reconstruction loss $\lVert B - G(A, E(B)) \rVert_1$.

Encoder          E_ResNet        E_ResNet        E_CNN           E_CNN
Injecting z      add_to_all      add_to_input    add_to_all      add_to_input
label → photo    0.292 ± 0.058   0.292 ± 0.054   0.326 ± 0.066   0.339 ± 0.069
map → satellite  0.268 ± 0.070   0.266 ± 0.068   0.287 ± 0.067   0.272 ± 0.069

Figure 7: Different label → facades results trained with varying lengths of the latent code, |z| ∈ {2, 8, 256}.
Figure 6 (left) shows the realism across methods. The pix2pix+noise model achieves a high realism score, but without large diversity, as discussed in the previous section. The cAE-GAN helps produce diversity, but this comes at a large cost to visual realism. Because the distribution of the learned latent space is unclear, random samples may come from unpopulated regions of the space. Adding the KL-divergence loss in the latent space, as used in the cVAE-GAN model, recovers the visual realism. Furthermore, as expected, showing the discriminator randomly drawn z vectors in the cVAE-GAN++ model slightly increases realism. The cLR-GAN, which draws z vectors randomly from the predefined distribution, produces similar realism and diversity scores. However, the cLR-GAN model resulted in large mode collapse: approximately 15% of the outputs produced the same result, independent of the input image. The full hybrid BicycleGAN gets the best of both worlds, as it does not suffer from mode collapse and also has the highest realism score by a significant margin.

Encoder architecture In pix2pix, Isola et al. [20] conduct extensive ablation studies on discriminators and generators. Here we focus on the performance of two encoder architectures, E_CNN and E_ResNet, for our applications on the maps and facades datasets. We find that E_ResNet better encodes the output image, as measured by the image reconstruction loss $\lVert B - G(A, E(B)) \rVert_1$ on validation datasets, as shown in Table 1. We use E_ResNet in our final model.

Methods of injecting latent code We evaluate the two ways of injecting the latent code z, add_to_input and add_to_all (Section 4), with respect to the same reconstruction loss $\lVert B - G(A, E(B)) \rVert_1$. Table 1 shows that the two methods give similar performance. This indicates that the U-Net [37] can already propagate the information well to the output without the additional skip connections from z. We use the add_to_all method to inject noise in our final model.
Latent code length We study the BicycleGAN model results with respect to the varying number of dimensions of the latent code, |z| ∈ {2, 8, 256}, in Figure 7. A very low-dimensional latent code may limit the amount of diversity that can be expressed. On the contrary, a very high-dimensional latent code can potentially encode more information about an output image, at the cost of making sampling difficult. The optimal length of z largely depends on individual datasets and applications, and on how much ambiguity there is in the output.

6 Conclusions

In conclusion, we have evaluated a few methods for combating the problem of mode collapse in the conditional image generation setting. We find that by combining multiple objectives for encouraging a bijective mapping between the latent and output spaces, we obtain results which are more realistic and diverse. We see many interesting avenues of future work, including directly enforcing a distribution in the latent space that encodes semantically meaningful attributes to allow for image-to-image transformations with user-controllable parameters.

Acknowledgments We thank Phillip Isola and Tinghui Zhou for helpful discussions. This work was supported in part by Adobe Inc., DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1633310, IIS-1427425, IIS-1212798, the Berkeley Artificial Intelligence Research (BAIR) Lab, and hardware donations from NVIDIA. JYZ is supported by a Facebook Graduate Fellowship, RZ by an Adobe Research Fellowship, and DP by an NVIDIA Graduate Fellowship.

References [1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017. [2] A. Bansal, Y. Sheikh, and D. Ramanan. Pixelnn: Example-based image synthesis. arXiv preprint arXiv:1708.05349, 2017. [3] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017. [4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel.
Infogan: interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016. [5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. [6] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015. [7] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using real nvp. In ICLR, 2017. [8] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2016. [9] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016. [10] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville. Adversarially learned inference. In ICLR, 2016. [11] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In ICCV, 1999. [12] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, pages 2414–2423, 2016. [13] A. Ghosh, V. Kulharia, V. Namboodiri, P. H. Torr, and P. K. Dokania. Multi-agent diverse generative adversarial networks. arXiv preprint arXiv:1704.02906, 2017. [14] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016. [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014. [16] S. Guadarrama, R. Dahl, D. Bieber, M. Norouzi, J. Shlens, and K. Murphy. Pixcolor: Pixel recursive colorization. In BMVC, 2017. [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. [18] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. [19] S. Iizuka, E. Simo-Serra, and H. Ishikawa. 
Let there be color!: Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. SIGGRAPH, 35(4), 2016. [20] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. [21] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. [22] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [23] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014. [24] A. Krizhevsky. One weird trick for parallelizing convolutional neural networks. 2014. [25] P.-Y. Laffont, Z. Ren, X. Tao, C. Qian, and J. Hays. Transient attributes for high-level understanding and editing of outdoor scenes. SIGGRAPH, 2014. [26] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016. [27] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In ECCV, 2016. [28] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. In ICCV, 2017. [29] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016. [30] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. [31] A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR, 2017. [32] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. PMLR, 2016. [33] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with pixelcnn decoders. In NIPS, 2016. [34] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros.
Context encoders: Feature learning by inpainting. In CVPR, 2016. [35] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. [36] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text-to-image synthesis. In ICML, 2016. [37] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241. Springer, 2015. [38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015. [39] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016. [40] P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays. Scribbler: Controlling deep image synthesis with sketch and color. In CVPR, 2017. [41] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986. [42] K. Sohn, X. Yan, and H. Lee. Learning structured output representation using deep conditional generative models. In NIPS, 2015. [43] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. [44] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In ECCV, 2016. [45] W. Xian, P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays. Texturegan: Controlling deep image synthesis with texture patches. In arXiv preprint arXiv:1706.02823, 2017. [46] T. Xue, J. Wu, K. Bouman, and B. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016. [47] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang, and H. Li. 
High-resolution image inpainting using multi-scale neural patch synthesis. In CVPR, 2017. [48] A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In CVPR, 2014. [49] H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017. [50] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In ECCV, 2016. [51] R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros. Real-time user-guided image colorization with learned deep priors. SIGGRAPH, 2017. [52] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924, 2018. [53] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. In ICLR, 2017. [54] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016. [55] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
The Marginal Value of Adaptive Gradient Methods in Machine Learning

Ashia C. Wilson*, Rebecca Roelofs*, Mitchell Stern*, Nathan Srebro†, and Benjamin Recht*
{ashia,roelofs,mitchell}@berkeley.edu, nati@ttic.edu, brecht@berkeley.edu
*University of California, Berkeley  †Toyota Technological Institute at Chicago

Abstract

Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.

1 Introduction

An increasing share of deep learning researchers are training their models with adaptive gradient methods [3, 12] due to their rapid training time [6]. Adam [8] in particular has become the default algorithm used across many deep learning frameworks. However, the generalization and out-of-sample behavior of such adaptive gradient methods remains poorly understood. Given that many passes over the data are needed to minimize the training objective, typical regret guarantees do not necessarily ensure that the found solutions will generalize [17].
Notably, when the number of parameters exceeds the number of data points, it is possible that the choice of algorithm can dramatically influence which model is learned [15]. Given two different minimizers of some optimization problem, what can we say about their relative ability to generalize? In this paper, we show that adaptive and non-adaptive optimization methods indeed find very different solutions with very different generalization properties. We provide a simple generative model for binary classification where the population is linearly separable (i.e., there exists a solution with large margin), but AdaGrad [3], RMSProp [21], and Adam converge to a solution that incorrectly classifies new data with probability arbitrarily close to half. On this same example, SGD finds a solution with zero error on new data. Our construction suggests that adaptive methods tend to give undue influence to spurious features that have no effect on out-of-sample generalization.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

We additionally present numerical experiments demonstrating that adaptive methods generalize worse than their non-adaptive counterparts. Our experiments reveal three primary findings. First, with the same amount of hyperparameter tuning, SGD and SGD with momentum outperform adaptive methods on the development/test set across all evaluated models and tasks. This is true even when the adaptive methods achieve the same or lower training loss than the non-adaptive methods. Second, adaptive methods often display faster initial progress on the training set, but their performance quickly plateaus on the development/test set. Third, the same amount of tuning was required for all methods, including adaptive methods. This challenges the conventional wisdom that adaptive methods require less tuning.
Moreover, as a useful guide to future practice, we propose a simple scheme for tuning learning rates and decays that performs well on all deep learning tasks we studied.

2 Background

The canonical optimization algorithms used to minimize risk are either stochastic gradient methods or stochastic momentum methods. Stochastic gradient methods can generally be written as

$w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k),$    (2.1)

where $\tilde{\nabla} f(w_k) := \nabla f(w_k; x_{i_k})$ is the gradient of some loss function f computed on a batch of data $x_{i_k}$. Stochastic momentum methods are a second family of techniques that have been used to accelerate training. These methods can generally be written as

$w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k + \gamma_k (w_k - w_{k-1})) + \beta_k (w_k - w_{k-1}).$    (2.2)

The sequence of iterates (2.2) includes Polyak's heavy-ball method (HB) with $\gamma_k = 0$, and Nesterov's Accelerated Gradient method (NAG) [19] with $\gamma_k = \beta_k$. Notable exceptions to the general formulations (2.1) and (2.2) are adaptive gradient and adaptive momentum methods, which choose a local distance measure constructed using the entire sequence of iterates $(w_1, \dots, w_k)$. These methods (including AdaGrad [3], RMSProp [21], and Adam [8]) can generally be written as

$w_{k+1} = w_k - \alpha_k H_k^{-1} \tilde{\nabla} f(w_k + \gamma_k (w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1} (w_k - w_{k-1}),$    (2.3)

where $H_k := H(w_1, \dots, w_k)$ is a positive definite matrix. Though not necessary, the matrix $H_k$ is usually defined as

$H_k = \mathrm{diag}\!\left( \left( \textstyle\sum_{i=1}^{k} \eta_i\, g_i \circ g_i \right)^{1/2} \right),$    (2.4)

where "$\circ$" denotes the entry-wise or Hadamard product, $g_k = \tilde{\nabla} f(w_k + \gamma_k (w_k - w_{k-1}))$, and $\eta_k$ is some set of coefficients specified for each algorithm. That is, $H_k$ is a diagonal matrix whose entries are the square roots of a linear combination of squares of past gradient components. We will use the fact that the $H_k$ are defined in this fashion in the sequel. For the specific settings of the parameters for many of the algorithms used in deep learning, see Table 1. Adaptive methods attempt to adjust an algorithm to the geometry of the data.
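As a concrete special case of (2.3)–(2.4), setting $\beta_k = \gamma_k = 0$ and $\eta_i = 1$ recovers AdaGrad. A minimal per-coordinate sketch (an illustration under those assumptions, not any framework's implementation):

```python
def adagrad_step(w, g, G_diag, lr=0.1, eps=1e-8):
    # One step of Eq. (2.3) with beta_k = gamma_k = 0 and
    # H_k = diag(sum_s g_s o g_s)^{1/2}, i.e. the AdaGrad column of Table 1
    # with eta_i = 1. G_diag accumulates squared gradients in place;
    # eps plays the role of the epsilon mentioned under Table 1.
    for j, gj in enumerate(g):
        G_diag[j] += gj * gj
        w[j] -= lr * gj / (G_diag[j] ** 0.5 + eps)
    return w

w, G_diag = [1.0], [0.0]
adagrad_step(w, [2.0], G_diag)  # accumulator becomes 4, step is lr * 2 / 2
```

Setting `G_diag` to a running exponential average instead of a sum would give the RMSProp column, per Table 1.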
In contrast, stochastic gradient descent and related variants use the $\ell_2$ geometry inherent to the parameter space, and are equivalent to setting $H_k = I$ in the adaptive methods.

Table 1: Parameter settings of algorithms used in deep learning. Here, $D_k = \mathrm{diag}(g_k \circ g_k)$ and $G_k := H_k \circ H_k$. We omit the additional $\epsilon$ added to the adaptive methods, which is only needed to ensure non-singularity of the matrices $H_k$.

Algorithm | $G_k$ | $\alpha_k$ | $\beta_k$ | $\gamma_k$
SGD       | $I$ | $\alpha$ | $0$ | $0$
HB        | $I$ | $\alpha$ | $\beta$ | $0$
NAG       | $I$ | $\alpha$ | $\beta$ | $\beta$
AdaGrad   | $G_{k-1} + D_k$ | $\alpha$ | $0$ | $0$
RMSProp   | $\beta_2 G_{k-1} + (1-\beta_2) D_k$ | $\alpha$ | $0$ | $0$
Adam      | $\frac{\beta_2}{1-\beta_2^k} G_{k-1} + \frac{1-\beta_2}{1-\beta_2^k} D_k$ | $\alpha \frac{1-\beta_1}{1-\beta_1^k}$ | $\frac{\beta_1 (1-\beta_1^{k-1})}{1-\beta_1^k}$ | $0$

In this context, generalization refers to the performance of a solution w on a broader population. Performance is often defined in terms of a different loss function than the function f used in training. For example, in classification tasks, we typically define generalization in terms of classification error rather than cross-entropy.

2.1 Related Work

Understanding how optimization relates to generalization is a very active area of current machine learning research. Most of the seminal work in this area has focused on understanding how early stopping can act as implicit regularization [22]. In a similar vein, Ma and Belkin [10] have shown that gradient methods may not be able to find complex solutions at all in any reasonable amount of time. Hardt et al. [17] show that SGD is uniformly stable, and therefore solutions with low training error found quickly will generalize well. Similarly, using a stability argument, Raginsky et al. [16] have shown that Langevin dynamics can find solutions that generalize better than ordinary SGD in non-convex settings. Neyshabur, Srebro, and Tomioka [15] discuss how algorithmic choices can act as implicit regularizers.
In a similar vein, Neyshabur, Salakhutdinov, and Srebro [14] show that a different algorithm, one which performs descent using a metric that is invariant to re-scaling of the parameters, can lead to solutions which sometimes generalize better than SGD. Our work supports that of [14] by drawing connections between the metric used to perform local optimization and the ability of the training algorithm to find solutions that generalize. However, we focus primarily on the different generalization properties of adaptive and non-adaptive methods.

A similar line of inquiry has been pursued by Keskar et al. [7]. Hochreiter and Schmidhuber [4] showed that "sharp" minimizers generalize poorly, whereas "flat" minimizers generalize well. Keskar et al. empirically show that Adam converges to sharper minimizers when the batch size is increased. However, they observe that even with small batches, Adam does not find solutions whose performance matches state-of-the-art. In the current work, we aim to show that the choice of Adam as an optimizer itself strongly influences the set of minimizers that any batch size will ever see, and to help explain why they were unable to find solutions that generalized particularly well.

3 The potential perils of adaptivity

The goal of this section is to illustrate the following observation: when a problem has multiple global minima, different algorithms can find entirely different solutions when initialized from the same point. In addition, we construct an example where adaptive gradient methods find a solution which has worse out-of-sample error than SGD. To simplify the presentation, let us restrict our attention to the binary least-squares classification problem, where we can easily compute the closed-form solution found by different methods. In least-squares classification, we aim to solve

$\min_w\; R_S[w] := \tfrac{1}{2} \lVert Xw - y \rVert_2^2.$    (3.1)

Here X is an $n \times d$ matrix of features and y is an n-dimensional vector of labels in $\{-1, 1\}$.
We aim to find the best linear classifier w. Note that when d > n, if there is a minimizer with loss 0 then there is an infinite number of global minimizers. The question remains: what solution does an algorithm find, and how well does it perform on unseen data?

3.1 Non-adaptive methods

Most common non-adaptive methods will find the same solution for the least squares objective (3.1). Any gradient or stochastic gradient of $R_S$ must lie in the span of the rows of X. Therefore, any method that is initialized in the row span of X (say, for instance, at w = 0) and uses only linear combinations of gradients, stochastic gradients, and previous iterates must also lie in the row span of X. The unique solution that lies in the row span of X also happens to be the solution with minimum Euclidean norm. We thus denote $w^{\mathrm{SGD}} = X^T (X X^T)^{-1} y$. Almost all non-adaptive methods like SGD, SGD with momentum, mini-batch SGD, gradient descent, Nesterov's method, and the conjugate gradient method will converge to this minimum norm solution. The minimum norm solutions have the largest margin out of all solutions of the equation Xw = y. Maximizing margin has a long and fruitful history in machine learning, and thus it is a pleasant surprise that gradient descent naturally finds a max-margin solution.

3.2 Adaptive methods

Next, we consider adaptive methods where $H_k$ is diagonal. While it is difficult to derive the general form of the solution, we can analyze special cases. Indeed, we can construct a variety of instances where adaptive methods converge to solutions with low $\ell_1$ norm rather than low $\ell_2$ norm. For a vector $x \in \mathbb{R}^q$, let $\mathrm{sign}(x)$ denote the function that maps each component of x to its sign.

Lemma 3.1 Suppose there exists a scalar c such that $X \,\mathrm{sign}(X^T y) = c y$. Then, when initialized at $w_0 = 0$, AdaGrad, Adam, and RMSProp all converge to the unique solution $w \propto \mathrm{sign}(X^T y)$.
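The minimum-norm solution of Section 3.1 can be computed directly. A dependency-free sketch for a toy instance with n = 2 rows (the 2×2 inverse is done by hand so no linear-algebra library is assumed):

```python
def min_norm_solution(X, y):
    # w = X^T (X X^T)^{-1} y: the minimum Euclidean-norm interpolator that
    # non-adaptive methods initialized at w = 0 converge to (Section 3.1).
    # Restricted to n = 2 so the Gram inverse can be written explicitly.
    n, d = len(X), len(X[0])
    assert n == 2
    K = [[sum(X[i][k] * X[j][k] for k in range(d)) for j in range(2)]
         for i in range(2)]                       # Gram matrix X X^T
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    Kinv = [[K[1][1] / det, -K[0][1] / det],
            [-K[1][0] / det, K[0][0] / det]]
    alpha = [sum(Kinv[i][j] * y[j] for j in range(2)) for i in range(2)]
    return [sum(alpha[i] * X[i][k] for i in range(2)) for k in range(d)]

X = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]  # wide: d = 3 > n = 2
y = [1.0, -1.0]
w = min_norm_solution(X, y)             # interpolates: X w = y
```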
In other words, whenever there exists a solution of $Xw = y$ that is proportional to $\mathrm{sign}(X^T y)$, this is precisely the solution to which all of the adaptive gradient methods converge.

Proof We prove this lemma by showing that the entire trajectory of the algorithm consists of iterates whose components have constant magnitude. In particular, we will show that $w_k = \lambda_k \,\mathrm{sign}(X^T y)$ for some scalar $\lambda_k$. The initial point $w_0 = 0$ satisfies the assertion with $\lambda_0 = 0$. Now, assume the assertion holds for all $k \le t$. Observe that

$\nabla R_S(w_k + \gamma_k (w_k - w_{k-1})) = X^T \big( X (w_k + \gamma_k (w_k - w_{k-1})) - y \big)$
$= X^T \big( (\lambda_k + \gamma_k (\lambda_k - \lambda_{k-1})) X \,\mathrm{sign}(X^T y) - y \big)$
$= \{ (\lambda_k + \gamma_k (\lambda_k - \lambda_{k-1})) c - 1 \}\, X^T y = \mu_k X^T y,$

where the last equation defines $\mu_k$. Hence, letting $g_k = \nabla R_S(w_k + \gamma_k (w_k - w_{k-1}))$, we also have

$H_k = \mathrm{diag}\!\left( \left( \textstyle\sum_{s=1}^{k} \eta_s\, g_s \circ g_s \right)^{1/2} \right) = \mathrm{diag}\!\left( \left( \textstyle\sum_{s=1}^{k} \eta_s \mu_s^2 \right)^{1/2} |X^T y| \right) = \nu_k \,\mathrm{diag}\big( |X^T y| \big),$

where $|u|$ denotes the component-wise absolute value of a vector and the last equation defines $\nu_k$. In sum,

$w_{k+1} = w_k - \alpha_k H_k^{-1} \nabla f(w_k + \gamma_k (w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1} (w_k - w_{k-1})$
$= \left\{ \lambda_k - \frac{\alpha_k \mu_k}{\nu_k} + \frac{\beta_k \nu_{k-1}}{\nu_k} (\lambda_k - \lambda_{k-1}) \right\} \mathrm{sign}(X^T y),$

proving the claim.¹

This solution is far simpler than the one obtained by gradient methods, and it would be surprising if such a simple solution would perform particularly well. We now turn to showing that such solutions can indeed generalize arbitrarily poorly.

3.3 Adaptivity can overfit

Lemma 3.1 allows us to construct a particularly pernicious generative model where AdaGrad fails to find a solution that generalizes. This example uses infinite dimensions to simplify bookkeeping, but one could take the dimensionality to be 6n. Note that in deep learning, we often have a number of parameters equal to 25n or more [20], so this is not a particularly high dimensional example by contemporary standards. For i = 1, ..., n, sample the label $y_i$ to be 1 with probability p and −1 with probability 1 − p for some p > 1/2.
Let x_i be an infinite dimensional vector with entries

x_ij = y_i if j = 1;  1 if j = 2, 3;  1 if j = 4 + 5(i − 1), . . . , 4 + 5(i − 1) + 2(1 − y_i);  0 otherwise.

¹In the event that X^T y has a component equal to 0, we define 0/0 = 0 so that the update is well-defined.

In other words, the first feature of x_i is the class label. The next 2 features are always equal to 1. After this, there is a set of features unique to x_i that are equal to 1. If the class label is 1, then there is 1 such unique feature. If the class label is −1, then there are 5 such features. Note that the only discriminative feature useful for classifying data outside the training set is the first one! Indeed, one can perform perfect classification using only the first feature. The other features are all useless. Features 2 and 3 are constant, and each of the remaining features only appears for one example in the data set. However, as we will see, algorithms without such a priori knowledge may not be able to learn these distinctions.

Take n samples and consider the AdaGrad solution for minimizing (1/2)‖Xw − y‖². First we show that the conditions of Lemma 3.1 hold. Let b = Σ_{i=1}^n y_i and assume for the sake of simplicity that b > 0. This will happen with arbitrarily high probability for large enough n. Define u = X^T y and observe that

u_j = n if j = 1;  b if j = 2, 3;  y_{⌊(j+1)/5⌋} if j > 3 and x_{⌊(j+1)/5⌋, j} = 1;  0 otherwise,

and

sign(u_j) = 1 if j = 1;  1 if j = 2, 3;  y_{⌊(j+1)/5⌋} if j > 3 and x_{⌊(j+1)/5⌋, j} = 1;  0 otherwise.

Thus we have ⟨sign(u), x_i⟩ = y_i + 2 + y_i(3 − 2y_i) = 4y_i, as desired. Hence, the AdaGrad solution w_ada ∝ sign(u). In particular, w_ada has all of its components equal to ±τ for some positive constant τ. Now since w_ada has the same sign pattern as u, the first three components of w_ada are equal to each other. But for a new data point, x_test, the only features that are nonzero in both x_test and w_ada are the first three. In particular, we have ⟨w_ada, x_test⟩ = τ(y_test + 2) > 0.
Therefore, the AdaGrad solution will label all unseen data as a positive example! Now, we turn to the minimum 2-norm solution. Let P and N denote the set of positive and negative examples respectively. Let n_+ = |P| and n_− = |N|. Assuming α_i = α_+ when y_i = 1 and α_i = α_− when y_i = −1, we have that the minimum norm solution will have the form w_SGD = X^T α = Σ_{i∈P} α_+ x_i + Σ_{j∈N} α_− x_j. These scalars can be found by solving XX^T α = y. In closed form we have

α_+ = (4n_− + 3) / (9n_+ + 3n_− + 8n_+n_− + 3)  and  α_− = −(4n_+ + 1) / (9n_+ + 3n_− + 8n_+n_− + 3).  (3.2)

The algebra required to compute these coefficients can be found in the Appendix. For a new data point, x_test, again the only features that are nonzero in both x_test and w_SGD are the first three. Thus we have

⟨w_SGD, x_test⟩ = y_test(n_+α_+ − n_−α_−) + 2(n_+α_+ + n_−α_−).

Using (3.2), we see that whenever n_+ > n_−/3, the SGD solution makes no errors. A formal construction of this example using a data-generating distribution can be found in Appendix C.

Though this generative model was chosen to illustrate extreme behavior, it shares salient features with many common machine learning instances. There are a few frequent features, where some predictor based on them is a good predictor, though these might not be easy to identify from first inspection. Additionally, there are many other features which are sparse. On finite training data it looks like such features are good for prediction, since each such feature is discriminatory for a particular training example, but this is over-fitting and an artifact of having fewer training examples than features. Moreover, we will see shortly that adaptive methods typically generalize worse than their non-adaptive counterparts on real datasets.

4 Deep Learning Experiments

Having established that adaptive and non-adaptive methods can find different solutions in the convex setting, we now turn to an empirical study of deep neural networks to see whether we observe a similar discrepancy in generalization.
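The finite-dimensional version of the construction above is easy to check numerically. The sketch below (my own code, with n_+ = 7 and n_− = 3 chosen arbitrarily) builds the design matrix, verifies the Lemma 3.1 condition X sign(X^T y) = 4y, and compares the two closed-form solutions on fresh test points; a test point's unique features are disjoint from the training ones, so only its first three features contribute to either inner product.

```python
import numpy as np

def make_data(labels):
    """Feature 1 = label; features 2-3 = 1; then a reserved block of 5 columns
    per example holding its unique features (1 one if y=+1, 5 ones if y=-1)."""
    n = len(labels)
    X = np.zeros((n, 3 + 5 * n))
    for i, yi in enumerate(labels):
        X[i, 0] = yi
        X[i, 1:3] = 1.0
        start = 3 + 5 * i
        X[i, start:start + 3 - 2 * yi] = 1.0
    return X

y = np.array([1] * 7 + [-1] * 3)          # n+ = 7, n- = 3, so b = sum(y) > 0
X = make_data(y)

u = X.T @ y
w_ada = np.sign(u)                        # adaptive solution, up to positive scale
w_sgd = np.linalg.pinv(X) @ y             # minimum Euclidean norm solution

def predict(w, y_test):
    """A fresh test point overlaps the training features only in coords 1-3."""
    x_test = np.zeros(X.shape[1])
    x_test[0], x_test[1:3] = y_test, 1.0
    return np.sign(x_test @ w)
```

As the lemma's analysis predicts, the sign solution labels every test point as positive, while the minimum-norm solution classifies both classes correctly in this configuration.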
We compare two non-adaptive methods – SGD and the heavy ball method (HB) – to three popular adaptive methods – AdaGrad, RMSProp and Adam. We study performance on four deep learning problems: (C1) the CIFAR-10 image classification task, (L1) character-level language modeling on the novel War and Peace, and (L2) discriminative parsing and (L3) generative parsing on Penn Treebank.

Name | Network type               | Architecture | Dataset       | Framework
C1   | Deep Convolutional         | cifar.torch  | CIFAR-10      | Torch
L1   | 2-Layer LSTM               | torch-rnn    | War & Peace   | Torch
L2   | 2-Layer LSTM + Feedforward | span-parser  | Penn Treebank | DyNet
L3   | 3-Layer LSTM               | emnlp2016    | Penn Treebank | Tensorflow

Table 2: Summaries of the models we use for our experiments.²

In the interest of reproducibility, we use a network architecture for each problem that is either easily found online (C1, L1, L2, and L3) or produces state-of-the-art results (L2 and L3). Table 2 summarizes the setup for each application. We take care to make minimal changes to the architectures and their data pre-processing pipelines in order to best isolate the effect of each optimization algorithm.

We conduct each experiment 5 times from randomly initialized starting points, using the initialization scheme specified in each code repository. We allocate a pre-specified budget on the number of epochs used for training each model. When a development set was available, we chose the settings that achieved the best peak performance on the development set by the end of the fixed epoch budget. CIFAR-10 did not have an explicit development set, so we chose the settings that achieved the lowest training loss at the end of the fixed epoch budget.

Our experiments show the following primary findings: (i) Adaptive methods find solutions that generalize worse than those found by non-adaptive methods. (ii) Even when the adaptive methods achieve the same training loss or lower than non-adaptive methods, the development or test performance is worse.
(iii) Adaptive methods often display faster initial progress on the training set, but their performance quickly plateaus on the development set. (iv) Though conventional wisdom suggests that Adam does not require tuning, we find that tuning the initial learning rate and decay scheme for Adam yields significant improvements over its default settings in all cases.

4.1 Hyperparameter Tuning

Optimization hyperparameters have a large influence on the quality of solutions found by optimization algorithms for deep neural networks. The algorithms under consideration have many hyperparameters: the initial step size α_0, the step decay scheme, the momentum value β_0, the momentum schedule β_k, the smoothing term ε, the initialization scheme for the gradient accumulator, and the parameter controlling how to combine gradient outer products, to name a few. A grid search on a large space of hyperparameters is infeasible even with substantial industrial resources, and we found that the parameters that impacted performance the most were the initial step size and the step decay scheme. We left the remaining parameters with their default settings. We describe the differences between the default settings of Torch, DyNet, and Tensorflow in Appendix B for completeness.

To tune the step sizes, we evaluated a logarithmically-spaced grid of five step sizes. If the best performance was ever at one of the extremes of the grid, we would try new grid points so that the best performance was contained in the middle of the parameters. For example, if we initially tried step sizes 2, 1, 0.5, 0.25, and 0.125 and found that 2 was the best performing, we would have tried the step size 4 to see if performance was improved. If performance improved, we would have tried 8 and so on. We list the initial step sizes we tried in Appendix D.

For step size decay, we explored two separate schemes, a development-based decay scheme (dev-decay) and a fixed frequency decay scheme (fixed-decay).
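The grid-extension protocol for step sizes can be sketched in a few lines (my own sketch, not the paper's tooling: `evaluate` stands in for training a model with a given step size and returning a higher-is-better development score).

```python
import math

def tune_step_size(evaluate, grid, max_rounds=20):
    """Expand a log-spaced step-size grid until the best value is interior.
    evaluate: step_size -> score (higher is better); scores are cached."""
    grid = sorted(grid)
    scores = {}
    for _ in range(max_rounds):
        for s in grid:
            if s not in scores:
                scores[s] = evaluate(s)
        best = max(grid, key=lambda s: scores[s])
        if best == grid[-1]:
            grid.append(grid[-1] * 2)       # best at top edge: extend upward
        elif best == grid[0]:
            grid.insert(0, grid[0] / 2)     # best at bottom edge: extend downward
        else:
            return best                     # best is interior: done
    return best

# Toy "dev score" peaked at step size 8 (purely illustrative).
score = lambda s: -(math.log2(s) - 3) ** 2
best = tune_step_size(score, [0.125, 0.25, 0.5, 1.0, 2.0])
```

Starting from the grid {0.125, ..., 2}, the loop doubles the top end three times and stops once 8 is no longer an extreme point, mirroring the 2 → 4 → 8 example in the text.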
For dev-decay, we keep track of the best validation performance so far, and at each epoch decay the learning rate by a constant factor δ if the model does not attain a new best value. For fixed-decay, we decay the learning rate by a constant factor δ every k epochs. We recommend the dev-decay scheme when a development set is available; not only does it have fewer hyperparameters than the fixed frequency scheme, but our experiments also show that it produces results comparable to, or better than, the fixed-decay scheme.

²Architectures can be found at the following links: (1) cifar.torch: https://github.com/szagoruyko/cifar.torch; (2) torch-rnn: https://github.com/jcjohnson/torch-rnn; (3) span-parser: https://github.com/jhcross/span-parser; (4) emnlp2016: https://github.com/cdg720/emnlp2016.

[Figure 1: Training error (left) and top-1 test error (right) on CIFAR-10. The annotations indicate where the best performance is attained for each method. The shading represents ± one standard deviation computed across five runs from random initial starting points. In all cases, adaptive methods are performing worse on both train and test than non-adaptive methods.]

4.2 Convolutional Neural Network

We used the VGG+BN+Dropout network for CIFAR-10 from the Torch blog [23], which in prior work achieves a baseline test error of 7.55%. Figure 1 shows the learning curve for each algorithm on both the training and test dataset. We observe that the solutions found by SGD and HB do indeed generalize better than those found by adaptive methods. The best overall test error found by a non-adaptive algorithm, SGD, was 7.65 ± 0.14%, whereas the best adaptive method, RMSProp, achieved a test error of 9.60 ± 0.19%.
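The dev-decay rule itself fits in a few lines; this is a sketch of my own (the class name and the higher-is-better score convention are mine, not the paper's).

```python
class DevDecay:
    """dev-decay: multiply the learning rate by delta whenever an epoch
    fails to improve the best validation score seen so far."""
    def __init__(self, lr0, delta=0.9):
        self.lr, self.delta, self.best = lr0, delta, None

    def step(self, dev_score):
        """Call once per epoch with the epoch's validation score (higher = better)."""
        if self.best is None or dev_score > self.best:
            self.best = dev_score          # new best: keep the current rate
        else:
            self.lr *= self.delta          # no improvement: decay
        return self.lr

sched = DevDecay(1.0, delta=0.9)
lrs = [sched.step(s) for s in [0.50, 0.60, 0.55, 0.70, 0.65]]
```

The rate only drops on the epochs that fail to set a new best score (here the third and fifth), which is why the scheme needs just the single hyperparameter δ.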
Early on in training, the adaptive methods appear to be performing better than the non-adaptive methods, but starting at epoch 50, even though the training error of the adaptive methods is still lower, SGD and HB begin to outperform adaptive methods on the test error. By epoch 100, the performance of SGD and HB surpass all adaptive methods on both train and test. Among all adaptive methods, AdaGrad’s rate of improvement flatlines the earliest. We also found that by increasing the step size, we could drive the performance of the adaptive methods down in the first 50 or so epochs, but the aggressive step size made the flatlining behavior worse, and no step decay scheme could fix the behavior. 4.3 Character-Level Language Modeling Using the torch-rnn library, we train a character-level language model on the text of the novel War and Peace, running for a fixed budget of 200 epochs. Our results are shown in Figures 2(a) and 2(b). Under the fixed-decay scheme, the best configuration for all algorithms except AdaGrad was to decay relatively late with regards to the total number of epochs, either 60 or 80% through the total number of epochs and by a large amount, dividing the step size by 10. The dev-decay scheme paralleled (within the same standard deviation) the results of the exhaustive search over the decay frequency and amount; we report the curves from the fixed policy. Overall, SGD achieved the lowest test loss at 1.212 ± 0.001. AdaGrad has fast initial progress, but flatlines. The adaptive methods appear more sensitive to the initialization scheme than non-adaptive methods, displaying a higher variance on both train and test. Surprisingly, RMSProp closely trails SGD on test loss, confirming that it is not impossible for adaptive methods to find solutions that generalize well. 
We note that there are step configurations for RMSProp that drive the training loss below that of SGD, but these configurations cause erratic behavior on test, driving the test error of RMSProp above that of Adam.

4.4 Constituency Parsing

A constituency parser is used to predict the hierarchical structure of a sentence, breaking it down into nested clause-level, phrase-level, and word-level units. We carry out experiments using two state-of-the-art parsers: the stand-alone discriminative parser of Cross and Huang [2], and the generative reranking parser of Choe and Charniak [1]. In both cases, we use the dev-decay scheme with δ = 0.9 for learning rate decay.

Discriminative Model. Cross and Huang [2] develop a transition-based framework that reduces constituency parsing to a sequence prediction problem, giving a one-to-one correspondence between parse trees and sequences of structural and labeling actions. Using their code with the default settings, we trained for 50 epochs on the Penn Treebank [11], comparing labeled F1 scores on the training and development data over time. RMSProp was not implemented in the used version of DyNet, and we omit it from our experiments. Results are shown in Figures 2(c) and 2(d). We find that SGD obtained the best overall performance on the development set, followed closely by HB and Adam, with AdaGrad trailing far behind. The default configuration of Adam without learning rate decay actually achieved the best overall training performance by the end of the run, but was notably worse than tuned Adam on the development set. Interestingly, Adam achieved its best development F1 of 91.11 quite early, after just 6 epochs, whereas SGD took 18 epochs to reach this value and didn't reach its best F1 of 91.24 until epoch 31. On the other hand, Adam continued to improve on the training set well after its best development performance was obtained, while the peaks for SGD were more closely aligned.

Generative Model.
Choe and Charniak [1] show that constituency parsing can be cast as a language modeling problem, with trees being represented by their depth-first traversals. This formulation requires a separate base system to produce candidate parse trees, which are then rescored by the generative model. Using an adapted version of their code base,3 we retrained their model for 100 epochs on the Penn Treebank. However, to reduce computational costs, we made two minor changes: (a) we used a smaller LSTM hidden dimension of 500 instead of 1500, finding that performance decreased only slightly; and (b) we accordingly lowered the dropout ratio from 0.7 to 0.5. Since they demonstrated a high correlation between perplexity (the exponential of the average loss) and labeled F1 on the development set, we explored the relation between training and development perplexity to avoid any conflation with the performance of a base parser. Our results are shown in Figures 2(e) and 2(f). On development set performance, SGD and HB obtained the best perplexities, with SGD slightly ahead. Despite having one of the best performance curves on the training dataset, Adam achieves the worst development perplexities. 5 Conclusion Despite the fact that our experimental evidence demonstrates that adaptive methods are not advantageous for machine learning, the Adam algorithm remains incredibly popular. We are not sure exactly as to why, but hope that our step-size tuning suggestions make it easier for practitioners to use standard stochastic gradient methods in their research. In our conversations with other researchers, we have surmised that adaptive gradient methods are particularly popular for training GANs [18, 5] and Q-learning with function approximation [13, 9]. Both of these applications stand out because they are not solving optimization problems. It is possible that the dynamics of Adam are accidentally well matched to these sorts of optimization-free iterative search procedures. 
It is also possible that carefully tuned stochastic gradient methods may work as well or better in both of these applications. It is an exciting direction of future work to determine which of these possibilities is true and to understand better why.

³While the code of Choe and Charniak treats the entire corpus as a single long example, relying on the network to reset itself upon encountering an end-of-sentence token, we use the more conventional approach of resetting the network for each example. This reduces training efficiency slightly when batches contain examples of different lengths, but removes a potential confounding factor from our experiments.

Acknowledgements

The authors would like to thank Pieter Abbeel, Moritz Hardt, Tomer Koren, Sergey Levine, Henry Milner, Yoram Singer, and Shivaram Venkataraman for many helpful comments and suggestions. RR is generously supported by DOE award AC02-05CH11231. MS and AW are supported by NSF Graduate Research Fellowships. NS is partially supported by NSF-IIS-13-02662 and NSF-IIS-15-46500, an Inter ICRI-RI award and a Google Faculty Award. BR is generously supported by NSF award CCF-1359814, ONR awards N00014-14-1-0024 and N00014-17-1-2191, the DARPA Fundamental Limits of Learning (Fun LoL) Program, a Sloan Research Fellowship, and a Google Faculty Award.

[Figure 2: Performance curves on the training data (left) and the development/test data (right) for three experiments on natural language tasks: (a) War and Peace (training set); (b) War and Peace (test set); (c) discriminative parsing (training set); (d) discriminative parsing (development set); (e) generative parsing (training set); (f) generative parsing (development set). The annotations indicate where the best performance is attained for each method. The shading represents one standard deviation computed across five runs from random initial starting points.]

References

[1] Do Kook Choe and Eugene Charniak. Parsing as language modeling.
In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2331–2336. The Association for Computational Linguistics, 2016.

[2] James Cross and Liang Huang. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pages 1–11. The Association for Computational Linguistics, 2016.

[3] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.

[4] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.

[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. arXiv:1611.07004, 2016.

[6] Andrej Karpathy. A peek at trends in machine learning. https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106. Accessed: 2017-05-17.

[7] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In The International Conference on Learning Representations (ICLR), 2017.

[8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR), 2015.

[9] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.

[10] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning.
arXiv:1703.10622, 2017.

[11] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

[12] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.

[13] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

[14] Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Neural Information Processing Systems (NIPS), 2015.

[15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations (ICLR), 2015.

[16] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. arXiv:1702.03849, 2017.

[17] Benjamin Recht, Moritz Hardt, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2016.

[18] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The International Conference on Machine Learning (ICML), 2016.

[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the International Conference on Machine Learning (ICML), 2013.

[20] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna.
Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

[21] T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

[22] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.

[23] Sergey Zagoruyko. Torch blog. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
Mean Field Residual Networks: On the Edge of Chaos

Greg Yang* (Microsoft Research AI, gregyang@microsoft.com)
Samuel S. Schoenholz (Google Brain, schsam@google.com)

Abstract

We study randomly initialized residual networks using mean field theory and the theory of difference equations. Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on the average when propagating inputs forward or gradients backward. The exponential forward dynamics causes rapid collapsing of the input space geometry, while the exponential backward dynamics causes drastic vanishing or exploding gradients. We show, in contrast, that by adding skip connections, the network will, depending on the nonlinearity, adopt subexponential forward and backward dynamics, and in many cases in fact polynomial. The exponents of these polynomials are obtained through analytic methods and proved and verified empirically to be correct. In terms of the "edge of chaos" hypothesis, these subexponential and polynomial laws allow residual networks to "hover over the boundary between stability and chaos," thus preserving the geometry of the input space and the gradient information flow. In our experiments, for each activation function we study here, we initialize residual networks with different hyperparameters and train them on MNIST. Remarkably, our initialization time theory can accurately predict test time performance of these networks, by tracking either the expected amount of gradient explosion or the expected squared distance between the images of two input vectors. Importantly, we show, theoretically as well as empirically, that common initializations such as the Xavier or the He schemes are not optimal for residual networks, because the optimal initialization variances depend on the depth.
Finally, we have made mathematical contributions by deriving several new identities for the kernels of powers of ReLU functions by relating them to the zeroth Bessel function of the second kind.

1 Introduction

Previous works [9, 3, 11] have shown that randomly initialized neural networks exhibit a spectrum of behavior with depth, from stable to chaotic, which depends on the variance of the initializations: the cosine distance of two input vectors converges exponentially fast with depth to a fixed point in [0, 1]; if this fixed point is 1, then the behavior is stable; if this fixed point is 0, then the behavior is chaotic. It has been argued in many prior works [1, 9] that effective computation can only be supported by a dynamical behavior that is on the edge of chaos. Too much stability prevents the neural network from telling apart two different inputs. While some chaotic behavior can increase the expressivity of a network, too much chaos makes the neural network think two similar inputs are very different. At the same time, the same initialization variances also control how far gradient information can be propagated through the network; the networks with chaotic forward dynamics will tend to suffer from exploding gradients, while networks with stable forward dynamics will tend to suffer from vanishing gradients.

*Work done while at Harvard University.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

These works have focused on vanilla (fully connected) feedforward networks. Here we consider residual networks [6, 7] (with fully-connected layers and without batchnorm), which are a family of recently proposed neural network architectures that has achieved state-of-the-art performance on image recognition tasks, beating all other approaches by a large margin.
The main innovation of this family of architectures is the addition of a passthrough (identity) connection from the previous layer to the next, such that the usual nonlinearity computes the "residual" between the next-layer activation and the previous-layer activation.

In this work, we seek to characterize randomly initialized residual networks. One of our main results is that random residual networks for many nonlinearities such as tanh live on the edge of chaos, in that the cosine distance of two input vectors will converge to a fixed point at a polynomial rate, rather than an exponential rate, as with vanilla tanh networks. Thus a typical residual network will slowly cross the stable-chaotic boundary with depth, hovering around this boundary for many layers. In addition, for most of the nonlinearities considered here, the mean field estimate of the gradient grows subexponentially with depth. In fact, for α-ReLU, the αth power of ReLU, with α < 1, the gradient grows only polynomially. These theoretical results provide some theoretical justification for why residual networks work so well in practice.

In our experiments, we are also able to predict surprisingly well the relative performances of trained residual networks based only on their initialization hyperparameters, in a variety of settings. In particular, we find that the quality of initialization for tanh resnets is determined by trainability (how much gradient explosion on average) while that for (α-)ReLU resnets is determined by expressivity (how far two different input vectors can be pulled apart) (see Section 6). To the best of our knowledge, this is the first time that a quantity other than gradient explosion/vanishing has been found to control the quality of initialization.
We establish theoretically and empirically that the best initialization variances for residual networks depend on the depth of the network (contrary to the feedforward case [11]), so that common initialization schemes like Xavier [4] or He [5] cannot be optimal. In fact, even the rationale of He initialization is incorrect for ReLU residual networks because it tries to control gradient dynamics rather than expressivity. However, we want to emphasize that we study a simplified model of residual networks in this work, with no batchnorm or convolutional layers, so that these results are not necessarily indicative of the MSRA residual network used in practice [6]. In the body of this paper, we give an account of the general intuition and/or proof strategy when appropriate for our theoretical results, but we relegate all formal statements and proofs to the appendix.

2 Background

Consider a vanilla feedforward neural network of L layers, with each layer l having N^(l) neurons; here layer 0 is the input layer. For ease of presentation we assume all hidden layer widths are the same, N^(l) = N for all l > 0. Let x^(0) = (x^(0)_1, . . . , x^(0)_{N^(0)}) be the input vector to the network, and let x^(l) for l > 0 be the activation of layer l. Then a neural network is given by the equations

x^(l)_i = φ(h^(l)_i),    h^(l)_i = Σ_{j=1}^N w^(l)_{ij} x^(l−1)_j + b^(l)_i,

where (i) h^(l) is the pre-activation at layer l, (ii) w^(l) is the weight matrix, (iii) b^(l) is the bias vector, and (iv) φ is a nonlinearity, for example tanh or ReLU, which is applied coordinatewise to its input. To lighten up notation, we suppress the explicit layer numbers l and write

x_i = φ(h_i),    h_i = Σ_j w_{ij} x_j + b_i,

where a plain symbol denotes a layer-l quantity and decorated symbols denote the corresponding quantities at layers l−1 and l+1 (so the x_j appearing in the recurrence for h_i is the layer-(l−1) activation).

A series of papers [9, 10, 11] investigated the "average behavior" of random neural networks sampled via w^(l)_{ij} ~ N(0, σ_w²/N), b^(l)_i ~ N(0, σ_b²), for fixed parameters σ_w and σ_b, independent of l.
Consider the expectation of (1/N) Σ_{i=1}^N x_i², the normalized squared length of x, over the sampling of w and b. Poole et al. [9] showed that this quantity converges to a fixed point exponentially fast for sigmoid nonlinearities. Now suppose we propagate two different vectors x^(0) and x^(0)′ through the network. Poole et al. [9] also showed that the expectation of the normalized dot product (1/N) Σ_{i=1}^N x_i x_i′ converges exponentially fast to a fixed point. The ratio between the normalized dot product and the normalized squared length is the cosine distance between x and x′. Thus these two exponential convergence results show that the cosine distance converges exponentially fast to a fixed point as well. Intuitively, this means that a vanilla feedforward network "forgets" the geometry of the input space "very quickly," after only a few layers.

In addition, Schoenholz et al. [11], under certain independence assumptions, showed that the expected normalized squared norm of the gradient also vanishes or explodes in an exponential fashion with depth, with the "half-life" controlled by σ_w and σ_b. They verified that this theoretical "half-life" correlates in practice with the maximal number of layers that are admissible to good performance. At the same time, Daniely et al. [3] published work of similar nature, but phrased in the language of reproducing kernel Hilbert spaces, and provided high probability estimates that are meaningful for the case when the width N is finite and the depth is logarithmic in N. However, they essentially fixed the variance parameters σ_•, and furthermore, their framework (for example the notion of a "skeleton") does not immediately generalize to the residual network case.

In this work, we show that residual networks have very different dynamics from vanilla feedforward networks.
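The exponential convergence of the length quantity is easy to reproduce numerically. The sketch below (my own code; σ_w, σ_b, and the starting lengths are arbitrary choices) iterates the standard mean-field length map for a tanh feedforward network, q ↦ σ_w² E[tanh(√q z)²] + σ_b² with z ~ N(0, 1), from two very different initial lengths, using Gauss–Hermite quadrature for the Gaussian expectation; the two trajectories collapse onto the same fixed point within a handful of iterations.

```python
import numpy as np

# Gauss–Hermite nodes/weights: E[f(z)] ≈ (1/√π) Σ w_i f(√2 x_i) for z ~ N(0, 1)
nodes, weights = np.polynomial.hermite.hermgauss(101)

def length_map(q, sw2=2.25, sb2=0.5):
    """One step of the mean-field length recursion for a tanh feedforward net."""
    u = np.sqrt(2.0 * max(q, 0.0)) * nodes
    return sw2 * (weights @ np.tanh(u) ** 2) / np.sqrt(np.pi) + sb2

qa, qb = 0.1, 10.0                  # two very different input lengths
for _ in range(200):
    qa, qb = length_map(qa), length_map(qb)
```

After the loop, both trajectories sit at the same fixed point of the map, illustrating why a deep random feedforward network quickly forgets the length of its input.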
In most cases, the cosine distance convergence rate and the gradient growth rate are subexponential in a residual network, and in most cases, these rates may be polynomial.

3 Preliminaries

Residual networks were first introduced by [6] and later refined by [7], and they are now commonplace among deployed neural systems. The key innovation there is the addition of a shortcut connection from the previous layer to the next. We define the following idealized architectures for ease of analysis. Note that we only consider fully-connected affine layers instead of convolutional layers. A reduced residual network (RRN) has the recurrence

x^(l)_i = φ(h^(l)_i) + x^(l−1)_i,    h^(l)_i = Σ_j w^(l)_{ij} x^(l−1)_j + b^(l)_i.

A (full) residual network (FRN) in addition has an affine connection given by weights v and biases a from the nonlinearity φ(h) to the next layer:

x^(l)_i = Σ_j v^(l)_{ij} φ(h^(l)_j) + x^(l−1)_i + a^(l)_i,    h^(l)_i = Σ_j w^(l)_{ij} x^(l−1)_j + b^(l)_i.

We are interested in the "average behavior" of these networks when the weights and biases w^(l)_{ij}, b^(l)_i, v^(l)_{ij}, and a^(l)_i are sampled i.i.d. from Gaussian distributions with standard deviations resp. σ_w, σ_b, σ_v, and σ_a, independent of l. Here we take the variance of w^(l)_{ij} to be σ_w²/N so that the variance of each h_i is σ_w², assuming each x_j is fixed (similarly for v^(l)_{ij}). Such an initialization scheme is standard in practice. We make several key "physical assumptions" to make theoretical computations tractable:

Axiom 3.1 (Symmetry of activations and gradients). (a) We assume ⟨(h^(l)_i)²⟩ = ⟨(h^(l)_j)²⟩ and ⟨(x^(0)_i)²⟩ = ⟨(x^(0)_j)²⟩ for any i, j, l. (b) We also assume that the gradient ∂E/∂x^(l)_i with respect to the loss function E satisfies ⟨(∂E/∂x^(l)_i)²⟩ = ⟨(∂E/∂x^(l)_j)²⟩ for any i, j, l.

One can see that Axiom 3.1(a) is satisfied if the input x^(0) ∈ {±1}^N, and Axiom 3.1(b) is satisfied if Axiom 3.2 below is true and the gradient at the last layer ∂E/∂x^(L) ∈ {±1}^N. But in general it is justified both empirically and theoretically as an approximation, because (h^(l)_i)² − (h^(l)_j)² stays about constant with l, but (h^(l)_i)² and (h^(l)_j)² grow rather quickly at the same pace with l (as will be seen later in calculations), so that their additive difference becomes negligible; similarly for (x^(l)_i)² and (∂E/∂h^(l)_i)².
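A quick simulation (my own sketch, not the paper's code; the width, depth, and variances are arbitrary choices) contrasts the forward length dynamics of the two architectures: in a random tanh feedforward network the normalized squared length saturates after a few layers, while in a reduced residual network it keeps growing with depth.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, sw, sb = 1000, 50, 1.5, 0.1
x_ff = x_res = rng.choice([-1.0, 1.0], size=N)   # input with p(0) = 1

p_ff, p_res = [], []
for _ in range(L):
    # vanilla feedforward layer: x <- tanh(Wx + b)
    W, b = rng.normal(0, sw / np.sqrt(N), (N, N)), rng.normal(0, sb, N)
    x_ff = np.tanh(W @ x_ff + b)
    # reduced residual (RRN) layer: x <- tanh(Wx + b) + x
    W2, b2 = rng.normal(0, sw / np.sqrt(N), (N, N)), rng.normal(0, sb, N)
    x_res = np.tanh(W2 @ x_res + b2) + x_res
    p_ff.append(np.mean(x_ff ** 2))      # normalized squared length p(l)
    p_res.append(np.mean(x_res ** 2))
```

The feedforward length is pinned below 1 by the bounded tanh and settles at its fixed point, while the residual length grows roughly linearly, since each skip connection adds a bounded increment on top of the previous activation.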
But in general it is justified, both empirically and theoretically, as an approximation: (h_i^(l))² − (h_j^(l))² stays roughly constant with l, while (h_i^(l))² and (h_j^(l))² themselves grow rather quickly at the same pace with l (as will be seen later in calculations), so that their difference becomes negligible in comparison; similarly for (x_i^(l))² and (∂E/∂h_i^(l))².

Axiom 3.2 (Gradient independence). (a) We assume that we use a different set of weights for backpropagation than those used to compute the network outputs, but sampled i.i.d. from the same distributions. (b) For any loss function E, we assume that the gradient at layer l, ∂E/∂x_i^(l), is independent of all activations h_j^(l) and x_j^(l−1) from the previous layer.

Axiom 3.2(a) was first made in [11] for computing the mean field theory of gradients for feedforward tanh networks. This is similar to the practice of feedback alignment [8]. Even though we are the first to explicitly formulate Axiom 3.2(b), it was in fact already applied implicitly in the gradient calculations of [11]. Note that a priori Axiom 3.2(b) is not true: ∂E/∂x_i^(l) depends on φ′(h_k^(l+1)) for every k, each h_k^(l+1) depends on every x_j^(l), and each x_j^(l) in turn depends on h_j^(l) and x_j^(l−1). Nevertheless, in practice both subassumptions hold very well.

Now we define the central quantities studied in this paper. Inevitably, our paper involves a large amount of notation that may be confusing for the first-time reader; we have included a glossary of symbols (Table A.1) to ameliorate this.

Definition 3.3. Fix an input x^(0). Define the length quantities q^(l) := ⟨(h_1^(l))²⟩ and p^(l) := ⟨(x_1^(l))²⟩ for l > 0, and p^(0) = ‖x^(0)‖²/N. Here the expectations ⟨•⟩ are taken over all random initializations of weights and biases for all layers l, as N → ∞ (the large width limit). Note that in our definition, the index 1 does not matter, by Axiom 3.1.

Definition 3.4. Fix two inputs x^(0) and x^(0)′.
We write •′ to denote a quantity • computed with respect to the input x^(0)′. Then define the correlation quantities γ^(l) := ⟨x_1^(l) x_1^(l)′⟩ and λ^(l) := ⟨h_1^(l) h_1^(l)′⟩ for l > 0, and γ^(0) = x^(0) · x^(0)′/N, where the expectations ⟨•⟩ are taken over all random initializations of weights and biases for all layers l, as N → ∞ (the large width limit). Again, the index 1 does not matter, by Axiom 3.1. By metric expressivity we mean

s^(l) := (1/2N) ⟨‖x^(l) − x^(l)′‖²⟩ = (1/2N) (⟨‖x^(l)‖²⟩ + ⟨‖x^(l)′‖²⟩ − 2⟨x^(l) · x^(l)′⟩) = (1/2)(p^(l) + p^(l)′) − γ^(l).

Additionally, define the cosine distance quantities e^(l) := γ^(l)/√(p^(l) p^(l)′) and c^(l) := λ^(l)/√(q^(l) q^(l)′); we will also call e^(l) the angular expressivity. In this paper, for ease of presentation, we assume p^(0) = p^(0)′. Then, as we will see, p^(l) = p^(l)′ and q^(l) = q^(l)′ for all l, and as a result e^(l) = γ^(l)/p^(l) and s^(l) = p^(l) − γ^(l) = (1 − e^(l)) p^(l).

Definition 3.5. Fix an input x^(0) and a gradient vector (∂E/∂x_i^(L))_i of some loss function E with respect to the last layer x^(L). Then define the gradient quantities χ^(l) := ⟨(∂E/∂x_1^(l))²⟩, χ_•^(l) := ⟨(∂E/∂•_1^(l))²⟩ for • = a, b, and χ_•^(l) := ⟨(∂E/∂•_{11}^(l))²⟩ for • = w, v. Here the expectations are taken with Axiom 3.2 in mind, over the random initialization of both forward and backward weights and biases, as N → ∞ (the large width limit). Again, the index 1 or 11 does not matter, by Axiom 3.1.

Asymptotic notation. The expressions f = O(g) ⟺ g = Ω(f) have their typical meanings, and f = Θ(g) iff f = O(g) and g = O(f). We take f(x) = Õ(g(x)) ⟺ g(x) = Ω̃(f(x)) to mean f(x) = O(g(x) log^k x) for some k ∈ ℤ (this is slightly different from the standard usage of Õ), and f = Θ̃(g) ⟺ f = Õ(g) and g = Õ(f). We introduce a new notation: f = Θ̌(g) if f(x) = O(g(x) · x^ε) and f(x) = Ω(g(x) · x^{−ε}) as x → ∞, for every ε > 0. All asymptotic notations are sign-less, i.e., they can indicate either positive or negative quantities, unless stated otherwise.
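To make the definitions above concrete, here is a Monte Carlo sketch (our code, with arbitrary illustrative variances and a finite width standing in for the N → ∞ limit) that estimates p^(l), γ^(l), e^(l), and s^(l) for a tanh reduced residual network by averaging over random initializations:

```python
import numpy as np

def estimate_quantities(depth=20, width=400, runs=5, sigma_w=1.3, sigma_b=0.5, seed=1):
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(width)
    y0 = rng.standard_normal(width)       # second input x^(0)'
    p = np.zeros(depth)
    gam = np.zeros(depth)
    for _ in range(runs):
        x, y = x0.copy(), y0.copy()
        for l in range(depth):
            W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
            b = rng.standard_normal(width) * sigma_b
            x = np.tanh(W @ x + b) + x    # RRN update
            y = np.tanh(W @ y + b) + y
            p[l] += np.mean(x * x) / runs    # estimates p^(l)
            gam[l] += np.mean(x * y) / runs  # estimates gamma^(l)
    e = gam / p                              # angular expressivity (p = p')
    s = p - gam                              # metric expressivity
    return p, gam, e, s

p, gam, e, s = estimate_quantities()
print(round(p[-1], 2), round(e[-1], 2), round(s[-1], 2))
```

Even at this small width, p and s grow steadily with depth, previewing the Θ(l) behavior the paper derives for tanh residual networks.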
4 Overview

The primary reason we may say anything about the average behavior of any of the above quantities is the central limit theorem: every time the activations of the previous layer pass through an affine layer whose weights are sampled i.i.d., the output is a sum of a large number of random variables and thus approximately follows a Gaussian distribution. The mean and variance of this distribution can be computed by keeping track of the means and variances of the activations in the previous layer. In what follows, we use this technique to derive recurrence equations governing p, q, γ, λ, and χ for different architectures and different activation functions. We use these equations to investigate the dynamics of e and s, the key quantities in the forward pass, and the dynamics of χ, the key quantity in the backward pass.

The cosine distance e in some sense measures the angular geometry of two vectors: if e = 1, the vectors are parallel; if e = 0, they are orthogonal. Just as in [9] and [11], we will show that in all of the architectures and activations we consider in this paper, e^(l) converges to a fixed point e* as l → ∞ (see Note 1). Thus, on average, as vectors propagate through the network, the geometry of the original input space, for example linear separability, is “forgotten” by residual networks as well as by vanilla networks. But we will prove and verify experimentally that, while Poole et al. [9] and [11] showed that the convergence rate to e* is exponential in a vanilla network, the convergence rate is only polynomial in residual networks, for tanh and α-ReLU (Defn 5.2) nonlinearities; see Thm B.5, Thm B.11, Thm B.17, and Thm B.18. This slow convergence preserves geometric information in the input space, and allows a typical residual network to “hover over the edge of chaos”: even when the cosine distance e^(l) converges to 0, corresponding to “chaos” (resp.
1, corresponding to “stability”), for the number of layers usually seen in practice e^(l) will reside well away from 0 (resp. 1). Similarly, the quantity s measures the metric geometry of two vectors: the evolution of s^(l) with l tells us the ability of the average network to separate two input points in terms of Euclidean distance. Again, for tanh and α-ReLU (α < 1) nonlinearities, s varies only polynomially with l.

On the other hand, χ^(l) measures the size of the gradient at layer l, and through it we track the dynamics of gradient backpropagation, be it explosion or vanishing. In contrast to vanilla tanh networks, which can experience either of these two phenomena depending on the initialization variances, typical residual networks cannot have vanishing gradients, in the sense that χ^(l) does not vanish as l → ∞; see Thm B.5 and Thm B.12. Furthermore, while vanilla tanh networks exhibit exponentially vanishing or exploding gradients, all of the activation/architecture pairings considered here, except the full residual network with ReLU, have subexponential gradient dynamics. While tanh residual networks (reduced or full) have χ^(0) ≈ exp(Θ(√l)) χ^(l) (Thm B.13), α-ReLU residual networks with α < 1 have χ^(0) ≈ poly(l) χ^(l) (Thm B.20). Instead of ∂E/∂x_i, we may also consider the size of the gradients with respect to the actual trainable parameters. For tanh and α-ReLU with α < 1, these are still subexponential and polynomial, respectively (Thm B.21). On the other hand, while χ^(0) = exp(Θ(l)) χ^(l) for a ReLU resnet, its weight gradients have size independent of layer, within O(1) (Thm B.21)! This is the only instance in this paper of the gradient norm being completely preserved across layers.

The above overviews the theoretical portion of this paper. Through experiments, we discover that we can very accurately predict whether one random initialization will lead to better test-set performance than another, after training, by leveraging the theory we build.
Residual networks with different nonlinearities have different controlling quantities: for resnets with tanh, the optimal initialization is obtained by controlling the gradient explosion χ^(0)/χ^(L), whereas for ReLU and α-ReLU, the optimal initialization is obtained by maximizing s without running into numerical issues (with floating-point computation). See Section 6 for details. Over the course of our investigation of the α-ReLU, we derived several new identities involving the associated kernel functions, first defined in [2], which relate them to the zeroth Bessel functions (Lemmas C.31 to C.34).

5 Theoretical Results

In what follows in the main text, we assume σ_• > 0 for all • = w, v, b, a; in the appendix, the formal statement of each main theorem contains results for the other cases. We are interested in the two major categories of nonlinearities used today: tanh-like and rectified units. We make the following formal definitions as a foundation for further consideration.

Definition 5.1. We say a function φ is tanh-like if φ is antisymmetric (φ(−x) = −φ(x)), |φ(x)| ≤ 1 for all x, φ(x) ≥ 0 for all x ≥ 0, and φ(x) monotonically increases to 1 as x → ∞.

Definition 5.2. Define the α-ReLU by φ_α(x) = x^α if x > 0, and 0 otherwise (see Note 2).

By applying the central limit theorem as described in the last section, we derive a set of recurrences for the different activation/architecture pairs, shown in Table 1 (see the appendix for proofs). They leverage certain integral transforms (see Note 3), defined in Definition 5.3 below.

Table 1: Main Recurrences. Arrows denote the layer-to-layer update; the χ update propagates backward, and φ′ denotes the derivative of φ.

Antisymmetric/RRN (Theorems B.2, B.3, B.5):
  q = σ_w² p + σ_b²
  p ← V_φ(q) + p
  λ = σ_w² γ + σ_b²
  γ ← W_φ(q, λ) + γ
  χ ← (σ_w² V_φ′(q) + 1) χ

Any/FRN (Theorems B.8, B.10, B.12):
  q = σ_w² p + σ_b²
  p ← σ_v² V_φ(q) + σ_a² + p
  λ = σ_w² γ + σ_b²
  γ ← σ_v² W_φ(q, λ) + σ_a² + γ
  χ ← (σ_v² σ_w² V_φ′(q) + 1) χ

Table 2: Summary of Main Dynamics Results. Note that while χ^(l) is exponential for ReLU/FRN, the gradients with respect to the weight parameters have norms (χ_w and χ_v) constant in l (Thm B.21).
Also, the χ^(l) entry for α-ReLU holds for α ∈ (3/4, 1) only.

              Tanh/RRN             Tanh/FRN            ReLU/FRN            α-ReLU/FRN, α < 1
p^(l)         Θ(l), B.2            Θ(l), B.9           exp(Θ(l)), B.16     Θ(l^{1/(1−α)}), B.16
s^(l)         Θ(l), B.4            Θ(l), B.11          exp(Θ(l)), B.17     Θ(l^{1/(1−α)}), B.18
e^(l) − e*    Θ̌(l^{2/π−1}), B.4    poly(l), B.11       Θ(l^{−2}), B.17     poly(l), B.18
χ^(l)         exp(Θ(√l)), B.6      exp(Θ(√l)), B.12    exp(Θ(l)), B.20     Θ(l^{α²/((1−α)(2α−1))}), B.20

Definition 5.3. Define the transforms V and W by V_φ(q) := E[φ(z)² : z ∼ N(0, q)] and W_φ(ρ, ν) := E[φ(z)φ(z′) : (z, z′) ∼ N(0, (ρ ν; ν ρ))], i.e., z and z′ are jointly Gaussian, each with variance ρ and with covariance ν.

These recurrences track the corresponding quantities in practice very well. For example, Fig. 1 compares theory and experiment for the tanh/FRN pair. The agreement is very good for tanh/RRN (not shown, but similar to the case of tanh/FRN with σ_v = 1 and σ_a = 0) and for α-ReLU/FRN as well (see Fig. A.1). As mentioned in previous sections, we seek to characterize the long-term/high-depth behavior of all of the quantities defined in Section 3. To do so, we solve for the asymptotics of the recurrences in Table 1, with φ instantiated as tanh or α-ReLU. Our main dynamics results are summarized in Table 2.

5.1 Tanh

Forward dynamics. When φ = tanh, p^(l) and q^(l) increase as Θ(l) in either RRN or FRN (Thm B.2), as one might expect by observing that V_tanh(q) → 1 as q → ∞, so that, for example in the RRN case, the recurrence p ← V_tanh(q) + p becomes p ← 1 + p. This is confirmed graphically by the black lines of the leftmost chart of Fig. 1. We carefully verify this intuition in the proof in the appendix, and find that in fact p^(l) ∼ l in the RRN case and p^(l) ∼ (σ_v² + σ_a²) l in the FRN case. What about γ^(l)? The middle chart of Fig. 1 shows that over time, e^(l) = γ^(l)/p^(l) contracts toward the center of the interval [0, 1], but from the looks of it, it is not clear whether there is a stable fixed point e* of e or not.
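The Table 1 recurrences are easy to iterate numerically. The sketch below (our own code, not the authors'; the transforms V and W of Definition 5.3 are evaluated by Gauss-Hermite quadrature, and the variances are those of Figure 1) reproduces the roughly linear growth of p, with per-layer increments approaching σ_v² + σ_a², and lets one watch e = γ/p drift toward a limit.

```python
import numpy as np

# Probabilists' Gauss-Hermite nodes/weights for expectations over N(0, 1).
u, w = np.polynomial.hermite_e.hermegauss(80)
w = w / np.sqrt(2 * np.pi)

def V(q, phi=np.tanh):                 # V_phi(q) = E[phi(z)^2], z ~ N(0, q)
    return float(np.sum(w * phi(np.sqrt(q) * u) ** 2))

def W(q, lam, phi=np.tanh):            # W_phi(q, lam) = E[phi(z) phi(z')]
    c = min(lam / q, 1.0)              # correlation of (z, z')
    z1 = np.sqrt(q) * u[:, None]
    z2 = np.sqrt(q) * (c * u[:, None] + np.sqrt(1 - c * c) * u[None, :])
    return float(np.sum(w[:, None] * w[None, :] * phi(z1) * phi(z2)))

sv2, sa2, sw2, sb2 = 1.5, 0.5, 1.69, 0.49   # the variances of Figure 1
p, gam = 1.0, 0.5
ps, es = [], []
for l in range(200):
    q = sw2 * p + sb2
    lam = sw2 * gam + sb2
    p, gam = sv2 * V(q) + sa2 + p, sv2 * W(q, lam) + sa2 + gam
    ps.append(p)
    es.append(gam / p)
# p grows linearly, with slope approaching sv2 + sa2 = 2; e settles.
print(round(ps[-1] - ps[-2], 3), round(es[-1], 3))
```

The initial values p = 1, γ = 0.5 are illustrative assumptions; any valid initial condition shows the same qualitative behavior.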
We prove that, in fact, all trajectories of e not starting at 1 converge to a single fixed point, but only at a polynomial rate, in both the RRN and FRN cases (Thm B.2 and Thm B.10); we can even explicitly compute the fixed point and the rate of convergence. For FRN, there is a unique stable fixed point e* < 1 determined by the equation

e* = (1/(σ_v² + σ_a²)) [σ_v² (2/π) arcsin(e*) + σ_a²],

and |e* − e^(l)| decreases like l^{−δ*}, where

δ* := 1 − (2/π) · (1/√(1 − (e*)²)) · σ_v²/(σ_v² + σ_a²).

Figure 1: Our equations predict the relevant quantities very well in practice. These plots compare prediction and measurement for the full resnet with tanh activation, with σ_v² = 1.5, σ_a² = 0.5, σ_w² = 1.69, σ_b² = 0.49. Left to right: (a) p^(l) and γ^(l) against layer l for 200 layers. (b) e^(l) = γ^(l)/p^(l) against l for 200 layers. Both (a) and (b) trace out curves for different initial conditions. (c) Different gradient quantities against l for 50 layers. From left to right the layer number l decreases, following the direction of backpropagation; notice that the gradient increases in norm as l decreases toward 1. All three figures exhibit smooth curves, which are theoretical estimates, and irregular curves with shades around them, which indicate empirical means and standard deviations (both taken in regular scale, not log scale). (a) and (b) are made with 20 runs of resnets of width 1000; (c) is made with 25 runs of resnets of width 250.

Figure 2: Left to right: (a) Plots of e* and δ* against σ_a/σ_v. (b) In log-log scale: the dashed line is l^{−δ*−1}, and the colored lines are e^(l) − e^(l−1) for different initial conditions e^(0). That they become parallel at about l = 400 verifies that e^(l) − e* = Θ(l^{−δ*}) (see Note 4). (c) In log-log scale: the dashed line is A√l (A given in Thm B.13), and the colored lines are log(•^(1)/•^(l)) for • = χ, χ_b, χ_w.
That they all converge together starting around l = 1000 indicates that the approximation in Thm B.13 is very good for large l.

Since e* < 1, s = (1 − e)p = Θ(p) = Θ(l). The case of RRN can be viewed as a special case of the above, setting σ_v² = 1 and σ_a² = 0, which yields e* = 0 and δ* = 1 − 2/π. We observe that both e* and δ* depend only on the ratio ρ := σ_a/σ_v, so in Fig. 2 we graph these two quantities as functions of ρ. Both e* and δ* increase with ρ and asymptotically approach 1 and 1/2, respectively, from below. When ρ = σ_a = 0, we have e* = 0 and δ* = 1 − 2/π. Thus the rate of convergence for tanh/FRN is at its slowest, δ* = 1 − 2/π ≈ 0.36338, when the network asymptotically tends toward the chaotic regime e* = 0, corresponding to a large weight variance and a small bias variance; it is at its fastest, δ* = 1/2, when the network asymptotically tends toward the stable regime e* = 1, corresponding to a large bias variance and a small weight variance. We verify δ* by comparing e^(l) − e^(l−1) to l^{−δ*−1} in log-log scale: if e^(l) − e* = Θ(l^{−δ*}), then e^(l) − e^(l−1) = Θ(l^{−δ*−1}) and should obtain the same slope as l^{−δ*−1} as l → ∞. The middle figure of Fig. 2 ascertains that this is indeed the case, starting around layer 400.

Backward dynamics. Finally, we show that the gradient is approximated by

χ^(m) = exp(A(√l − √m) + O(log l − log m)) χ^(l),    (⋆)

where A = (4/3)√(2/π) σ_w in the RRN case and A = (4/3)√(2/π) σ_v² σ_w/√(σ_v² + σ_a²) in the FRN case (Thm B.6 and Thm B.13). The rightmost plot of Fig. 2 verifies that, for large l ≥ 1000, this is indeed a very good approximation. This demonstrates that the mean field assumption of independent backpropagation weights is practical and convenient even for residual networks.

Note that in the FRN case, the constant A can be decomposed as A = (4/3)√(2/π) · σ_v · σ_w · (1 + σ_a²/σ_v²)^{−1/2}. Consider the ratio ρ := σ_a/σ_v. If ρ ≫ 1, then e* ≈ 1 (Fig.
C.17), meaning that the typical network essentially computes a constant function and is thus unexpressive; at the same time, a large ρ makes A small, ameliorating the gradient explosion problem and making the network more trainable. On the other hand, if ρ ≪ 1, then e* ≈ 0 (Fig. C.17): the typical network can tease out the finest differences between any two input vectors, and a final linear layer on top of such a network should be able to express a wide variety of functions [9]; at the same time, a small ρ increases A, worsening the gradient explosion problem and making the network less trainable. This is the same expressivity-trainability tradeoff discussed in [11].

5.2 α-ReLU

Forward dynamics. As in the tanh case, to deduce the asymptotic behavior of random α-ReLU resnets we need to understand the transforms V_α and W_α. Fortunately, V_α has a closed form, and W_α has been studied before [2]. In particular, if α > −1/2, then V_α(q) = c_α q^α, where c_α is a constant with a closed form given by Lemma B.15. In addition, by [2], we know that W_α(q, cq) = V_α(q) J_α(c) for J_α given in Appendix C.7.1. Fig. C.17 compares J_α for different values of α along with the identity function. Substituting c_α q^α for V_α, we get a difference equation

p̄ − p = σ_v² c_α (σ_w² p + σ_b²)^α + σ_a²

governing the evolution of p (where p̄ denotes the next-layer value). This is reminiscent of the differential equation Ṗ(l) = C P(l)^α, which has solution ∝ l^{1/(1−α)} for α < 1 and ∝ exp(Cl) for α = 1. And indeed, the solutions p^(l) of these difference equations behave asymptotically in exactly this way (Thm B.16). Thus ReLU behaves very explosively compared to α-ReLU with α < 1: in simulations with σ_w² = 1.69 and σ_v² = 1.5, ReLU resnets overflow to infs after around 100 layers, while no such problem arises for any of the other networks we consider. Regardless, α-ReLU for every α pushes e^(l) toward a fixed point e* that depends on α.
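A numerical sketch of the difference equation above (our own code; note that instead of quoting the paper's Lemma B.15, whose statement is not reproduced here, we evaluate c_α ourselves from the half-Gaussian moment E[(z₊)^{2α}] for z ∼ N(0, q), giving c_α = 2^{α−1} Γ(α + 1/2)/√π, which recovers the familiar c_1 = 1/2 for ReLU):

```python
import math

def run_p(alpha, L=20000, sv2=1.5, sa2=0.5, sw2=1.69, sb2=0.49):
    # c_alpha from the half-Gaussian moment (our evaluation, not Lemma B.15).
    c = 2.0 ** (alpha - 1) * math.gamma(alpha + 0.5) / math.sqrt(math.pi)
    p = 1.0
    ps = [p]
    for _ in range(L):
        # p_bar - p = sv2 * c_alpha * (sw2 * p + sb2)^alpha + sa2
        p = sv2 * c * (sw2 * p + sb2) ** alpha + sa2 + p
        ps.append(p)
    return ps

ps = run_p(alpha=0.5)
# Fitted growth exponent between l = L/2 and l = L; the prediction
# p^(l) = Theta(l^{1/(1-alpha)}) gives exponent 2 for alpha = 1/2.
exponent = math.log(ps[-1] / ps[10000]) / math.log(2.0)
print(round(exponent, 3))
```

The fitted exponent lands close to 1/(1 − α) = 2, consistent with the polynomial growth claimed by Thm B.16.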
When α = 1 (the standard ReLU), e^(l) converges to 1 asymptotically as C l^{−2} for an explicit constant C depending only on σ_v and σ_w (Thm B.17), so that s = (1 − e)p = Θ(l^{−2} exp(Θ(l))) = exp(Θ(l)). When φ = φ_α for α < 1, e^(l) converges to the nonunit fixed point e* of J_α at a rate of Θ̌(l^{−µ}), where µ = (1 − J_α′(e*))/(1 − α) is independent of the variances (Thm B.18), so that s = Θ(p). These rates are verified in Fig. A.2.

Backward dynamics. Finally, we have also characterized the rate of gradient growth for any α ∈ (3/4, 1] (see Note 5). In the case α = 1, the dynamics of χ is exponential, the same as that of p: χ^(l−m) = χ^(l) B^m, where B = (1/2)σ_v² σ_w² + 1. For α ∈ (3/4, 1), the dynamics is polynomial, but in general with a different exponent from that of the forward pass: χ^(l−m) = Θ(1) χ^(l) (l/(l − m))^R for R = α²/((1 − α)(2α − 1)), where the constants in Θ(1) do not depend on l or m. This exponent R is minimized over α ∈ [3/4, 1) at α = 3/4, where R = 9/2 (though over α ∈ (1/2, 1) it is minimized at α = 2/3, where R = 4); see Fig. B.8. These exponents are verified empirically in Fig. A.2.

Looking only at χ and the gradients with respect to the biases, it seems that ReLU suffers from a dramatic case of exploding gradients. But in fact, because χ gains a factor of B moving backward while p loses a factor of B, the weight-gradient norm χ_w^(l−m) (and similarly χ_v^(l−m)) is independent of how far, m, the gradient has been propagated (Thm B.21); this is certainly the best gradient preservation among all of the models considered in this paper. Thus, strangely, a random ReLU FRN exhibits both the best (constant, for v and w) and the worst (exponential, for a and b) gradient dynamics. This raises the question: is this a better deal than other α-ReLUs, for which every learnable parameter suffers at most a polynomial blowup with depth in its gradient?
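For the ReLU case just discussed, V_φ(q) = q/2 and V_φ′(q) = 1/2, so both Table 1 recurrences become affine with the same factor B = 1 + σ_v²σ_w²/2, which makes the exponential rate easy to check by direct arithmetic (a sketch with illustrative variances, our own code):

```python
sv2, sw2, sb2, sa2 = 1.0, 1.5, 0.5, 0.5
B = 1 + sv2 * sw2 / 2

p = 1.0
for _ in range(50):
    # ReLU/FRN forward recurrence: p <- sv2*(sw2*p + sb2)/2 + sa2 + p
    #                                 = B*p + (sv2*sb2/2 + sa2)
    p = sv2 * (sw2 * p + sb2) / 2 + sa2 + p

# Solving the affine recursion: p(l)/B^l -> p(0) + c/(B - 1), c = sv2*sb2/2 + sa2,
# confirming p^(l) = Theta(B^l) = exp(Theta(l)).
limit = 1.0 + (sv2 * sb2 / 2 + sa2) / (B - 1)
print(round(p / B ** 50, 6), limit)
```

The same factor B multiplies χ per layer going backward, which is the arithmetic behind the cancellation that keeps χ_w and χ_v depth-independent.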
Our experiments (discussed below) show that α-ReLU is useful to the extent that a smaller α avoids numerical issues with the exponentiating forward and backward dynamics, but the best performance is given by the largest α that avoids them (Fig. 3(c, d)); in fact, it is the metric expressivity s, not gradient explosion, that determines performance (see the α-ReLU experiments).

Figure 3: From left to right, top to bottom: (a) and (b): σ_w², L, and test set accuracy for a grid of tanh reduced (left) and full (right) resnets trained on MNIST. Color indicates performance, with lighter colors indicating higher accuracy on the test set. Other than the values on the axes, we fix σ_b² = σ_a² = 1/2 and σ_v² = 1. The white dotted lines are given by σ_w² L = C, where C = 170 on the left and C = 145 on the right; both dotted lines accurately predict the largest optimal σ_w for each depth L. (c) Varying the ratio σ_a²/σ_v² while fixing σ_v/√(1 + σ_a²/σ_v²), and thus fixing A, the leading constant of log χ^(0)/χ^(L). (d) In log-log scale: the heatmap gives the test accuracies of ReLU FRNs for varying σ_w² and L. The curves give level sets of the log ratios log s^(L)/s^(0) ≈ log p^(L)/p^(0) ≈ log χ^(0)/χ^(L) = L log(1 + σ_v²σ_w²/2). (e) The red heatmap shows the test accuracies of a grid of α-ReLU FRNs with varying α and L as shown, but with all σ_• fixed. The white dashed curve gives a typical contour line of L^R = const, where R = α²/((1−α)(2α−1)). The yellow-to-blue curves form a set of level curves for s^(l) = p^(l) − γ^(l) = const, with yellow curves corresponding to higher levels.

6 Experimental Results

Our experiments show a dichotomy in what matters at initialization: for tanh resnets, the quality of an initialization is determined by the amount of gradient explosion (measured by χ^(0)/χ^(L)); for (α-)ReLU resnets, it is determined by how expressive the random network is (measured by the metric expressivity s^(L)).
We hypothesize that this is because in tanh resnets the gradient dynamics is much more explosive than the expressivity dynamics (exp(Θ(√l)) vs. Θ(l)), whereas for ReLU it is somewhat the opposite (χ_w, χ_v = Θ(1) vs. s = exp(Θ(l))).

Tanh, vary σ_w. We train a grid of reduced and full tanh resnets on MNIST, varying the variance σ_w² and the number of layers (for FRN we fix σ_v = 1). The results are shown in Fig. 3(a, b). In either model, deeper resnets favor much smaller σ_w than shallower ones. The white dotted lines in Fig. 3(a, b) confirm our theory: according to Eq. (⋆), the gradient ratio R = χ^(0)/χ^(L) satisfies log R ≈ A√L ∝ σ_w √L, so a level curve of R takes the form σ_w² L = C. Indeed, the white dotted lines in Fig. 3(a, b) trace out such a level curve, and it remarkably pinpoints the largest σ_w that gives the optimal test set accuracy for each depth L. Why isn't the best initialization given by R = 1 ⟺ σ_w = 0? We believe that when L and/or σ_w is small, gradient dynamics no longer dominates the initialization quality because it has “less room to explode,” and expressivity issues start to dampen the test-time performance.

Tanh, vary σ_a²/σ_v². As suggested in the analysis of Eq. (⋆), the ratio ρ² = σ_a²/σ_v² by itself determines the fixed point e* and its convergence rate, while also contributing to the rate of gradient explosion in tanh FRN. We seek to isolate its effect on the forward dynamics by varying σ_v with ρ such that σ_v/√(1 + ρ²) is kept constant, so that the leading term of the log gradient ratio is kept approximately equal for each L and ρ. Fig. 3(c) shows the test accuracies of a grid of tanh FRNs initialized with such an ensemble of σ_•. What stands out most is that performance is maximized essentially
around a fixed value of L regardless of ρ, which shows that gradient dynamics does indeed determine the initialization quality in tanh resnets. There is also a minor increase in performance with increasing ρ regardless of L; this is counterintuitive, as increasing ρ means “decreasing expressivity.” It is currently not clear what accounts for this effect.

ReLU, vary σ_w. We train a grid of ReLU FRNs on MNIST, varying σ_w² ∈ [0, 1.5] while fixing σ_v² = 1 and σ_a² = σ_b² = 1/2. The resulting test set accuracies are shown in Fig. 3(d). The dark upper region signifies failure of training caused by numerical issues with exploding activation and gradient norms: this corresponds to the region where p^(L), a measure of the mean magnitude of a neuronal activation in layer L, becomes too big. The best test accuracies are given by depths just below where these numerical issues occur. However, if we were to predict that the optimal initialization is the one minimizing χ^(0)/χ^(L) ≥ 1, we would be wrong; in fact it is exactly the opposite. In this case, the dynamics of s^(l), p^(l), and χ^(0)/χ^(l) are approximately the same (all exp(Θ(l)) with the same hidden constants), and optimal performance corresponds to the highest s^(L), p^(L), and χ^(0)/χ^(L) attainable without running into infs.

α-ReLU, vary α. We similarly train a grid of α-ReLU FRNs on MNIST, varying only α and the depth, fixing all σ_•. Fig. 3(e) shows their test accuracies. We see behavior similar to ReLU: when the net is too deep, numerical issues doom the training (black upper-right corner), but the best performance is given by L just below where this problem occurs. In this case, if we were to predict optimality based on minimizing gradient explosion, we would again be wrong; furthermore, the contour plot of χ^(0)/χ^(L) (white dashed line) now gives no information at all about the test set accuracy. In contrast, the contours for s^(l) succeed remarkably well at this prediction (yellow/green lines; see Note 6). By interpolation, this suggests that in the ReLU case it is indeed expressivity, not trainability, that determines performance at test time.
In all of our experiments, we did not find the dynamics of e to be predictive of neural network performance.

7 Conclusion

In this paper, we have extended the mean field formalism developed by [9, 10, 11] to residual networks, a class of models closer to practice than the classical feedforward neural networks investigated earlier. We proved and verified that in both the forward and backward passes, most of the residual networks discussed here do not collapse the geometry of their input space or the gradient information exponentially. We found our theory remarkably predictive of test-time performance despite saying nothing about the dynamics of training. In addition, we find overwhelmingly, through theory and experiments, that an optimal initialization scheme must take into account the depth of the residual network. The reason that the Xavier [4] or He [5] scheme is not the best for residual networks is in fact not that their statistical assumptions are fragile (their assumptions are similar to our mean field theoretic ones, and they hold up in experiments at large width) but rather that their structural assumptions on the network break down badly on residual nets.

Open Problems. Our work has shown that the optimality of initialization schemes can be very unstable with respect to architecture. We hope this work will form a foundation toward a mathematically grounded initialization scheme for state-of-the-art architectures like the original He et al. residual network. To do so, there are still two major components left to study out of the following three: 1. residual/skip connections, 2. batchnorm, 3. convolutional layers. Recurrent architectures and attention mechanisms are also still mostly unexplored in terms of mean field theory. Furthermore, many theoretical questions remain to be resolved; the most important with regard to mean field theory is: why can we make Axioms 3.1 and 3.2 and still obtain accurate predictions?
We hope to make progress on these problems in the future and encourage readers to take part in this effort.

Acknowledgments

Thanks to Jeffrey Ling for early exploration experiments and help with the initial draft. Thanks to Felix Wong for offering his wisdom and experience working in statistical physics.

References

[1] Nils Bertschinger and Thomas Natschläger. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7):1413–1436, July 2004.

[2] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342–350, 2009. URL http://papers.nips.cc/paper/3628-kernel-methods-for-deep-learning.

[3] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances in Neural Information Processing Systems 29, pages 2253–2261. Curran Associates, Inc., 2016.

[4] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In PMLR, pages 249–256, March 2010. URL http://proceedings.mlr.press/v9/glorot10a.html.

[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.

[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.

[8] Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, November 2016. URL https://www.nature.com/articles/ncomms13276.

[9] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems, pages 3360–3368, 2016.

[10] Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. arXiv:1606.05336 [cs, stat], June 2016. URL http://arxiv.org/abs/1606.05336.

[11] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. 2017. URL https://openreview.net/pdf?id=H1W1UN9gg.

Notes

1. Under simplified conditions, Daniely et al. [3] showed that there exists a fixed point for any “well-behaved” activation function in a feedforward net. However, this result does not apply to architectures with residual connections.

2. Note that in practice, to avoid the diverging gradient φ_α′(x) → ∞ as x → 0, one can use a tempered version of the α-ReLU, defined by x ↦ (x + ε)^α − ε^α on x > 0 and 0 otherwise, for some small ε > 0. The conclusions of this paper on the α-ReLU should hold similarly for this tempered version as well.

3. Daniely et al. [3] called the version of W_φ with fixed ρ = 1 the “dual function” of φ.

4. A more natural visualization would graph e^(l) − e* versus l^{−δ*}, but because of floating point precision, e^(l) − e* does not converge to 0 but to a small number close to 0, so the log-log plot would not look as expected.

5. Our derivations actually apply to all α ∈ (1/2, 1], though at α = 1/2 the expected norm of the gradient diverges within our mean field formalism. Moreover, at α ≤ 3/4, the variance of the gradient already diverges (Thm B.19), so we cannot expect the empirical values to agree with our theoretical predictions. In fact, empirically our theoretical predictions seem to form an upper bound on the gradient norms (see Fig. A.1).

6. The contour for p^(l) is similar, but its slopes are slightly off from the heatmap contours.
Non-Convex Finite-Sum Optimization Via SCSG Methods

Lihua Lei, UC Berkeley, lihua.lei@berkeley.edu
Cheng Ju, UC Berkeley, cju@berkeley.edu
Jianbo Chen, UC Berkeley, jianbochen@berkeley.edu
Michael I. Jordan, UC Berkeley, jordan@stat.berkeley.edu

Abstract

We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods [21], for the smooth non-convex finite-sum optimization problem. Assuming the smoothness of each component, the complexity of SCSG to reach a stationary point with E‖∇f(x)‖² ≤ ε is O(min{ε^{−5/3}, ε^{−1} n^{2/3}}), which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.

1 Introduction

We study smooth non-convex finite-sum optimization problems of the form

min_{x ∈ R^d} f(x) = (1/n) Σ_{i=1}^n f_i(x)    (1)

where each component f_i(x) is possibly non-convex with Lipschitz gradients. This generic form captures numerous statistical learning problems, ranging from generalized linear models [22] to deep neural networks [19]. In contrast to the convex case, the non-convex case is comparatively under-studied. Early work focused on the asymptotic performance of algorithms [11, 7, 29], with non-asymptotic complexity bounds emerging more recently [24]. In recent years, complexity results have been derived for both gradient methods [13, 2, 8, 9] and stochastic gradient methods [12, 13, 6, 4, 26, 27, 3]. Unlike in the convex case, in the non-convex case one cannot expect a gradient-based algorithm to converge to the global minimum if only smoothness is assumed.
As a consequence, instead of measuring function-value suboptimality E f(x) − inf_x f(x) as in the convex case, convergence is generally measured in terms of the squared norm of the gradient, i.e., E∥∇f(x)∥². We summarize the best available rates¹ in Table 1. We also list the rates for Polyak-Lojasiewicz (P-L) functions, which will be defined in Section 2. The accuracy for minimizing P-L functions is measured by E f(x) − inf_x f(x). ¹It is also common to use E∥∇f(x)∥ to measure convergence; see, e.g., [2, 8, 9, 3]. Our results can be readily transferred to this alternative measure by using the Cauchy-Schwarz inequality, E∥∇f(x)∥ ≤ √(E∥∇f(x)∥²), although not vice versa. The rates under this alternative can be made comparable to ours by replacing ε by √ε. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Table 1: Computation complexity of gradient methods and stochastic gradient methods for the finite-sum non-convex optimization problem (1). The second and third columns summarize the rates in the smooth and P-L cases respectively. µ is the P-L constant and H* is the variance of a stochastic gradient; these quantities are defined in Section 2. The final column gives additional required assumptions beyond smoothness or the P-L condition. The symbol ∧ denotes a minimum and Õ(·) is the usual Landau big-O notation with logarithmic terms hidden.

                             Smooth                         Polyak-Lojasiewicz                           additional cond.
  Gradient Methods
    GD                       O(n/ε) [24, 13]                Õ(n/µ) [25, 17]
    Best available           Õ(n/ε^(7/8)) [9]                                                            smooth gradient
                             Õ(n/ε^(5/6)) [9]                                                            smooth Hessian
  Stochastic Gradient Methods
    SGD                      O(1/ε²) [24, 26]               O(1/(µ²ε)) [17]                              H* = O(1)
    Best available           O(n + n^(2/3)/ε) [26, 27]      Õ(n + n^(2/3)/µ) [26, 27]
    SCSG                     Õ(1/ε^(5/3) ∧ n^(2/3)/ε)       Õ((1/(µε) ∧ n) + (1/µ)(1/(µε) ∧ n)^(2/3))    H* = O(1)

As in the convex case, gradient methods have better dependence on ε in the non-convex case but worse dependence on n. This is due to the requirement of computing a full gradient.
Comparing the complexity of SGD and the best achievable rate for stochastic gradient methods, achieved via variance-reduction methods, the dependence on ε is significantly improved in the latter case. However, unless ε ≪ n^(−1/2), SGD has similar or even better theoretical complexity than gradient methods and existing variance-reduction methods. In practice, it is often the case that n is very large (10^5 ∼ 10^9) while the target accuracy is moderate (10^(−1) ∼ 10^(−3)). In this case, SGD has a meaningful advantage over other methods, deriving from the fact that it does not require a full gradient computation. This motivates the following research question: Is there an algorithm that
• achieves/beats the theoretical complexity of SGD in the regime of modest target accuracy;
• and achieves/beats the theoretical complexity of existing variance-reduction methods in the regime of high target accuracy?
The question has been partially answered in the convex case by [21] in their formulation of the stochastically controlled stochastic gradient (SCSG) methods. When the target accuracy is low, SCSG has the same O(ε^(−2)) rate as SGD but with a much smaller data-dependent constant factor (which does not even require bounded gradients). When the target accuracy is high, SCSG achieves the same rate as the best non-accelerated methods, O(n/ε). Despite the gap between this and the optimal rate, SCSG is the first known algorithm that provably achieves the desired performance in both regimes. In this paper, we generalize SCSG to the non-convex setting which, surprisingly, provides a completely affirmative answer to the question raised above. By only assuming smoothness of each component, as in almost all other work, SCSG is always O(ε^(−1/3)) faster than SGD and is never worse than recently developed stochastic gradient methods that achieve the best rate. When ε ≫ 1/n, SCSG is at least O((εn)^(2/3)) faster than the best SVRG-type algorithms.
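As a sanity check on these regimes, the Table 1 rates can be compared numerically once constants and logarithmic factors are dropped. The rate functions below are our own simplifications of the table entries, not code from the paper:

```python
def rate_sgd(eps, n):
    """SGD: O(1/eps^2), independent of n."""
    return eps ** -2

def rate_svrg(eps, n):
    """Best variance-reduction rate: O(n + n^(2/3)/eps)."""
    return n + n ** (2 / 3) / eps

def rate_scsg(eps, n):
    """SCSG: O(min(1/eps^(5/3), n^(2/3)/eps))."""
    return min(eps ** (-5 / 3), n ** (2 / 3) / eps)

# For any accuracy eps < 1, SCSG's rate is no worse than either baseline,
# and in the moderate-accuracy, large-n regime it is strictly better.
for n in (10 ** 5, 10 ** 9):
    for eps in (1e-1, 1e-3, 1e-6):
        assert rate_scsg(eps, n) <= rate_sgd(eps, n)
        assert rate_scsg(eps, n) <= rate_svrg(eps, n)
```

Note also that at low accuracy and large n, SGD itself beats the variance-reduction rate, which is the gap SCSG closes.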
Comparing with gradient methods, SCSG has a better convergence rate provided ε ≫ n^(−6/5), which is the common setting in practice. Interestingly, there is a parallel to recent advances in gradient methods: [9] improved the classical O(ε^(−1)) rate of gradient descent to O(ε^(−5/6)); this parallels the improvement of SCSG over SGD from O(ε^(−2)) to O(ε^(−5/3)). Beyond the theoretical advantages of SCSG, we also show that SCSG yields good empirical performance in the training of multi-layer neural networks. It is worth emphasizing that the mechanism by which SCSG achieves acceleration (variance reduction) is qualitatively different from other speed-up techniques, including momentum [28] and adaptive stepsizes [18]. It will be of interest in future work to explore combinations of these various approaches in the training of deep neural networks. The rest of the paper is organized as follows: In Section 2 we discuss our notation and assumptions and we state the basic SCSG algorithm. We present the theoretical convergence analysis in Section 3. Experimental results are presented in Section 4. All technical proofs are relegated to the Appendices. Our code is available at https://github.com/Jianbo-Lab/SCSG. 2 Notation, Assumptions and Algorithm We use ∥·∥ to denote the Euclidean norm and write min{a, b} as a ∧ b for brevity throughout the paper. The notation Õ, which hides logarithmic terms, will only be used to maximize readability in our presentation but will not be used in the formal analysis. We define the computation cost using the IFO framework of [1], which assumes that sampling an index i and accessing the pair (∇f_i(x), f_i(x)) incur a unit of cost. For brevity, we write ∇f_I(x) for (1/|I|) Σ_{i∈I} ∇f_i(x). Note that calculating ∇f_I(x) incurs |I| units of computational cost. A point x is called an ε-accurate solution iff E∥∇f(x)∥² ≤ ε. The minimum IFO complexity to reach an ε-accurate solution is denoted by C_comp(ε).
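To make the IFO cost model concrete, here is a toy bookkeeping sketch (our own illustration with least-squares components; the class and names are not from the paper's code): each access to a component gradient is charged one unit, so a batch gradient over an index set I costs |I| units.

```python
import numpy as np

class IFOOracle:
    """Charges one unit per (f_i, grad f_i) access, as in the IFO framework of [1]."""

    def __init__(self, A, y):
        # Components f_i(x) = 0.5 * (a_i . x - y_i)^2, so grad f_i(x) = a_i (a_i . x - y_i).
        self.A, self.y = A, y
        self.cost = 0

    def batch_grad(self, x, idx):
        idx = np.asarray(idx)
        self.cost += len(idx)                 # |I| units of IFO cost
        r = self.A[idx] @ x - self.y[idx]
        return self.A[idx].T @ r / len(idx)   # grad f_I(x)

rng = np.random.default_rng(0)
oracle = IFOOracle(rng.standard_normal((100, 3)), rng.standard_normal(100))
g = oracle.batch_grad(np.zeros(3), np.arange(10))
assert oracle.cost == 10                      # one unit per component accessed
```

The complexity statements below simply bound the expected total of this counter at the first ε-accurate output.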
Recall that a random variable N has a geometric distribution, N ∼ Geom(γ), if N is supported on the non-negative integers² with P(N = k) = γ^k (1 − γ), ∀k = 0, 1, . . . An elementary calculation shows that E_{N∼Geom(γ)} N = γ/(1 − γ). (2) To formulate our complexity bounds, we define f* = inf_x f(x) and ∆f = f(x̃_0) − f*. Further, we define H* as an upper bound on the variance of the stochastic gradients, i.e., H* = sup_x (1/n) Σ_{i=1}^n ∥∇f_i(x) − ∇f(x)∥². (3) The assumption A1 on the smoothness of the individual functions will be made throughout this paper: A1. f_i is differentiable with ∥∇f_i(x) − ∇f_i(y)∥ ≤ L∥x − y∥ for some L < ∞ and all i ∈ {1, . . . , n}. As a direct consequence of assumption A1, it holds for any x, y ∈ R^d that −(L/2)∥x − y∥² ≤ f_i(x) − f_i(y) − ⟨∇f_i(y), x − y⟩ ≤ (L/2)∥x − y∥². (4) In this paper, we also consider the following Polyak-Lojasiewicz (P-L) condition [25]. It is weaker than strong convexity as well as other popular conditions that appear in the optimization literature; see [17] for an extensive discussion. A2. f(x) satisfies the P-L condition with µ > 0 if ∥∇f(x)∥² ≥ 2µ(f(x) − f(x*)), where x* is the global minimum of f. ²Here we allow N to be zero to facilitate the analysis. 2.1 Generic form of SCSG methods The algorithm we propose in this paper is similar to that of [14] except that (critically) the number of inner-loop steps is a geometric random variable. This is an essential component in the analysis of SCSG and, as we will show below, it is key in allowing us to extend the complexity analysis for SCSG to the non-convex case. Moreover, the algorithm that we present here employs a mini-batch procedure in the inner loop and outputs a random sample instead of an average of the iterates. The pseudo-code is shown in Algorithm 1. Algorithm 1 (Mini-Batch) Stochastically Controlled Stochastic Gradient (SCSG) method for smooth non-convex finite-sum objectives. Inputs: Number of stages T, initial iterate x̃_0, stepsizes (η_j)_{j=1}^T, batch sizes (B_j)_{j=1}^T, mini-batch sizes (b_j)_{j=1}^T.
Procedure:
1: for j = 1, 2, . . . , T do
2:   Uniformly sample a batch I_j ⊂ {1, . . . , n} with |I_j| = B_j;
3:   g_j ← ∇f_{I_j}(x̃_{j−1});
4:   x_0^(j) ← x̃_{j−1};
5:   Generate N_j ∼ Geom(B_j/(B_j + b_j));
6:   for k = 1, 2, . . . , N_j do
7:     Randomly pick Ĩ_{k−1} ⊂ [n] with |Ĩ_{k−1}| = b_j;
8:     ν_{k−1}^(j) ← ∇f_{Ĩ_{k−1}}(x_{k−1}^(j)) − ∇f_{Ĩ_{k−1}}(x_0^(j)) + g_j;
9:     x_k^(j) ← x_{k−1}^(j) − η_j ν_{k−1}^(j);
10:  end for
11:  x̃_j ← x_{N_j}^(j);
12: end for
Output: (Smooth case) Sample x̃*_T from (x̃_j)_{j=1}^T with P(x̃*_T = x̃_j) ∝ η_j B_j/b_j; (P-L case) x̃_T.

As seen in the pseudo-code, the SCSG method consists of multiple epochs. In the j-th epoch, a batch of size B_j is drawn uniformly from the data and a sequence of mini-batch SVRG-type updates is implemented, with the total number of updates randomly generated from a geometric distribution with mean B_j/b_j. Finally, the method outputs a random sample from {x̃_j}_{j=1}^T. This is the standard device, proposed by [23], as opposed to computing argmin_{j≤T} ∥∇f(x̃_j)∥, which requires additional overhead. By (2), the average total cost is Σ_{j=1}^T (B_j + b_j · E N_j) = Σ_{j=1}^T (B_j + b_j · B_j/b_j) = 2 Σ_{j=1}^T B_j. (5) Define T(ε) as the minimum number of epochs such that all outputs afterwards are ε-accurate solutions, i.e., T(ε) = min{T : E∥∇f(x̃*_{T'})∥² ≤ ε for all T' ≥ T}. Recalling the definition of C_comp(ε) from the beginning of this section, the average IFO complexity to reach an ε-accurate solution is E C_comp(ε) ≤ 2 Σ_{j=1}^{T(ε)} B_j. 2.2 Parameter settings The generic form (Algorithm 1) allows for flexibility in both the stepsizes η_j and the batch/mini-batch sizes (B_j, b_j). In order to minimize the amount of tuning needed in practice, we provide several default settings which have theoretical support. The settings and the corresponding complexity results are summarized in Table 2. Note that all settings fix b_j = 1, since this yields the best rate, as will be shown in Section 3.
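For concreteness, Algorithm 1 can be sketched in a few lines of NumPy. This is a minimal illustration under our own conventions, not the authors' released code: `grad_fi(x, idx)` is an assumed oracle returning the averaged gradient ∇f_I(x), and the geometric variable is shifted because NumPy's `geometric` is supported on {1, 2, ...} while the paper's Geom(γ) starts at 0.

```python
import numpy as np

def scsg(grad_fi, n, x0, T, eta, B, b, seed=0):
    """One run of Algorithm 1 with constant stepsize eta, batch size B, and
    mini-batch size b (smooth case: the output is a uniformly random epoch
    iterate, since the sampling weights eta * B / b are constant here)."""
    rng = np.random.default_rng(seed)
    x_tilde = np.asarray(x0, dtype=float).copy()
    iterates = []
    for _ in range(T):
        I = rng.choice(n, size=B, replace=False)      # line 2: outer batch I_j
        g = grad_fi(x_tilde, I)                       # line 3: anchor gradient g_j
        x_anchor, x = x_tilde.copy(), x_tilde.copy()  # line 4
        # line 5: N_j ~ Geom(B/(B+b)) on {0, 1, ...}, so E[N_j] = B/b
        N = rng.geometric(b / (B + b)) - 1
        for _ in range(N):                            # lines 6-10
            Ik = rng.choice(n, size=b, replace=True)  # inner mini-batch
            nu = grad_fi(x, Ik) - grad_fi(x_anchor, Ik) + g
            x = x - eta * nu
        x_tilde = x                                   # line 11
        iterates.append(x_tilde.copy())
    return iterates[rng.integers(T)]                  # output (smooth case)
```

For instance, on the finite sum f_i(x) = ½∥x − a_i∥² one can set `grad_fi = lambda x, idx: x - a[idx].mean(axis=0)`, and the returned iterate drifts toward the mean of the a_i without any full-gradient computation.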
However, in practice a reasonably large mini-batch size b_j might be favorable due to the acceleration that can be achieved by vectorization; see Section 4 for more discussion of this point.

Table 2: Parameter settings analyzed in this paper.
               η_j                 B_j                 b_j    Type of Objectives      E C_comp(ε)
  Version 1    1/(2L B^(2/3))      O(1/ε ∧ n)          1      Smooth                  O(1/ε^(5/3) ∧ n^(2/3)/ε)
  Version 2    1/(2L B_j^(2/3))    ⌈j^(3/2)⌉ ∧ n       1      Smooth                  Õ(1/ε^(5/3) ∧ n^(2/3)/ε)
  Version 3    1/(2L B_j^(2/3))    O(1/(µε) ∧ n)       1      Polyak-Lojasiewicz      Õ((1/(µε) ∧ n) + (1/µ)(1/(µε) ∧ n)^(2/3))

3 Convergence Analysis 3.1 One-epoch analysis First we present the analysis for a single epoch. Given j, we define e_j = ∇f_{I_j}(x̃_{j−1}) − ∇f(x̃_{j−1}). (6) As shown in [14], the gradient update ν_k^(j) is a biased estimate of the gradient ∇f(x_k^(j)) conditioned on the current random index i_k. Specifically, within the j-th epoch, E_{Ĩ_k} ν_k^(j) = ∇f(x_k^(j)) + ∇f_{I_j}(x_0^(j)) − ∇f(x_0^(j)) = ∇f(x_k^(j)) + e_j. This reveals the basic qualitative difference between SVRG and SCSG. Most of the novelty in our analysis lies in dealing with the extra term e_j. Unlike [14], we do not assume ∥x_k^(j) − x*∥ to be bounded, since this is invalid in unconstrained problems, even in convex cases. By careful analysis of primal and dual gaps [cf. 5], we find that the stepsize η_j should scale as (B_j/b_j)^(−2/3). The same phenomenon has also been observed in [26, 27, 4] when b_j = 1 and B_j = n. Theorem 3.1 Let η_j L = γ (B_j/b_j)^(−2/3). Suppose γ ≤ 1/3 and B_j ≥ 8 b_j for all j; then under Assumption A1, E∥∇f(x̃_j)∥² ≤ (5L/γ) · (b_j/B_j)^(1/3) · E[f(x̃_{j−1}) − f(x̃_j)] + (6 I(B_j < n)/B_j) · H*. (7) The proof is presented in Appendix B. It is not surprising that a large mini-batch size increases the theoretical complexity, as in the analysis of mini-batch SGD. For this reason we restrict most of our subsequent analysis to b_j ≡ 1. 3.2 Convergence analysis for smooth non-convex objectives When only assuming smoothness, the output x̃*_T is a random element from (x̃_j)_{j=1}^T. Telescoping (7) over all epochs, we easily obtain the following result.
Theorem 3.2 Under the specifications of Theorem 3.1 and Assumption A1, E∥∇f(x̃*_T)∥² ≤ [ (5L/γ) ∆f + 6 Σ_{j=1}^T b_j^(−1/3) B_j^(−2/3) I(B_j < n) H* ] / [ Σ_{j=1}^T b_j^(−1/3) B_j^(1/3) ]. This theorem covers many existing results. When B_j = n and b_j = 1, Theorem 3.2 implies that E∥∇f(x̃*_T)∥² = O(L∆f/(T n^(1/3))) and hence T(ε) = O(1 + L∆f/(ε n^(1/3))). This yields the same complexity bound E C_comp(ε) = O(n + n^(2/3) L∆f/ε) as SVRG [26]. On the other hand, when b_j = B_j ≡ B for some B < n, Theorem 3.2 implies that E∥∇f(x̃*_T)∥² = O(L∆f/T + H*/B). The second term can be made O(ε) by setting B = O(H*/ε). Under this setting, T(ε) = O(L∆f/ε) and E C_comp(ε) = O(L∆f H*/ε²). This is the same rate as in [26] for SGD. However, both of the above settings are suboptimal since they either set the batch sizes B_j too large or set the mini-batch sizes b_j too large. By Theorem 3.2, SCSG can be regarded as an interpolation between SGD and SVRG. By leveraging these two parameters, SCSG is able to outperform both methods. We start by considering a constant batch/mini-batch size B_j ≡ B, b_j ≡ 1. As in the SGD setting above, B should be at least O(H*/ε). In applications like the training of neural networks, the required accuracy is moderate and hence a small batch size suffices. This is particularly important since the gradient can then be computed without communication overhead, which is the bottleneck of SVRG-type algorithms. As shown in Corollary 3.3 below, the complexity of SCSG beats both SGD and SVRG. Corollary 3.3 (Constant batch sizes) Set b_j ≡ 1, B_j ≡ B = min{12H*/ε, n}, and η_j ≡ η = 1/(6L B^(2/3)). Then it holds that E C_comp(ε) = O( (H*/ε ∧ n) + (L∆f/ε) · (H*/ε ∧ n)^(2/3) ). Assuming that L∆f, H* = O(1), the above bound simplifies to E C_comp(ε) = O( (1/ε ∧ n) + (1/ε) · (1/ε ∧ n)^(2/3) ) = O( 1/ε^(5/3) ∧ n^(2/3)/ε ). When the target accuracy is high, one might consider a sequence of increasing batch sizes. Heuristically, a large batch is wasteful at the early stages when the iterates are inaccurate.
Fixing the batch size to be n, as in SVRG, is obviously suboptimal. Via an involved analysis, we find that B_j ∼ j^(3/2) gives the best complexity among the class of SCSG algorithms. Corollary 3.4 (Time-varying batch sizes) Set b_j ≡ 1, B_j = min{⌈j^(3/2)⌉, n}, and η_j = 1/(6L B_j^(2/3)). Then it holds that E C_comp(ε) = O( min{ (1/ε^(5/3)) ((L∆f)^(5/3) + (H*)^(5/3) log^5(H*/ε)), n^(5/3) + (n^(2/3)/ε)(L∆f + H* log n) } ). (8) The proofs of both Corollary 3.3 and Corollary 3.4 are presented in Appendix C. To simplify the bound (8), we assume that L∆f, H* = O(1) in order to highlight the dependence on ε and n. Then (8) can be simplified to E C_comp(ε) = O( (1/ε^(5/3)) log^5(1/ε) ∧ (n^(5/3) + (n^(2/3) log n)/ε) ) = Õ( 1/ε^(5/3) ∧ (n^(5/3) + n^(2/3)/ε) ) = Õ( 1/ε^(5/3) ∧ n^(2/3)/ε ). The log-factor log^5(1/ε) is purely an artifact of our proof. It can be reduced to log^(3/2+µ)(1/ε) for any µ > 0 by setting B_j ∼ j^(3/2) (log j)^(3/2+µ); see Remark 1 in Appendix C. 3.3 Convergence analysis for P-L objectives When the component f_i(x) satisfies the P-L condition, it is known that the global minimum can be found efficiently by SGD [17] and SVRG-type algorithms [26, 4]. Similarly, SCSG can also achieve this. As in the last subsection, we start with a generic result bounding E[f(x̃_T) − f*] and then consider specific settings of the parameters as well as their complexity bounds. Theorem 3.5 Let λ_j = 5L b_j^(1/3) / (µγ B_j^(1/3) + 5L b_j^(1/3)). Then, under the same settings as Theorem 3.2, E[f(x̃_T) − f*] ≤ λ_T λ_{T−1} · · · λ_1 · ∆f + 6γH* · Σ_{j=1}^T λ_T λ_{T−1} · · · λ_{j+1} · I(B_j < n) / (µγ B_j + 5L b_j^(1/3) B_j^(2/3)). The proofs and additional discussion are presented in Appendix D. Again, Theorem 3.5 covers existing complexity bounds for both SGD and SVRG. In fact, when B_j = b_j ≡ B as in SGD, via some calculation we obtain that E[f(x̃_T) − f*] = O( (L/(µ + L))^T · ∆f + H*/(µB) ). The second term can be made O(ε) by setting B = O(H*/(µε)), in which case T(ε) = O((L/µ) log(∆f/ε)).
As a result, the average cost to reach an ε-accurate solution is E C_comp(ε) = O(LH*/(µ²ε)), which is the same as [17]. On the other hand, when B_j ≡ n and b_j ≡ 1 as in SVRG, Theorem 3.5 implies that E[f(x̃_T) − f*] = O( (L/(µ n^(1/3) + L))^T · ∆f ). This entails that T(ε) = O( (1 + 1/(µ n^(1/3))) log(1/ε) ) and hence E C_comp(ε) = O( (n + n^(2/3)/µ) log(1/ε) ), which is the same as [26]. By leveraging the batch and mini-batch sizes, we obtain a counterpart of Corollary 3.3 below. Corollary 3.6 Set b_j ≡ 1, B_j ≡ B = min{12H*/(µε), n}, and η_j ≡ η = 1/(6L B^(2/3)). Then it holds that E C_comp(ε) = O( ( (H*/(µε) ∧ n) + (1/µ) (H*/(µε) ∧ n)^(2/3) ) log(∆f/ε) ). Recalling the results from Table 1, SCSG is O(1/µ + 1/(µε)^(1/3)) faster than SGD and is never worse than SVRG. When both µ and ε are moderate, the acceleration of SCSG over SVRG is significant. Unlike the smooth case, we do not find any choice of setting that achieves a better rate than Corollary 3.6. 4 Experiments We evaluate SCSG and mini-batch SGD on the MNIST dataset with (1) a three-layer fully-connected neural network with 512 neurons in each layer (FCN for short) and (2) a standard convolutional neural network, LeNet [20] (CNN for short), which has two convolutional layers with 32 and 64 filters of size 5 × 5 respectively, followed by two fully-connected layers with output sizes 1024 and 10. Max pooling is applied after each convolutional layer. The MNIST dataset of handwritten digits has 50,000 training examples and 10,000 test examples. The digits have been size-normalized and centered in a fixed-size image. Each image is 28 pixels by 28 pixels. All experiments were carried out on an Amazon p2.xlarge node with an NVIDIA GK210 GPU, with algorithms implemented in TensorFlow 1.0. Due to memory constraints, sampling a chunk of data is costly. We avoid this by modifying the inner loop: instead of sampling mini-batches from the whole dataset, we split the batch I_j into B_j/b_j mini-batches and run SVRG-type updates sequentially on each.
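The modification just described can be sketched in a couple of lines (our own minimal version, which assumes b_j divides |I_j|):

```python
import numpy as np

def inner_minibatches(I_j, b_j):
    """Partition the outer batch I_j into B_j / b_j mini-batches that the
    inner loop then sweeps sequentially, instead of resampling each
    mini-batch from the full dataset (the practical variant of Section 4)."""
    I_j = np.asarray(I_j)
    return np.split(I_j, len(I_j) // b_j)

chunks = inner_minibatches(np.arange(512), b_j=32)
assert len(chunks) == 16 and all(len(c) == 32 for c in chunks)
```

Each inner step then applies the line-8 update of Algorithm 1 to the next chunk, so no index ever leaves the already-sampled batch.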
Despite the theoretical advantage of setting b_j = 1, we consider practical settings b_j > 1 to take advantage of the acceleration obtained by vectorization. We initialized parameters with TensorFlow's default Xavier uniform initializer. In all experiments below, we show the results corresponding to the best-tuned stepsizes. We consider three algorithms: (1) SGD with a fixed batch size B ∈ {512, 1024}; (2) SCSG with a fixed batch size B ∈ {512, 1024} and a fixed mini-batch size b = 32; (3) SCSG with time-varying batch sizes B_j = ⌈j^(3/2)⌉ ∧ n and b_j = ⌈B_j/32⌉. To be clear, given T epochs, the IFO complexities of the three algorithms are TB, 2TB and 2 Σ_{j=1}^T B_j, respectively. We run each algorithm for 20 passes over the data. It is worth mentioning that the largest batch size in algorithm (3) is ⌈275^1.5⌉ = 4561, which is relatively small compared to the sample size 50000. We plot in Figure 1 the training and the validation loss against the IFO complexity (i.e., the number of passes over the data) for a fair comparison. In all cases, both versions of SCSG outperform SGD, especially in terms of training loss. SCSG with time-varying batch sizes always has the best performance and is more stable than SCSG with a fixed batch size. For the latter, the acceleration is more significant after increasing the batch size to 1024. Both versions of SCSG provide strong evidence that variance reduction can be achieved efficiently without evaluating the full gradient.
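The time-varying schedule used for algorithm (3) is easy to reproduce. The sketch below (our own code, not the paper's) grows B_j = ⌈j^1.5⌉ ∧ n until the average IFO cost 2·ΣB_j from equation (5) covers 20 passes over the n = 50000 training examples, recovering the largest batch size ⌈275^1.5⌉ = 4561 quoted above:

```python
import math

def scsg_batch_schedule(n, passes):
    """Batch sizes B_j = ceil(j^1.5) ∧ n until the average IFO cost
    2 * sum(B_j) (equation (5)) reaches `passes` passes over the data."""
    batches, cost, j = [], 0, 0
    while cost < passes * n:
        j += 1
        B = min(math.ceil(j ** 1.5), n)
        batches.append(B)
        cost += 2 * B
    return batches

schedule = scsg_batch_schedule(n=50_000, passes=20)
assert len(schedule) == 275 and max(schedule) == 4561
```

Note how slowly the schedule grows: after 20 passes the batch is still under a tenth of the data, which is why no full-gradient evaluation is ever needed.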
[Figure 1: Comparison between two versions of SCSG and mini-batch SGD in terms of training loss (top row) and validation loss (bottom row) against the number of IFO calls (#grad / n). The loss is plotted on a log scale. Each column represents an experiment with the setup printed at the top; curves shown are SGD (B = 512 or 1024), SCSG (B = 512 or 1024, b = 32), and SCSG (B_j = j^1.5, B/b = 32), for both CNN and FCN.]

[Figure 2: Comparison between SCSG (B_j = j^1.5, B/b = 16) and mini-batch SGD (B_j = j^1.5) in terms of training loss and validation loss with the CNN, against wall clock time (in seconds). The loss is plotted on a log scale.]

Given 2B IFO calls, SGD implements updates on two fresh batches while SCSG replaces the second batch by a sequence of variance-reduced updates. Thus, Figure 1 shows that the gain due to variance reduction is significant when the batch size is fixed. To further explore this, we compare SCSG with time-varying batch sizes to SGD with the same sequence of batch sizes. The results corresponding to the best-tuned constant stepsizes are plotted in Figure 3a. It is clear that the benefit from variance reduction is more significant when using time-varying batch sizes.
We also compare the performance of SGD with that of SCSG with time-varying batch sizes against wall clock time, when both algorithms are implemented in TensorFlow and run on an Amazon p2.xlarge node with an NVIDIA GK210 GPU. Due to the cost of computing the variance-reduction terms in SCSG, each update of SCSG is slower per iteration than an SGD update. However, SCSG makes faster progress than SGD in wall clock time, in terms of both training loss and validation loss. The results are shown in Figure 2.

[Figure 3: Training log-loss against the number of IFO calls, for both CNN and FCN. (a) SCSG and SGD with increasing batch sizes; (b) SCSG with different ratios B/b ∈ {2, 5, 10, 16, 32}.]

Finally, we examine the effect of B_j/b_j, namely the number of mini-batches within an iteration, since it affects the efficiency in practice, where the computation time is not proportional to the batch size. Figure 3b shows the results for SCSG with B_j = ⌈j^(3/2)⌉ ∧ n and ⌈B_j/b_j⌉ ∈ {2, 5, 10, 16, 32}. In general, larger B_j/b_j yields better performance. It would be interesting to explore the tradeoff between computational efficiency and this ratio on different platforms. 5 Conclusion and Discussion We have presented the SCSG method for smooth, non-convex, finite-sum optimization problems. SCSG is the first algorithm that achieves a uniformly better rate than SGD and is never worse than SVRG-type algorithms. When the target accuracy is low, SCSG significantly outperforms the SVRG-type algorithms. Unlike various other variants of SVRG, SCSG is clean in terms of both implementation and analysis. Empirically, SCSG outperforms SGD in the training of multi-layer neural networks.
Although we only consider the finite-sum objective in this paper, it is straightforward to extend SCSG to general stochastic optimization problems where the objective can be written as E_{ξ∼F} f(x; ξ): at the beginning of the j-th epoch a batch of i.i.d. samples (ξ_1, . . . , ξ_{B_j}) is drawn from the distribution F and g_j = (1/B_j) Σ_{i=1}^{B_j} ∇f(x̃_{j−1}; ξ_i) (see line 3 of Algorithm 1); at the k-th step, a fresh sample (ξ̃_1^(k), . . . , ξ̃_{b_j}^(k)) is drawn from the distribution F and ν_{k−1}^(j) = (1/b_j) Σ_{i=1}^{b_j} ∇f(x_{k−1}^(j); ξ̃_i^(k)) − (1/b_j) Σ_{i=1}^{b_j} ∇f(x_0^(j); ξ̃_i^(k)) + g_j (see line 8 of Algorithm 1). Our proof directly carries over to this case, by simply suppressing the term I(B_j < n), and yields the bound Õ(ε^(−5/3)) for smooth non-convex objectives and the bound Õ(µ^(−1)ε^(−1) ∧ µ^(−5/3)ε^(−2/3)) for P-L objectives. These bounds are simply obtained by setting n = ∞ in our convergence analysis. Compared to momentum-based methods [28] and methods with adaptive stepsizes [10, 18], the mechanism whereby SCSG achieves acceleration is qualitatively different: while momentum aims at balancing primal and dual gaps [5], adaptive stepsizes aim at balancing the scale of each coordinate, and variance reduction aims at removing the noise. We believe that an algorithm that combines these three techniques is worthy of further study, especially in the training of deep neural networks, where the target accuracy is modest. Acknowledgments The authors thank Zeyuan Allen-Zhu, Chi Jin, Nilesh Tripuraneni, Yi Xu, Tianbao Yang, Shenyi Zhao and the anonymous reviewers for helpful discussions. References [1] Alekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. ArXiv e-prints abs/1410.0723, 2014. [2] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima for nonconvex optimization in linear time. arXiv preprint arXiv:1611.01146, 2016. [3] Zeyuan Allen-Zhu.
Natasha: Faster stochastic non-convex optimization via strongly non-convex parameter. arXiv preprint arXiv:1702.00763, 2017. [4] Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. ArXiv e-prints abs/1603.05643, 2016. [5] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint arXiv:1407.1537, 2014. [6] Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-nonconvex objectives. ArXiv e-prints abs/1506.01972, 2015. [7] Dimitri P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913–926, 1997. [8] Yair Carmon, John C. Duchi, Oliver Hinder, and Aaron Sidford. Accelerated methods for non-convex optimization. arXiv preprint arXiv:1611.00756, 2016. [9] Yair Carmon, Oliver Hinder, John C. Duchi, and Aaron Sidford. "Convex until proven guilty": Dimension-free acceleration of gradient descent on non-convex functions. arXiv preprint arXiv:1705.02766, 2017. [10] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. [11] Alexei A. Gaivoronski. Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1. Optimization Methods and Software, 4(2):117–134, 1994. [12] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013. [13] Saeed Ghadimi and Guanghui Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1-2):59–99, 2016. [14] Reza Harikandeh, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečný, and Scott Sallinen. Stop wasting my gradients: Practical SVRG.
In Advances in Neural Information Processing Systems, pages 2242–2250, 2015. [15] Matthew D. Hoffman, David M. Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013. [16] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013. [17] Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016. [18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [19] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. [20] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [21] Lihua Lei and Michael I. Jordan. Less than a single pass: Stochastically controlled stochastic gradient method. arXiv preprint arXiv:1609.03261, 2016. [22] Peter McCullagh and John A. Nelder. Generalized Linear Models. CRC Press, 1989. [23] Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009. [24] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Massachusetts, 2004. [25] Boris Teodorovich Polyak. Gradient methods for minimizing functionals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 3(4):643–653, 1963. [26] Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization.
arXiv preprint arXiv:1603.06160, 2016. [27] Sashank J. Reddi, Suvrit Sra, Barnabás Póczos, and Alex Smola. Fast incremental method for nonconvex optimization. arXiv preprint arXiv:1603.06159, 2016. [28] Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139–1147, 2013. [29] Paul Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2):506–531, 1998. [30] Martin J. Wainwright, Michael I. Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008. | 2017 | 339 |
6,829 | Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data Wei-Ning Hsu, Yu Zhang, and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139, USA {wnhsu,yzhang87,glass}@csail.mit.edu Abstract We present a factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations from sequential data without supervision. Specifically, we exploit the multi-scale nature of information in sequential data by formulating it explicitly within a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors on different sets of latent variables. The model is evaluated on two speech corpora to demonstrate, qualitatively, its ability to transform speakers or linguistic content by manipulating different sets of latent variables; and quantitatively, its ability to outperform an i-vector baseline for speaker verification and to reduce the word error rate by as much as 35% in mismatched train/test scenarios for automatic speech recognition tasks. 1 Introduction Unsupervised learning is a powerful methodology that can leverage vast quantities of unannotated data in order to learn useful representations that can be incorporated into subsequent applications in either supervised or unsupervised fashion. One of the principal approaches to unsupervised learning is probabilistic generative modeling. Recently, there has been significant interest in three classes of deep probabilistic generative models: 1) Variational Autoencoders (VAEs) [23, 34, 22], 2) Generative Adversarial Networks (GANs) [11], and 3) auto-regressive models [30, 39]; more recently, there have also been studies combining multiple classes of models [6, 27, 26].
While GANs bypass any inference of latent variables, and auto-regressive models abstain from using latent variables, VAEs jointly learn an inference model and a generative model, allowing them to infer latent variables from observed data. Despite successes with VAEs, understanding the underlying factors that latent variables associate with is a major challenge. Some research focuses on the supervised or semi-supervised setting using VAEs [21, 17]. There is also research attempting to develop weakly supervised or unsupervised methods to learn disentangled representations, such as DC-IGN [25], InfoGAN [1], and β-VAE [13]. There is yet another line of research analyzing the latent variables with labeled data after the model is trained [33, 15]. While there has been much research investigating static data, such as the aforementioned ones, there is relatively little research on learning from sequential data [8, 3, 2, 9, 7, 18, 36]. Moreover, to the best of our knowledge, there has not been any attempt to learn disentangled and interpretable representations without supervision from sequential data. The information encoded in sequential data, such as speech, video, and text, is naturally multi-scaled; in speech for example, information about the channel, speaker, and linguistic content is encoded in the statistics at the session, utterance, and segment levels, respectively. By leveraging this source of constraint, we can learn disentangled and interpretable factors in an unsupervised manner. In this paper, we propose a novel factorized hierarchical variational autoencoder, which learns disentangled and interpretable latent representations from sequential data without supervision by 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
explicitly modeling the multi-scale information with a factorized hierarchical graphical model.

[Figure 1: FHVAE (α = 0) decoding results of three combinations of latent segment variables z1 and latent sequence variables z2 from two utterances in Aurora-4: a clean one (top-left) and a noisy one (bottom-left). FHVAEs learn to encode local attributes, such as linguistic content, into z1, and global attributes, such as noise level, into z2. Therefore, by replacing z2 of a noisy utterance with z2 of a clean utterance, an FHVAE decodes a denoised utterance (middle-right) that preserves the linguistic content. Reconstruction results of the clean and noisy utterances are also shown on the right. Audio samples are available at https://youtu.be/naJZITvCfI4.]

The inference model is designed so that the model can be optimized at the segment level rather than at the sequence level, avoiding the scalability issues that arise when sequences become long. A sequence-to-sequence neural network architecture is applied to better capture temporal relationships. We evaluate the proposed model on two speech datasets. Qualitatively, the model demonstrates an ability to factorize sequence-level and segment-level attributes into different sets of latent variables. Quantitatively, the model achieves 2.38% and 1.34% equal error rates on unsupervised and supervised speaker verification tasks respectively, which outperforms an i-vector baseline. On speech recognition tasks, it reduces the word error rate in mismatched train/test scenarios by up to 35%. The rest of the paper is organized as follows. In Section 2, we introduce our proposed model, and we describe the neural network architecture in Section 3. Experimental results are reported in Section 4. We discuss related work in Section 5, and conclude our work as well as discuss future research plans in Section 6.
We have released the code for the model described in this paper.¹

2 Factorized Hierarchical Variational Autoencoder

Generation of sequential data, such as speech, often involves multiple independent factors operating at different time scales. For instance, the speaker identity affects fundamental frequency (F0) and volume at the sequence level, while phonetic content only affects the spectral contour and the durations of formants at the segment level. This multi-scale behavior results in the fact that some attributes, such as F0 and volume, tend to have a smaller amount of variation within an utterance than between utterances, while other attributes, such as phonetic content, tend to have a similar amount of variation within and between utterances. We refer to the first type of attributes as sequence-level attributes, and the other as segment-level attributes. In this work, we achieve disentanglement and interpretability by encoding the two types of attributes into latent sequence variables and latent segment variables respectively, where the former are regularized by a sequence-dependent prior and the latter by a sequence-independent prior. We now formulate a generative process for speech and propose our Factorized Hierarchical Variational Autoencoder (FHVAE). Consider a dataset $\mathcal{D} = \{X^{(i)}\}_{i=1}^{M}$ consisting of $M$ i.i.d. sequences, where $X^{(i)} = \{x^{(i,n)}\}_{n=1}^{N^{(i)}}$ is a sequence of $N^{(i)}$ observed variables. $N^{(i)}$ is referred to as the number of segments of the $i$-th sequence, and $x^{(i,n)}$ is referred to as the $n$-th segment of the $i$-th sequence.

¹https://github.com/wnhsu/FactorizedHierarchicalVAE

[Figure 2: Graphical illustration of the proposed generative model (a) and inference model (b). Grey nodes denote the observed variables, and white nodes are the hidden variables.]
Note that a “segment” here refers to a variable of smaller temporal scale than the “sequence”; it is in fact a sub-sequence. We will drop the index $i$ whenever it is clear that we are referring to terms associated with a single sequence. We assume that each sequence $X$ is generated by a random process involving the latent variables $Z_1 = \{z_1^{(n)}\}_{n=1}^{N}$, $Z_2 = \{z_2^{(n)}\}_{n=1}^{N}$, and $\mu_2$. The following generation process, illustrated in Figure 2(a), is considered: (1) an s-vector $\mu_2$ is drawn from a prior distribution $p_\theta(\mu_2)$; (2) $N$ i.i.d. latent sequence variables $\{z_2^{(n)}\}_{n=1}^{N}$ and latent segment variables $\{z_1^{(n)}\}_{n=1}^{N}$ are drawn from a sequence-dependent prior distribution $p_\theta(z_2|\mu_2)$ and a sequence-independent prior distribution $p_\theta(z_1)$, respectively; (3) $N$ i.i.d. observed variables $\{x^{(n)}\}_{n=1}^{N}$ are drawn from a conditional distribution $p_\theta(x|z_1, z_2)$. The joint probability for a sequence is formulated in Eq. 1:

$$p_\theta(X, Z_1, Z_2, \mu_2) = p_\theta(\mu_2) \prod_{n=1}^{N} p_\theta(x^{(n)}|z_1^{(n)}, z_2^{(n)})\, p_\theta(z_1^{(n)})\, p_\theta(z_2^{(n)}|\mu_2). \quad (1)$$

Specifically, we formulate each term on the right-hand side as follows:

$$p_\theta(x|z_1, z_2) = \mathcal{N}(x \,|\, f_{\mu_x}(z_1, z_2), \mathrm{diag}(f_{\sigma^2_x}(z_1, z_2)))$$
$$p_\theta(z_1) = \mathcal{N}(z_1 \,|\, 0, \sigma^2_{z_1} I), \qquad p_\theta(z_2|\mu_2) = \mathcal{N}(z_2 \,|\, \mu_2, \sigma^2_{z_2} I), \qquad p_\theta(\mu_2) = \mathcal{N}(\mu_2 \,|\, 0, \sigma^2_{\mu_2} I),$$

where the priors over the s-vectors $\mu_2$ and the latent segment variables $z_1$ are centered isotropic multivariate Gaussian distributions. The prior over the latent sequence variable $z_2$ conditioned on $\mu_2$ is an isotropic multivariate Gaussian centered at $\mu_2$. The conditional distribution of the observed variable $x$ is a multivariate Gaussian with a diagonal covariance matrix, whose mean and diagonal variance are parameterized by neural networks $f_{\mu_x}(\cdot,\cdot)$ and $f_{\sigma^2_x}(\cdot,\cdot)$ with inputs $z_1$ and $z_2$. We use $\theta$ to denote the set of parameters of the generative model.
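The generation process above can be sketched by ancestral sampling. In the following minimal numpy sketch, the decoder networks $f_{\mu_x}$ and $f_{\sigma^2_x}$ are stand-in random linear layers, and all dimensions and variances are toy choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and prior variances (our assumptions).
D_Z1, D_Z2, D_X, N_SEG = 4, 4, 8, 3
SIG_MU2, SIG_Z1, SIG_Z2 = 1.0, 1.0, 0.5

# Stand-ins for the neural networks f_mu_x and f_sigma2_x:
# a single random linear layer each (hypothetical, for illustration).
W_mu = rng.normal(size=(D_Z1 + D_Z2, D_X))
W_logvar = rng.normal(scale=0.1, size=(D_Z1 + D_Z2, D_X))

def sample_sequence(n_segments=N_SEG):
    """Ancestral sampling following the FHVAE generative story."""
    # (1) draw the s-vector mu2 ~ N(0, sigma_mu2^2 I)
    mu2 = rng.normal(0.0, SIG_MU2, size=D_Z2)
    segments = []
    for _ in range(n_segments):
        # (2) z1 ~ N(0, sigma_z1^2 I)   (sequence-independent prior)
        z1 = rng.normal(0.0, SIG_Z1, size=D_Z1)
        #     z2 ~ N(mu2, sigma_z2^2 I) (sequence-dependent prior)
        z2 = rng.normal(mu2, SIG_Z2)
        # (3) x ~ N(f_mu(z1, z2), diag(f_sigma2(z1, z2)))
        h = np.concatenate([z1, z2])
        mean, logvar = h @ W_mu, h @ W_logvar
        x = rng.normal(mean, np.exp(0.5 * logvar))
        segments.append(x)
    return mu2, np.stack(segments)

mu2, X = sample_sequence()
```

Note how all segments of one sequence share a single draw of µ2, which is what ties the z2 variables of a sequence together.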
This generative model is factorized such that the latent sequence variables $z_2$ within a sequence are forced to be close to $\mu_2$, and hence to each other, in Euclidean distance; they are therefore encouraged to encode sequence-level attributes that may have larger variance across sequences but smaller variance within sequences. The constraint on the latent segment variables $z_1$ is imposed globally, which encourages them to encode the residual attributes whose variation is not distinguishable between and within sequences. In the variational autoencoder framework, since exact posterior inference is intractable, an inference model $q_\phi(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \,|\, X^{(i)})$ that approximates the true posterior $p_\theta(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \,|\, X^{(i)})$ is introduced for variational inference [19]. We consider the following inference model, shown in Figure 2(b):

$$q_\phi(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \,|\, X^{(i)}) = q_\phi(\mu_2^{(i)}) \prod_{n=1}^{N^{(i)}} q_\phi(z_1^{(i,n)} \,|\, x^{(i,n)}, z_2^{(i,n)})\, q_\phi(z_2^{(i,n)} \,|\, x^{(i,n)})$$
$$q_\phi(\mu_2^{(i)}) = \mathcal{N}(\mu_2^{(i)} \,|\, g_{\mu_{\mu_2}}(i), \sigma^2_{\tilde\mu_2} I), \qquad q_\phi(z_2|x) = \mathcal{N}(z_2 \,|\, g_{\mu_{z_2}}(x), \mathrm{diag}(g_{\sigma^2_{z_2}}(x)))$$
$$q_\phi(z_1|x, z_2) = \mathcal{N}(z_1 \,|\, g_{\mu_{z_1}}(x, z_2), \mathrm{diag}(g_{\sigma^2_{z_1}}(x, z_2))),$$

where the posteriors over $\mu_2$, $z_1$, and $z_2$ are all multivariate diagonal Gaussian distributions. Note that the mean of the posterior distribution of $\mu_2$ is not directly inferred from $X$; instead, it is regarded as part of the inference model parameters, with one entry per utterance, optimized during training. Therefore, $g_{\mu_{\mu_2}}(\cdot)$ can be seen as a lookup table, and we use $\tilde\mu_2^{(i)} = g_{\mu_{\mu_2}}(i)$ to denote the posterior mean of $\mu_2$ for the $i$-th sequence; we fix the posterior covariance matrix of $\mu_2$ for all sequences. As in the generative model, $g_{\mu_{z_2}}(\cdot)$, $g_{\sigma^2_{z_2}}(\cdot)$, $g_{\mu_{z_1}}(\cdot,\cdot)$, and $g_{\sigma^2_{z_1}}(\cdot,\cdot)$ are also neural networks, whose parameters, together with $g_{\mu_{\mu_2}}(\cdot)$, are denoted collectively by $\phi$.
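Because the approximate posteriors above are diagonal Gaussians and the priors are (conditionally) isotropic Gaussians, the KL terms that appear in the variational bound have a standard closed form. A minimal numpy sketch, with toy values of our own choosing:

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dims."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Toy 2-D example (values are ours, for illustration):
mu_q, var_q = np.array([0.5, -0.5]), np.array([0.25, 0.25])
mu2_tilde = np.zeros(2)          # s-vector posterior mean for this sequence
sigma2_z2 = 0.5 ** 2             # prior variance of z2 around mu2

# The KL term for z2: KL( q(z2|x) || p(z2 | mu2~) )
kl_z2 = kl_diag_gauss(mu_q, var_q, mu2_tilde, np.full(2, sigma2_z2))
assert kl_z2 > 0.0  # KL is nonnegative, zero only when q matches the prior
```

The same helper applies to the KL of z1 against its fixed, sequence-independent prior.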
The variational lower bound on the marginal likelihood of a sequence $X$ for this inference model is derived as follows:

$$\mathcal{L}(\theta, \phi; X) = \sum_{n=1}^{N} \mathcal{L}(\theta, \phi; x^{(n)} \,|\, \tilde\mu_2) + \log p_\theta(\tilde\mu_2) + \mathrm{const}$$

$$\mathcal{L}(\theta, \phi; x^{(n)} \,|\, \tilde\mu_2) = \mathbb{E}_{q_\phi(z_1^{(n)}, z_2^{(n)} | x^{(n)})}\big[\log p_\theta(x^{(n)} \,|\, z_1^{(n)}, z_2^{(n)})\big] - \mathbb{E}_{q_\phi(z_2^{(n)} | x^{(n)})}\big[D_{KL}(q_\phi(z_1^{(n)} \,|\, x^{(n)}, z_2^{(n)}) \,\|\, p_\theta(z_1^{(n)}))\big] - D_{KL}(q_\phi(z_2^{(n)} \,|\, x^{(n)}) \,\|\, p_\theta(z_2^{(n)} \,|\, \tilde\mu_2)).$$

The detailed derivation can be found in Appendix A. Because the approximate posterior of $\mu_2$ does not depend on the sequence $X$, the sequence variational lower bound $\mathcal{L}(\theta, \phi; X)$ decomposes into the sum over segments of the conditional segment variational lower bounds $\mathcal{L}(\theta, \phi; x^{(n)} \,|\, \tilde\mu_2)$, plus the log prior probability of $\tilde\mu_2$ and a constant. Therefore, instead of sampling batches at the sequence level to maximize the sequence variational lower bound, we can sample batches at the segment level to maximize the segment variational lower bound:

$$\mathcal{L}(\theta, \phi; x^{(n)}) = \mathcal{L}(\theta, \phi; x^{(n)} \,|\, \tilde\mu_2) + \frac{1}{N} \log p_\theta(\tilde\mu_2) + \mathrm{const}. \quad (2)$$

This approach provides better scalability when sequences are extremely long, such that computing an entire sequence for a batched update would be too expensive. In this paper we only introduce two scales of attributes; however, one can easily extend this model to more scales by introducing $\mu_k$ for $k = 2, 3, \dots$² that constrain the prior distributions of latent variables at additional scales, such as a session-dependent or dataset-dependent prior.

2.1 Discriminative Objective

The idea of having a sequence-specific prior for each sequence is to encourage the model to encode the sequence-level attributes and the segment-level attributes into different sets of latent variables. However, when $\mu_2 = 0$ for all sequences, the prior probability of the s-vectors is maximized, and the KL divergence of the inferred posterior of $z_2$ is measured from the same conditional prior for all sequences.
This would result in trivial s-vectors $\mu_2$, and therefore $z_1$ and $z_2$ would not be factorized to encode segment and sequence attributes respectively. To encourage $z_2$ to encode sequence-level attributes, we use $z_2^{(i,n)}$, which is inferred from $x^{(i,n)}$, to infer the sequence index $i$ of $x^{(i,n)}$. We formulate the discriminative objective as:

$$\log p(i \,|\, z_2^{(i,n)}) = \log p(z_2^{(i,n)} \,|\, i) - \log \sum_{j=1}^{M} p(z_2^{(i,n)} \,|\, j) \quad (p(i) \text{ is assumed uniform})$$
$$:= \log p_\theta(z_2^{(i,n)} \,|\, \tilde\mu_2^{(i)}) - \log \sum_{j=1}^{M} p_\theta(z_2^{(i,n)} \,|\, \tilde\mu_2^{(j)}).$$

Combining the discriminative objective, weighted by a parameter $\alpha$, with the segment variational lower bound, the objective function to maximize becomes:

$$\mathcal{L}^{dis}(\theta, \phi; x^{(i,n)}) = \mathcal{L}(\theta, \phi; x^{(i,n)}) + \alpha \log p(i \,|\, z_2^{(i,n)}), \quad (3)$$

which we refer to as the discriminative segment variational lower bound.

²The index starts from 2 because we do not introduce the hierarchy for $z_1$.

2.2 Inferring S-Vectors During Testing

During testing, we may want to use the s-vector $\mu_2$ of an unseen sequence $\tilde X = \{\tilde x^{(n)}\}_{n=1}^{\tilde N}$ as the sequence-level attribute representation for tasks such as speaker verification. Since the exact maximum a posteriori estimation of $\mu_2$ is intractable, we approximate the estimation using the conditional segment variational lower bound as follows:

$$\mu_2^* = \operatorname*{argmax}_{\mu_2} \log p_\theta(\mu_2 \,|\, \tilde X) = \operatorname*{argmax}_{\mu_2} \log p_\theta(\tilde X, \mu_2) = \operatorname*{argmax}_{\mu_2} \sum_{n=1}^{\tilde N} \log p_\theta(\tilde x^{(n)} \,|\, \mu_2) + \log p_\theta(\mu_2) \approx \operatorname*{argmax}_{\mu_2} \sum_{n=1}^{\tilde N} \mathcal{L}(\theta, \phi; \tilde x^{(n)} \,|\, \mu_2) + \log p_\theta(\mu_2). \quad (4)$$

The closed-form solution for $\mu_2^*$ can be derived by differentiating Eq. 4 with respect to $\mu_2$ (see Appendix B):

$$\mu_2^* = \frac{\sum_{n=1}^{\tilde N} g_{\mu_{z_2}}(\tilde x^{(n)})}{\tilde N + \sigma^2_{z_2} / \sigma^2_{\mu_2}}. \quad (5)$$

3 Sequence-to-Sequence Autoencoder Model Architecture

In this section, we introduce the detailed neural network architecture of our proposed FHVAE. Let a segment $x = x_{1:T}$ be a sub-sequence of $X$ that contains $T$ time steps, and let $x_t$ denote the $t$-th time step of $x$.
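Carving segments $x_{1:T}$ out of an utterance's frame sequence can be sketched as follows; the paper's experiments later use 200 ms segments (T = 20 frames at a 10 ms frame rate), while the non-overlapping hop is our simplifying assumption:

```python
import numpy as np

def make_segments(features, seg_len=20, hop=20):
    """Slice an utterance of frame-level features (T_total, d) into
    fixed-length segments of seg_len frames. With a 10 ms frame rate,
    seg_len=20 corresponds to 200 ms segments; trailing frames that do
    not fill a segment are dropped."""
    T_total = features.shape[0]
    starts = range(0, T_total - seg_len + 1, hop)
    return np.stack([features[s:s + seg_len] for s in starts])

utt = np.random.default_rng(0).normal(size=(203, 80))  # ~2 s of 80-dim features
segs = make_segments(utt)     # shape (10, 20, 80); last 3 frames dropped
```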
We use recurrent encoder networks that capture the temporal relationships among time steps and produce a summarizing fixed-dimension vector after consuming an entire sub-sequence. Likewise, we adopt a recurrent decoder that generates a frame at each step conditioned on the latent variables $z_1$ and $z_2$. The complete network can be seen as a stochastic sequence-to-sequence autoencoder that stochastically encodes $x_{1:T}$ into $z_1$ and $z_2$, and stochastically decodes from them back to $x_{1:T}$.

[Figure 3: Sequence-to-sequence factorized hierarchical variational autoencoder. Dashed lines indicate the sampling process using the reparameterization trick [23]. The encoders for z1 and z2 are pink and amber, respectively, while the decoder for x is blue. Darker colors denote the recurrent neural networks, while lighter colors denote the fully-connected layers predicting the mean and log variance.]

Figure 3 shows our proposed Seq2Seq-FHVAE architecture.³ Here we show the detailed formulation:

$$(h_{z_2,t}, c_{z_2,t}) = \mathrm{LSTM}(x_{t-1}, h_{z_2,t-1}, c_{z_2,t-1}; \phi_{\mathrm{LSTM},z_2})$$
$$q_\phi(z_2 \,|\, x_{1:T}) = \mathcal{N}(z_2 \,|\, \mathrm{MLP}(h_{z_2,T}; \phi_{\mathrm{MLP}\mu,z_2}), \mathrm{diag}(\exp(\mathrm{MLP}(h_{z_2,T}; \phi_{\mathrm{MLP}\sigma^2,z_2}))))$$
$$(h_{z_1,t}, c_{z_1,t}) = \mathrm{LSTM}([x_{t-1}; z_2], h_{z_1,t-1}, c_{z_1,t-1}; \phi_{z_1})$$
$$q_\phi(z_1 \,|\, x_{1:T}, z_2) = \mathcal{N}(z_1 \,|\, \mathrm{MLP}(h_{z_1,T}; \phi_{\mathrm{MLP}\mu,z_1}), \mathrm{diag}(\exp(\mathrm{MLP}(h_{z_1,T}; \phi_{\mathrm{MLP}\sigma^2,z_1}))))$$
$$(h_{x,t}, c_{x,t}) = \mathrm{LSTM}([z_1; z_2], h_{x,t-1}, c_{x,t-1}; \phi_{x})$$
$$p_\theta(x_t \,|\, z_1, z_2) = \mathcal{N}(x_t \,|\, \mathrm{MLP}(h_{x,t}; \phi_{\mathrm{MLP}\mu,x}), \mathrm{diag}(\exp(\mathrm{MLP}(h_{x,t}; \phi_{\mathrm{MLP}\sigma^2,x})))),$$

where LSTM refers to a long short-term memory recurrent neural network [14], MLP refers to a multi-layer perceptron, and $\phi_*$ are the related weight matrices. None of the neural network parameters are shared. We refer to this model as Seq2Seq-FHVAE. A log-likelihood and qualitative comparison with alternative architectures can be found in Appendix D.

³Best viewed in color.
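To make the encoder concrete, here is a minimal numpy LSTM that consumes a segment and emits the mean and log variance of q(z2 | x_{1:T}). The sizes, initialization, and single linear output heads are our simplifications of the architecture described above, not the paper's exact configuration:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class TinyLSTMEncoder:
    """Sketch of the z2 encoder: an LSTM consumes x_{1:T}, and two
    linear heads map the last hidden state h_T to the posterior mean
    and log variance of q(z2 | x_{1:T})."""
    def __init__(self, d_in, d_h, d_z, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_h)
        self.W = rng.uniform(-s, s, size=(d_in + d_h, 4 * d_h))  # gate weights
        self.b = np.zeros(4 * d_h)
        self.W_mu = rng.uniform(-s, s, size=(d_h, d_z))
        self.W_lv = rng.uniform(-s, s, size=(d_h, d_z))
        self.d_h = d_h

    def __call__(self, x):                      # x: (T, d_in)
        h = np.zeros(self.d_h)
        c = np.zeros(self.d_h)
        for x_t in x:
            gates = np.concatenate([x_t, h]) @ self.W + self.b
            i, f, o, g = np.split(gates, 4)     # input, forget, output, cell
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h @ self.W_mu, h @ self.W_lv     # mean and log variance of z2

enc = TinyLSTMEncoder(d_in=8, d_h=16, d_z=4)
mu_z2, logvar_z2 = enc(np.random.default_rng(1).normal(size=(20, 8)))
```

The z1 encoder differs only in that z2 is concatenated to each input frame, as in the equations above.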
4 Experiments

We use speech, which inherently contains information at multiple scales, such as channel, speaker, and linguistic content, to test our model. Learning to disentangle the mixed information from the surface representation is essential for a wide variety of speech applications: for example, noise-robust speech recognition [41, 38, 37, 16], speaker verification [5], and voice conversion [40, 29, 24]. The following two corpora are used for our experiments: (1) TIMIT [10], which contains broadband 16kHz recordings of phonetically balanced read speech. A total of 6300 utterances (5.4 hours) are presented, with 10 sentences from each of 630 speakers, of whom approximately 70% are male and 30% are female. (2) Aurora-4 [32], a broadband corpus designed for noisy speech recognition tasks, based on the Wall Street Journal corpus (WSJ0) [31]. Two microphone types, CLEAN/CHANNEL, are included, and six noise types are artificially added to both microphone types, which results in four conditions: CLEAN, CHANNEL, NOISY, and CHANNEL+NOISY. Two 14-hour training sets are used, where one is clean and the other is a mix of all four conditions. The same noise types and microphones are used to generate the development and test sets, which both consist of 330 utterances from all four conditions, resulting in 4,620 utterances in total for each set. All speech is represented as a sequence of 80-dimensional Mel-scale filter bank (FBank) features or 200-dimensional log-magnitude spectra (the latter only for audio reconstruction), computed every 10ms. Mel-scale features are a popular auditory approximation for many speech applications [28]. We consider a sample x to be a 200ms sub-sequence, which is on the order of the length of a syllable and implies T = 20 for each x. For the Seq2Seq-FHVAE model, all the LSTM and MLP networks are one-layered, and Adam [20] is used for optimization. More details of the model architecture and training procedure can be found in Appendix C.
4.1 Qualitative Evaluation of the Disentangled Latent Variables

[Figure 4: (left) Examples generated by varying different latent variables. (right) An illustration of harmonics and formants in filter bank images. The green block ‘A’ contains four reconstructed examples. The red block ‘B’ contains ten original sequences on the first row, with the corresponding reconstructed examples on the second row. The entry in the i-th row and j-th column of the blue block ‘C’ is the example reconstructed using the latent segment variable z1 from the i-th row of block ‘A’ and the latent sequence variable z2 from the j-th column of block ‘B’.]

To qualitatively study the factorization of information between the latent segment variable z1 and the latent sequence variable z2, we generate examples x by varying each of them respectively. Figure 4 shows, in block ‘C’, the 40 examples for all combinations of the 4 latent segment variables extracted from block ‘A’ and the 10 latent sequence variables extracted from block ‘B’. The top two examples from block ‘A’ and the five leftmost examples from block ‘B’ are from male speakers, while the rest are from female speakers, who show higher fundamental frequencies and harmonics.⁴

⁴The harmonics correspond to the horizontal dark stripes in the figure; the more widely these stripes are spaced vertically, the higher the fundamental frequency of the speaker.

[Figure 5: FHVAE (α = 0) decoding results of three combinations of latent segment variables z1 and latent sequence variables z2 from one male-speaker utterance (top-left) and one female-speaker utterance (bottom-left) in Aurora-4. By replacing z2 of a male-speaker utterance with z2 of a female-speaker utterance, an FHVAE decodes a voice-converted utterance (middle-right) that preserves the linguistic content. Audio samples are available at https://youtu.be/VMX3IZYWYdg.]
We can observe that along each row in block ‘C’, the linguistic phonetic-level content, which manifests itself in the spectral contour, the temporal positions of formants, and the relative positions between formants, is very similar between elements, while the speaker identity (e.g., the harmonic structure) changes. On the other hand, in each column we see that the speaker identity remains consistent despite the change in linguistic content. The factorization of sequence-level and segment-level attributes by our proposed Seq2Seq-FHVAE is clearly evident. In addition, we show examples of modifying an entire utterance in Figures 1 and 5: denoising is achieved by replacing the latent sequence variable of a noisy utterance with that of a clean utterance, and voice conversion by replacing the latent sequence variable of one speaker with that of another speaker. Details of the operations applied to modify an entire utterance, as well as larger examples for different α values, can be found in Appendix E. We also show additional latent space traversal experiments in Appendix H.

4.2 Quantitative Evaluation of S-Vectors – Speaker Verification

To quantify the performance of our model on disentangling utterance-level attributes from segment-level attributes, we present experiments on a speaker verification task on the TIMIT corpus, evaluating how well the estimated µ2 encodes speaker-level information.⁵ As a sanity check, we modify Eq. 5 to estimate an alternative s-vector based on the latent segment variables z1 as follows: $\mu_1 = \sum_{n=1}^{\tilde N} g_{\mu_{z_1}}(\tilde x^{(n)}) / (\tilde N + \sigma^2_{z_1})$. We use the i-vector method [5] as the baseline, which is the representation used in most state-of-the-art speaker verification systems. I-vectors live in a low-dimensional subspace of the Gaussian mixture model (GMM) mean supervector space, where the GMM is the universal background model (UBM) that models the generative process of speech.
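Extracting the s-vector of a test utterance uses the closed form of Eq. 5: the sum of the inferred z2 posterior means over segments, shrunk toward zero by the ratio of prior variances. A sketch in numpy (the variance values are toy assumptions of ours):

```python
import numpy as np

def estimate_s_vector(z2_means, sigma2_z2=0.25, sigma2_mu2=1.0):
    """Approximate MAP estimate of mu2 for an unseen sequence (Eq. 5).
    z2_means holds the posterior means g_mu_z2(x~) for each segment."""
    z2_means = np.asarray(z2_means)        # shape (N_segments, dim)
    n = z2_means.shape[0]
    return z2_means.sum(axis=0) / (n + sigma2_z2 / sigma2_mu2)

# With many segments the estimate approaches the plain segment average;
# with few segments it is shrunk toward the prior mean 0.
mu2_star = estimate_s_vector(np.ones((10, 4)))   # 10 / (10 + 0.25) per dim
```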
I-vectors, µ1, and µ2 can all be extracted without supervision; when speaker labels are available during training, techniques such as linear discriminant analysis (LDA) can be applied to further improve the linear separability of the representation. For all experiments, we use the fast scoring approach of [4], which uses cosine similarity as the similarity metric, and we report the equal error rate (EER). More details about the experimental settings can be found in Appendix F. We compare different dimensions for both features, as well as different α's in Eq. 3 for training the FHVAE models. The results in Table 1 show that the 16-dimensional s-vectors µ2 outperform the i-vector baselines in both unsupervised (Raw) and supervised (LDA) settings for all α's, as shown in the fourth column; the more discriminatively the FHVAE model is trained (i.e., the larger α is), the better the speaker verification results it achieves.⁵ Moreover, with an appropriately chosen dimension, a 32-dimensional µ2 reaches an even lower EER of 1.34%. On the other hand, the negative results of using µ1 also validate the success in disentangling utterance-level and segment-level attributes.

⁵TIMIT is not a standard corpus for speaker verification, but it is a good corpus for evaluating the utterance-level attributes we learn via this task, because the main attribute that is consistent within an utterance is speaker identity, while in Aurora-4 both the speaker identity and the background noise are consistent within an utterance.
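Cosine-similarity scoring and the equal error rate can be sketched as follows. The threshold-sweep EER computation and the toy score distributions below are our own, for illustration only:

```python
import numpy as np

def cosine_score(a, b):
    """Cosine similarity between two s-vectors (or i-vectors)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def equal_error_rate(target_scores, impostor_scores):
    """EER: the error rate at the threshold where the false-acceptance
    rate (FAR) and false-rejection rate (FRR) cross, found here by a
    simple sweep over candidate thresholds."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        frr = np.mean(target_scores < t)      # targets rejected
        far = np.mean(impostor_scores >= t)   # impostors accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), 0.5 * (far + frr)
    return eer

rng = np.random.default_rng(0)
tgt = rng.normal(0.8, 0.1, 500)   # same-speaker trial scores (toy)
imp = rng.normal(0.2, 0.1, 500)   # different-speaker trial scores (toy)
eer = equal_error_rate(tgt, imp)  # near 0 for well-separated scores
```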
Table 1: Comparison of speaker verification equal error rate (EER) on the TIMIT test set

Features   Dimension   α       Raw       LDA (12 dim)   LDA (24 dim)
i-vector   48          –       10.12%    6.25%          5.95%
i-vector   100         –       9.52%     6.10%          5.50%
i-vector   200         –       9.82%     6.54%          6.10%
µ2         16          0       5.06%     4.02%          –
µ2         16          10^-1   4.91%     4.61%          –
µ2         16          10^0    3.87%     3.86%          –
µ2         16          10^1    2.38%     2.08%          –
µ2         32          10^1    2.38%     2.08%          1.34%
µ1         16          10^0    22.77%    15.62%         –
µ1         16          10^1    27.68%    22.17%         –
µ1         32          10^1    22.47%    16.82%         17.26%

4.3 Quantitative Evaluation of the Latent Segment Variables – Domain-Invariant ASR

Speaker adaptation and robust speech recognition in automatic speech recognition (ASR) can often be seen as domain adaptation problems, where available labeled data is limited and hence the data distributions during training and testing are mismatched. One approach to reduce the severity of this issue is to extract speaker/channel-invariant features for these tasks. As demonstrated in Section 4.2, the s-vector contains information about domains. Here we evaluate whether the latent segment variables contain domain-invariant linguistic information by evaluating on an ASR task: (1) train our proposed Seq2Seq-FHVAE using FBank features on a set that covers different domains; (2) train an LSTM acoustic model [12, 35, 42] on a set that covers only some of the domains, using the mean and log variance of the latent segment variable z1 extracted from the trained Seq2Seq-FHVAE as features; (3) test the ASR system on all domains. As a baseline, we also train the same ASR models using the FBank features alone. Detailed configurations are in Appendix G. For TIMIT, we assume that male and female speakers constitute different domains, and show the results in Table 2. The first row of results, from the ASR model trained on all domains (speakers) using FBank features, serves as the upper bound.
When trained on only male speakers, the phone error rate (PER) on female speakers increases by 16.1% for FBank features; for z1, despite slight degradation on male speakers, the PER on the unseen domain (female speakers) improves by 6.6% compared to FBank features.

Table 2: TIMIT test phone error rate of acoustic models trained on different features and sets

ASR train set   FHVAE train set      Features   Male    Female   All
Train All       –                    FBank      20.1%   16.7%    19.1%
Train Male      –                    FBank      21.0%   32.8%    25.2%
Train Male      Train All, α = 10    z1         22.0%   26.2%    23.5%

On Aurora-4, four domains are considered: clean, noisy, channel, and noisy+channel (NC for short). We train the FHVAE on the development set for two reasons: (1) the FHVAE can be regarded as a general feature extractor, which can be trained on an arbitrary collection of data that does not necessarily include the data for subsequent applications; (2) the Aurora-4 dev set contains a domain label for each utterance, so it is possible to control which domains the FHVAE has observed. Table 3 shows the word error rate (WER) results on Aurora-4, from which we can observe that the FBank representation suffers from severe domain mismatch; specifically, the WER increases by 53.3% when noise is present in mismatched microphone recordings (NC). In contrast, when the FHVAE is trained on data from all domains, using the latent segment variables as features reduces the WER by 16% to 35% relative to the baseline on mismatched domains, with less than 2% WER degradation on the matched domain. In addition, β-VAEs [13] are trained on the same data as the FHVAE to serve as baseline feature extractors, from which we extract the latent variables z as ASR features; the results are shown in the third to sixth rows of Table 3. The β-VAE features outperform FBank in all mismatched domains, but are inferior to the latent segment variables z1 from the FHVAE in those domains.
The results demonstrate the importance of learning not only disentangled but also interpretable representations, which can be achieved by our proposed FHVAE models. As a sanity check, we replace z1 with the latent sequence variable z2 and train an ASR model, which, as expected, results in very poor WER performance, as shown in the eighth row. Finally, we train another FHVAE on all domains excluding the combined NC domain, and show the results in the last row of Table 3. It can be observed that the latent segment variables still outperform the baseline features, with 30% lower WER on the noise-and-channel-combined data, even though the FHVAE has only seen the noise and channel variations independently.

Table 3: Aurora-4 test word error rate of acoustic models trained on different features and sets

ASR train set   {FH-,β-}VAE set   Features     Clean    Noisy    Channel   NC       All
Train All       –                 FBank        3.60%    7.06%    8.24%     18.49%   11.80%
Train Clean     –                 FBank        3.47%    50.97%   36.99%    71.80%   55.51%
Train Clean     Dev, β = 1        z (β-VAE)    4.95%    23.54%   31.12%    46.21%   32.47%
Train Clean     Dev, β = 2        z (β-VAE)    3.57%    27.24%   30.56%    48.17%   34.75%
Train Clean     Dev, β = 4        z (β-VAE)    3.89%    24.40%   29.80%    47.87%   33.38%
Train Clean     Dev, β = 8        z (β-VAE)    5.32%    34.84%   36.13%    58.02%   42.76%
Train Clean     Dev, α = 10       z1 (FHVAE)   5.01%    16.42%   20.29%    36.33%   24.41%
Train Clean     Dev, α = 10       z2 (FHVAE)   41.08%   68.73%   61.89%    86.36%   72.53%
Train Clean     Dev\NC, α = 10    z1 (FHVAE)   5.25%    16.52%   19.30%    40.59%   26.23%

5 Related Work

A number of prior publications have extended VAEs to model structured data by altering the underlying graphical model, either to dynamic Bayesian networks, such as SRNN [3] and VRNN [9], or to hierarchical models, such as the neural statistician [7] and SVAE [18]. These models have shown success in quantitatively increasing the log-likelihood, or in qualitatively generating reasonable structured data by sampling. However, it remains unclear whether independent attributes are disentangled in the latent space.
Moreover, the learned latent variables in these models are not interpretable without manually inspecting them or using labeled data. In contrast, our work presents a VAE framework that addresses both problems by explicitly modeling the difference in the rate of temporal variation of the attributes that operate at different scales. Our work is also related to β-VAE [13] with respect to unsupervised learning of disentangled representations with VAEs. The boosted KL-divergence penalty imposed in β-VAE training encourages disentanglement of independent attributes, but does not provide interpretability without supervision. We demonstrate in our domain-invariant ASR experiments that learning interpretable representations is important for such applications, and can be achieved by our FHVAE model. In addition, the idea of boosting the KL-divergence regularization is complementary to our model, and could potentially be integrated for better disentanglement.

6 Conclusions and Future Work

We introduce the factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations for sequence-level and segment-level attributes without any supervision. We verify the disentangling ability both qualitatively and quantitatively on two speech corpora. For future work, we plan to (1) extend to more levels of hierarchy, (2) investigate adversarial training for disentanglement, and (3) apply the model to other types of sequential data, such as text and video.

References

[1] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[2] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
[3] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.
[4] Najim Dehak, Reda Dehak, Patrick Kenny, Niko Brümmer, Pierre Ouellet, and Pierre Dumouchel. Support vector machines versus fast scoring in the low-dimensional total variability space for speaker verification. In Interspeech, volume 9, pages 1559–1562, 2009.
[5] Najim Dehak, Patrick J Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788–798, 2011.
[6] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[7] Harrison Edwards and Amos Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.
[8] Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581, 2014.
[9] Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems, pages 2199–2207, 2016.
[10] John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93, 1993.
[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[12] Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. Hybrid speech recognition with deep bidirectional LSTM.
In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273–278. IEEE, 2013.
[13] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
[14] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] Wei-Ning Hsu, Yu Zhang, and James Glass. Learning latent representations for speech generation and transformation. In Interspeech, pages 1273–1277, 2017.
[16] Wei-Ning Hsu, Yu Zhang, and James Glass. Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation. In Automatic Speech Recognition and Understanding (ASRU), 2017 IEEE Workshop on. IEEE, 2017.
[17] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Controllable text generation. arXiv preprint arXiv:1703.00955, 2017.
[18] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pages 2946–2954, 2016.
[19] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[21] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[22] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. 2016.
[23] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[24] Tomi Kinnunen, Lauri Juvela, Paavo Alku, and Junichi Yamagishi. Non-parallel voice conversion using i-vector PLDA: Towards unifying speaker verification and transformation. In ICASSP, 2017.
[25] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015.
[26] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[27] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[28] Nelson Morgan, Hervé Bourlard, and Hynek Hermansky. Automatic speech recognition: An auditory perspective. In Speech Processing in the Auditory System, pages 309–338. Springer, 2004.
[29] Toru Nakashika, Tetsuya Takiguchi, and Yasuhiro Minami. Non-parallel training in voice conversion using an adaptive restricted boltzmann machine. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(11):2032–2045, November 2016.
[30] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[31] Douglas B Paul and Janet M Baker. The design for the Wall Street Journal-based CSR corpus. In Proceedings of the Workshop on Speech and Natural Language, pages 357–362. Association for Computational Linguistics, 1992.
[32] David Pearce. Aurora working group: DSR front end LVCSR evaluation AU/384/02. PhD thesis, Mississippi State University, 2002.
[33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[34] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. [35] Hasim Sak, Andrew W Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Interspeech, pages 338–342, 2014. [36] Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. [37] Dmitriy Serdyuk, Kartik Audhkhasi, Philemon Brakel, Bhuvana Ramabhadran, Samuel Thomas, and Yoshua Bengio. Invariant representations for noisy speech recognition. CoRR, abs/1612.01928, 2016. [38] Yusuke Shunohara. Adversarial multi-task learning of deep neural networks for robust speech recognition. In Interspeeech, pages 2369–2372, 2016. [39] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR abs/1609.03499, 2016. [40] Zhizheng Wu, Eng Siong Chng, and Haizhou Li. Conditional restricted boltzmann machine for voice conversion. In ChinaSIP, 2013. [41] Dong Yu, Michael Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. Feature learning in deep neural networks – studies on speech recognition tasks. arXiv preprint arXiv:1301.3605, 2013. 11 [42] Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yaco, Sanjeev Khudanpur, and James Glass. Highway long short-term memory RNNs for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5755–5759. IEEE, 2016. 12 | 2017 | 34 |
6,830 | First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization Aryan Mokhtari University of Pennsylvania aryanm@seas.upenn.edu Alejandro Ribeiro University of Pennsylvania aribeiro@seas.upenn.edu Abstract This paper studies empirical risk minimization (ERM) problems for large-scale datasets and incorporates the idea of adaptive sample size methods to improve the guaranteed convergence bounds for first-order stochastic and deterministic methods. In contrast to traditional methods that attempt to solve the ERM problem corresponding to the full dataset directly, adaptive sample size schemes start with a small number of samples and solve the corresponding ERM problem to its statistical accuracy. The sample size is then grown geometrically – e.g., scaled by a factor of two – and the solution of the previous ERM problem is used as a warm start for the new one. Theoretical analyses show that the use of adaptive sample size methods reduces the overall computational cost of achieving the statistical accuracy of the whole dataset for a broad range of deterministic and stochastic first-order methods. The gains are specific to the choice of method. When particularized to, e.g., accelerated gradient descent and the stochastic variance reduced gradient method, the computational cost advantage is a logarithm of the number of training samples. Numerical experiments on various datasets confirm the theoretical claims and showcase the gains of using the proposed adaptive sample size scheme. 1 Introduction Finite sum minimization (FSM) problems involve objectives that are expressed as the sum of a typically large number of component functions. Since evaluating descent directions is costly, it is customary to utilize stochastic descent methods that access only one of the functions at each iteration. When considering first-order methods, a fitting measure of complexity is the total number of gradient evaluations that are needed to achieve optimality of order ε.
The paradigmatic deterministic gradient descent (GD) method serves as a naive complexity upper bound and has long been known to obtain an ε-suboptimal solution with O(Nκ log(1/ε)) gradient evaluations for an FSM problem with N component functions and condition number κ [13]. Accelerated gradient descent (AGD) [14] improves the computational complexity of GD to O(N√κ log(1/ε)), which is known to be the optimal bound for deterministic first-order methods [13]. In terms of stochastic optimization, it is only recently that linearly convergent methods have been proposed. Stochastic averaging gradient [15, 8], stochastic variance reduction [10], and stochastic dual coordinate ascent [17, 18] have all been shown to converge to ε-accuracy at a cost of O((N + κ) log(1/ε)) gradient evaluations. The accelerating catalyst framework in [11] further reduces this complexity to O((N + √(κN)) log(κ) log(1/ε)), and the works in [1] and [7] to O((N + √(κN)) log(1/ε)). The latter matches the lower bound on the complexity of stochastic methods [20]. Perhaps the main motivation for studying FSM is the solution of empirical risk minimization (ERM) problems associated with a large training set. ERM problems are particular cases of FSM, but they do have two specific qualities that come from the fact that ERM is a proxy for statistical loss minimization. The first property is that since the empirical risk and the statistical loss have different minimizers, there is no reason to solve ERM beyond the expected difference between the two objectives. This so-called statistical accuracy takes the place of ε in the complexity orders of the previous paragraph and is a constant of order O(1/N^α), where α is a constant from the interval [0.5, 1] depending on the regularity of the loss function; see Section 2. The second important property of ERM is that the component functions are drawn from a common distribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
This implies that if we consider subsets of the training set, the respective empirical risk functions are not that different from each other and, indeed, their differences are related to the statistical accuracy of the subset. The relationship of ERM to statistical loss minimization suggests that ERM problems have more structure than FSM problems. This is not exploited by most existing methods which, albeit used for ERM, are in fact designed for FSM. The goal of this paper is to exploit the relationship between ERM and statistical loss minimization to achieve lower overall computational complexity for a broad class of first-order methods applied to ERM. The technique we propose uses subsamples of the training set containing n ≤ N component functions that we grow geometrically. In particular, we start with a small number of samples and minimize the corresponding empirical risk, augmented by a regularization term of order Vn, up to its statistical accuracy. Note that, based on the first property of ERM, the added adaptive regularization term does not modify the required accuracy, while it makes the problem strongly convex and improves the problem's condition number. After solving the subproblem, we double the size of the training set and use the solution of the problem with n samples as a warm start for the problem with 2n samples. This is a reasonable initialization since, based on the second property of ERM, the functions are drawn from a common distribution and, therefore, the optimal values of the ERM problems with n and 2n functions are not that different from each other. The proposed approach succeeds in exploiting the two properties of ERM problems to improve complexity bounds of first-order methods. In particular, we show that to reach the statistical accuracy of the full training set the adaptive sample size scheme reduces the overall computational complexity of a broad range of first-order methods by a factor of log(N^α).
For instance, the overall computational complexity of adaptive sample size AGD to reach the statistical accuracy of the full training set is of order O(N√κN), which is lower than the O(N√κN log(N^α)) complexity of AGD. Related work. The adaptive sample size approach was used in [6] to improve the performance of the SAGA method [8] for solving ERM problems. In the dynamic SAGA (DynaSAGA) method in [6], the size of the training set grows at each iteration by adding two new samples, and the iterates are updated by a single step of SAGA. Although DynaSAGA succeeds in improving the performance of SAGA for solving ERM problems, it does not use an adaptive regularization term to tune the problem condition number. Moreover, DynaSAGA only works for strongly convex functions, while in our proposed scheme the functions are convex (not necessarily strongly convex). The work in [12] is the most similar to this manuscript. The Ada Newton method introduced in [12] aims to solve each subproblem within its statistical accuracy with a single update of Newton's method by ensuring that the iterates always stay in the quadratic convergence region of Newton's method. Ada Newton reaches the statistical accuracy of the full training set in almost two passes over the dataset; however, its computational complexity is prohibitive since it requires computing the objective function Hessian and its inverse at each iteration. 2 Problem Formulation Consider a decision vector w ∈ R^p, a random variable Z with realizations z, and a convex loss function f(w; z). We aim to find the optimal argument that minimizes the optimization problem w* := argmin_w L(w) = argmin_w E_Z[f(w, Z)] = argmin_w ∫ f(w, z) P(dz), (1) where L(w) := E_Z[f(w, Z)] is defined as the expected loss, and P is the probability distribution of the random variable Z. The optimization problem in (1) cannot be solved since the distribution P is unknown. However, we have access to a training set T = {z1, . . . , zN} containing N independent samples z1, .
. . , zN drawn from P, and, therefore, we attempt to minimize the empirical loss associated with the training set T = {z1, . . . , zN}, which is equivalent to solving the problem w†_n := argmin_w Ln(w) = argmin_w (1/n) Σ_{i=1}^{n} f(w, zi), (2) for n = N. Note that in (2) we defined Ln(w) := (1/n) Σ_{i=1}^{n} f(w, zi) as the empirical loss. There is a rich literature on bounds for the difference between the expected loss L and the empirical loss Ln, which is also referred to as the estimation error [4, 3]. We assume here that there exists a constant Vn, which depends on the number of samples n, that upper bounds the difference between the expected and empirical losses for all w ∈ R^p: E[ sup_{w ∈ R^p} |L(w) − Ln(w)| ] ≤ Vn, (3) where the expectation is with respect to the choice of the training set. The celebrated work of Vapnik [19, Section 3.4] provides the upper bound Vn = O(√((1/n) log(1/n))), which can be improved to Vn = O(√(1/n)) using the chaining technique (see, e.g., [5]). Bounds of the order Vn = O(1/n) have been derived more recently under stronger regularity conditions that are not uncommon in practice [2, 9, 4]. In this paper, we report our results using the general bound Vn = O(1/n^α), where α can be any constant from the interval [0.5, 1]. The observation that the optimal values of the expected loss and empirical loss are within a Vn distance of each other implies that there is no gain in improving the optimization error of minimizing Ln beyond the constant Vn. In other words, if we find an approximate solution wn such that the optimization error is bounded by Ln(wn) − Ln(w†_n) ≤ Vn, then finding a more accurate solution to reduce the optimization error is not beneficial, since the overall error, i.e., the sum of estimation and optimization errors, does not become smaller than Vn. Throughout the paper we say that wn solves the ERM problem in (2) to within its statistical accuracy if it satisfies Ln(wn) − Ln(w†_n) ≤ Vn.
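The statistical-accuracy stopping notion above can be sketched numerically; this is an illustrative sketch, where the constants gamma and alpha and the choice of logistic loss are our own assumptions, not values fixed by the paper:

```python
import numpy as np

def statistical_accuracy(n, alpha=0.5, gamma=1.0):
    """V_n = gamma / n^alpha: the estimation-error bound for n samples."""
    return gamma / n ** alpha

def empirical_loss(w, Z, y):
    """Logistic empirical loss L_n(w), averaged over the rows of Z."""
    margins = y * (Z @ w)
    return float(np.mean(np.log1p(np.exp(-margins))))

# A solver may stop once L_n(w) - L_n(w_dagger) <= V_n, since the
# estimation error V_n already dominates any further optimization gain.
print(statistical_accuracy(100))  # 0.1 with alpha = 0.5, gamma = 1.0
```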
We can further leverage the estimation error to add a regularization term of the form (cVn/2)‖w‖² to the empirical loss to ensure that the problem is strongly convex. To do so, we define the regularized empirical risk Rn(w) := Ln(w) + (cVn/2)‖w‖² and the corresponding optimal argument w*_n := argmin_w Rn(w) = argmin_w Ln(w) + (cVn/2)‖w‖², (4) and attempt to minimize Rn with accuracy Vn. Since the regularization in (4) is of order Vn and (3) holds, the difference between Rn(w*_n) and L(w*) is also of order Vn – this is not as immediate as it seems; see [16]. Thus, the variable wn solves the ERM problem in (2) to within its statistical accuracy if it satisfies Rn(wn) − Rn(w*_n) ≤ Vn. It follows that by solving the problem in (4) for n = N we find w*_N that solves the expected risk minimization in (1) up to the statistical accuracy VN of the full training set T. In the following section we introduce a class of methods that solve problem (4) up to its statistical accuracy faster than traditional deterministic and stochastic descent methods. 3 Adaptive Sample Size Methods The empirical risk minimization (ERM) problem in (4) can be solved using state-of-the-art methods for minimizing strongly convex functions. However, these methods never exploit the particular property of ERM that the functions are drawn from the same distribution. In this section, we propose an adaptive sample size scheme which exploits this property of ERM to improve the convergence guarantees of traditional optimization methods to reach the statistical accuracy of the full training set. In the proposed adaptive sample size scheme, we start with a small number of samples and solve the corresponding ERM problem with a specific accuracy. Then, we double the size of the training set and use the solution of the previous ERM problem – with half as many samples – as a warm start for the new ERM problem. This procedure continues until the training set becomes identical to the given training set T, which contains N samples.
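For concreteness, the regularized risk in (4) and its gradient can be sketched as follows; the logistic loss is an illustrative choice, and c, gamma, and alpha are assumed constants, not values prescribed by the paper:

```python
import numpy as np

def risk_and_grad(w, Z, y, c=1.0, alpha=0.5, gamma=1.0):
    """R_n(w) = L_n(w) + (c V_n / 2) ||w||^2 and its gradient, as in (4)."""
    n = len(y)
    reg = c * gamma / n ** alpha                # c * V_n
    margins = y * (Z @ w)
    p = 1.0 / (1.0 + np.exp(margins))           # sigmoid of the negated margin
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(Z * (y * p)[:, None]).mean(axis=0)
    return loss + 0.5 * reg * (w @ w), grad + reg * w
```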
Consider the training set Sm with m samples as a subset of the full training set T, i.e., Sm ⊂ T. Assume that we have solved the ERM problem corresponding to the set Sm such that the approximate solution wm satisfies the condition E[Rm(wm) − Rm(w*_m)] ≤ δm. The next step in the proposed adaptive sample size scheme is to double the size of the current training set Sm and solve the ERM problem corresponding to the set Sn which has n = 2m samples and contains the previous set, i.e., Sm ⊂ Sn ⊂ T. We use wm, which is a proper approximation of the optimal solution of Rm, as the initial iterate for the optimization method that we use to minimize the risk Rn. This is a reasonable choice if the optimal arguments of Rm and Rn are close to each other, which is the case since samples are drawn from a fixed distribution P.
Algorithm 1 Adaptive Sample Size Mechanism
1: Input: Initial sample size n = m0 and argument wn = wm0 with ‖∇Rn(wn)‖ ≤ √(2c) Vn
2: while n < N do {main loop}
3: Update argument and index: wm = wn and m = n.
4: Increase sample size: n = min{2m, N}.
5: Set the initial variable: w̃ = wm.
6: while ‖∇Rn(w̃)‖ > √(2c) Vn do
7: Update the variable w̃: Compute w̃ = Update(w̃, ∇Rn(w̃))
8: end while
9: Set wn = w̃.
10: end while
Starting with wm, we can use first-order descent methods to minimize the empirical risk Rn. Depending on the iterative method that we use for solving each ERM problem, we might need a different number of iterations to find an approximate solution wn which satisfies the condition E[Rn(wn) − Rn(w*_n)] ≤ δn. To design a comprehensive routine we need to come up with a proper condition for the required accuracy δn at each phase. In the following proposition we derive an upper bound for the expected suboptimality of the variable wm for the risk Rn based on the accuracy of wm for the previous risk Rm associated with the training set Sm. This upper bound allows us to choose the accuracy δm efficiently. Proposition 1.
Consider the sets Sm and Sn as subsets of the training set T such that Sm ⊂ Sn ⊂ T, where the numbers of samples in the sets Sm and Sn are m and n, respectively. Further, define wm as a δm-optimal solution of the risk Rm in expectation, i.e., E[Rm(wm) − R*_m] ≤ δm, and recall Vn as the statistical accuracy of the training set Sn. Then the empirical risk error Rn(wm) − Rn(w*_n) of the variable wm corresponding to the set Sn is bounded above in expectation by E[Rn(wm) − Rn(w*_n)] ≤ δm + (2(n − m)/n)(V_{n−m} + Vm) + 2(Vm − Vn) + (c/2)(Vm − Vn)‖w*‖². (5) Proof. See Section 7.1 in the supplementary material. The result in Proposition 1 characterizes the suboptimality of the variable wm, which is a δm-suboptimal solution for the risk Rm, with respect to the empirical risk Rn associated with the set Sn. If we assume that the statistical accuracy Vn is of the order O(1/n^α) and we double the size of the training set at each step, i.e., n = 2m, then the inequality in (5) can be simplified to E[Rn(wm) − Rn(w*_n)] ≤ δm + [2 + (1 − 1/2^α)(2 + (c/2)‖w*‖²)] Vm. (6) The expression in (6) formalizes the reason that there is no need to solve the subproblem Rm beyond its statistical accuracy Vm. In other words, even if δm is zero the expected suboptimality will be of the order O(Vm), i.e., E[Rn(wm) − Rn(w*_n)] = O(Vm). Based on this observation, the required precision δm for solving the subproblem Rm should be of the order δm = O(Vm). The steps of the proposed adaptive sample size scheme are summarized in Algorithm 1. Note that since computation of the suboptimality Rn(wn) − Rn(w*_n) requires access to the minimizer w*_n, we replace the condition Rn(wn) − Rn(w*_n) ≤ Vn by a bound on the gradient norm ‖∇Rn(wn)‖. The risk Rn is strongly convex, and we can bound the suboptimality Rn(wn) − Rn(w*_n) as Rn(wn) − Rn(w*_n) ≤ (1/(2cVn)) ‖∇Rn(wn)‖². (7) Hence, at each stage, we stop updating the variable if the condition ‖∇Rn(wn)‖ ≤ √(2c) Vn holds, which implies Rn(wn) − Rn(w*_n) ≤ Vn.
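The adaptive sample size mechanism with this gradient-norm stopping rule can be sketched as follows; `grad_risk` and `update` stand in for problem-specific routines, and all names and default constants here are our own assumptions:

```python
import numpy as np

def adaptive_sample_size(w0, N, m0, grad_risk, update, c=1.0, alpha=0.5,
                         max_inner=10_000):
    """Sketch of Algorithm 1: double n, warm-start, stop when the gradient
    norm of the regularized risk falls below sqrt(2c) * V_n."""
    n, w = m0, np.array(w0, dtype=float)
    while n < N:
        n = min(2 * n, N)                    # grow the sample size
        Vn = 1.0 / n ** alpha                # statistical accuracy of S_n
        for _ in range(max_inner):           # inner solver loop
            g = grad_risk(w, n)
            if np.linalg.norm(g) <= np.sqrt(2 * c) * Vn:
                break                        # within statistical accuracy
            w = update(w, g, n)              # any first-order update rule
    return w
```

On a toy strongly convex problem (gradient w, minimizer 0), the returned iterate satisfies the final-stage gradient-norm criterion by construction.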
The intermediate variable w̃ can be updated in Step 7 using any first-order method. We discuss this procedure for the accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG) methods in Sections 4.1 and 4.2, respectively. 4 Complexity Analysis In this section, we aim to characterize the number of iterations sn required at each stage to solve the subproblems within their statistical accuracy. We derive this result for all linearly convergent first-order deterministic and stochastic methods. The inequality in (6) not only leads to an efficient policy for the required precision δm at each step, but also provides an upper bound for the suboptimality of the initial iterate wm for minimizing the risk Rn. Using this upper bound, depending on the iterative method of choice, we can characterize the number of iterations sn required to ensure that the updated variable is within the statistical accuracy of the risk Rn. To formally characterize the number of required iterations sn, we first assume the following conditions are satisfied. Assumption 1. The loss functions f(w, z) are convex with respect to w for all values of z. Moreover, their gradients ∇f(w, z) are Lipschitz continuous with constant M: ‖∇f(w, z) − ∇f(w′, z)‖ ≤ M ‖w − w′‖, for all z. (8) The conditions in Assumption 1 imply that the average loss L(w) and the empirical loss Ln(w) are convex and their gradients are Lipschitz continuous with constant M. Thus, the empirical risk Rn(w) is strongly convex with constant cVn and its gradients ∇Rn(w) are Lipschitz continuous with parameter M + cVn. So far we have concluded that each subproblem should be solved up to its statistical accuracy. This observation leads to an upper bound for the number of iterations needed at each step to solve each subproblem. Indeed, various descent methods can be executed for solving the subproblem.
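As one concrete instance of such an update under Assumption 1, a plain gradient step on Rn with step size 1/(M + cVn), matching the smoothness parameter of Rn, could look like this (a sketch; the default constants are assumptions):

```python
def gd_update(w, grad, n, M=1.0, c=1.0, alpha=0.5):
    """One gradient step on R_n; the step 1/(M + c V_n) matches its smoothness."""
    Vn = 1.0 / n ** alpha
    return w - grad / (M + c * Vn)
```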
Here we intend to come up with a general result that covers all descent methods that have a linear convergence rate when the objective function is strongly convex and smooth. In the following theorem, we derive a lower bound on the number of iterations sn required to ensure that the variable wn, which is the outcome of updating wm by sn iterations of the method of interest, is within the statistical accuracy of the risk Rn, for any linearly convergent method. Theorem 2. Consider the variable wm as a Vm-suboptimal solution of the risk Rm in expectation, i.e., E[Rm(wm) − Rm(w*_m)] ≤ Vm, where Vm = O(1/m^α). Consider the sets Sm ⊂ Sn ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, define 0 ≤ ρn < 1 as the linear convergence factor of the descent method used for updating the iterates. Then, the variable wn generated by the adaptive sample size mechanism satisfies E[Rn(wn) − Rn(w*_n)] ≤ Vn if the number of iterations sn at the n-th stage is larger than sn ≥ − log[ 3 × 2^α + (2^α − 1)(2 + (c/2)‖w*‖²) ] / log ρn. (9) Proof. See Section 7.2 in the supplementary material. The result in Theorem 2 characterizes the number of required iterations at each phase. Depending on the linear convergence factor ρn and the parameter α for the order of the statistical accuracy, the number of required iterations might be different. Note that the parameter ρn might depend on the size of the training set directly or through the dependency of the problem condition number on n. It is worth mentioning that the result in (9) gives a lower bound on the number of required iterations, which means that sn = ⌊− log[ 3 × 2^α + (2^α − 1)(2 + (c/2)‖w*‖²) ] / log ρn⌋ + 1 is the exact number of iterations needed when minimizing Rn, where ⌊a⌋ indicates the floor of a. To characterize the overall computational complexity of the proposed adaptive sample size scheme, the exact expression for the linear convergence constant ρn is required.
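The exact iteration count, i.e., the floor of the bound in (9) plus one, can be sketched as a small helper; the defaults for c and ‖w*‖² are illustrative assumptions:

```python
import math

def inner_iterations(rho_n, alpha=0.5, c=1.0, w_star_sq=1.0):
    """Smallest integer number of iterations satisfying the bound in (9)."""
    bracket = 3 * 2 ** alpha + (2 ** alpha - 1) * (2 + 0.5 * c * w_star_sq)
    return math.floor(-math.log(bracket) / math.log(rho_n)) + 1
```

A slower method (rho_n closer to 1) needs more inner iterations per stage, as the theorem predicts.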
In the following section, we focus on two deterministic and stochastic methods and characterize their overall computational complexity to reach the statistical accuracy of the full training set T. 4.1 Adaptive Sample Size Accelerated Gradient (Ada AGD) The accelerated gradient descent (AGD) method, also known as Nesterov's method, is a long-established descent method which achieves the optimal convergence rate for first-order deterministic methods. In this section, we aim to combine the update of AGD with the adaptive sample size scheme in Section 3 to improve the convergence guarantees of AGD for solving ERM problems. This can be done by using AGD for updating the iterates in Step 7 of Algorithm 1. Given an iterate wm within the statistical accuracy of the set Sm, the adaptive sample size accelerated gradient descent method (Ada AGD) requires sn iterations of AGD to ensure that the resulting iterate wn lies within the statistical accuracy of Sn. In particular, if we initialize the sequences w̃ and ỹ as w̃_0 = ỹ_0 = wm, the approximate solution wn for the risk Rn is the outcome of the updates w̃_{k+1} = ỹ_k − ηn ∇Rn(ỹ_k), (10) and ỹ_{k+1} = w̃_{k+1} + βn (w̃_{k+1} − w̃_k), (11) after sn iterations, i.e., wn = w̃_{sn}. The parameters ηn and βn are indexed by n since they depend on the number of samples. We use the convergence rate of AGD to characterize the number of iterations sn required to guarantee that the outcome of the recursive updates in (10) and (11) is within the statistical accuracy of Rn. Theorem 3. Consider the variable wm as a Vm-optimal solution of the risk Rm in expectation, i.e., E[Rm(wm) − Rm(w*_m)] ≤ Vm, where Vm = γ/m^α. Consider the sets Sm ⊂ Sn ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, set the parameters ηn and βn as ηn = 1/(cVn + M) and βn = (√(cVn + M) − √(cVn)) / (√(cVn + M) + √(cVn)).
(12) Then, the variable wn generated by the Ada AGD updates (10)-(11) satisfies E[Rn(wn) − Rn(w*_n)] ≤ Vn if the number of iterations sn is larger than sn ≥ √((n^α M + cγ)/(cγ)) log[ 6 × 2^α + (2^α − 1)(4 + c‖w*‖²) ]. (13) Moreover, if we define m0 as the size of the first training set, then to reach the statistical accuracy VN of the full training set T the overall computational complexity of Ada AGD is given by N [ 1 + log2(N/m0) + (√(2^α)/(√(2^α) − 1)) √(N^α M/(cγ)) ] log[ 6 × 2^α + (2^α − 1)(4 + c‖w*‖²) ]. (14) Proof. See Section 7.3 in the supplementary material. The result in Theorem 3 characterizes the number of iterations sn required to achieve the statistical accuracy of Rn. Moreover, it shows that to reach the accuracy VN = O(1/N^α) for the risk RN associated with the full training set T, the total computational complexity of Ada AGD is of the order O(N^(1+α/2)). Indeed, this complexity is lower than the overall computational complexity of AGD for reaching the same target, which is given by O(N√κN log(N^α)) = O(N^(1+α/2) log(N^α)). Note that this bound holds for AGD since the condition number κN := (M + cVN)/(cVN) of the risk RN is of the order O(1/VN) = O(N^α). 4.2 Adaptive Sample Size SVRG (Ada SVRG) For the adaptive sample size mechanism presented in Section 3, we can also use linearly convergent stochastic methods such as the stochastic variance reduced gradient (SVRG) method of [10] to update the iterates. The SVRG method succeeds in reducing the computational complexity of deterministic first-order methods by computing a single gradient per iteration and using a delayed version of the average gradient to update the iterates. Indeed, we can exploit the idea of SVRG to develop adaptive sample size methods with low computational complexity that improve on the performance of deterministic adaptive sample size algorithms. Moreover, the adaptive sample size variant of SVRG (Ada SVRG) enhances the proven bounds for SVRG for solving ERM problems.
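The Ada AGD inner solver, i.e., the recursion (10)-(11) with the parameters in (12), can be sketched as follows; `grad_Rn` and the default constants are assumptions, not part of the paper's specification:

```python
import numpy as np

def agd_inner(w_init, grad_Rn, n, s_n, M=1.0, c=1.0, alpha=0.5):
    """Run s_n AGD steps (10)-(11) on R_n from the warm start w_init."""
    Vn = 1.0 / n ** alpha
    eta = 1.0 / (c * Vn + M)                              # eta_n, as in (12)
    beta = ((np.sqrt(c * Vn + M) - np.sqrt(c * Vn))
            / (np.sqrt(c * Vn + M) + np.sqrt(c * Vn)))    # beta_n, as in (12)
    w = y = np.array(w_init, dtype=float)
    for _ in range(s_n):
        w_next = y - eta * grad_Rn(y)                     # gradient step (10)
        y = w_next + beta * (w_next - w)                  # momentum step (11)
        w = w_next
    return w
```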
We proceed to extend the idea of the adaptive sample size scheme to the SVRG algorithm. To do so, consider wm as an iterate within the statistical accuracy, E[Rm(wm) − Rm(w*_m)] ≤ Vm, for a set Sm which contains m samples. Let sn and qn be the numbers of outer and inner loops, respectively, for the SVRG update when the size of the training set is n. Further, consider w̃ and ŵ as the sequences of iterates for the outer and inner loops of SVRG, respectively. In the adaptive sample size SVRG (Ada SVRG) method to minimize the risk Rn, we set the approximate solution wm of the previous ERM problem as the initial iterate for the outer loop, i.e., w̃_0 = wm. Then, the outer loop update, which contains a full gradient computation, is defined as ∇Rn(w̃_k) = (1/n) Σ_{i=1}^{n} ∇f(w̃_k, zi) + cVn w̃_k for k = 0, . . . , sn − 1, (15) and the inner loop for the k-th outer loop contains qn iterations of the update ŵ_{t+1,k} = ŵ_{t,k} − ηn (∇f(ŵ_{t,k}, z_{i_t}) + cVn ŵ_{t,k} − ∇f(w̃_k, z_{i_t}) − cVn w̃_k + ∇Rn(w̃_k)), (16) for t = 0, . . . , qn − 1, where the iterates of the inner loop at step k are initialized as ŵ_{0,k} = w̃_k, and i_t is the index of the function chosen uniformly at random from the set {1, . . . , n} at inner iterate t. The outcome of each inner loop, ŵ_{qn,k}, is used as the variable for the next outer loop, i.e., w̃_{k+1} = ŵ_{qn,k}. We define the outcome of sn outer loops, w̃_{sn}, as the approximate solution for the risk Rn, i.e., wn = w̃_{sn}. In the following theorem we derive a bound on the number of outer loops sn required to ensure that the variable wn generated by the updates in (15) and (16) is within the statistical accuracy of Rn in expectation, i.e., E[Rn(wn) − Rn(w*_n)] ≤ Vn. To reach the smallest possible lower bound for sn, we properly choose the number of inner loop iterations qn and the learning rate ηn. Theorem 4. Consider the variable wm as a Vm-optimal solution of the risk Rm, i.e., a solution such that E[Rm(wm) − Rm(w*_m)] ≤ Vm, where Vm = O(1/m^α).
Consider the sets Sm ⊂ Sn ⊂ T such that n = 2m, and suppose Assumption 1 holds. Further, set the number of inner loop iterations as qn = n and the learning rate as ηn = 0.1/(M + cVn). Then, the variable wn generated by the Ada SVRG updates (15)-(16) satisfies E[Rn(wn) − Rn(w*_n)] ≤ Vn if the number of iterations sn is larger than sn ≥ log2[ 3 × 2^α + (2^α − 1)(2 + (c/2)‖w*‖²) ]. (17) Moreover, to reach the statistical accuracy VN of the full training set T the overall computational complexity of Ada SVRG is given by 4N log2[ 3 × 2^α + (2^α − 1)(2 + (c/2)‖w*‖²) ]. (18) Proof. See Section 7.4. The result in (17) shows that the minimum number of outer loop iterations for Ada SVRG is equal to sn = ⌊log2[ 3 × 2^α + (2^α − 1)(2 + (c/2)‖w*‖²) ]⌋ + 1. This bound leads to the result in (18), which shows that the overall computational complexity of Ada SVRG to reach the statistical accuracy of the full training set T is of the order O(N). This bound not only improves on the O(N^(1+α/2)) bound for Ada AGD, but also enhances the complexity of SVRG for reaching the same target accuracy, which is given by O((N + κN) log(N^α)) = O(N log(N^α)). 5 Experiments In this section, we compare the adaptive sample size versions of a group of first-order methods, including gradient descent (GD), accelerated gradient descent (AGD), and stochastic variance reduced gradient (SVRG), with their standard (fixed sample size) versions. In the main paper, we only use the RCV1 dataset. Further numerical experiments on the MNIST dataset can be found in Section 7.5 in the supplementary material. We use N = 10,000 samples of the RCV1 dataset for the training set and the remaining 10,242 as the test set. The number of features in each sample is p = 47,236. In our experiments, we use the logistic loss. The constant c should be within the order of the Lipschitz continuity constant M of the gradients, and, therefore, we set it as c = 1 since the samples are normalized and M = 1.
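One stage of Ada SVRG, i.e., the updates (15)-(16) instantiated with a logistic loss as in the experiments, can be sketched as follows; all helper names, the seeding, and the constants are our own assumptions for illustration:

```python
import numpy as np

def logistic_grad(w, z, y):
    """Gradient of log(1 + exp(-y * z.w)) with respect to w."""
    return -y * z / (1.0 + np.exp(y * (z @ w)))

def svrg_stage(w_tilde, Z, y, s_n, c=1.0, alpha=0.5, seed=0):
    """Run s_n outer loops of (15)-(16) with q_n = n, eta_n = 0.1/(M + c V_n)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    cVn = c / n ** alpha
    eta = 0.1 / (1.0 + cVn)                  # M = 1 for normalized samples
    for _ in range(s_n):
        full = np.mean([logistic_grad(w_tilde, Z[i], y[i])    # full gradient (15)
                        for i in range(n)], axis=0) + cVn * w_tilde
        w = w_tilde.copy()
        for _ in range(n):                                    # inner loop (16)
            i = rng.integers(n)
            g = (logistic_grad(w, Z[i], y[i]) + cVn * w
                 - logistic_grad(w_tilde, Z[i], y[i]) - cVn * w_tilde + full)
            w = w - eta * g
        w_tilde = w                          # warm start for the next outer loop
    return w_tilde
```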
The size of the initial training set for the adaptive methods is m0 = 400. In our experiments we assume α = 0.5, and therefore the added regularization term is (1/√n)‖w‖². The plots in Figure 1 compare the suboptimality of GD, AGD, and SVRG with their adaptive sample size versions. As our theoretical results suggest, we observe that the adaptive sample size scheme reduces the overall computational complexity of all of the considered linearly convergent first-order methods.
Figure 1: Suboptimality vs. number of effective passes for the RCV1 dataset with regularization of O(1/√n). [Three panels: GD vs. Ada GD, AGD vs. Ada AGD, and SVRG vs. Ada SVRG.]
Figure 2: Test error vs. number of effective passes for the RCV1 dataset with regularization of O(1/√n). [Three panels: GD vs. Ada GD, AGD vs. Ada AGD, and SVRG vs. Ada SVRG.]
If we compare the test errors of GD, AGD, and SVRG with their adaptive sample size variants, we reach the same conclusion: the adaptive sample size scheme reduces the overall computational complexity required to reach the statistical accuracy of the full training set. In particular, the left plot in Figure 2 shows that Ada GD approaches the minimum test error of 8% after 55 effective passes, while GD does not reach it even after 100 passes (GD would reach a lower test error if run for more iterations). The central plot in Figure 2 shows that Ada AGD reaches the 8% test error about 5 times faster than AGD. This is as predicted by log(N^α) = log(100) ≈ 4.6.
The right plot in Figure 2 illustrates a similar improvement for Ada SVRG. We have observed similar performance on other datasets such as MNIST – see Section 7.5 in the supplementary material. 6 Discussions We presented an adaptive sample size scheme to improve the convergence guarantees of a class of first-order methods which have linear convergence rates under strong convexity and smoothness assumptions. The logic behind the proposed adaptive sample size scheme is to replace the solution of a relatively hard problem – the ERM problem for the full training set – by a sequence of relatively easier problems – ERM problems corresponding to subsets of the samples. Indeed, whenever m < n, solving the ERM problem in (4) for the risk Rm is simpler than the one for the risk Rn because: (i) the adaptive regularization term of order Vm makes the condition number of Rm smaller than the condition number of Rn – which uses a regularizer of order Vn; (ii) the approximate solution wm that we need to find for Rm is less accurate than the approximate solution wn we need to find for Rn; (iii) the computational cost of an iteration for Rm – e.g., the cost of evaluating a gradient – is lower than the cost of an iteration for Rn. Properties (i)-(iii), combined with the ability to grow the sample size geometrically, reduce the overall computational complexity for reaching the statistical accuracy of the full training set. We particularized our results to develop adaptive (Ada) versions of AGD and SVRG. In both methods we found a computational complexity reduction of order O(log(1/VN)) = O(log(N^α)), which was corroborated in numerical experiments. The idea and analysis of adaptive first-order methods apply generically to any other approach with a linear convergence rate (Theorem 2). The development of sample size adaptation for sublinear methods is left for future research. Acknowledgments This research was supported by NSF CCF 1717120 and ARO W911NF1710438. References [1] Zeyuan Allen-Zhu.
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. In STOC, 2017. [2] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006. [3] L´eon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010. [4] L´eon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, Vancouver, British Columbia, Canada, December 3-6, 2007, pages 161–168, 2007. [5] Olivier Bousquet. Concentration inequalities and empirical processes theory applied to the analysis of learning algorithms. PhD thesis, Ecole Polytechnique, 2002. [6] Hadi Daneshmand, Aur´elien Lucchi, and Thomas Hofmann. Starting small - learning with adaptive sample sizes. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pages 1463–1471, 2016. [7] Aaron Defazio. A simple practical accelerated method for finite sums. In Advances In Neural Information Processing Systems, pages 676–684, 2016. [8] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1646–1654, 2014. [9] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Competing with the empirical risk minimizer in a single pass. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 728–763, 2015. [10] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26. Lake Tahoe, Nevada, United States., pages 315–323, 2013. [11] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. 
A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3384–3392, 2015. [12] Aryan Mokhtari, Hadi Daneshmand, Aur´elien Lucchi, Thomas Hofmann, and Alejandro Ribeiro. Adaptive Newton method for empirical risk minimization to statistical accuracy. In Advances in Neural Information Processing Systems 29. Barcelona, Spain, pages 4062–4070, 2016. [13] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013. [14] Yurii Nesterov et al. Gradient methods for minimizing composite objective function. 2007. [15] Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25. Lake Tahoe, Nevada, United States., pages 2672–2680, 2012. [16] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670, 2010. [17] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14:567–599, 2013. [18] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105–145, 2016. [19] Vladimir Vapnik. The nature of statistical learning theory. Springer Science & Business Media, 2013. [20] Blake E Woodworth and Nati Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems, pages 3639–3647, 2016. 9 | 2017 | 340 |
Doubly Stochastic Variational Inference for Deep Gaussian Processes

Hugh Salimbeni, Imperial College London and PROWLER.io, hrs13@ic.ac.uk
Marc Peter Deisenroth, Imperial College London and PROWLER.io, m.deisenroth@imperial.ac.uk

Abstract

Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalizations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm that does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.

1 Introduction

Gaussian processes (GPs) achieve state-of-the-art performance in a range of applications including robotics (Ko and Fox, 2008; Deisenroth and Rasmussen, 2011), geostatistics (Diggle and Ribeiro, 2007), numerics (Briol et al., 2015), active sensing (Guestrin et al., 2005) and optimization (Snoek et al., 2012). A Gaussian process is defined by its mean and covariance function. In some situations prior knowledge can be readily incorporated into these functions. Examples include periodicities in climate modelling (Rasmussen and Williams, 2006), change-points in time series data (Garnett et al., 2009) and simulator priors for robotics (Cutler and How, 2015). In other settings, GPs are used successfully as black-box function approximators.
There are compelling reasons to use GPs, even when little is known about the data: a GP grows in complexity to suit the data; a GP is robust to overfitting while providing reasonable error bars on predictions; a GP can model a rich class of functions with few hyperparameters. Single-layer GP models are limited by the expressiveness of the kernel/covariance function. To some extent kernels can be learned from data, but inference over a large and richly parameterized space of kernels is expensive, and approximate methods may be at risk of overfitting. Optimization of the marginal likelihood with respect to hyperparameters approximates Bayesian inference only if the number of hyperparameters is small (MacKay, 1999). Attempts to use, for example, a highly parameterized neural network as a kernel function (Calandra et al., 2016; Wilson et al., 2016) incur the downsides of deep learning, such as the need for application-specific architectures and regularization techniques. Kernels can be combined through sums and products (Duvenaud et al., 2013) to create more expressive compositional kernels, but this approach is limited to simple base kernels, and their optimization is expensive. A Deep Gaussian Process (DGP) is a hierarchical composition of GPs that can overcome the limitations of standard (single-layer) GPs while retaining the advantages. DGPs are richer models than standard GPs, just as deep networks are richer than generalized linear models. In contrast to models with highly parameterized kernels, DGPs learn a representation hierarchy non-parametrically with very few hyperparameters to optimize. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Unlike their single-layer counterparts, DGPs have proved difficult to train. The mean-field variational approaches used in previous work (Damianou and Lawrence, 2013; Mattos et al., 2016; Dai et al., 2016) make strong independence and Gaussianity assumptions.
The true posterior is likely to exhibit high correlations between layers, but mean-field variational approaches are known to severely underestimate the variance in these situations (Turner and Sahani, 2011). In this paper, we present a variational algorithm for inference in DGP models that does not force independence or Gaussianity between the layers. In common with many state-of-the-art GP approximation schemes we start from a sparse inducing point variational framework (Matthews et al., 2016) to achieve computational tractability within each layer, but we do not force independence between the layers. Instead, we use the exact model conditioned on the inducing points as a variational posterior. This posterior has the same structure as the full model, and in particular it maintains the correlations between layers. Since we preserve the non-linearity of the full model in our variational posterior we lose analytic tractability. We overcome this difficulty by sampling from the variational posterior, introducing the first source of stochasticity. This is computationally straightforward due to an important property of the sparse variational posterior marginals: the marginals conditioned on the layer below depend only on the corresponding inputs. It follows that samples from the marginals at the top layer can be obtained without computing the full covariance within the layers. We are primarily interested in large data applications, so we further subsample the data in minibatches. This second source of stochasticity allows us to scale to arbitrarily large data. We demonstrate through extensive experiments that our approach works well in practice. We provide results on benchmark regression and classification data problems, and also demonstrate the first DGP application to a dataset with a billion points. Our experiments confirm that DGP models are never worse than single-layer GPs, and in many cases significantly better. 
Crucially, we show that additional layers do not incur overfitting, even with small data.

2 Background

In this section, we present necessary background on single-layer Gaussian processes and sparse variational inference, followed by the definition of the deep Gaussian process model. Throughout we emphasize a particular property of sparse approximations: the sparse variational posterior is itself a Gaussian process, so the marginals depend only on the corresponding inputs.

2.1 Single-layer Gaussian Processes

We consider the task of inferring a stochastic function f: R^D → R, given a likelihood p(y|f) and a set of N observations y = (y_1, ..., y_N)^T at design locations X = (x_1, ..., x_N)^T. We place a GP prior on the function f that models all function values as jointly Gaussian, with a covariance function k: R^D × R^D → R and a mean function m: R^D → R. We further define an additional set of M inducing locations Z = (z_1, ..., z_M)^T. We use the notation f = f(X) and u = f(Z) for the function values at the design and inducing points, respectively. We define also [m(X)]_i = m(x_i) and [k(X, Z)]_{ij} = k(x_i, z_j). By the definition of a GP, the joint density p(f, u) is a Gaussian whose mean is given by the mean function evaluated at every input (X, Z)^T, and the corresponding covariance is given by the covariance function evaluated at every pair of inputs. The joint density of y, f and u is

p(y, f, u) = p(f|u; X, Z) p(u; Z) ∏_{i=1}^N p(y_i|f_i),   (1)

where the first two factors form the GP prior and the product over i forms the likelihood. In (1) we factorized the joint GP prior p(f, u; X, Z) (throughout this paper we use the semi-colon notation to clarify the input locations of the corresponding function values, which will become important later when we discuss multi-layer GP models) into the prior p(u) = N(u|m(Z), k(Z, Z)) and the conditional p(f|u; X, Z) = N(f|μ, Σ), where for i, j = 1, ..., N,

[μ]_i = m(x_i) + α(x_i)^T (u − m(Z)),   (2)
[Σ]_{ij} = k(x_i, x_j) − α(x_i)^T k(Z, Z) α(x_j),   (3)
with α(x_i) = k(Z, Z)^{-1} k(Z, x_i). (For example, p(f|u; X, Z) indicates that the input locations for f and u are X and Z, respectively.) Note that the conditional mean μ and covariance Σ defined via (2) and (3), respectively, take the form of mean and covariance functions of the inputs x_i. Inference in the model (1) is possible in closed form when the likelihood p(y|f) is Gaussian, but the computation scales cubically with N. We are interested in large datasets with non-Gaussian likelihoods. Therefore, we seek a variational posterior to overcome both these difficulties simultaneously. Variational inference seeks an approximate posterior q(f, u) by minimizing the Kullback-Leibler divergence KL[q||p] between the variational posterior q and the true posterior p. Equivalently, we maximize the lower bound on the marginal likelihood (evidence)

L = E_{q(f,u)}[log (p(y, f, u) / q(f, u))],   (4)

where p(y, f, u) is given in (1). We follow Hensman et al. (2013) and choose a variational posterior

q(f, u) = p(f|u; X, Z) q(u),   (5)

where q(u) = N(u|m, S). Since both terms in the variational posterior are Gaussian, we can analytically marginalize u, which yields

q(f|m, S; X, Z) = ∫ p(f|u; X, Z) q(u) du = N(f|μ̃, Σ̃).   (6)

Similar to (2) and (3), the expressions for μ̃ and Σ̃ can be written as mean and covariance functions of the inputs. To emphasize this point we define

μ_{m,Z}(x_i) = m(x_i) + α(x_i)^T (m − m(Z)),   (7)
Σ_{S,Z}(x_i, x_j) = k(x_i, x_j) − α(x_i)^T (k(Z, Z) − S) α(x_j).   (8)

With these functions we define [μ̃]_i = μ_{m,Z}(x_i) and [Σ̃]_{ij} = Σ_{S,Z}(x_i, x_j). We have written the mean and covariance in this way to make the following observation clear.

Remark 1. The f_i marginals of the variational posterior (6) depend only on the corresponding inputs x_i. Therefore, we can write the ith marginal of q(f|m, S; X, Z) as

q(f_i|m, S; X, Z) = q(f_i|m, S; x_i, Z) = N(f_i | μ_{m,Z}(x_i), Σ_{S,Z}(x_i, x_i)).
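The marginal expressions (7) and (8) are simple to compute. Below is a minimal numpy sketch, not the authors' code: the RBF kernel, the jitter term, and the function names are our illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sgp_marginals(Xs, Z, m, S, mean_fn=lambda x: np.zeros(len(x))):
    """Marginal mean and variance of q(f_i) at each row of Xs, as in
    eqs. (7)-(8): each marginal depends only on its own input x_i."""
    M = len(Z)
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(M)          # jitter for stability
    alpha = np.linalg.solve(Kzz, rbf(Z, Xs))    # alpha(x_i) as columns, (M, N*)
    mu = mean_fn(Xs) + alpha.T @ (m - mean_fn(Z))
    # diagonal of k(x, x) - alpha^T (Kzz - S) alpha
    var = rbf(Xs, Xs).diagonal() - np.einsum('mi,mn,ni->i', alpha, Kzz - S, alpha)
    return mu, var
```

As a sanity check on the algebra, setting m = m(Z) and S = k(Z, Z) recovers the prior marginals, since the correction terms in (7) and (8) vanish.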
Using our variational posterior (5), the lower bound (4) simplifies considerably since (a) the conditionals p(f|u; X, Z) inside the logarithm cancel and (b) the likelihood expectation requires only the variational marginals. We obtain

L = Σ_{i=1}^N E_{q(f_i|m,S;x_i,Z)}[log p(y_i|f_i)] − KL[q(u)||p(u)].   (10)

The final (univariate) expectation of the log-likelihood can be computed analytically in some cases, with quadrature (Hensman et al., 2015), or through Monte Carlo sampling (Bonilla et al., 2016; Gal et al., 2015). Since the bound is a sum over the data, an unbiased estimator can be obtained through minibatch subsampling. This permits inference on large datasets. In this work we refer to a GP with this method of inference as a sparse GP (SGP). The variational parameters (Z, m and S) are found by maximizing the lower bound (10). This maximization is guaranteed to converge since L is a lower bound to the marginal likelihood p(y|X). We can also learn model parameters (hyperparameters of the kernel or likelihood) through the maximization of this bound, though we should exercise caution as this introduces bias, because the bound is not uniformly tight for all settings of hyperparameters (Turner and Sahani, 2011).

So far we have considered scalar outputs y_i ∈ R. In the case of D-dimensional outputs y_i ∈ R^D we define Y as the matrix whose ith row contains the ith observation y_i. Similarly, we define F and U. If each output is an independent GP we have the GP prior ∏_{d=1}^D p(F_d|U_d; X, Z) p(U_d; Z), which we abbreviate as p(F|U; X, Z) p(U; Z) to lighten the notation.

2.2 Deep Gaussian Processes

A DGP (Damianou and Lawrence, 2013) defines a prior recursively on vector-valued stochastic functions F^1, ..., F^L. The prior on each function F^l is an independent GP in each dimension, with input locations given by the noisy corruptions of the function values at the layer below: the outputs of the GPs at layer l are F^l_d, and the corresponding inputs are F^{l−1}.
The noise between layers is assumed i.i.d. Gaussian. Most presentations of DGPs (see, e.g., Damianou and Lawrence, 2013; Bui et al., 2016) explicitly parameterize the noisy corruptions separately from the outputs of each GP. Our method of inference does not require us to parameterize these variables separately. For notational convenience, we therefore absorb the noise into the kernel k_noisy(x_i, x_j) = k(x_i, x_j) + σ_l² δ_ij, where δ_ij is the Kronecker delta and σ_l² is the noise variance between layers. We use D_l for the dimension of the outputs at layer l. As with the single-layer case, we have inducing locations Z^{l−1} at each layer and inducing function values U^l for each dimension. An instantiation of the process has the joint density

p(Y, {F^l, U^l}_{l=1}^L) = ∏_{i=1}^N p(y_i|f^L_i) ∏_{l=1}^L p(F^l|U^l; F^{l−1}, Z^{l−1}) p(U^l; Z^{l−1}),   (11)

where the first product is the likelihood, the second is the DGP prior, and we define F^0 = X. Inference in this model is intractable, so approximations must be used. The original DGP presentation (Damianou and Lawrence, 2013) uses a variational posterior that maintains the exact model conditioned on U^l, but further forces the inputs to each layer to be independent from the outputs of the previous layer. The noisy corruptions are parameterized separately, and the variational distribution over these variables is a fully factorized Gaussian. This approach requires 2N(D_1 + ... + D_{L−1}) variational parameters but admits a tractable lower bound on the log marginal likelihood if the kernel is of a particular form. A further problem of this bound is that the density over the outputs is simply a single-layer GP with independent Gaussian inputs. Since the posterior loses all the correlations between layers, it cannot express the complexity of the full model and so is likely to underestimate the variance. In practice, we found that optimizing the objective in Damianou and Lawrence (2013) results in layers being 'turned off' (the signal-to-noise ratio tends to zero).
In contrast, our posterior retains the full conditional structure of the true model. We sacrifice analytical tractability, but due to the sparse posterior within each layer we can sample the bound using univariate Gaussians.

3 Doubly Stochastic Variational Inference

In this section, we propose a novel variational posterior and demonstrate a method to obtain unbiased samples from the resulting lower bound. The difficulty with inferring the DGP model is that there are complex correlations both within and between layers. Our approach is straightforward: we use sparse variational inference to simplify the correlations within layers, but we maintain the correlations between layers. The resulting variational lower bound cannot be evaluated analytically, but we can draw unbiased samples efficiently using univariate Gaussians. We optimize our bound stochastically. We propose a posterior with three properties. Firstly, the posterior maintains the exact model, conditioned on U^l. Secondly, we assume that the posterior distribution of {U^l}_{l=1}^L is factorized between layers (and dimensions, but we suppress this from the notation). Therefore, our posterior takes the simple factorized form

q({F^l, U^l}_{l=1}^L) = ∏_{l=1}^L p(F^l|U^l; F^{l−1}, Z^{l−1}) q(U^l).   (12)

Thirdly, and to complete the specification of the posterior, we take q(U^l) to be a Gaussian with mean m^l and variance S^l. A similar posterior was used in Hensman and Lawrence (2014) and Dai et al. (2016), but each of these works contained additional terms for the noisy corruptions at each layer. As in the single-layer SGP, we can marginalize the inducing variables from each layer analytically. After this marginalization we obtain the following distribution, which is fully coupled within and between layers:

q({F^l}_{l=1}^L) = ∏_{l=1}^L q(F^l|m^l, S^l; F^{l−1}, Z^{l−1}) = ∏_{l=1}^L N(F^l|μ̃^l, Σ̃^l).   (13)

Here, q(F^l|m^l, S^l; F^{l−1}, Z^{l−1}) is as in (6).
Specifically, it is a Gaussian with mean μ̃^l and variance Σ̃^l, where [μ̃^l]_i = μ_{m^l,Z^{l−1}}(f^{l−1}_i) and [Σ̃^l]_{ij} = Σ_{S^l,Z^{l−1}}(f^{l−1}_i, f^{l−1}_j) (recall that f^l_i denotes the ith row of F^l). Since (12) is a product of terms that each take the form of the SGP variational posterior (5), we again have the property that within each layer the marginals depend only on the corresponding inputs. In particular, f^L_i depends only on f^{L−1}_i, which in turn depends only on f^{L−2}_i, and so on. Therefore, we have the following property:

Remark 2. The ith marginal of the final layer of the variational DGP posterior (12) depends only on the ith marginals of all the other layers. That is,

q(f^L_i) = ∫ q(f^L_i|m^L, S^L; f^{L−1}_i, Z^{L−1}) ∏_{l=1}^{L−1} q(f^l_i|m^l, S^l; f^{l−1}_i, Z^{l−1}) df^l_i.   (14)

The consequence of this property is that taking a sample from q(f^L_i) is straightforward, and furthermore we can perform the sampling using only univariate unit Gaussians via the 'reparameterization trick' (Rezende et al., 2014; Kingma et al., 2015). Specifically, we first sample ε^l_i ~ N(0, I_{D_l}) and then recursively draw the sampled variables f̂^l_i ~ q(f^l_i|m^l, S^l; f̂^{l−1}_i, Z^{l−1}) for l = 1, ..., L − 1 as

f̂^l_i = μ_{m^l,Z^{l−1}}(f̂^{l−1}_i) + ε^l_i ⊙ sqrt(Σ_{S^l,Z^{l−1}}(f̂^{l−1}_i, f̂^{l−1}_i)),   (15)

where the terms in (15) are D_l-dimensional and the square root is element-wise. For the first layer we define f̂^0_i := x_i.

Efficient computation of the evidence lower bound. The evidence lower bound of the DGP is

L_DGP = E_{q({F^l,U^l}_{l=1}^L)}[log (p(Y, {F^l, U^l}_{l=1}^L) / q({F^l, U^l}_{l=1}^L))].   (16)

Using (11) and (12) for the corresponding expressions in (16), we obtain after some re-arranging

L_DGP = Σ_{i=1}^N E_{q(f^L_i)}[log p(y_i|f^L_i)] − Σ_{l=1}^L KL[q(U^l)||p(U^l; Z^{l−1})],   (17)

where we exploited the exact marginalization of the inducing variables (13) and the property of the marginals of the final layer (14). A detailed derivation is provided in the supplementary material. This bound has complexity O(NM²(D_1 + ... + D_L)) to evaluate.
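The recursive sampling in (15) is a few lines of code once each layer's marginals are available. The sketch below is illustrative and assumes each layer is represented by a function returning the marginal mean and variance of (7)-(8); this interface is our assumption, not the paper's implementation.

```python
import numpy as np

def sample_dgp_output(x, layer_marginals, rng):
    """Propagate one input through the layers with the reparameterization
    trick: at each layer draw eps ~ N(0, I) and set
    f = mu(f_prev) + eps * sqrt(var(f_prev)), as in eq. (15)."""
    f = np.asarray(x, dtype=float)
    for marginal in layer_marginals:          # one (mu, var) function per layer
        mu, var = marginal(f)
        eps = rng.normal(size=np.shape(mu))   # univariate unit Gaussians
        f = mu + eps * np.sqrt(var)           # element-wise square root
    return f
```

Because only marginal means and variances are needed, no full covariance matrix within a layer is ever formed; subsampling the indices i in minibatches then provides the second source of stochasticity.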
We evaluate the bound (17) approximately using two sources of stochasticity. Firstly, we approximate the expectation with a Monte Carlo sample from the variational posterior (14), which we compute according to (15). Since we have parameterized this sampling procedure in terms of isotropic Gaussians, we can compute unbiased gradients of the bound (17). Secondly, since the bound factorizes over the data, we achieve scalability through sub-sampling the data in minibatches. Both stochastic approximations are unbiased.

Predictions. To predict, we sample from the variational posterior, changing the input locations to the test location x_*. We denote the function values at the test location as f^l_*. To obtain the density over f^L_* we use the Gaussian mixture

q(f^L_*) ≈ (1/S) Σ_{s=1}^S q(f^L_*|m^L, S^L; f^{(s)L−1}_*, Z^{L−1}),   (18)

where we draw the S samples f^{(s)L−1}_* using (15), but replacing the inputs x_i with the test location x_*.

Further Model Details. While GPs are often used with a zero mean function, we consider such a choice inappropriate for the inner layers of a DGP. Using a zero mean function causes difficulties with the DGP prior, as each GP mapping is highly non-injective. This effect was analyzed in Duvenaud et al. (2014), where the authors suggest adding the original input X to each layer. Instead, we consider an alternative approach and include a linear mean function m(X) = XW for all the inner layers. If the input and output dimensions are the same we use the identity matrix for W; otherwise we compute the SVD of the data and use the top D_l left eigenvectors sorted by singular value (i.e., the PCA mapping). With these choices it is effective to initialize all inducing mean values to m^l = 0. This choice of mean function is partly inspired by the 'skip layer' approach of the ResNet architecture (He et al., 2016).
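The mean-function initialization described above can be sketched as below. This is a hypothetical helper, not the authors' code; in particular, we interpret "top left eigenvectors" as the top principal directions of X (its leading right singular vectors), which give the PCA projection.

```python
import numpy as np

def linear_mean_weights(X, d_out):
    """Weights W for the inner-layer mean m(X) = X @ W: the identity when
    input and output dimensions match, otherwise the top-d_out principal
    directions of X (via SVD), so that XW is the PCA projection."""
    d_in = X.shape[1]
    if d_in == d_out:
        return np.eye(d_in)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d_out].T                       # shape (d_in, d_out)
```

The returned W has orthonormal columns, so the linear mean preserves scale while reducing (or keeping) the dimension between layers.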
Figure 1: Regression test log-likelihood results on benchmark datasets (boston, concrete, energy, kin8nm, naval, power, protein, wine_red), comparing Linear, SGP, SGP 500, AEPDGP 2, DGP 2–5, and PBP. Higher (to the right) is better. The sparse GP with the same number of inducing points is highlighted as a baseline.

4 Results

We evaluate our inference method on a number of benchmark regression and classification datasets. We stress that we are interested in models that can operate in both the small and large data regimes with little or no hand tuning. All our experiments were run with exactly the same hyperparameters and initializations (see the supplementary material for details). We use min(30, D_0) dimensions for all the inner layers of our DGP models, where D_0 is the input dimension, and the RBF kernel for all layers.

Regression Benchmarks. We compare our approach to other state-of-the-art methods on 8 standard small to medium-sized UCI benchmark datasets. Following common practice (e.g., Hernández-Lobato and Adams, 2015), we use 20-fold cross validation with a 10% randomly selected held-out test set, and scale the inputs and outputs to zero mean and unit standard deviation within the training set (we restore the output scaling for evaluation). While we could use any kernel, we choose the RBF kernel with a lengthscale for each dimension for direct comparison with Bui et al. (2016). The test log-likelihood results are shown in Fig. 1.
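The standardization protocol matters when reproducing numbers like these: statistics must come from the training split only, and the output scale must be restored at evaluation. A minimal sketch of that bookkeeping (names are illustrative):

```python
import numpy as np

def standardize(X_tr, y_tr, X_te):
    """Scale inputs and outputs to zero mean / unit std using *training*
    statistics only, returning the output scale so that test predictions
    (and log-likelihoods) can be mapped back to the original units."""
    x_mu, x_sd = X_tr.mean(0), X_tr.std(0) + 1e-12   # guard against zero std
    y_mu, y_sd = y_tr.mean(), y_tr.std()
    return (X_tr - x_mu) / x_sd, (y_tr - y_mu) / y_sd, (X_te - x_mu) / x_sd, y_mu, y_sd
```

A prediction μ̂ made in the scaled space is restored as y_sd·μ̂ + y_mu, and a test log-density in the original units picks up a −log y_sd correction.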
We compare our models with 2, 3, 4 and 5 layers (DGP 2–5), each with 100 inducing points, with (stochastically optimized) sparse GPs (Hensman et al., 2013) with 100 and 500 inducing points (SGP, SGP 500). We compare also to a two-layer Bayesian neural network with ReLU activations and 50 hidden units (100 for protein and year), with inference by probabilistic backpropagation (Hernández-Lobato and Adams, 2015) (PBP). These results are taken from Hernández-Lobato and Adams (2015), where PBP was found to be the most effective of several methods for inferring Bayesian neural networks. We compare also with a DGP model using approximate expectation propagation (EP) for inference (Bui et al., 2016). Using the authors' code (https://github.com/thangbui/deepGP_approxEP) we ran a DGP model with 1 hidden layer (AEPDGP 2). We used the input dimension for the hidden layer for a fair comparison with our models; note, however, that in Bui et al. (2016) the inner layers were 2D, so the results we obtained are not directly comparable to those reported there. We found the time requirements to train a 3-layer model with this inference prohibitive. Plots for test RMSE and further results tables can be found in the supplementary material.

On five of the eight datasets, the deepest DGP model is the best. On 'wine', 'naval' and 'boston' our DGP recovers the single-layer GP, which is not surprising: 'boston' is very small, 'wine' is near-linear (note the proximity of the linear model and the scale), and 'naval' is characterized by extremely high test likelihoods (the RMSE on this dataset is less than 0.001 for all SGP and DGP models), i.e., it is a very 'easy' dataset for a GP. The Bayesian network is not better than the sparse GP on any dataset and is significantly worse on six.
The approximate EP inference for the DGP models is also not competitive with the sparse GP on many of the datasets, but this may be because the initializations were designed for lower-dimensional hidden layers than we used. Our results on these small and medium-sized datasets confirm that overfitting is not observed with the DGP model, and that the DGP is never worse and often better than the single-layer GP. We note in particular that on the 'power', 'protein' and 'kin8nm' datasets all the DGP models outperform the SGP with five times the number of inducing points.

Rectangles Benchmark. We use the Rectangles-Images dataset, which is specifically designed to distinguish deep and shallow architectures. The dataset consists of 12,000 training and 50,000 testing examples of size 28 × 28, where each image contains a (non-square) rectangular image against a different background image. The task is to determine which of the height and width is greatest. We run 2, 3 and 4 layer DGP models, and observe increasing performance with each layer. Table 1 contains the results. Note that the 500 inducing point single-layer GP is significantly less effective than any of the deep models. Our 4-layer model achieves 77.9% classification accuracy, exceeding the best result of 77.5% reported in Larochelle et al. (2007) with a three-layer deep belief network. We also exceed the best result of 76.4% reported in Krauth et al. (2016) using a sparse GP with an arcsine kernel, a leave-one-out objective, and 1000 inducing points.
Table 1: Results on the Rectangles-Images dataset (N = 12000, D = 784). SGP and SGP 500 are single-layer GPs; DGP 2–4 are this work; DBN-3 and SVM are from Larochelle et al. (2007); SGP 1000 is from Krauth et al. (2016).

                  SGP     SGP 500  DGP 2   DGP 3   DGP 4   DBN-3   SVM     SGP 1000
Accuracy (%)      76.1    76.4     77.3    77.8    77.9    77.5    76.96   76.4
Likelihood       −0.493  −0.485   −0.475  −0.460  −0.460   –       –      −0.478

Large-Scale Regression. To demonstrate our method on a large-scale regression problem we use the UCI 'year' dataset and the 'airline' dataset, which has been commonly used by the large-scale GP community. For the 'airline' dataset we take the first 700K points for training and the next 100K for testing. We use a random 10% split for the 'year' dataset. Results are shown in Table 2, with the log-likelihoods reported in the supplementary material. On both datasets we see that the DGP models perform better with increased depth, significantly improving in both log-likelihood and RMSE over the single-layer model, even with 500 inducing points.

Table 2: Regression test RMSE results for large datasets.

           N       D    SGP    SGP 500  DGP 2  DGP 3  DGP 4  DGP 5
year     463810   90    10.67  9.89     9.58   8.98   8.93   8.87
airline  700K      8    25.6   25.1     24.6   24.3   24.2   24.1
taxi     1B        9    337.5  330.7    281.4  270.4  268.0  266.4

MNIST Multiclass Classification. We apply the DGP with 2 and 3 layers to the MNIST multiclass classification problem. We use the robust-max multiclass likelihood (Hernández-Lobato et al., 2011) and the full unprocessed data with the standard training/test split of 60K/10K. The single-layer GP with 100 inducing points achieves a test accuracy of 97.48%, and this is increased to 98.06% and 98.11% with two- and three-layer DGPs, respectively. The 500 inducing point single-layer model achieved 97.9% in our implementation, though slightly higher results for this model have previously been reported: 98.1% (Hensman et al., 2013), and 98.4% (Krauth et al., 2016) for the same model with 1000 inducing points.
We attribute this difference to different hyperparameter initializations and training schedules, and stress that we use exactly the same initialization and learning schedule for all our models. The only other DGP result in the literature on this dataset is 94.24% (Wang et al., 2016) for a two-layer model with a two-dimensional latent space. (The Rectangles-Images dataset is available at http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/RectanglesData.)

Large-Scale Classification. We use the HIGGS (N = 11M, D = 28) and SUSY (N = 5.5M, D = 18) datasets for large-scale binary classification. These datasets have been constructed from Monte Carlo physics simulations to detect the presence of the Higgs boson and super-symmetry (Baldi et al., 2014). We take a 10% random sample for testing and use the rest for training. We use the AUC metric for comparison with Baldi et al. (2014). Our DGP models are the highest performing on the SUSY dataset (AUC of 0.877 for all the DGP models) compared to shallow neural networks (NN, 0.875), deep neural networks (DNN, 0.876) and boosted decision trees (BDT, 0.863). On the HIGGS dataset we see a steady improvement with additional layers (0.830, 0.837, 0.841 and 0.846 for DGP 2–5, respectively). On this dataset the DGP models exceed the performance of BDT (0.810), NN (0.816) and both single-layer GP models, SGP (0.785) and SGP 500 (0.794). The best performing model on this dataset is a 5-layer DNN (0.885). Full results are reported in the supplementary material.

Table 3: Typical computation time in seconds for a single gradient step.

          CPU   GPU
SGP       0.14  0.018
SGP 500   1.71  0.11
DGP 2     0.36  0.030
DGP 3     0.49  0.045
DGP 4     0.65  0.056
DGP 5     0.87  0.069

Massive-Scale Regression. To demonstrate the efficacy of our model on massive data we use the New York City yellow taxi trip dataset of 1.21 billion journeys (http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml). Following Peng et al.
(2017) we use 9 features: time of day; day of the week; day of the month; month; pick-up latitude and longitude; drop-off latitude and longitude; and travel distance. The target is to predict the journey time. We randomly select 1B (10^9) examples for training and use 1M examples for testing, and we scale both inputs and outputs to zero mean and unit standard deviation in the training data. We discard journeys that are shorter than 10 s or longer than 5 h, or that start/end outside the New York region, which we estimate as the area within a squared distance of 5° of the center of New York. The test RMSE results are in the bottom row of Table 2 and the test log-likelihoods are in the supplementary material. We note the significant jump in performance from the single-layer models to the DGP. As with all the large-scale experiments, we see a consistent improvement with extra layers, but on this dataset the improvement is particularly striking (DGP 5 achieves a 21% reduction in RMSE compared to SGP).

5 Related Work

The first example of the outputs of a GP used as the inputs to another GP can be found in Lawrence and Moore (2007), where a MAP approximation was used for inference. The seminal work of Titsias and Lawrence (2010) demonstrated how sparse variational inference could be used to propagate Gaussian inputs through a GP with a Gaussian likelihood. This approach was extended in Damianou et al. (2011) to perform approximate inference in the model of Lawrence and Moore (2007), and shortly afterwards in a similar model by Lázaro-Gredilla (2012), which also included a linear mean function. The key idea of both these approaches is the factorization of the variational posterior between layers. A more general model (flexible in depth and in the dimensions of the hidden layers) introduced the term 'DGP' (Damianou and Lawrence, 2013) and used a posterior that also factorized between layers. These approaches require a number of variational parameters that increases linearly with the number of data points.
For high-dimensional observations, it is possible to amortize the cost of this optimization with an auxiliary model. This approach is pursued in Dai et al. (2016), and with a recurrent architecture in Mattos et al. (2016). Another approach to inference in the exact model was presented in Hensman and Lawrence (2014), where a sparse approximation was used within layers for the GP outputs, similar to Damianou and Lawrence (2013), but with a projected distribution over the inputs to the next layer. The particular form of the variational distribution was chosen to admit a tractable bound, but imposes a constraint on the flexibility. An alternative approach is to modify the DGP prior directly and perform inference in a parametric model. This is achieved in Bui et al. (2016) with an inducing point approximation within each layer, and in Cutajar et al. (2017) with an approximation to the spectral density of the kernel. Both approaches then apply additional approximations to achieve tractable inference. In Bui et al. (2016), an approximation to expectation propagation is used, with additional Gaussian approximations to the log partition function to propagate uncertainty through the non-linear GP mapping. In Cutajar et al. (2017) a fully factorized variational approximation is used for the spectral components. Both these approaches require specific kernels: in Bui et al. (2016) the kernel must have analytic expectations under a Gaussian, and in Cutajar et al. (2017) the kernel must have an analytic spectral density. Vafa (2016) also uses the same initial approximation as Bui et al. (2016), but applies MAP inference for the inducing points, such that the uncertainty propagated through the layers only represents the quality of the approximation. In the limit of infinitely many inducing points this approach recovers a deterministic radial basis function network. A particle method is used in Wang et al.

5: http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml
(2016), again employing an online version of the sparse approximation used by Bui et al. (2016) within each layer. Similarly to our approach, in Wang et al. (2016) samples are taken through the conditional model, but unlike us, they then use a point estimate for the latent variables. It is not clear how this approach propagates uncertainty through the layers, since the GPs at each layer have point-estimate inputs and outputs. A pathology with the DGP with a zero mean function for the inner layers was identified in Duvenaud et al. (2014), where a suggestion was made to concatenate the original inputs at each layer. This approach is followed in Dai et al. (2016) and Cutajar et al. (2017). The linear mean function was originally used by Lázaro-Gredilla (2012), though in the special case of a two-layer DGP with a 1D hidden layer. To the best of our knowledge there has been no previous attempt to use a linear mean function for all inner layers.

6 Discussion

Our experiments show that on a wide range of tasks the DGP model with our doubly stochastic inference is both effective and scalable. Crucially, we observe that on the small datasets the DGP does not overfit, while on the large datasets additional layers generally increase performance and never degrade it. In particular, we note that the largest gain with increasing layers is achieved on the largest dataset (the taxi dataset, with 1B points). We note also that on all the large-scale experiments the SGP 500 model is outperformed by all the DGP models. Therefore, for the same computational budget, increasing the number of layers can be significantly more effective than increasing the accuracy of approximate inference in the single-layer model. Other than the additional computation time, which is fairly modest (see Table 3), we do not see downsides to using a DGP over a single-layer GP, but substantial advantages.
While we have considered simple kernels and black-box applications, any domain-specific kernel could be used in any layer. This is in contrast to other methods (Damianou and Lawrence, 2013; Bui et al., 2016; Cutajar et al., 2017) that require specific kernels and intricate implementations. Our implementation is simple (< 200 lines), publicly available,6 and is integrated with GPflow (Matthews et al., 2017), an open-source GP framework built on top of TensorFlow (Abadi et al., 2015).

7 Conclusion

We have presented a new method for inference in Deep Gaussian Process (DGP) models. With our inference we have shown that the DGP can be used on a range of regression and classification tasks with no hand-tuning. Our results show that in practice the DGP always exceeds or matches the performance of a single-layer GP. Further, we have shown that the DGP often exceeds the single layer significantly, even when the quality of the approximation to the single layer is improved. Our approach is highly scalable and benefits from GPU acceleration. The most significant limitation of our approach is in dealing with high-dimensional inner layers. We used a linear mean function for the high-dimensional datasets but left this mean function fixed, since optimizing its parameters would go against our non-parametric paradigm. It would be possible to treat this mapping probabilistically, following the work of Titsias and Lázaro-Gredilla (2013).

Acknowledgments

We have greatly appreciated valuable discussions with James Hensman and Steindor Saemundsson in the preparation of this work. We thank Vincent Dutordoir and anonymous reviewers for helpful feedback on the manuscript. We are grateful for a Microsoft Azure Scholarship and support through a Google Faculty Research Award to Marc Deisenroth.

6: https://github.com/ICL-SML/Doubly-Stochastic-DGP

References

M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A.
Harp, G. Irving, M. Isard, Y. Jia, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, J. Shlens, B. Steiner, I. Sutskever, P. Tucker, V. Vanhoucke, V. Vasudevan, O. Vinyals, P. Warden, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint:1603.04467, 2015.

P. Baldi, P. Sadowski, and D. Whiteson. Searching for Exotic Particles in High-Energy Physics with Deep Learning. Nature Communications, 2014.

E. V. Bonilla, K. Krauth, and A. Dezfouli. Generic Inference in Latent Gaussian Process Models. arXiv preprint:1609.00577, 2016.

F.-X. Briol, C. J. Oates, M. Girolami, M. A. Osborne, and D. Sejdinovic. Probabilistic Integration: A Role for Statisticians in Numerical Analysis? arXiv preprint:1512.00933, 2015.

T. D. Bui, D. Hernández-Lobato, Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Deep Gaussian Processes for Regression using Approximate Expectation Propagation. International Conference on Machine Learning, 2016.

R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian Processes for Regression. IEEE International Joint Conference on Neural Networks, 2016.

K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random Feature Expansions for Deep Gaussian Processes. International Conference on Machine Learning, 2017.

M. Cutler and J. P. How. Efficient Reinforcement Learning for Robots using Informative Simulated Priors. IEEE International Conference on Robotics and Automation, 2015.

Z. Dai, A. Damianou, J. González, and N. Lawrence. Variational Auto-encoded Deep Gaussian Processes. International Conference on Learning Representations, 2016.

A. C. Damianou and N. D. Lawrence. Deep Gaussian Processes. International Conference on Artificial Intelligence and Statistics, 2013.

A. C. Damianou, M. K. Titsias, and N. D. Lawrence. Variational Gaussian Process Dynamical Systems. Advances in Neural Information Processing Systems, 2011.

M. P. Deisenroth and C.
E. Rasmussen. PILCO: A Model-Based and Data-Efficient Approach to Policy Search. International Conference on Machine Learning, 2011.

P. J. Diggle and P. J. Ribeiro. Model-based Geostatistics. Springer, 2007.

D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure Discovery in Nonparametric Regression through Compositional Kernel Search. International Conference on Machine Learning, 2013.

D. Duvenaud, O. Rippel, R. P. Adams, and Z. Ghahramani. Avoiding Pathologies in Very Deep Networks. Artificial Intelligence and Statistics, 2014.

Y. Gal, Y. Chen, and Z. Ghahramani. Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data. International Conference on Machine Learning, 2015.

R. Garnett, M. Osborne, and S. Roberts. Sequential Bayesian Prediction in the Presence of Changepoints. International Conference on Machine Learning, 2009.

C. Guestrin, A. Krause, and A. P. Singh. Near-optimal Sensor Placements in Gaussian Processes. International Conference on Machine Learning, 2005.

K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2016.

J. Hensman and N. D. Lawrence. Nested Variational Compression in Deep Gaussian Processes. arXiv preprint:1412.1370, 2014.

J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian Processes for Big Data. Uncertainty in Artificial Intelligence, 2013.

J. Hensman, A. Matthews, M. Fillipone, and Z. Ghahramani. MCMC for Variationally Sparse Gaussian Processes. Advances in Neural Information Processing Systems, 2015.

D. Hernández-Lobato, J. M. Hernández-Lobato, and P. Dupont. Robust Multi-class Gaussian Process Classification. Advances in Neural Information Processing Systems, 2011.

J. M. Hernández-Lobato and R. Adams. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks. International Conference on Machine Learning, 2015.

D. P. Kingma, T. Salimans, and M. Welling.
Variational Dropout and the Local Reparameterization Trick. Advances in Neural Information Processing Systems, 2015.

J. Ko and D. Fox. GP-BayesFilters: Bayesian Filtering using Gaussian Process Prediction and Observation Models. IEEE Intelligent Robots and Systems, 2008.

K. Krauth, E. V. Bonilla, K. Cutajar, and M. Filippone. AutoGP: Exploring the Capabilities and Limitations of Gaussian Process Models. arXiv preprint:1610.05392, 2016.

H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. International Conference on Machine Learning, 2007.

N. D. Lawrence and A. J. Moore. Hierarchical Gaussian Process Latent Variable Models. International Conference on Machine Learning, 2007.

M. Lázaro-Gredilla. Bayesian Warped Gaussian Processes. Advances in Neural Information Processing Systems, 2012.

D. J. C. MacKay. Comparison of Approximate Methods for Handling Hyperparameters. Neural Computation, 1999.

A. G. Matthews, M. Van Der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 2017.

A. G. d. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani. On Sparse Variational Methods and The Kullback-Leibler Divergence Between Stochastic Processes. Artificial Intelligence and Statistics, 2016.

C. L. C. Mattos, Z. Dai, A. Damianou, J. Forth, G. A. Barreto, and N. D. Lawrence. Recurrent Gaussian Processes. International Conference on Learning Representations, 2016.

H. Peng, S. Zhe, and Y. Qi. Asynchronous Distributed Variational Gaussian Processes. arXiv preprint:1704.06735, 2017.

C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. International Conference on Machine Learning, 2014.

J. Snoek, H. Larochelle, and R. P. Adams.
Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems, 2012.

M. K. Titsias and N. D. Lawrence. Bayesian Gaussian Process Latent Variable Model. International Conference on Artificial Intelligence and Statistics, 2010.

M. K. Titsias and M. Lázaro-Gredilla. Variational Inference for Mahalanobis Distance Metrics in Gaussian Process Regression. Advances in Neural Information Processing Systems, 2013.

R. Turner and M. Sahani. Two Problems with Variational Expectation Maximisation for Time-Series Models. Bayesian Time Series Models, 2011.

K. Vafa. Training Deep Gaussian Processes with Sampling. Advances in Approximate Bayesian Inference Workshop, Neural Information Processing Systems, 2016.

Y. Wang, M. Brubaker, B. Chaib-Draa, and R. Urtasun. Sequential Inference for Deep Gaussian Process. Artificial Intelligence and Statistics, 2016.

A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Deep Kernel Learning. Artificial Intelligence and Statistics, 2016.
From Parity to Preference-based Notions of Fairness in Classification

Muhammad Bilal Zafar, MPI-SWS, mzafar@mpi-sws.org
Isabel Valera, MPI-IS, isabel.valera@tue.mpg.de
Manuel Gomez Rodriguez, MPI-SWS, manuelgr@mpi-sws.org
Krishna P. Gummadi, MPI-SWS, gummadi@mpi-sws.org
Adrian Weller, University of Cambridge & Alan Turing Institute, aw665@cam.ac.uk

Abstract

The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness—given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.

1 Introduction

As machine learning is increasingly being used to automate decision making in domains that affect human lives (e.g., credit ratings, housing allocation, recidivism risk prediction), there are growing concerns about the potential for unfairness in such algorithmic decisions [23, 25].
A flurry of recent research on fair learning has focused on defining appropriate notions of fairness and then designing mechanisms to ensure fairness in automated decision making [12, 14, 18, 19, 20, 21, 28, 32, 33, 34]. Existing notions of fairness in the machine learning literature are largely inspired by the concept of discrimination in social sciences and law. These notions call for parity (i.e., equality) in treatment, in impact, or both. To ensure parity in treatment (or treatment parity), decision making systems need to avoid using users' sensitive attribute information, i.e., avoid using the membership information in socially salient groups (e.g., gender, race), which are protected by anti-discrimination laws [4, 10]. As a result, the use of group-conditional decision making systems is often prohibited. To ensure parity in impact (or impact parity), decision making systems need to avoid disparity in the fraction of users belonging to different sensitive attribute groups (e.g., men, women) that receive beneficial decision outcomes. A number of learning mechanisms have been proposed to achieve parity in treatment [24],

An open-source code implementation of our scheme is available at: http://fate-computing.mpi-sws.org/

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1 panel annotations, left to right: Acc 0.83, benefit 0% (M) / 67% (W); Acc 0.72, benefit 22% (M) / 22% (W); Acc 1.00, benefit 33% (M) / 67% (W).]

Figure 1: A fictitious decision making scenario involving two groups: men (M) and women (W). Feature f1 (x-axis) is highly predictive for women whereas f2 (y-axis) is highly predictive for men. Green (red) quadrants denote the positive (negative) class. Within each quadrant, the points are distributed uniformly and the numbers in parentheses denote the number of subjects in that quadrant.
The left panel shows the optimal classifier satisfying parity in treatment. This classifier leads to all the men getting classified as negative. The middle panel shows the optimal classifier satisfying parity in impact (in addition to parity in treatment). This classifier achieves impact parity by misclassifying women from positive class into negative class, and in the process, incurs a significant cost in terms of accuracy. The right panel shows a classifier consisting of group-conditional classifiers for men (purple) and women (blue). Both the classifiers satisfy the preferred treatment criterion since for each group, adopting the other group’s classifier would lead to a smaller fraction of beneficial outcomes. Additionally, this group-conditional classifier is also a preferred impact classifier since both groups get more benefit as compared to the impact parity classifier. The overall accuracy is better than the parity classifiers. parity in impact [7, 18, 21] or both [12, 14, 17, 20, 32, 33, 34]. However, these mechanisms pay a significant cost in terms of the accuracy (or utility) of their predictions. In fact, there exist some inherent tradeoffs (both theoretical and empirical) between achieving high prediction accuracy and satisfying treatment and / or impact parity [9, 11, 15, 22]. In this work, we introduce, formalize and evaluate new notions of fairness that are inspired by the concepts of fair division and envy-freeness in economics and game theory [5, 26, 31]. Our work is motivated by the observation that, in certain decision making scenarios, the existing parity-based fairness notions may be too stringent, precluding more accurate decisions, which may also be desired by every sensitive attribute group. To relax these parity-based notions, we introduce the concept of a user group’s preference for being assigned one set of decision outcomes over another. 
Given the choice between various sets of decision outcomes, any group of users would collectively prefer the set that contains the largest fraction (or the greatest number) of beneficial decision outcomes for that group.1 More specifically, our new preference-based notions of fairness, which we formally define in the next section, use the concept of user group's preference as follows:

— From Parity Treatment to Preferred Treatment: To offer preferred treatment, a decision making system should ensure that every sensitive attribute group (e.g., men and women) prefers the set of decisions they receive over the set of decisions they would have received had they collectively presented themselves to the system as members of a different sensitive group. The preferred treatment criterion represents a relaxation of treatment parity. That is, every decision making system that achieves treatment parity also satisfies the preferred treatment condition, which implies (in theory) that the optimal decision accuracy that can be achieved under the preferred treatment condition is at least as high as the one achieved under treatment parity. Additionally, preferred treatment allows group-conditional decision making (not allowed by treatment parity), which is necessary to achieve high decision accuracy in scenarios when the predictive power of features varies greatly between different sensitive user groups [13], as shown in Figure 1. While preferred treatment is a looser notion of fairness than treatment parity, it retains a core fairness property embodied in treatment parity, namely, envy-freeness at the level of user groups. Under preferred treatment, no group of users (e.g., men or women, blacks or whites) would feel that they would be collectively better off by switching their group membership (e.g., gender, race). Thus,

1: Although it is quite possible that certain individuals from the group may not prefer the set that maximizes the benefit for the group as a whole.
preferred treatment decision making, despite allowing group-conditional decision making, is not vulnerable to being characterized as “reverse discrimination” against, or “affirmative action” for, certain groups.

— From Parity Impact to Preferred Impact: To offer preferred impact, a decision making system needs to ensure that every sensitive attribute group (e.g., men and women) prefers the set of decisions they receive over the set of decisions they would have received under the criterion of impact parity. The preferred impact criterion represents a relaxation of impact parity. That is, every decision making system that achieves impact parity also satisfies the preferred impact condition, which implies (in theory) that the optimal decision accuracy that can be achieved under the preferred impact condition is at least as high as the one achieved under impact parity. Additionally, preferred impact allows disparity in benefits received by different groups, which may be justified in scenarios where insisting on impact parity would only lead to a reduction in the beneficial outcomes received by one or more groups, without necessarily improving them for any other group. In such scenarios, insisting on impact parity can additionally lead to a reduction in the decision accuracy, creating a tragedy of impact parity with worse decision making all round, as shown in Figure 1. While preferred impact is a looser notion of fairness compared to impact parity, by guaranteeing that every group receives at least as many beneficial outcomes as they would have received under impact parity, it retains the core fairness gains in beneficial outcomes that discriminated groups would have achieved under the fairness criterion of impact parity. Finally, we note that our preference-based fairness notions, while having many attractive properties, are not the most suitable notions of fairness in all scenarios.
In certain cases, parity fairness may well be the eventual goal [3] and the more desirable notion. In the remainder of this paper, we formalize our preference-based fairness notions in the context of binary classification (Section 2), propose tractable and efficient proxies to include these notions in the formulations of convex margin-based classifiers in the form of convex-concave constraints (Section 3), and show on several real-world datasets that our preference-based fairness notions can provide significant gains in overall decision making accuracy as compared to parity-based fairness (Section 4).

2 Defining preference-based fairness for classification

In this section, we will first introduce two useful quality metrics—utility and group benefit—in the context of fairness in classification, then revisit parity-based fairness definitions in the light of these quality metrics, and finally formalize the two preference-based notions of fairness introduced in Section 1 from the perspective of the above metrics. For simplicity, we consider binary classification tasks; however, the definitions can be easily extended to m-ary classification.

Quality metrics in fair classification. In a fair (binary) classification task, one needs to find a mapping between the user feature vectors x ∈ R^d and class labels y ∈ {−1, 1}, where (x, y) are drawn from an (unknown) distribution f(x, y). This is often achieved by finding a mapping function θ: R^d → R such that, given a feature vector x with an unknown label y, the corresponding classifier predicts ŷ = sign(θ(x)). However, this mapping function also needs to be fair with respect to the values of a user sensitive attribute z ∈ Z ⊆ Z≥0 (e.g., sex, race), which are drawn from an (unknown) distribution f(z) and may be dependent on the feature vectors and class labels, i.e., f(x, y, z) = f(x, y|z) f(z) ≠ f(x, y) f(z).
Given the above problem setting, we introduce the following quality metrics, which we will use to define and compare different fairness notions:

I. Utility (U): the overall profit obtained by the decision maker using the classifier. For example, in a loan approval scenario, the decision maker is the bank that gives the loan, and the utility can be the overall accuracy of the classifier, i.e.:

U(θ) = E_{x,y}[ I{sign(θ(x)) = y} ],

where I(·) denotes the indicator function and the expectation is taken over the distribution f(x, y). It is in the decision maker's interest to use classifiers that maximize utility. Moreover, depending on the scenario, one can attribute different profit to true positives and true negatives—or conversely, different cost to false negatives and false positives—while computing utility. For example, in the loan approval scenario, marking an eventual defaulter as a non-defaulter may have a higher cost than marking a non-defaulter as a defaulter. For simplicity, in the remainder of the paper, we will assume that the profit (cost) for true (false) positives and negatives is the same.

II. Group benefit (B_z): the fraction of beneficial outcomes received by users sharing a certain value of the sensitive attribute z (e.g., blacks, hispanics, whites). For example, in a loan approval scenario, the beneficial outcome for a user may be receiving the loan, and the group benefit for each value of z can be defined as:

B_z(θ) = E_{x|z}[ I{sign(θ(x)) = 1} ],

where the expectation is taken over the conditional distribution f(x|z) and the bank offers a loan to a user if sign(θ(x)) = 1. Moreover, as suggested by some recent studies in fairness-aware learning [18, 22, 32], the group benefits can also be defined as the fraction of beneficial outcomes conditional on the true label of the user.
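The utility metric above can be estimated empirically from a labeled sample. The following is a minimal sketch (not the authors' code) for a linear boundary θ^T x, with optional distinct profits for true positives and true negatives as discussed; the data and parameter values are illustrative, and we adopt the convention sign(0) = 1.

```python
def sign(v):
    # Convention: a point exactly on the boundary is classified positive.
    return 1 if v >= 0 else -1

def utility(theta, X, y, profit_tp=1.0, profit_tn=1.0):
    """Empirical estimate of U(theta) for the linear classifier sign(theta^T x).
    With equal profits this is plain accuracy; unequal profits weight
    correct positives and correct negatives differently (loan example)."""
    total = 0.0
    for x_i, y_i in zip(X, y):
        y_hat = sign(sum(t * f for t, f in zip(theta, x_i)))
        if y_hat == y_i:
            total += profit_tp if y_i == 1 else profit_tn
    return total / len(y)

# Tiny illustration: a boundary that classifies by the second feature.
X = [(1.0, 2.0), (1.0, -1.0), (1.0, 0.5), (1.0, -2.0)]
y = [1, -1, 1, -1]
theta = (0.0, 1.0)
```

On this toy data `theta` separates the classes perfectly, so the empirical utility equals 1.0 under equal profits.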
For example, in a recidivism prediction scenario, the group benefits can be defined as the fraction of eventually non-offending defendants sharing a certain sensitive attribute value who are granted bail, that is:

B_z(θ) = E_{x|z,y=1}[ I{sign(θ(x)) = 1} ],

where the expectation is taken over the conditional distribution f(x|z, y = 1), y = 1 indicates that the defendant does not re-offend, and bail is granted if sign(θ(x)) = 1.

Parity-based fairness. A number of recent studies [7, 14, 18, 21, 32, 33, 34] have considered a classifier to be fair if it satisfies the impact parity criterion, that is, if it ensures that the group benefits for all the sensitive attribute values are equal, i.e.:

B_z(θ) = B_{z'}(θ) for all z, z' ∈ Z.   (1)

In this context, different (or often the same) definitions of group benefit (or beneficial outcome) have led to different terminology, e.g., disparate impact [14, 33], indirect discrimination [14, 21], redlining [7], statistical parity [12, 11, 22, 34], disparate mistreatment [32], or equality of opportunity [18]. However, all of these group benefit definitions invariably focus on achieving impact parity. We point interested readers to Feldman et al. [14] and Zafar et al. [32] for a discussion of this terminology. Although not always explicitly sought, most of the above studies propose classifiers that also satisfy treatment parity in addition to impact parity, i.e., they do not use the sensitive attribute z in the decision making process. However, some of them [7, 18, 21] do not satisfy treatment parity since they resort to group-conditional classifiers, i.e., θ = {θ_z}_{z∈Z}. In such a case, we can rewrite the above parity condition as:

B_z(θ_z) = B_{z'}(θ_{z'}) for all z, z' ∈ Z.   (2)

Fairness beyond parity. Given the above quality metrics, we can now formalize the two preference-based fairness notions introduced in Section 1.
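The empirical group benefit underlying Eq. (1) is simply the fraction of each group classified into the positive class. A minimal sketch (illustrative data and names, not the authors' code; sign(0) taken as 1), showing a toy case where a single shared boundary badly violates impact parity:

```python
def sign(v):
    return 1 if v >= 0 else -1

def group_benefit(theta, X, z, group):
    """Empirical B_group(theta): fraction of that group's members that the
    linear classifier sign(theta^T x) assigns to the positive class."""
    members = [x for x, z_i in zip(X, z) if z_i == group]
    positives = sum(
        sign(sum(t * f for t, f in zip(theta, x))) == 1 for x in members
    )
    return positives / len(members)

# Two groups whose single feature has opposite signs: one shared boundary
# gives all beneficial outcomes to group 0 and none to group 1,
# so Eq. (1) fails maximally on this toy data.
X = [(2.0,), (1.0,), (-1.0,), (-2.0,)]
z = [0, 0, 1, 1]
theta = (1.0,)
```

Here `group_benefit(theta, X, z, 0)` is 1.0 while the same boundary yields 0.0 for group 1, which is the kind of disparity that impact parity constraints rule out.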
— Preferred treatment: if a classifier θ resorts to group-conditional classifiers, i.e., θ = {θ_z}_{z∈Z}, it is a preferred treatment classifier if each group sharing a sensitive attribute value z benefits more from its corresponding group-conditional classifier θ_z than it would if it were classified by any of the other group-conditional classifiers θ_{z'}, i.e.,

B_z(θ_z) ≥ B_z(θ_{z'}) for all z, z' ∈ Z.   (3)

Note that, if a classifier θ does not resort to group-conditional classifiers, i.e., θ_z = θ for all z ∈ Z, it will always be a preferred treatment classifier. If, in addition, such a classifier ensures impact parity, it is easy to show that its utility cannot be larger than that of a preferred treatment classifier consisting of group-conditional classifiers.

— Preferred impact: a classifier θ offers preferred impact over a classifier θ' ensuring impact parity if it achieves a group benefit at least as high for every sensitive attribute value group, i.e.,

B_z(θ) ≥ B_z(θ') for all z ∈ Z.   (4)

One can also rewrite the above condition for group-conditional classifiers, i.e., θ = {θ_z}_{z∈Z} and θ' = {θ'_z}_{z∈Z}, as follows:

B_z(θ_z) ≥ B_z(θ'_z) for all z ∈ Z.   (5)

Note that, given any classifier θ' ensuring impact parity, it is easy to show that there will always exist a preferred impact classifier θ with equal or higher utility.

Connection to the fair division literature. Our notion of preferred treatment is inspired by the concept of envy-freeness [5, 31] in the fair division literature. Intuitively, an envy-free resource division ensures that no user would prefer the resources allocated to another user over their own allocation. Similarly, our notion of preferred treatment ensures envy-free decision making at the level of sensitive attribute groups. Specifically, with preferred treatment classification, no sensitive attribute group would prefer the outcomes from the classifier of another group.
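Given empirical group benefits, the preferred treatment condition (3) and the preferred impact condition (4) are direct inequality checks. The sketch below is illustrative (the dictionary layout and all numbers are assumptions, not from the paper): `benefit[z][z2]` stands for group z's empirical benefit B_z under group z2's classifier.

```python
def is_preferred_treatment(benefit):
    """Check Eq. (3): each group does at least as well under its own
    classifier as under any other group's classifier (envy-freeness)."""
    return all(
        benefit[z][z] >= benefit[z][z2]
        for z in benefit
        for z2 in benefit
    )

def is_preferred_impact(benefit, benefit_parity):
    """Check Eq. (4): every group's benefit weakly dominates the benefit
    it receives from the impact-parity classifier."""
    return all(benefit[z] >= benefit_parity[z] for z in benefit)

# Toy numbers: each group does best under its own classifier ...
cross = {"men": {"men": 0.6, "women": 0.4}, "women": {"men": 0.3, "women": 0.5}}
# ... and both groups beat a hypothetical parity baseline of 0.4 each.
ours = {"men": 0.6, "women": 0.5}
parity = {"men": 0.4, "women": 0.4}
```

Note that both checks are per-group dominance conditions, so a classifier can satisfy them while the groups' benefits remain unequal, which is exactly how these notions relax parity.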
Our notion of preferred impact draws inspiration from the two-person bargaining problem [26] in the fair division literature. In a bargaining scenario, given a base resource allocation (also called the disagreement point), two parties try to divide some additional resources between themselves. If the parties cannot agree on a division, no party gets the additional resources, and both would only get the allocation specified by the disagreement point. Taking the resources to be the beneficial outcomes, and the disagreement point to be the allocation specified by the impact parity classifier, a preferred impact classifier offers enhanced benefits to all the sensitive attribute groups. Put differently, the group benefits provided by the preferred impact classifier Pareto-dominate the benefits provided by the impact parity classifier.

On individual-level preferences. Notice that the preferred treatment and preferred impact notions are defined based on group preferences, i.e., whether a group as a whole prefers (or gets more beneficial outcomes from) a given set of outcomes over another set. It is quite possible that a set of outcomes preferred by the group collectively is not preferred by certain individuals in the group. Consequently, one can extend our proposed notions to account for individual preferences as well, i.e., a set of outcomes is preferred over another if all the individuals in the group prefer it. In the remainder of the paper, we focus on preferred treatment and preferred impact in the context of group preferences, and leave the case of individual preferences and its implications on the cost of achieving fairness for future work.

3 Training preferred classifiers

In this section, our goal is to train preferred treatment and preferred impact group-conditional classifiers, i.e., θ = {θ_z}_{z∈Z}, that maximize utility given a training set D = {(x_i, y_i, z_i)}_{i=1}^N, where (x_i, y_i, z_i) ∼ f(x, y, z). In both cases, we will assume that:2

I.
Each group-conditional classifier is a convex boundary-based classifier. For ease of exposition, in this section we additionally assume these classifiers to be linear, i.e., θ_z(x) = θ_z^T x, where θ_z is a parameter that defines the decision boundary in the feature space. We relax the linearity assumption in Appendix A and extend our methodology to a non-linear SVM classifier.

II. The utility function U is defined as the overall accuracy of the group-conditional classifiers, i.e.,

U(θ) = E_{x,y}[ I{sign(θ(x)) = y} ] = Σ_{z∈Z} E_{x,y|z}[ I{sign(θ_z^T x) = y} ] f(z).   (6)

III. The group benefit B_z for users sharing the sensitive attribute value z is defined as their average probability of being classified into the positive class, i.e.,

B_z(θ) = E_{x|z}[ I{sign(θ(x)) = 1} ] = E_{x|z}[ I{sign(θ_z^T x) = 1} ].   (7)

Preferred impact classifiers. Given an impact parity classifier with decision boundary parameters {θ'_z}_{z∈Z}, one could think of finding the decision boundary parameters {θ_z}_{z∈Z} of a preferred impact classifier that maximizes utility by solving the following optimization problem:

minimize_{θ_z}  −(1/N) Σ_{(x,y,z)∈D} I{sign(θ_z^T x) = y}
subject to  Σ_{x∈D_z} I{sign(θ_z^T x) = 1} ≥ Σ_{x∈D_z} I{sign(θ'_z^T x) = 1}  for all z ∈ Z,   (8)

where D_z = {(x_i, y_i, z_i) ∈ D | z_i = z} denotes the set of users in the training set sharing sensitive attribute value z, the objective uses an empirical estimate of the utility, defined by Eq. 6, and the preferred impact constraints, defined by Eq. 5, use empirical estimates of the group benefits, defined by Eq. 7. Here, note that the right hand side of the inequalities does not contain any variables and can be precomputed, i.e., the impact parity classifiers {θ'_z}_{z∈Z} are given.

2: Exploring the relaxations of these assumptions is a very interesting avenue for future work.

Unfortunately, it is very challenging to solve the above optimization problem, since both the objective and constraints are nonconvex.
To overcome this difficulty, we instead minimize a convex loss function ℓ_θ(x, y), which is classifier dependent [6], and approximate the group benefits using the (convex) ramp function r(x) = max(0, x), i.e.,

minimize_{θ_z}  −(1/N) Σ_{(x,y,z)∈D} ℓ_{θ_z}(x, y) + Σ_{z∈Z} λ_z Ω(θ_z)
subject to  Σ_{x∈D_z} max(0, θ_z^T x) ≥ Σ_{x∈D_z} max(0, (θ_z^0)^T x)  for all z ∈ Z, (9)

which, for any convex regularizer Ω(·), is a disciplined convex-concave program (DCCP) and thus can be efficiently solved using well-known heuristics [30]. For example, if we particularize the above formulation to group-conditional (standard) logistic regression classifiers θ_z^0 and θ_z with an L2-norm regularizer, then Eq. 9 adopts the following form:

minimize_{θ_z}  −(1/N) Σ_{(x,y,z)∈D} log p(y|x, θ_z) + Σ_{z∈Z} λ_z ||θ_z||²
subject to  Σ_{x∈D_z} max(0, θ_z^T x) ≥ Σ_{x∈D_z} max(0, (θ_z^0)^T x)  for all z ∈ Z, (10)

where p(y = 1|x, θ_z) = 1/(1 + e^{−θ_z^T x}). The constraints can similarly be added to other convex boundary-based classifiers such as linear SVMs. We further expand on particularizing the constraints for a non-linear SVM in Appendix A.

Preferred treatment classifiers. As in the case of preferred impact classifiers, one could think of finding the decision boundary parameters {θ_z}_{z∈Z} of a preferred treatment classifier that maximizes utility by solving the following optimization problem:

minimize_{θ_z}  −(1/N) Σ_{(x,y,z)∈D} I{sign(θ_z^T x) = y}
subject to  Σ_{x∈D_z} I{sign(θ_z^T x) = 1} ≥ Σ_{x∈D_z} I{sign(θ_{z'}^T x) = 1}  for all z, z' ∈ Z, (11)

where D_z = {(x_i, y_i, z_i) ∈ D | z_i = z} denotes the set of users in the training set sharing sensitive attribute value z, the objective uses an empirical estimate of the utility, defined by Eq. 6, and the preferred treatment constraints, defined by Eq. 3, use empirical estimates of the group benefits, defined by Eq. 7. Here, note that both the left and the right hand sides of the inequalities contain optimization variables.
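The ramp-based surrogate constraints of Eqs. (9)-(10) are easy to evaluate for a given candidate solution. The sketch below (hypothetical helper names; the paper solves the full DCCP with the heuristics of [30] rather than merely checking feasibility) computes the convex surrogate of the group benefit and verifies the relaxed preferred-impact constraints.

```python
import numpy as np

def ramp_benefit(theta, X_z):
    """Convex surrogate for a group's benefit used in the constraints of
    Eqs. (9)-(10): the sum over the group of max(0, theta^T x)."""
    return float(np.sum(np.maximum(0.0, X_z @ theta)))

def preferred_impact_surrogate_ok(theta, theta_parity, groups):
    """Check the relaxed preferred-impact constraints: for every group z,
    the ramp benefit of theta[z] must be at least that of the given
    impact parity boundary theta_parity[z]."""
    return all(
        ramp_benefit(theta[z], X_z) >= ramp_benefit(theta_parity[z], X_z)
        for z, X_z in groups.items()
    )
```

In a DCCP solver these same quantities appear symbolically: the left-hand side is convex in θ_z and the right-hand side is a precomputed constant, which is what makes problem (9) tractable.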
However, the objective and constraints in the above problem are also nonconvex, and thus we adopt a similar strategy as in the case of preferred impact classifiers. More specifically, we solve instead the following tractable problem:

minimize_{θ_z}  −(1/N) Σ_{(x,y,z)∈D} ℓ_{θ_z}(x, y) + Σ_{z∈Z} λ_z Ω(θ_z)
subject to  Σ_{x∈D_z} max(0, θ_z^T x) ≥ Σ_{x∈D_z} max(0, θ_{z'}^T x)  for all z, z' ∈ Z, (12)

which, for any convex regularizer Ω(·), is also a disciplined convex-concave program (DCCP) and can be efficiently solved.

4 Evaluation

In this section, we compare the performance of preferred treatment and preferred impact classifiers against unconstrained, treatment parity, and impact parity classifiers on a variety of synthetic and real-world datasets. More specifically, we consider the following classifiers, which we train to maximize utility subject to the corresponding constraints:

— Uncons: an unconstrained classifier that resorts to group-conditional classifiers. It violates treatment parity—it trains a separate classifier per sensitive attribute value group—and potentially violates impact parity—it may lead to different benefits for different groups.

— Parity: a parity classifier that does not use the sensitive attribute group information in decision making, but only during the training phase, and is constrained to satisfy both treatment parity—its decisions do not change based on the users' sensitive attribute value, as it does not resort to group-conditional classifiers—and impact parity—it ensures that the benefits for all groups are the same. We train this classifier using the methodology proposed by Zafar et al. [33].

— Preferred treatment: a classifier that resorts to group-conditional classifiers and is constrained to satisfy preferred treatment—each group gets a higher benefit with its own classifier than with any other group's classifier.
Figure 2: [Synthetic data] Panels: (a) Uncons (Acc: 0.87; B0: 0.16, B1: 0.77; B0: 0.20, B1: 0.85), (b) Parity (Acc: 0.57; B0: 0.51, B1: 0.49), (c) Preferred impact (Acc: 0.76; B0: 0.58, B1: 0.96; B0: 0.21, B1: 0.86), (d) Preferred both (Acc: 0.73; B0: 0.58, B1: 0.96; B0: 0.43, B1: 0.97). Crosses denote group-0 (points with z = 0) and circles denote group-1. Green points belong to the positive class in the training data, whereas red points belong to the negative class. Each panel shows the accuracy of the decision making scenario along with the group benefits (B0 and B1) provided by each of the classifiers involved. For group-conditional classifiers, the cyan (blue) line denotes the decision boundary for the classifier of group-0 (group-1). The Parity case (panel (b)) consists of just one classifier for both groups in order to meet the treatment parity criterion.

— Preferred impact: a classifier that resorts to group-conditional classifiers and is constrained to be preferred over the Parity classifier.

— Preferred both: a classifier that resorts to group-conditional classifiers and is constrained to satisfy both preferred treatment and preferred impact.

For the experiments in this section, we use logistic regression classifiers with L2-norm regularization. We randomly split the corresponding dataset into 70%-30% train-test folds 5 times, and report the average accuracy and group benefits on the test folds. Appendix B describes the details of selecting the optimal L2-norm regularization parameters. Here, we compute utility (U) as the overall accuracy of a classifier and group benefits (B_z) as the fraction of users sharing sensitive attribute z that are classified into the positive class. Moreover, the sensitive attribute is always binary, i.e., z ∈ {0, 1}.

4.1 Experiments on synthetic data

Experimental setup. Following Zafar et al. [33], we generate a synthetic dataset in which the unconstrained classifier (Uncons) offers different benefits to each sensitive attribute group.
In particular, we generate 20,000 binary class labels y ∈ {−1, 1} uniformly at random, along with their corresponding two-dimensional feature vectors sampled from the following Gaussian distributions: p(x|y = 1) = N([2; 2], [5, 1; 1, 5]) and p(x|y = −1) = N([−2; −2], [10, 1; 1, 3]). Then, we generate each sensitive attribute from the Bernoulli distribution p(z = 1) = p(x′|y = 1)/(p(x′|y = 1) + p(x′|y = −1)), where x′ is a rotated version of x, i.e., x′ = [cos(π/8), −sin(π/8); sin(π/8), cos(π/8)] x. Finally, we train the five classifiers described above and compute their overall (test) accuracy and (test) group benefits.

Results. Figure 2 shows the trained classifiers, along with their overall accuracy and group benefits. We can make several interesting observations. The Uncons classifier leads to an accuracy of 0.87; however, the group-conditional boundaries and the high disparity in benefits for the two groups (0.16 vs. 0.85) mean that it satisfies neither treatment parity nor impact parity. Moreover, it leads to only a small violation of preferred treatment—benefits for group-0 would increase slightly from 0.16 to 0.20 by adopting the classifier of group-1. However, this will not always be the case, as we will later show in the experiments on real data. The Parity classifier satisfies both treatment and impact parity; however, it does so at a large cost in terms of accuracy, which drops from 0.87 for Uncons to 0.57 for Parity. The Preferred treatment classifier (not shown in the figure) leads to a minor change in decision boundaries as compared to the Uncons classifier in order to achieve preferred treatment. Benefits for group-0 (group-1) with its own classifier are 0.20 (0.84), as compared to 0.17 (0.83) while using the classifier of group-1 (group-0). The accuracy of this classifier is 0.87.
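The synthetic data generation described above can be reproduced in a few lines of numpy. This is a sketch under stated assumptions: the Gaussian density helper and the random seed are ours, not from the paper.

```python
import numpy as np

def gauss_pdf(X, mean, cov):
    """Density of a 2-d Gaussian, evaluated row-wise."""
    d = X - np.asarray(mean)
    inv, det = np.linalg.inv(cov), np.linalg.det(cov)
    quad = np.einsum('ni,ij,nj->n', d, inv, d)
    return np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(det))

rng = np.random.default_rng(0)
n = 20_000
y = rng.choice([-1, 1], size=n)                      # uniform binary labels
mean = {1: [2.0, 2.0], -1: [-2.0, -2.0]}
cov = {1: [[5.0, 1.0], [1.0, 5.0]], -1: [[10.0, 1.0], [1.0, 3.0]]}
X = np.empty((n, 2))
for c in (-1, 1):                                    # class-conditional Gaussians
    idx = y == c
    X[idx] = rng.multivariate_normal(mean[c], cov[c], size=idx.sum())
# Sensitive attribute: rotate x by pi/8, then draw z ~ Bernoulli(p), where p
# compares the two class-conditional densities at the rotated point x'.
a = np.pi / 8
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
Xr = X @ R.T
p1 = gauss_pdf(Xr, mean[1], cov[1])
p0 = gauss_pdf(Xr, mean[-1], cov[-1])
z = rng.binomial(1, p1 / (p1 + p0))
```

Because the Bernoulli parameter is large where the class-1 density dominates, the sensitive attribute z correlates with the class label, which is what makes the unconstrained group-conditional classifiers deliver unequal benefits.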
The Preferred impact classifier, by making use of a looser notion of fairness compared to impact parity, provides higher benefits for both groups at a much smaller cost in terms of accuracy than the Parity classifier (0.76 vs. 0.57). Note that, while the Parity classifier achieved equality in benefits by misclassifying negative examples from group-0 into the positive class and misclassifying positive examples from group-1 into the negative class, the Preferred impact classifier only incurs the former type of misclassification. However, the outcomes of the Preferred impact classifier do not satisfy the preferred treatment criterion: group-1 would attain a higher benefit if it used the classifier of group-0 (0.96 as compared to 0.86). Finally, the classifier that satisfies both preferred treatment and preferred impact (Preferred both) achieves an accuracy and benefits at par with the Preferred impact classifier.

Figure 3: [Panels: ProPublica COMPAS dataset, Adult dataset, NYPD SQF dataset] The figure shows the accuracy and benefits received by the two groups for various decision making scenarios. 'Prf-treat.', 'Prf-imp.', and 'Prf-both' respectively correspond to the classifiers satisfying preferred treatment, preferred impact, and both preferred treatment and impact criteria. Sensitive attribute values 0 and 1 denote blacks and whites in the ProPublica COMPAS and NYPD SQF datasets, and women and men in the Adult dataset. B_i(θ_j) denotes the benefit obtained by group i when using the classifier of group j. For the Parity case, we train just one classifier for both groups, so the benefits do not change by adopting the other group's classifier.
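The preferred-treatment violation noted above (group-1 envying group-0's classifier) is mechanical to check empirically: every group must weakly prefer the outcomes of its own classifier, cf. Eq. 3. A sketch with illustrative helper names, assuming linear group-conditional boundaries:

```python
import numpy as np

def is_preferred_treatment(theta, groups):
    """Empirical preferred-treatment check: every group must obtain at
    least as much benefit (fraction classified positive, Eq. 7) from its
    own classifier as from any other group's classifier."""
    def benefit(t, X_z):
        return float(np.mean(X_z @ t > 0))
    return all(benefit(theta[z], X_z) >= benefit(theta[z2], X_z)
               for z, X_z in groups.items() for z2 in theta)
```

This is the group-level analogue of envy-freeness: no group would swap its classifier for another group's.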
We present the results of applying our fairness constraints on a non-linearly-separable dataset with an SVM classifier with a radial basis function (RBF) kernel in Appendix C.

4.2 Experiments on real data

Dataset description and experimental setup. We experiment with three real-world datasets: the COMPAS recidivism prediction dataset compiled by ProPublica [23], the Adult income dataset from the UCI machine learning repository [2], and the New York Police Department (NYPD) Stop-question-and-frisk (SQF) dataset made publicly available by the NYPD [1]. These datasets have been used by a number of prior studies in the fairness-aware machine learning literature [14, 29, 32, 34, 33]. In the COMPAS dataset, the classification task is to predict whether a criminal defendant would recidivate within two years (negative class) or not (positive class); in the Adult dataset, the task is to predict whether a person earns more than 50K USD per year (positive class) or not; and, in the SQF dataset, the task is to predict whether a pedestrian should be stopped on the suspicion of having an illegal weapon or not (positive class). In all datasets, we assume being classified as positive to be the beneficial outcome. Additionally, we divide the subjects in each dataset into two sensitive attribute value groups: women (group-0) and men (group-1) in the Adult dataset, and blacks (group-0) and whites (group-1) in the COMPAS and SQF datasets. The supplementary material (Appendix D) contains more information on the sensitive and non-sensitive features as well as the class distributions.³

Results. Figure 3 shows the accuracy achieved by the five classifiers described above, along with the benefits they provide, for the three datasets.
We can draw several interesting observations.⁴ In all cases, the Uncons classifier, in addition to violating treatment parity (a separate classifier for each group) and impact parity (high disparity in group benefits), also violates the preferred treatment criterion (in all cases, at least one of group-0 or group-1 would benefit more by adopting the other group's classifier). On the other hand, the Parity classifier satisfies treatment parity and impact parity, but it does so at a large cost in terms of accuracy. The Preferred treatment classifier provides a much higher accuracy than the Parity classifier—its accuracy is at par with that of the Uncons classifier—while satisfying the preferred treatment criterion. However, it does not meet the preferred impact criterion. The Preferred impact classifier meets the preferred impact criterion but does not always satisfy preferred treatment. Moreover, it also leads to better accuracy than the Parity classifier in all cases. However, the gain in accuracy is more substantial for the SQF dataset as compared to the COMPAS and Adult datasets. The classifier satisfying both preferred treatment and preferred impact (Preferred both) has a somewhat underwhelming performance in terms of accuracy on the Adult dataset. While the performance of this classifier is better than the Parity classifier on the COMPAS and NYPD SQF datasets, it is slightly worse on the Adult dataset. In summary, the above results show that ensuring either preferred treatment or preferred impact is less costly in terms of accuracy loss than ensuring parity-based fairness; however, ensuring both preferred treatment and preferred impact can lead to a comparatively larger accuracy loss on certain datasets.
We hypothesize that this loss in accuracy may be partly due to splitting the available samples into groups during training—each group-conditional classifier uses only samples from the corresponding sensitive attribute group—hence decreasing the effectiveness of empirical risk minimization.

5 Conclusion

In this paper, we introduced two preference-based notions of fairness—preferred treatment and preferred impact—establishing a previously unexplored connection between fairness-aware machine learning and the economic and game-theoretic concepts of envy-freeness and bargaining. Then, we proposed tractable proxies to design boundary-based classifiers satisfying these fairness notions, and experimented with a variety of synthetic and real-world datasets, showing that preference-based fairness often allows for greater decision accuracy than existing parity-based fairness notions.

Our work opens many promising avenues for future work. For example, our methodology is limited to convex boundary-based classifiers. A natural follow-up would be to extend our methodology to other types of classifiers, e.g., neural networks and decision trees. In this work, we defined preferred treatment and preferred impact in the context of group preferences; however, it would be worth revisiting the proposed definitions in the context of individual preferences. The fair division literature establishes a variety of fairness axioms [26], such as Pareto-optimality and scale invariance. It would be interesting to study such axioms in the context of fairness-aware machine learning. Finally, we note that while moving from parity to preference-based fairness offers many attractive properties, we acknowledge it may not always be the most appropriate notion; e.g., in some scenarios, parity-based fairness may very well be the eventual goal and be more desirable [3].
Acknowledgments

AW acknowledges support by the Alan Turing Institute under EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI.

³ Since the SQF dataset is highly skewed in terms of class distribution (∼97% of points in the positive class), resulting in a trained classifier predicting all points as positive (yet having 97% accuracy), we subsample the dataset to have an equal class distribution. Another option would be using penalties proportional to the size of the class, but we observe that an unconstrained classifier with class penalties gives similar predictions as compared to a balanced dataset. We decided to experiment with the balanced dataset since the accuracy drops in this dataset are easier to interpret.

⁴ The unfairness in the SQF dataset is different from what one would expect [27]—an unconstrained classifier gives more benefits to blacks as compared to whites. This is due to the fact that a larger fraction of stopped whites were found to be in possession of an illegal weapon (Tables 3 and 4 in Appendix D).

References

[1] Stop, Question and Frisk Data. http://www1.nyc.gov/site/nypd/stats/reports-analysis/stopfrisk.page, 2017.
[2] Adult data. https://archive.ics.uci.edu/ml/datasets/adult, 1996.
[3] A. Altman. Discrimination. In The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, 2016. https://plato.stanford.edu/archives/win2016/entries/discrimination/.
[4] S. Barocas and A. D. Selbst. Big Data's Disparate Impact. California Law Review, 2016.
[5] M. Berliant and W. Thomson. On the Fair Division of a Heterogeneous Commodity. Journal of Mathematical Economics, 1992.
[6] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[7] T. Calders and S. Verwer. Three Naive Bayes Approaches for Discrimination-Free Classification. Data Mining and Knowledge Discovery, 2010.
[8] O. Chapelle. Training a Support Vector Machine in the Primal. Neural Computation, 2007.
[9] A. Chouldechova.
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. arXiv preprint arXiv:1610.07524, 2016.
[10] Civil Rights Act. Civil Rights Act of 1964, Title VII, Equal Employment Opportunities, 1964.
[11] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic Decision Making and the Cost of Fairness. In KDD, 2017.
[12] C. Dwork, M. Hardt, T. Pitassi, and O. Reingold. Fairness Through Awareness. In ITCS, 2012.
[13] C. Dwork, N. Immorlica, A. T. Kalai, and M. Leiserson. Decoupled Classifiers for Fair and Efficient Machine Learning. arXiv preprint arXiv:1707.06613, 2017.
[14] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and Removing Disparate Impact. In KDD, 2015.
[15] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian. On the (Im)possibility of Fairness. arXiv preprint arXiv:1609.07236, 2016.
[16] J. E. Gentle, W. K. Härdle, and Y. Mori. Handbook of Computational Statistics: Concepts and Methods. Springer Science & Business Media, 2012.
[17] G. Goh, A. Cotter, M. Gupta, and M. Friedlander. Satisfying Real-world Goals with Dataset Constraints. In NIPS, 2016.
[18] M. Hardt, E. Price, and N. Srebro. Equality of Opportunity in Supervised Learning. In NIPS, 2016.
[19] M. Joseph, M. Kearns, J. Morgenstern, and A. Roth. Fairness in Learning: Classic and Contextual Bandits. In NIPS, 2016.
[20] F. Kamiran and T. Calders. Classification with No Discrimination by Preferential Sampling. In BENELEARN, 2010.
[21] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Fairness-aware Classifier with Prejudice Remover Regularizer. In PADM, 2011.
[22] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In ITCS, 2017.
[23] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. https://github.com/propublica/compas-analysis, 2016.
[24] B. T. Luong, S. Ruggieri, and F. Turini.
kNN as an Implementation of Situation Testing for Discrimination Discovery and Prevention. In KDD, 2011.
[25] C. Muñoz, M. Smith, and D. Patil. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President, The White House, 2016.
[26] J. F. Nash Jr. The Bargaining Problem. Econometrica: Journal of the Econometric Society, 1950.
[27] NYCLU. Stop-and-Frisk Data. https://www.nyclu.org/en/stop-and-frisk-data, 2017.
[28] D. Pedreschi, S. Ruggieri, and F. Turini. Discrimination-aware Data Mining. In KDD, 2008.
[29] S. Goel, J. M. Rao, and R. Shroff. Precinct or Prejudice? Understanding Racial Disparities in New York City's Stop-and-Frisk Policy. Annals of Applied Statistics, 2015.
[30] X. Shen, S. Diamond, Y. Gu, and S. Boyd. Disciplined Convex-Concave Programming. arXiv preprint arXiv:1604.02639, 2016.
[31] H. R. Varian. Equity, Envy, and Efficiency. Journal of Economic Theory, 1974.
[32] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. In WWW, 2017.
[33] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Constraints: Mechanisms for Fair Classification. In AISTATS, 2017.
[34] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning Fair Representations. In ICML, 2013.
Nonparametric Online Regression while Learning the Metric

Ilja Kuzborskij
EPFL, Switzerland
ilja.kuzborskij@gmail.com

Nicolò Cesa-Bianchi
Dipartimento di Informatica, Università degli Studi di Milano
Milano 20135, Italy
nicolo.cesa-bianchi@unimi.it

Abstract

We study algorithms for online nonparametric regression that learn the directions along which the regression function is smoother. Our algorithm learns the Mahalanobis metric based on the gradient outer product matrix G of the regression function (automatically adapting to the effective rank of this matrix), while simultaneously bounding the regret—on the same data sequence—in terms of the spectrum of G. As a preliminary step in our analysis, we extend a nonparametric online learning algorithm by Hazan and Megiddo, enabling it to compete against functions whose Lipschitzness is measured with respect to an arbitrary Mahalanobis metric.

1 Introduction

An online learner is an agent interacting with an unknown and arbitrary environment over a sequence of rounds. At each round t, the learner observes a data point (or instance) x_t ∈ X ⊂ R^d, outputs a prediction ŷ_t for the label y_t ∈ R associated with that instance, and incurs some loss ℓ_t(ŷ_t), which in this paper is the square loss (ŷ_t − y_t)². At the end of the round, the label y_t is given to the learner, which he can use to reduce his loss in subsequent rounds. The performance of an online learner is typically measured using the regret. This is defined as the amount by which the learner's cumulative loss exceeds the cumulative loss (on the same sequence of instances and labels) of any function f in a given reference class F of functions,

R_T(f) = Σ_{t=1}^T [ℓ_t(ŷ_t) − ℓ_t(f(x_t))]  ∀f ∈ F. (1)

Note that typical regret bounds apply to all f ∈ F and to all individual data sequences. However, the bounds are allowed to scale with parameters arising from the interplay between f and the data sequence.
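For concreteness, the regret of Eq. (1) against a fixed comparator f is just a difference of cumulative square losses. A minimal numpy sketch (function and argument names are ours):

```python
import numpy as np

def regret(y_hat, y, f_vals):
    """Regret of Eq. (1): cumulative square loss of the learner's
    predictions y_hat minus the cumulative square loss of the comparator
    values f(x_t) on the same label sequence y."""
    y_hat, y, f_vals = map(np.asarray, (y_hat, y, f_vals))
    return float(np.sum((y_hat - y) ** 2) - np.sum((f_vals - y) ** 2))
```

A regret bound guarantees this quantity grows sublinearly in T for every f in the reference class, so the learner's average loss approaches that of the best comparator in hindsight.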
In order to capture complex environments, the reference class of functions should be large. In this work we focus on nonparametric classes F, containing all differentiable functions that are smooth with respect to some metric on X. Our approach builds on the simple and versatile algorithm for nonparametric online learning introduced in [6]. This algorithm has a bound on the regret R_T(f) of order (ignoring logarithmic factors)

(1 + √(Σ_{i=1}^d ∥∂_i f∥²_∞)) T^{d/(1+d)}  ∀f ∈ F. (2)

Here ∥∂_i f∥_∞ is the value of the partial derivative ∂f(x)/∂x_i maximized over x ∈ X. The square root term is the Lipschitz constant of f, measuring smoothness with respect to the Euclidean metric.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

However, in some directions f may be smoother than in others. Therefore, if we knew in advance the set of directions along which the best performing reference functions f are smooth, we could use this information to control regret better. In this paper we extend the algorithm from [6] and make it adaptive to the Mahalanobis distance defined through an arbitrary positive definite matrix M with spectrum (u_i, λ_i)_{i=1}^d and unit spectral radius (λ_1 = 1). We prove a bound on the regret R_T(f) of order (ignoring logarithmic factors)

(√(det_κ(M)) + √(Σ_{i=1}^d ∥∇_{u_i} f∥²_∞ / λ_i)) T^{ρ_T/(1+ρ_T)}  ∀f ∈ F. (3)

Here ρ_T ≤ d is, roughly, the number of eigenvalues of M larger than a threshold shrinking polynomially in T, and det_κ(M) ≤ 1 is the determinant of M truncated at λ_κ (with κ ≤ ρ_T). The quantity ∥∇_{u_i} f∥_∞ is defined like ∥∂_i f∥_∞, but with the directional derivative ∇f(x)^⊤ u in place of the partial derivative. When the spectrum of M is light-tailed (so that ρ_T ≪ d and, simultaneously, det_κ(M) ≪ 1), with the smaller eigenvalues λ_i corresponding to eigenvectors along which f is smoother (so that the ratios ∥∇_{u_i} f∥²_∞ / λ_i remain controlled), then our bound improves on (2).
On the other hand, when no preliminary knowledge about good f is available, we may run the algorithm with M equal to the identity matrix and recover exactly the bound (2). Given that the regret can be improved by informed choices of M, it is natural to ask whether some kind of improvement is still possible when M is learned online, from the same data sequence on which the regret is being measured. Of course, this question makes sense if the data tell us something about the smoothness of the f against which we are measuring the regret. In the second part of the paper we implement this idea by considering a scenario where instances are drawn i.i.d. from some unknown distribution, labels are stochastically generated by some unknown regression function f_0, and we have no preliminary knowledge about the directions along which f_0 is smoother. In this stochastic scenario, the expected gradient outer product matrix G = E[∇f_0(X) ∇f_0(X)^⊤] provides a natural choice for the matrix M in our algorithm. Indeed, E[(∇f_0(X)^⊤ u_i)²] = µ_i, where u_1, …, u_d are the eigenvectors of G and µ_1, …, µ_d are the corresponding eigenvalues. Thus, the eigenvectors u_1, …, u_d capture the principal directions of variation for f_0. In fact, assuming that the labels obey a statistical model Y = g(BX) + ε, where ε is the noise and B ∈ R^{k×d} projects X onto a k-dimensional subspace of X, one can show [21] that span(B) ≡ span(u_1, …, u_k). In this sense, G is the "best" metric, because it recovers the k-dimensional relevant subspace. When G is unknown, we run our algorithm in phases using a recently proposed estimator Ĝ of G. The estimator is trained on the same data sequence and is fed to the algorithm at the beginning of each phase. Under mild assumptions on f_0, the noise in the labels, and the instance distribution, we prove a high probability bound on the regret R_T(f_0) of order (ignoring logarithmic factors)

(1 + √(Σ_{j=1}^d (∥∇_{u_j} f_0∥_∞ + ∥∇_V f_0∥_∞)² / (µ_j/µ_1))) T^{ρ̃_T/(1+ρ̃_T)}. (4)

Observe that the rate at which the regret grows is the same as the one in (3), though now the effective dimension parameter ρ̃_T is larger than ρ_T by an amount related to the rate of convergence of the eigenvalues of Ĝ to those of G. The square root term is also similar to (3), but for the extra quantity ∥∇_V f_0∥_∞, which accounts for the error in approximating the eigenvectors of G. More precisely, ∥∇_V f_0∥_∞ is ∥∇_v f∥_∞ maximized over directions v in the span of V, where V contains those eigenvectors of G that cannot be identified because their eigenvalues are too close to each other (we come back to this issue shortly). Finally, we lose the dependence on the truncated determinant, which is replaced here by its trivial upper bound 1.

The proof of (2) in [6] is based on the sequential construction of a sphere packing of X, where the spheres are centered on adaptively chosen instances x_t, and have radii shrinking polynomially with time. Each sphere hosts an online learner, and each new incoming instance is predicted using the learner hosted in the nearest sphere. Our variant of that algorithm uses an ellipsoid packing, and computes distances using the Mahalanobis distance ∥·∥_M. The main new ingredient in the analysis leading to (3) is our notion of effective dimension ρ_T (we call it the effective rank of M), which measures how fast the spectrum of M vanishes. The proof also uses an ellipsoid packing bound and a lemma relating the Lipschitz constant to the Mahalanobis distance.

The proof of (4) is more intricate because G is only known up to a certain approximation. We use an estimator Ĝ, recently proposed in [14], which is consistent under mild distributional assumptions when f_0 is continuously differentiable. The first source of difficulty is adjusting the notion of effective rank (which the algorithm needs to compute) to compensate for the uncertainty in the knowledge of the eigenvalues of G.
A further problematic issue arises because we want to measure the smoothness of f_0 along the eigendirections of G, and so we need to control the convergence of the eigenvectors, given that Ĝ converges to G in spectral norm. However, when two eigenvalues of G are close, then the corresponding eigenvectors in the estimated matrix Ĝ are strongly affected by the stochastic perturbation (a phenomenon known as hybridization or spectral leaking in matrix perturbation theory, see [1, Section 2]). Hence, in our analysis we need to separate out the eigenvectors that correspond to well spaced eigenvalues from the others. This lack of discrimination causes the appearance in the regret of the extra term ∥∇_V f_0∥_∞.

2 Related works

Nonparametric estimation problems have been a long-standing topic of study in statistics, where one is concerned with the recovery of an optimal function from a rich class under appropriate probabilistic assumptions. In online learning, the nonparametric approach was investigated in [15, 16, 17] by Vovk, who considered regression problems in large spaces and proved bounds on the regret. Minimax rates for the regret were later derived in [13] using a non-constructive approach. The first explicit online nonparametric algorithms for regression with minimax rates were obtained in [4]. The nonparametric online algorithm of [6] is known to have a suboptimal regret bound for Lipschitz classes of functions. However, it is a simple and efficient algorithm, well suited to the design of extensions that exhibit different forms of adaptivity to the data sequence. For example, the paper [9] derived a variant that automatically adapts to the intrinsic dimension of the data manifold. Our work explores an alternative direction of adaptivity, mainly aimed at taming the effect of the curse of dimensionality in nonparametric prediction through the learning of an appropriate Mahalanobis distance on the instance space.

There is a rich literature on metric learning (see, e.g., the survey [2]), where the Mahalanobis metric ∥·∥_M is typically learned through minimization of a pairwise loss function of the form ℓ(M, x, x′). This loss is high whenever dissimilar pairs x and x′ are close in the Mahalanobis metric, and whenever similar ones are far apart in the same metric—see, e.g., [19]. The works [5, 7, 18] analyzed generalization and consistency properties of online learning algorithms employing pairwise losses. In this work we are primarily interested in using a metric ∥·∥_M where M is close to the gradient outer product matrix of the best model in the reference class of functions.
As we are not aware whether pairwise loss functions can indeed consistently recover such metrics, we directly estimate the gradient outer product matrix. This approach to metric learning was mostly explored in statistics—e.g., by locally-linear Empirical Risk Minimization on RKHS [12, 11], and through Stochastic Gradient Descent [3]. Our learning approach combines—in a phased manner—a Mahalanobis metric extension of the algorithm by [6] with the estimator of [14]. Our work is also similar in spirit to the "gradient weights" approach of [8], which learns a distance based on a simpler diagonal matrix.

Preliminaries and notation. Let B(z, r) ⊂ R^d be the ball of center z and radius r > 0, and let B(r) = B(0, r). We assume instances x belong to X ≡ B(1) and labels y belong to Y ≡ [0, 1]. We consider the following online learning protocol with an oblivious adversary. Given an unknown sequence (x_1, y_1), (x_2, y_2), … ∈ X × Y of instances and labels, for every round t = 1, 2, …

1. the environment reveals instance x_t ∈ X;
2. the learner selects an action ŷ_t ∈ Y and incurs the square loss ℓ_t(ŷ_t) = (ŷ_t − y_t)²;
3. the learner observes y_t.

Given a positive definite d × d matrix M, the norm ∥x − z∥_M induced by M (a.k.a. Mahalanobis distance) is defined by √((x − z)^⊤ M (x − z)).

Definition 1 (Covering and Packing Numbers). An ε-cover of a set S w.r.t. some metric ρ is a set {x′_1, …, x′_n} ⊆ S such that for each x ∈ S there exists i ∈ {1, …, n} such that ρ(x, x′_i) ≤ ε. The covering number N(S, ε, ρ) is the smallest cardinality of an ε-cover.
If V = {u} we simply write ∥∇_u f∥_∞. In the following, M is a positive definite d × d matrix with eigenvalues λ_1 ≥ ··· ≥ λ_d > 0 and eigenvectors u_1, ..., u_d. For each k = 1, ..., d, the truncated determinant is det_k(M) = λ_1 × ··· × λ_k.

Figure 1: A quickly decreasing spectrum of M implies slow growth of its effective rank in t. [Two panels: the eigenvalues λ_1, ..., λ_10 of M (left); the effective rank ρ_t of M as a function of t (right).]

The kappa function for the matrix M is defined by
$$\kappa(r, t) = \max\Big\{ m : \lambda_m \ge t^{-\frac{2}{1+r}},\ m = 1, \dots, d \Big\} \quad (5)$$
for t ≥ 1 and r = 1, ..., d. Note that κ(r + 1, t) ≤ κ(r, t). Now define the effective rank of M at horizon t by
$$\rho_t = \min\big\{ r : \kappa(r, t) \le r,\ r = 1, \dots, d \big\}. \quad (6)$$
Since κ(d, t) ≤ d for all t ≥ 1, this is a well-defined quantity. Note that ρ_1 ≤ ρ_2 ≤ ··· ≤ d. Also, ρ_t = d for all t ≥ 1 when M is the d × d identity matrix. The effective rank ρ_t measures the number of eigenvalues that are larger than a threshold that shrinks with t. Hence matrices M with extremely light-tailed spectra cause ρ_t to remain small even when t grows large. This behaviour is shown in Figure 1. Throughout the paper, we write f = O(g) for asymptotic upper bounds and f = Õ(g) when logarithmic factors are hidden.

3 Online nonparametric learning with ellipsoid packing

In this section we present a variant (Algorithm 1) of the online nonparametric regression algorithm introduced in [6]. Since our analysis is invariant to rescalings of the matrix M, without loss of generality we assume M has unit spectral radius (i.e., λ_1 = 1). Algorithm 1 sequentially constructs a packing of X using M-ellipsoids centered on a subset of the past observed instances. At each step t, the label of the current instance x_t is predicted using the average ŷ_t of the labels of past instances that fell inside the ellipsoid whose center x_s is closest to x_t in the Mahalanobis metric.
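The kappa function (5) and the effective rank (6) are straightforward to compute from a spectrum; the following NumPy sketch (ours) reproduces the behaviour described above, with a light-tailed spectrum keeping ρ_t small while an identity-like spectrum gives ρ_t = d:

```python
import numpy as np

def kappa(lams, r, t):
    """kappa(r, t) = max{ m : lambda_m >= t^(-2/(1+r)) } (0 if none)."""
    thr = t ** (-2.0 / (1.0 + r))
    above = [m for m in range(1, len(lams) + 1) if lams[m - 1] >= thr]
    return max(above, default=0)

def effective_rank(lams, t):
    """rho_t = min{ r : kappa(r, t) <= r }; well defined since kappa(d, t) <= d."""
    d = len(lams)
    return min(r for r in range(1, d + 1) if kappa(lams, r, t) <= r)

lams = np.array([1.0] + [2.0 ** -k for k in range(1, 10)])  # light-tailed spectrum
print(effective_rank(lams, 10), effective_rank(lams, 10_000))
# flat spectrum (identity matrix): rho_t equals the ambient dimension d
print(effective_rank(np.ones(10), 10_000))
```

With the geometric spectrum above, the effective rank grows only from 3 to 5 as t goes from 10 to 10,000, while the flat spectrum stays pinned at d = 10.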
At the end of the step, if x_t was outside of the closest ellipsoid, then a new ellipsoid is created with center x_t. The radii ε_t of all ellipsoids are shrunk at rate t^{−1/(1+ρ_t)}. Note that efficient (i.e., logarithmic in the number of centers) implementations of approximate nearest-neighbor search for the active center x_s exist [10].

The core idea of the proof (deferred to the supplementary material) is to maintain a trade-off between the regret contribution of the ellipsoids and an additional regret term due to the approximation of f by the Voronoi partitioning. The regret contribution of each ellipsoid is logarithmic in the number of predictions made. Since each instance is predicted by a single ellipsoid, if we ignore log factors the overall regret contribution is equal to the number of ellipsoids, which is essentially controlled by the packing number w.r.t. the metric defined by M. The second regret term is due to the fact that —at any point of time— the prediction of the algorithm is constant within the Voronoi cells of X induced by the current centers (recall that we predict with nearest neighbor). Hence, we pay an extra term equal to the radius of the ellipsoids times the Lipschitz constant, which depends on the directional Lipschitzness of f with respect to the eigenbasis of M.

Algorithm 1 Nonparametric online regression
Input: positive definite d × d matrix M.
1: S ← ∅  ▷ Centers
2: for t = 1, 2, ... do
3:   ε_t ← t^{−1/(1+ρ_t)}  ▷ Update radius
4:   Observe x_t
5:   if S ≡ ∅ then
6:     S ← {t}, T_t ← ∅  ▷ Create initial ball
7:   end if
8:   s ← argmin_{s ∈ S} ∥x_t − x_s∥_M  ▷ Find active center
9:   if T_s ≡ ∅ then
10:    ŷ_t ← 1/2
11:  else
12:    ŷ_t ← (1/|T_s|) Σ_{t′ ∈ T_s} y_{t′}  ▷ Predict using active center
13:  end if
14:  Observe y_t
15:  if ∥x_t − x_s∥_M ≤ ε_t then
16:    T_s ← T_s ∪ {t}  ▷ Update list for active center
17:  else
18:    S ← S ∪ {t}, T_t ← ∅  ▷ Create new center
19:  end if
20: end for

Theorem 1 (Regret with Fixed Metric). Suppose Algorithm 1 is run with a positive definite matrix M with eigenbasis u_1, ..., u_d and eigenvalues 1 = λ_1 ≥ ··· ≥ λ_d > 0. Then, for any differentiable f : X → Y we have that
$$R_T(f) = \tilde O\!\left(\left(\sqrt{\det{}_\kappa(M)} + \sqrt{\sum_{i=1}^d \frac{\|\nabla_{u_i} f\|_\infty^2}{\lambda_i}}\,\right) T^{\frac{\rho_T}{1+\rho_T}}\right)$$
where κ = κ(ρ_T, T) ≤ ρ_T ≤ d.

We first prove two technical lemmas about packings of ellipsoids.

Lemma 1 (Volumetric packing bound). Consider a pair of norms ∥·∥, ∥·∥′ and let B, B′ ⊂ R^d be the corresponding unit balls. Then
$$M\big(B, \varepsilon, \|\cdot\|'\big) \le \frac{\mathrm{vol}\big(B + \frac{\varepsilon}{2} B'\big)}{\mathrm{vol}\big(\frac{\varepsilon}{2} B'\big)}.$$

Lemma 2 (Ellipsoid packing bound). If B is the unit Euclidean ball, then
$$M\big(B, \varepsilon, \|\cdot\|_M\big) \le \left(\frac{8\sqrt{2}}{\varepsilon}\right)^{\!s} \prod_{i=1}^s \sqrt{\lambda_i}, \quad \text{where } s = \max\big\{ i : \sqrt{\lambda_i} \ge \varepsilon,\ i = 1, \dots, d \big\}.$$

The following lemma states that whenever f has bounded partial derivatives with respect to the eigenbasis of M, then f is Lipschitz with respect to ∥·∥_M.

Lemma 3 (Bounded derivatives imply Lipschitzness in the M-metric). Let f : X → R be everywhere differentiable. Then for any x, x′ ∈ X,
$$f(x) - f(x') \le \|x - x'\|_M \sqrt{\sum_{i=1}^d \frac{\|\nabla_{u_i} f\|_\infty^2}{\lambda_i}}.$$

4 Learning while learning the metric

In this section, we assume instances x_t are realizations of i.i.d. random variables X_t drawn according to some fixed and unknown distribution µ, which has a continuous density on its support X. We also assume labels y_t are generated according to the noise model y_t = f_0(x_t) + ν(x_t), where f_0 is some unknown regression function and ν(x) is a subgaussian zero-mean random variable for all x ∈ X. We then simply write R_T to denote the regret R_T(f_0). Note that R_T is now a random variable, which we bound with high probability.
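As a rough illustration, here is a simplified NumPy implementation of Algorithm 1 (ours, not the authors' code); for simplicity it treats the effective rank ρ as a fixed input rather than recomputing ρ_t from the spectrum of M at every step:

```python
import numpy as np

class NonparametricOnlineRegression:
    """Sketch of Algorithm 1: nearest-neighbor prediction over a growing
    packing of M-ellipsoids whose radii shrink at rate t^(-1/(1+rho))."""

    def __init__(self, M, rho):
        self.M, self.rho = M, rho
        self.centers, self.labels = [], []   # labels[s] = list of y's for center s

    def _dist(self, x, z):
        d = x - z
        return float(np.sqrt(d @ self.M @ d))

    def step(self, t, x, y):
        eps = t ** (-1.0 / (1.0 + self.rho))          # update radius
        if not self.centers:
            self.centers.append(x)                    # create initial ball
            self.labels.append([])
        s = min(range(len(self.centers)),
                key=lambda i: self._dist(x, self.centers[i]))  # active center
        # predict with the average label of the active center (1/2 if empty)
        y_hat = float(np.mean(self.labels[s])) if self.labels[s] else 0.5
        if self._dist(x, self.centers[s]) <= eps:
            self.labels[s].append(y)                  # update list for active center
        else:
            self.centers.append(x)                    # create new center
            self.labels.append([])
        return y_hat

rng = np.random.default_rng(0)
learner = NonparametricOnlineRegression(M=np.eye(2), rho=2)
f0 = lambda x: 0.5 * (1 + np.sin(2 * x[0]))           # smooth target in [0, 1]
regret = 0.0
for t in range(1, 2001):
    x = rng.uniform(-1, 1, 2)
    y = float(np.clip(f0(x) + 0.05 * rng.standard_normal(), 0, 1))
    regret += (learner.step(t, x, y) - y) ** 2 - (f0(x) - y) ** 2
print(len(learner.centers), regret / 2000)
```

On a smooth target with small label noise, the number of stored centers stays far below T, and the average per-round regret against f_0 is small.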
We now show how the nonparametric online learning algorithm (Algorithm 1) of Section 3 can be combined with an algorithm that learns an estimate
$$\hat G_n = \frac{1}{n}\sum_{t=1}^n \hat\nabla f_0(x_t)\, \hat\nabla f_0(x_t)^\top \quad (7)$$
of the expected gradient outer product matrix G = E[∇f_0(X)∇f_0(X)^⊤]. The algorithm (described in the supplementary material) is consistent under the following assumptions. Let X(τ) be X blown up by a factor of 1 + τ.

Assumption 1.
1. There exists τ_0 > 0 such that f_0 is continuously differentiable on X(τ_0).
2. There exists G > 0 such that max_{x ∈ X(τ_0)} ∥∇f_0(x)∥ ≤ G.
3. The distribution µ is full-dimensional: there exists C_µ > 0 such that for all x ∈ X and ε > 0, µ(B(x, ε)) ≥ C_µ ε^d.

In particular, the next lemma states that, under Assumption 1, Ĝ_n is a consistent estimate of G.

Lemma 4 ([14, Theorem 1]). If Assumption 1 holds, then there exists a nonnegative and nonincreasing sequence {γ_n}_{n ≥ 1} such that for all n, the estimated gradient outer product (7) computed with parameters ε_n > 0 and 0 < τ_n < τ_0 satisfies
$$\big\|\hat G_n - G\big\|_2 \le \gamma_n$$
with high probability with respect to the random draw of X_1, ..., X_n. Moreover, if τ_n = Θ(ε_n^{1/4}), ε_n = Ω((ln n)^{2/d} n^{−1/d}), and ε_n = O(n^{−1/(2(d+1))}), then γ_n → 0 as n → ∞.

Our algorithm works in phases i = 1, 2, ..., where phase i has length n(i) = 2^i. Let T(i) = 2^{i+1} − 2 be the index of the last time step in phase i. The algorithm uses a nonincreasing regularization sequence γ_0 ≥ γ_1 ≥ ··· > 0. Let M̂(0) = γ_0 I. During each phase i, the algorithm predicts the data points by running Algorithm 1 with M = M̂(i−1)/∥M̂(i−1)∥_2 (where ∥·∥_2 denotes the spectral norm). Simultaneously, the gradient outer product estimate (7) is trained over the same data points. At the end of phase i, the current gradient outer product estimate Ĝ(i) = Ĝ_{T(i)} is used to form a new matrix M̂(i) = Ĝ(i) + γ_{T(i)} I. Algorithm 1 is then restarted in phase i + 1 with M = M̂(i)/∥M̂(i)∥_2. Note that the metric learning algorithm can also be implemented efficiently through nearest-neighbor search, as explained in [14].

Let µ_1 ≥ µ_2 ≥ ··· ≥ µ_d be the eigenvalues and u_1, ..., u_d the eigenvectors of G. We define the j-th eigenvalue separation ∆_j by ∆_j = min_{k ≠ j} |µ_j − µ_k|. For any ∆ > 0 define also V_∆ ≡ {u_j : |µ_j − µ_k| ≥ ∆, k ≠ j} and V_∆^⊥ = {u_1, ..., u_d} \ V_∆. Our results are expressed in terms of the effective rank (6) of G at horizon T. However, in order to account for the error in estimating the eigenvalues of G, we define the effective rank ρ̃_t with respect to the following slight variant of the kappa function,
$$\tilde\kappa(r, t) = \max\Big\{ m : \mu_m + 2\gamma_t \ge \mu_1 t^{-\frac{2}{1+r}},\ m = 1, \dots, d \Big\}$$
for t ≥ 1 and r = 1, ..., d. Let M̂(i) be the estimated gradient outer product constructed at the end of phase i, and let µ̂_1(i) + γ(i) ≥ ··· ≥ µ̂_d(i) + γ(i) and û_1(i), ..., û_d(i) be the eigenvalues and eigenvectors of M̂(i), where we also write γ(i) to denote γ_{T(i)}. We use κ̂ to denote the kappa function with estimated eigenvalues and ρ̂ to denote the effective rank defined through κ̂. We start with a technical lemma.

Lemma 5. Let µ_d, α > 0 and d ≥ 1. Then the derivative of F(t) = (µ_d + 2(T_0 + t)^{−α}) t^{2/(1+d)} is positive for all t ≥ 1 when T_0 ≥ ((d+1)/(2µ_d))^{1/α}.

Proof. We have that F′(t) ≥ 0 if and only if t ≤ (2(T_0 + t)/(α(d+1)))(1 + (T_0 + t)^α µ_d). This is implied by t ≤ 2µ_d (T_0 + t)^{1+α}/(α(d+1)) or, equivalently, T_0 ≥ A^{1/(1+α)} t^{1/(1+α)} − t, where A = α(d+1)/(2µ_d). The right-hand side A^{1/(1+α)} t^{1/(1+α)} − t is a concave function of t. Hence the maximum is attained at the value of t where the derivative is zero; this value satisfies (A^{1/(1+α)}/(1+α)) t^{−α/(1+α)} = 1, which solved for t gives t = A^{1/α}(1+α)^{−(1+α)/α}. Substituting this value of t into A^{1/(1+α)} t^{1/(1+α)} − t gives the condition T_0 ≥ A^{1/α} α (1+α)^{−(1+α)/α}, which is satisfied when T_0 ≥ ((d+1)/(2µ_d))^{1/α}.

Theorem 2. Suppose Assumption 1 holds.
If the algorithm is run with a regularization sequence γ_0 = 1 and γ_t = t^{−α} for some α > 0 such that γ_t ≥ γ̄_t for all t ≥ ((d+1)/(2µ_d))^{1/α}, for some sequence γ̄_1 ≥ γ̄_2 ≥ ··· > 0 satisfying Lemma 4, then for any given ∆ > 0,
$$R_T = \tilde O\!\left(\left(1 + \sqrt{\sum_{j=1}^d \frac{\big(\|\nabla_{u_j} f_0\|_\infty + \|\nabla_{V_\Delta^\perp} f_0\|_\infty\big)^2}{\mu_j/\mu_1}}\,\right) T^{\frac{\tilde\rho_T}{1+\tilde\rho_T}}\right)$$
with high probability with respect to the random draw of X_1, ..., X_T.

Note that the asymptotic notation hides terms that depend on 1/∆; hence we cannot zero out the term ∥∇_{V_∆^⊥} f_0∥_∞ in the bound by taking ∆ arbitrarily small.

Proof. Pick the smallest i_0 such that
$$T(i_0) \ge \Big(\frac{d+1}{2\mu_d}\Big)^{1/\alpha} \quad (8)$$
(we need this condition in the proof). The total regret in phases 1, 2, ..., i_0 is bounded by ((d+1)/(2µ_d))^{1/α} = O(1). Let ρ̂(i) denote the value of ρ̂_{T(i)} at the end of phase i. By Theorem 1, the regret R_T(i+1) of Algorithm 1 in each phase i+1 > i_0 is deterministically upper bounded by
$$R_T(i+1) \le 8\ln\big(e 2^{i+1}\big)\big(8\sqrt{2}\big)^{\hat\rho(i+1)} + 4\sqrt{\sum_{j=1}^d \frac{\|\nabla_{\hat u_j(i)} f_0\|_\infty^2}{\lambda_j(i)/\lambda_1(i)}}\; 2^{(i+1)\frac{\hat\rho(i+1)}{1+\hat\rho(i+1)}} \quad (9)$$
where λ_j(i) = µ̂_j(i) + γ(i). Here we used the trivial upper bound det_κ(M̂(i)/∥M̂(i)∥_2) ≤ 1 for all κ = 1, ..., d.

Now assume that µ̂_1(i) + γ(i) ≤ (µ̂_m(i) + γ(i)) t^{2/(1+r)} for some m, r ∈ {1, ..., d} and for some t in phase i+1. Using Lemma 4 and γ̄_t ≤ γ_t, we have that
$$\max_{j = 1, \dots, d} \big|\hat\mu_j(i) - \mu_j\big| \le \big\|\hat G(i) - G\big\|_2 \le \bar\gamma(i) \le \gamma(i) \quad \text{with high probability,} \quad (10)$$
where the first inequality is straightforward. Hence we may write
$$\mu_1 \le \mu_1 - \bar\gamma(i) + \gamma(i) \le \hat\mu_1(i) + \gamma(i) \le \big(\hat\mu_m(i) + \gamma(i)\big) t^{\frac{2}{1+r}} \le \big(\mu_m + \bar\gamma(i) + \gamma(i)\big) t^{\frac{2}{1+r}} \le \big(\mu_m + 2\gamma(i)\big) t^{\frac{2}{1+r}},$$
where the second-to-last inequality uses Lemma 4. Recall γ(i) = T(i)^{−α}. Using Lemma 5, we observe that the derivative of F(t) = (µ_m + 2(T(i) + t)^{−α}) t^{2/(1+r)} is positive for all t ≥ 1 when T(i) ≥ ((r+1)/(2µ_d))^{1/α} ≥ ((r+1)/(2µ_m))^{1/α}, which is guaranteed by our choice (8). Hence
$$\big(\mu_m + 2\gamma(i)\big) t^{\frac{2}{1+r}} \le \big(\mu_m + 2\gamma_T\big) T^{\frac{2}{1+r}},$$
and so (µ̂_1(i) + γ(i))/(µ̂_m(i) + γ(i)) ≤ t^{2/(1+r)} implies µ_1/(µ_m + 2γ_T) ≤ T^{2/(1+r)}. Recalling the definitions of κ̃ and κ̂, this in turn implies κ̂(r, t) ≤ κ̃(r, T), which also gives ρ̂_t ≤ ρ̃_T for all t ≤ T.

Next, we bound the approximation error in each individual eigenvalue of G. By (10) we obtain, for any phase i and for any j = 1, ..., d,
$$\mu_j + 2\gamma(i) \ge \mu_j + \bar\gamma(i) + \gamma(i) \ge \hat\mu_j(i) + \gamma(i) \ge \mu_j - \bar\gamma(i) + \gamma(i) \ge \mu_j.$$
Hence, bound (9) implies
$$R_T(i+1) \le 8\ln\big(e 2^{i+1}\big)\,12^{\tilde\rho_T} + 4\sqrt{\big(\mu_1 + 2\gamma(i)\big)\sum_{j=1}^d \frac{\|\nabla_{\hat u_j} f_0\|_\infty^2}{\mu_j}}\; 2^{(i+1)\frac{\tilde\rho_T}{1+\tilde\rho_T}}. \quad (11)$$

The error in approximating the eigenvectors of G is controlled via the following first-order eigenvector approximation result from matrix perturbation theory [20, equation (10.2)]: for any vector v of constant norm,
$$v^\top\big(\hat u_j(i) - u_j\big) = \sum_{k \ne j} \frac{u_k^\top\big(\hat M(i) - G\big) u_j}{\mu_j - \mu_k}\, v^\top u_k + o\big(\|\hat M(i) - G\|_2^2\big) \le \sum_{k \ne j} \frac{2\gamma(i)}{\mu_j - \mu_k}\, v^\top u_k + o\big(\gamma(i)^2\big), \quad (12)$$
where we used |u_k^⊤(M̂(i) − G) u_j| ≤ ∥M̂(i) − G∥_2 ≤ γ̄(i) + γ(i) ≤ 2γ(i). Then for all j such that u_j ∈ V_∆,
$$\nabla f_0(x)^\top\big(\hat u_j(i) - u_j\big) = \sum_{k \ne j} \frac{2\gamma(i)}{\mu_j - \mu_k}\, \nabla f_0(x)^\top u_k + o\big(\gamma(i)^2\big) \le \frac{2\gamma(i)}{\Delta}\sqrt{d}\, \|\nabla f_0(x)\|_2 + o\big(\gamma(i)^2\big).$$
Note that the coefficients
$$\alpha_k = \frac{u_k^\top\big(\hat M(i) - G\big) u_j}{\mu_j - \mu_k} + o\big(\gamma(i)^2\big), \qquad k \ne j,$$
are a subset of the coordinate values of the vector û_j(i) − u_j w.r.t. the orthonormal basis u_1, ..., u_d. Then, by Parseval's identity, 4 ≥ ∥û_j(i) − u_j∥_2² ≥ Σ_{k ≠ j} α_k². Therefore, it must be that
$$\max_{k \ne j} \left|\frac{u_k^\top\big(\hat M(i) - G\big) u_j}{\mu_j - \mu_k}\right| \le 2 + o\big(\gamma(i)^2\big).$$
For any j such that u_j ∈ V_∆^⊥, since |µ_j − µ_k| ≥ ∆ for all u_k ∈ V_∆, we may write
$$\nabla f_0(x)^\top\big(\hat u_j(i) - u_j\big) \le \frac{2\gamma(i)}{\Delta} \sum_{u_k \in V_\Delta} \nabla f_0(x)^\top u_k + \big(2 + o(\gamma(i)^2)\big) \sum_{u_k \in V_\Delta^\perp} \nabla f_0(x)^\top u_k + o\big(\gamma(i)^2\big)$$
$$\le \frac{2\gamma(i)}{\Delta}\sqrt{d}\, \|P_{V_\Delta}\nabla f_0(x)\|_2 + \big(2 + o(\gamma(i)^2)\big)\sqrt{d}\, \big\|P_{V_\Delta^\perp}\nabla f_0(x)\big\|_2 + o\big(\gamma(i)^2\big),$$
where P_{V_∆} and P_{V_∆^⊥} are the orthogonal projections onto, respectively, V_∆ and V_∆^⊥. Therefore, we have that
$$\|\nabla_{\hat u_j} f_0\|_\infty = \sup_{x \in X} \nabla f_0(x)^\top \hat u_j(i) = \sup_{x \in X} \nabla f_0(x)^\top\big(\hat u_j(i) - u_j + u_j\big) \le \sup_{x \in X} \nabla f_0(x)^\top u_j + \sup_{x \in X} \nabla f_0(x)^\top\big(\hat u_j(i) - u_j\big)$$
$$\le \|\nabla_{u_j} f_0\|_\infty + \frac{2\gamma(i)}{\Delta}\sqrt{d}\, \|\nabla_{V_\Delta} f_0\|_\infty + \big(2 + o(\gamma(i)^2)\big)\sqrt{d}\, \|\nabla_{V_\Delta^\perp} f_0\|_\infty + o\big(\gamma(i)^2\big). \quad (13)$$
Letting
$$\alpha_\Delta(i) = \frac{2\gamma(i)}{\Delta}\sqrt{d}\, \|\nabla_{V_\Delta} f_0\|_\infty + \big(2 + o(\gamma(i)^2)\big)\sqrt{d}\, \|\nabla_{V_\Delta^\perp} f_0\|_\infty + o\big(\gamma(i)^2\big),$$
we can upper bound (11) as follows:
$$R_T(i+1) \le 8\ln\big(e 2^{i+1}\big)\,12^{\tilde\rho_T} + 4\sqrt{\big(\mu_1 + 2\gamma(i)\big)\sum_{j=1}^d \frac{\big(\|\nabla_{u_j} f_0\|_\infty + \alpha_\Delta(i)\big)^2}{\mu_j}}\; 2^{(i+1)\frac{\tilde\rho_T}{1+\tilde\rho_T}}.$$
Recall that, due to (10), the above holds at the end of each phase i+1 with high probability. Now observe that γ(i) = O(2^{−αi}), and so α_∆(i) = O(∥∇_{V_∆} f_0∥_∞/∆ + ∥∇_{V_∆^⊥} f_0∥_∞). Hence, by summing over phases i = 1, ..., ⌈log_2 T⌉ and applying the union bound,
$$R_T = \sum_{i=1}^{\lceil \log_2 T \rceil} R_T(i) \le \sum_i \left( 8\ln(eT)\,12^d + 4\sqrt{\big(\mu_1 + 2\gamma(i-1)\big)\sum_{j=1}^d \frac{\big(\|\nabla_{u_j} f_0\|_\infty + \alpha_\Delta(i-1)\big)^2}{\mu_j}}\; 2^{i\frac{\tilde\rho_T}{1+\tilde\rho_T}} \right)$$
$$= \tilde O\!\left(\left(1 + \sqrt{\sum_{j=1}^d \frac{\big(\|\nabla_{u_j} f_0\|_\infty + \|\nabla_{V_\Delta^\perp} f_0\|_\infty\big)^2}{\mu_j/\mu_1}}\,\right) T^{\frac{\tilde\rho_T}{1+\tilde\rho_T}}\right),$$
concluding the proof.

5 Conclusions and future work

We presented an efficient algorithm for online nonparametric regression which adapts to the directions along which the regression function f_0 is smoother. It does so by learning the Mahalanobis metric through the estimation of the gradient outer product matrix E[∇f_0(X)∇f_0(X)^⊤]. As a preliminary result, we analyzed the regret of a generalized version of the algorithm from [6], capturing situations where one competes against functions with directional Lipschitzness with respect to an arbitrary Mahalanobis metric. Our main result is then obtained through a phased algorithm that estimates the gradient outer product matrix while running online nonparametric regression on the same sequence. Both algorithms automatically adapt to the effective rank of the metric.

This work could be extended by investigating a variant of Algorithm 1 for classification, in which ball radii shrink at a nonuniform rate, depending on the mistakes accumulated within each ball rather than on time. This could lead to the ability of competing against functions f that are only locally Lipschitz. In addition, it is conceivable that under appropriate assumptions, a fraction of the balls could stop shrinking at a certain point when no more mistakes are made. This might yield better asymptotic bounds than those implied by Theorem 1, because ρ_T would never attain the ambient dimension d.

Acknowledgments

The authors would like to thank Sébastien Gerchinovitz and Samory Kpotufe for useful discussions on this work. IK would like to thank Google for travel support. This work was also partly funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 637076).

References

[1] R. Allez and J.-P. Bouchaud. Eigenvector dynamics: general theory and some applications. Physical Review E, 86(4):046202, 2012.
[2] A. Bellet, A. Habrard, and M. Sebban.
A Survey on Metric Learning for Feature Vectors and Structured Data. arXiv preprint arXiv:1306.6709, 2013. [3] X. Dong and D.-X. Zhou. Learning Gradients by a Gradient Descent Algorithm. Journal of Mathematical Analysis and Applications, 341(2):1018–1027, 2008. [4] P. Gaillard and S. Gerchinovitz. A chaining Algorithm for Online Nonparametric Regression. In Conference on Learning Theory (COLT), 2015. [5] Z.-C. Guo, Y. Ying, and D.-X. Zhou. Online Regularized Learning with Pairwise Loss Functions. Advances in Computational Mathematics, 43(1):127–150, 2017. [6] E. Hazan and N. Megiddo. Online Learning with Prior Knowledge. In Learning Theory, pages 499–513. Springer, 2007. [7] R. Jin, S. Wang, and Y. Zhou. Regularized Distance Metric Learning: Theory and Algorithm. In Conference on Neural Information Processing Systems (NIPS), 2009. [8] S. Kpotufe, A. Boularias, T. Schultz, and K. Kim. Gradients Weights Improve Regression and Classification. Journal of Machine Learning Research, 17(22):1–34, 2016. [9] S. Kpotufe and F. Orabona. Regression-Tree Tuning in a Streaming Setting. In Conference on Neural Information Processing Systems (NIPS), 2013. [10] R. Krauthgamer and J. R. Lee. Navigating nets: simple algorithms for proximity search. In Proceedings of the 15th annual ACM-SIAM Symposium on Discrete algorithms, pages 798–807. Society for Industrial and Applied Mathematics, 2004. [11] S. Mukherjee and Q. Wu. Estimation of Gradients and Coordinate Covariation in Classification. Journal of Machine Learning Research, 7(Nov):2481–2514, 2006. [12] S. Mukherjee and D.-X. Zhou. Learning Coordinate Covariances via Gradients. Journal of Machine Learning Research, 7(Mar):519–549, 2006. [13] A. Rakhlin and K. Sridharan. Online Non-Parametric Regression. In Conference on Learning Theory (COLT), 2014. [14] S. Trivedi, J. Wang, S. Kpotufe, and G. Shakhnarovich. A consistent Estimator of the Expected Gradient Outerproduct. 
In Conference on Uncertainty in Artificial Intelligence (UAI), 2014.
[15] V. Vovk. Metric entropy in competitive on-line prediction. arXiv preprint cs/0609045, 2006.
[16] V. Vovk. On-line regression competitive with reproducing kernel Hilbert spaces. In International Conference on Theory and Applications of Models of Computation. Springer, 2006.
[17] V. Vovk. Competing with wild prediction rules. Machine Learning, 69(2):193–212, 2007.
[18] Y. Wang, R. Khardon, D. Pechyony, and R. Jones. Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions. In Conference on Learning Theory (COLT), 2012.
[19] K. Q. Weinberger and L. K. Saul. Distance Metric Learning for Large Margin Nearest Neighbor Classification. Journal of Machine Learning Research, 10:207–244, 2009.
[20] J. H. Wilkinson. The Algebraic Eigenvalue Problem, volume 87. Clarendon Press, Oxford, 1965.
[21] Q. Wu, J. Guinney, M. Maggioni, and S. Mukherjee. Learning gradients: predictive models that infer geometry and statistical dependence. Journal of Machine Learning Research, 11(Aug):2175–2198, 2010.
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure

Alberto Bietti, Inria∗ (alberto.bietti@inria.fr)
Julien Mairal, Inria∗ (julien.mairal@inria.fr)

Abstract

Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. Unfortunately, these techniques are unable to deal with stochastic perturbations of input data, induced for example by data augmentation. In such cases, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for these settings when the objective is composite and strongly convex. The convergence rate outperforms SGD with a typically much smaller constant factor, which depends on the variance of gradient estimates only due to perturbations on a single example.

1 Introduction

Many supervised machine learning problems can be cast as the minimization of an expected loss over a data distribution with respect to a vector x in R^p of model parameters. When an infinite amount of data is available, stochastic optimization methods such as SGD or stochastic mirror descent algorithms, or their variants, are typically used (see [5, 11, 24, 34]). Nevertheless, when the dataset is finite, incremental methods based on variance reduction techniques (e.g., [2, 8, 15, 17, 18, 27, 29]) have proven to be significantly faster at solving the finite-sum problem
$$\min_{x \in \mathbb{R}^p} \Big\{ F(x) := f(x) + h(x) = \frac{1}{n}\sum_{i=1}^n f_i(x) + h(x) \Big\}, \quad (1)$$
where the functions f_i are smooth and convex, and h is a simple convex penalty that need not be differentiable, such as the ℓ1 norm. A classical setting is f_i(x) = ℓ(y_i, x^⊤ξ_i) + (µ/2)∥x∥², where (ξ_i, y_i) is an example-label pair, ℓ is a convex loss function, and µ is a regularization parameter.
In this paper, we are interested in a variant of (1) where random perturbations of data are introduced, which is a common scenario in machine learning. Then, the functions f_i involve an expectation over a random perturbation ρ, leading to the problem
$$\min_{x \in \mathbb{R}^p} \Big\{ F(x) := \frac{1}{n}\sum_{i=1}^n f_i(x) + h(x) \Big\}, \quad \text{with } f_i(x) = \mathbb{E}_\rho\big[\tilde f_i(x, \rho)\big]. \quad (2)$$
Unfortunately, variance reduction methods are not compatible with the setting (2), since evaluating a single gradient ∇f_i(x) requires computing a full expectation. Yet, dealing with random perturbations is of utmost interest; for instance, this is a key to achieving stable feature selection [23], improving the generalization error both in theory [33] and in practice [19, 32], obtaining stable and robust predictors [36], or using complex a priori knowledge about data to generate virtually larger datasets [19, 26, 30]. Injecting noise in data is also useful to hide gradient information for privacy-aware learning [10].

∗Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Table 1: Iteration complexity of different methods for solving the objective (2), in terms of the number of iterations required to find x such that E[f(x) − f(x*)] ≤ ǫ. The complexity of N-SAGA [14] matches the first term of S-MISO but is asymptotically biased. Note that the perturbation noise variance σ²_p is always smaller than the total variance σ²_tot, and thus S-MISO improves on SGD both in the first term (linear convergence to a smaller ǭ) and in the second (smaller constant in the asymptotic rate). In many application cases, we also have σ²_p ≪ σ²_tot (see main text and Table 2).

  Method       | Asymptotic error   | Iteration complexity
  SGD          | 0                  | O((L/µ) log(1/ǭ) + σ²_tot/(µǫ)), with ǭ = O(σ²_tot/µ)
  N-SAGA [14]  | ǫ_0 = O(σ²_p/µ)    | O((n + L/µ) log(1/ǫ)), with ǫ > ǫ_0
  S-MISO       | 0                  | O((n + L/µ) log(1/ǭ) + σ²_p/(µǫ)), with ǭ = O(σ²_p/µ)
Despite its importance, the optimization problem (2) has been little studied, and to the best of our knowledge, no dedicated optimization method able to exploit the problem structure has been developed so far. A natural way to optimize this objective when h = 0 is indeed SGD, but ignoring the finite-sum structure leads to gradient estimates with high variance and slow convergence. The goal of this paper is to introduce an algorithm for strongly convex objectives, called stochastic MISO, which exploits the underlying finite sum using variance reduction. Our method achieves a faster convergence rate than SGD, by removing the dependence on the gradient variance due to sampling the data points i in {1, ..., n}; the dependence remains only for the variance due to random perturbations ρ. To the best of our knowledge, our method is the first algorithm that interpolates naturally between incremental methods for finite sums (when there are no perturbations) and the stochastic approximation setting (when n = 1), while being able to efficiently tackle the hybrid case.

Related work. Many optimization methods dedicated to the finite-sum problem (e.g., [15, 29]) have been motivated by the fact that their updates can be interpreted as SGD steps with unbiased estimates of the full gradient, but with a variance that decreases as the algorithm approaches the optimum [15]; on the other hand, vanilla SGD requires decreasing step-sizes to achieve this reduction of variance, thereby slowing down convergence. Our work aims at extending these techniques to the case where each function in the finite sum can only be accessed via a first-order stochastic oracle. Most related to our work, recent methods that use data clustering to accelerate variance reduction techniques [3, 14] can be seen as tackling a special case of (2), where the expectations in f_i are replaced by empirical averages over points in a cluster.
While N-SAGA [14] was originally not designed for the stochastic context we consider, we remark that their method can be applied to (2). Their algorithm is however asymptotically biased and does not converge to the optimum. On the other hand, ClusterSVRG [3] is not biased, but does not support infinite datasets. The method proposed in [1] uses variance reduction in a setting where gradients are computed approximately, but the algorithm computes a full gradient at every pass, which is not available in our stochastic setting. Paper organization. In Section 2, we present our algorithm for smooth objectives, and we analyze its convergence in Section 3. For space limitation reasons, we present an extension to composite objectives and non-uniform sampling in Appendix A. Section 4 is devoted to empirical results. 2 The Stochastic MISO Algorithm for Smooth Objectives In this section, we introduce the stochastic MISO approach for smooth objectives (h = 0), which relies on the following assumptions: • (A1) global strong convexity: f is µ-strongly convex; • (A2) smoothness: ˜fi(·, ρ) is L-smooth for all i and ρ (i.e., with L-Lipschitz gradients). 2 Table 2: Estimated ratio σ2 tot/σ2 p, which corresponds to the expected acceleration of S-MISO over SGD. These numbers are based on feature vectors variance, which is closely related to the gradient variance when learning a linear model. ResNet-50 denotes a 50 layer network [12] pre-trained on the ImageNet dataset. For image transformations, the numbers are empirically evaluated from 100 different images, with 100 random perturbations for each image. R2 tot (respectively, R2 cluster) denotes the average squared distance between pairs of points in the dataset (respectively, in a given cluster), following [14]. The settings for unsupervised CKN and Scattering are described in Section 4. More details are given in the main text. 
  Type of perturbation                          | Application case                         | Estimated ratio σ²_tot/σ²_p
  Direct perturbation of linear model features  | Data clustering as in [3, 14]            | ≈ R²_tot/R²_cluster
                                                | Additive Gaussian noise N(0, α²I)        | ≈ 1 + 1/α²
                                                | Dropout with probability δ               | ≈ 1 + 1/δ
                                                | Feature rescaling by s in U(1−w, 1+w)    | ≈ 1 + 3/w²
  Random image transformations                  | ResNet-50 [12], color perturbation       | 21.9
                                                | ResNet-50 [12], rescaling + crop         | 13.6
                                                | Unsupervised CKN [22], rescaling + crop  | 9.6
                                                | Scattering [6], gamma correction         | 9.8

Note that these assumptions are relaxed in Appendix A by supporting composite objectives and by exploiting different smoothness parameters L_i on each example, a setting where non-uniform sampling of the training points is typically helpful to accelerate convergence (e.g., [35]).

Complexity results. We now introduce the following quantity, which is essential in our analysis:
$$\sigma_p^2 := \frac{1}{n}\sum_{i=1}^n \sigma_i^2, \quad \text{with } \sigma_i^2 := \mathbb{E}_\rho\big[\|\nabla \tilde f_i(x^*, \rho) - \nabla f_i(x^*)\|^2\big],$$
where x* is the (unique) minimizer of f. The quantity σ²_p represents the part of the variance of the gradients at the optimum that is due to the perturbations ρ. In contrast, another quantity of interest is the total variance σ²_tot, which also includes the randomness in the choice of the index i, defined as
$$\sigma_{tot}^2 = \mathbb{E}_{i,\rho}\big[\|\nabla \tilde f_i(x^*, \rho)\|^2\big] = \sigma_p^2 + \mathbb{E}_i\big[\|\nabla f_i(x^*)\|^2\big]$$
(note that ∇f(x*) = 0). The relation between σ²_tot and σ²_p is obtained by simple algebraic manipulations. The goal of our paper is to exploit the potential imbalance σ²_p ≪ σ²_tot, occurring when perturbations on input data are small compared to the sampling noise. The assumption is reasonable: given a data point, selecting a different one should lead to larger variation than a simple perturbation. From a theoretical point of view, the approach we propose achieves the iteration complexity presented in Table 1; see also Appendix D and [4, 5, 24] for the complexity analysis of SGD. The gain over SGD is of order σ²_tot/σ²_p, which is also observed in our experiments in Section 4.
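As a quick sanity check of the feature-rescaling row of Table 2, the following NumPy simulation (ours) treats the per-example gradient as proportional to the feature vector, as the text suggests for linear models; for s ~ U(1−w, 1+w) with w = 0.1, the ratio σ²_tot/σ²_p should then be close to 1 + 3/w² = 301:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, w = 1000, 20, 0.1
xi = rng.standard_normal((n, p))
xi -= xi.mean(axis=0)          # center, so the mean "gradient" is ~0

# sigma_p^2: variance due to the rescaling perturbation alone.
# For s ~ U(1-w, 1+w), Var(s) = (2w)^2 / 12 = w^2 / 3.
sigma_p2 = (w ** 2 / 3) * np.mean(np.sum(xi ** 2, axis=1))

# sigma_tot^2: variance over both the index i and the perturbation s
# (the mean gradient is ~0 after centering).
s = rng.uniform(1 - w, 1 + w, size=(n, 1))
sigma_tot2 = np.mean(np.sum((s * xi) ** 2, axis=1))

ratio = sigma_tot2 / sigma_p2
print(ratio)   # close to 1 + 3 / w^2 = 301
```

The simulated ratio matches the "huge gain factor of 300" mentioned in the text for a rescaling window of [0.9, 1.1].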
We also compare against the method N-SAGA; its convergence rate is similar to ours but suffers from a non-zero asymptotic error. Motivation from application cases. One clear framework of application is the data clustering scenario already investigated in [3, 14]. Nevertheless, we will focus on less-studied data augmentation settings that lead instead to true stochastic formulations such as (2). First, we consider learning a linear model when adding simple direct manipulations of feature vectors, via rescaling (multiplying each entry vector by a random scalar), Dropout, or additive Gaussian noise, in order to improve the generalization error [33] or to get more stable estimators [23]. In Table 2, we present the potential gain over SGD in these scenarios. To do that, we study the variance of perturbations applied to a feature vector ξ. Indeed, the gradient of the loss is proportional to ξ, which allows us to obtain good estimates of the ratio σ2 tot/σ2 p, as we observed in our empirical study of Dropout presented in Section 4. Whereas some perturbations are friendly for our method such as feature rescaling (a rescaling window of [0.9, 1.1] yields for instance a huge gain factor of 300), a large Dropout rate would lead to less impressive acceleration (e.g., a Dropout with δ = 0.5 simply yields a factor 2). Second, we also consider more interesting domain-driven data perturbations such as classical image transformations considered in computer vision [26, 36] including image cropping, rescaling, brightness, contrast, hue, and saturation changes. These transformations may be used to train a linear 3 Algorithm 1 S-MISO for smooth objectives Input: step-size sequence (αt)t≥1; initialize x0 = 1 n P i z0 i for some (z0 i )i=1,...,n; for t = 1, . . . do Sample an index it uniformly at random, a perturbation ρt, and update zt i = ( (1 −αt)zt−1 i + αt(xt−1 −1 µ∇˜fit(xt−1, ρt)), if i = it zt−1 i , otherwise. (3) xt = 1 n n X i=1 zt i = xt−1 + 1 n(zt it −zt−1 it ). 
(4) end for classifier on top of an unsupervised multilayer image model such as unsupervised CKNs [22] or the scattering transform [6]. It may also be used for retraining the last layer of a pre-trained deep neural network: given a new task unseen during the full network training and given limited amount of training data, data augmentation may be indeed crucial to obtain good prediction and S-MISO can help accelerate learning in this setting. These scenarios are also studied in Table 2, where the experiment with ResNet-50 involving random cropping and rescaling produces 224 × 224 images from 256 × 256 ones. For these scenarios with realistic perturbations, the potential gain varies from 10 to 20. Description of stochastic MISO. We are now in shape to present our method, described in Algorithm 1. Without perturbations and with a constant step-size, the algorithm resembles the MISO/Finito algorithms [9, 18, 21], which may be seen as primal variants of SDCA [28, 29]. Specifically, MISO is not able to deal with our stochastic objective (2), but it may address the deterministic finite-sum problem (1). It is part of a larger body of optimization methods that iteratively build a model of the objective function, typically a lower or upper bound on the objective that is easier to optimize; for instance, this strategy is commonly adopted in bundle methods [13, 25]. More precisely, MISO assumes that each fi is strongly convex and builds a model using lower bounds Dt(x) = 1 n Pn i=1 dt i(x), where each dt i is a quadratic lower bound on fi of the form dt i(x) = ct i,1 + µ 2 ∥x −zt i∥2 = ct i,2 −µ⟨x, zt i⟩+ µ 2 ∥x∥2. 
(5) These lower bounds are updated during the algorithm using strong convexity lower bounds at xt−1 of the form lt i(x) = fi(xt−1) + ⟨∇fi(xt−1), x −xt−1⟩+ µ 2 ∥x −xt−1∥2 ≤fi(x): dt i(x) = (1 −αt)dt−1 i (x) + αtlt i(x), if i = it dt−1 i (x), otherwise, (6) which corresponds to an update of the quantity zt i: zt i = ( (1 −αt)zt−1 i + αt(xt−1 −1 µ∇fit(xt−1)), if i = it zt−1 i , otherwise. The next iterate is then computed as xt = arg minx Dt(x), which is equivalent to (4). The original MISO/Finito algorithms use αt = 1 under a “big data” condition on the sample size n [9, 21], while the theory was later extended in [18] to relax this condition by supporting smaller constant steps αt = α, leading to an algorithm that may be interpreted as a primal variant of SDCA (see [28]). Note that when fi is an expectation, it is hard to obtain such lower bounds since the gradient ∇fi(xt−1) is not available in general. For this reason, we have introduced S-MISO, which can exploit approximate lower bounds to each fi using gradient estimates, by letting the step-sizes αt decrease appropriately as commonly done in stochastic approximation. This leads to update (3). Separately, SDCA [29] considers the Fenchel conjugates of fi, defined by f ∗ i (y) = supx x⊤y−fi(x). When fi is an expectation, f ∗ i is not available in closed form in general, nor are its gradients, and in fact exploiting stochastic gradient estimates is difficult in the duality framework. In contrast, [28] gives an analysis of SDCA in the primal, aka. “without duality”, for smooth finite sums, and our work extends this line of reasoning to the stochastic approximation and composite settings. 4 Relationship with SGD in the smooth case. The link between S-MISO in the non-composite setting and SGD can be seen by rewriting the update (4) as xt = xt−1 + 1 n(zt it −zt−1 it ) = xt−1 + αt n vt, where vt := xt−1 −1 µ∇˜fit(xt−1, ρt) −zt−1 it . 
Note that $\mathbb{E}[v_t \,|\, \mathcal{F}_{t-1}] = -\frac{1}{\mu}\nabla f(x_{t-1})$, where $\mathcal{F}_{t-1}$ contains all information up to iteration $t$; hence, the algorithm can be seen as an instance of the stochastic gradient method with unbiased gradients, which was a key motivation in SVRG [15] and later in other variance reduction algorithms [8, 28]. It is also worth noting that in the absence of a finite-sum structure ($n = 1$), we have $z_{i_t}^{t-1} = x_{t-1}$; hence our method becomes identical to SGD, up to a redefinition of step-sizes. In the composite case (see Appendix A), our approach yields a new algorithm that resembles regularized dual averaging [34].

Memory requirements and handling of sparse datasets. The algorithm requires storing the vectors $(z_i^t)_{i=1,\dots,n}$, which takes the same amount of memory as the original dataset and is therefore a reasonable requirement in many practical cases. In the case of sparse datasets, it is fair to assume that random perturbations applied to input data preserve the sparsity patterns of the original vectors, as is the case, e.g., when applying Dropout to text documents described with bag-of-words representations [33]. If we further assume the typical setting where the $\mu$-strong convexity comes from an $\ell_2$ regularizer, $\tilde f_i(x, \rho) = \phi_i(x^\top \xi_i^\rho) + (\mu/2)\|x\|^2$, where $\xi_i^\rho$ is the (sparse) perturbed example and $\phi_i$ encodes the loss, then the update (3) can be written as

$$z_i^t = \begin{cases} (1 - \alpha_t)\, z_i^{t-1} - \frac{\alpha_t}{\mu}\, \phi_i'\!\left(x_{t-1}^\top \xi_i^{\rho_t}\right) \xi_i^{\rho_t}, & \text{if } i = i_t \\ z_i^{t-1}, & \text{otherwise,} \end{cases}$$

which shows that for every index $i$, the vector $z_i^t$ preserves the same sparsity pattern as the examples $\xi_i^\rho$ throughout the algorithm (assuming the initialization $z_i^0 = 0$), making the update (3) efficient. The update (4) has the same cost since $z_{i_t}^t - z_{i_t}^{t-1}$ is also sparse.

Limitations and alternative approaches.
Since our algorithm is uniformly better than SGD in terms of iteration complexity, its main limitation is in terms of memory storage when the dataset cannot fit into memory (recall that the memory cost of S-MISO is the same as that of the input dataset). In these huge-scale settings, SGD should be preferred; this holds true in fact for all incremental methods when one cannot afford to perform more than one (or very few) passes over the data. Our paper focuses instead on non-huge datasets, which are those benefiting most from data augmentation. We note that a different approach to variance reduction like SVRG [15] is able to trade off storage requirements for additional full gradient computations, which would be desirable in some situations. However, we were not able to obtain any decreasing step-size strategy that works for these methods, both in theory and practice, leaving us with constant step-size approaches as in [1, 14] that either maintain a non-zero asymptotic error, or require dynamically reducing the variance of gradient estimates. One possible way to explain this difficulty is that SVRG and SAGA [8] "forget" past gradients for a given example $i$, while S-MISO averages them in (3), which seems to be a technical key to making it suitable to stochastic approximation. Nevertheless, the question of whether it is possible to trade off storage with computation in a setting like ours is open and of utmost interest.

3 Convergence Analysis of S-MISO

We now study the convergence properties of the S-MISO algorithm. For space limitation reasons, all proofs are provided in Appendix B. We start by defining the problem-dependent quantities $z_i^* := x^* - \frac{1}{\mu}\nabla f_i(x^*)$, and then introduce the Lyapunov function

$$C_t = \frac{1}{2}\|x_t - x^*\|^2 + \frac{\alpha_t}{n^2} \sum_{i=1}^n \|z_i^t - z_i^*\|^2. \tag{8}$$

Proposition 1 gives a recursion on $C_t$, obtained by upper-bounding separately its two terms, and finding coefficients to cancel out other appearing quantities when relating $C_t$ to $C_{t-1}$.
To this end, we borrow elements of the convergence proof of SDCA without duality [28]; our technical contribution is to extend their result to the stochastic approximation and composite (see Appendix A) cases.

Proposition 1 (Recursion on $C_t$). If $(\alpha_t)_{t \geq 1}$ is a positive and non-increasing sequence satisfying

$$\alpha_1 \leq \min\left\{ \frac{1}{2},\; \frac{n}{2(2\kappa - 1)} \right\}, \tag{9}$$

with $\kappa = L/\mu$, then $C_t$ obeys the recursion

$$\mathbb{E}[C_t] \leq \left(1 - \frac{\alpha_t}{n}\right)\mathbb{E}[C_{t-1}] + 2\left(\frac{\alpha_t}{n}\right)^2 \frac{\sigma_p^2}{\mu^2}. \tag{10}$$

We now state the main convergence result, which provides the expected rate $O(1/t)$ on $C_t$ based on decreasing step-sizes, similar to [5] for SGD. Note that convergence of objective function values is directly related to that of the Lyapunov function $C_t$ via smoothness:

$$\mathbb{E}[f(x_t) - f(x^*)] \leq \frac{L}{2}\,\mathbb{E}\!\left[\|x_t - x^*\|^2\right] \leq L\,\mathbb{E}[C_t]. \tag{11}$$

Theorem 2 (Convergence of Lyapunov function). Let the sequence of step-sizes $(\alpha_t)_{t \geq 1}$ be defined by $\alpha_t = \frac{2n}{\gamma + t}$ with $\gamma \geq 0$ such that $\alpha_1$ satisfies (9). For all $t \geq 0$, it holds that

$$\mathbb{E}[C_t] \leq \frac{\nu}{\gamma + t + 1} \quad \text{where} \quad \nu := \max\left\{ \frac{8\sigma_p^2}{\mu^2},\; (\gamma + 1)\,C_0 \right\}. \tag{12}$$

Choice of step-sizes in practice. Naturally, we would like $\nu$ to be small, in particular independent of the initial condition $C_0$ and equal to the first term in the definition (12). We would like the dependence on $C_0$ to vanish at a faster rate than $O(1/t)$, as is the case in variance reduction algorithms on finite sums. As advised in [5] in the context of SGD, we can initially run the algorithm with a constant step-size $\bar\alpha$ and exploit this linear convergence regime until we reach the level of noise given by $\sigma_p$, and then start decaying the step-size. It is easy to see that by using a constant step-size $\bar\alpha$, $C_t$ converges near a value $\bar C := 2\bar\alpha \sigma_p^2 / (n\mu^2)$. Indeed, Eq. (10) with $\alpha_t = \bar\alpha$ yields $\mathbb{E}[C_t - \bar C] \leq \left(1 - \frac{\bar\alpha}{n}\right)\mathbb{E}[C_{t-1} - \bar C]$. Thus, we can reach a precision $C_0'$ with $\mathbb{E}[C_0'] \leq \bar\epsilon := 2\bar C$ in $O\!\left(\frac{n}{\bar\alpha}\log\frac{C_0}{\bar\epsilon}\right)$ iterations.
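The two-phase strategy just described (constant step-size until the noise floor, then the decay of Theorem 2 with $\gamma$ matched for continuity) can be sketched as follows; the function name and parameter values are our own illustration:

```python
def smiso_stepsizes(n, alpha_bar, t_switch):
    """Two-phase schedule: constant alpha_bar for t <= t_switch, then
    alpha_t = 2n / (gamma + t), with gamma chosen so that the decaying
    phase starts exactly at alpha_bar (continuity at the switch)."""
    gamma = 2.0 * n / alpha_bar - t_switch      # solves 2n / (gamma + t_switch) = alpha_bar
    def alpha(t):
        return alpha_bar if t <= t_switch else 2.0 * n / (gamma + t)
    return alpha

alpha = smiso_stepsizes(n=100, alpha_bar=0.1, t_switch=1000)
```

After the switch, `alpha(t)` decays like $2n/t$, recovering the $O(1/t)$ rate of Theorem 2 while the constant phase removes the dependence on the initialization.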
Then, if we start decaying the step-sizes as in Theorem 2 with $\gamma$ large enough so that $\alpha_1 = \bar\alpha$, we have $(\gamma + 1)\,\mathbb{E}[C_0'] \leq (\gamma + 1)\bar\epsilon = 8\sigma_p^2/\mu^2$, making both terms in (12) smaller than or equal to $\nu = 8\sigma_p^2/\mu^2$. Considering these two phases, with an initial step-size $\bar\alpha$ given by (9), the final work complexity for reaching $\mathbb{E}[\|x_t - x^*\|^2] \leq \epsilon$ is

$$O\!\left(\left(n + \frac{L}{\mu}\right)\log\frac{C_0}{\bar\epsilon}\right) + O\!\left(\frac{\sigma_p^2}{\mu^2 \epsilon}\right). \tag{13}$$

We can then use (11) in order to obtain the complexity for reaching $\mathbb{E}[f(x_t) - f(x^*)] \leq \epsilon$. Note that following this step-size strategy was found to be very effective in practice (see Section 4).

Acceleration by iterate averaging. When one is interested in the convergence in function values, the complexity (13) combined with (11) yields $O(L\sigma_p^2/\mu^2\epsilon)$, which can be problematic for ill-conditioned problems (large condition number $L/\mu$). The following theorem presents an iterate averaging scheme which brings this complexity term down to $O(\sigma_p^2/\mu\epsilon)$, as it appeared in Table 1.

Theorem 3 (Convergence under iterate averaging). Let the step-size sequence $(\alpha_t)_{t \geq 1}$ be defined by $\alpha_t = \frac{2n}{\gamma + t}$ for $\gamma \geq 1$ such that $\alpha_1 \leq \min\left\{\frac{1}{2}, \frac{n}{4(2\kappa - 1)}\right\}$. We have

$$\mathbb{E}[f(\bar x_T) - f(x^*)] \leq \frac{2\mu\gamma(\gamma - 1)\,C_0}{T(2\gamma + T - 1)} + \frac{16\sigma_p^2}{\mu(2\gamma + T - 1)}, \quad \text{where} \quad \bar x_T := \frac{2}{T(2\gamma + T - 1)} \sum_{t=0}^{T-1} (\gamma + t)\,x_t.$$
[Figure 1 plots: training suboptimality f − f* versus epochs (log scale) on STL-10, for the CKN and scattering representations with µ ∈ {10−3, 10−4, 10−5}, comparing S-MISO (η = 0.1, 1.0), N-SAGA (η = 0.1) and SGD (η = 0.1, 1.0).]

Figure 1: Impact of conditioning for data augmentation on STL-10 (controlled by µ, where µ = 10−4 gives the best accuracy). Values of the loss are shown on a logarithmic scale (1 unit = factor 10). η = 0.1 satisfies the theory for all methods, and we include curves for larger step-sizes η = 1. We omit N-SAGA for η = 1 because it remains far from the optimum. For the scattering representation, the problem we study is ℓ1-regularized, and we use the composite algorithm of Appendix A.

[Figure 2 plots: training suboptimality versus epochs for ResNet-50 features, with µ ∈ {10−2, 10−3, 10−4}.]

Figure 2: Re-training of the last layer of a pre-trained ResNet-50 model, on a small dataset with random color perturbations (for different values of µ).

The proof uses a telescoping-sum technique similar to that of [16]. Note that if $T \gg \gamma$, the first term, which depends on the initial condition $C_0$, decays as $1/T^2$ and is thus dominated by the second term. Moreover, if we start averaging after an initial phase with constant step-size $\bar\alpha$, we can consider $C_0 \approx 4\bar\alpha\sigma_p^2/(n\mu^2)$. In the ill-conditioned regime, taking $\bar\alpha = \alpha_1 = 2n/(\gamma + 1)$ as large as allowed by (9), we have $\gamma$ of the order of $\kappa = L/\mu \gg 1$.
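The weighted average $\bar x_T$ of Theorem 3 can be computed as follows; the function name and toy iterates are our own illustration:

```python
import numpy as np

def averaged_iterate(iterates, gamma):
    """Weighted average of Theorem 3:
    xbar_T = 2 / (T (2 gamma + T - 1)) * sum_{t=0}^{T-1} (gamma + t) x_t.
    The weights (gamma + t) sum to T (2 gamma + T - 1) / 2, so this is a
    convex combination that puts more mass on later iterates."""
    x = np.asarray(iterates)                 # shape (T, d)
    T = x.shape[0]
    w = gamma + np.arange(T)
    return (w[:, None] * x).sum(axis=0) * 2.0 / (T * (2 * gamma + T - 1))

iterates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
xbar = averaged_iterate(iterates, gamma=2.0)
```

Putting linearly increasing weight on later iterates is what makes the $C_0$-dependent term decay as $1/T^2$ instead of $1/T$.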
The full convergence rate then becomes

$$\mathbb{E}[f(\bar x_T) - f(x^*)] \leq O\!\left(\frac{\sigma_p^2}{\mu(\gamma + T)}\left(1 + \frac{\gamma}{T}\right)\right).$$

When $T$ is large enough compared to $\gamma$, this becomes $O(\sigma_p^2/\mu T)$, leading to a complexity $O(\sigma_p^2/\mu\epsilon)$.

4 Experiments

We present experiments comparing S-MISO with SGD and N-SAGA [14] on four different scenarios, in order to demonstrate the wide applicability of our method: we consider an image classification dataset with two different image representations and random transformations, and two classification tasks with Dropout regularization, one on genetic data, and one on (sparse) text data. Figures 1 and 3 show the curves for an estimate of the training objective using 5 sampled perturbations per example. The plots are shown on a logarithmic scale, and the values are compared to the best value obtained among the different methods in 500 epochs. The strong convexity constant µ is the regularization parameter. For all methods, we consider step-sizes supported by the theory as well as larger step-sizes that may work better in practice. Our C++/Cython implementation of all methods considered in this section is available at https://github.com/albietz/stochs.

Choices of step-sizes. For both S-MISO and SGD, we use the step-size strategy mentioned in Section 3 and advised by [5], which we have found to be the most effective among the many heuristics
[Figure 3 plots: training suboptimality f − f* versus epochs for Dropout on the gene expression data (top) and the IMDB data (bottom), with Dropout rates δ ∈ {0.30, 0.10, 0.01}, comparing S-MISO, SGD and N-SAGA, including non-uniform (NU) sampling variants on IMDB.]

Figure 3: Impact of perturbations controlled by the Dropout rate δ. The gene data is ℓ2-normalized; hence, we consider similar step-sizes as in Figure 1. The IMDB dataset is highly heterogeneous; thus, we also include the non-uniform (NU) sampling variants of Appendix A. For uniform sampling, theoretical step-sizes perform poorly for all methods; thus, we show a larger tuned step-size η = 10.

we have tried: we initially keep the step-size constant (controlled by a factor η ≤ 1 in the figures) for 2 epochs, and then start decaying it as αt = C/(γ + t), where C = 2n for S-MISO, C = 2/µ for SGD, and γ is chosen large enough to match the previous constant step-size. For N-SAGA, we maintain a constant step-size throughout the optimization, as suggested in the original paper [14]. The factor η shown in the figures is such that η = 1 corresponds to an initial step-size nµ/(L − µ) for S-MISO (from (19) in the uniform case) and 1/L for SGD and N-SAGA (with L̄ instead of L in the non-uniform case when using the variant of Appendix A).

Image classification with "data augmentation". The success of deep neural networks is often limited by the availability of large amounts of labeled images. When there are many unlabeled images but few labeled ones, a common approach is to train a linear classifier on top of a deep network learned in an unsupervised manner, or pre-trained on a different task (e.g., on the ImageNet dataset). We follow this approach on the STL-10 dataset [7], which contains 5K training images from 10 classes and 100K unlabeled images, using a 2-layer unsupervised convolutional kernel network [22], giving representations of dimension 9,216. The perturbation consists of randomly cropping and scaling the input images.
We use the squared hinge loss in a one-versus-all setting. The vector representations are ℓ2-normalized, such that we may use the upper bound L = 1 + µ for the smoothness constant. We also present results on the same dataset using a scattering representation [6] of dimension 21,696, with random gamma corrections (raising all pixels to the power γ, where γ is chosen randomly around 1). For this representation, we add an ℓ1 regularization term and use the composite variant of S-MISO presented in Appendix A. Figure 1 shows convergence results on one training fold (500 images), for different values of µ, allowing us to study the behavior of the algorithms for different condition numbers. The low variance induced by data transformations allows S-MISO to reach a suboptimality that is orders of magnitude smaller than SGD's after the same number of epochs. Note that one unit on these plots corresponds to one order of magnitude on the logarithmic scale. N-SAGA initially reaches a smaller suboptimality than SGD, but quickly gets stuck due to the bias in the algorithm, as predicted by the theory [14], while S-MISO and SGD continue to converge to the optimum thanks to the decreasing step-sizes. The best validation accuracy for both representations is obtained for µ ≈ 10−4 (middle column), and we observed relative gains of up to 1% from using data augmentation. We computed empirical variances of the image representations for these two strategies, which are closely related to the variance of the gradient estimates, and observed these transformations to account for about 10% of the total variance. Figure 2 shows convergence results when training the last layer of a 50-layer Residual network [12] that has been pre-trained on ImageNet. Here, we consider the common scenario of leveraging a deep model trained on a large dataset as a feature extractor in order to learn a new classifier on a different small dataset, where it would be difficult to train such a model from scratch.
To simulate this setting, we consider a binary classification task on a small dataset of 100 images of size 256×256 taken from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, which we crop to 224×224 before performing random adjustments of brightness, saturation, hue and contrast. As in the STL-10 experiments, the gains of S-MISO over other methods are of about one order of magnitude in suboptimality, as predicted by Table 2.

Dropout on gene expression data. We trained a binary logistic regression model on the breast cancer dataset of [31], with different Dropout rates δ, i.e., where at every iteration, each coordinate ξj of a feature vector ξ is set to zero independently with probability δ and to ξj/(1 − δ) otherwise. The dataset consists of 295 vectors of dimension 8,141 of gene expression data, which we normalize in ℓ2 norm. Figure 3 (top) compares S-MISO with SGD and N-SAGA for three values of δ, as a way to control the variance of the perturbations. We include a Dropout rate of 0.01 to illustrate the impact of δ on the algorithms and study the influence of the perturbation variance σp², even though this value of δ is less relevant for the task. The plots show very clearly how the variance induced by the perturbations affects the convergence of S-MISO, giving suboptimality values that may be orders of magnitude smaller than those of SGD. This behavior is consistent with the theoretical convergence rate established in Section 3 and shows that the practice matches the theory.

Dropout on movie review sentiment analysis data. We trained a binary classifier with a squared hinge loss on the IMDB dataset [20] with different Dropout rates δ. We use the labeled part of the IMDB dataset, which consists of 25K training and 25K testing movie reviews, represented as 89,527-dimensional sparse bag-of-words vectors.
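The Dropout perturbation used in these experiments can be sketched as follows ("inverted" dropout; the function name and toy data are our own illustration):

```python
import numpy as np

def dropout_perturb(xi, delta, rng):
    """Set each coordinate of xi to 0 with probability delta, and to
    xi_j / (1 - delta) otherwise, so that E[perturbed xi] = xi (unbiased)."""
    keep = rng.random(xi.shape) >= delta
    return np.where(keep, xi / (1.0 - delta), 0.0)

rng = np.random.default_rng(0)
xi = np.ones(100000)
pert = dropout_perturb(xi, delta=0.3, rng=rng)
```

Note that the perturbed vector keeps (a subset of) the sparsity pattern of the original, which is exactly what the sparse update of Section 2 exploits.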
In contrast to the previous experiments, we do not normalize the representations, which have great variability in their norms; in particular, the maximum Lipschitz constant across training points is roughly 100 times larger than the average one. Figure 3 (bottom) compares the non-uniform sampling versions of S-MISO (see Appendix A) and SGD (see Appendix D) with their uniform sampling counterparts as well as N-SAGA. Note that we use a large step-size η = 10 for the uniform sampling algorithms, since η = 1 was significantly slower for all methods, likely due to outliers in the dataset. In contrast, the non-uniform sampling algorithms required no tuning and just use η = 1. The curves clearly show that S-MISO-NU has a much faster convergence in the initial phase, thanks to the larger step-size allowed by non-uniform sampling, and later converges similarly to S-MISO, i.e., at a much faster rate than SGD when the perturbations are small. The value of µ used in the experiments was chosen by cross-validation, and the use of Dropout gave improvements in test accuracy from 88.51% with no Dropout to 88.68 ± 0.03% with δ = 0.1 and 88.86 ± 0.11% with δ = 0.3 (based on 10 different runs of S-MISO-NU after 400 epochs). Finally, we also study the effect of the iterate averaging scheme of Theorem 3 in Appendix E.

Acknowledgements

This work was supported by a grant from ANR (MACARON project under grant number ANR-14-CE23-0003-01), by the ERC grant number 714381 (SOLARIS project), and by the MSR-Inria joint center.

References

[1] M. Achab, A. Guilloux, S. Gaïffas, and E. Bacry. SGD with Variance Reduction beyond Empirical Risk Minimization. arXiv:1510.04822, 2015.
[2] Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Symposium on the Theory of Computing (STOC), 2017.
[3] Z. Allen-Zhu, Y. Yuan, and K. Sridharan. Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters. In Advances in Neural Information Processing Systems (NIPS), 2016.
[4] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems (NIPS), 2011.
[5] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization Methods for Large-Scale Machine Learning. arXiv:1606.04838, 2016.
[6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 35(8):1872–1886, 2013.
[7] A. Coates, H. Lee, and A. Y. Ng. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[8] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS), 2014.
[9] A. Defazio, J. Domke, and T. S. Caetano. Finito: A faster, permutable incremental gradient method for big data problems. In International Conference on Machine Learning (ICML), 2014.
[10] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Privacy aware learning. In Advances in Neural Information Processing Systems (NIPS), 2012.
[11] J. C. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research (JMLR), 10:2899–2934, 2009.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[13] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex analysis and minimization algorithms I: Fundamentals. Springer Science & Business Media, 1993.
[14] T. Hofmann, A. Lucchi, S. Lacoste-Julien, and B. McWilliams. Variance Reduced Stochastic Gradient Descent with Neighbors. In Advances in Neural Information Processing Systems (NIPS), 2015.
[15] R. Johnson and T. Zhang.
Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems (NIPS), 2013.
[16] S. Lacoste-Julien, M. Schmidt, and F. Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. arXiv:1212.2002, 2012.
[17] G. Lan and Y. Zhou. An optimal randomized incremental gradient method. Mathematical Programming, 2017.
[18] H. Lin, J. Mairal, and Z. Harchaoui. A Universal Catalyst for First-Order Optimization. In Advances in Neural Information Processing Systems (NIPS), 2015.
[19] G. Loosli, S. Canu, and L. Bottou. Training invariant support vector machines using selective sampling. In Large Scale Kernel Machines, pages 301–320. MIT Press, Cambridge, MA, 2007.
[20] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In The 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 142–150. Association for Computational Linguistics, 2011.
[21] J. Mairal. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[22] J. Mairal. End-to-End Kernel Learning with Supervised Convolutional Kernel Networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
[23] N. Meinshausen and P. Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010.
[24] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust Stochastic Approximation Approach to Stochastic Programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[25] Y. Nesterov. Introductory Lectures on Convex Optimization. Springer, 2004.
[26] M. Paulin, J. Revaud, Z. Harchaoui, F. Perronnin, and C. Schmid. Transformation pursuit for image classification.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[27] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 162(1):83–112, 2017.
[28] S. Shalev-Shwartz. SDCA without Duality, Regularization, and Individual Convexity. In International Conference on Machine Learning (ICML), 2016.
[29] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research (JMLR), 14:567–599, 2013.
[30] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation Invariance in Pattern Recognition — Tangent Distance and Tangent Propagation. In G. B. Orr and K.-R. Müller, editors, Neural Networks: Tricks of the Trade, number 1524 in Lecture Notes in Computer Science, pages 239–274. Springer Berlin Heidelberg, 1998.
[31] M. J. van de Vijver et al. A Gene-Expression Signature as a Predictor of Survival in Breast Cancer. New England Journal of Medicine, 347(25):1999–2009, Dec. 2002.
[32] L. van der Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized corrupted features. In International Conference on Machine Learning (ICML), 2013.
[33] S. Wager, W. Fithian, S. Wang, and P. Liang. Altitude Training: Strong Bounds for Single-layer Dropout. In Advances in Neural Information Processing Systems (NIPS), 2014.
[34] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research (JMLR), 11:2543–2596, 2010.
[35] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[36] S. Zheng, Y. Song, T. Leung, and I. Goodfellow. Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Working hard to know your neighbor's margins: Local descriptor learning loss

Anastasiya Mishchuk1, Dmytro Mishkin2, Filip Radenović2, Jiří Matas2
1 Szkocka Research Group, Ukraine, anastasiya.mishchuk@gmail.com
2 Visual Recognition Group, CTU in Prague, {mishkdmy, filip.radenovic, matas}@cmp.felk.cvut.cz

Abstract

We introduce a loss for metric learning, which is inspired by Lowe's matching criterion for SIFT. We show that the proposed loss, which maximizes the distance between the closest positive and closest negative example in the batch, is better than complex regularization methods; it works well for both shallow and deep convolutional network architectures. Applying the novel loss to the L2Net CNN architecture results in a compact descriptor named HardNet. It has the same dimensionality as SIFT (128) and shows state-of-the-art performance in wide baseline stereo, patch verification and instance retrieval benchmarks.

1 Introduction

Many computer vision tasks rely on finding local correspondences, e.g., image retrieval [1, 2], panorama stitching [3], wide baseline stereo [4], and 3D reconstruction [5, 6]. Despite the growing number of attempts to replace complex classical pipelines with end-to-end learned models, e.g., for image matching [7] and camera localization [8], the classical detectors and descriptors of local patches are still in use, due to their robustness, efficiency and tight integration. Moreover, reformulating the task solved by the complex pipeline as a differentiable end-to-end process is highly challenging. As a first step towards end-to-end learning, hand-crafted descriptors like SIFT [9, 10] or detectors [9, 11, 12] have been replaced with learned ones, e.g., LIFT [13], MatchNet [14] and DeepCompare [15]. However, these descriptors have not gained popularity in practical applications despite good performance in the patch verification task.
Recent studies have confirmed that SIFT and its variants (RootSIFT-PCA [16], DSP-SIFT [17]) significantly outperform learned descriptors in image matching and small-scale retrieval [18], as well as in 3D reconstruction [19]. One of the conclusions made in [19] is that current local patch datasets are not large and diverse enough to allow the learning of a high-quality, widely applicable descriptor. In this paper, we focus on descriptor learning and, using a novel method, train a convolutional neural network (CNN), called HardNet. We additionally show that our learned descriptor significantly outperforms both hand-crafted and learned descriptors in real-world tasks like image retrieval and two-view matching under extreme conditions. For the training, we use the standard patch correspondence data, thus showing that the available datasets are sufficient for going beyond the state of the art.

2 Related work

Classical SIFT local feature matching consists of two parts: finding nearest neighbors and comparing the first-to-second nearest neighbor distance ratio against a threshold for filtering false positive matches. To the best of our knowledge, no work in local descriptor learning fully mimics such a strategy as the learning objective.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Simonyan and Zisserman [20] proposed a simple filter plus pooling scheme learned with convex optimization to replace the hand-crafted filters and poolings in SIFT. Han et al. [14] proposed a two-stage siamese architecture – for embedding and for two-patch similarity. The latter network improved matching performance, but prevented the use of fast approximate nearest neighbor algorithms like the kd-tree [21]. Zagoruyko and Komodakis [15] independently presented a similar siamese-based method which explored different convolutional architectures. Simo-Serra et al. [22] harnessed hard-negative mining with a relatively shallow architecture that exploited pair-based similarity.
The following three papers have most closely followed the classical SIFT matching scheme. Balntas et al. [23] used a triplet margin loss and a triplet distance loss, with random sampling of the patch triplets. They showed the superiority of the triplet-based architecture over a pair-based one, although, unlike SIFT matching or our work, they sampled negatives randomly. Choy et al. [7] calculate the distance matrix for mining positive as well as negative examples, followed by a pairwise contrastive loss. Tian et al. [24] use n matching pairs in a batch for generating n² − n negative samples and require that the distance to the ground-truth matches is a minimum in each row and column. No other constraint on the distance or distance ratio is enforced. Instead, they propose a penalty for the correlation of the descriptor dimensions and adopt deep supervision [25] by using intermediate feature maps for matching. Given its state-of-the-art performance, we have adopted the L2Net [24] architecture as the base for our descriptor. We show that it is possible to learn an even more powerful descriptor with a significantly simpler learning objective, without the need for the two auxiliary loss terms.

Figure 1: Proposed sampling procedure. First, patches are described by the current network, then a distance matrix is calculated. The closest non-matching descriptor – shown in red – is selected for each ai and pi patch from a positive pair (green), respectively. Finally, among the two negative candidates the hardest one is chosen. All operations are done in a single forward pass.
3 The proposed descriptor

3.1 Sampling and loss

Our learning objective mimics the SIFT matching criterion. The process is shown in Figure 1. First, a batch $X = (A_i, P_i)_{i=1..n}$ of matching local patches is generated, where $A$ stands for the anchor and $P$ for the positive. The patches $A_i$ and $P_i$ correspond to the same point on a 3D surface. We make sure that in batch $X$, there is exactly one pair originating from a given 3D point. Second, the $2n$ patches in $X$ are passed through the network shown in Figure 2. The $n \times n$ L2 pairwise distance matrix $D = \mathrm{cdist}(a, p)$, where $d(a_i, p_j) = \sqrt{2 - 2\, a_i^\top p_j}$, $i = 1..n$, $j = 1..n$, is calculated, where $a_i$ and $p_j$ denote the descriptors of patches $A_i$ and $P_j$, respectively.

Next, for each matching pair $a_i$ and $p_i$, the closest non-matching descriptors, i.e., the 2nd nearest neighbors, are found: $a_i$ – anchor descriptor; $p_i$ – positive descriptor; $p_{j_{\min}}$ – closest non-matching descriptor to $a_i$, where $j_{\min} = \arg\min_{j=1..n,\, j \neq i} d(a_i, p_j)$; $a_{k_{\min}}$ – closest non-matching descriptor to $p_i$, where $k_{\min} = \arg\min_{k=1..n,\, k \neq i} d(a_k, p_i)$. Then from each quadruplet of descriptors $(a_i, p_i, p_{j_{\min}}, a_{k_{\min}})$, a triplet is formed: $(a_i, p_i, p_{j_{\min}})$ if $d(a_i, p_{j_{\min}}) < d(a_{k_{\min}}, p_i)$, and $(p_i, a_i, a_{k_{\min}})$ otherwise.

Our goal is to minimize the distance between the matching descriptor and the closest non-matching descriptor. These $n$ triplet distances are fed into the triplet margin loss:

$$L = \frac{1}{n} \sum_{i=1}^{n} \max\left(0,\; 1 + d(a_i, p_i) - \min\left(d(a_i, p_{j_{\min}}),\, d(a_{k_{\min}}, p_i)\right)\right) \tag{1}$$

where $\min(d(a_i, p_{j_{\min}}), d(a_{k_{\min}}, p_i))$ is pre-computed during the triplet construction. The distance matrix calculation is done on GPU and the only overhead compared to random triplet sampling is the distance matrix calculation and computing the minimum over rows and columns. Moreover, compared to the usual learning with triplets, our scheme needs only a two-stream CNN, not three, which results in 30% less memory consumption and computation.
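The sampling procedure and loss (1) can be sketched in numpy as follows (random descriptors stand in for network outputs; the actual training runs this on GPU inside the forward pass):

```python
import numpy as np

def hardnet_loss(a, p, margin=1.0):
    """Triplet margin loss with hardest-in-batch negatives, as in Eq. (1).
    a, p: (n, d) arrays of L2-normalized anchor / positive descriptors."""
    n = a.shape[0]
    # d(a_i, p_j) = sqrt(2 - 2 <a_i, p_j>) for unit-length descriptors
    dist = np.sqrt(np.clip(2.0 - 2.0 * (a @ p.T), 0.0, None))
    pos = np.diag(dist)                        # d(a_i, p_i): the matching pairs
    off = dist + 1e8 * np.eye(n)               # mask out the diagonal
    hardest = np.minimum(off.min(axis=1),      # closest non-matching p_j to a_i
                         off.min(axis=0))      # closest non-matching a_k to p_i
    return np.maximum(0.0, margin + pos - hardest).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 128))
a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.1 * rng.normal(size=(8, 128))        # noisy copies play the positives
p /= np.linalg.norm(p, axis=1, keepdims=True)
loss = hardnet_loss(a, p)
```

Mismatching the positives (e.g., shifting `p` by one row) makes the "positive" distances large and the hardest negatives small, so the loss grows, as expected.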
Unlike [24], we use neither deep supervision of intermediate layers nor a constraint on the correlation of descriptor dimensions, and we experienced no significant overfitting.

3.2 Model architecture

Figure 2: The architecture of our network, adopted from L2Net [24]. Each convolutional layer is followed by batch normalization and ReLU, except the last one. Dropout regularization is used before the last convolutional layer. (Layers: 3×3 convolutions with 32–32–64–64–128–128 output channels, the third and fifth with stride 2, followed by a final 8×8 convolution producing the 128-D output; input is a 32×32 patch.)

The HardNet architecture (Figure 2) is identical to L2Net [24]. Zero padding is applied to all convolutional layers except the final one, to preserve the spatial size. There are no pooling layers, since we found that they decrease the performance of the descriptor; instead, the spatial size is reduced by strided convolutions. A batch normalization [26] layer followed by a ReLU [27] non-linearity is added after each layer except the last one. Dropout [28] regularization with a 0.1 dropout rate is applied before the last convolutional layer. The output of the network is L2-normalized to produce a unit-length 128-D descriptor. Grayscale input patches of size 32 × 32 pixels are normalized by subtracting the per-patch mean and dividing by the per-patch standard deviation. Optimization is done by stochastic gradient descent with a learning rate of 0.1, momentum of 0.9 and weight decay of 0.0001. The learning rate was decayed linearly to zero within 10 epochs for most of the experiments in this paper. Training is done with the PyTorch library [29].

3.3 Model training

We train on UBC Phototour [3], also known as the Brown dataset. It consists of three subsets – Liberty, Notre Dame and Yosemite – with about 400k normalized 64×64 patches each. Keypoints were detected by the DoG detector and verified by a 3D model.
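The spatial bookkeeping of the architecture can be checked with the standard convolution output-size formula. The layer hyper-parameters below are as we read them from Figure 2; this is an illustration, not the training code.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (floor formula)."""
    return (size + 2 * pad - kernel) // stride + 1

# (kernel, stride, pad, out_channels) for the seven conv layers of Figure 2;
# stride-2 convolutions replace pooling for downsampling
layers = [
    (3, 1, 1, 32),    # 32x32 -> 32x32
    (3, 1, 1, 32),
    (3, 2, 1, 64),    # 32x32 -> 16x16
    (3, 1, 1, 64),
    (3, 2, 1, 128),   # 16x16 -> 8x8
    (3, 1, 1, 128),
    (8, 1, 0, 128),   # final 8x8 conv collapses the map to 1x1
]

size = 32
for k, s, p, _ in layers:
    size = conv_out(size, k, s, p)
print(size)  # -> 1: a single 128-D descriptor per 32x32 patch
```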
The test set consists of 100k matching and non-matching pairs for each sequence. The common setup is to train the descriptor on one subset and test on the two others. The metric is the false positive rate (FPR) at a true positive recall of 0.95. Michel Keller found that the evaluation procedure of [14] and [23] reports the FDR (false discovery rate) instead of the FPR (false positive rate). To avoid misinterpretation of the results, we provide both FPR and FDR rates and re-estimated the scores for a direct comparison. Results are shown in Table 1. The proposed descriptor outperforms its competitors, with and without training augmentation. We have not included results on multi-scale patch sampling or the so-called “center-surrounding” architecture for two reasons. First, architectural choices are beyond the scope of the current paper. Second, it was already shown in [24, 30] that the “center-surrounding” architecture consistently improves results on the Brown dataset for different descriptors, while hurting matching performance on other, more realistic setups, e.g., on the Oxford-Affine [31] dataset. In the rest of the paper we use the descriptor trained on the Liberty sequence, which is a common practice, to allow a fair comparison; TFeat [23] and L2Net [24] use the same dataset for training.

Table 1: Patch correspondence verification performance on the Brown dataset. We report the false positive rate at a true positive rate of 95% (FPR95). Some papers report the false discovery rate (FDR) instead of the FPR due to a bug in the source code. For consistency we provide FPR, either obtained from the original article or re-estimated from the given FDR (marked with *). The best results are in bold.
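The FPR95 metric itself is straightforward to compute from the distances of matching and non-matching pairs. A small sketch (the function name is ours):

```python
import numpy as np

def fpr_at_95_recall(pos_dists, neg_dists):
    """False positive rate at the distance threshold that accepts
    95% of the matching (positive) pairs."""
    threshold = np.percentile(pos_dists, 95)   # 95% of positives fall at or below
    return float(np.mean(np.asarray(neg_dists) <= threshold))
```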
                          Test:  Liberty        Notredame      Yosemite           Mean
                      Training:  ND      Yos    Lib     Yos    Lib     ND      FDR     FPR
SIFT [9]                         29.84          22.53          27.29                   26.55
MatchNet* [14]                   7.04   11.47   3.82    5.65   11.6    8.7     7.74    8.05
TFeat-M* [23]                    7.39   10.31   3.06    3.8    8.06    7.24    6.47    6.64
L2Net [24]                       3.64   5.29    1.15    1.62   4.43    3.30            3.24
HardNet (ours)                   3.06   4.27    0.96    1.4    3.04    2.53    3.00    2.54
Augmentation: flip, random 90° rotation
GLoss+ [30]                      3.69   4.91    0.77    1.14   3.09    2.67            2.71
DC2ch2st+ [15]                   4.85   7.2     1.9     2.11   5.00    4.10            4.19
L2Net+ [24]                      2.36   4.7     0.72    1.29   2.57    1.71            2.23
HardNet+ (ours)                  2.28   3.25    0.57    0.96   2.13    2.22    1.97    1.9

3.4 Exploring the influence of the batch size

We study the influence of the mini-batch size on the final descriptor performance. It is known that small mini-batches are beneficial for faster convergence and better generalization [32], while large batches allow better GPU utilization. Our loss function design should benefit from seeing more hard negative patches, in order to learn to distinguish them from true positive patches. We report results for batch sizes of 16, 64, 128, 512, 1024 and 2048. We trained the model described in Section 3.2 on the Liberty sequence of the Brown dataset. Results are shown in Figure 3. As expected, model performance improves with increasing mini-batch size, as more examples are available for mining harder negatives. However, increasing the batch size beyond 512 brings no significant benefit.

4 Empirical evaluation

Recently, Balntas et al. [23] showed that good performance on the patch verification task on the Brown dataset does not always translate into good performance in the nearest neighbor setup, and vice versa. Therefore, we have extensively evaluated the learned descriptors on real-world tasks such as two-view matching and image retrieval. We selected RootSIFT [10], TFeat-M* [23], and L2Net [24] for direct comparison with our descriptor, as they show the best results on a variety of datasets.
Figure 3: Influence of the batch size on descriptor performance. The metric is the false positive rate (FPR) at a true positive rate of 95%, averaged over the Notredame and Yosemite validation sequences, plotted against the training epoch for batch sizes 16, 64, 128, 512, 1024 and 2048.

Figure 4: Patch retrieval descriptor performance (mAP) vs. the number of distractors, evaluated on the HPatches dataset.

4.1 Patch descriptor evaluation

HPatches [18] is a recent dataset for local patch descriptor evaluation. It consists of 116 sequences of 6 images. The dataset is split into two parts: viewpoint – 59 sequences with significant viewpoint change – and illumination – 57 sequences with significant illumination change, both natural and artificial. Keypoints are detected by the DoG, Hessian and Harris detectors in the reference image and reprojected to the rest of the images in each sequence, with 3 levels of geometric noise: Easy, Hard, and Tough variants. The HPatches benchmark defines three tasks: patch correspondence verification, image matching and small-scale patch retrieval. We refer the reader to the HPatches paper [18] for the detailed protocol of each task. Results are shown in Figure 5. L2Net and HardNet show similar performance on the patch verification task, with a small advantage for HardNet. On the matching task, even the non-augmented version of HardNet outperforms the augmented L2Net+ by a noticeable margin. The difference is larger in the TOUGH and HARD setups. The illumination sequences are more challenging than the geometric ones, for all descriptors. We also trained a network with the TFeat architecture but with the proposed loss function, denoted HardTFeat. It outperforms the original version on matching and retrieval, while being on par with it on the patch verification task.
In patch retrieval, the relative performance of the descriptors is similar to the matching problem: HardNet beats L2Net+. Both descriptors significantly outperform the previous state of the art, showing the superiority of the selected deep CNN architecture over the shallow TFeat model.

Figure 5: Left to right: verification, matching and retrieval results on the HPatches dataset. Marker color indicates the level of geometric noise: EASY, HARD and TOUGH. Marker type indicates the experimental setup. DIFFSEQ and SAMESEQ show the source of negative examples for the verification task. VIEWPT and ILLUM indicate the type of sequences for matching. None of the descriptors is trained on HPatches.

Table 2: Comparison of the loss functions and sampling strategies on the HPatches matching task; the mean mAP is reported. CPR stands for the regularization penalty on the correlation between descriptor channels, as proposed in [24]. Hard negative mining is performed once per epoch. The best results are in bold. HardNet uses the hardest-in-batch sampling and the triplet margin loss.

Sampling / Loss              Softmin   Triplet margin   Contrastive   Contrastive
                                       m = 1            m = 1         m = 2
Random                       overfit   overfit          overfit       overfit
Hard negative mining         overfit   overfit          overfit       overfit
Random + CPR                 0.349     0.286            0.007         0.083
Hard negative mining + CPR   0.391     0.346            0.055         0.279
Hardest in batch (ours)      0.474     0.482            0.444         0.482

We also ran another patch retrieval experiment, varying the number of distractors (non-matching patches) in the retrieval dataset. The results are shown in Figure 4.
The performance of the TFeat descriptor, which is comparable to L2Net with a low number of distractors, degrades quickly as the size of the database grows. At about 10,000 distractors its performance drops below SIFT's. This experiment explains why TFeat performs relatively poorly on the Oxford5k [33] and Paris6k [34] benchmarks, which contain around 12M and 15M distractors, respectively; see Section 4.4 for more details. The performance of HardNet decreases only slightly, for both the augmented and the plain version, and the difference in mAP to the other descriptors grows with the increasing complexity of the task.

4.2 Ablation study

To better understand the significance of the sampling strategy and the loss function, we conduct the experiments summarized in Table 2. We train our HardNet model (the architecture is exactly the same as the L2Net model), changing one parameter at a time and evaluating its impact. The following sampling strategies are compared: random, the proposed “hardest-in-batch”, and “classical” hard negative mining, i.e., selecting in each epoch the closest negatives from the full training set. The following loss functions are tested: softmin on distances, triplet margin with margin m = 1, and contrastive with margins m = 1 and m = 2. The latter is the maximum possible distance for unit-normed descriptors. The mean mAP on the HPatches matching task is shown in Table 2. The proposed “hardest-in-batch” sampling clearly outperforms all the other strategies for all loss functions, and it is the main reason for HardNet's good performance. Random sampling and “classical” hard negative mining led to severe overfitting: the training loss was low, but test performance was poor and varied several-fold from run to run. This behavior was observed with all loss functions. Similar results for random sampling were reported in [24]. The poor results of hard negative mining (“hardest-in-the-training-set”) are surprising.
We conjecture that this is due to label noise in the dataset: many of the mined “hard negatives” are actually positives. Visual inspection confirms this.

Figure 6: Contribution to the gradient magnitude from the positive and negative examples. Horizontal and vertical axes show the distance from the anchor (a) to the negative (n) and positive (p) examples respectively. The softmin loss gradient quickly decreases when d(a, n) > d(a, p), unlike the triplet margin loss. For the contrastive loss, negative examples with d(a, n) > m contribute zero to the gradient. The triplet margin loss and the contrastive loss with a big margin behave very similarly.

We were able to get reasonable results with random sampling and hard negative mining only with the additional correlation penalty on descriptor channels (CPR), as proposed in [24]. Regarding the loss functions, softmin gave the most stable results across all sampling strategies, but it is marginally outperformed by the contrastive and triplet margin losses for our sampling strategy. One possible explanation is that the triplet margin loss and the contrastive loss with a large margin have a constant non-zero derivative w.r.t. both the positive and negative samples, see Figure 6. In the case of the contrastive loss with a small margin, many negative examples are not used in the optimization (zero derivatives), while the softmin derivatives become small once the distance to the positive example is smaller than to the negative one.

4.3 Wide baseline stereo

To validate descriptor generalization and the ability to operate in extreme conditions, we tested the descriptors on the W1BS dataset [4].
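The gradient behavior summarized in Figure 6 can be checked numerically for one plausible formulation of each loss. The softmin form below is our assumption (cross-entropy of a soft-minimum over the two distances); treat this as illustrative rather than the exact L2Net loss.

```python
import math

def triplet_margin(dp, dn, m=1.0):
    # dp = d(a, p), dn = d(a, n)
    return max(0.0, m + dp - dn)

def contrastive(dp, dn, m=1.0):
    return dp + max(0.0, m - dn)

def softmin(dp, dn):
    # cross-entropy of a soft-minimum over the two distances (our assumption)
    return -math.log(math.exp(-dp) / (math.exp(-dp) + math.exp(-dn)))

def num_grad(f, dp, dn, eps=1e-6):
    """Central-difference gradients w.r.t. the positive and negative distance."""
    gp = (f(dp + eps, dn) - f(dp - eps, dn)) / (2 * eps)
    gn = (f(dp, dn + eps) - f(dp, dn - eps)) / (2 * eps)
    return gp, gn

# Inside the margin, the triplet loss pushes/pulls with constant unit force;
# the contrastive loss ignores negatives beyond the margin (zero gradient);
# the softmin gradient shrinks once d(a, n) is well above d(a, p).
```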
It consists of 40 image pairs, each with one particular extreme change between the images: Appearance (A): difference in appearance due to seasonal or weather change, occlusions, etc.; Geometry (G): difference in scale, camera and object position; Illumination (L): significant difference in the intensity or wavelength of the light source; Sensor (S): difference in sensor type (IR, MRI). Moreover, local features in the W1BS dataset are detected with the MSER [35], Hessian-Affine [11] (in the implementation from [36]) and FOCI [37] detectors, which fire on different local structures than DoG. Note that DoG patches were used for training the descriptors. Another significant difference from the HPatches setup is the absence of geometric noise: all patches are perfectly reprojected to the target image of a pair. The testing protocol is the same as for the HPatches matching task. Results are shown in Figure 7. HardNet and L2Net perform comparably: the former performs better on images with geometric and appearance changes, while the latter works a bit better on map2photo and visible-vs-infrared pairs. Both outperform SIFT, but only by a small margin. However, considering the significant domain shift, the descriptors perform very well, while TFeat loses badly to SIFT. HardTFeat significantly outperforms the original TFeat descriptor on the W1BS dataset, showing the superiority of the proposed loss. Good performance on the patch matching and verification tasks does not automatically lead to better performance in practice, e.g., to more images registered. Therefore we also compared the descriptors in a wide baseline stereo setup with two metrics: the number of successfully matched image pairs and the average number of inliers per matched pair, following the matcher comparison protocol from [4]. The only change to the original protocol is that the first fast matching steps with the ORB detector and descriptor were removed, as we are comparing “SIFT-replacement” descriptors. The results are shown in Table 3.
Results on the Edge Foci (EF) [37], Extreme View [38] and Oxford Affine [11] datasets are saturated: all descriptors are good enough to match all image pairs.

Figure 7: Descriptor evaluation on the W1BS patch dataset; the mean area under the precision-recall curve is reported. Letters denote the nuisance factor, A: appearance; G: viewpoint/geometry; L: illumination; S: sensor; map2photo: satellite photo vs. map.

HardNet has a slight advantage in the number of inliers per image. The remaining datasets – SymB [39], GDB [40], WxBS [4] and LTLL [41] – have one thing in common: the image pairs are either from a different domain than photographs (e.g., drawing to drawing) or cross-domain (e.g., drawing to photo). Here HardNet outperforms the other learned descriptors and is on par with the hand-crafted RootSIFT. We note that HardNet was not trained for different-domain or cross-domain scenarios; these results therefore demonstrate its generalization ability.

Table 3: Comparison of the descriptors on wide baseline stereo within the MODS matcher [4]. The number of matched image pairs and the average number of inliers are reported. Numbers in the header correspond to the number of image pairs in each dataset.

              EF (33)     EVD (15)    OxAff (40)   SymB (46)   GDB (22)    WxBS (37)   LTLL (172)
Descriptor    pairs inl.  pairs inl.  pairs inl.   pairs inl.  pairs inl.  pairs inl.  pairs inl.
RootSIFT      33    32    15    34    40    169    45    43    21    52    11    93    123   27
TFeat-M*      32    30    15    37    40    265    40    45    16    72    10    62    96    29
L2Net+        33    34    15    34    40    304    43    46    19    78    9     51    127   26
HardNet+      33    35    15    41    40    316    44    47    21    75    11    54    127   31

4.4 Image retrieval

We evaluate our method, and compare it against related ones, on the practical application of image retrieval with local features. Standard image retrieval datasets are used for the evaluation: the Oxford5k [33] and Paris6k [34] datasets.
Both datasets contain a set of images (5062 for Oxford5k and 6300 for Paris6k) depicting 11 different landmarks together with distractors. For each of the 11 landmarks there are 5 different query regions defined by a bounding box, constituting 55 query regions per dataset. The performance is reported as mean average precision (mAP) [33]. In the first experiment, multi-scale Hessian-affine features [31] are extracted for each image in the dataset. Exactly the same features are described by our method and all related methods, each producing a 128-D descriptor per feature. Then, k-means with approximate nearest neighbor search [21] is used to learn a vocabulary of 1 million visual words on an independent dataset; that is, when evaluating on Oxford5k, the vocabulary is learned on descriptors of Paris6k, and vice versa. All descriptors of the test dataset are assigned to the corresponding vocabulary, so that finally an image is represented by the histogram of visual word occurrences, i.e., the bag-of-words (BoW) [1] representation, and an inverted file is used for efficient search. Additionally, spatial verification (SV) [33] and standard query expansion (QE) [34] are used to re-rank and refine the search results. A comparison with the related work on patch description is presented in Table 4. HardNet+ and L2Net+ perform comparably across both datasets and all settings, with slightly better performance of HardNet+ on average across

Table 4: Performance (mAP) evaluation on bag-of-words (BoW) image retrieval. A vocabulary of 1M visual words is learned on an independent dataset; that is, when evaluating on Oxford5k, the vocabulary is learned on features of Paris6k, and vice versa. SV: spatial verification. QE: query expansion. The best results are highlighted in bold. All descriptors except SIFT and HardNet++ were learned on the Liberty sequence of the Brown dataset [3]. HardNet++ is trained on the union of the Brown and HPatches [18] datasets.
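The vocabulary-assignment step of the BoW pipeline can be illustrated in a few lines. This is a toy sketch with exact nearest-neighbor search (real systems use approximate search and a ~1M-word vocabulary); the function name `bow_histogram` is ours.

```python
import numpy as np

def bow_histogram(descs, vocab):
    """Assign each local descriptor to its nearest visual word and
    build the bag-of-words histogram for one image.

    descs: (m, d) local descriptors; vocab: (K, d) k-means centres.
    """
    # squared L2 distance of every descriptor to every vocabulary centre
    d2 = ((descs[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # hard (single) assignment
    return np.bincount(words, minlength=len(vocab))
```

An inverted file then indexes, for each visual word, the images whose histograms contain it, which is what makes large-scale search efficient.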
                      Oxford5k                    Paris6k
Descriptor            BoW    BoW+SV  BoW+QE      BoW    BoW+SV  BoW+QE
TFeat-M* [23]         46.7   55.6    72.2        43.8   51.8    65.3
RootSIFT [10]         55.1   63.0    78.4        59.3   63.7    76.4
L2Net+ [24]           59.8   67.7    80.4        63.0   66.6    77.2
HardNet               59.0   67.6    83.2        61.4   67.4    77.5
HardNet+              59.8   68.8    83.0        61.0   67.0    77.5
HardNet++             60.8   69.6    84.5        65.0   70.3    79.1

all results (average mAP 69.5 vs. 69.1). RootSIFT, which was the best-performing descriptor in image retrieval for a long time, falls behind, with an average mAP of 66.0 across all results. We also trained a HardNet++ version on all training data available at the moment: the union of the Brown and HPatches datasets, instead of just the Liberty sequence from Brown used for HardNet+. It shows the benefit of having more training data and performs best in all setups. Finally, we compare our descriptor with state-of-the-art image retrieval approaches that use local features. For fairness, all methods presented in Table 5 use the same local feature detector as described before, learn the vocabulary on an independent dataset, and use spatial verification (SV) and query expansion (QE). In our case (HardNet++–HQE), a visual vocabulary of 65k visual words is learned, with the additional Hamming embedding (HE) [42] technique that further refines descriptor assignments with a 128-bit binary signature.

Table 5: Performance (mAP) comparison with state-of-the-art image retrieval approaches with local features. The vocabulary is learned on an independent dataset; that is, when evaluating on Oxford5k, the vocabulary is learned on Paris6k, and vice versa. All presented results are with spatial verification and query expansion. VS: vocabulary size. SA: single assignment. MA: multiple assignments. The best results are highlighted in bold.

                          Oxford5k        Paris6k
Method                VS    SA     MA      SA     MA
SIFT–BoW [36]         1M    78.4   82.2    –      –
SIFT–BoW-fVocab [46]  16M   74.0   84.9    73.6   82.4
RootSIFT–HQE [43]     65k   85.3   88.0    81.3   82.8
HardNet++–HQE         65k   86.8   88.3    82.8   84.9
We follow the same procedure as the RootSIFT–HQE [43] method, replacing RootSIFT with our learned HardNet++ descriptor. Specifically, we use: (i) weighting of the votes as a decreasing function of the Hamming distance [44]; (ii) burstiness suppression [44]; (iii) multiple assignment of features to visual words [34, 45]; and (iv) QE with feature aggregation [43]. All parameters are set as in [43]. The performance of our method is the best reported on both Oxford5k and Paris6k when learning the vocabulary on an independent dataset (an mAP of 89.1 was reported [10] on Oxford5k by learning the vocabulary on the same dataset comprising the relevant images) and when using the same number of features (an mAP of 89.4 was reported [43] on Oxford5k when using twice as many local features, i.e., 22M compared to the 12.5M used here).

5 Conclusions

We proposed a novel loss function for learning a local image descriptor that relies on hard negative mining within a mini-batch and the maximization of the distance between the closest positive and the closest negative patches. The proposed sampling strategy outperforms classical hard negative mining and random sampling for the softmin, triplet margin and contrastive losses. The resulting descriptor is compact – it has the same dimensionality as SIFT (128) – shows state-of-the-art performance on standard matching, patch verification and retrieval benchmarks, and is fast to compute on a GPU. The training source code and the trained convnets are available at https://github.com/DagnyT/hardnet.

Acknowledgements

The authors were supported by the Czech Science Foundation Project GACR P103/12/G084, the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center, the CTU student grant SGS17/185/OHK3/3T/13, and the MSMT LL1303 ERC-CZ grant. Anastasiya Mishchuk was supported by the Szkocka Research Group Grant.
References

[1] Josef Sivic and Andrew Zisserman. Video Google: A text retrieval approach to object matching in videos. In International Conference on Computer Vision (ICCV), pages 1470–1477, 2003.
[2] Filip Radenovic, Giorgos Tolias, and Ondrej Chum. CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. In European Conference on Computer Vision (ECCV), pages 3–20, 2016.
[3] Matthew Brown and David G. Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision (IJCV), 74(1):59–73, 2007.
[4] Dmytro Mishkin, Jiri Matas, Michal Perdoch, and Karel Lenc. WxBS: Wide baseline stereo generalizations. arXiv:1504.06603, 2015.
[5] Johannes L. Schonberger, Filip Radenovic, Ondrej Chum, and Jan-Michael Frahm. From single image query to detailed 3D reconstruction. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5126–5134, 2015.
[6] Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 4104–4113, 2016.
[7] Christopher B. Choy, JunYoung Gwak, Silvio Savarese, and Manmohan Chandraker. Universal correspondence network. In Advances in Neural Information Processing Systems, pages 2414–2422, 2016.
[8] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In International Conference on Computer Vision (ICCV), 2015.
[9] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV), 60(2):91–110, 2004.
[10] Relja Arandjelovic and Andrew Zisserman. Three things everyone should know to improve object retrieval. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 2911–2918, 2012.
[11] Krystian Mikolajczyk and Cordelia Schmid. Scale & affine invariant interest point detectors. International Journal of Computer Vision (IJCV), 60(1):63–86, 2004.
[12] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In International Conference on Computer Vision (ICCV), pages 2564–2571, 2011.
[13] Kwang Moo Yi, Eduard Trulls, Vincent Lepetit, and Pascal Fua. LIFT: Learned invariant feature transform. In European Conference on Computer Vision (ECCV), pages 467–483, 2016.
[14] Xufeng Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. MatchNet: Unifying feature and metric learning for patch-based matching. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 3279–3286, 2015.
[15] Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[16] Andrei Bursuc, Giorgos Tolias, and Herve Jegou. Kernel local descriptors with implicit rotation matching. In ACM International Conference on Multimedia Retrieval, 2015.
[17] Jingming Dong and Stefano Soatto. Domain-size pooling in local descriptors: DSP-SIFT. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5097–5106, 2015.
[18] Vassileios Balntas, Karel Lenc, Andrea Vedaldi, and Krystian Mikolajczyk. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[19] Johannes L. Schonberger, Hans Hardmeier, Torsten Sattler, and Marc Pollefeys. Comparative evaluation of hand-crafted and learned local features. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[20] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Descriptor learning using convex optimisation. In European Conference on Computer Vision (ECCV), pages 243–256, 2012.
[21] Marius Muja and David G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In International Conference on Computer Vision Theory and Application (VISSAPP), pages 331–340, 2009.
[22] Edgar Simo-Serra, Eduard Trulls, Luis Ferraz, Iasonas Kokkinos, Pascal Fua, and Francesc Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. In International Conference on Computer Vision (ICCV), pages 118–126, 2015.
[23] Vassileios Balntas, Edgar Riba, Daniel Ponsa, and Krystian Mikolajczyk. Learning local feature descriptors with triplets and shallow convolutional neural networks. In British Machine Vision Conference (BMVC), 2016.
[24] Yurun Tian, Bin Fan, and Fuchao Wu. L2-Net: Deep learning of discriminative patch descriptor in Euclidean space. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[25] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Artificial Intelligence and Statistics, pages 562–570, 2015.
[26] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[27] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML), pages 807–814, 2010.
[28] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.
[29] PyTorch. http://pytorch.org.
[30] Vijay Kumar B. G., Gustavo Carneiro, and Ian Reid. Learning local image descriptors with deep siamese and triplet convolutional networks by minimising global loss functions. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5385–5394, 2016.
[31] Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman, Jiri Matas, Frederik Schaffalitzky, Timor Kadir, and Luc Van Gool. A comparison of affine region detectors. International Journal of Computer Vision (IJCV), 65(1):43–72, 2005.
[32] D.
Randall Wilson and Tony R. Martinez. The general inefficiency of batch training for gradient descent learning. Neural Networks, 16(10):1429–1451, 2003.
[33] James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007.
[34] James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.
[35] Jiri Matas, Ondrej Chum, Martin Urban, and Tomas Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In British Machine Vision Conference (BMVC), pages 384–393, 2002.
[36] Michal Perdoch, Ondrej Chum, and Jiri Matas. Efficient representation of local geometry for large scale object retrieval. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 9–16, 2009.
[37] C. Lawrence Zitnick and Krishnan Ramnath. Edge foci interest points. In International Conference on Computer Vision (ICCV), pages 359–366, 2011.
[38] Dmytro Mishkin, Jiri Matas, and Michal Perdoch. MODS: Fast and robust method for two-view matching. Computer Vision and Image Understanding, 141:81–93, 2015. doi: https://doi.org/10.1016/j.cviu.2015.08.005.
[39] Daniel C. Hauagge and Noah Snavely. Image matching using local symmetry features. In Computer Vision and Pattern Recognition (CVPR), pages 206–213, 2012.
[40] Gehua Yang, Charles V. Stewart, Michal Sofka, and Chia-Ling Tsai. Registration of challenging image pairs: Initialization, estimation, and decision. Pattern Analysis and Machine Intelligence (PAMI), 29(11):1973–1989, 2007.
[41] Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. Location recognition over large time lags. Computer Vision and Image Understanding, 139:21–28, 2015. ISSN 1077-3142.
doi: https://doi.org/10.1016/j.cviu.2015.05.016.
[42] Herve Jegou, Matthijs Douze, and Cordelia Schmid. Improving bag-of-features for large scale image search. International Journal of Computer Vision (IJCV), 87(3):316–336, 2010.
[43] Giorgos Tolias and Herve Jegou. Visual query expansion with or without geometry: refining local descriptors by feature aggregation. Pattern Recognition, 47(10):3466–3476, 2014.
[44] Herve Jegou, Matthijs Douze, and Cordelia Schmid. On the burstiness of visual elements. In Computer Vision and Pattern Recognition (CVPR), pages 1169–1176, 2009.
[45] Herve Jegou, Cordelia Schmid, Hedi Harzallah, and Jakob Verbeek. Accurate image search using the contextual dissimilarity measure. Pattern Analysis and Machine Intelligence (PAMI), 32(1):2–11, 2010.
[46] Andrej Mikulik, Michal Perdoch, Ondřej Chum, and Jiří Matas. Learning vocabularies over a fine quantization. International Journal of Computer Vision (IJCV), 103(1):163–175, 2013.
Hiding Images in Plain Sight: Deep Steganography Shumeet Baluja Google Research Google, Inc. shumeet@google.com Abstract Steganography is the practice of concealing a secret message within another, ordinary, message. Commonly, steganography is used to unobtrusively hide a small message within the noisy regions of a larger image. In this study, we attempt to place a full-size color image within another image of the same size. Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed to specifically work as a pair. The system is trained on images drawn randomly from the ImageNet database, and works well on natural images from a wide variety of sources. Beyond demonstrating the successful application of deep learning to hiding images, we carefully examine how the result is achieved and explore extensions. Unlike many popular steganographic methods that encode the secret message within the least significant bits of the carrier image, our approach compresses and distributes the secret image's representation across all of the available bits. 1 Introduction to Steganography Steganography is the art of covered or hidden writing; the term itself dates back to the 15th century, when messages were physically hidden. In modern steganography, the goal is to covertly communicate a digital message. The steganographic process places a hidden message in a transport medium, called the carrier. The carrier may be publicly visible. For added security, the hidden message can also be encrypted, thereby increasing the perceived randomness and decreasing the likelihood of content discovery even if the existence of the message is detected. Good introductions to steganography and steganalysis (the process of discovering hidden messages) can be found in [1–5].
There are many well-publicized nefarious applications of steganographic information hiding, such as planning and coordinating criminal activities through hidden messages in images posted on public sites – making the communication and the recipient difficult to discover [6]. Beyond the multitude of misuses, however, a common use case for steganographic methods is to embed authorship information, through digital watermarks, without compromising the integrity of the content or image. The challenge of good steganography arises because embedding a message can alter the appearance and underlying statistics of the carrier. The amount of alteration depends on two factors. First, it depends on the amount of information that is to be hidden. A common use has been to hide textual messages in images; the amount of information hidden is measured in bits-per-pixel (bpp), and is often set to 0.4 bpp or lower. The longer the message, the larger the bpp, and therefore the more the carrier is altered [6, 7]. Second, the amount of alteration depends on the carrier image itself. Hiding information in the noisy, high-frequency regions of an image yields perturbations that are less detectable to humans than hiding in the flat regions. Work on estimating how much information a carrier image can hide can be found in [8]. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: The three components of the full system. Left: Secret-Image preparation. Center: Hiding the image in the cover image. Right: Uncovering the hidden image with the reveal network; this is trained simultaneously, but is used by the receiver. The most common steganography approaches manipulate the least significant bits (LSB) of images to place the secret information, whether done uniformly or adaptively, through simple replacement or through more advanced schemes [9, 10]. 
Though often not visually observable, statistical analysis of image and audio files can reveal whether the resultant files deviate from those that are unaltered. Advanced methods attempt to preserve the image statistics by explicitly creating and matching models of the first- and second-order statistics of the set of possible cover images; one of the most popular is named HUGO [11]. HUGO is commonly employed with relatively small messages (< 0.5 bpp). In contrast to the previous studies, we use a neural network to implicitly model the distribution of natural images as well as embed a much larger message, a full-size image, into a carrier image. Despite recent impressive results achieved by incorporating deep neural networks with steganalysis [12–14], there have been relatively few attempts to incorporate neural networks into the hiding process itself [15–19]. Some of these studies have used deep neural networks (DNNs) to select which LSBs to replace in an image with the binary representation of a text message. Others have used DNNs to determine which bits to extract from the container images. In contrast, in our work, the neural network determines where to place the secret information and how to encode it efficiently; the hidden message is dispersed throughout the bits in the image. A decoder network, which has been simultaneously trained with the encoder, is used to reveal the secret image. Note that the networks are trained only once and are independent of the cover and secret images. In this paper, the goal is to visually hide a full N × N × RGB pixel secret image in another N × N × RGB cover image, with minimal distortion to the cover image (each color channel is 8 bits). However, unlike previous studies, in which a hidden text message must be sent with perfect reconstruction, we relax the requirement that the secret image is losslessly received. 
Instead, we are willing to find acceptable trade-offs in the quality of the carrier and secret image (this will be described in the next section). We also provide brief discussions of the discoverability of the existence of the secret message. Previous studies have demonstrated that hidden-message bit rates as low as 0.1 bpp can be discovered; our bit rates are 10×–40× higher. Though visually hard to detect, given the large amount of hidden information, we do not expect the existence of a secret message to be hidden from statistical analysis. Nonetheless, we will show that commonly used methods do not find it, and we give promising directions on how to trade off the difficulty of existence-discovery with reconstruction quality, as required. 2 Architectures and Error Propagation Though steganography is often conflated with cryptography, in our approach, the closest analogue is image compression through auto-encoding networks. The trained system must learn to compress the information from the secret image into the least noticeable portions of the cover image. The architecture of the proposed system is shown in Figure 1. The three components shown in Figure 1 are trained as a single network; however, it is easiest to describe them individually. The leftmost, the Prep-Network, prepares the secret image to be hidden. This component serves two purposes. First, in cases in which the secret image (size M × M) is smaller than the cover image (N × N), the preparation network progressively increases the size of the secret image to the size of the cover, thereby distributing the secret image’s bits across the entire N × N pixels. (For space reasons, we do not provide details of experiments with smaller images, and instead concentrate on full-size images.) The more important purpose, relevant to all sizes of hidden images, is to transform the color-based pixels to more useful features for succinctly encoding the image – such as edges [20, 21], as shown in Figure 2. Figure 2: Transformations made by the preparation network (3 examples shown). Left: Original Color Images. Middle: the three channels of information extracted by the preparation network that are input into the middle network. Right: zoom of the edge-detectors. The three color channels are transformed by the preparation-network; in the most easily recognizable example, the 2nd channel activates for high-frequency regions, e.g. textures and edges (shown enlarged, right). The second/main network, the Hiding Network, takes as input the output of the preparation-network and the cover image, and creates the Container image. The input to this network is an N × N pixel field, with depth-concatenated RGB channels of the cover image and the transformed channels of the secret image. Over 30 architectures for this network were attempted for our study, with varying numbers of hidden layers and convolution sizes; the best consisted of 5 convolution layers that had 50 filters each of {3 × 3, 4 × 4, 5 × 5} patches. Finally, the right-most network, the Reveal Network, is used by the receiver of the image; it is the decoder. It receives only the Container image (not the cover nor secret image). The decoder network removes the cover image to reveal the secret image. As mentioned earlier, our approach borrows heavily from auto-encoding networks [22]; however, instead of simply encoding a single image through a bottleneck, we encode two images such that the intermediate representation (the container image) appears as similar as possible to the cover image. The system is trained by reducing the error shown below (c and s are the cover and secret images respectively, and β weighs the secret image’s reconstruction error): L(c, c′, s, s′) = ||c − c′|| + β ||s − s′|| (1) It is important to note where the errors are computed and which weights each error affects (see Figure 3). 
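As an illustration, the weighted reconstruction loss of Eq. (1) might be sketched as follows. This is a minimal NumPy version using sum-of-squares errors, as in the paper's training setup; the function name and the default value of β are our own choices:

```python
import numpy as np

def stego_loss(cover, cover_rec, secret, secret_rec, beta=0.75):
    # L(c, c', s, s') = ||c - c'|| + beta * ||s - s'||
    # Sum-of-squares reconstruction errors; beta weighs the secret term.
    cover_err = np.sum((cover.astype(float) - cover_rec.astype(float)) ** 2)
    secret_err = np.sum((secret.astype(float) - secret_rec.astype(float)) ** 2)
    return cover_err + beta * secret_err
```

With β = 0 the loss reduces to pure cover-image auto-encoding, which is the "Cover Only" baseline reported in the evaluation.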
In particular, note that the error term ||c − c′|| does not apply to the weights of the reveal-network that receives the container image and extracts the secret image. On the other hand, all of the networks receive the error signal β||s − s′|| for reconstructing the hidden image. This ensures that the representations formed early in the preparation network, as well as those used for reconstruction of the cover image, also encode information about the secret image. Figure 3: The three networks are trained as a single, large, network. Error term 1 affects only the first two networks. Error term 2 affects all 3. S is the secret image, C is the cover image. To ensure that the networks do not simply encode the secret image in the LSBs, a small amount of noise is added to the output of the second network (i.e., into the generated container image) during training. The noise was designed such that the LSB was occasionally flipped; this ensured that the LSB was not the sole container of the secret image’s reconstruction. Later, we will discuss where the secret image’s information is placed. Next, we examine how the network performs in practice. 3 Empirical Evaluation The three networks were trained as described above using Adam [23]. For simplicity, the reconstructions minimized the sum-of-squares error (SSE) of the pixel difference, although other image metrics could have easily been substituted [24, 25]. The networks were trained using randomly selected pairs of images from the ImageNet training set [26]. Quantitative results are shown in Figure 4, as measured by the SSE per pixel, per channel. The testing was conducted on 1,000 image pairs taken from ImageNet images (not used in training). For comparison, also shown is the result of using the same network for only encoding the cover image without the secret image (i.e., β = 0). This gives the best reconstruction error of the cover using this network (this is unattainable while also encoding the secret image). 
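The training-time LSB noise described above (occasionally flipping the least significant bit of the container so that the network cannot stash the secret there) could be simulated as follows. This is a sketch, not the authors' code; the flip probability p is our own assumption:

```python
import numpy as np

def flip_lsb_noise(container, p=0.1, rng=None):
    # With probability p per value, XOR with 1 to flip the LSB of an
    # 8-bit channel value; all other bits are left untouched.
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(container.shape) < p).astype(np.uint8)
    return container ^ mask
```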
Also shown in Figure 4 are histograms of errors for the cover and reconstruction. As can be seen, there are few large pixel errors.

            β      Cover   Secret
Deep-Stego  0.75   2.8     3.6
Deep-Stego  1.00   3.0     3.2
Deep-Stego  1.25   6.4     2.8
Cover Only  0.00   0.1     (n/a)

Figure 4: Left: Number of intensity values off (out of 256) for each pixel, per channel, on cover and secret image. Right: Distribution of pixel errors for cover and secret images, respectively. Figure 5 shows the results of hiding six images, chosen to show varying error rates. These images are not taken from ImageNet, to demonstrate that the networks have not over-trained to characteristics of the ImageNet database, and work on a range of pictures taken with cell-phone cameras and DSLRs. Note that most of the reconstructed cover images look almost identical to the original cover images, despite encoding all the information to reconstruct the secret image. The differences between the original and cover images are shown in the rightmost columns (magnified 5× in intensity). Consider how these error rates compare to creating the container through simple LSB substitution: replacing the 4 least significant bits (LSBs) of the cover image with the 4 most significant bits (MSBs) of the secret image. In this procedure, to recreate the secret image, the MSBs are copied from the container image, and the remaining bits are set to their average value across the training dataset. Doing this, the average pixel error per channel on the cover image’s reconstruction is 5.4 (in a range of 0–255). The average error on the reconstruction of the secret image (when using the average value for the missing LSB bits) is approximately 4.0 [Footnote 1]. Why is the error for the cover image’s reconstruction larger than 4.0? 
The higher error for the cover image’s reconstruction reflects the fact that the distribution of bits in the natural images used is different for the MSBs and LSBs; therefore, even though the secret and cover image are drawn from the same distribution, when the MSBs from the secret image are used in the place of the LSBs, larger errors occur than when simply using the average values of the LSBs. Most importantly, these error rates are significantly higher than those achieved by our system (Figure 4). Footnote 1: Note that an error of 4.0 is expected when the average value is used to fill in the LSBs: removing 4 bits from a pixel’s encoding yields 16× fewer intensities that can be represented. By selecting the average value to replace the missing bits, the maximum error can be 8, and the average error is 4, assuming uniformly distributed bits. To avoid any confusion, we point out that though it is tempting to consider using the average value for the cover image also, recall that the LSBs of the cover image are where the MSBs of the secret image are stored. Therefore, those bits must be used in this encoding scheme, and hence the larger error. Figure 5: Six hiding results. Left pair of each set: original cover and secret image. Center pair: cover image embedded with the secret image, and the secret image after extraction from the container. Right pair: residual errors for cover and hidden – enhanced 5×. The errors per pixel, per channel are the smallest in the top row: (3.1, 4.5), and largest in the last: (4.5, 7.9). We close this section with a demonstration of the limitations of our approach. Recall that the networks were trained on natural images found in the ImageNet challenge. Though this covers a very large range of images, it is illuminating to examine the effects when other types of images are used. Five such images are shown in Figure 6. 
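The 4-bit LSB-substitution baseline described above can be made concrete with a small sketch (NumPy; the function names are ours, and we fill the missing LSBs with 8, the midpoint value for 4 uniformly distributed bits, rather than a dataset average):

```python
import numpy as np

def lsb_substitute(cover, secret, k=4):
    # Replace the k least significant bits of the cover with the
    # k most significant bits of the secret (both uint8 arrays).
    high = cover & ~np.uint8((1 << k) - 1)   # keep cover's top 8-k bits
    msb = secret >> np.uint8(8 - k)          # secret's top k bits
    return high | msb

def lsb_extract(container, k=4, fill=8):
    # Recover the secret's MSBs and fill the missing LSBs with a
    # constant (8 is the expected value of 4 uniform missing bits).
    msb = (container & np.uint8((1 << k) - 1)) << np.uint8(8 - k)
    return msb | np.uint8(fill)
```

With uniformly distributed missing bits, filling with 8 gives a maximum per-pixel error of 8 and an average error of 4, matching the footnote's argument.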
In the first row, a pure white image is used as the cover, to examine the visual effects of hiding a colorful secret image. This simple case was not encountered in training with ImageNet images. The second and third rows change the secret image to bright pink circles and uniform noise. As can be seen, even though the container image (4th column) contains only limited noise, the recovered secret image is extremely noisy. In the final two rows, the cover image is changed to circles and uniform noise, respectively. As expected, the errors for the reconstruction of the cover and secret are now large, though the secret image remains recognizable. 3.1 What if the original cover image became accessible? For many steganographic applications, it can safely be assumed that access to the original cover image (without the secret image embedded) is impossible for an attacker. However, what if the original cover image was discovered? What could then be ascertained about the secret image, even without access to the decoding network? In Figure 5, we showed the difference image between the original cover and the container with 5× enhancement – almost nothing was visible. We re-examine the residual image at 5×, 10×, and 20× enhancement (with clipping at 255 where appropriate); see Figure 7. Figure 6: Results with images outside the set of natural images. In the first row of Figure 7, note that the residual (at 20×) strongly resembles the cover image. In the second row, the residual is a combination of the cover and secret image, and in the third row, we see the most troubling result – features of the secret image are revealed. (Recall that this happens only when the original, unperturbed image is available for comparison.) There are many standard methods for obfuscation, such as adding cryptographic encodings of the secret image before embedding it into the cover image. 
We demonstrate another avenue that can be used in conjunction with any other approach: modifying the network’s error function. In addition to the two error terms described, we add an error term that minimizes the pixel-wise correlation between the residual of the cover image and the secret image, corr(Rc, S), where Rc = |C − C′| is the cover residual and S is the secret image. Many weightings for this term were empirically tested. In the results shown in Figure 7 (Bottom), it is scaled to approximately 0.15 × (number of pixels × channels). Minimizing the residual’s correlation with the secret image removed many of the secret image’s features from the residuals – even when 20× magnification is used. Naturally, the robustness and resilience come at a price; the quality of some of the reconstructions has decreased, as shown (see the saturation of reds in the first image). 4 Where is the Secret Image Encoded? The primary focus of this paper is to concretely demonstrate that it is possible to encode a large amount of information in an image with limited visually noticeable artifacts. However, no explicit attempt has been made to actively hide the existence of that information from machine detection. Though we cannot expect to completely hide the fact that up to 1/2 of the information is part of a hidden message, measures can be taken to make it more difficult to discover. First, however, we must determine where the information of the secret image resides. Is the network simply hiding the information about the secret image in the least significant bits of the cover image? Tools exist to seek out hidden information in the LSBs. One such publicly available steganalysis toolkit, StegExpose, was used to test the detectability of our hidden images [27–29]. 
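The correlation penalty described above, corr(Rc, S), can be sketched as a Pearson correlation between the flattened cover residual and the secret image. This is our own minimal version; the paper's weighting of the term is applied outside this function:

```python
import numpy as np

def residual_corr_penalty(cover, cover_rec, secret):
    # Pearson correlation between the cover residual |C - C'| and the
    # secret image S, flattened over pixels and channels.
    r = np.abs(cover - cover_rec).ravel()
    s = secret.ravel()
    r = r - r.mean()
    s = s - s.mean()
    denom = np.sqrt((r ** 2).sum() * (s ** 2).sum()) + 1e-8
    return float((r * s).sum() / denom)
```

Driving this value toward zero during training discourages the residual from carrying a recognizable copy of the secret image.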
Per the description of the tool: “StegExpose rating algorithm is derived from an intelligent and thoroughly tested combination of pre-existing pixel based steganalysis methods including Sample Pairs by Dumitrescu (2003), RS Analysis by Fridrich (2001), Chi Square Attack by Westfeld (2000) and Primary Sets by Dumitrescu (2002)” [27]. In addition to the default settings (threshold = 0.2), the detection thresholds were varied throughout a large range. The ROC curve for StegExpose is shown in Figure 8. Note the little variation beyond random guessing (the green line). StegExpose should have been able to find the information if it were simply placed in the LSB bits. We turn to a second method to find where the information is stored. Figure 7: Top 3 rows: if the original image is leaked and is subtracted from the container image, the residual can be computed. With enough enhancement (20×), some of the secret image is revealed. Bottom 3 rows: by explicitly creating an error term that minimized the correlation between the residual and the secret image, the residual reveals less about the secret image; however, the pixel errors for the container rise (note the less saturated colors in some of the red regions). Figure 8: ROC curves: True Positive Rate vs. False Positive Rate for StegExpose when trying to detect images embedded via the proposed method. The images used in the study are composed, at each pixel, of 24 bits (8 × (R, G, B)). If we flip the first bit of the R channel of all the pixels in the container image, we can measure its effects on the reconstruction of the container image itself and also, by propagating the modified image through the reveal network, on the reconstruction of the secret image. The effects are striking; see Figure 9. In Figure 9, the left half of the figure should be considered the “control”. 
The upper-left bar chart shows that changing a bit in the Red channel of the container has an effect only on the Red channel of the container, and that the magnitude is proportional to the significance of the bit; this is exactly as expected and holds true for the Green and Blue channels as well (shown on the diagonal). Much more interesting is the right side of Figure 9. Here, we see that a bit flip in any bit position in any color channel of the container image has an effect across all color channels in the secret image’s reconstruction. Figure 9: Sensitivity to Bits in the Container Image. (Left:) Control case, the container image. Changing a bit in the container image obviously has no effect on other bits. (Right:) However, that same bit flip in the container image has effects across all color bands in the recovered secret image. For example, if we want to measure the effect of flipping the 2nd bit in the Blue channel, we would look at the marked entries (marked with *). Moreover, the error does not follow the standard significance ordering of the bit positions. In other words, the information for the secret image is spread across the color channels; hence the reason it was not detected earlier. Despite StegExpose not finding the encodings, we suspected that this amount of information should be readily discoverable. It has recently been demonstrated in multiple studies that deep networks are competitive with, and often better than, the more established methods of steganalysis that use hand-selected image features [12–14]. Interestingly, many published steganalysis techniques are not blind: they assume that they are detecting hidden images only from known hiding methods, and even that statistics about the cover-image distribution are available. This makes the steganalysis task easier while making the hiding task much more difficult. We will not discuss the real-world limitations of that assumption here, but will follow that example. 
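The bit-flip sensitivity probe behind Figure 9 can be sketched as follows: flip one bit position in one color channel across all pixels, then measure the mean absolute change in a reconstruction. Here reveal_fn stands in for the trained reveal network, which we do not reproduce:

```python
import numpy as np

def bit_flip_effect(container, reveal_fn, channel, bit):
    # Flip the given bit of the given color channel at every pixel,
    # then measure the mean absolute change in the revealed output.
    flipped = container.copy()
    flipped[..., channel] ^= np.uint8(1 << bit)
    before = reveal_fn(container).astype(float)
    after = reveal_fn(flipped).astype(float)
    return float(np.abs(after - before).mean())
```

Running this over all 24 (channel, bit) positions, for both the container itself and the revealed secret, yields the sensitivity grids shown in the figure.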
We trained the steganalysis networks as binary classifiers, using the unperturbed ImageNet images as negative samples and our containers as positive examples. We discovered that the steganalysis networks performed well with 24 × (N × N) binary input layers. Each of the 24 layers represented a bit in the pixel representation of the input image. The networks were trained in an analogous manner to the others in this study; again, a multitude of architectures and learning rates were attempted. The fully convolutional networks were able to obtain between 90–95% combined classification rates. For comparison, the same networks were retrained to find Least-Significant-Bit substitution (where the L least significant bits of each color channel in the cover image are replaced with the L most significant bits from the secret image). The networks were trained and tested with L = 1, 2, 3, 4; all achieved over 99% combined rates. An additional set of tests revealed that even when a single bit was randomly placed in one of the 4 bit positions, the detection rate remained over 99%. Given these detection rates on even a single bit, the high detection rates with a full image are not surprising. Nonetheless, this leads to an immediate direction for future work. In this study, the steganalysis networks were trained and used after the complete encoding system was created. However, using pre-trained and/or simultaneously trained steganalysis networks in an adversarial learning framework, such as Generative Adversarial Networks (GANs) [30], during the training of the steganography nets provides a method to incorporate an obfuscation-based error metric. The adversary provides a supplemental error signal, based upon the discoverability of the hidden message, that is minimized in addition to the reconstruction errors. [31] has recently successfully explored a very similar procedure with small messages hidden in images. 
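The 24-layer binary input representation used for the steganalysis networks can be built by slicing each color channel into its 8 bit planes. This is a sketch; the plane ordering (channel-major, least significant bit first) is our own choice:

```python
import numpy as np

def to_bit_planes(img):
    # HxWx3 uint8 image -> HxWx24 float array of {0, 1} bit planes:
    # 8 planes per color channel, least significant bit first.
    planes = [(img[..., c] >> b) & 1 for c in range(3) for b in range(8)]
    return np.stack(planes, axis=-1).astype(np.float32)
```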
5 Discussion & Future Work In this section, we briefly discuss a few observations found in this study and present ideas for future work. First, let us consider the possibility of training a network to recover the hidden images after the system has been deployed and without access to the original network. One can imagine that if an attacker was able to obtain numerous instances of container images that were created by the targeted system, and in each instance at least one of the two component images (cover or secret image) was also given, a network could be trained to recover both constituent components. What can an attacker do without having access to this ground-truth “training” data? Using a smoothness constraint or another common heuristic from more classic image decomposition and blind source separation [32–34] may be a first alternative. With many of these approaches, obtaining even a modest amount of training data would be useful in tuning and setting parameters and priors. If such an attack is expected, it is an open research question how much adapting the techniques described in Section 3.1 may mitigate the effectiveness of these attempts. As described in the previous section, in its current form, the correct detection of the existence (not necessarily the exact content) of a hidden image is indeed possible. The discovery rate is high because of the amount of information hidden compared to the cover image’s data (a 1:1 ratio). This is far more than state-of-the-art systems that transmit reliably undetected messages. We presented one of many methods to make it more difficult to recover the contents of the hidden image, by explicitly reducing the similarity of the cover image’s residual to the hidden image. Though beyond the scope of this paper, we can make the system substantially more resilient by supplementing the presented mechanisms as follows. Before hiding the secret image, the pixels are permuted (in-place) in one of M previously agreed-upon ways. 
The permuted-secret-image is then hidden by the system, as is the key (an index into M). This makes recovery difficult even by looking at the residuals (assuming access to the original image is available), since the residuals have no spatial structure. The use of this approach must be balanced with (1) the need to send a permutation key (though this can be sent reliably in only a few bytes), and (2) the fact that the permuted-secret-image is substantially more difficult to encode, thereby potentially increasing the reconstruction errors throughout the system. Finally, it should be noted that in order to employ this approach, the trained networks in this study cannot be used without retraining. The entire system must be retrained, as the hiding networks can no longer exploit local structure in the secret image for encoding information. This study opens a new avenue for exploration with steganography and, more generally, in placing supplementary information in images. Several previous methods have attempted to use neural networks to either augment or replace a small portion of an image-hiding system. We have demonstrated a method to create a fully trainable system that provides visually excellent results in unobtrusively placing a full-size color image into another image. Although the system has been described in the context of images, the same system can be trained for embedding text, different-sized images, or audio. Additionally, by using spectrograms of audio files as images, the techniques described here can readily be used on audio samples. There are many immediate and long-term avenues for expanding this work. Three of the most immediate are listed here. (1) To make a complete steganographic system, hiding the existence of the message from statistical analyzers should be addressed. This will likely necessitate a new objective in training (e.g. an adversary), as well as, perhaps, encoding smaller images within larger cover images. 
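The keyed pixel permutation described above could look like the following (our own illustration: the key is an index that seeds a pseudorandom permutation, which both parties can regenerate from the shared key alone):

```python
import numpy as np

def permute_pixels(img, key):
    # Permute pixel positions using a permutation derived from the key.
    h, w = img.shape[:2]
    perm = np.random.default_rng(key).permutation(h * w)
    return img.reshape(h * w, -1)[perm].reshape(img.shape)

def unpermute_pixels(img, key):
    # Invert the permutation given the same key.
    h, w = img.shape[:2]
    perm = np.random.default_rng(key).permutation(h * w)
    inv = np.argsort(perm)
    return img.reshape(h * w, -1)[inv].reshape(img.shape)
```

As the text notes, scrambling destroys the local structure the hiding network normally exploits, so the whole system would need retraining to encode permuted inputs well.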
(2) The proposed embeddings described in this paper are not intended for use with lossy image files. If lossy encodings, such as JPEG, are required, then working directly with the DCT coefficients instead of the spatial domain is possible [35]. (3) For simplicity, we used a straightforward SSE error metric for training the networks; however, error metrics more closely associated with human vision, such as SSIM [24], can be easily substituted. References [1] Gary C Kessler and Chet Hosmer. An overview of steganography. Advances in Computers, 83(1):51–107, 2011. [2] Gary C Kessler. An overview of steganography for the computer forensics examiner. Forensic Science Communications, 6(3), 2014. [3] Gary C Kessler. An overview of steganography for the computer forensics examiner (web), 2015. [4] Jussi Parikka. Hidden in plain sight: The steganographic image. https://unthinking.photography/themes/fauxtography/hidden-in-plain-sight-the-steganographic-image, 2017. [5] Jessica Fridrich, Jan Kodovský, Vojtěch Holub, and Miroslav Goljan. Breaking HUGO – the process discovery. In International Workshop on Information Hiding, pages 85–101. Springer, 2011. [6] Jessica Fridrich and Miroslav Goljan. Practical steganalysis of digital images: State of the art. In Electronic Imaging 2002, pages 1–13. International Society for Optics and Photonics, 2002. [7] Hamza Ozer, Ismail Avcibas, Bulent Sankur, and Nasir D Memon. Steganalysis of audio based on audio quality metrics. In Electronic Imaging 2003, pages 55–66. International Society for Optics and Photonics, 2003. [8] Farzin Yaghmaee and Mansour Jamzad. Estimating watermarking capacity in gray scale images based on image complexity. EURASIP Journal on Advances in Signal Processing, 2010(1):851920, 2010. [9] Jessica Fridrich, Miroslav Goljan, and Rui Du. Detecting LSB steganography in color and gray-scale images. IEEE Multimedia, 8(4):22–28, 2001. [10] Abdelfatah A Tamimi, Ayman M Abdalla, and Omaima Al-Allaf. 
Hiding an image inside another image using variable-rate steganography. International Journal of Advanced Computer Science and Applications (IJACSA), 4(10), 2013. [11] Tomáš Pevný, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pages 161–177. Springer, 2010. [12] Yinlong Qian, Jing Dong, Wei Wang, and Tieniu Tan. Deep learning for steganalysis via convolutional neural networks. In SPIE/IS&T Electronic Imaging, pages 94090J–94090J. International Society for Optics and Photonics, 2015. [13] Lionel Pibre, Jérôme Pasquet, Dino Ienco, and Marc Chaumont. Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source mismatch. Electronic Imaging, 2016(8):1–11, 2016. [14] Lionel Pibre, Jérôme Pasquet, Dino Ienco, and Marc Chaumont. Deep learning for steganalysis is better than a rich model with an ensemble classifier, and is natively robust to the cover source-mismatch. arXiv preprint arXiv:1511.04855, 2015. [15] Sabah Husien and Haitham Badi. Artificial neural network for steganography. Neural Computing and Applications, 26(1):111–116, 2015. [16] Imran Khan, Bhupendra Verma, Vijay K Chaudhari, and Ilyas Khan. Neural network based steganography algorithm for still images. In Emerging Trends in Robotics and Communication Technologies (INTERACT), 2010 International Conference on, pages 46–51. IEEE, 2010. [17] V Kavitha and KS Easwarakumar. Neural based steganography. PRICAI 2004: Trends in Artificial Intelligence, pages 429–435, 2004. [18] Alexandre Santos Brandao and David Calhau Jorge. Artificial neural networks applied to image steganography. IEEE Latin America Transactions, 14(3):1361–1366, 2016. [19] Robert Jarušek, Eva Volna, and Martin Kotyrba. Neural network approach to image steganography techniques. In Mendel 2015, pages 317–327. Springer, 2015. 
[20] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010. [21] Anthony J Bell and Terrence J Sejnowski. The “independent components” of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997. [22] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. [23] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [24] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004. [25] Andrew B Watson. DCT quantization matrices visually optimized for individual images. In Proc. SPIE, 1993. [26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. [27] Benedikt Boehm. StegExpose - A tool for detecting LSB steganography. CoRR, abs/1410.6656, 2014. [28] StegExpose - GitHub. https://github.com/b3dk7/StegExpose. [29] darknet.org.uk. StegExpose – steganalysis tool for detecting steganography in images. https://www.darknet.org.uk/2014/09/stegexpose-steganalysis-tool-detecting-steganography-images/, 2014. [30] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. [31] Jamie Hayes and George Danezis. Ste-GAN-ography: Generating steganographic images via adversarial training. 
Lookahead Bayesian Optimization with Inequality Constraints

Remi R. Lam, Massachusetts Institute of Technology, Cambridge, MA, rlam@mit.edu
Karen E. Willcox, Massachusetts Institute of Technology, Cambridge, MA, kwillcox@mit.edu

Abstract

We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expensive constraints but are limited to greedy strategies maximizing immediate reward. To address this limitation, we propose a lookahead approach that selects the next evaluation in order to maximize the long-term feasible reduction of the objective function. We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIC) and predictive entropy search with constraint (PESC).

1 Introduction

Constrained optimization problems are often challenging to solve, due to complex interactions between the goals of minimizing (or maximizing) the objective function while satisfying the constraints. In particular, non-linear constraints can result in complicated feasible spaces, sometimes partitioned in disconnected regions. Such feasible spaces can be difficult to explore for a local optimizer, potentially preventing the algorithm from converging to a global solution. Global optimizers, on the other hand, are designed to tackle disconnected feasible spaces and optimization of multi-modal objective functions. Such algorithms typically require a large number of evaluations to converge.
This can be prohibitive when the evaluation of the objective function or the constraints is expensive, or when there is a finite budget of evaluations allocated for the optimization, as is often the case with expensive models. This evaluation budget typically results from resource scarcity such as the restricted availability of a high-performance computer, finite financial resources to build prototypes, or even time when working on a paper submission deadline. Bayesian optimization (BO) [19] is a global optimization technique designed to address problems with expensive function evaluations. Its constrained extension, constrained Bayesian optimization (CBO), iteratively builds a statistical model for the objective function and the constraints. Based on this model, which leverages all the past evaluations, a utility function quantifies the merit of evaluating any design under consideration. At each iteration, a CBO algorithm evaluates the expensive objective function and constraints at the design which maximizes this utility function. In most existing methods, the utility function only quantifies the reward obtained over the immediate next step, and ignores the gains that could be collected at future steps. This results in greedy CBO algorithms. However, quantifying long-term rewards may be beneficial. For instance, in the presence of constraints, it could be valuable to learn the boundaries of the feasible space. In order to do so, it is likely that an infeasible design would need to be evaluated, bringing no immediate improvement, but leading to long-term benefits. Such a strategy requires planning over several steps. Planning is also required to balance the so-called exploration-exploitation trade-off.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Intuitively, in order to improve the statistical model, the beginning of the optimization should mainly be dedicated to exploring the design space, while the end of the optimization should focus on exploiting that statistical model to find the best design. To balance this trade-off in a principled way, the optimizer needs to plan ahead and be aware of the remaining evaluation budget. To address the shortcomings of greedy algorithms, we propose a new lookahead formulation for CBO with a finite budget. This approach is aware of the remaining budget and can balance the exploration-exploitation trade-off in a principled way. In this formulation, the best optimization policy sequentially evaluates the design yielding the maximum cumulative reward over multiple steps. This optimal policy is the solution of an intractable dynamic programming (DP) problem. We circumvent this issue by employing an approximate dynamic programming (ADP) algorithm: rollout, building on the unconstrained BO algorithm in [17]. Numerical examples illustrate the benefits of the proposed lookahead algorithm over several greedy ones, especially when the objective function is multi-modal and the feasible space has a complex topology. The next section gives an overview of CBO and discusses some of the related work (Sec. 2). Then, we formulate the lookahead approach to CBO as a dynamic programming problem and demonstrate how to approximately solve it by adapting the rollout algorithm (Sec. 3). Numerical results are provided in Sec. 4. Finally, we present our conclusions in Sec. 5.

2 Constrained Bayesian Optimization

We consider the following optimization problem:

(OPc)    x* = argmin_{x ∈ X} f(x)    s.t.  g_i(x) ≤ 0,  ∀ i ∈ {1, . . . , I},    (1)

where x is a d-dimensional vector of design variables. The design space X is a bounded subset of R^d, f : X → R is an objective function, I is the number of inequality constraints and g_i : X → R is the ith constraint function.
The functions f and g_i are considered expensive to evaluate. We are interested in finding the minimizer x* of the objective function f subject to the non-linear constraints g_i ≤ 0 with a finite budget of N evaluations. We refer to this problem as the original constrained problem (OPc). Constrained Bayesian optimization (CBO) addresses the original constrained problem (OPc) by modeling the objective function f and the constraints g_i as realizations of stochastic processes. Typically, each expensive-to-evaluate function is modeled with an independent Gaussian process (GP). At every iteration n, new evaluations of f and g_i become available and augment a training set S_n = {(x_j, f(x_j), g_1(x_j), . . . , g_I(x_j))}_{j=1}^n. Using Bayes' rule, the statistical model is updated and the posterior quantities of the GP, conditioned on S_n, reflect the current representation of the unknown expensive functions. In particular, for any design x, the posterior mean μ_n(x; φ) and the posterior variance σ_n^2(x; φ) of the GP associated with the expensive function φ ∈ {f, g_1, . . . , g_I} can be computed cheaply using a closed-form expression (see [24] for an overview of GPs). CBO leverages this statistical model to quantify, in a cheap-to-evaluate utility function U_n, the usefulness of any design under consideration. The next design to evaluate is then selected by solving the following auxiliary problem (AP):

(AP)    x_{n+1} = argmax_{x ∈ X} U_n(x; S_n).    (2)

The vanilla CBO algorithm is summarized in Algorithm 1. Many utility functions have been proposed in the literature. To decide which design to evaluate next, [27] proposed the use of constrained expected improvement EIc, which, in the case of independent GPs, can be computed in closed form as the product of the expected improvement (obtained by considering the GP associated with the objective function) and the probability of feasibility associated with each constraint.
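Under the independence assumption above, this closed-form EIc is simple to evaluate once posterior means and standard deviations are in hand. The following is a minimal sketch, not the paper's code: the function names are illustrative, and the posterior quantities are assumed to come from already-fitted GPs.

```python
# Closed-form constrained expected improvement (EIc) for minimization,
# assuming independent GPs: EIc = EI(objective) * prod_i P(g_i <= 0).
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu_f, sigma_f, f_best):
    """EI of the objective GP at one design (minimization)."""
    if sigma_f <= 0.0:
        return max(0.0, f_best - mu_f)
    z = (f_best - mu_f) / sigma_f
    return (f_best - mu_f) * normal_cdf(z) + sigma_f * normal_pdf(z)

def constrained_ei(mu_f, sigma_f, f_best, mu_g, sigma_g):
    """EIc = EI times the probability that every constraint g_i <= 0,
    given the posterior mean/std of each constraint GP."""
    p_feasible = 1.0
    for m, s in zip(mu_g, sigma_g):
        p_feasible *= normal_cdf((0.0 - m) / s)
    return expected_improvement(mu_f, sigma_f, f_best) * p_feasible
```

Because `p_feasible` multiplies the EI, designs that the constraint GPs deem likely infeasible receive a utility close to zero even when their expected objective reduction is large.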
This approach was later applied to machine learning applications [6] and extended to the multi-objective case [5]. Note that this method transforms an original constrained optimization problem into an unconstrained auxiliary problem by modifying the utility function. Other attempts to cast the constrained problem into an unconstrained one include [3]. That work uses a penalty method to transform the original constrained problem into an unconstrained problem, to which they apply a radial basis function (RBF) method for global optimization (constrained RBF methods exist as well [25]).

Algorithm 1 Constrained Bayesian Optimization
  Input: Initial training set S_1, budget N
  for n = 1 to N do
    Construct GPs using S_n
    Update hyper-parameters
    Solve AP for x_{n+1} = argmax_{x ∈ X} U_n(x; S_n)
    Evaluate f(x_{n+1}), g_1(x_{n+1}), . . . , g_I(x_{n+1})
    S_{n+1} = S_n ∪ {(x_{n+1}, f(x_{n+1}), g_1(x_{n+1}), . . . , g_I(x_{n+1}))}
  end for

Other techniques from local constrained optimization have been leveraged in [10], where the utility function is constructed based on an augmented Lagrangian formulation. This technique was recently extended in [22], where a slack-variables formulation allows the handling of equality and mixed constraints. Another approach is proposed by [1]: at each iteration, a finite set of candidate designs is first generated from a Latin hypercube; second, candidate designs with expected constraint violation higher than a user-defined threshold are rejected; finally, among the remaining candidates, the ones achieving the best expected improvement are evaluated (several designs can be selected simultaneously at each iteration in this formulation). Another method [26] solves a constrained auxiliary optimization problem: the next design is selected to maximize the expected improvement subject to approximated constraints (the posterior mean of the GP associated with a constraint is used in lieu of the constraint itself). Note that these two previous methods solve a constrained auxiliary problem.
Another method to address constrained BO is proposed by [11], who develop an integrated conditional expected improvement criterion. Given a candidate design, this criterion quantifies the expected improvement point-wise (conditioned on the fact that the candidate will be evaluated). This point-wise improvement is then integrated over the entire design space. In the unconstrained case, in the integration phase, equal weight is given to designs throughout the design space. The constrained case is addressed by defining a weight function that depends on the probability that a design is feasible: improvement at designs that are likely to be infeasible has low weight. The probability of a design being feasible is calculated using a classification GP. The computation of this criterion is more involved, as there is no closed-form formulation available for the integration, and techniques such as Monte Carlo or Markov chain Monte Carlo must be employed. In a similar spirit, [21] introduces a utility function which quantifies the benefit of evaluating a design by integrating its effect over the design space. The proposed utility function computes the expected reduction of the feasible domain below the best feasible value evaluated so far. This results in the expected volume of excursion criterion, which also requires approximation techniques to be computed. The approaches above revolve around computing a quantity based on improvement and require having at least one feasible design. Other strategies use information gain as the key element to drive the optimization strategy. [7] proposed a two-step approach for constrained BO when the objective and the constraints can be evaluated independently. The first step chooses the next location by maximizing the constrained EI [27]; the second step chooses whether to evaluate the objective or a constraint using an information gain metric (i.e., entropy search [12]).
[13, 14] developed a strategy that simultaneously selects the design to be evaluated and the model to query (the objective or a constraint). The criterion used, predictive entropy search with constraints (PESC), is an extension of predictive entropy search (PES) [15]. One of the advantages of information gain-based methods stems from the fact that one does not need to start with a feasible design. All aforementioned methods use myopic utilities to select the next design to evaluate, leading to suboptimal optimization strategies. In the unconstrained BO setting, multiple-step lookahead algorithms have been explored [20, 8, 18, 9, 17] and were shown to improve the performance of BO. To our knowledge, such lookahead strategies for constrained optimization have not yet been addressed in the literature and also have the potential to improve the performance of CBO algorithms.

3 Lookahead Formulation of CBO

In this section, we formulate CBO with a finite budget as a dynamic programming (DP) problem (Sec. 3.1). This leads to an optimal but computationally challenging optimization policy. To mitigate the cost of computing such a policy, we employ an approximate dynamic programming algorithm, rollout, and demonstrate how it can be adapted to CBO with a finite budget (Sec. 3.2).

3.1 Dynamic Programming Formulation

We seek an optimization policy which leads, after consumption of the evaluation budget, to the maximum feasible decrease of the objective function. Because the values of the expensive objective function and constraints are not known before their evaluations, it is impossible to quantify such a long-term reward within a cheap-to-evaluate utility function U_n. However, CBO endows the objective function and the constraints with a statistical model that can be interrogated to inform the optimizer of the likely values of f and g_i at a given design.
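The closed-form GP posterior that this interrogation relies on can be sketched in a few lines. This is a generic zero-mean GP with a squared-exponential kernel, not the paper's implementation; the length-scale, variance, jitter, and data values are illustrative assumptions.

```python
# Closed-form GP posterior mean/variance at a single design x,
# for a zero-mean GP with squared-exponential kernel (hyperparameters
# are illustrative; the paper estimates them by marginal likelihood).
import numpy as np

def sq_exp_kernel(A, B, lengthscale=0.2, variance=1.0):
    # A: (n, d), B: (m, d) -> (n, m) kernel matrix.
    d = A[:, None, :] - B[None, :, :]
    return variance * np.exp(-0.5 * np.sum(d * d, axis=-1) / lengthscale ** 2)

def gp_posterior(x, X_train, y_train, noise=1e-8):
    """Return (mu_n(x), sigma_n^2(x)) given training designs/values."""
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = sq_exp_kernel(np.atleast_2d(x), X_train)[0]
    alpha = np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, k_star)
    mu = k_star @ alpha
    var = sq_exp_kernel(np.atleast_2d(x), np.atleast_2d(x))[0, 0] - k_star @ v
    return mu, max(var, 0.0)   # clamp tiny negative values from round-off
```

At a training design the posterior collapses onto the observed value with near-zero variance, while far from the data the variance reverts to the prior; this is exactly the uncertainty that the lookahead simulations exploit.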
This statistical model can be leveraged to simulate optimization scenarios over multiple steps and quantify their probabilities. Using this simulation mechanism, it is possible to quantify, in an average sense, the long-term reward achieved under a given optimization policy. The optimal policy is the solution of the DP problem that we formalize now. Let n be the current iteration number of the CBO algorithm, and N the total budget of evaluations, or horizon. We refer to the future iterations of the optimization generated by simulation as stages. For any stage k ∈ {n, . . . , N}, all the information collected is contained in the training set S_k. The function f and the I functions g_i are modeled with independent GPs. Their posterior quantities, conditioned on S_k, fully characterize our knowledge of f and g_i. Thus, we define the state of our knowledge at stage k to be the training set S_k ∈ Z_k. Based on the training set S_k, the simulation makes a decision regarding the next design x_{k+1} ∈ X to evaluate using an optimization policy. An optimization policy π = {π_1, . . . , π_N} is a sequence of rules, π_k : Z_k → X for k ∈ {1, . . . , N}, mapping a training set S_k to a design x_{k+1} = π_k(S_k). In the simulations, the values f(x_{k+1}) and g_i(x_{k+1}) are unknown and are treated as uncertainties. We model those I + 1 uncertain quantities with I + 1 independent Gaussian random variables W^f_{k+1} and W^{g_i}_{k+1} based on the GPs:

W^f_{k+1} ∼ N(μ_k(x_{k+1}; f), σ_k^2(x_{k+1}; f)),    (3)
W^{g_i}_{k+1} ∼ N(μ_k(x_{k+1}; g_i), σ_k^2(x_{k+1}; g_i)),    (4)

where we recall that μ_k(x_{k+1}; φ) and σ_k^2(x_{k+1}; φ) are the posterior mean and variance of the GP associated with any expensive function φ ∈ {f, g_1, . . . , g_I}, conditioned on S_k, at x_{k+1}. Then, the simulation generates an outcome. A simulated outcome w_{k+1} = (f_{k+1}, g^1_{k+1}, . . . , g^I_{k+1}) ∈ W ⊂ R^{I+1} is a sample of the (I + 1)-dimensional random variable W_{k+1} = [W^f_{k+1}, W^{g_1}_{k+1}, . . . , W^{g_I}_{k+1}]. Note that simulating an outcome does not require evaluating the expensive f and g_i.
In particular, f_{k+1} and g^i_{k+1} are not f(x_{k+1}) and g_i(x_{k+1}). Once an outcome w_{k+1} = (f_{k+1}, g^1_{k+1}, . . . , g^I_{k+1}) is simulated, the system transitions to a new state S_{k+1}, governed by the system dynamic F_k : Z_k × X × W → Z_{k+1} given by:

S_{k+1} = F_k(S_k, x_{k+1}, w_{k+1}) = S_k ∪ {(x_{k+1}, f_{k+1}, g^1_{k+1}, . . . , g^I_{k+1})}.    (5)

Now that the simulation mechanism is defined, one needs a metric to assess the quality of a given optimization policy. At stage k, a stage-reward function r_k : Z_k × X × W → R quantifies the merit of querying a design if the outcome w_{k+1} = (f_{k+1}, g^1_{k+1}, . . . , g^I_{k+1}) occurs. This stage-reward is defined as the reduction of the objective function satisfying the constraints:

r_k(S_k, x_{k+1}, w_{k+1}) = max{0, f^{S_k}_best − f_{k+1}},    (6)

if g^i_{k+1} ≤ 0 for all i ∈ {1, . . . , I}, and r_k(·, ·, ·) = 0 otherwise, where f^{S_k}_best is the best feasible value at stage k. Thus, the expected (long-term) reward starting from training set S_n under optimization policy π is:

J_π(S_n) = E[ Σ_{k=n}^{N} r_k(S_k, π_k(S_k), w_{k+1}) ],    (7)

where the expectation is taken with respect to the (correlated) simulated values (w_{n+1}, . . . , w_{N+1}), and the state evolution is governed by Eq. 5. An optimal policy, π*, is a policy maximizing this long-term expected reward in the space of admissible policies Π:

J_{π*}(S_n) = max_{π ∈ Π} J_π(S_n).    (8)

The optimal reward J_{π*}(S_n) is given by Bellman's principle of optimality and can be computed using the DP recursive algorithm, working backward from k = N − 1 to k = n:

J_N(S_N) = max_{x_{N+1} ∈ X} E[r_N(S_N, x_{N+1}, w_{N+1})] = max_{x_{N+1} ∈ X} EIc(x_{N+1}; S_N),
J_k(S_k) = max_{x_{k+1} ∈ X} E[r_k(S_k, x_{k+1}, w_{k+1}) + J_{k+1}(F_k(S_k, x_{k+1}, w_{k+1}))],    (9)

where each expectation is taken with respect to one simulated outcome vector w_{k+1}, and we have used the fact that E[r_k(S_k, x_{k+1}, w_{k+1})] = EIc(x_{k+1}; S_k) is the constrained expected improvement, known in closed form [27]. The optimal reward is given by J_{π*}(S_n) = J_n(S_n).
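To make Eqs. 3–6 concrete, here is a minimal sketch of one simulated stage: an outcome is drawn from the independent Gaussian posteriors, and the stage reward is the feasible decrease of the objective. The function names and the use of Python's `random` module are illustrative assumptions, not the paper's code.

```python
# One simulated stage transition: sample w_{k+1} from the independent
# Gaussians of Eqs. 3-4, then score it with the stage reward of Eq. 6.
import math
import random

random.seed(0)

def simulate_outcome(post_means, post_vars):
    """Sample w_{k+1} = (f_{k+1}, g1_{k+1}, ..., gI_{k+1}); the lists hold
    mu_k(x; phi) and sigma_k^2(x; phi) for phi in {f, g_1, ..., g_I}.
    No expensive evaluation of the true f or g_i is needed."""
    return [random.gauss(m, math.sqrt(v)) for m, v in zip(post_means, post_vars)]

def stage_reward(f_best, outcome):
    """Eq. 6: feasible reduction of the objective; zero if any simulated
    constraint value g^i_{k+1} is positive (infeasible)."""
    f_next, constraints = outcome[0], outcome[1:]
    if any(g > 0.0 for g in constraints):
        return 0.0
    return max(0.0, f_best - f_next)
```

A simulated infeasible outcome thus contributes no immediate reward, but the new state S_{k+1} it creates can still improve later decisions, which is precisely why lookahead can pay off.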
Thus, at iteration n of the CBO algorithm, the optimal policy selects the next design x_{n+1} that maximizes J_n(S_n), given by Eqs. 9. In other words, the best decision to make at iteration n maximizes, on average, the sum of the immediate reward r_n and the future long-term reward J_{n+1}(S_{n+1}) obtained by making optimal subsequent decisions. This is illustrated in Fig. 1, left panel.

Figure 1: Left: Tree illustrating the intractable DP formulation. Each black circle represents a training set and a design, each white circle is a training set. Dashed lines represent simulated outcomes resulting in expectations. The double arrows represent designs selected with the (unknown) optimal policy, leading to nested maximizations. Double arrows depict the bidirectional way information propagates when the optimal policy is built: each optimal decision depends on the previous steps and relies on the optimality of the future decisions. Right: Single arrows represent designs selected using a heuristic. This illustrates the unidirectional propagation of information when a known heuristic drives the simulations: each decision depends on the previous steps but is independent of the future ones. The absence of nested maximization leads to a tractable formulation.

3.2 Rollout for Constrained Bayesian Optimization

The best optimization policy evaluates, at each iteration n of the CBO algorithm, the design x_{n+1} maximizing the optimal reward J_{π*}(S_n) (Eq. 8). This requires solving a problem with several nested maximizations and expectations (Eqs. 9), which is computationally intractable. To mitigate the cost of solving the DP algorithm, we employ an approximate dynamic programming (ADP) technique: rollout (see [2, 23] for an overview). Rollout selects the next design by maximizing a (suboptimal) long-term reward J_π.
The reward is computed by simulating optimization scenarios over several future steps. However, the simulated steps are not controlled by the optimal policy π*. Instead, rollout uses a suboptimal policy π, i.e., a heuristic, to drive the simulation. This circumvents the need for nested maximizations (as illustrated in Fig. 1, right panel) and simplifies the computation of J_π compared to J_{π*}. We now formalize the rollout algorithm, propose a heuristic π adapted to the context of CBO with a finite budget, and detail further numerical approximations. Let us consider iteration n of the CBO algorithm. The long-term reward J_π(S_n) induced by a (known) heuristic π = {π_1, . . . , π_N}, starting from state S_n, is defined by Eq. 7. This can be rewritten as J_π(S_n) = H_n(S_n), where H_k is recursively defined, from k = N back to k = n, by:

H_{N+1}(S_{N+1}) = 0,
H_k(S_k) = E[r_k(S_k, π_k(S_k), w_{k+1}) + γ H_{k+1}(F_k(S_k, π_k(S_k), w_{k+1}))],    (10)

where each expectation is taken with respect to one simulated outcome vector w_{k+1}, and γ ∈ [0, 1] is a discount factor encouraging the early collection of reward. A discount factor γ = 0 leads to a greedy policy, focusing on immediate reward. In that case, the reward J_π simplifies to the constrained expected improvement EIc. A discount factor γ = 1, on the other hand, is indifferent to when the reward is collected. The fundamental simplification introduced by the rollout algorithm lies in the absence of nested maximizations in Eqs. 10. This is illustrated in Fig. 1, right panel. By applying a known heuristic, information only propagates forward: every simulated step depends on the previous steps, but is independent of the future simulated steps. This is in contrast to the DP algorithm, illustrated in Fig. 1. Because the optimal policy is not known, it needs to be built by solving a sequence of nested problems. Thus, information propagates both forward and backward.
While H_n is simpler to compute than J_n, it still requires computing nested expectations for which there is no closed-form expression. To further alleviate the cost of computing the long-term reward, we introduce two numerical simplifications. First, we use a rolling horizon h ∈ N to decrease the number of future steps simulated. A rolling horizon h replaces the horizon N by Ñ = min{N, n + h}. Second, the expectations with respect to the (I + 1)-dimensional Gaussian random variables are numerically approximated using Gauss-Hermite quadrature. We obtain the following formulation:

H̃_{Ñ+1}(S_{Ñ+1}) = 0,
H̃_k(S_k) = EIc(π_k(S_k); S_k) + γ Σ_{q=1}^{N_q} α^{(q)} H̃_{k+1}(F_k(S_k, π_k(S_k), w^{(q)}_{k+1})),    (11)

where N_q is the number of quadrature weights α^{(q)} ∈ R and points w^{(q)}_{k+1} ∈ R^{I+1}. For every iteration n ∈ {1, . . . , N} and every x_{n+1} ∈ X, we define the utility function of our rollout algorithm for CBO with a finite budget to be:

U_n(x_{n+1}; S_n) = EIc(x_{n+1}; S_n) + γ Σ_{q=1}^{N_q} α^{(q)} H̃_{n+1}(F_n(S_n, x_{n+1}, w^{(q)}_{n+1})).    (12)

The heuristic π is problem-dependent. A desirable heuristic combines two properties: (1) it is cheap to compute; (2) it is a good approximation of the optimal policy π*. In the case of CBO with a finite budget, the heuristic π ought to mimic the exploration-exploitation trade-off balanced by the optimal policy π*. To do so, we propose using a combination of greedy CBO algorithms: maximization of the constrained expected improvement (which has an exploratory behavior) and a constrained optimization based on the posterior means of the GPs (which has an exploitative behavior). For a given iteration n, we define the heuristic π = {π_{n+1}, . . . , π_Ñ} such that, for stages k ∈ {n + 1, . . . , Ñ − 1}, the policy component π_k : Z_k → X maps a state S_k to the design x_{k+1} satisfying:

x_{k+1} = argmax_{x ∈ X} EIc(x; S_k).    (13)

The last policy component, π_Ñ : Z_Ñ → X, maps a state S_Ñ to x_{Ñ+1} such that:

x_{Ñ+1} = argmin_{x ∈ X} μ_Ñ(x; f)  s.t.
PF(x; S_Ñ) ≥ 0.99,    (14)

where PF is the probability of feasibility, known in closed form. Every evaluation of the utility function U_n requires O(N_q^h) applications of a heuristic component π_k. The heuristic that we propose optimizes a quantity that requires O(|S_k|^2) work. To summarize, the proposed approach sequentially selects the next design to evaluate by maximizing the long-term reward induced by a heuristic. This rollout algorithm is a one-step lookahead formulation (one maximization) and is easier to solve than the N-step lookahead approach (N nested maximizations) presented in Sec. 3.1. Rollout is a closed-loop approach where the information collected at a given stage of the simulation is used to simulate the next stages. The heuristic used in the rollout is problem-dependent, and we proposed using a combination of greedy CBO algorithms to construct such a heuristic. The computation of the utility function is detailed in Algorithm 2.

Algorithm 2 Rollout Utility Function
  Function: utility(x, h, S)
  Construct GPs using S
  if h = 0 then
    U ← EIc(x; S)
  else
    U ← EIc(x; S)
    Generate N_q Gauss-Hermite quadrature weights α^(q) and points w^(q) associated with x
    for q = 1 to N_q do
      S′ ← S ∪ {(x, w^(q))}
      if h > 1 then
        x′ ← π(S′) using Eq. 13
      else
        x′ ← π(S′) using Eq. 14
      end if
      U ← U + γ α^(q) utility(x′, h − 1, S′)
    end for
  end if
  Output: U

4 Results

In this section, we numerically investigate the proposed algorithm and demonstrate its performance on classic test functions and a reacting flow problem. To compare the performance of the different CBO algorithms tested, we use the utility gap metric [14]. At iteration n, the utility gap e_n measures the error between the optimum feasible value f* and the value of the objective function at a recommended design x*_n:

e_n = |f(x*_n) − f*| if x*_n is feasible, |Ψ − f*| otherwise,    (15)

where Ψ is a user-defined penalty punishing infeasible recommendations. The recommended design, x*_n, differs from the design selected for evaluation, x_n.
It is the design that the algorithm would recommend to evaluate if the optimization were to be stopped at iteration n, without early notice. We use the same system of recommendation as [14]:

x*_n = argmin_{x ∈ X} μ_n(x; f)  s.t.  PF(x; S_n) ≥ 0.975.    (16)

Note that the utility gap e_n is not guaranteed to decrease, because recommendations x*_n are not necessarily better with iterations. In particular, e_n is not the best error achieved in the training set S_n. In the following numerical experiments, for the rollout algorithm, we use independent zero-mean GPs with an automatic relevance determination (ARD) squared-exponential kernel to model each expensive-to-evaluate function. In Algorithm 1, when the GPs are constructed, the vector of hyper-parameters θ_i associated with the ith GP kernel is estimated by maximization of the marginal likelihood. However, to reduce the cost of computing U_n, the hyper-parameters are kept constant in the simulated steps (i.e., in Algorithm 2). To compute the expectations of Eqs. 11-12, we employ N_q = 3^{I+1} Gauss-Hermite quadrature weights and points, and we set the discount factor to γ = 0.9. Finally, at iteration n, the best value f^{S_n}_best is set to the minimum posterior mean μ_n(x; f) over the designs x in the training set S_n such that the posterior mean of each constraint is feasible. If no such point can be found, then f^{S_n}_best is set to the maximum of {μ_n(x; f) + 3σ_m} over the designs x in S_n, where σ_m^2 is the maximum variance of the GP associated with f. The EIC algorithm is computed as a special case of the rollout with rolling horizon h = 0, and we use the Spearmint package^1 to run the PESC algorithm. We additionally run a CBO algorithm that selects the next design to evaluate based on the posterior means of the GPs^2:

x_{n+1} = argmin_{x ∈ X} μ_n(x; f)  s.t.  μ_n(x; g_i) ≤ 0, ∀ i ∈ {1, . . . , I}.    (17)

^1 https://github.com/HIPS/Spearmint/tree/PESC
^2 As suggested by a reviewer.
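Putting Algorithm 2 and the quadrature together, a compact sketch of the rollout utility for a single one-dimensional outcome looks as follows. `ei_c`, `heuristic`, and `posterior` are hypothetical callables standing in for the paper's EIc, policy components (Eqs. 13-14), and GP posterior; `numpy`'s Gauss-Hermite rule supplies the nodes and weights (for W ~ N(mu, sigma^2), E[h(W)] ≈ (1/√π) Σ_q w_q h(mu + √2·sigma·z_q)).

```python
# Recursive rollout utility of Eqs. 11-12 / Algorithm 2, sketched for a
# scalar simulated outcome. Here the normalized weights w_q/sqrt(pi)
# play the role of the paper's alpha^(q).
import numpy as np

NQ = 3        # quadrature points per stage (the paper uses 3^{I+1})
GAMMA = 0.9   # discount factor, as in the paper's experiments

nodes, weights = np.polynomial.hermite.hermgauss(NQ)

def rollout_utility(x, h, S, ei_c, heuristic, posterior):
    """U(x; S) = EIc(x; S) + gamma * E[H(S u {(x, w)})], with the
    expectation approximated by Gauss-Hermite quadrature and the
    future designs chosen by the heuristic (not by optimization)."""
    u = ei_c(x, S)
    if h == 0:
        return u
    mu, sigma = posterior(x, S)
    for z, w in zip(nodes, weights):
        outcome = mu + np.sqrt(2.0) * sigma * z   # quadrature point w^(q)
        S_next = S + [(x, outcome)]               # state transition, Eq. 5
        x_next = heuristic(S_next)                # Eq. 13 (or Eq. 14 at the end)
        u += GAMMA * (w / np.sqrt(np.pi)) * rollout_utility(
            x_next, h - 1, S_next, ei_c, heuristic, posterior)
    return u
```

With h = 0 this collapses to plain EIc, matching the paper's remark that EIC is the special case of the rollout with a zero rolling horizon; each extra unit of h multiplies the cost by N_q, which is the O(N_q^h) scaling noted above.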
Figure 2: Left: Multi-modal objective and single constraint (P1). Right: Linear objective and multiple non-linear constraints (P2). Shaded region indicates the 95% confidence interval of the median statistic. (Both panels plot the log10 median utility gap e_n against the iteration n for PESC, PM, EIc, and Rollout with h = 1, 2, 3.)

We refer to this algorithm as PM. We also compare the CBO algorithms to three local algorithms (SLSQP, MMA and COBYLA) and to one global evolutionary algorithm (ISRES). We now consider four problems with different design space dimensions d, several numbers of constraints I, and various topologies of the feasible space. The first three problems, P1-3, are analytic functions, while the last one, P4, uses a reacting flow model that requires solving a set of partial differential equations (PDEs) [4]. For P1 and P2, we use N = 40 evaluations (as in [6, 10]). For P3 and P4, we use a small number of iterations, N = 60, which corresponds to situations where the functions are very expensive to evaluate (e.g., solving large systems of PDEs can take over a day on a supercomputer). The full description of the problems is available in the appendix. In Figs. 2-3, we show the median of the utility gap; the shadings represent the 95% confidence interval of the median computed by bootstrap. Other statistics of the utility gap are shown in the appendix. For P1, the median utility gap for EIC, PESC, PM and the rollout algorithm with h ∈ {1, 2, 3} is shown in Fig. 2 (left panel). The PM algorithm does not improve its recommendations. This is not surprising because PM focuses on exploitation (PM does not depend on posterior variance), which can result in the algorithm failing to make further progress. Such behavior has already been reported in [16] (Sec. 3).
The three other CBO algorithms perform similarly in the first 10 iterations. PESC is the first to converge, to a utility gap ≈ 10^{-2.7}. The rollout performs similarly to or better than EIC. In the first 15 iterations, longer rolling horizons lead to slightly lower utility gaps. This is likely due to the more exploratory behavior associated with lookahead, which helps differentiate the global solution from the local ones. For the remaining iterations, the shorter rolling horizons reduce the utility gap faster than longer rolling horizons before reaching a plateau. EIC and rollout outperform PESC after 25 iterations. We note that EIC and rollout have essentially converged. For P2, the median performance of EIC, PESC, PM and rollout with rolling horizon h ∈ {1, 2, 3} is shown in Fig. 2 (right panel). The PM algorithm reduces the utility gap in the first 10 iterations, but reaches a plateau at 10^{-1.7}. The three other CBO algorithms perform similarly up to iteration 15, where PESC reaches a plateau.^3 This similarity may be explained by the fact that the local solutions are easily distinguished from the global one, leading to no advantage for exploratory behavior. In this example, the rollout algorithms reach the same plateau at 10^{-3}, with longer horizons h taking more iterations to converge. EIC performs better than rollout with h = 2 before its performance slightly decreases, reaching a plateau at a larger utility gap, 10^{-2.6} (note that the utility gap is not computed with the best value observed so far and thus is not guaranteed to decrease). This increase of the median utility gap can be explained by the fact that a few runs change their recommendation from one local minimum to another, resulting in the change in the median utility gap. This is also reflected in the 95% confidence interval of the median, which further indicates that the statistic is sensitive to a few runs. For P3, the median utility gap for the four CBO algorithms is shown in Fig.
3 (left panel). PM is rapidly outperformed by the other algorithms. The PESC algorithm is outperformed by EIC and rollout after 25 iterations. Again, we note that rollout with h = 1 obtains a lower utility gap than EIC at every iteration. The rollout with h ∈ {2, 3} exhibits a different behavior: it starts decreasing the utility gap later in the optimization but achieves a better performance when the evaluation budget is consumed. Note that none of the algorithms has converged to the global solution, and the strong multi-modality of the objective and constraint function seems to favor exploratory behaviors.

^3 Results obtained for the PESC mean utility gap are consistent with [13].

Figure 3: Left: Multi-modal 4-d objective and constraint (P3). Right: Reacting flow problem (P4). The awareness of the remaining budget explains the sharp decrease in the last iterations for the rollout. (Both panels plot the log10 median utility gap e_n against the iteration n for PESC, PM, EIc, and Rollout with h = 1, 2, 3.)

For the reacting flow problem P4, the median performances are shown in Fig. 3 (right panel). PM rapidly reaches a plateau at e_n ≈ 10^{1.3}. PESC rapidly reduces the utility gap, outperforming the other algorithms after 15 iterations. EIC and rollout perform similarly and slowly decrease the utility gap up to iteration 40, where EIC reaches a plateau and rollout continues to improve performance, slightly outperforming PESC at the end of the optimization. The results are summarized in Table 1, and show that the rollout algorithm with different rolling horizons h (R-h) performs similarly or favorably compared to the other algorithms. Table 1: Log median utility gap log10(e_N). Statistics computed over m independent runs.
Prob  d  N   I  m    SLSQP  MMA    COBYLA  ISRES  PESC   PM     EIC    R-1    R-2    R-3
P1    2  40  1  500   0.59   0.59  -0.05   -0.19  -2.68   0.30  -4.45  -4.59  -4.52  -4.42
P2    2  40  2  500  -0.40  -0.40  -0.82   -0.70  -2.43  -1.76  -2.62  -2.99  -2.99  -2.99⁴
P3    4  60  1  500   2.15   3.06   3.06    1.68   1.66   1.79   1.60   1.48   1.31   1.35
P4    4  60  1  50    0.80   0.80   0.80    0.13   0.09   1.26   0.57  -0.10  -0.10   0.19

Based on the four previous examples, we notice that increasing the rolling horizon h does not necessarily improve the performance of the rollout algorithm. One possible reason is that lookahead algorithms rely more heavily on the statistical model than greedy algorithms do. Because this model is learned as the optimization unfolds, it is imperfect (in particular, the hyperparameters of the GPs are updated after each iteration, but not after each stage of a simulated scenario). By simulating too many steps with the GPs, one may rely over-confidently on the model. In some sense, the rolling horizon h, as well as the discount factor γ, can be interpreted as a form of regularization. The effect of a larger rolling horizon is problem-dependent, and experiment P3 suggests that multimodal problems in higher dimension may benefit from longer rolling horizons. 5 Conclusions We proposed a new formulation for constrained Bayesian optimization with a finite budget of evaluations. The best optimization policy is defined as the one maximizing, on average, the cumulative feasible decrease of the objective function over multiple steps. This optimal policy is the solution of a dynamic programming problem that is intractable due to the presence of nested maximizations. To circumvent this difficulty, we employed the rollout algorithm. Rollout uses a heuristic to simulate optimization scenarios over several steps, thereby computing an approximation of the long-term reward. This heuristic is problem-dependent and, in this paper, we proposed a combination of cheap-to-evaluate greedy CBO algorithms to construct it.
The proposed algorithm was numerically investigated and performed similarly to or favorably compared with constrained expected improvement (EIC) and predictive entropy search with constraints (PESC). This work was supported in part by the AFOSR MURI on multi-information sources of multi-physics systems under Award Number FA9550-15-1-0038, program manager Dr. Jean-Luc Cambier. ⁴For cost reasons, the median for h = 3 was computed with m = 100 independent runs instead of 500. References [1] C. Audet, A. J. Booker, J. E. Dennis Jr, P. D. Frank, and D. W. Moore. A surrogate-model-based method for constrained optimization. AIAA paper, 4891, 2000. [2] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, 1995. [3] M. Björkman and K. Holmström. Global optimization of costly nonconvex functions using radial basis functions. Optimization and Engineering, 4(1):373–397, 2000. [4] M. Buffoni and K. E. Willcox. Projection-based model reduction for reacting flows. In 40th Fluid Dynamics Conference and Exhibit, page 5008, 2010. [5] P. Feliot, J. Bect, and E. Vazquez. A Bayesian approach to constrained single- and multi-objective optimization. Journal of Global Optimization, 67(1-2):97–133, 2017. [6] J. Gardner, M. Kusner, K. Q. Weinberger, J. Cunningham, and Z. Xu. Bayesian optimization with inequality constraints. In T. Jebara and E. P. Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 937–945. JMLR Workshop and Conference Proceedings, 2014. [7] M. A. Gelbart, J. Snoek, and R. P. Adams. Bayesian optimization with unknown constraints. arXiv preprint arXiv:1403.5607, 2014. [8] D. Ginsbourger and R. Le Riche. Towards Gaussian process-based optimization with finite time horizon. In mODa 9–Advances in Model-Oriented Design and Analysis, pages 89–96. Springer, 2010. [9] J. González, M. Osborne, and N. D. Lawrence. GLASSES: Relieving the myopia of Bayesian optimisation.
In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 790–799, 2016. [10] R. B. Gramacy, G. A. Gray, S. Le Digabel, H. K. H. Lee, P. Ranjan, G. Wells, and S. M. Wild. Modeling an augmented Lagrangian for blackbox constrained optimization. Technometrics, 58(1):1–11, 2016. [11] R. B. Gramacy and H. K. H. Lee. Optimization under unknown constraints. arXiv preprint arXiv:1004.4027, 2010. [12] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 13(1):1809–1837, 2012. [13] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general framework for constrained Bayesian optimization using information-based search. arXiv preprint arXiv:1511.09422, 2015. [14] J. M. Hernández-Lobato, M. A. Gelbart, M. W. Hoffman, R. P. Adams, and Z. Ghahramani. Predictive entropy search for Bayesian optimization with unknown constraints. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. [15] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, pages 918–926, 2014. [16] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001. [17] R. R. Lam, K. E. Willcox, and D. H. Wolpert. Bayesian optimization with a finite budget: An approximate dynamic programming approach. In Advances in Neural Information Processing Systems, pages 883–891, 2016. [18] C. K. Ling, K. H. Low, and P. Jaillet. Gaussian process planning with Lipschitz continuous reward functions: Towards unifying Bayesian optimization, active learning, and beyond. arXiv preprint arXiv:1511.06890, 2015. [19] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum.
Towards Global Optimization, 2(117-129):2, 1978. [20] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pages 1–15, 2009. [21] V. Picheny. A stepwise uncertainty reduction approach to constrained global optimization. In AISTATS, pages 787–795, 2014. [22] V. Picheny, R. B. Gramacy, S. Wild, and S. Le Digabel. Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian. In Advances in Neural Information Processing Systems, pages 1435–1443, 2016. [23] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 842. John Wiley & Sons, 2011. [24] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006. [25] R. G. Regis. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Engineering Optimization, 46(2):218–243, 2014. [26] M. J. Sasena, P. Y. Papalambros, and P. Goovaerts. The use of surrogate modeling algorithms to exploit disparities in function computation time within simulation-based optimization. Constraints, 2:5, 2001. [27] M. Schonlau, W. J. Welch, and D. R. Jones. Global versus local search in constrained optimization of computer models. Lecture Notes-Monograph Series, pages 11–25, 1998.
Online Learning with Transductive Regret Mehryar Mohri Courant Institute and Google Research New York, NY mohri@cims.nyu.edu Scott Yang* D. E. Shaw & Co. New York, NY yangs@cims.nyu.edu Abstract We study online learning with the general notion of transductive regret, that is, regret with modification rules applying to expert sequences (as opposed to single experts) that are representable by weighted finite-state transducers. We show how transductive regret generalizes existing notions of regret, including: (1) external regret; (2) internal regret; (3) swap regret; and (4) conditional swap regret. We present a general and efficient online learning algorithm for minimizing transductive regret. We further extend it to design efficient algorithms for the time-selection and sleeping expert settings. A by-product of our study is an algorithm for swap regret which, under mild assumptions, is more efficient than existing ones, and a substantially more efficient algorithm for time-selection swap regret. 1 Introduction Online learning is a general framework for sequential prediction. Within that framework, a widely adopted setting is that of prediction with expert advice [Littlestone and Warmuth, 1994, Cesa-Bianchi and Lugosi, 2006], where the algorithm maintains a distribution over a set of experts. At each round, the loss assigned to each expert is revealed. The algorithm then incurs the expected value of these losses for its current distribution and next updates its distribution. The standard benchmark for the algorithm in this scenario is the external regret, that is, the difference between its cumulative loss and that of the best (static) expert in hindsight.
However, while this benchmark is useful in a variety of contexts and has led to the design of numerous effective online learning algorithms, it may not constitute a useful criterion in common cases where no single fixed expert performs well over the full course of the algorithm's interaction with the environment. This has led to several extensions of the notion of external regret, along two main directions. The first is an extension of the notion of regret so that the learner's algorithm is compared against a competitor class consisting of dynamic sequences of experts. Research in this direction started with the work of Herbster and Warmuth [1998] on tracking the best expert, who studied the scenario of learning against the best sequence of experts with at most k switches. This work has been subsequently improved [Monteleoni and Jaakkola, 2003], generalized [Vovk, 1999, Cesa-Bianchi et al., 2012, Koolen and de Rooij, 2013], and modified [Hazan and Seshadhri, 2009, Adamskiy et al., 2012, Daniely et al., 2015]. More recently, an efficient algorithm with favorable regret guarantees has been given for the general case of a competitor class consisting of sequences of experts represented by a (weighted) finite automaton [Mohri and Yang, 2017, 2018]. This includes as special cases previous competitor classes considered in the literature. The second direction is to consider competitor classes based on modifications of the learner's sequence of actions. This approach began with the notion of internal regret [Foster and Vohra, 1997, Hart and Mas-Colell, 2000], which considers how much better an algorithm could have performed if it had *Work done at the Courant Institute of Mathematical Sciences. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
switched all instances of playing one action with another, and was subsequently generalized to the notion of swap regret [Blum and Mansour, 2007], which considers all possible in-time modifications of a learner's action sequence. More recently, Mohri and Yang [2014] introduced the notion of conditional swap regret, which considers all possible modifications of a learner's action sequence that depend on some fixed bounded history. Odalric and Munos [2011] also studied regret against history-dependent modifications and presented computationally tractable algorithms (with suboptimal regret guarantees) when the comparator class can be organized into a small number of equivalence classes. In this paper, we consider the second direction and study regret with respect to modification rules. We first present an efficient online algorithm for minimizing swap regret (Section 3). We then introduce the notion of transductive regret in Section 4, that is, the regret of the learner's algorithm with respect to modification rules representable by a family of weighted finite-state transducers (WFSTs). This definition generalizes the existing notions of external, internal, swap, and conditional swap regret, and includes modification rules that apply to expert sequences, as opposed to single experts. Moreover, we present efficient algorithms for minimizing transductive regret. We further extend transductive regret to the time-selection setting (Section 5) and present efficient algorithms minimizing time-selection transductive regret. These algorithms significantly improve upon existing state-of-the-art algorithms in the special case of time-selection swap regret. Finally, in Section 6, we extend transductive regret to the sleeping experts setting and present new and efficient algorithms for minimizing sleeping transductive regret. 2 Preliminaries and notation We consider the setting of prediction with expert advice with a set Σ of N experts.
At each round t ∈ [T], an online algorithm A selects a distribution p_t over Σ, the adversary reveals a loss vector l_t ∈ [0, 1]^N, where l_t(x) is the loss of expert x ∈ Σ, and the algorithm incurs the expected loss p_t · l_t. Let Φ ⊆ Σ^Σ denote a set of modification functions mapping the expert set to itself. The objective of the algorithm is to minimize its Φ-regret, Reg_T(A, Φ), defined as the difference between its cumulative expected loss and that of the best modification of the sequence in hindsight:

    Reg_T(A, Φ) = max_{φ∈Φ} { Σ_{t=1}^T E_{x_t∼p_t}[l_t(x_t)] − E_{x_t∼p_t}[l_t(φ(x_t))] }.   (1)

This definition coincides with the standard notion of external regret [Cesa-Bianchi and Lugosi, 2006] when Φ is reduced to the family of constant functions, Φ_ext = {φ_a : Σ → Σ : a ∈ Σ, ∀x ∈ Σ, φ_a(x) = a}; with the notion of internal regret [Foster and Vohra, 1997] when Φ is the family of functions that only switch two actions, Φ_int = {φ_{a,b} : Σ → Σ : a, b ∈ Σ, φ_{a,b}(x) = 1_{x=a} b + 1_{x=b} a + x 1_{x≠a,b}}; and with the notion of swap regret [Blum and Mansour, 2007] when Φ consists of all possible functions mapping Σ to itself, Φ_swap. In Section 4, we will introduce a more general notion of regret with modification rules applying to expert sequences, as opposed to single experts. There are known algorithms achieving an external regret in O(√(T log N)) with a per-iteration computational cost in O(N) [Cesa-Bianchi and Lugosi, 2006], an internal regret in O(√(T log N)) with a per-iteration computational cost in O(N³) [Stoltz and Lugosi, 2005], and a swap regret in O(√(TN log N)) with a per-iteration computational cost in O(N³) [Blum and Mansour, 2007]. 3 Efficient online algorithm for swap regret In this section, we present an online algorithm, FASTSWAP, that achieves the same swap regret guarantee as the algorithm of Blum and Mansour [2007], O(√(TN log N)), but admits the more favorable per-iteration complexity of O(N² log T), under some mild assumptions.
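To make Eq. (1) concrete, the following sketch evaluates the Φ-regret of a fixed learner on synthetic losses for the families Φ_ext and Φ_swap. This is an illustrative computation in Python/NumPy; the data, learner, and variable names are ours, not taken from the paper:

```python
import itertools
import numpy as np

def phi_regret(p, losses, phis):
    # Eq. (1): cumulative expected loss of the learner minus the best
    # cumulative loss achievable by replacing each play x_t with phi(x_t).
    incurred = sum(float(p_t @ l_t) for p_t, l_t in zip(p, losses))
    modified = lambda phi: sum(float(p_t @ l_t[np.asarray(phi)])
                               for p_t, l_t in zip(p, losses))
    return incurred - min(modified(phi) for phi in phis)

N, T = 3, 5
rng = np.random.default_rng(0)
losses = rng.uniform(size=(T, N))          # l_t in [0, 1]^N
p = np.full((T, N), 1.0 / N)               # a static, uniform learner

phi_ext = [tuple([a] * N) for a in range(N)]            # constant maps
phi_swap = list(itertools.product(range(N), repeat=N))  # all maps Sigma -> Sigma

ext = phi_regret(p, losses, phi_ext)
swap = phi_regret(p, losses, phi_swap)
assert 0.0 <= ext <= swap   # Phi_ext is a subset of Phi_swap
```

Since Φ_ext ⊆ Φ_swap, the swap regret always dominates the external regret, which the final assertion checks.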
Existing online algorithms for internal or swap regret minimization require, at each round, solving for a fixed point of an N × N stochastic matrix [Foster and Vohra, 1997, Stoltz and Lugosi, 2005, Blum and Mansour, 2007]. For example, the algorithm of Blum and Mansour [2007] is based on a meta-algorithm A that makes use of N external regret minimization sub-algorithms {A_i}_{i∈[N]} (see Figure 1). Sub-algorithm A_i is specialized in guaranteeing low regret against swapping expert i with any other expert j.

[Figure 1: Illustration of the swap regret algorithm of Blum and Mansour [2007] or the FASTSWAP algorithm, which use a meta-algorithm to control a set of N external regret minimizing algorithms.]

Algorithm 1: FASTSWAP; {A_i}_{i=1}^N are external regret minimization algorithms.
  for t ← 1 to T do
    for i ← 1 to N do
      q_i ← QUERY(A_i)
    Q_t ← [q_1 ⋯ q_N]^⊤
    for j ← 1 to N do
      c_j ← min_{i∈[N]} Q_t(i, j)
    α_t ← ‖c‖_1;  τ_t ← ⌈log(1/√t) / log(1 − α_t)⌉
    if τ_t < N then
      p_t ← p_t^1 ← c/α_t
      for τ ← 1 to τ_t do
        (p_t^τ)^⊤ ← (p_t^τ)^⊤ (Q_t − 1c^⊤);  p_t ← p_t + p_t^τ
      p_t ← p_t / ‖p_t‖_1
    else
      p_t^⊤ ← FIXED-POINT(Q_t)
    x_t ← SAMPLE(p_t);  l_t ← RECEIVELOSS()
    for i ← 1 to N do
      ATTRIBUTELOSS(p_t[i] l_t, A_i)

The meta-algorithm A maintains a distribution p_t over the experts and, at each round t, assigns to sub-algorithm A_i only a fraction of the loss, p_{t,i} l_t, and receives the distribution q_i (over the experts) returned by A_i. At each round t, the distribution p_t is selected to be the fixed point of the N × N stochastic matrix Q_t = [q_1 ⋯ q_N]^⊤. Thus, p_t = p_t Q_t is the stationary distribution of the Markov process defined by Q_t. This choice of the distribution is natural to ensure that the learner's sequence of actions is competitive against a family of modifications, since it is invariant under a mapping that relates to this family of modifications.
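The fixed point p_t = p_t Q_t is a stationary distribution and can be approximated iteratively rather than by solving a linear system. The sketch below uses plain power iteration as a simpler stand-in for the reduced power method that FASTSWAP employs; the function name and example matrix are ours:

```python
import numpy as np

def stationary(Q, tol=1e-12, max_iter=100_000):
    """Approximate the fixed point p = p Q of a row-stochastic matrix Q by
    power iteration. This assumes the Markov chain defined by Q is
    aperiodic, so the iteration converges; FASTSWAP instead uses the
    reduced power method with a convergence guarantee."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(max_iter):
        p_next = p @ Q
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

Q = np.array([[0.6, 0.4],
              [0.2, 0.8]])
p = stationary(Q)
assert np.allclose(p, p @ Q)                      # numerically stationary
assert np.allclose(p, [1 / 3, 2 / 3], atol=1e-6)  # exact answer for this chain
```

Each iteration costs O(N²), which is the source of the per-iteration savings over an O(N³) linear solve.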
The computation of a fixed point involves solving a linear system of equations; thus, the per-round complexity of these algorithms is in O(N³) using standard methods (or O(N^2.373) using the method of Coppersmith and Winograd). To improve upon this complexity in the setting of internal regret, Greenwald et al. [2008] estimate the fixed point by applying, at each round, a single power iteration to some stochastic matrix. Their algorithm runs in O(N²) time per iteration, but at the price of a regret guarantee that is only in O(√N T^{9/10}). Here, we describe an efficient algorithm for swap regret, FASTSWAP. Algorithm 1 gives its pseudocode. As with the algorithm of Blum and Mansour [2007], FASTSWAP is based on a meta-algorithm A making use of N external regret minimization sub-algorithms {A_i}_{i∈[N]}. However, unlike the algorithm of Blum and Mansour [2007], which explicitly computes the stationary distribution of Q_t at round t, or that of Greenwald et al. [2008], which applies a single power iteration at each round, our algorithm applies multiple modified power iterations at round t (τ_t power iterations). Our modified power iterations are based on the REDUCEDPOWERMETHOD (RPM) algorithm introduced by Nesterov and Nemirovski [2015]. Unlike the algorithm of Greenwald et al. [2008], FASTSWAP uses a specific initial distribution at each round, applies the power method to a modification of the original stochastic matrix, and uses, as an approximation, an average of all the iterates at that round. Theorem 1. Let A_1, …
, A_N be external regret minimizing algorithms admitting data-dependent regret bounds of the form O(√(L_T(A_i) log N)), where L_T(A_i) is the cumulative loss of A_i after T rounds. Assume that, at each round, the sum of the minimal probabilities given to an expert by these algorithms is bounded below by some constant α > 0. Then, FASTSWAP achieves a swap regret in O(√(TN log N)) with a per-iteration complexity in O(N² min{log T / log(1/(1 − α)), N}).

[Figure 2: (i) Example of a WFST T: I_T = 0, ilab[E_T[0]] = {a, b}, olab[E_T[1]] = {b}, E_T[2] = {(0, a, b, 1, 1), (0, b, a, 1, 1)}. (ii) Family of swap WFSTs T_φ, with φ: {a, b, c} → {a, b, c}. (iii) A more general example of a WFST, over stock-trading actions such as Apple, IBM, gold, silver, and sell.]

The proof is given in Appendix D. It is based on a stability analysis bounding the additive regret term due to using an approximation of the fixed-point distribution, and on the property that τ_t iterations of the reduced power method ensure a 1/√t-approximation, where t is the number of rounds. The favorable complexity of our algorithm requires an assumption on the sum of the minimal probabilities assigned to an expert by the algorithms at each round. This is a reasonable assumption, which one would expect to hold in practice if all the external regret minimizing sub-algorithms are the same. This is because the true losses assigned to each column of the stochastic matrix are the same, and the rescaling based on the distribution p_t is uniform over each row.
Furthermore, since the number of rounds sufficient for a good approximation can be efficiently estimated, our algorithm can determine when it is worthwhile to switch to standard fixed-point methods, that is, when the condition τ_t > N holds. Thus, the time complexity of our algorithm is never worse than that of Blum and Mansour [2007]. 4 Online algorithm for transductive regret In this section, we consider a more general notion of regret than swap regret, where the family of modification functions applies to sequences instead of just to single experts. We will consider sequence-to-sequence mappings that can be represented by finite-state transducers. In fact, more generally, we will allow weights to be used for these mappings and will consider weighted finite-state transducers. This will lead us to define the notion of transductive regret, where the cumulative loss of an algorithm's sequence of actions is compared to that of the sequences that are images of its action sequence via a transducer mapping. As we shall see, this is an extremely flexible definition that admits as special cases the standard notions of external, internal, and swap regret. We will start with some preliminary definitions and concepts related to transducers. 4.1 Weighted finite-state transducer definitions A weighted finite-state transducer (WFST) T is a finite automaton whose transitions are augmented with an output label and a real-valued weight, in addition to the familiar input label. Figure 2(i) shows a simple example. We will assume both input and output labels to be elements of the alphabet Σ, which denotes the set of experts. Σ* denotes the set of all strings over the alphabet Σ. We denote by E_T the set of transitions of T and, for any transition e ∈ E_T, we denote by ilab[e] its input label, by olab[e] its output label, and by w[e] its weight. For any state u of T, we denote by E_T[u] the set of transitions leaving u.
We also extend the definition of ilab to sets and denote by ilab[E_T[u]] the set of input labels of the transitions in E_T[u]. We assume that T admits a single initial state, which we denote by I_T. For any state u and string x ∈ Σ*, we also denote by δ_T(u, x) the set of states reached from u by reading string x as input. In particular, we will denote by δ_T(I_T, x) the set of states reached from the initial state by reading string x as input. The input (or output) label of a path is obtained by concatenating the input (output) transition labels along that path. The weight of a path is obtained by multiplying its transition weights. A path from the initial state to a final state is called an accepting path. A WFST maps the input label of each accepting path to its output label, with probability given by that path's weight. The WFSTs we consider may be non-deterministic, that is, they may admit states with multiple outgoing transitions sharing the same input label. However, we will assume that, at any state, outgoing transitions sharing the same input label admit the same destination state. We will further require that, at any state, the set of output labels of the outgoing transitions be contained in the set of input labels of the same transitions. This requirement is natural for our definition of regret: our learner will use input-label experts and will compete against sequences of output-label experts. Thus, the algorithm should have the option of selecting an expert sequence it must compete against. Finally, we will assume that our WFSTs are stochastic, that is, for any state u and input label a ∈ Σ, we have Σ_{e∈E_T[u,a]} w[e] = 1. The class of WFSTs thereby defined is broad and, as we shall see, includes the families defining external, internal, and swap regret. 4.2 Transductive regret Given any WFST T, let 𝒯 be a family of WFSTs with the same alphabet Σ, the same set of states Q, the same initial state I and final states F, but with different output labels and weights.
Thus, we can write I_𝒯, F_𝒯, Q_𝒯, and δ_𝒯 without any ambiguity. We will also use the notation E_𝒯 when we refer to the transitions of a transducer within the family 𝒯 in a way that does not depend on the output labels or weights. We define the learner's transductive regret with respect to 𝒯 as follows:

    Reg_T(A, 𝒯) = max_{T∈𝒯} { Σ_{t=1}^T E_{x_t∼p_t}[l_t(x_t)] − Σ_{t=1}^T E_{x_t∼p_t}[ Σ_{e∈E_T[δ_T(I_T, x_{1:t−1}), x_t]} w[e] l_t(olab[e]) ] }.   (2)

This measures the maximum difference between the expected loss of the sequence x_1^T played by A and the expected loss of a competitor sequence, that is, a sequence that is the image by some T ∈ 𝒯 of x_1^T, where the expectation for competing sequences is both over the distributions p_t and over the transition weights w[e] of T. We also assume that the family 𝒯 does not admit proper non-empty invariant subsets of labels out of any state, i.e., for any state u, there exists no proper subset E ⊊ E_𝒯[u] for which the inclusion olab[E] ⊆ ilab[E] holds for all T ∈ 𝒯. This is not a strict requirement, but it will allow us to avoid degenerate competitor classes. As an example, consider the family of WFSTs T_a, a ∈ Σ, with a single state Q = I = F = {0} and with T_a defined by self-loop transitions with all input labels b ∈ Σ, the same output label a, and uniform weights. Thus, T_a maps all labels to a. Then, the notion of transductive regret with 𝒯 = {T_a : a ∈ Σ} coincides with that of external regret. Similarly, consider the family of WFSTs T_φ, φ: Σ → Σ, with a single state Q = I = F = {0} and with T_φ defined by self-loop transitions with input label a ∈ Σ and output φ(a), all weights uniform. Thus, T_φ maps a symbol a to φ(a). Then, the notion of transductive regret with 𝒯 = {T_φ : φ ∈ Σ^Σ} coincides with that of swap regret (see Figure 2(ii)). The more general notion of k-gram conditional swap regret presented in Mohri and Yang [2014] can also be modeled as transductive regret with respect to a family of WFSTs (k-gram WFSTs).
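For single-state transducer families, Eq. (2) can be evaluated directly, since the state reached after any prefix is fixed. The sketch below encodes the swap family {T_φ} of Section 4.2 and checks that the resulting transductive regret equals swap regret; the encoding and names are illustrative, not from the paper:

```python
import itertools
import numpy as np

def transductive_regret_single_state(p, losses, transducers):
    # Eq. (2) restricted to single-state WFSTs: the current state never
    # changes, so the competitor's expected loss factors over rounds.
    # transducers[k][a] is a list of (weight, output_label) pairs for input a.
    incurred = sum(float(p_t @ l_t) for p_t, l_t in zip(p, losses))
    def competitor(T):
        return sum(p_t[a] * w * l_t[b]
                   for p_t, l_t in zip(p, losses)
                   for a in range(len(p_t)) for w, b in T[a])
    return incurred - min(competitor(T) for T in transducers)

N, rounds = 3, 4
rng = np.random.default_rng(1)
losses = rng.uniform(size=(rounds, N))
p = rng.dirichlet(np.ones(N), size=rounds)

# The family {T_phi} of deterministic single-state swap transducers.
swap_family = [[[(1.0, phi[a])] for a in range(N)]
               for phi in itertools.product(range(N), repeat=N)]
tr = transductive_regret_single_state(p, losses, swap_family)

# Sanity check: with this family, transductive regret is swap regret.
swap = (sum(float(p_t @ l_t) for p_t, l_t in zip(p, losses))
        - min(sum(float(p_t @ l_t[list(phi)]) for p_t, l_t in zip(p, losses))
              for phi in itertools.product(range(N), repeat=N)))
assert abs(tr - swap) < 1e-9
```

Multi-state families only require additionally tracking δ_T(I_T, x_{1:t−1}) along the sampled prefix, as FASTTRANSDUCE does.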
We present additional figures illustrating all of these examples in Appendix A. In general, it may be desirable to design WFSTs intended for a specific task, so that an algorithm is robust against some sequence modifications more than others. In fact, such WFSTs may have been learned from past data. The definition of transductive regret is flexible and can accommodate such settings, both because a transducer can conveniently model mappings and because the transition weights help distinguish alternatives. For instance, consider a scenario where each action naturally admits a different swapping subset, which may be only a small subset of all actions. As an example, an investor may only be expected to pick the best strategy from within a similar class of strategies. For example, instead of buying IBM, the investor could have bought Apple or Microsoft, and instead of buying gold, he could have bought silver or bronze. One can also imagine a setting where, along the sequences, some new alternatives become possible while others are excluded. Moreover, one may wish to assign different weights to some sequence modifications or penalize the investor for choosing strategies that are negatively correlated with recent choices. The algorithms in this work are flexible enough to accommodate these environments, which can be straightforwardly modeled by a WFST. We give a simple example in Figure 2(iii) and another illustration in Figure 5 in Appendix A, which can be easily generalized. Notice that, as we shall see later, in the case where the maximum out-degree of any state in the WFST (the size of the swapping subset) is bounded by a mild constant independent of the number of actions, our transductive regret bounds can be very favorable. 4.3 Algorithm We now present an algorithm, FASTTRANSDUCE, seeking to minimize the transductive regret given a family 𝒯 of WFSTs. Our algorithm is an extension of FASTSWAP.
As in that algorithm, a meta-algorithm is used that assigns partial losses to external regret minimization slave algorithms and combines the distributions it receives from these algorithms via multiple reduced power method iterations. The meta-algorithm tracks the state reached in the WFST and maintains a set of external regret minimizing algorithms that help the learner perform well at every state. Thus, here, we need one external regret minimization algorithm A_{u,i} for each state u reached at time t after reading sequence x_{1:t−1} and each i ∈ Σ labeling an outgoing transition at u. The pseudocode of this algorithm is provided in Appendix B. Let |E_𝒯|_in denote the sum of the numbers of transitions with distinct input labels at each state of 𝒯, that is, |E_𝒯|_in = Σ_{u∈Q_𝒯} |ilab[E_𝒯[u]]|. |E_𝒯|_in is upper bounded by the total number of transitions |E_𝒯|. Then, the following regret guarantee and computational complexity hold for FASTTRANSDUCE. Theorem 2. Let (A_{u,i})_{u∈Q, i∈ilab[E_𝒯[u]]} be external regret minimizing algorithms admitting data-dependent regret bounds of the form O(√(L_T(A_{u,i}) log N)), where L_T(A_{u,i}) is the cumulative loss of A_{u,i} after T rounds. Assume that, at each round, the sum of the minimal probabilities given to an expert by these algorithms is bounded below by some constant α > 0. Then, FASTTRANSDUCE achieves a transductive regret against 𝒯 that is in O(√(T |E_𝒯|_in log N)) with a per-iteration complexity in O(N² min{log T / log(1/(1 − α)), N}). The proof is given in Appendix E. The regret guarantee of FASTTRANSDUCE matches that of the swap regret algorithm of Blum and Mansour [2007] or FASTSWAP when 𝒯 is chosen to be the family of swap transducers, and it matches the conditional k-gram swap regret guarantee of Mohri and Yang [2014] when 𝒯 is chosen to be the family of k-gram swap transducers.
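The quantity |E_T|_in appearing in Theorem 2 can be read off directly from a transducer encoding. A minimal sketch, with a hypothetical state-to-transition-list representation of our own (not the paper's code):

```python
def num_distinct_input_transitions(wfst):
    """|E_T|_in: the sum over states u of the number of distinct input
    labels on transitions leaving u. `wfst` maps each state to a list of
    (input, output, weight, next_state) tuples."""
    return sum(len({ilab for ilab, _, _, _ in edges})
               for edges in wfst.values())

# The single-state swap WFST of Figure 2(ii): self-loops on a, b, c.
swap_wfst = {0: [("a", "b", 1.0, 0), ("b", "a", 1.0, 0), ("c", "c", 1.0, 0)]}
assert num_distinct_input_transitions(swap_wfst) == 3

# A two-state example where input `a` at state 0 is non-deterministic but,
# as required in Section 4.1, both `a`-transitions share a destination.
wfst = {0: [("a", "b", 0.5, 1), ("a", "c", 0.5, 1)],
        1: [("b", "b", 1.0, 1)]}
assert num_distinct_input_transitions(wfst) == 2
```

The second example illustrates why |E_T|_in can be strictly smaller than the total number of transitions |E_T|.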
Additionally, its computational complexity is typically more favorable than that of algorithms previously presented in the literature when the assumption on α holds, and it is never worse. Remarkably, the computational complexity of FASTTRANSDUCE is comparable to the cost of FASTSWAP, even though FASTTRANSDUCE is a regret minimization algorithm against an arbitrary family of finite-state transducers. This is because only the external regret minimizing algorithms that correspond to the current state need to be updated at each round. 5 Time-selection transductive regret In this section, we extend the notion of time-selection functions with modification rules to the setting of transductive regret and present an algorithm that achieves the same regret guarantee as Khot and Ponnuswami [2008] in their specific setting, but with a substantially more favorable computational complexity. Time-selection functions were first introduced in [Lehrer, 2003] as Boolean functions that determine which subset of times is relevant in the calculation of regret. This concept was relaxed to the real-valued setting by Blum and Mansour [2007], who considered time-selection functions taking values in [0, 1]. The authors introduced an algorithm which, for K modification rules and M time-selection functions, guarantees a regret in O(√(TN log(MK))) and admits a per-iteration complexity in O(max{NKM, N³}). For swap regret with time-selection functions, this corresponds to a regret bound of O(√(TN² log(MN))) and a per-iteration computational cost in O(N^{N+1} M). Khot and Ponnuswami [2008] improved upon this result and presented an algorithm with a regret bound in O(√(T log(MK))) and a per-iteration computational cost in O(max{MK, N³}), which is still prohibitively expensive for swap regret, since it is in O(N^N M). Algorithm 2: FASTTIMESELECTTRANSDUCE; A_𝓘 and (A_{I,u,i}) are external regret algorithms.
Algorithm: FASTTIMESELECTTRANSDUCE(𝓘, 𝒯, A_𝓘, (A_{I,u,i})_{I∈𝓘, u∈Q_𝒯, i∈ilab[E_𝒯[u]]})
  u ← I_𝒯
  for t ← 1 to T do
    q̃ ← QUERY(A_𝓘)
    for each I ∈ 𝓘 do
      for each i ∈ ilab[E_𝒯[u]] do
        q_{I,i} ← QUERY(A_{I,u,i})
      M_{t,u,I} ← [q_{I,1} 1_{1∈ilab[E_𝒯[u]]}; … ; q_{I,N} 1_{N∈ilab[E_𝒯[u]]}]
      Q_{t,u} ← Q_{t,u} + I(t) q̃_I M_{t,u,I};  Z_t ← Z_t + I(t) q̃_I
    Q_{t,u} ← Q_{t,u} / Z_t
    for j ← 1 to N do
      c_j ← min_{i∈ilab[E_𝒯[u]]} Q_{t,u}(i, j) 1_{j∈ilab[E_𝒯[u]]}
    α_t ← ‖c‖_1;  τ_t ← ⌈log(1/√t) / log(1 − α_t)⌉
    if τ_t < N then
      p_t ← p_t^0 ← c/α_t
      for τ ← 1 to τ_t do
        (p_t^τ)^⊤ ← (p_t^τ)^⊤ (Q_{t,u} − 1c^⊤);  p_t ← p_t + p_t^τ
      p_t ← p_t / ‖p_t‖_1
    else
      p_t^⊤ ← FIXED-POINT(Q_{t,u})
    x_t ← SAMPLE(p_t);  l_t ← RECEIVELOSS();  u ← δ_𝒯[u, x_t]
    for each I ∈ 𝓘 do
      l̃_t^I ← I(t) (p_t^⊤ M_{t,u,I} l_t − p_t^⊤ l_t)
      for each i ∈ ilab[E_𝒯[u]] do
        ATTRIBUTELOSS(A_{I,u,i}, p_t[i] I(t) l_t)
    ATTRIBUTELOSS(A_𝓘, l̃_t)

We now formally define the scenario of online learning with time-selection transductive regret. Let 𝓘 ⊂ [0, 1]^ℕ be a family of time-selection functions. Each time-selection function I ∈ 𝓘 determines the importance of the instantaneous regret at each round. Then, the time-selection transductive regret is defined as:

    Reg_T(A, 𝓘, Φ) = max_{I∈𝓘, T∈Φ} { Σ_{t=1}^T I(t) E_{x_t∼p_t}[l_t(x_t)] − Σ_{t=1}^T I(t) E_{x_t∼p_t}[ Σ_{e∈E_T[δ_T(I_T, x_{1:t−1}), x_t]} w[e] l_t(olab[e]) ] }.   (3)

When the family of transducers admits a single state, this definition coincides with the notion of time-selection regret studied by Blum and Mansour [2007] or Khot and Ponnuswami [2008]. Time-selection transductive regret is a more difficult benchmark than transductive regret because the learner must account for only a subset of the rounds being relevant, in addition to playing a strategy that is robust against a large set of possible transductions. To handle this scenario, we propose the following strategy. We maintain an external regret minimizing algorithm A_𝓘 over the set of time-selection functions. This algorithm will be responsible for ensuring that our strategy is competitive against the a posteriori optimal time-selection function.
We also maintain |I| · |Q_T| · N other external regret minimizing algorithms, {A_{I,u,i}}_{I∈I, u∈Q_T, i∈ilab[E_T[u]]}, which will ensure that our algorithm is robust against each of the modification rules and the potential transductions. We will then use a meta-algorithm to assign appropriate surrogate losses to each of these external regret minimizing algorithms and combine them to form a stochastic matrix. As in FASTTRANSDUCE, this meta-algorithm will also approximate the stationary distribution of the matrix and use that as the learner's strategy. We call this algorithm FASTTIMESELECTTRANSDUCE. Its pseudocode is given in Algorithm 2.

Theorem 3. Let (A_{I,u,i})_{I∈I, u∈Q_T, i∈ilab[E_T[u]]} be external regret minimizing algorithms admitting data-dependent regret bounds of the form O(√(L_T(A_{I,u,i}) log N)), where L_T(A_{I,u,i}) is the cumulative loss of A_{I,u,i} after T rounds. Let A_I be an external regret minimizing algorithm over I that admits a regret in O(√(T log |I|)) after T rounds. Assume further that at each round, the sum of the minimal probabilities given to an expert by these algorithms is bounded below by some constant α > 0. Then, FASTTIMESELECTTRANSDUCE achieves a time-selection transductive regret with respect to the time-selection family I and WFST family T that is in O(√(T (log |I| + |E_T|_in log N))), with a per-iteration complexity in O(N² (min{log T / log((1−α)⁻¹), N} + |I|)).

In particular, Theorem 3 implies that FASTTIMESELECTTRANSDUCE achieves the same time-selection swap regret guarantee as the algorithm of Khot and Ponnuswami [2008], but with a per-round computational cost that is only in O(N² (min{log T / log((1−α)⁻¹), N} + |I|)), as opposed to O(|I| N^N), which is an exponential improvement! Notice that this significant improvement does not require any assumption (it holds even for α = 0).

6 Sleeping transductive regret

The standard setting of prediction with expert advice can be extended to the sleeping experts scenario studied by Freund et al.
[1997], where, at each round, a subset of the experts are asleep and thus unavailable to the learner. The sleeping experts setting has been used to model problems appearing in text categorization [Cohen and Singer, 1999], calendar scheduling [Blum, 1997], or learning how to formulate search-engine queries [Cohen and Singer, 1996]. The standard benchmark in this setting is the sleeping regret, that is, the difference between the cumulative expected loss of the learner and the cumulative expected loss of the best static distribution over the experts, restricted to and normalized over the set of awake experts A_t ⊆ Σ at each round t:

max_{u∈Δ_N} { Σ_{t=1}^T E_{x_t∼p_t^{A_t}}[l_t(x_t)] − Σ_{t=1}^T E_{x_t∼u^{A_t}}[l_t(x_t)] }.   (4)

Here, for any distribution p, we use the notation p^{A_t} = p|_{A_t} / Σ_{i∈A_t} p_i, with p|_A(a) = p(a) 1_{a∈A}, for any a ∈ Σ and A ⊆ Σ. An alternative definition of sleeping regret, studied and bounded by Freund et al. [1997], is the following:

max_{u∈Δ_N} { Σ_{t=1}^T u(A_t) E_{x_t∼p_t^{A_t}}[l_t(x_t)] − Σ_{t=1}^T E_{x_t∼u}[1_{x_t∈A_t} l_t(x_t)] }.   (5)

This is also the definition we will be adopting in our analysis. Note that if u(A_t) does not vary with t, then the two definitions only differ by a multiplicative constant. By generalizing the results of Freund et al. [1997] to arbitrary losses, that is, beyond those that satisfy equation (6) in their paper, one can show that there exist algorithms with sleeping regret in O(√(Σ_{t=1}^T u*(A_t) E_{x_t∼p_t}[l_t(x_t)] log N)), where u* maximizes the expression to be bounded. In this section, we extend this definition of sleeping regret to sleeping transductive regret, that is, the difference between the learner's cumulative expected loss and the cumulative expected loss of any transduction of the learner's actions among a family of finite-state transducers, where the weights of the transductions are normalized over the set of awake experts.
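The restriction-and-renormalization p^{A_t} that appears throughout this section is easy to state concretely. A minimal numpy sketch, with illustrative names; the uniform fallback when no mass lies on the awake set is an assumption for robustness, not part of the paper's definition:

```python
import numpy as np

def restrict_to_awake(p, awake):
    """Compute p^A: zero out asleep experts and renormalize, i.e.
    p^A_i = p_i * 1[i in A] / sum_{j in A} p_j.
    `awake` is a boolean mask over the N experts."""
    masked = np.where(awake, p, 0.0)
    total = masked.sum()
    if total == 0.0:
        # Assumption: fall back to uniform over the awake set when p
        # puts no mass on it (the definition is otherwise undefined).
        masked = awake.astype(float)
        total = masked.sum()
    return masked / total
```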
The sleeping transductive regret can be expressed as follows:

Reg_T(A, T, A_1^T) = max_{T∈T, u∈Δ_N} { Σ_{t=1}^T u(A_t) E_{x_t∼p_t^{A_t}}[l_t(x_t)] − Σ_{t=1}^T E_{x_t∼p_t^{A_t}}[ Σ_{e∈E_T[δ_T(I_T, x_{1:t−1}), x_t]} (u|_{A_t})_{olab[e]} w[e] l_t(olab[e]) ] }.   (6)

Figure 3: Maximum values of τ and minimum values of α in FASTSWAP experiments. The vertical bars represent the standard deviation across 16 instantiations of the same simulation.

When all experts are awake at every round, i.e. A_t = Σ, the sleeping transductive regret reduces to the standard transductive regret. When the family of transducers corresponds to that of swap regret, we uncover a natural definition of sleeping swap regret:

max_{φ∈Φ_swap, u∈Δ_N} Σ_{t=1}^T u(A_t) E_{x_t∼p_t^{A_t}}[l_t(x_t)] − Σ_{t=1}^T E_{x_t∼p_t^{A_t}}[ u_{φ(x_t)} 1_{φ(x_t)∈A_t} l_t(φ(x_t)) ].

We now present an efficient algorithm for minimizing sleeping transductive regret, FASTSLEEPTRANSDUCE. Similar to FASTTRANSDUCE, this algorithm uses a meta-algorithm with multiple regret minimizing sub-algorithms and a fixed-point approximation to compute the learner's strategy. However, since FASTSLEEPTRANSDUCE minimizes sleeping transductive regret, it uses sleeping regret minimizing sub-algorithms (i.e. those with regret guarantees of the form (5)). The meta-algorithm also designs a different stochastic matrix. The pseudocode of this algorithm is given in Appendix C.

Theorem 4. Assume that the sleeping regret minimizing algorithms used as inputs of FASTSLEEPTRANSDUCE achieve data-dependent regret bounds such that, if the algorithm selects the distributions (p_t)_{t=1}^T and observes losses (l_t)_{t=1}^T with awake sets (A_t)_{t=1}^T, then the regret of A_i^q is at most O(√(Σ_{t=1}^T u*(A_t) E_{x_t∼p_t}[l_t(x_t)] log N)). Assume further that at each round, the sum of the minimal probabilities given to an expert by these algorithms is bounded below by some constant α > 0. Then, the sleeping regret Reg_T(FASTSLEEPTRANSDUCE, T, A_1^T) of FASTSLEEPTRANSDUCE is upper bounded by O(√(Σ_{t=1}^T u(A_t) |E_T|_in log N)).
Moreover, FASTSLEEPTRANSDUCE admits a per-iteration complexity in O(N² min{log T / log(1/(1−α)), N}).

7 Experiments

In this section, we present some toy experiments illustrating the effectiveness of the Reduced Power Method for approximating the stationary distribution in FASTSWAP. We considered n base learners, where n ∈ {40, 80, 120, 160, 200}, each using the weighted-majority algorithm [Littlestone and Warmuth, 1994]. We generated losses as i.i.d. normal random variables with means in (0.1, 0.9) (chosen randomly) and standard deviation equal to 0.1. We capped the losses above and below to remain in [0, 1]. We ran FASTSWAP for 10,000 rounds in each simulation and repeated each simulation 16 times. The plot of the maximum τ for each simulation is shown in Figure 3. Across all simulations, the maximum τ attained was 4, so that at most 4 iterations of the RPM were needed on any given round to obtain a sufficient approximation. Thus, the per-iteration cost in these simulations was indeed in Õ(N²), an improvement over the O(N³) cost in prior work.

8 Conclusion

We introduced the notion of transductive regret, further extended it to the time-selection and sleeping experts settings, and presented efficient online learning algorithms for all these settings with sublinear transductive regret guarantees. We both generalized the existing theory and gave more efficient algorithms in existing subcases. The algorithms and results in this paper can be further extended to the case of fully non-deterministic weighted finite-state transducers.

Acknowledgments

We thank Avrim Blum for informing us of an existing lower bound for swap regret proven by Auer [2017]. This work was partly funded by NSF CCF-1535987 and NSF IIS-1618662.

References

D. Adamskiy, W. M. Koolen, A. Chernov, and V. Vovk. A closer look at adaptive regret. In ALT, pages 290–304. Springer, 2012.
P. Auer. Personal communication, 2017.
A. Blum.
Empirical support for Winnow and Weighted-Majority algorithms: Results on a calendar scheduling domain. Machine Learning, 26(1):5–23, 1997.
A. Blum and Y. Mansour. From external to internal regret. Journal of Machine Learning Research, 8:1307–1324, 2007.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
N. Cesa-Bianchi, P. Gaillard, G. Lugosi, and G. Stoltz. Mirror descent meets fixed share (and feels no regret). In NIPS, pages 980–988, 2012.
W. W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on Internet-Based Information Systems, 1996.
W. W. Cohen and Y. Singer. Context-sensitive learning methods for text categorization. ACM Transactions on Information Systems, 17(2):141–173, 1999.
A. Daniely, A. Gonen, and S. Shalev-Shwartz. Strongly adaptive online learning. In Proceedings of ICML, pages 1405–1411, 2015.
D. P. Foster and R. V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1-2):40–55, 1997.
Y. Freund, R. E. Schapire, Y. Singer, and M. K. Warmuth. Using and combining predictors that specialize. In STOC, pages 334–343. ACM, 1997.
A. Greenwald, Z. Li, and W. Schudy. More efficient internal-regret-minimizing algorithms. In COLT, pages 239–250, 2008.
S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
E. Hazan and S. Kale. Computational equivalence of fixed points and no regret algorithms, and convergence to equilibria. In NIPS, pages 625–632, 2008.
E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of ICML, pages 393–400. ACM, 2009.
M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, 1998.
S. Khot and A. K. Ponnuswami. Minimizing wide range regret with time selection functions. In 21st Annual Conference on Learning Theory, COLT 2008, 2008.
W. M. Koolen and S. de Rooij.
Universal codes from switching strategies. IEEE Transactions on Information Theory, 59(11):7168–7185, 2013.
E. Lehrer. A wide range no-regret theorem. Games and Economic Behavior, 42(1):101–115, 2003.
N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
M. Mohri and S. Yang. Conditional swap regret and conditional correlated equilibrium. In NIPS, pages 1314–1322, 2014.
M. Mohri and S. Yang. Online learning with expert automata. ArXiv 1705.00132, 2017. URL http://arxiv.org/abs/1705.00132.
M. Mohri and S. Yang. Competing with automata-based expert sequences. In AISTATS, 2018.
C. Monteleoni and T. S. Jaakkola. Online learning of non-stationary sequences. In NIPS, 2003.
Y. Nesterov and A. Nemirovski. Finding the stationary states of Markov chains by iterative methods. Applied Mathematics and Computation, 255:58–65, 2015.
N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani. Algorithmic Game Theory, volume 1. Cambridge University Press, Cambridge, 2007.
M. Odalric and R. Munos. Adaptive bandits: Towards the best history-dependent strategy. In AISTATS, pages 570–578, 2011.
G. Stoltz and G. Lugosi. Internal regret in on-line portfolio selection. Machine Learning, 59(1):125–159, 2005.
V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35(3):247–282, 1999.
Pixels to Graphs by Associative Embedding

Alejandro Newell, Jia Deng
Computer Science and Engineering, University of Michigan, Ann Arbor
{alnewell, jiadeng}@umich.edu

Abstract

Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene but they can capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph definition. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset, and demonstrate state-of-the-art performance on the challenging task of scene graph generation.

1 Introduction

Extracting semantics from images is one of the main goals of computer vision. Recent years have seen rapid progress in the classification and localization of objects [7, 24, 10]. But a bag of labeled and localized objects is an impoverished representation of image semantics: it tells us what and where the objects are ("person" and "car"), but does not tell us about their relations and interactions ("person next to car"). A necessary step is thus to not only detect objects but to identify the relations between them. An explicit representation of these semantics is referred to as a scene graph [12], where we represent objects grounded in the scene as vertices and the relationships between them as edges. End-to-end training of convolutional networks has proven to be a highly effective strategy for image understanding tasks. It is therefore natural to ask whether the same strategy would be viable for predicting graphs from pixels. Existing approaches, however, tend to break the problem down into more manageable steps.
For example, one might run an object detection system to propose all of the objects in the scene, then isolate individual pairs of objects to identify the relationships between them [18]. This breakdown often restricts the visual features used in later steps and limits reasoning over the full graph and over the full contents of the image. We propose a novel approach to this problem, where we train a network to define a complete graph from a raw input image. The proposed supervision allows a network to better account for the full image context while making predictions, meaning that the network reasons jointly over the entire scene graph rather than focusing on pairs of objects in isolation. Furthermore, there is no explicit reliance on external systems such as Region Proposal Networks (RPN) [24] that provide an initial pool of object detections. To do this, we treat all graph elements (both vertices and edges) as visual entities to be detected as in a standard object detection pipeline. Specifically, a vertex is an instance of an object ("person"), and an edge is an instance of an object-object relation ("person next to car"). Just as visual patterns in an image allow us to distinguish between objects, there are properties of the image that allow us to see relationships. We train the network to pick up on these properties and point out where objects and relationships are likely to exist in the image space.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Scene graphs are defined by the objects in an image (vertices) and their interactions (edges). The ability to express information about the connections between objects makes scene graphs a useful representation for many computer vision tasks, including captioning and visual question answering.

What distinguishes this work from established detection approaches [24] is the need to represent connections between detections.
Traditionally, a network takes an image, identifies the items of interest, and outputs a pile of independent objects. A given detection does not tell us anything about the others. But now, if the network produces a pool of objects ("car", "person", "dog", "tree", etc.) and also identifies a relationship such as "in front of", we need to define which of the detected objects is in front of which. Since we do not know which objects will be found in a given image ahead of time, the network needs to somehow refer to its own outputs. We draw inspiration from associative embeddings [20] to solve this problem. Originally proposed for detection and grouping in the context of multi-person pose estimation, associative embeddings provide the necessary flexibility in the network's output space. For pose estimation, the idea is to predict an embedding vector for each detected body joint such that detections with similar embeddings can be grouped to form an individual person. But in its original formulation the embeddings are too restrictive: the network can only define clusters of nodes, whereas for a scene graph we need to express arbitrary edges between pairs of nodes. To address this, associative embeddings must be used in a substantially different manner. That is, rather than having nodes output a shared embedding to refer to clusters and groups, we instead have each node define its own unique embedding. Given a set of detected objects, the network outputs a different embedding for each object. Now, each edge can refer to the source and destination nodes by correctly producing their embeddings. Once the network is trained, it is straightforward to match the embeddings from detected edges to each vertex and construct a final graph. There is one further issue that we address in this work: how to deal with detections grounded at the same location in the image. Frequently in graph prediction, multiple vertices or edges may appear in the same place.
Supervision of this is difficult, as training a network traditionally requires telling it exactly what appears and where. With an unordered set of overlapping detections there may not be a direct mapping to explicitly lay this out. Consider a set of object relations grounded at the same pixel location. Assume the network has some fixed output space consisting of discrete "slots" in which detections can appear. It is unclear how to define a mapping so that the network has a consistent rule for organizing its relation predictions into these slots. We address this problem by not enforcing any explicit mapping at all, and instead provide supervision such that it does not matter how the network chooses to fill its output: a correct loss can still be applied. Our contributions are a novel use of associative embeddings for connecting the vertices and edges of a graph, and a technique for supervising an unordered set of network outputs. Together these form the building blocks of our system for direct graph prediction from pixels. We apply our method to the task of generating a semantic graph of objects and relations and test on the Visual Genome dataset [14]. We achieve state-of-the-art results, improving performance over prior work by nearly a factor of three on the most difficult task setting.

2 Related Work

Relationship detection: There are many ways to frame the task of identifying objects and the relationships between them. This includes localization from referential expressions [11], detection of human-object interactions [3], or the more general tasks of visual relationship detection (VRD) [18] and scene graph generation [12]. In all of these settings, the aim is to correctly determine the relationships between pairs of objects and ground this in the image with accurate object bounding boxes. Visual relationship detection has drawn much recent attention [18, 28, 27, 2, 17, 19, 22, 23].
The open-ended and challenging nature of the task lends itself to a variety of diverse approaches and solutions. For example: incorporating vision and language when reasoning over a pair of objects [18]; using message-passing RNNs to process a set of proposed object boxes [26]; predicting over triplets of bounding boxes that correspond to proposals for a subject, phrase, and object [15]; using reinforcement learning to sequentially evaluate pairs of object proposals and determine their relationships [16]; comparing the visual features and relative spatial positions of pairs of boxes [4]; learning to project proposed objects into a vector space such that the difference between two object vectors is informative of the relationship between them [27]. Most of these approaches rely on generated bounding boxes from a Region Proposal Network (RPN) [24]. Our method does not require proposed boxes and can produce detections directly from the image. However, proposals can be incorporated as additional input to improve performance. Furthermore, many methods process pairs of objects in isolation, whereas we train a network to process the whole image and produce all object and relationship detections at once.

Associative Embedding: Vector embeddings are used in a variety of contexts. For example, to measure the similarity between pairs of images [6, 25], or to map visual and text features to a shared vector space [5, 8, 13]. Recent work uses vector embeddings to group together body joints for multi-person pose estimation [20]. These are referred to as associative embeddings since supervision does not require the network to output a particular vector value, and instead uses the distances between pairs of embeddings to calculate a loss. What is important is not the exact value of the vector but how it relates to the other embeddings produced by the network. More specifically, in [20] a network is trained to detect body joints of the various people in an image.
In addition, it must produce a vector embedding for each of its detections. The embedding is used to identify which person a particular joint belongs to. This is done by ensuring that all joints that belong to a single individual produce the same output embedding, and that the embeddings across individuals are sufficiently different to separate detections out into discrete groups. In a certain sense, this approach does define a graph, but the graph is restricted in that it can only represent clusters of nodes. For the purposes of our work, we take a different perspective on the associative embedding loss in order to express any arbitrary graph as defined by a set of vertices and directed edges. There are other ways that embeddings could be applied to solve this problem, but our approach depends on our specific formulation, where we treat edges as elements of the image to be detected, which is not obvious given the prior use of associative embeddings for pose.

3 Pixels → Graph

Our goal is to construct a graph from a set of pixels. In particular, we want to construct a graph grounded in the space of these pixels, meaning that in addition to identifying vertices of the graph, we want to know their precise locations. A vertex in this case can refer to any object of interest in the scene, including people, cars, clothing, and buildings. The relationships between these objects are then captured by the edges of the graph. These relationships may include verbs (eating, riding), spatial relations (on the left of, behind), and comparisons (smaller than, same color as). More formally, we consider a directed graph G = (V, E). A given vertex v_i ∈ V is grounded at a location (x_i, y_i) and defined by its class and bounding box. Each edge e ∈ E takes the form e_i = (v_s, v_t, r_i), defining a relationship of type r_i from v_s to v_t. We train a network to explicitly define V and E.
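The graph structure just defined (vertices grounded at a location with a class and bounding box; directed, typed edges) can be held in a small data structure. A hedged Python sketch, with illustrative names; the midpoint helper mirrors the edge-grounding convention the paper uses, and the integer-index encoding of edge endpoints is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    """An object grounded at (x, y) with a class label and bounding box."""
    x: int
    y: int
    label: str
    box: tuple  # (x0, y0, x1, y1)

@dataclass
class Edge:
    """A relationship of type `rel` from a source vertex to a target vertex."""
    s: int      # index of the source vertex in the graph's vertex list
    t: int      # index of the target vertex
    rel: str

@dataclass
class SceneGraph:
    vertices: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def midpoint(self, e):
        """Grounding location of an edge: the midpoint of its endpoints."""
        vs, vt = self.vertices[e.s], self.vertices[e.t]
        return ((vs.x + vt.x) // 2, (vs.y + vt.y) // 2)
```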
This training is done end-to-end on a single network, allowing the network to reason fully over the image and all possible components of the graph when making its predictions. While production of the graph occurs all at once, it helps to think of the process in two main steps: detecting individual elements of the graph, and connecting these elements together. For the first step, the network indicates where vertices and edges are likely to exist and predicts the properties of these detections. For the second, we determine which two vertices are connected by a detected edge. We describe these two steps in detail in the following subsections.

Figure 2: Full pipeline for object and relationship detection. A network is trained to produce two heatmaps that activate at the predicted locations of objects and relationships. Feature vectors are extracted from the pixel locations of top activations and fed through fully connected networks to predict object and relationship properties. Embeddings produced at this step serve as IDs allowing detections to refer to each other.

3.1 Detecting graph elements

First, the network must find all of the vertices and edges that make up a graph. Each graph element is grounded at a pixel location which the network must identify. In a scene graph where vertices correspond to object detections, the center of the object bounding box will serve as the grounding location. We ground edges at the midpoint of the source and target vertices: (⌊(x_s + x_t)/2⌋, ⌊(y_s + y_t)/2⌋). With this grounding in mind, we can detect individual elements by using a network that produces per-pixel features at a high output resolution. The feature vector at a pixel determines if an edge or vertex is present at that location, and if so is used to predict the properties of that element. A convolutional neural network is used to process the image and produce a feature tensor of size h × w × f.
All information necessary to define a vertex or edge is thus encoded at a particular pixel in a feature vector of length f. Note that even at a high output resolution, multiple graph elements may be grounded at the same location. The following discussion assumes that at most one vertex and one edge exist at a given pixel; we elaborate on how we accommodate multiple detections in Section 3.3. We use a stacked hourglass network [21] to process an image and produce the output feature tensor. While our method has no strict dependence on network architecture, there are some properties that are important for this task. The hourglass design combines global and local information to reason over the full image and produce high quality per-pixel predictions. This design was originally used for human pose estimation, which requires global reasoning over the structure of the body but also precise localization of individual joints. Similar logic applies to scene graphs, where the context of the whole scene must be taken into account while preserving the local information of individual elements. An important design choice here is the output resolution of the network. It does not have to match the full input resolution, but there are a few details worth considering. First, it is possible for elements to be grounded at the exact same pixel. The lower the output resolution, the higher the probability of overlapping detections. Our approach allows this, but the fewer overlapping detections, the better. All information necessary to define these elements must be encoded into a single feature vector of length f, which gets more difficult to do as more elements occupy a given location. Another detail is that increasing the output resolution aids in performing better localization. To predict the presence of graph elements, we take the final feature tensor and apply a 1×1 convolution and sigmoid activation to produce two heatmaps (one for vertices and another for edges).
Each heatmap indicates the likelihood that a vertex or edge exists at a given pixel. Supervision is a binary cross-entropy loss on the heatmap activations, and we threshold on the result to produce a candidate set of detections. Next, for each of these detections we must predict its properties, such as its class label. We extract the feature vector from the corresponding location of a detection, and use the vector as input to a set of fully connected networks. A separate network is used for each property we wish to predict, and each consists of a single hidden layer with f nodes. This is illustrated in Figure 2. During training, we use the ground truth locations of vertices and edges to extract features. A softmax loss is used to supervise labels like object class and relationship predicate. To predict bounding box information, we use anchor boxes and regress offsets based on the approach in Faster-RCNN [24]. In summary, the detection pipeline works as follows: We pass the image through a network to produce a set of per-pixel features. These features are first used to produce heatmaps identifying vertex and edge locations. Individual feature vectors are extracted from the top heatmap locations to predict the appropriate vertex and edge properties. The final result is a pool of vertex and edge detections that together will compose the graph.

3.2 Connecting elements with associative embeddings

Next, the various pieces of the graph need to be put together. This is made possible by training the network to produce additional outputs in the same step as the class and bounding box prediction. For every vertex, the network produces a unique identifier in the form of a vector embedding, and for every edge, it must produce the corresponding embeddings to refer to its source and destination vertices. The network must learn to ensure that embeddings are different across different vertices, and that all embeddings that refer to a single vertex are the same.
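This pull-together/push-apart objective can be sketched numerically. A minimal numpy illustration with illustrative function names: `pull_loss` draws each edge's reference embeddings onto their vertex, and `push_loss` applies a margin penalty between distinct vertex embeddings. A real implementation would live inside the training framework so gradients propagate.

```python
import numpy as np

def pull_loss(h, refs):
    """h: list of per-vertex embeddings; refs[i]: the K_i embeddings produced
    by edges that refer to vertex i. Mean squared distance of references
    to their vertex, averaged over all references."""
    total, count = 0.0, 0
    for hi, h_refs in zip(h, refs):
        for hk in h_refs:
            total += float(np.sum((hi - hk) ** 2))
            count += 1
    return total / max(count, 1)

def push_loss(h, m=8.0):
    """Margin penalty pushing distinct vertex embeddings at least m apart."""
    total = 0.0
    for i in range(len(h)):
        for j in range(i + 1, len(h)):
            total += max(0.0, m - float(np.linalg.norm(h[i] - h[j])))
    return total
```

The total embedding loss is then `pull_loss(...) + push_loss(...)`, matching the equal weighting of the two penalties.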
These embeddings are critical for explicitly laying out the definition of a graph. For instance, while it is helpful that edge detections are grounded at the midpoint of two vertices, this ultimately does not address a couple of critical details for correctly constructing the graph. The midpoint does not indicate which vertex serves as the source and which serves as the destination, nor does it disambiguate between pairs of vertices that happen to share the same midpoint. To train the network to produce a coherent set of embeddings, we build off of the loss penalty used in [20]. During training, we have a ground truth set of annotations defining the unique objects in the scene and the edges between these objects. This allows us to enforce two penalties: that an edge points to a vertex by matching its output embedding as closely as possible, and that the embedding vectors produced for each vertex are sufficiently different. We think of the first as "pulling together" all references to a single vertex, and the second as "pushing apart" the references to different individual vertices. We consider an embedding h_i ∈ ℝ^d produced for a vertex v_i ∈ V. All edges that connect to this vertex produce a set of embeddings h′_{ik}, k = 1, ..., K_i, where K_i is the total number of references to that vertex. Given an image with n objects, the loss to "pull together" these embeddings is:

L_pull = (1 / Σ_{i=1}^n K_i) Σ_{i=1}^n Σ_{k=1}^{K_i} (h_i − h′_{ik})²

To "push apart" embeddings across different vertices, we first used the penalty described in [20], but experienced difficulty with convergence. We tested alternatives and the most reliable loss was a margin-based penalty similar to [9]:

L_push = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} max(0, m − ‖h_i − h_j‖)

Intuitively, L_push is at its highest when h_i and h_j are closest to each other. The penalty drops off sharply as the distance between h_i and h_j grows, eventually hitting zero once the distance is greater than a given margin m.
On the flip side, for some edge connected to a vertex v_i, the loss L_pull will quickly grow the further its reference embedding h′_i is from h_i. The two penalties are weighted equally, leaving a final associative embedding loss of L_pull + L_push. In this work, we use m = 8 and d = 8. Convergence of the network improves greatly after increasing the dimension d of tags up from 1 as used in [20]. Once the network is trained with this loss, full construction of the graph can be performed with a trivial postprocessing step. The network produces a pool of vertex and edge detections. For every edge, we look at the source and destination embeddings and match them to the closest embedding amongst the detected vertices. Multiple edges may have the same source and target vertices, v_s and v_t, and it is also possible for v_s to equal v_t.

3.3 Support for overlapping detections

In scene graphs, there are going to be many cases where multiple vertices or multiple edges will be grounded at the same pixel location. For example, it is common to see two distinct relationships between a single pair of objects: "person wearing shirt" and "shirt on person". The detection pipeline must therefore be extended to support multiple detections at the same pixel. One way of dealing with this is to define an extra axis that allows for discrete separation of detections at a given x, y location. For example, one could split up objects along a third spatial dimension assuming the z-axis were annotated, or perhaps separate them by bounding box anchors. In either of these cases there is a visual cue guiding the network so that it can learn a consistent rule for assigning new detections to a correct slot in the third dimension. Unfortunately, this idea cannot be applied as easily to relationship detections. It is unclear how to define a third axis such that there is a reliable and consistent bin assignment for each relationship.
In our approach, we still separate detections out into several discrete bins, but address the issue of assignment by not enforcing any specific assignment at all. This means that for a given detection we strictly supervise the x, y location in which it is to appear, but allow it to show up in one of several "slots". We have no way of knowing ahead of time in which slot the network will place it, so an extra step must be taken at training time to identify where we think the network has placed its predictions and then enforce the loss at those slots. We define s_o and s_r to be the number of slots available for objects and relationships, respectively. We modify the network pipeline so that instead of producing predictions for a single object and relationship at a pixel, a feature vector is used to produce predictions for a set of s_o objects and s_r relationships. That is, given a feature vector f from a single pixel, the network will, for example, output s_o object class labels, s_o bounding box predictions, and s_o embeddings. This is done with separate fully connected layers predicting the various object and relationship properties for each available slot. No weights are shared amongst these layers. Furthermore, we add an additional output to serve as a score indicating whether or not a detection exists at each slot. During training, we have some number of ground truth objects, between 1 and s_o, grounded at a particular pixel. We do not know which of the s_o outputs of the network will correspond to which objects, so we must perform a matching step. The network produces distributions across possible object classes and bounding box sizes, so we try to best match the outputs to the ground truth information we have available. We construct a reference vector by concatenating one-hot encodings of the class and bounding box anchor for a given object. Then we compare these reference vectors to the output distributions produced at each slot.
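The reference-vector construction and per-slot comparison can be sketched as below. For the assignment itself we show a brute-force minimum-cost matching, which for the small slot counts used here (s_o = 3, s_r = 6) yields the same result as the Hungarian method; all names are illustrative:

```python
import itertools

def one_hot(index, length):
    v = [0.0] * length
    v[index] = 1.0
    return v

def object_reference(class_id, anchor_id, num_classes, num_anchors):
    """Ground-truth reference vector: one-hot class concatenated with
    one-hot bounding-box anchor."""
    return one_hot(class_id, num_classes) + one_hot(anchor_id, num_anchors)

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_to_slots(refs, slot_preds):
    """Assign each reference vector to a distinct slot, minimizing the total
    squared distance to the slots' predicted distributions. Brute force over
    permutations; fine for a handful of slots."""
    best_cost, best = float("inf"), None
    for perm in itertools.permutations(range(len(slot_preds)), len(refs)):
        cost = sum(sq_dist(r, slot_preds[s]) for r, s in zip(refs, perm))
        if cost < best_cost:
            best_cost, best = cost, list(perm)
    return best
```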
The Hungarian method is used to perform a maximum matching step such that ground truth annotations are assigned to the best possible slots, and no two annotations are assigned to the same slot. Matching for relationships is similar. The ground truth reference vector is constructed by concatenating a one-hot encoding of the relationship class with the output embeddings h_s and h_t from the source and destination vertices, v_s and v_t. Once the best matching has been determined, we have a correspondence between the network predictions and the set of ground truth annotations and can apply the various losses. We also supervise the score for each slot depending on whether or not it is matched to a ground truth detection, thus teaching the network to indicate a "full" or "empty" slot. This matching process is only used during training. At test time, we extract object and relationship detections from the network by first thresholding on the heatmaps to find a set of candidate pixel locations, and then thresholding on individual slot scores to see which slots have produced detections.

4 Implementation details

We train a stacked hourglass architecture [21] in TensorFlow [1]. The input to the network is a 512x512 image, with an output resolution of 64x64. To prepare an input image, we resize it so that its largest dimension is of length 512, and center it by padding with zeros along the other dimension. During training, we augment this procedure with random translation and scaling, making sure to update the ground truth annotations to ignore objects and relationships that may be cropped out. We make a slight modification to the original hourglass design: doubling the number of features to 512 at the two lowest resolutions of the hourglass. The output feature length f is 256. All losses (classification, bounding box regression, associative embedding) are weighted equally throughout the course of training. We set s_o = 3 and s_r = 6, which is sufficient to completely accommodate the detection annotations for all but a small fraction of cases.

Figure 3: Predictions on Visual Genome. In the top row, the network must produce all object and relationship detections directly from the image. The second row includes examples from an easier version of the task where object detections are provided. Relationships outlined in green correspond to predictions that correctly matched to a ground truth annotation.

Incorporating prior detections: In some problem settings, a prior set of object detections may be made available, either as ground truth annotations or as proposals from an independent system. It is useful to have some way of incorporating these into the network. We do this by formatting an object detection as a two-channel input where one channel consists of a one-hot activation at the center of the object bounding box and the other provides a binary mask of the box. Multiple boxes can be displayed on these two channels, with the first indicating the center of each box and the second, the union of their masks. If provided with a large set of detections, this representation becomes too crowded, so we either separate bounding boxes by object class or, if no class information is available, by bounding box anchors. To reduce computational cost, this additional input is incorporated after several layers of convolution and pooling have been applied to the input image. For example, we set up this representation at the output resolution, 64x64, then apply several consecutive 1x1 convolutions to remap the detections to a feature tensor with f channels. Then, we add this result to the first feature tensor produced by the hourglass network at the same resolution and number of channels.

Sparse supervision: It is important to note that it is almost impossible to exhaustively annotate images for scene graphs.
A large number of possible relationships can be described between pairs of objects in a real-world scene. The network is likely to generate many reasonable predictions that are not covered in the ground truth. We want to reduce the penalty associated with these detections and encourage the network to produce as many detections as possible. There are a few properties of our training pipeline that are conducive to this. For example, we do not need to supervise the entire heatmap for object and relationship detections. Instead, we apply a loss at the pixels we know correspond to positive detections, and then randomly sample some fraction from the rest of the image to serve as negatives. This balances the proportion of positive and negative samples, and reduces the chance of falsely penalizing unannotated detections.

5 Experiments

Dataset: We evaluate the performance of our method on the Visual Genome dataset [14]. Visual Genome consists of 108,077 images annotated with object detections and object-object relationships, and it serves as a challenging benchmark for scene graph generation on real-world images. Some processing has to be done before using the dataset, as objects and relationships are annotated with natural language rather than with discrete classes, and many redundant bounding box detections are provided for individual objects. To make a direct comparison to prior work, we use the preprocessed version of the set made available by Xu et al. [26]. Their network is trained to predict the 150 most frequent object classes and 50 most frequent relationship predicates in the dataset. We use the same categories, as well as the same training and test split as defined by the authors.

Table 1: Results on Visual Genome (R@50 / R@100)

                 SGGen (no RPN)  SGGen (w/ RPN)  SGCls        PredCls
Lu et al. [18]   – / –           0.3 / 0.5       11.8 / 14.1  27.9 / 35.0
Xu et al. [26]   – / –           3.4 / 4.2       21.7 / 24.4  44.8 / 53.0
Our model        6.7 / 7.8       9.7 / 11.3      26.5 / 30.0  68.0 / 75.2

Figure 4: How detections are distributed across the six available slots for relationships.

Table 2: Performance per relationship predicate (top ten on left, bottom ten on right)

Predicate    R@100    Predicate    R@100
wearing      87.3     to           5.5
has          80.4     and          5.4
on           79.3     playing      3.8
wears        77.1     made of      3.2
of           76.1     painted on   2.5
riding       74.1     between      2.3
holding      66.9     against      1.6
in           61.6     flying in    0.0
sitting on   58.4     growing on   0.0
carrying     56.1     from         0.0

Task: The scene graph task is defined as the production of a set of subject-predicate-object tuples. A proposed tuple is composed of two objects, defined by their class and bounding box, and the relationship between them. A tuple is correct if the object and relationship classes match those of a ground truth annotation and the two objects have at least a 0.5 IoU overlap with the corresponding ground truth objects. To avoid penalizing extra detections that may be correct but are missing an annotation, the standard evaluation metric used for scene graphs is Recall@k, which measures the fraction of ground truth tuples that appear in a set of k proposals. Following [26], we report performance on three problem settings:

SGGen: Detect and classify all objects and determine the relationships between them.
SGCls: Ground truth object boxes are provided; classify them and determine their relationships.
PredCls: Boxes and classes are provided for all objects; predict their relationships.
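The Recall@k evaluation and IoU criterion just described can be sketched as follows (a simplified sketch; the tuple layout and helper names are our own):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall_at_k(proposals, ground_truth, k):
    """Fraction of ground-truth tuples matched by the top-k proposals.
    A tuple is (subj_class, subj_box, predicate, obj_class, obj_box);
    classes must match exactly and both boxes need IoU >= 0.5."""
    matched = 0
    for gt in ground_truth:
        for p in proposals[:k]:
            if (p[0] == gt[0] and p[2] == gt[2] and p[3] == gt[3]
                    and iou(p[1], gt[1]) >= 0.5 and iou(p[4], gt[4]) >= 0.5):
                matched += 1
                break
    return matched / len(ground_truth)
```

Because only recall over the top k proposals is measured, extra unannotated (but possibly correct) detections outside the top k are never penalized.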
SGGen corresponds to the full scene graph task, while PredCls allows us to focus exclusively on predicate classification. Example predictions on the SGGen and PredCls tasks are shown in Figure 3. It can be seen in Table 1 that on all three settings, we achieve a significant improvement in performance over prior work. It is worth noting that prior approaches to this problem require a set of object proposal boxes in order to produce their predictions. For the full scene graph task (SGGen), these detections are provided by a Region Proposal Network (RPN) [24]. We evaluate performance with and without the use of RPN boxes, and achieve promising results even without proposal boxes, using nothing but the raw image as input. Furthermore, the network is trained from scratch and does not rely on pretraining on other datasets.

Discussion: There are a few interesting results that emerge from our trained model. The network exhibits a number of biases in its predictions. For one, the vast majority of predicate predictions correspond to a small fraction of the 50 predicate classes. Relationships like "on" and "wearing" tend to completely dominate the network output, and this is in large part a function of the distribution of ground truth annotations in Visual Genome. There are several orders of magnitude more examples for "on" than for most other predicate classes. This discrepancy becomes especially apparent when looking at the performance per predicate class in Table 2. The poor results on the worst classes do not have much effect on final performance since there are so few instances of relationships labeled with those predicates. We do some additional analysis to see how the network fills its "slots" for relationship detection. Recall that at a particular pixel the network produces a set of detections, and this is expressed by filling out a fixed set of available slots. There is no explicit mapping telling the network into which slots it should put particular detections.
From Figure 4, we see that the network learns to divide slots up such that they correspond to subsets of predicates. For example, any detection for the predicates behind, has, in, of, and on will exclusively fall into three of the six available slots. This pattern emerges for most classes, with the exception of wearing/wears, where detections are distributed uniformly across all six slots.

6 Conclusion

The qualities of a graph that allow it to capture so much information about the semantic content of an image come at the cost of additional complexity for any system that wishes to predict it. We show how to supervise a network such that all of the reasoning about a graph can be abstracted away into a single network. The use of associative embeddings and unordered output slots offers the network the flexibility necessary to make training on this task possible. Our results on Visual Genome clearly demonstrate the effectiveness of our approach.

7 Acknowledgements

This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-2015-CRG42639.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, and Gal Chechik. Learning to generalize to new compositions in image understanding. arXiv preprint arXiv:1608.07639, 2016.
[3] Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. arXiv preprint arXiv:1702.05448, 2017.
[4] Bo Dai, Yuqi Zhang, and Dahua Lin. Detecting visual relationships with deep relational networks. arXiv preprint arXiv:1704.03114, 2017.
[5] Andrea Frome, Greg S. Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013.
[6] Andrea Frome, Yoram Singer, Fei Sha, and Jitendra Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
[7] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
[8] Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In European Conference on Computer Vision, pages 529–545. Springer, 2014.
[9] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 1735–1742. IEEE, 2006.
[10] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. arXiv preprint arXiv:1703.06870, 2017.
[11] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships in referential expressions with compositional modular networks. arXiv preprint arXiv:1611.09978, 2016.
[12] Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3668–3678, 2015.
[13] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[14] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael Bernstein, and Li Fei-Fei. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. 2016.
[15] Yikang Li, Wanli Ouyang, and Xiaogang Wang. ViP-CNN: A visual phrase reasoning convolutional neural network for visual relationship detection. arXiv preprint arXiv:1702.07191, 2017.
[16] Xiaodan Liang, Lisa Lee, and Eric P. Xing. Deep variation-structured reinforcement learning for visual relationship and attribute detection. arXiv preprint arXiv:1703.03054, 2017.
[17] Wentong Liao, Michael Ying Yang, Hanno Ackermann, and Bodo Rosenhahn. On support relations and semantic scene graphs. arXiv preprint arXiv:1609.05834, 2016.
[18] Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language priors. In European Conference on Computer Vision, pages 852–869. Springer, 2016.
[19] Cewu Lu, Hao Su, Yongyi Lu, Li Yi, Chikeung Tang, and Leonidas Guibas. Beyond holistic object recognition: Enriching image understanding with part states. arXiv preprint arXiv:1612.07310, 2016.
[20] Alejandro Newell and Jia Deng. Associative embedding: End-to-end learning for joint detection and grouping. arXiv preprint arXiv:1611.05424, 2016.
[21] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483–499. Springer, 2016.
[22] Bryan A. Plummer, Arun Mallya, Christopher M. Cervantes, Julia Hockenmaier, and Svetlana Lazebnik. Phrase localization and visual relationship detection with comprehensive linguistic cues. arXiv preprint arXiv:1611.06641, 2016.
[23] David Raposo, Adam Santoro, David Barrett, Razvan Pascanu, Timothy Lillicrap, and Peter Battaglia.
Discovering objects and their relations from entangled scene representations. arXiv preprint arXiv:1702.05068, 2017.
[24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
[25] Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473–1480, 2005.
[26] Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[27] Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, and Tat-Seng Chua. Visual translation embedding network for visual relation detection. arXiv preprint arXiv:1702.08319, 2017.
[28] Bohan Zhuang, Lingqiao Liu, Chunhua Shen, and Ian Reid. Towards context-aware interaction recognition. arXiv preprint arXiv:1703.06246, 2017.
Recurrent Ladder Networks

Isabeau Prémont-Schwarz, Alexander Ilin, Tele Hotloo Hao, Antti Rasmus, Rinu Boney, Harri Valpola
The Curious AI Company
{isabeau,alexilin,hotloo,antti,rinu,harri}@cai.fi

Abstract

We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.

1 Introduction

Many cognitive tasks require learning useful representations on multiple abstraction levels. Hierarchical latent variable models are an appealing approach for learning a hierarchy of abstractions. The classical way of learning such models is by postulating an explicit parametric model for the distributions of random variables. The inference procedure, which evaluates the posterior distribution of the unknown variables, is then derived from the model – an approach adopted in probabilistic graphical models (see, e.g., [5]). The success of deep learning can, however, be explained by the fact that popular deep models focus on learning the inference procedure directly. For example, a deep classifier like AlexNet [19] is trained to produce the posterior probability of the label for a given data sample.
The representations that the network computes at different layers are related to the inference in an implicit latent variable model but the designer of the model does not need to know about them. However, it is actually tremendously valuable to understand what kind of inference is required by different types of probabilistic models in order to design an efficient network architecture. Ladder networks [22, 28] are motivated by the inference required in a hierarchical latent variable model. By design, the Ladder networks aim to emulate a message passing algorithm, which includes a bottom-up pass (from input to label in classification tasks) and a top-down pass of information (from label to input). The results of the bottom-up and top-down computations are combined in a carefully selected manner. The original Ladder network implements only one iteration of the inference algorithm but complex models are likely to require iterative inference. In this paper, we propose a recurrent extension of the Ladder network for iterative inference and show that the same architecture can be used for temporal modeling. We also show how to use the proposed architecture as an inference engine in more complex models which can handle multiple independent objects in the sensory input. Thus, the proposed architecture is suitable for the type of inference required by rich models: those that can learn a hierarchy of abstractions, can handle temporal information and can model multiple objects in the input.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: (a): The structure of the Recurrent Ladder networks. The encoder is shown in red, the decoder is shown in blue, the decoder-to-encoder connections are shown in green. The dashed line separates two iterations t −1 and t.
(b)-(c): The type of hierarchical latent variable models for which the RLadder is designed to emulate message passing. (b): A graph of a static model. (c): A fragment of a graph of a temporal model. White circles are unobserved latent variables; gray circles represent observed variables. The arrows represent the directions of message passing during inference.

2 Recurrent Ladder

Recurrent Ladder networks: In this paper, we present a recurrent extension of the Ladder networks which is conducive to iterative inference and temporal modeling. The Recurrent Ladder (RLadder) is a recurrent neural network whose units resemble the structure of the original Ladder networks [22, 28] (see Fig. 1a). At every iteration t, the information first flows from the bottom (the input level) to the top through a stack of encoder cells. Then, the information flows back from the top to the bottom through a stack of decoder cells. Both the encoder and decoder cells also use the information that is propagated horizontally. Thus, at every iteration t, an encoder cell in the l-th layer receives three inputs: 1) the output e_{l-1}(t) of the encoder cell from the level below, 2) the output d_l(t-1) of the decoder cell from the same level from the previous iteration, 3) the encoder state s_l(t-1) from the same level from the previous iteration. It updates its state value s_l(t) and passes the same output e_l(t) both vertically and horizontally:

s_l(t) = f_{s,l}(e_{l-1}(t), d_l(t-1), s_l(t-1)),   (1)
e_l(t) = f_{e,l}(e_{l-1}(t), d_l(t-1), s_l(t-1)).   (2)

The encoder cell in the bottom layer typically sends observed data (possibly corrupted by noise) as its output e_1(t). Each decoder cell is stateless; it receives two inputs (the output of the decoder cell from one level above and the output of the encoder cell from the same level) and produces one output

d_l(t) = g_l(e_l(t), d_{l+1}(t)),   (3)

which is passed both vertically and horizontally. The exact computations performed in the cells can be tuned depending on the task at hand.
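To make the data flow of Eqs. (1)-(3) concrete, here is a toy sketch of one RLadder iteration with scalar stand-ins for the cells (the arithmetic inside the toy f and g functions is arbitrary; in practice these are LSTM/GRU encoder cells and Ladder-style decoder cells):

```python
def rladder_step(x, states, decoder_prev, L):
    """One bottom-up / top-down sweep over L layers.
    states, decoder_prev: per-layer values from the previous iteration t-1."""
    enc, new_states = [], []
    below = x  # the bottom encoder cell outputs the (possibly noisy) input
    for l in range(L):
        # Eqs. (1)-(2): each encoder cell sees e_{l-1}(t), d_l(t-1), s_l(t-1)
        s = 0.5 * (below + decoder_prev[l] + states[l])      # toy f_{s,l}
        e = below + 0.1 * decoder_prev[l] + 0.1 * states[l]  # toy f_{e,l}
        new_states.append(s)
        enc.append(e)
        below = e
    dec = [0.0] * L
    above = enc[-1]  # the top decoder cell receives no d_{l+1}
    for l in reversed(range(L)):
        # Eq. (3): each stateless decoder cell sees e_l(t) and d_{l+1}(t)
        dec[l] = 0.5 * (enc[l] + above)  # toy g_l
        above = dec[l]
    return enc, dec, new_states
```

Running this step repeatedly, feeding `dec` and `new_states` back in, reproduces the iterative bottom-up/top-down sweeps of Fig. 1a.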
In practice, we have used LSTM [15] or GRU [8] cells in the encoder and cells inspired by the original Ladder networks in the decoder (see Appendix A). Similarly to Ladder networks, the RLadder is usually trained with multiple tasks at different abstraction levels. Tasks at the highest abstraction level (like classification) are typically formulated at the highest layer. Conversely, the output of the decoder cell in the bottom level is used to formulate a low-level task which corresponds to abstractions close to the input. The low-level task can be denoising (reconstruction of a clean input from the corrupted one); other possibilities include object detection [21], segmentation [3, 23], or, in a temporal setting, prediction. A weighted sum of the costs at different levels is optimized during training.

Connection to hierarchical latent variables and message passing: The RLadder architecture is designed to mimic the computational structure of an inference procedure in probabilistic hierarchical latent variable models. In an explicit probabilistic graphical model, inference can be done by an algorithm which propagates information (messages) between the nodes of the graphical model so as to compute the posterior distribution of the latent variables (see, e.g., [5]). For static graphical models implicitly assumed by the RLadder (see Fig. 1b), messages need to be propagated from the input level up the hierarchy to the highest level and from the top to the bottom, as shown in Fig. 1a. In Appendix B, we present a derived iterative inference procedure for a simple static hierarchical model to give an example of a message-passing algorithm. We also show how that inference procedure can be implemented in the RLadder computational graph. In the case of temporal modeling, the type of graphical model assumed by the RLadder is shown in Fig. 1c.
If the task is to do next step prediction of observations x, an online inference procedure should update the knowledge about the latent variables y_t, z_t using observed data x_t and compute the predictive distributions for the input x_{t+1}. Assuming that the distributions of the latent variables at previous time instances (τ < t) are kept fixed, the inference can be done by propagating messages from the observed variables x_t and the latent variables y, z bottom-up, top-down, and from the past to the future, as shown in Fig. 1c. The architecture of the RLadder (Fig. 1a) is designed so as to emulate such a message-passing procedure, that is, the information can propagate in all the required directions: bottom-up, top-down and from the past to the future. In Appendix C, we present an example of the message-passing algorithm derived for a temporal hierarchical model to show how it is related to the RLadder's computation graph. Even though the motivation of the RLadder architecture is to emulate a message-passing procedure, the nodes of the RLadder do not directly correspond to nodes of any specific graphical model.¹ The RLadder directly learns an inference procedure and the corresponding model is never formulated explicitly. Note also that using stateful encoder cells is not strictly motivated by the message-passing argument, but in practice these skip connections facilitate training of a deep network. As we mentioned previously, the RLadder is usually trained with multiple tasks formulated at different representation levels. The purpose of the tasks is to encourage the RLadder to learn the right inference procedure, and hence formulating the right kind of tasks is crucial for the success of training. For example, the task of denoising encourages the network to learn important aspects of the data distribution [1, 2]. For temporal modeling, the task of next step prediction plays a similar role.
The RLadder is most useful in problems that require accurate inference on multiple abstraction levels, which is supported by the experiments presented in this paper.

Related work: The RLadder architecture is similar to that of other recently proposed models for temporal modeling [10, 11, 9, 27, 20]. In [9], the recurrent connections (from time t-1 to time t) are placed in the lateral links between the encoder and the decoder. This can make it easier to extend an existing feed-forward network architecture to the case of temporal data, as the recurrent units do not participate in the bottom-up computations. On the other hand, the recurrent units do not receive information from the top, which makes it impossible for higher layers to influence the dynamics of lower layers. The architectures in [10, 11, 27] are quite similar to ours, but they could potentially derive further benefit from the decoder-to-encoder connections between successive time instances (green links in Fig. 1a). The aforementioned connections are well justified from the message-passing point of view: when updating the posterior distribution of a latent variable, one should combine the latest information from the top and from the bottom, and it is the decoder that contains the latest information from the top. We show empirical evidence for the importance of those connections in Section 3.1.

3 Experiments with temporal data

In this section, we demonstrate that the RLadder can learn an accurate inference algorithm in tasks that require temporal modeling. We consider datasets in which passing information both in time and in the abstraction hierarchy is important for achieving good performance.

3.1 Occluded Moving MNIST

We use a dataset where we know how to do optimal inference in order to be able to compare the results of the RLadder to the optimal ones. To this end, we designed the Occluded Moving MNIST dataset.

¹ To emphasize this, we used different shapes for the nodes of the RLadder network (Fig. 1a) and the nodes of the graphical models that inspired the RLadder architecture (Figs. 1b-c).

Figure 2: The Occluded Moving MNIST dataset (frames t = 1, ..., 5: observed frames; frames with occlusion visualized; optimal temporal reconstruction). Bottom row: Optimal temporal recombination for a sequence of occluded frames from the dataset.

It consists of MNIST digits downscaled to 14x14 pixels flying on a 32x32 white background with white vertical and horizontal occlusion bars (4 pixels in width, and spaced 8 visible pixels apart) which, when the digit flies behind them, occlude the pixels of the digit (see Fig. 2). We also restrict the velocities to be randomly chosen from the set of eight discrete velocities {(1, ±2), (−1, ±2), (2, ±1), (−2, ±1)} pixels/frame, so that apart from the bouncing, the movement is deterministic. The digits are split into training, validation, and test sets according to the original MNIST split. The primary task is then to classify the digit, which is only partially observable at any given moment, at the end of five time steps. In order to do optimal classification, one would need to assimilate information about the digit identity (which is only partially visible at any given time instance) by keeping track of the observed pixels (see the bottom row of Fig. 2) and then feeding the resultant reconstruction to a classifier. In order to encourage optimal inference, we add a next step prediction task to the RLadder at the bottom of the decoder: the RLadder is trained to predict the next occluded frame, that is, the network never sees the un-occluded digit. This mimics a realistic scenario where the ground truth is not known. To assess the importance of the features of the RLadder, we also do an ablation study. In addition, we compare it to three other networks. In the first comparison network, the optimal reconstruction of the digit from the five frames (as shown in Fig.
2) is fed to a static feed-forward network from which the encoder of the RLadder was derived. This is our gold standard, and obtaining similar results to it implies doing close to optimal temporal inference. The second, a temporal baseline, is a deep feed-forward network (the one on which the encoder is based) with a recurrent neural network (RNN) at the top only, so that by design the network can propagate temporal information only at a high level, and not at a low level. The third, a hierarchical RNN, is a stack of convolutional LSTM units with a few convolutional layers in between, which is the RLadder amputated of its decoder. See Fig. 3 and Appendix D.1 for schematics and details of the architectures.

Fully supervised learning results. The results are presented in Table 1. The first thing to notice is that the RLadder reaches (up to uncertainty levels) the classification accuracy obtained by the network which was given the optimal reconstruction of the digit. Furthermore, if the RLadder does not have a decoder or the decoder-to-encoder connections, or if it is trained without the auxiliary prediction task, we see the classification error rise almost to the level of the temporal baseline. This means that even if a network has RNNs at the lowest levels (like the encoder-only hierarchical RNN), or if it does not have a task which encourages it to develop a good world model (like the RLadder without the next-frame prediction task), or if the information cannot travel from the decoder to the encoder, the high-level task cannot truly benefit from lower-level temporal modeling. Next, one notices from Table 1 that the top-level classification cost helps the low-level prediction cost in the RLadder (which in turn helps the top-level cost in a mutually beneficial cycle). This mutually supportive relationship between high-level and low-level inferences is nicely illustrated by the example in Fig. 4.
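The digit dynamics and occlusion pattern of the dataset described in Section 3.1 can be sketched as follows (a sketch under our own assumptions about the bounce rule and the alignment of the occlusion bars; the exact rendering details of the dataset may differ):

```python
# Eight allowed velocities: apart from bouncing, motion is deterministic.
VELOCITIES = [(1, 2), (1, -2), (-1, 2), (-1, -2),
              (2, 1), (2, -1), (-2, 1), (-2, -1)]
FRAME, DIGIT = 32, 14  # frame size and downscaled digit size, in pixels

def step(pos, vel):
    """Move the 14x14 digit inside the 32x32 frame, reflecting the velocity
    component when the digit would leave the frame (assumed bounce rule)."""
    x, y = pos[0] + vel[0], pos[1] + vel[1]
    vx, vy = vel
    if x < 0 or x > FRAME - DIGIT:
        vx = -vx
        x = pos[0] + vx
    if y < 0 or y > FRAME - DIGIT:
        vy = -vy
        y = pos[1] + vy
    return (x, y), (vx, vy)

def occluded(px, py):
    """True if pixel (px, py) lies on an occlusion bar: bars 4 pixels wide
    with 8 visible pixels between them (alignment at 0 is our assumption)."""
    return px % 12 < 4 or py % 12 < 4

def sequence(start, vel, steps=5):
    """Top-left digit positions over a 5-frame sequence."""
    positions, pos = [start], start
    for _ in range(steps - 1):
        pos, vel = step(pos, vel)
        positions.append(pos)
    return positions
```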
Figure 3: Architectures used for modeling occluded Moving MNIST (temporal baseline network, hierarchical RNN, RLadder). The temporal baseline network is a convolutional network with a fully connected RNN on top.

Table 1: Performance on Occluded Moving MNIST

                                              Classification error (%)   Prediction error, ·10−5
Optimal reconstruction and static classifier  0.71 ± 0.03                —
Temporal baseline                             2.02 ± 0.16                —
Hierarchical RNN (encoder only)               1.60 ± 0.05                —
RLadder w/o prediction task                   1.51 ± 0.21                —
RLadder w/o decoder-to-encoder conn.          1.24 ± 0.05                156.7 ± 0.4
RLadder w/o classification task               —                          155.2 ± 2.5
RLadder                                       0.74 ± 0.09                150.1 ± 0.1

Up until time step t = 3 inclusively, the network believes the digit to be a five (Fig. 4a). As such, at t = 3, the network predicts that the top-right part of the five, which has been occluded so far, will stick out from behind the occlusions as the digit moves up and right at the next time step (Fig. 4b). Using the decoder-to-encoder connections, the decoder can relay this expectation to the encoder at t = 4. At t = 4 the encoder can compare this expectation with the actual input, where the top-right part of the five is absent (Fig. 4c). Without the decoder-to-encoder connections this comparison would have been impossible. Using the upward path of the encoder, the network can relay this discrepancy to the higher classification layers. These higher layers, with a large receptive field, can then conclude that since it is not a five, it must be a three (Fig. 4d). Now, thanks to the decoder, the higher classification layers can relay this information to the lower prediction layers so that they can change their prediction of what will be seen at t = 5 appropriately (Fig. 4e). Without a decoder to bring this high-level information back down to the low level, this drastic update of the prediction would be impossible.
With this information the lower prediction layer can now predict that the top-left part of the three (which it has never seen before) will appear at the next time step from behind the occlusion, which is indeed what happens at t = 5 (Fig. 4f). Semi-supervised learning results. In the following experiment, we test the RLadder in the semi-supervised scenario where the training data set contains 1,000 labeled sequences and 59,000 unlabeled ones. To make use of the unlabeled data, we added an extra auxiliary task at the top level, namely the consistency cost with the targets provided by the Mean Teacher (MT) model [26]. Thus, the RLadder was trained with three tasks: 1) next-step prediction at the bottom, 2) classification at the top, 3) consistency with the MT outputs at the top. As shown in Table 2, the RLadder improves dramatically by learning a better model from the unlabeled data, both on its own and in combination with other semi-supervised learning methods. The temporal baseline model also improves its classification accuracy by using the consistency cost, but it is clearly outperformed by the RLadder. 3.2 Polyphonic Music Dataset In this section, we evaluate the RLadder on the MIDI dataset converted to piano rolls [6]. The dataset consists of piano rolls (the notes played at every time step, where a time step is, in this case, an eighth note) of various piano pieces. We train an 18-layer RLadder containing five convolutional LSTMs and one fully-connected LSTM. More details can be found in Appendix D.2. Table 3 shows the

Figure 4: Example prediction of an RLadder on the occluded Moving MNIST dataset (columns t = 1, …, 5; rows: ground-truth unoccluded digits, observed frames, predicted frames, probe of internal representations; red annotations a-f are referenced in the text). First row: the ground truth of the digit, which the network never sees and does not train on. Second row: the actual five frames seen by the network and on which it trains.
Third row: the predicted next frames of a trained RLadder. Fourth row: a stopped-gradient (gradient does not flow into the RLadder) readout of the bottom layer of the decoder, trained on the ground truth to probe what aspects of the digit are represented by the neurons which predict the next frame. Notice how at t = 1, the network does not yet know in which direction the digit will move, and so it predicts a superposition of possible movements. Notice further (red annotations a-f) that until t = 3 the network thought the digit was a five, but when the top bar of the supposed five did not materialize on the other side of the occlusion as expected at t = 4, the network immediately concluded correctly that it was actually a three.

Table 2: Classification error (%) on semi-supervised Occluded Moving MNIST

                                              1k labeled     1k labeled & 59k unlabeled
                                              (w/o MT)       w/o MT          MT
Optimal reconstruction and static classifier  3.50 ± 0.28    3.50 ± 0.28     1.34 ± 0.04
Temporal baseline                             10.86 ± 0.43   10.86 ± 0.43    3.14 ± 0.16
RLadder                                       10.49 ± 0.81   5.20 ± 0.77     1.69 ± 0.14

negative log-likelihoods of the next-step prediction obtained on the music dataset, where our results are reported as mean plus or minus standard deviation over 10 seeds. We see that the RLadder is competitive with the best results, and gives the best results amongst models outputting the marginal distribution of notes at each time step. The fact that the RLadder did not beat [16] on the MIDI datasets shows one of the limitations of the RLadder. Most of the models in Table 3 output a joint probability distribution of notes, unlike the RLadder, which outputs the marginal probability for each note. That is, to output the probability of a note, those models take as input not only the notes at previous time instances but also the ground truth of the notes to the left at the same time instance. The RLadder does not do that; it only takes the past notes as input.
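As an illustration of the marginal-probability evaluation just described, the negative log-likelihood of one piano-roll time step under independent per-note Bernoulli marginals can be sketched as follows. This is a toy example with made-up probabilities, not the paper's evaluation code:

```python
import math

def marginal_nll(p, y):
    """NLL of a binary piano-roll step y under independent per-note
    Bernoulli marginals p: -sum_i log p(y_i)."""
    nll = 0.0
    for p_i, y_i in zip(p, y):
        nll -= math.log(p_i if y_i == 1 else 1.0 - p_i)
    return nll

# Two notes predicted as likely, one as unlikely; the played notes match.
probs = [0.9, 0.8, 0.1]
roll = [1, 1, 0]
print(round(marginal_nll(probs, roll), 4))
```

A joint model would instead condition each note's probability on the ground truth of the other notes at the same time step, which is exactly the extra information the RLadder does not receive.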
Even so, the example in Section 3.1 of the digit five turning into a three after a single expected dot failed to appear shows that internally the RLadder does model the joint distribution.

Table 3: Negative log-likelihood (smaller is better) on the polyphonic music dataset

                           Piano-midi.de   Nottingham    Muse          JSB Chorales
Models outputting a joint distribution of notes:
NADE masked [4]            7.42            3.32          6.48          8.51
NADE [4]                   7.05            2.89          5.54          7.59
RNN-RBM [6]                7.09            2.39          6.01          6.27
RNN-NADE (HF) [6]          7.05            2.31          5.60          5.56
LSTM-NADE [16]             7.39            2.06          5.03          6.10
TP-LSTM-NADE [16]          5.49            1.64          4.34          5.92
BALSTM [16]                5.00            1.62          3.90          5.86
Models outputting marginal probabilities for each note:
RNN [4]                    7.88            3.87          7.43          8.76
LSTM [17]                  6.866           3.492         —             —
MUT1 [17]                  6.792           3.254         —             —
RLadder                    6.19 ± 0.02     2.42 ± 0.03   5.69 ± 0.02   5.64 ± 0.02

4 Experiments with perceptual grouping

In this section, we show that the RLadder can be used as an inference engine in a complex model which benefits from iterative inference and temporal modeling. We consider the task of perceptual grouping, that is, identifying which parts of the sensory input belong to the same higher-level perceptual components (objects). We enhance the previously developed model for perceptual grouping called Tagger [13] by replacing the originally used Ladder engine with the RLadder. For another perspective on the problem see [14], which also extends Tagger to a recurrent neural network, but does so from an expectation-maximization point of view. 4.1 Recurrent Tagger Tagger is a model designed for perceptual grouping. When applied to images, the modeling assumption is that each pixel x̃_i belongs to one of the K objects, which is described by binary variables z_{i,k}: z_{i,k} = 1 if pixel i belongs to object k and z_{i,k} = 0 otherwise.
The reconstruction of the whole image using object k only is μ_k, a vector with as many elements μ_{i,k} as there are pixels. Thus, the assumed probabilistic model can be written as

p(x̃, μ, z, h) = ∏_{i,k} N(x̃_i | μ_{i,k}, σ_k²)^{z_{i,k}} ∏_{k=1}^{K} p(z_k, μ_k | h_k) p(h_k),   (4)

where z_k is a vector of elements z_{i,k} and h_k is (a hierarchy of) latent variables which define the shape and the texture of the objects. See Fig. 5a for a graphical representation of the model and Fig. 5b for possible values of the model variables for the textured MNIST dataset used in the experiments of Section 4.2. The model in (4) is defined for the noisy image x̃ because Tagger is trained with an auxiliary low-level task of denoising. The inference procedure in model (4) should evaluate the posterior distributions of the latent variables z_k, μ_k, h_k for each of the K groups given the corrupted data x̃. Making the approximation that the variables of each of the K groups are independent a posteriori,

p(z, μ, h | x̃) ≈ ∏_k q(z_k, μ_k, h_k),   (5)

the inference procedure could be implemented by iteratively updating each of the K approximate distributions q(z_k, μ_k, h_k), if the model (4) and the approximation (5) were defined explicitly. Tagger does not explicitly define a probabilistic model (4) but learns the inference procedure directly. The iterative inference procedure is implemented by a computational graph with K copies of the same Ladder network, each doing inference for one of the groups (see Fig. 5c). At the end of every iteration, the inference procedure produces the posterior probabilities π_{i,k} that pixel i belongs to object k and the point estimates of the reconstructions μ_k (see Fig. 5c). Those outputs are used to form the low-level cost and the inputs for the next iteration (see more details in [13]). In this paper, we replace the original Ladder engine of Tagger with the RLadder. We refer to the new model as RTagger.
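To make the role of the posterior probabilities π_{i,k} concrete, here is a toy sketch of Gaussian responsibilities for a single scalar pixel under the per-pixel likelihood in (4), assuming a uniform prior over the K groups. This is a simplification for illustration only; Tagger/RTagger learn this inference rather than computing it in closed form:

```python
import math

def responsibilities(x_i, mus, sigmas):
    """Posterior pi_{i,k} that pixel value x_i belongs to group k under
    N(x_i | mu_{i,k}, sigma_k^2), with a uniform prior over the K groups."""
    dens = [math.exp(-(x_i - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))
            for m, s in zip(mus, sigmas)]
    z = sum(dens)
    return [d / z for d in dens]

# A pixel close to group 0's reconstruction is assigned to group 0.
pi = responsibilities(0.9, mus=[1.0, 0.0], sigmas=[0.2, 0.2])
print([round(p, 3) for p in pi])  # → [1.0, 0.0]
```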
4.2 Experiments on grouping using texture information

The goal of the following experiment is to test the efficiency of the RTagger in grouping objects using texture information.

Figure 5: (a): Graphical model for perceptual grouping. White circles are unobserved latent variables, gray circles represent observed variables. (b): Examples of possible values of the model variables for the textured MNIST dataset. (c): Computational graph that implements iterative inference in the perceptual grouping task (RTagger). Two graph iterations are drawn. The plate notation represents K copies of the same graph.

Figure 6: (a): Example image from the Brodatz-textured MNIST dataset. (b): The image reconstruction m_0 by the group that learned the background. (c): The image reconstruction m_1 by the group that learned the digit. (d): The original image colored using the found grouping π_k.

To this end, we created a dataset that contains thickened MNIST digits with 20 textures from the Brodatz dataset [7]. An example of a generated image is shown in Fig. 6a. To create a greater diversity of textures (to avoid over-fitting), we randomly rotated and scaled the 20 Brodatz textures when producing the training data. The network trained on the textured MNIST dataset has the architecture presented in Fig. 5c with three iterations. The number of groups was set to K = 3. The details of the RLadder architecture are presented in Appendix D.3. The network was trained on two tasks: the low-level segmentation task was formulated around denoising, the same way as in the Tagger model [13], and the top-level cost was the log-likelihood of the digit class at the last iteration. Table 4 presents the obtained performance on the textured MNIST dataset in both fully supervised and semi-supervised settings. All experiments were run over 5 seeds. We report our results as mean plus or minus standard deviation.
In some runs, Tagger experiments did not converge to a reasonable solution (because of unstable or too slow convergence), so we did not include those runs in our evaluations. Following [13], the segmentation accuracy was computed using the adjusted mutual information (AMI) score [29], which is the mutual information between the ground-truth segmentation and the estimated segmentation π_k, scaled to give one when the segmentations are identical and zero when the output segmentation is random. For comparison, we trained the Tagger model [13] on the same dataset. The other comparison method was a feed-forward convolutional network which had an architecture resembling the bottom-up pass (encoder) of the RLadder and which was trained on the classification task only. One thing to notice is that the results obtained with the RTagger clearly improve over iterations, which supports the idea that iterative inference is useful in complex cognitive tasks. We also observe that the RTagger outperforms Tagger, and both approaches significantly outperform the convolutional network baseline, in which the classification task is not supported by the input-level task. We have also observed that the top-level classification task makes the RTagger faster to train in terms of the number of updates, which also supports that the high-level and low-level tasks mutually benefit from each other: Detecting object

Table 4: Results on the Brodatz-textured MNIST. The i-th column corresponds to the intermediate results of the RTagger after the i-th iteration. In the fully supervised case, Tagger was only trained successfully in 2 of the 5 seeds; the given results are for those 2 seeds. In the semi-supervised case, we were not able to train Tagger successfully.
50k labeled
Segmentation accuracy, AMI:   RTagger   0.55    0.75    0.80 ± 0.01
                              Tagger    —       —       0.73 ± 0.02
Classification error, %:      RTagger   18.2    8.0     5.9 ± 0.2
                              Tagger    —       —       12.15 ± 0.1
                              ConvNet   —       —       14.3 ± 0.46
1k labeled + 49k unlabeled
Segmentation accuracy, AMI:   RTagger   0.56    0.74    0.80 ± 0.03
Classification error, %:      RTagger   63.8    28.2    22.6 ± 6.2
                              ConvNet   —       —       88 ± 0.30

Figure 7: Example of segmentation and generation by the RTagger trained on Moving MNIST. First row: frames 0-9 are the input sequence, frames 10-15 are the ground-truth future. Second row: next-step prediction of frames 1-9 and future frame generation (frames 10-15) by the RTagger; the colors represent the grouping performed by the RTagger.

boundaries using textures helps classify a digit, while knowing the class of the digit helps detect the object boundaries. Figs. 6b-d show the reconstructed textures and the segmentation results for the image from Fig. 6a. 4.3 Experiments on grouping using movement information The same RTagger model can perform perceptual grouping in video sequences using motion cues. To demonstrate this, we applied the RTagger to the Moving MNIST [25]^2 sequences of length 20; the low-level task was prediction of the next frame. When applied to temporal data, the RTagger assumes the existence of K objects whose dynamics are independent of each other. Using this assumption, the RTagger can separate the two moving digits into different groups. We assessed the segmentation quality by the AMI score, which was computed similarly to [13, 12], ignoring the background (in the case of a uniform zero-valued background) and the overlap regions where different objects have the same color. The achieved average AMI score was 0.75. An example of segmentation is shown in Fig. 7. When we tried to use Tagger on the same dataset, we were only able to train successfully in a single seed out of three.
This is possibly because speed is an intermediate level of abstraction that is not represented at the pixel level. Due to its recurrent connections, the RTagger can keep those representations from one time step to the next and segment accordingly, something more difficult for Tagger to do, which might explain the training instability. 5 Conclusions In this paper, we presented recurrent Ladder networks. The proposed architecture is motivated by the computations required in a hierarchical latent variable model. We empirically validated that the recurrent Ladder is able to learn accurate inference in challenging tasks which require modeling dependencies on multiple abstraction levels, iterative inference, and temporal modeling. The proposed model outperformed strong baseline methods on two challenging classification tasks. It also produced competitive results on a temporal music dataset. We envision that the proposed Recurrent Ladder will be a powerful building block for solving difficult cognitive tasks.

^2 For this experiment, in order to have the ground-truth segmentation, we reimplemented the dataset ourselves.

Acknowledgments We would like to thank Klaus Greff and our colleagues from The Curious AI Company for their contribution to the presented work, especially Vikram Kamath and Matti Herranen. References [1] Alain, G., Bengio, Y., and Rifai, S. (2012). Regularized auto-encoders estimate local statistics. CoRR, abs/1211.4246. [2] Arponen, H., Herranen, M., and Valpola, H. (2017). On the exact relationship between the denoising function and the data distribution. arXiv preprint arXiv:1709.02797. [3] Badrinarayanan, V., Kendall, A., and Cipolla, R. (2015). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561. [4] Berglund, M., Raiko, T., Honkala, M., Kärkkäinen, L., Vetek, A., and Karhunen, J. T. (2015). Bidirectional recurrent neural networks as generative models. In Advances in Neural Information Processing Systems.
[5] Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA. [6] Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1159–1166. [7] Brodatz, P. (1966). Textures: a photographic album for artists and designers. Dover Pubns. [8] Cho, K., Van Merriënboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. [9] Cricri, F., Honkala, M., Ni, X., Aksu, E., and Gabbouj, M. (2016). Video Ladder networks. arXiv preprint arXiv:1612.01756. [10] Eyjolfsdottir, E., Branson, K., Yue, Y., and Perona, P. (2016). Learning recurrent representations for hierarchical behavior modeling. arXiv preprint arXiv:1611.00094. [11] Finn, C., Goodfellow, I. J., and Levine, S. (2016). Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems 29. [12] Greff, K., Srivastava, R. K., and Schmidhuber, J. (2015). Binding via reconstruction clustering. CoRR, abs/1511.06418. [13] Greff, K., Rasmus, A., Berglund, M., Hao, T., Valpola, H., and Schmidhuber, J. (2016). Tagger: Deep unsupervised perceptual grouping. In Advances in Neural Information Processing Systems 29. [14] Greff, K., van Steenkiste, S., and Schmidhuber, J. (2017). Neural expectation maximization. In ICLR Workshop. [15] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735– 1780. [16] Johnson, D. D. (2017). Generating polyphonic music using tied parallel networks. In International Conference on Evolutionary and Biologically Inspired Music and Art. [17] Jozefowicz, R., Zaremba, W., and Sutskever, I. (2015). 
An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). [18] Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), San Diego. [19] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. [20] Laukien, E., Crowder, R., and Byrne, F. (2016). Feynman machine: The universal dynamical systems computer. arXiv preprint arXiv:1609.03971. [21] Newell, A., Yang, K., and Deng, J. (2016). Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision. Springer. [22] Rasmus, A., Berglund, M., Honkala, M., Valpola, H., and Raiko, T. (2015). Semi-supervised learning with Ladder networks. In Advances in Neural Information Processing Systems. [23] Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. [24] Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806. [25] Srivastava, N., Mansimov, E., and Salakhutdinov, R. (2015). Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning, pages 843–852. [26] Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems. [27] Tietz, M., Alpay, T., Twiefel, J., and Wermter, S. (2017). Semi-supervised phoneme recognition with recurrent ladder networks. In International Conference on Artificial Neural Networks 2017. [28] Valpola, H. (2015).
From neural PCA to deep unsupervised learning. Advances in Independent Component Analysis and Learning Machines. [29] Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct), 2837–2854.
Accelerated Stochastic Greedy Coordinate Descent by Soft Thresholding Projection onto Simplex Chaobing Song, Shaobo Cui, Yong Jiang, Shu-Tao Xia Tsinghua University {songcb16,cuishaobo16}@mails.tsinghua.edu.cn {jiangy, xiast}@sz.tsinghua.edu.cn ∗ Abstract In this paper we study the well-known greedy coordinate descent (GCD) algorithm to solve ℓ1-regularized problems and improve GCD with two popular strategies: Nesterov's acceleration and stochastic optimization. Firstly, based on an ℓ1-norm square approximation, we propose a new rule for greedy selection whose induced subproblem is nontrivial to solve but convex; then an efficient algorithm called "SOft ThreshOlding PrOjection (SOTOPO)" is proposed to exactly solve the ℓ1-regularized ℓ1-norm square approximation problem induced by the new rule. Based on the new rule and the SOTOPO algorithm, the Nesterov's acceleration and stochastic optimization strategies are then successfully applied to the GCD algorithm. The resulting algorithm, accelerated stochastic greedy coordinate descent (ASGCD), has the optimal convergence rate O(√(1/ϵ)); meanwhile, it reduces the iteration complexity of greedy selection by up to a factor of the sample size. Both theoretically and empirically, we show that ASGCD has better performance for high-dimensional and dense problems with sparse solutions. 1 Introduction In large-scale convex optimization, first-order methods are widely used due to their cheap iteration cost. In order to improve the convergence rate and reduce the iteration cost further, two important strategies are used in first-order methods: Nesterov's acceleration and stochastic optimization. Nesterov's acceleration refers to techniques that use an algebraic trick to accelerate first-order algorithms, while stochastic optimization refers to methods that sample one training example or one dual coordinate at random from the training data in each iteration. Assume the objective function F(x) is convex and smooth.
Let F* = min_{x∈R^d} F(x) be the optimal value. In order to find an approximate solution x that satisfies F(x) − F* ≤ ϵ, the vanilla gradient descent method needs O(1/ϵ) iterations, while after applying Nesterov's acceleration scheme [16], the resulting accelerated full gradient method (AFG) [16] needs only O(√(1/ϵ)) iterations, which is optimal for first-order algorithms [16]. Meanwhile, assume F(x) is also a finite sum of n sample convex functions. By sampling one training example, the resulting stochastic gradient descent (SGD) and its variants [13, 23, 1] can reduce the iteration complexity by a factor of the sample size. As an alternative to SGD, randomized coordinate descent (RCD) can also reduce the iteration complexity by a factor of the sample size [15] and obtain the optimal convergence rate O(√(1/ϵ)) by Nesterov's acceleration [14, 12]. The development of gradient descent and RCD raises an interesting problem: can the Nesterov's acceleration and stochastic optimization strategies be used to improve other existing first-order algorithms? ∗This work is supported by the National Natural Science Foundation of China under grant Nos. 61771273, 61371078. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In this paper, we answer this question partly by studying coordinate descent with Gauss-Southwell selection, i.e., greedy coordinate descent (GCD). GCD is widely used for solving sparse optimization problems in machine learning [22, 9, 17]. If an optimization problem has a sparse solution, GCD is more suitable than its counterpart RCD. However, its theoretical convergence rate is still O(1/ϵ). Meanwhile, if the iteration complexity is comparable, GCD will be preferable to RCD [17]. However, in the general case, in order to do exact Gauss-Southwell selection, computing the full gradient beforehand is necessary, which causes GCD to have a much higher iteration complexity than RCD.
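The O(1/ϵ)-versus-O(√(1/ϵ)) gap discussed above can be illustrated numerically on a toy ill-conditioned quadratic. The following is a generic sketch of plain gradient descent versus Nesterov's accelerated scheme (with the common k/(k+3) momentum schedule), not code from the paper:

```python
def f(x):  # f(x) = 0.5*(x1^2 + 100*x2^2), smooth convex, L = 100
    return 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)

def grad(x):
    return [x[0], 100.0 * x[1]]

L = 100.0  # smoothness constant of f

def gd(x0, n):  # plain gradient descent with step 1/L
    x = list(x0)
    for _ in range(n):
        g = grad(x)
        x = [xi - gi / L for xi, gi in zip(x, g)]
    return x

def agd(x0, n):  # Nesterov's accelerated gradient for smooth convex f
    x, y = list(x0), list(x0)
    for k in range(n):
        g = grad(y)
        x_new = [yi - gi / L for yi, gi in zip(y, g)]
        beta = k / (k + 3.0)  # momentum coefficient
        y = [xn + beta * (xn - xo) for xn, xo in zip(x_new, x)]
        x = x_new
    return x

x0 = [1.0, 1.0]
print(f(gd(x0, 100)), f(agd(x0, 100)))  # acceleration reaches a much lower value
```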
To be concrete, in this paper we consider the well-known nonsmooth ℓ1-regularized problem:

min_{x∈R^d} { F(x) := f(x) + λ‖x‖₁ := (1/n) ∑_{j=1}^{n} f_j(x) + λ‖x‖₁ },   (1)

where λ ≥ 0 is a regularization parameter and f(x) = (1/n) ∑_{j=1}^{n} f_j(x) is a smooth convex function that is a finite average of n smooth convex functions f_j(x). Given samples {(a_1, b_1), (a_2, b_2), ..., (a_n, b_n)} with a_j ∈ R^d (j ∈ [n] := {1, 2, ..., n}), if each f_j(x) = f_j(a_j^T x, b_j), then (1) is an ℓ1-regularized empirical risk minimization (ℓ1-ERM) problem. For example, if b_j ∈ R and f_j(x) = (1/2)(b_j − a_j^T x)², (1) is Lasso; if b_j ∈ {−1, 1} and f_j(x) = log(1 + exp(−b_j a_j^T x)), ℓ1-regularized logistic regression is obtained. In the above nonsmooth case, the Gauss-Southwell rule has 3 different variants [17, 22]: GS-s, GS-r and GS-q. The GCD algorithm with all 3 rules can be viewed as the following procedure: in each iteration, based on a quadratic approximation of f(x) in (1), one minimizes a surrogate objective function under the constraint that the direction vector used for the update has at most 1 nonzero entry. The resulting problems under the 3 rules are easy to solve but are nonconvex due to the cardinality constraint on the direction vector. However, when using Nesterov's acceleration scheme, convexity is needed for the derivation of the optimal convergence rate O(√(1/ϵ)) [16]. Therefore, it is impossible to accelerate GCD by the Nesterov's acceleration scheme under the 3 existing rules. In this paper, we propose a novel variant of the Gauss-Southwell rule by using an ℓ1-norm square approximation of f(x) rather than a quadratic approximation. The new rule involves an ℓ1-regularized ℓ1-norm square approximation problem, which is nontrivial to solve but is convex. To exactly solve this challenging problem, we propose an efficient SOft ThreshOlding PrOjection (SOTOPO) algorithm. The SOTOPO algorithm has O(d + |Q| log |Q|) cost, where it is often the case that |Q| ≪ d.
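For concreteness, the Lasso instance of problem (1) can be evaluated as below; the toy data and function name are illustrative only:

```python
def lasso_objective(A, b, x, lam):
    """F(x) = (1/n) * sum_j 0.5*(b_j - a_j^T x)^2 + lam*||x||_1,
    the Lasso instance of problem (1)."""
    n = len(b)
    total = 0.0
    for a_j, b_j in zip(A, b):
        r = b_j - sum(ai * xi for ai, xi in zip(a_j, x))
        total += 0.5 * r * r
    return total / n + lam * sum(abs(xi) for xi in x)

A = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 2.0]
# Zero residuals, so only the l1 penalty remains: 0.1 * (|1| + |1|).
print(lasso_objective(A, b, [1.0, 1.0], lam=0.1))  # → 0.2
```

A sparse minimizer of such an objective is exactly the setting in which greedy coordinate selection pays off.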
The complexity result O(d + |Q| log |Q|) is better than the O(d log d) of its counterpart SOPOPO [18], which is a Euclidean projection method. Then, based on the new rule and SOTOPO, we accelerate GCD to attain the optimal convergence rate O(√(1/ϵ)) by combining a delicately selected mirror descent step. Meanwhile, we show that it is not necessary to compute the full gradient beforehand: sampling one training example and computing a noisy gradient rather than the full gradient is enough to perform greedy selection. This stochastic optimization technique reduces the iteration complexity of greedy selection by a factor of the sample size. The final result is an accelerated stochastic greedy coordinate descent (ASGCD) algorithm. Assume x* is an optimal solution of (1). Assume that each f_j(x) (for all j ∈ [n]) is L_p-smooth w.r.t. ‖·‖_p (p = 1, 2), i.e., for all x, y ∈ R^d,

‖∇f_j(x) − ∇f_j(y)‖_q ≤ L_p ‖x − y‖_p,   (2)

where if p = 1, then q = ∞; if p = 2, then q = 2. In order to find an x that satisfies F(x) − F(x*) ≤ ϵ, ASGCD needs O(√(C L₁) ‖x*‖₁ / √ϵ) iterations (see (16)), where C is a function of d that varies slowly with d and is upper bounded by log₂(d). For high-dimensional and dense problems with sparse solutions, ASGCD has better performance than the state of the art. Experiments demonstrate the theoretical result. Notations: Let [d] denote the set {1, 2, ..., d}. Let R₊ denote the set of nonnegative real numbers. For x ∈ R^d, let ‖x‖_p = (∑_{i=1}^{d} |x_i|^p)^{1/p} (1 ≤ p < ∞) denote the ℓp-norm and ‖x‖_∞ = max_{i∈[d]} |x_i| the ℓ∞-norm of x. For a vector x, let dim(x) denote the dimension of x and x_i the i-th element of x. For a gradient vector ∇f(x), let ∇_i f(x) denote its i-th element. For a set S, let |S| denote the cardinality of S. Denote the simplex △_d = {θ ∈ R^d₊ : ∑_{i=1}^{d} θ_i = 1}.
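For contrast with SOTOPO, the Euclidean projection onto △_d — the kind of O(d log d) sort-and-threshold operation that SOPOPO-style methods build on — can be sketched with the standard textbook routine below; this is not the paper's algorithm:

```python
def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1},
    via the classical O(d log d) sort-and-threshold method."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:   # coordinate j is still in the support
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

w = project_simplex([0.9, 0.6, -0.1])
print([round(x, 3) for x in w])  # → [0.65, 0.35, 0.0]
```

SOTOPO replaces this generic sorting step with the filtering of Lemma 5 described below, which is how the |Q| ≪ d saving arises.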
2 The SOTOPO algorithm

The proposed SOTOPO algorithm aims to solve the proposed new rule, i.e., to minimize the following ℓ1-regularized ℓ1-norm square approximation problem:

h̃ := argmin_{g∈R^d} { ⟨∇f(x), g⟩ + (1/(2η)) ‖g‖₁² + λ‖x + g‖₁ },   (3)
x̃ := x + h̃,   (4)

where x denotes the current iterate, η a step size, g the variable to optimize, h̃ the direction vector for the update, and x̃ the next iterate. The number of nonzero entries of h̃ denotes how many coordinates will be updated in this iteration. Unlike the quadratic approximation used in the GS-s, GS-r and GS-q rules, in the new rule the coordinate(s) to update are implicitly selected by the sparsity-inducing property of the ℓ1-norm square ‖g‖₁² rather than by the cardinality constraint ‖g‖₀ ≤ 1 (i.e., g has at most 1 nonzero element) [17, 22]. By [6, §9.4.2], when the nonsmooth term λ‖x + g‖₁ in (1) does not exist, the minimizer of the ℓ1-norm square approximation (i.e., ℓ1-norm steepest descent) is equivalent to GCD. When λ‖x + g‖₁ exists, there may in general be one or more coordinates to update under this new rule. Because of the sparsity-inducing property of ‖g‖₁² and ‖x + g‖₁, both the direction vector h̃ and the iterative solution x̃ are sparse. In addition, (3) is an unconstrained problem and thus is feasible.

2.1 A variational reformulation and its properties

(3) involves the nonseparable, nonsmooth term ‖g‖₁² and the nonsmooth term ‖x + g‖₁. Because there are two nonsmooth terms, it seems difficult to solve (3) directly. However, by the variational identity ‖g‖₁² = inf_{θ∈△_d} ∑_{i=1}^{d} g_i²/θ_i in [4]², Lemma 1 shows that we can transform the original nonseparable and nonsmooth problem into a separable and smooth optimization problem on a simplex.

Lemma 1. Define

J(g, θ) := ⟨∇f(x), g⟩ + (1/(2η)) ∑_{i=1}^{d} g_i²/θ_i + λ‖x + g‖₁,   (5)
g̃(θ) := argmin_{g∈R^d} J(g, θ),   J(θ) := J(g̃(θ), θ),   (6)
θ̃ := arg inf_{θ∈△_d} J(θ),   (7)

where g̃(θ) is a vector function.
Then the minimization problem (3) of finding h̃ is equivalent to the problem (7) of finding θ̃, with the relation h̃ = g̃(θ̃). Meanwhile, g̃(θ) and J(θ) in (6) are both coordinate-separable, with the expressions, for all i ∈ [d],

g̃_i(θ) = g̃_i(θ_i) := sign(x_i − θ_i η ∇_i f(x)) · max{0, |x_i − θ_i η ∇_i f(x)| − θ_i η λ} − x_i,   (8)

J(θ) = ∑_{i=1}^{d} J_i(θ_i),  where  J_i(θ_i) := ∇_i f(x) · g̃_i(θ_i) + (1/(2η)) g̃_i²(θ_i)/θ_i + λ |x_i + g̃_i(θ_i)|.   (9)

In Lemma 1, (8) is obtained by the iterative soft thresholding operator [5]. By Lemma 1, we can reformulate (3) into the problem (5), which is over the two parameters g and θ. Then, by joint convexity, we swap the optimization order of g and θ. Fixing θ and optimizing with respect to (w.r.t.) g, we get a closed form of g̃(θ), which is a vector function of θ. Substituting g̃(θ) into J(g, θ), we get the problem (7) over θ. Finally, the optimal solution h̃ in (3) can be obtained by h̃ = g̃(θ̃). The explicit expression of each J_i(θ_i) can be given by substituting (8) into (9). Because θ ∈ △_d, we have 0 ≤ θ_i ≤ 1 for all i ∈ [d]. In the following Lemma 2, it is observed that the derivative J′_i(θ_i) can be constant or have a piecewise structure, which is the key to deriving the SOTOPO algorithm.

² The infima can be replaced by minimization if the convention "0/0 = 0" is used.

Lemma 2. Assume that for all i ∈ [d], J′_i(0) and J′_i(1) have been computed. Denote r_{i1} := |x_i| / √(−2η J′_i(0)) and r_{i2} := |x_i| / √(−2η J′_i(1)). Then J′_i(θ_i) belongs to one of the following 4 cases:

(case a): J′_i(θ_i) = 0 for 0 ≤ θ_i ≤ 1;
(case b): J′_i(θ_i) = J′_i(0) < 0 for 0 ≤ θ_i ≤ 1;
(case c): J′_i(θ_i) = J′_i(0) for 0 ≤ θ_i ≤ r_{i1}, and J′_i(θ_i) = −x_i²/(2η θ_i²) for r_{i1} < θ_i ≤ 1;
(case d): J′_i(θ_i) = J′_i(0) for 0 ≤ θ_i ≤ r_{i1}, J′_i(θ_i) = −x_i²/(2η θ_i²) for r_{i1} < θ_i < r_{i2}, and J′_i(θ_i) = J′_i(1) for r_{i2} ≤ θ_i ≤ 1.

Although the formulation of J′_i(θ_i) is complicated, by summarizing the properties of the 4 cases in Lemma 2, we have Corollary 1.

Corollary 1.
For all i ∈ [d] and 0 ≤ θᵢ ≤ 1, if the derivative J′ᵢ(θᵢ) is not identically 0, then J′ᵢ(θᵢ) is a non-decreasing, continuous function whose value is always less than 0.

Corollary 1 shows that, apart from the trivial (case a), for all i ∈ [d] and whichever of (case b), (case c), or (case d) J′ᵢ(θᵢ) falls into, they all share the same group of properties, which makes a consistent iterative procedure possible for all the cases. The different formulations in the four cases mainly affect the stopping criterion of SOTOPO.

2.2 The property of the optimal solution

The Lagrangian of problem (7) is

  L(θ, γ, ζ) := J(θ) + γ(Σ_{i=1}^d θᵢ − 1) − ⟨ζ, θ⟩,   (10)

where γ ∈ ℝ is a Lagrange multiplier and ζ ∈ ℝ^d₊ is a vector of non-negative Lagrange multipliers. By the coordinate separability of J(θ) in (9), we have ∂J(θ)/∂θᵢ = J′ᵢ(θᵢ). The KKT conditions of (10) can then be written as

  ∀i ∈ [d],  J′ᵢ(θᵢ) + γ − ζᵢ = 0,  ζᵢθᵢ = 0,  and  Σ_{i=1}^d θᵢ = 1.   (11)

By reformulating the KKT conditions (11), we obtain Lemma 3.

Lemma 3. If (γ̃, θ̃, ζ̃) is a stationary point of (10), then θ̃ is an optimal solution of (7). Moreover, denoting S := {i : θ̃ᵢ > 0} and T := {j : θ̃ⱼ = 0}, the KKT conditions can be formulated as

  Σ_{i∈S} θ̃ᵢ = 1;  θ̃ⱼ = 0 for all j ∈ T;  γ̃ = −J′ᵢ(θ̃ᵢ) ≥ max_{j∈T} −J′ⱼ(0) for all i ∈ S.   (12)

By Lemma 3, if the set S were known beforehand, we could compute θ̃ simply by applying the equations in (12). Therefore finding the optimal solution θ̃ is equivalent to finding the set of nonzero elements of θ̃.

2.3 The soft thresholding projection algorithm

By Lemma 3, for each i ∈ [d] with θ̃ᵢ > 0, the negative derivative −J′ᵢ(θ̃ᵢ) equals the single scalar γ̃. Therefore, a much simpler problem results if we know the coordinates of these positive elements. At first glance, identifying these coordinates seems difficult, because the number of potential subsets of coordinates is clearly exponential in the dimension d.
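Before turning to the identification procedure, the reformulation of Lemma 1 can be sanity-checked numerically on a toy two-dimensional instance: minimize (3) directly on a dense grid over h, minimize the separable J(θ) of (9) on a grid over the simplex using the closed form (8), and compare the optimal values. All problem data below are made up for illustration and are not from the paper.

```python
import numpy as np

grad = np.array([1.0, -2.0])   # stand-in for the gradient of f at x
x = np.array([0.5, -0.3])
eta, lam = 0.5, 0.1

def soft(z, tau):
    """Soft-thresholding operator sign(z) * max(0, |z| - tau)."""
    return np.sign(z) * np.maximum(0.0, np.abs(z) - tau)

# Route 1: grid minimization of (3): <grad,h> + ||h||_1^2/(2*eta) + lam*||x+h||_1.
hs = np.linspace(-3.0, 3.0, 1201)
H1, H2 = np.meshgrid(hs, hs)
F = (grad[0]*H1 + grad[1]*H2
     + (np.abs(H1) + np.abs(H2))**2 / (2*eta)
     + lam*(np.abs(x[0] + H1) + np.abs(x[1] + H2)))
f_direct = F.min()

# Route 2: minimize J(theta) over the 2-simplex, with g~ given by (8).
t = np.linspace(1e-4, 1.0 - 1e-4, 100001)
th = np.vstack([t, 1.0 - t])                      # theta = (t, 1 - t)
g = soft(x[:, None] - th*eta*grad[:, None], th*eta*lam) - x[:, None]
J = (grad[:, None]*g + g**2/(2*eta*th) + lam*np.abs(x[:, None] + g)).sum(axis=0)
f_var = J.min()

assert abs(f_direct - f_var) < 1e-2   # the two optimal values coincide
```

Both routes return the same optimum (≈ −0.8825 on this instance), as Lemma 1 predicts.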
However, the property clarified by Lemma 2 enables an efficient procedure for identifying the nonzero elements of θ̃. Lemma 4 is the key tool for deriving this procedure.

Lemma 4 (Nonzero element identification). Let θ̃ be an optimal solution of (7). Let s and t be two coordinates such that J′ₛ(0) < J′ₜ(0). If θ̃ₛ = 0, then θ̃ₜ must be 0 as well; equivalently, if θ̃ₜ > 0, then θ̃ₛ must be greater than 0 as well.

Lemma 4 shows that if we sort u := −∇J(0) so that u_{i₁} ≥ u_{i₂} ≥ ··· ≥ u_{i_d}, where {i₁, i₂, …, i_d} is a permutation of [d], then the set S in Lemma 3 has the form {i₁, i₂, …, i_ϱ} for some 1 ≤ ϱ ≤ d. Once ϱ is found, we can use the fact that

  −J′_{i_j}(θ̃_{i_j}) = γ̃ for all j ∈ [ϱ]  and  Σ_{j=1}^ϱ θ̃_{i_j} = 1   (13)

to compute γ̃. Therefore, by Lemma 4, we can efficiently identify the nonzero elements of the optimal solution θ̃ after a sort operation, which costs O(d log d). Based on Lemmas 2 and 3, this O(d log d) sorting cost can be further reduced by the following Lemma 5.

Lemma 5 (Efficient identification). Let θ̃ and S be as in Lemma 3. Then for all i ∈ S,

  −J′ᵢ(0) ≥ max_{j∈[d]} {−J′ⱼ(1)}.   (14)

By Lemma 5, before sorting u we can filter out all coordinates i that satisfy −J′ᵢ(0) < max_{j∈[d]} −J′ⱼ(1). Based on Lemmas 4 and 5, we propose the SOft ThreshOlding PrOjection (SOTOPO) algorithm in Alg. 1 to efficiently obtain an optimal solution θ̃. In step 1, by Lemma 5, we find the quantities v_m, i_m and the set Q. In step 2, by Lemma 4, we sort the elements {−J′ᵢ(0) | i ∈ Q}. In step 3, because S in Lemma 3 has the form {i₁, i₂, …, i_ϱ}, we search for the quantity ρ from 1 to |Q| + 1 until a stopping criterion is met; in Alg. 1, the number of nonzero elements of θ̃ is ρ or ρ − 1. In step 4, we compute the γ̃ of Lemma 3 according to the stated conditions. In step 5, the optimal θ̃ and the corresponding h̃, x̃ are given.

Algorithm 1: x̃ = SOTOPO(∇f(x), x, λ, η)
1.
Find (v_m, i_m) := (max_{i∈[d]} {−J′ᵢ(1)}, argmax_{i∈[d]} {−J′ᵢ(1)}) and Q := {i ∈ [d] | −J′ᵢ(0) > v_m}.
2. Sort {−J′ᵢ(0) | i ∈ Q} such that −J′_{i₁}(0) ≥ −J′_{i₂}(0) ≥ ··· ≥ −J′_{i_{|Q|}}(0), where {i₁, i₂, …, i_{|Q|}} is a permutation of the elements of Q. Denote v := (−J′_{i₁}(0), −J′_{i₂}(0), …, −J′_{i_{|Q|}}(0), v_m), with i_{|Q|+1} := i_m and v_{|Q|+1} := v_m.
3. For j ∈ [|Q| + 1], denote R_j = {i_k | k ∈ [j]}. Search from 1 to |Q| + 1 to find the quantity
  ρ := min { j ∈ [|Q| + 1] | J′_{i_j}(0) = J′_{i_j}(1), or Σ_{l∈R_j} |x_l| ≥ √(2ηv_j), or j = |Q| + 1 }.
4. The γ̃ of Lemma 3 is given by
  γ̃ = (Σ_{l∈R_{ρ−1}} |x_l|)²/(2η)  if Σ_{l∈R_{ρ−1}} |x_l| ≥ √(2ηv_ρ);  γ̃ = v_ρ otherwise.
5. The θ̃ of Lemma 3 and the corresponding h̃, x̃ in (3) and (4) are obtained by
  (θ̃_l, h̃_l, x̃_l) = (|x_l|/√(2ηγ̃), −x_l, 0)  if l ∈ R_ρ \ {i_ρ};
  (θ̃_l, h̃_l, x̃_l) = (1 − Σ_{k∈R_ρ\{i_ρ}} θ̃_k, g̃_l(θ̃_l), x_l + g̃_l(θ̃_l))  if l = i_ρ;
  (θ̃_l, h̃_l, x̃_l) = (0, 0, x_l)  if l ∈ [d] \ R_ρ.

Theorem 1 gives the main result about the SOTOPO algorithm.

Theorem 1. The SOTOPO algorithm in Alg. 1 obtains the exact minimizers h̃, x̃ of the ℓ1-regularized ℓ1-norm square approximation problem in (3) and (4).

The SOTOPO algorithm seems complicated but is in fact efficient. The dominant operations in Alg. 1 are steps 1 and 2, with a total cost of O(d + |Q| log |Q|). To quantify the complexity reduction achieved by Lemma 5, we give the following fact.

Proposition 1. For the optimization problem defined in (5)-(7), where λ is the regularization parameter of the original problem (1), we have

  0 ≤ max_{i∈[d]} √(−2J′ᵢ(0)/η) − max_{j∈[d]} √(−2J′ⱼ(1)/η) ≤ 2λ.   (15)

Let v_m be as defined in step 1 of Alg. 1. By Proposition 1, for all i ∈ Q,

  √(−2J′ᵢ(0)/η) ≤ max_{k∈[d]} √(−2J′ₖ(0)/η) ≤ max_{j∈[d]} √(−2J′ⱼ(1)/η) + 2λ = √(2v_m/η) + 2λ.

Therefore, at least the coordinates j that satisfy √(−2J′ⱼ(0)/η) > √(2v_m/η) + 2λ will not be contained in Q. In practice, this can considerably reduce the sorting cost.

Remark 1.
SOTOPO can be viewed as an extension of the SOPOPO algorithm [18], obtained by changing the objective from a Euclidean distance to the more general function J(θ) in (9). It should be noted that Lemma 5 has no counterpart in the Euclidean-distance case [18]. In addition, an extension of the randomized median-finding algorithm [10], which runs in linear time in our setting, also merits further research. Due to limited space, this is left for future work.

3 The ASGCD algorithm

We can now return to our motivation, i.e., accelerating GCD to obtain the optimal convergence rate O(1/√ε) via Nesterov's acceleration, and reducing the complexity of greedy selection via stochastic optimization. The main idea is that, although the proposed new rule, i.e., minimizing problem (3), updates one or several coordinates (like any (block) coordinate descent algorithm), it is a generalized proximal gradient descent step based on the ℓ1-norm. Therefore this rule can be plugged into the existing Nesterov's acceleration and stochastic optimization framework "Katyusha" [1], provided it can be solved efficiently. The final result is the accelerated stochastic greedy coordinate descent (ASGCD) algorithm, described in Alg. 2.

Algorithm 2: ASGCD
  δ = log(d) − 1 − √((log(d) − 1)² − 1);  p = 1 + δ,  q = p/(p − 1),  C = d^{2δ/(1+δ)}/δ;
  z₀ = y₀ = x̃₀ = ϑ₀ = 0;  τ₂ = 1/2,  m = ⌈n/b⌉,  η = 1/((1 + 2(n−b)/(b(n−1))) L₁);
  for s = 0, 1, 2, …, S − 1 do
    1. τ₁,ₛ = 2/(s + 4),  αₛ = η/(τ₁,ₛ C);
    2. μₛ = ∇f(x̃ₛ);
    3. for l = 0, 1, …, m − 1 do
      (a) k = sm + l;
      (b) randomly sample a mini-batch B of size b from {1, 2, …, n} with equal probability;
      (c) x_{k+1} = τ₁,ₛ z_k + τ₂ x̃ₛ + (1 − τ₁,ₛ − τ₂) y_k;
      (d) ∇̃_{k+1} = μₛ + (1/b) Σ_{j∈B} (∇f_j(x_{k+1}) − ∇f_j(x̃ₛ));
      (e) y_{k+1} = SOTOPO(∇̃_{k+1}, x_{k+1}, λ, η);
      (f) (z_{k+1}, ϑ_{k+1}) = pCOMID(∇̃_{k+1}, ϑ_k, q, λ, αₛ);
    end for
    4. x̃_{s+1} = (1/m) Σ_{l=1}^m y_{sm+l};
  end for
  Output: x̃_S

Algorithm 3: (x̃, ϑ̃) = pCOMID(g, ϑ, q, λ, α)
1.
∀i ∈ [d], ϑ̃ᵢ = sign(ϑᵢ − αgᵢ) · max{0, |ϑᵢ − αgᵢ| − αλ};
2. ∀i ∈ [d], x̃ᵢ = sign(ϑ̃ᵢ)|ϑ̃ᵢ|^{q−1}/‖ϑ̃‖_q^{q−2};
3. Output: x̃, ϑ̃.

In Alg. 2, the gradient descent step 3(e) is solved by the proposed SOTOPO algorithm, while the mirror descent step 3(f) is solved by the COMID algorithm with p-norm divergence [11, Sec. 7.2]. We denote this mirror descent step pCOMID in Alg. 3. All other parts are standard steps of the Katyusha framework, up to some parameter settings. For example, instead of the customary setting p = 1 + 1/log(d) [19, 11], the particular choice p = 1 + δ (with δ defined in Alg. 2) is used to minimize C = d^{2δ/(1+δ)}/δ. C varies slowly with d and is upper bounded by log²(d). Meanwhile, α_{k+1} depends on the extra constant C. Furthermore, the step size η = 1/((1 + 2(n−b)/(b(n−1))) L₁) is used, where L₁ is defined in (2). Finally, unlike [1, Alg. 2], we keep the batch size b as an algorithm parameter so as to cover both the stochastic case b < n and the deterministic case b = n. To the best of our knowledge, the existing GCD algorithms are deterministic, so setting b = n allows a cleaner comparison with them. Thanks to the efficient SOTOPO algorithm, ASGCD has nearly the same iteration complexity as the standard form of Katyusha [1, Alg. 2]. Meanwhile, we have the following convergence rate.

Theorem 2. If each f_j(x) (j ∈ [n]) is convex and L₁-smooth in the sense of (2), and x* is an optimum of the ℓ1-regularized problem (1), then ASGCD satisfies

  E[F(x̃_S)] − F(x*) ≤ (4/(S + 3)²) (1 + (1 + 2β(b))/(2m)) C L₁ ‖x*‖₁² = O(C L₁ ‖x*‖₁²/S²),   (16)

where β(b) = (n−b)/(b(n−1)) and S, b, m, C are given in Alg. 2. In other words, ASGCD achieves an ε-additive error (i.e., E[F(x̃_S)] − F(x*) ≤ ε) using at most O(√(CL₁) ‖x*‖₁/√ε) iterations.

In Table 1, we give the convergence rates of the existing algorithms and of ASGCD for solving the ℓ1-regularized problem (1).
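Alg. 3 is short enough to transcribe directly. The sketch below (function and variable names are ours, and the test values are made up) implements the two steps, with a sanity check that for q = 2 the link function in step 2 reduces to the identity map, so pCOMID degenerates to an ordinary proximal gradient step.

```python
import numpy as np

def pcomid(g, vartheta, q, lam, alpha):
    """Alg. 3: mirror descent step with p-norm divergence.
    Step 1: soft-threshold the dual iterate vartheta.
    Step 2: map back to the primal through the q-norm link function."""
    vt = np.sign(vartheta - alpha * g) * np.maximum(
        0.0, np.abs(vartheta - alpha * g) - alpha * lam)
    nq = np.linalg.norm(vt, q)
    if nq == 0.0:
        return np.zeros_like(vt), vt
    x = np.sign(vt) * np.abs(vt)**(q - 1) / nq**(q - 2)
    return x, vt

g = np.array([0.2, -0.4, 0.1])          # illustrative gradient estimate
vth = np.array([1.0, 0.5, -0.2])        # illustrative dual iterate
x2, vt2 = pcomid(g, vth, q=2.0, lam=0.05, alpha=0.1)
assert np.allclose(x2, vt2)             # q = 2: link function is the identity
```

For the q used in Alg. 2 (q = p/(p − 1) with p close to 1, so q is large), the link function concentrates mass on the largest entries of ϑ̃, which is what gives the ℓ1-norm based guarantee.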
In the first column, "Acc" and "Non-Acc" denote whether the corresponding algorithm uses Nesterov's acceleration; "Primal" and "Dual" denote whether it solves the primal problem (1) or its regularized dual problem [20]; "ℓ2-norm" and "ℓ1-norm" denote whether the theoretical guarantee is based on the ℓ2-norm or the ℓ1-norm. In terms of ℓ2-norm based guarantees, Katyusha and APPROX give the state-of-the-art convergence rate O(√L₂‖x*‖₂/√ε). In terms of ℓ1-norm based guarantees, GCD gives the state-of-the-art convergence rate O(L₁‖x*‖₁²/ε), which applies only to the smooth case λ = 0 in (1). When λ > 0, the generalized GS-r, GS-s and GS-q rules generally have worse theoretical guarantees than GCD [17]. In contrast, the bound for ASGCD in this paper is O(√L₁‖x*‖₁ log d/√ε), which can be viewed as an accelerated version of the ℓ1-norm based guarantee O(L₁‖x*‖₁²/ε). Moreover, because the bound depends on ‖x*‖₁ rather than ‖x*‖₂ and on L₁ rather than L₂ (L₁ and L₂ are defined in (2)), for ℓ1-ERM problems with high-dimensional, dense samples and a relatively large regularization parameter λ, it is possible that L₁ ≪ L₂ (in the extreme case, L₂ = dL₁ [9]) while ‖x*‖₁ ≈ ‖x*‖₂. In this regime, the ℓ1-norm based guarantee O(√L₁‖x*‖₁ log d/√ε) of ASGCD is better than the ℓ2-norm based guarantee O(√L₂‖x*‖₂/√ε) of Katyusha and APPROX. Finally, whether the log d factor in the bound of ASGCD (which also appears in the COMID analysis [11]) is necessary deserves further research.

Remark 2. When the batch size is b = n, ASGCD is a deterministic algorithm. In this case, we can use a better smooth constant T₁, satisfying ‖∇f(x) − ∇f(y)‖∞ ≤ T₁‖x − y‖₁, in place of L₁ [1].

Remark 3. The need to compute the full gradient beforehand is the main bottleneck of GCD in applications [17]. There is existing work [9] that avoids computing the full gradient by performing approximate greedy selection.
However, the method in [9] needs preprocessing and an incoherence condition on the dataset, and is somewhat complicated. In contrast, the proposed ASGCD algorithm reduces the complexity of greedy selection by a factor of up to n in terms of amortized cost, simply by applying an existing stochastic variance reduction framework.

Table 1: Convergence rates on ℓ1-regularized empirical risk minimization problems. (For GCD, the convergence rate applies when λ = 0.)

  ALGORITHM TYPE              PAPER                       CONVERGENCE RATE
  Non-Acc, Primal, ℓ2-norm    SAGA [8]                    O(L₂‖x*‖₂²/ε)
  Acc, Primal, ℓ2-norm        Katyusha [1]                O(√L₂‖x*‖₂/√ε)
  Acc, Dual, ℓ2-norm          ACC-SDCA [21], SPDC [24],   O(√L₂‖x*‖₂ log(1/ε)/√ε)
                              APCG [14], APPROX [12]
  Non-Acc, Primal, ℓ1-norm    GCD [2]                     O(L₁‖x*‖₁²/ε)
  Acc, Primal, ℓ1-norm        ASGCD (this paper)          O(√L₁‖x*‖₁ log d/√ε)

4 Experiments

In this section, we use numerical experiments to demonstrate the theoretical results of Section 3 and to show the empirical performance of ASGCD with batch size b = 1 and of its deterministic version with b = n (denoted ASGCD (b = 1) and ASGCD (b = n), respectively, in Fig. 1). Following the recommendation to measure data accesses rather than CPU time [19] and the recent SGD and RCD literature [13, 14, 1], we measure performance by data access, i.e., the number of times the algorithm accesses the data matrix. To show the effect of Nesterov's acceleration, we compare ASGCD (b = n) with the non-accelerated greedy coordinate descent with GS-q rule, i.e., coordinate gradient descent (CGD) [22]. To show the combined effect of Nesterov's acceleration and stochastic optimization, we compare ASGCD (b = 1) with Katyusha [1, Alg. 2]. To show the effect of the new rule proposed in Section 2, which is based on the ℓ1-norm square approximation, we compare ASGCD (b = n) with the ℓ2-norm based proximal accelerated full gradient (AFG) method implemented in the linear coupling framework [3].
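Alg. 2's skeleton is the standard variance-reduced loop of Katyusha. The sketch below strips it down for illustration on a made-up toy lasso instance: it keeps the snapshot/full-gradient structure (step 2) and the variance-reduced gradient (step 3(d)), but replaces the momentum combination 3(c) and the two specialized steps 3(e)-(f) by a single plain soft-thresholding step, i.e., it degenerates to non-accelerated proximal SVRG rather than the actual ASGCD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lasso instance F(x) = ||b - A x||_2^2 / (2n) + lam * ||x||_1.
n, d, lam = 40, 10, 0.1
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def F(x):
    r = b - A @ x
    return r @ r / (2 * n) + lam * np.abs(x).sum()

def grad_j(x, j):                  # gradient of f_j(x) = (a_j^T x - b_j)^2 / 2
    return (A[j] @ x - b[j]) * A[j]

def soft(z, tau):
    return np.sign(z) * np.maximum(0.0, np.abs(z) - tau)

eta = 1.0 / (3.0 * np.max(np.sum(A**2, axis=1)))   # conservative step size
x_tilde = np.zeros(d)
for s in range(10):                # outer loop: snapshot + full gradient
    mu = A.T @ (A @ x_tilde - b) / n               # step 2: grad f at snapshot
    x = x_tilde.copy()
    for _ in range(n):             # inner loop, batch size 1
        j = rng.integers(n)
        g = mu + grad_j(x, j) - grad_j(x_tilde, j)  # step 3(d), variance-reduced
        x = soft(x - eta * g, eta * lam)            # stand-in for steps 3(e)-(f)
    x_tilde = x

assert F(x_tilde) < F(np.zeros(d))  # objective decreased from the origin
```

The real algorithm would call SOTOPO and pCOMID in the inner loop and maintain the three coupled sequences x, y, z.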
Meanwhile, as a benchmark of stochastic optimization for problems with finite-sum structure, we also show the performance of the proximal stochastic variance reduced gradient (SVRG) method [23]. In addition, based on [1] and our own experiments, we find that Katyusha [1, Alg. 2] has the best empirical performance in general for the ℓ1-regularized problem (1); therefore, other well-known state-of-the-art algorithms, such as APCG [14] and accelerated SDCA [21], are not included in the experiments. The datasets are obtained from LIBSVM data [7] and summarized in Table 2. All algorithms are used to solve the following lasso problem

  min_{x∈ℝ^d} { f(x) + λ‖x‖₁ = (1/(2n))‖b − Ax‖₂² + λ‖x‖₁ }   (17)

on the 3 datasets, where A = (a₁, a₂, …, a_n)ᵀ = (h₁, h₂, …, h_d) ∈ ℝ^{n×d}, each a_j ∈ ℝ^d is a sample vector, each h_i ∈ ℝⁿ is a feature vector, and b ∈ ℝⁿ is the prediction vector.

Table 2: Characteristics of the three real datasets.

  DATASET NAME    # SAMPLES n    # FEATURES d
  Leukemia        38             7129
  Gisette         6000           5000
  Mnist           60000          780

For ASGCD (b = 1) and Katyusha [1, Alg. 2], we use the tight smooth constants L₁ = max_{j∈[n], i∈[d]} |a_{j,i}|² and L₂ = max_{j∈[n]} ‖a_j‖₂², respectively, in their implementation.

Figure 1: Comparing ASGCD (b = 1) and ASGCD (b = n) with CGD, SVRG, AFG and Katyusha on lasso. (Each panel plots the log loss log(F(x_k) − F(x*)) against the number of passes over the data, for λ ∈ {10⁻², 10⁻⁶} on the Leu, Gisette and Mnist datasets.)
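The smooth constants just mentioned, together with the full-gradient analogues T₁ and T₂ used for the b = n variants, can be read off the data matrix directly. A sketch on random data; the first asserted inequality chain is the relation L₁ ≤ L₂ ≤ dL₁ noted earlier (the extreme case is L₂ = dL₁), while T₁ ≤ T₂ is our own elementary observation, not a claim from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 50
A = rng.standard_normal((n, d))   # rows a_j are samples, columns h_i features

# Smooth constants, in the notation of the text:
L1 = np.max(A**2)                        # max_{j,i} a_{j,i}^2
L2 = np.max(np.sum(A**2, axis=1))        # max_j ||a_j||_2^2
T1 = np.max(np.sum(A**2, axis=0)) / n    # max_i ||h_i||_2^2 / n
T2 = np.linalg.norm(A, 2)**2 / n         # ||A||^2 / n (squared spectral norm)

assert L1 <= L2 <= d * L1
assert T1 <= T2                          # column norm bounded by spectral norm
```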
For ASGCD (b = n) and AFG, the better smooth constants T₁ = max_{i∈[d]} ‖h_i‖₂²/n and T₂ = ‖A‖₂²/n (squared spectral norm) are used, respectively. The learning rates of CGD and SVRG are tuned over {10⁻⁶, 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹}.

Table 3: Factor rates (r₁, r₂) for the 6 cases.

  λ       LEU             GISETTE         MNIST
  10⁻²    (0.85, 1.33)    (0.88, 0.74)    (5.85, 3.02)
  10⁻⁶    (1.45, 2.27)    (3.51, 2.94)    (5.84, 3.02)

We use λ = 10⁻⁶ and λ = 10⁻² in the experiments. In addition, for each case (dataset, λ), AFG is run to find an optimum x* to sufficient accuracy. The performance of the 6 algorithms is plotted in Fig. 1. The y-axis shows the log loss log(F(x_k) − F(x*)); the x-axis shows the number of times the algorithm accesses the data matrix A. For example, ASGCD (b = n) accesses A once per iteration, while ASGCD (b = 1) accesses A twice per entire outer iteration. For each case (dataset, λ), we compute the rates

  (r₁, r₂) = ( √(CL₁)‖x*‖₁ / (√L₂‖x*‖₂),  √(CT₁)‖x*‖₁ / (√T₂‖x*‖₂) )

reported in Table 3. First, because of the acceleration effect, ASGCD (b = n) is always better than the non-accelerated CGD algorithm. Second, comparing ASGCD (b = 1) with Katyusha and ASGCD (b = n) with AFG, we find that for the cases (Leu, 10⁻²), (Leu, 10⁻⁶) and (Gisette, 10⁻²), ASGCD (b = 1) dominates Katyusha [1, Alg. 2] and ASGCD (b = n) dominates AFG; the theoretical analysis in Section 3 indeed suggests that if r₁ is relatively small, e.g., around 1, then ASGCD (b = 1) will be better than [1, Alg. 2]. For the other 3 cases, [1, Alg. 2] and AFG are better. The consistency between Table 3 and Fig. 1 supports the theoretical analysis.

References

[1] Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. ArXiv e-prints, abs/1603.05953, 2016.
[2] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. ArXiv e-prints, abs/1407.1537, July 2014.
[3] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. ArXiv e-prints, abs/1407.1537, July 2014.
[4] Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, et al. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012.
[5] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[6] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] Chih-Chung Chang. LIBSVM: Introduction and benchmarks. http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2000.
[8] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646-1654, 2014.
[9] Inderjit S. Dhillon, Pradeep K. Ravikumar, and Ambuj Tewari. Nearest neighbor based greedy coordinate descent. In Advances in Neural Information Processing Systems, pages 2160-2168, 2011.
[10] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272-279. ACM, 2008.
[11] John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, and Ambuj Tewari. Composite objective mirror descent. In COLT, pages 14-26, 2010.
[12] Olivier Fercoq and Peter Richtárik. Accelerated, parallel, and proximal coordinate descent. SIAM Journal on Optimization, 25(4):1997-2023, 2015.
[13] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[14] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method. In Advances in Neural Information Processing Systems, pages 3059-3067, 2014.
[15] Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems.
SIAM Journal on Optimization, 22(2):341-362, 2012.
[16] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.
[17] Julie Nutini, Mark Schmidt, Issam H. Laradji, Michael Friedlander, and Hoyt Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1632-1641, 2015.
[18] Shai Shalev-Shwartz and Yoram Singer. Efficient learning of label ranking by soft projections onto polyhedra. Journal of Machine Learning Research, 7(Jul):1567-1599, 2006.
[19] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for ℓ1-regularized loss minimization. Journal of Machine Learning Research, 12(Jun):1865-1892, 2011.
[20] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(Feb):567-599, 2013.
[21] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, pages 64-72, 2014.
[22] Paul Tseng and Sangwoon Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1):387-423, 2009.
[23] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.
[24] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
Reinforcement Learning under Model Mismatch

Aurko Roy¹, Huan Xu², and Sebastian Pokutta²
¹Google*, Email: aurkor@google.com
²ISyE, Georgia Institute of Technology, Atlanta, GA, USA. Email: huan.xu@isye.gatech.edu
²ISyE, Georgia Institute of Technology, Atlanta, GA, USA. Email: sebastian.pokutta@isye.gatech.edu

Abstract

We study reinforcement learning under model misspecification, where we do not have access to the true environment but only to a reasonably close approximation to it. We address this problem by extending the framework of robust MDPs of [1, 15, 11] to the model-free Reinforcement Learning setting, where we do not have access to the model parameters, but can only sample states from it. We define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy and approximate value function respectively. We scale up the robust algorithms to large MDPs via function approximation and prove convergence under two different settings. We prove convergence of robust approximate policy iteration and robust approximate value iteration for linear architectures (under mild assumptions). We also define a robust loss function, the mean squared robust projected Bellman error, and give stochastic gradient descent algorithms that are guaranteed to converge to a local minimum.

1 Introduction

Reinforcement learning is concerned with learning a good policy for sequential decision making problems modeled as a Markov Decision Process (MDP), via interacting with the environment [20, 18]. In this work we address the problem of reinforcement learning from a misspecified model. As a motivating example, consider the scenario where the problem of interest is not directly accessible, but instead the agent can interact with a simulator whose dynamics is reasonably close to the true problem.
Another plausible application is when the parameters of the model may evolve over time but can still be reasonably approximated by an MDP. To address this problem we use the framework of robust MDPs which was proposed by [1, 15, 11] to solve the planning problem under model misspecification. The robust MDP framework considers a class of models and finds the robust optimal policy which is a policy that performs best under the worst model. It was shown by [1, 15, 11] that the robust optimal policy satisfies the robust Bellman equation which naturally leads to exact dynamic programming algorithms to find an optimal policy. However, this approach is model dependent and does not immediately generalize to the model-free case where the parameters of the model are unknown. Essentially, reinforcement learning is a model-free framework to solve the Bellman equation using samples. Therefore, to learn policies from misspecified models, we develop sample based methods to solve the robust Bellman equation. In particular, we develop robust versions of classical reinforcement learning algorithms such as Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal policy under mild assumptions on the discount factor. We also show that ∗Work done while at Georgia Tech 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. the nominal versions of these iterative algorithms converge to policies that may be arbitrarily worse compared to the optimal policy. We also scale up these robust algorithms to large scale MDPs via function approximation, where we prove convergence under two different settings. Under a technical assumption similar to [5, 24] we show convergence of robust approximate policy iteration and value iteration algorithms for linear architectures. 
We also study function approximation with nonlinear architectures, by defining an appropriate mean squared robust projected Bellman error (MSRPBE) loss function, which is a generalization of the mean squared projected Bellman error (MSPBE) loss function of [22, 21, 6]. We propose robust versions of stochastic gradient descent algorithms as in [22, 21, 6] and prove convergence to a local minimum under some assumptions for function approximation with arbitrary smooth functions. Contribution. In summary we have the following contributions: 1. We extend the robust MDP framework of [1, 15, 11] to the model-free reinforcement learning setting. We then define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy. 2. We also provide robust reinforcement learning algorithms for the function approximation case and prove convergence of robust approximate policy iteration and value iteration algorithms for linear architectures. We also define the MSRPBE loss function which contains the robust optimal policy as a local minimum and we derive stochastic gradient descent algorithms to minimize this loss function as well as establish convergence to a local minimum in the case of function approximation by arbitrary smooth functions. 3. Finally, we demonstrate empirically the improvement in performance for the robust algorithms compared to their nominal counterparts. For this we used various Reinforcement Learning test environments from OpenAI [9] as benchmark to assess the improvement in performance as well as to ensure reproducibility and consistency of our results. Related Work. Recently, several approaches have been proposed to address model performance due to parameter uncertainty for Markov Decision Processes (MDPs). A Bayesian approach was proposed by [19] which requires perfect knowledge of the prior distribution on transition matrices. 
Other probabilistic and risk based settings were studied by [10, 25, 23] which propose various mechanisms to incorporate percentile risk into the model. A framework for robust MDPs was first proposed by [1, 15, 11] who consider the transition matrices to lie in some uncertainty set and proposed a dynamic programming algorithm to solve the robust MDP. Recent work by [24] extended the robust MDP framework to the function approximation setting where under a technical assumption the authors prove convergence to an optimal policy for linear architectures. Note that these algorithms for robust MDPs do not readily generalize to the model-free reinforcement learning setting where the parameters of the environment are not explicitly known. For reinforcement learning in the non-robust model-free setting, several iterative algorithms such as Q-learning, TD-learning, and SARSA are known to converge to an optimal policy under mild assumptions, see [4] for a survey. Robustness in reinforcement learning for MDPs was studied by [13] who introduced a robust learning framework for learning with disturbances. Similarly, [16] also studied learning in the presence of an adversary who might apply disturbances to the system. However, for the algorithms proposed in [13, 16] no theoretical guarantees are known and there is only limited empirical evidence. Another recent work on robust reinforcement learning is [12], where the authors propose an online algorithm with certain transitions being stochastic and the others being adversarial and the devised algorithm ensures low regret. For the case of reinforcement learning with large MDPs using function approximations, theoretical guarantees for most TD-learning based algorithms are only known for linear architectures [2]. 
Recent work by [6] extended the results of [22, 21] and proved that a stochastic gradient descent algorithm minimizing the mean squared projected Bellman error (MSPBE) loss function converges to a local minimum, even for nonlinear architectures. However, these algorithms do not apply to robust MDPs; in this work we extend these algorithms to the robust setting.

2 Preliminaries

We consider an infinite-horizon Markov Decision Process (MDP) [18] with finite state space X of size n and finite action space A of size m. At every time step t the agent is in a state i ∈ X and can choose an action a ∈ A, incurring a cost cₜ(i, a). We make the standard assumption that future cost is discounted (see e.g. [20]) with a discount factor ϑ < 1, i.e., cₜ(i, a) := ϑᵗ c(i, a), where c(i, a) is a fixed constant independent of the time step t for i ∈ X and a ∈ A. The states transition according to probability transition matrices τ := {Pᵃ}_{a∈A}, which depend only on the last taken action a. A policy of the agent is a sequence π = (a₀, a₁, …), where every aₜ(i) corresponds to an action in A if the system is in state i at time t. For every policy π, we have a corresponding value function v_π ∈ ℝⁿ, where v_π(i) for a state i ∈ X measures the expected cost of that state if the agent follows policy π. It satisfies the recurrence

  v_π(i) := c(i, a₀(i)) + ϑ Σ_{j∈X} P^{a₀(i)}(i, j) v_π(j),   (1)

where the expectation over the next state j is taken with respect to the row of the transition matrix corresponding to state i and the chosen action. The goal is to devise algorithms that learn an optimal policy π* minimizing the expected total cost:

Definition 2.1 (Optimal policy). Given an MDP with state space X, action space A and transition matrices Pᵃ, let Π be the strategy space of all possible policies. Then an optimal policy π* is one that minimizes the expected total cost, i.e.,

  π* := argmin_{π∈Π} E[ Σ_{t=0}^∞ ϑᵗ c(iₜ, aₜ(iₜ)) ].   (2)
In the robust case, we assume as in [15, 11] that the transition matrices Pᵃ are not fixed: they may come from an uncertainty region Pᵃ and may be chosen adversarially by nature in future runs of the model. In this setting, [15, 11] prove the following robust analogue of the Bellman recursion. A policy of nature is a sequence τ := (P₀, P₁, …), where every Pₜ(a) ∈ Pᵃ is a transition probability matrix chosen from Pᵃ. Let T denote the set of all such policies of nature; in other words, a policy τ ∈ T of nature is a sequence of transition matrices that nature may play in response to the actions of the agent. For any set P ⊆ ℝⁿ and vector v ∈ ℝⁿ, let σ_P(v) := sup{pᵀv | p ∈ P} be the support function of the set P. For a state i ∈ X, let Pᵃᵢ be the projection of Pᵃ onto its i-th row.

Theorem 2.2 ([15]). We have the following perfect duality relation:

  min_{π∈Π} max_{τ∈T} E_τ[ Σ_{t=0}^∞ ϑᵗ c(iₜ, aₜ(iₜ)) ] = max_{τ∈T} min_{π∈Π} E_τ[ Σ_{t=0}^∞ ϑᵗ c(iₜ, aₜ(iₜ)) ].   (3)

The optimal value function v_{π*} corresponding to the optimal policy π* satisfies

  v_{π*}(i) = min_{a∈A} { c(i, a) + ϑ σ_{Pᵃᵢ}(v_{π*}) },   (4)

and π* can then be obtained in a greedy fashion, i.e.,

  a*(i) ∈ argmin_{a∈A} { c(i, a) + ϑ σ_{Pᵃᵢ}(v) }.   (5)

The main shortcoming of this approach is that it does not generalize to the model-free case, where the transition probabilities are not explicitly known and the agent can only sample states according to them. In the absence of this knowledge, we cannot compute the support functions of the uncertainty sets Pᵃᵢ. On the other hand, it is often easy to specify a confidence region Uᵃᵢ, e.g., a ball or an ellipsoid, for every state-action pair i ∈ X, a ∈ A that quantifies our uncertainty in the simulation, with the uncertainty set Pᵃᵢ being the confidence region Uᵃᵢ centered around the unknown simulator probabilities. Formally, we define the uncertainty sets corresponding to every state-action pair as follows.
Definition 2.3 (Uncertainty sets). Corresponding to every state-action pair (i, a) we have a confidence region U^a_i, so that the uncertainty region P^a_i of the probability transition matrix corresponding to (i, a) is defined as

P^a_i := { x + p^a_i | x ∈ U^a_i },   (6)

where p^a_i is the unknown state transition probability vector from the state i ∈ X to every other state in X given action a during the simulation.

As a simple example, take the ellipsoid U^a_i := { x | x^⊤ A^a_i x ≤ 1, Σ_{i∈X} x_i = 0 } for some n × n psd matrix A^a_i, with the uncertainty set P^a_i being P^a_i := { x + p^a_i | x ∈ U^a_i }, where p^a_i is the unknown simulator state transition probability vector with which the agent transitioned to a new state during training. Note that while it may be easy to come up with good descriptions of the confidence region U^a_i, the approach of [15, 11] breaks down, since we have no knowledge of p^a_i and merely observe the new state j sampled from this distribution. In the following sections we develop robust versions of Q-learning, SARSA, and TD-learning which are guaranteed to converge to an approximately optimal policy that is robust with respect to this confidence region. The robust versions of these iterative algorithms involve an additional linear optimization step over the set U^a_i, which in the case of U^a_i = {‖x‖_2 ≤ r} simply corresponds to adding fixed noise during every update. In later sections we extend this to the function approximation case, where we study linear architectures as well as nonlinear architectures; in the latter case we derive new stochastic gradient descent algorithms for computing approximately robust policies.

3 Robust exact dynamic programming algorithms

In this section we develop robust versions of exact dynamic programming algorithms such as Q-learning, SARSA, and TD-learning. These methods are suitable for small MDPs where the size n of the state space is not too large.
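When the transition model is known, the robust Bellman recursion of equations (4) and (5) is straightforward to iterate. Below is a minimal sketch; NumPy, the spherical uncertainty set with radius r, and the toy MDP shapes are illustrative choices of ours, not the paper's code:

```python
import numpy as np

def sigma_ball(v, r):
    # Support function of {x : ||x||_2 <= r, sum(x) = 0}.
    # The zero-sum constraint projects v onto the hyperplane 1^T x = 0,
    # giving sigma(v) = r * ||v - mean(v)||_2.
    return r * np.linalg.norm(v - v.mean())

def robust_value_iteration(P, c, theta=0.9, r=0.05, iters=500):
    """Iterate v(i) <- min_a [ c(i, a) + theta * (p_i^a . v + sigma_U(v)) ],
    the robust Bellman backup of equation (4), for a known model.
    P has shape (m, n, n): P[a, i] is the transition row p_i^a;
    c has shape (n, m)."""
    n = c.shape[0]
    v = np.zeros(n)
    for _ in range(iters):
        # q[i, a] = c[i, a] + theta * (P[a, i] @ v + sigma_U(v))
        q = c + theta * (np.einsum('ain,n->ia', P, v) + sigma_ball(v, r))
        v = q.min(axis=1)
    return v
```

With r = 0 the sigma term vanishes and this reduces to standard value iteration; with r > 0 each backup pays an extra worst-case cost, so robust values dominate nominal ones.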
Note that the confidence region U^a_i must also be constrained to lie within the probability simplex ∆_n. However, since we do not have knowledge of the simulator probabilities p^a_i, we do not know how far p^a_i is from the boundary of ∆_n, and so the algorithms will make use of a proxy confidence region Û^a_i, in which we drop the requirement that Û^a_i ⊆ ∆_n, to compute the robust optimal policies. With a suitable choice of step lengths and discount factors we can prove convergence to an approximately optimal U^a_i-robust policy, where the approximation depends on the difference between the unconstrained proxy region Û^a_i and the true confidence region U^a_i. Below we give specific examples of possible choices for simple confidence regions.

Ellipsoid: Let {A^a_i}_{i,a} be a sequence of n × n psd matrices. Then we can define the confidence region as

U^a_i := { x | x^⊤ A^a_i x ≤ 1, Σ_{i∈X} x_i = 0, −p^a_{ij} ≤ x_j ≤ 1 − p^a_{ij} ∀j ∈ X }.   (7)

Note that U^a_i has some additional linear constraints, so that the uncertainty set P^a_i := { p^a_i + x | x ∈ U^a_i } lies inside ∆_n. Since we do not know p^a_i, we will make use of the proxy confidence region Û^a_i := { x | x^⊤ A^a_i x ≤ 1, Σ_{i∈X} x_i = 0 }. In particular, when A^a_i = r^{−1} I_n for every i ∈ X, a ∈ A, this corresponds to a spherical confidence interval of [−r, r] in every direction. In other words, each uncertainty set P^a_i is an ℓ2 ball of radius r.

Parallelepiped: Let {B^a_i}_{i,a} be a sequence of n × n invertible matrices. Then we can define the confidence region as

U^a_i := { x | ‖B^a_i x‖_1 ≤ 1, Σ_{i∈X} x_i = 0, −p^a_{ij} ≤ x_j ≤ 1 − p^a_{ij} ∀j ∈ X }.   (8)

As before, we use the unconstrained parallelepiped Û^a_i, without the −p^a_{ij} ≤ x_j ≤ 1 − p^a_{ij} constraints, as a proxy for U^a_i, since we do not have knowledge of p^a_i. In particular, if B^a_i = D for a diagonal matrix D, then the proxy confidence region Û^a_i corresponds to a rectangle; if every diagonal entry is r, then every uncertainty set P^a_i is an ℓ1 ball of radius r.
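For the ellipsoidal proxy region, the linear optimization step (the support function) has a closed form obtained from the KKT conditions of maximizing v^⊤x subject to x^⊤Ax ≤ 1 and 1^⊤x = 0. A sketch, assuming NumPy; the convention A = I/r² for an ℓ2 ball of radius r is ours:

```python
import numpy as np

def sigma_ellipsoid(v, A):
    """Support function of the proxy region
    U = {x : x^T A x <= 1, sum(x) = 0} (equation (7) without the simplex
    constraints). KKT: the optimizer is proportional to A^{-1}(v - lam*1),
    with lam chosen so the zero-sum constraint holds, and
    sigma(v) = sqrt((v - lam*1)^T A^{-1} (v - lam*1))."""
    Ainv = np.linalg.inv(A)
    one = np.ones(len(v))
    lam = (one @ Ainv @ v) / (one @ Ainv @ one)
    w = v - lam * one
    return float(np.sqrt(w @ Ainv @ w))
```

For the spherical case A = I/r² this reduces to r·‖v − mean(v)·1‖₂, the same quantity the text describes as fixed noise added at every update.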
3.1 Robust Q-learning

Let us recall the notion of a Q-factor of a state-action pair (i, a) and a policy π, which in the non-robust setting is defined as

Q(i, a) := c(i, a) + ϑ E_{j ∼ p^a_i} [v(j)],   (9)

where v is the value function of the policy π. In other words, the Q-factor represents the expected cost if we start at state i, use the action a and follow the policy π subsequently. One may similarly define the robust Q-factors using a similar interpretation and the minimax characterization of Theorem 2.2. Let Q* denote the Q-factors of the optimal robust policy and let v* ∈ R^n be its value function. Note that we may write the value function in terms of the Q-factors as v*(i) = min_{a∈A} Q*(i, a). From Theorem 2.2 we have the following expression for Q*:

Q*(i, a) = c(i, a) + ϑ σ_{P^a_i}(v*)   (10)
         = c(i, a) + ϑ σ_{U^a_i}(v*) + ϑ Σ_{j∈X} p^a_{ij} min_{a′∈A} Q*(j, a′),   (11)

where equation (11) follows from Definition 2.3. For an estimate Q_t of Q*, let v_t ∈ R^n be its value vector, i.e., v_t(i) := min_{a∈A} Q_t(i, a). The robust Q-iteration is defined as

Q_t(i, a) := (1 − γ_t) Q_{t−1}(i, a) + γ_t ( c(i, a) + ϑ σ_{Û^a_i}(v_{t−1}) + ϑ min_{a′∈A} Q_{t−1}(j, a′) ),   (12)

where a state j ∈ X is sampled with the unknown transition probability p^a_{ij} using the simulator. Note that the robust Q-iteration of equation (12) involves an additional linear optimization step to compute the support function σ_{Û^a_i}(v_t) of v_t over the proxy confidence region Û^a_i. We will prove that iterating equation (12) converges to an approximately optimal policy. The following definition introduces the notion of an ε-optimal policy, see e.g. [4]. The error factor ε is also referred to as the amplification factor. We treat the Q-factors as a |X| × |A| matrix in the definition, so that its ℓ∞ norm is defined as usual.

Definition 3.1 (ε-optimal policy). A policy π with Q-factors Q′ is ε-optimal with respect to the optimal policy π* with corresponding Q-factors Q* if
‖Q′ − Q*‖_∞ ≤ ε ‖Q*‖_∞.

The following simple lemma allows us to decompose the optimization of a linear function over the proxy uncertainty set P̂^a_i in terms of linear optimization over P^a_i, U^a_i, and Û^a_i.

Lemma 3.2. Let v ∈ R^n be any vector and let β^a_i := max_{y∈Û^a_i} min_{x∈U^a_i} ‖y − x‖_1. Then we have σ_{P̂^a_i}(v) ≤ σ_{P^a_i}(v) + β^a_i ‖v‖_∞.

The following theorem proves that under a suitable choice of step lengths γ_t and discount factor ϑ, the iteration of equation (12) converges to an ε-approximately optimal policy with respect to the confidence regions U^a_i.

Theorem 3.3. Let the step lengths γ_t of the Q-iteration algorithm be chosen such that Σ_{t=0}^∞ γ_t = ∞ and Σ_{t=0}^∞ γ_t² < ∞, and let the discount factor ϑ < 1. Let β^a_i be as in Lemma 3.2 and let β := max_{i∈X, a∈A} β^a_i. If ϑ(1 + β) < 1, then with probability 1 the iteration of equation (12) converges to an ε-optimal policy, where ε := ϑβ / (1 − ϑ(1 + β)).

Remark 3.4. If β = 0, then note that by Theorem 3.3 the robust Q-iterations converge to the exact optimal Q-factors, since ε = 0. Since β^a_i := max_{y∈Û^a_i} min_{x∈U^a_i} ‖y − x‖_1, it follows that β = 0 iff Û^a_i = U^a_i for every i ∈ X, a ∈ A. This happens when the confidence region is small enough that the simplex constraints −p^a_{ij} ≤ x_j ≤ 1 − p^a_{ij} ∀j ∈ X in the description of P^a_i become redundant for every i ∈ X, a ∈ A. Equivalently, every p^a_i is “far” from the boundary of the simplex ∆_n compared to the size of the confidence region U^a_i.

Remark 3.5. Note that simply using the nominal Q-iteration without the σ_{Û^a_i}(v) term does not guarantee convergence to Q*. Indeed, the nominal Q-iterations converge to Q-factors Q′ where
‖Q′ − Q*‖_∞ may be arbitrarily large. This follows easily from observing that

|Q′(i, a) − Q*(i, a)| = σ_{Û^a_i}(v*),   (13)

where v* is the value function of Q*, and so
‖Q′ − Q*‖_∞ = max_{i∈X, a∈A} σ_{Û^a_i}(v*),   (14)

which can be as high as ‖v*‖_∞ = ‖Q*‖_∞.

3.2 Robust TD-Learning

Let (i_0, i_1, . . .) be a trajectory of the agent, where i_m denotes the state of the agent at time step m. The main idea behind the TD(λ)-learning method is to estimate the value function v_π of a policy π using the temporal difference errors d_m, defined as

d_m := c(i_m, π(i_m)) + ϑ v_t(i_{m+1}) − v_t(i_m).   (15)

For a parameter λ ∈ (0, 1), the TD-learning iteration is defined in terms of the temporal difference errors as

v_{t+1}(i_k) := v_t(i_k) + γ_t Σ_{m=k}^∞ (ϑλ)^{m−k} d_m.   (16)

In the robust setting, we have a confidence region U^a_i with proxy Û^a_i for every temporal difference error, which leads us to define the robust temporal difference errors as

d̃_m := d_m + ϑ σ_{Û^{π(i_m)}_{i_m}}(v_t),   (17)

where d_m is the non-robust temporal difference. The robust TD-update is the usual TD-update, with the robust temporal difference errors d̃_m replacing the usual temporal difference errors d_m. We define an ε-suboptimal value function for a fixed policy π similarly to Definition 3.1.

Definition 3.6 (ε-approximate value function). Given a policy π, we say that a vector v′ ∈ R^n is an ε-approximation of v_π if ‖v′ − v_π‖_∞ ≤ ε ‖v_π‖_∞.

The following theorem guarantees convergence of the robust TD-iteration to an approximate value function for π. We refer the reader to the supplementary material for a proof.

Theorem 3.7. Let β^a_i be as in Lemma 3.2 and let β := max_{i∈X, a∈A} β^a_i. Let ρ := ϑλ / (1 − ϑλ). If ϑ(1 + ρβ) < 1, then the robust TD-iteration converges to an ε-approximate value function, where ε := ϑβ / (1 − ϑ(1 + ρβ)). In particular, if β^a_i = β = 0, i.e., the proxy confidence region Û^a_i is the same as the true confidence region U^a_i, then the convergence is exact, i.e., ε = 0.

4 Robust Reinforcement Learning with function approximation

In Section 3 we derived robust versions of exact dynamic programming algorithms such as Q-learning, SARSA and TD-learning respectively.
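The robust Q-iteration of equation (12) differs from nominal Q-learning only by the support-function term added to every target. A minimal tabular sketch, assuming NumPy and a spherical proxy region; the uniform sampling of state-action pairs and the step-size schedule are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma_ball(v, r):
    # support function of the proxy region {x : ||x||_2 <= r, sum(x) = 0}
    return r * np.linalg.norm(v - v.mean())

def robust_q_learning(sample_next, costs, r=0.05, theta=0.9, steps=20000):
    """Tabular robust Q-iteration, equation (12): identical to nominal
    Q-learning except for the support-function term in the target."""
    n, m = costs.shape
    Q = np.zeros((n, m))
    for t in range(1, steps + 1):
        i, a = rng.integers(n), rng.integers(m)
        j = sample_next(i, a)              # simulator draw from unknown p^a_i
        v = Q.min(axis=1)                  # v_t(i) = min_a Q_t(i, a)
        gamma = 1.0 / t                    # sum gamma_t = inf, sum gamma_t^2 < inf
        target = costs[i, a] + theta * sigma_ball(v, r) + theta * v[j]
        Q[i, a] = (1 - gamma) * Q[i, a] + gamma * target
    return Q
```

Setting r = 0 recovers nominal Q-learning; since the sigma term is nonnegative, the robust Q-factors dominate the nominal ones pointwise.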
If the state space X of the MDP is large, then it is prohibitive to maintain a lookup table entry for every state. A standard approach for large-scale MDPs is to use the approximate dynamic programming (ADP) framework [17]. In this setting, the problem is parametrized by a smaller dimensional vector θ ∈ R^d, where d ≪ n = |X|. The natural generalizations of the Q-learning, SARSA, and TD-learning algorithms of Section 3 are via the projected Bellman equation, where we project back to the space spanned by all the parameters in θ ∈ R^d, since these are the value functions representable by the model. Convergence for these algorithms even in the non-robust setting is known only for linear architectures, see e.g. [2]. Recent work by [6] proposed stochastic gradient descent algorithms with convergence guarantees for smooth nonlinear function architectures, where the problem is framed in terms of minimizing a loss function. We give robust versions of both these approaches.

4.1 Robust approximations with linear architectures

In the approximate setting with linear architectures, we approximate the value function v_π of a policy π by Φθ, where θ ∈ R^d and Φ is an n × d feature matrix with rows φ(j) for every state j ∈ X representing its feature vector. Let S be the span of the columns of Φ, i.e., S := { Φθ | θ ∈ R^d }. Define the operator T_π : R^n → R^n as

(T_π v)(i) := c(i, π(i)) + ϑ Σ_{j∈X} p^{π(i)}_{ij} v(j),

so that the true value function v_π satisfies T_π v_π = v_π. A natural approach towards estimating v_π given a current estimate Φθ_t is to compute T_π(Φθ_t) and project it back to S to get the next parameter θ_{t+1}. The motivation behind such an iteration is the fact that the true value function is a fixed point of this operation if it belongs to the subspace S. This gives rise to the projected Bellman equation, where the projection Π is typically taken with respect to a weighted Euclidean norm ‖·‖_ξ, i.e., ‖x‖²_ξ = Σ_{i∈X} ξ_i x_i², where ξ is some probability distribution over the states X.
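The ξ-weighted projection onto S has the closed form Π = Φ(Φ^⊤ΞΦ)^{−1}Φ^⊤Ξ with Ξ = diag(ξ); a small NumPy sketch (the matrix sizes are illustrative):

```python
import numpy as np

def weighted_projection(Phi, xi):
    """Projection matrix onto S = span(Phi) in the xi-weighted Euclidean
    norm: Pi = Phi (Phi^T Xi Phi)^{-1} Phi^T Xi, with Xi = diag(xi)."""
    Xi = np.diag(xi)
    gram = Phi.T @ Xi @ Phi
    return Phi @ np.linalg.solve(gram, Phi.T @ Xi)
```

Note that Π is idempotent and fixes every vector in S, but it is an orthogonal projection only in the ξ-weighted inner product, which is why contraction arguments below are stated in ‖·‖_ξ rather than ℓ∞.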
In the model-free case, where we do not have explicit knowledge of the transition probabilities, various methods such as LSTD(λ), LSPE(λ), and TD(λ) have been proposed [3, 8, 7, 14, 22, 21]. The key idea behind proving convergence for these methods is to show that the mapping ΠT_π is a contraction with respect to ‖·‖_ξ for some distribution ξ over the states X. While the operator T_π in the non-robust case is linear and is a contraction in the ℓ∞ norm as in Section 3, the projection operator with respect to such norms is not guaranteed to be a contraction. However, it is known that if ξ is the steady state distribution of the policy π under evaluation, then Π is non-expansive in ‖·‖_ξ [4, 2]. In the robust setting, we have the same methods but with the robust Bellman operator T_π defined as

(T_π v)(i) := c(i, π(i)) + ϑ σ_{P^{π(i)}_i}(v).

Since we do not have access to the simulator probabilities p^a_i, we use a proxy set P̂^a_i as in Section 3, with the proxy operator denoted by T̂_π. While the iterative methods of the non-robust setting generalize via the robust operator T_π and the robust projected Bellman equation Φθ = ΠT_π(Φθ), it is not clear how to choose the distribution ξ under which the projected operator ΠT_π is a contraction in order to show convergence. Let ξ be the steady state distribution of the exploration policy π̂ of the MDP with transition probability matrix P^{π̂}. We make the following assumption on the discount factor ϑ, as in [24].

Assumption 4.1. For every state i ∈ X and action a ∈ A, there exists a constant α ∈ (0, 1) such that for any p ∈ P^a_i we have ϑ p_j ≤ α P^{π̂}_{ij} for every j ∈ X.

Assumption 4.1 might appear artificially restrictive; however, it is necessary to prove that ΠT_π is a contraction. While [24] require this assumption for proving convergence of robust MDPs, a similar assumption is also required in proving convergence of the off-policy Reinforcement Learning methods of [5], where the states are sampled from an exploration policy π̂ which is not necessarily the same as the policy π under evaluation. Note that in the robust setting, all methods are necessarily off-policy, since the transition matrices are not fixed for a given policy. The following lemma is a ξ-weighted Euclidean norm version of Lemma 3.2.

Lemma 4.2. Let v ∈ R^n be any vector and let β^a_i := max_{y∈Û^a_i} min_{x∈U^a_i} ‖y − x‖_ξ / ξ_min, where ξ_min := min_{i∈X} ξ_i. Then we have σ_{P̂^a_i}(v) ≤ σ_{P^a_i}(v) + β^a_i ‖v‖_ξ.

The following theorem shows that the robust projected Bellman equation is a contraction under some assumptions on the discount factor ϑ.

Theorem 4.3. Let β^a_i be as in Lemma 4.2 and let β := max_{i∈X} β^{π(i)}_i. If the discount factor ϑ satisfies Assumption 4.1 and α² + ϑ²β² < 1/2, then the operator T̂_π is a contraction with respect to ‖·‖_ξ. In other words, for any two θ, θ′ ∈ R^d, we have
‖T̂_π(Φθ) − T̂_π(Φθ′)‖²_ξ ≤ 2 (α² + ϑ²β²) ‖Φθ − Φθ′‖²_ξ < ‖Φθ − Φθ′‖²_ξ.   (18)

If β_i = β = 0, so that Û^{π(i)}_i = U^{π(i)}_i, then we have a simpler contraction under the assumption that α < 1. The following corollary shows that the solution to the proxy projected Bellman equation converges to a solution that is not too far away from the true value function v_π.

Corollary 4.4. Let Assumption 4.1 hold and let β be as in Theorem 4.3. Let ṽ_π be the fixed point of the projected Bellman equation for the proxy operator T̂_π, i.e., ΠT̂_π ṽ_π = ṽ_π. Let v̂_π be the fixed point of the proxy operator T̂_π, i.e., T̂_π v̂_π = v̂_π. Let v_π be the true value function of the policy π, i.e., T_π v_π = v_π. Then it follows that

‖ṽ_π − v_π‖_ξ ≤ ( ϑβ ‖v_π‖_ξ + ‖Πv_π − v_π‖_ξ ) / ( 1 − √(2(α² + ϑ²β²)) ).   (19)

In particular, if β_i = β = 0, i.e., the proxy confidence region is actually the true confidence region, then the proxy projected Bellman equation has a solution satisfying ‖ṽ_π − v_π‖_ξ ≤ ‖Πv_π − v_π‖_ξ / (1 − α).

Theorem 4.3 guarantees that the robust projected Bellman iterations of the LSTD(λ), LSPE(λ) and TD(λ) methods converge, while Corollary 4.4 guarantees that the solution they converge to is not too far away from the true value function v_π.

4.2 Robust approximations with nonlinear architectures

In this section we consider the situation where the function approximator v_θ is a smooth but not necessarily linear function of θ. This section generalizes the results of [6] to the robust setting with confidence regions. We define robust analogues of the nonlinear GTD2 and nonlinear TDC algorithms respectively. Let M := { v_θ | θ ∈ R^d } be the manifold spanned by all possible value functions representable by our model and let PM_θ be the tangent plane of M at θ. Let TM_θ be the tangent space, i.e., the translation of PM_θ to the origin. In other words, TM_θ := { Φ_θ u | u ∈ R^d }, where Φ_θ is an n × d matrix with entries Φ_θ(i, j) := ∂v_θ(i)/∂θ_j. In the nonlinear case, we project onto the tangent space TM_θ, since projecting onto M is computationally hard.
We denote this projection by Π_θ; it is also taken with respect to a weighted Euclidean norm ‖·‖_ξ. The mean squared projected Bellman equation (MSPBE) loss function was proposed by [6] as an extension of [22, 21]:

MSPBE(θ) = ‖v_θ − Π_θ T_π v_θ‖²_ξ,

where we now project onto the tangent space TM_θ. Since the number n of states is prohibitively large, we want stochastic gradient algorithms that run in time polynomial in d. Therefore, we assume that the confidence region of every state-action pair is the same: U^a_i = U and Û^a_i = Û. The robust version of the MSPBE loss function, the mean squared robust projected Bellman equation (MSRPBE) loss, can then be defined in terms of the robust Bellman operator with the proxy confidence region Û and proxy uncertainty set P̂^{π(i)}_i as

MSRPBE(θ) =
‖v_θ − Π_θ T̂_π v_θ‖²_ξ.   (20)

In order to derive stochastic gradient descent algorithms for minimizing the MSRPBE loss function, we need to take the gradient of σ_P(v_θ) for a convex set P. The gradient μ of σ is given by

μ_P(θ) := ∇ max_{y∈P} y^⊤ v_θ = Φ_θ^⊤ arg max_{y∈P} y^⊤ v_θ,   (21)

where Φ_θ(i) := ∇v_θ(i). Let us denote Φ_θ(i) simply by φ and Φ_θ(i′) by φ′, where i′ is the next sampled state. Let us denote by Û the proxy confidence region Û^{π(i)}_i of state i and the policy π under evaluation. Let

h(θ, u) := −E[ (d̃ − φ^⊤ u) ∇² v_θ(i) u ],   (22)

where d̃ is the robust temporal difference error. As in [6], we may express ∇MSRPBE(θ) in terms of h(θ, w), where w = E[φφ^⊤]^{−1} E[d̃ φ]. We refer the reader to the supplementary material for the details. This leads us to the following robust analogues of nonlinear GTD2 and nonlinear TDC, where we update the estimators w_k of w as

w_{k+1} := w_k + β_k (d̃_k − φ_k^⊤ w_k) φ_k,

with the parameters θ_k being updated on a slower timescale as

θ_{k+1} := Γ( θ_k + α_k { (φ_k − ϑφ′_k − ϑμ_Û(θ)) (φ_k^⊤ w_k) − h_k } )   (robust-nonlinear-GTD2),   (23)
θ_{k+1} := Γ( θ_k + α_k { d̃_k φ_k − (ϑφ′_k + ϑμ_Û(θ)) (φ_k^⊤ w_k) − h_k } )   (robust-nonlinear-TDC),   (24)

where h_k := (d̃_k − φ_k^⊤ w_k) ∇² v_{θ_k}(i_k) w_k and Γ is a projection onto an appropriately chosen compact set C with a smooth boundary, as in [6]. Under the assumption of Lipschitz continuous gradients and suitable assumptions on the step lengths α_k and β_k and the confidence region Û, the updates of equations (23) and (24) converge with probability 1 to a local optimum of MSRPBE(θ). See the supplementary material for the exact statement and proof of convergence. Note that in general computing μ_Û(θ) would take time polynomial in n, but it can be done in O(d²) time using a rank-d approximation to Û.

5 Experiments

We implemented robust versions of Q-learning and SARSA as in Section 3 and evaluated their performance against the nominal algorithms using the OpenAI gym framework [9].
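Before turning to the experimental details, note that for an ℓ2-ball proxy region the arg max in equation (21) has the closed form y* = r v_θ / ‖v_θ‖₂, which makes the μ_Û(θ) term cheap to evaluate. A sketch, assuming NumPy and that the Jacobian Φ_θ is supplied by the model:

```python
import numpy as np

def mu_ball(Phi_theta, v_theta, r):
    """Gradient of sigma_U(v_theta) w.r.t. theta for U = {y : ||y||_2 <= r},
    via equation (21): the arg max over the ball is y* = r v / ||v||, so
    mu = Phi_theta^T y*."""
    y_star = r * v_theta / np.linalg.norm(v_theta)
    return Phi_theta.T @ y_star
```

For a linear model v_θ = Φθ this matches the analytic gradient of σ(θ) = r‖Φθ‖₂, which gives a quick finite-difference sanity check.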
To test the performance of the robust algorithms, we perturb the models slightly by choosing, with a small probability p, a random state after every action. The size of the confidence region U^a_i for the robust model is chosen by 10-fold cross validation via line search. After the value functions are learned for the robust and the nominal algorithms, we evaluate their performance on the true environment. To compare the algorithms we compare both the cumulative reward as well as the tail distribution function (complementary cumulative distribution function) as in [24], which for every a plots the probability that the algorithm earned a reward of at least a. Note that there is a trade-off in the performance of the robust versus the nominal algorithms with the value of p, due to the presence of the β term in the convergence results. See Figure 1 for a comparison. More figures and detailed results are included in the supplementary material.

Figure 1: Line search, tail distribution, and cumulative rewards during the transient phase of robust vs. nominal Q-learning on FrozenLake-v0 with p = 0.01. Note that the instability of reward as a function of the size of the uncertainty set (left) is due to the small sample size used in line search.

Acknowledgments

The authors would like to thank Guy Tennenholtz and anonymous reviewers for helping improve the presentation of the paper.

References

[1] J. A. Bagnell, A. Y. Ng, and J. G. Schneider. Solving uncertain Markov decision processes. 2001.
[2] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.
[3] D. P. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Lab. for Info. and Decision Systems Report LIDS-P-2349, MIT, Cambridge, MA, 1996.
[4] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-dynamic programming: an overview.
In Decision and Control, 1995, Proceedings of the 34th IEEE Conference on, volume 1, pages 560–564. IEEE, 1995.
[5] D. P. Bertsekas and H. Yu. Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics, 227(1):27–50, 2009.
[6] S. Bhatnagar, D. Precup, D. Silver, R. S. Sutton, H. R. Maei, and C. Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pages 1204–1212, 2009.
[7] J. A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49(2-3):233–246, 2002.
[8] S. J. Bradtke and A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33–57, 1996.
[9] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[10] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203–213, 2010.
[11] G. N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.
[12] S. H. Lim, H. Xu, and S. Mannor. Reinforcement learning in robust Markov decision processes. In Advances in Neural Information Processing Systems, pages 701–709, 2013.
[13] J. Morimoto and K. Doya. Robust reinforcement learning. Neural Computation, 17(2):335–359, 2005.
[14] A. Nedić and D. P. Bertsekas. Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems, 13(1):79–110, 2003.
[15] A. Nilim and L. El Ghaoui. Robustness in Markov decision problems with uncertain transition matrices. In NIPS, pages 839–846, 2003.
[16] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
[17] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 703. John Wiley & Sons, 2007.
[18] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
[19] A. Shapiro and A. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523–542, 2002.
[20] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.
[21] R. S. Sutton, H. R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 993–1000. ACM, 2009.
[22] R. S. Sutton, H. R. Maei, and C. Szepesvári. A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pages 1609–1616, 2009.
[23] A. Tamar, Y. Glassner, and S. Mannor. Optimizing the CVaR via sampling. arXiv preprint arXiv:1404.3862, 2014.
[24] A. Tamar, S. Mannor, and H. Xu. Scaling up robust MDPs using function approximation. In ICML, volume 32, 2014.
[25] W. Wiesemann, D. Kuhn, and B. Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153–183, 2013. | 2017 | 351 |
6,843 | Concrete Dropout Yarin Gal yarin.gal@eng.cam.ac.uk University of Cambridge and Alan Turing Institute, London Jiri Hron jh2084@cam.ac.uk University of Cambridge Alex Kendall agk34@cam.ac.uk University of Cambridge

Abstract

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.

1 Introduction

Well-calibrated uncertainty is crucial for many tasks in deep learning: from the detection of adversarial examples [25], through an agent exploring its environment safely [10, 18], to analysing failure cases in autonomous driving vision systems [20]. Tasks such as these depend on good uncertainty estimates to perform well, with miscalibrated uncertainties in reinforcement learning (RL) having the potential to lead to over-exploration of the environment. Or, much worse, miscalibrated uncertainties in an autonomous driving vision system can lead to its failure to detect its own ignorance about the world, resulting in the loss of human life [29].
A principled technique for obtaining uncertainty in models such as the above is Bayesian inference, with dropout [9, 14] being a practical inference approximation. In dropout inference the neural network is trained with dropout at training time, and at test time the output is evaluated by dropping units randomly to generate samples from the predictive distribution [9]. But to get well-calibrated uncertainty estimates it is necessary to adapt the dropout probability as a variational parameter to the data at hand [7]. In previous works this was done through a grid-search over the dropout probabilities [9]. Grid-search can pose difficulties though in certain tasks. Grid-search is a prohibitive operation with large models such as the ones used in Computer Vision [19, 20], where multiple GPUs would be used to train a single model. Grid-searching over the dropout probability in such models would require either an immense waste of computational resources, or extremely prolonged experimentation cycles. More so, the number of possible per-layer dropout configurations grows exponentially as the number of model layers increases. Researchers have therefore restricted the grid-search to a small number of possible dropout values to make such search feasible [8], which in turn might hurt uncertainty calibration in vision models for autonomous systems. In other tasks a grid-search over the dropout probabilities is impossible altogether. In tasks where the amount of data changes over time, for example, the dropout probability should be decreased as the amount of data increases [7]. This is because the dropout probability has to diminish to zero in the limit of data, with the model explaining away its uncertainty completely (this is explained in more detail in §2). RL is an example setting where the dropout probability has to be adapted dynamically.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The amount of data collected by the agent increases steadily with each episode, and in order to reduce the agent's uncertainty, the dropout probability must be decreased. Grid-searching over the dropout probability is impossible in this setting, as the agent would have to be reset and re-trained on the entire data with each newly acquired episode. A method to tune the dropout probability which results in good accuracy and uncertainty estimates is needed then. Existing literature on tuning the dropout probability is sparse. Current methods include the optimisation of α in Gaussian dropout following its variational interpretation [23], and overlaying a binary belief network to optimise the dropout probabilities as a function of the inputs [2]. The latter approach is of limited practicality with large models due to the increase in model size. With the former approach [23], practical use reveals some unforeseen difficulties [28]. Most notably, the α values have to be truncated at 1, as the KL approximation would diverge otherwise. In practice the method under-performs. In this work we propose a new practical dropout variant which can be seen as a continuous relaxation of the discrete dropout technique. Relying on recent techniques in Bayesian deep learning [16, 27], together with appropriate regularisation terms derived from dropout's Bayesian interpretation, our variant allows the dropout probability to be tuned using gradient methods. This results in better-calibrated uncertainty estimates in large models, avoiding the coarse and expensive grid-search over the dropout probabilities. Further, this allows us to use dropout in RL tasks in a principled way. We analyse the behaviour of our proposed dropout variant on a wide variety of tasks. We study its ability to capture different types of uncertainty on a simple synthetic dataset with known ground truth uncertainty, and show how its behaviour changes with increasing amounts of data versus model size.
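The test-time sampling procedure described above (keep dropout active and average over stochastic forward passes) can be sketched as follows. This is a minimal NumPy illustration with an assumed two-layer ReLU network; the weights and shapes are placeholders, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.1, T=200):
    """Keep dropout on at test time and run T stochastic forward passes;
    the sample mean and variance summarise the predictive distribution
    (the variance reflects the model's epistemic uncertainty)."""
    outs = []
    for _ in range(T):
        h = np.maximum(0.0, x @ W1)            # hidden ReLU layer
        mask = rng.random(h.shape) >= p        # drop each unit with probability p
        outs.append((h * mask / (1.0 - p)) @ W2)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.var(axis=0)
```

With p = 0 the forward pass is deterministic and the sampled variance collapses to zero, which mirrors the point made below: the dropout probability controls the magnitude of the epistemic uncertainty.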
We show improved accuracy and uncertainty on popular datasets in the field, and further demonstrate our variant on large models used in the Computer Vision community, showing a significant reduction in experiment time as well as improved model performance and uncertainty calibration. We demonstrate our dropout variant in a model-based RL task, showing that the agent automatically reduces its uncertainty as the amount of data increases, and give insights into common practice in the field where a small dropout probability is often used with the shallow layers of a model, and a large dropout probability used with the deeper layers. 2 Background In order to understand the relation between a model’s uncertainty and the dropout probability, we start with a slightly philosophical discussion of the different types of uncertainty available to us. This discussion will be grounded in the development of new tools to better understand these uncertainties in the next section. Three types of uncertainty are often encountered in Bayesian modelling. Epistemic uncertainty captures our ignorance about the models most suitable to explain our data; Aleatoric uncertainty captures noise inherent in the environment; Lastly, predictive uncertainty conveys the model’s uncertainty in its output. Epistemic uncertainty reduces as the amount of observed data increases— hence its alternative name “reducible uncertainty”. When dealing with models over functions, this uncertainty can be captured through the range of possible functions and the probability given to each function. This uncertainty is often summarised by generating function realisations from our distribution and estimating the variance of the functions when evaluated on a fixed set of inputs. Aleatoric uncertainty captures noise sources such as measurement noise—noises which cannot be explained away even if more data were available (although this uncertainty can be reduced through the use of higher precision sensors for example). 
This uncertainty is often modelled as part of the likelihood, at the top of the model, where we place some noise corruption process on the function's output. Gaussian corrupting noise is often assumed in regression, although other noise sources such as Laplace noise are popular as well. By inferring the Gaussian likelihood's precision parameter τ, for example, we can estimate the amount of aleatoric noise inherent in the data. Combining both types of uncertainty gives us the predictive uncertainty—the model's confidence in its prediction, taking into account noise it can explain away and noise it cannot. This uncertainty is often obtained by generating multiple functions from our model and corrupting them with noise (with precision τ). Calculating the variance of these outputs on a fixed set of inputs, we obtain the model's predictive uncertainty. This uncertainty has different properties for different inputs. Inputs near the training data will have a smaller epistemic uncertainty component, while inputs far away from the training data will have higher epistemic uncertainty. Similarly, some parts of the input space might have larger aleatoric uncertainty than others, with these inputs producing larger measurement error, for example. These different types of uncertainty are of great importance in fields such as AI safety [1] and autonomous decision making, where the model's epistemic uncertainty can be used to avoid making uninformed decisions with potentially life-threatening implications [20]. When using dropout neural networks (or any other stochastic regularisation technique), a randomly drawn masked weight matrix corresponds to a function draw [7]. Therefore, the dropout probability, together with the weight configuration of the network, determines the magnitude of the epistemic uncertainty. For a fixed dropout probability p, high magnitude weights will result in higher output variance, i.e. higher epistemic uncertainty.
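The MC estimates just described can be written down in a few lines; a minimal numpy sketch (the helper name and the toy draws are ours, for illustration only, assuming a Gaussian likelihood with precision tau):

```python
import numpy as np

def predictive_variance(function_draws, tau):
    # function_draws: (T, n) array of T stochastic forward passes on n inputs.
    # Epistemic variance: spread of the function draws at each input.
    epistemic = function_draws.var(axis=0)
    # Aleatoric variance: observation noise, 1/tau for a Gaussian likelihood.
    aleatoric = 1.0 / tau
    # Predictive variance combines both sources.
    return epistemic + aleatoric

# Identical draws => zero epistemic uncertainty; only the noise term remains.
draws = np.tile(np.array([1.0, 2.0, 3.0]), (10, 1))
var = predictive_variance(draws, tau=4.0)  # -> [0.25, 0.25, 0.25]
```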
With a fixed p, a model wanting to decrease its epistemic uncertainty will have to reduce its weight magnitude (and set the weights to be exactly zero to have zero epistemic uncertainty). Of course, this is impossible, as the model will not be able to explain the data well with zero weight matrices, therefore some balance between desired output variance and weight magnitude is achieved¹. For uncertainty representation, this can be seen as a degeneracy of the model when the dropout probability is held fixed. Allowing the probability to change (for example by grid-searching it to maximise validation log-likelihood [9]) will let the model decrease its epistemic uncertainty by choosing smaller dropout probabilities. But if we wish to replace the grid-search with a gradient method, we need to define an optimisation objective to optimise p with respect to. This is not trivial, as our aim is not to maximise model performance, but rather to obtain good epistemic uncertainty. What is a suitable objective for this? This is discussed next. 3 Concrete Dropout One of the difficulties with the approach above is that grid-searching over the dropout probability can be expensive and time consuming, especially when done with large models. Even worse, when operating in a continuous learning setting such as reinforcement learning, the model should collapse its epistemic uncertainty as it collects more data. When grid-searching, this means that data has to be set aside so that a new model can be trained with a smaller dropout probability once the dataset is large enough. This is infeasible in many RL tasks. Instead, the dropout probability can be optimised using a gradient method, where we seek to minimise some objective with respect to (w.r.t.) that parameter. A suitable objective follows from dropout's variational interpretation [7]. Following the variational interpretation, dropout is seen as an approximating distribution q_θ(ω)
to the posterior in a Bayesian neural network with a set of random weight matrices ω = {W_l}_{l=1}^L with L layers, and θ the set of variational parameters. The optimisation objective that follows from the variational interpretation can be written as:

\hat{\mathcal{L}}_{\text{MC}}(\theta) = -\frac{1}{M} \sum_{i \in S} \log p(y_i \mid f^{\omega}(x_i)) + \frac{1}{N} \text{KL}(q_\theta(\omega) \,\|\, p(\omega))   (1)

with θ the parameters to optimise, N the number of data points, S a random set of M data points, f^ω(x_i) the neural network's output on input x_i when evaluated with weight matrix realisation ω, and p(y_i | f^ω(x_i)) the model's likelihood, e.g. a Gaussian with mean f^ω(x_i). The KL term KL(q_θ(ω) || p(ω)) is a "regularisation" term which ensures that the approximate posterior q_θ(ω) does not deviate too far from the prior distribution p(ω). A note on our choice for a prior is given in appendix B. Assume that the set of variational parameters for the dropout distribution satisfies θ = {M_l, p_l}_{l=1}^L, a set of mean weight matrices and dropout probabilities, such that q_θ(ω) = \prod_l q_{M_l}(W_l) and q_{M_l}(W_l) = M_l \cdot \text{diag}[\text{Bernoulli}(1 - p_l)^{K_l}] for a single random weight matrix W_l of dimensions K_{l+1} by K_l. The KL term can be approximated well following [7]:

\text{KL}(q_\theta(\omega) \,\|\, p(\omega)) = \sum_{l=1}^{L} \text{KL}(q_{M_l}(W_l) \,\|\, p(W_l))   (2)

\text{KL}(q_M(W) \,\|\, p(W)) \propto \frac{l^2 (1 - p)}{2} \|M\|^2 - K H(p)   (3)

with

H(p) := -p \log p - (1 - p) \log(1 - p)   (4)

the entropy of a Bernoulli random variable with probability p. The entropy term can be seen as a dropout regularisation term. This regularisation term depends on the dropout probability p alone, which means that the term is constant w.r.t. model weights. For this reason the term can be omitted when the dropout probability is not optimised, but the term is crucial when it is optimised. Minimising the KL divergence between q_M(W) and the prior is equivalent to maximising the entropy of a Bernoulli random variable with probability 1 − p.

¹This raises an interesting hypothesis: does dropout work well because it forces the weights to be near zero, i.e. regularising the weights? We will comment on this later.
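The per-layer KL term of eqs. (2)–(4) has a simple closed form; a minimal numpy sketch (the function names and the unit prior length scale are our choices, not from the paper):

```python
import numpy as np

def bernoulli_entropy(p):
    # H(p) = -p log p - (1 - p) log(1 - p), eq. (4); requires 0 < p < 1
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def layer_kl(M, p, length_scale=1.0):
    # KL(q_M(W) || p(W)) up to an additive constant, eq. (3):
    #   l^2 (1 - p) / 2 * ||M||^2  -  K * H(p),
    # with K the number of input units (one Bernoulli per column of M).
    K = M.shape[1]
    weight_term = 0.5 * length_scale**2 * (1.0 - p) * np.sum(M**2)
    return weight_term - K * bernoulli_entropy(p)

# With zero weights the KL reduces to the negated mask entropy alone,
# which is minimised at p = 0.5: the prior pulls p towards 0.5.
reg = layer_kl(np.zeros((3, 4)), p=0.5)  # -> -4 * log(2)
```

Note how the two terms pull in opposite directions: the weight term shrinks (1 − p)||M||², while the entropy term rewards p near 0.5, exactly the trade-off discussed in the text.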
This pushes the dropout probability towards 0.5—the value at which the entropy is highest. The scaling of the regularisation term means that large models will push the dropout probability towards 0.5 much more than smaller models, but as the amount of data N increases the dropout probability will be pushed towards 0 (because of the first term in eq. (1)). We need to evaluate the derivative of the optimisation objective eq. (1) w.r.t. the parameter p. Several estimators are available to do this: for example the score function estimator (also known as the likelihood ratio estimator, or Reinforce [6, 12, 30, 35]), or the pathwise derivative estimator (this estimator is also referred to in the literature as the re-parametrisation trick, infinitesimal perturbation analysis, and stochastic backpropagation [11, 22, 31, 34]). The score function estimator is known to have extremely high variance in practice, making optimisation difficult. Following early experimentation with the score function estimator, it was evident that the variance was not manageable. The pathwise derivative estimator is known to have much lower variance than the score function estimator in many applications, and indeed was used by [23] with Gaussian dropout. However, unlike the Gaussian dropout setting, in our case we need to optimise the parameter of a Bernoulli distribution. The pathwise derivative estimator assumes that the distribution at hand can be re-parametrised in the form g(θ, ε), with θ the distribution's parameters and ε a random variable which does not depend on θ. This cannot be done with the Bernoulli distribution. Instead, we replace dropout's discrete Bernoulli distribution with its continuous relaxation. More specifically, we use the Concrete distribution relaxation. This relaxation allows us to re-parametrise the distribution and use the low variance pathwise derivative estimator instead of the score function estimator.
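The variance problem with the score function estimator is easy to see numerically; a toy check of ours (not an experiment from the paper), sampling the estimator of d/dp E[z] for z ∼ Bernoulli(p), whose true value is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1
z = rng.binomial(1, p, size=100_000).astype(float)

# Score-function (Reinforce) gradient samples:
#   f(z) * d/dp log P(z; p) = z * (z/p - (1 - z)/(1 - p)) = z/p for z in {0, 1}
grad_samples = z * (z / p - (1.0 - z) / (1.0 - p))

mean_grad = grad_samples.mean()  # unbiased: close to 1
var_grad = grad_samples.var()    # ~ (1 - p)/p = 9: huge per-sample variance
```

A small dropout probability, exactly the regime the RL setting drives towards, makes the variance (1 − p)/p blow up, which motivates the pathwise estimator used next.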
The Concrete distribution is a continuous distribution used to approximate discrete random variables, suggested in the context of latent random variables in deep generative models [16, 27]. One way to view the distribution is as a relaxation of the "max" function in the Gumbel-max trick to a "softmax" function, which allows the discrete random variable z to be written in the form z̃ = g(θ, ε), with parameters θ and ε a random variable which does not depend on θ. We will concentrate on the binary random variable case (i.e. a Bernoulli distribution). Instead of sampling the random variable from the discrete Bernoulli distribution (generating zeros and ones), we sample realisations from the Concrete distribution with some temperature t, which results in values in the interval [0, 1]. This distribution concentrates most mass on the boundaries of the interval, 0 and 1. In fact, for the one-dimensional case here with the Bernoulli distribution, the Concrete distribution relaxation z̃ of the Bernoulli random variable z reduces to a simple sigmoid distribution which has a convenient parametrisation:

\tilde{z} = \text{sigmoid}\left( \frac{1}{t} \cdot \big( \log p - \log(1 - p) + \log u - \log(1 - u) \big) \right)   (5)

with uniform u ∼ Unif(0, 1). This relation between u and z̃ is depicted in figure 10 in appendix A. Here u is a random variable which does not depend on our parameter p. The functional relation between z̃ and u is differentiable w.r.t. p. With the Concrete relaxation of the dropout masks, it is now possible to optimise the dropout probability using the pathwise derivative estimator. We refer to this Concrete relaxation of the dropout masks as Concrete Dropout. A Python code snippet for Concrete dropout in Keras [5] is given in appendix C, spanning about 20 lines of code, and experiment code is given online². We next assess the proposed dropout variant empirically on a large array of tasks.
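Eq. (5) itself is a one-liner; a minimal numpy sketch (the function name and constants are ours, and the clipping of u away from {0, 1} is for numerical stability only):

```python
import numpy as np

def concrete_bernoulli(p, t, size, rng):
    # Relaxed Bernoulli(p) draw, eq. (5):
    #   z~ = sigmoid((log p - log(1 - p) + log u - log(1 - u)) / t)
    u = rng.uniform(1e-7, 1.0 - 1e-7, size=size)
    logits = (np.log(p) - np.log1p(-p) + np.log(u) - np.log1p(-u)) / t
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(1)
z = concrete_bernoulli(p=0.3, t=0.02, size=100_000, rng=rng)
# At a low temperature the relaxation concentrates near {0, 1} and its
# mean approaches p, recovering the discrete Bernoulli in the t -> 0 limit.
```

Because z is a smooth function of p, gradients of any loss through z flow back to p, which is exactly what the pathwise estimator requires.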
²https://github.com/yaringal/ConcreteDropout 4 Experiments We next analyse the behaviour of our proposed dropout variant on a wide variety of tasks. We study how our dropout variant captures different types of uncertainty on a simple synthetic dataset with known ground truth uncertainty, and show how its behaviour changes with increasing amounts of data versus model size (§4.1). We show that Concrete dropout matches the performance of hand-tuned dropout on the UCI datasets (§4.2) and MNIST (§4.3), and further demonstrate our variant on large models used in the Computer Vision community (§4.4). We show a significant reduction in experiment time as well as improved model performance and uncertainty calibration. Lastly, we demonstrate our dropout variant in a model-based RL task extending on [10], showing that the agent correctly reduces its uncertainty dynamically as the amount of data increases (§4.5). We compare the performance of hand-tuned dropout to our Concrete dropout variant in the following experiments. We chose not to compare to Gaussian dropout in our experiments, as when optimising Gaussian dropout's α following its variational interpretation [23], the method is known to underperform [28] (however, Gal [7] compared Gaussian dropout to Bernoulli dropout and found that when optimising the dropout probability by hand, the two methods perform similarly). 4.1 Synthetic data The tools above allow us to separate both epistemic and aleatoric uncertainties with ease. We start with an analysis of how different uncertainties behave with different data sizes. For this we optimise both the dropout probability p as well as the (per point) model precision τ (following [20] for the latter). We generated simple data from the function y = 2x + 8 + ε with known noise ε ∼ N(0, 1) (i.e.
corrupting the observations with noise with a fixed standard deviation 1), creating datasets increasing in size ranging from 10 data points (example in figure 1e) up to 10,000 data points (example in figure 1f). Knowing the true amount of noise in our synthetic dataset, we can assess the quality of the uncertainties predicted by the model. We used models with three hidden layers of size 1024 and ReLU non-linearities, and repeated each experiment three times, averaging the experiments' results. Figure 1a shows the epistemic uncertainty (in standard deviation) decreasing as the amount of data increases. This uncertainty was computed by generating multiple function draws and evaluating the functions over a test set generated from the same data distribution. Figure 1b shows the aleatoric uncertainty tending towards 1 as the data increases—showing that the model obtains an increasingly improved estimate of the model precision as more data is given. Finally, figure 1c shows the predictive uncertainty obtained by combining the variances of both plots above. This uncertainty seems to converge to a constant value as the epistemic uncertainty decreases and the estimation of the aleatoric uncertainty improves. Lastly, the optimised dropout probabilities corresponding to the various dataset sizes are given in figure 1d. As can be seen, the optimal dropout probability in each layer decreases as more data is observed, starting from near 0.5 probabilities in all layers with the smallest dataset, and converging to values ranging between 0.2 and 0.4 when 10,000 data points are given to the model. More interestingly, the optimal dropout probability for the input layer is constant at near-zero, which is often observed with hand-tuned dropout probabilities as well. [Figure 1 panels: (a) epistemic, (b) aleatoric, (c) predictive uncertainty; (d) optimised dropout probability values per layer, first layer in blue; (e) example dataset with 10 data points; (f) example dataset with 10,000 data points.]
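The synthetic setup above is easy to reproduce; a sketch (ordinary least squares stands in for the network's mean fit here, so this only illustrates the recovery of the aleatoric noise level, not the dropout model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.uniform(-1, 1, size=N)
y = 2 * x + 8 + rng.normal(0, 1, size=N)  # y = 2x + 8 + eps, eps ~ N(0, 1)

# With the mean well fit, the residual standard deviation recovers the
# aleatoric noise level (std 1) that the model's precision tau estimates.
A = np.stack([x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual_std = np.std(y - A @ coef)  # coef ~ [2, 8], residual_std ~ 1.0
```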
Figure 1: Different uncertainties (epistemic, aleatoric, and predictive, in std) as the number of data points increases, as well as optimised dropout probabilities and example synthetic datasets. Figure 2: Test negative log likelihood; the lower the better (best viewed in colour). Figure 3: Test RMSE; the lower the better (best viewed in colour). 4.2 UCI We next assess the performance of our technique in a regression setting using the popular UCI benchmark [26]. All experiments were performed using a fully connected neural network (NN) with 2 hidden layers, 50 units each, following the experiment setup of [13]. We compare against a two layer Bayesian NN approximated by standard dropout [9] and a Deep Gaussian Process of depth 2 [4]. Test negative log likelihood for 4 datasets is reported in figure 2, with test error reported in figure 3. Full results as well as experiment setup are given in appendix D. Figure 4 shows posterior dropout probabilities across different cross validation splits. Intriguingly, the input layer's dropout probability (p) always decreases to essentially zero. This is a recurring pattern we observed in all UCI dataset experiments, and is further discussed in the next section. Figure 4: Converged dropout probabilities per layer, split and UCI dataset (best viewed on a computer screen). 4.3 MNIST We further experimented with the standard classification benchmark MNIST [24]. Here we assess the accuracy of Concrete dropout, and study its behaviour in relation to the training set size and model size. We assessed a fully connected NN with 3 hidden layers and ReLU activations. All models were trained for 500 epochs (∼2·10⁵ iterations); each experiment was run three times using random initial settings in order to avoid reporting spurious results. Concrete dropout achieves MNIST accuracy of 98.6%, matching that of hand-tuned dropout. Figure 5 shows a decrease in converged dropout probabilities as the size of the dataset increases.
Notice that while the dropout probabilities in the third hidden and output layers vary by a relatively small amount, they converge to zero in the first two layers. This happens despite the fact that the 2nd and 3rd hidden layers are of the same shape and prior length scale setting. Note how the optimal dropout probabilities are zero in the first layer, matching the previous results. However, observe that the model only becomes confident about the optimal input transformation (dropout probabilities are set to zero) after seeing a relatively large number of examples in comparison to the model size (explaining the results in §4.1 where the dropout probabilities of the first layer did not collapse to zero). This implies that removing dropout a priori might lead to suboptimal results if the training set is not sufficiently informative, and it is best to allow the probability to adapt to the data. Figure 6 provides further insights by comparing the above examined 3x512 MLP model (orange) to other architectures. As can be seen, the dropout probabilities in the first layer stay close to zero, but the others steadily increase with the model size, as the epistemic uncertainty increases. Further results are given in appendix D.1. Figure 5: Converged dropout probabilities as a function of training set size (3x512 MLP). Figure 6: Converged dropout probabilities as a function of the number of hidden units. Figure 7: Example output from our semantic segmentation model (a large computer vision model): (a) input image, (b) semantic segmentation, (c) epistemic uncertainty. 4.4 Computer vision In computer vision, dropout is typically applied to the final dense layers as a regulariser, because the top layers of the model contain the majority of the model's parameters [32].
For encoder-decoder semantic segmentation models, such as Bayesian SegNet, [21] found through grid-search that the best performing model used dropout over the middle layers (central encoder and decoder units), as they contain the most parameters. However, the vast majority of computer vision models leave the dropout probability fixed at p = 0.5, because it is prohibitively expensive to optimise manually, with a few notable exceptions which required considerable computing resources [15, 33]. We demonstrate Concrete dropout's efficacy by applying it to the DenseNet model [17] for semantic segmentation (example input, output, and uncertainty map is given in figure 7). We use the same training scheme and hyper-parameters as the original authors [17]. We use a Concrete dropout weight regulariser of 10⁻⁸ (derived from the prior length-scale) and dropout regulariser 0.01 × N × H × W, where N is the training dataset size and H × W is the number of pixels in the image. This is because the loss is pixel-wise, with the random image crops used as model input. The original model uses a hand-tuned dropout p = 0.2. Table 1 shows that replacing dropout with Concrete dropout marginally improves performance.

Table 1: Comparing the performance of Concrete dropout against baseline models with DenseNet [17] on the CamVid road scene semantic segmentation dataset.
DenseNet Model Variant            MC Sampling  IoU
No Dropout                        -            65.8
Dropout (manually-tuned p = 0.2)  ✗            67.1
Dropout (manually-tuned p = 0.2)  ✓            67.2
Concrete Dropout                  ✗            67.2
Concrete Dropout                  ✓            67.4

Table 2: Calibration plot. Concrete dropout reduces the uncertainty calibration RMSE compared to the baselines. Concrete dropout is tolerant to initialisation values. Figure 8 shows that for a range of initialisation choices in p = [0.05, 0.5] we converge to a similar optimum. Interestingly, we observe that Concrete dropout learns a different pattern to manual dropout tuning results [21].
The second and last layers have larger dropout probability, while the first and middle layers are largely deterministic. Concrete dropout improves the calibration of uncertainty obtained from the models. Figure 2 shows calibration plots of a Concrete dropout model against the baselines. This compares the model's predicted uncertainty against the accuracy frequencies, where a perfectly calibrated model corresponds to the line y = x. Figure 8: Learned Concrete dropout probabilities for the first, second, middle and last two layers in a semantic segmentation model ((a) L = 0, (b) L = 1, (c) L = n/2, (d) L = n − 1, (e) L = n); p converges to the same minima for a range of initialisations from p = [0.05, 0.5]. The Concrete dropout layer requires negligible additional compute compared with standard dropout layers with our implementation. However, using conventional dropout requires considerable resources to manually tune dropout probabilities. Typically, computer vision models consist of 10M+ parameters and take multiple days to train on a modern GPU. Using Concrete dropout can decrease the time of model training by weeks by automatically learning the dropout probabilities. 4.5 Model-based reinforcement learning Existing RL research using dropout uncertainty would hold the dropout probability fixed, or decrease it following a schedule [9, 10, 18]. This gives a proxy to the epistemic uncertainty, but raises other difficulties such as planning the dropout schedule. This can also lead to under-exploitation of the environment, as was reported in [9] with Thompson sampling. To avoid this under-exploitation, Gal et al. [10] for example performed a grid-search to find the p that trades off exploration and exploitation over the acquisition of multiple episodes at once. We repeated the experiment setup of [10], where an agent attempts to balance a pendulum hanging from a cart by applying force to the cart. [10] used a fixed dropout probability of 0.1 in the dynamics model.
Instead, we use Concrete dropout with the dynamics model, and are able to match their cumulative reward (16.5 with 25 time steps). Concrete dropout allows the dropout probability to adapt as more data is collected, instead of being set once and held fixed. Figures 9a–9c show the optimised dropout probabilities per layer vs. the number of episodes (acquired data), as well as the fixed probabilities in the original setup. Concrete dropout automatically decreases the dropout probability as more data is observed. Figures 9d–9g show the dynamics model's epistemic uncertainty for each one of the four state components in the system: [x, ẋ, θ, θ̇] (cart location, velocity, pendulum angle, and angular velocity). This uncertainty was calculated on a validation set split from the total data after each episode. Note how with Concrete dropout the epistemic uncertainty decreases over time as more data is observed. Figure 9: Concrete dropout in model-based RL. Left three plots ((a) L = 0, (b) L = 1, (c) L = 2): dropout probabilities for the 3 layers of the dynamics model as a function of the number of episodes (amount of data) observed by the agent (Concrete dropout in blue, baseline in orange). Right four plots ((d)–(g)): epistemic uncertainty over the dynamics model output for the four state components [x, ẋ, θ, θ̇]. Best viewed on a computer screen. 5 Conclusions and Insights In this paper we introduced Concrete dropout—a principled extension of dropout which allows the dropout probabilities to be tuned. We demonstrated improved calibration and uncertainty estimates, as well as reduced experimentation cycle time. Two interesting insights arise from this work. First, the common practice in the field where a small dropout probability is often used with the shallow layers of a model seems to be supported by dropout's variational interpretation. This can be seen as evidence towards the variational explanation of dropout.
Secondly, an open question arising from previous research was whether dropout works well because it forces the weights to be near zero with fixed p. Here we showed that allowing p to adapt gives performance comparable to that of an optimal fixed p. Allowing p to change does not force the weight magnitude to be near zero, suggesting that this hypothesis is false. References [1] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mane. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016. [2] Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pages 3084–3092, 2013. [3] Matthew J. Beal and Zoubin Ghahramani. The variational Bayesian EM algorithm for incomplete data: With application to scoring graphical model structures. Bayesian Statistics, 2003. [4] Thang D. Bui, José Miguel Hernández-Lobato, Daniel Hernández-Lobato, Yingzhen Li, and Richard E. Turner. Deep Gaussian processes for regression using approximate expectation propagation. In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 1472–1481, 2016. [5] François Chollet. Keras, 2015. URL https://github.com/fchollet/keras. GitHub repository. [6] Michael C. Fu. Chapter 19: Gradient estimation. In Shane G. Henderson and Barry L. Nelson, editors, Simulation, volume 13 of Handbooks in Operations Research and Management Science, pages 575–616. Elsevier, 2006. [7] Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016. [8] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. NIPS, 2016. [9] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016. [10] Yarin Gal, Rowan McAllister, and Carl E. Rasmussen.
Improving PILCO with Bayesian neural network dynamics models. In Data-Efficient Machine Learning workshop, ICML, April 2016. [11] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2013. [12] Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990. [13] Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015. [14] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [15] Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. [16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. In Bayesian Deep Learning workshop, NIPS, 2016. [17] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. arXiv preprint arXiv:1611.09326, 2016. [18] Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. In ArXiv e-prints, 1702.01182, 2017. [19] Michael Kampffmeyer, Arnt-Borre Salberg, and Robert Jenssen. Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2016. [20] Alex Kendall and Yarin Gal. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In ArXiv e-prints, 1703.04977, 2017. [21] Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. 
Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015. [22] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. [23] Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In NIPS. Curran Associates, Inc., 2015. [24] Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 1998. URL http://yann.lecun.com/exdb/mnist/. [25] Yingzhen Li and Yarin Gal. Dropout inference in Bayesian neural networks with alpha-divergences. In ArXiv e-prints, 1703.02914, 2017. [26] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml. [27] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. In Bayesian Deep Learning workshop, NIPS, 2016. [28] Dmitry Molchanov, Arseniy Ashuha, and Dmitry Vetrov. Dropout-based automatic relevance determination. In Bayesian Deep Learning workshop, NIPS, 2016. [29] NHTSA. PE 16-007. Technical report, U.S. Department of Transportation, National Highway Traffic Safety Administration, Jan 2017. Tesla Crash Preliminary Evaluation Report. [30] John Paisley, David Blei, and Michael Jordan. Variational Bayesian inference with stochastic search. ICML, 2012. [31] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. [32] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [33] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[34] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971–1979, 2014. [35] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Multiresolution Kernel Approximation for Gaussian Process Regression Yi Ding∗, Risi Kondor∗†, Jonathan Eskreis-Winkler† ∗Department of Computer Science, †Department of Statistics The University of Chicago, Chicago, IL, 60637 {dingy,risi,eskreiswinkler}@uchicago.edu Abstract Gaussian process regression generally does not scale beyond a few thousand data points without applying some sort of kernel approximation method. Most approximations focus on the high eigenvalue part of the spectrum of the kernel matrix, K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and that it is a direct method, which means that it also makes it easy to approximate K⁻¹ and det(K). 1 Introduction Gaussian Process (GP) regression, and its frequentist cousin, kernel ridge regression, are such natural and canonical algorithms that they have been reinvented many times by different communities under different names. In machine learning, GPs are considered one of the standard methods of Bayesian nonparametric inference [1]. Meanwhile, the same model, under the names Kriging or Gaussian Random Fields, is the de facto standard for modeling a range of natural phenomena from geophysics to biology [2]. One of the most appealing features of GPs is that, ultimately, the algorithm reduces to "just" having to compute the inverse of a kernel matrix, K. Unfortunately, this also turns out to be the algorithm's Achilles heel, since in the general case, the complexity of inverting a dense n×n matrix scales with O(n³), meaning that when the number of training examples exceeds 10⁴–10⁵, GP inference becomes problematic on virtually any computer¹. Over the course of the last 15 years, devising approximations to address this problem has become a burgeoning field.
The most common approach is to use one of the so-called Nyström methods [3], which select a small subset {x_{i_1}, . . . , x_{i_m}} of the original training data points as "anchors" and approximate K in the form K ≈ K_{∗,I} C K_{∗,I}^⊤, where K_{∗,I} is the submatrix of K consisting of columns {i_1, . . . , i_m}, and C is a matrix such as the pseudo-inverse of K_{I,I}. Nyström methods often work well in practice and have a mature literature offering strong theoretical guarantees. Still, Nyström is inherently a global low rank approximation, and, as pointed out in [4], a priori there is no reason to believe that K should be well approximable by a low rank matrix: for example, in the case of the popular Gaussian kernel k(x, x′) = exp(−(x − x′)²/(2ℓ²)), as ℓ decreases and the kernel becomes more and more "local", the number of significant eigenvalues quickly increases. This observation has motivated alternative types of approximations, including local, hierarchical and distributed ones (see Section 2). In certain contexts involving translation invariant kernels yet other strategies may be applicable [5], but these are beyond the scope of the present paper. In this paper we present a new kernel approximation method, Multiresolution Kernel Approximation (MKA), which is inspired by a combination of ideas from hierarchical matrix decomposition algorithms and multiresolution analysis. Some of the important features of MKA are that (a) it is a broad spectrum algorithm that approximates the entire kernel matrix K, not just its top eigenvectors, and (b) it is a so-called "direct" method, i.e., it yields explicit approximations to K⁻¹ and det(K). ¹In the limited case of evaluating a GP with a fixed Gram matrix on a single training set, GP inference reduces to solving a linear system in K, which scales better with n, but may be problematic when the condition number of K is large. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Notations.
We define [n] = {1, 2, ..., n}. Given a matrix A and a tuple I = (i_1, ..., i_r), A_{I,*} will denote the submatrix of A formed of the rows indexed by i_1, ..., i_r; similarly, A_{*,J} will denote the submatrix formed of the columns indexed by j_1, ..., j_p, and A_{I,J} will denote the submatrix at the intersection of rows i_1, ..., i_r and columns j_1, ..., j_p. We extend these notations to the case when I and J are sets in the obvious way. If A is a blocked matrix, then [[A]]_{i,j} will denote its (i, j) block.

2 Local vs. global kernel approximation

Recall that a Gaussian Process (GP) on a space X is a prior over functions f : X → R defined by a mean function μ(x) = E[f(x)] and a covariance function k(x, x') = Cov(f(x), f(x')). Using the most elementary model y_i = f(x_i) + ε, where ε ~ N(0, σ²) and σ² is a noise parameter, given training data {(x_1, y_1), ..., (x_n, y_n)}, the posterior is also a GP, with mean μ'(x) = μ(x) + k_x^T (K + σ²I)^{-1} y, where k_x = (k(x, x_1), ..., k(x, x_n)), y = (y_1, ..., y_n), and covariance

    k'(x, x') = k(x, x') − k_{x'}^T (K + σ²I)^{-1} k_x.   (1)

Thus (here and in the following assuming μ = 0 for simplicity), the maximum a posteriori (MAP) estimate of f is

    f̂(x) = k_x^T (K + σ²I)^{-1} y.   (2)

Ridge regression, which is the frequentist analog of GP regression, yields the same formula, but regards f̂ as the solution to a regularized risk minimization problem over a Hilbert space H induced by k. We will use "GP" as the generic term to refer to both Bayesian GPs and ridge regression.

Letting K' = K + σ²I, virtually all GP approximation approaches focus on trying to approximate the (augmented) kernel matrix K' in such a way as to make inverting it, solving K'y = α, or computing det(K') easier. For the sake of simplicity, in the following we will actually discuss approximating K, since adding the diagonal term usually doesn't make the problem any more challenging.
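As a concrete reference point, the MAP prediction formula (2) is only a few lines of NumPy. This is an illustrative sketch, not the authors' code; the Gaussian kernel, the length scale `ell` and the noise level `sigma2` are placeholder choices.

```python
import numpy as np

def rbf_kernel(X1, X2, ell=1.0):
    """Gaussian (RBF) kernel matrix: k(x, x') = exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def gp_predict(X_train, y_train, X_test, ell=1.0, sigma2=0.1):
    """MAP estimate f_hat(x) = k_x^T (K + sigma^2 I)^{-1} y, cf. Eq. (2)."""
    K = rbf_kernel(X_train, X_train, ell)
    Kx = rbf_kernel(X_test, X_train, ell)   # each row of Kx is one k_x^T
    alpha = np.linalg.solve(K + sigma2 * np.eye(len(X_train)), y_train)
    return Kx @ alpha
```

The O(n³) cost of the `solve` call on the dense n×n matrix is exactly the bottleneck the rest of the paper targets.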
2.1 Global low rank methods

As in other kernel methods, intuitively, K_{i,j} = k(x_i, x_j) encodes the degree of similarity or closeness between the two points x_i and x_j, as it relates to the degree of correlation between the value of f at x_i and at x_j. Given that k is often conceived of as a smooth, slowly varying function, one very natural idea is to take a smaller set {x_{i_1}, ..., x_{i_m}} of "landmark points" or "pseudo-inputs" and approximate k(x, x') in terms of the similarity of x to each of the landmarks, the relationship of the landmarks to each other, and the similarity of the landmarks to x'. Mathematically,

    k(x, x') ≈ Σ_{s=1}^{m} Σ_{j=1}^{m} k(x, x_{i_s}) c_{i_s, i_j} k(x_{i_j}, x'),

which, assuming that {x_{i_1}, ..., x_{i_m}} is a subset of the original point set {x_1, ..., x_n}, amounts to an approximation of the form K ≈ K_{*,I} C K_{*,I}^T, with I = {i_1, ..., i_m}. The canonical choice for C is C = W^+, where W = K_{I,I} and W^+ denotes the Moore-Penrose pseudoinverse of W. The resulting approximation

    K ≈ K_{*,I} W^+ K_{*,I}^T,   (3)

is known as the Nyström approximation, because it is analogous to the so-called Nyström extension used to extrapolate continuous operators from a finite number of quadrature points. Clearly, the choice of I is critical for a good quality approximation. Starting with the pioneering papers [6, 3, 7], over the course of the last 15 years a sequence of different sampling strategies has been developed for obtaining I, several with rigorous approximation bounds [8, 9, 10, 11]. Further variations include the ensemble Nyström method [12] and the modified Nyström method [13]. Nyström methods have the advantage of being relatively simple and of having reliable performance bounds. A fundamental limitation, however, is that the approximation (3) is inherently low rank. As pointed out in [4], there is no reason to believe that kernel matrices in general should be close to low rank.
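The Nyström approximation (3) is equally short to sketch. The function below is a hedged illustration in which the index set I is assumed given; the paper surveys the sampling strategies for actually choosing it.

```python
import numpy as np

def nystrom(K, I):
    """Nystrom approximation K ≈ K_{*,I} W^+ K_{*,I}^T with W = K_{I,I}, cf. Eq. (3)."""
    C = K[:, I]                      # K_{*,I}: the selected columns
    W = K[np.ix_(I, I)]              # K_{I,I}: anchor-anchor block
    return C @ np.linalg.pinv(W) @ C.T
```

With m anchors the factors cost O(nm) memory instead of O(n²), which is the method's appeal; the rank of the result is at most m, which is its limitation.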
An even more fundamental issue, which is less often discussed in the literature, relates to the specific form of (2). The appearance of K'^{-1} in this formula suggests that it is the low-eigenvalue eigenvectors of K' that should dominate the result of GP regression. On the other hand, multiplying the matrix by k_x largely cancels this effect, since k_x is effectively a row of a kernel matrix similar to K', and will likely concentrate most weight on the high-eigenvalue eigenvectors. Therefore, ultimately, it is not K' itself, but the relationship between the eigenvectors of K' and the data vector y that determines which part of the spectrum of K' the result of GP regression is most sensitive to.

Once again, intuition about the kernel helps clarify this point. In a setting where the function that we are regressing is smooth, and correspondingly the kernel has a large length scale parameter, it is the global, long range relationships between data points that dominate GP regression, and these can indeed be well approximated by the landmark point method. In terms of the linear algebra, the spectral expansion of K' is dominated by a few large-eigenvalue eigenvectors; we will call this the "PCA-like" scenario. In contrast, in situations where f varies more rapidly, a shorter length scale kernel is called for, and local relationships between nearby points become more important, which the landmark point method is less well suited to capture. We call this the "k-nearest neighbor type" scenario. In reality, most non-trivial GP regression problems fall somewhere in between these two extremes. In high dimensions, data points tend to be almost equally far from each other anyway, limiting the applicability of simple geometric interpretations. Nonetheless, the two scenarios illustrate the general point that one of the key challenges in large scale machine learning is integrating information from both local and global scales.
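The claim that shrinking the length scale inflates the numerical rank of K is easy to check empirically. The following sketch counts "significant" eigenvalues of a Gaussian kernel matrix; the grid of n = 200 points on [0, 1] and the relative threshold of 1e-6 are placeholder choices for illustration.

```python
import numpy as np

def n_significant_eigs(ell, n=200, thresh=1e-6):
    """Count eigenvalues of an RBF kernel matrix exceeding thresh * largest."""
    x = np.linspace(0, 1, n)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell ** 2))
    w = np.linalg.eigvalsh(K)
    return int((w > thresh * w.max()).sum())
```

A large ℓ gives the "PCA-like" scenario (a handful of significant eigenvalues); a small ℓ gives the "k-nearest neighbor type" scenario, where the significant spectrum fills most of the matrix and a global low rank approximation must fail.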
2.2 Local and hierarchical low rank methods

Realizing the limitations of the low rank approach, local kernel approximation methods have also started appearing in the literature. Broadly, these algorithms: (1) first cluster the rows/columns of K with some appropriate fast clustering method, e.g., METIS [14] or GRACLUS [15], and block K accordingly; (2) compute a low rank, but relatively high accuracy, approximation [[K]]_{i,i} ≈ U_i Σ_i U_i^T to each diagonal block of K; (3) use the {U_i} bases to compute possibly coarser approximations to the [[K]]_{i,j} off-diagonal blocks. This idea appears in its purest form in [16], and is refined in [4] in a way that avoids having to form all rows/columns of the off-diagonal blocks in the first place. Recently, [17] proposed a related approach, where all the blocks in a given row share the same row basis but have different column bases. A major advantage of local approaches is that they are inherently parallelizable. The clustering itself, however, is a delicate, and sometimes not very robust, component of these methods. In fact, divide-and-conquer type algorithms such as [18] and [19] can also be included in the same category, even though in these cases the blocking is usually random.

A natural extension of the blocking idea is to apply the divide-and-conquer approach recursively, at multiple different scales. Geometrically, this is similar to recent multiresolution data analysis approaches such as [20]. In fact, hierarchical matrix approximations, including HODLR matrices, H-matrices [21], H²-matrices [22] and HSS matrices [23], are very popular in the numerical analysis literature. While the exact details vary, each of these methods imposes a specific type of block structure on the matrix and forces the off-diagonal blocks to be low rank (Figure 1 in the Supplement).
Intuitively, nearby clusters interact in a richer way, but as we move farther away, data can be aggregated more and more coarsely, just as in the fast multipole method [24]. We know of only two applications of the hierarchical matrix methodology to kernel approximation: Börm and Garcke's H² matrix approach [25] and O'Neil et al.'s HODLR method [26]. The advantage of H² matrices is their more intricate structure, allowing relatively tight interactions between neighboring clusters even when the two clusters are not siblings in the tree (e.g., blocks 8 and 9 in Figure 1c in the Supplement). However, the H² format does not directly help with inverting K or computing its determinant: it is merely a memory-efficient way of storing K and performing matrix/vector multiplies inside an iterative method. HODLR matrices have a simpler structure, but admit a factorization that makes it possible to directly compute both the inverse and the determinant of the approximated matrix in just O(n log n) time.

The reason that hierarchical matrix approximations have not become more popular in machine learning so far is that, in the case of high dimensional, unstructured data, finding a way to organize {x_1, ..., x_n} into a single hierarchy is much more challenging than in the setting of regularly spaced points in R² or R³, where these methods originate:

1. Hierarchical matrices require making hard assignments of data points to clusters, since the block structure at each level corresponds to partitioning the rows/columns of the original matrix.
2. The hierarchy must form a single tree, which puts deep divisions between clusters whose closest common ancestor is high up in the tree.
3. Finding the hierarchy in the first place is by no means trivial. Most works use a top-down strategy, which defeats the inherent parallelism of the matrix structure, and the actual algorithm used (kd-trees) is known to be problematic in high dimensions [27].
3 Multiresolution Kernel Approximation

Our goal in this paper is to develop a data adapted multiscale kernel matrix approximation method, Multiresolution Kernel Approximation (MKA), that reflects the "distant clusters only interact in a low rank fashion" insight of the fast multipole method, but is considerably more flexible than existing hierarchical matrix decompositions. The basic building blocks of MKA are local factorizations of a specific form, which we call core-diagonal compression.

Definition 1 We say that a matrix H is c-core-diagonal if H_{i,j} = 0 unless either i, j ≤ c or i = j.

Definition 2 A c-core-diagonal compression of a symmetric matrix A ∈ R^{m×m} is an approximation of the form

    A ≈ Q^T H Q,   (4)

where Q is orthogonal and H is c-core-diagonal.

Core-diagonal compression is to be contrasted with rank-c sketching, where H would have just the c×c core block, without the rest of the diagonal. From our multiresolution inspired point of view, however, the purpose of (4) is not just to sketch A, but also to split R^m into the direct sum of two subspaces: (a) the "detail space", spanned by the last m−c rows of Q, responsible for capturing purely local interactions in A, and (b) the "scaling space", spanned by the first c rows, capturing the overall structure of A and its relationship to other diagonal blocks.

Hierarchical matrix methods apply low rank decompositions to many blocks of K in parallel, at different scales. MKA works similarly, by applying core-diagonal compressions. Specifically, the algorithm proceeds by taking K through a sequence of transformations K = K_0 → K_1 → ... → K_s, called stages. In the first stage:

1. Similar to other local methods, MKA first uses a fast clustering method to cluster the rows/columns of K_0 into clusters C_1^1, ..., C_{p_1}^1. Using the corresponding permutation matrix C_1 (which maps the elements of the first cluster to (1, 2, ..., |C_1^1|), the elements of the second cluster to (|C_1^1| + 1, ...
, |C_1^1| + |C_1^2|), and so on), we form a permuted, blocked matrix K̄_0 = C_1 K_0 C_1^T, whose blocks are [[K̄_0]]_{i,j} = K_{C_i^1, C_j^1}.

2. Each diagonal block of K̄_0 is independently core-diagonally compressed as in (4) to yield

    H_i^1 = [ Q_i^1 [[K̄_0]]_{i,i} (Q_i^1)^T ]_{CD(c_i^1)},   (5)

where the CD(c_i^1) subscript stands for truncation to c_i^1-core-diagonal form.

3. The local rotations Q_i^1 are assembled into a single large orthogonal matrix Q_1 = ⊕_i Q_i^1 and applied to the full matrix to give H_1 = Q_1 K̄_0 Q_1^T.

4. The rows/columns of H_1 are rearranged by applying a permutation P_1 that maps the core part of each block to one of the first c_1 := c_1^1 + ... + c_{p_1}^1 coordinates, and the diagonal part to the rest, giving H_1^pre = P_1 H_1 P_1^T.

5. Finally, H_1^pre is truncated to the core-diagonal form H_1 = K_1 ⊕ D_1, where K_1 ∈ R^{c_1×c_1} is dense, while D_1 is diagonal.

Effectively, K_1 is a compressed version of K_0, while D_1 is formed by concatenating the diagonal parts of each of the H_i^1 matrices. Together, this gives a global core-diagonal compression

    K_0 ≈ C_1^T Q_1^T P_1^T (K_1 ⊕ D_1) P_1 Q_1 C_1 = Q̄_1^T (K_1 ⊕ D_1) Q̄_1,  with Q̄_1 := P_1 Q_1 C_1,

of the entire original matrix K_0. The second and further stages of MKA consist of applying the same five steps to K_1, K_2, ..., K_{s−1} in turn, so that ultimately the algorithm yields a kernel approximation K̃ with the telescoping form

    K̃ = Q̄_1^T ( Q̄_2^T ( ... Q̄_s^T (K_s ⊕ D_s) Q̄_s ... ⊕ D_2 ) Q̄_2 ⊕ D_1 ) Q̄_1.   (6)

The pseudocode of the full algorithm is in the Supplementary Material.

MKA is really a meta-algorithm, in the sense that it can be used in conjunction with different core-diagonal compressors. The main requirements on the compressor are that (a) the core of H should capture the dominant part of A, in particular the subspace that most strongly interacts with other blocks, and (b) the first c rows of Q should be as sparse as possible. We consider two alternatives.

Augmented Sparse PCA (SPCA). Sparse PCA algorithms explicitly set out to find a set of vectors {v_1, ..., v_c} that maximize ||V^T A V||_Frob, where V = [v_1, ...
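To make the five steps of a stage concrete, here is a deliberately simplified single-stage sketch in NumPy. It is not the paper's implementation: it substitutes caller-supplied contiguous index blocks for the fast graph clustering, and a plain eigendecomposition for the SPCA/MMF compressors the paper actually uses. In an eigenbasis the local rotated block is exactly diagonal, so all of the approximation error comes from truncating the rotated off-diagonal blocks.

```python
import numpy as np

def core_diag_compress(A, c):
    """Core-diagonal compression of a symmetric block (Def. 2), with an
    eigenvector basis standing in for the paper's SPCA/MMF compressors."""
    w, V = np.linalg.eigh(A)
    Q = V[:, ::-1].T                  # rows ordered by decreasing eigenvalue
    H = Q @ A @ Q.T                   # diagonal here, since Q is an eigenbasis
    Hcd = np.diag(np.diag(H))         # keep the diagonal ...
    Hcd[:c, :c] = H[:c, :c]           # ... and the dense c x c core
    return Q, Hcd

def mka_stage(K, blocks, c):
    """One MKA stage: compress diagonal blocks, rotate K, keep cores + diagonal."""
    n = K.shape[0]
    Q = np.zeros((n, n))
    core_idx, diag_idx = [], []
    for b in blocks:                  # b: array of row indices of one cluster
        Qb, _ = core_diag_compress(K[np.ix_(b, b)], c)
        Q[np.ix_(b, b)] = Qb          # step 3: assemble the block-diagonal rotation
        core_idx.extend(b[:c])        # first c output coords of each block = core
        diag_idx.extend(b[c:])
    H = Q @ K @ Q.T                   # rotate the full matrix
    P = np.array(core_idx + diag_idx) # step 4: cores first, diagonal part last
    Hp = H[np.ix_(P, P)]
    c_tot = len(core_idx)
    K1 = Hp[:c_tot, :c_tot]                 # step 5: compressed kernel K_1
    D1 = np.diag(Hp[c_tot:, c_tot:]).copy() # leftover diagonal D_1
    return Q, P, K1, D1
```

Running the same routine again on `K1` (with new blocks) gives the second stage, and so on, producing the telescoping form (6).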
, v_c], while constraining each vector to be as sparse as possible [28]. While not all SPCA algorithms guarantee orthogonality, this can be enforced a posteriori via, e.g., QR factorization, yielding Q_sc, the top c rows of Q in (4). Letting U be a basis for the complementary subspace, the optimal choice for the bottom m−c rows, in terms of minimizing the Frobenius norm error of the compression, is Q_wlet = U Ô, where

    Ô = argmax_{O : O^T O = I} ||diag(O^T U^T A U O)||,

the solution to which is of course given by the eigenvectors of U^T A U. The main drawback of the SPCA approach is its computational cost: depending on the algorithm, the complexity of SPCA scales with m³ or worse [29, 30].

Multiresolution Matrix Factorization (MMF). MMF is a recently introduced matrix factorization algorithm motivated by similar multiresolution ideas as the present work, but applied at the level of individual matrix entries rather than at the level of matrix blocks [31]. Specifically, MMF yields a factorization of the form

    A ≈ q_1^T ... q_L^T H q_L ... q_1 = Q^T H Q,  with Q := q_L ... q_1,

where, in the simplest case, the q_i's are just Givens rotations. Typically, the number of rotations in MMF is O(m). MMF is efficient to compute, and sparsity is guaranteed by the sparsity of the individual q_i's and the structure of the algorithm. Hence, MMF has complementary strengths to SPCA: it comes with strong bounds on sparsity and computation time, but the quality of the scaling/wavelet space split that it produces is less well controlled.

Remarks. We make a few remarks about MKA.

1. Typically, low rank approximations reduce dimensionality quite aggressively. In contrast, in core-diagonal compression c is often on the order of m/2, leading to "gentler" and more faithful kernel approximations.
2. In hierarchical matrix methods, the block structure of the matrix is defined by a single tree, which, as discussed above, is potentially problematic.
In contrast, by virtue of reclustering the rows/columns of K_ℓ before every stage, MKA affords a more flexible factorization. In fact, beyond the first stage, it is not even individual data points that MKA clusters, but subspaces defined by the earlier local compressions.
3. While C_ℓ and P_ℓ are presented as explicit permutations, they really just correspond to different ways of blocking K_ℓ, which in practice is done implicitly, with relatively little overhead.
4. Step 3 of the algorithm is critical, because it extends the core-diagonal splits found in the diagonal blocks of the matrix to the off-diagonal blocks. Essentially the same is done in [4] and [17]. This operation reflects a structural assumption about K, namely that the same bases that pick out the dominant parts of the diagonal blocks (composed of the first c_i^ℓ rows of the Q_i^ℓ rotations) are also good for compressing the off-diagonal blocks. In the hierarchical matrix literature, for the case of specific kernels sampled in specific ways in low dimensions, it is possible to prove such statements. In our high dimensional and less structured setting, deriving analytical results is much more challenging.
5. MKA is an inherently bottom-up algorithm, including the clustering; thus it is naturally parallelizable and can be implemented in a distributed environment.
6. The hierarchical structure of MKA is similar to that of the parallel version of MMF (pMMF) [32], but the way that the compressions are calculated is different (pMMF tries to minimize an objective that relates to the entire matrix).

4 Complexity and application to GPs

For MKA to be effective for large scale GP regression, it must be possible to compute the factorization fast. In addition, the resulting approximation K̃ must be symmetric positive semi-definite (spsd) (MEKA, for example, fails to fulfill this [4]). We say that a matrix approximation algorithm A → Ã is spsd preserving if Ã is spsd whenever A is.
It is clear from its form that the Nyström approximation is spsd preserving, and so is augmented SPCA compression. MMF has different variants, but the core part of H is always derived by conjugating A by rotations, while the diagonal elements are guaranteed to be positive; therefore MMF is spsd preserving as well.

Proposition 1 If the individual core-diagonal compressions in MKA are spsd preserving, then the entire algorithm is spsd preserving.

The complexity of MKA depends on the complexity of the local compressions. In the following, we assume that to leading order in m this cost is bounded by c_comp m^{α_comp} (with α_comp ≥ 1) and that each row of the Q matrix that is produced is c_sp-sparse. We assume that MKA has s stages, that the size of the final "core matrix" K_s is d_core × d_core, and that the size of the largest cluster is m_max. We assume that the maximum number of clusters in any stage is b_max and that the clustering is close to balanced, in the sense that b_max = Θ(n/m_max) with a small constant. We ignore the cost of the clustering algorithm, which varies, but usually scales linearly in s n b_max. We also ignore the cost of permuting the rows/columns of K_ℓ, since this is a memory bound operation that can be virtualized away. The following results are to leading order in m_max and are similar to those in [32] for parallel MMF.

Proposition 2 With the above notations, the number of operations needed to compute the MKA of an n×n matrix is upper bounded by 2 s c_sp n² + s c_comp m_max^{α_comp − 1} n. Assuming b_max-fold parallelism, this complexity reduces to 2 s c_sp n²/b_max + s c_comp m_max^{α_comp}.

The memory cost of MKA is just the cost of storing the various matrices appearing in (6). We only count the number of non-zero reals that need to be stored, not indices, etc.

Proposition 3 The storage complexity of MKA is upper bounded by (s c_sp + 1) n + d_core².

Rather than the general case, it is more informative to focus on MMF-based MKA, which is what we use in our experiments.
We consider the simplest case of MMF, referred to as "greedy-Jacobi" MMF, in which each of the q_i elementary rotations is a Givens rotation. An additional parameter of this algorithm is the compression ratio γ, which in our notation is equal to c/n. Some of the special features of this type of core-diagonal compression are: (a) while any given row of the rotation Q produced by the algorithm is not guaranteed to be sparse, Q will be the product of exactly ⌊(1−γ)m⌋ Givens rotations; (b) the leading term in the cost is the m³ cost of computing A^T A, but this is a BLAS operation, so it is fast; (c) once A^T A has been computed, the cost of the rest of the compression scales with m². Together, these features result in very fast core-diagonal compressions and a very compact representation of the kernel matrix.

Proposition 4 The complexity of computing the MMF-based MKA of an n×n dense matrix is upper bounded by 4 s n² + s m_max² n, where s = log(d_core/n)/log γ. Assuming b_max-fold parallelism, this is reduced to 4 s n m_max + m_max³.

Proposition 5 The storage complexity of MMF-based MKA is upper bounded by (2s+1) n + d_core². Typically, d_core = O(1). Note that this implies O(n log n) storage complexity, which is similar to that of Nyström approximations with very low rank.

Finally, we have the following results, which are critical for using MKA in GPs.

Proposition 6 Given an approximate kernel K̃ in MMF-based MKA form (6) and a vector z ∈ R^n, the product K̃z can be computed in 4 s n + d_core² operations. With b_max-fold parallelism, this is reduced to 4 s m_max + d_core².

Proposition 7 Given an approximate kernel K̃ in (MMF- or SPCA-based) MKA form, the MKA form of K̃^α for any α can be computed in O(n + d_core³) operations. The complexity of computing the matrix exponential exp(βK̃) for any β in MKA form and the complexity of computing det(K̃) are also O(n + d_core³).
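Proposition 6's fast matrix-vector product follows directly from the telescoping form (6). The sketch below is illustrative: it assumes the permutations C_ℓ, P_ℓ have already been folded into each stage's orthogonal factor (as in the composite Q̄_ℓ of (6)), represents each D_ℓ by its vector of diagonal entries, and stores the Q̄_ℓ densely (an actual MMF implementation would keep them as sparse products of Givens rotations).

```python
import numpy as np

def mka_multiply(Qs, Ds, Kcore, z):
    """Matrix-free product K_tilde @ z for the telescoping MKA form (6):
    K_tilde = Q_1^T (Q_2^T ( ... (K_s + D_s) ... + D_2) Q_2 + D_1) Q_1,
    where '+' denotes the direct sum. Qs[l] is the stage-(l+1) orthogonal
    factor, Ds[l] the diagonal entries left behind at that stage."""
    def level(l, v):
        if l == len(Qs):
            return Kcore @ v              # innermost dense d_core block
        w = Qs[l] @ v                     # rotate into this stage's basis
        c = len(w) - len(Ds[l])           # size of the core passed down
        top = level(l + 1, w[:c])         # recurse on the core part
        bot = Ds[l] * w[c:]               # diagonal part acts elementwise
        return Qs[l].T @ np.concatenate([top, bot])
    return level(0, z)
```

Each level touches a vector only once, which is why the serial cost stays linear in n per stage when the rotations are sparse.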
4.1 MKA-GPs and MKA ridge regression

The most direct way of applying MKA to speed up GP regression (or ridge regression) is simply to use it to approximate the augmented kernel matrix K' = K + σ²I and then invert this approximation using Proposition 7 (with α = −1). Note that the resulting K̃'^{-1} never needs to be evaluated fully, in matrix form. Instead, in equations such as (2), the matrix-vector product K̃'^{-1}y can be computed in "matrix-free" form by cascading y through the analog of (6). Assuming that d_core ≪ n and m_max is not too large, the serial complexity of each stage of this computation scales with at most n², which is the same as the complexity of computing K in the first place.

One potential issue with the above approach, however, is that because MKA involves repeated truncation of the H_j^pre matrices, K̃' will be a biased approximation of K', and therefore expressions such as (2)

[Figure 1: Snelson's 1D example, one panel per method (Full, SOR, FITC, PITC, MEKA, MKA): ground truth (black circles); prediction mean (solid curves); one standard deviation of prediction uncertainty (dashed curves).]
Table 1: Regression results, with k = number of pseudo-inputs/d_core; entries are SMSE (MNLP).

dataset      k    Full          SOR           FITC          PITC          MEKA          MKA
housing     16   0.36(−0.32)   0.93(−0.03)   0.91(−0.04)   0.96(−0.02)   0.85(−0.08)   0.52(−0.32)
rupture     16   0.17(−0.89)   0.94(−0.04)   0.96(−0.04)   0.93(−0.05)   0.46(−0.18)   0.32(−0.54)
wine        32   0.59(−0.33)   0.86(−0.07)   0.84(−0.03)   0.87(−0.07)   0.97(−0.12)   0.70(−0.23)
pageblocks  32   0.44(−1.10)   0.86(−0.57)   0.81(−0.78)   0.86(−0.72)   0.96(−0.10)   0.63(−0.85)
compAct     32   0.58(−0.66)   0.88(−0.13)   0.91(−0.08)   0.88(−0.14)   0.75(−0.21)   0.60(−0.32)
pendigit    64   0.15(−0.73)   0.65(−0.19)   0.70(−0.17)   0.71(−0.17)   0.53(−0.29)   0.30(−0.42)

which mix an approximate K' with an exact k_x will exhibit some systematic bias. In Nyström type methods (specifically, the so-called Subset of Regressors and Deterministic Training Conditional GP approximations) this problem is addressed by replacing k_x with its own Nyström approximation, k̂_x = K_{*,I} W^+ k_x^I, where [k_x^I]_j = k(x, x_{i_j}). Although K̂' = K_{*,I} W^+ K_{*,I}^T + σ²I is a large matrix, expressions such as k̂_x^T K̂'^{-1} can nonetheless be efficiently evaluated by using a variant of the Sherman-Morrison-Woodbury identity and the fact that W is low rank (see [33]). The same approach cannot be applied to MKA, because K̃ is not low rank. Assuming that the test set {x'_1, ..., x'_p} is known at training time, however, instead of approximating K or K', we compute the MKA approximation of the joint train/test kernel matrix

    K = [ K      K_*    ]
        [ K_*^T  K_test ],

where K_{i,j} = k(x_i, x_j) + σ²δ_{i,j}, [K_*]_{i,j} = k(x_i, x'_j), and [K_test]_{i,j} = k(x'_i, x'_j). Writing the inverse of this approximation in blocked form,

    K̃^{-1} = [ A  B ]
              [ C  D ],

and taking the Schur complement of D now recovers an alternative approximation Ǩ^{-1} = A − B D^{-1} C to K'^{-1}, which is consistent with the off-diagonal block K_*, leading to our final MKA-GP formula

    f̂ = K_*^T Ǩ^{-1} y,   where f̂ = (f̂(x'_1), ..., f̂(x'_p))^T.
While conceptually this is somewhat more involved than naively estimating K', assuming p ≪ n the cost of inverting D is negligible, and the overall serial complexity of the algorithm remains O((n + p)²).

In certain GP applications, the O(n²) cost of writing down the kernel matrix is already forbidding. The one circumstance under which MKA can get around this problem is when the kernel matrix is a matrix polynomial in a sparse matrix L, which is the case most notably for diffusion kernels and certain other graph kernels. Specifically, in the case of MMF-based MKA, the computational cost is dominated by computing the local "Gram matrices" A^T A; when L is sparse, and this sparsity is retained from one compression to the next, the MKA of sparse matrices can be computed very fast. In the case of graph Laplacians, empirically, the complexity is close to linear in n. By Proposition 7, the diffusion kernel and certain other graph kernels can then also be approximated in about O(n log n) time.

5 Experiments

We compare MKA to five other methods:

1. Full: full GP regression using Cholesky factorization [1].
2. SOR: the Subset of Regressors method (also equivalent to DTC in mean) [1].
3. FITC: the Fully Independent Training Conditional approximation, also called Sparse Gaussian Processes using Pseudo-inputs [34].
4. PITC: the Partially Independent Training Conditional approximation method (also equivalent to PTC in mean) [33].
5. MEKA: the Memory Efficient Kernel Approximation method [4].

KISS-GP [35] and other interpolation based methods are not discussed in this paper because, we believe, they mostly apply only to low dimensional settings. We used custom Matlab implementations [1] for Full, SOR, FITC, and PITC.
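The Schur complement step of the MKA-GP formula can be checked on a small dense example. The function below is a sketch: it takes an already computed inverse of the joint train/test kernel matrix (in practice this would come from the MKA form via Proposition 7; here any dense inverse works) and returns the predictions f̂ = K_*^T (A − B D^{-1} C) y.

```python
import numpy as np

def mka_gp_predict(Kjoint_inv, K_star, y, n):
    """MKA-GP prediction from the inverse of the joint train/test kernel:
    partition Kjoint_inv = [[A, B], [C, D]] at index n, take the Schur
    complement A - B D^{-1} C, and apply the final formula f = K_*^T (...) y."""
    A = Kjoint_inv[:n, :n]
    B = Kjoint_inv[:n, n:]
    C = Kjoint_inv[n:, :n]
    D = Kjoint_inv[n:, n:]
    Kcheck_inv = A - B @ np.linalg.solve(D, C)   # approximates K'^{-1}
    return K_star.T @ (Kcheck_inv @ y)
```

The underlying identity is that for an exact blocked inverse, the Schur complement of D recovers the inverse of the train block K' exactly; with the MKA-approximated joint matrix it recovers an approximation consistent with the off-diagonal block K_*.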
We used the Matlab code provided by the author for MEKA. Our algorithm MKA was implemented in C++ with a Matlab interface. To get an approximately fair comparison, we set d_core in MKA to the number of pseudo-inputs. The parallel MMF algorithm was used as the compressor due to its computational strength [32]. The Gaussian kernel is used for all experiments, with one length scale for all input dimensions.

[Figure 2: SMSE and MNLP as a function of the number of pseudo-inputs/d_core on the housing and rupture datasets, comparing Full, SOR, FITC, PITC, MEKA and MKA. In the given range MKA clearly outperforms the other methods in both error measures.]

Qualitative results. We show the qualitative behavior of each method on the 1D toy dataset from [34]. We sampled the ground truth from a Gaussian process with length scale ℓ = 0.5, and the number of pseudo-inputs (d_core) is 10. We applied cross-validation to select the parameters with which each method fits the data. Figure 1 shows that MKA fits the data almost as well as the full GP does. As for the other approximate methods, although their fit to the data is smoother, this comes at the detriment of capturing the local structure of the underlying data, which confirms MKA's ability to capture the entire spectrum of the kernel matrix, not just its top eigenvectors.

Real data. We tested the efficacy of GP regression on real-world datasets. The data are normalized to mean zero and variance one. We randomly selected 10% of each dataset to be used as a test set.
On the other 90% we did five-fold cross validation to learn the length scale and noise parameter for each method, and the regression results were averaged over five repetitions of this setup. All experiments were run on a 3.4 GHz 8-core machine with 8 GB of memory. Two distinct error measures are used to assess performance: (a) the standardized mean squared error (SMSE),

    SMSE = (1/n) Σ_{t=1}^{n} (ŷ_t − y_t)² / σ̂²_⋆,

where σ̂²_⋆ is the variance of the test outputs, and (b) the mean negative log probability (MNLP),

    MNLP = (1/n) Σ_{t=1}^{n} [ (ŷ_t − y_t)² / σ̂²_t + log σ̂²_t + log 2π ],

where σ̂²_t is the predictive variance; the two measures assess the predictive mean and variance, respectively.

From Table 1, MKA is competitive in both error measures when the number of pseudo-inputs (d_core) is small, which reveals the low-rank methods' inability to capture the local structure of the data. We also illustrate performance sensitivity by varying the number of pseudo-inputs on selected datasets. In Figure 2, over the interval of pseudo-inputs considered, MKA's performance is robust to d_core, while the performance of the low-rank based methods changes rapidly, showing MKA's ability to achieve good regression results even at severe compression levels. The Supplementary Material gives a more detailed discussion of the datasets and experiments.

6 Conclusions

In this paper we made the case that whether a learning problem is low rank or not depends on the nature of the data, rather than just the spectral properties of the kernel matrix K. This is easiest to see in the case of Gaussian Processes, which is the algorithm that we focused on in this paper, but it is also true more generally. Most existing sketching algorithms used in GP regression force low rank structure on K, either globally or at the block level. When the nature of the problem is indeed low rank, this might actually act as an additional regularizer and improve performance. When the data does not have low rank structure, however, low rank approximations will fail.
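Both error measures are straightforward to compute. The sketch below follows the formulas as stated in the text (in particular, MNLP is written without the conventional 1/2 factor, matching the paper's definition).

```python
import numpy as np

def smse(y_pred, y_true):
    """Standardized mean squared error: MSE divided by the test-output variance."""
    return np.mean((y_pred - y_true) ** 2) / np.var(y_true)

def mnlp(y_pred, y_true, var_pred):
    """Mean negative log probability, per-point Gaussian predictive variances."""
    return np.mean((y_pred - y_true) ** 2 / var_pred
                   + np.log(var_pred) + np.log(2 * np.pi))
```

By construction, a trivial predictor that always outputs the test mean scores SMSE = 1, so values well below 1 indicate genuine predictive power.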
Inspired by recent work on multiresolution factorizations, we proposed a multiresolution meta-algorithm, MKA, for approximating kernel matrices, which assumes that the interaction between distant clusters is low rank, while avoiding forcing a low rank structure on the data locally, at any scale. Importantly, MKA allows fast direct calculation of the inverse of the kernel matrix and of its determinant, which are almost always the computational bottlenecks in GP problems.

Acknowledgements

This work was completed in part with resources provided by the University of Chicago Research Computing Center. The authors wish to thank Michael Stein for helpful suggestions.

References

[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[2] Michael L. Stein. Statistical Interpolation of Spatial Data: Some Theory for Kriging. Springer, 1999.
[3] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13, 2001.
[4] Si Si, Cho-Jui Hsieh, and Inderjit S. Dhillon. Memory efficient kernel approximation. In ICML, 2014.
[5] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, 2008.
[6] Alex J. Smola and Bernhard Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning (ICML), pages 911–918, 2000.
[7] Charless Fowlkes, Serge Belongie, Fan Chung, and Jitendra Malik. Spectral grouping using the Nyström method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):214–225, 2004.
[8] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[9] Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, and Zhi-Hua Zhou. Improved bounds for the Nyström method with application to kernel classification. IEEE Transactions on Information Theory, 2013.
[10] Alex Gittens and Michael W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. In ICML, 28:567–575, 2013.
[11] Shiliang Sun, Jing Zhao, and Jiang Zhu. A review of Nyström methods for large-scale machine learning. Information Fusion, 26:36–48, 2015.
[12] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Ensemble Nyström method. In NIPS, 2009.
[13] Shusen Wang. Efficient algorithms and error analysis for the modified Nyström method. In AISTATS, 2014.
[14] Amine Abou-Rjeili and George Karypis. Multilevel algorithms for partitioning power-law graphs. In Proceedings of the 20th International Conference on Parallel and Distributed Processing, 2006.
[15] Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–1957, 2007.
[16] Berkant Savas, Inderjit S. Dhillon, et al. Clustered low-rank approximation of graphs in information science applications. In Proceedings of the SIAM International Conference on Data Mining, 2011.
[17] Ruoxi Wang, Yingzhou Li, Michael W. Mahoney, and Eric Darve. Structured block basis factorization for scalable kernel matrix evaluation. arXiv preprint arXiv:1505.00398, 2015.
[18] Yingyu Liang, Maria-Florina F. Balcan, Vandana Kanchanapally, and David Woodruff. Improved distributed principal component analysis. In NIPS, pages 3113–3121, 2014.
[19] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression. In Conference on Learning Theory, 30:1–26, 2013.
[20] William K. Allard, Guangliang Chen, and Mauro Maggioni. Multi-scale geometric methods for data sets II: Geometric multi-resolution analysis. Applied and Computational Harmonic Analysis, 2012.
[21] W. Hackbusch.
A Sparse Matrix Arithmetic Based on H-Matrices. Part I: Introduction to H-Matrices. Computing, 62:89–108, 1999. [22] Wolfgang Hackbusch, Boris Khoromskij, and Stefan a. Sauter. On H2-Matrices. Lectures on applied mathematics, pages 9–29, 2000. [23] S. Chandrasekaran, M. Gu, and W. Lyons. A Fast Adaptive Solver For Hierarchically Semi-separable Representations. Calcolo, 42(3-4):171–185, 2005. [24] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. J. Comput. Phys., 1987. [25] Steffen B¨orm and Jochen Garcke. Approximating Gaussian Processes with H2 Matrices. In ECML. 2007. [26] Sivaram Ambikasaran, Sivaram Foreman-Mackey, Leslie Greengard, David W. Hogg, and Michael O’Neil. Fast Direct Methods for Gaussian Processes. arXiv:1403.6015v2, April 2015. [27] Nazneen Rajani, Kate McArdle, and Inderjit S Dhillon. Parallel k-Nearest Neighbor Graph Construction Using Tree-based Data Structures. In 1st High Performance Graph Mining workshop, 2015. [28] Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse Principal Component Analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2004. [29] Q. Berthet and P. Rigollet. Complexity Theoretic Lower Bounds for Sparse Principal Component Detection. J. Mach. Learn. Res. (COLT), 30, 1046-1066 2013. [30] Volodymyr Kuleshov. Fast algorithms for sparse principal component analysis based on rayleigh quotient iteration. In ICML, pages 1418–1425, 2013. [31] Risi Kondor, Nedelina Teneva, and Vikas Garg. Multiresolution Matrix Factorization. In ICML, 2014. [32] Nedelina Teneva, Pramod K Murakarta, and Risi Kondor. Multiresolution Matrix Compression. In Proceedings of the 19th International Conference on Aritifical Intelligence and Statistics (AISTATS-16), 2016. [33] Joaquin Qui˜nonero Candela and Carl Edward Rasmussen. A unifying view of sparse approximate gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005. [34] Edward Snelson and Zoubin Ghahramani. 
Sparse Gaussian processes using pseudo-inputs. NIPS, 2005. [35] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured gaussian processes (KISS-GP). In ICML, Lille, France, 6-11, pages 1775–1784, 2015. 9 | 2017 | 353 |
Near Minimax Optimal Players for the Finite-Time 3-Expert Prediction Problem
Yasin Abbasi-Yadkori, Adobe Research
Peter L. Bartlett, UC Berkeley
Victor Gabillon, Queensland University of Technology
Abstract
We study minimax strategies for the online prediction problem with expert advice. It has been conjectured that a simple adversary strategy, called COMB, is near optimal in this game for any number of experts. Our results and new insights make progress in this direction by showing that, up to a small additive term, COMB is minimax optimal in the finite-time three expert problem. In addition, we provide for this setting a new near minimax optimal COMB-based learner. Prior to this work, in this problem, learners obtaining the optimal multiplicative constant in their regret rate were known only when K = 2 or K → ∞. We characterize, when K = 3, the regret of the game as scaling as $\sqrt{8T/(9\pi)} \pm \log(T)^2$, which gives for the first time the optimal constant in the leading ($\sqrt{T}$) term of the regret.
1 Introduction
This paper studies the online prediction problem with expert advice. This is a fundamental problem of machine learning that has been studied for decades, going back at least to the work of Hannan [12] (see [4] for a survey). As it studies prediction under adversarial data, the designed algorithms are known to be robust and are commonly used as building blocks of more complicated machine learning algorithms with numerous applications. Thus, elucidating the yet unknown optimal strategies has the potential to significantly improve the performance of these higher level algorithms, in addition to providing insight into a classic prediction problem. The problem is a repeated two-player zero-sum game between an adversary and a learner. At each of the T rounds, the adversary decides the quality/gain of K experts' advice, while simultaneously the learner decides to follow the advice of one of the experts.
The objective of the adversary is to maximize the regret of the learner, defined as the difference between the total gain of the best fixed expert and the total gain of the learner.
Open Problems and our Main Results. Previously this game has been solved asymptotically as both T and K tend to ∞: asymptotically, the upper bound on the performance of the state-of-the-art Multiplicative Weights Algorithm (MWA) for the learner matches the optimal multiplicative constant of the asymptotic minimax optimal regret rate $\sqrt{(T/2)\ln K}$ [3]. However, for finite K, this asymptotic quantity actually overestimates the finite-time value of the game. Moreover, Gravin et al. [10] proved a matching lower bound $\sqrt{(T/2)\ln K}$ on the regret of the classic version of MWA, additionally showing that the optimal learner does not belong to an extended MWA family. Already, Cover [5] proved that the value of the game is of order $\sqrt{T/(2\pi)}$ when K = 2, meaning that the regret of a MWA learner is 47% larger than that of the optimal learner in this case. Therefore the question of optimality remains open for non-asymptotic K, which is the typical case in applications. In studying a related setting with K = 3, where T is sampled from a geometric distribution with parameter δ, Gravin et al. [9] conjectured that, for any K, a simple adversary strategy, called the COMB adversary, is asymptotically optimal (T → ∞, or when δ → 0), and also highly competitive for finite, fixed T. The COMB strategy sorts the experts based on their cumulative gains and, with probability one half, assigns gain one to each expert in an odd position and gain zero to each expert in an even position. With probability one half, the zeros and ones are swapped. The simplicity and elegance of this strategy, combined with its almost optimal performance, makes it very appealing and calls for a more extensive study of its properties.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
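The COMB strategy just described can be written down in a few lines. The sketch below is our own illustration, not the authors' code; the `rng` parameter is an assumed convenience for seeding:

```python
import random

def comb_gains(cum_gains, rng=random):
    # Sort experts by cumulative gain, best first (ties broken by index).
    order = sorted(range(len(cum_gains)), key=lambda k: (-cum_gains[k], k))
    # With probability 1/2 the experts in odd sorted positions receive gain 1
    # and the even positions receive 0; with probability 1/2 roles are swapped.
    swap = rng.random() < 0.5
    g = [0] * len(cum_gains)
    for pos, k in enumerate(order):
        odd_position = pos % 2 == 0   # 0-indexed even <=> 1-indexed odd
        g[k] = int(odd_position != swap)
    return g
```

For K = 3 this reduces to the distribution {2}{13} used later in the paper: either the middle expert gains alone, or the leading and lagging experts gain together.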
Our results and new insights make progress in this direction by showing that, for any fixed T and up to small additive terms, COMB is minimax optimal in the finite-time three expert problem. Additionally, and with similar guarantees, we provide for this setting a new near minimax optimal COMB-based learner. For K = 3, the regret of a MWA learner is 39% larger than that of our new optimal learner¹. In this paper we also characterize, when K = 3, the regret of the game as $\sqrt{8T/(9\pi)} \pm \log(T)^2$, which gives for the first time the optimal constant in the leading ($\sqrt{T}$) term of the regret. Note that the state-of-the-art non-asymptotic lower bound in [15] on the value of this problem is non-informative, as the lower bound for the case of K = 3 is a negative quantity.
Related Works and Challenges. For the case of K = 3, Gravin et al. [9] proved the exact minimax optimality of a COMB-related adversary in the geometrical setting, i.e., where T is not fixed in advance but rather sampled from a geometric distribution with parameter δ. However, the connection between the geometrical setting and the original finite-time setting is not well understood, even asymptotically (possibly due to the large variance of geometric distributions with small δ). Addressing this issue, in Section 7 of [8], Gravin et al. formulate the "Finite vs Geometric Regret" conjecture, which states that the value of the game in the geometrical setting, $V_\alpha$, and the value of the game in the finite-time setting, $V_T$, verify $V_T = \frac{2}{\sqrt{\pi}} V_{\alpha=1/T}$. We resolve here the conjecture for K = 3. Analyzing the finite-time expert problem raises new challenges compared to the geometric setting. In the geometric setting, at any time (round) t of the game, the expected number of remaining rounds before the end of the game is constant (it does not depend on the current time t). This simplifies the problem to the point that, when K = 3, there exists an exactly minimax optimal adversary that ignores the time t and the parameter δ.
As noted in [9], and as is noticeable from solving exactly small instances of the game with a computer, in the finite-time case the exact optimal adversary seems to depend in a complex manner on time and state. It is therefore natural to compromise for a simpler adversary that is optimal up to a small additive error term. Actually, based on the observation of the restricted computer-based solutions, the additive error term of COMB seems to vanish with larger T. Tightly controlling the errors made by COMB is a new challenge with respect to [9], where the solution to the optimality equations led directly to the exact optimal adversary. The existence of such equations in the geometric setting crucially relies on the fact that the value-to-go of a given policy in a given state does not depend on the current time t (because geometric distributions are memoryless). To control the errors in the finite-time setting, our new approach solves the game by backward induction, showing the approximate greediness of COMB with respect to itself (read Section 2.1 for an overview of our new proof techniques and their organization). We use a novel exchangeability property, new connections to random walks, and a close relation that we develop between COMB and a TWIN-COMB strategy. Additional connections with new related optimal strategies and random walks are used to compute the value of the game (Theorem 2). We discuss in Section 6 how our new techniques have more potential to extend to an arbitrary number of arms than those of [9]. Additionally, we show how the approximate greediness of COMB with respect to itself is key to proving that a learner based directly on the COMB adversary is itself quasi-minimax-optimal. This is the first work to extend, to the approximate case, approaches used to design exactly optimal players in related works.
In [2] a probability matching learner is proven optimal under the assumption that the adversary is limited to a fixed cumulative loss for the best expert. In [14] and [1], the optimal learner relies on estimating the value-to-go of the game through rollouts of the optimal adversary's plays. The results in these papers were limited to games where the optimal adversary only plays canonical unit vectors, while our result holds for general gain vectors. Note also that a probability matching learner is optimal in [9].
Notation: Let [a : b] = {a, a + 1, . . . , b} with a, b ∈ N, a ≤ b, and [a] = [1 : a]. For a vector $w \in \mathbb{R}^n$, n ∈ N, $\|w\|_\infty = \max_{k\in[n]} |w_k|$. A vector indexed by both a time t and a specific element index k is $w_{t,k}$. An undiscounted Markov Decision Process (MDP) [13, 16] M is a 4-tuple ⟨S, A, r, p⟩. S is the state space, A is the set of actions, r : S × A → R is the reward function, and the transition model p(·|s, a) gives the probability distribution over the next state when action a is taken in state s. A state is denoted by s, or by st if it is taken at time t. An action is denoted by a or at.
¹[19] also provides an upper bound that is suboptimal when K = 3 even after optimization of its parameters.
2 The Game
We consider a game, composed of T rounds, between two players, called a learner and an adversary. At each time/round t the learner chooses an index It ∈ [K] from a distribution pt on the K arms. Simultaneously, the adversary assigns a binary gain to each of the arms/experts, possibly at random from a distribution $\dot A_t$, and we denote the vector of these gains by $g_t \in \{0,1\}^K$. The adversary and the learner then observe It and gt. For simplicity we use the notation $g_{[t]} = (g_s)_{s=1,\dots,t}$. The value of one realization of such a game is the cumulative regret, defined as $R_T = \left\|\sum_{t=1}^{T} g_t\right\|_\infty - \sum_{t=1}^{T} g_{t,I_t}$. A state $s \in S = (\mathbb{N}\cup\{0\})^K$ is a K-dimensional vector such that the k-th element is the cumulative sum of gains dealt by the adversary on arm k before the current time t.
Here the state does not include t but is typically denoted for a specific time t as st and computed as $s_t = \sum_{t'=1}^{t-1} g_{t'}$. This definition is motivated by the fact that there exist minimax strategies for both players that rely solely on the state and time information, as opposed to the complete history of plays $g_{[t]} \cup I_{[t]}$. In state s, the set of leading experts, i.e., those with maximum cumulative gain, is $X(s) = \{k \in [K] : s_k = \|s\|_\infty\}$. We use π to denote the (possibly non-stationary) strategy/policy used by the adversary, i.e., for any input state s and time t it outputs the gain distribution π(s, t) played by the adversary at time t in state s. Similarly we use ¯p to denote the strategy of the learner. As the state depends only on the adversary plays, we can sample a state s at time t from π. Given an adversary π and a learner ¯p, the expected regret of the game is $V^T_{\bar p,\pi} = \mathbb{E}_{g_{[T]}\sim\pi,\, I_{[T]}\sim\bar p}[R_T]$. The learner tries to minimize the expected regret while the adversary tries to maximize it. The value of the game is the minimax value $V_T$ defined by $V_T = \min_{\bar p}\max_\pi V^T_{\bar p,\pi} = \max_\pi\min_{\bar p} V^T_{\bar p,\pi}$. In this work, we are interested in the search for optimal minimax strategies: adversary strategies π* such that $V_T = \min_{\bar p} V^T_{\bar p,\pi^*}$, and learner strategies ¯p* such that $V_T = \max_\pi V^T_{\bar p^*,\pi}$.
2.1 Summary of our Approach to Obtain the Near Greediness of COMB
Most of our material is new. First, Section 3 recalls that Gravin et al. [9] have shown that the search for the optimal adversary π* can be restricted to the finite family of balanced strategies (defined in the next section). When K = 3, the action space of a balanced adversary is limited to seven stochastic actions (gain distributions), denoted by $\dot B_3 = \{\dot W, \dot C, \dot V, \dot 1, \dot 2, \{\}, \{123\}\}$ (see Section 5.1 for their description). The COMB adversary repeats the gain distribution ˙C at each time and in any state.
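The game just defined can be simulated directly. The following sketch (ours, not from the paper) plays T rounds of the K = 3 game with the COMB distribution ˙C = {2}{13} described in Section 5.1 as adversary and a uniformly random learner, and returns the realized regret R_T:

```python
import random

def comb_gains(cum, rng):
    # COMB distribution {2}{13}: with prob. 1/2 the middle (sorted) expert
    # gains 1; otherwise the leading and lagging experts gain 1 together.
    order = sorted(range(3), key=lambda k: (-cum[k], k))
    g = [0, 0, 0]
    if rng.random() < 0.5:
        g[order[1]] = 1
    else:
        g[order[0]] = 1
        g[order[2]] = 1
    return g

def play_game(T, rng):
    # One realization of the game: uniform learner vs. the COMB adversary.
    cum, learner_gain = [0, 0, 0], 0
    for _ in range(T):
        arm = rng.randrange(3)      # learner's (simultaneous) choice
        g = comb_gains(cum, rng)    # adversary's gains for this round
        learner_gain += g[arm]
        cum = [c + x for c, x in zip(cum, g)]
    return max(cum) - learner_gain  # R_T = ||sum g_t||_inf - sum g_{t,I_t}
```

Because ˙C is balanced, every arm has conditional mean gain 1/2, so the learner's expected total gain is T/2 whatever its strategy; the expected regret is then E‖Σ g_t‖∞ − T/2.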
In Section 4 we provide an explicit formulation of the problem as finding π* inside an MDP with a specific reward function. Interestingly, we observe that another adversary, which we call TWIN-COMB and denote by πW, which repeats the distribution ˙W, has the same value as πC (Section 5.1). To control the errors made by COMB, the proof uses a novel and intriguing exchangeability property (Section 5.2). This exchangeability property holds thanks to the surprising role played by the TWIN-COMB strategy. For any distribution ˙A ∈ ˙B3 there exists a distribution ˙D, a mixture of ˙C and ˙W, such that for almost all states, playing ˙A and then ˙D is the same as playing ˙W and then ˙A in terms of the expected reward and the probabilities over the next states after these two steps. Using Bellman operators, this can be concisely written as: for any (value) function f : S → R, in (almost) any state s, we have $[T_{\dot A}[T_{\dot D}f]](s) = [T_{\dot W}[T_{\dot A}f]](s)$. We solve the MDP by backward induction in time from t = T. We show that playing ˙C at time t is almost greedy with respect to playing πC in later rounds t' > t. The greedy error is defined as the difference in expected reward between always playing πC and playing the best (greedy) first action before playing COMB. Bounding how these errors accumulate through the rounds relates the value of COMB to the value of π* (Lemma 16). To illustrate the main ideas, let us first make two simplifying (but unrealistic) assumptions at time t: COMB has been proven greedy w.r.t. itself in rounds t' > t, and the exchangeability holds in all states. Then we would argue at time t that, by the exchangeability property, instead of optimizing the greedy action w.r.t. COMB as $\max_{\dot A\in\dot B_3} \dot A\dot C\cdots\dot C$, we can study the optimizer of $\max_{\dot A\in\dot B_3} \dot W\dot A\dot C\cdots\dot C$. Then we use the induction property to conclude that ˙C is the solution of the previous optimization problem.
Unfortunately, the exchangeability property does not hold in one specific state, denoted by sα. What saves us, though, is that we can directly compute the error of greedification of any gain distribution with respect to COMB in sα and show that it diminishes exponentially fast as T − t, the number of rounds remaining, increases (Lemma 7). This helps us control how the errors accumulate during the induction. From one given state st ≠ sα at time t, first, we use the exchangeability property once when trying to assess the 'quality' of an action ˙A as a greedy action w.r.t. COMB. This leads us to consider the quality of playing ˙A in possibly several new states {st+1} at time t+1 reached by following TWIN-COMB in s. We use our exchangeability property repeatedly, starting from the state st, until a subsequent state reaches sα, say at time tα, where we can substitute the exponentially decreasing greedy error computed at this time tα in sα. Here the subsequent states are the states reached after having played TWIN-COMB repeatedly starting from the state st. If sα is never reached, we use the fact that COMB is an optimal action everywhere else in the last round. The problem is then to determine at which time tα, starting from any state at time t and following a TWIN-COMB strategy, we hit sα for the first time. This is translated into a classical gambler's ruin problem, which concerns the hitting times of a simple random walk (Section 5.3). Similarly, the value of the game is computed using the study of the expected number of equalizations of a simple random walk (Theorem 2).
3 Solving for the Adversary Directly
In this section, we recall the results from [9] that, for arbitrary K, permit us to directly search for the minimax optimal adversary in the restricted set of balanced adversaries while ignoring the learner.
Definition 1. A gain distribution ˙A is balanced if there exists a constant $c_{\dot A}$, the mean gain of ˙A, such that $\forall k \in [K],\ c_{\dot A} = \mathbb{E}_{g\sim\dot A}[g_k]$.
A balanced adversary uses exclusively balanced gain distributions.
Lemma 1 (Claim 5 in [9]). There exists a minimax optimal balanced adversary.
Use B to denote the set of all balanced strategies and ˙B to denote the set of all balanced gain distributions. Interestingly, as demonstrated in [9], a balanced adversary π inflicts the same regret on every learner: if π ∈ B, then $\exists V^\pi_T \in \mathbb{R} : \forall \bar p,\ V^T_{\bar p,\pi} = V^\pi_T$ (see Lemma 10). Therefore, given an adversary strategy π, we can define the value-to-go $V^\pi_{t_0}(s)$ associated with π from time t0 in state s,
$V^\pi_{t_0}(s) = \mathbb{E}\left[\|s_{T+1}\|_\infty\right] - \sum_{t=t_0}^{T} \mathbb{E}\left[c_{\pi(s_t,t)}\right]$, where $s_{t+1} \sim p(\cdot|s_t, \pi(s_t,t))$ and $s_{t_0} = s$.
Another reduction comes from the fact that any balanced gain distribution can be seen as a convex combination of a finite set of balanced distributions [9, Claims 2 and 3]. We call this limited set the atomic gain distributions. Therefore the search for π* can be limited to this set. The set of convex combinations of the m distributions $\dot A_1, \dots, \dot A_m$ is denoted by $\Delta(\dot A_1, \dots, \dot A_m)$.
4 Reformulation as a Markovian Decision Problem
In this section we formulate, for arbitrary K, the maximization problem over balanced adversaries as an undiscounted MDP problem ⟨S, A, r, p⟩. The state space S was defined in Section 2 and the action space is the set of atomic balanced distributions, as discussed in Section 3. The transition model is defined by p(·|s, ˙D), which is a probability distribution over states given the current state s and a balanced distribution over gains ˙D. In this model, the transition dynamics are deterministic and entirely controlled by the adversary's action choices. However, the adversary is forced to choose stochastic actions (balanced gain distributions). The maximization problem can therefore also be thought of as designing a balanced random walk on states so as to maximize a sum of rewards (that are yet to be defined). First, we define $P_{\dot A}$, the transition probability operator with respect to a gain distribution ˙A.
Given a function f : S → R, $P_{\dot A}$ returns $[P_{\dot A}f](s) = \mathbb{E}[f(s')\,|\,s' \sim p(\cdot|s,\dot A)] = \mathbb{E}_{g\sim \dot A}[f(s+g)]$, where g is sampled in s according to ˙A. Given ˙A in s, the per-step regret is denoted by $r_{\dot A}(s)$ and defined as $r_{\dot A}(s) = \mathbb{E}_{s'|s,\dot A}\left[\|s'\|_\infty\right] - \|s\|_\infty - c_{\dot A}$.
Given an adversary strategy π, starting in s at time t0, the cumulative per-step regret is $\bar V^\pi_{t_0}(s) = \sum_{t=t_0}^{T} \mathbb{E}\left[r_{\pi(\cdot,t)}(s_t)\right]$, with $s_{t+1} \sim p(\cdot|s_t,\pi(s_t,t))$ and $s_{t_0} = s$. The action-value function of π at (s, ˙D) and t is the expected sum of rewards received by starting from s, taking action ˙D, and then following π: $\bar Q^\pi_t(s,\dot D) = \mathbb{E}\left[\sum_{t'=t}^{T} r_{\dot A_{t'}}(s_{t'}) \,\middle|\, \dot A_t = \dot D,\ s_{t'+1}\sim p(\cdot|s_{t'},\dot A_{t'}),\ \dot A_{t'+1} = \pi(s_{t'+1}, t'+1)\right]$. The Bellman operator of ˙A, $T_{\dot A}$, is $[T_{\dot A}f](s) = r_{\dot A}(s) + [P_{\dot A}f](s)$, with $[T_{\pi(s,t)}\bar V^\pi_{t+1}](s) = \bar V^\pi_t(s)$. The per-step regret $r_{\dot A}(s)$ depends on s and ˙A but not on the time step t. Removing time from the picture permits a simplified view of the problem that leads to a natural formulation of the exchangeability property that is independent of the time t. Crucially, this decomposition of the regret into per-step regrets is such that maximizing $\bar V^\pi_{t_0}(s)$ over adversaries π is equivalent, for all times t0 and states s, to maximizing over adversaries the original value of the game, the regret $V^\pi_{t_0}(s)$ (Lemma 2).
Lemma 2. For any adversary strategy π and any state s and time t0, $V^\pi_{t_0}(s) = \bar V^\pi_{t_0}(s) + \|s\|_\infty$.
The proof of Lemma 2 is in Section 8. In the following, our focus will be on maximizing $\bar V^\pi_t(s)$ in any state s. We now show some basic properties of the per-step regret that hold for an arbitrary number of experts K and discuss their implications. The proofs are in Section 9.
Lemma 3. Let ˙A ∈ ˙B; for all s and t, we have $0 \le r_{\dot A}(s) \le 1$. Furthermore, if |X(s)| = 1, then $r_{\dot A}(s) = 0$.
Lemma 3 shows that a state s in which the reward is not zero contains at least two equal leading experts, |X(s)| > 1.
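The per-step regret is easy to compute exactly for K = 3 by enumerating the brackets of a gain distribution over sorted positions. This is our own sketch (exact arithmetic via `Fraction`; the bracket encoding and index-based tie-breaking are our assumptions):

```python
from fractions import Fraction

def outcomes(s, brackets):
    # Equiprobable next states when the bracket distribution is played in s.
    # Brackets contain sorted positions: 1 = leading, 2 = middle, 3 = lagging.
    order = sorted(range(3), key=lambda k: (-s[k], k))
    p = Fraction(1, len(brackets))
    res = []
    for b in brackets:
        s2 = list(s)
        for pos in b:
            s2[order[pos - 1]] += 1
        res.append((p, tuple(s2)))
    return res

def per_step_regret(s, brackets):
    # r_A(s) = E[||s'||_inf] - ||s||_inf - c_A, with c_A the mean gain per arm
    c = Fraction(sum(len(b) for b in brackets), 3 * len(brackets))
    return sum(p * max(s2) for p, s2 in outcomes(s, brackets)) - max(s) - c
```

For the COMB distribution {2}{13} this reproduces the values of Figure 2 (r = 0 in C1 and C3, r = 1/2 on the reward-wall states) and the |X(s)| = 1 case of Lemma 3.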
Therefore the goal of maximizing the reward can be rephrased as finding a policy that visits the states with |X(s)| > 1 as often as possible, while still taking into account that the per-step reward increases with |X(s)|. The set of states with |X(s)| > 1 is called the 'reward wall'.
Lemma 4. In any state s with |X(s)| = 2, for any balanced gain distribution ˙D such that, with probability one, exactly one of the leading experts receives a gain of 1, $r_{\dot D}(s) = \max_{\dot A\in\dot B} r_{\dot A}(s)$.
5 The Case of K = 3
5.1 Notations in the 3-Experts Case, the COMB and the TWIN-COMB Adversaries
First we define the state space in the 3-expert case. The experts are sorted with respect to their cumulative gains and are named, in decreasing order, the leading expert, the middle expert and the lagging expert. As mentioned in [9], in our search for the minimax optimal adversary, it is sufficient for any K to describe our state only using the differences $d_{ij}$ between the cumulative gains of consecutive sorted experts i and j = i + 1. Here, i denotes the expert with the i-th largest cumulative gain, and hence $d_{ij} \ge 0$ for all i < j. Therefore one notation for a state, used throughout this section, is $s = (x, y) = (d_{12}, d_{23})$. We distinguish four types of states, detailed in Figure 1: C1 ($d_{12} > 0, d_{23} > 0$), C2 ($d_{12} = 0, d_{23} > 0$), C3 ($d_{12} > 0, d_{23} = 0$) and C4 ($d_{12} = 0, d_{23} = 0$). C4 contains only the state denoted sα = (0, 0). Figure 1 also lists five atomic gain distributions with their symbols and mean gains: {1}{23} is ˙W with $c_{\dot W} = 1/2$; {2}{13} is ˙C with $c_{\dot C} = 1/2$; {3}{12} is ˙V with $c_{\dot V} = 1/2$; {1}{2}{3} is ˙1 with $c_{\dot 1} = 1/3$; and {12}{13}{23} is ˙2 with $c_{\dot 2} = 2/3$.
Figure 1: The four types of states (left), their location on the 2d grid of states (center), and five atomic gain distributions (right).
Concerning the action space, the gain distributions are written with brackets. The arms in the same bracket receive gains together, and each group receives gains with equal probability.
For instance, {1}{2}{3} deals a gain to expert 1 (the leading expert) with probability 1/3, to expert 2 (the middle expert) with probability 1/3, and to expert 3 (the lagging expert) with probability 1/3, whereas {1}{23} deals a gain to expert 1 alone with probability 1/2 and to experts 2 and 3 together with probability 1/2. As discussed in Section 3, we are searching for a π* using mixtures of atomic balanced distributions. When K = 3 there are seven atomic distributions, denoted by $\dot B_3 = \{\dot V, \dot 1, \dot 2, \dot C, \dot W, \{\}, \{123\}\}$ and described in Figure 1 (right). Moreover, Figure 2 reports the per-step regret and transition probabilities of the COMB gain distribution ˙C, for a state s = (x, y):
- s ∈ C1: $r_{\dot C}(s) = 0$; next state (x−1, y+1) or (x+1, y−1), each with probability 1/2.
- s ∈ C2: $r_{\dot C}(s) = 1/2$; next state (x+1, y) or (x+1, y−1), each with probability 1/2.
- s ∈ C3: $r_{\dot C}(s) = 0$; next state (x, y+1) or (x−1, y+1), each with probability 1/2.
- s ∈ C4: $r_{\dot C}(s) = 1/2$; next state (x, y+1) or (x+1, y), each with probability 1/2.
Figure 2: The per-step regret and transition probabilities of the gain distribution ˙C.
The remaining atomic distributions are similarly reported in the appendix in Figures 5 to 8. In the case of three experts, the COMB distribution simply plays {2}{13} in any state. We use ˙W to denote the distribution that plays {1}{23} in any state and refer to it as the TWIN-COMB distribution. The COMB and TWIN-COMB strategies (as opposed to the distributions) repeat their respective gain distributions in any state and at any time. They are denoted πC and πW, respectively. Lemma 5 shows that the COMB strategy πC, the TWIN-COMB strategy πW, and therefore any mixture of both, have the same expected cumulative per-step regret. The proof is reported in Section 11.
Lemma 5. For all states s and times t, we have $\bar V^{\pi_C}_t(s) = \bar V^{\pi_W}_t(s)$.
5.2 The Exchangeability Property
Lemma 6.
Let ˙A ∈ ˙B3. There exists $\dot D \in \Delta(\dot C, \dot W)$ such that for any s ≠ sα and any f : S → R, $[T_{\dot A}[T_{\dot D}f]](s) = [T_{\dot W}[T_{\dot A}f]](s)$.
Proof. If ˙A = ˙W, ˙A = {} or ˙A = {123}, use ˙D = ˙W. If ˙A = ˙C, use Lemmas 11 and 12.
Case 1, ˙A = ˙V: ˙V is equal to ˙C in C3 ∪ C4, and if s' ∼ p(·|s, ˙W) with s ∈ C3 then s' ∈ C3 ∪ C4. So when s ∈ C3 we reuse the case ˙A = ˙C above. When s ∈ C1 ∪ C2, we consider two cases.
Case 1.1, s ≠ (0, 1): We choose ˙D = ˙W, which is {1}{23}. If s' ∼ p(·|s, ˙V) with s ∈ C2 then s' ∈ C2. Similarly, if s' ∼ p(·|s, ˙V) with s ∈ C1 then s' ∈ C1 ∪ C3. Moreover, ˙D modifies the coordinates (d12, d23) of states in C1 and in C3 in the same way. Therefore the effect of ˙D, in terms of transition probabilities and reward, is the same whether it is applied before or after the actions chosen by ˙V. If s' ∼ p(·|s, ˙D) with s ∈ C1 ∪ C2, then s' ∈ C1 ∪ C2. Moreover, ˙V modifies the coordinates (d12, d23) of states in C1 and in C2 in the same way. Therefore the effect of ˙V, in terms of transition probabilities, is the same whether it is applied before or after the action ˙D. In terms of reward, notice that in the states s ∈ C1 ∪ C2, ˙V has zero per-step regret, and using ˙V does not make s leave or enter the reward wall.
Case 1.2, s = (0, 1): We can choose ˙D = ˙W. One can check from the tables in Figures 7 and 8 that exchangeability holds. Additionally, an illustration of the exchangeability equality on the 2d grid is provided in Figure 1: starting from the state s = (0, 1), the gain distribution ˙V followed (left picture) or preceded (right picture) by the gain distribution ˙D leads to the same final states with equal probabilities and the same rewards.
Cases 2 & 3, ˙A = ˙1 and ˙A = ˙2: The proof is similar and is reported in Section 12 of the appendix.
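The equality of Lemma 6 can be checked numerically for the case ˙A = ˙V, ˙D = ˙W on C1 states. The sketch below is our own (exact arithmetic; the test value function and index-based tie-breaking are arbitrary choices): it nests two Bellman operators and compares both orders of play.

```python
from fractions import Fraction

BRACKETS = {"W": [{1}, {2, 3}], "V": [{3}, {1, 2}]}

def outcomes(s, brackets):
    # Equiprobable next states; brackets contain sorted positions 1..3.
    order = sorted(range(3), key=lambda k: (-s[k], k))
    p = Fraction(1, len(brackets))
    out = []
    for b in brackets:
        s2 = list(s)
        for pos in b:
            s2[order[pos - 1]] += 1
        out.append((p, tuple(s2)))
    return out

def regret(s, brackets):
    # per-step regret r_A(s) = E[max s'] - max s - c_A
    c = Fraction(sum(len(b) for b in brackets), 3 * len(brackets))
    return sum(p * max(s2) for p, s2 in outcomes(s, brackets)) - max(s) - c

def bellman(name, f):
    # Bellman operator: [T_A f](s) = r_A(s) + E[f(s')]
    br = BRACKETS[name]
    return lambda s: regret(s, br) + sum(p * f(s2) for p, s2 in outcomes(s, br))

def f0(s):
    # an arbitrary value function of the sorted gaps (d12, d23)
    a, b, c = sorted(s, reverse=True)
    return Fraction((a - b) ** 2 + 3 * (b - c))
```

Applying `bellman("V", bellman("W", f0))` (play ˙V, then ˙W) and `bellman("W", bellman("V", f0))` (play ˙W, then ˙V) to C1 states away from sα gives identical values, as the lemma predicts.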
5.3 Approximate Greediness of COMB, Minimax Players and Regret
The greedy error of the gain distribution ˙D in state s at time t is $\epsilon^{\dot D}_{s,t} = \max_{\dot A\in\dot B_3} \bar Q^{\pi_C}_t(s,\dot A) - \bar Q^{\pi_C}_t(s,\dot D)$. Let $\epsilon^{\dot D}_t = \max_{s\in S} \epsilon^{\dot D}_{s,t}$ denote the maximum greedy error of the gain distribution ˙D at time t. The COMB greedy error in sα is controlled by the following lemma, proved in Section 13.1. Missing proofs from this section are in the appendix in Section 13.2.
Lemma 7. For any t ∈ [T] and gain distribution ˙D ∈ {˙W, ˙C, ˙V, ˙1}, $\epsilon^{\dot D}_{s_\alpha,t} \le \frac{1}{6}\left(\frac{1}{2}\right)^{T-t}$.
Figure 3: Numbering of the TWIN-COMB (top) and πG (bottom) random walks on the 2d grid.
The following proposition shows how we can index the states in the 2d grid as a one-dimensional line over which the TWIN-COMB strategy behaves very similarly to a simple random walk. Figure 3 (top) illustrates this random walk on the 2d grid and the indexing scheme.
Proposition 1. Index a state s = (x, y) by $i_s = x + 2y$, irrespective of the time. Then for any state s ≠ sα and $s' \sim p(\cdot|s, \dot W)$, we have $P(i_{s'} = i_s - 1) = P(i_{s'} = i_s + 1) = \frac{1}{2}$.
Consider a random walk that starts from state s0 = s and is generated by the TWIN-COMB strategy, $s_{t+1} \sim p(\cdot|s_t, \dot W)$. Define the random variable $T_{\alpha,s} = \min\{t \in \mathbb{N}\cup\{0\} : s_t = s_\alpha\}$. This random variable is the number of steps of the walk before hitting sα for the first time. Then, let $P_\alpha(s, t)$ be the probability that sα is reached after t steps: $P_\alpha(s, t) = P(T_{\alpha,s} = t)$. Lemma 8 controls the COMB greedy error in s at time t in relation to $P_\alpha(s,\cdot)$. Lemma 9 derives a state-independent upper bound on $P_\alpha(s, t)$.
Lemma 8. For any time t ∈ [T] and state s, $\epsilon^{\dot C}_{s,t} \le \sum_{t'=t}^{T} P_\alpha(s, t'-t)\,\frac{1}{6}\left(\frac{1}{2}\right)^{T-t'}$.
Proof. If s = sα, this is a direct application of Lemma 7, as $P_\alpha(s_\alpha, t') = 0$ for t' > 0. When s ≠ sα, the proof is by induction.
Initialization: let t = T.
At the last round only the last per-step regret matters: for all states s, $\bar Q^{\pi_C}_T(s,\dot D) = r_{\dot D}(s)$. As s ≠ sα, s is such that |X(s)| ≤ 2, and then $r_{\dot D}(s) = \max_{\dot A\in\dot B} r_{\dot A}(s)$ because of Lemmas 4 and 3. Therefore the statement holds.
Induction: let t < T. We assume the statement is true at time t + 1. For all gain distributions ˙D ∈ ˙B3,
$\bar Q^{\pi_C}_t(s,\dot D) \overset{(a)}{=} [T_{\dot D}[T_{\dot E}\bar V^{\pi_C}_{t+2}]](s) \overset{(b)}{=} [T_{\dot W}[T_{\dot D}\bar V^{\pi_C}_{t+2}]](s) = [T_{\dot W}\bar Q^{\pi_C}_{t+1}(\cdot,\dot D)](s)$
$\overset{(c)}{\ge} [T_{\dot W}\max_{\dot A\in\dot B_3}\bar Q^{\pi_C}_{t+1}(\cdot,\dot A)](s) - \sum_{t_1=t+1}^{T}\left[P_{\dot W}P_\alpha(\cdot, t_1-t-1)\,\tfrac{1}{6}\left(\tfrac{1}{2}\right)^{T-t_1}\right](s)$
$\overset{(d)}{\ge} \max_{\dot A\in\dot B_3}[T_{\dot W}\bar Q^{\pi_C}_{t+1}(\cdot,\dot A)](s) - \sum_{t_1=t+1}^{T}\tfrac{1}{6}\left(\tfrac{1}{2}\right)^{T-t_1}[P_{\dot W}P_\alpha(\cdot, t_1-t-1)](s)$
$\overset{(b)}{=} \max_{\dot A\in\dot B_3}\bar Q^{\pi_C}_t(s,\dot A) - \sum_{t_1=t+1}^{T}\tfrac{1}{6}\left(\tfrac{1}{2}\right)^{T-t_1}[P_{\dot W}P_\alpha(\cdot, t_1-t-1)](s)$
$\overset{(e)}{=} \max_{\dot A\in\dot B_3}\bar Q^{\pi_C}_t(s,\dot A) - \sum_{t_1=t}^{T}\tfrac{1}{6}\left(\tfrac{1}{2}\right)^{T-t_1}P_\alpha(s, t_1-t),$
where in (a) ˙E is any distribution in Δ(˙C, ˙W) and the step holds because of Lemma 5; (b) holds because of the exchangeability property of Lemma 6; (c) is true by induction and by the monotonicity of the Bellman operator; in (d) the max operators change from being specific to each next state s' at time t+1 to a single max operator that has to choose one gain distribution in state s at time t; and (e) holds by definition since, for any t2 (the last equality holding because s ≠ sα),
$[P_{\dot W}P_\alpha(\cdot, t_2)](s) = \mathbb{E}_{s'\sim p(\cdot|s,\dot W)}[P_\alpha(s', t_2)] = \mathbb{E}_{s'\sim p(\cdot|s,\dot W)}[P(T_{\alpha,s'} = t_2)] = P_\alpha(s, t_2+1).$
Lemma 9. For t > 0 and any s, $P_\alpha(s, t) \le \frac{2}{t}\sqrt{\frac{2}{\pi}}$.
Proof. Using the connection between the TWIN-COMB strategy and a simple random walk in Proposition 1, a formula can be found for $P_\alpha(s, t)$ from the classical 'Gambler's ruin' problem, where one wants to know the probability that the gambler reaches ruin (here, state sα) at a given time t starting from an initial capital in dollars (here, $i_s$ as defined in Proposition 1).
Using [7] (Chapter XIV, Equation 4.14) or [18], we have $P_\alpha(s, t) = \frac{i_s}{t}\binom{t}{\frac{t+i_s}{2}}2^{-t}$, where the binomial coefficient is 0 if t and $i_s$ do not have the same parity. The technical Lemma 14 completes the proof.
We now state our main result, connecting the value of the COMB adversary to the value of the game.
Theorem 1. Let K = 3. The regret of the COMB strategy against any learner ¯p satisfies $\min_{\bar p} V^T_{\bar p,\pi_C} \ge V_T - 12\log_2(T+1)$.
We also characterize the minimax regret of the game.
Theorem 2. Let K = 3. For even T, we have $\left|V_T - \binom{T+2}{T/2+1}\frac{T/2+1}{3\cdot 2^T}\right| \le 12\log_2(T+1)$, with $\binom{T+2}{T/2+1}\frac{T/2+1}{3\cdot 2^T} \sim \sqrt{\frac{8T}{9\pi}}$.
In Figure 4 we introduce a COMB-based learner, denoted by ¯pC. Here a state is represented by a vector of 3 integers, and the three arms/experts are ordered as (1), (2), (3), breaking ties arbitrarily:
$p_{t,(1)}(s) = V^{\pi_C}_{t+1}(s+e_{(1)}) - V^{\pi_C}_t(s)$, $p_{t,(2)}(s) = V^{\pi_C}_{t+1}(s+e_{(2)}) - V^{\pi_C}_t(s)$, $p_{t,(3)}(s) = 1 - p_{t,(1)}(s) - p_{t,(2)}(s)$.
Figure 4: A COMB learner, ¯pC.
We connect the value of the COMB-based learner to the value of the game.
Theorem 3. Let K = 3. The regret of the COMB-based learner against any adversary π satisfies $\max_\pi V^T_{\bar p_C,\pi} \le V_T + 36\log_2(T+1)$.
Similarly to [2] and [14], this strategy can be efficiently computed using rollouts/simulations from the COMB adversary in order to estimate the value $V^{\pi_C}_t(s)$ of πC in s at time t.
6 Discussion and Future Work
The main objective is to generalize our new proof techniques to higher dimensions. In our case, the MDP formulation and all the results in Section 4 already hold for general K. Interestingly, Lemmas 3 and 4 show that the COMB distribution is the balanced distribution with the highest per-step regret in all the states s such that |X(s)| ≤ 2, for arbitrary K. Then, assuming an ideal exchangeability property that gives $\max_{\dot A\in\dot B} \dot A\dot C\cdots\dot C = \max_{\dot A\in\dot B} \dot C\dot C\cdots$
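Both the gambler's-ruin formula of the proof above and the leading term of Theorem 2 can be checked numerically. The sketch below (our own illustration) compares the closed-form first-passage probability with an exact dynamic-programming computation, and checks the asymptotic constant $\sqrt{8T/(9\pi)}$:

```python
from math import comb, sqrt, pi

def first_passage(i, t):
    # P(T_alpha = t): simple symmetric walk started at i > 0 first hits 0 at t
    if t < i or (t - i) % 2 == 1:
        return 0.0   # too few steps, or parity mismatch
    return i / t * comb(t, (t + i) // 2) / 2 ** t

def first_passage_dp(i, t_max):
    # exact first-passage distribution by forward dynamic programming
    probs, out = {i: 1.0}, []
    for _ in range(t_max):
        nxt, hit = {}, 0.0
        for j, p in probs.items():
            for j2 in (j - 1, j + 1):
                if j2 == 0:
                    hit += p / 2        # absorbed at the origin
                else:
                    nxt[j2] = nxt.get(j2, 0.0) + p / 2
        out.append(hit)
        probs = nxt
    return out   # out[t-1] = P(first hit at time t)

def leading_term(T):
    # exact leading term of Theorem 2 for even T; the huge-integer division
    # is done as int/int so no intermediate float overflows
    return comb(T + 2, T // 2 + 1) * (T // 2 + 1) / (3 * 2 ** T)
```

For even T, `leading_term(T)` approaches `sqrt(8 * T / (9 * pi))` as T grows, matching the stated asymptotics.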
˙C ˙A, a distribution would be greedy w.r.t the COMB strategy at an early round of the game if it maximizes the per-step regret at the last round of the game. The COMB policy specifically tends to visit almost exclusively states |X(s)|≤2, states where COMB itself is the maximizer of the per-step regret (Lemma 3). This would give that COMB is greedy w.r.t. itself and therefore optimal. To obtain this result for larger K, we will need to extend the exchangeability property to higher K and therefore understand how the COMB and TWIN-COMB families extend to higher dimensions. One could also borrow ideas from the link with pde approaches made in [6]. 8 Acknowledgements We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the Australian Research Council through an Australian Laureate Fellowship (FL110100281) and through the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS). We would like to thank Nate Eldredge for pointing us to the results in [18] and Wouter Koolen for pointing us at [19]! References [1] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In Advances in Neural Information Processing Systems (NIPS), pages 1–9, 2010. [2] Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. Optimal strategies from random walks. In 21st Annual Conference on Learning Theory (COLT), pages 437–446, 2008. [3] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM (JACM), 44(3):427–485, 1997. [4] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge university press, 2006. [5] Thomas M. Cover. Behavior of sequential predictors of binary sequences. In 4th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263–272, 1965. [6] Nadeja Drenska. 
A PDE approach to mixed strategies prediction with expert advice. http://www.gtcenter.org/Downloads/Conf/Drenska2708.pdf. (Extended abstract).
[7] William Feller. An Introduction to Probability Theory and its Applications, volume 2. John Wiley & Sons, 2008.
[8] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Towards optimal algorithms for prediction with expert advice. arXiv preprint arXiv:1603.04981, 2014.
[9] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Towards optimal algorithms for prediction with expert advice. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 528–547, 2016.
[10] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Tight lower bounds for multiplicative weights algorithmic families. In 44th International Colloquium on Automata, Languages, and Programming (ICALP), volume 80, pages 48:1–48:14, 2017.
[11] Charles Miller Grinstead and James Laurie Snell. Introduction to Probability. American Mathematical Society, 2012.
[12] James Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97–139, 1957.
[13] Ronald A. Howard. Dynamic Programming and Markov Processes. The MIT Press, Cambridge, MA, 1960.
[14] Haipeng Luo and Robert E. Schapire. Towards minimax online learning with unknown time horizon. In Proceedings of The 31st International Conference on Machine Learning (ICML), pages 226–234, 2014.
[15] Francesco Orabona and Dávid Pál. Optimal non-asymptotic lower bound on the minimax regret of learning with expert advice. arXiv preprint arXiv:1511.02176, 2015.
[16] Martin L. Puterman. Markov Decision Processes. Wiley, New York, 1994.
[17] Pantelimon Stanica. Good lower and upper bounds on binomial coefficients. Journal of Inequalities in Pure and Applied Mathematics, 2(3):30, 2001.
[18] Remco van der Hofstad and Michael Keane. An elementary proof of the hitting time theorem. The American Mathematical Monthly, 115(8):753–756, 2008.
[19] Vladimir Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences (JCSS), 56(2):153–173, 1998.
Learned D-AMP: Principled Neural Network Based Compressive Image Recovery Christopher A. Metzler Rice University chris.metzler@rice.edu Ali Mousavi Rice University ali.mousavi@rice.edu Richard G. Baraniuk Rice University richb@rice.edu Abstract Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and oftentimes specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS. 1 Introduction Over the last few decades, computational imaging systems have proliferated in a host of different imaging domains, from synthetic aperture radar to functional MRI and CT scanners. The majority of these systems capture linear measurements $y \in \mathbb{R}^m$ of the signal of interest $x \in \mathbb{R}^n$ via $y = Ax + \epsilon$, where $A \in \mathbb{R}^{m \times n}$ is a measurement matrix and $\epsilon \in \mathbb{R}^m$ is noise.
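A minimal NumPy sketch of this measurement model (the dimensions and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                 # signal and measurement dimensions (m < n)
x = rng.standard_normal(n)                     # signal of interest (stand-in for an image)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix with i.i.d. Gaussian entries
epsilon = 0.01 * rng.standard_normal(m)        # measurement noise
y = A @ x + epsilon                            # the captured linear measurements
```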
Given the measurements y and the measurement matrix A, a computational imaging system seeks to recover x. When m < n this problem is underdetermined, and prior knowledge about x must be used to recover the signal. This problem is broadly referred to as compressive sampling (CS) [1; 2]. There are myriad ways to use priors to recover an image x from compressive measurements. In the following, we briefly describe some of these methods. Note that the ways in which these algorithms use priors span a spectrum, from simple hand-designed models to completely data-driven methods (see Figure 1). 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: The spectrum of compressive signal recovery algorithms. 1.1 Hand-designed recovery methods The vast majority of CS recovery algorithms can be considered “hand-designed” in the sense that they use some sort of expert knowledge, i.e., a prior, about the structure of x. The most common signal prior is that x is sparse in some basis. Algorithms using sparsity priors include CoSaMP [3], ISTA [4], approximate message passing (AMP) [5], and VAMP [6], among many others. Researchers have also developed priors and algorithms that more accurately describe the structure of natural images, such as minimal total variation, e.g., TVAL3 [7], Markov-tree models on the wavelet coefficients, e.g., ModelCoSaMP [8], and nonlocal self-similarity, e.g., NLR-CS [9]. Off-the-shelf denoising and compression algorithms have also been used to impose priors on the reconstruction, e.g., Denoising-based AMP (D-AMP) [10], D-VAMP [11], and C-GD [12]. When applied to natural images, algorithms using advanced priors outperform simple priors, like wavelet sparsity, by a large margin [10]. The appeal of hand-designed methods is that they are based on interpretable priors and often have well-understood behavior.
Moreover, when they are set up as convex optimization problems, they often have theoretical convergence guarantees. Unfortunately, among the algorithms that use accurate priors on the signal, even the fastest is too slow for many real-time applications [10]. More importantly, these algorithms do not take advantage of potentially available training data. As we will see, this leaves much room for improvement. 1.2 Data-driven recovery methods At the other end of the spectrum are data-driven (often deep learning-based) methods that use no hand-designed models whatsoever. Instead, researchers provide neural networks (NNs) vast amounts of training data, and the networks learn how to best use the structure within the data [13–16]. The first paper to apply this approach was [13], where the authors used stacked denoising autoencoders (SDA) [17] to recover signals from their undersampled measurements. Other papers in this line of work have used either pure convolutional layers (DeepInverse [15]) or a combination of convolutional and fully connected layers (DR2-Net [16] and ReconNet [14]) to build deep learning frameworks capable of solving the CS recovery problem. As demonstrated in [13], these methods can compete with state-of-the-art methods in terms of accuracy while running thousands of times faster. Unfortunately, these methods are held back by the fact that there exists almost no theory governing their performance and that, so far, they must be trained for specific measurement matrices and noise levels. 1.3 Mixing hand-designed and data-driven methods for recovery The third class of recovery algorithms blends data-driven models with hand-designed algorithms. These methods first use expert knowledge to set up a recovery algorithm and then use training data to learn priors within this algorithm.
Such methods benefit from the ability to learn more realistic signal priors from the training data, while still maintaining the interpretability and guarantees that made hand-designed methods so appealing. Algorithms of this class can be divided into two subcategories. The first subcategory uses a black box neural network that performs some function within the algorithm, such as the proximal mapping. The second subcategory explicitly unrolls an iterative algorithm and turns it into a deep NN. Following this unrolling, the network can be tuned with training data. Our LDAMP algorithm uses ideas from both these camps. Black box neural nets. The simplest way to use a NN in a principled way to solve the CS problem is to treat it as a black box that performs some function, such as computing a posterior probability. (a) D-IT Iterations. (b) D-AMP Iterations. Figure 2: Reconstruction behavior of D-IT (left) and D-AMP (right) with an idealized denoiser. Because D-IT allows bias to build up over iterations of the algorithm, its denoiser becomes ineffective at projecting onto the set C of all natural images. The Onsager correction term enables D-AMP to avoid this issue. Figure adapted from [10]. Examples of this approach include RBM-AMP and its generalizations [18–20], which use Restricted Boltzmann Machines to learn non-i.i.d. priors; RIDE-CS [21], which uses the RIDE [22] generative model to compute the probability of a given estimate of the image; and OneNet [23], which uses a NN as a proximal mapping/denoiser. Unrolled algorithms. The second way to use a NN in a principled way to solve the CS problem is to simply take a well-understood iterative recovery algorithm and unroll/unfold it. This method is best illustrated by the LISTA [24; 25] and LAMP [26] NNs. In these works, the authors simply unroll the iterative ISTA [4] and AMP [5] algorithms, respectively, and then treat parameters of the algorithm as weights to be learned.
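For reference, the ISTA iteration that LISTA unrolls is just a gradient step on the data-fit term followed by elementwise soft-thresholding. A minimal NumPy sketch on a toy sparse-recovery problem (the problem sizes, regularization weight, and iteration count are illustrative):

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding, the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(y, A, lam=0.01, n_iters=500):
    """Plain ISTA for min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
    Unrolling these iterations, and learning the thresholds and matrices
    from data, is the construction behind LISTA."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + eta * A.T @ (y - A @ x), eta * lam)
    return x

# Toy problem: recover a sparse vector from underdetermined Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 50, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A @ x_true, A)
```

An unrolled version simply fixes the number of loop iterations and treats the thresholds (and, in LISTA, the matrices) as trainable weights.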
Following the unrolling, training data can be fed through the network, and stochastic gradient descent can be used to update and optimize its parameters. Unrolling was recently applied to the ADMM algorithm to solve the CS-MRI problem [27]. The resulting network, ADMM-Net, uses training data to learn filters, penalties, simple nonlinearities, and multipliers. Moving beyond CS, the unrolling principle has been applied successfully in speech enhancement [28], non-negative matrix factorization applied to music transcription [29], and beyond. In these applications, unrolling and training significantly improve both the quality and speed of signal reconstruction. 2 Learned D-AMP 2.1 D-IT and D-AMP Learned D-AMP (LDAMP) is a mixed hand-designed/data-driven compressive signal recovery framework that builds on the D-AMP algorithm [10]. We describe D-AMP now, as well as the simpler denoising-based iterative thresholding (D-IT) algorithm. For concreteness, but without loss of generality, we focus on image recovery. A compressive image recovery algorithm solves the ill-posed inverse problem of finding the image x given the low-dimensional measurements y = Ax by exploiting prior information on x, such as the fact that x ∈ C, where C is the set of all natural images. A natural optimization formulation reads
$$\arg\min_x \|y - Ax\|_2^2 \ \text{subject to} \ x \in C. \quad (1)$$
When no measurement noise ϵ is present, a compressive image recovery algorithm should return the (hopefully unique) image $x_o$ at the intersection of the set C and the affine subspace $\{x \mid y = Ax\}$ (see Figure 2). The premise of D-IT and D-AMP is that high-performance image denoisers $D_\sigma$, such as BM3D [30], are high-quality approximate projections onto the set C of natural images.¹,² That is, suppose [Footnote 1: The notation $D_\sigma$ indicates that the denoiser can be parameterized by the standard deviation of the noise $\sigma$.]
[Footnote 2: Denoisers can also be thought of as a proximal mapping with respect to the negative log likelihood of natural images [31] or as taking a gradient step with respect to the data generating function of natural images [32; 33].] $x_o + \sigma z$ is a noisy observation of a natural image, with $x_o \in C$ and $z \sim \mathcal{N}(0, I)$. An ideal denoiser $D_\sigma$ would simply find the point in the set C that is closest to the observation $x_o + \sigma z$:
$$D_\sigma(x_o + \sigma z) = \arg\min_x \|x_o + \sigma z - x\|_2^2 \ \text{subject to} \ x \in C. \quad (2)$$
Combining (1) and (2) leads naturally to the D-IT algorithm, presented in (3) and illustrated in Figure 2(a). Starting from $x^0 = 0$, D-IT takes a gradient step towards the $\{x \mid y = Ax\}$ affine subspace and then applies the denoiser $D_\sigma$ to move to $x^1$ in the set C of natural images. Gradient stepping and denoising are repeated for $t = 1, 2, \ldots$ until convergence. D-IT Algorithm:
$$z^t = y - Ax^t, \qquad x^{t+1} = D_{\hat\sigma^t}(x^t + A^H z^t). \quad (3)$$
Let $\nu^t = x^t + A^H z^t - x_o$ denote the difference between $x^t + A^H z^t$ and the true signal $x_o$ at each iteration. $\nu^t$ is known as the effective noise. At each iteration, D-IT denoises $x^t + A^H z^t = x_o + \nu^t$, i.e., the true signal plus the effective noise. Most denoisers are designed to work with $\nu^t$ as additive white Gaussian noise (AWGN). Unfortunately, as D-IT iterates, the denoiser biases the intermediate solutions, and $\nu^t$ soon deviates from AWGN. Consequently, the denoising iterations become less effective [5; 10; 26], and convergence slows. D-AMP differs from D-IT in that it corrects for the bias in the effective noise at each iteration $t = 0, 1, \ldots$ using an Onsager correction term $b^t$. D-AMP Algorithm:
$$b^t = \frac{z^{t-1} \operatorname{div} D_{\hat\sigma^{t-1}}(x^{t-1} + A^H z^{t-1})}{m}, \qquad z^t = y - Ax^t + b^t, \qquad \hat\sigma^t = \frac{\|z^t\|_2}{\sqrt{m}}, \qquad x^{t+1} = D_{\hat\sigma^t}(x^t + A^H z^t). \quad (4)$$
The Onsager correction term removes the bias from the intermediate solutions so that the effective noise $\nu^t$ follows the AWGN model expected by typical image denoisers. For more information on the Onsager correction, its origins, and its connection to the Thouless-Anderson-Palmer equations [34], see [5] and [35].
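The D-AMP iteration can be sketched compactly in NumPy. In this illustrative version (not the paper's implementation) the denoiser is a simple soft-thresholding rule standing in for BM3D or DnCNN, and the divergence is estimated with a single-probe Monte-Carlo approximation; dropping the Onsager term recovers D-IT:

```python
import numpy as np

def mc_divergence(denoiser, v, sigma, rng):
    """Single-probe Monte-Carlo estimate of div D(v):
    eta^T (D(v + eps*eta) - D(v)) / eps for a random probe eta."""
    eta = rng.standard_normal(v.shape)
    eps = 1e-3 * max(np.max(np.abs(v)), 1.0)
    return eta @ (denoiser(v + eps * eta, sigma) - denoiser(v, sigma)) / eps

def d_amp(y, A, denoiser, n_iters=30, rng=None):
    """Sketch of the D-AMP iteration (4): gradient step, denoise, and
    Onsager-corrected residual update."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)        # effective-noise estimate
        pseudo = x + A.T @ z                          # x^t + A^H z^t
        div = mc_divergence(denoiser, pseudo, sigma, rng)
        x = denoiser(pseudo, sigma)                   # x^{t+1}
        z = y - A @ x + z * div / m                   # residual plus Onsager correction
    return x

# Toy usage with a soft-thresholding "denoiser" (a sparsity-prior stand-in).
rng = np.random.default_rng(3)
n, m, k = 200, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - 2.0 * s, 0.0)
x_hat = d_amp(A @ x_true, A, soft, n_iters=30, rng=rng)
```

With the soft-thresholding denoiser this reduces to classical AMP for sparse recovery; swapping in an image denoiser gives the D-AMP behavior described above.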
Note that $\|z^t\|_2 / \sqrt{m}$ serves as a useful and accurate estimate of the standard deviation of $\nu^t$ [36]. Typically, D-AMP algorithms use a Monte-Carlo approximation for the divergence $\operatorname{div} D(\cdot)$, which was first introduced in [37; 10]. 2.2 Denoising convolutional neural network NNs have a long history in signal denoising; see, for instance, [38]. However, only recently have they begun to significantly outperform established methods like BM3D [30]. In this section we review the recently developed Denoising Convolutional Neural Network (DnCNN) image denoiser [39], which is both more accurate and far faster than competing techniques. The DnCNN neural network consists of 16 to 20 convolutional layers, organized as follows. The first convolutional layer uses 64 different 3 × 3 × c filters (where c denotes the number of color channels) and is followed by a rectified linear unit (ReLU) [40]. The next 14 to 18 convolutional layers each use 64 different 3 × 3 × 64 filters, each of which is followed by batch-normalization [41] and a ReLU. The final convolutional layer uses c separate 3 × 3 × 64 filters to reconstruct the signal. The parameters are learned via residual learning [42]. 2.3 Unrolling D-IT and D-AMP into networks The central contribution of this work is to apply the unrolling ideas described in Section 1.3 to D-IT and D-AMP to form the LDIT and LDAMP neural networks. The LDAMP network, presented in (5) and illustrated in Figure 3, consists of 10 AMP layers where each AMP layer contains two denoisers with tied weights. One denoiser is used to update $x^l$, and the other is used to estimate the divergence using the Monte-Carlo approximation from [37; 10]. Figure 3: Two layers of the LDAMP neural network. When used with the DnCNN denoiser, each denoiser block is a 16 to 20 convolutional-layer neural network. The forward and backward operators are represented as the matrices $A$ and $A^H$; however, function handles work as well.
The LDIT network is nearly identical but does not compute an Onsager correction term and hence only applies one denoiser per layer. One of the few challenges to unrolling D-IT and D-AMP is that, to enable training, we must use a denoiser that easily propagates gradients; a black box denoiser like BM3D will not work. This restricts us to denoisers such as DnCNN, which, fortunately, offers improved performance. LDAMP Neural Network:
$$b^l = \frac{z^{l-1} \operatorname{div} D^{l-1}_{w^{l-1}(\hat\sigma^{l-1})}(x^{l-1} + A^H z^{l-1})}{m}, \qquad z^l = y - Ax^l + b^l, \qquad \hat\sigma^l = \frac{\|z^l\|_2}{\sqrt{m}}, \qquad x^{l+1} = D^l_{w^l(\hat\sigma^l)}(x^l + A^H z^l). \quad (5)$$
Within (5), we use the slightly cumbersome notation $D^l_{w^l(\hat\sigma^l)}$ to indicate that layer $l$ of the network uses denoiser $D^l$, that this denoiser depends on its weights/biases $w^l$, and that these weights may be a function of the estimated standard deviation of the noise $\hat\sigma^l$. During training, the only free parameters we learn are the denoiser weights $w^1, \ldots, w^L$. This is distinct from the LISTA and LAMP networks, where the authors decouple and learn the $A$ and $A^H$ matrices used in the network [24; 26].
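The dependence of a layer's weights on $\hat\sigma^l$ can be realized as a simple lookup into a bank of denoisers pretrained for different noise ranges. A hypothetical sketch (the bin edges and the `weight_bank` structure are illustrative, not from the paper):

```python
def select_denoiser_weights(sigma_hat, weight_bank):
    """Pick the pretrained denoiser weights whose training noise-std range
    contains the current effective-noise estimate sigma_hat.
    weight_bank maps (low, high) noise-std intervals to weight objects."""
    for (low, high), weights in sorted(weight_bank.items()):
        if low <= sigma_hat < high:
            return weights
    return weight_bank[max(weight_bank)]  # fall back to the highest-noise bin

# Example: three denoisers trained on noise std in [0, 20), [20, 40), [40, 80).
bank = {(0, 20): "weights_low", (20, 40): "weights_mid", (40, 80): "weights_high"}
```

For instance, an estimate of $\hat\sigma = 35$ would select the denoiser trained on noise standard deviations between 20 and 40.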
During inference, the network uses its estimate of the standard deviation of the noise to select which set of denoiser weights to use. Note that, in selecting which denoiser weights to use, we must discretize the expected range of noise levels; e.g., if $\hat\sigma = 35$, then we use the denoiser for noise standard deviations between 20 and 40.

                      LDIT   LDAMP
End-to-end            32.1   33.1
Layer-by-layer        26.1   33.1
Denoiser-by-denoiser  28.0   31.6
(a)

                      LDIT   LDAMP
End-to-end             8.0   18.7
Layer-by-layer        -2.6   18.7
Denoiser-by-denoiser  22.1   25.9
(b)

Figure 4: Average PSNRs⁴ of 100 40 × 40 image reconstructions with i.i.d. Gaussian measurements trained at a sampling rate of m/n = 0.20 and tested at sampling rates of m/n = 0.20 (a) and m/n = 0.05 (b).

Comparing Training Methods. Stochastic gradient descent theory suggests that layer-by-layer and denoiser-by-denoiser training should sacrifice performance as compared to end-to-end training [43]. In Section 4.2 we will prove that this is not the case for LDAMP. For LDAMP, layer-by-layer and denoiser-by-denoiser training are minimum-mean-squared-error (MMSE) optimal. These theoretical results are borne out experimentally in Tables 4(a) and 4(b). Each of the networks tested in this section consists of 10 unrolled DAMP/DIT layers that each contain a 16-layer DnCNN denoiser. Table 4(a) demonstrates that, as suggested by theory, layer-by-layer training of LDAMP is optimal; additional end-to-end training does not improve the performance of the network. In contrast, the table demonstrates that layer-by-layer training of LDIT, which represents the behavior of a typical neural network, is suboptimal; additional end-to-end training dramatically improves its performance. Despite the theoretical result that denoiser-by-denoiser training is optimal, Table 4(a) shows that LDAMP trained denoiser-by-denoiser performs slightly worse than the end-to-end and layer-by-layer trained networks.
This gap in performance is likely due to the discretization of the noise levels, which is not modeled in our theory. This gap can be reduced by using a finer discretization of the noise levels or by using deeper denoiser networks that can better handle a range of noise levels [39]. In Table 4(b) we report on the performance of the two networks when trained at one sampling rate and tested at another. LDIT and LDAMP networks trained end-to-end and layer-by-layer at a sampling rate of m/n = 0.2 perform poorly when tested at a sampling rate of m/n = 0.05. In contrast, the denoiser-by-denoiser trained networks, which were not trained at a specific sampling rate, generalize well to different sampling rates. 4 Theoretical analysis of LDAMP This section makes two theoretical contributions. First, we show that the state-evolution (S.E.), a framework that predicts the performance of AMP/D-AMP, holds for LDAMP as well.⁵ Second, we use the S.E. to prove that layer-by-layer and denoiser-by-denoiser training of LDAMP are MMSE optimal. 4.1 State-evolution In the context of LAMP and LDAMP, the S.E. equations predict the intermediate mean squared error (MSE) of the network over each of its layers [26]. Starting from $\theta^0 = \|x_o\|_2^2 / n$, the S.E. generates a sequence of numbers through the following iterations:
$$\theta^{l+1}(x_o, \delta, \sigma_\epsilon^2) = \frac{1}{n} \mathbb{E}_\epsilon \|D^l_{w^l(\sigma^l)}(x_o + \sigma^l \epsilon) - x_o\|_2^2, \quad (6)$$
where $(\sigma^l)^2 = \frac{1}{\delta} \theta^l(x_o, \delta, \sigma_\epsilon^2) + \sigma_\epsilon^2$, the scalar $\sigma_\epsilon$ is the standard deviation of the measurement noise $\epsilon$, and the expectation is with respect to $\epsilon \sim \mathcal{N}(0, I)$. Note that the notation $\theta^{l+1}(x_o, \delta, \sigma_\epsilon^2)$ is used to emphasize that $\theta^l$ may depend on the signal $x_o$, the under-determinacy $\delta$, and the measurement noise. Let $x^l$ denote the estimate at layer $l$ of LDAMP. Our empirical findings, illustrated in Figure 5, show that the MSE of LDAMP is predicted accurately by the S.E. We formally state our finding. [Footnote 4: $\mathrm{PSNR} = 10 \log_{10}\left(\frac{255^2}{\mathrm{mean}((\hat{x} - x_o)^2)}\right)$ when the pixel range is 0 to 255.] [Footnote 5] For D-AMP and LDAMP, the S.E.
is entirely observational; no rigorous theory exists. For AMP, the S.E. has been proven asymptotically accurate for i.i.d. Gaussian measurements [44]. Figure 5: The MSE of intermediate reconstructions of the Boat test image across different layers for the DnCNN variants of LDAMP and LDIT, alongside their predicted S.E. The image was sampled with Gaussian measurements at a rate of m/n = 0.1. Note that LDAMP is well predicted by the S.E., whereas LDIT is not. Finding 1. If the LDAMP network starts from $x^0 = 0$, then for large values of $m$ and $n$, the S.E. predicts the mean square error of LDAMP at each layer, i.e.,
$$\theta^l(x_o, \delta, \sigma_\epsilon^2) \approx \frac{1}{n} \|x^l - x_o\|_2^2,$$
if the following conditions hold: (i) The elements of the matrix $A$ are i.i.d. Gaussian (or subgaussian) with mean zero and standard deviation 1/m. (ii) The noise $w$ is also i.i.d. Gaussian. (iii) The denoisers $D^l$ at each layer are Lipschitz continuous.⁶ 4.2 Layer-by-layer and denoiser-by-denoiser training is optimal The S.E. framework enables us to prove the following results: layer-by-layer and denoiser-by-denoiser training of LDAMP are MMSE optimal. Both these results rely upon the following lemma. Lemma 1. Suppose that $D^1, D^2, \ldots, D^L$ are monotone denoisers in the sense that, for $l = 1, 2, \ldots, L$, $\inf_{w^l} \mathbb{E}\|D^l_{w^l(\sigma)}(x_o + \sigma\epsilon) - x_o\|_2^2$ is a non-decreasing function of $\sigma$. If the weights $w^1$ of $D^1$ are set to minimize $\mathbb{E}_{x_o}[\theta^1]$ and fixed; and then the weights $w^2$ of $D^2$ are set to minimize $\mathbb{E}_{x_o}[\theta^2]$ and fixed, $\ldots$, and then the weights $w^L$ of $D^L$ are set to minimize $\mathbb{E}_{x_o}[\theta^L]$, then together they minimize $\mathbb{E}_{x_o}[\theta^L]$. Lemma 1 can be derived using the proof technique for Lemma 3 of [10], but with $\theta^l$ replaced by $\mathbb{E}_{x_o}[\theta^l]$ throughout. It leads to the following two results. Corollary 1. Under the conditions in Lemma 1, layer-by-layer training of LDAMP is MMSE optimal. This result follows from Lemma 1 and the equivalence between $\mathbb{E}_{x_o}[\theta^l]$ and $\mathbb{E}_{x_o}[\frac{1}{n}\|x^l - x_o\|_2^2]$. Corollary 2. Under the conditions in Lemma 1, denoiser-by-denoiser training of LDAMP is MMSE optimal. This result follows from Lemma 1 and the equivalence between $\mathbb{E}_{x_o}[\theta^l]$ and $\mathbb{E}_{x_o}[\frac{1}{n}\mathbb{E}_\epsilon\|D^l_{w^l(\sigma)}(x_o + \sigma^l \epsilon) - x_o\|_2^2]$. 5 Experiments Datasets. Training images were pulled from Berkeley's BSD-500 dataset [46]. From this dataset, we used 400 images for training, 50 for validation, and 50 for testing. For the results presented in Section 3, the training images were cropped, rescaled, flipped, and rotated to form a set of 204,800 overlapping 40 × 40 patches. The validation images were cropped to form 1,000 non-overlapping 40 × 40 patches. We used 256 non-overlapping 40 × 40 patches for test.
For the results presented in this section, we used 382,464 50 × 50 patches for training, 6,528 50 × 50 patches for validation, and seven standard test images, illustrated in Figure 6 and rescaled to various resolutions, for test. Implementation. We implemented LDAMP and LDIT, using the DnCNN denoiser [39], in both TensorFlow and MatConvNet [47], which is a toolbox for Matlab. Public implementations of both versions of the algorithm are available at https://github.com/ricedsp/D-AMP_Toolbox. [Footnote 6: A denoiser is said to be L-Lipschitz continuous if for every $x_1, x_2 \in C$ we have $\|D(x_1) - D(x_2)\|_2^2 \leq L \|x_1 - x_2\|_2^2$. While we did not find it necessary in practice, weight clipping and gradient norm penalization can be used to ensure Lipschitz continuity of the convolutional denoiser [45].] (a) Barbara (b) Boat (c) Couple (d) Peppers (e) Fingerprint (f) Mandrill (g) Bridge Figure 6: The seven test images. Training parameters. We trained all the networks using the Adam optimizer [48] with a training rate of 0.001, which we dropped to 0.0001 and then 0.00001 when the validation error stopped improving. We used mini-batches of 32 to 256 patches, depending on network size and memory usage. For layer-by-layer and denoiser-by-denoiser training, we used a different randomly generated measurement matrix for each mini-batch. Training generally took between 3 and 5 hours per denoiser on an Nvidia Pascal Titan X. Results in this section are for denoiser-by-denoiser trained networks, which consist of 10 unrolled DAMP/DIT layers that each contain a 20-layer DnCNN denoiser. Competition. We compared the performance of LDAMP to three state-of-the-art image recovery algorithms: TVAL3 [7], NLR-CS [9], and BM3D-AMP [10]. We also include a comparison with LDIT to demonstrate the benefits of the Onsager correction term. Our results do not include comparisons with any other NN-based techniques.
While many NN-based methods are very specialized and only work for fixed matrices [13–16; 27], the recently proposed OneNet [23] and RIDE-CS [21] methods can be applied more generally. Unfortunately, we were unable to train and test the OneNet code in time for this submission. While RIDE-CS code was available, the implementation requires the measurement matrices to have orthonormalized rows. When tested on matrices without orthonormal rows, RIDE-CS performed significantly worse than the other methods. Algorithm parameters. All algorithms used their default parameters. However, NLR-CS was initialized using 8 iterations of BM3D-AMP, as described in [10]. BM3D-AMP was run for 10 iterations. LDIT and LDAMP used 10 layers. LDIT had its per-layer noise standard deviation estimate $\hat\sigma$ set to $2\|z^l\|_2/\sqrt{m}$, as was done with D-IT in [10]. Testing setup. We tested the algorithms with i.i.d. Gaussian measurements and with measurements from a randomly sampled coded diffraction pattern [49]. The coded diffraction pattern forward operator was formed as a composition of three steps: randomly (uniformly) change the phase, take a 2D FFT, and then randomly (uniformly) subsample. Except for the results in Figure 7, we tested the algorithms with 128 × 128 images (n = 128²). We report recovery accuracy in terms of PSNR. We report run times in seconds. Results broken down by image are provided in the supplement. Gaussian measurements. With noise-free Gaussian measurements, the LDAMP network produces the best reconstructions at every sampling rate on every image except Fingerprints, which looks very unlike the natural images the network was trained on. With noise-free Gaussian measurements, LDIT and LDAMP produce reconstructions significantly faster than the competing methods. Note that, despite having to perform twice as many denoising operations, at a sampling rate of m/n = 0.25 the LDAMP network is only about 25% slower than LDIT.
This indicates that matrix multiplies, not denoising operations, are the dominant source of computation. Average recovery PSNRs and run times are reported in Table 1. With noisy Gaussian measurements, LDAMP uniformly outperformed the other methods; these results can be found in the supplement. Coded diffraction measurements. With noise-free coded diffraction measurements, the LDAMP network again produces the best reconstructions on every image except Fingerprints. With coded diffraction measurements, LDIT and LDAMP produce reconstructions significantly faster than competing methods. Note that because the coded diffraction measurement forward and backward operators can be applied in O(n log n) operations, denoising becomes the dominant source of computation: LDAMP, which has twice as many denoising operations as LDIT, takes roughly 2× longer to complete. Average recovery PSNRs and run times are reported in Table 2. We end this section with a visual comparison of 512 × 512 reconstructions from TVAL3, BM3D-AMP, and LDAMP, presented in Figure 7.

Table 1: PSNRs and run times (sec) of 128 × 128 reconstructions with i.i.d. Gaussian measurements and no measurement noise at various sampling rates.

            m/n = 0.10     m/n = 0.15     m/n = 0.20     m/n = 0.25
Method      PSNR   Time    PSNR   Time    PSNR   Time    PSNR   Time
TVAL3       21.5    2.2    22.8    2.9    24.0    3.6    25.0    4.3
BM3D-AMP    23.1    4.8    25.1    4.4    26.6    4.2    27.9    4.1
LDIT        20.1    0.3    20.7    0.4    21.1    0.4    21.7    0.5
LDAMP       23.7    0.4    25.7    0.5    27.2    0.5    28.5    0.6
NLR-CS      23.2   85.9    25.2  104.0    26.8  124.4    28.2  146.3

Table 2: PSNRs and run times (sec) of 128 × 128 reconstructions with coded diffraction measurements and no measurement noise at various sampling rates.

            m/n = 0.10     m/n = 0.15     m/n = 0.20     m/n = 0.25
Method      PSNR   Time    PSNR   Time    PSNR   Time    PSNR   Time
TVAL3       24.0   0.52    26.0   0.46    27.9   0.43    29.7   0.41
BM3D-AMP    23.8   4.55    25.7   4.29    27.5   3.67    29.1   3.40
LDIT        22.9   0.14    25.6   0.14    27.4   0.14    28.9   0.14
LDAMP       25.3   0.26    27.4   0.26    28.9   0.27    30.5   0.26
NLR-CS      21.6  87.82    22.8  87.43    25.1  87.18    26.4  86.87
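The coded diffraction pattern operator used in the testing setup (random phase modulation, FFT, random subsampling) is easy to express as a pair of matched forward/adjoint function handles, each applying in O(n log n) operations. A 1-D NumPy sketch (illustrative; the paper uses a 2-D FFT):

```python
import numpy as np

def make_coded_diffraction_ops(n, m, rng):
    """Return forward A and adjoint AH function handles for a coded
    diffraction pattern: random phase mask, orthonormal FFT, then
    random subsampling."""
    phase = np.exp(2j * np.pi * rng.random(n))       # random unit-modulus phase mask
    keep = rng.choice(n, size=m, replace=False)      # random subsampling pattern
    def A(x):
        return np.fft.fft(phase * x, norm="ortho")[keep]
    def AH(y):
        full = np.zeros(n, dtype=complex)
        full[keep] = y                               # zero-fill the dropped frequencies
        return np.conj(phase) * np.fft.ifft(full, norm="ortho")
    return A, AH
```

Because each factor is unitary or a selection, A and AH satisfy the adjoint relation ⟨Ax, y⟩ = ⟨x, AHy⟩, which is what the forward and backward steps of the recovery algorithms require when a function handle replaces an explicit matrix.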
At high resolutions, the LDAMP reconstructions are incrementally better than those of BM3D-AMP yet computed over 60× faster. (a) Original Image (b) TVAL3 (26.4 dB, 6.85 sec) (c) BM3D-AMP (27.2 dB, 75.04 sec) (d) LDAMP (28.1 dB, 1.22 sec) Figure 7: Reconstructions of the 512 × 512 Boat test image sampled at a rate of m/n = 0.05 using coded diffraction pattern measurements and no measurement noise. LDAMP's reconstructions are noticeably cleaner and far faster than the competing methods. 6 Conclusions In this paper, we have developed, analyzed, and validated a novel neural network architecture that mimics the behavior of the powerful D-AMP signal recovery algorithm. The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, LDAMP outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. LDAMP represents the latest example in a trend towards using training data (and lots of offline computations) to improve the performance of iterative algorithms. The key idea behind this paper is that, rather than training a fairly arbitrary black box to learn to recover signals, we can unroll a conventional iterative algorithm and treat the result as a NN, which produces a network with well-understood behavior, performance guarantees, and predictable shortcomings. It is our hope that this paper highlights the benefits of this approach and encourages future research in this direction. Acknowledgements This work was supported in part by DARPA REVEAL grant HR0011-16-C-0028, DARPA OMNISCIENT grant G001534-7500, ONR grant N00014-15-1-2735, ARO grant W911NF-15-1-0316, ONR grant N00014-17-1-2551, and NSF grant CCF-1527501. In addition, C. Metzler was supported in part by the NSF GRFP. References [1] E. J. Candes, J. Romberg, and T.
Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006. [2] R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Processing Mag., vol. 24, no. 4, pp. 118–121, 2007. [3] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, 2009. [4] I. Daubechies, M. Defrise, and C. D. Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Comm. on Pure and Applied Math., vol. 75, pp. 1412–1457, 2004. [5] D. L. Donoho, A. Maleki, and A. Montanari, “Message passing algorithms for compressed sensing,” Proc. Natl. Acad. Sci., vol. 106, no. 45, pp. 18 914–18 919, 2009. [6] S. Rangan, P. Schniter, and A. Fletcher, “Vector approximate message passing,” arXiv preprint arXiv:1610.03082, 2016. [7] C. Li, W. Yin, and Y. Zhang, “User’s guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,” Rice CAAM Department report, vol. 20, pp. 46–47, 2009. [8] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inform. Theory, vol. 56, no. 4, pp. 1982 –2001, Apr. 2010. [9] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang, “Compressive sensing via nonlocal low-rank regularization,” IEEE Trans. Image Processing, vol. 23, no. 8, pp. 3618–3632, 2014. [10] C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From denoising to compressed sensing,” IEEE Trans. Inform. Theory, vol. 62, no. 9, pp. 5117–5144, 2016. [11] P. Schniter, S. Rangan, and A. Fletcher, “Denoising based vector approximate message passing,” arXiv preprint arXiv:1611.01376, 2016. [12] S. Beygi, S. Jalali, A. Maleki, and U. Mitra, “An efficient algorithm for compression-based compressed sensing,” arXiv preprint arXiv:1704.01992, 2017. [13] A. Mousavi, A. B. Patel, and R. G. 
Baraniuk, “A deep learning approach to structured signal recovery,” Proc. Allerton Conf. Communication, Control, and Computing, pp. 1336–1343, 2015. [14] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 449–458, 2016. [15] A. Mousavi and R. G. Baraniuk, “Learning to invert: Signal recovery via deep convolutional networks,” Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), pp. 2272–2276, 2017. [16] H. Yao, F. Dai, D. Zhang, Y. Ma, S. Zhang, and Y. Zhang, “DR2-net: Deep residual reconstruction network for image compressive sensing,” arXiv preprint arXiv:1702.05743, 2017. [17] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research, vol. 11, pp. 3371–3408, 2010. [18] E. W. Tramel, A. Drémeau, and F. Krzakala, “Approximate message passing with restricted Boltzmann machine priors,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2016, no. 7, p. 073401, 2016. [19] E. W. Tramel, A. Manoel, F. Caltagirone, M. Gabrié, and F. Krzakala, “Inferring sparsity: Compressed sensing using generalized restricted Boltzmann machines,” Proc. IEEE Information Theory Workshop (ITW), pp. 265–269, 2016. [20] E. W. Tramel, M. Gabrié, A. Manoel, F. Caltagirone, and F. Krzakala, “A deterministic and generalized framework for unsupervised learning with restricted Boltzmann machines,” arXiv preprint arXiv:1702.03260, 2017. [21] A. Dave, A. K. Vadathya, and K. Mitra, “Compressive image recovery using recurrent generative model,” arXiv preprint arXiv:1612.04229, 2016. [22] L. Theis and M. Bethge, “Generative image modeling using spatial LSTMs,” Proc. Adv. in Neural Processing Systems (NIPS), pp. 1927–1935, 2015. [23] J.
Rick Chang, C.-L. Li, B. Poczos, B. Vijaya Kumar, and A. C. Sankaranarayanan, “One network to solve them all–Solving linear inverse problems using deep projection models,” Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 5888–5897, 2017. [24] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. Int. Conf. Machine Learning, pp. 399–406, 2010. [25] U. S. Kamilov and H. Mansour, “Learning optimal nonlinearities for iterative thresholding algorithms,” IEEE Signal Process. Lett., vol. 23, no. 5, pp. 747–751, 2016. [26] M. Borgerding and P. Schniter, “Onsager-corrected deep networks for sparse linear inverse problems,” arXiv preprint arXiv:1612.01183, 2016. [27] Y. Yang, J. Sun, H. Li, and Z. Xu, “Deep ADMM-net for compressive sensing MRI,” Proc. Adv. in Neural Processing Systems (NIPS), vol. 29, pp. 10–18, 2016. [28] J. R. Hershey, J. L. Roux, and F. Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” arXiv preprint arXiv:1409.2574, 2014. [29] T. B. Yakar, P. Sprechmann, R. Litman, A. M. Bronstein, and G. Sapiro, “Bilevel sparse models for polyphonic music transcription.” ISMIR, pp. 65–70, 2013. [30] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Processing, vol. 16, no. 8, pp. 2080–2095, Aug. 2007. [31] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” Proc. Global Conf. on Signal and Inform. Processing (GlobalSIP), pp. 945–948, 2013. [32] G. Alain and Y. Bengio, “What regularized auto-encoders learn from the data-generating distribution,” J. Machine Learning Research, vol. 15, no. 1, pp. 3563–3593, 2014. [33] C. K. Sønderby, J. Caballero, L. Theis, W. Shi, and F. Huszár, “Amortised map inference for image super-resolution,” Proc. Int. Conf. on Learning Representations (ICLR), 2017. [34] D. J. Thouless, P. W. Anderson, and R. G. 
Palmer, “Solution of ‘Solvable model of a spin glass’,” Philos. Mag., vol. 35, no. 3, pp. 593–601, 1977. [35] M. Mézard and A. Montanari, Information, Physics, Computation: Probabilistic Approaches. Cambridge University Press, 2008. [36] A. Maleki, “Approximate message passing algorithm for compressed sensing,” Stanford University PhD Thesis, Nov. 2010. [37] S. Ramani, T. Blu, and M. Unser, “Monte-Carlo sure: A black-box optimization of regularization parameters for general denoising algorithms,” IEEE Trans. Image Processing, pp. 1540–1554, 2008. [38] H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: Can plain neural networks compete with BM3D?” Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 2392–2399, 2012. [39] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Processing, 2017. [40] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Proc. Adv. in Neural Processing Systems (NIPS), pp. 1097–1105, 2012. [41] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015. [42] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 770–778, 2016. [43] F. J. Śmieja, “Neural network constructive algorithms: Trading generalization for learning efficiency?” Circuits, Systems, and Signal Processing, vol. 12, no. 2, pp. 331–374, 1993. [44] M. Bayati and A. Montanari, “The dynamics of message passing on dense graphs, with applications to compressed sensing,” IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764–785, 2011. [45] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved training of Wasserstein GANs,” arXiv preprint arXiv:1704.00028, 2017. [46] D. Martin, C.
Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” Proc. Int. Conf. Computer Vision, vol. 2, pp. 416–423, July 2001. [47] A. Vedaldi and K. Lenc, “Matconvnet – Convolutional neural networks for MATLAB,” Proc. ACM Int. Conf. on Multimedia, 2015. [48] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014. [49] E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277–299, 2015.
Deep Multi-task Gaussian Processes for Survival Analysis with Competing Risks

Ahmed M. Alaa
Electrical Engineering Department
University of California, Los Angeles
ahmedmalaa@ucla.edu

Mihaela van der Schaar
Department of Engineering Science
University of Oxford
mihaela.vanderschaar@eng.ox.ac.uk

Abstract

Designing optimal treatment plans for patients with comorbidities requires accurate cause-specific mortality prognosis. Motivated by the recent availability of linked electronic health records, we develop a nonparametric Bayesian model for survival analysis with competing risks, which can be used for jointly assessing a patient's risk of multiple (competing) adverse outcomes. The model views a patient's survival times with respect to the competing risks as the outputs of a deep multi-task Gaussian process (DMGP), the inputs to which are the patients' covariates. Unlike parametric survival analysis methods based on Cox and Weibull models, our model uses DMGPs to capture complex non-linear interactions between the patients' covariates and cause-specific survival times, thereby learning flexible patient-specific and cause-specific survival curves, all in a data-driven fashion without explicit parametric assumptions on the hazard rates. We propose a variational inference algorithm that is capable of learning the model parameters from time-to-event data while handling right censoring. Experiments on synthetic and real data show that our model outperforms the state-of-the-art survival models.

1 Introduction

Designing optimal treatment plans for elderly patients or patients with comorbidities is a challenging problem: the nature (and the appropriate level of invasiveness) of the best therapeutic intervention for a patient with a specific clinical risk depends on whether this patient suffers from, or is susceptible to, other "competing risks" [1-3].
For instance, the decision on whether a diabetic patient who also has a renal disease should receive dialysis or a renal transplant must be based on a joint prognosis of diabetes-related complications and end-stage renal failure; overlooking the diabetes-related risks may lead to misguided therapeutic decisions [1]. The same problem arises in nephrology, where a typical patient's competing risks are peritonitis, death, kidney transplantation and transfer to haemodialysis [2]. An even more common encounter with competing risks arises in oncology and cardiovascular medicine, where the risk of a cardiac disease may alter the decision on whether a cancer patient should undergo chemotherapy or a particular type of surgery [3]. Since conventional methods for survival analysis, such as the Kaplan-Meier method and standard Cox proportional hazards regression, are not equipped to handle competing risks, alternate variants of those methods that rely on cumulative incidence estimators have been proposed and used in clinical research [1-7]. According to the most recent data brief by the Office of the National Coordinator (ONC)1, electronic health records (EHRs) are currently deployed in more than 75% of hospitals in the United States [8]. The increasing availability of data in EHRs has stimulated a great deal of research efforts that used machine learning to conduct clinical risk prognosis and survival analysis. In particular, various recent works have proposed novel methods for survival analysis based on Gaussian processes [9], "temporal" logistic regression [10], ranking [11], and deep neural networks [12]. All these works were restricted to the conventional survival analysis problem in which there is only one event of interest rather than a set of competing risks. (A detailed overview of previous works is provided in Section 3.)

1https://www.healthit.gov/sites/default/files/briefs/
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The usage of machine learning to construct data-driven survival models for patients with comorbidities is an important step towards precision medicine [13]. Contribution In the light of the discussion above, we develop a nonparametric Bayesian model for survival analysis with competing risks using deep (multi-task) Gaussian processes (DMGPs) [15]. Our model relies on a novel conception of the competing risks problem as a multi-task learning problem; that is, we model the cause-specific survival times as the outputs of a random vector-valued function [14], the inputs to which are the patients’ covariates. This allows us to learn a "shared representation" of the patients’ survival times with respect to multiple related comorbidities. The proposed model is Bayesian: we assign a prior distribution over a space of vector-valued functions of the patients’ covariates [16], and update the posterior distribution given a (potentially right-censored) time-to-event dataset. This process gives rise to patient-specific multivariate survival distributions, from which a patient-specific, cause-specific cumulative incidence function can be easily derived. Such a patient-specific cumulative incidence function serves as actionable information, based upon which clinicians can design personalized treatment plans. Unlike many existing parametric survival models, our model neither assumes a parametric form for the interactions between the covariates and the survival times, nor does it restrict the distribution of the survival times to a parametric model. Thus, it can flexibly describe non-proportional hazard rates with complex interactions between covariates and survival times, which are common in many diseases with heterogeneous phenotypes (such as cardiovascular diseases [2]). 
Inference of the patient-specific posterior survival distribution is conducted via a variational Bayes algorithm; we use inducing variables to derive a variational lower bound on the marginal likelihood of the observed time-to-event data [17], which we maximize using the adaptive moment estimation algorithm [18]. We conduct a set of experiments on synthetic and real data showing that our model outperforms state-of-the-art survival models.

2 Preliminaries

We consider a dataset D comprising survival (time-to-event) data for n subjects who have been followed up for a finite amount of time. Let $D = \{X_i, T_i, k_i\}_{i=1}^n$, where $X_i \in \mathcal{X}$ is a d-dimensional vector of covariates associated with subject i, $T_i \in \mathbb{R}_+$ is the time until an event occurred, and $k_i \in \mathcal{K}$ is the type of event that occurred. The set $\mathcal{K} = \{\emptyset, 1, \ldots, K\}$ is a finite set of K mutually exclusive, competing events that could occur to subject i, where $\emptyset$ corresponds to right-censoring. For simplicity of exposition, we assume that only one event occurs for every patient; this corresponds, for instance, to the case when the events in $\mathcal{K}$ correspond to deaths due to different causes. This assumption does not simplify the problem; in fact, it implies the nonidentifiability of the event times' distribution parameters [6, 7], which makes the problem more challenging. Figure 1 depicts a time-to-event dataset D with patients dying due to either cancer or cardiovascular diseases, or having their endpoints censored. Throughout this paper, we assume independent censoring [1-7], i.e. censoring times are independent of clinical outcomes.

Figure 1: Depiction for the time-to-event data (patients 1–9 either die of cardiovascular disease (k = 1) or cancer (k = 2), or are censored (k = ∅), plotted against time since diagnosis).

Define a multivariate random variable $T = (T^1, \ldots, T^K)$, where $T^k$, $k \in \mathcal{K}$, denotes the net survival time with respect to event k, i.e.
the survival time of the subject given that only event k can occur. We assume that T is drawn from a conditional density function that depends on the subject's covariates. For every subject i, we only observe the occurrence time for the earliest event, i.e. $T_i = \min(T_i^1, \ldots, T_i^K)$ and $k_i = \arg\min_j T_i^j$. The cause-specific hazard function $\lambda_k(t, X)$ represents the instantaneous risk of event k, and is formally defined as $\lambda_k(t, X) = \lim_{dt \to 0} \frac{1}{dt} P(t \le T^k < t + dt, k \mid T^k \ge t, X)$ [6]. By the law of total probability, the overall hazard function is given by $\lambda(t, X) = \sum_{k \in \mathcal{K}} \lambda_k(t, X)$. This leads to the notion of a survival function $S(t, X) = \exp(-\int_0^t \lambda(u, X)\, du)$, which captures the probability of a subject surviving all types of risk events up to time t. The Cumulative Incidence Function (CIF), also known as the subdistribution function [2-7], is the probability of occurrence of a particular event $k \in \mathcal{K}$ by time t, and is given by $F_k(t, X) = \int_0^t \lambda_k(u, X)\, S(u, X)\, du$. Our main goal is to estimate the CIF using the dataset D; through these estimates, treatment plans can be set up for patients who suffer from comorbidities or are at risk of different types of diseases.

3 Survival Analysis using Deep Multi-task Gaussian Processes

We conduct patient-specific survival analysis by directly modeling the event times T as a function of the patients' covariates through the generative probabilistic model described hereunder.

Deep Multi-task Gaussian Processes (DMGPs) We assume that the net survival times for a patient with covariates X are generated via a (nonparametric) multi-output random function g(.), i.e. T = g(X), and we use Gaussian processes to model g(.). A simple model of the form $g(X) = f(X) + \epsilon$, with f(.) being a Gaussian process and $\epsilon$ a Gaussian noise, would constrain T to have a symmetric Gaussian distribution with a restricted parametric form conditional on X [Sec. 2, 19].
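The survival and incidence quantities defined in Section 2 translate directly into a numerical recipe: on a time grid, the overall survival function follows from the summed cause-specific hazards, and each CIF from quadrature of $\lambda_k \cdot S$. A minimal sketch; the two hazard shapes below are made-up illustrations, not quantities from the paper:

```python
import numpy as np

t = np.linspace(0, 10, 1000)
dt = t[1] - t[0]

# Two illustrative cause-specific hazards for a fixed covariate vector X.
lam = np.stack([0.05 * np.ones_like(t),     # constant hazard, cause 1
                0.01 * t])                  # increasing hazard, cause 2

total = lam.sum(axis=0)                     # lambda(t, X) = sum_k lambda_k(t, X)
S = np.exp(-np.cumsum(total) * dt)          # S(t, X) = exp(-int_0^t lambda du)

# F_k(t, X) = int_0^t lambda_k(u, X) S(u, X) du  (Riemann-sum quadrature)
F = np.cumsum(lam * S, axis=1) * dt

# Sanity check: the CIFs plus the survival function carry all probability mass.
assert np.allclose(F.sum(axis=0) + S, 1.0, atol=1e-2)
```

The closing assertion mirrors the identity $\sum_k F_k(t, X) = 1 - S(t, X)$, which holds because $dS/dt = -\lambda S$.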
This may not be a realistic construct for many settings in which the survival times display an asymmetric distribution (e.g. cancer survival times [2]). To that end, we model g(.) as a Deep multi-task Gaussian Process (DMGP) [15]: a multi-layer cascade of vector-valued Gaussian processes that confers a greater representational power and produces outputs that are generally non-Gaussian. In particular, we assume that the net survival times T are generated via a DMGP with two layers as follows:

$T = f_T(Z) + \epsilon_T, \quad \epsilon_T \sim \mathcal{N}(0, \sigma_T^2 I),$
$Z = f_Z(X) + \epsilon_Z, \quad \epsilon_Z \sim \mathcal{N}(0, \sigma_Z^2 I),$ (1)

where $\sigma_T$ and $\sigma_Z$ are the noise variances at the two layers, $f_T(.)$ and $f_Z(.)$ are two Gaussian processes with hyperparameters $\Theta_T$ and $\Theta_Z$ respectively, and Z is a hidden variable that the first layer passes to the second. Based on (1), we have that $g(X) = f_T(f_Z(X) + \epsilon_Z) + \epsilon_T$. The model in (1) resembles a neural network with two layers and an infinite number of hidden nodes in each layer, but with an output that can be described probabilistically in terms of a distribution. We assume that $f_T(.)$ has K outputs, whereas $f_Z(.)$ has Q outputs. The use of a Gaussian process with two layers allows us to jointly represent complex survival distributions and complex interactions with the covariates in a data-driven fashion, without the need to assume a predefined non-linear transformation on the output space as is the case in warped Gaussian processes [19-20]. A dataset D comprising n i.i.d. instances can be sampled from our model as follows:

$f_Z \sim \mathcal{GP}(0, K_{\Theta_Z}), \quad f_T \sim \mathcal{GP}(0, K_{\Theta_T}),$
$Z_i \sim \mathcal{N}(f_Z(X_i), \sigma_Z^2 I), \quad T_i \sim \mathcal{N}(f_T(Z_i), \sigma_T^2 I),$
$T_i = \min(T_i^1, \ldots, T_i^K), \quad i \in \{1, \ldots, n\},$

where $K_\Theta$ is the Gaussian process kernel with hyperparameters $\Theta$.

Figure 2: Graphical depiction for the probabilistic model (covariates X feed the first layer $f_Z$ with hyperparameters $\Theta_Z$, producing the hidden variable Z; the second layer $f_T$ with hyperparameters $\Theta_T$ produces the competing event times $T^1, \ldots, T^K$, which determine the survival time T).
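Forward sampling from the two-layer cascade in (1) can be sketched as follows. For brevity, this sketch assumes independent output dimensions with a shared RBF kernel rather than the coregionalized kernels of the next subsection, and it does not constrain the sampled "times" to be positive; it only illustrates the generative mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def sample_layer(inputs, n_out, noise, rng):
    """Draw n_out independent GP function values at `inputs`, plus Gaussian noise."""
    K = rbf(inputs, inputs)
    w, V = np.linalg.eigh(K)                     # robust PSD square root
    L = V * np.sqrt(np.clip(w, 0.0, None))
    f = L @ rng.standard_normal((len(inputs), n_out))
    return f + noise * rng.standard_normal((len(inputs), n_out))

n, d, Q, K_risks = 200, 10, 3, 2
X = rng.standard_normal((n, d))
Z = sample_layer(X, Q, noise=0.1, rng=rng)           # Z = f_Z(X) + eps_Z
T_all = sample_layer(Z, K_risks, noise=0.1, rng=rng)  # T = f_T(Z) + eps_T

T_obs = T_all.min(axis=1)       # observed time: the earliest competing event
k_obs = T_all.argmin(axis=1)    # observed cause
```

Although each layer's output is Gaussian given its input, marginally `T_all` is a non-Gaussian function of `X`, which is the extra flexibility the second layer buys.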
Figure 2 provides a graphical depiction for our model (observable variables are in double-circled nodes); the patient's covariates are the parent node; the survival time is the leaf node.

Survival Analysis as a Multi-task Learning Problem As can be seen in (1), the cause-specific net survival times are viewed as the outputs of a vector-valued function g(.). This casts the competing risks problem in a multi-task learning framework that allows finding a shared representation for the subjects' survival behavior with respect to multiple correlated comorbidities, such as renal failure, diabetes and cardiac diseases [1-3]. Such a shared representation is captured via the kernel functions for the two DMGP layers (i.e. $K_{\Theta_Z}$ and $K_{\Theta_T}$). For both layers, we assume that the kernels follow an intrinsic coregionalization model [14, 16], i.e.

$K_{\Theta_Z}(x, x') = A_Z\, k_Z(x, x'), \quad K_{\Theta_T}(x, x') = A_T\, k_T(x, x'),$ (2)

where $A_Z \in \mathbb{R}_+^{Q \times Q}$ and $A_T \in \mathbb{R}_+^{K \times K}$ are positive semi-definite matrices, and $k_Z(x, x')$ and $k_T(x, x')$ are radial basis functions with automatic relevance determination, i.e.

$k_Z(x, x') = \exp\left(-\tfrac{1}{2}(x - x')^T R_Z^{-1} (x - x')\right), \quad R_Z = \mathrm{diag}(\ell_{1,Z}^2, \ell_{2,Z}^2, \ldots, \ell_{d,Z}^2),$

with $\ell_{j,Z}$ being the length-scale parameter of the jth feature ($k_T(x, x')$ can be defined similarly). Note that unlike regular Gaussian processes, DMGPs are less sensitive to the selection of the parametric form of the kernel functions [15]. This is because the output of the first layer undergoes a transformation through a learned nonparametric function $f_Z(.)$, and hence the "overall smoothness" of the function g(X) is governed by an "equivalent data-driven kernel" function describing the transformation $f_T(f_Z(.))$. Our model adopts a Bayesian approach to multi-task learning: it posits a prior distribution on the multi-output function g(X), and then conducts the survival analysis by updating the posterior distribution of the event times $P(g(X) \mid D, \Theta_Z, \Theta_T)$ given the evidential data in the time-to-event dataset D.
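Concretely, the intrinsic coregionalization construction in (2) yields a joint Gram matrix over (input, task) pairs that is a Kronecker product of the task-covariance matrix and the scalar ARD kernel's Gram matrix. A minimal sketch, with illustrative (not fitted) hyperparameter values:

```python
import numpy as np

def ard_rbf(X1, X2, lengthscales):
    """k(x, x') = exp(-0.5 (x - x')^T R^{-1} (x - x')), R = diag(l_1^2, ..., l_d^2)."""
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * (diff ** 2).sum(-1))

def icm_kernel(A_task, X1, X2, lengthscales):
    """K((x, i), (x', j)) = A_task[i, j] * k(x, x'): a Kronecker product."""
    return np.kron(A_task, ard_rbf(X1, X2, lengthscales))

rng = np.random.default_rng(2)
d, K_risks = 10, 2
X = rng.standard_normal((50, d))

B = rng.standard_normal((K_risks, K_risks))
A_task = B @ B.T                 # positive semi-definite task covariance A_T
ls = np.ones(d)                  # ARD lengthscales l_1, ..., l_d

K = icm_kernel(A_task, X, X, ls)  # (K_risks * 50) x (K_risks * 50) Gram matrix
assert K.shape == (100, 100)
assert np.all(np.linalg.eigvalsh(K) > -1e-8)   # PSD up to numerical error
```

The Kronecker structure is what couples the tasks: a single scalar similarity k(x, x') between two patients is shared across causes, weighted by the learned task covariance.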
The distribution $P(g(X) \mid D, \Theta_Z, \Theta_T)$ does not commit to any predefined parametric form since it depends on a random variable transformation through a nonparametric function g(.). In Section 4, we propose an inference algorithm for computing the posterior distribution $P(T \mid D, X^*, \Theta_Z, \Theta_T)$ for a given out-of-sample subject with covariates $X^*$. Once $P(T \mid X^*, D)$ is computed, we can directly derive the CIF $F_k(t, X^*)$ for all events $k \in \mathcal{K}$ as explained in Section 2. A pictorial visualization of the survival analysis procedure assuming 2 competing risks is provided in Fig. 3.

Figure 3: Pictorial depiction for survival analysis with 2 competing risks using deep multi-task Gaussian processes. The posterior distribution of T given D is displayed in the top left panel, and the corresponding cumulative incidence functions for a particular patient with covariates $X^*$ are displayed in the bottom left panel. The posterior distributions on the two DMGP layers conditional on their inputs are depicted on the right panels.

Related Works Standard survival modeling in the statistical and medical research literature is largely based on either the nonparametric Kaplan-Meier estimator [21] or the (parametric) Cox proportional hazards model [22]. The former is capable of learning flexible (and potentially non-proportional) survival curves but fails to incorporate patients' covariates, whereas the latter is capable of incorporating covariates but is restricted to rigid parametric assumptions that impose proportional hazard curves. These limitations seem to have been inherited by various recently developed Bayesian nonparametric survival models.
For instance, [24] develops a Bayesian survival model based on a Dirichlet prior, and [23] develops a model based on Gaussian latent fields and proposes an inference algorithm that utilizes nested Laplace approximations; however, neither model incorporates the individual patient's covariates, and hence both are restricted to estimating population-level survival curves, which cannot inform personalized treatment plans. In contrast, our model does not suffer from any such limitations since it learns patient-specific, nonparametric survival curves by adopting a Bayesian prior over a function space that takes the patients' covariates as an input. A lot of interest has recently been devoted to the problem of survival analysis by the machine learning community. Recently developed survival models include random survival forests [26], deep exponential families [12], dependent logistic regressors [10], ranking algorithms [11], and semiparametric Bayesian models based on Gaussian processes [9]. All of these methods are capable of incorporating the individual patient's covariates, but none of them has considered the problem of competing risks. The problem of survival analysis with competing risks has been addressed only through two classical parametric models: (1) the Fine-Gray model, which modifies the traditional proportional hazard model by direct transformation of the CIF [4], and (2) the threshold regression (multi-state) models, which directly model net survival times as the first hitting times of a stochastic process (e.g. a Wiener process) [25]. Unlike our model, both models are limited by strong parametric assumptions on both the hazard rates and the nature of the interactions between the patient covariates and the survival curves. These limitations have been slightly alleviated in [19], which uses a Gaussian process to model the interactions between survival times and covariates.
However, this model assumes a Gaussian distribution as a basis for an accelerated failure time model, which is unrealistic (since the distribution of survival times is often asymmetric) and also hinders the nonparametric modeling of survival curves. The model in [19] can be ameliorated via a warped Gaussian process that first transforms the survival times through a deterministic, monotonic nonlinear function, and then applies Gaussian process regression on the transformed survival times [20], which would lead to more degrees of freedom in modeling the survival curves. Our model can be thought of as a generalization of a warped Gaussian process in which the deterministic non-linear transformation is replaced with another data-driven Gaussian process, which enables flexible nonparametric modeling of the survival curves. In Section 5, we demonstrate the superiority of our model via experiments on synthetic and real datasets.

4 Inference

As discussed in Section 3, conducting survival analysis requires computing the posterior probability density $dP(T^* \mid D, X^*, \Theta_Z, \Theta_T)$ for a given out-of-sample point $X^*$ with $T^* = g(X^*)$. We follow an empirical Bayes approach for updating the posterior on g(.). That is, we first tune the hyperparameters $\Theta_Z$ and $\Theta_T$ using the offline dataset D, and then, for any out-of-sample patient with covariates $X^*$, we evaluate $dP(T^* \mid D, X^*, \Theta_Z, \Theta_T)$ by direct Monte Carlo sampling. We calibrate the hyperparameters by maximizing the marginal likelihood $dP(D \mid \Theta_Z, \Theta_T)$. Note that for every subject i in D, we observe a "label" of the form $(T_i, k_i)$, indicating the type of event that occurred to the subject along with the time of its occurrence. Since $T_i$ is the smallest element in T, the label $(T_i, k_i)$ is informative of all the events (i.e. all the learning tasks) in $\mathcal{K} \setminus \{k_i\}$: we know that $T_i^j \ge T_i$, $\forall j \in \mathcal{K} \setminus \{k_i\}$. We also note that the subject's data may be right-censored, i.e. $k_i = \emptyset$, which implies that $T_i^j \ge T_i$, $\forall j \in \mathcal{K}$.
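The bookkeeping of this censoring logic is simple to illustrate: the observed cause contributes a density term, and every other cause (or, under censoring, every cause) contributes a tail probability $P(T^j \ge T_i)$. The sketch below makes the simplifying assumption of independent Gaussian marginals per cause, purely for illustration (the model's actual marginals are non-Gaussian and correlated across tasks):

```python
import math

def norm_logpdf(x, mu, s):
    """log N(x; mu, s^2)."""
    return -0.5 * math.log(2 * math.pi * s * s) - 0.5 * ((x - mu) / s) ** 2

def norm_logsf(x, mu, s):
    """log P(T >= x) for T ~ N(mu, s^2)."""
    return math.log(0.5 * math.erfc((x - mu) / (s * math.sqrt(2))))

def censored_loglik(t_obs, k_obs, mu, sigma):
    """
    Log-likelihood of one subject's label (t_obs, k_obs) under independent
    Gaussian marginals per cause. k_obs = None encodes right-censoring, in
    which case every cause contributes the tail term T^j >= t_obs.
    """
    ll = 0.0
    for j in range(len(mu)):
        if j == k_obs:
            ll += norm_logpdf(t_obs, mu[j], sigma[j])   # observed: T^k = t_obs
        else:
            ll += norm_logsf(t_obs, mu[j], sigma[j])    # unobserved: T^j >= t_obs
    return ll

mu, sigma = [5.0, 7.0], [1.0, 2.0]
ll_event = censored_loglik(4.0, 0, mu, sigma)    # cause-1 event observed at t = 4
ll_cens = censored_loglik(4.0, None, mu, sigma)  # right-censored at t = 4
```

Summing such per-subject terms over i gives the likelihood that the variational bound of the next paragraph lower-bounds.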
Hence, the likelihood of the survival information in D is $dP(\{X_i, T_i, k_i\}_{i=1}^n \mid \Theta_Z, \Theta_T) \propto dP(\{\mathcal{T}_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n, \Theta_Z, \Theta_T)$, where $\mathcal{T}_i$ is a set of events given by

$\mathcal{T}_i = \begin{cases} \{T_i^{k_i} = T_i, \{T_i^j \ge T_i\}_{j \in \mathcal{K} \setminus \{k_i\}}\}, & k_i \ne \emptyset, \\ \{T_i^j \ge T_i\}_{j \in \mathcal{K}}, & k_i = \emptyset. \end{cases}$ (3)

We can write the marginal likelihood in (3) as the conditional density by marginalizing over the conditional distribution of the hidden variable $Z_i$ as follows:

$dP(\{\mathcal{T}_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n, \Theta_Z, \Theta_T) = \int dP(\{\mathcal{T}_i\}_{i=1}^n \mid \{Z_i\}_{i=1}^n, \Theta_T)\, dP(\{Z_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n, \Theta_Z).$ (4)

Since the integral in (4) is intractable, we follow the variational inference scheme proposed in [15], where we tune the hyperparameters by maximizing the following variational bound on (4):

$\mathcal{F} = \int_{Z, f_Z, f_T} Q \cdot \log\left(\frac{dP(\{\mathcal{T}_i\}_{i=1}^n, \{Z_i\}_{i=1}^n, \{f_Z(X_i)\}_{i=1}^n, \{f_T(Z_i)\}_{i=1}^n \mid \{X_i\}_{i=1}^n, \Theta_Z, \Theta_T)}{Q}\right),$

where Q is a variational distribution, and $\mathcal{F} \le \log(dP(\{\mathcal{T}_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n, \Theta_Z, \Theta_T))$. Since the event $\mathcal{T}_i$ happens with a probability that can be written in terms of a Gaussian density conditional on $f_Z$ and $f_T$, we can obtain a tractable version of the variational bound $\mathcal{F}$ by introducing a set of M pseudo-inputs to the two layers of the DMGP, with corresponding function values $U_Z$ and $U_T$ at the first and second layers [15, 17], and setting the variational distribution to $Q = P(f_T(Z_i) \mid U_T, Z_i)\, q(U_T)\, q(Z_i)\, P(f_Z(X_i) \mid U_Z, X_i)\, q(U_Z)$, where $q(Z_i)$ is a Gaussian distribution, whereas $q(U_T)$ and $q(U_Z)$ are free-form variational distributions. Given these settings, the variational lower bound can be written as [Eq. 13, 15]

$\mathcal{F} = \mathbb{E}\left[\log(dP(\{\mathcal{T}_i\}_{i=1}^n \mid \{f_T(Z_i)\}_{i=1}^n)) + \log\frac{dP(U_T)}{q(U_T)}\right] + \mathbb{E}\left[\log(dP(\{Z_i\}_{i=1}^n \mid \{f_Z(X_i)\}_{i=1}^n)) + \log\frac{dP(U_Z)}{q(U_Z)}\right],$ (5)

where the first expectation is taken with respect to $P(f_T(Z_i) \mid U_T, Z_i)\, q(U_T)\, q(Z_i)$ whereas the second is taken with respect to $P(f_Z(X_i) \mid U_Z, X_i)\, q(U_Z)$. Since all the densities involved in (5) are Gaussian, $\mathcal{F}$ is tractable and can be written in closed form.
We use the adaptive moment estimation (ADAM) algorithm to optimize $\mathcal{F}$ with respect to $\Theta_T$ and $\Theta_Z$ [18].

5 Experiments

In this section, we validate our model by conducting a set of experiments on both a synthetic survival model and a real-world time-to-event dataset. In all experiments, we use the cause-specific concordance index (C-index), recently proposed in [27], as a performance metric. The cause-specific C-index quantifies how well a model ranks the subjects' survival times with respect to a particular cause/event based on their covariates: a higher C-index indicates a better performance. Formally, we define the (time-dependent) C-index for a cause $k \in \mathcal{K}$ as follows [Sec. 2.3, 27]:

$C_k(t) := P(F_k(t, X_i) > F_k(t, X_j) \mid \{k_i = k\} \wedge \{T_i \le t\} \wedge \{T_i < T_j \vee k_j \ne k\}),$ (6)

where we have used the CIF $F_k(t, X)$ as a natural choice for the prognostic score in [Eq. (2.3), 27]. The C-index defined in (6) corresponds to the probability that, for a time horizon t, a particular survival analysis method assigns CIF functions for subjects i and j that satisfy $F_k(t, X_i) > F_k(t, X_j)$, given that $k_i = k$, $T_i < T_j$, and that subject i was not right-censored by time t. A high C-index for cause k is achieved if the cause-specific CIF functions for a group of subjects who encounter event k are likely to be "ordered" in accordance with the ordering of their realized survival times. In all experiments, we estimate the C-index for the survival analysis methods under consideration using the function cindex of the R-package pec2 [Sec. 3, 27]. We run the algorithm in Section 4 with Q = 3 outputs for the first layer of the DMGP, and we use the default settings prescribed in [18] for the ADAM algorithm.
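A plain empirical counterpart of the probability in (6) counts correctly ordered comparable pairs. The sketch below is a simplified estimator (it omits the censoring adjustments that the pec implementation handles); `cif_scores[i]` stands for any model's value of $F_k(t, X_i)$:

```python
import numpy as np

def cause_c_index(t_horizon, k, times, events, cif_scores):
    """
    Empirical C_k(t): among comparable pairs (i experienced event k by t, and
    either outlived j or j had a different event), the fraction whose CIF
    scores are correctly ordered, i.e. cif_scores[i] > cif_scores[j].
    """
    conc, comp = 0, 0
    n = len(times)
    for i in range(n):
        if events[i] != k or times[i] > t_horizon:
            continue                       # i must experience event k by t
        for j in range(n):
            if i == j:
                continue
            if times[i] < times[j] or events[j] != k:
                comp += 1                  # comparable pair
                conc += cif_scores[i] > cif_scores[j]
    return conc / comp if comp else float("nan")

# Toy check: risk scores perfectly ordered by survival time give C_k(t) = 1.
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 1])
scores = np.array([0.9, 0.7, 0.5, 0.3])    # higher predicted risk, earlier death
assert cause_c_index(5.0, 1, times, events, scores) == 1.0
```

The quadratic pair loop is fine at these cohort sizes (hundreds of test subjects); production implementations such as pec's cindex are more careful about ties and censoring weights.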
We compare our model with four benchmarks: the Fine-Gray proportional subdistribution hazards model (FG) [4, 28], the accelerated failure time model using multi-task Gaussian processes (MGP) [19], the cause-specific Cox proportional hazards model (Cox) [27, 28], and the threshold-regression (multi-state) first-time hitting model with a multidimensional Wiener process (THR) [25]. The MGP benchmark is a special case of our model with 1 layer and a deterministic linear transformation of the survival times to Gaussian process outputs [Sec. 3, 19]. We run the FG and Cox benchmarks using the R libraries cmprsk and survival, whereas for the THR benchmark, we use the R-package threg3.

2https://cran.r-project.org/web/packages/pec/index.html
3https://cran.r-project.org/web/packages/threg/index.html

5.1 Synthetic Data

The goal of this section is to demonstrate the ability of our model to cope with highly heterogeneous patient cohorts; we demonstrate this by running experiments on two synthetic models with different types of interactions between survival times and covariates.

Model A:
  $X_i \sim \mathcal{N}(0, I)$
  $T_i^1 \sim \exp(\gamma_1^T X_i)$
  $T_i^2 \sim \exp(\gamma_2^T X_i)$
  $T_i = \min\{T_i^1, T_i^2\}$
  $k_i = \arg\min_{k \in \{1,2\}} T_i^k, \quad i \in \{1, \ldots, n\}$

Model B:
  $X_i \sim \mathcal{N}(0, I)$
  $T_i^1 \sim \exp(\cosh(\gamma_1^T X_i))$
  $T_i^2 \sim \exp(|\mathcal{N}(0, 1) + \sinh(\gamma_2^T X_i)|)$
  $T_i = \min\{T_i^1, T_i^2\}$
  $k_i = \arg\min_{k \in \{1,2\}} T_i^k, \quad i \in \{1, \ldots, n\}$

In particular, we run experiments using the synthetic survival models A and B described above; the two models correspond to two patient cohorts that differ in terms of patients' heterogeneity. In model A, we assume that survival times are exponentially distributed with a mean parameter that comprises a simple linear function of the covariates, whereas in model B, we assume that the survival distributions are not necessarily exponential, and that their parameters depend on the covariates in a nonlinear fashion through the sinh and cosh functions. Both models have two competing risks, i.e.
K = {∅, 1, 2}, and for both models we assume that each patient has d = 10 covariates drawn from a standard normal distribution. The parameters γ_1 and γ_2 are 10-dimensional vectors, the elements of which are drawn independently from a uniform distribution. Given a draw of γ_1 and γ_2, a dataset D with n subjects can be sampled using the models described above. We run 10,000 repeated experiments using each model, where in each experiment we draw a new γ_1, γ_2, and a dataset D with 1000 subjects; we divide D into 500 subjects for training and 500 subjects for out-of-sample testing. We compute the CIF function for the testing subjects via the different benchmarks, and based on those functions we evaluate the cause-specific C-index at the time horizons {2.5, 5, 7.5, 10}. We average the C-indexes achieved by each benchmark over the repeated experiments and report the mean value and the 95% confidence interval at each time horizon. In all experiments, we induce right-censoring on 100 subjects picked at random from D; for a subject i, right-censoring is induced by altering her survival time as follows: T_i ← uniform(0, T_i).

Figure 4: Results for model A (cause-specific C-index C_1(t) versus the time horizon t for DMGP, MGP, THR, Cox, and FG).
Figure 5: Results for model B (C_1(t)).
Figure 6: Results for model B (C_2(t)).

Fig. 4, 5, and 6 depict the cause-specific C-indexes for all the survival methods under consideration when applied to the data generated by models A and B (error bars correspond to the 95% confidence intervals). As we can see, the DMGP model outperforms all other benchmarks for survival data generated by both models. For model A, we only depict C_1(t) in Fig.
4 since the results on C2(t) are almost identical due to the symmetry of model A with respect to the two competing risks. Fig. 4 shows that, for all time horizons, the DMGP model already confers a gain in the C-index even when the data is generated by model A, which displays simple linear interactions between the covariates and the parameters of the survival time distribution. Fig. 5 and 6 show that the performance gains achieved by the DMGP are even larger under model B (for both C1(t) and C2(t)). This is because model B displays a highly nonlinear relationship between covariates and survival times, and in addition, it assumes a complicated form for the distributions of the survival times, all of which are features that can be captured well by a DMGP but not by the other benchmarks which posit strict parametric assumptions. The superiority of DMGPs to MGPs shows the value of the extra representational power attained by adding multiple layers to conventional MGPs. 5.2 Real Data More than 30 million patients in the U.S. are diagnosed with either cardiovascular disease (CVD) or cancer [1, 2, 29]. Mounting evidence suggests that CVD and cancer share a number of risk factors, and possess various biological similarities and (possible) interactions; in addition, many of the existing cancer therapies increase a patient’s risk for CVD [2, 29]. Therefore, it is important that patients who are at risk of both cancer and CVD be provided with a joint prognosis of mortality due to the two competing diseases in order to properly manage therapeutic interventions. This is a challenging problem since CVD patient cohorts are very heterogeneous; CVD exhibits complex phenotypes for which mortality rates can vary as much as 10-fold among patients in the same phenotype [1, 2]. The goal of this Section is to investigate the ability of our model to accurately model survival of patients in such a highly heterogeneous cohort, with CVD and cancer as competing risks. 
We conducted experiments on a real-world patient cohort extracted from a publicly accessible dataset provided by the Surveillance, Epidemiology, and End Results (SEER) Program (https://seer.cancer.gov/causespecific/). The extracted cohort contains data on the survival of breast cancer patients over the years 1992-2007. The total number of subjects in the cohort is 61,050, with a follow-up period restricted to 10 years. The mortality rate of the subjects within the 10-year follow-up period is 25.56%. We divided the mortality causes into: (1) death due to breast cancer (13.64%), (2) death due to CVD (4.62%), and (3) death due to other causes (7.3%), i.e. K = {∅, 1, 2, 3}. Every subject is associated with 20 covariates, including: age, race, gender, morphology information (lymphoma subtype, histological type, etc.), diagnostic confirmation, therapy information (surgery, type of surgery, etc.), and tumor size and type. We divide the dataset into training and testing sets, and report the C-index results obtained for all benchmarks via 10-fold cross-validation.

Figure 7: Boxplots of the cause-specific C-indexes of the various methods. The x-axis contains the methods' names (DMGP, MGP, FG, Cox, THR), and for each method, 3 boxplots are shown, corresponding to the C-indexes for breast cancer, CVD, and other causes.

Fig. 7 depicts boxplots for the 10-year survival C-indexes (i.e. C_1(10), C_2(10) and C_3(10)) of all benchmarks for the 3 competing risks. With respect to predicting survival times due to "other causes", the gain provided by DMGPs is marginal. We believe that this is due to the absence from the SEER dataset of covariates that are predictive of mortality due to causes other than breast cancer and CVD. The median C-index of our model is larger than that of all other benchmarks for all causes.
In terms of the median C-index, our model provides a significant improvement in predicting breast cancer survival times while maintaining a decent gain in the accuracy of predicting survival times of CVD as well. This implies that DMGPs, by virtue of our nonparametric multi-task learning formulation, are capable of accurately (and flexibly) capturing the "shared representation" of the two "correlated" risks of breast cancer and CVD as a function of their shared risk factors (hypertension, obesity, diabetes mellitus, age, etc). As expected, since CVD is a phenotype-rich disease, predictions of breast cancer survival are more accurate than those for CVD for all benchmarks. The competing multi-task modeling benchmark, MGP, is inferior to our model as it restricts the survival times to an exponential-like parametric distribution (See [Eq. 13, 19]). Contrarily, our model allows for a nonparametric model of the survival curves, which appears to be crucial for modeling breast cancer survival. This is evident in the boxplots of the cause-specific Cox benchmark, which is the only benchmark that performs better on CVD than breast cancer. Since the Cox model is restricted to a proportional hazard model with parametric, non-crossing survival curves, its poor performance on predicting breast cancer survival suggests that breast cancer patients have crossing survival curves, which signals the need for a nonparametric survival model [9]. This explains the gain achieved by DMGPs as compared to MGPs (and all other benchmarks), which posit strong parametric assumptions on the patients’ survival curves. 6 Discussion The problem of survival analysis with competing risks has recently gained significant attention in the medical community due to the realization that many chronic diseases possess a shared biology. We have proposed a survival model for competing risks that hinges on a novel multi-task learning conception of cause-specific survival analysis. 
Our model is liberated from the traditional parametric restrictions imposed by previous models; it allows for nonparametric learning of patient-specific survival curves and their interactions with the patients' covariates. This is achieved by modeling the patients' cause-specific survival times as a function of the patients' covariates using deep multi-task Gaussian processes. Through the personalized actionable prognoses offered by our model, clinicians can design personalized treatment plans that (hopefully) save thousands of lives annually.

References

[1] H. J. Lim, X. Zhang, R. Dyck, and N. Osgood. Methods of Competing Risks Analysis of End-stage Renal Disease and Mortality among People with Diabetes. BMC Medical Research Methodology, 10(1): 97, 2010.
[2] P. C. Lambert, P. W. Dickman, C. P. Nelson, and P. Royston. Estimating the Crude Probability of Death due to Cancer and other Causes using Relative Survival Models. Statistics in Medicine, 29(7): 885-895, 2010.
[3] J. M. Satagopan, L. Ben-Porat, M. Berwick, M. Robson, D. Kutler, and A. Auerbach. A Note on Competing Risks in Survival Data Analysis. British Journal of Cancer, 91(7): 1229-1235, 2004.
[4] J. P. Fine and R. J. Gray. A Proportional Hazards Model for the Subdistribution of a Competing Risk. Journal of the American Statistical Association, 94(446): 496-509, 1999.
[5] M. J. Crowder. Classical Competing Risks. CRC Press, 2001.
[6] T. A. Gooley, W. Leisenring, J. Crowley, and B. E. Storer. Estimation of Failure Probabilities in the Presence of Competing Risks: New Representations of Old Estimators. Statistics in Medicine, 18(6): 695-706, 1999.
[7] A. Tsiatis. A Non-identifiability Aspect of the Problem of Competing Risks. PNAS, 72(1): 20-22, 1975.
[8] J. Henry, Y. Pylypchuk, T. Searcy, and V. Patel. Adoption of Electronic Health Record Systems among US Non-federal Acute Care Hospitals: 2008-2015. The Office of the National Coordinator, 2016.
[9] T. Fernández, N. Rivera, and Y. W. Teh.
Gaussian Processes for Survival Analysis. In NIPS, 2016.
[10] C. N. Yu, R. Greiner, H. C. Lin, and V. Baracos. Learning Patient-specific Cancer Survival Distributions as a Sequence of Dependent Regressors. In NIPS, 1845-1853, 2011.
[11] H. Steck, B. Krishnapuram, C. Dehing-Oberije, P. Lambin, and V. C. Raykar. On Ranking in Survival Analysis: Bounds on the Concordance Index. In NIPS, 1209-1216, 2008.
[12] R. Ranganath, A. Perotte, N. Elhadad, and D. Blei. Deep Survival Analysis. arXiv:1608.02158, 2016.
[13] F. S. Collins and H. Varmus. A New Initiative on Precision Medicine. New England Journal of Medicine, 372(9): 793-795, 2015.
[14] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for Vector-valued Functions: A Review. Foundations and Trends® in Machine Learning, 4(3): 195-266, 2012.
[15] A. Damianou and N. Lawrence. Deep Gaussian Processes. In AISTATS, 2013.
[16] E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian Process Prediction. In NIPS, 2007.
[17] M. K. Titsias and N. D. Lawrence. Bayesian Gaussian Process Latent Variable Model. In AISTATS, 2010.
[18] D. Kingma and J. Ba. ADAM: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
[19] J. E. Barrett and A. C. C. Coolen. Gaussian Process Regression for Survival Data with Competing Risks. arXiv:1312.1591, 2013.
[20] E. Snelson, C. E. Rasmussen, and Z. Ghahramani. Warped Gaussian Processes. In NIPS, 2004.
[21] E. L. Kaplan and P. Meier. Nonparametric Estimation from Incomplete Observations. Journal of the American Statistical Association, 53(282): 457-481, 1958.
[22] D. Cox. Regression Models and Life-tables. Journal of the Royal Statistical Society, Series B, 34(2): 187-220, 1972.
[23] M. De Iorio, W. O. Johnson, P. Müller, and G. L. Rosner. Bayesian Nonparametric Non-proportional Hazards Survival Modeling. Biometrics, 65(3): 762-771, 2009.
[24] S. Martino, R. Akerkar, and H. Rue. Approximate Bayesian Inference for Survival Models. Scandinavian Journal of Statistics, 38(3): 514-528, 2011.
[25] M. L. T. Lee and G. A. Whitmore. Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary. Statistical Science, 501-513, 2006.
[26] H. Ishwaran, U. B. Kogalur, E. H. Blackstone, and M. S. Lauer. Random Survival Forests. The Annals of Applied Statistics, 841-860, 2008.
[27] M. Wolbers, P. Blanche, M. T. Koller, J. C. Witteman, and A. T. Gerds. Concordance for Prognostic Models with Competing Risks. Biostatistics, 15(3): 526-539, 2014.
[28] P. C. Austin, D. S. Lee, and J. P. Fine. Introduction to the Analysis of Survival Data in the Presence of Competing Risks. Circulation, 133(6): 601-609, 2016.
[29] R. Koene, et al. Shared Risk Factors in Cardiovascular Disease and Cancer. Circulation, 2016.
Unsupervised Transformation Learning via Convex Relaxations

Tatsunori B. Hashimoto  John C. Duchi  Percy Liang
Stanford University, Stanford, CA 94305
{thashim,jduchi,pliang}@cs.stanford.edu

Abstract

Our goal is to extract meaningful transformations from raw images, such as varying the thickness of lines in handwriting or the lighting in a portrait. We propose an unsupervised approach to learn such transformations by attempting to reconstruct an image from a linear combination of transformations of its nearest neighbors. On handwritten digits and celebrity portraits, we show that even with linear transformations, our method generates visually high-quality modified images. Moreover, since our method is semiparametric and does not model the data distribution, the learned transformations extrapolate off the training data and can be applied to new types of images.

1 Introduction

Transformations (e.g., rotating or varying the thickness of a handwritten digit) capture important invariances in data, which can be useful for dimensionality reduction [7], improving generative models through data augmentation [2], and removing nuisance variables in discriminative tasks [3]. However, current methods for learning transformations have two limitations. First, they rely on explicit transformation pairs—for example, given pairs of image patches undergoing rotation [12]. Second, improvements in transformation learning have focused on problems with known transformation classes, such as orthogonal or rotational groups [3, 4], while algorithms for general transformations require solving a difficult, nonconvex objective [12]. To tackle the above challenges, we propose a semiparametric approach for unsupervised transformation learning. Specifically, given data points x_1, ..., x_n, we find K linear transformations A_1, ..., A_K such that the vector from each x_i to its nearest neighbor lies near the span of A_1 x_i, ..., A_K x_i.
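To make the reconstruction idea concrete: once candidate matrices A_1, ..., A_K are fixed, the best strengths for a single point reduce to a least-squares solve of the neighbor difference against the directions A_k x. A minimal sketch, in which the matrices, point, and neighbor are synthetic stand-ins rather than quantities learned from data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 3
A = rng.normal(size=(K, d, d))       # candidate transformations A_1, ..., A_K
x = rng.normal(size=d)               # a data point
t_true = rng.normal(size=K)          # strengths used to synthesize the neighbor
xbar = x + np.einsum("k,kij,j->i", t_true, A, x)  # neighbor of x

# Express xbar - x as a linear combination of the directions A_k x.
D = np.stack([Ak @ x for Ak in A], axis=1)        # d-by-K design matrix
t_hat, *_ = np.linalg.lstsq(D, xbar - x, rcond=None)
```

Because the neighbor here is synthesized exactly in the span of the A_k x, the recovered strengths match the generating ones; with real nearest neighbors the fit is only approximate.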
The idea of using nearest neighbors for unsupervised learning has been explored in manifold learning [1, 7], but unlike these approaches and more recent work on representation learning [2, 13], we do not seek to model the full data distribution. Thus, even with relatively few parameters, the transformations we learn naturally extrapolate off the training distribution and can be applied to novel types of points (e.g., new types of images). Our contribution is to express transformation matrices as a sum of rank-one matrices based on samples of the data. This new objective is convex, thus avoiding local minima (which we show to be a problem in practice), scales to real-world problems beyond the 10 × 10 image patches considered in past work, and allows us to derive disentangled transformations through a trace norm penalty. Empirically, we show our method is fast and effective at recovering known disentangled transformations, improving on past baseline methods based on gradient descent and expectation maximization [11]. On the handwritten digits (MNIST) and celebrity faces (CelebA) datasets, our method finds interpretable and disentangled transformations—for handwritten digits, the thickness of lines and the size of loops in digits such as 0 and 9; and for celebrity faces, the degree of a smile.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Problem statement

Given a data point x ∈ R^d (e.g., an image) and a strength scalar t ∈ R, a transformation is a smooth function f : R^d × R → R^d. For example, f(x, t) may be a rotated image. For a collection {f_k}_{k=1}^K of transformations, we consider entangled transformations, defined for a vector of strengths t ∈ R^K by f(x, t) := Σ_{k=1}^K f_k(x, t_k).
We consider the problem of estimating a collection of transformations f* := Σ_{k=1}^K f*_k given random observations as follows: let p_X be a distribution on points x and p_T a distribution on transformation strength vectors t ∈ R^K, where the components t_k are independent under p_T. Then for x̃_i ~ p_X and t_i ~ p_T drawn i.i.d. for i = 1, ..., n, we observe the transformed points x_i = f*(x̃_i, t_i), while x̃_i and t_i are unobserved. Our goal is to estimate the K functions f*_1, ..., f*_K.

2.1 Learning transformations based on matrix Lie groups

In this paper, we consider the subset of generic transformations defined via matrix Lie groups. These are natural as they map R^d → R^d and form a family of invertible transformations that we can parameterize by an exponential map. We begin by giving a simple example (rotation of points in two dimensions) and using this to establish the idea of the exponential map and its linear approximation. We then use these linear approximations for transformation learning. A matrix Lie group is a set of invertible matrices closed under multiplication and inversion. In the example of rotation in two dimensions, the set of all rotations is parameterized by the angle θ, and any rotation by θ has the representation

R_θ = [cos(θ)  −sin(θ);  sin(θ)  cos(θ)].

The set of rotation matrices forms a Lie group, as R_θ R_{−θ} = I and the rotations are closed under composition.

Linear approximation. In our context, the important property of matrix Lie groups is that for transformations near the identity, they have local linear approximations (tangent spaces, the associated Lie algebra), and these local linearizations map back into the Lie group via the exponential map [9]. As a simple example, consider the rotation R_θ, which satisfies R_θ = I + θA + O(θ²), where

A = [0  −1;  1  0],

and R_θ = exp(θA) for all θ (here exp is the matrix exponential).
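The identity R_θ = exp(θA) and the accuracy of the first-order approximation I + θA can be checked numerically. A small sketch, in which the truncated power series mat_exp stands in for a library matrix exponential:

```python
import numpy as np

# Generator of 2-D rotations; R_theta = exp(theta * A).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def mat_exp(M, terms=40):
    """Truncated power series exp(M) = I + M + M^2/2! + ... (illustrative)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for m in range(1, terms):
        term = term @ M / m       # accumulates M^m / m!
        out = out + term
    return out

theta = 0.3
R_theta = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
R_exp = mat_exp(theta * A)        # exponential map recovers the rotation
R_lin = np.eye(2) + theta * A     # first-order (Lie algebra) approximation
```

The linear approximation error is O(θ²), so for small θ the tangent-space picture used throughout this section is accurate.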
The infinitesimal structure of Lie groups means that such relationships hold more generally through the exponential map: for any matrix Lie group G ⊂ R^{d×d}, there exists ε > 0 such that for all R ∈ G with ∥R − I∥ ≤ ε, there is an A ∈ R^{d×d} such that R = exp(A) = I + Σ_{m≥1} A^m/m!. In the case that G is a one-dimensional Lie group, we have more: for each R near I, there is a t ∈ R satisfying

R = exp(tA) = I + Σ_{m=1}^∞ t^m A^m / m!.

The matrix tA = log R in the exponential map is the derivative of our transformation (as A ≈ (R − I)/t for R − I small) and is analogous to locally linear neighborhoods in manifold learning [10]. The exponential map states that for transformations close to the identity, a linear approximation is accurate. For any matrix A, we can also generate a collection of associated one-dimensional manifolds as follows: letting x ∈ R^d, the set M_x = {exp(tA)x | t ∈ R} is a manifold containing x. Given two nearby points x_t = exp(tA)x and x_s = exp(sA)x, the local linearity of the exponential map shows that

x_t = exp((t − s)A) x_s = x_s + (t − s) A x_s + O((t − s)²) ≈ x_s + (t − s) A x_s.   (1)

Single transformation learning. The approximation (1) suggests a learning algorithm for finding a transformation from points on a one-dimensional manifold M: given points x_1, ..., x_n sampled from M, pair each point x_i with its nearest neighbor x̄_i. Then we attempt to learn a transformation matrix A satisfying x̄_i ≈ x_i + t_i A x_i for some small t_i for each of these nearest neighbor pairs. As nearest neighbor distances ∥x̄_i − x_i∥ → 0 as n → ∞ [6], the linear approximation (1) eventually holds. For a one-dimensional manifold and transformation, we could then solve the problem

minimize_{ {t_i}, A }  Σ_{i=1}^n ∥t_i A x_i − (x̄_i − x_i)∥².   (2)

If instead of using nearest neighbors, the pairs (x_i, x̄_i) were given directly as supervision, then this objective would be a form of first-order matrix Lie group learning [12].

Sampling and extrapolation.
The learning problem (2) is semiparametric: our goal is to learn a transformation matrix A while treating the density of points x as a nonparametric nuisance variable. By focusing on modeling the differences between nearby (x, x̄) pairs, we avoid having to specify the density of x, which results in two advantages: first, the parametric nature of the model means that the transformations A are defined beyond the support of the training data; and second, by not modeling the full density of x, we can learn the transformation A even when the data comes from highly non-smooth distributions with arbitrary cluster structure.

3 Convex learning of transformations

The problem (2) makes sense only for one-dimensional manifolds without superposition of transformations, so we now extend the ideas (using the exponential map and its linear approximation) to a full matrix Lie group learning problem. We shall derive a natural objective function for this problem and provide a few theoretical results about it.

3.1 Problem setup

As real-world data contains multiple degrees of freedom, we learn several one-dimensional transformations, giving us the following multiple Lie group learning problem:

Definition 3.1. Given data x_1, ..., x_n ∈ R^d with x̄_i ∈ R^d as the nearest neighbor of x_i, the nonconvex transformation learning objective is

minimize_{t ∈ R^{n×K}, A_1, ..., A_K ∈ R^{d×d}}  Σ_{i=1}^n ∥ Σ_{k=1}^K t_{ik} A_k x_i − (x̄_i − x_i) ∥².   (3)

This problem is nonconvex, and prior authors have commented on the difficulty of optimizing similar objectives [11, 14]. To avoid this difficulty, we will construct a convex relaxation. Define a matrix Z ∈ R^{n×d²}, where row Z_i is an unrolling of the transformation that approximately takes any x_i to x̄_i. Then Eq. (3) can be written as

min_{rank(Z)=K}  Σ_{i=1}^n ∥mat(Z_i) x_i − (x̄_i − x_i)∥²,   (4)

where mat : R^{d²} → R^{d×d} is the matricization operator. Note that the rank of Z is at most K, the number of transformations. We then relax the rank constraint to a trace norm penalty:

min_Z  Σ_{i=1}^n ∥mat(Z_i) x_i − (x̄_i − x_i)∥² + λ∥Z∥_*.   (5)

However, the matrix Z ∈ R^{n×d²} is too large to handle for real-world problems. Therefore, we propose approximating the objective function by modeling the transformation matrices as weighted sums of observed transformation pairs. This idea of using sampled pairs is similar to a kernel method: we will show that the true transformation matrices A*_k can be written as a linear combination of rank-one matrices (x̄_i − x_i) x_i^T.¹ As intuition, assume that we are given a single point x_i ∈ R^d and x̄_i = t_i A* x_i + x_i, where t_i ∈ R is unobserved. If we approximate A* via the rank-one approximation A = (x̄_i − x_i) x_i^T, then ∥x_i∥_2^{−2} A x_i + x_i = x̄_i. This shows that A captures the behavior of A* on the single point x_i. By sampling sufficiently many examples and appropriately weighting each example, we can construct an accurate approximation over all points.

¹Section 9 of the supplemental material introduces a kernelized version that extends this idea to general manifolds.

Let us subsample x_1, ..., x_r (WLOG, these are the first r points). Given these samples, let us write a transformation A as a weighted sum of r rank-one matrices (x̄_j − x_j) x_j^T with weights α ∈ R^{n×r}. We then optimize these weights:

min_α  Σ_{i=1}^n ∥ Σ_{j=1}^r α_{ij} (x̄_j − x_j) x_j^T x_i − (x̄_i − x_i) ∥² + λ∥α∥_*.   (6)

Next we show that with high probability, the weighted sum of O(K²d) samples is close in operator norm to the true transformation matrix A* (Lemma 3.2 and Theorem 3.3).

3.2 Learning one transformation via subsampling

We begin by giving the intuition behind the sampling-based objective in the one-transformation case. The correctness of rank-one reconstruction is obvious in the special case where the number of samples is r = d and for each i we have x_i = e_i, where e_i is the i-th canonical basis vector. In this case x̄_i = t_i A* e_i + e_i for some unknown t_i ∈ R. Thus we can easily reconstruct A* with a weighted combination of rank-one samples as A = Σ_i A* e_i e_i^T = Σ_i α_i (x̄_i − x_i) x_i^T with α_i = t_i^{−1}. In the general case, we observe the effects of A* on a non-orthogonal set of vectors x_1, ..., x_r as x̄_i − x_i = t_i A* x_i. A similar argument follows by changing basis so that t_i x_i is the i-th canonical basis vector and reconstructing A* in this new basis. The change-of-basis matrix in this case is the map Σ^{−1/2}, where Σ = Σ_{i=1}^r x_i x_i^T / r. Our lemma below makes the intuition precise and shows that given r ≥ d samples, there exist weights α ∈ R^r such that A* = Σ_i α_i (x̄_i − x_i) x_i^T Σ^{−1}, where Σ is the inner product matrix from above. This justifies our objective in Eq. (6), since we can whiten x to ensure Σ = I, and there exist weights α_{ij} which minimize the objective by reconstructing A*.

Lemma 3.2. Let x_1, ..., x_r be drawn i.i.d. from a density with full-rank covariance, and let the neighboring points x̄_1, ..., x̄_r be defined by x̄_i = t_i A* x_i + x_i for some unknown t_i ≠ 0 and A* ∈ R^{d×d}. If r ≥ d, then there exist weights α ∈ R^r which recover the unknown A* as

A* = Σ_{i=1}^r α_i (x̄_i − x_i) x_i^T Σ^{−1},   where α_i = 1/(r t_i) and Σ = Σ_{i=1}^r x_i x_i^T / r.

Proof. The identity x̄_i = t_i A* x_i + x_i implies t_i (Σ^{−1/2} A* Σ^{1/2}) Σ^{−1/2} x_i = Σ^{−1/2}(x̄_i − x_i).
Summing both sides with weights α_i and multiplying by x_i^T (Σ^{−1/2})^T yields

Σ_{i=1}^r α_i Σ^{−1/2} (x̄_i − x_i) x_i^T (Σ^{−1/2})^T = Σ_{i=1}^r α_i t_i (Σ^{−1/2} A* Σ^{1/2}) Σ^{−1/2} x_i x_i^T (Σ^{−1/2})^T = (Σ^{−1/2} A* Σ^{1/2}) Σ_{i=1}^r α_i t_i Σ^{−1/2} x_i x_i^T (Σ^{−1/2})^T.

By construction of Σ^{−1/2} and α_i = 1/(t_i r), we have Σ_{i=1}^r α_i t_i Σ^{−1/2} x_i x_i^T (Σ^{−1/2})^T = I. Therefore, Σ_{i=1}^r α_i Σ^{−1/2} (x̄_i − x_i) x_i^T (Σ^{−1/2})^T = Σ^{−1/2} A* Σ^{1/2}. When x spans R^d, Σ^{−1/2} is both invertible and symmetric, giving the lemma statement.

3.3 Learning multiple transformations

In the case of multiple transformations, the notion of recovering any single transformation matrix A*_k is ambiguous, since given transformations A*_1 and A*_2, the matrices A*_1 + A*_2 and A*_1 − A*_2 both locally generate the same family of transformations. We will refer to the transformations A* ∈ R^{K×d×d} and strengths t ∈ R^{n×K} as disentangled if t^T t / r = σ² I for a scalar σ² > 0. This criterion implies that the activation strengths are uncorrelated across the observed data. We will later show, in Section 3.4, that this definition of disentangling captures our intuition, has a closed-form estimate, and is closely connected to our optimization problem. We show a result analogous to the one-transformation case (Lemma 3.2): given r > K² samples we can find weights α ∈ R^{r×K} which reconstruct any of the K disentangled transformation matrices A*_k as A*_k ≈ A_k = Σ_{i=1}^r α_{ik} (x̄_i − x_i) x_i^T. This implies that minimization over α leads to estimates of A*. In contrast to Lemma 3.2, the multiple-transformation recovery guarantee is probabilistic and inexact. This is because each summand (x̄_i − x_i) x_i^T contains effects from all K transformations, and there is no weighting scheme which exactly isolates the effects of a single transformation A*_k. Instead, we utilize the randomness in t to estimate A*_k by approximately canceling the contributions from the K − 1 other transformations.

Theorem 3.3. Let x_1, ..., x_r ∈ R^d be i.i.d. isotropic random variables, and for each k ∈ [K], let t_{1,k}, ..., t_{r,k} ∈ R be i.i.d. draws from a symmetric random variable with t^T t / r = σ² I, |t_{ik}| < C_1, and ∥x_i∥_2 < C_2 with probability one. Given x_1, ..., x_r and neighbors x̄_1, ..., x̄_r defined as x̄_i = Σ_{k=1}^K t_{ik} A*_k x_i + x_i for some A*_k ∈ R^{d×d}, there exist α ∈ R^{r×K} such that for all k ∈ [K],

P( ∥A*_k − Σ_{i=1}^r α_{ik} (x̄_i − x_i) x_i^T∥ > ε ) < K d exp( −r ε² sup_k ∥A*_k∥^{−2} / (2K² · 2 C_1² C_2² (1 + K^{−1} sup_k ∥A*_k∥^{−1} ε)) ).

Proof. We give a proof sketch and defer the details to the supplement (Section 7). We claim that for any k, α_{ik} = t_{ik} / (σ² r) satisfies the theorem statement. Following the one-dimensional case, we can expand the outer product in terms of the transformations A* as

A_k = Σ_{i=1}^r α_{ik} (x̄_i − x_i) x_i^T = Σ_{k'=1}^K A*_{k'} Σ_{i=1}^r α_{ik} t_{ik'} x_i x_i^T.

As before, we must now control the inner terms Z^k_{k'} = Σ_{i=1}^r α_{ik} t_{ik'} x_i x_i^T. We want Z^k_{k'} to be close to the identity when k' = k and near zero when k' ≠ k. Our choice of α_{ik} = t_{ik} / (σ² r) does this: if k' ≠ k, then the products α_{ik} t_{ik'} are zero mean with random sign, resulting in Rademacher concentration bounds near zero, and if k' = k, then Bernstein bounds show that Z^k_k ≈ I, since Σ_{i=1}^r E[α_{ik} t_{ik}] = 1.

3.4 Disentangling transformations

Given K estimated transformations A_1, ..., A_K ∈ R^{d×d} and strengths t ∈ R^{n×K}, any invertible matrix W ∈ R^{K×K} can be used to form an equivalent family of transformations Â_i = Σ_k W_{ik} A_k and t̂_{ik} = Σ_j W^{−1}_{kj} t_{ij}. Despite this unidentifiability, there is a choice of Â_1, ..., Â_K and t̂ which is equivalent to A_1, ..., A_K but disentangled, meaning that across the observed transformation pairs {(x_i, x̄_i)}_{i=1}^n, the strengths of any two transformations are uncorrelated: t̂^T t̂ / n = I. This is a necessary condition to capture the intuition that two disentangled transformations should have independent strength distributions. For example, given a set of images generated by changing lighting conditions and sharpness, we expect the sharpness of an image to be uncorrelated with the lighting condition. Formally, we will define a set of Â such that t̂_{·j} and t̂_{·i} are uncorrelated over the observed data, and any pair of transformations Â_i x and Â_j x generate decorrelated outputs.
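Returning briefly to the single-transformation setting, the closed-form reconstruction of Lemma 3.2 is easy to verify numerically, since in the noiseless linear model the weighted sum of rank-one matrices recovers A* exactly. A sketch, in which the dimensions, strength distribution, and random draws are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 5, 50
A_star = rng.normal(size=(d, d))           # the unknown transformation A*
x = rng.normal(size=(r, d))                # sampled points x_1, ..., x_r
t = rng.uniform(0.5, 1.5, size=r)          # unobserved nonzero strengths t_i
xbar = x + t[:, None] * (x @ A_star.T)     # neighbors: xbar_i = t_i A* x_i + x_i

Sigma = x.T @ x / r                        # Sigma = sum_i x_i x_i^T / r
alpha = 1.0 / (r * t)                      # weights alpha_i = 1/(r t_i) from Lemma 3.2
A_hat = sum(a * np.outer(xb - xi, xi)
            for a, xb, xi in zip(alpha, xbar, x)) @ np.linalg.inv(Sigma)
```

The algebra behind the check: each term contributes (1/r) A* x_i x_i^T, so the sum equals A* Σ, and right-multiplying by Σ^{−1} returns A* up to floating-point error.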
In contrast to mutual-information-based approaches to finding disentangled representations, our approach only seeks to control second moments, but it enforces decorrelation both in the latent space (t_{ik}) and in the observed space (Â_i x).

Theorem 3.4. Given A_k ∈ R^{d×d} and t ∈ R^{n×K} with Σ_i t_{ik} = 0, define Z = U S V^T ∈ R^{n×d²} as the SVD of Z, where each row is Z_i = Σ_{k=1}^K t_{ik} vec(A_k). The transformations Â_k = S_{k,k} mat(V_k^T) and strengths t̂_{ik} = U_{ik} fulfil the following properties:
• Σ_k t̂_{ik} Â_k x_i = Σ_k t_{ik} A_k x_i (correct behavior),
• t̂^T t̂ = I (uncorrelated in latent space),
• E[⟨Â_i X, Â_j X⟩] = 0 for any i ≠ j and random variable X with E[X X^T] = I (uncorrelated in observed space).

Proof. The first property follows since Z is rank-K by construction, and the rank-K SVD preserves Σ_k t_{ik} A_k exactly. The second property follows from the SVD, as U^T U = I. The last property follows from V V^T = I, which implies tr(Â_i^T Â_j) = 0 for i ≠ j; by linearity of the trace, E[⟨Â_i X, Â_j X⟩] = S_{i,i} S_{j,j} tr(mat(V_i) mat(V_j)^T) = 0.

Interestingly, this SVD appears in both the convex and the subsampled algorithm (Eq. 6) as part of the proximal step for the trace norm optimization. Thus the rank sparsity induced by the trace norm naturally favors a small number of disentangled transformations.

4 Experiments

We evaluate the effectiveness of our sampling-based convex relaxation for learning transformations in two ways. In Section 4.1, we check whether we can recover a known set of rotation/translation transformations applied to a downsampled celebrity face image dataset. Next, in Section 4.2 we perform a qualitative evaluation of learning transformations over raw celebrity faces (CelebA) and MNIST digits, following recent evaluations of disentangling in adversarial networks [2].
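The construction in Theorem 3.4 is a plain rank-K SVD, and its three properties can be checked directly. A sketch of the disentangling step, in which the shapes and random draws are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, K = 200, 4, 2
A = rng.normal(size=(K, d, d))          # entangled transformations A_1, ..., A_K
t = rng.normal(size=(n, K))
t -= t.mean(axis=0)                     # center strengths so that sum_i t_ik = 0

Z = t @ A.reshape(K, d * d)             # row i of Z is sum_k t_ik vec(A_k)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
A_hat = S[:K, None, None] * Vt[:K].reshape(K, d, d)   # A_hat_k = S_kk mat(V_k)
t_hat = U[:, :K]                        # disentangled strengths
```

Because Z has rank at most K by construction, the rank-K SVD reproduces it exactly (correct behavior); the columns of U are orthonormal (uncorrelated in latent space); and the orthonormal rows of V^T give tr(Â_1^T Â_2) = 0 (uncorrelated in observed space).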
4.1 Recovering known transformations

We validate our convex relaxation and sampling procedure by recovering known transformations from synthetic data, and we compare against existing approaches for learning linear transformations. Our experiment consists of recovering synthetic transformations applied to 50-image subsets of a downsampled version (18 × 18) of CelebA. The restrictions on resolution and dataset size were imposed by the runtime of the baseline methods. We compare two versions of our matrix Lie group learning algorithm against two baselines. For our method, we implement and compare convex relaxation with sampling (Eq. 6), and convex relaxation with sampling followed by gradient descent. This second method ensures that we achieve exactly the desired number of transformations K, since trace norm regularization cannot guarantee a fixed rank constraint. The full convex relaxation (Eq. 5) is not covered here, since it is too slow to run on even the smallest of our experiments. As baselines, we compare to gradient descent with restarts on the nonconvex objective (Eq. 3) and to the EM algorithm of Miao and Rao [11], run for 20 iterations and augmented with the SVD-based disentangling method (Theorem 3.4). These two methods represent the two classes of existing approaches to estimating general linear transformations from pairwise data [11]. Optimization for our methods and for gradient descent uses minibatch proximal gradient descent with Adagrad [8], where the proximal step for trace norm penalties uses subsampling down to five thousand points and a randomized SVD. All learned transformations were disentangled using the SVD method (Theorem 3.4) unless otherwise noted. Figures 1a and 1b show the results of recovering a single horizontal translation transformation, with error measured in operator norm. Convex relaxation plus gradient descent (Convex+Gradient) achieves the same low error across all sampled 50-image subsets.
Without the gradient descent, convex relaxation alone does not achieve low error, since the trace norm penalty does not produce exactly rank-one results. Gradient descent, on the other hand, gets stuck in local minima even with stepsize tuning and restarts, as indicated by the wide variance in error across runs. All methods outperform EM while using substantially less time. Next, we test disentangling and multiple-transformation recovery for random rotations, horizontal translations, and vertical translations (Figure 1c). In this experiment, we apply the three types of transformations to the downsampled CelebA images, and evaluate the outputs by measuring the minimum-cost matching for the operator norm error between learned transformation matrices and the ground truth. Minimizing this metric requires recovering the true transformations up to label permutation. We find results consistent with the one-transform recovery case, where convex relaxation with gradient descent outperforms the baselines. We additionally find SVD-based disentangling to be critical to recovering multiple transformations: removing SVD from the nonconvex gradient descent baseline leads to substantially worse results (Figure 1c).

Figure 1: Sampled convex relaxation with gradient descent achieves lower error on recovering a single known transformation (panel a: operator norm error for recovering a single translation transform), runs faster than baselines (panel b: sampled convex relaxations are faster than baselines), and recovers multiple disentangled transformations accurately (panel c: multiple transformations can be recovered using SVD-based disentangling).

4.2 Qualitative outputs

We now test convex relaxation with sampling on MNIST and celebrity faces. We show a subset of learned transformations here and include the full set in the supplemental Jupyter notebook.
Figure 2: Matrix transformations learned on MNIST (top rows) and extrapolating to Kannada handwriting (bottom row): (a) thickness, (b) blur, (c) loop size, (d) angle. The center column is the original digit; flanking columns are generated by applying the transformation matrix.

On MNIST digits we trained a five-dimensional linear transformation model over a 20,000-example subset of the data, which took 10 minutes. The components extracted by our approach represent coherent stylistic features identified by earlier work using neural networks [2], such as thickness and rotation, as well as some new transformations: loop size and blur. Examples of images generated from these learned transformations are shown in Figure 2. The center column is the original image, and all other images are generated by repeatedly applying transformation matrices. We also found that the transformations could sometimes extrapolate to other handwritten symbols, such as Kannada handwriting [5] (last row, Figure 2). Finally, we visualize the learned transformations by summing the estimated transformation strength for each transformation across the minimum spanning tree on the observed data (see supplement Section 9 for details). This visualization demonstrates that the learned representation of the data captures the style of the digit, such as thickness and loop size, and ignores the digit identity. This is a highly desirable trait for the algorithm, as it means that we can extract continuous factors of variation such as digit thickness without explicitly specifying and removing cluster structure in the data (Figure 3).

Figure 3: Embedding of MNIST digits based on two transformations: thickness and loop size. The learned transformations extract continuous, stylistic features which apply across multiple clusters despite being given no cluster information.

Figure 4: Baselines applied to the same MNIST data often entangle digit identity and style: (a) PCA, (b) InfoGAN.
In contrast to our method, many baseline methods inadvertently capture digit identity as part of the learned transformation. For example, the first component of PCA simply adds a zero to every image (Figure 4), while the first component of InfoGAN has higher fidelity in exchange for training instability, which often results in mixing digit identity and multiple transformations (Figure 4). Finally, we apply our method to the celebrity faces dataset and find that we are able to extract high-level transformations using only linear models. We trained our model on a 1000-dimensional PCA projection of CelebA, constructed from the original 116412 dimensions, with K = 20, and found both global scene transformations such as sharpness and contrast (Figure 5a) and higher-level transformations such as adding a smile (Figure 5b).

Figure 5: Learned transformations for celebrity faces capture both simple (sharpness) and high-level (smiling) transformations: (a) contrast / sharpness, (b) smiling / skin tone. For each panel, the center column is the original image, and columns to the left and right were generated by repeatedly applying the learned transformation.

5 Related Work and Discussion

Learning transformation matrices, also known as Lie group learning, has a long history, with the closest work to ours being Miao and Rao [11] and Rao and Ruderman [12]. These earlier methods use a Taylor approximation to learn a set of small (< 10 × 10) transformation matrices given pairs of image patches undergoing a small transformation. In contrast, our work does not require supervision in the form of transformation pairs and provides a scalable new convex objective function. There have been improvements to Rao and Ruderman [12] focusing on removing the Taylor approximation in order to learn transformations from distant examples: Cohen and Welling [3, 4] learned commutative and 3D-rotation Lie groups under a strong assumption of uniform density over rotations. Sohl-Dickstein et al.
[14] learn commutative transformations generated by normal matrices using eigendecompositions and supervision in the form of successive 17 × 17 image patches in a video. Our work differs because we seek to learn multiple, general transformation matrices from large, high-dimensional datasets. Because of this difference, our algorithm focuses on scalability and avoiding local minima, at the expense of using a less accurate first-order Taylor approximation. This approximation is reasonable, since we fit our model to nearest-neighbor pairs, which are by definition close to each other. Empirically, we find that these approximations result in a scalable algorithm for unsupervised recovery of transformations. Learning to transform between neighbors on a nonlinear manifold has been explored in Dollár et al. [7] and Bengio and Monperrus [1]. Both works model a manifold by predicting the linear neighborhoods around points using nonlinear functions (radial basis functions in Dollár et al. [7] and a one-layer neural net in Bengio and Monperrus [1]). In contrast to these methods, which begin with the goal of learning all manifolds, we focus on a class of linear transformations and treat the general manifold problem as a special kernelization. This has three benefits: first, we avoid the high model complexity necessary for general manifold learning. Second, extrapolation beyond the training data occurs explicitly from the linear parametric form of our model (e.g., from digits to Kannada). Finally, linearity leads to a definition of disentangling based on correlations and an SVD-based method for recovering disentangled representations. In summary, we have presented an unsupervised approach for learning disentangled representations via linear Lie groups. We demonstrated that for image data, even a linear model is surprisingly effective at learning semantically meaningful transformations.
Our results suggest that these semi-parametric transformation models are promising for identifying semantically meaningful low-dimensional continuous structures from high-dimensional real-world data.

Acknowledgements. We thank Arun Chaganty for helpful discussions and comments. This work was supported by NSF-CAREER award 1553086, DARPA (Grant N66001-14-2-4055), and the DAIS ITA program (W911NF-16-3-0001).

Reproducibility. Code, data, and experiments can be found on Codalab Worksheets (http://bit.ly/2Aj5tti).
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations

Eirikur Agustsson, ETH Zurich, aeirikur@vision.ee.ethz.ch; Fabian Mentzer, ETH Zurich, mentzerf@vision.ee.ethz.ch; Michael Tschannen, ETH Zurich, michaelt@nari.ee.ethz.ch; Lukas Cavigelli, ETH Zurich, cavigelli@iis.ee.ethz.ch; Radu Timofte, ETH Zurich & Merantix, timofter@vision.ee.ethz.ch; Luca Benini, ETH Zurich, benini@iis.ee.ethz.ch; Luc Van Gool, KU Leuven & ETH Zurich, vangool@vision.ee.ethz.ch

Abstract

We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.

1 Introduction

In recent years, deep neural networks (DNNs) have led to many breakthrough results in machine learning and computer vision [20, 28, 10], and are now widely deployed in industry. Modern DNN models often have millions or tens of millions of parameters, leading to highly redundant structures, both in the intermediate feature representations they generate and in the model itself. Although overparametrization of DNN models can have a favorable effect on training, in practice it is often desirable to compress DNN models for inference, e.g., when deploying them on mobile or embedded devices with limited memory. The ability to learn compressible feature representations, on the other hand, has a large potential for the development of (data-adaptive) compression algorithms for various data types such as images, audio, video, and text, for all of which various DNN architectures are now available.
DNN model compression and lossy image compression using DNNs have both independently attracted a lot of attention lately. In order to compress a set of continuous model parameters or features, we need to approximate each parameter or feature by one representative from a set of quantization levels (or vectors, in the multi-dimensional case), each associated with a symbol, and then store the assignments (symbols) of the parameters or features, as well as the quantization levels. Representing each parameter of a DNN model or each feature in a feature representation by the corresponding quantization level will come at the cost of a distortion D, i.e., a loss in performance (e.g., in classification accuracy for a classification DNN with quantized model parameters, or in reconstruction error in the context of autoencoders with quantized intermediate feature representations). The rate R, i.e., the entropy of the symbol stream, determines the cost of encoding the model or features in a bitstream. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. To learn a compressible DNN model or feature representation we need to minimize D + βR, where β > 0 controls the rate-distortion trade-off. Including the entropy into the learning cost function can be seen as adding a regularizer that promotes a compressible representation of the network or feature representation. However, two major challenges arise when minimizing D + βR for DNNs: i) coping with the non-differentiability (due to quantization operations) of the cost function D + βR, and ii) obtaining an accurate and differentiable estimate of the entropy (i.e., R). To tackle i), various methods have been proposed. Among the most popular ones are stochastic approximations [39, 19, 7, 32, 5] and rounding with a smooth derivative approximation [15, 30]. To address ii) a common approach is to assume the symbol stream to be i.i.d. 
and to model the marginal symbol distribution with a parametric model, such as a Gaussian mixture model [30, 34], a piecewise linear model [5], or a Bernoulli distribution [33] (in the case of binary symbols).

[Inset figure: for DNN model compression, the vector to be compressed is z = [w1, w2, ..., wK], the network parameters; for data compression, it is the bottleneck feature z = x(b), with x(b) = Fb ◦ ... ◦ F1(x) and x(K) = FK ◦ ... ◦ Fb+1(x(b)).]

In this paper, we propose a unified end-to-end learning framework for learning compressible representations, jointly optimizing the model parameters, the quantization levels, and the entropy of the resulting symbol stream to compress either a subset of feature representations in the network or the model itself (see inset figure). We address both challenges i) and ii) above with methods that are novel in the context of DNN model and feature compression. Our main contributions are:
• We provide the first unified view on end-to-end learned compression of feature representations and DNN models. These two problems have been studied largely independently in the literature so far.
• Our method is simple and intuitively appealing, relying on soft assignments of a given scalar or vector to be quantized to quantization levels. A parameter controls the "hardness" of the assignments and allows a gradual transition from soft to hard assignments during training. In contrast to rounding-based or stochastic quantization schemes, our coding scheme is directly differentiable, and thus trainable end-to-end.
• Our method does not force the network to adapt to specific (given) quantization outputs (e.g., integers) but learns the quantization levels jointly with the weights, enabling application to a wider set of problems. In particular, we explore vector quantization for the first time in the context of learned compression and demonstrate its benefits over scalar quantization.
• Unlike essentially all previous works, we make no assumption on the marginal distribution of the features or model parameters to be quantized, by relying on a histogram of the assignment probabilities rather than the parametric models commonly used in the literature.
• We apply our method to DNN model compression for a 32-layer ResNet model [13] and to full-resolution image compression using a variant of the compressive autoencoder proposed recently in [30]. In both cases, we obtain performance competitive with the state-of-the-art, while making fewer model assumptions and significantly simplifying the training procedure compared to the original works [30, 6].

The remainder of the paper is organized as follows. Section 2 reviews related work, before our soft-to-hard vector quantization method is introduced in Section 3. We then apply it to a compressive autoencoder for image compression and to ResNet for DNN compression in Sections 4 and 5, respectively. Section 6 concludes the paper.

2 Related Work

There has been a surge of interest in DNN models for full-resolution image compression, most notably [32, 33, 4, 5, 30], all of which outperform JPEG [35] and some even JPEG 2000 [29]. The pioneering work [32, 33] showed that progressive image compression can be learned with convolutional recurrent neural networks (RNNs), employing a stochastic quantization method during training. [4, 30] both rely on convolutional autoencoder architectures. These works are discussed in more detail in Section 4. In the context of DNN model compression, the line of works [12, 11, 6] adopts a multi-step procedure in which the weights of a pretrained DNN are first pruned and the remaining parameters are quantized using a k-means-like algorithm, the DNN is then retrained, and finally the quantized DNN model is encoded using entropy coding.
A notably different approach is taken by [34], where the DNN compression task is tackled using the minimum description length principle, which has a solid information-theoretic foundation. It is worth noting that many recent works target quantization of the DNN model parameters, and possibly the feature representation, to speed up DNN evaluation on hardware with low-precision arithmetic, see, e.g., [15, 23, 38, 43]. However, most of these works do not specifically train the DNN such that the quantized parameters are compressible in an information-theoretic sense. Gradually moving from an easy (convex or differentiable) problem to the actual harder problem during optimization, as done in our soft-to-hard quantization framework, has been studied in various contexts and falls under the umbrella of continuation methods (see [3] for an overview). Formally related, but motivated from a probabilistic perspective, are deterministic annealing methods for maximum entropy clustering/vector quantization, see, e.g., [24, 42]. Arguably most related to our approach is [41], which also employs continuation for nearest neighbor assignments, but in the context of learning a supervised prototype classifier. To the best of our knowledge, continuation methods have not been employed before in an end-to-end learning framework for neural network-based image compression or DNN compression.

3 Proposed Soft-to-Hard Vector Quantization

3.1 Problem Formulation

Preliminaries and Notations. We consider the standard model for DNNs, where we have an architecture $F : \mathbb{R}^{d_1} \mapsto \mathbb{R}^{d_{K+1}}$ composed of $K$ layers, $F = F_K \circ \cdots \circ F_1$, where layer $F_i$ maps $\mathbb{R}^{d_i} \to \mathbb{R}^{d_{i+1}}$ and has parameters $w_i \in \mathbb{R}^{m_i}$. We refer to $W = [w_1, \cdots, w_K]$ as the parameters of the network, and we denote the intermediate layer outputs of the network as $x^{(0)} := x$ and $x^{(i)} := F_i(x^{(i-1)})$, such that $F(x) = x^{(K)}$ and $x^{(i)}$ is the feature vector produced by layer $F_i$. The parameters of the network are learned w.r.t.
training data $\mathcal{X} = \{x_1, \cdots, x_N\} \subset \mathbb{R}^{d_1}$ and labels $\mathcal{Y} = \{y_1, \cdots, y_N\} \subset \mathbb{R}^{d_{K+1}}$, by minimizing a real-valued loss $\mathcal{L}(\mathcal{X}, \mathcal{Y}; F)$. Typically, the loss can be decomposed as a sum over the training data plus a regularization term,

$$\mathcal{L}(\mathcal{X}, \mathcal{Y}; F) = \frac{1}{N} \sum_{i=1}^{N} \ell(F(x_i), y_i) + \lambda R(W), \qquad (1)$$

where $\ell(F(x), y)$ is the sample loss, $\lambda > 0$ sets the regularization strength, and $R(W)$ is a regularizer (e.g., $R(W) = \sum_i \|w_i\|^2$ for $\ell_2$ regularization). In this case, the parameters of the network can be learned using stochastic gradient descent over mini-batches. Assuming that the data $\mathcal{X}, \mathcal{Y}$ on which the network is trained is drawn from some distribution $P_{X,Y}$, the loss (1) can be thought of as an estimator of the expected loss $\mathbb{E}[\ell(F(X), Y) + \lambda R(W)]$. In the context of image classification, $\mathbb{R}^{d_1}$ would correspond to the input image space and $\mathbb{R}^{d_{K+1}}$ to the classification probabilities, and $\ell$ would be the categorical cross entropy. We say that the deep architecture is an autoencoder when the network maps back into the input space, with the goal of reproducing the input. In this case, $d_1 = d_{K+1}$ and $F(x)$ is trained to approximate $x$, e.g., with a mean squared error loss $\ell(F(x), y) = \|F(x) - y\|^2$. Autoencoders typically condense the dimensionality of the input into some smaller dimensionality inside the network, i.e., the layer with the smallest output dimension, $x^{(b)} \in \mathbb{R}^{d_b}$, has $d_b \ll d_1$, which we refer to as the "bottleneck".

Compressible representations. We say that a weight parameter $w_i$ or a feature $x^{(i)}$ has a compressible representation if it can be serialized to a binary stream using few bits. For DNN compression, we want the entire network parameters $W$ to be compressible. For image compression via an autoencoder, we just need the features in the bottleneck, $x^{(b)}$, to be compressible. Suppose we want to compress a feature representation $z \in \mathbb{R}^d$ in our network (e.g., $x^{(b)}$ of an autoencoder) given an input $x$.
Assuming that the data $\mathcal{X}, \mathcal{Y}$ is drawn from some distribution $P_{X,Y}$, $z$ will be a sample from a continuous random variable $Z$. To store $z$ with a finite number of bits, we need to map it to a discrete space. Specifically, we map $z$ to a sequence of $m$ symbols using a (symbol) encoder $E : \mathbb{R}^d \mapsto [L]^m$, where each symbol is an index ranging from 1 to $L$, i.e., $[L] := \{1, \ldots, L\}$. The reconstruction of $z$ is then produced by a (symbol) decoder $D : [L]^m \mapsto \mathbb{R}^d$, which maps the symbols back to $\hat{z} = D(E(z)) \in \mathbb{R}^d$. Since $z$ is a sample from $Z$, the symbol stream $E(z)$ is drawn from the discrete probability distribution $P_{E(Z)}$. Thus, given the encoder $E$, according to Shannon's source coding theorem [8], the correct metric for compressibility is the entropy of $E(Z)$:

$$H(E(Z)) = - \sum_{e \in [L]^m} P(E(Z) = e) \log(P(E(Z) = e)). \qquad (2)$$

Our generic goal is hence to optimize the rate-distortion trade-off between the expected loss and the entropy of $E(Z)$:

$$\min_{E, D, W} \; \mathbb{E}_{X,Y}[\ell(\hat{F}(X), Y) + \lambda R(W)] + \beta H(E(Z)), \qquad (3)$$

where $\hat{F}$ is the architecture where $z$ has been replaced with $\hat{z}$, and $\beta > 0$ controls the trade-off between compressibility of $z$ and the distortion it imposes on $\hat{F}$. However, we cannot optimize (3) directly. First, we do not know the distribution of $X$ and $Y$. Second, the distribution of $Z$ depends in a complex manner on the network parameters $W$ and the distribution of $X$. Third, the encoder $E$ is a discrete mapping and thus not differentiable. For our first approximation we consider the sample entropy instead of $H(E(Z))$. That is, given the data $\mathcal{X}$ and some fixed network parameters $W$, we can estimate the probabilities $P(E(Z) = e)$ for $e \in [L]^m$ via a histogram. For this estimate to be accurate, we however would need $|\mathcal{X}| \gg L^m$. If $z$ is the bottleneck of an autoencoder, this would correspond to trying to learn a single histogram for the entire discretized data space. We relax this by assuming the entries of $E(Z)$ are i.i.d., such that we can instead compute the histogram over the $L$ distinct values.
More precisely, we assume that for $e = (e_1, \cdots, e_m) \in [L]^m$ we can approximate $P(E(Z) = e) \approx \prod_{l=1}^{m} p_{e_l}$, where $p_j$ is the histogram estimate

$$p_j := \frac{|\{e_l(z_i) \mid l \in [m], \, i \in [N], \, e_l(z_i) = j\}|}{mN}, \qquad (4)$$

where we denote the entries of $E(z) = (e_1(z), \cdots, e_m(z))$ and $z_i$ is the output feature $z$ for training data point $x_i \in \mathcal{X}$. We then obtain an estimate of the entropy of $Z$ by substituting the approximation above into (2),

$$H(E(Z)) \approx - \sum_{e \in [L]^m} \left( \prod_{l=1}^{m} p_{e_l} \right) \log \left( \prod_{l=1}^{m} p_{e_l} \right) = -m \sum_{j=1}^{L} p_j \log p_j = m H(p), \qquad (5)$$

where the first (exact) equality is due to [8], Thm. 2.6.6, and $H(p) := -\sum_{j=1}^{L} p_j \log p_j$ is the sample entropy for the (i.i.d., by assumption) components of $E(Z)$.¹ We can now simplify the ideal objective of (3), replacing the expected loss with the sample mean over $\ell$ and the entropy with the sample entropy $H(p)$, obtaining

$$\frac{1}{N} \sum_{i=1}^{N} \ell(F(x_i), y_i) + \lambda R(W) + \beta m H(p). \qquad (6)$$

We note that so far we have assumed that $z$ is a feature output in $F$, i.e., $z = x^{(k)}$ for some $k \in [K]$. However, the above treatment would stay the same if $z$ is the concatenation of multiple feature outputs. One can also obtain a separate sample entropy term for separate feature outputs and add them to the objective in (6). In case $z$ is composed of one or more parameter vectors, such as in DNN compression where $z = W$, $z$ and $\hat{z}$ cease to be random variables, since $W$ is a parameter of the model. That is, as opposed to the case where we have a source $X$ that produces another source $\hat{Z}$ which we want to be compressible, we want the discretization of a single parameter vector $W$ to be compressible. This is analogous to compressing a single document, instead of learning a model that can compress a stream of documents. In this case, (3) is not the appropriate objective, but our simplified objective in (6) remains appropriate, because a standard technique in compression is to build a statistical model of the (finite) data which has a small sample entropy.
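Equations (4)–(5) amount to a simple histogram computation over the symbol stream. A sketch (function name is ours; symbols are assumed 0-indexed):

```python
import numpy as np

def sample_entropy(symbols, L):
    """Histogram estimate p (Eq. 4) and per-symbol sample entropy H(p) (Eq. 5).

    symbols: (N, m) integer array of encoder outputs e_l(z_i) in {0, ..., L-1}.
    Returns (p, H) with H in nats; m * H estimates H(E(Z)) under the
    i.i.d. assumption.
    """
    counts = np.bincount(symbols.ravel(), minlength=L)
    p = counts / symbols.size                 # p_j of Eq. (4)
    nz = p > 0                                # 0 log 0 := 0
    H = -np.sum(p[nz] * np.log(p[nz]))        # H(p) of Eq. (5)
    return p, H
```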
The only difference is that now the histogram probabilities in (4) are taken over $W$ instead of the dataset $\mathcal{X}$, i.e., $N = 1$ and $z_i = W$ in (4), and they count towards storage, as do the encoder $E$ and decoder $D$.

¹In fact, from [8], Thm. 2.6.6, it follows that if the histogram estimates $p_j$ are exact, (5) is an upper bound for the true $H(E(Z))$ (i.e., without the i.i.d. assumption).

Challenges. Eq. (6) gives us a unified objective that can well describe the trade-off between compressible representations in a deep architecture and the original training objective of the architecture. However, the problem of finding a good encoder $E$, a corresponding decoder $D$, and parameters $W$ that minimize the objective remains. First, we need to impose a form for the encoder and decoder, and second, we need an approach that can optimize (6) w.r.t. the parameters $W$. Independently of the choice of $E$, (6) is challenging since $E$ is a mapping to a finite set and, therefore, not differentiable. This implies that neither $H(p)$ nor $\hat{F}$ is differentiable w.r.t. the parameters of $z$ and the layers that feed into $z$. For example, if $\hat{F}$ is an autoencoder and $z = x^{(b)}$, the output of the network will not be differentiable w.r.t. $w_1, \cdots, w_b$ and $x^{(0)}, \cdots, x^{(b-1)}$. These challenges motivate the design decisions of our soft-to-hard annealing approach, described in the next section.

3.2 Our Method

Encoder and decoder form. For the encoder $E : \mathbb{R}^d \mapsto [L]^m$ we assume that we have $L$ center vectors $\mathcal{C} = \{c_1, \cdots, c_L\} \subset \mathbb{R}^{d/m}$. The encoding of $z \in \mathbb{R}^d$ is then performed by reshaping it into a matrix $Z = [\bar{z}^{(1)}, \cdots, \bar{z}^{(m)}] \in \mathbb{R}^{(d/m) \times m}$ and assigning each column $\bar{z}^{(l)}$ to the index of its nearest neighbor in $\mathcal{C}$. That is, we assume the feature $z \in \mathbb{R}^d$ can be modeled as a sequence of $m$ points in $\mathbb{R}^{d/m}$, which we partition into the Voronoi tessellation over the centers $\mathcal{C}$.
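A minimal sketch of this reshape-and-assign encoder (our naming; the centers are stored as the columns of a $(d/m) \times L$ matrix, and we pick one of several possible reshape conventions):

```python
import numpy as np

def encode(z, C, m):
    """Hard symbol encoder E: reshape z into m points in R^{d/m}, then
    assign each point the index of its nearest center (0-indexed sketch).

    z: (d,) feature vector; C: (d/m, L) matrix of center vectors.
    """
    Z = z.reshape(m, -1).T                               # (d/m, m): columns are points
    d2 = ((Z[:, :, None] - C[:, None, :]) ** 2).sum(0)   # (m, L) squared distances
    return d2.argmin(axis=1)                             # symbols e_1, ..., e_m
```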
The decoder $D : [L]^m \mapsto \mathbb{R}^d$ then simply constructs $\hat{Z} \in \mathbb{R}^{(d/m) \times m}$ from a symbol sequence $(e_1, \cdots, e_m)$ by picking the corresponding centers, $\hat{Z} = [c_{e_1}, \cdots, c_{e_m}]$, from which $\hat{z}$ is formed by reshaping $\hat{Z}$ back into $\mathbb{R}^d$. We will interchangeably write $\hat{z} = D(E(z))$ and $\hat{Z} = D(E(Z))$. The idea is then to relax $E$ and $D$ into continuous mappings via soft assignments instead of the hard nearest neighbor assignment of $E$.

Soft assignments. We define the soft assignment of $\bar{z} \in \mathbb{R}^{d/m}$ to $\mathcal{C}$ as

$$\phi(\bar{z}) := \mathrm{softmax}(-\sigma [\|\bar{z} - c_1\|_2, \ldots, \|\bar{z} - c_L\|_2]) \in \mathbb{R}^L, \qquad (7)$$

where $\mathrm{softmax}(y_1, \cdots, y_L)_j := \frac{e^{y_j}}{e^{y_1} + \cdots + e^{y_L}}$ is the standard softmax operator, such that $\phi(\bar{z})$ has positive entries and $\|\phi(\bar{z})\|_1 = 1$. We denote the $j$-th entry of $\phi(\bar{z})$ by $\phi_j(\bar{z})$ and note that

$$\lim_{\sigma \to \infty} \phi_j(\bar{z}) = \begin{cases} 1 & \text{if } j = \arg\min_{j' \in [L]} \|\bar{z} - c_{j'}\| \\ 0 & \text{otherwise,} \end{cases}$$

such that $\hat{\phi}(\bar{z}) := \lim_{\sigma \to \infty} \phi(\bar{z})$ converges to a one-hot encoding of the nearest center to $\bar{z}$ in $\mathcal{C}$. We therefore refer to $\hat{\phi}(\bar{z})$ as the hard assignment of $\bar{z}$ to $\mathcal{C}$, and to the parameter $\sigma > 0$ as the hardness of the soft assignment $\phi(\bar{z})$. Using soft assignments, we define the soft quantization of $\bar{z}$ as

$$\tilde{Q}(\bar{z}) := \sum_{j=1}^{L} c_j \phi_j(\bar{z}) = C \phi(\bar{z}),$$

where we write the centers as a matrix $C = [c_1, \cdots, c_L] \in \mathbb{R}^{d/m \times L}$. The corresponding hard assignment is taken with $\hat{Q}(\bar{z}) := \lim_{\sigma \to \infty} \tilde{Q}(\bar{z}) = c_{e(\bar{z})}$, where $e(\bar{z})$ is the index of the center in $\mathcal{C}$ nearest to $\bar{z}$. Therefore, we can now write:

$$\hat{Z} = D(E(Z)) = [\hat{Q}(\bar{z}^{(1)}), \cdots, \hat{Q}(\bar{z}^{(m)})] = C [\hat{\phi}(\bar{z}^{(1)}), \cdots, \hat{\phi}(\bar{z}^{(m)})].$$

Now, instead of computing $\hat{Z}$ via hard nearest neighbor assignments, we can approximate it with a smooth relaxation $\tilde{Z} := C [\phi(\bar{z}^{(1)}), \cdots, \phi(\bar{z}^{(m)})]$ by using the soft assignments instead of the hard assignments. Denoting the corresponding vector form by $\tilde{z}$, this gives us a differentiable approximation $\tilde{F}$ of the quantized architecture $\hat{F}$, obtained by replacing $\hat{z}$ in the network with $\tilde{z}$.

Entropy estimation.
Using the soft assignments, we can similarly define a soft histogram, by summing up the partial assignments to each center instead of counting as in (4):

$$q_j := \frac{1}{mN} \sum_{i=1}^{N} \sum_{l=1}^{m} \phi_j(\bar{z}_i^{(l)}).$$

This gives us a valid probability mass function $q = (q_1, \cdots, q_L)$, which is differentiable but converges to $p = (p_1, \cdots, p_L)$ as $\sigma \to \infty$. We can now define the "soft entropy" as the cross entropy between $p$ and $q$:

$$\tilde{H}(\phi) := H(p, q) = - \sum_{j=1}^{L} p_j \log q_j = H(p) + D_{KL}(p \| q),$$

where $D_{KL}(p \| q) = \sum_j p_j \log(p_j / q_j)$ denotes the Kullback–Leibler divergence. Since $D_{KL}(p \| q) \geq 0$, this establishes $\tilde{H}(\phi)$ as an upper bound for $H(p)$, where equality is obtained when $p = q$. We have therefore obtained a differentiable "soft entropy" loss (w.r.t. $q$), which is an upper bound on the sample entropy $H(p)$. Hence, we can indirectly minimize $H(p)$ by minimizing $\tilde{H}(\phi)$, treating the histogram probabilities of $p$ as constants for gradient computation. However, we note that while $q_j$ is additive over the training data and the symbol sequence, $\log(q_j)$ is not. This prevents the use of mini-batch gradient descent on $\tilde{H}(\phi)$, which can be an issue for large-scale learning problems. In this case, we can instead re-define the soft entropy $\tilde{H}(\phi)$ as $H(q, p)$. As before, $\tilde{H}(\phi) \to H(p)$ as $\sigma \to \infty$, but $\tilde{H}(\phi)$ ceases to be an upper bound for $H(p)$. The benefit is that now $\tilde{H}(\phi)$ can be decomposed as

$$\tilde{H}(\phi) := H(q, p) = - \sum_{j=1}^{L} q_j \log p_j = - \sum_{i=1}^{N} \sum_{l=1}^{m} \sum_{j=1}^{L} \frac{1}{mN} \phi_j(\bar{z}_i^{(l)}) \log p_j, \qquad (8)$$

such that we get an additive loss over the samples $x_i \in \mathcal{X}$ and the components $l \in [m]$.

Soft-to-hard deterministic annealing. Our soft assignment scheme gives us differentiable approximations $\tilde{F}$ and $\tilde{H}(\phi)$ of the discretized network $\hat{F}$ and the sample entropy $H(p)$, respectively. However, our objective is to learn network parameters $W$ that minimize (6) when using the encoder and decoder with hard assignments, such that we obtain a compressible symbol stream $E(z)$ which we can compress using, e.g., arithmetic coding [40].
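The soft assignment of Eq. (7) and the additive soft entropy of Eq. (8) described above can be sketched together (names are ours; we batch all m columns at once and hold p constant, as the text prescribes):

```python
import numpy as np

def soft_assign(Z_cols, C, sigma):
    """Soft assignments phi (Eq. 7) for all columns of Z at once.

    Z_cols: (d/m, m) matrix whose columns are the points; C: (d/m, L) centers.
    Returns an (m, L) matrix whose rows are the soft assignments.
    """
    dist = np.linalg.norm(Z_cols[:, :, None] - C[:, None, :], axis=0)  # (m, L)
    logits = -sigma * dist
    logits -= logits.max(axis=1, keepdims=True)      # numerically stable softmax
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)          # each row sums to 1

def soft_entropy_loss(phi, p):
    """Additive soft entropy H(q, p) of Eq. (8), treating p as a constant."""
    q = phi.mean(0)                                  # soft histogram q_j
    return -np.sum(q * np.log(p + 1e-12))            # small epsilon for stability
```

As sigma grows, each row of `phi` approaches a one-hot vector and the loss approaches the sample entropy H(p).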
To this end, we anneal $\sigma$ from some initial value $\sigma_0$ to infinity during training, such that the soft approximation gradually becomes a better approximation of the final hard quantization we will use. Choosing the annealing schedule is crucial, as annealing too slowly may allow the network to invert the soft assignments (resulting in large weights), while annealing too fast leads to vanishing gradients too early, thereby preventing learning. In practice, one can either parametrize $\sigma$ as a function of the iteration, or tie it to an auxiliary target such as the difference between the network losses incurred by soft quantization and hard quantization (see Section 4 for details). For a simple initialization of $\sigma_0$ and the centers $\mathcal{C}$, we can sample the centers from the set $\mathcal{Z} := \{\bar{z}_i^{(l)} \mid i \in [N], l \in [m]\}$ and then cluster $\mathcal{Z}$ by minimizing the cluster energy $\sum_{\bar{z} \in \mathcal{Z}} \|\bar{z} - \tilde{Q}(\bar{z})\|^2$ using SGD.

4 Image Compression

We now show how we can use our framework to realize a simple image compression system. For the architecture, we use a variant of the convolutional autoencoder proposed recently in [30] (see Appendix A.1 for details). We note that while we use the architecture of [30], we train it using our soft-to-hard entropy minimization method, which differs significantly from their approach, see below. Our goal is to learn a compressible representation of the features in the bottleneck of the autoencoder. Because we do not expect the features from different bottleneck channels to be identically distributed, we model each channel's distribution with a different histogram and entropy loss, adding each entropy term to the total loss using the same $\beta$ parameter. To encode a channel into symbols, we separate the channel matrix into a sequence of $p_w \times p_h$-dimensional patches. These patches (vectorized) form the columns of $Z \in \mathbb{R}^{d/m \times m}$, where $m = d/(p_w p_h)$, such that $Z$ contains $m$ $(p_w p_h)$-dimensional points.
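The patch extraction just described can be sketched as follows (a hypothetical helper of ours; it assumes the channel height and width are divisible by $p_h$ and $p_w$):

```python
import numpy as np

def channel_to_points(channel, pw, ph):
    """Split an (H, W) bottleneck channel into ph x pw patches; each vectorized
    patch becomes one column of Z, so Z has shape (pw*ph, m), m = H*W/(pw*ph)."""
    H, W = channel.shape
    patches = channel.reshape(H // ph, ph, W // pw, pw).transpose(0, 2, 1, 3)
    return patches.reshape(-1, ph * pw).T
```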
Having $p_h$ or $p_w$ greater than one allows symbols to capture local correlations in the bottleneck, which is desirable since we model the symbols as i.i.d. random variables for entropy coding. At test time, the symbol encoder $E$ then determines the symbols in the channel by performing a nearest neighbor assignment over a set of $L$ centers $\mathcal{C} \subset \mathbb{R}^{p_w p_h}$, resulting in $\hat{Z}$, as described above. During training we instead use the soft quantized $\tilde{Z}$, also w.r.t. the centers $\mathcal{C}$.

Figure 1: Top: MS-SSIM as a function of rate for SHVQ (ours), BPG, JPEG 2000, and JPEG, for each data set (ImageNET100, B100, Urban100, Kodak). Bottom: A visual example from the Kodak data set along with rate / MS-SSIM / SSIM / PSNR: SHVQ (ours) 0.20bpp / 0.91 / 0.69 / 23.88dB; BPG 0.20bpp / 0.90 / 0.67 / 24.19dB; JPEG 2000 0.20bpp / 0.88 / 0.63 / 23.01dB; JPEG 0.22bpp / 0.77 / 0.48 / 19.77dB.

We trained different models using Adam [17], see Appendix A.2. Our training set is composed similarly to that described in [4]. We used a subset of 90,000 images from ImageNET [9], which we downsampled by a factor 0.7 and trained on crops of 128 × 128 pixels, with a batch size of 15. To estimate the probability distribution $p$ for optimizing (8), we maintain a histogram over 5,000 images, which we update every 10 iterations with the images from the current batch. Details about other hyperparameters can be found in Appendix A.2. The training of our autoencoder network takes place in two stages, where we move from an identity function in the bottleneck to hard quantization. In the first stage, we train the autoencoder without any quantization.
Similar to [30], we gradually unfreeze the channels in the bottleneck during training (this gives a slight improvement over learning all channels jointly from the start). This yields an efficient weight initialization and enables us to then initialize σ0 and C as described above. In the second stage, we minimize (6), jointly learning network weights and quantization levels. We anneal σ by letting the gap between soft and hard quantization error go to zero as the number of iterations t goes to infinity. Let eS = ∥F̃(x) − x∥² be the soft error and eH = ∥F̂(x) − x∥² be the hard error. With gap(t) = eH − eS, we can write the error between the actual and the desired gap as eG(t) = gap(t) − T/(T + t) · gap(0), such that the gap is halved after T iterations. We update σ according to σ(t + 1) = σ(t) + KG · eG(t), where σ(t) denotes σ at iteration t. Fig. 3 in Appendix A.4 shows the evolution of the gap, soft loss, and hard loss as σ grows during training. We observed that both vector quantization and entropy loss lead to higher compression rates at a given reconstruction MSE compared to scalar quantization and training without entropy loss, respectively (see Appendix A.3 for details). Evaluation. To evaluate the image compression performance of our Soft-to-Hard Vector Quantization Autoencoder (SHVQ) method we use four datasets, namely Kodak [2], B100 [31], Urban100 [14], and ImageNET100 (100 randomly selected images from ImageNET [25]), and three standard quality measures, namely peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [37], and multi-scale SSIM (MS-SSIM), see Appendix A.5 for details. We compare our SHVQ with the standard JPEG, JPEG 2000, and BPG [1], focusing on compression rates < 1 bit per pixel (bpp) (i.e., the regime where traditional integral transform-based compression algorithms are most challenged). As shown in Fig. 1, for high compression rates (< 0.4 bpp), our SHVQ outperforms JPEG and JPEG 2000 in terms of MS-SSIM and is competitive with BPG.
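The gap-based annealing controller described above, σ(t + 1) = σ(t) + KG · eG(t), can be sketched as follows; the values of KG and T are illustrative placeholders, not the paper's hyperparameters.

```python
def update_sigma(sigma, e_soft, e_hard, gap0, t, T=10_000, K_G=1e-3):
    """One controller step: drive the soft/hard error gap toward the
    decaying target gap0 * T/(T + t), so the gap is halved after T
    iterations. K_G and T here are assumed example values."""
    gap = e_hard - e_soft
    e_G = gap - gap0 * T / (T + t)
    return sigma + K_G * e_G

# If the current gap exceeds the target, sigma is pushed upward,
# making the soft quantization harder.
s = update_sigma(1.0, e_soft=0.5, e_hard=1.0, gap0=0.5, t=10_000)
```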
A similar trend can be observed for SSIM (see Fig. 4 in Appendix A.6 for plots of SSIM and PSNR as a function of bpp). SHVQ performs best on ImageNET100 and is most challenged on Kodak when compared with JPEG 2000. Visually, SHVQ-compressed images have fewer artifacts than those compressed by JPEG 2000 (see Fig. 1, and Fig. 5–12 in Appendix A.7). Related methods and discussion. JPEG 2000 [29] uses wavelet-based transformations and adaptive EBCOT coding. BPG [1], based on a subset of the HEVC video compression standard, is the current state of the art for image compression. It uses context-adaptive binary arithmetic coding (CABAC) [21]. The recent works of [30, 5] also showed competitive performance with JPEG 2000. While we use the architecture of [30], there are stark differences between the works, summarized in the inset table:

                   | SHVQ (ours)              | Theis et al. [30]
Quantization       | vector quantization      | rounding to integers
Backpropagation    | grad. of soft relaxation | grad. of identity mapping
Entropy estimation | (soft) histogram         | Gaussian scale mixtures
Training material  | ImageNET                 | high quality Flickr images
Operating points   | single model             | ensemble

Table 1: Accuracies and compression factors for different DNN compression techniques, using a 32-layer ResNet on CIFAR-10. FT. denotes fine-tuning, I.C. denotes index coding, and H.C. and A.C. denote Huffman and arithmetic coding, respectively. The pruning-based results are from [6].

Method                                                       | Acc. [%] | Comp. ratio
Original model                                               | 92.6     | 1.00
Pruning + FT. + index coding + H. coding [12]                | 92.6     | 4.52
Pruning + FT. + k-means + FT. + I.C. + H.C. [11]             | 92.6     | 18.25
Pruning + FT. + Hessian-weighted k-means + FT. + I.C. + H.C. | 92.7     | 20.51
Pruning + FT. + uniform quantization + FT. + I.C. + H.C.     | 92.7     | 22.17
Pruning + FT. + iterative ECSQ + FT. + I.C. + H.C.           | 92.7     | 21.01
Soft-to-hard annealing + FT. + H. coding (ours)              | 92.1     | 19.15
Soft-to-hard annealing + FT. + A. coding (ours)              | 92.1     | 20.15
The work of [5] builds a deep model using multiple generalized divisive normalization (GDN) layers and their inverses (IGDN), which are specialized layers designed to capture local joint statistics of natural images. Furthermore, they model marginals for entropy estimation using linear splines and also use CABAC [21] coding. Concurrent to our work, the method of [16] builds on the architecture proposed in [33], and shows that impressive performance in terms of the MS-SSIM metric can be obtained by incorporating it into the optimization (instead of just minimizing the MSE). In contrast to the domain-specific techniques adopted by these state-of-the-art methods, our framework for learning compressible representations can realize a competitive image compression system using only a convolutional autoencoder and simple entropy coding. 5 DNN Compression For DNN compression, we investigate the ResNet [13] architecture for image classification. We adopt the same setting as [6] and consider a 32-layer architecture trained for CIFAR-10 [18]. As in [6], our goal is to learn a compressible representation for all 464,154 trainable parameters of the model. We concatenate the parameters into a vector W ∈ R^464,154 and employ scalar quantization (m = d), such that Z^T = z = W. We started from the pre-trained original model, which obtains a 92.6% accuracy on the test set. We implemented the entropy minimization by using L = 75 centers and chose β = 0.1 such that the converged entropy would give a compression factor ≈ 20, i.e., giving ≈ 32/20 = 1.6 bits per weight. The training was performed with the same learning parameters as the original model was trained with (SGD with momentum 0.9). The annealing schedule used was a simple exponential one, σ(t + 1) = 1.001 · σ(t) with σ(0) = 0.4. After 4 epochs of training, when σ(t) has increased by a factor ≈ 20, we switched to hard assignments and continued fine-tuning at a 10× lower learning rate.
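As a quick sanity check on the exponential schedule σ(t + 1) = 1.001 · σ(t), one can compute how many iterations a 20× increase takes (roughly 3,000, i.e., on the order of a few epochs):

```python
import math

def iters_to_grow(factor=20.0, rate=1.001):
    """Iterations of sigma(t+1) = rate * sigma(t) needed to grow sigma
    by the given factor."""
    return math.ceil(math.log(factor) / math.log(rate))

n = iters_to_grow()  # roughly 3,000 iterations for a 20x increase
```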
Adhering to the benchmark of [6, 12, 11], we obtain the compression factor by dividing the bit cost of storing the uncompressed weights as floats (464,154 × 32 bits) by the total encoding cost of the compressed weights (i.e., L × 32 bits for the centers plus the size of the compressed index stream). Our compressible model achieves a comparable test accuracy of 92.1% while compressing the DNN by a factor 19.15 with Huffman and 20.15 using arithmetic coding. Table 1 compares our results with state-of-the-art approaches reported by [6]. We note that while the top methods from the literature also achieve accuracies above 92% and compression factors above 20×, they employ a considerable amount of hand-designed steps, such as pruning, retraining, various types of weight clustering, special encoding of the sparse weight matrices into an index-difference based format, and then finally entropy coding. (Footnote 2: We switch to hard assignments since we can get large gradients for weights that are equally close to two centers as Q̃ converges to hard nearest neighbor assignments. One could also employ simple gradient clipping.) In contrast, we directly minimize the entropy of the weights in the training, obtaining a highly compressible representation using standard entropy coding. In Fig. 13 in Appendix A.8, we show how the sample entropy H(p) decays and the index histograms develop during training, as the network learns to condense most of the weights to a couple of centers when optimizing (6). In contrast, the methods of [12, 11, 6] manually impose 0 as the most frequent center by pruning ≈ 80% of the network weights. We note that the recent work of [34] also manages to tackle the problem in a single training procedure, using the minimum description length principle. In contrast to our framework, they take a Bayesian perspective and rely on a parametric assumption on the symbol distribution.
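The compression-factor accounting described above can be written out directly; the 1.6 bits per weight below is the average coding rate targeted in the text, used here as an assumed size for the encoded index stream.

```python
def dnn_compression_factor(num_weights, num_centers, bits_per_weight):
    """Uncompressed float storage divided by codebook plus encoded
    index stream, following the accounting described in the text."""
    uncompressed_bits = num_weights * 32
    compressed_bits = num_centers * 32 + num_weights * bits_per_weight
    return uncompressed_bits / compressed_bits

# 464,154 weights, L = 75 centers, ~1.6 bits per weight on average
# (an assumed coding rate) give a factor close to 20x.
factor = dnn_compression_factor(464_154, 75, 1.6)
```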
6 Conclusions In this paper we proposed a unified framework for end-to-end learning of compressed representations for deep architectures. By training with a soft-to-hard annealing scheme, gradually transitioning from a soft relaxation of the sample entropy and network discretization process to the actual non-differentiable quantization process, we manage to optimize the rate-distortion trade-off between the original network loss and the entropy. Our framework can elegantly capture diverse compression tasks, obtaining results competitive with the state of the art for both image compression and DNN compression. The simplicity of our approach opens up various directions for future work, since our framework can be easily adapted to other tasks where a compressible representation is desired. Acknowledgments This work was supported by the EU's Horizon 2020 programme under grant agreement No 687757 – REPLICATE, by NVIDIA Corporation through the Academic Hardware Grant, by ETH Zurich, and by Armasuisse. References [1] BPG Image format. https://bellard.org/bpg/. [2] Kodak PhotoCD dataset. http://r0k.us/graphics/kodak/. [3] Eugene L Allgower and Kurt Georg. Numerical continuation methods: an introduction, volume 13. Springer Science & Business Media, 2012. [4] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. arXiv preprint arXiv:1607.05006, 2016. [5] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016. [6] Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Towards the limit of network quantization. arXiv preprint arXiv:1612.01543, 2016. [7] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123–3131, 2015. [8] Thomas M Cover and Joy A Thomas.
Elements of information theory. John Wiley & Sons, 2012. [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. [10] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, 2017. [11] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. [12] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. [14] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197–5206, 2015. [15] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016. [16] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. arXiv preprint arXiv:1703.10114, 2017. [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. [18] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. [19] Alex Krizhevsky and Geoffrey E Hinton.
Using very deep autoencoders for content-based image retrieval. In ESANN, 2011. [20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [21] Detlev Marpe, Heiko Schwarz, and Thomas Wiegand. Context-based adaptive binary arithmetic coding in the h. 264/avc video compression standard. IEEE Transactions on circuits and systems for video technology, 13(7):620–636, 2003. [22] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Int’l Conf. Computer Vision, volume 2, pages 416–423, July 2001. [23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016. [24] Kenneth Rose, Eitan Gurewitz, and Geoffrey C Fox. Vector quantization by deterministic annealing. IEEE Transactions on Information theory, 38(4):1249–1257, 1992. [25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. [26] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016. [27] Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszar, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? 
arXiv preprint arXiv:1609.07009, 2016. [28] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. [29] David S. Taubman and Michael W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Norwell, MA, USA, 2001. [30] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszar. Lossy image compression with compressive autoencoders. In ICLR 2017, 2017. [31] Radu Timofte, Vincent De Smet, and Luc Van Gool. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution, pages 111–126. Springer International Publishing, Cham, 2015. [32] George Toderici, Sean M O’Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085, 2015. [33] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016. [34] Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017. [35] Gregory K Wallace. The JPEG still picture compression standard. IEEE transactions on consumer electronics, 38(1):xviii–xxxiv, 1992. [36] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems Computers, 2003, volume 2, pages 1398–1402 Vol.2, Nov 2003. [37] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
[38] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016. [39] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. [40] Ian H. Witten, Radford M. Neal, and John G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30(6):520–540, June 1987. [41] Paul Wohlhart, Martin Kostinger, Michael Donoser, Peter M. Roth, and Horst Bischof. Optimizing 1-nearest prototype classifiers. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2013. [42] Eyal Yair, Kenneth Zeger, and Allen Gersho. Competitive learning and soft competition for vector quantizer design. IEEE transactions on Signal Processing, 40(2):294–309, 1992. [43] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM Katrina Ligett Caltech and Hebrew University Seth Neel University of Pennsylvania Aaron Roth University of Pennsylvania Bo Waggoner University of Pennsylvania Zhiwei Steven Wu Microsoft Research Abstract Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed in practical settings, it may often be that there is instead a fixed accuracy requirement for a given computation and the data analyst would like to maximize the privacy of the computation subject to the accuracy constraint. This raises the question of how to find and run a maximally private empirical risk minimizer subject to a given accuracy requirement. We propose a general “noise reduction” framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to “search” the space of privacy levels to find the empirically strongest one that meets the accuracy constraint, and incurring only logarithmic overhead in the number of privacy levels searched. The privacy analysis of our algorithm leads naturally to a version of differential privacy where the privacy parameters are dependent on the data, which we term ex-post privacy, and which is related to the recently introduced notion of privacy odometers. We also give an ex-post privacy analysis of the classical AboveThreshold privacy tool, modifying it to allow for queries chosen depending on the database.
Finally, we apply our approach to two common objective functions, regularized linear and logistic regression, and empirically compare our noise reduction methods to (i) inverting the theoretical utility guarantees of standard private ERM algorithms and (ii) a stronger, empirical baseline based on binary search. (Footnote 1: A full version of this paper appears on the arXiv preprint site: https://arxiv.org/abs/1705.10829.) 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 1 Introduction and Related Work Differential Privacy [7, 8] enjoys over a decade of study as a theoretical construct, and a much more recent set of large-scale practical deployments, including by Google [10] and Apple [11]. As the large theoretical literature is put into practice, we start to see disconnects between assumptions implicit in the theory and the practical necessities of applications. In this paper we focus our attention on one such assumption in the domain of private empirical risk minimization (ERM): that the data analyst first chooses a privacy requirement, and then attempts to obtain the best accuracy guarantee (or empirical performance) that she can, given the chosen privacy constraint. Existing theory is tailored to this view: the data analyst can pick her privacy parameter ε via some exogenous process, and either plug it into a “utility theorem” to upper bound her accuracy loss, or simply deploy her algorithm and (privately) evaluate its performance. There is a rich and substantial literature on private convex ERM that takes this approach, weaving tight connections between standard mechanisms in differential privacy and standard tools for empirical risk minimization. These methods for private ERM include output and objective perturbation [5, 14, 18, 4], covariance perturbation [19], the exponential mechanism [16, 2], and stochastic gradient descent [2, 21, 12, 6, 20].
While these existing algorithms take a privacy-first perspective, in practice, product requirements may impose hard accuracy constraints, and privacy (while desirable) may not be the over-riding concern. In such situations, things are reversed: the data analyst first fixes an accuracy requirement, and then would like to find the smallest privacy parameter consistent with the accuracy constraint. Here, we find a gap between theory and practice. The only theoretically sound method available is to take a “utility theorem” for an existing private ERM algorithm and solve for the smallest value of ε (the differential privacy parameter)—and other parameter values that need to be set—consistent with her accuracy requirement, and then run the private ERM algorithm with the resulting ε. But because utility theorems tend to be worst-case bounds, this approach will generally be extremely conservative, leading to a much larger value of ε (and hence a much larger leakage of information) than is necessary for the problem at hand. Alternately, the analyst could attempt an empirical search for the smallest value of ε consistent with her accuracy goals. However, because this search is itself a data-dependent computation, it incurs the overhead of additional privacy loss. Furthermore, it is not a priori clear how to undertake such a search with nontrivial privacy guarantees for two reasons: first, the worst case could involve a very long search which reveals a large amount of information, and second, the selected privacy parameter is now itself a data-dependent quantity, and so it is not sensible to claim a “standard” guarantee of differential privacy for any finite value of ε ex-ante. In this paper, we provide a principled variant of this second approach, which attempts to empirically find the smallest value of ε consistent with an accuracy requirement. 
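To illustrate why inverting a utility theorem is conservative, consider a stylized worst-case bound of the form α ≤ C·d/(nε); solving for ε gives the smallest parameter the theorem certifies. The bound's form and the constant C here are purely illustrative stand-ins, not a theorem from this paper.

```python
def epsilon_from_utility_bound(alpha, n, d, C=1.0):
    """Invert a stylized worst-case bound alpha <= C * d / (n * eps).

    Illustrative only: real utility theorems have different shapes and
    constants, but the point stands that worst-case constants inflate
    the certified eps well beyond what a given dataset needs."""
    return C * d / (n * alpha)

eps = epsilon_from_utility_bound(alpha=0.05, n=10_000, d=100)
```

On any particular dataset, a far smaller ε may already meet the accuracy target, which is the gap the empirical search exploits.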
We give a meta-method that can be applied to several interesting classes of private learning algorithms and introduces very little privacy overhead as a result of the privacy-parameter search. Conceptually, our meta-method initially computes a very private hypothesis, and then gradually subtracts noise (making the computation less and less private) until a sufficient level of accuracy is achieved. One key technique that significantly reduces privacy loss over naive search is the use of correlated noise generated by the method of [15], which formalizes the conceptual idea of “subtracting” noise without incurring additional privacy overhead. In order to select the most private of these queries that meets the accuracy requirement, we introduce a natural modification of the now-classic AboveThreshold algorithm [8], which iteratively checks a sequence of queries on a dataset and privately releases the index of the first to approximately exceed some fixed threshold. Its privacy cost increases only logarithmically with the number of queries. We provide an analysis of AboveThreshold that holds even if the queries themselves are the result of differentially private computations, showing that if AboveThreshold terminates after t queries, one only pays the privacy costs of AboveThreshold plus the privacy cost of revealing those first t private queries. When combined with the above-mentioned correlated noise technique of [15], this gives an algorithm whose privacy loss is equal to that of the final hypothesis output – the previous ones coming “for free” – plus the privacy loss of AboveThreshold. Because the privacy guarantees achieved by this approach are not fixed a priori, but rather are a function of the data, we introduce and apply a new, corresponding privacy notion, which we term ex-post privacy, and which is closely related to the recently introduced notion of “privacy odometers” [17]. 
In Section 4, we empirically evaluate our noise reduction meta-method, which applies to any ERM technique which can be described as a post-processing of the Laplace mechanism. This includes both direct applications of the Laplace mechanism, like output perturbation [5]; and more sophisticated methods like covariance perturbation [19], which perturbs the covariance matrix of the data and then performs an optimization using the noisy data. Our experiments concentrate on ℓ2 regularized least-squares regression and ℓ2 regularized logistic regression, and we apply our noise reduction meta-method to both output perturbation and covariance perturbation. Our empirical results show that the active, ex-post privacy approach massively outperforms inverting the theory curve, and also improves on a baseline “ε-doubling” approach. 2 Privacy Background and Tools 2.1 Differential Privacy and Ex-Post Privacy Let X denote the data domain. We call two datasets D, D′ ∈ X∗ neighbors (written as D ∼ D′) if D can be derived from D′ by replacing a single data point with some other element of X. Definition 2.1 (Differential Privacy [7]). Fix ε ≥ 0. A randomized algorithm A : X∗ → O is ε-differentially private if for every pair of neighboring data sets D ∼ D′ ∈ X∗, and for every event S ⊆ O: Pr[A(D) ∈ S] ≤ exp(ε) · Pr[A(D′) ∈ S]. We call exp(ε) the privacy risk factor. It is possible to design computations that do not satisfy the differential privacy definition, but whose outputs are private to an extent that can be quantified after the computation halts. For example, consider an experiment that repeatedly runs an ε′-differentially private algorithm, until a stopping condition defined by the output of the algorithm itself is met.
This experiment does not satisfy ε-differential privacy for any fixed value of ε, since there is no fixed maximum number of rounds for which the experiment will run. (For a fixed number of rounds, a simple composition theorem, Theorem 2.5, shows that the ε-guarantees in a sequence of computations “add up.”) However, if ex post we see that the experiment has stopped after k rounds, the data can in some sense be assured an “ex-post privacy loss” of only kε′. Rogers et al. [17] initiated the study of privacy odometers, which formalize this idea. They study privacy composition when the data analyst can choose the privacy parameters of subsequent computations as a function of the outcomes of previous computations. We apply a related idea here, for a different purpose. Our goal is to design one-shot algorithms that always achieve a target accuracy but that may have variable privacy levels depending on their input. Definition 2.2. Given a randomized algorithm A : X∗ → O, define the ex-post privacy loss of A on outcome o to be Loss(o) = max_{D,D′ : D∼D′} log ( Pr[A(D) = o] / Pr[A(D′) = o] ). We refer to exp(Loss(o)) as the ex-post privacy risk factor. Definition 2.3 (Ex-Post Differential Privacy). Let E : O → (R≥0 ∪ {∞}) be a function on the outcome space of algorithm A : X∗ → O. Given an outcome o = A(D), we say that A satisfies E(o)-ex-post differential privacy if for all o ∈ O, Loss(o) ≤ E(o). Note that if E(o) ≤ ε for all o, A is ε-differentially private. Ex-post differential privacy has the same semantics as differential privacy, once the output of the mechanism is known: it bounds the log-likelihood ratio of the dataset being D vs. D′, which controls how an adversary with an arbitrary prior on the two cases can update her posterior. 2.2 Differential Privacy Tools Differentially private computations enjoy two nice properties: Theorem 2.4 (Post Processing [7]). Let A : X∗ → O be any ε-differentially private algorithm, and let f : O → O′ be any function.
Then the algorithm f ◦ A : X∗ → O′ is also ε-differentially private. Post-processing implies that, for example, every decision process based on the output of a differentially private algorithm is also differentially private. Theorem 2.5 (Composition [7]). Let A1 : X∗ → O, A2 : X∗ → O′ be algorithms that are ε1- and ε2-differentially private, respectively. Then the algorithm A : X∗ → O × O′ defined as A(x) = (A1(x), A2(x)) is (ε1 + ε2)-differentially private. The composition theorem holds even if the composition is adaptive; see [9] for details. The Laplace mechanism. The most basic subroutine we will use is the Laplace mechanism. The Laplace distribution centered at 0 with scale b is the distribution with probability density function Lap(z | b) = (1/(2b)) · e^(−|z|/b). We say X ∼ Lap(b) when X has Laplace distribution with scale b. Let f : X∗ → R^d be an arbitrary d-dimensional function. The ℓ1 sensitivity of f is defined to be ∆1(f) = max_{D∼D′} ∥f(D) − f(D′)∥1. The Laplace mechanism with parameter ε simply adds noise drawn independently from Lap(∆1(f)/ε) to each coordinate of f(x). (Footnote 2: If A’s output is from a continuous distribution rather than discrete, we abuse notation and write Pr[A(D) = o] to mean the probability density at output o.) Theorem 2.6 ([7]). The Laplace mechanism is ε-differentially private. Gradual private release. Koufogiannis et al. [15] study how to gradually release private data using the Laplace mechanism with an increasing sequence of ε values, with a privacy cost scaling only with the privacy of the marginal distribution on the least private release, rather than the sum of the privacy costs of independent releases. For intuition, the algorithm can be pictured as a continuous random walk starting at some private data v with the property that the marginal distribution at each point in time is Laplace centered at v, with variance increasing over time.
Releasing the value of the random walk at a fixed point in time gives a certain output distribution, for example, v̂, with a certain privacy guarantee ε. To produce v̂′ whose ex-ante distribution has higher variance (is more private), one can simply “fast forward” the random walk from a starting point of v̂ to reach v̂′; to produce a less private v̂′, one can “rewind.” The total privacy cost is max{ε, ε′} because, given the “least private” point (say v̂), all “more private” points can be derived as post-processings given by taking a random walk of a certain length starting at v̂. Note that were the Laplace random variables used for each release independent, the composition theorem would require summing the ε values of all releases. In our private algorithms, we will use their noise reduction mechanism as a building block to generate a list of private hypotheses θ1, . . . , θT with gradually increasing ε values. Importantly, releasing any prefix (θ1, . . . , θt) only incurs the privacy loss in θt. More formally: Algorithm 1 Noise Reduction [15]: NR(v, ∆, {εt}). Input: private vector v, sensitivity parameter ∆, list ε1 < ε2 < · · · < εT. Set v̂T := v + Lap(∆/εT) (drawn i.i.d. for each coordinate). For t = T−1, T−2, . . . , 1: with probability (εt/εt+1)², set v̂t := v̂t+1; else, set v̂t := v̂t+1 + Lap(∆/εt) (drawn i.i.d. for each coordinate). Return v̂1, . . . , v̂T. Theorem 2.7 ([15]). Let f have ℓ1 sensitivity ∆ and let v̂1, . . . , v̂T be the output of Algorithm 1 on v = f(D), ∆, and the increasing list ε1, . . . , εT. Then for any t, the algorithm which outputs the prefix (v̂1, . . . , v̂t) is εt-differentially private. 2.3 AboveThreshold with Private Queries Our high-level approach to our eventual ERM problem will be as follows: Generate a sequence of hypotheses θ1, . . .
, θT , each with increasing accuracy and decreasing privacy; then test their accuracy levels sequentially, outputting the first one whose accuracy is “good enough.” The classical AboveThreshold algorithm [8] takes in a dataset and a sequence of queries and privately outputs the index of the first query to exceed a given threshold (with some error due to noise). We would like to use AboveThreshold to perform these accuracy checks, but there is an important obstacle: for us, the “queries” themselves depend on the private data.3 A standard composition analysis would involve first privately publishing all the queries, then running AboveThreshold on these queries (which are now public). Intuitively, though, it would be much better to generate and publish the queries one at a time, until AboveThreshold halts, at which point one would not publish any more queries. The problem with analyzing this approach is that, a priori, we do not know when AboveThreshold will terminate; to address this, we analyze the ex-post privacy guarantee of the algorithm.4 Let us say that an algorithm M(D) = (f1, . . . , fT ) is (ε1, . . . , εT )-prefix-private if for each t, the function that runs M(D) and outputs just the prefix (f1, . . . , ft) is εt-differentially private. Lemma 2.8. Let M : X∗ → (X∗ → O)^T be a (ε1, . . . , εT )-prefix-private algorithm that returns T queries, and let each query output by M have ℓ1 sensitivity at most ∆. Then Algorithm 2 run on D, εA, W, ∆, and M is E-ex-post differentially private for E((t, ·)) = εA + εt for any t ∈ [T]. (Footnote 3: In fact, there are many applications beyond our own in which the sequence of queries input to AboveThreshold might be the result of some private prior computation on the data, and where we would like to release both the stopping index of AboveThreshold and the “query object.” In our case, the query objects will be parameterized by learned hypotheses θ1, . . . , θT .)
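Algorithm 1, the noise reduction mechanism used as a building block above, translates into a short NumPy sketch; this is an illustrative rendering, not the authors' implementation.

```python
import numpy as np

def noise_reduction(v, delta, eps_list, rng=None):
    """Sketch of Algorithm 1 (Koufogiannis et al. [15]): correlated
    Laplace releases at increasing eps, built backwards from the least
    private release so that any prefix costs only its largest eps."""
    rng = rng or np.random.default_rng(0)
    T = len(eps_list)
    out = [None] * T
    out[T - 1] = v + rng.laplace(0.0, delta / eps_list[-1], size=v.shape)
    for t in range(T - 2, -1, -1):
        if rng.random() < (eps_list[t] / eps_list[t + 1]) ** 2:
            out[t] = out[t + 1].copy()
        else:
            out[t] = out[t + 1] + rng.laplace(0.0, delta / eps_list[t],
                                              size=v.shape)
    return out  # out[0] is most private, out[-1] least private

releases = noise_reduction(np.zeros(3), delta=1.0, eps_list=[0.1, 0.5, 1.0])
```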
4 This result does not follow from a straightforward application of privacy odometers from [17], because the privacy analysis of algorithms like the noise reduction technique is not compositional.

Algorithm 2 InteractiveAboveThreshold: IAT(D, ε, W, ∆, M)
Input: Dataset D, privacy loss ε, threshold W, ℓ1 sensitivity ∆, algorithm M
  Let ˆW = W + Lap(2∆/ε)
  for each query t = 1, . . . , T do
    Query ft ← M(D)t
    if ft(D) + Lap(4∆/ε) ≥ ˆW then
      Output (t, ft); Halt.
  Output (T, ⊥).

The proof, which is a variant on the proof of privacy for AboveThreshold [8], appears in the full version, along with an accuracy theorem for IAT.

3 Noise-Reduction with Private ERM

In this section, we provide a general private ERM framework that allows us to approach the best privacy guarantee achievable on the data given a target excess risk goal. Throughout the section, we consider an input dataset D that consists of n row vectors X1, X2, . . . , Xn ∈ Rp and a column y ∈ Rn. We will assume that each ∥Xi∥1 ≤ 1 and |yi| ≤ 1. Let di = (Xi, yi) ∈ Rp+1 be the i-th data record. Let ℓ be a loss function such that for any hypothesis θ and any data point (Xi, yi) the loss is ℓ(θ, (Xi, yi)). Given an input dataset D and a regularization parameter λ, the goal is to minimize the following regularized empirical loss function over some feasible set C:

L(θ, D) = (1/n) Σ_{i=1}^n ℓ(θ, (Xi, yi)) + (λ/2) ∥θ∥²₂.

Let θ∗ = argmin_{θ∈C} L(θ, D). Given a target accuracy parameter α, we wish to privately compute a θp that satisfies L(θp, D) ≤ L(θ∗, D) + α, while achieving the best ex-post privacy guarantee. For simplicity, we will sometimes write L(θ) for L(θ, D).

One simple baseline approach is a "doubling method": Start with a small ε value, run an ε-differentially private algorithm to compute a hypothesis θ and use the Laplace mechanism to estimate the excess risk of θ; if the excess risk is lower than the target, output θ; otherwise double the value of ε and repeat the same process. (See the full version for details.)
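A minimal sketch of IAT (Algorithm 2) in Python. We assume the queries are supplied lazily as callables, so that no query past the halting index is ever generated or published; the function name and this interface are ours:

```python
import numpy as np

def interactive_above_threshold(queries, eps, W, delta, rng=None):
    """Sketch of Algorithm 2 (IAT). `queries` is an iterable of callables
    f_t, each returning a real-valued statistic of the private data with
    l1 sensitivity at most `delta`; it is consumed lazily."""
    rng = np.random.default_rng() if rng is None else rng
    W_hat = W + rng.laplace(scale=2 * delta / eps)     # noisy threshold
    t = 0
    for t, f_t in enumerate(queries, start=1):
        # noisy comparison against the noisy threshold
        if f_t() + rng.laplace(scale=4 * delta / eps) >= W_hat:
            return (t, f_t)                            # halt at first success
    return (t, None)                                   # bottom: no query passed
```

Passing a generator of query callables keeps the "generate one query at a time" behaviour described above.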
As a result, we pay for privacy loss for every hypothesis we compute and every excess risk we estimate. In comparison, our meta-method provides a more cost-effective way to select the privacy level. The algorithm takes a more refined set of privacy levels ε1 < . . . < εT as input and generates a sequence of hypotheses θ1, . . . , θT such that the generation of each θt is εt-private. Then it releases the hypotheses θt in order, halting as soon as a released hypothesis meets the accuracy goal. Importantly, there are two key components that reduce the privacy loss in our method:

1. We use Algorithm 1, the "noise reduction" method of [15], for generating the sequence of hypotheses: we first compute a very private and noisy θ1, and then obtain the subsequent hypotheses by gradually "de-noising" θ1. As a result, any prefix (θ1, . . . , θk) incurs a privacy loss of only εk (as opposed to ε1 + . . . + εk if the hypotheses were independent).

2. When evaluating the excess risk of each hypothesis, we use Algorithm 2, InteractiveAboveThreshold, to determine if its excess risk exceeds the target threshold. This incurs substantially less privacy loss than independently evaluating the excess risk of each hypothesis using the Laplace mechanism (and hence allows us to search a finer grid of values).

For the rest of this section, we will instantiate our method concretely for two ERM problems: ridge regression and logistic regression. In particular, our noise-reduction method is based on two private ERM algorithms: the recently introduced covariance perturbation technique [19] and the output perturbation method [5].

3.1 Covariance Perturbation for Ridge Regression

In ridge regression, we consider the squared loss function ℓ(θ, (Xi, yi)) = ½ (yi − ⟨θ, Xi⟩)², and hence the empirical loss over the dataset is

L(θ, D) = (1/2n) ∥y − Xθ∥²₂ + (λ/2) ∥θ∥²₂,

where X denotes the (n × p) matrix with row vectors X1, . . . , Xn and y = (y1, . . . , yn).
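As a concrete sketch, this objective can be privatized through its sufficient statistics X⊺X and X⊺y, anticipating the covariance perturbation method of Theorem 3.1 below; the projected-gradient solver and all names here are our own choices:

```python
import numpy as np

def covariance_perturbation_ridge(X, y, lam, eps, steps=2000, lr=0.05, rng=None):
    """Perturb X^T X and X^T y with per-entry Laplace noise (scale 4/eps,
    as in Theorem 3.1), then solve the noisy ridge objective over the ball
    C = {theta : ||theta||_2 <= sqrt(1/lam)} by projected gradient descent."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    A = X.T @ X + rng.laplace(scale=4.0 / eps, size=(p, p))
    A = (A + A.T) / 2.0                 # symmetrize (post-processing only)
    c = X.T @ y + rng.laplace(scale=4.0 / eps, size=p)
    radius = np.sqrt(1.0 / lam)
    theta = np.zeros(p)
    for _ in range(steps):
        # gradient of (1/2n)[theta^T A theta - 2<c, theta>] + (lam/2)||theta||^2
        grad = (A @ theta - c) / n + lam * theta
        theta -= lr * grad
        norm = np.linalg.norm(theta)
        if norm > radius:               # project back onto C
            theta *= radius / norm
    return theta
```

Any convex solver could replace the projected gradient loop; solving on the noisy statistics is pure post-processing, so privacy is unaffected.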
Since the optimal solution for the unconstrained problem has ℓ2 norm no more than √(1/λ) (see the full version for a proof), we will focus on optimizing θ over the constrained set C = {a ∈ Rp | ∥a∥2 ≤ √(1/λ)}, which will be useful for bounding the ℓ1 sensitivity of the empirical loss. Before we formally introduce the covariance perturbation algorithm due to [19], observe that the optimal solution θ∗ can be computed as

θ∗ = argmin_{θ∈C} L(θ, D) = argmin_{θ∈C} [θ⊺(X⊺X)θ − 2⟨X⊺y, θ⟩] / (2n) + (λ/2) ∥θ∥²₂.

In other words, θ∗ only depends on the private data through X⊺y and X⊺X. To compute a private hypothesis, the covariance perturbation method simply adds Laplace noise to each entry of X⊺y and X⊺X (the covariance matrix), and solves the optimization based on the noisy matrix and vector. The formal description of the algorithm and its guarantee are in Theorem 3.1. Our analysis differs from the one in [19] in that their paper considers the "local privacy" setting and adds Gaussian noise, whereas we use Laplace noise. The proof is deferred to the full version.

Theorem 3.1. Fix any ε > 0. For any input data set D, consider the mechanism M that computes

θp = argmin_{θ∈C} (1/2n) [θ⊺(X⊺X + B)θ − 2⟨X⊺y + b, θ⟩] + (λ/2) ∥θ∥²₂,

where B ∈ Rp×p and b ∈ Rp×1 are random Laplace matrices such that each entry of B and b is drawn from Lap(4/ε). Then M satisfies ε-differential privacy and the output θp satisfies

E_{B,b}[L(θp) − L(θ∗)] ≤ 4√2 (2√(p/λ) + p/λ) / (nε).

In our algorithm COVNR, we will apply the noise reduction method, Algorithm 1, to produce a sequence of noisy versions of the private data (X⊺X, X⊺y): (Z1, z1), . . . , (ZT, zT), one for each privacy level. Then for each (Zt, zt), we will compute the private hypothesis by solving the noisy version of the optimization problem in Equation (1). The full description of our algorithm COVNR is in Algorithm 3; it satisfies the following guarantee:

Theorem 3.2. The instantiation of COVNR(D, {ε1, . . .
, εT}, α, γ) outputs a hypothesis θp that with probability 1 − γ satisfies L(θp) − L(θ∗) ≤ α. Moreover, it is E-ex-post differentially private, where the privacy loss function E : (([T] ∪ {⊥}) × Rp) → (R≥0 ∪ {∞}) is defined as E((k, ·)) = ε0 + εk for any k ≠ ⊥, E((⊥, ·)) = ∞, and

ε0 = 16 (√(1/λ) + 1)² log(2T/γ) / (nα)

is the privacy loss incurred by IAT.

3.2 Output Perturbation for Logistic Regression

Next, we show how to combine the output perturbation method with noise reduction for the logistic regression problem.5 In this setting, the input data consists of n labeled examples (X1, y1), . . . , (Xn, yn), such that for each i, Xi ∈ Rp, ∥Xi∥1 ≤ 1, and yi ∈ {−1, 1}. The goal is to train a linear classifier given by a weight vector θ for the examples from the two classes. We consider the logistic loss function ℓ(θ, (Xi, yi)) = log(1 + exp(−yi θ⊺Xi)), and the empirical loss is

L(θ, D) = (1/n) Σ_{i=1}^n log(1 + exp(−yi θ⊺Xi)) + (λ/2) ∥θ∥²₂.

5 We study the logistic regression problem for concreteness. Our method works for any ERM problem with strongly convex loss functions.

Algorithm 3 Covariance Perturbation with Noise-Reduction: COVNR(D, {ε1, . . . , εT}, α, γ)
Input: private data set D = (X, y), accuracy parameter α, privacy levels ε1 < ε2 < . . . < εT, and failure probability γ
  Instantiate InteractiveAboveThreshold: A = IAT(D, ε0, −α/2, ∆, ·) with ε0 = 16∆ log(2T/γ)/α and ∆ = (√(1/λ) + 1)²/n
  Let C = {a ∈ Rp | ∥a∥2 ≤ √(1/λ)} and θ∗ = argmin_{θ∈C} L(θ)
  Compute noisy data: {Zt} = NR(X⊺X, 2, {ε1/2, . . . , εT/2}), {zt} = NR(X⊺y, 2, {ε1/2, . . . , εT/2})
  for t = 1, . . . , T do
    θt = argmin_{θ∈C} (1/2n) [θ⊺Ztθ − 2⟨zt, θ⟩] + (λ/2) ∥θ∥²₂   (1)
    Let ft(D) = L(θ∗, D) − L(θt, D); Query A with query ft to check accuracy
    if A returns (t, ft) then Output (t, θt)   ▷ Accurate hypothesis found.
  Output: (⊥, θ∗)

The output perturbation method simply adds Laplace noise to perturb each coordinate of the optimal solution θ∗. The following is the formal guarantee of output perturbation.
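Before the formal statement, output perturbation on the logistic objective above can be sketched as follows; plain gradient descent stands in for an exact solver, the noise scale r matches Theorem 3.3 below, and all names here are our own:

```python
import numpy as np

def output_perturbation_logreg(X, y, lam, eps, steps=3000, lr=0.5, rng=None):
    """Sketch of output perturbation: solve the regularized logistic ERM
    (here by plain gradient descent), then add i.i.d. Laplace noise of
    scale r = 2*sqrt(p)/(n*lam*eps) to each coordinate of the solution."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(steps):
        margins = -y * (X @ theta)
        s = 1.0 / (1.0 + np.exp(-margins))   # sigmoid(-y * x.theta)
        # gradient of (1/n) sum log(1+exp(-y x.theta)) + (lam/2)||theta||^2
        grad = -(X.T @ (y * s)) / n + lam * theta
        theta -= lr * grad
    r = 2.0 * np.sqrt(p) / (n * lam * eps)
    return theta + rng.laplace(scale=r, size=p)
```

For a large ε the added noise is negligible and the output essentially minimizes the regularized loss.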
Our analysis deviates slightly from the one in [5] since we are adding Laplace noise (see the full version).

Theorem 3.3. Fix any ε > 0. Let r = 2√p/(nλε). For any input dataset D, consider the mechanism M that first computes θ∗ = argmin_{θ∈Rp} L(θ), then outputs θp = θ∗ + b, where b is a random vector with its entries drawn i.i.d. from Lap(r). Then M satisfies ε-differential privacy, and θp has excess risk

E_b[L(θp) − L(θ∗)] ≤ 2√2 p/(nλε) + 4p²/(n²λε²).

Given the output perturbation method, we can simply apply the noise reduction method NR to the optimal hypothesis θ∗ to generate a sequence of noisy hypotheses. We will again use InteractiveAboveThreshold to check the excess risk of the hypotheses. The full algorithm OUTPUTNR follows the same structure as Algorithm 3, and we defer the formal description to the full version.

Theorem 3.4. The instantiation of OUTPUTNR(D, ε0, {ε1, . . . , εT}, α, γ) is E-ex-post differentially private and outputs a hypothesis θp that with probability 1 − γ satisfies L(θp) − L(θ∗) ≤ α, where the privacy loss function E : (([T] ∪ {⊥}) × Rp) → (R≥0 ∪ {∞}) is defined as E((k, ·)) = ε0 + εk for any k ≠ ⊥, E((⊥, ·)) = ∞, and

ε0 ≤ 32 log(2T/γ) √(2 log 2 / λ) / (nα)

is the privacy loss incurred by IAT.

Proof sketch of Theorems 3.2 and 3.4. The accuracy guarantees for both algorithms follow from an accuracy guarantee of the IAT algorithm (a variant on the standard AboveThreshold bound) and the fact that we output θ∗ if IAT identifies no accurate hypothesis. For the privacy guarantee, first note that any prefix of the noisy hypotheses θ1, . . . , θt satisfies εt-differential privacy because of our instantiation of the Laplace mechanism (see the full version for the ℓ1 sensitivity analysis) and the noise-reduction method NR. The ex-post privacy guarantee then follows directly from Lemma 2.8.

4 Experiments

To evaluate the methods described above, we conducted empirical evaluations in two settings.
We used ridge regression to predict (log) popularity of posts on Twitter in the dataset of [1], with p = 77 features and subsampled to n = 100,000 data points. Logistic regression was applied to classifying network events as innocent or malicious in the KDD-99 Cup dataset [13], with 38 features and subsampled to 100,000 points.

[Figure 1: Ex-post privacy loss ε plotted against the input α (excess error guarantee). Panels: (1a) linear (ridge) regression vs theory approach; (1b) regularized logistic regression vs theory approach; (1c) linear (ridge) regression vs DOUBLINGMETHOD; (1d) regularized logistic regression vs DOUBLINGMETHOD. (1a) and (1c), left, represent ridge regression on the Twitter dataset, where Noise Reduction and DOUBLINGMETHOD both use Covariance Perturbation. (1b) and (1d), right, represent logistic regression on the KDD-99 Cup dataset, where both Noise Reduction and DOUBLINGMETHOD use Output Perturbation. The top plots compare Noise Reduction to the "theory approach": running the algorithm once using the value of ε that guarantees the desired expected error via a utility theorem. The bottom plots compare to the DOUBLINGMETHOD baseline. Note the top plots are generous to the theory approach: the theory curves promise only expected error, whereas Noise Reduction promises a high-probability guarantee. Each point is an average of 80 trials (Twitter dataset) or 40 trials (KDD-99 dataset).]
Details of parameters and methods appear in the full version.6 In each case, we tested the algorithm's average ex-post privacy loss for a range of input accuracy goals α, fixing a modest failure probability γ = 0.1 (and we observed that excess risks were concentrated well below α/2, suggesting a pessimistic analysis). The results show our meta-method gives a large improvement over the "theory" approach of simply inverting utility theorems for private ERM algorithms. (In fact, the utility theorem for the popular private stochastic gradient descent algorithm does not even give meaningful guarantees for the ranges of parameters tested; one would need an order of magnitude more data points, and even then the privacy losses are enormous, perhaps due to loose constants in the analysis.)

To gauge the more modest improvement over DOUBLINGMETHOD, note that the variation in the privacy risk factor e^ε can still be very large; for instance, in the ridge regression setting of α = 0.05, Noise Reduction has e^ε ≈ 10.0 while DOUBLINGMETHOD has e^ε ≈ 495; at α = 0.075, the privacy risk factors are 4.65 and 56.6 respectively.

Interestingly, for our meta-method, the contribution to privacy loss from "testing" hypotheses (the InteractiveAboveThreshold technique) was significantly larger than that from "generating" them (NoiseReduction). One place where the InteractiveAboveThreshold analysis is loose is in using a theoretical bound on the maximum norm of any hypothesis to compute the sensitivity of queries. The actual norms of the hypotheses tested were significantly lower, which, if taken as guidance to the practitioner in advance, would drastically improve the privacy guarantee of both adaptive methods.

6 A full implementation of our algorithms appears at: https://github.com/steven7woo/Accuracy-First-Differential-Privacy

5 Future Directions

Throughout this paper, we focus on ε-differential privacy, instead of the weaker (ε, δ)-(approximate) differential privacy.
Part of the reason is that an analogue of Lemma 2.8 does not seem to hold for (ε, δ)-differentially private queries without further assumptions, as the necessity to union-bound over the δ “failure probability” that the privacy loss is bounded for each query can erase the ex-post gains. We leave obtaining similar results for approximate differential privacy as an open problem. More generally, we wish to extend our ex-post privacy framework to approximate differential privacy, or to the stronger notion of concentrated differential privacy [3]. Such results will allow us to obtain ex-post privacy guarantees for a much broader class of algorithms. 9 References [1] The AMA Team at Laboratoire d’Informatique de Grenoble. Buzz prediction in online social media, 2017. [2] Raef Bassily, Adam D. Smith, and Abhradeep Thakurta. Private empirical risk minimization, revisited. CoRR, abs/1405.7085, 2014. [3] Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography - 14th International Conference, TCC 2016-B, Beijing, China, October 31 - November 3, 2016, Proceedings, Part I, pages 635–658, 2016. [4] Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 289–296, 2008. [5] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011. [6] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy and statistical minimax rates. In 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton 2013, Allerton Park & Retreat Center, Monticello, IL, USA, October 2-4, 2013, page 1592, 2013. 
[7] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[8] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
[9] Cynthia Dwork, Guy N. Rothblum, and Salil Vadhan. Boosting and differential privacy. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 51–60. IEEE, 2010.
[10] Giulia Fanti, Vasyl Pihur, and Úlfar Erlingsson. Building a RAPPOR with the unknown: Privacy-preserving learning of associations and data dictionaries. Proceedings on Privacy Enhancing Technologies (PoPETS), issue 3, 2016.
[11] Andy Greenberg. Apple's 'differential privacy' is about collecting your data—but not your data. Wired Magazine, 2016.
[12] Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning. In COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, pages 24.1–24.34, 2012.
[13] KDD'99. KDD Cup 1999 data, 1999.
[14] Daniel Kifer, Adam D. Smith, and Abhradeep Thakurta. Private convex optimization for empirical risk minimization with applications to high-dimensional regression. In COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, pages 25.1–25.40, 2012.
[15] Fragkiskos Koufogiannis, Shuo Han, and George J. Pappas. Gradual release of sensitive data under differential privacy. Journal of Privacy and Confidentiality, 7, 2017.
[16] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Foundations of Computer Science, 2007. FOCS'07. 48th Annual IEEE Symposium on, pages 94–103. IEEE, 2007.
[17] Ryan M. Rogers, Aaron Roth, Jonathan Ullman, and Salil Vadhan. Privacy odometers and filters: Pay-as-you-go composition. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I.
Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1921–1929. Curran Associates, Inc., 2016.
[18] Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, and Nina Taft. Learning in a large function space: Privacy-preserving mechanisms for SVM learning. CoRR, abs/0911.5708, 2009.
[19] Adam Smith, Jalaj Upadhyay, and Abhradeep Thakurta. Is interaction necessary for distributed private learning? IEEE Symposium on Security and Privacy, 2017.
[20] Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with differentially private updates. In IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013, Austin, TX, USA, December 3-5, 2013, pages 245–248, 2013.
[21] Oliver Williams and Frank McSherry. Probabilistic inference and differential privacy. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 2451–2459, 2010.
Distral: Robust Multitask Reinforcement Learning

Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu
DeepMind, London, UK

Abstract

Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a "distilled" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.

1 Introduction

Deep Reinforcement Learning is an emerging subfield of Reinforcement Learning (RL) that relies on deep neural networks as function approximators that can scale RL algorithms to complex and rich environments.
One key work in this direction was the introduction of DQN [21], which is able to play many games in the ATARI suite of games [1] at above-human performance. However, the agent requires a fairly large amount of time and data to learn effective policies, and the learning process itself can be quite unstable, even with innovations introduced to improve wall clock time, data efficiency, and robustness by changing the learning algorithm [27, 33] or by improving the optimizer [20, 29]. A different approach was introduced by [12, 19, 14], whereby data efficiency is improved by training additional auxiliary tasks jointly with the RL task.

With the success of deep RL has come interest in increasingly complex tasks and a shift in focus towards scenarios in which a single agent must solve multiple related problems, either simultaneously or sequentially. Due to the large computational cost, making progress in this direction requires robust algorithms which do not rely on task-specific algorithmic design or extensive hyperparameter tuning. Intuitively, solutions to related tasks should facilitate learning since the tasks share common structure, and thus one would expect that individual tasks should require less data or achieve a higher asymptotic performance. Indeed this intuition has long been pursued in the multitask and transfer-learning literature [2, 31, 34, 5].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Somewhat counter-intuitively, however, the above is often not the result encountered in practice, particularly in the RL domain [26, 23]. Instead, the multitask and transfer learning scenarios are frequently found to pose additional challenges to existing methods. Instead of making learning easier, it is often observed that training on multiple tasks can negatively affect performance on the individual tasks, and additional techniques have to be developed to counteract this [26, 23].
It is likely that gradients from other tasks behave as noise, interfering with learning, or, in another extreme, one of the tasks might dominate the others. In this paper we develop an approach for multitask and transfer RL that allows effective sharing of behavioral structure across tasks, giving rise to several algorithmic instantiations. In addition to some instructive illustrations on a grid world domain, we provide a detailed analysis of the resulting algorithms via comparisons to A3C [20] baselines on a variety of tasks in a first-person, visually-rich, 3D environment. We find that the Distral algorithms learn faster and achieve better asymptotic performance, are significantly more robust to hyperparameter settings, and learn more stably than multitask A3C baselines.

2 Distral: Distill and Transfer Learning

[Figure 1: Illustration of the Distral framework. A shared distilled policy π0 sits at the centre; each task policy π1, . . . , π4 is regularised towards π0, and π0 is obtained by distilling the task policies.]

We propose a framework for simultaneous reinforcement learning of multiple tasks which we call Distral. Figure 1 provides a high level illustration involving four tasks. The method is founded on the notion of a shared policy (shown in the centre) which distills (in the sense of Bucila and Hinton et al. [4, 11]) common behaviours or representations from task-specific policies [26, 23]. Crucially, the distilled policy is then used to guide task-specific policies via regularization using a Kullback-Leibler (KL) divergence. The effect is akin to a shaping reward which can, for instance, overcome random walk exploration bottlenecks. In this way, knowledge gained in one task is distilled into the shared policy, then transferred to other tasks.

2.1 Mathematical framework

In this section we describe the mathematical framework underlying Distral.
A multitask RL setting is considered where there are n tasks; for simplicity we assume an infinite horizon with discount factor γ.1 We will assume that the action space A and state space S are the same across tasks; we use a ∈ A to denote actions and s ∈ S to denote states. The transition dynamics pi(s′|s, a) and reward functions Ri(a, s) are different for each task i. Let πi be task-specific stochastic policies. The dynamics and policies give rise to joint distributions over state and action trajectories starting from some initial state, which we will also denote by πi by an abuse of notation.

Our mechanism for linking the policy learning across tasks is via optimising an objective which consists of expected returns and policy regularizations. We designate π0 to be the distilled policy which we believe will capture agent behaviour that is common across the tasks. We regularize each task policy πi towards the distilled policy using γ-discounted KL divergences E_{πi}[Σ_{t≥0} γ^t log(πi(at|st)/π0(at|st))]. In addition, we also use a γ-discounted entropy regularization to further encourage exploration. The resulting objective to be maximized is:

J(π0, {πi}_{i=1}^n) = Σ_i E_{πi}[ Σ_{t≥0} γ^t Ri(at, st) − cKL γ^t log(πi(at|st)/π0(at|st)) − cEnt γ^t log πi(at|st) ]
                    = Σ_i E_{πi}[ Σ_{t≥0} γ^t Ri(at, st) + γ^t (α/β) log π0(at|st) − (γ^t/β) log πi(at|st) ]   (1)

where cKL, cEnt ≥ 0 are scalar factors which determine the strengths of the KL and entropy regularizations, α = cKL/(cKL + cEnt), and β = 1/(cKL + cEnt). The log π0(at|st) term can be thought of as a reward shaping term which encourages actions which have high probability under the distilled policy, while the entropy term −log πi(at|st) encourages exploration. In the above we used the same regularization costs cKL, cEnt for all tasks.

1 The method can be easily generalized to other scenarios like undiscounted finite horizon.
It is easy to generalize to using task-specific costs; this can be important if tasks differ substantially in their reward scales and amounts of exploration needed, although it does introduce additional hyperparameters that are expensive to optimize.

2.2 Soft Q Learning and Distillation

A range of optimization techniques in the literature can be applied to maximize the above objective, which we will expand on below. To build up intuition for how the method operates, we will start in the simple case of a tabular representation and an alternating maximization procedure which optimizes over πi given π0 and over π0 given πi. With π0 fixed, (1) decomposes into separate maximization problems for each task, each an entropy regularized expected return with redefined (regularized) reward R′i(a, s) := Ri(a, s) + (α/β) log π0(a|s). It can be optimized using soft Q learning [10], aka G learning [7], which are based on deriving the following "softened" Bellman updates for the state and action values (see also [25, 28, 22]):

Vi(st) = (1/β) log Σ_{at} π0(at|st)^α exp[β Qi(at, st)]   (2)
Qi(at, st) = Ri(at, st) + γ Σ_{st+1} pi(st+1|st, at) Vi(st+1)   (3)

The Bellman updates are softened in the sense that the usual max operator over actions for the state values Vi is replaced by a soft-max at inverse temperature β, which hardens into a max operator as β → ∞. The optimal policy πi is then a Boltzmann policy at inverse temperature β:

πi(at|st) = π0(at|st)^α e^{β Qi(at,st) − β Vi(st)} = π0(at|st)^α e^{β Ai(at,st)}   (4)

where Ai(a, s) = Qi(a, s) − Vi(s) is a softened advantage function. Note that the softened state values Vi(s) act as the log normalizers in the above. The distilled policy π0 can be interpreted as a policy prior, a perspective well-known in the literature on RL as probabilistic inference [32, 13, 25, 7]. However, unlike in past works, it is raised to a power of α ≤ 1. This softens the effect of the prior π0 on πi, and is the result of the additional entropy regularization beyond the KL divergence.
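A tabular NumPy sketch of the softened Bellman updates (2)–(3) and the resulting Boltzmann policy (4) for a single task; the array layout and function name are our own:

```python
import numpy as np

def soft_q_iteration(R, P, pi0, alpha, beta, gamma, iters=200):
    """Softened Bellman updates (2)-(3) for one task of a tabular MDP.
    R: (S, A) rewards; P: (S, A, S) transitions; pi0: (S, A) distilled
    policy. Returns the Boltzmann policy (4) plus V and Q."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V                        # update (3)
        # update (2): pi0^alpha-weighted soft-max over actions,
        # computed with a max-shift for numerical stability
        m = (beta * Q).max(axis=1, keepdims=True)
        V = (m[:, 0] + np.log(
            np.sum(pi0 ** alpha * np.exp(beta * Q - m), axis=1))) / beta
    advantage = Q - V[:, None]                       # softened advantage A_i
    pi = pi0 ** alpha * np.exp(beta * advantage)     # Boltzmann policy (4)
    return pi, V, Q
```

Because Vi is exactly the log normalizer of (4), each row of the returned policy sums to one; increasing β hardens the soft-max toward a greedy policy.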
Also unlike past works, we will learn π0 instead of hand-picking it (typically as a uniform distribution over actions). In particular, notice that the only terms in (1) depending on π0 are:

(α/β) Σ_i E_{πi}[ Σ_{t≥0} γ^t log π0(at|st) ]   (5)

which is simply a log likelihood for fitting a model π0 to a mixture of γ-discounted state-action distributions, one for each task i under policy πi. A maximum likelihood (ML) estimator can be derived from state-action visitation frequencies under roll-outs in each task, with the optimal ML solution given by the mixture of state-conditional action distributions. Alternatively, in the non-tabular case, stochastic gradient ascent can be employed, which leads precisely to an update which distills the task policies πi into π0 [4, 11, 26, 23]. Note however that in our case the distillation step is derived naturally from a KL regularized objective on the policies. Another difference from [26, 23] and from prior works on the use of distillation in deep learning [4, 11] is that the distilled policy is "fed back in" to improve the task policies when they are next optimized, and serves as a conduit through which common and transferable knowledge is shared across the task policies.

It is worthwhile here to take pause and ponder the effect of the extra entropy regularization. First suppose that there is no extra entropy regularisation, i.e. α = 1, and consider the simple scenario of only n = 1 task. Then (5) is maximized when the distilled policy π0 and the task policy π1 are equal, and the KL regularization term is 0. Thus the objective reduces to an unregularized expected return, and so the task policy π1 converges to a greedy policy which locally maximizes expected returns. Another way to view this line of reasoning is that the alternating maximization scheme is equivalent to trust-region methods like natural gradient or TRPO [24, 29] which use a KL ball centred at the previous policy, and which are understood to converge to greedy policies.
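In the tabular case, the ML solution of (5) described above is just a normalized, discounted state-action visit count; a small sketch (the trajectory layout and fallback for unvisited states are our own simplification):

```python
import numpy as np

def distill_policy(trajectories, S, A, gamma):
    """ML fit of the distilled policy pi_0 from rollouts of all task
    policies: accumulate gamma-discounted state-action visit counts,
    then normalize per state. `trajectories` is a list (over tasks and
    episodes) of [(s, a), ...] index pairs."""
    counts = np.zeros((S, A))
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            counts[s, a] += gamma ** t
    totals = counts.sum(axis=1, keepdims=True)
    # states never visited fall back to a uniform distribution
    pi0 = np.where(totals > 0, counts / np.maximum(totals, 1e-12), 1.0 / A)
    return pi0
```

Each row of the result is the mixture of state-conditional action distributions across the task rollouts, which is the optimal ML solution mentioned above.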
If α < 1, there is an additional entropy term in (1). So even with π0 = π1 and KL(π1 ∥ π0) = 0, the objective (1) will no longer be maximized by greedy policies. Instead (1) reduces to entropy regularized expected returns with entropy regularization factor β′ = β/(1 − α) = 1/cEnt, so that the optimal policy is of the Boltzmann form with inverse temperature β′ [25, 7, 28, 22]. In conclusion, by including the extra entropy term, we can guarantee that the task policy will not turn greedy, and we can control the amount of exploration by adjusting cEnt appropriately.

This additional control over the amount of exploration is essential when there is more than one task. To see this, imagine a scenario where one of the tasks is easier and is solved first, while other tasks are harder with much sparser rewards. Without the entropy term, and before rewards in other tasks are encountered, both the distilled policy and all the task policies can converge to the one that solves the easy task. Further, because this policy is greedy, it can insufficiently explore the other tasks to even encounter rewards, leading to sub-optimal behaviour. For single-task RL, the use of entropy regularization was recently popularized by Mnih et al. [20] to counter premature convergence to greedy policies, which can be particularly severe when doing policy gradient learning. This carries over to our multitask scenario as well, and is the reason for the additional entropy regularization.

2.3 Policy Gradient and a Better Parameterization

The above method alternates between maximization of the distilled policy π0 and the task policies πi, and is reminiscent of the EM algorithm [6] for learning latent variable models, with π0 playing the role of parameters, while πi plays the role of the posterior distributions for the latent variables.
Going beyond the tabular case, when both π0 and πi are parameterized by, say, deep networks, such an alternating maximization procedure can be slower than simply optimizing (1) with respect to task and distilled policies jointly by stochastic gradient ascent. In this case the gradient update for πi is simply given by policy gradient with an entropic regularization [20, 28], and can be carried out within a framework like advantage actor-critic [20].

A simple parameterization of policies would be to use a separate network for each task policy πi, and another one for the distilled policy π0. An alternative parameterization, which we argue can result in faster transfer, can be obtained by considering the form of the optimal Boltzmann policy (4). Specifically, consider parameterizing the distilled policy using a network with parameters θ0,

$$\hat{\pi}_0(a_t|s_t) = \frac{\exp(h_{\theta_0}(a_t|s_t))}{\sum_{a'} \exp(h_{\theta_0}(a'|s_t))} \qquad (6)$$

and estimating the soft advantages² using another network with parameters θi:

$$\hat{A}_i(a_t|s_t) = f_{\theta_i}(a_t|s_t) - \frac{1}{\beta} \log \sum_a \hat{\pi}_0^{\alpha}(a|s_t) \exp\big(\beta f_{\theta_i}(a|s_t)\big) \qquad (7)$$

We used hat notation to denote parameterized approximators of the corresponding quantities. The policy for task i then becomes parameterized as

$$\hat{\pi}_i(a_t|s_t) = \hat{\pi}_0^{\alpha}(a_t|s_t) \exp\big(\beta \hat{A}_i(a_t|s_t)\big) = \frac{\exp\big(\alpha h_{\theta_0}(a_t|s_t) + \beta f_{\theta_i}(a_t|s_t)\big)}{\sum_{a'} \exp\big(\alpha h_{\theta_0}(a'|s_t) + \beta f_{\theta_i}(a'|s_t)\big)} \qquad (8)$$

This can be seen as a two-column architecture for the policy, with one column being the distilled policy, and the other being the adjustment required to specialize to task i.

Given the parameterization above, we can now derive the policy gradients. The gradient w.r.t. the task-specific parameters θi is given by the standard policy gradient theorem [30],

$$\nabla_{\theta_i} J = \mathbb{E}_{\hat{\pi}_i}\!\left[ \Big( \sum_{t\ge 1} \nabla_{\theta_i} \log \hat{\pi}_i(a_t|s_t) \Big) \Big( \sum_{u\ge 1} \gamma^u R^{\mathrm{reg}}_i(a_u, s_u) \Big) \right] = \mathbb{E}_{\hat{\pi}_i}\!\left[ \sum_{t\ge 1} \nabla_{\theta_i} \log \hat{\pi}_i(a_t|s_t) \Big( \sum_{u\ge t} \gamma^u R^{\mathrm{reg}}_i(a_u, s_u) \Big) \right] \qquad (9)$$

where $R^{\mathrm{reg}}_i(a, s) = R_i(a, s) + \frac{\alpha}{\beta} \log \hat{\pi}_0(a|s) - \frac{1}{\beta} \log \hat{\pi}_i(a|s)$ is the regularized reward.
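The two-column parameterization (6)-(8) can be sketched with plain arrays standing in for the networks h_θ0 and f_θi (names are ours); the sketch also verifies that the soft-advantage form and the combined-logits form of (8) coincide, and includes the discounted regularized-return targets of (9):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def task_policy(h0, fi, alpha, beta):
    """Two-column policy of Eq. (8): logits are alpha*h0 + beta*fi,
    i.e. the shared distilled column plus a task-specific adjustment."""
    return softmax(alpha * h0 + beta * fi)

def task_policy_via_advantage(h0, fi, alpha, beta):
    """The same policy written as pi0^alpha * exp(beta * A_i), with the
    soft advantage A_i of Eq. (7) (f minus a log-partition term)."""
    pi0 = softmax(h0)
    A = fi - np.log(np.sum(pi0 ** alpha * np.exp(beta * fi))) / beta
    return pi0 ** alpha * np.exp(beta * A)

def regularized_returns(rewards, log_pi0, log_pii, alpha, beta, gamma):
    """Per-timestep targets sum_{u>=t} gamma^u * R_reg(a_u, s_u) from Eq. (9),
    with R_reg = R + (alpha/beta) log pi0 - (1/beta) log pii.
    Note the paper discounts from the episode start (gamma^u, not gamma^(u-t))."""
    r_reg = rewards + (alpha / beta) * log_pi0 - (1.0 / beta) * log_pii
    g = np.zeros_like(r_reg)
    acc = 0.0
    for t in reversed(range(len(r_reg))):
        acc += gamma ** t * r_reg[t]
        g[t] = acc
    return g
```

The two routes to the task policy agree exactly, since the log-partition term in (7) cancels the normalizer of the combined-logits softmax.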
Note that the partial derivative of the entropy in the integrand has expectation E_{π̂i}[∇_{θi} log π̂i(a_t|s_t)] = 0 because of the log-derivative trick. If a value baseline is estimated, it can be subtracted from the regularized returns as a control variate.

²In practice, we do not actually use these as advantage estimates. Instead we use (8) to parameterize a policy which is optimized by policy gradients.

Figure 2: Depiction of the different algorithms and baselines. On the left are two of the Distral algorithms and on the right are the three A3C baselines. Entropy is drawn in brackets as it is optional and only used for KL+ent 2col and KL+ent 1col.

The gradient w.r.t. θ0 is more interesting:

$$\nabla_{\theta_0} J = \sum_i \mathbb{E}_{\hat{\pi}_i}\!\left[ \sum_{t\ge 1} \nabla_{\theta_0} \log \hat{\pi}_i(a_t|s_t) \Big( \sum_{u\ge t} \gamma^u R^{\mathrm{reg}}_i(a_u, s_u) \Big) \right] + \frac{\alpha}{\beta} \sum_i \mathbb{E}_{\hat{\pi}_i}\!\left[ \sum_{t\ge 1} \gamma^t \sum_{a'_t} \big( \hat{\pi}_i(a'_t|s_t) - \hat{\pi}_0(a'_t|s_t) \big) \nabla_{\theta_0} h_{\theta_0}(a'_t|s_t) \right] \qquad (10)$$

Note that the first term is the same as for the policy gradient of θi. The second term tries to match the probabilities under the task policy π̂i and under the distilled policy π̂0. The second term would not be present if we simply parameterized πi using the same architecture π̂i but did not use a KL regularization for the policy. The presence of the KL regularization gets the distilled policy to learn to be the centroid of all task policies, in the sense that the second term would be zero if π̂0(a′_t|s_t) = (1/n) Σi π̂i(a′_t|s_t), and it helps to transfer information quickly across tasks and to new tasks.

2.4 Other Related Works

The centroid and star-shaped structure of Distral is reminiscent of ADMM [3], elastic-averaging SGD [35] and hierarchical Bayes [9].
A crucial difference, though, is that while ADMM, EASGD and hierarchical Bayes operate in the space of parameters, in Distral the distilled policy learns to be the centroid in the space of policies. We argue that this is semantically more meaningful, and may contribute to the observed robustness of Distral by stabilizing learning. Indeed, in our experiments we find that the absence of the KL regularization significantly affects the stability of the algorithm. Another related line of work is guided policy search [17, 18, 15, 16]. These works focus on single tasks, and use trajectory optimization (corresponding to task policies here) to guide the learning of a policy (corresponding to the distilled policy π0 here). This contrasts with Distral, which is a multitask setting, where a learnt π0 is used to facilitate transfer by sharing common task-agnostic behaviours, and the main outcome of the approach is instead the task policies. Our approach is also reminiscent of recent work on option learning [8], but with a few important differences. We focus on using deep neural networks as flexible function approximators, and applied our method to rich 3D visual environments, while Fox et al. [8] considered only the tabular case. We argue for the importance of an additional entropy regularization besides the KL regularization. This leads to an interesting twist in the mathematical framework allowing us to separately control the amounts of transfer and of exploration. On the other hand, Fox et al. [8] focused on the interesting problem of learning multiple options (distilled policies here). Their approach treats the assignment of tasks to options as a clustering problem, which is not easily extended beyond the tabular case.

3 Algorithms

The framework we just described allows for a number of possible algorithmic instantiations, arising as combinations of objectives, algorithms and architectures, which we describe below and summarize in Table 1 and Figure 2.
KL divergence vs entropy regularization: With α = 0, we get a purely entropy-regularized objective which does not couple and transfer across tasks [20, 28]. With α = 1, we get a purely KL-regularized objective, which does couple and transfer across tasks, but might prematurely stop exploration if the distilled and task policies become similar and greedy. With 0 < α < 1 we get both terms.

Table 1: The seven different algorithms evaluated in our experiments. Each column describes a different architecture, with the column headings indicating the logits for the task policies. The rows define the relative amount of KL vs entropy regularization loss, with the first row comprising the A3C baselines (no KL loss).

|           | h_θ0(a|s)     | f_θi(a|s)   | αh_θ0(a|s) + βf_θi(a|s) |
| α = 0     | multitask A3C | A3C         | A3C 2col                |
| α = 1     |               | KL 1col     | KL 2col                 |
| 0 < α < 1 |               | KL+ent 1col | KL+ent 2col             |

Alternating vs joint optimization: We have the option of jointly optimizing both the distilled policy and the task policies, or optimizing one while keeping the other fixed. Alternating optimization leads to algorithms that resemble policy distillation/actor-mimic [23, 26], but are iterative in nature, with the distilled policy feeding back into task policy optimization. Also, soft Q learning can be applied to each task, instead of policy gradients. While alternating optimization can be slower, evidence from policy distillation/actor-mimic indicates it might learn more stably, particularly for tasks which differ significantly.

Separate vs two-column parameterization: Finally, the task policy can be parameterized to use the distilled policy (8) or not. If using the distilled policy, behaviour distilled into the distilled policy is "immediately available" to the task policies, so transfer can be faster. However, if the process of transfer occurs too quickly, it might interfere with effective exploration of individual tasks.
From this spectrum of possibilities we consider four concrete instances which differ in the underlying network architecture and distillation loss, identified in Table 1. In addition, we compare against three A3C baselines. In initial experiments we explored two variants of A3C: the original method [20] and the variant of Schulman et al. [28] which uses entropy-regularized returns. We did not find significant differences between the two variants in our setting, and chose to report only the original A3C results for clarity in Section 4. Further algorithmic details are provided in the Appendix.

4 Experiments

We demonstrate the various algorithms derived from our framework, first using alternating optimization with soft Q learning and policy distillation on a set of simple grid world tasks. Then all seven algorithms are evaluated on three sets of challenging RL tasks in partially observable 3D environments.

4.1 Two room grid world

To give better intuition for the role of the distilled behaviour policy, we considered a set of tasks in a grid world domain with two rooms connected by a corridor (see Figure 3) [8]. Each task is distinguished by a different randomly chosen goal location, and each MDP state consists of the map location, the previous action and the previous reward. A Distral agent is trained using only the KL regularization and an optimization algorithm which alternates between soft Q learning and policy distillation. Each soft Q learning iteration learns using a rollout of length 10. To determine the benefit of the distilled policy, we compared the Distral agent to one which soft Q learns a separate policy for each task. The learning curves are shown in Figure 3 (left). We see that the Distral agent is able to learn significantly faster than single-task agents.
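As an illustration of the inner soft Q learning step, a tabular sketch consistent with the log-partition value in (7) (the update form and hyperparameters are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def soft_value(q_row, pi0_row, alpha, beta):
    """Soft state value matching the log-partition in Eq. (7):
    V(s) = (1/beta) * log sum_a pi0(a|s)^alpha * exp(beta * Q(s, a))."""
    m = np.max(beta * q_row)  # shift for numerical stability
    return (m + np.log(np.sum(pi0_row ** alpha * np.exp(beta * q_row - m)))) / beta

def soft_q_backup(q, s, a, r, s_next, pi0, alpha, beta, gamma, lr=0.5):
    """One tabular soft Q-learning update toward the target r + gamma * V(s_next)."""
    target = r + gamma * soft_value(q[s_next], pi0[s_next], alpha, beta)
    q[s, a] += lr * (target - q[s, a])
    return q
```

For large β the soft value approaches max_a Q(s, a) and the backup recovers ordinary Q-learning; the π0^α weighting is what couples each task's learning to the distilled policy.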
Figure 3 (right) visualizes the distilled policy (probability of next action given position and previous action), demonstrating that the agent has learnt a policy which guides the agent to move consistently in the same direction through the corridor in order to reach the other room. This allows the agent to reach the other room faster and helps exploration if the agent is shown new test tasks. In Fox et al. [8] two separate options are learnt, while here we learn a single distilled policy which conditions on more past information (previous action and reward).

Figure 3: Left: Learning curves on the two room grid world. The Distral agent (blue) learns faster, converges towards better policies, and demonstrates more stable learning overall. Center: Examples of tasks. Green is the goal position, which is uniformly sampled for each task. The starting position is uniformly sampled at the beginning of each episode. Right: Depiction of the learned distilled policy π0 in the corridor, conditioned on the previous action being left/right and no previous reward. Sizes of arrows depict probabilities of actions. Note that up/down actions have negligible probabilities. The model learns to preserve the direction of travel in the corridor.

4.2 Complex Tasks

To assess Distral under more challenging conditions, we use a complex first-person partially observed 3D environment with a variety of visually-rich RL tasks. All agents were implemented with a distributed Python/TensorFlow code base, using 32 workers for each task and learnt using asynchronous RMSProp. The network columns contain convolutional layers and an LSTM and are uniform across experiments and algorithms. We tried three values for the entropy cost β and three learning rates ε. Four runs were used for each hyperparameter setting.
All other hyperparameters were fixed to the single-task A3C defaults and, for the KL+ent 1col and KL+ent 2col algorithms, α was fixed at 0.5.

Mazes: In the first experiment, each of n = 8 tasks is a different maze containing randomly placed rewards and a goal object. Figure 4.A1 shows the learning curves for all seven algorithms. Each curve is produced by averaging over all 4 runs and 8 tasks, and selecting the best settings for β and ε (as measured by the area under the learning curves). The Distral algorithms learn faster and achieve better final performance than all three A3C baselines. The two-column algorithms learn faster than the corresponding single-column ones. The Distral algorithms without entropy learn faster but achieve lower final scores than those with entropy, which we believe is due to insufficient exploration towards the end of learning. We found that both multitask A3C and two-column A3C can learn well on some runs, but are generally unstable: some runs did not learn well, while others may learn initially and then suffer degradation later. We believe this is due to negative interference across tasks, which does not happen for Distral algorithms. The stability of Distral algorithms also increases their robustness to hyperparameter selection. Figure 4.A2 shows the final achieved average returns for all 36 runs of each algorithm, sorted in decreasing order. We see that Distral algorithms have a significantly higher proportion of runs achieving good returns, with KL+ent_2col being the most robust.

Distral algorithms, along with multitask A3C, use a distilled or common policy which can be applied on all tasks. Panels B1 and B2 in Figure 4 summarize the performance of the distilled policies. Algorithms that use two columns (KL_2col and KL+ent_2col) obtain the best performance, because in those cases policy gradients are also directly propagated through the distilled policy.
Moreover, panel B2 reveals that Distral algorithms exhibit greater stability compared to traditional multitask A3C. We also observe that the KL algorithms have better-performing distilled policies than the KL+ent ones. We believe this is because the additional entropy regularization allows task policies to diverge more substantially from the distilled policy. This suggests that annealing the entropy term or increasing the KL term throughout training could improve the distilled policy performance, if that is of interest.

Navigation: We experimented with n = 4 navigation and memory tasks. In contrast to the previous experiment, these tasks use random maps which are procedurally generated on every episode. The first task features reward objects which are randomly placed in a maze, and the second task requires returning these objects to the agent's start position. The third task has a single goal object which must be repeatedly found from different start positions, and in the fourth task doors are randomly opened and closed to force novel path-finding. Hence, these tasks are more involved than the previous navigation tasks. Panels C1 and C2 of Figure 4 summarize the results.

Figure 4: Panels A1, C1, D1 show task-specific policy performance (averaged across all the tasks) for the maze, navigation and laser-tag tasks, respectively. The x-axes are total numbers of training environment steps per task. Panel B1 shows the mean scores obtained with the distilled policies (A3C has no distilled policy, so it is represented by the performance of an untrained network). For each algorithm, results for the best set of hyperparameters (based on the area under the curve) are reported. The bold line is the average over 4 runs, and the colored area is the average standard deviation over the tasks. Panels A2, B2, C2, D2 show the corresponding final performances for the 36 runs of each algorithm, ordered from best to worst (9 hyperparameter settings and 4 runs).
We again observe that Distral algorithms yield better final results while having greater stability (Figure 4.C2). The top-performing algorithms are, again, the two-column Distral algorithms (KL_2col and KL+ent_2col).

Laser-tag: In the final set of experiments, we use n = 8 laser-tag levels. These tasks require the agent to learn to tag bots controlled by a built-in AI, and differ substantially: fixed versus procedurally generated maps, fixed versus procedural bots, and complexity of agent behaviour (e.g. learning to jump in some tasks). Corresponding to this greater diversity, we observe (see panels D1 and D2 of Figure 4) that the best baseline is the A3C algorithm trained independently on each task. Among the Distral algorithms, the single-column variants perform better, especially initially, as they are able to learn task-specific features separately. We again observe the early plateauing phenomenon for algorithms that do not possess an additional entropy term. While not significantly better than the A3C baseline on these tasks, the Distral algorithms clearly outperform multitask A3C.

Discussion: Considering the 3 different sets of complex 3D experiments, we argue that the Distral algorithms are promising solutions to the multitask deep RL problem. Distral can perform significantly better than A3C baselines when tasks have sufficient commonalities for transfer (maze and navigation), while still being competitive with A3C when less transfer is possible. In terms of specific algorithmic proposals, the additional entropy regularization is important in encouraging continued exploration, while two-column architectures generally allow faster transfer (but can affect performance when there is little transfer, due to task interference). The computational costs of Distral algorithms are at most twice those of the corresponding A3C algorithms, as each agent needs to process two network columns instead of one.
However, in practice the runtimes are only slightly longer than for A3C, because the cost of simulating environments is significant and is the same whether single- or multitask.

5 Conclusion

We have proposed Distral, a general framework for distilling and transferring common behaviours in multitask reinforcement learning. In experiments we showed that the resulting algorithms learn quicker, produce better final performances, and are more stable and robust to hyperparameter settings. We have found that Distral significantly outperforms the standard way of using shared neural network parameters for multitask or transfer reinforcement learning. Two ideas in Distral might be worth re-emphasizing here. We observe that distillation arises naturally as one half of an optimization procedure when using KL divergences to regularize the output of task models towards a distilled model. The other half corresponds to using the distilled model as a regularizer for training the task models. Another observation is that parameters in deep networks do not typically have any semantic meaning by themselves, so instead of regularizing networks in parameter space, it is worthwhile to consider regularizing networks in a more semantically meaningful space, e.g. that of policies.

We would like to end with a discussion of the various difficulties faced by multitask RL methods. The first is that of positive transfer: when there are commonalities across tasks, how does the method achieve this transfer and lead to better learning speed and better performance on new tasks in the same family? This is the core aim of Distral, where the commonalities are exhibited in terms of shared common behaviours. The second is that of task interference, where the differences among tasks adversely affect agent performance by interfering with exploration and the optimization of network parameters. This is the core aim of the policy distillation and mimic works [26, 23].
As in these works, Distral also learns a distilled policy, but this is further used to regularise the task policies to facilitate transfer. This means that Distral algorithms can be affected by task interference. It would be interesting to explore ways to allow Distral (or other methods) to automatically balance between increasing task transfer and reducing task interference. Other possible directions of future research include: combining Distral with techniques which use auxiliary losses [12, 19, 14], exploring the use of multiple distilled policies or latent variables in the distilled policy to allow for more diversity of behaviours, exploring settings for continual learning where tasks are encountered sequentially, and exploring ways to adaptively adjust the KL and entropy costs to better control the amounts of transfer and exploration. Finally, theoretical analyses of Distral and other KL regularization frameworks for deep RL would help better our understanding of these recent methods.

References

[1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013.

[2] Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. In JMLR: Workshop on Unsupervised and Transfer Learning, 2012.

[3] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), January 2011.

[4] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proc. of the Int'l Conference on Knowledge Discovery and Data Mining (KDD), 2006.

[5] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, July 1997.

[6] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm.
Journal of the Royal Statistical Society, Series B (Methodological), pages 1–38, 1977.

[7] R. Fox, A. Pakman, and N. Tishby. Taming the noise in reinforcement learning via soft updates. In Uncertainty in Artificial Intelligence (UAI), 2016.

[8] Roy Fox, Michal Moshkovitz, and Naftali Tishby. Principled option learning in Markov decision processes. In European Workshop on Reinforcement Learning (EWRL), 2016.

[9] Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis, volume 2. Chapman & Hall/CRC, Boca Raton, FL, USA, 2014.

[10] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.

[11] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. NIPS Deep Learning Workshop, 2014.

[12] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. Int'l Conference on Learning Representations (ICLR), 2016.

[13] Hilbert J. Kappen, Vicenç Gómez, and Manfred Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159–182, 2012.

[14] Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. Association for the Advancement of Artificial Intelligence (AAAI), 2017.

[15] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.

[16] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.

[17] Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems, pages 207–215, 2013.
[18] Sergey Levine and Vladlen Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning, pages 829–837, 2014.

[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. Int'l Conference on Learning Representations (ICLR), 2016.

[20] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Int'l Conference on Machine Learning (ICML), 2016.

[21] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, February 2015.

[22] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. arXiv:1702.08892, 2017.

[23] Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. In Int'l Conference on Learning Representations (ICLR), 2016.

[24] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. Int'l Conference on Learning Representations (ICLR), 2014.

[25] Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In Robotics: Science and Systems (RSS), 2012.
[26] Andrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In Int'l Conference on Learning Representations (ICLR), 2016.

[27] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. CoRR, abs/1511.05952, 2015.

[28] J. Schulman, P. Abbeel, and X. Chen. Equivalence between policy gradients and soft Q-learning. arXiv:1704.06440, 2017.

[29] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Int'l Conference on Machine Learning (ICML), 2015.

[30] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Adv. in Neural Information Processing Systems (NIPS), volume 99, pages 1057–1063, 1999.

[31] Matthew E. Taylor and Peter Stone. An introduction to inter-task transfer for reinforcement learning. AI Magazine, 32(1):15–34, 2011.

[32] Marc Toussaint, Stefan Harmeling, and Amos Storkey. Probabilistic inference for solving (PO)MDPs. Technical Report EDI-INF-RR-0934, University of Edinburgh, School of Informatics, 2006.

[33] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. Association for the Advancement of Artificial Intelligence (AAAI), 2016.

[34] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Adv. in Neural Information Processing Systems (NIPS), 2014.

[35] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Adv. in Neural Information Processing Systems (NIPS), 2015.
Triple Generative Adversarial Nets

Chongxuan Li, Kun Xu, Jun Zhu∗, Bo Zhang
Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab of Intell. Tech. & Sys., Center for Bio-Inspired Computing Research, Tsinghua University, Beijing, 100084, China
{licx14, xu-k16}@mails.tsinghua.edu.cn, {dcszj, dcszb}@mail.tsinghua.edu.cn

Abstract

Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e., the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels, and it only estimates the data without considering the labels. To address the problems, we present the triple generative adversarial net (Triple-GAN), which consists of three players: a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.

1 Introduction

Deep generative models (DGMs) can capture the underlying distributions of the data and synthesize new samples. Recently, significant progress has been made on generating realistic images based on Generative Adversarial Nets (GANs) [7, 3, 22].
GAN is formulated as a two-player game, where the generator G takes a random noise z as input and produces a sample G(z) in the data space, while the discriminator D identifies whether a certain sample comes from the true data distribution p(x) or from the generator. Both G and D are parameterized as deep neural networks and the training procedure is to solve a minimax problem:

$$\min_G \max_D U(D, G) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],$$

where pz(z) is a simple distribution (e.g., uniform or normal) and U(·) denotes the utilities. Given a generator and the defined distribution pg, the optimal discriminator is D(x) = p(x)/(pg(x) + p(x)) in the nonparametric setting, and the global equilibrium of this game is achieved if and only if pg(x) = p(x) [7], which is desired in terms of image generation.

GANs and DGMs in general have also proven effective in semi-supervised learning (SSL) [11], while retaining the generative capability. Under the same two-player game framework, Cat-GAN [26] generalizes GANs with a categorical discriminative network and an objective function that minimizes the conditional entropy of the predictions given the real data while maximizing the conditional entropy of the predictions given the generated samples.

∗J. Zhu is the corresponding author.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: An illustration of Triple-GAN (best viewed in color). The utilities of D, C and G are colored in blue, green and yellow respectively, with "R" denoting rejection, "A" denoting acceptance and "CE" denoting the cross-entropy loss for supervised learning. The "A"s and "R"s are the adversarial losses and the "CE"s are unbiased regularizations that ensure the consistency between pg, pc and p, which are the distributions defined by the generator, classifier and true data generating process, respectively.
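The minimax utility U(D, G) and the nonparametric optimal discriminator above can be checked numerically with a small sketch (function names are ours):

```python
import numpy as np

def gan_utility(d_real, d_fake):
    """Monte Carlo estimate of U(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    given discriminator outputs on real samples and on generated samples."""
    return np.mean(np.log(d_real)) + np.mean(np.log1p(-d_fake))

def optimal_discriminator(p_x, pg_x):
    """Nonparametric optimum D*(x) = p(x) / (p(x) + pg(x)) for a fixed G [7]."""
    return p_x / (p_x + pg_x)
```

At the global equilibrium pg = p, the optimal discriminator outputs 1/2 everywhere and the utility attains its minimax value of −log 4.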
Odena [20] and Salimans et al. [25] augment the categorical discriminator with one more class, corresponding to the fake data generated by the generator. There are two main problems in existing GANs for SSL: (1) the generator and the discriminator (i.e., the classifier) may not be optimal at the same time [25]; and (2) the generator cannot control the semantics of the generated samples.

For the first problem, as an instance, Salimans et al. [25] propose two alternative training objectives that work well for either classification or image generation in SSL, but not both. The feature matching objective works well in classification but fails to generate indistinguishable samples (see Sec. 5.2 for examples), while the other objective, minibatch discrimination, is good at realistic image generation but cannot predict labels accurately. These phenomena are not analyzed deeply in [25], and here we argue that they essentially arise from the two-player formulation, where a single discriminator has to play two incompatible roles: identifying fake samples and predicting labels. Specifically, assume that G is optimal, i.e., p(x) = pg(x), and consider a sample x ∼ pg(x). On one hand, as a discriminator, the optimal D should identify x as a fake sample with non-zero probability (see [7] for the proof). On the other hand, as a classifier, the optimal D should always predict the correct class of x confidently since x ∼ p(x). This is a conflict, as D has two incompatible convergence points, which indicates that G and D may not be optimal at the same time. Moreover, the issue remains even given an imperfect G, as long as pg(x) and p(x) overlap, as in most real cases. Given a sample from the overlapped area, the two roles of D still compete by treating the sample differently, leading to a poor classifier². Namely, the learning capacity of existing two-player models is restricted, which should be addressed to advance current SSL results.
For the second problem, disentangling meaningful physical factors, such as the object category, from the latent representations with limited supervision is of general interest [30, 2]. However, to the best of our knowledge, none of the existing GANs can learn disentangled representations in SSL, though some work [22, 5, 21] can learn such representations given full labels. Again, we believe the problem is caused by the two-player formulation. Specifically, the discriminators in [26, 25] take a single data instance instead of a data-label pair as input, and the label information is totally ignored when judging whether a sample is real or fake. Therefore, the generators receive no learning signal regarding the label information from the discriminators, and hence such models cannot control the semantics of the generated samples, which is not satisfactory. To address these problems, we present Triple-GAN, a flexible game-theoretical framework for both classification and class-conditional image generation in SSL, where we have a partially labeled dataset. We introduce two conditional networks, a classifier and a generator, to generate pseudo labels given real data and pseudo data given real labels, respectively. To jointly judge the quality of the samples from the conditional networks, we define a single discriminator network whose sole role is distinguishing whether a data-label pair is from the real labeled dataset or not. The resulting model is called Triple-GAN because not only are there three networks, but we also consider three joint distributions, i.e., the true data-label distribution and the distributions defined by the conditional networks (see Figure 1 for an illustration of Triple-GAN). Directly motivated by the desirable equilibrium in which both the classifier and the conditional generator are optimal, we carefully design

Footnote 2: The results of the minibatch discrimination approach in [25] support our analysis well.
compatible utilities, including adversarial losses and unbiased regularizations (see Sec. 3), which lead to an effective solution to the challenging SSL task, justified both in theory and in practice. In particular, theoretically, instead of competing as described in the first problem, a good classifier results in a good generator and vice versa in Triple-GAN (see Sec. 3.2 for the proof). Furthermore, the discriminator can access the label information of the unlabeled data through the classifier and then force the generator to generate correct image-label pairs, which addresses the second problem. Empirically, we evaluate our model on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. The results (see Sec. 5) demonstrate that Triple-GAN can simultaneously learn a good classifier and a good conditional generator, which agrees with our motivation and theoretical results. Overall, our main contributions are twofold: (1) we analyze the problems in existing SSL GANs [26, 25] and propose a novel game-theoretical Triple-GAN framework that addresses them with carefully designed compatible objectives; and (2) we show that on the three datasets with incomplete labels, Triple-GAN can substantially advance the state-of-the-art classification results of DGMs and, at the same time, disentangle classes and styles and perform class-conditional interpolation.

2 Related Work

Recently, various approaches have been developed to learn directed DGMs, including Variational Autoencoders (VAEs) [10, 24], Generative Moment Matching Networks (GMMNs) [16, 6] and Generative Adversarial Nets (GANs) [7]. These criteria are systematically compared in [28]. One primary goal of DGMs is to generate realistic samples, for which GANs have proven effective. Specifically, LAP-GAN [3] leverages a series of GANs to upscale the generated samples to high-resolution images through the Laplacian pyramid framework [1].
DCGAN [22] adopts (fractionally) strided convolution layers and batch normalization [8] in GANs and generates realistic natural images. Recent work has introduced inference networks into GANs. For instance, InfoGAN [2] learns interpretable latent codes from unlabeled data by regularizing the original GANs via variational mutual information maximization. In ALI [5, 4], the inference network approximates the posterior distribution of latent variables given true data in an unsupervised manner. Triple-GAN also has an inference network (the classifier) as in ALI, but there are two important differences in their global equilibria and utilities: (1) Triple-GAN matches the distributions defined by both the generator and the classifier to the true data distribution, while ALI only ensures that the distributions defined by the generator and the inference network are the same; (2) the discriminator rejects the samples from the classifier in Triple-GAN, whereas the discriminator accepts the samples from the inference network in ALI, which leads to different update rules for the discriminator and the inference network. These differences arise naturally because Triple-GAN is proposed to solve the existing problems in SSL GANs, as stated in the introduction. Indeed, ALI [5] uses the same approach as [25] to deal with partially labeled data and hence still suffers from these problems. In addition, Triple-GAN outperforms ALI significantly in the semi-supervised classification task (see the comparison in Table 1). To handle partially labeled data, the conditional VAE [11] treats the missing labels as latent variables and infers them for the unlabeled data. ADGM [17] introduces auxiliary variables to build a more expressive variational distribution and improve the predictive performance. The Ladder Network [23] employs lateral connections between a variation of denoising autoencoders and obtains excellent SSL results.
Cat-GAN [26] generalizes GANs with a categorical discriminator and a corresponding objective function. Salimans et al. [25] propose empirical techniques to stabilize the training of GANs and improve the performance on SSL and image generation, but under incompatible learning criteria. Triple-GAN differs significantly from these methods, as stated in the introduction.

3 Method

We consider learning DGMs in the semi-supervised setting,3 where we have a partially labeled dataset with x denoting the input data and y denoting the output label. The goal is to predict the labels y for unlabeled data as well as to generate new samples x conditioned on y. This is different from the unsupervised setting for pure generation, where the only goal is to sample data x from a generator to fool a discriminator; there, a two-player game is sufficient to describe the process, as in GANs.

Footnote 3: Supervised learning is an extreme case, where the training set is fully labeled.

In our setting, as the label information y is incomplete (and thus uncertain), our density model should characterize the uncertainty of both x and y, that is, a joint distribution p(x, y) of input-label pairs. A straightforward application of the two-player GAN is infeasible because of the missing values of y. Unlike previous work [26, 25], which is restricted to the two-player framework and can lead to incompatible objectives, we build our game-theoretic objective on the insight that the joint distribution can be factorized in two ways, namely p(x, y) = p(x)p(y|x) and p(x, y) = p(y)p(x|y), and that the conditional distributions p(y|x) and p(x|y) are of interest for classification and class-conditional generation, respectively. To jointly estimate these conditional distributions, which are characterized by a classifier network and a class-conditional generator network, we define a single discriminator network whose sole role is distinguishing whether a sample is from the true data distribution or from the models.
Hence, we naturally extend GANs to Triple-GAN, a three-player game that characterizes the process of classification and class-conditional generation in SSL, as detailed below.

3.1 A Game with Three Players

Triple-GAN consists of three components: (1) a classifier C that (approximately) characterizes the conditional distribution p_c(y|x) ≈ p(y|x); (2) a class-conditional generator G that (approximately) characterizes the conditional distribution in the other direction, p_g(x|y) ≈ p(x|y); and (3) a discriminator D that distinguishes whether a pair of data (x, y) comes from the true distribution p(x, y). All components are parameterized as neural networks. Our desired equilibrium is that the joint distributions defined by the classifier and the generator both converge to the true data distribution. To this end, we design a game with compatible utilities for the three players as follows. We make the mild assumption that samples from both p(x) and p(y) can be obtained easily.4 In the game, after a sample x is drawn from p(x), C produces a pseudo label y given x following the conditional distribution p_c(y|x). Hence, the pseudo input-label pair is a sample from the joint distribution p_c(x, y) = p(x)p_c(y|x). Similarly, a pseudo input-label pair can be sampled from G by first drawing y ∼ p(y) and then drawing x|y ∼ p_g(x|y), hence a sample from the joint distribution p_g(x, y) = p(y)p_g(x|y). For p_g(x|y), we assume that x is transformed from latent style variables z given the label y, namely x = G(y, z), z ∼ p_z(z), where p_z(z) is a simple distribution (e.g., uniform or standard normal). The pseudo input-label pairs (x, y) generated by both C and G are then sent to the single discriminator D for judgement. D can also access the input-label pairs from the true data distribution as positive samples.
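The two sampling routes above (x then pseudo label from C; label then pseudo sample from G) can be sketched in plain Python. The toy classifier and generator below are hypothetical stand-ins, not the paper's networks:

```python
import random

def sample_pc(data_pool, classifier):
    """A pseudo pair from p_c(x, y) = p(x) p_c(y|x): draw x from the (unlabeled)
    data, then a pseudo label y from the classifier's predictive distribution."""
    x = random.choice(data_pool)
    probs = classifier(x)
    y = random.choices(range(len(probs)), weights=probs, k=1)[0]
    return x, y

def sample_pg(generator, num_classes):
    """A pseudo pair from p_g(x, y) = p(y) p_g(x|y): draw y uniformly,
    z from a standard normal, and set x = G(y, z)."""
    y = random.randrange(num_classes)
    z = random.gauss(0.0, 1.0)
    return generator(y, z), y

# Hypothetical toy stand-ins: a 3-class classifier and a scalar "generator".
toy_classifier = lambda x: [0.8, 0.1, 0.1]
toy_generator = lambda y, z: y + 0.1 * z

x, y = sample_pc([0.0, 1.0, 2.0], toy_classifier)
print(y in {0, 1, 2})  # True
```

Both routines produce (x, y) pairs that the single discriminator judges against pairs from the true labeled data.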
We refer to the utilities in this process as adversarial losses, which can be formulated as a minimax game:

min_{C,G} max_D U(C, G, D) = E_{(x,y)∼p(x,y)}[log D(x, y)] + α E_{(x,y)∼p_c(x,y)}[log(1 − D(x, y))] + (1 − α) E_{(x,y)∼p_g(x,y)}[log(1 − D(G(y, z), y))],   (1)

where α ∈ (0, 1) is a constant that controls the relative importance of generation and classification; we focus on the balanced case by fixing it to 1/2 throughout the paper. The game defined in Eqn. (1) achieves its equilibrium if and only if p(x, y) = (1 − α)p_g(x, y) + αp_c(x, y) (see details in Sec. 3.2). This equilibrium indicates that if one of C and G tends to the data distribution, the other will also go towards the data distribution, which addresses the competing problem. Unfortunately, however, it cannot guarantee that p(x, y) = p_g(x, y) = p_c(x, y) is the unique global optimum, which is not desirable. To address this problem, we introduce the standard supervised loss (i.e., the cross-entropy loss) for C,

R_L = E_{(x,y)∼p(x,y)}[− log p_c(y|x)],

which is equivalent to the KL divergence between p_c(x, y) and p(x, y). Consequently, we define the game as:

min_{C,G} max_D Ũ(C, G, D) = E_{(x,y)∼p(x,y)}[log D(x, y)] + α E_{(x,y)∼p_c(x,y)}[log(1 − D(x, y))] + (1 − α) E_{(x,y)∼p_g(x,y)}[log(1 − D(G(y, z), y))] + R_L.   (2)

It will be proven that the game with utilities Ũ has a unique global optimum for C and G.

3.2 Theoretical Analysis and Pseudo Discriminative Loss

Footnote 4: In semi-supervised learning, p(x) is the empirical distribution of inputs and p(y) is assumed to be the same as the distribution of labels in the labeled data, which is uniform in our experiments.

Algorithm 1: Minibatch stochastic gradient descent training of Triple-GAN in SSL.
for number of training iterations do
  • Sample a batch of pairs (x_g, y_g) ∼ p_g(x, y) of size m_g, a batch of pairs (x_c, y_c) ∼ p_c(x, y) of size m_c, and a batch of labeled data (x_d, y_d) ∼ p(x, y) of size m_d.
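A Monte-Carlo estimate of the utility in Eqn. (2) is a simple average of log-terms over the three kinds of batches. The sketch below is a minimal plain-Python version, assuming `d_true`, `d_c` and `d_g` hold the discriminator's outputs on true, classifier-generated and generator-generated pairs (all names are illustrative):

```python
import math

def triple_gan_utility(d_true, d_c, d_g, r_l, alpha=0.5):
    """Monte-Carlo estimate of Eq. (2):
    E_p[log D] + alpha E_{p_c}[log(1-D)] + (1-alpha) E_{p_g}[log(1-D)] + R_L,
    where d_true / d_c / d_g are lists of D's outputs in (0, 1) on the three
    kinds of input-label pairs and r_l is the supervised cross-entropy term."""
    mean = lambda v: sum(v) / len(v)
    return (mean([math.log(d) for d in d_true])
            + alpha * mean([math.log(1.0 - d) for d in d_c])
            + (1.0 - alpha) * mean([math.log(1.0 - d) for d in d_g])
            + r_l)

# At equilibrium D outputs 1/2 on every pair; with R_L = 0 the utility is 2 log(1/2).
print(triple_gan_utility([0.5], [0.5], [0.5], 0.0))
```

D ascends this quantity while C and G descend their respective terms, as formalized in Algorithm 1.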
  • Update D by ascending along its stochastic gradient:

    ∇_{θ_d} [ (1/m_d) Σ_{(x_d,y_d)} log D(x_d, y_d) + (α/m_c) Σ_{(x_c,y_c)} log(1 − D(x_c, y_c)) + ((1 − α)/m_g) Σ_{(x_g,y_g)} log(1 − D(x_g, y_g)) ].

  • Compute the unbiased estimators R̃_L and R̃_P of R_L and R_P, respectively.
  • Update C by descending along its stochastic gradient:

    ∇_{θ_c} [ (α/m_c) Σ_{(x_c,y_c)} p_c(y_c|x_c) log(1 − D(x_c, y_c)) + R̃_L + α_P R̃_P ].

  • Update G by descending along its stochastic gradient:

    ∇_{θ_g} [ ((1 − α)/m_g) Σ_{(x_g,y_g)} log(1 − D(x_g, y_g)) ].
end for

We now provide a formal theoretical analysis of Triple-GAN under nonparametric assumptions and introduce the pseudo discriminative loss, an unbiased regularization motivated by the global equilibrium. For clarity of the main text, we defer the proof details to Appendix A. First, we show that the optimal D balances between the true data distribution and the mixture distribution defined by C and G, as summarized in Lemma 3.1.

Lemma 3.1 For any fixed C and G, the optimal D of the game defined by the utility function U(C, G, D) is:

D*_{C,G}(x, y) = p(x, y) / (p(x, y) + p_α(x, y)),   (3)

where p_α(x, y) := (1 − α)p_g(x, y) + αp_c(x, y) is a mixture distribution for α ∈ (0, 1).

Given D*_{C,G}, we can omit D and reformulate the minimax game with the value function V(C, G) = max_D U(C, G, D), whose optimum is summarized in Lemma 3.2.

Lemma 3.2 The global minimum of V(C, G) is achieved if and only if p(x, y) = p_α(x, y).

We can further show that C and G at least capture the marginal distributions of the data, especially p_g(x), even though there may exist multiple global equilibria, as summarized in Corollary 3.2.1.

Corollary 3.2.1 Given p(x, y) = p_α(x, y), the marginal distributions are the same for p, p_c and p_g, i.e., p(x) = p_g(x) = p_c(x) and p(y) = p_g(y) = p_c(y).

Given the above result that p(x, y) = p_α(x, y), C and G do not compete as in the two-player formulation, and it is easy to verify that p(x, y) = p_c(x, y) = p_g(x, y) is a global equilibrium point.
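Lemma 3.1 can be checked numerically on a finite (x, y) grid, where joint distributions are just dictionaries of probabilities. The sketch below uses hypothetical tabular distributions; `p_alpha_mix` and `optimal_d` are illustrative helper names:

```python
def p_alpha_mix(pg, pc, alpha=0.5):
    """Mixture p_alpha = (1 - alpha) p_g + alpha p_c over a finite (x, y) grid."""
    return {k: (1 - alpha) * pg[k] + alpha * pc[k] for k in pg}

def optimal_d(p, pg, pc, alpha=0.5):
    """Optimal discriminator of Lemma 3.1: D*(x, y) = p / (p + p_alpha)."""
    pa = p_alpha_mix(pg, pc, alpha)
    return {k: p[k] / (p[k] + pa[k]) for k in p}

# Toy joint distribution over two (x, y) pairs (hypothetical numbers).
p = {('x0', 0): 0.7, ('x1', 1): 0.3}

# When p_g = p_c = p (the global equilibrium of Theorem 3.3), D* is 1/2 everywhere.
print(optimal_d(p, p, p))
```

Whenever the mixture p_α matches p, D* collapses to 1/2, which is exactly the condition of Lemma 3.2.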
However, this equilibrium may not be unique, and we should minimize an additional objective to ensure uniqueness. In fact, this holds for the utility function Ũ(C, G, D) in problem (2), as stated below.

Theorem 3.3 The equilibrium of Ũ(C, G, D) is achieved if and only if p(x, y) = p_g(x, y) = p_c(x, y).

This conclusion essentially motivates our design of Triple-GAN, as we can ensure that both C and G converge to the true data distribution if the model is trained to the optimum. We can further show another nice property of Ũ, which allows us to regularize our model for stable and better convergence in practice without introducing bias, as summarized below.

Corollary 3.3.1 Adding any divergence (e.g., the KL divergence) between any two of the joint distributions, the conditional distributions, or the marginal distributions to Ũ as an additional regularization to be minimized will not change the global equilibrium of Ũ.

Because label information is extremely scarce in SSL, we propose the pseudo discriminative loss R_P = E_{p_g}[− log p_c(y|x)], which optimizes C on the samples generated by G in a supervised manner. Intuitively, a good G can provide meaningful labeled data beyond the training set as extra side information for C, which boosts the predictive performance (see Sec. 5.1 for empirical evidence). Indeed, minimizing the pseudo discriminative loss with respect to C is equivalent to minimizing D_KL(p_g(x, y)||p_c(x, y)) (see Appendix A for the proof), and hence the global equilibrium is preserved by Corollary 3.3.1. Also note that directly minimizing D_KL(p_g(x, y)||p_c(x, y)) is infeasible, since its computation involves the unknown likelihood ratio p_g(x, y)/p_c(x, y). The pseudo discriminative loss is weighted by a hyperparameter α_P. See Algorithm 1 for the whole training procedure, where θ_c, θ_d and θ_g are the trainable parameters of C, D and G, respectively.
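The pseudo discriminative loss R_P = E_{p_g}[− log p_c(y|x)] is just the classifier's cross-entropy on generator-produced pairs. A minimal Monte-Carlo sketch, with hypothetical toy models (G encodes the label in x and C decodes it with 0.99 confidence; all names are illustrative):

```python
import math
import random

def pseudo_discriminative_loss(generator, classifier, num_classes, n_samples=100):
    """Monte-Carlo estimate of R_P = E_{p_g}[-log p_c(y|x)]: standard supervised
    cross-entropy of C on input-label pairs sampled from the generator."""
    total = 0.0
    for _ in range(n_samples):
        y = random.randrange(num_classes)           # y ~ p(y), uniform
        x = generator(y, random.gauss(0.0, 1.0))    # x ~ p_g(x|y)
        total += -math.log(classifier(x)[y])
    return total / n_samples

# Hypothetical toys: G writes the label into x; C reads it back with 0.99 mass.
toy_generator = lambda y, z: float(y)
def toy_classifier(x):
    probs = [0.005, 0.005, 0.005]
    probs[int(round(x))] = 0.99
    return probs

loss = pseudo_discriminative_loss(toy_generator, toy_classifier, 3)
print(abs(loss + math.log(0.99)) < 1e-9)  # True: a near-perfect C drives R_P toward 0
```

When C assigns full probability to the generator's labels, R_P reaches 0, mirroring the KL equivalence stated above.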
4 Practical Techniques

In this section we introduce several practical techniques used in the implementation of Triple-GAN, which may lead to a theoretically biased solution but work well for challenging SSL tasks empirically. One crucial problem of SSL is the small size of the labeled data. In Triple-GAN, D may memorize the empirical distribution of the labeled data and reject other samples from the true data distribution. Consequently, G may collapse to these modes. To address this, we generate pseudo labels through C for some unlabeled data and use these pairs as positive samples for D. The cost is that we introduce some bias into the target distribution of D, which becomes a mixture of p_c and p instead of the pure p. However, this is acceptable, as C converges quickly and p_c and p are close (see the results in Sec. 5). Since properly leveraging the unlabeled data is key to success in SSL, it is necessary to regularize C heuristically, as in many existing methods [23, 26, 13, 15], to make more accurate predictions. We consider two alternative losses on the unlabeled data. The confidence loss [26] minimizes the conditional entropy of p_c(y|x) and the cross entropy between p(y) and p_c(y), weighted by a hyperparameter α_B:

R_U = H(p_c(y|x)) + α_B E_p[− log p_c(y)],

which encourages C to make confident predictions and be balanced on the unlabeled data. The consistency loss [13] penalizes the network if it predicts the same unlabeled data inconsistently given different noise ϵ, e.g., dropout masks:

R_U = E_{x∼p(x)} ||p_c(y|x, ϵ) − p_c(y|x, ϵ′)||²,

where || · ||² is the square of the l2-norm. We use the confidence loss by default, except on the CIFAR10 dataset (see details in Sec. 5). Another consideration is how to compute the gradients of E_{x∼p(x), y∼p_c(y|x)}[log(1 − D(x, y))] with respect to the parameters θ_c of C, which involves a summation over the discrete random variable y, i.e., the class label. On one hand, integrating out the class label is time-consuming.
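The two unlabeled-data regularizers can be sketched directly from their definitions. This is a minimal plain-Python version, assuming batches of predictive distributions as lists of probability vectors and a uniform p(y); the helper names are illustrative:

```python
import math

def confidence_loss(pred_batch, alpha_b=0.1):
    """R_U = H(p_c(y|x)) + alpha_B * E_p[-log p_c(y)]: conditional entropy of the
    predictions plus cross entropy between a uniform p(y) and the marginal p_c(y)."""
    n = len(pred_batch)
    k = len(pred_batch[0])
    cond_ent = sum(-p * math.log(p) for probs in pred_batch for p in probs if p > 0) / n
    marginal = [sum(probs[c] for probs in pred_batch) / n for c in range(k)]
    cross_ent = sum(-(1.0 / k) * math.log(m) for m in marginal if m > 0)
    return cond_ent + alpha_b * cross_ent

def consistency_loss(pred_eps, pred_eps_prime):
    """R_U = E_x ||p_c(y|x, eps) - p_c(y|x, eps')||^2 under two noise draws."""
    n = len(pred_eps)
    return sum(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in zip(pred_eps, pred_eps_prime)) / n

# Confident, class-balanced predictions minimize the confidence loss;
# identical predictions under two dropout masks give zero consistency loss.
print(consistency_loss([[0.5, 0.5]], [[0.5, 0.5]]))  # 0.0
```

Both losses act only on unlabeled data and are minimized jointly with the classifier's other terms.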
On the other hand, directly sampling one label to approximate the expectation via the Monte Carlo method makes the feedback of the discriminator non-differentiable with respect to θ_c. As the REINFORCE algorithm [29] can handle such cases with discrete variables, we use a variant of it for the end-to-end training of our classifier. The gradients in the original REINFORCE algorithm would be E_{x∼p(x)} E_{y∼p_c(y|x)}[∇_{θ_c} log p_c(y|x) log(1 − D(x, y))]. In our experiments, we find the best strategy is to use the most probable y, instead of sampling one, to approximate the expectation over y. The bias is small, as the prediction of C is typically rather confident.

5 Experiments

We now present results on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. MNIST consists of 50,000 training samples, 10,000 validation samples and 10,000 testing samples of handwritten digits of size 28 × 28. SVHN consists of 73,257 training samples and 26,032 testing samples; each is a colored 32 × 32 image containing a sequence of digits with various backgrounds. CIFAR10 consists of colored images distributed across 10 general classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. There are 50,000 training samples and 10,000 testing samples of size 32 × 32 in CIFAR10. We split off 5,000 training samples of SVHN and

Table 1: Error rates (%) on partially labeled MNIST, SVHN and CIFAR10 datasets, averaged over 10 runs. The results with † are trained with more than 500,000 extra unlabeled data on SVHN.
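The "most probable y" approximation replaces the sampled label in the adversarial term with argmax_y p_c(y|x). A minimal sketch, with hypothetical toy models (a confident two-class classifier and a constant discriminator; all names are illustrative):

```python
import math

def classifier_adversarial_term(x_batch, classifier, discriminator):
    """Approximate E_{x~p(x)} E_{y~p_c(y|x)}[log(1 - D(x, y))] by plugging in the
    most probable label argmax_y p_c(y|x) in place of a sampled y."""
    terms = []
    for x in x_batch:
        probs = classifier(x)
        y_hat = max(range(len(probs)), key=probs.__getitem__)  # argmax, no sampling
        terms.append(math.log(1.0 - discriminator(x, y_hat)))
    return sum(terms) / len(terms)

# Hypothetical toys: a confident classifier and a discriminator stuck at 1/2.
toy_classifier = lambda x: [0.9, 0.1]
toy_discriminator = lambda x, y: 0.5
print(classifier_adversarial_term([0.0, 1.0], toy_classifier, toy_discriminator))
```

Because the chosen y is deterministic given x, the term is differentiable through D without a score-function estimator; the bias vanishes as p_c(y|x) concentrates on its mode.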
Algorithm          | MNIST n = 100 | SVHN n = 1000  | CIFAR10 n = 4000
M1+M2 [11]         | 3.33 (±0.14)  | 36.02 (±0.10)  |
VAT [18]           | 2.33          | 24.63          |
Ladder [23]        | 1.06 (±0.37)  |                | 20.40 (±0.47)
Conv-Ladder [23]   | 0.89 (±0.50)  |                |
ADGM [17]          | 0.96 (±0.02)  | 22.86†         |
SDGM [17]          | 1.32 (±0.07)  | 16.61 (±0.24)† |
MMCVA [15]         | 1.24 (±0.54)  | 4.95 (±0.18)†  |
CatGAN [26]        | 1.39 (±0.28)  |                | 19.58 (±0.58)
Improved-GAN [25]  | 0.93 (±0.07)  | 8.11 (±1.3)    | 18.63 (±2.32)
ALI [5]            |               | 7.3            | 18.3
Triple-GAN (ours)  | 0.91 (±0.58)  | 5.77 (±0.17)   | 16.99 (±0.36)

Table 2: Error rates (%) on MNIST with different numbers of labels, averaged over 10 runs.

Algorithm          | n = 20        | n = 50        | n = 200
Improved-GAN [25]  | 16.77 (±4.52) | 2.21 (±1.36)  | 0.90 (±0.04)
Triple-GAN (ours)  | 4.81 (±4.95)  | 1.56 (±0.72)  | 0.67 (±0.16)

CIFAR10 for validation if needed. On CIFAR10, we follow [13] and perform ZCA on the input of C, but still generate and estimate the raw images using G and D. We implement our method based on Theano [27]; here we briefly summarize our experimental settings.5 Although we have an additional network, the generator and classifier of Triple-GAN have architectures comparable to those of the baselines [26, 25] (see details in Appendix F). The pseudo discriminative loss is not applied until the number of epochs reaches a threshold at which the generator can generate meaningful data. We only search the threshold in {200, 300}, α_P in {0.1, 0.03} and the global learning rate in {0.0003, 0.001}, based on the validation performance on each dataset. All other hyperparameters, including relative weights and the parameters of Adam [9], are fixed according to [25, 15] across all experiments. Further, in our experiments, we find that the training techniques for the original two-player GANs [3, 25] are sufficient to stabilize the optimization of Triple-GAN.
5.1 Classification

For a fair comparison, all results of the baselines are taken from the corresponding papers; we average Triple-GAN over 10 runs with different random initializations and splits of the training data, and report the mean error rates with standard deviations, following [25]. First, we compare our method with a large body of approaches in the widely used settings on the MNIST, SVHN and CIFAR10 datasets, given 100, 1,000 and 4,000 labels,6 respectively. Table 1 summarizes the quantitative results. On all three datasets, Triple-GAN consistently achieves state-of-the-art results, and it substantially outperforms the strongest competitors (e.g., Improved-GAN) on the more challenging SVHN and CIFAR10 datasets, which demonstrates the benefit of the compatible learning objectives proposed in Triple-GAN. Note that, for a fair comparison with previous GANs, we do not leverage the extra unlabeled data on SVHN, while some baselines [17, 15] do. Second, we evaluate our method with 20, 50 and 200 labeled samples on MNIST for a systematic comparison with our main baseline, Improved-GAN [25], as shown in Table 2. Triple-GAN consistently outperforms Improved-GAN by a substantial margin, which again demonstrates the benefit of Triple-GAN. Besides, we can see that Triple-GAN achieves more significant improvements as the number of labeled data decreases, suggesting the effectiveness of the pseudo discriminative loss. Finally, we investigate the reasons for the outstanding performance of Triple-GAN. We train a single C without G and D on SVHN as a baseline and obtain an error rate of more than 10%, which shows that G is important for SSL even though C can leverage unlabeled data directly. On CIFAR10, the baseline

Footnote 5: Our source code is available at https://github.com/zhenxuan00/triple-gan
Footnote 6: We use these amounts of labels as default settings throughout the paper if not specified.
Figure 2: (a-b) Comparison between samples from Improved-GAN trained with feature matching and from Triple-GAN on SVHN. (c-d) Samples of Triple-GAN for specific classes (automobile and horse) on CIFAR10.

Figure 3: (a) and (c) are randomly selected labeled data on SVHN and CIFAR10, respectively. (b) and (d) are samples from Triple-GAN on SVHN and CIFAR10, where each row shares the same label and each column shares the same latent variables.

Figure 4: Class-conditional latent-space interpolation on (a) SVHN and (b) CIFAR10. We first sample two random vectors in the latent space and interpolate linearly from one to the other. Then, we map these vectors to the data level given a fixed label for each class. In total, 20 images are shown for each class. We select two endpoints with clear semantics on CIFAR10 for better illustration.

(a simple version of the Π model [13]) achieves a 17.7% error rate. The smaller improvement is reasonable, as CIFAR10 is more complex and hence G is not as good as on SVHN. In addition, we evaluate Triple-GAN without the pseudo discriminative loss on SVHN; it achieves about a 7.8% error rate, which shows the advantages of the compatible objectives (better than the 8.11% error rate of Improved-GAN) and the importance of the pseudo discriminative loss (worse than the complete Triple-GAN by 2%). Furthermore, Triple-GAN has a convergence speed comparable to that of Improved-GAN [25], as shown in Appendix E.

5.2 Generation

We demonstrate that Triple-GAN can learn good G and C simultaneously by generating samples in various ways with the exact models used in Sec. 5.1. For a fair comparison, the generative model and the number of labels are the same as in the previous method [25]. In Fig. 2 (a-b), we first compare the quality of images generated on SVHN by Triple-GAN and by the Improved-GAN with feature matching [25],7 which works well for semi-supervised classification.
We can see that Triple-GAN outperforms the baseline by generating fewer meaningless samples and clearer digits. Further, the baseline generates the same strange sample four times, marked with red rectangles in Fig. 2. The comparison on MNIST and CIFAR10 is presented in Appendix B. We also evaluate the CIFAR10 samples quantitatively via the inception score, following [25]. The score of Triple-GAN is 5.08 ± 0.09, while that of the Improved-GAN trained without minibatch discrimination [25] is 3.87 ± 0.03, which agrees with the visual comparison. We then illustrate images generated for two specific classes on CIFAR10 in Fig. 2 (c-d), and show more in Appendix C. In most cases, Triple-GAN is able to generate meaningful images with correct semantics. Further, we show the ability of Triple-GAN to disentangle classes and styles in Fig. 3. It can be seen that Triple-GAN can generate realistic data in a specific class, and that the latent factors encode meaningful physical factors such as scale, intensity, orientation and color. Some GANs [22, 5, 21] can generate data class-conditionally given full labels, while Triple-GAN can do the same given much less label information. Finally, we demonstrate the generalization capability of Triple-GAN on class-conditional latent-space interpolation in Fig. 4. Triple-GAN can transition smoothly from one sample to another with totally different visual factors without losing label semantics, which shows that Triple-GAN learns meaningful latent spaces class-conditionally instead of overfitting to the training data, especially the labeled data. See these results on MNIST in Appendix D. Overall, these results confirm that Triple-GAN avoids the competition between C and G and can lead to a situation where both generation and classification are good in semi-supervised learning.

Footnote 7: Though the Improved-GAN trained with minibatch discrimination [25] can generate good samples, it fails to predict labels accurately.
6 Conclusions

We present Triple Generative Adversarial Networks (Triple-GAN), a unified game-theoretical framework with three players, a generator, a discriminator and a classifier, for semi-supervised learning with compatible utilities. With such utilities, Triple-GAN addresses the two main problems of existing methods [26, 25]. Specifically, Triple-GAN ensures that both the classifier and the generator can achieve their own optima from the perspective of game theory, and it enables the generator to sample data from a specific class. Our empirical results on the MNIST, SVHN and CIFAR10 datasets demonstrate that, as a unified model, Triple-GAN can simultaneously achieve state-of-the-art classification results among deep generative models, disentangle styles and classes, and transition smoothly at the data level via interpolation in the latent space.

Acknowledgments

The work is supported by the National NSF of China (Nos. 61620106010, 61621136008, 61332007), the MIIT Grant of Int. Man. Comp. Stan (No. 2016ZXFB00001), the Youth Top-notch Talent Support Program, Tsinghua Tiangong Institute for Intelligent Computing, the NVIDIA NVAIL Program and a Project from Siemens.

References
[1] Peter Burt and Edward Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 1983.
[2] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
[3] Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[6] Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906, 2015.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[8] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[9] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[12] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Citeseer, 2009.
[13] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Chongxuan Li, Jun Zhu, and Bo Zhang. Max-margin deep generative models for (semi-)supervised learning. arXiv preprint arXiv:1611.07119, 2016.
[16] Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, 2015.
[17] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[18] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
[19] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng.
Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[20] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
[21] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[22] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[23] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
[24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[25] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
[26] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[27] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016. URL http://arxiv.org/abs/1605.02688.
[28] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[29] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[30] Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
Deep Learning with Topological Signatures

Christoph Hofer (Department of Computer Science, University of Salzburg, Austria) chofer@cosy.sbg.ac.at
Roland Kwitt (Department of Computer Science, University of Salzburg, Austria) Roland.Kwitt@sbg.ac.at
Marc Niethammer (UNC Chapel Hill, NC, USA) mn@cs.unc.edu
Andreas Uhl (Department of Computer Science, University of Salzburg, Austria) uhl@cosy.sbg.ac.at

Abstract

Inferring topological and geometrical information from data can offer an alternative perspective on machine learning problems. Methods from topological data analysis, e.g., persistent homology, enable us to obtain such information, typically in the form of summary representations of topological features. However, such topological signatures often come with an unusual structure (e.g., multisets of intervals) that is highly impractical for most machine learning techniques. While many strategies have been proposed to map these topological signatures into machine learning compatible representations, they suffer from being agnostic to the target learning task. In contrast, we propose a technique that enables us to input topological signatures to deep neural networks and learn a task-optimal representation during training. Our approach is realized as a novel input layer with favorable theoretical properties. Classification experiments on 2D object shapes and social network graphs demonstrate the versatility of the approach and, in case of the latter, we even outperform the state-of-the-art by a large margin.

1 Introduction

Methods from algebraic topology have only recently emerged in the machine learning community, most prominently under the term topological data analysis (TDA) [7]. Since TDA enables us to infer relevant topological and geometrical information from data, it can offer a novel and potentially beneficial perspective on various machine learning problems.
Two compelling benefits of TDA are (1) its versatility, i.e., we are not restricted to any particular kind of data (such as images, sensor measurements, time-series, graphs, etc.) and (2) its robustness to noise. Several works have demonstrated that TDA can be beneficial in a diverse set of problems, such as studying the manifold of natural image patches [8], analyzing activity patterns of the visual cortex [28], classification of 3D surface meshes [27, 22], clustering [11], or recognition of 2D object shapes [29].

Currently, the most widely-used tool from TDA is persistent homology [15, 14]. Essentially¹, persistent homology allows us to track topological changes as we analyze data at multiple "scales". As the scale changes, topological features (such as connected components, holes, etc.) appear and disappear. Persistent homology associates a lifespan to these features in the form of a birth and a death time. The collection of (birth, death) tuples forms a multiset that can be visualized as a persistence diagram or a barcode, also referred to as a topological signature of the data. However, leveraging these signatures for learning purposes poses considerable challenges, mostly due to their unusual structure as a multiset. While there exist suitable metrics to compare signatures (e.g., the Wasserstein metric), they are highly impractical for learning, as they require solving optimal matching problems.

¹We will make these concepts more concrete in Sec. 2.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Illustration of the proposed network input layer for topological signatures. Each signature, in the form of a persistence diagram D ∈ 𝒟 (left), is projected w.r.t. a collection of structure elements: (1) points in D are rotated by π/4, then (2) transformed and projected. The layer's learnable parameters θ = (µ_i, σ_i)_{i=0}^{N−1} are the locations µ_i and the scales σ_i of these elements; ν ∈ R^+ is set a-priori and is meant to discount the impact of points with low persistence (and, in many cases, of low discriminative power). The layer output y is a concatenation of the projections. In this illustration, N = 2 and hence y = (y_1, y_2)^T.

Related work. In order to deal with these issues, several strategies have been proposed. In [2], for instance, Adcock et al. use invariant theory to "coordinatize" the space of barcodes. This allows mapping barcodes to vectors of fixed size, which can then be fed to standard machine learning techniques, such as support vector machines (SVMs). Alternatively, Adams et al. [1] map barcodes to so-called persistence images which, upon discretization, can also be interpreted as vectors and used with standard learning techniques. Along another line of research, Bubenik [6] proposes a mapping of barcodes into a Banach space. This has been shown to be particularly viable in a statistical context (see, e.g., [10]). The mapping outputs a representation referred to as a persistence landscape. Interestingly, under a specific choice of parameters, barcodes are mapped into L2(R²) and the inner product in that space can be used to construct a valid kernel function. Similar kernel-based techniques have also recently been studied by Reininghaus et al. [27], Kwitt et al. [20] and Kusano et al. [19].
While all previously mentioned approaches retain certain stability properties of the original representation with respect to common metrics in TDA (such as the Wasserstein or Bottleneck distances), they also share one common drawback: the mapping of topological signatures to a representation that is compatible with existing learning techniques is pre-defined. Consequently, it is fixed and therefore agnostic to any specific learning task. This is clearly suboptimal, as the eminent success of deep neural networks (e.g., [18, 17]) has shown that learning representations is a preferable approach. Furthermore, kernel-based techniques [27, 20, 19], for instance, additionally suffer from scalability issues, as training typically scales poorly with the number of samples (e.g., roughly cubically in the case of kernel-SVMs). In the spirit of end-to-end training, we therefore aim for an approach that allows us to learn a task-optimal representation of topological signatures. We additionally remark that, e.g., Qi et al. [25] and Ravanbakhsh et al. [26] have proposed architectures that can handle sets, but only of fixed size. In our context, this is impractical, as the capability of handling sets with varying cardinality is a requirement for handling persistent homology in a machine learning setting.

Contribution. To realize this idea, we advocate a novel input layer for deep neural networks that takes a topological signature (in our case, a persistence diagram) and computes a parametrized projection that can be learned during network training. Specifically, this layer is designed such that its output is stable with respect to the 1-Wasserstein distance (similar to [27] or [1]). To demonstrate the versatility of this approach, we present experiments on 2D object shape classification and the classification of social network graphs. On the latter, we improve the state-of-the-art by a large margin, clearly demonstrating the power of combining TDA with deep learning in this context.
2 Background

For space reasons, we only provide a brief overview of the concepts relevant to this work and refer the reader to [16] or [14] for further details.

Homology. The key concept of homology theory is to study the properties of some object X by means of (commutative) algebra. In particular, we assign to X a sequence of modules C_0, C_1, ... which are connected by homomorphisms ∂_n : C_n → C_{n−1} such that im ∂_{n+1} ⊆ ker ∂_n. A structure of this form is called a chain complex, and by studying its homology groups H_n = ker ∂_n / im ∂_{n+1} we can derive properties of X.

A prominent example of a homology theory is simplicial homology. It is the homology theory used throughout this work, and hence we now make the presented ideas concrete. Let K be a simplicial complex and K_n its n-skeleton. We set C_n(K) to be the vector space generated (freely) by K_n over Z/2Z². The connecting homomorphisms ∂_n : C_n(K) → C_{n−1}(K) are called boundary operators. For a simplex σ = [x_0, ..., x_n] ∈ K_n, we define them as

    ∂_n(σ) = Σ_{i=0}^{n} [x_0, ..., x_{i−1}, x_{i+1}, ..., x_n]

and extend this linearly to C_n(K), i.e., ∂_n(Σ σ_i) = Σ ∂_n(σ_i).

Persistent homology. Let K be a simplicial complex and (K_i)_{i=0}^{m} a sequence of simplicial complexes such that ∅ = K_0 ⊆ K_1 ⊆ ... ⊆ K_m = K. Then (K_i)_{i=0}^{m} is called a filtration of K. If we use the extra information provided by the filtration of K, we obtain, for each filtration step i, a chain complex

    ... → C_2^i → C_1^i → C_0^i → 0   (with boundary operators ∂_n),

where C_n^i = C_n(K_i), and the chain complexes of consecutive filtration steps are connected by the inclusions ι : C_n^i → C_n^{i+1}. For example, for a filtration K_1 ⊆ K_2 ⊆ K_3 on vertices v_1, ..., v_4, the chain groups grow step by step: C_0^1 = [[v_1], [v_2]]_{Z_2}, C_1^1 = C_2^1 = 0; C_0^2 = [[v_1], [v_2], [v_3]]_{Z_2}, C_1^2 = [[v_1, v_3], [v_2, v_3]]_{Z_2}, C_2^2 = 0; C_0^3 = [[v_1], [v_2], [v_3], [v_4]]_{Z_2}, C_1^3 = [[v_1, v_3], [v_2, v_3], [v_3, v_4]]_{Z_2}, C_2^3 = 0. This then leads to the concept of persistent homology groups, defined by

    H_n^{i,j} = ker ∂_n^i / (im ∂_{n+1}^j ∩ ker ∂_n^i)   for i ≤ j.
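As a quick illustration (ours, not part of the paper), the boundary operator over Z/2Z can be sketched in a few lines of Python; the defining property im ∂_{n+1} ⊆ ker ∂_n amounts to ∂∘∂ = 0, which the sketch checks on a single triangle:

```python
def boundary(simplex):
    """Faces of an n-simplex: drop one vertex at a time.
    A simplex is a tuple of (distinct, sorted) vertices."""
    return [simplex[:i] + simplex[i + 1:] for i in range(len(simplex))]

def boundary_chain(chain):
    """Apply the boundary operator to a Z/2 chain (a set of simplices),
    cancelling faces that occur an even number of times."""
    count = {}
    for s in chain:
        for f in boundary(s):
            count[f] = count.get(f, 0) ^ 1   # mod-2 arithmetic
    return {f for f, c in count.items() if c}

tri = (0, 1, 2)                        # a 2-simplex
edges = boundary_chain({tri})          # its three 1-faces
assert edges == {(1, 2), (0, 2), (0, 1)}
assert boundary_chain(edges) == set()  # d(d(tri)) = 0 over Z/2
```

The last assertion is exactly the chain-complex condition: every vertex of the triangle appears in two of its edges, so all vertices cancel mod 2.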
The ranks β_n^{i,j} = rank H_n^{i,j} of these homology groups (i.e., the n-th persistent Betti numbers) capture the number of homological features of dimensionality n (e.g., connected components for n = 0, holes for n = 1, etc.) that persist from i to (at least) j. In fact, according to [14, Fundamental Lemma of Persistent Homology], the quantities

    µ_n^{i,j} = (β_n^{i,j−1} − β_n^{i,j}) − (β_n^{i−1,j−1} − β_n^{i−1,j})   for i < j    (1)

encode all the information about the persistent Betti numbers of dimension n.

Topological signatures. A typical way to obtain a filtration of K is to consider sublevel sets of a function f : C_0(K) → R. This function can easily be lifted to higher-dimensional chain groups of K by

    f([v_0, ..., v_n]) = max{f([v_i]) : 0 ≤ i ≤ n}.

Given m = |f(C_0(K))|, we obtain (K_i)_{i=0}^{m} by setting K_0 = ∅ and K_i = f^{−1}((−∞, a_i]) for 1 ≤ i ≤ m, where a_1 < ... < a_m is the sorted sequence of values of f(C_0(K)). If we construct a multiset such that, for i < j, the point (a_i, a_j) is inserted with multiplicity µ_n^{i,j}, we effectively encode the persistent homology of dimension n w.r.t. the sublevel-set filtration induced by f. Upon adding diagonal points with infinite multiplicity, we obtain the following structure:

Definition 1 (Persistence diagram). Let R^2_Δ = {(x_0, x_1) ∈ R^2 : x_0 = x_1} denote the diagonal, let Δ = {x ∈ R^2_Δ : mult(x) = ∞} be the multiset of diagonal points with infinite multiplicity, where mult denotes the multiplicity function, and let R^2_* = {(x_0, x_1) ∈ R^2 : x_1 > x_0}. A persistence diagram D is a multiset of the form D = {x : x ∈ R^2_*} ∪ Δ. We denote by 𝒟 the set of all persistence diagrams D with |D \ Δ| < ∞.

For a given complex K of dimension n_max and a function f (of the discussed form), we can interpret persistent homology as a mapping (K, f) ↦ (D_0, ..., D_{n_max−1}), where D_i is the diagram of dimension i. We can additionally add a metric structure to the space of persistence diagrams by introducing a notion of distance.
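To make the sublevel-set construction concrete, here is a small illustrative sketch (ours, not the paper's pipeline, which uses DIPHA/Perseus) computing the 0-dimensional persistence diagram of a function sampled on a path graph, using union-find and the elder rule:

```python
def sublevel_persistence_0d(values):
    """0-dim persistence of the sublevel-set filtration of a function on a
    path graph (vertices 0..n-1, edges (i, i+1)). Returns (birth, death)
    pairs; the component born at the global minimum never dies and is
    reported with death = inf (an "essential" feature)."""
    n = len(values)
    root = {}                        # union-find: vertex -> representative
    def find(i):
        while root[i] != i:
            root[i] = root[root[i]]  # path halving
            i = root[i]
        return i
    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        root[i] = i                  # vertex enters the filtration
        for j in (i - 1, i + 1):     # connect to already-present neighbors
            if j in root:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the component with the higher birth value dies
                if values[ri] >= values[rj]:
                    young, old = ri, rj
                else:
                    young, old = rj, ri
                if values[young] < values[i]:          # skip 0-persistence
                    pairs.append((values[young], values[i]))
                root[young] = old
    pairs.append((min(values), float('inf')))          # essential component
    return pairs

print(sublevel_persistence_0d([0, 3, 1]))  # [(1, 3), (0, inf)]
```

For the samples [0, 3, 1], the two local minima (born at 0 and 1) merge at the saddle value 3; the younger component yields the point (1, 3), and the global minimum gives the essential point (0, ∞).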
²Simplicial homology is not specific to Z/2Z, but this is a typical choice, since it allows us to interpret n-chains as sets of n-simplices.

Definition 2 (Bottleneck, Wasserstein distance). For two persistence diagrams D and E, we define their Bottleneck (w_∞) and Wasserstein (w_p^q) distances by

    w_∞(D, E) = inf_η sup_{x∈D} ||x − η(x)||_∞   and   w_p^q(D, E) = inf_η ( Σ_{x∈D} ||x − η(x)||_q^p )^{1/p},    (2)

where p, q ∈ N and the infimum is taken over all bijections η : D → E.

Essentially, this facilitates studying stability/continuity properties of topological signatures w.r.t. metrics on the filtration or complex space; we refer the reader to [12], [13], [9] for a selection of important stability results.

Remark. By setting µ_n^{i,∞} = β_n^{i,m} − β_n^{i−1,m}, we extend Eq. (1) to features which never disappear, also referred to as essential features. This change can be lifted to 𝒟 by setting R^2_* = {(x_0, x_1) ∈ R × (R ∪ {∞}) : x_1 > x_0}. In Sec. 5, we will see that essential features can offer discriminative information.

3 A network layer for topological signatures

In this section, we introduce the proposed (parametrized) network layer for topological signatures (in the form of persistence diagrams). The key idea is to take any D and define a projection w.r.t. a collection (of fixed size N) of structure elements. In the following, we set R^+ := {x ∈ R : x > 0} and R^+_0 := {x ∈ R : x ≥ 0}, and start by rotating the points of D such that points on R^2_Δ lie on the x-axis, see Fig. 1. The y-axis can then be interpreted as the persistence of features. Formally, we let b_0 and b_1 be the unit vectors in directions (1, 1)^T and (−1, 1)^T and define a mapping ρ : R^2_* ∪ R^2_Δ → R × R^+_0 such that x ↦ (⟨x, b_0⟩, ⟨x, b_1⟩). This rotates points in R^2_* ∪ R^2_Δ clockwise by π/4. We will later see that this construction is beneficial for a closer analysis of the layer's properties. Similar to [27, 19], we choose exponential functions as structure elements, but other choices are possible (see Lemma 1).
Differently to [27, 19], however, our structure elements are not at fixed locations (i.e., one element per point in D); instead, their locations and scales are learned during training.

Definition 3. Let µ = (µ_0, µ_1)^T ∈ R × R^+, σ = (σ_0, σ_1) ∈ R^+ × R^+ and ν ∈ R^+. We define s_{µ,σ,ν} : R × R^+_0 → R as follows:

    s_{µ,σ,ν}(x_0, x_1) =
        exp(−σ_0^2 (x_0 − µ_0)^2 − σ_1^2 (x_1 − µ_1)^2),                   x_1 ∈ [ν, ∞)
        exp(−σ_0^2 (x_0 − µ_0)^2 − σ_1^2 (ν ln(x_1/ν) + ν − µ_1)^2),       x_1 ∈ (0, ν)
        0,                                                                 x_1 = 0    (3)

A persistence diagram D is then projected w.r.t. s_{µ,σ,ν} via

    S_{µ,σ,ν} : 𝒟 → R,   D ↦ Σ_{x∈D} s_{µ,σ,ν}(ρ(x)).    (4)

Remark. Note that s_{µ,σ,ν} is continuous in x_1, as

    lim_{x→ν} x = lim_{x→ν} (ν ln(x/ν) + ν)   and   lim_{x_1→0} s_{µ,σ,ν}(x_0, x_1) = 0 = s_{µ,σ,ν}(x_0, 0),

and exp(·) is continuous. Further, s_{µ,σ,ν} is differentiable on R × R^+, since

    1 = lim_{x→ν^+} (∂x_1/∂x_1)(x)   and   lim_{x→ν^−} (∂(ν ln(x_1/ν) + ν)/∂x_1)(x) = lim_{x→ν^−} ν/x = 1.

Also note that we use the log-transform in Eq. (3) to guarantee that s_{µ,σ,ν} satisfies the conditions of Lemma 1; this is, however, only one possible choice.

Remark. The intuition behind ν is the following: it is the threshold at which the log-transform starts to operate. The log-transform stretches the strip between the x-axis and the line x_1 = ν to infinite length. As a consequence, s_{µ,σ,ν} = 0 for x ∈ R^2_Δ. This is necessary, since otherwise S_{µ,σ,ν}(D) = ∞ for D ∈ 𝒟 (as each persistence diagram contains points on the diagonal with infinite multiplicity).

Finally, given a collection of S_{µ_i,σ_i,ν}, we combine them to form the output of the network layer.

Definition 4. Let N ∈ N, θ = (µ_i, σ_i)_{i=0}^{N−1} ∈ ((R × R^+) × (R^+ × R^+))^N and ν ∈ R^+. We define

    S_{θ,ν} : 𝒟 → (R^+_0)^N,   D ↦ (S_{µ_i,σ_i,ν}(D))_{i=0}^{N−1},

as the concatenation of all N mappings defined in Eq. (4). Importantly, a network layer implementing Def. 4 is trainable via backpropagation, as (1) s_{µ_i,σ_i,ν} is differentiable in µ_i, σ_i, (2) S_{µ_i,σ_i,ν}(D) is a finite sum of s_{µ_i,σ_i,ν} and (3) S_{θ,ν} is just a concatenation.

4 Theoretical properties

In this section, we demonstrate that the proposed layer is stable w.r.t.
the 1-Wasserstein distance w_1^q, see Eq. (2). In fact, this claim will follow from a more general result, stating sufficient conditions on functions s : R^2_* ∪ R^2_Δ → R^+_0 such that a construction in the form of Eq. (3) is stable w.r.t. w_1^q.

Lemma 1. Let s : R^2_* ∪ R^2_Δ → R^+_0 have the following properties:
(i) s is Lipschitz continuous w.r.t. ||·||_q with constant K_s;
(ii) s(x) = 0, for x ∈ R^2_Δ.
Then, for two persistence diagrams D, E ∈ 𝒟, it holds that

    | Σ_{x∈D} s(x) − Σ_{y∈E} s(y) | ≤ K_s · w_1^q(D, E).    (5)

Proof. See Appendix B.

Remark. At this point, we want to clarify that Lemma 1 is not specific to s_{µ,σ,ν} (e.g., as in Def. 3). Rather, Lemma 1 yields sufficient conditions for constructing a w_1-stable input layer. Our choice of s_{µ,σ,ν} is just a natural example that fulfils these requirements; hence, S_{θ,ν} is just one possible representative of a whole family of input layers.

With the result of Lemma 1 in mind, we turn to the specific case of S_{θ,ν} and analyze its stability properties w.r.t. w_1^q. The following lemma is important in this context.

Lemma 2. s_{µ,σ,ν} has absolutely bounded first-order partial derivatives w.r.t. x_0 and x_1 on R × R^+.

Proof. See Appendix B.

Theorem 1. S_{θ,ν} is Lipschitz continuous with respect to w_1^q on 𝒟.

Proof. Lemma 2 immediately implies that s_{µ,σ,ν} from Eq. (3) is Lipschitz continuous w.r.t. ||·||_q. Consequently, s = s_{µ,σ,ν} ∘ ρ satisfies property (i) of Lemma 1; property (ii) is satisfied by construction. Hence, S_{µ,σ,ν} is Lipschitz continuous w.r.t. w_1^q. Consequently, S_{θ,ν} is Lipschitz continuous in each coordinate and therefore Lipschitz continuous.

Interestingly, the stability result of Theorem 1 is comparable to the stability results in [1] or [27] (which are also w.r.t. w_1^q and in the setting of diagrams with finitely many points).
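A minimal Python sketch of ρ (the rotation by π/4), the structure element s_{µ,σ,ν} of Def. 3 and the projection of Def. 4 may help make the construction concrete. This is our own illustrative code under the definitions above (the function and parameter names are ours, not the authors'); it also checks numerically that s vanishes on the diagonal and is continuous at x_1 = ν:

```python
import math

def rho(p):
    """Rotate a diagram point p = (b, d) clockwise by pi/4; the diagonal
    maps to the x-axis and the y-coordinate becomes the persistence."""
    b, d = p
    return ((b + d) / math.sqrt(2), (d - b) / math.sqrt(2))

def s(x, mu, sigma, nu):
    """Structure element s_{mu,sigma,nu} of Def. 3 (illustrative sketch)."""
    x0, x1 = x
    if x1 == 0.0:
        return 0.0
    if x1 < nu:                       # log-transform below the threshold nu
        x1 = nu * math.log(x1 / nu) + nu
    return math.exp(-sigma[0] ** 2 * (x0 - mu[0]) ** 2
                    - sigma[1] ** 2 * (x1 - mu[1]) ** 2)

def S(diagram, theta, nu):
    """Layer output of Def. 4: one coordinate per structure element, each
    a sum of s over the rotated points of the diagram."""
    return [sum(s(rho(p), mu, sigma, nu) for p in diagram)
            for (mu, sigma) in theta]

D = [(0.2, 0.9), (0.1, 0.3), (0.5, 0.5)]   # last point lies on the diagonal
theta = [((0.5, 0.5), (1.0, 1.0)), ((1.0, 0.2), (2.0, 2.0))]
out = S(D, theta, nu=0.1)
assert len(out) == 2 and all(v > 0 for v in out)
assert s(rho((0.5, 0.5)), (0.5, 0.5), (1.0, 1.0), 0.1) == 0.0  # diagonal -> 0
lo = s((0.0, 0.1 - 1e-9), (0.0, 0.0), (1.0, 1.0), 0.1)
hi = s((0.0, 0.1 + 1e-9), (0.0, 0.0), (1.0, 1.0), 0.1)
assert abs(lo - hi) < 1e-6                                     # continuity at nu
```

In a learned layer, theta would hold trainable parameters; here it is fixed purely for illustration.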
However, contrary to previous works, if we were to chop off the input layer after network training, we would obtain a mapping S_{θ,ν} of persistence diagrams that is specifically tailored to the learning task on which the network was trained.

Figure 2: Height function filtration of a "clean" (left, green points) and a "noisy" (right, blue points) shape along direction d = (0, −1)^T. This example demonstrates the insensitivity of homology towards noise, as the added noise only (1) slightly shifts the dominant points (upper left corner) and (2) produces additional points close to the diagonal, which have little impact on the Wasserstein distance and the output of our layer.

5 Experiments

To demonstrate the versatility of the proposed approach, we present experiments with two totally different types of data: (1) 2D shapes of objects, represented as binary images, and (2) social network graphs, given by their adjacency matrix. In both cases, the learning task is classification. In each experiment we ensured a balanced group size (per label) and used a 90/10 random training/test split; all reported results are averaged over five runs with fixed ν = 0.1. In practice, points in input diagrams were thresholded at 0.01 for computational reasons. Additionally, we conducted a reference experiment on all datasets using simple vectorization (see Sec. 5.3) of the persistence diagrams in combination with a linear SVM.

Implementation. All experiments were implemented in PyTorch (https://github.com/pytorch/pytorch), using DIPHA (https://bitbucket.org/dipha/dipha) and Perseus [23]. Source code is publicly-available at https://github.com/c-hofer/nips2017.
5.1 Classification of 2D object shapes

We apply persistent homology combined with our proposed input layer to two different datasets of binary 2D object shapes: (1) the Animal dataset, introduced in [3], which consists of 20 different animal classes with 100 samples each; and (2) the MPEG-7 dataset, which consists of 70 classes of different object/animal contours with 20 samples each (see [21] for more details).

Filtration. The requirements for using persistent homology on 2D shapes are twofold: first, we need to assign a simplicial complex to each shape; second, we need to appropriately filtrate the complex. While, in principle, we could analyze contour features, such as curvature, and choose a sublevel-set filtration based on that, such a strategy requires substantial preprocessing of the discrete data (e.g., smoothing). Instead, we choose to work with the raw pixel data and leverage the persistent homology transform, introduced by Turner et al. [29]. The filtration in that case is based on sublevel sets of the height function, computed from multiple directions (see Fig. 2). Practically, this means that we directly construct a simplicial complex from the binary image. We set K_0 as the set of all pixels which are contained in the object. Then, a 1-simplex [p_0, p_1] is in the 1-skeleton K_1 iff p_0 and p_1 are 4-neighbors on the pixel grid. To filtrate the constructed complex, we denote by b the barycenter of the object and by r the radius of its bounding circle around b. Finally, we define, for [p] ∈ K_0 and d ∈ S^1, the filtration function by f([p]) = 1/r · ⟨p − b, d⟩. Function values are lifted to K_1 by taking the maximum, cf. Sec. 2. Finally, let d_i be the 32 equidistantly distributed directions in S^1, starting from (1, 0)^T. For each shape, we get a vector of persistence diagrams (D_i)_{i=1}^{32}, where D_i is the 0-th diagram obtained by filtration along d_i. As most objects do not differ in homology groups of higher dimensions (> 0), we did not use the corresponding persistence diagrams.
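The height-function filtration described above can be sketched as follows (an illustrative sketch under the stated definitions; the helper name and pixel-coordinate convention are our own assumptions, not the authors' code):

```python
import math

def height_filtration(pixels, direction):
    """Filtration values f([p]) = <p - b, d> / r for the foreground pixels
    of a binary shape, lifted to 4-neighbor edges by taking the maximum.
    `pixels` is a list of (row, col) tuples; `direction` is a unit vector."""
    n = len(pixels)
    # barycenter b of the object
    b = (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)
    # radius r of the bounding circle around b
    r = max(math.hypot(p[0] - b[0], p[1] - b[1]) for p in pixels)
    f = {p: ((p[0] - b[0]) * direction[0]
             + (p[1] - b[1]) * direction[1]) / r for p in pixels}
    # lift to 1-simplices (edges between 4-neighbors) via the maximum
    edges = {}
    for (i, j) in pixels:
        for q in ((i + 1, j), (i, j + 1)):
            if q in f:
                edges[((i, j), q)] = max(f[(i, j)], f[q])
    return f, edges

# 32 equidistant directions in S^1, starting from (1, 0):
dirs = [(math.cos(2 * math.pi * k / 32), math.sin(2 * math.pi * k / 32))
        for k in range(32)]
f, edges = height_filtration([(0, 0), (0, 1), (1, 0)], dirs[0])
assert all(-1.0 <= v <= 1.0 for v in f.values())  # |<p-b,d>| <= r
```

Running this for every direction d_i and feeding the resulting filtrations to a persistent-homology backend (the paper uses DIPHA) yields the vector of diagrams (D_i)_{i=1}^{32}.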
Network architecture. While the full network is listed in the supplementary material, the key architectural choices are: 32 independent input branches, i.e., one for each filtration direction. Further, the i-th branch gets, as input, the vector of persistence diagrams from directions d_{i−1}, d_i and d_{i+1}. This is a straightforward approach to capture dependencies among the filtration directions. We use cross-entropy loss to train the network for 400 epochs, using stochastic gradient descent (SGD) with mini-batches of size 128 and an initial learning rate of 0.1 (halved every 25-th epoch).

Figure 3: Left: some examples from the MPEG-7 (bottom) and Animal (top) datasets. Right: Classification results, compared to the two best (†) and two worst (‡) results reported in [30].

                           MPEG-7   Animal
  ‡ Skeleton paths          86.7     67.9
  ‡ Class segment sets      90.9     69.7
  † ICS                     96.6     78.4
  † BCF                     97.2     83.4
    Ours                    91.8     69.5

Results. Fig. 3 shows a selection of 2D object shapes from both datasets, together with the obtained classification results. We list the two best (†) and two worst (‡) results as reported in [30]. While, on the one hand, using topological signatures is below the state-of-the-art, the proposed architecture is still better than other approaches that are specifically tailored to the problem. Most notably, our approach does not require any specific data preprocessing, whereas all other competitors listed in Fig. 3 require, e.g., some sort of contour extraction. Furthermore, the proposed architecture readily generalizes to 3D, with the only difference that in this case d_i ∈ S^2. Fig. 4 (right) shows an exemplary visualization of the position of the learned structure elements for the Animal dataset.

5.2 Classification of social network graphs

In this experiment, we consider the problem of graph classification, where vertices are unlabeled and edges are undirected.
That is, a graph G is given by G = (V, E), where V denotes the set of vertices and E the set of edges. We evaluate our approach on the challenging problem of social network classification, using the two largest benchmark datasets from [31], i.e., reddit-5k (5 classes, 5k graphs) and reddit-12k (11 classes, ≈12k graphs). Each sample in these datasets represents a discussion graph and the classes indicate subreddits (e.g., worldnews, video, etc.).

Filtration. The construction of a simplicial complex from G = (V, E) is straightforward: we set K_0 = {[v] : v ∈ V} and K_1 = {[v_0, v_1] : {v_0, v_1} ∈ E}. We choose a very simple filtration based on the vertex degree, i.e., the number of edges incident to a vertex v ∈ V. Hence, for [v_0] ∈ K_0 we get f([v_0]) = deg(v_0) / max_{v∈V} deg(v), and we again lift f to K_1 by taking the maximum. Note that chain groups are trivial for dimension > 1; hence, all features in dimension 1 are essential.

Network architecture. Our network has four input branches: two for each dimension (0 and 1) of the homological features, split into essential and non-essential ones, see Sec. 2. We train the network for 500 epochs using SGD and cross-entropy loss with an initial learning rate of 0.1 (reddit_5k) or 0.4 (reddit_12k). The full network architecture is listed in the supplementary material.

Results. Fig. 5 (right) compares our proposed strategy to state-of-the-art approaches from the literature. In particular, we compare against (1) the graphlet kernel (GK) and deep graphlet kernel (DGK) results from [31], (2) the Patchy-SAN (PSCN) results from [24] and (3) a recently reported graph-feature + random forest approach (RF) from [4]. As we can see, using topological signatures in our proposed setting considerably outperforms the current state-of-the-art on both datasets. This is an interesting observation, as PSCN [24], for instance, also relies on node degrees and an extension of the convolution operation to graphs.
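The degree-based filtration described above can be sketched as follows (again our own illustrative code under the stated definitions, not the authors' implementation):

```python
def degree_filtration(V, E):
    """Vertex-degree filtration of an undirected graph:
    f([v]) = deg(v) / max_w deg(w), lifted to edges by the maximum."""
    deg = {v: 0 for v in V}
    for (u, w) in E:
        deg[u] += 1
        deg[w] += 1
    dmax = max(deg.values())
    f0 = {v: deg[v] / dmax for v in V}                 # values on K_0
    f1 = {e: max(f0[e[0]], f0[e[1]]) for e in E}       # lifted to K_1
    return f0, f1

# star graph: center has degree 3, leaves degree 1
f0, f1 = degree_filtration([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])
assert f0[0] == 1.0 and f0[1] == 1 / 3
assert all(v == 1.0 for v in f1.values())  # every edge touches the center
```

The resulting f0/f1 values play the role of the a_i in Sec. 2: they define the sublevel-set filtration from which the 0- and 1-dimensional (essential and non-essential) diagrams are computed.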
Further, the results reveal that including essential features is key to these improvements.

5.3 Vectorization of persistence diagrams

Here, we briefly present a reference experiment we conducted following Bendich et al. [5]. The idea is to directly use the persistence diagrams as features via vectorization. For each point (b, d) in a persistence diagram D we calculate its persistence, i.e., d − b. We then sort the calculated persistences by magnitude from high to low and take the first N values. Hence, we get, for each persistence diagram, a vector of dimension N (if |D \ Δ| < N, we pad with zeros). We used this technique on all four datasets. As can be seen from the results in Fig. 4 (left, averaged over 10 cross-validation runs), vectorization performs poorly on MPEG-7 and Animal but can lead to competitive rates on reddit-5k and reddit-12k. Nevertheless, the obtained performance is considerably inferior to our proposed approach.

Figure 4: Left: Classification accuracies for a linear SVM trained on vectorized (in R^N) persistence diagrams (see Sec. 5.3). Right: Exemplary visualization of the learned structure elements (in 0-th dimension) for the Animal dataset and filtration direction d = (−1, 0)^T. Centers of the learned elements are marked in blue.

               N = 5    10     20     40     80    160   |  Ours
  MPEG-7        81.8   82.3   79.7   74.5   68.2   64.4  |  91.8
  Animal        48.8   50.0   46.2   42.4   39.3   36.0  |  69.5
  reddit-5k     37.1   38.2   39.7   42.1   43.8   45.2  |  54.5
  reddit-12k    24.2   24.6   27.9   29.8   31.5   31.6  |  44.5

                          reddit-5k   reddit-12k
  GK [31]                    41.0        31.8
  DGK [31]                   41.3        32.2
  PSCN [24]                  49.1        41.3
  RF [4]                     50.9        42.7
  Ours (w/o essential)       49.1        38.5
  Ours (w/ essential)        54.5        44.5

Figure 5: Left: Illustration of graph filtration by vertex degree, i.e., f ≡ deg (for different choices of a_i, see Sec. 2).
Right: Classification results as reported in [31] for GK and DGK, Patchy-SAN (PSCN) as reported in [24], and feature-based random-forest (RF) classification from [4].

Finally, we remark that in both experiments, tests with the kernel of [27] turned out to be computationally impractical: (1) on shape data, due to the need to evaluate the kernel for all filtration directions, and (2) on graphs, due to the large number of samples and the number of points in each diagram.

6 Discussion

We have presented, to the best of our knowledge, the first approach towards learning task-optimal stable representations of topological signatures, in our case persistence diagrams. Our particular realization of this idea, i.e., as an input layer to deep neural networks, not only enables us to learn with topological signatures, but also to use them as additional (and potentially complementary) inputs to existing deep architectures. From a theoretical point of view, we remark that the presented structure elements are not restricted to exponential functions, so long as the conditions of Lemma 1 are met. One drawback of the proposed approach, however, is the artificial bending of the persistence axis (see Fig. 1) by a logarithmic transformation; other strategies might be possible and better suited in certain situations. A detailed investigation of this issue is left for future work. From a practical perspective, it is also worth pointing out that, in principle, the proposed layer could be used to handle any kind of input that comes in the form of multisets (of R^n), whereas previous works only allow handling sets of fixed size (see Sec. 1). In summary, we argue that our experiments show strong evidence that topological features of data can be beneficial in many learning tasks, not necessarily as a replacement for existing inputs, but rather as a complementary source of discriminative information.

Acknowledgements.
This work was partially funded by the Austrian Science Fund FWF (KLI project 00012) and the Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, Salzburg. 8 A Technical results Lemma 3. Let α ∈R+ and β ∈R. We have lim x→0 ln(x) x · e−α(ln(x)+β)2 = 0 i) lim x→0 1 x · e−α(ln(x)+β)2 = 0 . ii) Proof. We omit the proof for brevity (see supplementary material for details), but remark that only i) needs to be shown as ii) follows immediately. B Proofs Proof of Lemma 1. Let ϕ be a bijection between D and E which realizes wq 1(D, E) and let D0 = D \ ∆, E0 = E \ ∆. To show the result of Eq. (5), we consider the following decomposition: D = ϕ−1(E0) ∪ϕ−1(∆) = (ϕ−1(E0) \ ∆) | {z } A ∪(ϕ−1(E0) ∩∆) | {z } B ∪(ϕ−1(∆) \ ∆) | {z } C ∪(ϕ−1(∆) ∩∆) | {z } D (6) Except for the term D, all sets are finite. In fact, ϕ realizes the Wasserstein distance wq 1 which implies ϕ D = id. Therefore, s(x) = s(ϕ(x)) = 0 for x ∈D since D ⊂∆. Consequently, we can ignore D in the summation and it suffices to consider E = A ∪B ∪C. It follows that X x∈D s(x) − X y∈E s(y) = X x∈D s(x) − X x∈D s(ϕ(x)) = X x∈E s(x) − X x∈E s(ϕ(x)) = X x∈E s(x) −s(ϕ(x)) ≤ X x∈E |s(x) −s(ϕ(x))| ≤Ks · X x∈E ||x −ϕ(x)||q = Ks · X x∈D ||x −ϕ(x)||q = Ks · wq 1(D, E) . Proof of Lemma 2. Since sµ,σ,ν is defined differently for x1 ∈[ν, ∞) and x1 ∈(0, ν), we need to distinguish these two cases. In the following x0 ∈R. (1) x1 ∈[ν, ∞): The partial derivative w.r.t. xi is given as ∂ ∂xi sµ,σ,ν (x0, x1) = C · ∂ ∂xi e−σ2 i (xi−µi)2 (x0, x1) = C · e−σ2 i (xi−µi)2 · (−2σ2 i )(xi −µi) , (7) where C is just the part of exp(·) which is not dependent on xi. For all cases, i.e., x0 →∞, x0 → −∞and x1 →∞, it holds that Eq. (7) →0. (2) x1 ∈(0, ν): The partial derivative w.r.t. x0 is similar to Eq. (7) with the same asymptotic behaviour for x0 →∞and x0 →−∞. However, for the partial derivative w.r.t. x1 we get ∂ ∂x1 sµ,σ,ν (x0, x1) = C · ∂ ∂x1 e−σ2 1(ln( x1 ν )+ν−µ1)2 (x0, x1) = C · e( ... 
) · (−2σ2 1) · ln x1 ν + ν −µ1 · ν x1 = C′ · e( ... ) · ln x1 ν · 1 x1 | {z } (a) +(ν −µ1) · e( ... ) · 1 x1 | {z } (b) . (8) As x1 →0, we can invoke Lemma 3 i) to handle (a) and Lemma 3 ii) to handle (b); conclusively, Eq. (8) →0. As the partial derivatives w.r.t. xi are continuous and their limits are 0 on R, R+, resp., we conclude that they are absolutely bounded. 9 References [1] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta, and L. Ziegelmeier. Persistence images: A stable vector representation of persistent homology. JMLR, 18(8):1–35, 2017. 2, 5 [2] A. Adcock, E. Carlsson, and G. Carlsson. The ring of algebraic functions on persistence bar codes. CoRR, 2013. https://arxiv.org/abs/1304.0530. 2 [3] X. Bai, W. Liu, and Z. Tu. Integrating contour and skeleton for shape classification. In ICCV Workshops, 2009. 6 [4] I. Barnett, N. Malik, M.L. Kuijjer, P.J. Mucha, and J.-P. Onnela. Feature-based classification of networks. CoRR, 2016. https://arxiv.org/abs/1610.05868. 7, 8 [5] P. Bendich, J.S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery trees. Ann. Appl. Stat, 10(2), 2016. 7 [6] P. Bubenik. Statistical topological data analysis using persistence landscapes. JMLR, 16(1):77–102, 2015. 2 [7] G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009. 1 [8] G. Carlsson, T. Ishkhanov, V. de Silva, and A. Zomorodian. On the local behavior of spaces of natural images. IJCV, 76:1–12, 2008. 1 [9] F. Chazal, D. Cohen-Steiner, L. J. Guibas, F. Mémoli, and S. Y. Oudot. Gromov-Hausdorff stable signatures for shapes using persistence. Comput. Graph. Forum, 28(5):1393–1403, 2009. 4 [10] F. Chazal, B.T. Fasy, F. Lecci, A. Rinaldo, and L. Wassermann. Stochastic convergence of persistence landscapes and silhouettes. JoCG, 6(2):140–161, 2014. 2 [11] F. Chazal, L.J. Guibas, S.Y. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds. J. 
| 2017 | 361 |
6,854 | Revenue Optimization with Approximate Bid Predictions Andrés Muñoz Medina Google Research 76 9th Ave New York, NY 10011 Sergei Vassilvitskii Google Research 76 9th Ave New York, NY 10011 Abstract In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem. This is due to the heterogeneity of ad opportunity types and the non-convexity of the objective function. In this work, we show how to reduce reserve price optimization to the standard setting of prediction under squared loss, a well-understood problem in the learning community. We further bound the gap between the expected bid and revenue in terms of the average loss of the predictor. This is the first result that formally relates the revenue gained to the quality of a standard machine-learned model. 1 Introduction A crucial task for revenue optimization in auctions is setting a good reserve (or minimum) price. Set it too low, and the sale may yield little revenue; set it too high, and there may not be anyone willing to buy the item. The celebrated work by Myerson [1981] shows how to optimally set reserves in second-price auctions, provided the value distribution of each bidder is known. In practice, there are two challenges that make this problem significantly more complicated. First, the value distribution is never known directly; rather, the auctioneer can only observe samples drawn from it. Second, in the context of ad auctions, the items for sale (impressions) are heterogeneous, and there are literally trillions of different types of items being sold. It is therefore likely that a specific type of item has never been observed previously, and no information about its value is known. A standard machine learning approach addressing the heterogeneity problem is to parametrize each impression by a feature vector, with the underlying assumption that bids observed from auctions with similar features will be similar. In online advertising,
these features encode, for instance, the ad size, whether it is mobile or desktop, etc. The question is, then, how to use the features to set a good reserve price for a particular ad opportunity. On the face of it, this sounds like a standard machine learning question: given a set of features, predict the value of the maximum bid. The difficulty comes from the shape of the loss function. Much of the machine learning literature is concerned with optimizing well-behaved loss functions, such as squared loss or hinge loss. The revenue function, on the other hand, is non-continuous and strongly non-concave, making a direct attack a challenging proposition. In this work we take a different approach and reduce the problem of finding good reserve prices to a prediction problem under the squared loss. In this way we can rely upon the many widely available and scalable algorithms developed to minimize this objective. We proceed by using the predictor to define a judicious clustering of the data, and then compute the empirically revenue-maximizing reserve price for each group. Our reduction is simple and practical, and directly ties the revenue gained by the algorithm to the prediction error. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 1.1 Related Work Optimizing revenue in auctions has been a rich area of study, beginning with the seminal work of Myerson [1981], who introduced optimal auction design. Follow-up work by Chawla et al. [2007] and Hartline and Roughgarden [2009], among others, refined his results in increasingly complex settings, taking into account multiple items, diverse demand functions, and weaker assumptions on the shape of the value distributions. Most of the classical literature on revenue optimization focuses on the design of optimal auctions when the bidding distribution of buyers is known. More recent work has considered the computational and information-theoretic challenges in learning optimal auctions from data.
A long line of work [Cole and Roughgarden, 2015, Devanur et al., 2016, Dhangwatnotai et al., 2015, Morgenstern and Roughgarden, 2015, 2016] analyzes the sample complexity of designing optimal auctions. The main contribution of this direction is to show that, under fairly general bidding scenarios, a near-optimal auction can be designed knowing only a polynomial number of samples from bidders' valuations. Other authors [Leme et al., 2016, Roughgarden and Wang, 2016] have focused on the computational complexity of finding optimal reserve prices from samples, showing that even for simple mechanisms the problem is often NP-hard to solve directly. Another well-studied approach to data-driven revenue optimization is that of online learning. Here, auctions occur one at a time, and the learning algorithm must compute prices as a function of the history of the algorithm. These algorithms generally make no distributional assumptions and measure their performance in terms of regret: the difference between the algorithm's performance and the performance of the best fixed reserve price in hindsight. Kleinberg and Leighton [2003] developed an online revenue optimization algorithm for posted-price auctions that achieves low regret. Their work was later extended to second-price auctions by Cesa-Bianchi et al. [2015]. A natural approach in both of these settings is to attempt to predict an optimal reserve price, or equivalently the highest bid submitted by any of the buyers. While the problem of learning this reserve price is well understood for the simplistic model of buyers with i.i.d. valuations [Cesa-Bianchi et al., 2015, Devanur et al., 2016, Kleinberg and Leighton, 2003], the problem becomes much more challenging in practice, when the valuations of a buyer also depend on features associated with the ad opportunity (for instance, user demographics and publisher information). This problem is not nearly as well understood as its i.i.d. counterpart.
Mohri and Medina [2014] provide learning guarantees and an algorithm based on DC programming to optimize revenue in second-price auctions with reserve. The proposed algorithm, however, does not easily scale to large auction data sets, as each iteration involves solving a convex optimization problem. A smoother version of this algorithm is given by [Rudolph et al., 2016]. However, being a highly non-convex problem, neither algorithm provides a guarantee on the revenue attainable by the algorithm's output. Devanur et al. [2016] give sample complexity bounds on the design of optimal auctions with side information. However, the authors consider only cases where this side information is given by σ ∈ [0, 1]. More importantly, their proposed algorithm only works under the unverifiable assumption that the conditional distributions of bids given σ satisfy stochastic dominance. Our results. We show that given a predictor of the bid with squared loss of η², we can construct a reserve function r that extracts all but g(η) revenue, for a simple increasing function g (see Theorem 2 for the exact statement). To the best of our knowledge, this is the first result that ties the revenue one can achieve directly to the quality of a standard prediction task. Our algorithm for computing r is scalable, practical, and efficient. Along the way we show what kinds of distributions are amenable to revenue optimization via reserve prices. We prove that when bids are drawn i.i.d. from a distribution F, the ratio between the mean bid and the revenue extracted with the optimum monopoly reserve scales as O(log Var(F)) (Theorem 5). This result refines the log h bound derived by Goldberg et al. [2001], and formalizes the intuition that reserve prices are more successful for low-variance distributions. 2 Setup We consider a repeated posted-price auction setup where every auction is parametrized by a feature vector x ∈ X and a bid b ∈ [0, 1]. Let D be a distribution over X × [0, 1].
Let h: X → [0, 1] be a bid prediction function, and denote by η² the squared loss incurred by h: E[(h(x) − b)²] = η². We assume h is given, and make no assumption on the structure of h or how it is obtained. Notice that while the existence of such h is not guaranteed for all values of η, using historical data one could use one of multiple readily available regression algorithms to find the best hypothesis h. Let S = (x₁, b₁), . . . , (x_m, b_m) ∼ D be a set of m i.i.d. samples drawn from D, and denote by S_X = (x₁, . . . , x_m) its projection on X. Given a price p, let Rev(p, b) = p·1_{b≥p} denote the revenue obtained when the bidder bids b. For a reserve price function r: X → [0, 1] we let R(r) = E_{(x,b)∼D} Rev(r(x), b) and R̂(r) = (1/m) Σ_{(x,b)∈S} Rev(r(x), b) denote the expected and empirical revenue of reserve price function r. We also let B = E[b], B̂ = (1/m) Σ_{i=1}^m b_i denote the population and empirical mean bid, and S(r) = B − R(r), Ŝ(r) = B̂ − R̂(r) denote the expected and empirical separation between bid values and the revenue. Notice that for a given reserve price function r, S(r) corresponds to revenue left on the table. Our goal is, given S and h, to find a function r that maximizes R(r), or equivalently minimizes S(r). 2.1 Generalization Error Note that in our setup we are only given samples from the distribution D, but aim to maximize the expected revenue. Understanding the difference between the empirical performance of an algorithm and its expected performance, also known as the generalization error, is a key tenet of learning theory. At a high level, the generalization error is a function of the training set size (larger training sets lead to smaller generalization error) and the inherent complexity of the learning algorithm (simple rules such as linear classifiers generalize better than more complex ones). In this paper we characterize the complexity of a class G of functions by its growth function Π.
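The empirical quantities of the setup translate directly into code; a minimal sketch (the helper names are mine, not the paper's):

```python
import numpy as np

def emp_revenue(prices, bids):
    """Empirical revenue R-hat(r): mean of Rev(p, b) = p * 1[b >= p] over the sample."""
    prices, bids = np.asarray(prices, float), np.asarray(bids, float)
    return float(np.mean(prices * (bids >= prices)))

def emp_separation(prices, bids):
    """Empirical separation S-hat(r) = B-hat - R-hat(r): revenue left on the table."""
    return float(np.mean(bids)) - emp_revenue(prices, bids)
```

For instance, pricing one auction at 0.4 against a bid of 0.5 and another at 1.0 against a bid of 0.8 sells only the first, so R̂ = 0.2 and Ŝ = 0.65 − 0.2 = 0.45.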
The growth function corresponds to the maximum number of binary labelings that can be obtained by G over all possible samples S_X. It is closely related to the VC-dimension when G takes values in {0, 1}, and to the pseudo-dimension [Morgenstern and Roughgarden, 2015, Mohri et al., 2012] when G takes values in ℝ. We can give a bound on the generalization error associated with minimizing the empirical separation over a class of functions G. The following theorem is an adaptation of Theorem 1 of [Mohri and Medina, 2014] to our particular setup. Theorem 1. Let δ > 0. With probability at least 1 − δ over the choice of the sample S, the following bound holds uniformly for r ∈ G: S(r) ≤ Ŝ(r) + 2·√(log(1/δ)/(2m)) + 4·√(2 log(Π(G, m))/m). (1) Therefore, in order to minimize the expected separation S(r), it suffices to minimize the empirical separation Ŝ(r) over a class of functions G whose growth function scales polynomially in m. 3 Warmup In order to better understand the problem at hand, we begin by introducing a straightforward mechanism for transforming the hypothesis function h to a reserve price function r with guarantees on its achievable revenue. Lemma 1. Let r: X → [0, 1] be defined by r(x) := max(h(x) − η^{2/3}, 0). The function r then satisfies S(r) ≤ η^{1/2} + 2η^{2/3}. The proof is a simple application of Jensen's and Markov's inequalities and is deferred to Appendix B. This surprisingly simple algorithm shows there are ways to obtain revenue guarantees from a simple regressor. To the best of our knowledge, this is the first guarantee of its kind. The reader may be curious about the choice of η^{2/3} as the offset in our reserve price function. We will show that the dependence on η^{2/3} is not a simple artifact of our analysis, but a cost inherent to the problem of revenue optimization.
Moreover, observe that this simple algorithm fixes a static offset, and does not make a distinction between those parts of the feature space where the algorithm makes a low error and those where the error is relatively high. By contrast, our proposed algorithm partitions the space appropriately and calculates a different reserve for each partition. More importantly, we will provide a data-dependent bound on the performance of our algorithm that only in the worst-case scenario behaves like η^{2/3}. 4 Results Overview In principle, to maximize revenue we need to find a class of functions G with small complexity, but that contains a function which approximately minimizes the empirical separation. The challenge comes from the fact that the revenue function Rev is not continuous and highly non-concave: a small change in the price p may lead to very large changes in revenue. This is the main reason why simply using the predictor h(x) as a proxy for a reserve function is a poor choice, even if its average error η² is small. For example, a function h that is just as likely to over-predict by η as to under-predict by η will have very small error, but lead to 0 revenue in half the cases. A solution on the other end of the spectrum would simply memorize the optimum prices from the sample S, setting r(x_i) = b_i. While this leads to optimal empirical revenue, a function class G containing r would satisfy Π(G, m) = 2^m, making the bound of Theorem 1 vacuous. In this work we introduce a family G(h, k) of classes parameterized by k ∈ ℕ. This family admits an approximate minimizer that can be computed in polynomial time, has low generalization error, and achieves provable guarantees on the overall revenue. More precisely, we show that given S, and a hypothesis h with expected squared loss of η²: • For every k ≥ 1 there exists a set of functions G(h, k) such that Π(G(h, k), m) = O(m^{2k}).
• For every k ≥ 1, there is a polynomial-time algorithm that outputs r_k ∈ G(h, k) such that in the worst-case scenario Ŝ(r_k) is bounded by O(1/k^{2/3} + η^{2/3} + 1/m^{1/6}). Effectively, we show how to transform any classifier h with low squared loss η² to a reserve price predictor that recovers all but O(η^{2/3}) revenue in expectation. 4.1 Algorithm Description In this section we give an overview of the algorithm that uses both the predictor h and the set of samples in S to develop a pricing function r. Our approach has two steps. First we partition the set of feasible prices, 0 ≤ p ≤ 1, into k partitions, C₁, C₂, . . . , C_k. The exact boundaries between partitions depend on the samples S and their predicted values, as given by h. For each partition we find the price that maximizes the empirical revenue in the partition. We let r(x) return the empirically optimum price in the partition that contains h(x). For a more formal description, let T_k be the set of k-partitions of the interval [0, 1], that is: T_k = {t = (t₀, t₁, . . . , t_{k−1}, t_k) | 0 = t₀ < . . . < t_k = 1}. We define G(h, k) = {x ↦ Σ_{j=0}^{k−1} r_j·1_{t_j ≤ h(x) < t_{j+1}} | r_j ∈ [t_j, t_{j+1}] and t ∈ T_k}. A function in G(h, k) chooses k level sets of h and k reserve prices. Given x, price r_j is chosen if x falls in the j-th level set. It remains to define the function r_k ∈ G(h, k). Given a partition vector t ∈ T_k, let the partition C^h = {C^h_1, . . . , C^h_k} of X be given by C^h_j = {x ∈ X | t_{j−1} < h(x) ≤ t_j}. Let m_j = |S_X ∩ C^h_j| be the number of elements that fall into the j-th partition. We define the predicted mean and variance of each group C^h_j as μ^h_j = (1/m_j) Σ_{x_i ∈ C^h_j} h(x_i) and (σ^h_j)² = (1/m_j) Σ_{x_i ∈ C^h_j} (h(x_i) − μ^h_j)². We are now ready to present algorithm RIC-h for computing r_k ∈ G(h, k). Algorithm 1. Reserve Inference from Clusters Compute t^h ∈ T_k that minimizes (1/m) Σ_j m_j σ^h_j. Let C^h = C^h_1, C^h_2, . . . , C^h_k be the induced partitions. For each j ∈ 1, . . . , k, set r_j = argmax_r r · |{i | b_i ≥ r ∧ x_i ∈ C^h_j}|.
Return x ↦ Σ_j r_j·1_{x ∈ C^h_j}. end Our main theorem states that the separation of r_k is bounded by the cluster variance of C^h. For a partition C = {C₁, . . . , C_k} of X, let σ̂_j denote the empirical variance of bids for auctions in C_j. We define the weighted empirical variance by: Φ(C) := Σ_{j=1}^k √( Σ_{i,i′: x_i,x_{i′} ∈ C_j} (b_i − b_{i′})² ) = 2 Σ_{j=1}^k m_j σ̂_j. (2) Theorem 2. Let δ > 0 and let r_k denote the output of Algorithm 1. Then r_k ∈ G(h, k) and, with probability at least 1 − δ over the samples S: Ŝ(r_k) ≤ (3B̂)^{1/3} (Φ(C^h)/(2m))^{2/3} ≤ (3B̂)^{1/3} (1/(2k) + 2(η² + √(log(1/δ)/(2m)))^{1/2})^{2/3}. Notice that our bound is data-dependent, and only in the worst-case scenario does it behave like η^{2/3}. In general it could be much smaller. We also show that the complexity of G(h, k) admits a favorable bound. The proof is similar to that in [Morgenstern and Roughgarden, 2015]; we include it in Appendix E for completeness. Theorem 3. The growth function of the class G(h, k) can be bounded as: Π(G(h, k), m) ≤ m^{2k−1} k^k. We can combine these results with Equation (1) and an easy bound on B̂ in terms of B to conclude: Corollary 1. Let δ > 0 and let r_k denote the output of Algorithm 1. Then r_k ∈ G(h, k) and, with probability at least 1 − δ over the samples S: S(r_k) ≤ (3B̂)^{1/3} (Φ(C^h)/(2m))^{2/3} + O(√(k log m / m)) ≤ (12Bη²)^{1/3} + O(1/k^{2/3} + (log(1/δ)/(2m))^{1/6} + √(k log m / m)). Since B ∈ [0, 1], this implies that when k = Θ(m^{3/7}), the separation is bounded by 2.28·η^{2/3} plus additional error factors that go to 0 with the number of samples m as Õ(m^{−2/7}). 5 Bounding Separation In this section we prove the main bound motivating our algorithm. This bound relates the variance of the bid distribution and the maximum revenue that can be extracted when a buyer's bids follow such a distribution. It formally shows what makes a distribution amenable to revenue optimization. To gain intuition for the kind of bound we are striving for, consider a bid distribution F.
If the variance of F is 0, that is, F is a point mass at some value v, then setting a reserve price of v leads to no separation. On the other hand, consider the equal-revenue distribution, with F(x) = 1 − 1/x. Here any reserve price leads to a revenue of 1. However, the distribution has unbounded expected bid and variance, so it is not too surprising that more revenue cannot be extracted. We make this connection precise, showing that after setting the optimal reserve price, the separation can be bounded by a function of the variance of the distribution. Given any bid distribution F over [0, 1], we denote by G(r) = 1 − lim_{r′→r⁻} F(r′) the probability that a bid is greater than or equal to r. Finally, we will let R = max_r r·G(r) denote the maximum revenue achievable when facing a bidder whose bids are drawn from distribution F. As before, we denote by B = E_{b∼F}[b] the mean bid and by S = B − R the expected separation of distribution F. Theorem 4. Let σ² denote the variance of F. Then σ² ≥ 2R²·e^{S/R} − B² − R². The proof of this theorem is highly technical and we present it in Appendix A. Corollary 2. The following bound holds for any distribution F: S ≤ (3R)^{1/3}·σ^{2/3} ≤ (3B)^{1/3}·σ^{2/3}. The proof of this corollary follows immediately by an application of Taylor's theorem to the bound of Theorem 4. It is also easy to show that this bound is tight (see Appendix D). 5.1 Approximating Maximum Revenue In their seminal work, Goldberg et al. [2001] showed that when faced with a bidder drawing values from a distribution F on [1, M] with mean B, an auctioneer setting the optimum monopoly reserve would recover at least Ω(B/log M) revenue. We show how to adapt the result of Theorem 4 to refine this approximation ratio as a function of the variance of F. We defer the proof to Appendix B. Theorem 5. For any distribution F with mean B and variance σ², the maximum revenue with monopoly reserves, R, satisfies: B/R ≤ 4.78 + 2 log(1 + σ²/B²). Note that since σ² ≤ M², this always leads to a tighter bound on the revenue.
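Corollary 2 holds for any distribution on [0, 1], so it can be sanity-checked on an empirical one. The sketch below draws bids from an arbitrary Beta distribution (my choice, not from the paper), computes the optimal monopoly revenue on the sample by scanning sorted bid values (the same empirical argmax used per cluster in Algorithm 1), and checks S ≤ (3B)^{1/3}σ^{2/3}:

```python
import numpy as np

rng = np.random.default_rng(1)
bids = np.sort(rng.beta(2.0, 5.0, size=50_000))  # arbitrary bid distribution on [0, 1]
m = bids.size

# On the empirical distribution the best reserve is a bid value: pricing at the
# i-th smallest bid sells to the m - i bids at or above it.
R = float(np.max(bids * np.arange(m, 0, -1)) / m)  # optimal monopoly revenue
B = float(bids.mean())                             # mean bid
S = B - R                                          # separation
bound = (3 * B) ** (1 / 3) * float(bids.std()) ** (2 / 3)
```

On this sample S stays below bound, as the corollary guarantees; tightness (Appendix D of the paper) requires specially constructed distributions.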
5.2 Partition of X Corollary 2 suggests clustering points in such a way that the variance of the bids in each cluster is minimized. Given a partition C = {C₁, . . . , C_k} of X, we denote by m_j = |S_X ∩ C_j|, B̂_j = (1/m_j) Σ_{i: x_i∈C_j} b_i, and σ̂_j² = (1/m_j) Σ_{i: x_i∈C_j} (b_i − B̂_j)². Let also r_j = argmax_{p>0} p·|{i : b_i ≥ p, x_i ∈ C_j}| and R̂_j = (1/m_j)·r_j·|{i : b_i ≥ r_j, x_i ∈ C_j}|. Lemma 2. Let r(x) = Σ_{j=1}^k r_j·1_{x∈C_j}. Then Ŝ(r) ≤ (3B̂)^{1/3} ((1/m) Σ_{j=1}^k m_j σ̂_j)^{2/3} = (3B̂)^{1/3} (Φ(C)/(2m))^{2/3}. Proof. Let Ŝ_j = B̂_j − R̂_j. Corollary 2 applied to the empirical bid distribution in C_j yields Ŝ_j ≤ (3B̂_j)^{1/3} σ̂_j^{2/3}. Multiplying by m_j/m, summing over all clusters, and using Hölder's inequality gives: Ŝ(r) = (1/m) Σ_{j=1}^k m_j Ŝ_j ≤ (1/m) Σ_{j=1}^k (3B̂_j)^{1/3} σ̂_j^{2/3} m_j ≤ (Σ_{j=1}^k (3m_j/m) B̂_j)^{1/3} (Σ_{j=1}^k (m_j/m) σ̂_j)^{2/3}. 6 Clustering Algorithm In view of Lemma 2, and since the quantity B̂ is fixed, we can find a function minimizing the expected separation by finding a partition of X that minimizes the weighted variance Φ(C) defined in Section 4.1. From the definition of Φ, this problem resembles a traditional k-means clustering problem with distance function d(x_i, x_{i′}) = (b_i − b_{i′})². Thus, one could use one of several clustering algorithms to solve it. Nevertheless, in order to allocate a new point x ∈ X to a cluster, we would require access to the bid b, which at evaluation time is unknown. Instead, we show how to utilize the predictions of h to define an almost optimal clustering of X. For any partition C = {C₁, . . . , C_k} of X, define Φ_h(C) = Σ_{j=1}^k √( Σ_{i,i′: x_i,x_{i′}∈C_j} (h(x_i) − h(x_{i′}))² ). Notice that Φ_h(C)/(2m) is the function minimized by Algorithm 1. The following lemma, proved in Appendix B, bounds the cluster variance achieved by clustering bids according to their predictions. Lemma 3. Let h be a function such that (1/m) Σ_{i=1}^m (h(x_i) − b_i)² ≤ η̂², and let C* denote the partition that minimizes Φ(C). If C^h minimizes Φ_h(C), then Φ(C^h) ≤ Φ(C*) + 4mη̂. Corollary 3. Let r_k be the output of Algorithm 1.
If (1/m) Σ_{i=1}^m (h(x_i) − b_i)² ≤ η̂², then: Ŝ(r_k) ≤ (3B̂)^{1/3} (Φ(C^h)/(2m))^{2/3} ≤ (3B̂)^{1/3} (Φ(C*)/(2m) + 2η̂)^{2/3}. (3) Proof. It is easy to see that the elements C^h_j of C^h are of the form C^h_j = {x | t_j ≤ h(x) ≤ t_{j+1}} for t ∈ T_k. Thus, if r_k is the hypothesis induced by the partition C^h, then r_k ∈ G(h, k). The result now follows by the definition of Φ and Lemmas 2 and 3. The proof of Theorem 2 is now straightforward. Define a partition C by x_i ∈ C_j if b_i ∈ [(j−1)/k, j/k). Since (b_i − b_{i′})² ≤ 1/k² for b_i, b_{i′} ∈ C_j, we have Φ(C) ≤ Σ_{j=1}^k √(m_j²/k²) = m/k. (4) Furthermore, since E[(h(x) − b)²] ≤ η², Hoeffding's inequality implies that with probability 1 − δ: (1/m) Σ_{i=1}^m (h(x_i) − b_i)² ≤ η² + √(log(1/δ)/(2m)). (5) In view of inequalities (4) and (5), as well as Corollary 3, we have: Ŝ(r_k) ≤ (3B̂)^{1/3} (Φ(C)/(2m) + 2(η² + √(log(1/δ)/(2m)))^{1/2})^{2/3} ≤ (3B̂)^{1/3} (1/(2k) + 2(η² + √(log(1/δ)/(2m)))^{1/2})^{2/3}. This completes the proof of the main result. To implement the algorithm, note that the problem of minimizing Φ_h(C) reduces to finding a partition t ∈ T_k such that the sum of the variances within the partitions is minimized. It is clear that it suffices to consider points t_j in the set B = {h(x₁), . . . , h(x_m)}. With this observation, a simple dynamic program leads to a polynomial-time algorithm with an O(km²) running time (see Appendix C). 7 Experiments We now compare the performance of our algorithm against the following baselines: 1. The offset algorithm presented in Section 3, where instead of using the theoretical offset η^{2/3} we find the optimal t maximizing the empirical revenue Σ_{i=1}^m (h(x_i) − t)·1_{h(x_i)−t ≤ b_i}. 2. The DC algorithm introduced by Mohri and Medina [2014], which represents the state of the art in learning a revenue-optimal reserve price. Synthetic data. We begin by running experiments on synthetic data to demonstrate the regimes where each algorithm excels.
We generate feature vectors x_i ∈ ℝ¹⁰ with coordinates sampled from a mixture of lognormal distributions with means μ₁ = 0, μ₂ = 1, variance σ₁ = σ₂ = 0.5, and mixture parameter p = 0.5. Let 1 ∈ ℝ^d denote the vector with all entries set to 1. Bids are generated according to two different scenarios: Linear. Bids b_i are generated according to b_i = max(x_i^⊤1 + β_i, 0), where β_i is a Gaussian random variable with mean 0 and standard deviation σ ∈ {0.01, 0.1, 1.0, 2.0, 4.0}. Bimodal. Bids b_i are generated according to the following rule: let s_i = max(x_i^⊤1 + β_i, 0); if s_i > 30 then b_i = 40 + α_i, otherwise b_i = s_i. Here α_i has the same distribution as β_i. The linear scenario demonstrates what happens when we have a good estimate of the bids. The bimodal scenario models a buyer which, for the most part, will bid as a continuous function of the features, but that is interested in a particular set of objects (for instance, retargeting buyers in online advertisement) for which she is willing to pay a much higher price. Figure 1: (a) Mean revenue of the three algorithms on the linear scenario. (b) Mean revenue of the three algorithms on the bimodal scenario. (c) Mean revenue on auction data. For each experiment we generated a training dataset S_train, a holdout set S_holdout, and a test set S_test, each with 16,000 examples. The function h used by RIC-h and the offset algorithm is found by training a linear regressor over S_train. For efficiency, we ran the RIC-h algorithm on quantizations of the predictions h(x_i). Quantized predictions belong to one of 1000 buckets over the interval [0, 50]. Finally, the choice of hyperparameters γ for the Lipschitz loss and k for the clustering algorithm was done by selecting the best performing parameter over the holdout set. Following the suggestions in [Mohri and Medina, 2014], we chose γ ∈ {0.001, 0.01, 0.1, 1.0} and k ∈ {2, 4, . . . , 24}.
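For reference, the two synthetic scenarios above can be regenerated in a few lines. This sketch follows the description (d = 10 lognormal-mixture features, Gaussian noise), reading σ₁ = σ₂ = 0.5 as the log-scale spread; the function name and seed are my own assumptions:

```python
import numpy as np

def synth_auctions(m, sigma, scenario="linear", d=10, seed=0):
    """Features and bids for the linear / bimodal synthetic scenarios."""
    rng = np.random.default_rng(seed)
    # Each coordinate: equal-weight mixture of lognormal(0, 0.5) and lognormal(1, 0.5).
    pick = rng.random((m, d)) < 0.5
    x = np.where(pick,
                 rng.lognormal(0.0, 0.5, size=(m, d)),
                 rng.lognormal(1.0, 0.5, size=(m, d)))
    s = np.maximum(x.sum(axis=1) + rng.normal(0.0, sigma, size=m), 0.0)
    if scenario == "bimodal":
        # High-value slice of inventory: bids jump to ~40 (a "retargeting" buyer).
        s = np.where(s > 30, 40 + rng.normal(0.0, sigma, size=m), s)
    return x, s
```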
Figure 1(a),(b) shows the average revenue of the three approaches across 20 replicas of the experiment as a function of the log of σ. Revenue is normalized so that the DC algorithm revenue is 1.0 when σ = 0.01. The error bars at one standard deviation are indistinguishable in the plot. It is not surprising to see that in the linear scenario, the DC algorithm of [Mohri and Medina, 2014] and the offset algorithm outperform RIC-h under low-noise conditions. Both algorithms will recover a solution close to the true weight vector 1. In this case the offset is minimal, thus recovering virtually all revenue. On the other hand, even if we set the optimal reserve price for every cluster, the inherent variance of each cluster makes us leave some revenue on the table. Nevertheless, notice that as the noise increases, all three algorithms seem to achieve the same revenue. This is due to the fact that the variance in each cluster is comparable with the error in the prediction function h. The results are reversed for the bimodal scenario, where RIC-h outperforms both algorithms under low noise. This is due to the fact that RIC-h recovers virtually all revenue obtained from high bids, while the offset and DC algorithms must set conservative prices to avoid losing revenue from lower bids. Auction data. In practice, however, neither of the synthetic regimes is fully representative of the bidding patterns. In order to fully evaluate RIC-h, we collected auction bid data from AdExchange for 4 different publisher-advertiser pairs. For each pair we sampled 100,000 examples with a set of discrete and continuous features. The final feature vectors are in ℝ^d for d ∈ [100, 200], depending on the publisher-buyer pair. For each experiment, we extracted a random training sample of 20,000 points, as well as a holdout and a test sample. We repeated this experiment 20 times and present the results in Figure 1(c), where we have normalized the data so that the performance of the DC algorithm is always 1.
The error bars represent one standard deviation from the mean revenue lift. Notice that our proposed algorithm achieves on average up to a 30% improvement over the DC algorithm. Moreover, the simple offset strategy never outperforms the clustering algorithm, and in some cases achieves significantly less revenue. 8 Conclusion We provided a simple, scalable reduction of the problem of revenue optimization with side information to the well-studied problem of minimizing the squared loss. Our reduction provides the first polynomial-time algorithm with a quantifiable bound on the achieved revenue. In the analysis of our algorithm we also provided the first variance-dependent lower bound on the revenue attained by setting optimal monopoly prices. Finally, we provided extensive empirical evidence of the advantages of RIC-h over the current state of the art. References Nicolò Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. IEEE Trans. Information Theory, 61(1):549–564, 2015. Shuchi Chawla, Jason D. Hartline, and Robert D. Kleinberg. Algorithmic pricing via virtual valuations. In Proceedings of the 8th ACM Conference on Electronic Commerce (EC 2007), San Diego, California, USA, June 11-15, 2007, pages 243–251, 2007. doi: 10.1145/1250910.1250946. Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. CoRR, abs/1502.00963, 2015. Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Proceedings of STOC, pages 426–439, 2016. Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. Games and Economic Behavior, 91:318–333, 2015. Andrew V. Goldberg, Jason D. Hartline, and Andrew Wright. Competitive auctions and digital goods. In Proceedings of the Twelfth Annual Symposium on Discrete Algorithms, January 7-9, 2001, Washington, DC, USA, pages 735–744, 2001. Jason D.
Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In Proceedings of the 10th ACM Conference on Electronic Commerce (EC 2009), Stanford, California, USA, July 6-10, 2009, pages 225–234, 2009. Robert D. Kleinberg and Frank Thomson Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS, pages 594–605, 2003. Renato Paes Leme, Martin Pál, and Sergei Vassilvitskii. A field guide to personalized reserve prices. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11-15, 2016, pages 1093–1102, 2016. doi: 10.1145/2872427.2883071. Mehryar Mohri and Andrés Muñoz Medina. Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In Proceedings of ICML, pages 262–270, 2014. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, 2012. ISBN 026201825X, 9780262018258. Jamie Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Proceedings of NIPS, pages 136–144, 2015. Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In Proceedings of COLT, pages 1298–1318, 2016. R. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981. Tim Roughgarden and Joshua R. Wang. Minimizing regret with multiple reserves. In Proceedings of the 2016 ACM Conference on Economics and Computation, EC '16, Maastricht, The Netherlands, July 24-28, 2016, pages 601–616, 2016. doi: 10.1145/2940716.2940792. Maja R. Rudolph, Joseph G. Ellis, and David M. Blei. Objective variables for probabilistic revenue maximization in second-price auctions with reserve. In Proceedings of WWW 2016, pages 1113–1122, 2016. | 2017 | 362 |
6,855 | Mapping distinct timescales of functional interactions among brain networks Mali Sundaresan1 s.malisundar@gmail.com Arshed Nabeel2 arshed@iisc.ac.in Devarajan Sridharan1,2∗ sridhar@iisc.ac.in 1Center for Neuroscience, Indian Institute of Science, Bangalore 2Department of Computer Science and Automation, Indian Institute of Science, Bangalore Abstract Brain processes occur at various timescales, ranging from milliseconds (neurons) to minutes and hours (behavior). Characterizing functional coupling among brain regions at these diverse timescales is key to understanding how the brain produces behavior. Here, we apply instantaneous and lag-based measures of conditional linear dependence, based on Granger-Geweke causality (GC), to infer network connections at distinct timescales from functional magnetic resonance imaging (fMRI) data. Due to the slow sampling rate of fMRI, it is widely held that GC produces spurious and unreliable estimates of functional connectivity when applied to fMRI data. We challenge this claim with simulations and a novel machine learning approach. First, we show, with simulated fMRI data, that instantaneous and lag-based GC identify distinct timescales and complementary patterns of functional connectivity. Next, we analyze fMRI scans from 500 subjects and show that a linear classifier trained on either instantaneous or lag-based GC connectivity reliably distinguishes task versus rest brain states, with ∼80-85% cross-validation accuracy. Importantly, instantaneous and lag-based GC exploit markedly different spatial and temporal patterns of connectivity to achieve robust classification. Our approach enables identifying functionally connected networks that operate at distinct timescales in the brain. 1 Introduction Processes in the brain occur at various timescales. These range from the timescales of milliseconds for extremely rapid processes (e.g. 
neuron spikes), to timescales of tens to hundreds of milliseconds for processes coordinated across local populations of neurons (e.g. synchronized neural oscillations), to timescales of seconds for processes that are coordinated across diverse brain networks (e.g. language), and even up to minutes, hours or days for processes that involve large-scale neuroplastic changes (e.g. learning a new skill). Coordinated activity among brain regions that mediate each of these cognitive processes would manifest in the form of functional connections among these regions at the corresponding timescales. Characterizing patterns of functional connectivity that occur at these different timescales is, hence, essential for understanding how the brain produces behavior. Measures of linear dependence and feedback, based on Granger-Geweke causality (GC) [10][11], have been used to estimate instantaneous and lagged functional connectivity in recordings of brain activity made with electroencephalography (EEG [6]) and electrocorticography (ECoG [3]). However, the application of GC measures to brain recordings made with functional magnetic resonance imaging (fMRI) remains controversial [22][20][2]. Because the hemodynamic response is produced and sampled at a timescale (seconds) several orders of magnitude slower than the underlying neural processes (milliseconds), previous studies have argued that GC measures, particularly lag-based GC, produce spurious and unreliable estimates of functional connectivity from fMRI data [22][20]. ∗Corresponding author. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Three primary confounds have been reported with applying lag-based GC to fMRI data. First, systematic hemodynamic lags: a slower hemodynamic response in one region, as compared to another, could produce a spurious directed GC connection from the second to the first [22][4].
Second, in simulations, measurement noise added to the signal during fMRI acquisition was shown to produce significant degradation in GC functional connectivity estimates [20]. Finally, downsampling recordings to the typical fMRI sampling rate (seconds), three orders of magnitude slower than the timescale of neural spiking (milliseconds), was shown to effectively eliminate all traces of functional connectivity inferred by GC [20]. Hence, a previous, widely cited study argued that same-time correlation-based measures of functional connectivity, such as partial correlations, fare much better than GC for estimating functional connectivity from fMRI data [22]. The controversy over the application of GC measures to fMRI data remains unresolved to date, primarily because of the lack of access to “ground truth”. On the one hand, claims regarding the efficacy of GC estimates based on simulations are only as valid as the underlying model of hemodynamic responses. Because the precise mechanism by which neural responses generate hemodynamic responses is an active area of research [7], strong conclusions cannot be drawn based on simulated fMRI data alone. On the other hand, establishing “ground truth” validity for connections estimated by GC on fMRI data requires concurrent, brain-wide invasive neurophysiological recordings during fMRI scans, a prohibitive enterprise. Here, we seek to resolve this controversy by introducing a novel application of machine learning that works around these criticisms. We estimate instantaneous and lag-based GC connectivity, first, with simulated fMRI time series under different model network configurations and, next, from real fMRI time series (from 500 human subjects) recorded under different task conditions. Based on the GC connectivity matrices, we train a linear classifier to discriminate model network configurations or subject task conditions, and assess classifier accuracy with cross-validation.
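The overall logic of this approach (estimate connectivity features per subject or per simulation run, train a linear classifier, and score it by cross-validation) can be sketched as follows. This is an illustrative stand-in rather than the paper's pipeline: synthetic connectivity matrices replace the GC estimates, a least-squares linear classifier replaces the SVM, and all sizes, seeds, and effect strengths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_subj = 6, 150  # hypothetical: nodes, subjects per condition

# Two conditions whose mean connectivity differs in a few entries,
# plus per-subject noise (a stand-in for estimated GC matrices).
base = rng.normal(0.0, 0.1, (n_nodes, n_nodes))
delta = np.zeros((n_nodes, n_nodes))
delta[1, 4] = delta[3, 0] = delta[5, 2] = 0.5  # condition-specific connections

def subject_matrix(condition):
    return base + condition * delta + rng.normal(0.0, 0.2, (n_nodes, n_nodes))

# Feature vector per subject = flattened connectivity matrix.
X = np.array([subject_matrix(c).ravel()
              for c in [0] * n_subj + [1] * n_subj])
y = np.array([-1.0] * n_subj + [1.0] * n_subj)

# 5-fold cross-validation of a least-squares linear classifier
# (the sign of X @ w predicts the condition).
perm = rng.permutation(len(y))
folds = np.array_split(perm, 5)
accs = []
for k in range(5):
    te = folds[k]
    tr = np.concatenate([folds[j] for j in range(5) if j != k])
    w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
    accs.append(np.mean(np.sign(X[te] @ w) == y[te]))
cv_accuracy = float(np.mean(accs))
```

With three condition-specific connections well above the noise level, the cross-validated accuracy is near-perfect; shrinking the entries of delta toward the noise scale pushes it toward chance, which is the sense in which high cross-validation accuracy certifies that the connectivity estimates carry reliable condition information.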
Our results show that instantaneous and lag-based GC connectivity estimated from empirical fMRI data can distinguish task conditions with over 80% cross-validation accuracies. To permit such accurate classification, GC estimates of functional connectivity must be robustly consistent within each model configuration (or task condition) and reliably different across configurations (or task conditions). In addition, drawing inspiration from simulations, we show that GC estimated on real fMRI data downsampled to 3x-7x the original sampling rate provides novel insights into functional brain networks that operate at distinct timescales. 2 Simulations and Theory 2.1 Instantaneous and lag-based measures of conditional linear dependence The linear relationship between two multivariate signals x and y conditioned on a third multivariate signal z can be measured as the sum of linear feedback from x to y (Fx→y), linear feedback from y to x (Fy→x), and instantaneous linear feedback (Fx◦y) [11][16]. To quantify these linear relationships, we model the future of each time series in terms of their past values with a well-established multivariate autoregressive (MVAR) model (detailed in Supplementary Material, Section S1). Briefly, Fx→y is a measure of the improvement in the ability to predict the future values of y given the past values of x, over and above what can be predicted from the past values of z and y itself (and vice versa for Fy→x). Fx◦y, on the other hand, measures the instantaneous influence between x and y conditioned on z (see Supplementary Material, Section S1). We refer to Fx◦y as instantaneous GC (iGC), and Fx→y and Fy→x as lag-based GC or directed GC (dGC), with the direction of the influence (x to y or vice versa) being indicated by the arrow. The “full” measure of linear dependence and feedback Fx,y is given by: Fx,y = Fx→y + Fy→x + Fx◦y (1) Fx,y measures the complete conditional linear dependence between two time series.
If, at a given instant, no aspect of one time series can be explained by a linear model containing all the values (past and present) of the other, Fx,y will evaluate to zero [16]. These measures are firmly grounded in information theory and statistical inferential frameworks [9].
Figure 1: Network simulations. (A) Network configuration H. (Left) Connectivity matrix. Red vs. blue: Excitatory vs. inhibitory connections. Deeper hues: Higher connection strengths. Non-zero value at (i, j) corresponds to a connection from node j to node i (column to row). Sub-network A-B-C operates at a fast timescale (50 ms) whereas D-E-F operates at a slow timescale (2 s). (Right) Network schematic showing the connectivity matrix as a graph. (B) Network configuration J. Conventions are the same as in A. (C) The eigenspectra of networks H (left) and J (right). (D) Simulated time series in network configuration J with fast (top panel) and slow (bottom panel) dynamics, corresponding to nodes A-B and E-F, respectively. Within each panel, the top plot is the simulated neural time series, and the bottom plot is the simulated fMRI time series.
2.2 Simulating functional interactions at different timescales To test the ability of GC measures to reliably recover functional interactions at different timescales, we simulated fMRI time series for model networks with two configurations of directed connectivity.
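On a toy bivariate system, the measures of Section 2.1 can be computed directly from order-1 autoregressive fits. This is a sketch only: the paper estimates these quantities with the MVGC toolbox and an MVAR model that also conditions on z, which is omitted here, and the coupling coefficients and series length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
x = np.zeros(T)
y = np.zeros(T)
# Hypothetical two-node system with a directed influence from x to y.
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def resid_var(target, predictors):
    """Residual variance (and residuals) of an order-1 linear autoregression."""
    X = np.column_stack([p[:-1] for p in predictors])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    r = target[1:] - X @ beta
    return r.var(), r

# Directed (lag-based) GC: log ratio of reduced- vs. full-model residual variance.
vy_full, ry = resid_var(y, [y, x])
vy_red, _ = resid_var(y, [y])
F_x_to_y = np.log(vy_red / vy_full)

vx_full, rx = resid_var(x, [x, y])
vx_red, _ = resid_var(x, [x])
F_y_to_x = np.log(vx_red / vx_full)

# Instantaneous GC from the correlation of the full-model residuals.
rho = np.corrcoef(rx, ry)[0, 1]
F_inst = -np.log(1.0 - rho ** 2)

# The decomposition in equation (1): full measure as the sum of the three terms.
F_total = F_x_to_y + F_y_to_x + F_inst
```

For this simulated x-to-y coupling, F_x_to_y comes out large while F_y_to_x and F_inst stay near zero, illustrating how the decomposition separates directed from instantaneous dependence.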
Simulated fMRI time series were generated using a two-stage model (2): the first stage was a latent variable model describing neural dynamics, and the second stage convolved these dynamics with the hemodynamic response function (HRF) to obtain the simulated fMRI time series: ẋ = Ax + ε, y = H ∗ x (2) where A is the neural (“ground truth”) connectivity matrix, x is the neural time series, ẋ is dx/dt, H is the canonical hemodynamic response function (HRF; simulated with spm_hrf in SPM8 software), ∗ is the convolution operation, y is the simulated BOLD time series, and ε is i.i.d. Gaussian noise. Other than the noise ε, no external input was included in these simulations. Similar models have been employed widely for simulating fMRI time series data previously [22][2][20]. First, we sought to demonstrate the complementary nature of connections estimated by iGC and dGC. For this, we used network configuration H, shown in Fig. 1A. Note that this corresponds to two non-interacting sub-networks, each operating at distinctly different timescales (50 ms and 2000 ms node decay times, respectively), as revealed by the eigenspectrum of the connectivity matrix (Fig. 1C). For convenience, we term these two timescales “fast” and “slow”. Moreover, each sub-network operated with a distinct pattern of connectivity, either purely feedforward, or with feedback (E-I). Dynamics were simulated with a 1 ms integration step (Euler scheme), convolved with the HRF and then downsampled to 0.5 Hz resolution (interval of 2 s) to match the sampling rate (repeat time, TR) of typical fMRI recordings. Second, we sought to demonstrate the ability of dGC to recover functional interactions at distinct timescales. For this, we simulated a different network configuration J, whose connectivity matrix is shown in Fig. 1B. This network comprised three non-interacting sub-networks operating at three distinct timescales (50 ms, 0.5 s, and 5 s node decay times; eigenspectrum in Fig. 1C). As before, simulated dynamics were downsampled at various rates – 20 Hz, 2 Hz, 0.2 Hz – corresponding to sampling intervals of 50 ms, 0.5 s, and 5 s, respectively. The middle interval (0.5 s) is closest to the repeat time (TR=0.7 s) of the experimental fMRI data used in our analyses; the first and last intervals were chosen to be one order of magnitude faster and slower, respectively. Sufficiently long (3000 s) simulated fMRI time series were generated for each network configuration (H and J). Sample time series from a subset of these simulations before and after hemodynamic convolution and downsampling are shown in Fig. 1D.
Figure 2: Connectivity estimated from simulated data. (A) iGC and dGC values estimated from simulated fMRI time series, network H. (Leftmost) Ground truth connectivity used in simulations. (Top) Estimated iGC connectivity matrix (left) and significant connections (right, p<0.05), estimated by a bootstrap procedure using 1000 phase-scrambled surrogates [18]. (Bottom) Same as top panel, but for dGC. (B) dGC estimates from simulated fMRI time series, network J, sampled at three different sampling intervals: 50 ms (left), 500 ms (middle) and 5 s (right). In each case the estimated dGC matrix and significant connections are shown, with the same conventions as in panel (A).
2.3 Instantaneous and lag-based GC identify complementary connectivity patterns Our goal was to test if the ground truth neural connectivity matrix (A in equation 2) could be estimated by applying iGC and dGC to the fMRI time series y. dGC was estimated from the time series with the MVGC toolbox (GCCA mode) [1][19] and iGC was estimated from the MVAR residuals [16]. For simulations with network configuration H, iGC and dGC identified connectivity patterns that differed in two key respects (Fig. 2A).
First, iGC identified feedforward interactions at both fast and slow timescales whereas dGC was able to estimate only the slow interactions, which occurred at a timescale comparable to the sampling rate of the measurement. Second, dGC was able to identify the presence of the E-I feedback connection at the slow timescale, whereas iGC entirely failed to estimate this connection. In the Supplementary Material (Section S2), we show theoretically why iGC can identify mutually excitatory or mutually inhibitory feedback connections, but fails to identify the presence of reciprocal excitatory-inhibitory (E-I) feedback connections, particularly when the connection strengths are balanced. For simulations with network configuration J, dGC identified distinct connections depending on the sampling rate. At the highest sampling rate (20 Hz), connections at the fastest timescales (50 ms) were estimated most effectively, whereas at the slowest sampling rates (0.2 Hz), only the slowest timescale connections (5 s) were estimated; intermediate sampling rates (2 Hz) estimated connections at intermediate timescales (0.5 s). Thus, dGC estimated robustly those connections whose process timescale was closest to the sampling rate of the data. The first finding — that connections at fast timescales (50 ms) could not be estimated from data sampled at much lower rates (0.2 Hz) — is expected, and in line with previous findings. However, the converse finding — that the slowest timescale connections (5 s) could not be detected at the fastest sampling rates (20 Hz) — was indeed surprising. To better understand these puzzling findings, we performed simulations over a wide range of sampling rates for each of these connection timescales; the results are shown in Supplementary Figure S1. 
dGC values (both with and without convolution with the hemodynamic response function) systematically increased from baseline, peaked at a sampling rate corresponding to the process timescale, and decreased rapidly at higher sampling rates, matching recent analytical findings [2]. Thus, dGC for connections at a particular timescale was highest when the data were sampled at a rate that closely matched that timescale. Two key conclusions emerged from these simulations. First, functional connections estimated by dGC can be distinct from and complementary to connections identified by iGC, both spatially and temporally. Second, connections that operate at distinct timescales can be detected by estimating dGC on data sampled at distinct rates that match the timescales of the underlying processes. 3 Experimental Validation We demonstrated the success of instantaneous and lag-based GC in accurately estimating functional connectivity with simulated fMRI data. Nevertheless, application of GC measures to real fMRI data is fraught with significant caveats, associated with hemodynamic confounds and measurement noise, as described above. We asked whether, despite these confounds, iGC and dGC would be able to produce reliable estimates of connectivity in real fMRI data. Moreover, as with simulated data, would iGC and dGC reveal complementary patterns of connectivity that varied reliably with different task conditions? 3.1 Machine learning, cross-validation and recursive feature elimination We analyzed minimally preprocessed brain scans of 500 subjects, drawn from the Human Connectome Project (HCP) database [12]. We analyzed data from resting state and seven other task conditions (total of 4000 scans; Supplementary Table S1). In the main text we present results for classifying the resting state from the language task; the other classifications are reported in the Supplementary Material.
The language task involves subjects listening to short segments of stories and evaluating semantic content in the stories. This task is expected to robustly engage a network of language-processing regions in the brain. The resting state scans served as a “task-free” baseline for comparison. Brain volumes were parcellated with a 14-network atlas [21] (see Supplementary Material Section S3; Supplementary Table S2). Network time series were computed by averaging time series across all voxels in a given network, using Matlab and SPM8. These multivariate network time series were then fit with an MVAR model (Supplementary Material Section S1). Model order was determined with the Akaike Information Criterion for each subject; it was typically 1 and did not change with further downsampling of the data (see next section). The MVAR model fit was then used to estimate both an instantaneous connectivity matrix using iGC (Fx◦y) and a lag-based connectivity matrix using dGC (Fx→y). The connection strengths in these matrices were used as feature vectors in a linear classifier based on support vector machines (SVMs) for high-dimensional predictor data. We used Matlab’s fitclinear function, optimizing hyperparameters using a 5-fold approach: estimating hyperparameters with five sets of 100 subjects in turn, and measuring classification accuracies with the remaining 400 subjects; the only exception was the classification analysis with averaged GC matrices (Fig. 3B), for which classification was run with default hyperparameters (regularization strength = 1/(cardinality of training set), ridge penalty). The number of features for iGC-based classification was 91 (upper triangular portion of the symmetric 14×14 iGC matrix) and for dGC-based classification was 182 (all entries of the 14×14 dGC matrix, barring self-connections on the main diagonal). Based on these functional connectivity features, we asked if we could reliably predict the task condition (e.g.
language versus resting). Classification performance was tested with leave-one-out and k-fold cross-validation. We also assessed the significance of the classification accuracy with permutation testing [14] (Supplementary Material, Section S4). Finally, we wished to identify a key set of connections that permitted accurately classifying task from resting states. To accomplish this, we applied a two-stage recursive feature elimination (RFE) algorithm [5], which identified a minimal set of features that provided maximal cross-validation accuracy (generalization performance). Details are provided in the Supplementary Material (Section S5, Supplementary Figs. S2-S3).
Figure 3: Classification based on GC connectivity estimates in real data. (A) Leave-one-out classification accuracies for different GC measures for the 14-network parcellation (left) and the 90-node parcellation (right). Within each group, the first two bars represent the classification accuracy with dGC and iGC respectively. The third bar is the classification accuracy with fGC (see equation 1). Chance: 50% (two-way classification). Error-bars: Clopper-Pearson binomial confidence intervals. (B) Classification accuracy when the classifier is tested on average GC matrices, as a function of the number of subjects being averaged (see text for details).
3.2 Instantaneous and lag-based GC reliably distinguish task from rest Both iGC and dGC connectivity were able to distinguish task from resting state significantly above chance (Fig. 3A). Average leave-one-out cross-validation accuracy was 80.0% with iGC and 83.4% with dGC (Fig. 3A, left). Both iGC and dGC classification exhibited high precision and recall at identifying the language task (precision = 0.81, recall = 0.78 for iGC; precision = 0.85, recall = 0.81 for dGC).
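The feature-vector construction described in Section 3.1 (91 features from the symmetric iGC matrix, 182 from the asymmetric dGC matrix) amounts to simple index selection; a minimal sketch with random stand-in matrices in place of estimated GC values:

```python
import numpy as np

n = 14  # number of networks in the atlas

# iGC is symmetric: keep only the strictly upper triangular entries,
# i.e. 14*13/2 = 91 unique connection strengths.
igc = np.random.default_rng(2).normal(size=(n, n))
igc = (igc + igc.T) / 2
igc_features = igc[np.triu_indices(n, k=1)]

# dGC is directed (asymmetric): keep all off-diagonal entries,
# i.e. 14*13 = 182 connection strengths, dropping self-connections.
dgc = np.random.default_rng(3).normal(size=(n, n))
mask = ~np.eye(n, dtype=bool)
dgc_features = dgc[mask]
```

The same indexing generalizes to the 90-node parcellation by changing n, which is why the dGC feature space is always exactly twice the size of the iGC one.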
k-fold (k=10) cross-validation accuracy was also similar for both GC measures (79.4% for iGC and 83.7% for dGC). dGC and iGC are complementary measures of linear dependence, by their definition. We asked if combining them would produce better classification performance. We combined dGC and iGC in two ways. First, we performed classification after pooling features (connectivity matrices) across both dGC and iGC (“iGC ∪ dGC”). Second, we estimated the full GC measure (Fx,y), which is a direct sum of dGC and iGC estimates (see equation 1). Both of these approaches yielded marginally higher classification accuracies – 88.2% for iGC ∪ dGC and 84.6% for fGC – than dGC or iGC alone. Next, we asked if classification would be more accurate if we averaged the GC measures across a few subjects, to remove uncorrelated noise (e.g. measurement noise) in connectivity estimates. For this, the data were partitioned into two groups of 250 subjects: a training (T) group and a test (S) group. The classifier was trained on group T and the classifier prediction was tested by averaging GC matrices across several folds of S, each fold containing a few (m = 2, 4, 5, 10, or 25) subjects. Prediction accuracy for both dGC and iGC reached ∼90% with averaging as few as two subjects’ GC matrices, and reached ∼100% with averaging 10 subjects’ matrices (Fig. 3B). We also tested if these classification accuracies were brain atlas or cognitive task specific. First, we tested an alternative atlas with 90 functional nodes based on a finer regional parcellation of the 14 functional networks [21]. Classification accuracies for iGC and fGC improved (87.9% and 90.8%, respectively), and for dGC remained comparable (81.4%), relative to the 14-network case (Fig. 3A, right). Second, we performed the same GC-based classification analysis for six other tasks drawn from the HCP database (Supplementary Table S1).
We discovered that all of the remaining six tasks could be classified from the resting state with accuracy comparable to the language versus resting classification (Supplementary Fig. S4). Finally, we asked how iGC and dGC classification accuracies would compare to those of other functional connectivity estimators. For example, partial correlations (PC) have been proposed as a robust measure of functional connectivity in previous studies [22]. Classification accuracies for PC varied between 81-96% across tasks (Supplementary Fig. S5B). PC’s better performance is expected: estimators based on same-time covariance are less susceptible to noise than those based on lagged covariance, a result we derive analytically in the Supplementary Material (Section S6). Also, when classifying language task versus rest, PC and iGC relied on largely overlapping connections (∼60% overlap) whereas PC and dGC relied on largely non-overlapping connections (∼25% overlap; Supplementary Fig. S5C). These results highlight the complementary nature of PC and dGC connectivity. Moreover, we demonstrate, both with simulations and with real data, that classification accuracy with GC typically increased with more scan timepoints, consistent with GC being an information-theoretic measure (Supplementary Fig. S6).
Figure 4: Maximally discriminative connections identified with RFE. (A) (Top) iGC connections that were maximally discriminative between the language task and resting state, identified using recursive feature elimination (RFE). Darker gray shades denote more discriminative connections (higher beta weights). (Bottom) RFE curves, with classification accuracy plotted as a function of the number of remaining features. The dots mark the elbow-points of the RFE curves, corresponding to the optimal number of discriminative connections. (B) Same as in (A), except that RFE was performed on dGC connectivity matrices with data sampled at 1x, 3x, 5x, and 7x of the original sampling interval (TR=0.72 s). Non-zero value at (i, j) corresponds to a connection from node j to node i (column to row).
These superior classification accuracies show that, despite conventional caveats for estimating GC with fMRI data, both iGC and dGC yield functional connectivity estimates that are reliable across subjects. Moreover, dGC’s lag-based functional connectivity provides a robust feature space for classifying brain states into task or rest. In addition, we found that dGC connectivity can be used to predict task versus rest brain states with near-perfect (>95-97%) accuracy by averaging connectivity estimates across as few as 10 subjects, further confirming the robustness of these estimates. 3.3 Characterizing brain functional networks at distinct timescales Recent studies have shown that brain regions, across a range of species, operate at diverse timescales. For example, a recent calcium imaging study demonstrated the occurrence of fast (∼100 ms) and slow (∼1 s) functional interactions in mouse cortex [17]. In non-human primates, cortical brain regions operate at a hierarchy of intrinsic timescales, with the sensory cortex operating at faster timescales compared to prefrontal cortex [13]. In the resting human brain, cortical regions organize into a hierarchy of functionally-coupled networks characterized by distinct timescales [24]. It is likely that these characteristic timescales of brain networks are also modulated by task demands. We asked if the framework presented in our study could characterize brain networks operating at distinct timescales across different tasks (and rest) from fMRI data.
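The recursive feature elimination used in this section can be sketched as a backward-elimination loop: fit a linear classifier, drop the feature with the smallest absolute weight, and track held-out accuracy as the feature set shrinks. This is a single-stage illustration on synthetic data (the paper uses a two-stage RFE [5] with cross-validation); the informative-feature indices, effect size, and dimensions are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 400, 50                     # hypothetical subjects x connections
informative = [3, 17, 29]          # hypothetical discriminative connections
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)
X = rng.normal(0.0, 1.0, (n, d))
X[:, informative] += 1.0 * y[:, None]  # condition-dependent shift

train, test = np.arange(300), np.arange(300, 400)

def fit(cols):
    # Least-squares linear classifier on the active feature columns.
    w, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
    return w

def acc(w, cols):
    return float(np.mean(np.sign(X[np.ix_(test, cols)] @ w) == y[test]))

# Backward elimination: drop the feature with the smallest absolute
# weight each round, recording (n_features, held-out accuracy, features).
active, trace = list(range(d)), []
while len(active) >= 3:
    w = fit(active)
    trace.append((len(active), acc(w, active), list(active)))
    active.pop(int(np.argmin(np.abs(w))))

best_acc = max(a for _, a, _ in trace)
at_10 = next(cols for k, _, cols in trace if k == 10)
```

Plotting accuracy against the number of remaining features yields an RFE curve like those in Fig. 4; the elbow of that curve is the "minimal set of maximally discriminative features" the paper reports. In this sketch the truly informative features survive deep into the elimination.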
We had already observed, in simulations, that instantaneous and lag-based GC measures identified functional connections that operate at different timescales (Fig. 2A). We asked if these measures could identify connections at fast versus slow timescales (compared to TR=0.72 s) that were specific to task versus rest, from fMRI recordings. To identify these task-specific connections, we performed recursive feature elimination (described in Supplementary Material, Section S5) with the language task and resting state scans, separately with iGC and dGC features (connections). Prior to analysis of real data, we validated RFE by applying it to estimate key differences in two simulated networks (Supplementary Material Fig. S2 and Fig. S3). RFE accurately identified connections that differed in simulation “ground truth”: specifically, differences in fast-timescale connections were identified by iGC, and in slow-timescale connections by dGC. When applied to the language task versus resting state fMRI data, RFE identified a small subset of 18(/91) connections based on iGC (Fig. 4A), and an overlapping but non-identical set of 17(/182) connections based on dGC (Fig. 4B); these connections were key to distinguishing task (language)
Moreover, iGC identified a connection between the language network and the basal ganglia whereas dGC, in addition, identified the directionality of the connection, as being from the language network to the basal ganglia. In summary, dGC and iGC identified several complementary connections, but dGC alone identified many connections with the language network, indicating that slow processes in this network significantly distinguished language from resting states. Next, we tested whether estimating dGC after systematically downsampling the fMRI time series would permit identifying maximally discriminative connections at progressively slower timescales. To avoid degradation of GC estimates because of fewer numbers of samples with downsampling (by decimation), we concatenated the different downsampled time series to maintain an identical total number of samples. RFE was applied to GC estimates based on data sampled at different rates: 1.4 Hz, 0.5 Hz, 0.3 Hz and 0.2 Hz corresponding to 1x, 3x, 5x, and 7x of TR (sampling period of 0.72 s, 2.16 s, 3.6 s and 5.04 s), respectively. RFE with dGC identified 17(/182) key connections at each of these timescales (Fig. 4B). Interestingly, some connections manifested in dGC estimates across all sampling rates. For instance, the connection from the precuneus to the language network was important for classification across all sampling rates (Fig. 5C). On the other hand, connections between the language network and various other networks manifested at specific sampling rates only. For instance an outgoing connection from the language network to the basal ganglia manifested only at the 1.4 Hz sampling rate, to the visuospatial network and default mode networks only at 0.5 Hz, to the higher-visual network only at 0.2-0.3 Hz, and an incoming connection from the anterior salience only at 0.2 Hz. None of these connections were identified by the iGC classifier (compare Fig. 5A and 5C). 
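One plausible reading of the downsample-and-concatenate step described above is to decimate by a factor k and then concatenate the k phase-shifted decimations, so that the total number of samples available to the MVAR fit is (nearly) preserved. A sketch under that assumption, with boundary handling simplified:

```python
import numpy as np

def downsample_concat(ts, k):
    """Decimate a 1-D series by factor k, then concatenate the k
    phase-shifted decimations so the total sample count is preserved
    (up to trimming the tail so every phase has equal length)."""
    n = (len(ts) // k) * k
    phases = [ts[p:n:k] for p in range(k)]
    return np.concatenate(phases)

ts = np.arange(1200, dtype=float)  # stand-in fMRI time series (TR = 0.72 s)
for k in (1, 3, 5, 7):             # 1x, 3x, 5x, 7x downsampling factors
    out = downsample_concat(ts, k)
```

Each phase is a valid series sampled at the slower rate, and concatenating them keeps the sample count roughly constant across downsampling factors, which is what prevents the GC estimates at 3x-7x from degrading purely for lack of data.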
Similar timescale-generic and timescale-specific connections were observed in other tasks as well (Supplementary Fig. S7). Despite downsampling, RFE accuracies were significantly above chance, although accuracies decreased at lower sampling rates (Fig. 4, lower panels) [20]. Thus, dGC identified distinct connectivity profiles for data sampled at different timescales, without significantly compromising classification performance. Finally, we sought to provide independent evidence to confirm whether these network connections operated at different timescales. For this, we estimated the average cross coherence (Supplementary Material, Section S7) between the fMRI time series of two connections from the language network that were identified by RFE exclusively at 0.2-0.3 Hz (language to higher visual) and 0.5 Hz (language to visuospatial) sampling rates, respectively (Fig. 5C). Each connection exhibited an extremum in the coherence plot at a frequency which closely matched the respective connection’s timescale (Fig. 5D). These findings, from experimental data, provide empirical validation of our simulation results, which indicate that estimating dGC on downsampled data is a tenable approach for identifying functional connections that operate at specific timescales. 4 Conclusions These results contain three novel insights. First, we show that two measures of conditional linear dependence – instantaneous and directed Granger-Geweke causality – provide robust measures of functional connectivity in the brain, resolving over a decade of controversy in the field [23][22]. Second, functional connections identified by iGC and dGC carry complementary information, both in simulated and in real fMRI recordings. In particular, dGC is a powerful approach for identifying reciprocal excitatory-inhibitory connections, which are easily missed by iGC and other same-time correlation-based metrics like partial correlations [22].
Third, when processes at multiple timescales exist in the data, our results show that downsampling the time series to different extents provides an effective method for recovering connections at these distinct timescales. Our simulations highlight the importance of capturing emergent timescales in simulations of neural data. For instance, a widely cited study [22] employed purely feedforward connectivity matrices with a 50 ms neural timescale in their simulations, and argued that functional connections are not reliably inferred with GC on fMRI data. However, such connectivity matrices preclude the occurrence of

[Figure 5: Connectivity at different timescales. (A-B) Discriminative connections identified exclusively by iGC (teal), exclusively by dGC (blue), or by both (yellow). Each connection is represented as a band going from a source node on the left to a destination node on the right. (C) (Top) Discriminative connections identified by dGC, exclusively at different sampling intervals (1x, 3x, 5x, 7x TR). (D) (Left) Directed connection between the language network and the visuospatial network identified by dGC with fMRI data sampled at 0.5 Hz (sampling interval, 3x TR). (Right) Directed connection between the language network and the higher visual network identified by dGC with fMRI data sampled at 0.3 Hz (sampling interval, 5x TR). (Lower plots) Cross coherence between the respective network time series.]
[Figure 5 caption, continued: Shaded area indicates frequencies from Fs/2 to Fs, where Fs is the sampling rate of the fMRI time series from which dGC was estimated.]

slower, behaviorally relevant timescales of seconds, which readily emerge in the presence of feedback connections, both in simulations [8][15] and in the brain [17][24]. Our simulations explicitly incorporated these slow timescales to show that connections at these timescales could be robustly estimated with GC on simulated fMRI data. Moreover, we show that such slow interactions also occur in human brain networks. Our approach is particularly relevant for studies that seek to investigate dynamic functional connectivity with slow sampling techniques, such as fMRI or calcium imaging. Our empirical validation of the robustness of GC measures, by applying machine learning to fMRI data from 500 subjects (and 4000 functional scans), is widely relevant for studies that seek to apply GC to estimate directed functional networks from fMRI data. Although scanner noise or hemodynamic confounds can influence GC estimates in fMRI data [20][4], our results demonstrate that dGC contains enough directed connectivity information for robust prediction, reaching over 95% validation accuracy when averaging the connectivity matrices of even as few as 10 subjects (Fig. 3B). These results strongly indicate the existence of slow information flow networks in the brain that can be meaningfully inferred from fMRI data. Future work will test whether these functional networks influence behavior at distinct timescales.

Acknowledgments. This research was supported by a Wellcome Trust DBT-India Alliance Intermediate Fellowship, a SERB Early Career Research award, a Pratiksha Trust Young Investigator award, a DBT-IISc Partnership program grant, and a Tata Trusts grant (all to DS). We would like to thank Hritik Jain for help with data analysis.

References

[1] L. Barnett and A. K. Seth. The MVGC multivariate Granger causality toolbox: A new approach to Granger-causal inference.
Journal of Neuroscience Methods, 223:50–68, 2014. [2] L. Barnett and A. K. Seth. Detectability of Granger causality for subsampled continuous-time neurophysiological processes. Journal of Neuroscience Methods, 275:93–121, 2017. [3] A. M. Bastos, J. Vezoli, C. A. Bosman, J.-M. Schoffelen, R. Oostenveld, J. R. Dowdall, P. De Weerd, H. Kennedy, and P. Fries. Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2):390–401, 2015. [4] C. Chang, M. E. Thomason, and G. H. Glover. Mapping and correction of vascular hemodynamic latency in the BOLD signal. NeuroImage, 43(1):90–102, 2008. [5] F. De Martino, G. Valente, N. Staeren, J. Ashburner, R. Goebel, and E. Formisano. Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns. NeuroImage, 43(1):44–58, 2008. [6] M. Dhamala, G. Rangarajan, and M. Ding. Analyzing information flow in brain networks with nonparametric Granger causality. NeuroImage, 41(2):354–362, 2008. [7] K. J. Friston, A. Mechelli, R. Turner, and C. J. Price. Nonlinear responses in fMRI: the Balloon model, Volterra kernels, and other hemodynamics. NeuroImage, 12(4):466–477, 2000. [8] S. Ganguli, J. W. Bisley, J. D. Roitman, M. N. Shadlen, M. E. Goldberg, and K. D. Miller. One-dimensional dynamics of attention and decision making in LIP. Neuron, 58(1):15–25, 2008. [9] I. M. Gel'fand and A. M. Yaglom. Calculation of the amount of information about a random function contained in another such function. American Mathematical Society Translations, 12(1):199–246, 1959. [10] J. Geweke. Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association, 77(378):304–313, 1982. [11] J. F. Geweke. Measures of conditional linear dependence and feedback between time series. Journal of the American Statistical Association, 79(388):907–915, 1984. [12] M. F. Glasser, S. N. Sotiropoulos, J. A.
Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster, J. R. Polimeni, et al. The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80:105–124, 2013. [13] J. D. Murray, A. Bernacchia, D. J. Freedman, R. Romo, J. D. Wallis, X. Cai, C. Padoa-Schioppa, T. Pasternak, H. Seo, D. Lee, et al. A hierarchy of intrinsic timescales across primate cortex. Nature Neuroscience, 17(12):1661–1663, 2014. [14] M. Ojala and G. C. Garriga. Permutation tests for studying classifier performance. Journal of Machine Learning Research, 11:1833–1863, 2010. [15] K. Rajan and L. Abbott. Eigenvalue spectra of random matrices for neural networks. Physical Review Letters, 97(18):188104, 2006. [16] A. Roebroeck, E. Formisano, and R. Goebel. Mapping directed influence over the brain using Granger causality and fMRI. NeuroImage, 25(1):230–242, 2005. [17] C. A. Runyan, E. Piasini, S. Panzeri, and C. D. Harvey. Distinct timescales of population coding across cortex. Nature, 548(7665):92–96, 2017. [18] S. Ryali, K. Supekar, T. Chen, and V. Menon. Multivariate dynamical systems models for estimating causal interactions in fMRI. NeuroImage, 54(2):807–823, 2011. [19] A. K. Seth. A MATLAB toolbox for Granger causal connectivity analysis. Journal of Neuroscience Methods, 186(2):262–273, 2010. [20] A. K. Seth, P. Chorley, and L. C. Barnett. Granger causality analysis of fMRI bold signals is invariant to hemodynamic convolution but not downsampling. NeuroImage, 65:540–555, 2013. [21] W. Shirer, S. Ryali, E. Rykhlevskaia, V. Menon, and M. Greicius. Decoding subject-driven cognitive states with whole-brain connectivity patterns. Cerebral Cortex, 22(1):158–165, 2012. [22] S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich. Network modelling methods for fMRI. NeuroImage, 54(2):875–891, 2011. [23] D. Sridharan, D. J. Levitin, and V. Menon. 
A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proceedings of the National Academy of Sciences, 105(34):12569–12574, 2008. [24] D. Vidaurre, S. M. Smith, and M. W. Woolrich. Brain network dynamics are hierarchically organized in time. Proceedings of the National Academy of Sciences, 114(48):12827–12832, 2017.
Improved Training of Wasserstein GANs

Ishaan Gulrajani1*, Faruk Ahmed1, Martin Arjovsky2, Vincent Dumoulin1, Aaron Courville1,3
1 Montreal Institute for Learning Algorithms
2 Courant Institute of Mathematical Sciences
3 CIFAR Fellow
igul222@gmail.com
{faruk.ahmed,vincent.dumoulin,aaron.courville}@umontreal.ca
ma4371@nyu.edu

Abstract

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.†

1 Introduction

Generative Adversarial Networks (GANs) [9] are a powerful class of generative models that cast generative modeling as a game between two networks: a generator network produces synthetic data given some noise source and a discriminator network discriminates between the generator's output and true data. GANs can produce very visually appealing samples, but are often hard to train, and much of the recent work on the subject [22, 18, 2, 20] has been devoted to finding ways of stabilizing training. Despite this, consistently stable training of GANs remains an open problem. In particular, [1] provides an analysis of the convergence properties of the value function being optimized by GANs.
Their proposed alternative, named Wasserstein GAN (WGAN) [2], leverages the Wasserstein distance to produce a value function which has better theoretical properties than the original. WGAN requires that the discriminator (called the critic in that work) must lie within the space of 1-Lipschitz functions, which the authors enforce through weight clipping.

Our contributions are as follows:
1. On toy datasets, we demonstrate how critic weight clipping can lead to undesired behavior.
2. We propose gradient penalty (WGAN-GP), which does not suffer from the same problems.
3. We demonstrate stable training of varied GAN architectures, performance improvements over weight clipping, high-quality image generation, and a character-level GAN language model without any discrete sampling.

*Now at Google Brain
†Code for our models is available at https://github.com/igul222/improved_wgan_training.

2 Background

2.1 Generative adversarial networks

The GAN training strategy is to define a game between two competing networks. The generator network maps a source of noise to the input space. The discriminator network receives either a generated sample or a true data sample and must distinguish between the two. The generator is trained to fool the discriminator. Formally, the game between the generator G and the discriminator D is the minimax objective

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x))] + \mathbb{E}_{\tilde{x} \sim P_g}[\log(1 - D(\tilde{x}))], \qquad (1)$$

where $P_r$ is the data distribution and $P_g$ is the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$ (the input z to the generator is sampled from some simple noise distribution, such as the uniform distribution or a spherical Gaussian distribution). If the discriminator is trained to optimality before each generator parameter update, then minimizing the value function amounts to minimizing the Jensen-Shannon divergence between $P_r$ and $P_g$ [9], but doing so often leads to vanishing gradients as the discriminator saturates.
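As a concrete illustration of the minimax objective in Eq. (1), the following sketch evaluates a Monte Carlo estimate of the value function for two fixed toy "discriminators" on invented 1-D samples. This is not the paper's code; the logistic discriminator and sample distributions are assumptions made purely for illustration.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def gan_value(real, fake, D):
    """Monte Carlo estimate of E_{x~Pr}[log D(x)] + E_{x~Pg}[log(1 - D(x))]
    from Eq. (1). The discriminator maximizes this; the generator minimizes it."""
    term_real = sum(math.log(D(x)) for x in real) / len(real)
    term_fake = sum(math.log(1.0 - D(x)) for x in fake) / len(fake)
    return term_real + term_fake

# Toy 1-D data: real samples near +1, generated samples near -1 (illustrative).
random.seed(0)
real = [1.0 + 0.1 * random.gauss(0, 1) for _ in range(1000)]
fake = [-1.0 + 0.1 * random.gauss(0, 1) for _ in range(1000)]

D = lambda x: sigmoid(4.0 * x)   # a confident discriminator
blind = lambda x: 0.5            # an uninformative discriminator

print(gan_value(real, fake, D))      # near 0: D separates the two sets well
print(gan_value(real, fake, blind))  # 2*log(0.5): the uninformative baseline
```

An uninformative discriminator yields exactly 2 log(1/2); a discriminator that separates the distributions pushes the value toward 0, which is why the value saturates once D becomes too good.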
In practice, [9] advocates that the generator be instead trained to maximize $\mathbb{E}_{\tilde{x} \sim P_g}[\log(D(\tilde{x}))]$, which goes some way to circumvent this difficulty. However, even this modified loss function can misbehave in the presence of a good discriminator [1].

2.2 Wasserstein GANs

[2] argues that the divergences which GANs typically minimize are potentially not continuous with respect to the generator's parameters, leading to training difficulty. They propose instead using the Earth-Mover (also called Wasserstein-1) distance W(q, p), which is informally defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p (where the cost is mass times transport distance). Under mild assumptions, W(q, p) is continuous everywhere and differentiable almost everywhere. The WGAN value function is constructed using the Kantorovich-Rubinstein duality [24] to obtain

$$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] \qquad (2)$$

where $\mathcal{D}$ is the set of 1-Lipschitz functions and $P_g$ is once again the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$. In that case, under an optimal discriminator (called a critic in the paper, since it's not trained to classify), minimizing the value function with respect to the generator parameters minimizes $W(P_r, P_g)$. The WGAN value function results in a critic function whose gradient with respect to its input is better behaved than its GAN counterpart, making optimization of the generator easier. Additionally, WGAN has the desirable property that its value function correlates with sample quality, which is not the case for GANs. To enforce the Lipschitz constraint on the critic, [2] propose to clip the weights of the critic to lie within a compact space [−c, c]. The set of functions satisfying this constraint is a subset of the k-Lipschitz functions for some k which depends on c and the critic architecture.
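The Earth-Mover distance has a simple closed form for equal-sized empirical 1-D distributions: the optimal transport plan matches sorted samples in order, so W1 is the mean absolute difference of the sorted values. A minimal sketch (the sample values are invented; this is an aside for intuition, not part of the paper's method):

```python
def wasserstein1_1d(xs, ys):
    """W1 between two equal-sized empirical 1-D distributions.
    In 1-D the optimal coupling matches sorted samples in order,
    so the transport cost is the mean |difference| of sorted values."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

p = [0.0, 1.0, 2.0]
q = [0.5, 1.5, 2.5]           # p shifted right by 0.5
print(wasserstein1_1d(p, q))  # 0.5: every unit of mass moves a distance of 0.5
```

This also illustrates why W1 behaves well for training: shifting q continuously changes the distance continuously, unlike the JS divergence, which saturates for non-overlapping supports.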
In the following sections, we demonstrate some of the issues with this approach and propose an alternative.

2.3 Properties of the optimal WGAN critic

In order to understand why weight clipping is problematic in a WGAN critic, as well as to motivate our approach, we highlight some properties of the optimal critic in the WGAN framework. We prove these in the Appendix.

Proposition 1. Let $P_r$ and $P_g$ be two distributions in $\mathcal{X}$, a compact metric space. Then, there is a 1-Lipschitz function $f^*$ which is the optimal solution of $\max_{\|f\|_L \le 1} \mathbb{E}_{y \sim P_r}[f(y)] - \mathbb{E}_{x \sim P_g}[f(x)]$. Let $\pi$ be the optimal coupling between $P_r$ and $P_g$, defined as the minimizer of $W(P_r, P_g) = \inf_{\pi \in \Pi(P_r, P_g)} \mathbb{E}_{(x,y) \sim \pi}[\|x - y\|]$, where $\Pi(P_r, P_g)$ is the set of joint distributions $\pi(x, y)$ whose marginals are $P_r$ and $P_g$, respectively. Then, if $f^*$ is differentiable‡, $\pi(x = y) = 0$§, and $x_t = t x + (1 - t) y$ with $0 \le t \le 1$, it holds that $\mathbb{P}_{(x,y) \sim \pi}\left[\nabla f^*(x_t) = \frac{y - x_t}{\|y - x_t\|}\right] = 1$.

Corollary 1. $f^*$ has gradient norm 1 almost everywhere under $P_r$ and $P_g$.

3 Difficulties with weight constraints

We find that weight clipping in WGAN leads to optimization difficulties, and that even when optimization succeeds the resulting critic can have a pathological value surface. We explain these problems below and demonstrate their effects; however, we do not claim that each one always occurs in practice, nor that they are the only such mechanisms. Our experiments use the specific form of weight constraint from [2] (hard clipping of the magnitude of each weight), but we also tried other weight constraints (L2 norm clipping, weight normalization), as well as soft constraints (L1 and L2 weight decay), and found that they exhibit similar problems. To some extent these problems can be mitigated with batch normalization in the critic, which [2] use in all of their experiments. However, even with batch normalization, we observe that very deep WGAN critics often fail to converge.
[Figure 1: Gradient penalty in WGANs does not exhibit undesired behavior like weight clipping. (a) Value surfaces of WGAN critics trained to optimality on toy datasets (8 Gaussians, 25 Gaussians, Swiss Roll) using (top) weight clipping and (bottom) gradient penalty. Critics trained with weight clipping fail to capture higher moments of the data distribution. The 'generator' is held fixed at the real data plus Gaussian noise. (b) (Left) Gradient norms of deep WGAN critics during training on toy datasets either explode or vanish when using weight clipping (c = 0.001, 0.01, 0.1), but not when using a gradient penalty. (Right) Weight clipping (top) pushes weights towards two values (the extremes of the clipping range), unlike gradient penalty (bottom).]

3.1 Capacity underuse

Implementing a k-Lipschitz constraint via weight clipping biases the critic towards much simpler functions. As stated previously in Corollary 1, the optimal WGAN critic has unit gradient norm almost everywhere under $P_r$ and $P_g$; under a weight-clipping constraint, we observe that our neural network architectures which try to attain their maximum gradient norm k end up learning extremely simple functions. To demonstrate this, we train WGAN critics with weight clipping to optimality on several toy distributions, holding the generator distribution $P_g$ fixed at the real distribution plus unit-variance Gaussian noise. We plot value surfaces of the critics in Figure 1a. We omit batch normalization in the

‡We can actually assume much less, and talk only about directional derivatives on the direction of the line, which we show in the proof always exist.
This would imply that in every point where $f^*$ is differentiable (and thus we can take gradients in a neural network setting) the statement holds.

§This assumption is in order to exclude the case when the matching point of sample x is x itself. It is satisfied in the case that $P_r$ and $P_g$ have supports that intersect in a set of measure 0, such as when they are supported by two low-dimensional manifolds that don't perfectly align [1].

Algorithm 1 WGAN with gradient penalty. We use default values of λ = 10, n_critic = 5, α = 0.0001, β1 = 0, β2 = 0.9.
Require: The gradient penalty coefficient λ, the number of critic iterations per generator iteration n_critic, the batch size m, Adam hyperparameters α, β1, β2.
Require: initial critic parameters w0, initial generator parameters θ0.
1:  while θ has not converged do
2:    for t = 1, ..., n_critic do
3:      for i = 1, ..., m do
4:        Sample real data x ∼ P_r, latent variable z ∼ p(z), a random number ε ∼ U[0, 1].
5:        x̃ ← G_θ(z)
6:        x̂ ← εx + (1 − ε)x̃
7:        L(i) ← D_w(x̃) − D_w(x) + λ(‖∇_x̂ D_w(x̂)‖₂ − 1)²
8:      end for
9:      w ← Adam(∇_w (1/m) Σ_{i=1}^m L(i), w, α, β1, β2)
10:   end for
11:   Sample a batch of latent variables {z(i)}_{i=1}^m ∼ p(z).
12:   θ ← Adam(∇_θ (1/m) Σ_{i=1}^m −D_w(G_θ(z(i))), θ, α, β1, β2)
13: end while

critic. In each case, the critic trained with weight clipping ignores higher moments of the data distribution and instead models very simple approximations to the optimal functions. In contrast, our approach does not suffer from this behavior.

3.2 Exploding and vanishing gradients

We observe that the WGAN optimization process is difficult because of interactions between the weight constraint and the cost function, which result in either vanishing or exploding gradients without careful tuning of the clipping threshold c. To demonstrate this, we train WGAN on the Swiss Roll toy dataset, varying the clipping threshold c in {10⁻¹, 10⁻², 10⁻³}, and plot the norm of the gradient of the critic loss with respect to successive layers of activations.
Both the generator and critic are 12-layer ReLU MLPs without batch normalization. Figure 1b shows that for each of these values, the gradient either grows or decays exponentially as we move farther back in the network. We find our method results in more stable gradients that neither vanish nor explode, allowing training of more complicated networks.

4 Gradient penalty

We now propose an alternative way to enforce the Lipschitz constraint. A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere, so we consider directly constraining the gradient norm of the critic's output with respect to its input. To circumvent tractability issues, we enforce a soft version of the constraint with a penalty on the gradient norm for random samples $\hat{x} \sim P_{\hat{x}}$. Our new objective is

$$L = \underbrace{\mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)]}_{\text{Original critic loss}} + \underbrace{\lambda \, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]}_{\text{Our gradient penalty}}. \qquad (3)$$

Sampling distribution. We implicitly define $P_{\hat{x}}$ by sampling uniformly along straight lines between pairs of points sampled from the data distribution $P_r$ and the generator distribution $P_g$. This is motivated by the fact that the optimal critic contains straight lines with gradient norm 1 connecting coupled points from $P_r$ and $P_g$ (see Proposition 1). Given that enforcing the unit gradient norm constraint everywhere is intractable, enforcing it only along these straight lines seems sufficient and experimentally results in good performance.

Penalty coefficient. All experiments in this paper use λ = 10, which we found to work well across a variety of architectures and datasets ranging from toy tasks to large ImageNet CNNs.
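To make the penalized objective of Eq. (3) concrete without an autograd library, the following sketch evaluates it for a linear critic D(x) = w·x, whose input gradient is simply w everywhere (so the gradient norm is computable by hand). The critic, the sample values, and all names are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wgan_gp_loss(w, reals, fakes, lam=10.0, rng=random):
    """Critic loss from Eq. (3) for a linear critic D(x) = w.x.
    For a linear critic, grad_x D(x) = w at every point, so the
    penalty at every interpolate x_hat is lam * (||w||_2 - 1)^2."""
    d_fake = sum(dot(w, x) for x in fakes) / len(fakes)
    d_real = sum(dot(w, x) for x in reals) / len(reals)
    grad_norm = math.sqrt(dot(w, w))
    penalty = 0.0
    for x, x_tilde in zip(reals, fakes):
        eps = rng.random()  # epsilon ~ U[0, 1], as in the sampling distribution
        x_hat = [eps * a + (1 - eps) * b for a, b in zip(x, x_tilde)]
        # The gradient of D at x_hat is w regardless of x_hat (linear critic).
        penalty += (grad_norm - 1.0) ** 2
    penalty /= len(reals)
    return d_fake - d_real + lam * penalty

reals = [[1.0, 0.0], [0.8, 0.2]]
fakes = [[-1.0, 0.0], [-0.9, -0.1]]
print(wgan_gp_loss([2.0, 0.0], reals, fakes))  # ||w|| = 2 pays the penalty
print(wgan_gp_loss([1.0, 0.0], reals, fakes))  # ||w|| = 1 incurs no penalty
```

Note how the two-sided penalty pulls the gradient norm towards 1 from either side: a critic with ‖w‖ = 2 is penalized even though it separates the samples more aggressively.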
No critic batch normalization. Most prior GAN implementations [21, 22, 2] use batch normalization in both the generator and the discriminator to help stabilize training, but batch normalization changes the form of the discriminator's problem from mapping a single input to a single output to mapping from an entire batch of inputs to a batch of outputs [22]. Our penalized training objective is no longer valid in this setting, since we penalize the norm of the critic's gradient with respect to each input independently, and not the entire batch. To resolve this, we simply omit batch normalization in the critic in our models, finding that they perform well without it. Our method works with normalization schemes which don't introduce correlations between examples. In particular, we recommend layer normalization [3] as a drop-in replacement for batch normalization.

Two-sided penalty. We encourage the norm of the gradient to go towards 1 (two-sided penalty) instead of just staying below 1 (one-sided penalty). Empirically this seems not to constrain the critic too much, likely because the optimal WGAN critic anyway has gradients with norm 1 almost everywhere under $P_r$ and $P_g$ and in large portions of the region in between (see subsection 2.3). In our early observations we found this to perform slightly better, but we don't investigate this fully. We describe experiments on the one-sided penalty in the appendix.

5 Experiments

5.1 Training random architectures within a set

We experimentally demonstrate our model's ability to train a large number of architectures which we think are useful to be able to train. Starting from the DCGAN architecture, we define a set of architecture variants by changing model settings to random corresponding values in Table 1.
We believe that reliable training of many of the architectures in this set is a useful goal, but we do not claim that our set is an unbiased or representative sample of the whole space of useful architectures: it is designed to demonstrate a successful regime of our method, and readers should evaluate whether it contains architectures similar to their intended application.

Table 1: We evaluate WGAN-GP's ability to train the architectures in this set.
Nonlinearity (G): [ReLU, LeakyReLU, softplus(2x+2)/2 − 1, tanh]
Nonlinearity (D): [ReLU, LeakyReLU, softplus(2x+2)/2 − 1, tanh]
Depth (G): [4, 8, 12, 20]
Depth (D): [4, 8, 12, 20]
Batch norm (G): [True, False]
Batch norm (D; layer norm for WGAN-GP): [True, False]
Base filter count (G): [32, 64, 128]
Base filter count (D): [32, 64, 128]

From this set, we sample 200 architectures and train each on 32×32 ImageNet with both the WGAN-GP and the standard GAN objectives. Table 2 lists the number of instances where either: only the standard GAN succeeded, only WGAN-GP succeeded, both succeeded, or both failed, where success is defined as inception score > min score. For most choices of score threshold, WGAN-GP successfully trains many architectures from this set which we were unable to train with the standard GAN objective.

Table 2: Outcomes of training 200 random architectures, for different success thresholds. For comparison, our standard DCGAN achieved a score of 7.24. A longer version of this table can be found in the appendix.
Min. score | Only GAN | Only WGAN-GP | Both succeeded | Both failed
1.0 | 0 | 8 | 192 | 0
3.0 | 1 | 88 | 110 | 1
5.0 | 0 | 147 | 42 | 11
7.0 | 1 | 104 | 5 | 90
9.0 | 0 | 0 | 0 | 200

[Figure 2: Different GAN architectures (baseline G: DCGAN, D: DCGAN; G with no BN and a constant number of filters; G as a 4-layer 512-dim ReLU MLP with D: DCGAN; no normalization in either G or D; gated multiplicative nonlinearities everywhere in G and D; tanh nonlinearities everywhere in G and D; 101-layer ResNet G and D) trained with different methods (DCGAN, LSGAN, WGAN with clipping, WGAN-GP (ours)). We only succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP.]

5.2 Training varied architectures on LSUN bedrooms

To demonstrate our model's ability to train many architectures with its default settings, we train six different GAN architectures on the LSUN bedrooms dataset [30]. In addition to the baseline DCGAN architecture from [21], we choose six architectures whose successful training we demonstrate: (1) no BN and a constant number of filters in the generator, as in [2]; (2) a 4-layer 512-dim ReLU MLP generator, as in [2]; (3) no normalization in either the discriminator or generator; (4) gated multiplicative nonlinearities, as in [23]; (5) tanh nonlinearities; and (6) a 101-layer ResNet generator and discriminator. Although we do not claim it is impossible without our method, to the best of our knowledge this is the first time very deep residual networks were successfully trained in a GAN setting. For each architecture, we train models using four different GAN methods: WGAN-GP, WGAN with weight clipping, DCGAN [21], and Least-Squares GAN [17]. For each objective, we used the default set of optimizer hyperparameters recommended in that work (except LSGAN, where we searched over learning rates). For WGAN-GP, we replace any batch normalization in the discriminator with layer normalization (see Section 4). We train each model for 200K iterations and present samples in Figure 2.
We only succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP. For every other training method, some of these architectures were unstable or suffered from mode collapse.

5.3 Improved performance over weight clipping

One advantage of our method over weight clipping is improved training speed and sample quality. To demonstrate this, we train WGANs with weight clipping and our gradient penalty on CIFAR-10 [13] and plot Inception scores [22] over the course of training in Figure 3.

[Figure 3: CIFAR-10 Inception score over generator iterations (left) or wall-clock time (right) for four models: WGAN with weight clipping, WGAN-GP with RMSProp and Adam (to control for the optimizer), and DCGAN. WGAN-GP significantly outperforms weight clipping and performs comparably to DCGAN.]

For WGAN-GP, we train one model with the same optimizer (RMSProp) and learning rate as WGAN with weight clipping, and another model with Adam and a higher learning rate. Even with the same optimizer, our method converges faster and to a better score than weight clipping. Using Adam further improves performance. We also plot the performance of DCGAN [21] and find that our method converges more slowly (in wall-clock time) than DCGAN, but its score is more stable at convergence.

5.4 Sample quality on CIFAR-10 and LSUN bedrooms

For equivalent architectures, our method achieves comparable sample quality to the standard GAN objective. However, the increased stability allows us to improve sample quality by exploring a wider range of architectures.
To demonstrate this, we find an architecture which establishes a new state-of-the-art Inception score on unsupervised CIFAR-10 (Table 3). When we add label information (using the method in [19]), the same architecture outperforms all other published models except for SGAN.

Table 3: Inception scores on CIFAR-10. Our unsupervised model achieves state-of-the-art performance, and our conditional model outperforms all others except SGAN.

Unsupervised:
Method | Score
ALI [8] (in [26]) | 5.34 ± .05
BEGAN [4] | 5.62
DCGAN [21] (in [11]) | 6.16 ± .07
Improved GAN (-L+HA) [22] | 6.86 ± .06
EGAN-Ent-VI [7] | 7.07 ± .10
DFM [26] | 7.72 ± .13
WGAN-GP ResNet (ours) | 7.86 ± .07

Supervised:
Method | Score
SteinGAN [25] | 6.35
DCGAN (with labels, in [25]) | 6.58
Improved GAN [22] | 8.09 ± .07
AC-GAN [19] | 8.25 ± .07
SGAN-no-joint [11] | 8.37 ± .08
WGAN-GP ResNet (ours) | 8.42 ± .10
SGAN [11] | 8.59 ± .12

We also train a deep ResNet on 128×128 LSUN bedrooms and show samples in Figure 4.

[Figure 4: Samples of 128×128 LSUN bedrooms. We believe these samples are at least comparable to the best published results so far.]

We believe these samples are at least competitive with the best reported so far on any resolution for this dataset.

5.5 Modeling discrete data with a continuous generator

To demonstrate our method's ability to model degenerate distributions, we consider the problem of modeling a complex discrete distribution with a GAN whose generator is defined over a continuous space. As an instance of this problem, we train a character-level GAN language model on the Google Billion Word dataset [6]. Our generator is a simple 1D CNN which deterministically transforms a latent vector into a sequence of 32 one-hot character vectors through 1D convolutions. We apply a softmax nonlinearity at the output, but use no sampling step: during training, the softmax output is passed directly into the critic (which, likewise, is a simple 1D CNN). When decoding samples, we just take the argmax of each output vector.
We present samples from the model in Table 4. Our model makes frequent spelling errors (likely because it has to output each character independently) but nonetheless manages to learn quite a lot about the statistics of language. We were unable to produce comparable results with the standard GAN objective, though we do not claim that doing so is impossible.

Table 4: Samples from a WGAN character-level language model trained with our method on sentences from the Billion Word dataset, truncated to 32 characters. The model learns to directly output one-hot character embeddings from a latent vector without any discrete sampling step. We were unable to achieve comparable results with the standard GAN objective and a continuous generator.

WGAN with gradient penalty (1D CNN): Busino game camperate spent odea Solice Norkedin pring in since In the bankaway of smarling the ThiS record ( 31. ) UBS ) and Ch SingersMay , who kill that imvic It was not the annuas were plogr Keray Pents of the same Reagun D This will be us , the ect of DAN Manging include a tudancs shat " These leaded as most-worsd p2 a0 His Zuith Dudget , the Denmbern The time I paidOa South Cubry i In during the Uitational questio Dour Fraps higs it was these del Divos from The ' noth ronkies of This year out howneed allowed lo She like Monday , of macunsuer S Kaulna Seto consficutes to repor

The difference in performance between WGAN and other GANs can be explained as follows. Consider the simplex $\Delta_n = \{p \in \mathbb{R}^n : p_i \ge 0, \sum_i p_i = 1\}$, and the set of vertices on the simplex (or one-hot vectors) $V_n = \{p \in \mathbb{R}^n : p_i \in \{0, 1\}, \sum_i p_i = 1\} \subseteq \Delta_n$. If we have a vocabulary of size n and we have a distribution $P_r$ over sequences of size T, we have that $P_r$ is a distribution on $V_n^T = V_n \times \cdots \times V_n$. Since $V_n^T$ is a subset of $\Delta_n^T$, we can also treat $P_r$ as a distribution on $\Delta_n^T$ (by assigning zero probability mass to all points not in $V_n^T$).
Figure 5: (a) The negative critic loss of our model on LSUN bedrooms converges toward a minimum as the network trains. (b) WGAN training and validation losses on a random 1000-digit subset of MNIST show overfitting when using either our method (left) or weight clipping (right). In particular, with our method, the critic overfits faster than the generator, causing the training loss to increase gradually over time even as the validation loss drops.

$\mathbb{P}_r$ is discrete (or supported on a finite number of elements, namely $V_n^T$) on $\Delta_n^T$, but $\mathbb{P}_g$ can easily be a continuous distribution over $\Delta_n^T$. The KL divergences between two such distributions are infinite, and so the JS divergence is saturated. In practice, this means a discriminator might quickly learn to reject all samples that don't lie on $V_n^T$ (sequences of one-hot vectors) and give meaningless gradients to the generator. However, it is easily seen that the conditions of Theorem 1 and Corollary 1 of [2] are satisfied even in this non-standard learning scenario with $\mathcal{X} = \Delta_n^T$. This means that $W(\mathbb{P}_r, \mathbb{P}_g)$ is still well defined, continuous everywhere and differentiable almost everywhere, and we can optimize it just like in any other continuous-variable setting. The way this manifests is that in WGANs, the Lipschitz constraint forces the critic to provide a linear gradient from all of $\Delta_n^T$ towards the real points in $V_n^T$. Other attempts at language modeling with GANs [31, 14, 29, 5, 15, 10] typically use discrete models and gradient estimators [27, 12, 16]. Our approach is simpler to implement, though whether it scales beyond a toy language model is unclear.
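The Lipschitz constraint discussed above is what the paper's gradient penalty enforces: a term of the form λ E[(‖∇x̂ D(x̂)‖₂ − 1)²], with x̂ interpolated between real and generated samples. As a minimal numerical sketch (NumPy only; we use a linear critic f(x) = w·x, whose input-gradient is just w, so no autodiff is needed — a real implementation would differentiate through the network; λ = 10 is the commonly used default):

```python
import numpy as np

rng = np.random.default_rng(1)
LAMBDA = 10.0  # penalty coefficient

def critic(x, w):
    return x @ w  # linear critic: gradient w.r.t. x is w everywhere

def gradient_penalty(x_real, x_fake, w):
    # Sample x_hat uniformly on the segment between paired real/fake points.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1 - eps) * x_fake
    # For this linear critic the input-gradient at every x_hat equals w;
    # in practice this is computed by automatic differentiation.
    grad = np.tile(w, (x_hat.shape[0], 1))
    grad_norm = np.linalg.norm(grad, axis=1)
    return LAMBDA * np.mean((grad_norm - 1.0) ** 2)

x_real = rng.normal(size=(64, 3))
x_fake = rng.normal(size=(64, 3))

w_unit = np.array([0.6, 0.8, 0.0])   # ||w|| = 1: exactly 1-Lipschitz, penalty 0
w_steep = np.array([3.0, 0.0, 0.0])  # ||w|| = 3: penalized by 10 * (3-1)^2 = 40
```

A critic with unit-norm gradient pays no penalty, while one whose gradient norm strays from 1 is pulled back toward it, which is what produces the "linear gradient" behavior described above.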
5.6 Meaningful loss curves and detecting overfitting

An important benefit of weight-clipped WGANs is that their loss correlates with sample quality and converges toward a minimum. To show that our method preserves this property, we train a WGAN-GP on the LSUN bedrooms dataset [30] and plot the negative of the critic's loss in Figure 5a. We see that the loss converges as the generator minimizes $W(\mathbb{P}_r, \mathbb{P}_g)$. GANs, like all models trained on limited data, will eventually overfit. To explore the loss curve's behavior when the network overfits, we train large unregularized WGANs on a random 1000-image subset of MNIST and plot the negative critic loss on both the training and validation sets in Figure 5b. In both WGAN and WGAN-GP, the two losses diverge, suggesting that the critic overfits and provides an inaccurate estimate of $W(\mathbb{P}_r, \mathbb{P}_g)$, at which point all bets are off regarding correlation with sample quality. However, in WGAN-GP, the training loss gradually increases even while the validation loss drops. [28] also measure overfitting in GANs by estimating the generator's log-likelihood. Compared to that work, our method detects overfitting in the critic (rather than the generator) and measures overfitting against the same loss that the network minimizes.

6 Conclusion

In this work, we demonstrated problems with weight clipping in WGAN and introduced an alternative in the form of a penalty term in the critic loss which does not exhibit the same problems. Using our method, we demonstrated strong modeling performance and stability across a variety of architectures. Now that we have a more stable algorithm for training GANs, we hope our work opens the path for stronger modeling performance on large-scale image datasets and language. Another interesting direction is adapting our penalty term to the standard GAN objective function, where it might stabilize training by encouraging the discriminator to learn smoother decision boundaries.
Acknowledgements

We would like to thank Mohamed Ishmael Belghazi, Léon Bottou, Zihang Dai, Stefan Doerr, Ian Goodfellow, Kyle Kastner, Kundan Kumar, Luke Metz, Alec Radford, Sai Rajeshwar, Aditya Ramesh, Tom Sercu, Zain Shah and Jake Zhao for insightful comments.

References

[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[5] T. Che, Y. Li, R. Zhang, R. D. Hjelm, W. Li, Y. Song, and Y. Bengio. Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983, 2017.
[6] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
[7] Z. Dai, A. Almahairi, P. Bachman, E. Hovy, and A. Courville. Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[8] V. Dumoulin, M. I. D. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. 2017.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[10] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431, 2017.
[11] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial networks. arXiv preprint arXiv:1612.04357, 2016.
[12] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
[13] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[14] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017.
[15] X. Liang, Z. Hu, H. Zhang, C. Gan, and E. P. Xing. Recurrent topic-transition GAN for visual paragraph generation. arXiv preprint arXiv:1703.07022, 2017.
[16] C. J. Maddison, A. Mnih, and Y. W. Teh. The Concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[17] X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
[18] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
[19] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[20] B. Poole, A. A. Alemi, J. Sohl-Dickstein, and A. Angelova. Improved generator objectives for GANs. arXiv preprint arXiv:1612.02780, 2016.
[21] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[22] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
[23] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016.
[24] C. Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
[25] D. Wang and Q. Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
[26] D. Warde-Farley and Y. Bengio. Improving generative adversarial networks with denoising feature matching. 2017.
[27] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[28] Y. Wu, Y. Burda, R. Salakhutdinov, and R. Grosse. On the quantitative analysis of decoder-based generative models. arXiv preprint arXiv:1611.04273, 2016.
[29] Z. Yang, W. Chen, F. Wang, and B. Xu. Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887, 2017.
[30] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[31] L. Yu, W. Zhang, J. Wang, and Y. Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. arXiv preprint arXiv:1609.05473, 2016.
Adaptive stimulus selection for optimizing neural population responses

Benjamin R. Cowley1,2, Ryan C. Williamson1,2,5, Katerina Acar2,6, Matthew A. Smith∗,2,7, Byron M. Yu∗,2,3,4
1Machine Learning Dept., 2Center for Neural Basis of Cognition, 3Dept. of Electrical and Computer Engineering, 4Dept. of Biomedical Engineering, Carnegie Mellon University; 5School of Medicine, 6Dept. of Neuroscience, 7Dept. of Ophthalmology, University of Pittsburgh
bcowley@cs.cmu.edu, {rcw30, kac216, smithma}@pitt.edu, byronyu@cmu.edu
∗denotes equal contribution.

Abstract

Adaptive stimulus selection methods in neuroscience have primarily focused on maximizing the firing rate of a single recorded neuron. When recording from a population of neurons, it is usually not possible to find a single stimulus that maximizes the firing rates of all neurons. This motivates optimizing an objective function that takes into account the responses of all recorded neurons together. We propose "Adept," an adaptive stimulus selection method that can optimize population objective functions. In simulations, we first confirmed that population objective functions elicited more diverse stimulus responses than single-neuron objective functions. Then, we tested Adept in a closed-loop electrophysiological experiment in which population activity was recorded from macaque V4, a cortical area known for mid-level visual processing. To predict neural responses, we used the outputs of a deep convolutional neural network model as feature embeddings. Natural images chosen by Adept elicited mean neural responses that were 20% larger than those for randomly-chosen natural images, and also evoked a larger diversity of neural responses. Such adaptive stimulus selection methods can facilitate experiments that involve neurons far from the sensory periphery, for which it is often unclear which stimuli to present.

1 Introduction

A key choice in a neurophysiological experiment is to determine which stimuli to present.
Often, it is unknown a priori which stimuli will drive a to-be-recorded neuron, especially in brain areas far from the sensory periphery. Most studies either choose from a class of parameterized stimuli (e.g., sinusoidal gratings or pure tones) or present many randomized stimuli (e.g., white noise) to find the stimulus that maximizes the response of a neuron (i.e., the preferred stimulus) [1, 2]. However, the first approach limits the range of stimuli explored, and the second approach may not converge in a finite amount of recording time [3]. To efficiently find a preferred stimulus, studies have employed adaptive stimulus selection (also known as "adaptive sampling" or "optimal experimental design") to determine the next stimulus to show given the responses to previous stimuli in a closed-loop experiment [4, 5]. Many adaptive methods have been developed to find the smallest number of stimuli needed to fit parameters of a model that predicts the recorded neuron's activity from the stimulus [6, 7, 8, 9, 10, 11]. When no encoding model exists for a neuron (e.g., neurons in higher visual cortical areas), adaptive methods rely on maximizing the neuron's firing rate via genetic algorithms [12, 13, 14] or gradient ascent [15, 16] to home in on the neuron's preferred stimulus. To our knowledge, all current adaptive stimulus selection methods focus solely on optimizing the firing rate of a single neuron.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Responses of two macaque V4 neurons. A. Different neurons prefer different stimuli. Displayed images evoked 5 of the top 25 largest responses. B. Images placed according to their responses. Gray dots represent responses to other images. Same neurons as in A.
Developments in neural recording technologies now enable the simultaneous recording of tens to hundreds of neurons [17], each of which has its own preferred stimulus. For example, consider two neurons recorded in V4, a mid-level visual cortical area (Fig. 1A). Whereas neuron 1 responds most strongly to teddy bears, neuron 2 responds most strongly to arranged circular fruit. Both neurons respond moderately to images of animals (Fig. 1B). Given that different neurons have different preferred stimuli, how do we select which stimuli to present when simultaneously recording from multiple neurons? This necessitates defining objective functions for adaptive stimulus selection that are based on a population of neurons rather than any single neuron. Importantly, these objective functions can go beyond simply maximizing the firing rates of neurons and instead can be optimized for other attributes of the population response, such as maximizing the scatter of the responses in a multi-neuronal response space (Fig. 1B). We propose Adept, an adaptive stimulus selection method that "adeptly" chooses the next stimulus to show based on a population objective function. Because the neural responses to candidate stimuli are unknown, Adept utilizes feature embeddings of the stimuli to predict to-be-recorded responses. In this work, we use the feature embeddings of a deep convolutional neural network (CNN) for prediction. We first confirmed with simulations that Adept, using a population objective function, elicited larger mean responses and a larger diversity of responses than optimizing the response of each neuron separately. Then, we ran Adept on V4 population activity recorded during a closed-loop electrophysiological experiment. Images chosen by Adept elicited higher mean firing rates and more diverse population responses compared to randomly-chosen images.
This demonstrates that Adept is effective at finding stimuli to drive a population of neurons in brain areas far from the sensory periphery.

2 Population objective functions

Depending on the desired outcomes of an experiment, one may favor one objective function over another. Here we discuss different objective functions for adaptive stimulus selection and the resulting responses $r \in \mathbb{R}^p$, where the $i$th element $r_i$ is the response of the $i$th neuron ($i = 1, \dots, p$) and $p$ is the number of neurons recorded simultaneously. To illustrate the effects of different objective functions, we ran an adaptive stimulus selection method on the activity of two simulated neurons (see details in Section 5.1). We first consider a single-neuron objective function employed by many adaptive methods [12, 13, 14]. Using this objective function $f(r) = r_i$, which maximizes the response of the $i$th neuron of the population, the adaptive method for $i = 1$ chose stimuli that maximized neuron 1's response (Fig. 2A, red dots). However, images that produced large responses for neuron 2 were not chosen (Fig. 2A, top left gray dots). A natural population-level extension of this objective function is to maximize the responses of all neurons by defining the objective function to be $f(r) = \|r\|_2$. This objective function led to choosing stimuli that maximized responses for neurons 1 and 2 individually, as well as large responses for both neurons together (Fig. 2B). Another possible objective function is to maximize the scatter of the responses. In particular, we would like to choose the next stimulus such that the response vector $r$ is far away from the previously-seen response vectors $r_1, \dots, r_M$ after $M$ chosen stimuli. One way to achieve this is to maximize the average Euclidean distance between $r$ and $r_1$, …
, $r_M$, which leads to the objective function
$$f(r, r_1, \dots, r_M) = \frac{1}{M} \sum_{j=1}^{M} \|r - r_j\|_2.$$
This objective function led to a large scatter in responses for neurons 1 and 2 (Fig. 2C, red dots near and far from origin). This is because choosing stimuli that yield small and large responses produces the largest distances between responses. Finally, we considered an objective function that favored large responses that are far away from one another. To achieve this, we summed the objectives in Fig. 2B and 2C. The objective function
$$f(r, r_1, \dots, r_M) = \|r\|_2 + \frac{1}{M} \sum_{j=1}^{M} \|r - r_j\|_2$$
was able to uncover large responses for both neurons (Fig. 2D, red dots far from origin). It also led to a larger scatter than maximizing the norm of $r$ alone (e.g., compare red dots in bottom right of Fig. 2B and Fig. 2D). For these reasons, we use this objective function in the remainder of this work. However, the Adept framework is general and can be used with many different objective functions, including all presented in this section.

Figure 2: Different objective functions for adaptive stimulus selection yield different observed population responses (red dots). Blue * denote responses to stimuli used to initialize the adaptive method (the same for each panel).

3 Using feature embeddings to predict norms and distances

We now formulate the optimization problem using the last objective function in Section 2. Consider a pool of $N$ candidate stimuli $s_1, \dots, s_N$. After showing $(t-1)$ stimuli, we are given previously-recorded response vectors $r_{n_1}, \dots, r_{n_{t-1}} \in \mathbb{R}^p$, where $n_1, \dots, n_{t-1} \in \{1, \dots, N\}$. In other words, $r_{n_j}$ is the vector of responses to the stimulus $s_{n_j}$.
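Given known responses, the full objective above is straightforward to evaluate (a small NumPy sketch with hypothetical data; Section 3 addresses the real problem of predicting these quantities when responses are unknown):

```python
import numpy as np

def objective(r, R_seen):
    """||r||_2 plus the mean Euclidean distance from r to each seen response."""
    norm_term = np.linalg.norm(r)
    avg_dist = np.mean(np.linalg.norm(R_seen - r, axis=1))
    return norm_term + avg_dist

rng = np.random.default_rng(0)
R_seen = rng.uniform(0, 80, size=(10, 2))       # M=10 previous responses, p=2 neurons
candidates = rng.uniform(0, 80, size=(100, 2))  # hypothetical candidate responses

scores = np.array([objective(r, R_seen) for r in candidates])
best = candidates[scores.argmax()]
```

The winning candidate tends to have both a large norm and a large average distance to the previously seen responses, matching the behavior in Fig. 2D.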
At the $t$th iteration of adaptive stimulus selection, we choose the index $n_t$ of the next stimulus to show by the following:
$$n_t = \underset{s \in \{1,\dots,N\} \setminus \{n_1,\dots,n_{t-1}\}}{\arg\max} \; \|r_s\|_2 + \frac{1}{t-1} \sum_{j=1}^{t-1} \|r_s - r_{n_j}\|_2 \quad (1)$$
where $r_s$ is the unseen population response vector to stimulus $s_s$. If the $r_s$ were known, we could directly optimize Eqn. 1. However, in an online setting, we do not have access to the $r_s$. Instead, we can directly predict the norm and average distance terms in Eqn. 1 by relating distances in neural response space to distances in a feature embedding space. The key idea is that if two stimuli have similar feature embeddings, then the corresponding neural responses will have similar norms and average distances. Concretely, consider feature embedding vectors $x_1, \dots, x_N \in \mathbb{R}^q$ corresponding to candidate stimuli $s_1, \dots, s_N$. For example, we can use the activity of $q$ neurons from a CNN as a feature embedding vector for natural images [18]. To predict the norm of the unseen response vector $r_s \in \mathbb{R}^p$, we use kernel regression with the previously-recorded response vectors $r_{n_1}, \dots, r_{n_{t-1}}$ as training data [19]. To predict the distance between $r_s$ and a previously-recorded response vector $r_{n_j}$, we extend kernel regression to account for the paired nature of distances. Thus, the norm and average distance in Eqn. 1 for the unseen response vector $r_s$ to the $s$th candidate stimulus are predicted by the following:
$$\widehat{\|r_s\|}_2 = \sum_k \frac{K(x_s, x_{n_k})}{\sum_\ell K(x_s, x_{n_\ell})} \|r_{n_k}\|_2, \qquad \widehat{\|r_s - r_{n_j}\|}_2 = \sum_k \frac{K(x_s, x_{n_k})}{\sum_\ell K(x_s, x_{n_\ell})} \|r_{n_k} - r_{n_j}\|_2 \quad (2)$$
where $k, \ell \in \{1, \dots, t-1\}$. Here we use the radial basis function kernel $K(x_j, x_k) = \exp(-\|x_j - x_k\|_2^2 / h^2)$ with kernel bandwidth $h$, although other kernels can be used. We tested the performance of this approach versus three other possible prediction approaches. The first two approaches use linear ridge regression and kernel regression, respectively, to predict $r_s$. Their prediction $\hat{r}_s$ is then used to evaluate the objective in place of $r_s$.
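Eqn. 2 amounts to a kernel-weighted average of known norms and pairwise distances (a NumPy sketch on synthetic data; variable names are ours):

```python
import numpy as np

def rbf_kernel(X, x, h):
    # K(x_j, x) = exp(-||x_j - x||^2 / h^2) for each row x_j of X
    return np.exp(-np.sum((X - x) ** 2, axis=1) / h ** 2)

def predict_norm_and_dists(x_s, X_seen, R_seen, h):
    """Kernel-regression predictions of ||r_s||_2 and ||r_s - r_{n_j}||_2 (Eqn. 2)."""
    w = rbf_kernel(X_seen, x_s, h)
    w = w / w.sum()                         # normalized kernel weights
    norms = np.linalg.norm(R_seen, axis=1)  # ||r_{n_k}||_2 for each seen response
    D = np.linalg.norm(R_seen[:, None, :] - R_seen[None, :, :], axis=2)
    pred_norm = w @ norms                   # predicted ||r_s||_2
    pred_dists = w @ D                      # predicted ||r_s - r_{n_j}||_2, one per j
    return pred_norm, pred_dists

rng = np.random.default_rng(0)
X_seen = rng.normal(size=(20, 5))          # embeddings of 20 shown stimuli
R_seen = rng.uniform(0, 50, size=(20, 8))  # responses of p=8 neurons
x_s = rng.normal(size=5)                   # embedding of a candidate stimulus

pred_norm, pred_dists = predict_norm_and_dists(x_s, X_seen, R_seen, h=2.0)
```

Because the prediction is a convex combination of observed norms, it always lies between the smallest and largest observed norm, and no regression weights need to be trained, which is what makes this approach fast.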
The third approach is a linear ridge regression version of Eqn. 2 to directly predict $\|r_s\|_2$ and $\|r_s - r_{n_j}\|_2$. To compare the performance of these approaches, we developed a testbed in which we sampled two distinct populations of neurons from the same CNN, and asked how well one population can predict the responses of the other population using the different approaches described above. Formally, we let $x_1, \dots, x_N$ be feature embedding vectors of $q = 500$ CNN neurons, and response vectors $r_{n_1}, \dots, r_{n_{800}}$ be the responses of $p = 200$ different CNN neurons to 800 natural images. CNN neurons were from the same GoogLeNet CNN [18] (see CNN details in Results). To compute performance, we took the Pearson correlation $\rho$ between the predicted and actual objective values on a held-out set of responses not used for training. We also tracked the computation time $\tau$ (computed on an Intel Xeon 2.3GHz CPU with 36GB RAM) because these computations need to occur between stimulus presentations in an electrophysiological experiment. The approach in Eqn. 2 performed the best ($\rho = 0.64$) and was the fastest ($\tau = 0.2$ s) compared to the other prediction approaches ($\rho = 0.39$, $0.41$, $0.23$ and $\tau = 12.9$ s, $1.5$ s, $48.4$ s, for the three other approaches, respectively). The remarkably faster speed of Eqn. 2 over other approaches comes from the evaluation of the objective function (fast matrix operations), the fact that no training of linear regression weight vectors is needed, and the fact that distances are directly predicted (unlike the approaches that first predict $\hat{r}_s$ and then must re-compute distances between $\hat{r}_s$ and $r_{n_1}, \dots, r_{n_{t-1}}$ for each candidate stimulus $s$). Due to its performance and fast computation time, we use the prediction approach in Eqn. 2 for the remainder of this work.

4 Adept algorithm

We now combine the optimization problem in Eqn. 1 and prediction approach in Eqn. 2 to formulate the Adept algorithm. We first discuss the adaptive stimulus selection paradigm (Fig.
3, left) and then the Adept algorithm (Fig. 3, right). For the adaptive stimulus selection paradigm (Fig. 3, left), the experimenter first selects a candidate stimulus pool $s_1, \dots, s_N$ from which Adept chooses, where $N$ is large. For a vision experiment, the candidate stimulus pool could comprise natural images, textures, or sinusoidal gratings. For an auditory experiment, the stimulus pool could comprise natural sounds or pure tones. Next, feature embedding vectors $x_1, \dots, x_N \in \mathbb{R}^q$ are computed for each candidate stimulus, and the pre-computed $N \times N$ kernel matrix $K(x_j, x_k)$ (i.e., similarity matrix) is input into Adept. For visual neurons, the feature embeddings could come from a bank of Gabor-like filters with different orientations and spatial frequencies [20], or from a more expressive model, such as CNN neurons in a middle layer of a pre-trained CNN. Because Adept only takes as input the kernel matrix $K(x_j, x_k)$ and not the feature embeddings $x_1, \dots, x_N$, one could alternatively use a similarity matrix computed from psychophysical data to define the similarity between stimuli if no model exists. The previously-recorded response vectors $r_{n_1}, \dots, r_{n_{t-1}}$ are also input into Adept, which then outputs the next chosen stimulus $s_{n_t}$ to show. While the observer views $s_{n_t}$, the response vector $r_{n_t}$ is recorded and appended to the previously-recorded response vectors. This procedure is iteratively repeated until the end of the recording session. To show as many stimuli as possible, Adept does not choose the same stimulus more than once. For the Adept algorithm (Fig. 3, right), we initialize by randomly choosing a small number of stimuli (e.g., $N_{init} = 5$) from the large pool of $N$ candidate stimuli and presenting them to the observer. Using the responses to these stimuli $R(:, 1{:}N_{init})$, Adept then adaptively chooses a new stimulus by finding the candidate stimulus that yields the largest objective (in this case, using the objective defined by Eqns. 1 and 2).
This search is carried out by evaluating the objective for every candidate stimulus. There are three primary reasons why Adept is computationally fast enough to consider all candidate stimuli. First, the kernel matrix $K_X$ is pre-computed, which is then easily indexed. Second, the prediction of the norm and average distance is computed with fast matrix operations. Third, Adept updates the distance matrix $D_R$, which contains the pairwise distances between recorded response vectors, instead of re-computing $D_R$ at each iteration.

Algorithm 1: Adept algorithm
Input: N candidate stimuli, feature embeddings X (q × N), kernel bandwidth h (hyperparameter)
Initialization:
    K_X(j, k) = exp(−||X(:, j) − X(:, k)||₂² / h²) for all j, k
    R(:, 1:N_init) ← responses to N_init initial stimuli
    D_R(j, k) = ||R(:, j) − R(:, k)||₂ for j, k = 1, …, N_init
    ind_obs ← indices of the N_init observed stimuli
Online algorithm:
    for tth stimulus to show do
        for sth candidate stimulus do
            k_X = K_X(ind_obs, s) / Σ_{ℓ ∈ ind_obs} K_X(ℓ, s)
            norms(s) ← predicted ||r_s||₂ = k_Xᵀ diag(√(RᵀR))              % predict norm from recorded responses
            avgdists(s) ← (1/(t−1)) Σ_ℓ predicted ||r_s − r_{n_ℓ}||₂ = mean(k_Xᵀ D_R)   % predict average distance from recorded responses
        end
        ind_obs(N_init + t) ← argmax(norms + avgdists)
        R(:, N_init + t) ← recorded responses to chosen stimulus
        update D_R with ||R(:, N_init + t) − R(:, ℓ)||₂ for all ℓ
    end

Figure 3: Flowchart of the adaptive sampling paradigm (left) and the Adept algorithm (right).

5 Results

We tested Adept in two settings. First, we tested Adept on a surrogate for the brain—a pre-trained CNN. This allowed us to perform comparisons between methods with a noiseless system. Second, in a closed-loop electrophysiological experiment, we performed Adept on population activity recorded in macaque V4. In both settings, we used the same candidate image pool of N ≈ 10,000 natural images from the McGill natural image dataset [21] and Google image search [22]. For the predictive feature embeddings in both settings, we used responses from a pre-trained CNN different from the CNN used as a surrogate for the brain in the first setting. The motivation to use CNNs was inspired by the recent successes of CNNs to predict neural activity in V4 [23].

5.1 Testing Adept on CNN neurons

The testbed for Adept involved two different CNNs. One CNN is the surrogate for the brain. For this CNN, we took responses of p = 200 neurons in a middle layer of the pre-trained ResNet CNN [24] (layer 25 of 50, named 'res3dx'). A second CNN is used for feature embeddings to predict responses of the first CNN. For this CNN, we took responses of q = 750 neurons in a middle layer of the pre-trained GoogLeNet CNN [18] (layer 5 of 10, named 'icp4_out'). Both CNNs were trained for image classification but had substantially different architectures. Pre-trained CNNs were downloaded from MatConvNet [25], with the PVT version of GoogLeNet [26]. We ran Adept for 2,000 out of the 10,000 candidate images (with N_init = 5 and kernel bandwidth h = 200—similar results were obtained for different h), and compared the CNN responses to those of 2,000 randomly-chosen images. We asked two questions pertaining to the two terms in the objective function in Eqn. 1.
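Algorithm 1 maps directly onto array operations. The following NumPy sketch uses synthetic embeddings in place of CNN features and a synthetic "brain" in place of recorded responses (names loosely follow the pseudocode; responses are stored as rows rather than columns):

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, p, h, N_init, T = 200, 10, 6, 2.0, 5, 30

X = rng.normal(size=(q, N))                  # feature embeddings (q x N)
true_resp = np.abs(rng.normal(size=(N, p)))  # synthetic "brain" responses

# Pre-compute the N x N kernel matrix K_X (done once, before the experiment).
sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
K_X = np.exp(-sq / h ** 2)

ind_obs = list(rng.choice(N, size=N_init, replace=False))
R = true_resp[ind_obs]                       # observed responses (rows)
D_R = np.linalg.norm(R[:, None] - R[None, :], axis=2)

for t in range(T):
    cand = [s for s in range(N) if s not in ind_obs]
    k = K_X[np.ix_(ind_obs, cand)]           # kernel values, observed x candidate
    w = k / k.sum(axis=0, keepdims=True)     # normalized weights per candidate
    norms = w.T @ np.linalg.norm(R, axis=1)  # predicted ||r_s||_2 per candidate
    avgdists = (w.T @ D_R).mean(axis=1)      # predicted average distance
    s_next = cand[int(np.argmax(norms + avgdists))]
    # "Record" the chosen stimulus's response; update D_R incrementally.
    r_new = true_resp[s_next]
    new_d = np.linalg.norm(R - r_new, axis=1)
    D_R = np.block([[D_R, new_d[:, None]], [new_d[None, :], np.zeros((1, 1))]])
    R = np.vstack([R, r_new])
    ind_obs.append(s_next)
```

The incremental `D_R` update and the single matrix product per iteration are the two properties the text credits for Adept being fast enough to score every candidate between stimulus presentations.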
First, are responses larger for Adept than for randomly-chosen images? Second, to what extent does Adept produce larger scatter of responses than if we had chosen images at random? A larger scatter implies a greater diversity in evoked population responses (Fig. 1B). To address the first question, we computed the mean response across all 2,000 images for each CNN neuron. The mean responses using Adept were on average 15.5% larger than the mean responses to randomly-chosen images (Fig. 4A; the difference in means was significantly greater than zero, $p < 10^{-4}$). For the second question, we assessed the amount of response scatter by computing the amount of variance captured by each dimension. We applied PCA separately to the responses to images chosen by Adept and those to images selected randomly. For each dimension, we computed the ratio of the Adept eigenvalue to the randomly-chosen-image eigenvalue. In this way, we compared the dimensions of greatest variance, followed by the dimensions of second-most variance, and so on. Ratios above 1 indicate that Adept explored a dimension more than the corresponding ordered dimension of random selection. We found that Adept produced larger response scatter compared to randomly-chosen images for many dimensions (Fig. 4B). Ratios for dimensions of lesser variance (e.g., dimensions 10 to 75) are nearly as meaningful as those of the dimensions of greatest variance (i.e., dimensions 1 to 10), as the top 10 dimensions explained only 16.8% of the total variance (Fig. 4B, inset).

Figure 4: CNN testbed for Adept. A. Mean responses (arbitrary units) to images chosen by Adept were greater than to randomly-chosen images. B. Adept produced higher response variance for each PC dimension than when randomly choosing images. Inset: Percent variance explained. C. Relative to the full objective function in Eqn. 1, population objective functions (green) yielded higher response mean and variance than those of single-neuron objective functions (blue). D. Feature embeddings for all CNN layers were predictive. Error bars are ± s.d. across 10 runs.

Next, we asked to what extent optimizing a population objective function performs better than optimizing a single-neuron objective function. For the single-neuron case, we implemented three different methods. First, we ran Adept to optimize the response of the single CNN neuron with the largest mean response ('Adept-1'). Second, we applied Adept in a sequential manner to optimize the response of 50 randomly-chosen CNN neurons individually. After optimizing a CNN neuron for 40 images, optimization switched to the next CNN neuron ('Adept-50'). Third, we sequentially optimized 50 randomly-chosen CNN neurons individually using a genetic algorithm ('genetic-50'), similar to the ones proposed in previous studies [12, 13, 14]. We found that Adept produced higher mean responses than the three single-neuron methods (Fig. 4C, blue points in left panel), likely because Adept chose images that evoked large responses across neurons together. All methods produced higher mean responses than randomly choosing images (Fig. 4C, black point above blue points in left panel). Adept also produced higher mean eigenvalue ratios across the top 75 PCA dimensions than the three single-neuron methods (Fig. 4C, blue points in right panel).
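The per-dimension eigenvalue-ratio comparison in Fig. 4B can be computed as follows (a NumPy sketch on synthetic response matrices; in the paper, the rows would be the 2,000 responses collected under each selection method):

```python
import numpy as np

def pc_eigenvalues(R):
    """Descending PCA eigenvalues (variances) of a responses-by-neurons matrix."""
    Rc = R - R.mean(axis=0)
    cov = Rc.T @ Rc / (R.shape[0] - 1)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

rng = np.random.default_rng(0)
# Synthetic stand-ins: "Adept" responses with larger spread than "random" ones.
R_adept = rng.normal(scale=2.0, size=(2000, 50))
R_random = rng.normal(scale=1.0, size=(2000, 50))

# Ratio of the i-th largest Adept eigenvalue to the i-th largest random one;
# ratios above 1 mean Adept explored that ordered dimension more.
ratios = pc_eigenvalues(R_adept) / pc_eigenvalues(R_random)
```

Comparing eigenvalues by rank (largest to largest, second to second, and so on) is what makes the ratio meaningful even when the two PCA bases differ.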
This indicates that Adept, using a population objective, is better able to optimize population responses than using a single-neuron objective to optimize the response of each neuron in the population. We then modified the Adept objective function to include only the norm term ('Adept-norm', Fig. 2B) and only the average distance term ('Adept-avgdist', Fig. 2C). Both of these population methods performed better than the single-neuron methods (Fig. 4C, green points below blue points). While their performance was comparable to Adept using the full objective function, upon closer inspection, we observed differences in performance that matched our intuition about the objective functions. The mean response ratio for Adept using the full objective function and Adept-norm was close to 1 (Fig. 4C, left panel, Adept-norm on red-dashed line, $p = 0.65$), but the eigenvalue ratio was greater than 1 (Fig. 4C, right panel, Adept-norm above red-dashed line, $p < 0.005$). Thus, Adept-norm maximizes mean responses at the expense of response scatter. On the other hand, Adept-avgdist produced a lower mean response than that of Adept using the full objective function (Fig. 4C, left panel, Adept-avgdist above red-dashed line, $p < 10^{-4}$), but an eigenvalue ratio of 1 (Fig. 4C, right panel, Adept-avgdist on red-dashed line, $p = 0.62$). Thus, Adept-avgdist increases the response scatter at the expense of a lower mean response. The results in this section were based on middle-layer neurons in the GoogLeNet CNN predicting middle-layer neurons in the ResNet CNN. However, it is possible that CNN neurons in other layers may be better predictors than those in a middle layer. To test this, we asked which layers of the GoogLeNet CNN were most predictive of the objective values of the middle layer of the ResNet CNN.
For each layer of increasing depth, we computed the correlation between the predicted objective (using 750 CNN neurons from that layer) and the actual objective of the ResNet responses (200 CNN neurons) (Fig. 4D). We found that all layers were predictive (ρ ≈ 0.6), although there was variation across layers. Middle layers were slightly more predictive than deeper layers, likely because deeper layers of GoogLeNet have a different embedding of natural images than the middle layer of the ResNet CNN.

5.2 Testing Adept on V4 population recordings

Next, we tested Adept in a closed-loop neurophysiological experiment. We implanted a 96-electrode array in macaque V4, whose neurons respond differently to a wide range of image features, including orientation, spatial frequency, color, shape, texture, and curvature, among others [27]. Currently, no existing parametric encoding model fully captures the stimulus-response relationship of V4 neurons. The current state-of-the-art model for predicting the activity of V4 neurons uses the output of middle-layer neurons in a CNN previously trained without any information about the responses of V4 neurons [23]. Thus, we used a pre-trained CNN (GoogLeNet) to obtain the predictive feature embeddings. The experimental task flow proceeded as follows. On each trial, a monkey fixated on a central dot while an image flashed four times in the aggregate receptive fields of the recorded V4 neurons. After the fourth flash, the monkey made a saccade to a target dot (whose location was unrelated to the shown image), for which he received a juice reward. During this task, we recorded threshold crossings on each electrode (referred to as “spikes”), where the threshold was defined as a multiple of the RMS voltage set independently for each channel. This yielded 87 to 96 neural units in each session. The spike counts for each neural unit were averaged across the four 100 ms flashes to obtain mean responses.
The mean response vector for the p neural units was then appended to the previously-recorded responses and input into Adept. Adept then output an image to show on the next trial. For the predictive feature embeddings, we used q = 500 CNN neurons in the fifth layer of the GoogLeNet CNN (kernel bandwidth h = 200). In each recording session, the monkey typically performed 2,000 trials (i.e., 2,000 of the N = 10,000 natural images would be sampled). Each Adept run started with N_init = 5 randomly-chosen images. We first recorded a session in which we used Adept during one block of trials and randomly chose images in another block of trials. To qualitatively compare Adept and random selection, we first applied PCA to the response vectors of both blocks and plotted the top two PCs (Fig. 5A, left panel). Adept uncovers more responses that are far away from the origin (Fig. 5A, left panel, red dots farther from black * than black dots). For visual clarity, we also computed kernel density estimates for the Adept responses (p_Adept) and the responses to randomly-chosen images (p_random), and plotted the difference p_Adept − p_random (Fig. 5A, right panel). Responses for Adept were denser than for randomly-chosen images farther from the origin, whereas the opposite was true closer to the origin (Fig. 5A, right panel, red region farther from origin than black region). These plots suggest that Adept uncovers large responses that are far from one another. Quantitatively, we verified that Adept chose images with larger objective values in Eqn. 1 than randomly-chosen images (Fig. 5B). This result is not trivial, because it relies on the ability of the CNN to predict V4 population responses. If the CNN predicted V4 responses poorly, the objective evaluated on the V4 responses to images chosen by Adept could be lower than that evaluated on random images.
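The text specifies the feature embeddings (q = 500 CNN neurons) and a kernel bandwidth (h = 200) but not the kernel-regression step itself; a plausible Nadaraya-Watson sketch, assuming a Gaussian kernel over embedding distances (the specific kernel form is an assumption), is:

```python
import numpy as np

def predict_responses(phi_cand, Phi_shown, R_shown, h=200.0):
    """Nadaraya-Watson prediction of a population response for one
    candidate image from CNN feature embeddings.

    phi_cand  : (q,) CNN embedding of the candidate image.
    Phi_shown : (n, q) embeddings of already-shown images.
    R_shown   : (n, p) recorded mean responses to those images.
    h         : kernel bandwidth (h = 200 in the text; the Gaussian
                kernel form itself is assumed here).
    """
    d2 = np.sum((Phi_shown - phi_cand) ** 2, axis=1)  # squared distances
    wts = np.exp(-d2 / (2.0 * h ** 2))
    wts /= wts.sum()
    return wts @ R_shown  # (p,) kernel-weighted response prediction
```

Predicted responses of this form could then be scored by the objective to pick the next image to flash.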
We then compared Adept and random stimulus selection across 7 recording sessions, including the above session (450 trials per block, with three sessions with the Adept block before the random-selection block, three sessions with the opposite ordering, and one session with interleaved trials). We found that the images chosen by Adept produced on average 19.5% higher mean responses than randomly-chosen images (Fig. 5C, difference in mean responses significantly greater than zero, p < 10^−4). We also found that images chosen by Adept produced greater response scatter than randomly-chosen images, as the mean ratios of eigenvalues were greater than 1 (Fig. 5D, dimensions 1 to 5). Yet, there were dimensions for which the mean ratios of eigenvalues were less than 1 (Fig. 5D, dimensions 9 and 10). These dimensions explained little overall variance (< 5% of the total response variance). Finally, we asked to what extent the different CNN layers predict the objective of V4 responses, as in Fig. 4D. We found that, using 500 CNN neurons for each layer, all layers had some predictive ability (Fig. 5E, ρ > 0). Deeper layers (5 to 10) tended to have better prediction than superficial layers (1 to 4). To establish a noise level for the V4 responses, we also predicted the norm and average distance for one session (day 1) with the V4 responses of another session (day 2), where the same images were shown each day. In other words, we used the V4 responses of day 2 as feature embeddings to predict V4 responses of day 1. The correlation of prediction was much higher (ρ ≈ 0.5) than that of any CNN layer (ρ < 0.25). This discrepancy indicates that finding feature embeddings that are more predictive of V4 responses is a way to improve Adept’s performance.

Figure 5: Closed-loop experiments in V4. A. Top 2 PCs of V4 responses to stimuli chosen by Adept and random selection (500 trials each). Left: scatter plot, where each dot represents the population response to one stimulus. Right: difference of kernel densities, p_Adept − p_random. Black * denotes a zero response for all neural units. B. Objective function evaluated across trials (one stimulus per trial) using V4 responses. Same data as in A. C. Difference in mean responses across neural units from 7 sessions. D. Ratio of eigenvalues for different PC dimensions. Error bars: ± s.e.m. E. Ability of different CNN layers to predict V4 responses. For comparison, we also used V4 responses from a different day to predict the same V4 responses. Error bars: ± s.d. across 100 runs.

5.3 Testing Adept for robustness to neural noise and overfitting

A potential concern for an adaptive method is that stimulus responses are susceptible to neural noise. Specifically, spike counts are subject to Poisson-like variability, which might not be entirely averaged away with a finite number of stimulus repeats. Moreover, adaptation to stimuli and changes in attention or motivation may cause a gain factor to scale responses dynamically across a session [9].
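The noise sources just described (Poisson-like spike-count variability and multiplicative gain changes) can be simulated with a small sketch. The parameter values and the AR(1) form of the slow drift are assumptions for illustration, not fits reported in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_noise(mean_rates):
    """'Poisson noise': spike counts with Poisson variability around
    the mean rates."""
    return rng.poisson(mean_rates).astype(float)

def trial_gain(mean_rates, sigma_g=0.2):
    """'Trial-to-trial gain': one multiplicative gain per trial,
    drawn independently [28]. sigma_g is an assumed value."""
    g = rng.normal(1.0, sigma_g)
    return g * mean_rates

def drifting_gain(mean_rates_by_trial, sigma_g=0.2, smooth=0.99):
    """'Slowly-drifting gain': the gain follows an AR(1) process across
    trials (a stand-in for the smooth drift described in the text)."""
    out = np.empty_like(mean_rates_by_trial)
    g = 1.0
    for t, r in enumerate(mean_rates_by_trial):
        g = smooth * g + (1 - smooth) * rng.normal(1.0, sigma_g)
        out[t] = g * r
    return out
```

Running an adaptive selector on such corrupted responses, as done below, probes how much of its advantage survives realistic variability.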
To examine how Adept performs in the presence of noise, we first recorded a “ground-truth”, spike-sorted dataset in which 2,000 natural images were presented (100 ms flashes, 5 to 30 repeats per image randomly presented throughout the session). We then re-ran Adept on simulated responses under three different noise models (whose parameters were fit to the ground truth data): a Poisson model (‘Poisson noise’), a model that scales each response by a gain factor that varies independently from trial to trial [28] (‘trial-to-trial gain’), and the same gain model but where the gain varies smoothly across trials (‘slowly-drifting gain’). Because the drift in gain was randomly generated and may not match the actual drift in the recorded dataset, we also considered responses in which the drift was estimated across the recording session and added to the mean responses as their corresponding images were chosen (‘recorded drift’). For reference, we also ran Adept on responses with no noise (‘no noise’). To compare performance across the different settings, we computed the mean response and variance ratios between responses based on Adept and random selection (Fig. 6A). All settings showed better performance using Adept than random selection (Fig. 6A, all points above red-dashed line), and Adept performed best with no noise (Fig. 6, ‘no noise’ point at or above others). For a fair comparison, ratios were computed with the ground truth responses, where only the chosen images could differ across settings. These results indicate that, although Adept would benefit from removing neural noise, Adept continues to outperform random selection in the presence of noise. Another concern for an adaptive method is overfitting. 
For example, when no relationship exists between the CNN feature embeddings and neural responses, Adept may overfit to a spurious stimulus-response mapping and perform worse than random selection.

Figure 6: A. Adept is robust to neural noise. B. Adept shows no overfitting when responses are shuffled across images. Error bars: ± s.d. across 10 runs.

To address this concern, we performed two analyses using the same ground truth dataset as in Fig. 6A. For the first analysis, we ran Adept on the ground truth responses (choosing 500 of the 2,000 candidate images) to yield on average a 6% larger mean response and a 21% larger response scatter (averaged over the top 5 PCs) than random selection (Fig. 6B, unshuffled responses). Next, to break any stimulus-response relationship, we shuffled all of the ground truth responses across images and re-ran Adept. Adept performed no worse than random selection (Fig. 6B, shuffled responses, blue points on red-dashed line). For the second analysis, we asked if Adept focuses on the most predictable neurons to the detriment of other neurons. We shuffled all of the ground truth responses across images for half of the neurons, and ran Adept on the full population. Adept performed better than random selection for the subset of neurons with unshuffled responses (Fig. 6B, unshuffled subset), but no worse than random selection for the subset with shuffled responses (Fig. 6B, shuffled subset, green points on red-dashed line).
Adept showed no overfitting in either scenario, likely because Adept cannot choose exceedingly similar images (i.e., differing by a few pixels) from its discrete candidate pool.

6 Discussion

Here we proposed Adept, an adaptive method for selecting stimuli to optimize neural population responses. To our knowledge, this is the first adaptive method to consider a population of neurons together. We found that Adept, using a population objective, is better able to optimize population responses than using a single-neuron objective to optimize the response of each neuron in the population (Fig. 4C). While Adept can flexibly incorporate different feature embeddings, we take advantage of recent breakthroughs in deep learning and apply them to adaptive stimulus selection. Adept does not try to predict the response of each V4 neuron, but rather uses the similarity of CNN feature embeddings of different images to predict the similarity of the V4 population responses to those images. Widely studied neural phenomena such as changes in responses due to attention [29] and trial-to-trial variability [30, 31] likely depend on mean response levels [32]. When recording from a single neuron, one can optimize for large mean responses in a straightforward manner. For example, one can optimize the orientation and spatial frequency of a sinusoidal grating to maximize a neuron’s firing rate [9]. However, when recording from a population of neurons, identifying stimuli that optimize the firing rate of each neuron can be infeasible due to limited recording time. Moreover, neurons far from the sensory periphery tend to be more responsive to natural stimuli [33], and the search space for natural stimuli is vast. Adept is a principled way to efficiently search through a space of natural stimuli to optimize the responses of a population of neurons.
Experimenters can run Adept for a recording session, and then present the Adept-chosen stimuli in subsequent sessions when probing neural phenomena. A future challenge for adaptive stimulus selection is to generate natural images rather than selecting from a pre-existing pool of candidate images. For Adept, one could use a parametric model to generate natural images, such as a generative adversarial network [34], and optimize Eqn. 1 with gradient-based or Bayesian optimization.

Acknowledgments

B.R.C. was supported by a BrainHub Richard K. Mellon Fellowship. R.C.W. was supported by NIH T32 GM008208, T90 DA022762, and the Richard K. Mellon Foundation. K.A. was supported by NSF GRFP 1747452. M.A.S. and B.M.Y. were supported by NSF-NCS BCS-1734901/1734916. M.A.S. was supported by NIH R01 EY022928 and NIH P30 EY008098. B.M.Y. was supported by NSF-NCS BCS-1533672, NIH R01 HD071686, NIH R01 NS105318, and Simons Foundation 364994.

References

[1] D. Ringach and R. Shapley, “Reverse correlation in neurophysiology,” Cognitive Science, vol. 28, no. 2, pp. 147–166, 2004.
[2] N. C. Rust and J. A. Movshon, “In praise of artifice,” Nature Neuroscience, vol. 8, no. 12, pp. 1647–1650, 2005.
[3] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli, “Spike-triggered neural characterization,” Journal of Vision, vol. 6, no. 4, pp. 13–13, 2006.
[4] J. Benda, T. Gollisch, C. K. Machens, and A. V. Herz, “From response to stimulus: adaptive sampling in sensory physiology,” Current Opinion in Neurobiology, vol. 17, no. 4, pp. 430–436, 2007.
[5] C. DiMattina and K. Zhang, “Adaptive stimulus optimization for sensory systems neuroscience,” Closing the Loop Around Neural Systems, p. 258, 2014.
[6] C. K. Machens, “Adaptive sampling by information maximization,” Physical Review Letters, vol. 88, no. 22, p. 228104, 2002.
[7] C. K. Machens, T. Gollisch, O. Kolesnikova, and A. V. Herz, “Testing the efficiency of sensory coding with optimal stimulus ensembles,” Neuron, vol. 47, no. 3, pp.
447–456, 2005.
[8] L. Paninski, “Asymptotic theory of information-theoretic experimental design,” Neural Computation, vol. 17, no. 7, pp. 1480–1507, 2005.
[9] J. Lewi, R. Butera, and L. Paninski, “Sequential optimal design of neurophysiology experiments,” Neural Computation, vol. 21, no. 3, pp. 619–687, 2009.
[10] M. Park, J. P. Weller, G. D. Horwitz, and J. W. Pillow, “Bayesian active learning of neural firing rate maps with transformed Gaussian process priors,” Neural Computation, vol. 26, no. 8, pp. 1519–1541, 2014.
[11] J. W. Pillow and M. Park, “Adaptive Bayesian methods for closed-loop neurophysiology,” in Closed Loop Neuroscience (A. E. Hady, ed.), Elsevier, 2016.
[12] E. T. Carlson, R. J. Rasquinha, K. Zhang, and C. E. Connor, “A sparse object coding scheme in area V4,” Current Biology, vol. 21, no. 4, pp. 288–293, 2011.
[13] Y. Yamane, E. T. Carlson, K. C. Bowman, Z. Wang, and C. E. Connor, “A neural code for three-dimensional object shape in macaque inferotemporal cortex,” Nature Neuroscience, vol. 11, no. 11, pp. 1352–1360, 2008.
[14] C.-C. Hung, E. T. Carlson, and C. E. Connor, “Medial axis shape coding in macaque inferotemporal cortex,” Neuron, vol. 74, no. 6, pp. 1099–1113, 2012.
[15] P. Földiák, “Stimulus optimisation in primary visual cortex,” Neurocomputing, vol. 38, pp. 1217–1222, 2001.
[16] K. N. O’Connor, C. I. Petkov, and M. L. Sutter, “Adaptive stimulus optimization for auditory cortical neurons,” Journal of Neurophysiology, vol. 94, no. 6, pp. 4051–4067, 2005.
[17] I. H. Stevenson and K. P. Kording, “How advances in neural recording affect data analysis,” Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[19] G. S.
Watson, “Smooth regression analysis,” Sankhyā: The Indian Journal of Statistics, Series A, pp. 359–372, 1964.
[20] E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: A flexible architecture for multi-scale derivative computation,” in Proceedings of the International Conference on Image Processing, vol. 3, pp. 444–447, IEEE, 1995.
[21] A. Olmos and F. A. Kingdom, “A biologically inspired algorithm for the recovery of shading and reflectance images,” Perception, vol. 33, no. 12, pp. 1463–1473, 2004.
[22] “Google image search.” http://images.google.com. Accessed: 2017-04-25.
[23] D. L. Yamins and J. J. DiCarlo, “Using goal-driven deep learning models to understand sensory cortex,” Nature Neuroscience, vol. 19, no. 3, pp. 356–365, 2016.
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[25] A. Vedaldi and K. Lenc, “MatConvNet – convolutional neural networks for MATLAB,” in Proceedings of the ACM International Conference on Multimedia, 2015.
[26] J. Xiao, “Princeton vision and robotics toolkit,” 2013. Available from: http://3dvision.princeton.edu/pvt/GoogLeNet/.
[27] A. W. Roe, L. Chelazzi, C. E. Connor, B. R. Conway, I. Fujita, J. L. Gallant, H. Lu, and W. Vanduffel, “Toward a unified theory of visual area V4,” Neuron, vol. 74, no. 1, pp. 12–29, 2012.
[28] I.-C. Lin, M. Okun, M. Carandini, and K. D. Harris, “The nature of shared cortical variability,” Neuron, vol. 87, no. 3, pp. 644–656, 2015.
[29] M. R. Cohen and J. H. Maunsell, “Attention improves performance primarily by reducing interneuronal correlations,” Nature Neuroscience, vol. 12, no. 12, pp. 1594–1600, 2009.
[30] A. Kohn, R. Coen-Cagli, I. Kanitscheider, and A. Pouget, “Correlations and neuronal population information,” Annual Review of Neuroscience, vol. 39, pp. 237–256, 2016.
[31] M. Okun, N. A. Steinmetz, L. Cossell, M. F. Iacaruso, H. Ko, P. Barthó, T.
Moore, S. B. Hofer, T. D. Mrsic-Flogel, M. Carandini, et al., “Diverse coupling of neurons to populations in sensory cortex,” Nature, vol. 521, no. 7553, pp. 511–515, 2015.
[32] M. R. Cohen and A. Kohn, “Measuring and interpreting neuronal correlations,” Nature Neuroscience, vol. 14, no. 7, pp. 811–819, 2011.
[33] G. Felsen, J. Touryan, F. Han, and Y. Dan, “Cortical sensitivity to visual features in natural scenes,” PLoS Biology, vol. 3, no. 10, p. e342, 2005.
[34] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
Matrix Norm Estimation from a Few Entries

Ashish Khetan, Department of ISE, University of Illinois Urbana-Champaign, khetan2@illinois.edu
Sewoong Oh, Department of ISE, University of Illinois Urbana-Champaign, swoh@illinois.edu

Abstract

The singular values of data in matrix form provide insights into the structure of the data, its effective dimensionality, and the choice of hyper-parameters for higher-level data analysis tools. However, in many practical applications such as collaborative filtering and network analysis, we only get a partial observation. Under such scenarios, we consider the fundamental problem of recovering various spectral properties of the underlying matrix from a sampling of its entries. We propose a framework of first estimating the Schatten k-norms of a matrix for several values of k, and then using these as surrogates for estimating spectral properties of interest, such as the spectrum itself or the rank. This paper focuses on the technical challenges in accurately estimating the Schatten norms from a sampling of a matrix. We introduce a novel unbiased estimator based on counting small structures in a graph and provide guarantees that match its empirical performance. Our theoretical analysis shows that Schatten norms can be recovered accurately from a strictly smaller number of samples than what is needed to recover the underlying low-rank matrix. Numerical experiments suggest that we significantly improve upon a competing approach based on matrix completion methods.

1 Introduction

Computing and analyzing the set of singular values of data in matrix form, called the spectrum, provides insights into the geometry and topology of the data. Such a spectral analysis is routinely a first step in general data analysis, with the goal of checking whether there exists a lower-dimensional subspace explaining the important aspects of the data, which itself might be high dimensional.
Concretely, it is a first step in dimensionality reduction methods such as principal component analysis or canonical correlation analysis. However, spectral analysis becomes challenging in practical scenarios where the data is only partially observed. We commonly observe pairwise relations of randomly chosen pairs: each user only rates a few movies in recommendation systems, and each player/team only plays against a few opponents in sports. In other applications, we have more structured samples. For example, in a network analysis we might be interested in the spectrum of the adjacency matrix of a large network, but only get to see the connections within a small subset of nodes. Whatever the sampling pattern is, the typical number of paired relations we observe is significantly smaller than the dimension of the data matrix. We study all such variations in sampling patterns for partially observed data matrices, and ask the following fundamental question: can we estimate spectral properties of a data matrix from partial observations? We build on the fact that several spectral properties of interest, such as the spectrum itself or the rank, can be estimated accurately by first estimating the Schatten k-norms of a matrix and then aggregating those norms to estimate the spectral properties. In this paper, we focus on the challenging task of estimating the Schatten k-norms, defined as ∥M∥_k = (∑_{i=1}^d σ_i(M)^k)^{1/k}, where σ_1(M) ≥ · · · ≥ σ_d(M) are the singular values of the data matrix M ∈ R^{d×d}. Once we obtain accurate estimates of Schatten k-norms, these estimates, as well as the corresponding performance guarantees, can readily be translated into accurate estimates of the spectral properties of interest.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1.1 Setup

We want to estimate the Schatten k-norm of a positive semidefinite matrix M ∈ R^{d×d} from a subset of its entries.
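For reference, the target quantity is straightforward to compute when the full matrix is available (a minimal sketch of the definition above; the partial-observation estimator developed below replaces this full-data baseline):

```python
import numpy as np

def schatten_norm(M, k):
    """Schatten k-norm: ||M||_k = (sum_i sigma_i(M)^k)^(1/k)."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.sum(s ** k) ** (1.0 / k)

# For a positive semidefinite M, ||M||_k^k coincides with Tr(M^k),
# the identity the counting-based estimator is built on.
A = np.random.default_rng(1).normal(size=(6, 4))
M = A @ A.T                                   # PSD, rank <= 4
assert np.isclose(schatten_norm(M, 3) ** 3,
                  np.trace(np.linalg.matrix_power(M, 3)))
```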
The restriction to positive semidefinite matrices is for notational convenience; our analyses, the estimator, and the efficient algorithms naturally generalize to any non-square matrices. Namely, we can extend our framework to bipartite graphs and estimate the Schatten k-norm of any matrix for any even k. Let Ω denote the set of indices of the samples we are given, and let P_Ω(M) = {(i, j, M_{ij})}_{(i,j)∈Ω} denote the samples. With a slight abuse of notation, we use P_Ω(M) to also denote the d × d sampled matrix: P_Ω(M)_{ij} = M_{ij} if (i, j) ∈ Ω, and 0 otherwise; it should be clear from the context which one we refer to. Although we propose a framework that applies to any probabilistic sampling in general, it is necessary to consider specific sampling scenarios to provide tight analyses of the performance. Hence, we focus on Erdös-Rényi sampling. There is an extensive line of research on low-rank matrix completion problems [3, 11], which addresses the fundamental question of how many samples are required to complete a matrix (i.e., estimate all of its missing entries) from a small subset of sampled entries. It is typically assumed that each entry of the matrix is sampled independently with a probability p ∈ (0, 1]. We refer to this scenario as Erdös-Rényi sampling, as the resulting pattern of the samples, encoded as a graph, is distributed as an Erdös-Rényi random graph. The spectral properties of such a sampled matrix have been well studied in the literature [7, 1, 6, 11, 14]. In particular, it is known that the original matrix is close in spectral norm to the sampled one, where the missing entries are filled in with zeros and properly rescaled, under certain incoherence assumptions. This suggests using the singular values of (d²/|Ω|) P_Ω(M) directly for estimating the Schatten norms.
However, in the sub-linear regime in which the number of samples |Ω| = d²p is comparable to or significantly smaller than the degrees of freedom in representing a symmetric rank-r matrix, which is dr − r², the spectrum of the sampled matrix is significantly different from the spectrum of the original matrix, as shown in Figure 1. We need to design novel estimators that are more sample-efficient in the sub-linear regime where d²p ≪ dr.

Figure 1: Histogram of (positive) singular values of M with rank r = 100 (in yellow), and singular values of the sampled matrix (in black).

1.2 Summary of the approach and preview of results

We propose using an alternative expression of the Schatten k-norm for positive semidefinite matrices as the trace of the k-th power of M, i.e., (∥M∥_k)^k = Tr(M^k). This sum of the entries along the diagonal of M^k is the sum of the total weights of all closed walks of length k. Consider the entries of M as weights on a complete graph K_d over d nodes (with self-loops). A closed walk of length k is defined as a sequence of nodes w = (w_1, w_2, . . . , w_{k+1}) with w_1 = w_{k+1}, where we allow repeated nodes and repeated edges. The weight of a closed walk w = (w_1, . . . , w_k, w_1) is defined as ω_M(w) ≡ ∏_{i=1}^k M_{w_i w_{i+1}}, the product of the weights along the walk. It follows that

∥M∥_k^k = ∑_{w: all length-k closed walks} ω_M(w). (1)

Following the notation of the enumeration of small simple cycles in a graph by [2], we partition this summation into walks with the same pattern H, which we call a k-cyclic pseudograph. Let C_k = (V_k, E_k) denote the undirected simple cycle graph with k nodes; e.g., A_3 in Figure 2 is C_3. We expand the standard notion of simple k-cyclic graphs to include multi-edges and loops, hence the name pseudograph.
Definition 1 We define an unlabelled and undirected pseudograph H = (V_H, E_H) to be a k-cyclic pseudograph for k ≥ 3 if there exists an onto node-mapping from C_k = (V_k, E_k), i.e., f : V_k → V_H, and a one-to-one edge-mapping g : E_k → E_H such that g(e) = (f(u_e), f(v_e)) for all e = (u_e, v_e) ∈ E_k. We use H_k to denote the set of all k-cyclic pseudographs, and c(H) to denote the number of distinct node-mappings f from C_k to a k-cyclic pseudograph H.

Figure 2: The 3-cyclic pseudographs H_3 = {A_1, A_2, A_3}, with c(A_1) = 1, c(A_2) = 3, and c(A_3) = 6.

In the above example, each member of H_3 is a distinct pattern that can be mapped from C_3. For A_1, it is clear that there is only one mapping from C_3 to A_1 (i.e., c(A_1) = 1). For A_2, one can map any of the three nodes to the left node of A_2, hence c(A_2) = 3. For A_3, any of the three nodes can be mapped to the bottom-left node of A_3, and one can map the remaining nodes clockwise or counter-clockwise, resulting in c(A_3) = 6. For k ≤ 7, all the k-cyclic pseudographs are given in Appendix E (see Figures 8–13). Each closed walk w of length k is associated with one of the graphs in H_k, as there is a unique H of which the walk is an Eulerian cycle (under a one-to-one mapping of the nodes). We denote this graph by H(w) ∈ H_k. Considering the weight of a walk ω_M(w), there are multiple distinct walks with the same weight. For example, a length-3 walk w = (v_1, v_2, v_2, v_1) has H(w) = A_2, and there are 3 walks with the same weight ω_M(w) = (M_{v_1 v_2})² M_{v_2 v_2}, namely (v_1, v_2, v_2, v_1), (v_2, v_2, v_1, v_2), and (v_2, v_1, v_2, v_2). This multiplicity of the weight depends only on the structure H(w) of a walk, and it is exactly c(H(w)), the number of mappings from C_k to H(w) in Definition 1.
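The identity (1) and the multiplicities c(A_1) = 1, c(A_2) = 3, c(A_3) = 6 can be checked by brute force on a small symmetric matrix; in this sketch, grouping walks by their multiset of undirected edges stands in for grouping by the pattern H:

```python
import numpy as np
from itertools import product
from collections import Counter

# Small symmetric weight matrix with self-loops (d = 4 is arbitrary).
rng = np.random.default_rng(2)
d = 4
M = rng.normal(size=(d, d))
M = (M + M.T) / 2

total = 0.0
groups = Counter()  # closed walks grouped by their multiset of edges
for w1, w2, w3 in product(range(d), repeat=3):
    total += M[w1, w2] * M[w2, w3] * M[w3, w1]
    edges = tuple(sorted([tuple(sorted((w1, w2))),
                          tuple(sorted((w2, w3))),
                          tuple(sorted((w3, w1)))]))
    groups[edges] += 1

# Eqn. (1): the weights of all closed length-3 walks sum to Tr(M^3).
assert np.isclose(total, np.trace(np.linalg.matrix_power(M, 3)))

# Walks sharing a weight come in groups of size c(A1)=1 (a single
# self-loop), c(A2)=3 (one edge plus a loop), or c(A3)=6 (a triangle).
assert set(groups.values()) == {1, 3, 6}
```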
The total sum of the weights of closed walks of length k can be partitioned by their respective patterns, which makes the computation of such terms more efficient (see Section 2) and also makes de-biasing straightforward (see Equation (3)):

∥M∥_k^k = ∑_{H∈H_k} ω_M(H) c(H), (2)

where, with a slight abuse of notation, we let ω_M(H) for H ∈ H_k be the sum of all distinct weights of walks w with H(w) = H, and c(H) is the multiplicity of each distinct weight. This is an alternative tool for computing the Schatten norm without explicitly computing the σ_i(M)’s. Given only access to a subset of sampled entries, one might be tempted to apply the above formula to the sampled matrix with an appropriate scaling, i.e., ∥(d²/|Ω|) P_Ω(M)∥_k^k = (d²/|Ω|)^k ∑_{H∈H_k} ω_{P_Ω(M)}(H) c(H), to estimate ∥M∥_k^k. However, this is significantly biased. To eliminate the bias, we propose rescaling each term in (1) by the inverse of the probability of sampling that particular walk w (i.e., the probability that all edges in w are sampled). A crucial observation is that, for any sampling model that is invariant under a relabelling of the nodes, this probability depends only on the pattern H(w). In particular, this is true for Erdös-Rényi sampling. Based on this observation, we introduce a novel estimator that de-biases each group separately:

Θ̂_k(P_Ω(M)) = ∑_{H∈H_k} (1/p(H)) ω_{P_Ω(M)}(H) c(H), (3)

where p(H) is the probability that the pattern H is sampled. It immediately follows that this estimator is unbiased, i.e., E_Ω[Θ̂_k(P_Ω(M))] = ∥M∥_k^k, where the randomness is in Ω. However, computing this estimate can be challenging. Naive enumeration over all closed walks of length k takes time scaling as O(d Δ^{k−1}), where Δ is the maximum degree of the graph. Except for extremely sparse graphs, this is impractical. Inspired by the work of [2] on counting short cycles in a graph, we introduce a novel and efficient method for computing the proposed estimate for small values of k.
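The unbiasedness of (3) can be verified exactly for k = 3 on a tiny example: under Erdös-Rényi sampling, p(H) = p^(# distinct edges of H), and for a 2 × 2 symmetric matrix we can average the estimate over all 8 sampling patterns of its 3 independent entries. This is an illustrative sketch, not the paper's efficient algorithm:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
p = 0.3
A = rng.normal(size=(2, 2))
M = A @ A.T                                      # 2x2 PSD ground truth
truth = np.trace(np.linalg.matrix_power(M, 3))   # ||M||_3^3

def debiased_k3(PM, mask, p):
    """Eqn. (3) for k = 3 by direct enumeration: each sampled closed
    walk is rescaled by 1/p(H) = p^-(# distinct edges it uses)."""
    d = PM.shape[0]
    est = 0.0
    for w1, w2, w3 in product(range(d), repeat=3):
        if mask[w1, w2] and mask[w2, w3] and mask[w3, w1]:
            edges = {tuple(sorted((w1, w2))), tuple(sorted((w2, w3))),
                     tuple(sorted((w3, w1)))}
            est += PM[w1, w2] * PM[w2, w3] * PM[w3, w1] / p ** len(edges)
    return est

# Average over ALL sampling patterns of the 3 independent entries
# (two loops and one off-diagonal pair), weighted by their probabilities.
expect = 0.0
for b11, b22, b12 in product([0, 1], repeat=3):
    mask = np.array([[b11, b12], [b12, b22]], dtype=bool)
    prob = ((p if b11 else 1 - p) * (p if b22 else 1 - p)
            * (p if b12 else 1 - p))
    expect += prob * debiased_k3(M * mask, mask, p)
assert np.isclose(expect, truth)   # E[estimate] = ||M||_3^3 exactly
```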
Proposition 2 For a positive semidefinite matrix M and any sampling pattern Ω, the proposed estimate Θ̂_k(P_Ω(M)) in (3) can be computed in time O(d^α) for k ∈ {3, 4, 5, 6, 7}, where α < 2.373 is the exponent of matrix multiplication. For k = 1 or 2, Θ̂_k(P_Ω(M)) can be computed in time O(d) and O(d²), respectively.

This bound holds regardless of the degree, and the complexity can be even smaller for sparse graphs, as matrix multiplications become more efficient. We give a constructive proof by introducing a novel algorithm achieving this complexity in Section 2. For k ≥ 8, our approach can potentially be extended, but the complexity of the problem fundamentally changes, as it is at least as hard as counting K_4's in a graph, for which the best known run time is O(d^{α+1}) for general graphs [12]. We make the following contributions in this paper:

• We introduce in (3) a novel unbiased estimator of the Schatten k-norm of a positive semidefinite matrix M from a random sampling of its entries. In general, the complexity of computing the estimate scales as O(d Δ^{k−1}), where Δ is the maximum degree (number of sampled entries in a column) of the sampled matrix. We introduce a novel efficient algorithm for computing the estimate in (3) exactly for small k ≤ 7, which involves only matrix operations. This algorithm is significantly more efficient, with run time scaling as O(d^α), independent of the degree, for all k ≤ 7 (see Proposition 2).

• Under the canonical Erdös-Rényi sampling, we show that the Schatten k-norm of an incoherent rank-r matrix can be approximated within any constant multiplicative error with a number of samples scaling as O(d r^{1−2/k}) (see Theorem 1). In particular, this is strictly smaller than the number of samples necessary to complete the matrix, which scales as O(dr log d).
Below this matrix completion threshold, numerical experiments confirm that the proposed estimator significantly outperforms the simple heuristics of using the singular values of the sampled matrix directly or applying state-of-the-art matrix completion methods (see Figure 4).

• Given estimates of the first K Schatten norms, it is straightforward to estimate spectral properties. We apply our Schatten norm estimates to the applications of estimating the generalized rank studied in [20] and estimating the spectrum studied in [13]. We provide performance guarantees for both applications and provide experimental results suggesting that we improve upon competing methods. Due to space limitations, these results are included in Appendix B.

In the remainder, we provide an efficient implementation of the estimator (3) for small k in Section 2. In Section 3, we provide a theoretical analysis of our estimator.

1.3 Related work

Schatten norm estimation has been studied under several different resource-constrained scenarios. However, those approaches assume specific noisy observations, which allows them to use the relation $\mathbb{E}\,\|f(M)g\|_2^2 = \sum_i f(\sigma_i(M))^2$, which holds for a standard i.i.d. Gaussian vector $g \sim \mathcal{N}(0, I)$ and any polynomial function f(·). This makes the estimation significantly easier than in our setting, and none of those algorithms can be applied under our random sampling model; in particular, counting small structures for de-biasing is not required there. [20], [8], and [9] propose multiplying Gaussian random vectors by the data matrix in order to reduce communication and/or computation. [13] proposes an interesting estimator for the spectrum of a covariance matrix from samples of a random vector. [15] proposes similar estimators for Schatten norms from random linear projections of a data matrix, and [16] studies the problem for sparse data matrices in a streaming model.
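The sketching relation these works rely on can be checked numerically for the simplest choice f(x) = x, where $\mathbb{E}\|Mg\|_2^2 = \mathrm{tr}(M^\top M) = \sum_i \sigma_i(M)^2$. A minimal sketch (illustrative, using the closed form of the Gaussian expectation rather than Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))

# For g ~ N(0, I):  E ||M g||_2^2 = tr(M^T M) = sum_i sigma_i(M)^2,
# i.e. the relation E ||f(M) g||^2 = sum_i f(sigma_i)^2 with f(x) = x.
lhs = np.trace(M.T @ M)  # closed form of the Gaussian expectation
rhs = np.sum(np.linalg.svd(M, compute_uv=False) ** 2)
assert np.isclose(lhs, rhs)
```

Under entrywise sampling no such Gaussian probe of M is available, which is why the de-biased walk-counting estimator (3) is needed instead.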
One of our contributions is an efficient algorithm, proposed in Section 2, for computing the weighted counts of small structures; it can significantly improve upon less sample-efficient counterparts in, for example, [13]. Under the setting of [13] (and also [15]), the main idea of the estimator is that the weight of each length-k cycle in the observed empirical covariance matrix $(1/n)\sum_{i=1}^n X_i X_i^T$ provides an unbiased estimate of $\|\mathbb{E}[XX^T]\|_k^k$. One prefers to sum over the weights of as many cycles as computationally allowed in order to reduce the variance. As counting all cycles is in general computationally hard, they propose counting only increasing cycles (which account for only a 1/k! fraction of all cycles), which can be done in time $O(d^\alpha)$. If one had an efficient method to count all the (weighted) cycles, the variance of the estimator could potentially decrease by a factor on the order of k!. For k ≤ 7, our proposed algorithm in Section 2 provides exactly such an estimator. We replace [13, Algorithm 1] with ours and run the same experiment to showcase the improvement in Figure 3, for dimension d = 2048 and various numbers of samples n, comparing the multiplicative error in estimating $\|\mathbb{E}[XX^T]\|_k^k$ for k = 7. With the same run-time, a significant gain is achieved simply by substituting our proposed algorithm for counting small structures in the sub-routine. In general, the efficient algorithm we propose might be of independent interest to various applications, and can directly substitute for (and significantly improve upon) other popular but less efficient counterparts.

Figure 3: By replacing [13, Algorithm 1], which counts only increasing cycles, with our proposed algorithm, which counts all simple cycles, a significant gain is achieved in estimating $\|\mathbb{E}[XX^T]\|_k^k$, for k = 7 (multiplicative error vs. number of samples n, for n ∈ {256, 512, 1024, 2048}).
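The 1/k! fraction mentioned above can be seen concretely for k = 3: in a symmetric matrix with zero diagonal, every unordered triangle {i, j, l} is traversed by 3! = 6 closed walks of length 3, but by only one "increasing" cycle (i < j < l). A minimal illustrative check (not the code of [13]):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
M = rng.standard_normal((7, 7))
M = (M + M.T) / 2          # symmetric weights (undirected graph)
np.fill_diagonal(M, 0.0)   # no self-loops, so only simple 3-cycles remain

# Total weight of ALL closed walks of length 3 (all simple cycles here).
all_cycles = np.trace(M @ M @ M)

# "Increasing" cycles only: each triangle i < j < l counted exactly once.
inc_cycles = sum(M[i, j] * M[j, l] * M[l, i]
                 for i, j, l in combinations(range(7), 3))

# Each triangle appears 3! = 6 times among all closed length-3 walks.
assert np.isclose(all_cycles, 6 * inc_cycles)
```

Summing all cycles rather than the increasing subset keeps the same unbiased mean while averaging over k! times more terms, which is the source of the variance reduction.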
The main challenge under our sampling scenario is that existing counting methods like that of [13] cannot be applied, regardless of how much computational power we have. Under the matrix completion scenario, we need to (a) sum over all small structures $H \in \mathcal{H}_k$ and not just the cycles $C_k$ as in [13]; and (b) for each structure, sum over all subgraphs with the same structure and not just those walks whose labels form a monotonically increasing sequence as in [13].

2 Efficient Algorithm

In this section we give a constructive proof of Proposition 2. In computing the estimate in (3), c(H) can be computed in time O(k!), and we suppose p(H) has been computed (we explain how to compute p(H) for Erdös-Rényi sampling in Section 3). The bottleneck is then computing the weights $\omega_{P_\Omega(M)}(H)$ for each $H \in \mathcal{H}_k$. Let $\gamma_M(H) \equiv \omega_M(H)\, c(H)$. We give matrix-multiplication-based equations to compute $\gamma_M(H)$ for every $H \in \mathcal{H}_k$ for k ∈ {3, 4, 5, 6, 7}. This establishes that $\gamma_M(H)$, and hence $\omega_M(H)$, can be computed in time $O(d^\alpha)$, proving Proposition 2.

For any matrix $A \in \mathbb{R}^{d\times d}$, let diag(A) be the diagonal matrix with $(\mathrm{diag}(A))_{ii} = A_{ii}$ for all i ∈ [d] and $(\mathrm{diag}(A))_{ij} = 0$ for all i ≠ j ∈ [d]. For a given matrix $M \in \mathbb{R}^{d\times d}$, define $O_M \equiv M - \mathrm{diag}(M)$, the matrix of off-diagonal entries of M, and $D_M \equiv \mathrm{diag}(M)$. Let tr(A) denote the trace of A, that is, $\mathrm{tr}(A) = \sum_{i\in[d]} A_{ii}$, and let A∗B denote the standard matrix multiplication of two matrices A and B, to make it more explicit. Consider computing $\gamma_M(H)$ for $H \in \mathcal{H}_3$ as labeled in Figure 2:

$\gamma_M(A_1) = \mathrm{tr}(D_M * D_M * D_M)$,  (4)
$\gamma_M(A_2) = 3\,\mathrm{tr}(D_M * O_M * O_M)$,  (5)
$\gamma_M(A_3) = \mathrm{tr}(O_M * O_M * O_M)$.  (6)

The first weighted sum, $\gamma_M(A_1)$, is the sum of the weights of all length-3 walks consisting of three self-loops. One can show that $\gamma_M(A_1) = \sum_{i\in[d]} M_{ii}^3$, which in our matrix-operation notation is (4). Similarly, $\gamma_M(A_3)$ is the sum of the weights of length-3 walks with no self-loop, which leads to (6).
$\gamma_M(A_2)$ is the sum of the weights of length-3 walks with a single self-loop, which leads to (5). The factor 3 accounts for the different positions at which the self-loop can be placed. Similarly, for each k-cyclic pseudograph in $\mathcal{H}_k$ for k ≤ 7, computing $\gamma_M(H)$ involves a few matrix operations with run-time $O(d^\alpha)$. We provide the complete set of explicit expressions in Appendix F. A MATLAB implementation of the estimator (3), which includes as sub-routines the computation of the weights of all k-cyclic pseudographs, is available for download at https://github.com/khetan2/Schatten_norm_estimation. The explicit formulae in Appendix F, together with this implementation, might be of interest for other problems involving counting small structures in graphs.

For k = 1, the estimator simplifies to $\hat\Theta_k(P_\Omega(M)) = (1/p)\sum_i P_\Omega(M)_{ii}$, which can be computed in time O(d). For k = 2, the estimator simplifies to $\hat\Theta_k(P_\Omega(M)) = (1/p)\sum_{i,j} P_\Omega(M)_{ij}^2$, which can be computed in time O(|Ω|). However, for k ≥ 8, there exist walks over K₄, the clique over 4 nodes, that cannot be decomposed into simple computations involving matrix operations. The best known algorithm for the simpler task of counting copies of K₄ has run-time scaling as $O(d^{\alpha+1})$, which is fundamentally different.

Algorithm 1 Schatten k-norm estimator
Require: $P_\Omega(M)$, k, $\mathcal{H}_k$, p(H) for all $H \in \mathcal{H}_k$
Ensure: $\hat\Theta_k(P_\Omega(M))$
1: if k ≤ 7 then
2:   For each $H \in \mathcal{H}_k$, compute $\gamma_{P_\Omega(M)}(H)$ using the formulae in Eq. (4)–(6) for k = 3 and Eq. (43)–(186) for k ∈ {4, 5, 6, 7}
3:   $\hat\Theta_k(P_\Omega(M)) \leftarrow \sum_{H\in\mathcal{H}_k} \frac{1}{p(H)}\, \gamma_{P_\Omega(M)}(H)$
4: else
5:   $\hat\Theta_k(P_\Omega(M)) \leftarrow$ Algorithm 2[$P_\Omega(M)$, k, $\mathcal{H}_k$, p(H) for all $H \in \mathcal{H}_k$] [Appendix A]
6: end if

3 Performance guarantees

Under the stylized but canonical Erdös-Rényi sampling, notice that the probability p(H) that we observe all edges in a walk with pattern H is

$p(H) = p^{m(H)}$,  (7)

where p is the probability that an edge is sampled and m(H) is the number of distinct edges in the k-cyclic pseudograph H.
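For k = 3, Eqs. (4)–(6) together with the Erdös-Rényi probabilities in (7) can be sketched in a few lines of NumPy (illustrative only, not the released MATLAB implementation). The three patterns have m(A₁) = 1, m(A₂) = 2, and m(A₃) = 3 distinct edges, so p(H) = p, p², p³ respectively; with p = 1 (all entries observed) the estimate recovers $\|M\|_3^3$ exactly:

```python
import numpy as np

def gamma_k3(M):
    """Weighted counts gamma_M(A1), gamma_M(A2), gamma_M(A3) of Eqs. (4)-(6)."""
    D = np.diag(np.diag(M))   # D_M: diagonal part (self-loops)
    O = M - D                 # O_M: off-diagonal part
    g1 = np.trace(D @ D @ D)       # three self-loops
    g2 = 3 * np.trace(D @ O @ O)   # one self-loop plus one repeated edge
    g3 = np.trace(O @ O @ O)       # walks with no self-loop (triangles)
    return np.array([g1, g2, g3])

def schatten3_estimate(M_obs, p):
    """De-biased estimate (3) for k = 3 under Erdös-Rényi sampling:
    p(H) = p^{m(H)} with m(H) = 1, 2, 3 distinct edges per pattern."""
    return np.sum(gamma_k3(M_obs) / p ** np.array([1, 2, 3]))

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
M = B @ B.T   # PSD test matrix

# Sanity check: with p = 1 (all entries observed) the estimate is exact.
assert np.isclose(schatten3_estimate(M, 1.0),
                  np.sum(np.linalg.eigvalsh(M) ** 3))
```

For a partially observed matrix one would pass the zero-filled $P_\Omega(M)$ and the true sampling probability p; the estimate is then unbiased but random in Ω.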
Plugging this value of p(H), which can be computed in time linear in k, into the estimator (3), we get an estimate customized for Erdös-Rényi sampling. Given a rank-r matrix M, the difficulty of estimating properties of M from sampled entries is captured by the incoherence of the original matrix M, which we denote by µ(M) ∈ R [3]. Formally, let $M \equiv U\Sigma U^\top$ be the singular value decomposition of a rank-r positive semidefinite matrix, where U is a d × r orthonormal matrix and $\Sigma \equiv \mathrm{diag}(\sigma_1, \cdots, \sigma_r)$ with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$. Let $U_{ia}$ denote the entry of U in the i-th row and a-th column. The incoherence µ(M) is defined as the smallest positive value µ such that the following hold:

A1. For all i ∈ [d], we have $\sum_{a=1}^r U_{ia}^2\,(\sigma_a/\sigma_1) \le \mu r / d$.
A2. For all i ≠ j ∈ [d], we have $\big|\sum_{a=1}^r U_{ia} U_{ja}\,(\sigma_a/\sigma_1)\big| \le \mu\sqrt{r} / d$.

The incoherence measures how well spread out the entries of the matrix are, and is a common measure of the difficulty of completing a matrix from random samples [3, 11].

3.1 Performance guarantee

For any d × d positive semidefinite matrix M of rank r with incoherence µ(M) = µ and effective condition number $\kappa = \sigma_{\max}(M)/\sigma_{\min}(M)$, we define

$\rho^2 \equiv (\kappa\mu)^{2k}\, g(k)\, \max\Big\{ 1,\; \frac{(dp)^{k-1}}{d},\; \frac{r^k p^{k-1}}{d^{k-1}} \Big\}$,  (8)

such that the variance of our estimator is bounded by $\mathrm{Var}\big(\hat\Theta_k(P_\Omega(M))/\|M\|_k^k\big) \le \rho^2\,\big(r^{1-2/k}/(dp)\big)^k$, as we show in the proof of Theorem 1 in Section D.1. Here, g(k) = O(k!).

Theorem 1 (Upper bound under Erdös-Rényi sampling). For any integer k ∈ [3, ∞), any δ > 0, and any rank-r positive semidefinite matrix $M \in \mathbb{R}^{d\times d}$, given i.i.d. samples of the entries of M, each observed with probability p, the proposed estimate (3) achieves normalized error δ with probability bounded by

$P\left( \frac{\big|\hat\Theta_k(P_\Omega(M)) - \|M\|_k^k\big|}{\|M\|_k^k} \ge \delta \right) \le \frac{\rho^2}{\delta^2} \left( \frac{r^{1-2/k}}{dp} \right)^k$.  (9)

Consider a typical scenario where µ, κ, and k are finite with respect to d and r.
Then the Chebyshev bound in (9) implies that a sample size of $d^2 p = O(d\,r^{1-2/k})$ is sufficient to recover $\|M\|_k^k$ up to arbitrarily small multiplicative error and arbitrarily small (but strictly positive) error probability. This is strictly less than the known minimax sample complexity for recovering the entire low-rank matrix, which scales as Θ(rd log d). As we seek to estimate only a property of the matrix (i.e., the Schatten k-norm) and not the whole matrix itself, we can be more efficient in the sample complexity by a factor of $r^{2/k}$ in the rank and a factor of log d in the dimension. We emphasize that such a gain can only be established using the proposed estimator based on the structure of the k-cyclic pseudographs. We will show empirically that standard matrix completion approaches fail in the critical regime of samples below the recovery threshold of O(rd log d).

Figure 4: The proposed estimator outperforms both baseline approaches below the matrix completion threshold (relative error vs. sampling probability p; left panel: d = 500, r = 100; right panel: d = 500, r = 500). For k = 5, comparison of the absolute relative error in the estimated Schatten norm, $\big|\|M\|_k^k - \widehat{\|M\|_k^k}\big| / \|M\|_k^k$, for three algorithms: (1) the proposed estimator, $\widehat{\|M\|_k^k} = \hat\Theta_k(P_\Omega(M))$; (2) the Schatten norm of the scaled sampled matrix, $\widehat{\|M\|_k^k} = \|(1/p)\,\mathcal{P}_r(P_\Omega(M))\|_k^k$; (3) the Schatten norm of the completed matrix $\widetilde M = \mathrm{AltMin}(P_\Omega(M))$ from [10], $\widehat{\|M\|_k^k} = \|\widetilde M\|_k^k$, where $\mathcal{P}_r(\cdot)$ is the standard best rank-r projection of a matrix. Ω is generated by Erdös-Rényi sampling of matrix M with probability p.
Figure 4 is a scatter plot of the absolute relative error in the estimated Schatten k-norm, $\big|\|M\|_k^k - \widehat{\|M\|_k^k}\big| / \|M\|_k^k$, for k = 5, for three approaches: the proposed estimator, the Schatten norm of the scaled sampled matrix (after rank-r projection), and the Schatten norm of the completed matrix, using the state-of-the-art alternating minimization algorithm [10]. All three estimators are evaluated 20 times for each value of p. M is a symmetric positive semidefinite matrix of size d = 500 and rank r = 100 (left panel) or r = 500 (right panel). The singular vectors U of $M = U\Sigma U^\top$ are generated by QR decomposition of $\mathcal{N}(0, I_{d\times d})$, and $\Sigma_{ii}$ is uniformly distributed over [1, 2]. For the low-rank matrix on the left, there is a clear critical value of p ≃ 0.45, above which matrix completion is exact with high probability. However, this algorithm knows the underlying rank and crucially exploits the fact that the underlying matrix is exactly low-rank. In comparison, our approach is agnostic to the low-rank assumption but finds an accurate estimate, adapting to the actual rank in a data-driven manner. Using the first r singular values of the (rescaled) sampled matrix fails miserably in all regimes (we truncate the error at one for illustration purposes). In this paper, we are interested in the regime where exact matrix completion is impossible because we do not have enough samples to exactly recover the underlying matrix: p ≤ 0.45 in the left panel and all regimes in the right panel.

The sufficient condition $d^2 p = O(d\,r^{1-2/k})$ in Theorem 1 holds for a broad range of parameters where the rank is sufficiently small, $r = O(d^{k/((k-1)(k-2))})$ (to ensure that the first term in $\rho^2$ dominates). However, the numerical experiments in Figure 5 suggest that our analysis holds more generally for all regimes of the rank r, even those close to d. M is generated using settings similar to those of Figure 4. Empirical probabilities are computed by averaging over 100 instances.
One might hope to tighten the Chebyshev bound by exploiting the fact that the correlation among the summands in our estimator (3) is weak. This can be made precise using a recent result from [18], where a Bernstein-type bound was proved for sums of polynomials of independent random variables that are weakly correlated. The first term in the bound (10) is the natural Bernstein-type analogue of the Chebyshev bound in (9). However, in the regime where k is large or p is large, the correlation among the summands becomes stronger, and the second and third terms in (10) start to dominate. In the typical regime of interest, where µ, κ, and k are finite, $d^2 p = O(d\,r^{1-2/k})$, and the rank is sufficiently small, $r = O(d^{k/((k-1)(k-2))})$, the error probability is dominated by the first term on the right-hand side of (10). Neither of the two bounds (9) and (10) dominates the other; depending on the values of the problem parameters, one may apply whichever is tighter. We provide a proof in Section D.2.

Theorem 2. Under the hypotheses of Theorem 1, the error probability is upper bounded by

$P\left( \frac{\big|\hat\Theta_k(P_\Omega(M)) - \|M\|_k^k\big|}{\|M\|_k^k} \ge \delta \right) \le e^2 \max\left\{ e^{-\frac{\delta^2}{\rho^2}\left(\frac{dp}{r^{1-2/k}}\right)^k},\; e^{-(dp)\left(\frac{\delta d}{\rho r^{k-1}}\right)^{1/k}},\; e^{-(dp)\frac{\delta d}{\rho r^{k-1}}},\; e^{-\frac{\delta dp}{\rho}} \right\}$.  (10)

Figure 5: Each colormap, in each block for k ∈ {2, 3, 4, 5, 6, 7}, shows the empirical probability of the event $\big|\|M\|_k^k - \hat\Theta_k(P_\Omega(M))\big| / \|M\|_k^k \le \delta$, for δ = 0.5 (left panel) and δ = 0.2 (right panel), as a function of the rank r (horizontal axis) and the sampling probability p (vertical axis). Ω is generated by Erdös-Rényi sampling of matrix M with probability p. M is a symmetric positive semidefinite matrix of size d = 1000. The solid lines correspond to our theoretical prediction $p = (1/d)\,r^{1-2/k}$.

These two results show that a sample size of $d^2 p = O(d\,r^{1-2/k})$ is sufficient to estimate a Schatten k-norm accurately.
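To make the sample-complexity gap concrete, the following small sketch compares the sufficient sample size for Schatten k-norm estimation, $d^2p = O(d\,r^{1-2/k})$, with the matrix-completion threshold Θ(rd log d). The helper names and the choice of constants (set to 1) are illustrative, not from the paper:

```python
import math

def schatten_samples(d, r, k):
    # Sufficient sample size d^2 p = O(d r^{1-2/k}), constants dropped.
    return d * r ** (1 - 2 / k)

def completion_samples(d, r):
    # Matrix-completion threshold Theta(r d log d), constants dropped.
    return r * d * math.log(d)

d, r, k = 1000, 100, 5
# Estimating the norm needs fewer samples by a factor of r^{2/k} * log d.
assert schatten_samples(d, r, k) < completion_samples(d, r)
```

For d = 1000, r = 100, k = 5 this gives roughly 1.6 × 10⁴ versus 6.9 × 10⁵, matching the r^{2/k} log d gap discussed above.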
In general, we do not expect a universal upper bound that is significantly tighter for all r, because for the special case r = d, the following corollary of [15, Theorem 3.2] provides a lower bound: a sample size of $d^2 p = \Omega(d^{2-4/k})$ is necessary when r = d. Hence, the gap is at most a factor of $r^{2/k}$ in the sample complexity.

Corollary 1. Consider any linear observation $X \in \mathbb{R}^n$ of a matrix $M \in \mathbb{R}^{d\times d}$ and any estimate θ(X) satisfying $(1 - \delta_k)\|M\|_k^k \le \theta(X) \le (1 + \delta_k)\|M\|_k^k$ for any M with probability at least 3/4, where $\delta_k = (1.2^k - 1)/(1.2^k + 1)$. Then $n = \Omega(d^{2-4/k})$.

For k ∈ {1, 2}, precise bounds can be obtained with simpler analyses. In particular, we have the following remarks, whose proofs follow immediately by applying Chebyshev's inequality and Bernstein's inequality together with the incoherence assumptions.

Remark 3. For k = 1, the probability of error in (9) is upper bounded by min{ν₁, ν₂}, where

$\nu_1 \equiv \frac{1}{\delta^2}\,\frac{(\kappa\mu)^2}{dp}$, and $\nu_2 \equiv 2\exp\left\{ -\frac{\delta^2}{2} \left( \frac{(\kappa\mu)^2}{dp} + \frac{\delta(\kappa\mu)}{3dp} \right)^{-1} \right\}$.

Remark 4. For k = 2, the probability of error in (9) is upper bounded by min{ν₁, ν₂}, where

$\nu_1 \equiv \frac{1}{\delta^2}\,(\kappa\mu)^4 \left( \frac{2}{d^2 p} + \frac{r^2}{d} \right)$, and $\nu_2 \equiv 2\exp\left\{ -\frac{\delta^2}{2} \left( (\kappa\mu)^4 \left( \frac{2}{d^2 p} + \frac{r^2}{d} \right) + \frac{\delta(\kappa\mu)^2 r}{3 d^2 p} \right)^{-1} \right\}$.

When k = 2, for small rank $r \le C\sqrt{d}$, we only need $d^2 p = \Omega(1)$ samples for recovery up to arbitrarily small multiplicative error. When the rank r is large, our estimator requires $d^2 p = \Omega(d)$ for both k ∈ {1, 2}.

Acknowledgments

This work was partially supported by NSF grants CNS-1527754, CCF-1553452, CCF-1705007 and a Google Faculty Research Award.

References

[1] Dimitris Achlioptas and Frank McSherry. Fast computation of low rank matrix approximations. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 611–618. ACM, 2001.
[2] N. Alon, R. Yuster, and U. Zwick. Finding and counting given length cycles. Algorithmica, 17(3):209–223, 1997.
[3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[4] E. Di Napoli, E.
Polizzi, and Y. Saad. Efficient estimation of eigenvalue counts in an interval. Numerical Linear Algebra with Applications, 2016.
[5] Khaled M. Elbassioni. A polynomial delay algorithm for generating connected induced subgraphs of a given cardinality. J. Graph Algorithms Appl., 19(1):273–280, 2015.
[6] U. Feige and E. Ofek. Spectral techniques applied to sparse random graphs. Random Struct. Algorithms, 27(2):251–275, 2005.
[7] J. Friedman, J. Kahn, and E. Szemerédi. On the second eigenvalue in random regular graphs. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, pages 587–598, Seattle, Washington, USA, May 1989. ACM.
[8] I. Han, D. Malioutov, H. Avron, and J. Shin. Approximating the spectral sums of large-scale matrices using Chebyshev approximations. arXiv preprint arXiv:1606.00942, 2016.
[9] I. Han, D. Malioutov, and J. Shin. Large-scale log-determinant computation through stochastic Chebyshev expansions. In ICML, pages 908–917, 2015.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665–674, 2013.
[11] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
[12] T. Kloks, D. Kratsch, and H. Müller. Finding and counting small induced subgraphs efficiently. Information Processing Letters, 74(3):115–121, 2000.
[13] W. Kong and G. Valiant. Spectrum estimation from samples. arXiv preprint arXiv:1602.00061, 2016.
[14] C. M. Le, E. Levina, and R. Vershynin. Sparse random graphs: regularization and concentration of the Laplacian. arXiv preprint arXiv:1502.03049, 2015.
[15] Y. Li, H. L. Nguyên, and D. P. Woodruff. On sketching matrix norms and the top singular vector. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1562–1581. Society for Industrial and Applied Mathematics, 2014.
[16] Y. Li and D. P. Woodruff.
On approximating functions of the singular values in a stream. arXiv preprint arXiv:1604.08679, 2016.
[17] J. C. Mason and D. C. Handscomb. Chebyshev Polynomials. CRC Press, 2002.
[18] W. Schudy and M. Sviridenko. Bernstein-like concentration and moment inequalities for polynomials of independent random variables: multilinear case. arXiv preprint arXiv:1109.5193, 2011.
[19] Ryuhei Uehara et al. The number of connected components in graphs and its applications. Manuscript. URL: http://citeseerx.ist.psu.edu/viewdoc/summary, 1999.
[20] Y. Zhang, M. J. Wainwright, and M. I. Jordan. Distributed estimation of generalized matrix rank: Efficient algorithms and lower bounds. arXiv preprint arXiv:1502.01403, 2015.
On the Power of Truncated SVD for General High-rank Matrix Estimation Problems

Simon S. Du, Carnegie Mellon University, ssdu@cs.cmu.edu
Yining Wang, Carnegie Mellon University, yiningwa@cs.cmu.edu
Aarti Singh, Carnegie Mellon University, aartisingh@cmu.edu

Abstract

We show that given an estimate $\hat A$ that is close to a general high-rank positive semidefinite (PSD) matrix A in spectral norm (i.e., $\|\hat A - A\|_2 \le \delta$), the simple truncated Singular Value Decomposition of $\hat A$ produces a multiplicative approximation of A in Frobenius norm. This observation leads to many interesting results on general high-rank matrix estimation problems:

1. High-rank matrix completion: we show that it is possible to recover a general high-rank matrix A up to (1 + ε) relative error in Frobenius norm from partial observations, with sample complexity independent of the spectral gap of A.

2. High-rank matrix de-noising: we design an algorithm that recovers a matrix A with small error in Frobenius norm from its noise-perturbed observations, without assuming A is exactly low-rank.

3. Low-dimensional approximation of high-dimensional covariance: given N i.i.d. samples of dimension n from $\mathcal{N}_n(0, A)$, we show that it is possible to approximate the covariance matrix A with small relative error in Frobenius norm with N ≈ n, improving over classical covariance estimation results, which require N ≈ n².

1 Introduction

Let A be an unknown general high-rank n × n PSD data matrix that one wishes to estimate. In many machine learning applications, though A is unknown, it is relatively easy to obtain a crude estimate $\hat A$ that is close to A in spectral norm (i.e., $\|\hat A - A\|_2 \le \delta$). For example, in matrix completion a simple procedure that fills all unobserved entries with 0 and re-scales the observed entries produces an estimate that is consistent in spectral norm (assuming the matrix satisfies a spikeness condition, a standard assumption in the matrix completion literature).
In matrix de-noising, an observation that is corrupted by Gaussian noise is close to the underlying signal in spectral norm, because Gaussian noise is isotropic and has small spectral norm. In covariance estimation, the sample covariance in low-dimensional settings is close to the population covariance in spectral norm under mild conditions [Bunea and Xiao, 2015]. However, in most such applications it is not sufficient to settle for a spectral norm approximation. For example, in recommendation systems (an application of matrix completion) the zero-filled, re-scaled rating matrix is close to the ground truth in spectral norm, but it is an absurd estimator because most of the estimated ratings are zero. It is hence mandatory to require a more stringent measure of performance. One commonly used measure is the Frobenius norm of the estimation error, $\|\hat A - A\|_F$, which ensures that (on average) the estimate is close to the ground truth in an element-wise sense. A spectral norm approximation $\hat A$ is in general not a good estimate under the Frobenius norm, because in high-rank scenarios $\|\hat A - A\|_F$ can be $\sqrt{n}$ times larger than $\|\hat A - A\|_2$.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper, we show that in many cases a powerful multiplicative low-rank approximation in Frobenius norm can be obtained by applying a simple truncated SVD procedure to a crude, easy-to-find spectral norm approximation. In particular, given the spectral norm approximation condition $\|\hat A - A\|_2 \le \delta$, the top-k SVD $\hat A_k$ of $\hat A$ multiplicatively approximates A in Frobenius norm; that is, $\|\hat A_k - A\|_F \le C(k, \delta, \sigma_{k+1}(A)) \cdot \|A - A_k\|_F$, where $A_k$ is the best rank-k approximation of A in Frobenius and spectral norm. To our knowledge, the best existing result under the assumption $\|\hat A - A\|_2 \le \delta$ is due to Achlioptas and McSherry [2007], who showed that $\|\hat A_k - A\|_F \le \|A - A_k\|_F + \sqrt{k}\,\delta + 2k^{1/4}\sqrt{\delta\,\|A_k\|_F}$, which depends on $\|A_k\|_F$ and is not multiplicative in $\|A - A_k\|_F$. Below we summarize applications to several matrix estimation problems.
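The central claim — that the top-k SVD of a crude spectral-norm approximation stays close to the best rank-k approximation of A in Frobenius norm — can be illustrated with a small NumPy sketch. The matrix sizes, spectrum, and the factor-2 check below are illustrative choices, not the paper's formal guarantee:

```python
import numpy as np

def truncated_svd_psd(S, k):
    """Best rank-k approximation of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(-np.abs(vals))[:k]  # top-k eigenvalues by magnitude
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

rng = np.random.default_rng(3)
n, k = 50, 5
# A high-rank PSD matrix with a decaying spectrum (all eigenvalues positive).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.0 / np.arange(1, n + 1)
A = (U * sigma) @ U.T

# Crude estimate: A_hat = A + delta * v v^T, so ||A_hat - A||_2 = delta.
delta = 0.01
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
A_hat = A + delta * np.outer(v, v)

err_trunc = np.linalg.norm(truncated_svd_psd(A_hat, k) - A)  # ||A_hat_k - A||_F
err_best = np.linalg.norm(A - truncated_svd_psd(A, k))       # ||A - A_k||_F

# The truncated SVD of the crude estimate stays within a small
# multiplicative factor of the best achievable rank-k error.
assert err_trunc <= 2 * err_best
```

Here δ is small relative to σ_{k+1}(A) ≈ 0.17, which is the regime where the multiplicative bound of the paper's Theorem 2.1 applies.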
High-rank matrix completion. Matrix completion is the problem of (approximately) recovering a data matrix from very few observed entries. It has wide applications in machine learning, especially in online recommendation systems. Most existing work on matrix completion assumes the data matrix is exactly low-rank [Candes and Recht, 2012, Sun and Luo, 2016, Jain et al., 2013]. Candes and Plan [2010] and Keshavan et al. [2010] studied the problem of recovering a low-rank matrix corrupted by stochastic noise; Chen et al. [2016] considered sparse column corruption. All of the aforementioned work assumes that the ground-truth data matrix is exactly low-rank, which is rarely true in practice. Negahban and Wainwright [2012] derived minimax rates of estimation error when the spectrum of the data matrix lies in an ℓ_q ball. Zhang et al. [2015] and Koltchinskii et al. [2011] derived oracle inequalities for general matrix completion; however, their error bounds have an additional $O(\sqrt{n})$ multiplicative factor. These results also require solving computationally expensive nuclear-norm-penalized optimization problems, whereas our method only requires a single truncated singular value decomposition. Chatterjee et al. [2015] also used the truncated SVD estimator for matrix completion; however, their bound depends on the nuclear norm of the underlying matrix, which may be $\sqrt{n}$ times larger than our result. Hardt and Wootters [2014] used a "soft-deflation" technique to remove the condition number dependency in the sample complexity; however, their error bound for general high-rank matrix completion is additive and depends on the "consecutive" spectral gap $(\sigma_k(A) - \sigma_{k+1}(A))$, which can be small in practical settings [Balcan et al., 2016, Anderson et al., 2015]. Eriksson et al. [2012] considered high-rank matrix completion with additional union-of-subspace structures.
In this paper, we show that if the n × n data matrix A satisfies the µ₀-spikeness condition,¹ then for any ε ∈ (0, 1), the truncated SVD $\hat A_k$ of the zero-filled matrix satisfies $\|\hat A_k - A\|_F \le (1 + O(\varepsilon))\|A - A_k\|_F$ if the sample complexity is lower bounded by $\Omega\big( n \max\{\varepsilon^{-4}, k^2\}\, \mu_0^2\, \|A\|_F^2 \log n / \sigma_{k+1}(A)^2 \big)$, which can be further simplified to $\Omega\big( \mu_0^2 \max\{\varepsilon^{-4}, k^2\}\, \gamma_k(A)^2 \cdot n\, r_s(A) \log n \big)$, where $\gamma_k(A) = \sigma_1(A)/\sigma_{k+1}(A)$ is the k-th-order condition number and $r_s(A) = \|A\|_F^2 / \|A\|_2^2 \le \mathrm{rank}(A)$ is the stable rank of A. Compared to existing work, our error bound is multiplicative and gap-free, and the estimator is computationally efficient.²

High-rank matrix de-noising. Let $\hat A = A + E$ be a noisy observation of A, where E is a PSD Gaussian noise matrix with zero mean and variance ν²/n on each entry. By simple concentration results, $\|\hat A - A\|_2 = O(\nu)$ with high probability; however, $\hat A$ is in general not a good estimator of A in Frobenius norm when A is high-rank. Specifically, $\|\hat A - A\|_F$ can be as large as $\sqrt{n}\,\nu$. Applying our main result, we show that if ν < σ_{k+1}(A) for some k ≪ n, then the top-k SVD $\hat A_k$ of $\hat A$ satisfies $\|\hat A_k - A\|_F \le \big(1 + O(\sqrt{\nu/\sigma_{k+1}(A)})\big)\|A - A_k\|_F + \sqrt{k}\,\nu$. This suggests a form of bias-variance decomposition, as a larger rank threshold k induces smaller bias $\|A - A_k\|_F$ but larger variance kν². Our results generalize existing work on matrix de-noising [Donoho and Gavish, 2014, Donoho et al., 2013, Gavish and Donoho, 2014], which focuses primarily on exactly low-rank A.

¹ $n\|A\|_{\max} \le \mu_0\|A\|_F$; see also Definition 2.1.
² We remark that our relative-error analysis does not, however, apply to exactly rank-k matrices, where σ_{k+1} = 0. This is because for an exactly rank-k matrix a bound of the form $(1 + O(\varepsilon))\|A - A_k\|_F$ requires exact recovery of A, which truncated SVD cannot achieve. On the other hand, in the case σ_{k+1} = 0 a weaker additive-error bound is always applicable, as we show in Theorem 2.3.
Low-rank estimation of high-dimensional covariance. The (Gaussian) covariance estimation problem asks to estimate an n × n PSD covariance matrix A, in spectral or Frobenius norm, from N i.i.d. samples $X_1, \cdots, X_N \sim \mathcal{N}(0, A)$. The high-dimensional regime of covariance estimation, in which N ≈ n or even N ≪ n, has attracted enormous interest in the mathematical statistics literature [Cai et al., 2010, Cai and Zhou, 2012, Cai et al., 2013, 2016]. While most existing work focuses on sparse or banded covariance matrices, the setting where A has a certain low-rank structure has seen rising interest recently [Bunea and Xiao, 2015, Kneip and Sarda, 2011]. In particular, Bunea and Xiao [2015] show that if $n = O(N^\beta)$ for some β ≥ 0, then the sample covariance estimator $\hat A = \frac{1}{N}\sum_{i=1}^N X_i X_i^\top$ satisfies

$\|\hat A - A\|_F = O_P\Big( \|A\|_2\, r_e(A)\, \sqrt{\frac{\log N}{N}} \Big)$,  (1)

where $r_e(A) = \mathrm{tr}(A)/\|A\|_2 \le \mathrm{rank}(A)$ is the effective rank of A. For high-rank matrices where $r_e(A) \approx n$, Eq. (1) requires N = Ω(n² log n) samples to approximate A consistently in Frobenius norm. In this paper we consider a reduced-rank estimator $\hat A_k$ and show that, if $r_e(A)\, \max\{\varepsilon^{-4}, k^2\}\, \gamma_k(A)^2\, \frac{\log N}{N} \le c$ for some small universal constant c > 0, then $\|\hat A_k - A\|_F$ admits a relative Frobenius-norm error bound of $(1 + O(\varepsilon))\|A - A_k\|_F$ with high probability. Our result allows reasonable approximation of A in Frobenius norm in the regime N = Ω(n · poly(k) log n) when $\gamma_k = O(\mathrm{poly}(k))$, which is significantly more flexible than N = Ω(n² log n), though the dependency on ε is worse than in [Bunea and Xiao, 2015]. The error bound is also agnostic in nature, making no assumption on the actual or effective rank of A.

Notations. For an n × n PSD matrix A, denote by $A = U\Sigma U^\top$ its eigenvalue decomposition, where U is an orthogonal matrix and $\Sigma = \mathrm{diag}(\sigma_1, \cdots, \sigma_n)$ is a diagonal matrix with eigenvalues sorted in descending order, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. The spectral norm and Frobenius norm of A are $\|A\|_2 = \sigma_1$ and $\|A\|_F = \sqrt{\sigma_1^2 + \cdots + \sigma_n^2}$, respectively.
Suppose $u_1, \cdots, u_n$ are eigenvectors associated with $\sigma_1, \cdots, \sigma_n$. Define $A_k = \sum_{i=1}^k \sigma_i u_i u_i^\top = U_k \Sigma_k U_k^\top$, $A_{n-k} = \sum_{i=k+1}^n \sigma_i u_i u_i^\top = U_{n-k}\Sigma_{n-k}U_{n-k}^\top$, and $A_{m_1:m_2} = \sum_{i=m_1+1}^{m_2} \sigma_i u_i u_i^\top = U_{m_1:m_2}\Sigma_{m_1:m_2}U_{m_1:m_2}^\top$. For a tall matrix $U \in \mathbb{R}^{n\times k}$, we use $\mathcal{U} = \mathrm{Range}(U)$ to denote the linear subspace spanned by the columns of U. For two linear subspaces $\mathcal{U}$ and $\mathcal{V}$, we write $\mathcal{W} = \mathcal{U} \oplus \mathcal{V}$ if $\mathcal{U} \cap \mathcal{V} = \{0\}$ and $\mathcal{W} = \{u + v : u \in \mathcal{U}, v \in \mathcal{V}\}$. For a sequence of random variables $\{X_n\}_{n=1}^\infty$ and a real-valued function $f : \mathbb{N} \to \mathbb{R}$, we say $X_n = O_P(f(n))$ if for any ε > 0 there exist $N \in \mathbb{N}$ and C > 0 such that $\Pr[|X_n| \ge C\,|f(n)|] \le \varepsilon$ for all n ≥ N.

2 Multiplicative Frobenius-norm Approximation and Applications

We first state our main result, which shows that truncated SVD applied to a weak estimator with small approximation error in spectral norm yields a strong estimator with a multiplicative Frobenius-norm error bound. We remark that truncated SVD in general has time complexity $O\big(\min\{n^2 k,\ \mathrm{nnz}(\hat A) + n\,\mathrm{poly}(k)\}\big)$, where $\mathrm{nnz}(\hat A)$ is the number of non-zero entries in $\hat A$; the time complexity is at most linear in the matrix size when k is small. We refer readers to [Allen-Zhu and Li, 2016] for details.

Theorem 2.1. Suppose A is an n × n PSD matrix with eigenvalues $\sigma_1(A) \ge \cdots \ge \sigma_n(A) \ge 0$, and a symmetric matrix $\hat A$ satisfies $\|\hat A - A\|_2 \le \delta = \varepsilon^2 \sigma_{k+1}(A)$ for some ε ∈ (0, 1/4]. Let $A_k$ and $\hat A_k$ be the best rank-k approximations of A and $\hat A$. Then

$\|\hat A_k - A\|_F \le (1 + 32\varepsilon)\,\|A - A_k\|_F + 10^2\sqrt{2k}\,\varepsilon^2\,\|A - A_k\|_2$.  (2)

Remark 2.1. Note that when ε = O(1/√k) we obtain a (1 + O(ε)) error bound. Remark 2.2. This theorem only studies PSD matrices. Using arguments similar to those in the proof, we believe similar results can be obtained for general asymmetric matrices as well.

To our knowledge, the best existing bound for $\|\hat A_k - A\|_F$ assuming $\|\hat A - A\|_2 \le \delta$ is due to Achlioptas and McSherry [2007], who showed that

$\|\hat A_k - A\|_F \le \|A - A_k\|_F + \|(\hat A - A)_k\|_F + 2\sqrt{\|(\hat A - A)_k\|_F\,\|A_k\|_F} \le \|A - A_k\|_F + \sqrt{k}\,\delta + 2k^{1/4}\sqrt{\delta\,\|A_k\|_F}$.  (3)

Compared to Theorem 2.1, Eq.
(3) is not relative, because the third term $2k^{1/4}\sqrt{\delta\,\|A_k\|_F}$ depends on the k largest eigenvalues of A, which can be much larger than the remainder term $\|A - A_k\|_F$. In contrast, Theorem 2.1, together with Remark 2.1, shows that $\|\hat A_k - A\|_F$ can be upper bounded by a small factor times the remainder term $\|A - A_k\|_F$. We also provide a gap-dependent version.

Theorem 2.2. Suppose A is an n × n PSD matrix with eigenvalues $\sigma_1(A) \ge \cdots \ge \sigma_n(A) \ge 0$, and a symmetric matrix $\hat A$ satisfies $\|\hat A - A\|_2 \le \delta = \varepsilon\,(\sigma_k(A) - \sigma_{k+1}(A))$ for some ε ∈ (0, 1/4]. Let $A_k$ and $\hat A_k$ be the best rank-k approximations of A and $\hat A$. Then

$\|\hat A_k - A\|_F \le \|A - A_k\|_F + 10^2\sqrt{2k}\,\varepsilon\,(\sigma_k(A) - \sigma_{k+1}(A))$.  (4)

If A is an exactly rank-k matrix, Theorem 2.2 implies that truncated SVD gives an $O(\varepsilon\sqrt{2k}\,\sigma_k)$-error approximation in Frobenius norm, which has been established in many previous works [Yi et al., 2016, Tu et al., 2015, Wang et al., 2016]. Before we proceed to the applications and the proof of Theorem 2.1, we first list several examples of A with classical eigenvalue distributions and discuss how Theorem 2.1 can be applied to obtain good Frobenius-norm approximations of A. We begin with the case where the eigenvalues of A decay at a polynomial rate (i.e., follow a power law). Such matrices are ubiquitous in practice [Liu et al., 2015].

Corollary 2.1 (Power-law spectral decay). Suppose $\|\hat A - A\|_2 \le \delta$ for some δ ∈ (0, 1/2] and $\sigma_j(A) = j^{-\beta}$ for some β > 1/2. Set k =
min{C1δ−1/β, n} −1. If k ≥1 then Ak −AF ≤C 1 · max δ 2β−1 2β , n−2β−1 2β , where C1, C 1 > 0 are constants that only depend on β. We remark that the assumption σj(A) = j−β implies that the eigenvalues lie in an q ball for q = 1/β; that is, n j=1 σj(A)q = O(1). The error bound in Corollary 2.1 matches the minimax rate (derived by Negahban and Wainwright [2012]) for matrix completion when the spectrum is constrained in an q ball, by replacing δ with n/N where N is the number of observed entries. Next, we consider the case where eigenvalues satisfy a faster decay rate. Corollary 2.2 (Exponential spectral decay). Suppose ˆA −A2 ≤δ for some δ ∈(0, e−16) and σj(A) = exp{−cj} for some c > 0. Set k =
min{c−1 log(1/δ) −c−1 log log(1/δ), n} −1. If k ≥1 then Ak −AF ≤C 2 · max δ log(1/δ)3, n1/2 exp(−cn) , where C 2 > 0 is a constant that only depends on c. Both corollaries are proved in the appendix. The error bounds in both Corollaries 2.1 and 2.2 are significantly better than the trivial estimate A, which satisfies A −AF ≤n1/2δ. We also remark that the bound in Corollary 2.1 cannot be obtained by a direct application of the weaker bound Eq. (3), which yields a δ β 2β−1 bound. We next state results that are consequences of Theorem 2.1 in several matrix estimation problems. 2.1 High-rank Matrix Completion Suppose A is a high-rank n × n PSD matrix that satisfies µ0-spikeness condition defined as follows: 4 Definition 2.1 (Spikeness condition). An n × n PSD matrix A satisfies µ0-spikeness condition if nAmax ≤µ0AF , where Amax = max1≤i,j≤n |Aij| is the max-norm of A. Spikeness condition makes uniform sampling of matrix entries powerful in matrix completion problems. If A is exactly low rank, the spikeness condition is implied by an upper bound on max1≤i≤n e i Uk2, which is the standard incoherence assumption on the top-k space of A [Candes and Recht, 2012]. For general high-rank A, the spikeness condition is implied by a more restrictive incoherence condition that imposes an upper bound on max1≤i≤n e i Un−k2 and An−kmax, which are assumptions adopted in [Hardt and Wootters, 2014]. Suppose A is a symmetric re-scaled zero-filled matrix of observed entries. That is, [ A]ij =
$A_{ij}/p$ with probability $p$, and $0$ with probability $1-p$, for all $1 \le i \le j \le n$. (5) Here $p \in (0,1)$ is a parameter that controls the probability of observing a particular entry in $A$, corresponding to a sample complexity of $O(n^2 p)$. Note that both $A$ and $\hat A$ are symmetric, so we only specify the upper triangle of $\hat A$. By a simple application of the matrix Bernstein inequality [Mackey et al., 2014], one can show that $\hat A$ is close to $A$ in spectral norm when $A$ satisfies $\mu_0$-spikeness. Here we cite a lemma from [Hardt, 2014] to formally establish this observation:

Lemma 2.1 (Corollary of [Hardt, 2014], Lemma A.3). Under the model of Eq. (5) and the $\mu_0$-spikeness condition on $A$, for $t \in (0,1)$ it holds with probability at least $1-t$ that
$$\|\hat A - A\|_2 \le O\Big(\max\Big\{\sqrt{\tfrac{\mu_0^2\|A\|_F^2\log(n/t)}{np}},\ \tfrac{\mu_0\|A\|_F\log(n/t)}{np}\Big\}\Big).$$

Let $\hat A_k$ be the best rank-$k$ approximation of $\hat A$ in Frobenius/spectral norm. Applying Theorems 2.1 and 2.2 we obtain the following result:

Theorem 2.3. Fix $t \in (0,1)$. Then with probability $1-t$ we have $\|\hat A_k - A\|_F \le O(\sqrt{k})\cdot\|A - A_k\|_F$ if $p = \Omega\big(\tfrac{\mu_0^2\|A\|_F^2\log(n/t)}{n\,\sigma_{k+1}(A)^2}\big)$. Furthermore, for fixed $\epsilon \in (0,1/4]$, with probability $1-t$ we have
$$\|\hat A_k - A\|_F \le (1 + O(\epsilon))\,\|A - A_k\|_F \quad \text{if} \quad p = \Omega\Big(\tfrac{\mu_0^2\max\{\epsilon^{-4},k^2\}\|A\|_F^2\log(n/t)}{n\,\sigma_{k+1}(A)^2}\Big),$$
$$\|\hat A_k - A\|_F \le \|A - A_k\|_F + \epsilon\,(\sigma_k(A) - \sigma_{k+1}(A)) \quad \text{if} \quad p = \Omega\Big(\tfrac{\mu_0^2 k\|A\|_F^2\log(n/t)}{n\,\epsilon^2(\sigma_k(A) - \sigma_{k+1}(A))^2}\Big).$$

As a remark, because $\mu_0 \ge 1$ and $\|A\|_F/\sigma_{k+1}(A) \ge \sqrt{k}$ always hold, the sample complexity is lower bounded by $\Omega(nk\log n)$, the typical sample complexity in noiseless matrix completion. In the case of high-rank $A$, the results in Theorem 2.3 are strongest when $A$ has small stable rank $r_s(A) = \|A\|_F^2/\|A\|_2^2$ and the top-$k$ condition number $\gamma_k(A) = \sigma_1(A)/\sigma_{k+1}(A)$ is not too large. For example, if $A$ has stable rank $r_s(A) = r$, then $\|\hat A_k - A\|_F$ has an $O(\sqrt{k})$ multiplicative error bound with sample complexity $\Omega(\mu_0^2\gamma_k(A)^2\cdot nr\log n)$, or a $(1+O(\epsilon))$ relative error bound with sample complexity $\Omega(\mu_0^2\max\{\epsilon^{-4},k^2\}\gamma_k(A)^2\cdot nr\log n)$.
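As a quick numerical illustration of the completion setting above, one can form the zero-filled rescaled observation matrix of Eq. (5) and compare truncated SVD against the trivial estimate. All sizes, the decay rate, and the sampling probability below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 200, 5, 0.5, 1.0

# Hypothetical PSD matrix with power-law spectrum sigma_j = j^{-beta}.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.arange(1, n + 1) ** -beta
A = (Q * sigma) @ Q.T

# Zero-filled, rescaled observations of Eq. (5): keep (and divide by p) each
# upper-triangular entry independently with probability p, then symmetrize.
keep = np.triu(rng.random((n, n)) < p)
keep = keep | keep.T
A_hat = np.where(keep, A / p, 0.0)

def trunc(M, k):
    """Best rank-k approximation of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(w))[:k]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

err_trunc = np.linalg.norm(trunc(A_hat, k) - A, "fro")  # truncated-SVD estimate
err_best = np.linalg.norm(trunc(A, k) - A, "fro")       # oracle ||A - A_k||_F
err_naive = np.linalg.norm(A_hat - A, "fro")            # trivial estimate Â
```

By the Eckart–Young theorem `err_best` lower-bounds `err_trunc`, and for this sampled instance the truncated estimate is far closer to $A$ than the raw zero-filled matrix.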
Finally, when $\sigma_{k+1}(A)$ is very small and the "gap" $\sigma_k(A) - \sigma_{k+1}(A)$ is large, a weaker additive-error bound is applicable, with sample complexity independent of $\sigma_{k+1}(A)^{-1}$. Comparing with previous works, if the gap $(1 - \sigma_{k+1}/\sigma_k)$ is of order $\epsilon$, then the sample complexities of [Hardt, 2014], Theorem 1.2, and [Hardt and Wootters, 2014], Theorem 1, scale with $1/\epsilon^7$. Our result improves this to a scaling of $1/\epsilon^4$ with a much simpler algorithm (truncated SVD).

2.2 High-rank matrix de-noising

Let $A$ be an $n\times n$ PSD signal matrix and $E$ a symmetric random Gaussian matrix with zero mean and variance $\nu^2/n$; that is, $E_{ij} \overset{\text{i.i.d.}}{\sim} N(0, \nu^2/n)$ for $1 \le i \le j \le n$ and $E_{ij} = E_{ji}$. Define $\hat A = A + E$. The matrix de-noising problem is then to recover the signal matrix $A$ from the noisy observation $\hat A$. We refer the readers to [Gavish and Donoho, 2014] for a list of references that shows the ubiquitous application of matrix de-noising in scientific fields. It is well known, by concentration results for Gaussian random matrices, that $\|\hat A - A\|_2 = \|E\|_2 = O_P(\nu)$. Let $\hat A_k$ be the best rank-$k$ approximation of $\hat A$ in Frobenius/spectral norm. Applying Theorem 2.1, we immediately have the following result:

Theorem 2.4. There exists an absolute constant $c > 0$ such that, if $\nu < c\cdot\sigma_{k+1}(A)$ for some $1 \le k < n$, then with probability at least 0.8 we have
$$\|\hat A_k - A\|_F \le \Big(1 + O\big(\tfrac{\nu}{\sigma_{k+1}(A)}\big)\Big)\|A - A_k\|_F + O(\sqrt{k}\,\nu). \quad (6)$$

Eq. (6) can be understood from a classical bias–variance tradeoff perspective: the first term $(1 + O(\nu/\sigma_{k+1}(A)))\|A - A_k\|_F$ acts as a bias term, which decreases as we increase the cut-off rank $k$, corresponding to a more complicated model; on the other hand, the second term $O(\sqrt{k}\,\nu)$ acts as the (square root of the) variance, which does not depend on the signal $A$ and increases with $k$.

2.3 Low-rank estimation of high-dimensional covariance

Suppose $A$ is an $n\times n$ PSD matrix and $X_1,\dots,X_N$ are i.i.d. samples drawn from the multivariate Gaussian distribution $\mathcal{N}_n(0, A)$. The question is to estimate $A$ from the samples $X_1,\dots,X_N$.
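A minimal sketch of the plug-in strategy analyzed in this subsection — form the sample covariance, then truncate it to rank $k$ — can be written as follows; the dimensions, sample count, and spectrum are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 50, 100, 3

# Hypothetical covariance with exponentially decaying spectrum (small effective rank).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.exp(-0.8 * np.arange(n))
A = (Q * sigma) @ Q.T

X = rng.multivariate_normal(np.zeros(n), A, size=N)
A_hat = X.T @ X / N                     # sample covariance

def trunc(M, k):
    """Best rank-k approximation of a symmetric matrix."""
    w, V = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(w))[:k]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

A_hat_k = trunc(A_hat, k)                         # truncated-SVD covariance estimate
err_trunc = np.linalg.norm(A_hat_k - A, "fro")    # ||Â_k - A||_F
err_sample = np.linalg.norm(A_hat - A, "fro")     # ||Â - A||_F
err_best = np.linalg.norm(trunc(A, k) - A, "fro") # oracle ||A - A_k||_F
```

The truncated estimate is symmetric and has rank at most $k$, and by Eckart–Young its error can never fall below the oracle tail $\|A - A_k\|_F$.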
A common estimator is the sample covariance $\hat A = \frac{1}{N}\sum_{i=1}^{N} X_i X_i^\top$. While in low-dimensional regimes (i.e., $n$ fixed and $N\to\infty$) the asymptotic efficiency of $\hat A$ is obvious (cf. [Van der Vaart, 2000]), its statistical power in high-dimensional regimes, where $n$ and $N$ are comparable, is highly non-trivial. Below we cite results by Bunea and Xiao [2015] for the estimation error $\|\hat A - A\|_\xi$, $\xi = 2/F$, when $n$ is not too large compared to $N$:

Lemma 2.2 (Bunea and Xiao [2015]). Suppose $n = O(N^\beta)$ for some $\beta \ge 0$ and let $r_e(A) = \mathrm{tr}(A)/\|A\|_2$ denote the effective rank of the covariance $A$. Then the sample covariance $\hat A = \frac{1}{N}\sum_{i=1}^{N} X_i X_i^\top$ satisfies
$$\|\hat A - A\|_F = O_P\Big(\|A\|_2\, r_e(A)\sqrt{\tfrac{\log N}{N}}\Big) \quad (7)$$
and
$$\|\hat A - A\|_2 = O_P\Big(\|A\|_2 \max\Big\{\sqrt{\tfrac{r_e(A)\log(Nn)}{N}},\ \tfrac{r_e(A)\log(Nn)}{N}\Big\}\Big). \quad (8)$$

Let $\hat A_k$ be the best rank-$k$ approximation of $\hat A$ in Frobenius/spectral norm. Applying Theorems 2.1 and 2.2 together with Eq. (8), we immediately arrive at the following theorem.

Theorem 2.5. Fix $\epsilon \in (0,1/4]$ and $1 \le k < n$. Recall that $r_e(A) = \mathrm{tr}(A)/\|A\|_2$ and $\gamma_k(A) = \sigma_1(A)/\sigma_{k+1}(A)$. There exists a universal constant $c > 0$ such that, if
$$\frac{r_e(A)\max\{\epsilon^{-4},k^2\}\gamma_k(A)^2\log N}{N} \le c$$
then with probability at least 0.8, $\|\hat A_k - A\|_F \le (1+O(\epsilon))\|A - A_k\|_F$; and if
$$\frac{r_e(A)\,k\,\|A\|_2^2\log N}{N\,\epsilon^2(\sigma_k(A) - \sigma_{k+1}(A))^2} \le c$$
then with probability at least 0.8, $\|\hat A_k - A\|_F \le \|A - A_k\|_F + \epsilon\,(\sigma_k(A) - \sigma_{k+1}(A))$.

Theorem 2.5 shows that it is possible to obtain a reasonable Frobenius-norm approximation of $A$ by truncated SVD in the asymptotic regime $N = \Omega(r_e(A)\,\mathrm{poly}(k)\log N)$, which is much more flexible than Eq. (7), which requires $N = \Omega(r_e(A)^2\log N)$.

3 Proof Sketch of Theorem 2.1

In this section we give a proof sketch of Theorem 2.1. The proof of Theorem 2.2 is similar and less challenging, so we defer it to the appendix. We defer proofs of technical lemmas to Section A. Because both $\hat A_k$ and $A_k$ are low-rank, $\|\hat A_k - A_k\|_F$ is upper bounded by an $O(\sqrt{k})$ factor of $\|\hat A_k - A_k\|_2$.
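The norm conversion invoked here — $\|X\|_F \le \sqrt{\mathrm{rank}(X)}\,\|X\|_2$, applied with $\mathrm{rank}(\hat A_k - A_k) \le 2k$ — can be checked numerically on random rank-$k$ matrices (sizes and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 6

def random_rank_k(n, k, rng):
    """A random symmetric matrix of rank k (with probability one)."""
    B = rng.standard_normal((n, k))
    return B @ B.T

# The difference of two rank-k matrices has rank at most 2k,
# so its Frobenius norm is at most sqrt(2k) times its spectral norm.
D = random_rank_k(n, k, rng) - random_rank_k(n, k, rng)
fro = np.linalg.norm(D, "fro")
spec = np.linalg.norm(D, 2)
assert fro <= np.sqrt(2 * k) * spec + 1e-9
```

The inequality holds for any matrix, because the squared Frobenius norm is the sum of at most $\mathrm{rank}(X)$ squared singular values, each bounded by $\|X\|_2^2$.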
From the condition $\|\hat A - A\|_2 \le \delta$, a straightforward approach to upper bounding $\|\hat A_k - A_k\|_2$ is to consider the decomposition
$$\|\hat A_k - A_k\|_2 \le \|\hat A - A\|_2 + 2\,\|U_k U_k^\top - \hat U_k \hat U_k^\top\|_2\,\|A_k\|_2,$$
where $U_k U_k^\top$ and $\hat U_k \hat U_k^\top$ are the projection operators onto the top-$k$ eigenspaces of $A$ and $\hat A$, respectively. Such a naive approach, however, has two major disadvantages. First, the upper bound depends on $\|A_k\|_2$, which is additive and may be much larger than $\|\hat A - A\|_2$. Perhaps more importantly, the quantity $\|U_k U_k^\top - \hat U_k \hat U_k^\top\|_2$ depends on the "consecutive" spectral gap $(\sigma_k(A) - \sigma_{k+1}(A))$, which could be very small for large matrices.

The key idea in the proof of Theorem 2.1 is to find an "envelope" $m_1 \le k \le m_2$ in the spectrum of $A$ surrounding $k$, such that the eigenvalues within the envelope are relatively close. Define
$$m_1 = \operatorname{argmax}_{0\le j\le k}\{\sigma_j(A) \ge (1+2\epsilon)\,\sigma_{k+1}(A)\}; \quad m_2 = \operatorname{argmax}_{k\le j\le n}\{\sigma_j(A) \ge \sigma_k(A) - 2\epsilon\,\sigma_{k+1}(A)\},$$
where we let $\sigma_0(A) = \infty$ for convenience. Let $U_m$, $\hat U_m$ be bases of the top $m$-dimensional linear subspaces of $A$ and $\hat A$, respectively. Also denote by $U_{n-m}$ and $\hat U_{n-m}$ bases of the orthogonal complements of $U_m$ and $\hat U_m$. By the asymmetric Davis–Kahan inequality (Lemma C.1) and Weyl's inequality we can obtain the following result.

Lemma 3.1. If $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$ for $\epsilon \in (0,1)$ then $\|\hat U_{n-k}^\top U_{m_1}\|_2,\ \|\hat U_k^\top U_{n-m_2}\|_2 \le \epsilon$.

Let $\mathcal{U}_{m_1:m_2}$ be the linear subspace of $A$ associated with eigenvalues $\sigma_{m_1+1}(A),\dots,\sigma_{m_2}(A)$. Intuitively, we choose a $(k-m_1)$-dimensional linear subspace of $\mathcal{U}_{m_1:m_2}$ that is "most aligned" with the top-$k$ subspace $\hat{\mathcal{U}}_k$ of $\hat A$. Formally, define
$$\mathcal{W} = \operatorname{argmax}_{\dim(\mathcal{W}) = k-m_1,\ \mathcal{W} \subseteq \mathcal{U}_{m_1:m_2}} \sigma_{k-m_1}\big(W^\top \hat U_k\big).$$
$W$ is then an $n\times(k-m_1)$ matrix with orthonormal columns that corresponds to a basis of $\mathcal{W}$. $\mathcal{W}$ is carefully constructed so that it is closely aligned with $\hat{\mathcal{U}}_k$, yet still lies in $\mathcal{U}_{m_1:m_2}$. In particular, Lemma 3.2 shows that $\sin\angle(\mathcal{W}, \hat{\mathcal{U}}_k) = \|\hat U_{n-k}^\top W\|_2$ is upper bounded by $\epsilon$.

Lemma 3.2. If $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$ for $\epsilon \in (0,1)$ then $\|\hat U_{n-k}^\top W\|_2 \le \epsilon$.

Now define $\bar A = A_{m_1} + WW^\top A\,WW^\top$.
We use $\bar A$ as the "reference matrix" because we can decompose $\|\hat A_k - A\|_F$ as
$$\|\hat A_k - A\|_F \le \|A - \bar A\|_F + \|\hat A_k - \bar A\|_F \le \|A - \bar A\|_F + \sqrt{2k}\,\|\hat A_k - \bar A\|_2 \quad (9)$$
and bound each term on the right-hand side separately. Here the last inequality holds because both $\hat A_k$ and $\bar A$ have rank at most $k$. The following lemma bounds the first term.

Lemma 3.3. If $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$ for $\epsilon \in (0,1/4]$ then $\|A - \bar A\|_F \le (1+3\epsilon^2)\|A - A_k\|_F$.

The proof of this lemma relies on the Pythagorean theorem and the Poincaré separation theorem. Let $\mathcal{U}_{m_1:m_2}$ be the $(m_2-m_1)$-dimensional linear subspace such that $\mathcal{U}_{m_2} = \mathcal{U}_{m_1} \oplus \mathcal{U}_{m_1:m_2}$. Define $A_{m_1:m_2} = U_{m_1:m_2}\Sigma_{m_1:m_2}U_{m_1:m_2}^\top$, where $\Sigma_{m_1:m_2} = \mathrm{diag}(\sigma_{m_1+1}(A),\dots,\sigma_{m_2}(A))$ and $U_{m_1:m_2}$ is an orthonormal basis associated with $\mathcal{U}_{m_1:m_2}$. Applying the Pythagorean theorem (Lemma C.2), we can decompose
$$\|A - \bar A\|_F^2 = \|A - A_{m_2}\|_F^2 + \|A_{m_1:m_2}\|_F^2 - \|WW^\top A_{m_1:m_2}WW^\top\|_F^2.$$
Applying the Poincaré separation theorem (Lemma C.3) with $X = \Sigma_{m_1:m_2}$ and $P = U_{m_1:m_2}^\top W$, we have
$$\|W^\top A_{m_1:m_2}W\|_F^2 \ge \sum_{j=m_2-k+1}^{m_2-m_1}\sigma_j(A_{m_1:m_2})^2 = \sum_{j=m_1+m_2-k+1}^{m_2}\sigma_j(A)^2.$$
With some routine algebra we can then prove Lemma 3.3. To bound the second term of Eq. (9) we use the following lemma.

Lemma 3.4. If $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$ for $\epsilon \in (0,1/4]$ then $\|\hat A_k - \bar A\|_2 \le 102\,\epsilon^2\|A - A_k\|_2$.

The proof of Lemma 3.4 relies on the low-rankness of $\hat A_k$ and $\bar A$. Recall the definition that $\bar{\mathcal{U}} = \mathrm{Range}(\bar A)$ and $\bar{\mathcal{U}}^\perp = \mathrm{Null}(\bar A)$. Consider $\|v\|_2 = 1$ such that $|v^\top(\hat A_k - \bar A)v| = \|\hat A_k - \bar A\|_2$. Because $v$ maximizes $|v^\top(\hat A_k - \bar A)v|$ over all unit-length vectors, it must lie in the range of $\hat A_k - \bar A$, because otherwise the component outside the range would not contribute. Therefore, we can choose $v = v_1 + v_2$, where $v_1 \in \mathrm{Range}(\hat A_k) = \hat{\mathcal{U}}_k$ and $v_2 \in \mathrm{Range}(\bar A) = \bar{\mathcal{U}}$. Subsequently, we have that
$$v = \hat U_k \hat U_k^\top v + \bar U \bar U^\top \hat U_{n-k} \hat U_{n-k}^\top v \quad (10)$$
$$\phantom{v} = \bar U \bar U^\top v + \hat U_k \hat U_k^\top \bar U_\perp \bar U_\perp^\top v. \quad (11)$$
Consider the following decomposition:
$$|v^\top(\hat A_k - \bar A)v| \le |v^\top(\hat A - A)v| + |v^\top(\hat A_k - \hat A)v| + |v^\top(A - \bar A)v|.$$
The first term $|v^\top(\hat A - A)v|$ is trivially upper bounded by $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$. The second and third terms can be bounded by Weyl's inequality (Lemma C.4) and basic properties of $\bar A$ (Lemma A.3). See Section A for details.
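A numerical sanity check of Theorem 2.1 is straightforward: perturb a power-law PSD matrix by a symmetric matrix of spectral norm $\delta = \epsilon^2\sigma_{k+1}(A)$ and compare both sides of Eq. (2). Sizes, the decay rate, and $\epsilon$ below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, eps = 120, 4, 0.1

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.arange(1, n + 1) ** -1.0           # power-law spectrum
A = (Q * sigma) @ Q.T
sigma_k1 = sigma[k]                           # sigma_{k+1}(A) (0-indexed)

# Symmetric perturbation with spectral norm exactly delta = eps^2 * sigma_{k+1}(A).
E = rng.standard_normal((n, n))
E = E + E.T
E *= eps**2 * sigma_k1 / np.linalg.norm(E, 2)
A_hat = A + E

def trunc(M, k):
    """Best rank-k approximation of a symmetric matrix."""
    w, V = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(w))[:k]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

lhs = np.linalg.norm(trunc(A_hat, k) - A, "fro")   # ||Â_k - A||_F
tail_f = np.linalg.norm(A - trunc(A, k), "fro")    # ||A - A_k||_F
rhs = (1 + 3 * eps**2) * tail_f + 102 * eps**2 * np.sqrt(2 * k) * sigma_k1
assert lhs <= rhs
```

Since the perturbation is tiny relative to $\sigma_{k+1}(A)$, the left-hand side sits close to the oracle tail $\|A - A_k\|_F$, well inside the bound.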
4 Discussion

We mention two potential directions to further extend the results of this paper.

4.1 Model selection for general high-rank matrices

The validity of Theorem 2.1 depends on the condition $\|\hat A - A\|_2 \le \epsilon^2\sigma_{k+1}(A)$, which could be hard to verify if $\sigma_{k+1}(A)$ is unknown and difficult to estimate. Furthermore, for general high-rank matrices, the model selection problem of determining an appropriate (or even optimal) cut-off rank $k$ requires knowledge of the distribution of the entire spectrum of an unknown data matrix, which is even more challenging to obtain. One potential approach is to impose a parametric pattern on the decay of the eigenvalues (e.g., polynomial or exponential decay) and to estimate a small set of parameters (e.g., the degree of the polynomial) from the noisy observation $\hat A$. Afterwards, the optimal cut-off rank $k$ could be determined by a theoretical analysis, similar to the examples in Corollaries 2.1 and 2.2. Another possibility is to use repeated sampling techniques such as the bootstrap in a stochastic problem (e.g., matrix de-noising) to estimate the "bias" term $\|A - A_k\|_F$ for different $k$, as the variance term $\sqrt{k}\,\nu$ is known or easy to estimate.

4.2 Minimax rates for polynomial spectral decay

Consider the class of PSD matrices whose eigenvalues follow a polynomial (power-law) decay:
$$\Theta(\beta, n) = \{A \in \mathbb{R}^{n\times n} : A \succeq 0,\ \sigma_j(A) = j^{-\beta}\}.$$
We are interested in the following minimax rates for completing or de-noising matrices in $\Theta(\beta, n)$:

Question 1 (Completion of $\Theta(\beta, n)$). Fix $n \in \mathbb{N}$, $p \in (0,1)$ and define $N = pn^2$. For $M \in \Theta(\beta, n)$, let $\hat A_{ij} = M_{ij}$ with probability $p$ and $\hat A_{ij} = 0$ with probability $1-p$. Also let $\Lambda(\mu_0, n) = \{M \in \mathbb{R}^{n\times n} : n\|M\|_{\max} \le \mu_0\|M\|_F\}$ be the class of all non-spiky matrices. Determine
$$R_1(\mu_0, \beta, n, N) := \inf_{\hat A \mapsto \hat M}\ \sup_{M \in \Theta(\beta,n)\cap\Lambda(\mu_0,n)} \mathbb{E}\,\|\hat M - M\|_F^2.$$

Question 2 (De-noising of $\Theta(\beta, n)$). Fix $n \in \mathbb{N}$, $\nu > 0$ and let $\hat A = M + (\nu/\sqrt{n})Z$, where $Z$ is a symmetric matrix with i.i.d. standard Normal random variables on its upper triangle. Determine
$$R_2(\nu, \beta, n) := \inf_{\hat A \mapsto \hat M}\ \sup_{M \in \Theta(\beta,n)} \mathbb{E}\,\|\hat M - M\|_F^2.$$
Compared to existing settings for matrix completion and de-noising, we believe $\Theta(\beta, n)$ is a more natural matrix class: it allows general high-rank matrices, but also imposes sufficient spectral decay so that spectrum truncation algorithms yield significant benefits. Based on Corollary 2.1 and its matching lower bounds for a larger $\ell_q$ class [Negahban and Wainwright, 2012], we make the following conjecture:

Conjecture 4.1. For $\beta > 1/2$ and $\nu$ not too small, we conjecture that
$$R_1(\mu_0, \beta, n, N) \asymp C(\mu_0)\cdot\Big(\frac{n}{N}\Big)^{\frac{2\beta-1}{2\beta}} \quad\text{and}\quad R_2(\nu, \beta, n) \asymp \big(\nu^2\big)^{\frac{2\beta-1}{2\beta}},$$
where $C(\mu_0) > 0$ is a constant that depends only on $\mu_0$.

5 Acknowledgements

S.S.D. was supported by the ARPA-E Terra program. Y.W. and A.S. were supported by NSF CAREER grant IIS-1252412.

References

Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. Journal of the ACM, 54(2):9, 2007.
Zeyuan Allen-Zhu and Yuanzhi Li. Even faster SVD decomposition yet without agonizing pain. In Advances in Neural Information Processing Systems, pages 974–982, 2016.
David Anderson, Simon Du, Michael Mahoney, Christopher Melgaard, Kunming Wu, and Ming Gu. Spectral gap error bounds for improving CUR matrix decomposition and the Nyström method. In Artificial Intelligence and Statistics, pages 19–27, 2015.
Maria Florina Balcan, Simon S Du, Yining Wang, and Adams Wei Yu. An improved gap-dependency analysis of the noisy power method. In 29th Annual Conference on Learning Theory, pages 284–309, 2016.
Florentina Bunea and Luo Xiao. On the sample covariance matrix estimator of reduced effective rank population matrices, with applications to fPCA. Bernoulli, 21(2):1200–1230, 2015.
T Tony Cai and Harrison H Zhou. Optimal rates of convergence for sparse covariance matrix estimation. The Annals of Statistics, 40(5):2389–2420, 2012.
T Tony Cai, Cun-Hui Zhang, and Harrison H Zhou. Optimal rates of convergence for covariance matrix estimation. The Annals of Statistics, 38(4):2118–2144, 2010.
T Tony Cai, Zhao Ren, and Harrison H Zhou. Optimal rates of convergence for estimating Toeplitz covariance matrices. Probability Theory and Related Fields, 156(1-2):101–143, 2013.
T Tony Cai, Zhao Ren, and Harrison H Zhou. Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation. Electronic Journal of Statistics, 10(1):1–59, 2016.
Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
Emmanuel J Candes and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
Sourav Chatterjee et al. Matrix estimation by universal singular value thresholding. The Annals of Statistics, 43(1):177–214, 2015.
Yudong Chen, Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Matrix completion with column manipulation: Near-optimal sample-robustness-rank tradeoffs. IEEE Transactions on Information Theory, 62(1):503–526, 2016.
David Donoho and Matan Gavish. Minimax risk of matrix denoising by singular value thresholding. The Annals of Statistics, 42(6):2413–2440, 2014.
David L Donoho, Matan Gavish, and Andrea Montanari. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising. Proceedings of the National Academy of Sciences, 110(21):8405–8410, 2013.
Brian Eriksson, Laura Balzano, and Robert D Nowak. High-rank matrix completion. In AISTATS, pages 373–381, 2012.
Matan Gavish and David L Donoho. The optimal hard threshold for singular values is $4/\sqrt{3}$. IEEE Transactions on Information Theory, 60(8):5040–5053, 2014.
Moritz Hardt. Understanding alternating minimization for matrix completion. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 651–660. IEEE, 2014.
Moritz Hardt and Mary Wootters. Fast matrix completion without the condition number. In COLT, pages 638–678, 2014.
Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi.
Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, pages 665–674. ACM, 2013.
Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
Alois Kneip and Pascal Sarda. Factor models and variable selection in high-dimensional regression analysis. The Annals of Statistics, pages 2410–2447, 2011.
Vladimir Koltchinskii, Karim Lounici, and Alexandre B Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, pages 2302–2329, 2011.
Ziqi Liu, Yu-Xiang Wang, and Alexander Smola. Fast differentially private matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 171–178. ACM, 2015.
Lester Mackey, Michael I Jordan, Richard Y Chen, Brendan Farrell, and Joel A Tropp. Matrix concentration inequalities via the method of exchangeable pairs. The Annals of Probability, 42(3):906–945, 2014.
Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665–1697, 2012.
Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.
Stephen Tu, Ross Boczar, Max Simchowitz, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank solutions of linear matrix equations via Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
Aad W Van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.
Lingxiao Wang, Xiao Zhang, and Quanquan Gu. A unified computational and statistical framework for nonconvex low-rank matrix estimation. arXiv preprint arXiv:1610.05275, 2016.
Xinyang Yi, Dohyung Park, Yudong Chen, and Constantine Caramanis. Fast algorithms for robust PCA via gradient descent.
In Advances in Neural Information Processing Systems, pages 4152–4160, 2016.
Lijun Zhang, Tianbao Yang, Rong Jin, and Zhi-Hua Zhou. Analysis of nuclear norm regularization for full-rank matrix completion. arXiv preprint arXiv:1504.06817, 2015.
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

Wei Wen1, Cong Xu2, Feng Yan3, Chunpeng Wu1, Yandan Wang4, Yiran Chen1, Hai Li1
1Duke University, 2Hewlett Packard Labs, 3University of Nevada – Reno, 4University of Pittsburgh
1{wei.wen, chunpeng.wu, yiran.chen, hai.li}@duke.edu, 2cong.xu@hpe.com, 3fyan@unr.edu, 4yaw46@pitt.edu

Abstract

High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad, which uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.1

1 Introduction

The remarkable advances in deep learning are driven by data explosion and the increase of model size. The training of large-scale models with huge amounts of data is often carried out on distributed systems [1][2][3][4][5][6][7][8][9], where data parallelism is adopted to exploit the compute capability empowered by multiple workers [10]. Stochastic Gradient Descent (SGD) is usually selected as the optimization method because of its high computation efficiency. In realizing the data parallelism of SGD, model copies in computing workers are trained in parallel by applying different subsets of data.
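The data-parallel SGD loop just described can be sketched on a toy least-squares model; the worker count, shard size, and learning rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, d, lr = 4, 8, 0.2

w_true = rng.standard_normal(d)   # target model of the toy regression task
w = np.zeros(d)                   # every worker holds this same parameter copy

for step in range(200):
    grads = []
    for i in range(n_workers):
        X = rng.standard_normal((16, d))          # worker i's mini-batch shard
        y = X @ w_true                            # noiseless labels (toy setting)
        grads.append(X.T @ (X @ w - y) / len(y))  # local gradient on the shard
    g_avg = np.mean(grads, axis=0)                # "parameter server" averaging
    w = w - lr * g_avg                            # identical update on every copy

assert np.linalg.norm(w - w_true) < 1e-2
```

Because every copy applies the same averaged gradient, the copies stay synchronized; the communication cost of shipping `grads` around is exactly what TernGrad targets.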
A centralized parameter server performs gradient synchronization by collecting all gradients and averaging them to update parameters. The updated parameters are then sent back to the workers, i.e., parameter synchronization. Increasing the number of workers helps to reduce the computation time dramatically. However, as the scale of distributed systems grows, the extensive gradient and parameter synchronizations prolong the communication time and can even cancel out the savings in computation time [4][11][12]. A common approach to overcoming such a network bottleneck is asynchronous SGD [1][4][7][12][13][14], which continues computation by using stale values without waiting for the completion of synchronization. The inconsistency of parameters across computing workers, however, can degrade training accuracy and incur occasional divergence [15][16]. Moreover, its workload dynamics make the training nondeterministic and hard to debug. From the perspective of inference acceleration, sparse and quantized Deep Neural Networks (DNNs) have been widely studied, such as [17][18][19][20][21][22][23][24][25]. However, these methods generally aggravate the training effort. Studies on sparse logistic regression and Lasso optimization problems [4][12][26] took advantage of the sparsity inherent in the models and achieved remarkable speedup for distributed training. A more generic and important topic is how to accelerate the distributed training of dense models by utilizing sparsity and quantization techniques. For instance, Aji and Heafield [27] proposed to heuristically sparsify dense gradients by dropping off small values in order to reduce gradient communication. For the same purpose, quantizing gradients to low-precision values with smaller bit width has also been extensively studied [22][28][29][30].

1 https://github.com/wenwei202/terngrad
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Our work belongs to the category of gradient quantization, which is an approach orthogonal to sparsity methods. We propose TernGrad, which quantizes gradients to the ternary levels {−1, 0, 1} to reduce the overhead of gradient synchronization. Furthermore, we propose scaler sharing and parameter localization, which can replace parameter synchronization with a low-precision gradient pulling. Compared with previous works, our major contributions include: (1) we use ternary values for gradients to reduce communication; (2) we mathematically prove the convergence of TernGrad in general by proposing a statistical bound on gradients; (3) we propose layer-wise ternarizing and gradient clipping to move this bound closer toward the bound of standard SGD; these simple techniques successfully improve the convergence; (4) we build a performance model to evaluate the speed of training methods with compressed gradients, like TernGrad.

2 Related work

Gradient sparsification. Aji and Heafield [27] proposed a heuristic gradient sparsification method that truncated the smallest gradients and transmitted only the remaining large ones. The method greatly reduced the gradient communication and achieved a 22% speed gain on 4 GPUs for a neural machine translation task, without impacting the translation quality. An earlier study by Garg et al. [31] adopted a similar approach, but targeted sparsity recovery instead of training acceleration. Our proposed TernGrad is orthogonal to these sparsity-based methods.

Gradient quantization. DoReFa-Net [22], derived from AlexNet, reduced the bit widths of weights, activations and gradients to 1, 2 and 6, respectively. However, DoReFa-Net showed 9.8% accuracy loss, as it targeted acceleration on a single worker. S. Gupta et al. [30] successfully trained neural networks on the MNIST and CIFAR-10 datasets using 16-bit numerical precision for an energy-efficient hardware accelerator.
Our work, instead, aims to speed up distributed training by reducing the communicated gradients to three numerical levels {−1, 0, 1}. F. Seide et al. [28] applied 1-bit SGD to accelerate distributed training and empirically verified its effectiveness in speech applications. As the gradient quantization is conducted by columns, a floating-point scaler per column is required, so it cannot yield a speed benefit on convolutional neural networks [29]. Moreover, the "cold start" of the method [28] requires floating-point gradients to converge to a good initial point for the following 1-bit SGD. More importantly, it is unknown what conditions can guarantee its convergence. Comparably, our TernGrad can start DNN training from scratch, and we prove the conditions that guarantee the convergence of TernGrad. A. T. Suresh et al. [32] proposed stochastic rotated quantization of gradients and reduced the gradient precision to 4 bits for the MNIST and CIFAR datasets. However, TernGrad achieves lower precision on a larger dataset (e.g., ImageNet) and has more efficient computation for quantization in each computing node. A parallel work by D. Alistarh et al. [29] presented QSGD, which explores the trade-off between accuracy and gradient precision. The effectiveness of gradient quantization was justified and the convergence of QSGD was provably guaranteed. Compared to QSGD, which was developed simultaneously, our TernGrad shares the same concept but advances it in the following three aspects: (1) we prove convergence from the perspective of a statistical bound on gradients; the bound also explains why multiple quantization buckets are necessary in QSGD; (2) the bound is used to guide practice and inspires the techniques of layer-wise ternarizing and gradient clipping; (3) TernGrad using only 3-level gradients achieves a 0.92% top-1 accuracy improvement for AlexNet, while a 1.73% top-1 accuracy loss is observed in QSGD with 4 levels.
The accuracy loss in QSGD can be eliminated by paying the cost of increasing the precision to 4 bits (16 levels) and beyond.

3 Problem Formulation and Our Approach

3.1 Problem Formulation and TernGrad

Figure 1 formulates the distributed training problem of synchronous SGD using data parallelism. At iteration t, a mini-batch of training samples is split and fed into multiple workers (i ∈ {1, ..., N}). Worker i computes the gradients g(i)_t of the parameters w.r.t. its input samples z(i)_t. All gradients are first synchronized and averaged at the parameter server, and then sent back to update the workers. Note that the parameter server in most implementations [1][12] is used to preserve shared parameters, while here we utilize it in a slightly different way, maintaining shared gradients. In Figure 1, each worker keeps a copy of the parameters locally. We name this technique parameter localization. Parameter consistency among workers can be maintained by random initialization with an identical seed. Parameter localization changes the communication of parameters in floating-point form to the transfer of quantized gradients that require much lighter traffic. Note that our proposed TernGrad can be integrated with many settings like asynchronous SGD [1][4], even though the scope of this paper only covers the distributed SGD in Figure 1. Algorithm 1 formulates the t-th iteration of the TernGrad algorithm according to Figure 1. Most steps of TernGrad remain the same as in traditional distributed training, except that gradients must be quantized to ternary precision before being sent to the parameter server. More specifically, ternarize(·) aims to reduce the communication volume of gradients. It randomly quantizes the gradient g_t (the worker superscript is omitted; see footnote 2) to a ternary vector with values in {−1, 0, +1}. Formally, with a random binary vector b_t, g_t is ternarized as

g̃_t = ternarize(g_t) = s_t · sign(g_t) ∘ b_t,    (1)

where

s_t ≜ max(abs(g_t)) ≜ ||g_t||∞    (2)

is a scaler, e.g.
the maximum norm, which shrinks ±1 to a much smaller amplitude. ∘ is the Hadamard product. sign(·) and abs(·) return the sign and absolute value of each element, respectively. Given g_t, each element of b_t independently follows the Bernoulli distribution

P(b_tk = 1 | g_t) = |g_tk| / s_t,    P(b_tk = 0 | g_t) = 1 − |g_tk| / s_t,    (3)

where b_tk and g_tk are the k-th elements of b_t and g_t, respectively. This stochastic rounding, instead of deterministic rounding, is chosen both by our study and by QSGD [29], as stochastic rounding has an unbiased expectation and has been successfully studied for low-precision processing [20][30]. Theoretically, ternary gradients can reduce the worker-to-server traffic by a factor of at least 32/log2(3) = 20.18×. Even using 2 bits to encode a ternary gradient, the reduction factor is still 16×. In this work, we compare TernGrad with 32-bit gradients, considering that 32-bit is the default precision in modern deep learning frameworks. Although a lower precision (e.g., 16-bit) may be enough in some scenarios, that would not undervalue TernGrad. As aforementioned, parameter localization reduces server-to-worker traffic by pulling quantized gradients from servers. However, summing up ternary values in Σ_i g̃(i)_t produces more possible levels, and thereby the final averaged gradient ḡ_t is no longer ternary, as shown in Figure 2(d). This emerges as a critical issue when workers use different scalers s(i)_t. To minimize the number of levels, we propose a shared scaler

s_t = max({s(i)_t} : i = 1...N)    (4)

across all the workers. We name this technique scaler sharing. The sharing process has a small overhead of transferring 2N floating scalars. By integrating parameter localization and scaler sharing, the maximum number of levels in ḡ_t decreases to 2N + 1. As a result, the server-to-worker communication reduces by a factor of 32/log2(1 + 2N), unless N ≥ 2^30.
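Eqs. (1)–(4) can be sketched in a few lines of NumPy; this is an illustrative sketch (worker count and gradient sizes are made up), not the authors' released implementation:

```python
import numpy as np

def ternarize(g, s, rng):
    """Stochastic ternarization of Eqs. (1)-(3); output values lie in {-s, 0, +s}."""
    b = rng.random(g.shape) < np.abs(g) / s   # Bernoulli mask, P(b_k = 1) = |g_k|/s
    return s * np.sign(g) * b

rng = np.random.default_rng(0)
n_workers = 4
grads = [rng.standard_normal(256) for _ in range(n_workers)]

# Scaler sharing, Eq. (4): one scaler (the largest maximum norm) for all workers.
s_shared = max(np.max(np.abs(g)) for g in grads)
avg = np.mean([ternarize(g, s_shared, rng) for g in grads], axis=0)

# With a shared scaler, the averaged gradient takes at most 2N + 1 distinct levels.
levels = np.unique(np.round(avg / (s_shared / n_workers)))
assert len(levels) <= 2 * n_workers + 1
```

Each worker only ships a sign pattern plus one float, and the server-to-worker message stays confined to a small discrete alphabet, which is the source of the 32/log2(1 + 2N) reduction.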
Figure 1: Distributed SGD with data parallelism. (N workers push gradients g(i)_t to a centralized parameter server, pull the averaged gradient ḡ_t, and each applies the update w_{t+1} ← w_t − η · ḡ_t on its local parameter copy.)

Algorithm 1 TernGrad: distributed SGD training using ternary gradients.
Worker: i = 1, ..., N
1: Input z(i)_t, a part of a mini-batch of training samples z_t
2: Compute gradients g(i)_t under z(i)_t
3: Ternarize gradients to g̃(i)_t = ternarize(g(i)_t)
4: Push ternary g̃(i)_t to the server
5: Pull averaged gradients ḡ_t from the server
6: Update parameters w_{t+1} ← w_t − η · ḡ_t
Server:
7: Average ternary gradients ḡ_t = Σ_i g̃(i)_t / N

2 Here, the superscript of g_t is omitted for simplicity.

3.2 Convergence Analysis and Gradient Bound

We analyze the convergence of TernGrad in the framework of online learning systems. An online learning system adapts its parameter w to a sequence of observations to maximize performance. Each observation z is drawn from an unknown distribution, and a loss function Q(z, w) is used to measure the performance of the current system with parameter w and input z. The minimization target then is the loss expectation

C(w) ≜ E{Q(z, w)}.    (5)

In the General Online Gradient Algorithm (GOGA) [33], the parameter is updated at learning rate γ_t as

w_{t+1} = w_t − γ_t g_t = w_t − γ_t · ∇_w Q(z_t, w_t),    (6)

where

g ≜ ∇_w Q(z, w)    (7)

and the subscript t denotes observation step t. In GOGA, E{g} is the gradient of the minimization target in Eq. (5). According to Eq. (1), the parameter in TernGrad is updated as

w_{t+1} = w_t − γ_t (s_t · sign(g_t) ∘ b_t),    (8)

where s_t ≜ ||g_t||∞ is a random variable depending on z_t and w_t. As g_t is known for given z_t and w_t, Eq. (3) is equivalent to

P(b_tk = 1 | z_t, w_t) = |g_tk| / s_t,    P(b_tk = 0 | z_t, w_t) = 1 − |g_tk| / s_t.    (9)

At any given w_t, the expectation of the ternary gradient satisfies

E{s_t · sign(g_t) ∘ b_t} = E{s_t · sign(g_t) ∘ E{b_t | z_t}} = E{g_t} = ∇_w C(w_t),    (10)

which is an unbiased gradient of the minimization target in Eq. (5).
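The unbiasedness in Eq. (10) can be checked empirically by averaging many independent ternarizations of one fixed gradient; the gradient size and number of draws below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(64)     # a fixed gradient vector
s = np.max(np.abs(g))           # s_t = ||g||_inf, Eq. (2)

# Average T independent ternarizations; by Eq. (10) the mean approaches g.
T = 20000
acc = np.zeros_like(g)
for _ in range(T):
    b = rng.random(g.shape) < np.abs(g) / s   # Bernoulli mask of Eq. (3)
    acc += s * np.sign(g) * b                 # one ternary draw, Eq. (1)
est = acc / T

assert np.max(np.abs(est - g)) < 0.1
```

Each coordinate of a single draw is a coarse, high-variance surrogate for the true gradient, but the expectation is exact, which is what the convergence proof exploits.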
The convergence analysis of TernGrad is adapted from the convergence proof of GOGA presented in [33]. We adopt two assumptions that were used to analyze the convergence of standard GOGA in [33]. Unless stated otherwise, vectors here denote column vectors.

Assumption 1. C(w) has a single minimum w* and the gradient −∇_w C(w) always points toward w*, i.e.,
∀ε > 0,  inf_{||w − w*||² > ε} (w − w*)^T ∇_w C(w) > 0. (11)

Convex functions form a subset of those satisfying Assumption 1, and we can easily find non-convex functions satisfying it.

Assumption 2. The learning rate γ_t is positive and constrained as
Σ_{t=0}^{+∞} γ_t² < +∞,  Σ_{t=0}^{+∞} γ_t = +∞, (12)
which ensures that γ_t decreases neither too fast nor too slowly.

We define the squared distance between the current parameters w_t and the minimum w* as
h_t ≜ ||w_t − w*||², (13)
where || · || is the ℓ2 norm. We also define the set of all random variables before step t as
X_t ≜ (z_{1...t−1}, b_{1...t−1}). (14)

Under Assumptions 1 and 2, using a Lyapunov process and the Quasi-Martingale convergence theorem, L. Bottou [33] proved

Lemma 1. If ∃ A, B > 0 s.t.
E{h_{t+1} − (1 + γ_t² B) h_t | X_t} ≤ −2γ_t (w_t − w*)^T ∇_w C(w_t) + γ_t² A, (15)
then w_t converges almost surely toward the minimum w*, i.e., P(lim_{t→+∞} w_t = w*) = 1.

We further make an assumption on the gradient:

Assumption 3 (Gradient Bound). The gradient g is bounded as
E{||g||_∞ · ||g||₁} ≤ A + B ||w − w*||², (16)
where A, B > 0 and || · ||₁ is the ℓ1 norm.

With Assumption 3 and Lemma 1, we prove Theorem 1 (in Supplementary Material):

Theorem 1. When online learning systems update as
w_{t+1} = w_t − γ_t (s_t · sign(g_t) ◦ b_t) (17)
using stochastic ternary gradients, they converge almost surely toward the minimum w*, i.e., P(lim_{t→+∞} w_t = w*) = 1.

Compared with the gradient bound of standard GOGA [33],
E{||g||²} ≤ A + B ||w − w*||², (18)
the bound in Assumption 3 is stronger because
||g||_∞ · ||g||₁ ≥ ||g||². (19)
We propose layer-wise ternarizing and gradient clipping to bring the two bounds closer, as shall be explained in Section 3.3.
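The relation ||g||_∞ · ||g||₁ ≥ ||g||² in Eq. (19) follows from bounding each |g_k|² ≤ ||g||_∞ |g_k| and summing over k; a quick numerical sanity check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    g = rng.normal(size=rng.integers(1, 50))
    lhs = np.max(np.abs(g)) * np.sum(np.abs(g))   # ||g||_inf * ||g||_1
    rhs = np.sum(g ** 2)                           # ||g||^2 (squared l2 norm)
    assert lhs >= rhs - 1e-12
```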
A side benefit of our work is that, by following a similar proof procedure, we can prove the convergence of GOGA when Gaussian noise N(0, σ²) is added to the gradients [34], under the gradient bound
E{||g||²} ≤ A + B ||w − w*||² − σ². (20)
Although this bound is also stronger, Gaussian noise encourages active exploration of the parameter space and improves accuracy, as empirically studied in [34]. Similarly, the randomness of ternary gradients also encourages exploration and improves accuracy for some models, as shall be presented in Section 4.

3.3 Feasibility Considerations

The gradient bound of TernGrad in Assumption 3 is stronger than the bound in standard GOGA. Pushing the two bounds closer can improve the convergence of TernGrad. In Assumption 3, ||g||_∞ is the maximum absolute value of all the gradients in the DNN, so in a large DNN, ||g||_∞ can be much larger than most gradients, which makes the bound in TernGrad much stronger. Considering this, we propose layer-wise ternarizing and gradient clipping to reduce ||g||_∞ and thereby shrink the gap between the two bounds.

Layer-wise ternarizing is proposed based on the observation that the range of gradients in each layer changes as gradients are back-propagated. Instead of adopting a large global maximum scaler, we independently ternarize the gradients in each layer using layer-wise scalers.

[Figure 2: Histograms of (a) original floating gradients, (b) clipped gradients, (c) ternary gradients and (d) final averaged gradients, visualized with TensorBoard. The DNN is AlexNet distributed on two workers; the vertical axis is the training iteration. As examples, the top row visualizes the third convolutional layer (conv) and the bottom row the first fully-connected layer (fc).]

More specifically, we separately ternarize the gradients of biases and weights by using Eq.
(1), where g_t can be the gradients of either the biases or the weights in each layer. To approach the standard bound more closely, we could split the gradients into more buckets and ternarize each bucket independently, as D. Alistarh et al. [29] do. However, this introduces more floating scalers and increases communication. When the bucket size is one, it degenerates to floating gradients. Layer-wise ternarizing can shrink the bound gap resulting from the dynamic ranges of the gradients across layers. However, the dynamic range within a layer remains a problem. We propose gradient clipping, which limits the magnitude of each gradient g_i in g as
f(g_i) = g_i,              if |g_i| ≤ cσ,
f(g_i) = sign(g_i) · cσ,   if |g_i| > cσ, (21)
where σ is the standard deviation of the gradients in g. In distributed training, gradient clipping is applied on every worker before ternarizing. c is a hyper-parameter, but we cross-validated it only once and use the resulting constant in all our experiments. Specifically, we used a CNN [35] trained on CIFAR-10 by momentum SGD with a staircase learning rate and obtained the optimum c = 2.5. Assuming the distribution of gradients is close to a Gaussian distribution, as shown in Figure 2(a), very few gradients fall outside [−2.5σ, 2.5σ]. Clipping these gradients, as in Figure 2(b), significantly reduces the scaler while only slightly changing the length and direction of the original g. Numerical analysis shows that gradient clipping with c = 2.5 changes the length of g by only 1.0%–1.5% and its direction by only 2°–3°. In our experiments, c = 2.5 remains valid across multiple databases (MNIST, CIFAR-10 and ImageNet), various network structures (LeNet, CifarNet, AlexNet, GoogLeNet, etc.) and training schemes (momentum SGD, vanilla SGD, Adam, etc.). The effectiveness of layer-wise ternarizing and gradient clipping can also be explained as follows. When the scaler s_t in Eq. (1) and Eq.
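Gradient clipping per Eq. (21) can be written compactly (a sketch; the function name `clip_gradients` is our assumption, while c = 2.5 is the cross-validated value from the text):

```python
import numpy as np

def clip_gradients(g, c=2.5):
    """Clip each gradient to [-c*sigma, c*sigma], Eq. (21)."""
    sigma = np.std(g)               # standard deviation of gradients in g
    return np.clip(g, -c * sigma, c * sigma)
```

Applied per worker before `ternarize`, this caps the scaler at s_t ≤ c·σ, which is exactly how the ||g||_∞ term in Assumption 3 is reduced.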
(3) is very large, most gradients have a high probability of being ternarized to zero, leaving only a few gradients at large-magnitude values. This scenario yields a severe parameter update pattern: most parameters remain unchanged while the others are likely to overshoot, introducing large training variance. Our experiments on AlexNet show that by applying both layer-wise ternarizing and gradient clipping, TernGrad can converge to the same accuracy as standard SGD. Removing either of the two techniques results in accuracy degradation, e.g., a 3% top-1 accuracy loss without gradient clipping, as we shall show in Table 2.

4 Experiments

We first investigate the convergence of TernGrad under various training schemes on relatively small databases in Section 4.1. Then the scalability of TernGrad to large-scale distributed deep learning is explored and discussed in Section 4.2. The experiments are performed in TensorFlow [2]. We maintain an exponential moving average of the parameters with an exponential decay of 0.9999 [15], and the accuracy is evaluated using the final averaged parameters; this gives slightly better accuracy in our experiments. For a fair comparison, in each pair of comparative experiments using either floating or ternary gradients, all other training hyper-parameters are the same unless differences are explicitly pointed out. When SGD with momentum is adopted, a momentum value of 0.9 is used. When polynomial decay is applied to the learning rate (LR), a power of 0.5 is used to decay the LR from the base LR to zero.

4.1 Integrating with Various Training Schemes

We study the convergence of TernGrad using LeNet on MNIST and a ConvNet [35] (named CifarNet) on CIFAR-10. LeNet is trained without data augmentation. While training CifarNet, images

Figure 3: Accuracy vs.
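The exponential moving average used for evaluation follows the standard update rule (a minimal sketch; the function name `ema_update` is our assumption, with decay 0.9999 as stated above):

```python
def ema_update(shadow, param, decay=0.9999):
    # shadow <- decay * shadow + (1 - decay) * param
    return decay * shadow + (1.0 - decay) * param
```

At test time the shadow (averaged) parameters replace the raw parameters, smoothing out the extra variance introduced by ternary gradients.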
worker number for baseline and TernGrad, trained with (a) momentum SGD or (b) vanilla SGD. In all experiments, the total mini-batch size is 64 and the maximum iteration is 10K.

Table 1: Results of TernGrad on CifarNet.
SGD   base LR  total mini-batch size  iterations  gradients  workers  accuracy
Adam  0.0002   128                    300K        floating   2        86.56%
                                      TernGrad    2          85.64% (-0.92%)
Adam  0.0002   2048                   18.75K      floating   16       83.19%
                                      TernGrad    16         82.80% (-0.39%)

are randomly cropped to 24 × 24 images and mirrored. Brightness and contrast are also randomly adjusted. During testing of CifarNet, only the center crop is used. Our experiments cover the scope of SGD optimizers over vanilla SGD, SGD with momentum [36] and Adam [37]. Figure 3 shows the results for LeNet. All runs are trained using polynomial LR decay with a weight decay of 0.0005. The base learning rates of momentum SGD and vanilla SGD are 0.01 and 0.1, respectively. Given the total mini-batch size M and the worker number N, the mini-batch size per worker is M/N. Without explicit mention, mini-batch size refers to the total mini-batch size in this work. Figure 3 shows that TernGrad can converge to similar accuracy within the same number of iterations, using either momentum SGD or vanilla SGD. The maximum accuracy gain is 0.15% and the maximum accuracy loss is 0.22%. Very importantly, the communication time per iteration can be reduced. The figure also shows that TernGrad generalizes well to distributed training with large N: no degradation is observed even for N = 64, which corresponds to one training sample per iteration per worker. Table 1 summarizes the results for CifarNet, where all trainings terminate after the same number of epochs. Adam is used for training. Instead of keeping the total mini-batch size unchanged, we maintain the mini-batch size per worker; therefore, the total mini-batch size increases linearly as the number of workers grows. Though the base learning rate of 0.0002 seems small, it achieves better accuracy for the baseline than larger values such as 0.001.
In each pair of experiments, TernGrad converges to an accuracy level with less than 1% degradation. Accuracy degrades under a large mini-batch size in both the baseline and TernGrad, because parameters are updated less frequently and large-batch training tends to converge to poorer sharp minima [38]. However, the noise inherent in TernGrad can help convergence to better flat minimizers [38], which could explain the smaller accuracy gap between the baseline and TernGrad at mini-batch size 2048. In our AlexNet experiments in Section 4.2, TernGrad even improves accuracy in the large-batch scenario. This attribute is beneficial for distributed training, where a large mini-batch size is usually required.

4.2 Scaling to Large-scale Deep Learning

We also evaluate TernGrad with AlexNet and GoogLeNet trained on ImageNet. Applying TernGrad to large-scale DNNs is more challenging: simply replacing the floating gradients with ternary gradients while keeping other hyper-parameters unchanged may result in some accuracy loss. However, we are able to train large-scale DNNs with TernGrad successfully after making some or all of the following changes: (1) decreasing the dropout ratio to keep more neurons; (2) using a smaller weight decay; and (3) disabling ternarizing in the last classification layer. Dropout regularizes DNNs by adding randomness, while TernGrad also introduces randomness; thus, dropping fewer neurons helps avoid over-randomization. Similarly, as the randomness of TernGrad introduces regularization, a smaller weight decay may be adopted. We suggest not applying ternarizing to the last layer, because the one-hot encoding of labels generates a skewed distribution of gradients, for which the symmetric ternary encoding {−1, 0, 1} is not optimal. Though asymmetric ternary levels could be an option, we decide to stick to floating gradients in the last layer for simplicity.
The overhead of communicating these floating gradients is small, as the last layer occupies only a small percentage of the total parameters, e.g., 6.7% in AlexNet and 3.99% in ResNet-152 [39]. All DNNs are trained by momentum SGD with Batch Normalization [40] on convolutional layers. AlexNet is trained with the hyper-parameters and data augmentation depicted in Caffe. GoogLeNet is trained with polynomial LR decay and the data augmentation in [41]. Our implementation of GoogLeNet does not utilize any auxiliary classifiers, that is, the loss from the last softmax layer is the total loss. More training hyper-parameters are reported in the corresponding tables and the published source code. Validation accuracy is evaluated using only the central crops of images. The results of AlexNet are shown in Table 2. The mini-batch size per worker is fixed at 128. For fast development, all DNNs are trained through the same number of epochs of images. In this setting, when there are more workers, the number of iterations becomes smaller and parameters are updated less frequently. To overcome this problem, we increase the learning rate in the large-batch scenario [10].

Table 2: Accuracy comparison for AlexNet.
base LR  mini-batch size  workers  iterations  gradients          weight decay  DR†   top-1    top-5
0.01     256              2        370K        floating           0.0005        0.5   57.33%   80.56%
                                               TernGrad           0.0005        0.2   57.61%   80.47%
                                               TernGrad-noclip‡   0.0005        0.2   54.63%   78.16%
0.02     512              4        185K        floating           0.0005        0.5   57.32%   80.73%
                                               TernGrad           0.0005        0.2   57.28%   80.23%
0.04     1024             8        92.5K       floating           0.0005        0.5   56.62%   80.28%
                                               TernGrad           0.0005        0.2   57.54%   80.25%
† DR: dropout ratio, the ratio of dropped neurons. ‡ TernGrad without gradient clipping.

Table 3: Accuracy comparison for GoogLeNet.
base LR  mini-batch size  workers  iterations  gradients  weight decay  DR    top-5
0.04     128              2        600K        floating   4e-5          0.2   88.30%
                                               TernGrad   1e-5          0.08  86.77%
0.08     256              4        300K        floating   4e-5          0.2   87.82%
                                               TernGrad   1e-5          0.08  85.96%
0.10     512              8        300K        floating   4e-5          0.2   89.00%
                                               TernGrad   2e-5          0.08  86.47%
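The learning-rate scaling in Table 2 grows the base LR linearly with the total mini-batch size (0.01 at 256, 0.02 at 512, 0.04 at 1024); a minimal sketch of this rule (the function name `scaled_base_lr` is our assumption):

```python
def scaled_base_lr(reference_lr, reference_batch, total_batch):
    # Linear scaling: keep lr / mini-batch-size constant across worker counts.
    return reference_lr * total_batch / reference_batch

# Reproduces the base LRs in Table 2:
assert abs(scaled_base_lr(0.01, 256, 512) - 0.02) < 1e-12
assert abs(scaled_base_lr(0.01, 256, 1024) - 0.04) < 1e-12
```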
Using this scheme, SGD with floating gradients successfully trains AlexNet to similar accuracy for mini-batch sizes of 256 and 512. However, at mini-batch size 1024, the top-1 accuracy drops by 0.71%, for the same reason we point out in Section 4.1. TernGrad converges to approximately the same accuracy levels regardless of mini-batch size. Notably, it improves the top-1 accuracy by 0.92% at mini-batch size 1024, because its inherent randomness encourages escaping from poor sharp minima [34][38]. Figure 4 plots training details vs. iteration at mini-batch size 512. Figure 4(a) shows that the convergence curve of TernGrad matches the baseline's well, demonstrating the effectiveness of TernGrad. The training efficiency can be further improved by reducing communication time, as discussed in Section 5. The training data loss in Figure 4(b) shows that TernGrad converges to a slightly lower level, which further demonstrates the capability of TernGrad to minimize the target function even with ternary gradients. The smaller dropout ratio in TernGrad can be another reason for the lower loss. Figure 4(c) illustrates that, on average, 71.32% of the gradients of a fully-connected layer (fc6) are ternarized to zeros. Finally, we summarize the results of GoogLeNet in Table 3. On average, the accuracy loss is less than 2%. For TernGrad, we adopted all the hyper-parameters (except the dropout ratio and weight decay) that were well tuned for the baseline [42]. Tuning these hyper-parameters specifically for TernGrad could further optimize it and obtain higher accuracy.

5 Performance Model and Discussion

Our proposed TernGrad requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time.
Moreover, our experiments in Section 4 demonstrate that within the same number of iterations, TernGrad can converge to approximately the same accuracy as its corresponding baseline. Consequently, a dramatic throughput improvement in distributed DNN training is expected. Due to resource and time constraints, unfortunately, we were not able to perform the training of more DNN models, such as VggNet-A [43], or distributed training beyond 8 workers. We plan to continue these experiments in future work.

[Figure 4: AlexNet trained on 4 workers with mini-batch size 512: (a) top-1 validation accuracy, (b) training data loss and (c) sparsity of gradients in the first fully-connected layer (fc6) vs. iteration.]

[Figure 5: Training throughput on two different GPU clusters: (a) a 128-node GPU cluster with 1 Gbps Ethernet, where each node has 4 NVIDIA GTX 1080 GPUs and one PCI switch; (b) a 128-node GPU cluster with 100 Gbps InfiniBand connections, where each node has 4 NVIDIA Tesla P100 GPUs connected via NVLink. Curves compare FP32 vs. TernGrad for AlexNet, GoogLeNet and VggNet-A over 1–512 GPUs. The mini-batch size per GPU for AlexNet, GoogLeNet and VggNet-A is 128, 64 and 32, respectively.]
We opt to use a performance model to conduct the scalability analysis of DNN models when utilizing up to 512 GPUs, with and without TernGrad. Three neural network models (AlexNet, GoogLeNet and VggNet-A) are investigated. In the discussion of the performance model, performance refers to training speed. Here, we extend the performance model initially developed for CPU-based deep learning systems [44] to estimate the performance of distributed GPUs/machines. The key idea is to combine lightweight profiling on a single machine with analytical modeling for accurate performance estimation. In the interest of space, please refer to the Supplementary Material for details of the performance model. Figure 5 presents the training throughput on two different GPU clusters. Our results show that TernGrad effectively increases the training throughput for the three DNNs. The speedup depends on the communication-to-computation ratio of the DNN, the number of GPUs, and the communication bandwidth. DNNs with larger communication-to-computation ratios (e.g., AlexNet and VggNet-A) benefit more from TernGrad than those with smaller ratios (e.g., GoogLeNet). Even on a very high-end HPC system with InfiniBand and NVLink, TernGrad is still able to double the training speed of VggNet-A on 128 nodes, as shown in Figure 5(b). Moreover, TernGrad becomes more efficient as the bandwidth decreases, such as with the 1 Gbps Ethernet and PCI switch in Figure 5(a), where TernGrad achieves a 3.04× training speedup for AlexNet on 8 GPUs.

Acknowledgments

This work was supported in part by NSF CCF-1744082 and DOE SC0017030. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DOE, or their contractors. We thank Ali Taylan Cemgil at Bogazici University for valuable suggestions on this work.
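A toy throughput model can illustrate why networks with larger communication-to-computation ratios benefit more from TernGrad (a purely illustrative sketch, not the model of [44]; the parameter values and the assumption that computation and communication do not overlap are ours):

```python
def images_per_sec(batch, compute_s, params_m, bits, bandwidth_gbps):
    """Toy per-worker throughput: compute time plus gradient transfer time."""
    comm_s = params_m * 1e6 * bits / (bandwidth_gbps * 1e9)
    return batch / (compute_s + comm_s)

# AlexNet-like setting: ~60M parameters, batch 128,
# hypothetical 0.1 s compute time, 1 Gbps link.
fp32 = images_per_sec(128, 0.1, 60, 32, 1.0)
tern = images_per_sec(128, 0.1, 60, 2, 1.0)   # ternary gradients, 2-bit encoding
```

In this toy setting the communication term shrinks from 1.92 s to 0.12 s per iteration, giving roughly a 9× throughput gain; with higher bandwidth the compute term dominates and the benefit shrinks, matching the trend in Figure 5.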
References
[1] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012.
[2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[3] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In International Conference on Machine Learning, pages 1337–1345, 2013.
[4] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011.
[5] Trishul M. Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In OSDI, volume 14, pages 571–582, 2014.
[6] Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, and Yaoliang Yu. Petuum: A new platform for distributed machine learning on big data. IEEE Transactions on Big Data, 1(2):49–67, 2015.
[7] Philipp Moritz, Robert Nishihara, Ion Stoica, and Michael I. Jordan. SparkNet: Training deep networks in Spark. arXiv preprint arXiv:1511.06051, 2015.
[8] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
[9] Sixin Zhang, Anna E. Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD.
In Advances in Neural Information Processing Systems, pages 685–693, 2015.
[10] Mu Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Carnegie Mellon University, 2017.
[11] Mu Li, David G. Andersen, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In OSDI, volume 14, pages 583–598, 2014.
[12] Mu Li, David G. Andersen, Alexander J. Smola, and Kai Yu. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems, pages 19–27, 2014.
[13] Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Greg Ganger, and Eric P. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems, pages 1223–1231, 2013.
[14] Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595–2603, 2010.
[15] Xinghao Pan, Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1702.05800, 2017.
[16] Wei Zhang, Suyog Gupta, Xiangru Lian, and Ji Liu. Staleness-aware async-SGD for distributed deep learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pages 2350–2356. AAAI Press, 2016. ISBN 978-1-57735-770-4. URL http://dl.acm.org/citation.cfm?id=3060832.3060950.
[17] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[18] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks.
In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
[19] J. Park, S. Li, W. Wen, P. T. P. Tang, H. Li, Y. Chen, and P. Dubey. Faster CNNs with direct sparse convolutions and guided pruning. In International Conference on Learning Representations (ICLR), 2017.
[20] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems, pages 4107–4115, 2016.
[21] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[22] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
[23] Wei Wen, Yuxiong He, Samyam Rajbhandari, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, and Hai Li. Learning intrinsic sparse structures within long short-term memory. arXiv preprint arXiv:1709.05027, 2017.
[24] Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, and Yoshua Bengio. Recurrent neural networks with limited numerical precision. arXiv preprint arXiv:1608.06902, 2016.
[25] Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
[26] Joseph K. Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel coordinate descent for L1-regularized loss minimization. arXiv preprint arXiv:1105.5379, 2011.
[27] Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. arXiv preprint arXiv:1704.05021, 2017.
[28] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Interspeech, pages 1058–1062, 2014.
[29] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic.
QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems, pages 1707–1718, 2017.
[30] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, pages 1737–1746, 2015.
[31] Rahul Garg and Rohit Khandekar. Gradient descent with sparsification: An iterative algorithm for sparse recovery with restricted isometry property. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 337–344. ACM, 2009.
[32] Ananda Theertha Suresh, Felix X. Yu, H. Brendan McMahan, and Sanjiv Kumar. Distributed mean estimation with limited communication. arXiv preprint arXiv:1611.00429, 2016.
[33] Léon Bottou. Online learning and stochastic approximations. On-line Learning in Neural Networks, 17(9):142, 1998.
[34] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
[35] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[36] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
[37] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[38] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
[39] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[40] Sergey Ioffe and Christian Szegedy.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[41] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[42] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2015.
[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[44] Feng Yan, Olatunji Ruwase, Yuxiong He, and Trishul M. Chilimbi. Performance modeling and scalability optimization of distributed deep learning systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1355–1364, 2015. doi: 10.1145/2783258.2783270. URL http://doi.acm.org/10.1145/2783258.2783270.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter
LIT AI Lab & Institute of Bioinformatics, Johannes Kepler University Linz, A-4040 Linz, Austria
{mhe,ramsauer,unterthiner,nessler,hochreit}@bioinf.jku.at

Abstract

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the 'Fréchet Inception Distance' (FID), which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP), outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.

1 Introduction

Generative adversarial networks (GANs) [16] have achieved outstanding results in generating realistic images [42, 31, 25, 1, 4] and producing text [21]. GANs can learn complex generative models for which maximum likelihood or variational approximations are infeasible. Instead of the likelihood, a discriminator network serves as the objective for the generative model, that is, the generator.
GAN learning is a game between the generator, which constructs synthetic data from random variables, and the discriminator, which separates synthetic data from real-world data. The generator's goal is to construct data in such a way that the discriminator cannot tell it apart from real-world data. Thus, the discriminator tries to minimize the synthetic-real discrimination error while the generator tries to maximize this error. Since training GANs is a game and its solution is a Nash equilibrium, gradient descent may fail to converge [44, 16, 18]. Only local Nash equilibria are found, because gradient descent is a local optimization method. If there exists a local neighborhood around a point in parameter space where neither the generator nor the discriminator can unilaterally decrease its respective loss, then we call this point a local Nash equilibrium. Characterizing the convergence properties of training general GANs is still an open challenge [17, 18]. For special GAN variants, convergence can be proved under certain assumptions [34, 20, 46].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

A
Instead, we have shown that provided that the biased asymptotically uniformly bounded, the iterates return ntraction region” infinitely often. In this example, we that αs(n) = β(i,j)(n) and are uniformly bounded by a d positive value. We also assume that ζs(n) ∼N (0, 1) j)(n) ∼N (0, 1), for all s and (i, j). plot the iterates (using the relative distance to the points) in Fig. 4, which is further “zoomed in” in It can be observed from Fig. 4 that when the upperon the {αs, β(i,j)} are small, the iterates return to borhood of the optimal solution. However, when the on errors are large, the recurrent behavior of the may not occur, and the iterates may diverge. This rates the theoretical analysis. We can further observe g. 5 that the smaller the upper-bound is, the smaller the ction region” Aη becomes, indicating that the iterates e “closer” to the optimal points. 100 101 102 103 104 105 0 0.2 0.4 Iteration Fig. 4. Convergence under noisy feedback (the biased case). 101 102 103 104 105 0 0.1 0.2 0.3 0.4 0.5 0.6 Iteration ||x(n)−x*|| α = β = [0.05, 0.05, 0.05, 0.05] α = β = [0.5, 0.5, 0.5, 0.5] α = β = [1, 1, 1, 1] α = β = [5, 5, 5, 5] Fig. 5. “Zoomed-in” convergence behavior of the iterates in Figure 4. V. STOCHASTIC STABILITY OF TWO TIME-SCALE ALGORITHM UNDER NOISY FEEDBACK In the previous sections, we have applied the dual decomposition method to Problem (1) and devised the primal-dual algorithm, which is a single time-scale algorithm. As noted in Section I, there are many other decomposition methods. In particular, the primal decomposition method is a useful machinery for problem with coupled variables [31]; and when some of the variables are fixed, the rest of the problem may decouple into several subproblems. This naturally yields multiple time-scale algorithms. 
It is also of great interest to examine the stability of the multiple time-scale algorithms in the presence of noisy feedback, and compare with the single time-scale algorithms, in terms of complexity and robustness. To get a more concrete sense of the two time-scale algorithms based on primal decomposition, we consider the following NUM problem: Ξ2 : maximize {ms≤xs≤Ms, p} P s Us (xs) subject to P s:l∈L(s) xs ≤cl, ∀l cl = hl(p), ∀l p ∈H, (39) where the link capacities {cl} are functions of specific MAC parameters p (for instance, p can be transmission probabilities 10 Figure 1: Left: Original vs. TTUR GAN training on CelebA. Right: Figure from Zhang 2007 [50] which shows the distance of the parameter from the optimum for a one time-scale update of a 4 node network flow problem. When the upper bounds on the errors (α, β) are small, the iterates oscillate and repeatedly return to a neighborhood of the optimal solution (cf. Supplement Section 2.3). However, when the upper bounds on the errors are large, the iterates typically diverge. prerequisit for many convergence proofs is local stability [30] which was shown for GANs by Nagarajan and Kolter [39] for a min-max GAN setting. However, Nagarajan and Kolter require for their proof either rather strong and unrealistic assumptions or a restriction to a linear discriminator. Recent convergence proofs for GANs hold for expectations over training samples or for the number of examples going to infinity [32, 38, 35, 2], thus do not consider mini-batch learning which leads to a stochastic gradient [47, 23, 36, 33]. Recently actor-critic learning has been analyzed using stochastic approximation. Prasad et al. [41] showed that a two time-scale update rule ensures that training reaches a stationary local Nash equilibrium if the critic learns faster than the actor. Convergence was proved via an ordinary differential equation (ODE), whose stable limit points coincide with stationary local Nash equilibria. 
We adopt this approach for GANs and prove that GANs also converge to a local Nash equilibrium when trained by a two time-scale update rule (TTUR), i.e., when the discriminator and the generator have separate learning rates. This also leads to better results in experiments. The main premise is that the discriminator converges to a local minimum when the generator is fixed. If the generator changes slowly enough, then the discriminator still converges, since the generator perturbations are small. Besides ensuring convergence, the performance may also improve, since the discriminator must first learn new patterns before they are transferred to the generator. In contrast, an overly fast generator drives the discriminator steadily into new regions without capturing its gathered information. In recent GAN implementations, the discriminator often learned faster than the generator. A new objective slowed down the generator to prevent it from overtraining on the current discriminator [44]. The Wasserstein GAN algorithm uses more update steps for the discriminator than for the generator [1]. We compare TTUR and standard GAN training. The left panel of Fig. 1 shows a stochastic gradient example on CelebA for original GAN training (orig), which often leads to oscillations, and for TTUR. The right panel shows an example of a 4 node network flow problem of Zhang et al. [50]. The distance between the actual parameter and its optimum for a one time-scale update rule is shown across iterates. When the upper bounds on the errors are small, the iterates return to a neighborhood of the optimal solution, while for large errors the iterates may diverge (see also Supplement Section 2.3).
Our novel contributions in this paper are: (i) the two time-scale update rule for GANs, (ii) the proof that GANs trained with TTUR converge to a stationary local Nash equilibrium, (iii) the description of Adam as heavy ball with friction and the resulting second order differential equation, (iv) the convergence of GANs trained with TTUR and Adam to a stationary local Nash equilibrium, and (v) the “Fréchet Inception Distance” (FID) to evaluate GANs, which is more consistent than the Inception Score.

Two Time-Scale Update Rule for GANs

We consider a discriminator D(.; w) with parameter vector w and a generator G(.; θ) with parameter vector θ. Learning is based on a stochastic gradient g̃(θ, w) of the discriminator's loss function L_D and a stochastic gradient h̃(θ, w) of the generator's loss function L_G. The loss functions L_D and L_G can be the original ones as introduced in Goodfellow et al. [16], their improved versions [18], or recently proposed losses for GANs like the Wasserstein GAN [1]. Our setting is not restricted to min-max GANs, but is also valid for all other, more general GANs for which the discriminator's loss function L_D is not necessarily related to the generator's loss function L_G. The gradients g̃(θ, w) and h̃(θ, w) are stochastic, since they use mini-batches of m real world samples x^{(i)}, 1 ≤ i ≤ m, and m synthetic samples z^{(i)}, 1 ≤ i ≤ m, which are randomly chosen. If the true gradients are g(θ, w) = ∇_w L_D and h(θ, w) = ∇_θ L_G, then we can define g̃(θ, w) = g(θ, w) + M^{(w)} and h̃(θ, w) = h(θ, w) + M^{(θ)} with random variables M^{(w)} and M^{(θ)}. Thus, the gradients g̃(θ, w) and h̃(θ, w) are stochastic approximations to the true gradients. Consequently, we analyze the convergence of GANs by two time-scale stochastic approximation algorithms. For a two time-scale update rule (TTUR), we use the learning rates b(n) and a(n) for the discriminator and the generator update, respectively:

w_{n+1} = w_n + b(n) ( g(θ_n, w_n) + M^{(w)}_n ) ,    θ_{n+1} = θ_n + a(n) ( h(θ_n, w_n) + M^{(θ)}_n ) .
(1)

For more details on the following convergence proof and its assumptions see Supplement Section 2.1. To prove convergence of GANs learned by TTUR, we make the following assumptions (each actual assumption ends with ◀; the text that follows consists of comments and explanations):

(A1) The gradients h and g are Lipschitz. ◀ Consequently, networks with Lipschitz smooth activation functions like ELUs (α = 1) [11] fulfill the assumption, but not ReLU networks.

(A2) ∑_n a(n) = ∞, ∑_n a²(n) < ∞, ∑_n b(n) = ∞, ∑_n b²(n) < ∞, a(n) = o(b(n)). ◀

(A3) The stochastic gradient errors {M^{(θ)}_n} and {M^{(w)}_n} are martingale difference sequences w.r.t. the increasing σ-field F_n = σ(θ_l, w_l, M^{(θ)}_l, M^{(w)}_l, l ≤ n), n ≥ 0, with E[ ‖M^{(θ)}_n‖² | F_n ] ≤ B₁ and E[ ‖M^{(w)}_n‖² | F_n ] ≤ B₂, where B₁ and B₂ are positive deterministic constants. ◀ The original Assumption (A3) from Borkar 1997 follows from Lemma 2 in [5] (see also [43]). The assumption is fulfilled in the Robbins-Monro setting, where mini-batches are randomly sampled and the gradients are bounded.

(A4) For each θ, the ODE ẇ(t) = g(θ, w(t)) has a local asymptotically stable attractor λ(θ) within a domain of attraction G_θ such that λ is Lipschitz. The ODE θ̇(t) = h(θ(t), λ(θ(t))) has a local asymptotically stable attractor θ* within a domain of attraction. ◀ The discriminator must converge to a minimum for fixed generator parameters and the generator, in turn, must converge to a minimum for this fixed discriminator minimum. Borkar 1997 required unique global asymptotically stable equilibria [7]. The assumption of global attractors was relaxed to local attractors via Assumption (A6) and Theorem 2.7 in Karmakar & Bhatnagar [26]. See Assumption (A6) in Supplement Section 2.1.3 for more details. Here, the GAN objectives may serve as Lyapunov functions. These assumptions of locally stable ODEs can be ensured by an additional weight decay term in the loss function, which increases the eigenvalues of the Hessian.
Therefore, problems with a region-wise constant discriminator that has zero second order derivatives are avoided. For further discussion see Supplement Section 2.1.1 (C3).

(A5) sup_n ‖θ_n‖ < ∞ and sup_n ‖w_n‖ < ∞. ◀ Typically ensured by the objective or a weight decay term.

The next theorem was proved in the seminal paper of Borkar 1997 [7].

Theorem 1 (Borkar). If the assumptions are satisfied, then the updates Eq. (1) converge to (θ*, λ(θ*)) a.s.

The solution (θ*, λ(θ*)) is a stationary local Nash equilibrium [41], since θ* as well as λ(θ*) are local asymptotically stable attractors with g(θ*, λ(θ*)) = 0 and h(θ*, λ(θ*)) = 0. An alternative approach to the proof of convergence, using the Poisson equation for ensuring a solution to the fast update rule, can be found in Supplement Section 2.1.2. This approach assumes a linear update function in the fast update rule which, however, can be a linear approximation to a nonlinear gradient [28, 29]. For the rate of convergence see Supplement Section 2.2, where Section 2.2.1 focuses on linear and Section 2.2.2 on non-linear updates. For equal time-scales it can only be proven that the updates revisit an environment of the solution infinitely often, which, however, can be very large [50, 12]. For more details on the analysis of equal time-scales see Supplement Section 2.3. The main idea of the proof of Borkar [7] is to use (T, δ) perturbed ODEs according to Hirsch 1989 [22] (see also Appendix Section C of Bhatnagar, Prasad, & Prashanth 2013 [6]). The proof relies on the fact that there eventually is a time point when the perturbation of the slow update rule is small enough (given by δ) to allow the fast update rule to converge. For experiments with TTUR, we aim at finding learning rates such that the slow update is small enough to allow the fast update to converge. Typically, the slow update is the generator and the fast update the discriminator.
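To make the update rule Eq. (1) concrete, the following is a minimal deterministic sketch of a two time-scale loop on the toy saddle-point objective f(x, y) = (1 + x²)(100 − y²) used in the experiments section; the noise terms M are set to zero here, and the learning rates and step count are illustrative choices rather than tuned values.

```python
import math

def ttur_toy(a=1e-4, b=1e-2, steps=4000, x=0.5, y=0.5):
    """Deterministic version of the TTUR updates Eq. (1) on
    f(x, y) = (1 + x^2)(100 - y^2): gradient descent in x with the slow
    rate a and gradient ascent in y with the fast rate b, so that the
    iterates approach the saddle point (0, 0)."""
    for _ in range(steps):
        gx = 2.0 * x * (100.0 - y * y)   # df/dx
        gy = -2.0 * y * (1.0 + x * x)    # df/dy
        x -= a * gx                       # slow update (generator analogue)
        y += b * gy                       # fast update (discriminator analogue)
    return x, y

x, y = ttur_toy()
norm = math.hypot(x, y)  # distance to the saddle point shrinks toward 0
```

With a(n) = o(b(n)) in mind, a is chosen two orders of magnitude smaller than b; the fast variable y settles first, after which the slow variable x descends against an effectively converged fast player.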
We have to adjust the two learning rates such that the generator does not affect discriminator learning in an undesired way or perturb it too much. However, even a larger learning rate for the generator than for the discriminator may ensure that the discriminator has low perturbations. Learning rates cannot be translated directly into perturbations, since the perturbation of the discriminator by the generator is different from the perturbation of the generator by the discriminator.

2 Adam Follows an HBF ODE and Ensures TTUR Convergence

In our experiments, we aim at using Adam stochastic approximation to avoid mode collapsing. GANs suffer from “mode collapsing”, where large masses of probability are mapped onto a few modes that cover only small regions. While these regions represent meaningful samples, the variety of the real world data is lost and only a few prototype samples are generated. Different methods have been proposed to avoid mode collapsing [9, 37]. We obviate mode collapsing by using Adam stochastic approximation [27]. Adam can be described as Heavy Ball with Friction (HBF) (see below), since it averages over past gradients. This averaging corresponds to a velocity that makes the generator resistant to getting pushed into small regions. Adam as an HBF method typically overshoots small local minima that correspond to mode collapse and can find flat minima which generalize well [24]. Fig. 2 depicts the dynamics of HBF, where the ball settles at a flat minimum. Next, we analyze whether GANs trained with TTUR converge when using Adam. For more details see Supplement Section 3.

Figure 2: Heavy Ball with Friction, where the ball with mass overshoots the local minimum θ⁺ and settles at the flat minimum θ*.
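The HBF dynamics behind Fig. 2 can be simulated directly with a semi-implicit Euler discretization of the second order ODE. The quadratic objective, constant friction, and step size below are our illustrative choices (the analysis uses a time-dependent damping a(t)):

```python
def hbf(grad, theta=2.0, v=0.0, friction=1.0, dt=0.01, steps=5000):
    """Heavy Ball with Friction: integrate theta'' + a*theta' + grad f(theta) = 0
    with semi-implicit Euler. The velocity v accumulates past gradients; on
    rugged objectives this momentum lets the ball roll over small local minima."""
    for _ in range(steps):
        v += dt * (-friction * v - grad(theta))  # velocity (acceleration) step
        theta += dt * v                           # position step
    return theta

# Damped descent on f(theta) = 0.5 * (theta - 1)^2, whose minimum is theta = 1.
theta = hbf(lambda t: t - 1.0)
```

On this convex example the ball simply spirals into the minimum; the overshooting of small dips described above only shows up on objectives with several minima.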
We recapitulate the Adam update rule at step n, with learning rate a, exponential averaging factors β₁ for the first and β₂ for the second moment of the gradient ∇f(θ_{n−1}):

g_n ← ∇f(θ_{n−1})
m_n ← (β₁/(1 − β₁ⁿ)) m_{n−1} + ((1 − β₁)/(1 − β₁ⁿ)) g_n        (2)
v_n ← (β₂/(1 − β₂ⁿ)) v_{n−1} + ((1 − β₂)/(1 − β₂ⁿ)) g_n ⊙ g_n
θ_n ← θ_{n−1} − a m_n/(√v_n + ϵ) ,

where the following operations are meant componentwise: the product ⊙, the square root √·, and the division / in the last line. Instead of the learning rate a, we introduce the damping coefficient a(n) with a(n) = a n^{−τ} for τ ∈ (0, 1]. Adam has parameters β₁ for averaging the gradient and β₂, parametrized by a positive α, for averaging the squared gradient. These parameters can be considered as defining a memory for Adam. To characterize β₁ and β₂ in the following, we define the exponential memory r(n) = r and the polynomial memory r(n) = r/∑_{l=1}^{n} a(l) for some positive constant r. The next theorem describes Adam by a differential equation, which in turn allows to apply the idea of (T, δ) perturbed ODEs to TTUR. Consequently, learning GANs with TTUR and Adam converges.

Theorem 2. If Adam is used with β₁ = 1 − a(n + 1) r(n), β₂ = 1 − α a(n + 1) r(n), and with ∇f as the full gradient of the lower bounded, continuously differentiable objective f, then for stationary second moments of the gradient, Adam follows the differential equation for Heavy Ball with Friction (HBF):

θ̈_t + a(t) θ̇_t + ∇f(θ_t) = 0 .        (3)

Adam converges for gradients ∇f that are L-Lipschitz.

Proof. Gadat et al. derived a discrete and stochastic version of Polyak's Heavy Ball method [40], the Heavy Ball with Friction (HBF) [15]:

θ_{n+1} = θ_n − a(n + 1) m_n ,        (4)
m_{n+1} = (1 − a(n + 1) r(n)) m_n + a(n + 1) r(n) (∇f(θ_n) + M_{n+1}) .

These update rules are the first moment update rules of Adam [27]. The HBF can be formulated as the differential equation Eq. (3) [15]. Gadat et al. showed that the update rules Eq.
(4) converge for loss functions f with at most quadratic growth, and stated that convergence can be proved for ∇f that are L-Lipschitz [15]. Convergence has been proved for continuously differentiable f that is quasiconvex (Theorem 3 in Goudou & Munier [19]). Convergence has been proved for ∇f that is L-Lipschitz and bounded from below (Theorem 3.1 in Attouch et al. [3]). Adam normalizes the average m_n by the second moments v_n of the gradient g_n: v_n = E[g_n ⊙ g_n]. m_n is componentwise divided by the square root of the components of v_n. We assume that the second moments of g_n are stationary, i.e., v = E[g_n ⊙ g_n]. In this case the normalization can be considered as additional noise, since the normalization factor randomly deviates from its mean. In the HBF interpretation the normalization by √v corresponds to introducing gravitation. We obtain

v_n = ((1 − β₂)/(1 − β₂ⁿ)) ∑_{l=1}^{n} β₂^{n−l} g_l ⊙ g_l ,    Δv_n = v_n − v = ((1 − β₂)/(1 − β₂ⁿ)) ∑_{l=1}^{n} β₂^{n−l} (g_l ⊙ g_l − v) .        (5)

For a stationary second moment v and β₂ = 1 − α a(n + 1) r(n), we have Δv_n ∝ a(n + 1) r(n). We use a componentwise linear approximation to Adam's second moment normalization: 1/√(v + Δv_n) ≈ 1/√v − (1/(2 v ⊙ √v)) ⊙ Δv_n + O(Δ²v_n), where all operations are meant componentwise. If we set M^{(v)}_{n+1} = −(m_n ⊙ Δv_n)/(2 v ⊙ √v a(n + 1) r(n)), then m_n/√v_n ≈ m_n/√v + a(n + 1) r(n) M^{(v)}_{n+1} and E[M^{(v)}_{n+1}] = 0, since E[g_l ⊙ g_l − v] = 0. For a stationary second moment v, the random variable {M^{(v)}_n} is a martingale difference sequence with a bounded second moment. Therefore {M^{(v)}_{n+1}} can be subsumed into {M_{n+1}} in the update rules Eq. (4). The factor 1/√v can be componentwise incorporated into the gradient g, which corresponds to rescaling the parameters without changing the minimum. According to Attouch et al. [3], the energy, that is, a Lyapunov function, is E(t) = (1/2) |θ̇(t)|² + f(θ(t)) with Ė(t) = −a |θ̇(t)|² < 0.
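For reference, the bias-corrected Adam recursion of Eq. (2) in its standard per-parameter form; minimizing a toy quadratic here is our illustrative choice, not an experiment from the paper:

```python
import math

def adam_minimize(grad, theta=0.0, a=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Adam: exponential averages of the gradient (m) and of the squared
    gradient (v), each bias-corrected by 1/(1 - beta^n), with a componentwise
    step a * m_hat / (sqrt(v_hat) + eps)."""
    m = v = 0.0
    for n in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1.0 - beta1) * g        # first moment estimate
        v = beta2 * v + (1.0 - beta2) * g * g    # second moment estimate
        m_hat = m / (1.0 - beta1 ** n)           # bias corrections
        v_hat = v / (1.0 - beta2 ** n)
        theta -= a * m_hat / (math.sqrt(v_hat) + eps)
    return theta

# Minimize f(theta) = (theta - 3)^2 with gradient 2 * (theta - 3).
theta = adam_minimize(lambda t: 2.0 * (t - 3.0))
```

The averaged first moment m plays the role of the HBF velocity, while the √v̂ normalization corresponds to the "gravitation" term discussed above.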
Since Adam can be expressed as a differential equation and has a Lyapunov function, the idea of (T, δ) perturbed ODEs [7, 22, 8] carries over to Adam. Therefore the convergence of Adam with TTUR can be proved via two time-scale stochastic approximation analysis as in Borkar [7] for stationary second moments of the gradient. In the supplement we further discuss the convergence of two time-scale stochastic approximation algorithms with additive noise, linear update functions depending on Markov chains, nonlinear update functions, and updates depending on controlled Markov processes. Furthermore, the supplement presents work on the rate of convergence for both linear and nonlinear update rules, using techniques similar to the local stability analysis of Nagarajan and Kolter [39]. Finally, we elaborate more on equal time-scale updates, which are investigated for saddle point problems and actor-critic learning.

3 Experiments

Performance Measure. Before presenting the experiments, we introduce a quality measure for models learned by GANs. The objective of generative learning is that the model produces data which matches the observed data. Therefore, any distance between the probability of observing real world data p_w(.) and the probability of generating model data p(.) can serve as a performance measure for generative models. However, defining appropriate performance measures for generative models is difficult [45]. The best known measure is the likelihood, which can be estimated by annealed importance sampling [49]. However, the likelihood heavily depends on the noise assumptions for the real data and can be dominated by single samples [45]. Other approaches like density estimates have drawbacks, too [45]. A well-performing approach to measure the performance of GANs is the “Inception Score”, which correlates with human judgment [44]. Generated samples are fed into an inception model that was trained on ImageNet.
Images with meaningful objects are supposed to have low label (output) entropy, that is, they belong to few object classes. On the other hand, the entropy across images should be high, that is, the variance over the images should be large. A drawback of the Inception Score is that the statistics of real world samples are not used and compared to the statistics of synthetic samples. Next, we improve the Inception Score. The equality p(.) = p_w(.) holds except for a non-measurable set if and only if ∫ p(x) f(x) dx = ∫ p_w(x) f(x) dx for a basis f(.) spanning the function space in which p(.) and p_w(.) live. These equalities of expectations are used to describe distributions by moments or cumulants, where f(x) are polynomials of the data x. We generalize these polynomials by replacing x by the coding layer of an inception model in order to obtain vision-relevant features. For practical reasons we only consider the first two polynomials, that is, the first two moments: mean and covariance. The Gaussian is the maximum entropy distribution for given mean and covariance, therefore we assume the coding units to follow a multidimensional Gaussian.

Figure 3: FID is evaluated for upper left: Gaussian noise, upper middle: Gaussian blur, upper right: implanted black rectangles, lower left: swirled images, lower middle: salt and pepper noise, and lower right: CelebA dataset contaminated by ImageNet images. The disturbance level rises from zero and increases to the highest level. The FID captures the disturbance level very well by monotonically increasing.
The difference of two Gaussians (synthetic and real-world images) is measured by the Fréchet distance [14], also known as Wasserstein-2 distance [48]. We call the Fréchet distance d(., .) between the Gaussian with mean and covariance (m, C) obtained from p(.) and the Gaussian with mean and covariance (m_w, C_w) obtained from p_w(.) the “Fréchet Inception Distance” (FID), which is given by [13]:

d²((m, C), (m_w, C_w)) = ‖m − m_w‖₂² + Tr( C + C_w − 2 (C C_w)^{1/2} ) .

Next we show that the FID is consistent with increasing disturbances and human judgment. Fig. 3 evaluates the FID for Gaussian noise, Gaussian blur, implanted black rectangles, swirled images, salt and pepper noise, and the CelebA dataset contaminated by ImageNet images. The FID captures the disturbance level very well. In the experiments we used the FID to evaluate the performance of GANs. For more details and a comparison between FID and Inception Score see Supplement Section 1, where we show that FID is more consistent with the noise level than the Inception Score.

Model Selection and Evaluation. We compare the two time-scale update rule (TTUR) for GANs with the original GAN training to see whether TTUR improves the convergence speed and performance of GANs. We have selected Adam stochastic optimization to reduce the risk of mode collapsing. The advantage of Adam has been confirmed by MNIST experiments, where Adam indeed considerably reduced the cases for which we observed mode collapsing. Although TTUR ensures that the discriminator converges during learning, practicable learning rates must be found for each experiment. We face a trade-off, since the learning rates should be small enough (e.g. for the generator) to ensure convergence, but at the same time should be large enough to allow fast learning. For each of the experiments, the learning rates have been optimized to be large while still ensuring stable training, which is indicated by a decreasing FID or Jensen-Shannon-divergence (JSD).
We further fixed the time point for stopping training to the update step when the FID or Jensen-Shannon-divergence of the best models was no longer decreasing. For some models, we observed that the FID diverges or starts to increase at a certain time point. An example of this behaviour is shown in Fig. 5. The performance of generative models is evaluated via the Fréchet Inception Distance (FID) introduced above. For the One Billion Word experiment, the normalized JSD served as performance measure. For computing the FID, we propagated all images from the training dataset through the pretrained Inception-v3 model following the computation of the Inception Score [44]; however, we use the last pooling layer as coding layer. For this coding layer, we calculated the mean m_w and the covariance matrix C_w. Thus, we approximate the first and second central moment of the function given by the Inception coding layer under the real world distribution. To approximate these moments for the model distribution, we generate 50,000 images, propagate them through the Inception-v3 model, and then compute the mean m and the covariance matrix C. For computational efficiency, we evaluate the FID every 1,000 DCGAN mini-batch updates, every 5,000 WGAN-GP outer iterations for the image experiments, and every 100 outer iterations for the WGAN-GP language model. For the one time-scale updates, a WGAN-GP outer iteration for the image model consists of five discriminator mini-batches, and of ten discriminator mini-batches for the language model, where we follow the original implementation. For TTUR, however, the discriminator is updated only once per iteration. We repeat the training for each single time-scale (orig) and TTUR learning rate eight times for the image datasets and ten times for the language benchmark. In addition to the mean FID training progress, we show the minimum and maximum FID over all runs at each evaluation time-step.
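Given coding-layer features of real and generated images, the Fréchet distance defined above can be estimated in a few lines of NumPy. The sketch below uses the identity Tr((C C_w)^{1/2}) = Tr((C^{1/2} C_w C^{1/2})^{1/2}) for symmetric positive semi-definite covariances, so that only symmetric matrix square roots are needed; the function names are ours:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Symmetric square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)       # guard tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(feats_real, feats_gen):
    """d^2((m, C), (m_w, C_w)) = ||m - m_w||^2 + Tr(C + C_w - 2 (C C_w)^{1/2}),
    estimated from two arrays of coding-layer feature vectors (one per row)."""
    m, mw = feats_gen.mean(axis=0), feats_real.mean(axis=0)
    c, cw = np.cov(feats_gen, rowvar=False), np.cov(feats_real, rowvar=False)
    s = _sqrtm_psd(c)
    covmean = _sqrtm_psd(s @ cw @ s)      # same trace as (C C_w)^{1/2}
    return float(np.sum((m - mw) ** 2) + np.trace(c + cw - 2.0 * covmean))
```

For identical feature sets the FID is zero, and shifting every feature by a constant changes only the mean term of the distance.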
For more details, implementations and further results see Supplement Sections 4 and 6.

Simple Toy Data. We first want to demonstrate the difference between a single time-scale update rule and TTUR on a simple toy min/max problem where a saddle point should be found. The objective f(x, y) = (1 + x²)(100 − y²) in Fig. 4 (left) has a saddle point at (x, y) = (0, 0) and fulfills assumption (A4). The norm ‖(x, y)‖ measures the distance of the parameter vector (x, y) to the saddle point. We update (x, y) by gradient descent in x and gradient ascent in y, using additive Gaussian noise in order to simulate a stochastic update. The updates should converge to the saddle point (x, y) = (0, 0) with objective value f(0, 0) = 100 and norm 0. In Fig. 4 (right), the first two rows show one time-scale update rules. The large learning rate in the first row leads to divergence and large fluctuations. The smaller learning rate in the second row converges, but more slowly than the TTUR in the third row, which has slow x-updates. TTUR with slow y-updates in the fourth row also converges, but more slowly.

Figure 4: Left: Plot of the objective with a saddle point at (0, 0). Right: Training progress with equal learning rates of 0.01 (first row) and 0.001 (second row) for x and y, TTUR with a learning rate of 0.0001 for x vs. 0.01 for y (third row), and a larger learning rate of 0.01 for x vs. 0.0001 for y (fourth row). The columns show the function values (left), norms (middle), and (x, y) (right). TTUR (third row) clearly converges faster than with equal time-scale updates and directly moves to the saddle point, as shown by the norm and in the (x, y)-plot.

DCGAN on Image Data. We test TTUR for the deep convolutional GAN (DCGAN) [42] on the CelebA, CIFAR-10, SVHN and LSUN Bedrooms datasets. Fig.
5 shows the FID during learning with the original learning method (orig) and with TTUR. The original training method is faster at the beginning, but TTUR eventually achieves better performance. DCGAN trained with TTUR constantly reaches a lower FID than the original method, and for CelebA and LSUN Bedrooms all one time-scale runs diverge. For DCGAN, the learning rate of the generator is larger than that of the discriminator, which, however, does not contradict the TTUR theory (see Supplement Section 5). In Table 1 we report the best FID with TTUR and one time-scale training for an optimized number of updates and learning rates. TTUR consistently outperforms standard training and is more stable.

WGAN-GP on Image Data. We used the WGAN-GP image model [21] to test TTUR with the CIFAR-10 and LSUN Bedrooms datasets. In contrast to the original code, where the discriminator is trained five times for each generator update, TTUR updates the discriminator only once; therefore we align the training progress with wall-clock time. The learning rate for the original training was optimized to be large while still leading to stable learning. TTUR can use a higher learning rate for the discriminator since TTUR stabilizes learning. Fig. 6 shows the FID during learning with the original

Figure 5: Mean FID (solid line) surrounded by a shaded area bounded by the maximum and the minimum over 8 runs for DCGAN on CelebA, CIFAR-10, SVHN, and LSUN Bedrooms. TTUR learning rates are given for the discriminator b and generator a as: “TTUR b a”. Top Left: CelebA.
Top Right: CIFAR-10, starting at mini-batch update 10k for better visualisation. Bottom Left: SVHN. Bottom Right: LSUN Bedrooms. Training with TTUR (red) is more stable, has much lower variance, and leads to a better FID.

Figure 6: Mean FID (solid line) surrounded by a shaded area bounded by the maximum and the minimum over 8 runs for WGAN-GP on CIFAR-10 and LSUN Bedrooms. TTUR learning rates are given for the discriminator b and generator a as: “TTUR b a”. Left: CIFAR-10, starting at minute 20. Right: LSUN Bedrooms. Training with TTUR (red) has much lower variance and leads to a better FID.

learning method and with TTUR. Table 1 shows the best FID with TTUR and one time-scale training for an optimized number of iterations and learning rates. Again, TTUR reaches lower FIDs than one time-scale training.

WGAN-GP on Language Data. Finally, the One Billion Word Benchmark [10] serves to evaluate TTUR on WGAN-GP. The character-level generative language model is a 1D convolutional neural network (CNN) which maps a latent vector to a sequence of one-hot character vectors of dimension 32 given by the maximum of a softmax output. The discriminator is also a 1D CNN applied to sequences of one-hot vectors of 32 characters. Since the FID criterion only works for images, we measured the performance by the Jensen-Shannon-divergence (JSD) between the model and the real world distribution, as has been done previously [21]. In contrast to the original code, where the critic is trained ten times for each generator update, TTUR updates the discriminator only once; therefore we align the training progress with wall-clock time. The learning rate for the original training was optimized to be large while still leading to stable learning.
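A JSD evaluation on character n-gram statistics like the one described above can be sketched as follows; we use log base 2 so the divergence lies in [0, 1], and the helper names and example strings are illustrative (the exact n-gram preprocessing follows [21]):

```python
import math
from collections import Counter

def ngram_dist(text, n):
    """Empirical distribution over the character n-grams of a string."""
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def jsd(p, q):
    """Jensen-Shannon divergence, normalized to [0, 1] by using log base 2."""
    m = {g: 0.5 * (p.get(g, 0.0) + q.get(g, 0.0)) for g in set(p) | set(q)}
    def kl(a, b):
        return sum(pa * math.log2(pa / b[g]) for g, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real = ngram_dist("the quick brown fox jumps over the lazy dog", 4)
fake = ngram_dist("teh qiuck borwn fxo jmups oevr teh lzay dgo", 4)
```

jsd(real, real) is 0, while distinct distributions give a value strictly between 0 and 1.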
TTUR can use a higher learning rate for the discriminator since TTUR stabilizes learning. We report the normalized mean JSD for the 4-gram and 6-gram word evaluations over ten runs for original training and TTUR training in Fig. 7. In Table 1 we report the best JSD at an optimal time-step, where TTUR outperforms the standard training for both measures. The improvement of TTUR on the 6-gram statistics over original training shows that TTUR enables the model to learn to generate more subtle pseudo-words which better resemble real words.

Figure 7: Performance of WGAN-GP models trained with the original (orig) and our TTUR method on the One Billion Word benchmark. The performance is measured by the normalized Jensen-Shannon-divergence based on 4-gram (left) and 6-gram (right) statistics, averaged (solid line) and surrounded by a shaded area bounded by the maximum and the minimum over 10 runs, aligned to wall-clock time and starting at minute 150. TTUR learning (red) clearly outperforms the original one time-scale learning.

Table 1: The performance of DCGAN and WGAN-GP trained with the original one time-scale update rule and with TTUR on CelebA, CIFAR-10, SVHN, LSUN Bedrooms and the One Billion Word Benchmark. During training we compare the performance with respect to the FID and JSD for an optimized number of updates. TTUR exhibits consistently a better FID and a better JSD.
DCGAN (image data, best FID):
  dataset   | TTUR b, a  | updates | FID  | orig b = a | updates | FID
  CelebA    | 1e-5, 5e-4 | 225k    | 12.5 | 5e-4       | 70k     | 21.4
  CIFAR-10  | 1e-4, 5e-4 | 75k     | 36.9 | 1e-4       | 100k    | 37.7
  SVHN      | 1e-5, 1e-4 | 165k    | 12.5 | 5e-5       | 185k    | 21.4
  LSUN      | 1e-5, 1e-4 | 340k    | 57.5 | 5e-5       | 70k     | 70.4

WGAN-GP (image data, best FID):
  dataset   | TTUR b, a  | time (m) | FID  | orig b = a | time (m) | FID
  CIFAR-10  | 3e-4, 1e-4 | 700      | 24.8 | 1e-4       | 800      | 29.3
  LSUN      | 3e-4, 1e-4 | 1900     | 9.5  | 1e-4       | 2010     | 20.5

WGAN-GP (language data, best JSD):
  n-gram    | TTUR b, a  | time (m) | JSD  | orig b = a | time (m) | JSD
  4-gram    | 3e-4, 1e-4 | 1150     | 0.35 | 1e-4       | 1040     | 0.38
  6-gram    | 3e-4, 1e-4 | 1120     | 0.74 | 1e-4       | 1070     | 0.77

4 Conclusion

For learning GANs, we have introduced the two time-scale update rule (TTUR), which we have proved to converge to a stationary local Nash equilibrium. We then described Adam stochastic optimization as a heavy ball with friction (HBF) dynamics, which shows that Adam converges and tends to find flat minima while avoiding small local minima. A second-order differential equation describes the learning dynamics of Adam as an HBF system. Via this differential equation, the convergence of GANs trained with TTUR to a stationary local Nash equilibrium can be extended to Adam. Finally, to evaluate GANs, we introduced the “Fréchet Inception Distance” (FID), which captures the similarity of generated images to real ones better than the Inception Score. In experiments, we have compared GANs trained with TTUR to conventional GAN training with a one time-scale update rule on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark. TTUR consistently outperforms conventional GAN training in all experiments.

Acknowledgment

This work was supported by NVIDIA Corporation, Bayer AG with Research Agreement 09/2017, Zalando SE with Research Agreement 01/2016, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, IWT research grant IWT150865 (Exaptation), H2020 project grant 671555 (ExCAPE), and FWF grant P 28660-N31.
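As a concrete companion to the FID used throughout these experiments: the distance is the Fréchet distance between two Gaussians fitted to feature vectors of real and generated samples. Below is a minimal NumPy sketch of that formula only; the Inception feature extraction is omitted, and the helper names are ours, not from the paper's code. It uses the identity Tr((C1 C2)^{1/2}) = Tr((C1^{1/2} C2 C1^{1/2})^{1/2}), whose inner matrix is symmetric PSD.

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Squared Frechet distance between Gaussians:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    diff = mu1 - mu2
    # Symmetric PSD square root of cov1 via eigendecomposition.
    vals, vecs = np.linalg.eigh(cov1)
    sqrt_cov1 = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    # Tr((C1 C2)^{1/2}) via the symmetric matrix C1^{1/2} C2 C1^{1/2}.
    inner_vals = np.linalg.eigvalsh(sqrt_cov1 @ cov2 @ sqrt_cov1)
    tr_sqrt = np.sum(np.sqrt(np.clip(inner_vals, 0.0, None)))
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_sqrt)

def fid_from_features(feats_real, feats_fake):
    """Fit a Gaussian to each feature matrix (rows = samples) and compare.
    In the paper the features come from an Inception coding layer; here any
    feature matrix works."""
    mu1, cov1 = feats_real.mean(0), np.cov(feats_real, rowvar=False)
    mu2, cov2 = feats_fake.mean(0), np.cov(feats_fake, rowvar=False)
    return frechet_distance(mu1, cov1, mu2, cov2)
```

For two standard Gaussians with identity covariance, the trace term vanishes and the distance reduces to the squared mean difference, which makes the formula easy to sanity-check.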
References

[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv e-prints, arXiv:1701.07875, 2017.
[2] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 70, pages 224–232, 2017.
[3] H. Attouch, X. Goudou, and P. Redont. The heavy ball with friction method, I. The continuous dynamical system: Global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Communications in Contemporary Mathematics, 2(1):1–34, 2000.
[4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv e-prints, arXiv:1703.10717, 2017.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
[6] S. Bhatnagar, H. L. Prasad, and L. A. Prashanth. Stochastic Recursive Algorithms for Optimization. Lecture Notes in Control and Information Sciences. Springer-Verlag London, 2013.
[7] V. S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters, 29(5):291–294, 1997.
[8] V. S. Borkar and S. P. Meyn. The O.D.E. method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447–469, 2000.
[9] T. Che, Y. Li, A. P. Jacob, Y. Bengio, and W. Li. Mode regularized generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. arXiv:1612.02136.
[10] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv e-prints, arXiv:1312.3005, 2013.
[11] D.-A. Clevert, T. Unterthiner, and S. Hochreiter.
Fast and accurate deep network learning by exponential linear units (ELUs). In Proceedings of the International Conference on Learning Representations (ICLR), 2016. arXiv:1511.07289.
[12] D. DiCastro and R. Meir. A convergent online single time scale actor critic algorithm. J. Mach. Learn. Res., 11:367–410, 2010.
[13] D. C. Dowson and B. V. Landau. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 12:450–455, 1982.
[14] M. Fréchet. Sur la distance de deux lois de probabilité. C. R. Acad. Sci. Paris, 244:689–692, 1957.
[15] S. Gadat, F. Panloup, and S. Saadane. Stochastic heavy ball. arXiv e-prints, arXiv:1609.04228, 2016.
[16] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680, 2014.
[17] I. J. Goodfellow. On distinguishability criteria for estimating generative models. In Workshop at the International Conference on Learning Representations (ICLR), 2015. arXiv:1412.6515.
[18] I. J. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv e-prints, arXiv:1701.00160, 2017.
[19] X. Goudou and J. Munier. The gradient and heavy ball with friction dynamical systems: the quasiconvex case. Mathematical Programming, 116(1):173–191, 2009.
[20] P. Grnarova, K. Y. Levy, A. Lucchi, T. Hofmann, and A. Krause. An online learning approach to generative adversarial networks. arXiv e-prints, arXiv:1706.03269, 2017.
[21] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. arXiv e-prints, arXiv:1704.00028, 2017. Advances in Neural Information Processing Systems 31 (NIPS 2017).
[22] M. W. Hirsch. Convergent activation dynamics in continuous time networks. Neural Networks, 2(5):331–349, 1989.
[23] R. D. Hjelm, A. P. Jacob, T.
Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. arXiv e-prints, arXiv:1702.08431, 2017.
[24] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
[25] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. arXiv:1611.07004.
[26] P. Karmakar and S. Bhatnagar. Two time-scale stochastic approximation with controlled Markov noise and off-policy temporal-difference learning. Mathematics of Operations Research, 2017.
[27] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. arXiv:1412.6980.
[28] V. R. Konda. Actor-Critic Algorithms. PhD thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2002.
[29] V. R. Konda and J. N. Tsitsiklis. Linear stochastic approximation driven by slowly varying Markov chains. Systems & Control Letters, 50(2):95–102, 2003.
[30] H. J. Kushner and G. G. Yin. Stochastic Approximation Algorithms and Recursive Algorithms and Applications. Springer-Verlag New York, second edition, 2003.
[31] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv e-prints, arXiv:1609.04802, 2016.
[32] C.-L. Li, W.-C. Chang, Y. Cheng, Y. Yang, and B. Póczos. MMD GAN: Towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems 31 (NIPS 2017), 2017. arXiv:1705.08584.
[33] J. Li, A. Madry, J. Peebles, and L. Schmidt. Towards understanding the dynamics of generative adversarial networks. arXiv e-prints, arXiv:1706.09884, 2017.
[34] J. H. Lim and J. C. Ye. Geometric GAN. arXiv e-prints, arXiv:1705.02894, 2017.
[35] S. Liu, O.
Bousquet, and K. Chaudhuri. Approximation and convergence properties of generative adversarial learning. In Advances in Neural Information Processing Systems 31 (NIPS 2017), 2017. arXiv:1705.08991.
[36] L. M. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs. In Advances in Neural Information Processing Systems 31 (NIPS 2017), 2017. arXiv:1705.10461.
[37] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. arXiv:1611.02163.
[38] Y. Mroueh and T. Sercu. Fisher GAN. In Advances in Neural Information Processing Systems 31 (NIPS 2017), 2017. arXiv:1705.09675.
[39] V. Nagarajan and J. Z. Kolter. Gradient descent GAN optimization is locally stable. arXiv e-prints, arXiv:1706.04156, 2017. Advances in Neural Information Processing Systems 31 (NIPS 2017).
[40] B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
[41] H. L. Prasad, L. A. Prashanth, and S. Bhatnagar. Two-timescale algorithms for learning Nash equilibria in general-sum stochastic games. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’15), pages 1371–1379, 2015.
[42] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. arXiv:1511.06434.
[43] A. Ramaswamy and S. Bhatnagar. Stochastic recursive inclusion in two timescales with an application to the Lagrangian dual problem. Stochastics, 88(8):1173–1187, 2016.
[44] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R.
Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2234–2242, 2016.
[45] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. arXiv:1511.01844.
[46] I. Tolstikhin, S. Gelly, O. Bousquet, C.-J. Simon-Gabriel, and B. Schölkopf. AdaGAN: Boosting generative models. arXiv e-prints, arXiv:1701.02386, 2017. Advances in Neural Information Processing Systems 31 (NIPS 2017).
[47] R. Wang, A. Cully, H. J. Chang, and Y. Demiris. MAGAN: Margin adaptation for generative adversarial networks. arXiv e-prints, arXiv:1704.03817, 2017.
[48] L. N. Wasserstein. Markov processes over denumerable products of spaces describing large systems of automata. Probl. Inform. Transmission, 5:47–52, 1969.
[49] Y. Wu, Y. Burda, R. Salakhutdinov, and R. B. Grosse. On the quantitative analysis of decoder-based generative models. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. arXiv:1611.04273.
[50] J. Zhang, D. Zheng, and M. Chiang. The impact of stochastic noisy feedback on distributed network utility maximization. In IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications, pages 222–230, 2007.
Real-Time Bidding with Side Information

Arthur Flajolet (MIT, ORC) flajolet@mit.edu
Patrick Jaillet (MIT, EECS, LIDS, ORC) jaillet@mit.edu

Abstract

We consider the problem of repeated bidding in online advertising auctions when some side information (e.g. browser cookies) is available ahead of submitting a bid, in the form of a d-dimensional vector. The goal for the advertiser is to maximize the total utility (e.g. the total number of clicks) derived from displaying ads, given that a limited budget B is allocated for a given time horizon T. Optimizing the bids is modeled as a contextual Multi-Armed Bandit (MAB) problem with a knapsack constraint and a continuum of arms. We develop UCB-type algorithms that combine two streams of literature: the confidence-set approach to linear contextual MABs and the probabilistic bisection search method for stochastic root-finding. Under mild assumptions on the underlying unknown distribution, we establish distribution-independent regret bounds of order Õ(d · √T) when either B = ∞ or B scales linearly with T.

1 Introduction

On the internet, advertisers and publishers now interact through real-time marketplaces called ad exchanges. Through them, any publisher can sell the opportunity to display an ad when somebody is visiting a webpage he or she owns. Conversely, any advertiser interested in such an opportunity can pay to have his or her ad displayed. In order to match publishers with advertisers and to determine prices, ad exchanges commonly use a variant of second-price auctions which typically runs as follows. Each participant is initially provided with some information about the person that will be targeted by the ad (e.g. browser cookies, IP address, and operating system) along with some information about the webpage (e.g. theme) and the ad slot (e.g. width and visibility). Based on this limited knowledge, advertisers must submit a bid in a timely fashion if they deem the opportunity worthwhile.
Subsequently, the highest bidder gets his or her ad displayed and is charged the second-highest bid. Moreover, the winner can usually track the customer’s interaction with the ad (e.g. clicks). Because the auction is sealed, very limited feedback is provided to the advertiser if the auction is lost. In particular, the advertiser does not receive any customer feedback in this scenario. In addition, the demand for ad slots, the supply of ad slots, and the websurfers’ profiles cannot be predicted ahead of time and are thus commonly modeled as random variables, see [19]. These two features contribute to making the problem of bid optimization in ad auctions particularly challenging for advertisers.

1.1 Problem statement and contributions

We consider an advertiser interested in purchasing ad impressions through an ad exchange. As is standard practice in the online advertising industry, we suppose that the advertiser has allocated a limited budget B for a limited period of time, which corresponds to the next T ad auctions. Rounds, indexed by t ∈ N, correspond to ad auctions in which the advertiser participates. At the beginning of round t ∈ N, some contextual information about the ad slot and the person that will be targeted is revealed to the advertiser in the form of a multidimensional vector xt ∈ X, where X is a subset of R^d. Without loss of generality, the coordinates of xt are assumed to be normalized in such a way that ∥x∥∞ ≤ 1 for all x ∈ X. Given xt, the advertiser must submit a bid bt in a timely fashion. If bt is larger than the highest bid submitted by the competitors, denoted by pt and also referred to as the market price, the advertiser wins the auction, is charged pt, and gets his or her ad displayed, from which he or she derives a utility vt. Monetary amounts and utility values are assumed to be normalized in such a way that bt, pt, vt ∈ [0, 1].

(31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.)
In this modeling, one of the competitors is the publisher himself, who submits a reserve price, so that pt > 0. No one wins the auction if no bid is larger than the reserve price. For the purpose of modeling, we suppose that ties are broken in favor of the advertiser, but this choice is arbitrary and by no means a limitation of the approach. Hence, the advertiser collects a reward rt = vt · 1{bt ≥ pt} and is charged ct = pt · 1{bt ≥ pt} at the end of round t. Since the monetary value of getting an ad displayed is typically difficult to assess, vt and ct may be expressed in different units and thus cannot be compared directly in general, which makes the problem two-dimensional. This is the case, for example, when the goal of the advertiser is to maximize the number of clicks, in which case vt = 1 if the ad was clicked on and vt = 0 otherwise. We consider a stochastic setting where the environment and the competitors are not fully adversarial. Specifically, we assume that, at any round t ∈ N, the vector (xt, vt, pt) is jointly drawn from a fixed probability distribution ν independently from the past. While this assumption may seem unnatural at first, as the other bidders also act as learning agents, it is motivated by the following observation. In our setting, we consider that there are many bidders, each participating in a small subset of a large number of auctions, who value ad opportunities very differently depending on the intended audience, the nature and topic of the ads, and other technical constraints. Since bidders have no idea who they will be competing against for a particular ad (because the auctions are sealed), they are naturally led to be oblivious to the competition and to bid with the only objective of maximizing their own objective functions.
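The per-round mechanics described above — the advertiser wins iff bt ≥ pt (ties favor the advertiser), pays the market price pt, and observes pt and vt only on a win — can be sketched as follows. The helper and field names are ours, for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoundOutcome:
    won: bool
    reward: float                    # r_t = v_t * 1{b_t >= p_t}
    cost: float                      # c_t = p_t * 1{b_t >= p_t}
    observed_price: Optional[float]  # p_t is revealed only on a win
    observed_value: Optional[float]  # v_t is revealed only on a win

def run_auction_round(bid, market_price, value):
    """One sealed second-price ad auction from the advertiser's viewpoint.

    The advertiser wins iff bid >= market_price (ties broken in the
    advertiser's favor, as in the paper) and pays the market price; the
    censored feedback is modeled by returning None for p_t and v_t on a loss."""
    won = bid >= market_price
    return RoundOutcome(
        won=won,
        reward=value if won else 0.0,
        cost=market_price if won else 0.0,
        observed_price=market_price if won else None,
        observed_value=value if won else None,
    )
```

Returning `None` on a loss makes explicit that a losing bid yields no information about either the market price or the utility.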
Given the variety of objective functions and the large number of bidders and ad auctions, we argue that, by the law of large numbers, the process (xt, pt, vt)t=1,...,T that we experience as a bidder is i.i.d., at least for a short period of time. Moreover, while the assumption that the distribution of (xt, vt, pt) is stationary may only be valid for a short period of time, advertisers tend to participate in a large number of ad auctions per second, so that T and B are typically large values, which motivates an asymptotic study. We generically denote by (X, V, P) a vector of random variables distributed according to ν. We make a structural assumption about ν, which we use throughout the paper.

Assumption 1. The random variables V and P are conditionally independent given X. Moreover, there exists θ∗ ∈ R^d such that E[V | X] = X^T θ∗ and ∥θ∗∥∞ ≤ 1.

Note, in particular, that Assumption 1 is satisfied if V and P are deterministic functions of X. The first part of Assumption 1 is very natural since: (i) X captures all and only the information about the ad shared with all bidders before submitting a bid and (ii) websurfers are oblivious to the ad auctions that take place behind the scenes to determine which ad they will be presented with. The second part of Assumption 1 is standard in the literature on linear contextual MABs, see [1] and [16], and is arguably the simplest model capturing a dependence between xt and vt. When the advertiser’s objective is to maximize the number of clicks, this assumption translates into a linear Click-Through Rate (CTR) model. We denote by (Ft)t∈N (resp. (F̃t)t∈N) the natural filtration generated by ((xt, vt, pt))t∈N (resp. ((xt+1, vt, pt))t∈N). Since the advertiser can keep bidding only so long as he or she does not run out of money or time, he or she can no longer participate in ad auctions at round τ∗, mathematically defined by:

τ∗ = min(T + 1, min{t ∈ N | Σ_{τ=1}^{t} cτ > B}).

Note that τ∗ is a stopping time with respect to (Ft)t∈N.
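The stopping time τ∗ is straightforward to compute on a realized cost sequence; a small sketch of the definition above (the helper name is ours, the paper only defines the quantity):

```python
def stopping_time(costs, B, T):
    """tau* = min(T + 1, min{t : c_1 + ... + c_t > B}), computed on a
    realized cost sequence (c_t)."""
    total = 0.0
    for t, c in enumerate(costs[:T], start=1):
        total += c
        if total > B:   # budget strictly exceeded: bidding stops at round t
            return t
    return T + 1        # budget never strictly exceeded within the horizon
```

Note the strict inequality: spending exactly B does not stop the advertiser, matching the definition.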
The difficulty for the advertiser when it comes to determining how much to bid at each round lies in the fact that the underlying distribution ν is initially unknown. This task is further complicated by the fact that the feedback provided to the advertiser upon bidding bt is partially censored: pt and vt are only revealed if the advertiser wins the auction, i.e. if bt ≥ pt. In particular, when bt < pt, the advertiser can never evaluate how much reward would have been obtained and what price would have been charged if he or she had submitted a higher bid. The goal for the advertiser is to design a non-anticipating algorithm that, at any round t, selects bt based on the information acquired in the past so as to keep the pseudo-regret, defined as:

RB,T = EROPT(B, T) − E[Σ_{t=1}^{τ∗−1} rt],

as small as possible, where EROPT(B, T) is the maximum expected sum of rewards that can be obtained by a non-anticipating oracle algorithm that has knowledge of the underlying distribution. Here, an algorithm is said to be non-anticipating if the bid selection process does not depend on the future observations. We develop algorithms with bounds on the pseudo-regret that do not depend on the underlying distribution ν, which are referred to as distribution-independent regret bounds. This entails studying the asymptotic behavior of RB,T when B and T go to infinity. For mathematical convenience, we consider that the advertiser keeps bidding even if he or she has run out of time or money so that all quantities are well defined for any t ∈ N. Of course, the rewards obtained for t ≥ τ∗ are not taken into account in the advertiser’s total reward when establishing regret bounds.

Contributions. We develop UCB-type algorithms that combine the ellipsoidal confidence set approach to linear contextual MAB problems with a special-purpose stochastic binary search procedure.
When the budget is unlimited or when it scales linearly with time, we show that, under additional technical assumptions on the underlying distribution ν, our algorithms incur a regret RB,T = Õ(d · √T), where the Õ notation hides logarithmic factors in d and T. A key insight is that overbidding is not only essential to incentivize exploration in order to estimate θ∗, but also crucial to find the optimal bidding strategy given θ∗, because bidding higher always provides more feedback in real-time bidding.

1.2 Literature review

To handle the exploration-exploitation trade-off inherent to MAB problems, an approach that has proved particularly successful is the optimism in the face of uncertainty paradigm. The idea is to consider all plausible scenarios consistent with the information collected so far and to select the decision that yields the largest reward among all identified scenarios. Auer et al. [7] use this idea to solve the standard MAB problem, where decisions are represented by K ∈ N arms and pulling arm k ∈ {1, · · · , K} at round t ∈ {1, · · · , T} yields a random reward drawn from an unknown distribution specific to this arm, independently from the past. Specifically, Auer et al. [7] develop the Upper Confidence Bound algorithm (UCB1), which consists in selecting the arm with the currently largest upper confidence bound on its mean reward, and establish near-optimal regret bounds. This approach has since been successfully extended to a number of more general settings. Of most notable interest to us are: (i) linear contextual MAB problems, where, for each arm k and at each round t, some context x_t^k is provided to the decision maker ahead of pulling any arm and the expected reward of arm k is θ∗^T x_t^k for some unknown θ∗ ∈ R^d, and (ii) the Bandits with Knapsacks (BwK) framework, an extension of the standard MAB problem that makes it possible to model resource consumption.
UCB-type algorithms for linear contextual MAB problems were first developed in [6] and later extended and improved upon in [1] and [16]. In this line of work, the key idea is to build, at any round t, an ellipsoidal confidence set Ct on the unknown parameter θ∗ and to pull the arm k that maximizes max_{θ∈Ct} θ^T x_t^k. Using this idea, Chu et al. [16] derive Õ(√(d · T)) upper bounds on regret that hold with high probability, where the Õ notation hides logarithmic factors in d and T. While this result is not directly applicable in our setting, partly because of the knapsack constraint, we rely on this technique to estimate θ∗. The real-time bidding problem considered in this work can be formulated as a BwK problem with contextual information and a continuum of arms. This framework, first introduced in its full generality in [10] and later extended to incorporate contextual information in [11], [3], and [2], captures resource consumption by assuming that pulling any arm incurs the consumption of possibly many different limited resource types by random amounts. BwK problems are notoriously harder to solve than standard MAB problems. For example, sublinear regret cannot be achieved in general for BwK problems when an opponent adversarially picks the rewards and the amounts of resource consumption at each round, see [10], while this is possible for standard MAB problems, see [8]. The problem becomes even more complex when some contextual information is available at the beginning of each round, as approaches developed for standard contextual MAB problems and for BwK problems fail when applied to contextual BwK problems, see the discussion in [11], which calls for the development of new techniques.
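The ellipsoidal confidence-set idea has a convenient closed form: max over θ ∈ Ct of θᵀx equals θ̂ᵀx + δ·√(xᵀM⁻¹x), where θ̂ is the regularized least-squares estimate and M accumulates the observed contexts. A minimal sketch with identity regularization follows; the class, attribute, and parameter names are ours, for illustration only:

```python
import numpy as np

class LinUCBEllipsoid:
    """Ellipsoidal confidence set for a linear reward model, in the spirit of
    the confidence-set approach to linear contextual bandits described above.
    Identity regularization (M_0 = I_d); `delta` is the ellipsoid radius."""

    def __init__(self, d, delta):
        self.M = np.eye(d)      # M_t = I_d + sum of x x^T over observed rounds
        self.b = np.zeros(d)    # sum of v * x over observed rounds
        self.delta = delta

    def update(self, x, v):
        """Record one (context, reward) observation."""
        self.M += np.outer(x, x)
        self.b += v * x

    def ucb(self, x):
        """Closed form of max_{theta in C_t} theta^T x:
        theta_hat^T x + delta * sqrt(x^T M^{-1} x)."""
        theta_hat = np.linalg.solve(self.M, self.b)
        width = np.sqrt(float(x @ np.linalg.solve(self.M, x)))
        return float(theta_hat @ x) + self.delta * width
```

As data accumulates, M grows and the exploration width √(xᵀM⁻¹x) shrinks, so the optimistic estimate tightens toward θ̂ᵀx.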
Agrawal and Devanur [2] consider a particular case where the expected rewards and the expected amounts of resource consumption are linear in the context and derive, in particular, Õ(√(d · T)) bounds on regret when the initial endowments of resources scale linearly with the time horizon T. These results do not carry over to our setting because the expected costs, and in fact also the expected rewards, are not linear in the context. To the best of our knowledge, the only prior works that deal simultaneously with knapsack constraints and a non-linear dependence of the rewards and the amounts of resource consumption on the contextual information are Agrawal et al. [3] and Badanidiyuru et al. [11]. When there is a finite number of arms K, they derive regret bounds that scale as Õ(√(K · T · ln(Π))), where Π is the size of the set of benchmark policies. To some extent, at least when θ∗ is known, it is possible to apply these results, but this requires discretizing the set of valid bids [0, 1], and the regret bounds thus derived scale as ∼ T^(2/3), see the analysis in [10], which is suboptimal. On the modeling side, the most closely related prior works studying repeated ad auctions under the lens of online learning are [25], [23], [17], [12], and [5]. Weed et al. [25] develop algorithms to solve the problem considered in this work when no contextual information is available and when there is no budget constraint, in which case the rewards are defined as rt = (vt − pt) · 1{bt ≥ pt}, but in a more general adversarial setting where few assumptions are made concerning the sequence ((vt, pt))t∈N. They obtain Õ(√T) regret bounds with an improved rate O(ln(T)) in some favorable settings of interest. Inspired by [4], Tran-Thanh et al. [23] study a particular case of the problem considered in this work when no contextual information is available and when the goal is to maximize the number of impressions.
They use a dynamic programming approach and claim to derive Õ(√T) regret bounds. Balseiro and Gur [12] identify near-optimal bidding strategies in a game-theoretic setting, assuming that each bidder has a black-box function that maps the contextual information available before bidding to the expected utility derived from displaying an ad (which amounts to assuming that θ∗ is known a priori in our setting). They show that bidding an amount equal to the expected utility derived from displaying an ad, normalized by a bid multiplier to be estimated, is a near-optimal strategy. We extend this observation to the contextual setting. Compared to their work, the difficulty in our setting lies in estimating the bid multiplier and θ∗ simultaneously. Finally, the authors of [5] and [17] take the point of view of the publisher, whose goal is to price ad impressions, as opposed to purchasing them, in order to maximize revenues with no knapsack constraint. Cohen et al. [17] derive O(ln(d² · ln(T/d))) bounds on regret with high probability with a multidimensional binary search. On the technical side, our work builds upon and contributes to the stream of literature on probabilistic bisection search algorithms. This class of algorithms was originally developed for solving stochastic root-finding problems, see [22] for an overview, but has also recently appeared in the MAB literature, see [20]. Our approach is largely inspired by the work of Lei et al. [20], who develop a stochastic binary search algorithm to solve a dynamic pricing problem with limited supply but no contextual information, which can be modeled as a BwK problem with a continuum of arms. Dynamic pricing problems with limited supply are often modeled as BwK problems in the literature, see [24], [9], and [20], but, to the best of our knowledge, the availability of contextual information about potential customers is never captured.
Inspired by the technical developments introduced in these works, our approach is to characterize a near-optimal strategy in closed form and to refine our estimates of the (usually few) initially unknown parameters involved in the characterization as we make decisions online, implementing this strategy using the latest estimates for the parameters. However, the technical challenge in these works differs from ours in one key aspect: the feedback provided to the decision maker is completely censored in dynamic pricing problems, since the customers’ valuations are never revealed, while it is only partially censored in real-time bidding, since the market price is revealed if the auction is won. Making the most of this additional feature enables us to develop a stochastic binary search procedure that can be compounded with the ellipsoidal confidence set approach to linear contextual bandits in order to incorporate contextual information.

Organization. The remainder of the paper is organized as follows. In order to increase the level of difficulty progressively, we start by studying the situation of an advertiser with unlimited budget, i.e. B = ∞, in Section 2. Given that second-price auctions induce truthful bidding when the bidder has no budget constraint, this setting is easier since the optimal bidding strategy is to bid bt = x_t^T θ∗ at any round t ∈ N. This drives us to focus on the problem of estimating θ∗, which we do by means of ellipsoidal confidence sets. Next, in Section 3, we study the setting where B is finite and scales linearly with the time horizon T. We show that a near-optimal strategy is to bid bt = x_t^T θ∗ / λ∗ at any round t ∈ N, where λ∗ ≥ 0 is a scalar factor whose purpose is to spread the budget as evenly as possible, i.e. E[P · 1{X^T θ∗ ≥ λ∗ · P}] = B/T. Given this characterization, we first assume that θ∗ is known a priori to focus instead on the problem of computing an approximate solution λ ≥ 0 to E[P · 1{X^T θ∗ ≥ λ · P}] = B/T in Section 3.1.
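For intuition about the budget-pacing equation E[P · 1{X^T θ∗ ≥ λ · P}] = B/T: the left-hand side (the expected spend per round when bidding X^T θ∗/λ) is nonincreasing in λ, so given samples of (X^T θ∗, P) a plain bisection on an empirical estimate already finds an approximate root. The sketch below is only a simplified offline stand-in, not the stochastic binary search procedure the paper develops; function names and the bracket [0, 50] are our choices:

```python
import numpy as np

def spend_rate(lam, values, prices):
    """Empirical estimate of E[P * 1{X^T theta* >= lam * P}], the expected
    spend per round when bidding b = (X^T theta*) / lam.
    `values` holds draws of X^T theta*, `prices` the matching market prices."""
    wins = values >= lam * prices
    return float(np.mean(prices * wins))

def solve_budget_multiplier(values, prices, rate, lo=0.0, hi=50.0, iters=60):
    """Bisection for lam with spend_rate(lam) close to `rate` (= B / T).

    spend_rate is nonincreasing in lam: raising lam lowers the bids, so
    fewer auctions are won and less is spent."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if spend_rate(mid, values, prices) > rate:
            lo = mid      # spending too much: shade bids more
        else:
            hi = mid      # under budget: bid more aggressively
    return 0.5 * (lo + hi)
```

With finitely many samples the empirical spend rate is a step function of λ, so the bisection converges to a jump point whose spend is within one sample's contribution of the target.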
We develop a stochastic binary search algorithm for this purpose, which is shown to incur Õ(√T) regret under mild assumptions on the underlying distribution ν. In Section 3.2, we bring the stochastic binary search algorithm together with the estimation method based on ellipsoidal confidence sets to tackle the general problem and derive Õ(d · √T) regret bounds. All the proofs are deferred to the Appendix.

Notations. For a vector x ∈ R^d, ∥x∥∞ refers to the L∞-norm of x. For a positive definite matrix M ∈ R^(d×d) and a vector x ∈ R^d, we define the norm ∥x∥_M as ∥x∥_M = √(x^T M x). For x, y ∈ R^d, it is well known that the following Cauchy-Schwarz inequality holds: |x^T y| ≤ ∥x∥_M · ∥y∥_(M⁻¹). We denote by Id the identity matrix in dimension d. We use the standard asymptotic notation O(·) when T, B, and d go to infinity. We also use the notation Õ(·), which hides logarithmic factors in d, T, and B. For x ∈ R, (x)+ refers to the positive part of x. For a finite set S (resp. a compact interval I ⊂ R), |S| (resp. |I|) denotes the cardinality of S (resp. the length of I). For a set S, P(S) denotes the set of all subsets of S. Finally, for a real-valued function f(·), supp f(·) denotes the support of f(·).

2 Unlimited budget

In this section, we suppose that the budget is unlimited, i.e. B = ∞, which implies that the rewards have to be redefined in order to directly incorporate the costs. For this purpose, we assume in this section that vt is expressed in monetary value and we redefine the rewards as rt = (vt − pt) · 1{bt ≥ pt}. Since the budget constraint is irrelevant when B = ∞, we use the notations RT and EROPT(T) in place of RB,T and EROPT(B, T). As standard in the literature on MAB problems, we start by analyzing the optimal oracle strategy that has knowledge of the underlying distribution. This will not only guide the design of algorithms when ν is unknown, but will also facilitate the regret analysis.
The algorithm developed in this section as well as the regret analysis are extensions of the work of Weed et al. [25] to the contextual setting.

Benchmark analysis. It is well known that second-price auctions induce truthful bidding in the sense that any participant whose only objective is to maximize the immediate payoff should always bid what he or she thinks the good being auctioned is worth. The following result should thus come as no surprise in the context of real-time bidding given Assumption 1 and the fact that each participant is provided with the contextual information x_t before the t-th auction takes place.

Lemma 1. The optimal non-anticipating strategy is to bid b_t = x_t^T θ* at any time period t ∈ N and we have EROPT(T) = Σ_{t=1}^T E[(x_t^T θ* − p_t)_+].

Lemma 1 shows that the problem faced by the advertiser essentially boils down to estimating θ*. Since the bidder only gets to observe v_t if the auction is won, this gives advertisers a clear incentive to overbid early on so that they can progressively refine their estimates downward as they collect more data points.

Specification of the algorithm. Following the approach developed in [6] for linear contextual MAB problems, we define, at any round t, the regularized least squares estimate of θ* given all the feedback acquired in the past: θ̂_t = M_t^{-1} Σ_{τ=1}^{t−1} 1_{b_τ ≥ p_τ} · v_τ · x_τ, where M_t = I_d + Σ_{τ=1}^{t−1} 1_{b_τ ≥ p_τ} · x_τ x_τ^T, as well as the corresponding ellipsoidal confidence set: C_t = {θ ∈ R^d | ∥θ − θ̂_t∥_{M_t} ≤ δ_T}, with δ_T = 2√(d · ln((1 + d·T) · T)). For the reasons mentioned above, we take the optimism-in-the-face-of-uncertainty approach and bid:

b_t = max(0, min(1, max_{θ∈C_t} θ^T x_t)) = max(0, min(1, θ̂_t^T x_t + δ_T · √(x_t^T M_t^{-1} x_t)))    (1)

at any round t. Since C_t was designed with the objective of guaranteeing that θ* ∈ C_t with high probability at any round t, irrespective of the number of auctions won in the past, b_t is larger than the optimal bid x_t^T θ* in general, i.e. we tend to overbid.

Regret analysis. Concentration inequalities are intrinsic to any kind of learning and are thus key to deriving regret bounds in online learning. We start with the following lemma, a consequence of the results derived in [1] for linear contextual MABs, which shows that θ* lies in all the ellipsoidal confidence sets with high probability. Assumption 1 is key to establishing this result.

Lemma 2. We have P[θ* ∉ ∩_{t=1}^T C_t] ≤ 1/T.

Equipped with Lemma 2 along with some standard results for linear contextual bandits, we are now ready to extend the analysis of Weed et al. [25] to the contextual setting.

Theorem 1. Bidding according to (1) incurs a regret R_T = Õ(d · √T).

Alternative algorithm with lazy updates. As first pointed out by Abbasi-Yadkori et al. [1] in the context of linear bandits, updating the confidence set C_t at every round is not only inefficient but also unnecessary from a performance standpoint. Instead, we can perform batch updates, only updating C_t using all the feedback collected in the past at rounds t for which det(M_t) has increased by a factor of at least (1 + A) compared to the last time there was an update, for some constant A > 0 of our choosing. This leads to an interesting trade-off between computational efficiency and deterioration of the regret bound, captured in our next result. For mathematical convenience, we keep the same notations as when we were updating the confidence sets at every round.
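The estimate θ̂_t and the optimistic bid of Eq. (1) translate directly into code. The sketch below is an illustrative implementation under the paper's notation (the function names are ours); the recursion starts from M_0 = I_d and s_0 = 0, and only won rounds (b_t ≥ p_t) contribute feedback:

```python
import numpy as np

def optimistic_bid(x, theta_hat, M, delta):
    """Closed form of the bid in Eq. (1): the upper confidence bound on
    x^T theta*, clipped to [0, 1]."""
    ucb = float(theta_hat @ x) + delta * float(np.sqrt(x @ np.linalg.solve(M, x)))
    return max(0.0, min(1.0, ucb))

def update_estimate(M, s, x, v, won):
    """After round t: a won auction contributes a rank-one update to M_t
    and to the running sum s_t; the estimate is theta_hat_t = M_t^{-1} s_t."""
    if won:
        M = M + np.outer(x, x)
        s = s + v * x
    return M, s, np.linalg.solve(M, s)
```

Starting from `M = np.eye(d)` and `s = np.zeros(d)`, calling `update_estimate` once per round and `optimistic_bid` before each auction reproduces the bidding loop described above.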
The only difference lies in the fact that the bid submitted at time t is now defined as:

b_t = max(0, min(1, max_{θ∈C_{τ_t}} θ^T x_t)),    (2)

where τ_t is the last round before round t at which a batch update happened.

Theorem 2. Bidding according to (2) at any round t incurs a regret R_T = Õ(d · √(A · T)).

The fact that we can afford lazy updates will turn out to be important to tackle the general case in Section 3.2 since we will only be able to update the confidence sets at most O(ln(T)) times overall.

3 Limited budget

In this section, we consider the setting where B is finite and scales linearly with the time horizon T. We will need the following assumptions for the remainder of the paper.

Assumption 2. (a) B/T = β is a constant independent of any other relevant quantities. (b) There exists r > 0, known to the advertiser, such that p_t ≥ r for all t ∈ N. (c) We have E[1/X^T θ*] < ∞. (d) The random variable P has a continuous conditional probability density function given X = x, denoted by f_x(·), that is upper bounded by L̄ < ∞.

Conditions (a) and (b) are very natural in real-time bidding, where the budget scales linearly with time and where r corresponds to the minimum reserve price across ad auctions. Observe that Condition (c) is satisfied, for example, when the probability of a click given any context is no smaller than a (possibly unknown) positive threshold. Condition (d) is motivated by technical considerations that will become clear in the analysis. Note that L̄ is not assumed to be known to the advertiser. In order to increase the level of difficulty progressively and to prepare for the integration of the ellipsoidal confidence sets, we first look at an artificial setting in Section 3.1 where we assume that there exists a known set C ⊂ R^d such that E[V | X] = min(1, max_{θ∈C} X^T θ) (as opposed to E[V | X] = X^T θ*) and such that θ* ∈ C.
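The lazy-update rule can be sketched as a small helper; `LazyUpdater` is a hypothetical class name of ours, and the determinant is tracked in log-space for numerical stability:

```python
import numpy as np

class LazyUpdater:
    """Fire a batch update of the confidence set only when det(M_t) has
    grown by a factor of at least (1 + A) since the last update."""
    def __init__(self, d, A):
        self.log_growth = np.log(1.0 + A)
        self.M = np.eye(d)
        self.logdet_last = 0.0        # log det(I_d) = 0

    def observe(self, x):
        """Add the rank-one term x x^T; return True if an update fires."""
        self.M = self.M + np.outer(x, x)
        _, logdet = np.linalg.slogdet(self.M)
        if logdet >= self.logdet_last + self.log_growth:
            self.logdet_last = logdet
            return True
        return False
```

Larger A means fewer refreshes (cheaper computation) but a looser bound, which is exactly the trade-off stated in Theorem 2.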
This is to sidestep the estimation problem in a first step in order to focus on determining an optimal bidding strategy given θ*. Next, in Section 3.2, we bring together the methods developed in Section 2 and Section 3.1 to tackle the general setting.

3.1 Preliminary work

In this section, we make the following modeling assumption in lieu of E[V | X] = X^T θ*.

Assumption 3. There exists C ⊂ R^d such that E[V | X] = min(1, max_{θ∈C} X^T θ) and θ* ∈ C. Furthermore, we assume that C is known to the advertiser initially.

Of course, we recover the original setting introduced in Section 1 when C = {θ*} (since V ∈ [0, 1] implies E[V | X] ∈ [0, 1]) and θ* is known, but the level of generality considered here will prove useful to tackle the general case in Section 3.2, when we define C as an ellipsoidal confidence set on θ*. As in Section 2, we start by identifying a near-optimal oracle bidding strategy that has knowledge of the underlying distribution. This will not only guide the design of algorithms when ν is unknown but will also facilitate the regret analysis. We use the shorthand g(X) = min(1, max_{θ∈C} X^T θ) throughout this section.

Benchmark analysis. To bound the performance of any non-anticipating strategy, we will be interested in the mappings φ : (λ, C) → E[P · 1_{g(X) ≥ λ·P}] and R : (λ, C) → E[g(X) · 1_{g(X) ≥ λ·P}] for (λ, C) ∈ [0, 2/r] × P(R^d). Note that φ(·, C) is non-increasing and that, without loss of generality, we can restrict λ to be no larger than 2/r because φ(λ, C) = φ(2/r, C) = 0 for λ ≥ 2/r since P ≥ r. Exploiting the structure of the MAB problem at hand, we can bound the sum of rewards obtained by any non-anticipating strategy by the value of a knapsack problem where the weights and the values of the items are drawn in an i.i.d. fashion from a fixed distribution. Since characterizing the expected optimal value of a knapsack problem is a well-studied problem, see [21], we can derive a simple upper bound on EROPT(B, T) through this reduction, as we next show.

Lemma 3.
We have EROPT(B, T) ≤ T · R(λ*, C) + √T/r + 1, where λ* ≥ 0 satisfies φ(λ*, C) = β, or λ* = 0 if no such solution exists (i.e. if E[P] < β), in which case φ(λ*, C) ≤ β.

Lemma 3 suggests that, given C, a good strategy is to bid b_t = min(1, min(1, max_{θ∈C} x_t^T θ)/λ*) at any round t. The following result shows that we can actually afford to settle for an approximate solution λ ≥ 0 to φ(λ, C) = β.

Lemma 4. For any λ_1, λ_2 ≥ 0, we have: |R(λ_1, C) − R(λ_2, C)| ≤ 1/r · |φ(λ_1, C) − φ(λ_2, C)|.

Lemma 3 combined with Lemma 4 suggests that the problem of computing a near-optimal bidding strategy essentially reduces to a stochastic root-finding problem for the function |φ(·, C) − β|. As it turns out, the fact that the feedback is only partially censored makes a stochastic bisection search possible with minimal assumptions on φ(·, C). Specifically, we only need φ(·, C) to be Lipschitz, while the technique developed in [20] for a dynamic pricing problem requires φ(·, C) to be bi-Lipschitz. This is a significant improvement because this last condition is not necessarily satisfied uniformly for all confidence sets C, which will be important when we use a varying ellipsoidal confidence set instead of C = {θ*} in Section 3.2. Note, however, that Assumption 2 guarantees that φ(·, C) is always Lipschitz, as we next show.

Lemma 5. φ(·, C) is L̄ · E[1/X^T θ*]-Lipschitz.

We stress that Conditions (c) and (d) of Assumption 2 are crucial to establish Lemma 5 but are not relied upon anywhere else in this paper.

Specification of the algorithm. At any round t ∈ N, we bid:

b_t = min(1, min(1, max_{θ∈C} x_t^T θ)/λ_t),    (3)

where λ_t ≥ 0 is the current proxy for λ*. We perform a binary search on λ* by repeatedly using the same value of λ_t for consecutive rounds forming phases, indexed by k ∈ N, and by keeping track of an interval, denoted by I_k = [λ_k, λ̄_k]. We start with phase k = 0 and we initially set λ_0 = 0 and λ̄_0 = 2/r.
The length of the interval is shrunk by half at the end of every phase so that |I_k| = (2/r)/2^k for any k. Phase k lasts for N_k = 3 · 4^k · ln²(T) rounds, during which we set the value of λ_t to λ_k. Since λ_k will be no larger than λ* with high probability, this means that we tend to overbid. Note that there are at most k̄_T = inf{n ∈ N | Σ_{k=0}^n N_k ≥ T} phases overall. The key observation enabling a bisection search approach is that, since the feedback is only partially censored, we can build, at the end of any phase k, an empirical estimate of φ(λ, C), which we denote by φ̂_k(λ, C), for any λ ≥ λ_k, using all of the N_k samples obtained during phase k. The decision rule used to update I_k at the end of phase k is specified next.

Algorithm 1: Interval updating procedure at the end of phase k
Data: λ̄_k, λ_k, Δ_k = 3√(2 ln(2T)/N_k), and φ̂_k(λ, C) for any λ ≥ λ_k
Result: λ̄_{k+1} and λ_{k+1}
γ̄_k = λ̄_k, γ_k = λ_k;
while φ̂_k(γ̄_k, C) > β + Δ_k do
    γ̄_k = γ̄_k + |I_k|, γ_k = γ_k + |I_k|;
end
if φ̂_k(½γ̄_k + ½γ_k, C) ≤ β + Δ_k then
    λ̄_{k+1} = ½γ̄_k + ½γ_k, λ_{k+1} = γ_k;
else
    λ̄_{k+1} = γ̄_k, λ_{k+1} = ½γ̄_k + ½γ_k;
end

The splitting decision is trivial when |φ̂_k(½γ̄_k + ½γ_k, C) − β| > Δ_k because we get a clear signal that dominates the stochastic noise, telling us to either increase or decrease the current proxy for λ*. The tricky situation is when |φ̂_k(½γ̄_k + ½γ_k, C) − β| ≤ Δ_k, in which case the level of noise is too high to draw any conclusion. In this situation, we always favor a smaller value for λ_k, even if that means shifting the interval upwards later on if we realize that we have made a mistake (which is the purpose of the while loop). This is because we can always recover from underestimating λ* since the feedback is only partially censored. Finally, note that the while loop of Algorithm 1 always ends after a finite number of iterations since φ̂_k(2/r, C) = 0 ≤ β + Δ_k.
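The interval-updating procedure of Algorithm 1 can be written compactly as follows; `phi_hat` stands for the empirical estimate φ̂_k(·, C) built from the phase's samples (the function and variable names are ours, and the true φ is assumed non-increasing as stated above):

```python
def update_interval(lam_lo, lam_hi, phi_hat, beta, delta_k):
    """One end-of-phase pass of Algorithm 1 on I_k = [lam_lo, lam_hi].
    phi_hat(lam) estimates phi(lam, C) for lam >= lam_lo; the returned
    interval has half the width of the input interval."""
    width = lam_hi - lam_lo
    # Shift the whole interval upward while the upper end still over-spends.
    while phi_hat(lam_hi) > beta + delta_k:
        lam_lo += width
        lam_hi += width
    mid = 0.5 * (lam_lo + lam_hi)
    if phi_hat(mid) <= beta + delta_k:
        return lam_lo, mid   # keep the lower half (favor a smaller lambda)
    return mid, lam_hi       # keep the upper half
```

For example, with a noise-free φ(λ) = max(0, 1 − λ) and β = 0.4 (so λ* = 0.6), repeated calls shrink the interval around 0.6 while always keeping the lower end below λ*, matching the "recover from underestimating" logic described above.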
Regret analysis. Just like in Section 2, using concentration inequalities is essential to establish regret bounds, but this time we need uniform concentration inequalities. We use the Rademacher complexity approach to concentration inequalities (see, for example, [13] and [15]) to control the deviations of φ̂_k(·, C) uniformly.

Lemma 6. We have P[sup_{λ∈[λ_k, 2/r]} |φ̂_k(λ, C) − φ(λ, C)| ≤ Δ_k] ≥ 1 − 1/T, for any k.

Next, we bound the number of phases as a function of the time horizon.

Lemma 7. For T ≥ 3, we have k̄_T ≤ ln(T + 1) and 4^{k̄_T} ≤ T/ln²(T) + 1.

Using Lemma 6, we next show that the stochastic bisection search procedure correctly identifies λ ≥ 0 such that |φ(λ, C) − φ(λ*, C)| is small with high probability, which is all we really need to lower bound the rewards accumulated across all rounds given Lemma 4.

Lemma 8. Let c = L̄ · E[1/X^T θ*] (the Lipschitz constant of Lemma 5). Provided that T ≥ exp(8r²/c²), we have:

P[∩_{k=0}^{k̄_T} {|φ̂_k(λ_k, C) − φ(λ*, C)| ≤ 4c · |I_k|, |φ(λ_k, C) − φ(λ*, C)| ≤ 3c · |I_k|}] ≥ 1 − 2 ln²(T)/T.

In a last step, we show, using the above result and at the cost of an additive logarithmic term in the regret bound, that we may assume that the advertiser participates in exactly T auctions. This enables us to combine Lemma 4, Lemma 7, and Lemma 8 to establish a distribution-free regret bound.

Theorem 3. Bidding according to (3) incurs a regret R_{B,T} = Õ((L̄ · E[1/X^T θ*]/r²) · √T · ln(T)).

Observe that Theorem 3 applies in particular when θ* is known to the advertiser initially, and that the regret bound derived does not depend on d.

3.2 General case

In this section, we combine the methods developed in Sections 2 and 3.1 to tackle the general case.

Specification of the algorithm. At any round t ∈ N, we bid:

b_t = min(1, min(1, max_{θ∈C_{τ_t}} x_t^T θ)/λ_t),    (4)

where τ_t is defined in the last paragraph of Section 2 and λ_t ≥ 0 is specified below. We use the bisection search method developed in Section 3.1 as a subroutine in a master algorithm that also runs in phases.
Master phases are indexed by q = 0, ..., Q, and a new master phase starts whenever det(M_t) has increased by a factor of at least (1 + A) compared to the last time there was an update, for some A > 0 of our choosing. By construction, the ellipsoidal confidence set used during the q-th master phase is fixed, so we can denote it by C_q. During the q-th master phase, we run the bisection search method described in Section 3.1 from scratch for the choice C = C_q in order to identify a solution λ_{q,*} ≥ 0 to φ(λ_{q,*}, C_q) = β (or λ_{q,*} = 0 if no solution exists). Thus, λ_t is a proxy for λ_{q,*} during the q-th master phase. This bisection search lasts for k̄_q phases and stops as soon as we move on to a new master phase. Hence, there are at most k̄_q ≤ k̄_T = inf{n ∈ N | Σ_{k=0}^n N_k ≥ T} phases during the q-th master phase. We denote by λ_{q,k} the lower end of the interval used at the k-th phase of the bisection search run during the q-th master phase.

Regret analysis. First we show that there can be at most O(d · ln(T · d)) master phases overall.

Lemma 9. We have Q ≤ Q̄ = d · ln(T · d)/ln(1 + A) almost surely.

Lemma 9 is important because it implies that the bisection searches run long enough to identify sufficiently good approximate values for λ_{q,*}. Note that our approach is "doubly" optimistic since both λ_{q,k} ≤ λ_{q,*} and θ* ∈ C_q hold with high probability at any point in time. At a high level, the regret analysis goes as follows. First, just like in Section 3.1, we show, using Lemma 8 and at the cost of an additive logarithmic term in the final regret bound, that we may assume that the advertiser participates in exactly T auctions. Second, we show, using the analysis of Theorem 2, that we may assume that the expected per-round reward obtained during phase q is E[min(1, max_{θ∈C_q} x_t^T θ)] (as opposed to x_t^T θ*) at any round t, up to an additive term of order Õ(d · √T) in the final regret bound.
Third, we note that Theorem 3 essentially shows that the expected per-round reward obtained during phase q is R(λ_{q,*}, C_q), up to an additive term of order Õ(√T) in the final regret bound. Finally, what remains to be done is to compare R(λ_{q,*}, C_q) with R(λ*, {θ*}), which is done using Lemmas 2 and 3.

Theorem 4. Bidding according to (4) incurs a regret R_{B,T} = Õ(d · (L̄ · E[1/X^T θ*]/r²) · f(A) · √T), where f(A) = 1/ln(1 + A) + √(1 + A).

4 Concluding remark

An interesting direction for future research is to characterize achievable regret bounds, in particular through the derivation of lower bounds on regret. When there is no budget limit and no contextual information, Weed et al. [25] provide a thorough characterization with rates ranging from Θ(ln(T)) to Θ(√T), depending on whether a margin condition on the underlying distribution is satisfied. These lower bounds carry over to our more general setting and, as a result, the dependence of our regret bounds on T cannot be improved in general. It is, however, unclear whether the dependence on d is optimal. Based on the lower bounds established by Dani et al. [18] for linear stochastic bandits, a model which is arguably closer to our setting than that of Chu et al. [16] because of the need to estimate the bid multiplier λ*, we conjecture that a linear dependence on d is optimal, but this calls for more work. Given that the contextual information available in practice is often high-dimensional, developing algorithms that exploit the sparsity of the data in a similar fashion as done in [14] for linear contextual MAB problems is also a promising research direction. In this paper, observing that general BwK problems with contextual information are notoriously hard to solve, we exploit the structure of real-time bidding problems to develop a special-purpose algorithm (a stochastic binary search combined with an ellipsoidal confidence set) to obtain optimal regret bounds.
We believe that the ideas behind this special-purpose algorithm could be adapted to other important applications such as contextual dynamic pricing with limited supply.

Acknowledgments

Research funded in part by the Office of Naval Research (ONR) grant N00014-15-1-2083.

References

[1] Abbasi-Yadkori, Y., Pál, D., and Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. In Adv. Neural Inform. Processing Systems, pages 2312–2320.
[2] Agrawal, S. and Devanur, N. (2016). Linear contextual bandits with knapsacks. In Adv. Neural Inform. Processing Systems, pages 3450–3458.
[3] Agrawal, S., Devanur, N. R., and Li, L. (2016). An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives. In Proc. 29th Annual Conf. Learning Theory, pages 4–18.
[4] Amin, K., Kearns, M., Key, P., and Schwaighofer, A. (2012). Budget optimization for sponsored search: Censored learning in MDPs. In Proc. 28th Conf. Uncertainty in Artificial Intelligence, pages 54–63.
[5] Amin, K., Rostamizadeh, A., and Syed, U. (2014). Repeated contextual auctions with strategic buyers. In Adv. Neural Inform. Processing Systems, pages 622–630.
[6] Auer, P. (2002). Using confidence bounds for exploitation-exploration trade-offs. J. Machine Learning Res., 3(Nov):397–422.
[7] Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002a). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256.
[8] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002b). The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77.
[9] Babaioff, M., Dughmi, S., Kleinberg, R., and Slivkins, A. (2012). Dynamic pricing with limited supply. In Proc. 13th ACM Conf. Electronic Commerce, pages 74–91.
[10] Badanidiyuru, A., Kleinberg, R., and Slivkins, A. (2013). Bandits with knapsacks. In Proc. 54th IEEE Annual Symp. Foundations of Comput. Sci., pages 207–216.
[11] Badanidiyuru, A., Langford, J., and Slivkins, A. (2014).
Resourceful contextual bandits. In Proc. 27th Annual Conf. Learning Theory, volume 35, pages 1109–1134.
[12] Balseiro, S. and Gur, Y. (2017). Learning in repeated auctions with budgets: Regret minimization and equilibrium. In Proc. 18th ACM Conf. Economics and Comput., pages 609–609.
[13] Bartlett, P. and Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. J. Machine Learning Res., 3(Nov):463–482.
[14] Bastani, H. and Bayati, M. (2015). Online decision-making with high-dimensional covariates. Working Paper.
[15] Boucheron, S., Bousquet, O., and Lugosi, G. (2005). Theory of classification: A survey of some recent advances. ESAIM: Probability and Statist., 9:323–375.
[16] Chu, W., Li, L., Reyzin, L., and Schapire, R. (2011). Contextual bandits with linear payoff functions. In J. Machine Learning Res. - Proc., volume 15, pages 208–214.
[17] Cohen, M., Lobel, I., and Leme, R. P. (2016). Feature-based dynamic pricing. In Proc. 17th ACM Conf. Economics and Comput., pages 817–817.
[18] Dani, V., Hayes, T., and Kakade, S. (2008). Stochastic linear optimization under bandit feedback. In Proc. 21st Annual Conf. Learning Theory, pages 355–366.
[19] Ghosh, A., Rubinstein, B. I. P., Vassilvitskii, S., and Zinkevich, M. (2009). Adaptive bidding for display advertising. In Proc. 18th Int. Conf. World Wide Web, pages 251–260.
[20] Lei, Y., Jasin, S., and Sinha, A. (2015). Near-optimal bisection search for nonparametric dynamic pricing with inventory constraint. Working Paper.
[21] Lueker, G. (1998). Average-case analysis of off-line and on-line knapsack problems. Journal of Algorithms, 29(2):277–305.
[22] Pasupathy, R. and Kim, S. (2011). The stochastic root-finding problem: Overview, solutions, and open questions. ACM Trans. Modeling and Comput. Simulation, 21(3):19.
[23] Tran-Thanh, L., Stavrogiannis, C., Naroditskiy, V., Robu, V., Jennings, N. R., and Key, P. (2014).
Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions. In Proc. 30th Conf. Uncertainty in Artificial Intelligence, pages 809–818.
[24] Wang, Z., Deng, S., and Ye, Y. (2014). Close the gaps: A learning-while-doing algorithm for single-product revenue management problems. Operations Research, 62(2):318–331.
[25] Weed, J., Perchet, V., and Rigollet, P. (2016). Online learning in repeated auctions. In Proc. 29th Annual Conf. Learning Theory, volume 49, pages 1562–1583.
A Unified Approach to Interpreting Model Predictions

Scott M. Lundberg
Paul G. Allen School of Computer Science
University of Washington
Seattle, WA 98105
slund1@cs.washington.edu

Su-In Lee
Paul G. Allen School of Computer Science
Department of Genome Sciences
University of Washington
Seattle, WA 98105
suinlee@cs.washington.edu

Abstract

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable to another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.

1 Introduction

The ability to correctly interpret a prediction model's output is extremely important. It engenders appropriate user trust, provides insight into how a model may be improved, and supports understanding of the process being modeled.
In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones. However, the growing availability of big data has increased the benefits of using complex models, bringing the trade-off between the accuracy and interpretability of a model's output to the forefront. A wide variety of different methods have been recently proposed to address this issue [5, 8, 9, 3, 4, 1], but an understanding of how these methods relate and when one method is preferable to another is still lacking. Here, we present a novel unified approach to interpreting model predictions.1 Our approach leads to three potentially surprising results that bring clarity to the growing space of methods:

1. We introduce the perspective of viewing any explanation of a model's prediction as a model itself, which we term the explanation model. This lets us define the class of additive feature attribution methods (Section 2), which unifies six current methods.

1 https://github.com/slundberg/shap

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2. We then show that game theory results guaranteeing a unique solution apply to the entire class of additive feature attribution methods (Section 3) and propose SHAP values as a unified measure of feature importance that various methods approximate (Section 4).

3. We propose new SHAP value estimation methods and demonstrate that they are better aligned with human intuition, as measured by user studies, and more effectually discriminate among model output classes than several existing methods (Section 5).

2 Additive Feature Attribution Methods

The best explanation of a simple model is the model itself; it perfectly represents itself and is easy to understand. For complex models, such as ensemble methods or deep networks, we cannot use the original model as its own best explanation because it is not easy to understand.
Instead, we must use a simpler explanation model, which we define as any interpretable approximation of the original model. We show below that six current explanation methods from the literature all use the same explanation model. This previously unappreciated unity has interesting implications, which we describe in later sections. Let f be the original prediction model to be explained and g the explanation model. Here, we focus on local methods designed to explain a prediction f(x) based on a single input x, as proposed in LIME [5]. Explanation models often use simplified inputs x′ that map to the original inputs through a mapping function x = h_x(x′). Local methods try to ensure g(z′) ≈ f(h_x(z′)) whenever z′ ≈ x′. (Note that h_x(x′) = x even though x′ may contain less information than x because h_x is specific to the current input x.)

Definition 1. Additive feature attribution methods have an explanation model that is a linear function of binary variables:

g(z′) = φ_0 + Σ_{i=1}^M φ_i z′_i,    (1)

where z′ ∈ {0, 1}^M, M is the number of simplified input features, and φ_i ∈ R.

Methods with explanation models matching Definition 1 attribute an effect φ_i to each feature, and summing the effects of all feature attributions approximates the output f(x) of the original model. Many current methods match Definition 1, several of which are discussed below.

2.1 LIME

The LIME method interprets individual model predictions based on locally approximating the model around a given prediction [5]. The local linear explanation model that LIME uses adheres to Equation 1 exactly and is thus an additive feature attribution method. LIME refers to simplified inputs x′ as "interpretable inputs," and the mapping x = h_x(x′) converts a binary vector of interpretable inputs into the original input space. Different types of h_x mappings are used for different input spaces.
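Definition 1 and the role of the mapping h_x can be made concrete with a small sketch; the bag-of-words mapping below is one of the LIME-style choices discussed in this section, and the function names are ours:

```python
import numpy as np

def g(z_prime, phi0, phi):
    """Explanation model of Definition 1: g(z') = phi_0 + sum_i phi_i * z'_i."""
    return phi0 + float(np.dot(phi, z_prime))

def h_x_bow(z_prime, x):
    """A bag-of-words style simplified-input mapping h_x: a 1 restores the
    original word count of feature i, a 0 sets it to zero ('missing')."""
    return np.asarray(x) * np.asarray(z_prime)
```

For example, with x = (5, 7), the simplified input z′ = (1, 0) maps back to (5, 0), and the explanation model is simply the sum of the attributions of the present features plus the base value φ_0.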
For bag-of-words text features, h_x converts a vector of 1's or 0's (present or not) into the original word count if the simplified input is one, or zero if the simplified input is zero. For images, h_x treats the image as a set of super pixels; it then maps 1 to leaving the super pixel at its original value and 0 to replacing the super pixel with an average of neighboring pixels (this is meant to represent being missing). To find φ, LIME minimizes the following objective function:

ξ = argmin_{g∈G} L(f, g, π_{x′}) + Ω(g).    (2)

Faithfulness of the explanation model g(z′) to the original model f(h_x(z′)) is enforced through the loss L over a set of samples in the simplified input space, weighted by the local kernel π_{x′}. Ω penalizes the complexity of g. Since in LIME g follows Equation 1 and L is a squared loss, Equation 2 can be solved using penalized linear regression.

2.2 DeepLIFT

DeepLIFT was recently proposed as a recursive prediction explanation method for deep learning [8, 7]. It attributes to each input x_i a value C_{Δx_iΔy} that represents the effect of that input being set to a reference value as opposed to its original value. This means that for DeepLIFT, the mapping x = h_x(x′) converts binary values into the original inputs, where 1 indicates that an input takes its original value, and 0 indicates that it takes the reference value. The reference value, though chosen by the user, represents a typical uninformative background value for the feature. DeepLIFT uses a "summation-to-delta" property that states:

Σ_{i=1}^n C_{Δx_iΔo} = Δo,    (3)

where o = f(x) is the model output, Δo = f(x) − f(r), Δx_i = x_i − r_i, and r is the reference input. If we let φ_i = C_{Δx_iΔo} and φ_0 = f(r), then DeepLIFT's explanation model matches Equation 1 and is thus another additive feature attribution method.

2.3 Layer-Wise Relevance Propagation

The layer-wise relevance propagation method interprets the predictions of deep networks [1].
As noted by Shrikumar et al., this method is equivalent to DeepLIFT with the reference activations of all neurons fixed to zero. Thus, x = h_x(x′) converts binary values into the original input space, where 1 means that an input takes its original value, and 0 means an input takes the 0 value. Layer-wise relevance propagation's explanation model, like DeepLIFT's, matches Equation 1.

2.4 Classic Shapley Value Estimation

Three previous methods use classic equations from cooperative game theory to compute explanations of model predictions: Shapley regression values [4], Shapley sampling values [9], and Quantitative Input Influence [3]. Shapley regression values are feature importances for linear models in the presence of multicollinearity. This method requires retraining the model on all feature subsets S ⊆ F, where F is the set of all features. It assigns an importance value to each feature that represents the effect on the model prediction of including that feature. To compute this effect, a model f_{S∪{i}} is trained with that feature present, and another model f_S is trained with the feature withheld. Then, predictions from the two models are compared on the current input: f_{S∪{i}}(x_{S∪{i}}) − f_S(x_S), where x_S represents the values of the input features in the set S. Since the effect of withholding a feature depends on other features in the model, the preceding differences are computed for all possible subsets S ⊆ F \ {i}. The Shapley values are then computed and used as feature attributions. They are a weighted average of all possible differences:

φ_i = Σ_{S⊆F\{i}} [|S|!(|F| − |S| − 1)!/|F|!] · [f_{S∪{i}}(x_{S∪{i}}) − f_S(x_S)].    (4)

For Shapley regression values, h_x maps 1 or 0 to the original input space, where 1 indicates the input is included in the model, and 0 indicates exclusion from the model. If we let φ_0 = f_∅(∅), then the Shapley regression values match Equation 1 and are hence an additive feature attribution method.
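Equation 4 can be evaluated exactly by enumerating all subsets. A sketch, where the caller supplies `f` as a map from a subset of feature indices to the (retrained) model's prediction f_S(x_S); the cost is exponential in the number of features, which is exactly why the sampling approximations discussed next exist:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley values via Eq. (4). f maps a frozenset of feature
    indices S to the model value f_S(x_S). Exponential in n."""
    phi = []
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (f(S | {i}) - f(S))
        phi.append(total)
    return phi
```

For an additive model, each φ_i recovers that feature's own contribution, and the φ_i always sum to f_F(x_F) − f_∅(∅), which is Property 1 (local accuracy) discussed in the next section.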
Shapley sampling values are meant to explain any model by: (1) applying sampling approximations to Equation 4, and (2) approximating the effect of removing a variable from the model by integrating over samples from the training dataset. This eliminates the need to retrain the model and allows fewer than 2^{|F|} differences to be computed. Since the explanation model form of Shapley sampling values is the same as that for Shapley regression values, it is also an additive feature attribution method. Quantitative input influence is a broader framework that addresses more than feature attributions. However, as part of its method it independently proposes a sampling approximation to Shapley values that is nearly identical to Shapley sampling values. It is thus another additive feature attribution method.

3 Simple Properties Uniquely Determine Additive Feature Attributions

A surprising attribute of the class of additive feature attribution methods is the presence of a single unique solution in this class with three desirable properties (described below). While these properties are familiar to the classical Shapley value estimation methods, they were previously unknown for other additive feature attribution methods. The first desirable property is local accuracy. When approximating the original model f for a specific input x, local accuracy requires the explanation model to at least match the output of f for the simplified input x′ (which corresponds to the original input x).

Property 1 (Local accuracy)

f(x) = g(x′) = φ_0 + Σ_{i=1}^M φ_i x′_i    (5)

The explanation model g(x′) matches the original model f(x) when x = h_x(x′), where φ_0 = f(h_x(0)) represents the model output with all simplified inputs toggled off (i.e. missing).

The second property is missingness. If the simplified inputs represent feature presence, then missingness requires features missing in the original input to have no impact. All of the methods described in Section 2 obey the missingness property.
Property 2 (Missingness)

x'_i = 0 \implies \phi_i = 0 \quad (6)

Missingness constrains features where x'_i = 0 to have no attributed impact.

The third property is consistency. Consistency states that if a model changes so that some simplified input's contribution increases or stays the same regardless of the other inputs, that input's attribution should not decrease.

Property 3 (Consistency) Let f_x(z′) = f(h_x(z′)) and z′ \ i denote setting z'_i = 0. For any two models f and f′, if

f'_x(z') - f'_x(z' \setminus i) \ge f_x(z') - f_x(z' \setminus i) \quad (7)

for all inputs z′ ∈ {0, 1}^M, then φ_i(f′, x) ≥ φ_i(f, x).

Theorem 1 Only one possible explanation model g follows Definition 1 and satisfies Properties 1, 2, and 3:

\phi_i(f, x) = \sum_{z' \subseteq x'} \frac{|z'|!\,(M - |z'| - 1)!}{M!} \left[ f_x(z') - f_x(z' \setminus i) \right] \quad (8)

where |z′| is the number of non-zero entries in z′, and z′ ⊆ x′ represents all z′ vectors whose non-zero entries are a subset of the non-zero entries in x′.

Theorem 1 follows from combined cooperative game theory results, where the values φ_i are known as Shapley values [6]. Young (1985) demonstrated that Shapley values are the only set of values that satisfy three axioms similar to Property 1, Property 3, and a final property that we show to be redundant in this setting (see Supplementary Material). Property 2 is required to adapt the Shapley proofs to the class of additive feature attribution methods.

Under Properties 1-3, for a given simplified input mapping h_x, Theorem 1 shows that there is only one possible additive feature attribution method. This result implies that methods not based on Shapley values violate local accuracy and/or consistency (methods in Section 2 already respect missingness). The following section proposes a unified approach that improves previous methods, preventing them from unintentionally violating Properties 1 and 3.

4 SHAP (SHapley Additive exPlanation) Values

We propose SHAP values as a unified measure of feature importance.
These are the Shapley values of a conditional expectation function of the original model; thus, they are the solution to Equation 8, where f_x(z′) = f(h_x(z′)) = E[f(z) | z_S], and S is the set of non-zero indexes in z′ (Figure 1).

Figure 1: SHAP (SHapley Additive exPlanation) values attribute to each feature the change in the expected model prediction when conditioning on that feature. They explain how to get from the base value E[f(z)] that would be predicted if we did not know any features to the current output f(x). This diagram shows a single ordering. When the model is non-linear or the input features are not independent, however, the order in which features are added to the expectation matters, and the SHAP values arise from averaging the φ_i values across all possible orderings.

Based on Sections 2 and 3, SHAP values provide the unique additive feature importance measure that adheres to Properties 1-3 and uses conditional expectations to define simplified inputs. Implicit in this definition of SHAP values is a simplified input mapping, h_x(z′) = z_S, where z_S has missing values for features not in the set S. Since most models cannot handle arbitrary patterns of missing input values, we approximate f(z_S) with E[f(z) | z_S]. This definition of SHAP values is designed to closely align with the Shapley regression, Shapley sampling, and quantitative input influence feature attributions, while also allowing for connections with LIME, DeepLIFT, and layer-wise relevance propagation.

The exact computation of SHAP values is challenging. However, by combining insights from current additive feature attribution methods, we can approximate them. We describe two model-agnostic approximation methods, one that is already known (Shapley sampling values) and another that is novel (Kernel SHAP). We also describe four model-type-specific approximation methods, two of which are novel (Max SHAP, Deep SHAP).
When using these methods, feature independence and model linearity are two optional assumptions that simplify the computation of the expected values (note that \bar{S} is the set of features not in S):

f(h_x(z')) = E[f(z) \mid z_S] \quad \text{SHAP explanation model simplified input mapping} \quad (9)
= E_{z_{\bar S} \mid z_S}[f(z)] \quad \text{expectation over } z_{\bar S} \mid z_S \quad (10)
\approx E_{z_{\bar S}}[f(z)] \quad \text{assume feature independence (as in [9, 5, 7, 3])} \quad (11)
\approx f([z_S, E[z_{\bar S}]]). \quad \text{assume model linearity} \quad (12)

4.1 Model-Agnostic Approximations

If we assume feature independence when approximating conditional expectations (Equation 11), as in [9, 5, 7, 3], then SHAP values can be estimated directly using the Shapley sampling values method [9] or, equivalently, the Quantitative Input Influence method [3]. These methods use a sampling approximation of a permutation version of the classic Shapley value equations (Equation 8). Separate sampling estimates are performed for each feature attribution. While reasonable to compute for a small number of inputs, the Kernel SHAP method described next requires fewer evaluations of the original model to obtain similar approximation accuracy (Section 5).

Kernel SHAP (Linear LIME + Shapley values)

Linear LIME uses a linear explanation model to locally approximate f, where locality is measured in the simplified binary input space. At first glance, the regression formulation of LIME in Equation 2 seems very different from the classical Shapley value formulation of Equation 8. However, since linear LIME is an additive feature attribution method, we know the Shapley values are the only possible solution to Equation 2 that satisfies Properties 1-3 – local accuracy, missingness and consistency. A natural question to pose is whether the solution to Equation 2 recovers these values. The answer depends on the choice of loss function L, weighting kernel π_{x′} and regularization term Ω. The LIME choices for these parameters are made heuristically; using these choices, Equation 2 does not recover the Shapley values.
One consequence is that local accuracy and/or consistency are violated, which in turn leads to unintuitive behavior in certain circumstances (see Section 5). Below we show how to avoid heuristically choosing the parameters in Equation 2 and how to find the loss function L, weighting kernel π_{x′}, and regularization term Ω that recover the Shapley values.

Theorem 2 (Shapley kernel) Under Definition 1, the specific forms of π_{x′}, L, and Ω that make solutions of Equation 2 consistent with Properties 1 through 3 are:

\Omega(g) = 0,

\pi_{x'}(z') = \frac{M - 1}{\binom{M}{|z'|}\, |z'|\, (M - |z'|)},

L(f, g, \pi_{x'}) = \sum_{z' \in Z} \left[ f(h_x(z')) - g(z') \right]^2 \pi_{x'}(z'),

where |z′| is the number of non-zero elements in z′.

The proof of Theorem 2 is given in the Supplementary Material. It is important to note that π_{x′}(z′) = ∞ when |z′| ∈ {0, M}, which enforces φ_0 = f_x(∅) and f(x) = \sum_{i=0}^{M} \phi_i. In practice, these infinite weights can be avoided during optimization by analytically eliminating two variables using these constraints.

Since g(z′) in Theorem 2 is assumed to follow a linear form, and L is a squared loss, Equation 2 can still be solved using linear regression. As a consequence, the Shapley values from game theory can be computed using weighted linear regression.² Since LIME uses a simplified input mapping that is equivalent to the approximation of the SHAP mapping given in Equation 12, this enables regression-based, model-agnostic estimation of SHAP values. Jointly estimating all SHAP values using regression provides better sample efficiency than the direct use of the classical Shapley equations (see Section 5).

The intuitive connection between linear regression and Shapley values is that Equation 8 is a difference of means. Since the mean is also the best least squares point estimate for a set of data points, it is natural to search for a weighting kernel that causes linear least squares regression to recapitulate the Shapley values.
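A minimal exact-enumeration sketch of this weighted regression might look as follows. It is illustrative only (small M, subset values supplied through a hypothetical value function `v` playing the role of f(h_x(z′))), and it handles the infinite-weight constraints by analytically eliminating two variables, as described above:

```python
from itertools import combinations
from math import comb

def kernel_shap(v, M):
    """Recover Shapley values by weighted linear regression (Theorem 2).

    phi_0 = v(empty) and sum(phi) = v(full) - v(empty) are enforced by
    eliminating phi_0 and phi_{M-1}, so only the finite-weight subsets
    (0 < |z'| < M) enter the regression.
    """
    f0, fF = v(frozenset()), v(frozenset(range(M)))
    rows = []  # (regression features a, target y, Shapley-kernel weight w)
    for k in range(1, M):
        w = (M - 1) / (comb(M, k) * k * (M - k))
        for S in combinations(range(M), k):
            z = [1 if i in S else 0 for i in range(M)]
            # substitute phi_{M-1} = (fF - f0) - sum_{i < M-1} phi_i
            y = v(frozenset(S)) - f0 - z[M - 1] * (fF - f0)
            a = [z[i] - z[M - 1] for i in range(M - 1)]
            rows.append((a, y, w))
    # normal equations A^T W A phi = A^T W y, solved by Gauss-Jordan elimination
    n = M - 1
    G = [[0.0] * (n + 1) for _ in range(n)]
    for a, y, w in rows:
        for i in range(n):
            for j in range(n):
                G[i][j] += w * a[i] * a[j]
            G[i][n] += w * a[i] * y
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(G[r][i]))  # partial pivoting
        G[i], G[p] = G[p], G[i]
        for r in range(n):
            if r != i:
                factor = G[r][i] / G[i][i]
                G[r] = [G[r][c] - factor * G[i][c] for c in range(n + 1)]
    phi = [G[i][n] / G[i][i] for i in range(n)]
    phi.append((fF - f0) - sum(phi))
    return phi
```

On any small game this reproduces the exact Shapley values of Equation 8, matching brute-force enumeration of subsets, which is the content of Theorem 2.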
This leads to a kernel that distinctly differs from previous heuristically chosen kernels (Figure 2A).

4.2 Model-Specific Approximations

While Kernel SHAP improves the sample efficiency of model-agnostic estimations of SHAP values, by restricting our attention to specific model types we can develop faster model-specific approximation methods.

Linear SHAP For linear models, if we assume input feature independence (Equation 11), SHAP values can be approximated directly from the model's weight coefficients.

Corollary 1 (Linear SHAP) Given a linear model f(x) = \sum_{i=1}^{M} w_i x_i + b:

\phi_0(f, x) = b \quad \text{and} \quad \phi_i(f, x) = w_i (x_i - E[x_i])

This follows from Theorem 2 and Equation 11, and it has been previously noted by Štrumbelj and Kononenko [9].

Low-Order SHAP Since linear regression using Theorem 2 has complexity O(2^M + M^3), it is efficient for small values of M if we choose an approximation of the conditional expectations (Equation 11 or 12).

²During the preparation of this manuscript we discovered this parallels an equivalent constrained quadratic minimization formulation of Shapley values proposed in econometrics [2].

Figure 2: (A) The Shapley kernel weighting is symmetric when all possible z′ vectors are ordered by cardinality (there are 2^15 vectors in this example). This is distinctly different from previous heuristically chosen kernels. (B) Compositional models such as deep neural networks are comprised of many simple components. Given analytic solutions for the Shapley values of the components, fast approximations for the full model can be made using DeepLIFT's style of back-propagation.

Max SHAP Using a permutation formulation of Shapley values, we can calculate the probability that each input will increase the maximum value over every other input. Doing this on a sorted order of input values lets us compute the Shapley values of a max function with M inputs in O(M^2) time instead of O(M 2^M). See the Supplementary Material for the full algorithm.
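Corollary 1 is straightforward to sketch directly. In this illustrative snippet, the feature means stand in for E[x_i] and are estimated from a hypothetical background sample:

```python
def linear_shap(w, b, x, background):
    """SHAP values for a linear model f(x) = sum_i w_i x_i + b
    under the feature-independence assumption (Corollary 1)."""
    M = len(w)
    means = [sum(row[i] for row in background) / len(background) for i in range(M)]
    return [w[i] * (x[i] - means[i]) for i in range(M)]

w, b = [2.0, -1.0, 0.5], 3.0
background = [[0.0, 0.0, 0.0], [2.0, 4.0, 8.0]]  # feature means: [1, 2, 4]
x = [3.0, 1.0, 4.0]
phi = linear_shap(w, b, x, background)  # [4.0, 1.0, 0.0]
```

The attributions sum to f(x) − E[f(x)], so they explain the deviation of the prediction from the base value.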
Deep SHAP (DeepLIFT + Shapley values) While Kernel SHAP can be used on any model, including deep models, it is natural to ask whether there is a way to leverage extra knowledge about the compositional nature of deep networks to improve computational performance. We find an answer to this question through a previously unappreciated connection between Shapley values and DeepLIFT [8]. If we interpret the reference value in Equation 3 as representing E[x] in Equation 12, then DeepLIFT approximates SHAP values assuming that the input features are independent of one another and the deep model is linear. DeepLIFT uses a linear composition rule, which is equivalent to linearizing the non-linear components of a neural network. Its back-propagation rules defining how each component is linearized are intuitive but were heuristically chosen. Since DeepLIFT is an additive feature attribution method that satisfies local accuracy and missingness, we know that Shapley values represent the only attribution values that satisfy consistency. This motivates our adapting DeepLIFT to become a compositional approximation of SHAP values, leading to Deep SHAP.

Deep SHAP combines SHAP values computed for smaller components of the network into SHAP values for the whole network. It does so by recursively passing DeepLIFT's multipliers, now defined in terms of SHAP values, backwards through the network as in Figure 2B:

m_{x_j f_3} = \frac{\phi_j(f_3, x)}{x_j - E[x_j]} \quad (13)

\forall_{j \in \{1,2\}} \quad m_{y_i f_j} = \frac{\phi_i(f_j, y)}{y_i - E[y_i]} \quad (14)

m_{y_i f_3} = \sum_{j=1}^{2} m_{y_i f_j}\, m_{x_j f_3} \quad \text{chain rule} \quad (15)

\phi_i(f_3, y) \approx m_{y_i f_3} (y_i - E[y_i]) \quad \text{linear approximation} \quad (16)

Since the SHAP values for the simple network components can be efficiently solved analytically if they are linear, max pooling, or an activation function with just one input, this composition rule enables a fast approximation of values for the whole model. Deep SHAP avoids the need to heuristically choose ways to linearize components.
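For intuition, the chain rule of Equations 13-16 can be sketched on a toy two-layer composition of linear components. This is purely illustrative (the weights and inputs are invented); when every component is linear the multipliers are just the weights and the composed attributions are exact:

```python
def deep_shap_linear_demo(A, c, y, y_mean):
    """Compose per-component SHAP multipliers through a two-layer network.

    Layer 1: x_j = sum_i A[j][i] * y_i   (components f_1, f_2)
    Layer 2: f_3(x) = sum_j c[j] * x_j
    For linear components m_{x_j f_3} = c_j and m_{y_i f_j} = A[j][i],
    so the chain rule (Equation 15) composes multipliers by summation.
    """
    n_in, n_mid = len(y), len(A)
    m = [sum(c[j] * A[j][i] for j in range(n_mid)) for i in range(n_in)]  # Eq. 15
    return [m[i] * (y[i] - y_mean[i]) for i in range(n_in)]              # Eq. 16

A = [[1.0, 2.0], [3.0, -1.0]]   # weights of f_1, f_2
c = [0.5, 1.0]                  # weights of f_3
y, y_mean = [2.0, 1.0], [0.0, 0.0]
phi = deep_shap_linear_demo(A, c, y, y_mean)  # [7.0, 0.0]
```

Here the composite network is itself linear with effective weights c·A, so the composed attributions match the direct Linear SHAP values of that composite model, illustrating why the approximation is exact in the linear case.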
Instead, it derives an effective linearization from the SHAP values computed for each component. The max function offers one example where this leads to improved attributions (see Section 5).

Figure 3: Comparison of three additive feature attribution methods: Kernel SHAP (using a debiased lasso), Shapley sampling values, and LIME (using the open source implementation). Feature importance estimates are shown for one feature in two models as the number of evaluations of the original model function increases. The 10th and 90th percentiles are shown for 200 replicate estimates at each sample size. (A) A decision tree model using all 10 input features is explained for a single input. (B) A decision tree using only 3 of 100 input features is explained for a single input.

5 Computational and User Study Experiments

We evaluated the benefits of SHAP values using the Kernel SHAP and Deep SHAP approximation methods. First, we compared the computational efficiency and accuracy of Kernel SHAP vs. LIME and Shapley sampling values. Second, we designed user studies to compare SHAP values with alternative feature importance allocations represented by DeepLIFT and LIME. As might be expected, SHAP values prove more consistent with human intuition than other methods that fail to meet Properties 1-3 (Section 2). Finally, we use MNIST digit image classification to compare SHAP with DeepLIFT and LIME.

5.1 Computational Efficiency

Theorem 2 connects Shapley values from game theory with weighted linear regression. Kernel SHAP uses this connection to compute feature importance. This leads to more accurate estimates with fewer evaluations of the original model than previous sampling-based estimates of Equation 8, particularly when regularization is added to the linear model (Figure 3).
Comparing Shapley sampling, SHAP, and LIME on both dense and sparse decision tree models illustrates both the improved sample efficiency of Kernel SHAP and the fact that values from LIME can differ significantly from SHAP values that satisfy local accuracy and consistency.

5.2 Consistency with Human Intuition

Theorem 1 provides a strong incentive for all additive feature attribution methods to use SHAP values. Both LIME and DeepLIFT, as originally demonstrated, compute different feature importance values. To validate the importance of Theorem 1, we compared explanations from LIME, DeepLIFT, and SHAP with user explanations of simple models (using Amazon Mechanical Turk). Our testing assumes that good model explanations should be consistent with explanations from humans who understand that model.

We compared LIME, DeepLIFT, and SHAP with human explanations for two settings. The first setting used a sickness score that was higher when only one of two symptoms was present (Figure 4A). The second used a max allocation problem to which DeepLIFT can be applied. Participants were told a short story about how three men made money based on the maximum score any of them achieved (Figure 4B). In both cases, participants were asked to assign credit for the output (the sickness score or money won) among the inputs (i.e., symptoms or players). We found a much stronger agreement between human explanations and SHAP than with other methods. SHAP's improved performance for max functions addresses the open problem of max pooling functions in DeepLIFT [7].

5.3 Explaining Class Differences

As discussed in Section 4.2, DeepLIFT's compositional approach suggests a compositional approximation of SHAP values (Deep SHAP). These insights, in turn, improve DeepLIFT, and a new version includes updates to better match Shapley values [7]. Figure 5 extends DeepLIFT's convolutional network example to highlight the increased performance of estimates that are closer to SHAP values. The pre-trained model and Figure 5 example are the same as those used in [7], with inputs normalized between 0 and 1. Two convolution layers and two dense layers are followed by a 10-way softmax output layer. Both DeepLIFT versions explain a normalized version of the linear layer, while SHAP (computed using Kernel SHAP) and LIME explain the model's output. SHAP and LIME were both run with 50k samples (Supplementary Figure 1); to improve performance, LIME was modified to use single pixel segmentation over the digit pixels. To match [7], we masked 20% of the pixels, chosen to switch the predicted class from 8 to 3 according to the feature attribution given by each method.

Figure 4: Human feature impact estimates are shown as the most common explanation given among 30 (A) and 52 (B) random individuals, respectively. (A) Feature attributions for a model output value (sickness score) of 2. The model output is 2 when fever and cough are both present, 5 when only one of fever or cough is present, and 0 otherwise. (B) Attributions of profit among three men, given according to the maximum number of questions any man got right. The first man got 5 questions right, the second got 4 right, and the third got none right, so the profit is $5.

Figure 5: Explaining the output of a convolutional network trained on the MNIST digit dataset. Orig. DeepLIFT has no explicit Shapley approximations, while New DeepLIFT seeks to better approximate Shapley values. (A) Red areas increase the probability of that class, and blue areas decrease the probability. Masked removes pixels in order to go from 8 to 3. (B) The change in log odds when masking over 20 random images supports the use of better estimates of SHAP values.
6 Conclusion

The growing tension between the accuracy and interpretability of model predictions has motivated the development of methods that help users interpret predictions. The SHAP framework identifies the class of additive feature importance methods (which includes six previous methods) and shows there is a unique solution in this class that adheres to desirable properties. The thread of unity that SHAP weaves through the literature is an encouraging sign that common principles about model interpretation can inform the development of future methods.

We presented several different estimation methods for SHAP values, along with proofs and experiments showing that these values are desirable. Promising next steps involve developing faster model-type-specific estimation methods that make fewer assumptions, integrating work on estimating interaction effects from game theory, and defining new explanation model classes.

Acknowledgements

This work was supported by a National Science Foundation (NSF) grant DBI-135589, NSF CAREER DBI-155230, American Cancer Society 127332-RSG-15-097-01-TBG, National Institute of Health (NIH) AG049196, and an NSF Graduate Research Fellowship. We would like to thank Marco Ribeiro, Erik Štrumbelj, Avanti Shrikumar, Yair Zick, the Lee Lab, and the NIPS reviewers for feedback that has significantly improved this work.

References

[1] Sebastian Bach et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation". In: PloS One 10.7 (2015), e0130140.
[2] A. Charnes et al. "Extremal principle solutions of games in characteristic function form: core, Chebychev and Shapley value generalizations". In: Econometrics of Planning and Efficiency 11 (1988), pp. 123-133.
[3] Anupam Datta, Shayak Sen, and Yair Zick. "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems". In: Security and Privacy (SP), 2016 IEEE Symposium on. IEEE. 2016, pp. 598-617.
[4] Stan Lipovetsky and Michael Conklin. "Analysis of regression in game theory approach". In: Applied Stochastic Models in Business and Industry 17.4 (2001), pp. 319-330.
[5] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier". In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 2016, pp. 1135-1144.
[6] Lloyd S. Shapley. "A value for n-person games". In: Contributions to the Theory of Games 2.28 (1953), pp. 307-317.
[7] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. "Learning Important Features Through Propagating Activation Differences". In: arXiv preprint arXiv:1704.02685 (2017).
[8] Avanti Shrikumar et al. "Not Just a Black Box: Learning Important Features Through Propagating Activation Differences". In: arXiv preprint arXiv:1605.01713 (2016).
[9] Erik Štrumbelj and Igor Kononenko. "Explaining prediction models and individual predictions with feature contributions". In: Knowledge and Information Systems 41.3 (2014), pp. 647-665.
[10] H. Peyton Young. "Monotonic solutions of cooperative games". In: International Journal of Game Theory 14.2 (1985), pp. 65-72.
Nonbacktracking Bounds on the Influence in Independent Cascade Models

Emmanuel Abbe¹,² Sanjeev Kulkarni² Eun Jee Lee¹
¹Program in Applied and Computational Mathematics
²Department of Electrical Engineering
Princeton University
{eabbe, kulkarni, ejlee}@princeton.edu

Abstract

This paper develops upper and lower bounds on the influence measure in a network, more precisely, the expected number of nodes that a seed set can influence in the independent cascade model. In particular, our bounds exploit nonbacktracking walks and Fortuin-Kasteleyn-Ginibre (FKG) type inequalities, and are computed by message passing algorithms. Nonbacktracking walks have recently allowed for headways in community detection, and this paper shows that their use can also impact the influence computation. Further, we provide parameterized versions of the bounds that control the trade-off between the efficiency and the accuracy. Finally, the tightness of the bounds is illustrated with simulations on various network models.

1 Introduction

Influence propagation is concerned with the diffusion of information from initially influenced nodes, called seeds, in a network. Understanding how information propagates in networks has become a central problem in a broad range of fields, such as viral marketing [18], sociology [9, 20, 24], communication [13], epidemiology [21], and social network analysis [25]. One of the most fundamental questions on influence propagation is to estimate the influence, i.e. the expected number of influenced nodes at the end of the propagation, given a set of seeds. Estimating the influence is central to diverse research problems related to influence propagation, such as the widely-known influence maximization problem: finding a set of k nodes that maximizes the influence. Recent studies on influence propagation have proposed various algorithms [12, 19, 4, 8, 23, 22] for the influence maximization problem while using Monte Carlo (MC) simulations to approximate the influence.
The submodularity argument and the probabilistic error bound on MC give a probabilistic lower bound, in terms of the true maximum influence, on the influence obtainable by these algorithms. Despite its benefits for the influence maximization problem, approximating the influence via MC simulations is far from ideal for large networks; in particular, MC may require a large number of computations in order to stabilize the approximation.

To overcome the limitations of Monte Carlo simulations, many researchers have taken both algorithmic and theoretical approaches to approximating the influence of given seeds in a network. Chen and Teng [3] provided a probabilistic guarantee on estimating the influence of a single seed with a relative error bound, with expected running time O(ℓ(|V| + |E|)|V| log |V| / ε^2), such that with probability 1 − 1/n^ℓ, for every node v, the computed influence of v has relative error at most ε. Draief et al. [6] introduced an upper bound for the influence using the spectral radius of the adjacency matrix. Tighter upper bounds were later suggested in [17], which relate the ratio of influenced nodes in a network to the spectral radius of the so-called Hazard matrix. Further, improved upper bounds which account for sensitive edges were introduced in [16].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In contrast, there has been little work on finding a tight lower bound for the influence. An exception is the work by Khim et al. [14], where the lower bound is obtained by considering only the influence through the maximal-weighted paths. In this paper, we propose both upper and lower bounds on the influence using nonbacktracking walks and Fortuin-Kasteleyn-Ginibre (FKG) type inequalities. The bounds can be efficiently obtained by a message passing implementation.
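For reference, the Monte Carlo baseline that the proposed bounds are meant to complement can be sketched as follows. This is an illustrative implementation of the independent cascade model defined in Section 2, not code from the paper:

```python
import random

def simulate_ic(graph, p, seeds, rng):
    """One run of the independent cascade model.

    graph: dict mapping node -> list of out-neighbors
    p: dict mapping directed edge (u, v) -> transmission probability P_uv
    Returns the set of influenced nodes at the end of the propagation.
    """
    influenced = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in influenced and rng.random() < p[(u, v)]:
                    influenced.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return influenced

def mc_influence(graph, p, seeds, runs=10000, seed=0):
    """Monte Carlo estimate of the influence sigma(S_0)."""
    rng = random.Random(seed)
    total = sum(len(simulate_ic(graph, p, seeds, rng)) for _ in range(runs))
    return total / runs
```

On a directed path s → a → b with all edge probabilities q, the exact influence is 1 + q + q², which the estimate approaches as the number of runs grows; stabilizing this estimate on large graphs is exactly the cost the bounds in Section 3 avoid.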
This shows that nonbacktracking walks can also impact influence propagation, making another case for the use of nonbacktracking walks in graphical model problems as in [15, 10, 2, 1], discussed later in the paper. Further, we provide a parametrized version of the bounds that can adjust the trade-off between the efficiency and the accuracy of the bounds.

2 Background

We introduce here the independent cascade model and provide background for the main results.

Definition 1 (Independent Cascade Model). Consider a directed graph G = (V, E) with |V| = n, a transmission probability matrix P ∈ [0, 1]^{n×n}, and a seed set S_0 ⊆ V. For all u ∈ V, let N^+(u) be the set of out-neighbors of node u. The independent cascade model IC(G, P, S_0) sequentially generates the influenced set S_t ⊆ V for each discrete time t ≥ 1 as follows. At time t, S_t is initialized to be an empty set. Then, each node u ∈ S_{t−1} attempts to influence v ∈ N^+(u) \ ∪_{i=0}^{t−1} S_i with probability P_{uv}, i.e. node u influences its uninfluenced out-neighbor v with probability P_{uv}. If v is influenced at time t, v is added to S_t. The process stops at T if S_T = ∅ at the end of step t = T. The set of influenced nodes at the end of the propagation is defined as S = ∪_{i=0}^{T−1} S_i.

We often refer to an edge (u, v) as open if node u influences node v. The IC model is equivalent to the live-arc graph model, where the influence happens at once rather than sequentially. The live-arc graph model first decides the state of every edge with a Bernoulli trial, i.e. edge (u, v) is open independently with probability P_{uv} and closed otherwise. Then, the set of influenced nodes is defined as the set of nodes that are reachable from at least one of the seeds by open edges.

Definition 2 (Influence). The expected number of nodes that are influenced at the end of the propagation process is called the influence (rather than the expected influence, with a slight abuse of terminology) of IC(G, P, S_0), and is defined as

\sigma(S_0) = \sum_{v \in V} \mathbb{P}(v \text{ is influenced}). \quad (1)
It is shown in [5] that computing the influence σ(S_0) in the independent cascade model IC(G, P, S_0) is #P-hard, even with a single seed, i.e. |S_0| = 1.

Next, we define nonbacktracking (NB) walks on a directed graph. Nonbacktracking walks have already been used for studying the characteristics of networks. To the best of our knowledge, the use of NB walks in the context of epidemics was first introduced in the paper of Karrer et al. [11] and later applied to percolation in [10]. In particular, Karrer et al. reformulate the spread of influence as a message passing process and demonstrate how the resulting equations can be used to calculate an upper bound on the number of nodes that are susceptible at a given time. As we shall see, we take a different approach to the use of NB walks, which focuses on the effective contribution of a node in influencing another node and accumulates such contributions to obtain upper and lower bounds. More recently, nonbacktracking walks have been used for community detection [15, 2, 1].

Definition 3 (Nonbacktracking Walk). Let G = (V, E) be a directed graph. A nonbacktracking walk of length k is defined as w^{(k)} = (v_0, v_1, . . . , v_k), where v_i ∈ V and (v_{i−1}, v_i) ∈ E for all i ∈ [k], and v_{i−1} ≠ v_{i+1} for all i ∈ [k − 1].

We next recall a key inequality introduced by Fortuin et al. [7].

Theorem 1 (FKG Inequality). Let (Γ, ≺) be a distributive lattice, where Γ is a finite partially ordered set, ordered by ≺, and let µ be a positive measure on Γ satisfying the following condition: for all x, y ∈ Γ,

\mu(x \wedge y)\,\mu(x \vee y) \ge \mu(x)\,\mu(y),

where x ∧ y = max{z ∈ Γ : z ⪯ x, z ⪯ y} and x ∨ y = min{z ∈ Γ : x ⪯ z, y ⪯ z}. Let f and g be both increasing (or both decreasing) functions on Γ. Then,

\Big(\sum_{x \in \Gamma} \mu(x)\Big)\Big(\sum_{x \in \Gamma} f(x) g(x) \mu(x)\Big) \ge \Big(\sum_{x \in \Gamma} f(x) \mu(x)\Big)\Big(\sum_{x \in \Gamma} g(x) \mu(x)\Big). \quad (2)
The FKG inequality is instrumental in studying influence propagation, since the probability that a node is influenced is nondecreasing with respect to the partial order of the random variables describing the states, open or closed, of the edges.

3 Nonbacktracking bounds on the influence

In this section, we present upper and lower bounds on the influence in the independent cascade model and explain the motivations and intuitions behind the bounds. The bounds utilize nonbacktracking walks and FKG inequalities and are computed efficiently by message passing algorithms. In particular, the upper bound on a network based on a graph G(V, E) runs in O(|V|^2 + |V||E|) and the lower bound runs in O(|V| + |E|), whereas a Monte Carlo simulation would require O(|V|^3 + |V|^2|E|) computations without knowing the variance of the influence, which is harder to estimate than the influence itself. The reason for the large computational complexity of MC is that, in order to ensure that the standard error of the estimate does not grow with respect to |V|, MC requires O(|V|^2) computations. Hence, for large networks, where MC may not be feasible, our algorithms can still provide bounds on the influence. Furthermore, from the proposed upper bound σ^+ and lower bound σ^−, we can compute an upper bound on the variance given by (σ^+ − σ^−)^2 / 4. This could be used to estimate the number of computations needed by MC. Computing this upper bound on the variance with the proposed bounds can be done in O(|V|^2 + |V||E|), whereas computing the variance with MC simulation requires O(|V|^5 + |V|^4|E|).

3.1 Nonbacktracking upper bounds (NB-UB)

We start by defining the following terms for the independent cascade model IC(G, P, S_0), where G = (V, E) and |V| = n.

Definition 4. For any v ∈ V, we define the set of in-neighbors N^−(v) = {u ∈ V : (u, v) ∈ E} and the set of out-neighbors N^+(v) = {u ∈ V : (v, u) ∈ E}.

Definition 5.
For any v ∈ V and l ∈ [n − 1], the set P_l(S_0 → v) is defined as the set of all paths of length l from any seed s ∈ S_0 to v. We call a path P open iff every edge in P is open. For l = 0, we define P_0(S_0 → v) as the set (of size one) containing only the zero-length path at node v, and assume the path P ∈ P_0(S_0 → v) is open iff v ∈ S_0.

Definition 6. For any v ∈ V and l ∈ {0, . . . , n − 1}, we define

p(v) = \mathbb{P}(v \text{ is influenced}) \quad (3)

p_l(v) = \mathbb{P}\big(\cup_{P \in \mathcal{P}_l(S_0 \to v)} \{P \text{ is open}\}\big) \quad (4)

p_l(u \to v) = \mathbb{P}\big(\cup_{P \in \mathcal{P}_l(S_0 \to u),\, v \notin P} \{P \text{ is open and edge } (u, v) \text{ is open}\}\big) \quad (5)

In other words, p_l(v) is the probability that node v is influenced by open paths of length l, i.e. there exists an open path of length l from a seed to v, and p_l(u → v) is the probability that v is influenced by node u via open paths of length l + 1, i.e. there exists an open path of length l + 1 from a seed to v that ends with the edge (u, v).

Lemma 1. For any v ∈ V,

p(v) \le 1 - \prod_{l=0}^{n-1} (1 - p_l(v)). \quad (6)

For any v ∈ V and l ∈ [n − 1],

p_l(v) \le 1 - \prod_{u \in N^-(v)} (1 - p_{l-1}(u \to v)). \quad (7)

Lemma 1, which can be proved by FKG inequalities, suggests that given p_{l−1}(u → v), we may compute an upper bound on the influence. Ideally, p_{l−1}(u → v) could be computed by considering all paths of length l that end with (u, v). However, this results in exponential complexity O(n^l), as l goes up to n − 1. Thus, we present an efficient way to compute an upper bound UB_{l−1}(u → v) on p_{l−1}(u → v), which in turn gives an upper bound UB_l(v) on p_l(v), with the following recursion formula.

Definition 7. For all l ∈ {0, . . . , n − 1} and u, v ∈ V such that (u, v) ∈ E, UB_l(u) ∈ [0, 1] and UB_l(u → v) ∈ [0, 1] are defined recursively as follows.

Initial condition: For every s ∈ S_0, s^+ ∈ N^+(s), u ∈ V \ S_0, and v ∈ N^+(u),

UB_0(s) = 1, \quad UB_0(s \to s^+) = P_{s s^+} \quad (8)

UB_0(u) = 0, \quad UB_0(u \to v) = 0. \quad (9)
(9)

Recursion: For every l ∈ [n−1], s ∈ S_0, s⁺ ∈ N⁺(s), s⁻ ∈ N⁻(s), u ∈ V \ S_0, and v ∈ N⁺(u) \ S_0,

UB_l(s) = 0,  UB_l(s→s⁺) = 0,  UB_l(s⁻→s) = 0,   (10)
UB_l(u) = 1 − ∏_{w ∈ N⁻(u)} (1 − UB_{l−1}(w→u)),   (11)
UB_l(u→v) = P_{uv} (1 − (1 − UB_l(u)) / (1 − UB_{l−1}(v→u))) if v ∈ N⁻(u), and UB_l(u→v) = P_{uv} UB_l(u) otherwise.   (12)

Equation (10) follows from the fact that for any seed node s ∈ S_0 and all l > 0, the probabilities p_l(s) = 0, p_l(s→s⁺) = 0, and p_l(s⁻→s) = 0. A naive way to compute UB_l(u→v) is UB_l(u→v) = P_{uv} UB_{l−1}(u), but this results in an extremely loose bound due to backtracking. For a tighter bound, we use nonbacktracking in Equation (12): when computing UB_l(u→v), we ignore the contribution of UB_{l−1}(v→u).

Theorem 2. For any independent cascade model IC(G, P, S_0),

σ(S_0) ≤ ∑_{v ∈ V} (1 − ∏_{l=0}^{n−1} (1 − UB_l(v))) =: σ⁺(S_0),   (13)

where UB_l(v) is obtained recursively as in Definition 7.

Next, we present the Nonbacktracking Upper Bound (NB-UB) algorithm, which computes UB_l(v) and UB_l(u→v) by message passing. At the l-th iteration, the variables in NB-UB represent the following.

· S_l is the set of nodes that are processed at the l-th iteration.
· M_curr(v) = {(u, UB_{l−1}(u→v)) : u is an in-neighbor of v, and u ∈ S_{l−1}} is the set of pairs (previously processed in-neighbor u of v, incoming message from u to v).
· MSrc(v) = {u : u is an in-neighbor of v, and u ∈ S_{l−1}} is the set of in-neighbors of v that were processed at the previous step.
· M_curr(v)[u] = UB_{l−1}(u→v) is the incoming message from u to v.
· M_next(v) = {(u, UB_l(u→v)) : u is an in-neighbor of v, and u ∈ S_l} is the set of pairs (currently processed in-neighbor u, next iteration's incoming message from u to v).
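Before the message-passing formulation below, the recursion of Definition 7 and the bound of Theorem 2 can be transcribed directly. The following is a minimal Python sketch, not the paper's implementation: the graph is assumed given as a dict P mapping each node u to {v: P_uv}, it iterates densely over all nodes for clarity instead of passing messages, and it guards the division in Equation (12) by falling back to the non-backtracking-free case when the denominator would vanish.

```python
from math import prod

def nb_ub(P, seeds, n_nodes):
    """Upper bound sigma_plus of Theorem 2 via the recursion of Definition 7.

    P: dict u -> {v: P_uv} of transmission probabilities (directed edges).
    seeds: the seed set S_0. Nodes are 0 .. n_nodes - 1.
    """
    nodes = range(n_nodes)
    in_nbrs = {v: [u for u in nodes if v in P.get(u, {})] for v in nodes}
    # UB[l][u] and UBe[l][(u, v)] mirror UB_l(u) and UB_l(u -> v).
    UB = [{u: (1.0 if u in seeds else 0.0) for u in nodes}]           # (8)
    UBe = [{(u, v): (P[u][v] if u in seeds else 0.0)
            for u in nodes for v in P.get(u, {})}]                    # (8)-(9)
    for l in range(1, n_nodes):
        ub_l, ube_l = {}, {}
        for u in nodes:
            if u in seeds:
                ub_l[u] = 0.0  # (10): seeds contribute only at l = 0
            else:
                ub_l[u] = 1.0 - prod(1.0 - UBe[l - 1][(w, u)]
                                     for w in in_nbrs[u])             # (11)
        for u in nodes:
            for v in P.get(u, {}):
                if u in seeds or v in seeds:
                    ube_l[(u, v)] = 0.0                               # (10)
                elif u in P.get(v, {}) and UBe[l - 1][(v, u)] < 1.0:
                    # (12), nonbacktracking case: discount the v -> u message
                    ube_l[(u, v)] = P[u][v] * (
                        1.0 - (1.0 - ub_l[u]) / (1.0 - UBe[l - 1][(v, u)]))
                else:
                    ube_l[(u, v)] = P[u][v] * ub_l[u]                 # (12)
        UB.append(ub_l)
        UBe.append(ube_l)
    # Theorem 2: sigma_plus = sum_v (1 - prod_l (1 - UB_l(v)))
    return sum(1.0 - prod(1.0 - UB[l][v] for l in range(n_nodes))
               for v in nodes)
```

On a directed path 0→1→2 with seed {0} and edge probabilities 0.5, the recursion gives 1 + 0.5 + 0.25 = 1.75, the exact influence, consistent with the claim that the bounds are exact on trees.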
Algorithm 1 Nonbacktracking Upper Bound (NB-UB)
Initialize: UB_l(v) = 0 for all 0 ≤ l ≤ n−1 and v ∈ V
Initialize: Insert (s, 1) into M_next(s) for all s ∈ S_0
for l = 0 to n−1 do
  for u ∈ S_l do
    M_curr(u) = M_next(u), and clear M_next(u)
    UB_l(u) = ProcessIncomingMsgUB(M_curr(u))
  for u ∈ S_l do
    for v ∈ N⁺(u) \ S_0 do
      S_{l+1}.insert(v)
      if v ∈ MSrc(u) then
        UB_l(u→v) = GenerateOutgoingMsgUB(M_curr(u)[v], UB_l(u), P_{uv})
      else
        UB_l(u→v) = GenerateOutgoingMsgUB(0, UB_l(u), P_{uv})
      M_next(v).insert((u, UB_l(u→v)))
Output: UB_l(u) for all l, u

At the beginning, every seed node s ∈ S_0 is initialized such that M_curr(s) = {(s, 1)} in order to satisfy the initial condition UB_0(s) = 1. At each l-th iteration, every node u in S_l is processed as follows. First, ProcessIncomingMsgUB(M_curr(u)) computes UB_l(u) as in Equation (11). Second, u passes a message to each neighbor v ∈ N⁺(u) \ S_0 along the edge (u, v), and v stores (inserts) the message in M_next(v) for the next iteration. The message contains 1) the source of the message, u, and 2) UB_l(u→v), which is computed as in Equation (12) by the function GenerateOutgoingMsgUB. Finally, the algorithm outputs UB_l(u) for all u ∈ V and l ∈ {0, . . . , n−1}, and the upper bound σ⁺(S_0) is computed via Equation (13). A description of how the algorithm runs on a small network can be found in the supplementary material.

Computational complexity: Notice that for each iteration l ∈ {0, . . . , n−1}, the algorithm accesses at most n nodes, and for each node v, the functions ProcessIncomingMsgUB and GenerateOutgoingMsgUB are computed in O(deg(v)) and O(1) time, respectively. Therefore, the worst-case computational complexity is O(|V|^2 + |V||E|).

3.2 Nonbacktracking lower bounds (NB-LB)

A naive way to compute a lower bound on the influence in a network IC(G, P, S_0) is to reduce the network to a (spanning) tree network by removing edges.
Then, since there is a unique path from any node to another, we can compute the influence of the tree network in O(|V|) time, and this influence is a lower bound on the influence in the original network. We take this approach of generating a subnetwork from the original network, yet we avoid the significant gap between the bound and the influence by considering the following directed acyclic subnetwork, in which there is no backtracking walk.

Definition 8 (Min-distance Directed Acyclic Subnetwork). Consider an independent cascade model IC(G, P, S_0) with G = (V, E) and |V| = n. Let d(S_0, v) := min_{s ∈ S_0} d(s, v), i.e. the minimum distance from a seed in S_0 to v. A minimum-distance directed acyclic subnetwork (MDAS), IC(G′, P′, S_0), where G′ = (V′, E′), is obtained as follows.

· V′ = {v_1, . . . , v_n} is an ordered set of nodes such that d(S_0, v_i) ≤ d(S_0, v_j) for every i < j.
· E′ = {(v_i, v_j) ∈ E : i < j}, i.e. E′ is obtained from E by removing every edge whose source node comes later in the order than its destination node.
· P′_{v_i v_j} = P_{v_i v_j} if (v_i, v_j) ∈ E′, and P′_{v_i v_j} = 0 otherwise.

If there are multiple ordered sets of vertices satisfying the condition, we may choose one arbitrarily.

For any k ∈ [n], let p(v_k) be the probability that v_k ∈ V′ is influenced in the MDAS IC(G′, P′, S_0). Since p(v_k) equals the probability of the union of the events that an in-neighbor u_i ∈ N⁻(v_k) influences v_k, p(v_k) can be computed by the principle of inclusion and exclusion. Thus, we could compute a lower bound on p(v_k) using Bonferroni inequalities if we knew the probability that in-neighbors u and v both influence v_k, for every pair u, v ∈ N⁻(v_k). However, computing such probabilities can take O(k^k) time. Hence, we present LB(v_k), which efficiently computes a lower bound on p(v_k) via the following recursion.

Definition 9. For all v_k ∈ V′, LB(v_k) ∈ [0, 1] is defined by recursion on k as follows.

Initial condition: For every v_s ∈ S_0,

LB(v_s) = 1.
(14)

Recursion: For every v_k ∈ V′ \ S_0,

LB(v_k) = ∑_{i=1}^{m*} P′_{u_i v_k} LB(u_i) (1 − ∑_{j=1}^{i−1} P′_{u_j v_k}),   (15)

where N⁻(v_k) = {u_1, . . . , u_m} is the ordered set of in-neighbors of v_k in IC(G′, P′, S_0) and m* = max{m′ ≤ m : ∑_{j=1}^{m′−1} P′_{u_j v_k} ≤ 1}.

Remark. Since the i-th summand in Equation (15) can reuse ∑_{j=1}^{i−2} P′_{u_j v_k}, which is already computed for the (i−1)-th summand, to obtain ∑_{j=1}^{i−1} P′_{u_j v_k}, the summation takes at most O(deg(v_k)) time.

Theorem 3. For any independent cascade model IC(G, P, S_0) and its MDAS IC(G′, P′, S_0),

σ(S_0) ≥ ∑_{v_k ∈ V′} LB(v_k) =: σ⁻(S_0),   (16)

where LB(v_k) is obtained recursively as in Definition 9.

Next, we present the Nonbacktracking Lower Bound (NB-LB) algorithm, which efficiently computes LB(v_k). At the k-th iteration, the key variable in NB-LB has the following meaning.

· M(v_k) = {(LB(v_j), P′_{v_j v_k}) : v_j is an in-neighbor of v_k} is the set of pairs (incoming message from an in-neighbor v_j to v_k, transmission probability of the edge (v_j, v_k)).

Algorithm 2 Nonbacktracking Lower Bound (NB-LB)
Input: directed acyclic network IC(G′, P′, S_0)
Initialize: σ⁻ = 0
Initialize: Insert (1, 1) into M(v_i) for all v_i ∈ S_0
for k = 1 to n do
  LB(v_k) = ProcessIncomingMsgLB(M(v_k))
  σ⁻ += LB(v_k)
  for v_l ∈ N⁺(v_k) \ S_0 do
    M(v_l).insert((LB(v_k), P′_{v_k v_l}))
Output: σ⁻

At the beginning, every seed node s ∈ S_0 is initialized such that M(s) = {(1, 1)} in order to satisfy the initial condition LB(s) = 1. At the k-th iteration, node v_k is processed as follows. First, LB(v_k) is computed as in Equation (15) by the function ProcessIncomingMsgLB, and added to σ⁻. Second, v_k passes the message (LB(v_k), P′_{v_k v_l}) to each out-neighbor v_l ∈ N⁺(v_k) \ S_0, and v_l stores (inserts) it in M(v_l). Finally, the algorithm outputs σ⁻, the lower bound on the influence. A description of how the algorithm runs on a small network can be found in the supplementary material.

Computational complexity: Obtaining an arbitrary directed acyclic subnetwork from the original network takes O(|V| + |E|).
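As a concrete companion to Definition 9 and Algorithm 2, the lower bound can be transcribed in a few lines. The following is a minimal Python sketch, not the paper's code: it assumes the MDAS is already given as an ordered node list (seeds first, by seed distance) together with a dict P of transmission probabilities in which every edge goes forward in that order.

```python
def nb_lb(order, P, seeds):
    """Lower bound sigma_minus of Theorem 3 on a min-distance DAG (MDAS).

    order: nodes v_1 .. v_n sorted by distance from the seed set.
    P: dict u -> {v: P'_uv}; edges only go forward in `order`.
    """
    pos = {v: i for i, v in enumerate(order)}
    LB = {}
    sigma_minus = 0.0
    for v in order:
        if v in seeds:
            LB[v] = 1.0  # initial condition (14)
        else:
            # in-neighbors of v in the MDAS, taken in the given order
            preds = [u for u in order[:pos[v]] if v in P.get(u, {})]
            total, lb = 0.0, 0.0
            for u in preds:
                if total > 1.0:  # truncation at m* of Definition 9
                    break
                lb += P[u][v] * LB[u] * (1.0 - total)  # recursion (15)
                total += P[u][v]  # running sum reused, as in the Remark
            LB[v] = lb
        sigma_minus += LB[v]
    return sigma_minus
```

On a path 0→1→2 with seed {0} and edge probabilities 0.5, the recursion gives 1 + 0.5 + 0.25 = 1.75, the exact influence; on a diamond with all transmission probabilities 1, the truncation at m* keeps every LB(v_k) at most 1.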
Next, the algorithm iterates through the nodes V′ = {v_1, . . . , v_n}. For each node v_k, ProcessIncomingMsgLB takes O(deg(v_k)) time, and v_k sends messages to its out-neighbors in O(deg(v_k)) time. Hence, the worst-case computational complexity is O(|V| + |E|).

3.3 Tunable bounds

In this section, we briefly introduce parametrized versions of NB-UB and NB-LB, which provide control over the trade-off between the efficiency and the accuracy of the bounds.

Upper bounds (tNB-UB): Given a non-negative integer t ≤ n−1, for every node u ∈ V we compute the probability p_{≤t}(u) that node u is influenced by open paths whose length is at most t, and for each v ∈ N⁺(u) we compute the probability p_t(u→v). Then, we start NB-UB from l = t+1 with the new initial conditions UB_t(u→v) = p_t(u→v) and UB_t(u) = p_{≤t}(u), and compute the upper bound as ∑_{v ∈ V} (1 − ∏_{l=t}^{n−1} (1 − UB_l(v))). For larger values of t, the algorithm yields tighter upper bounds, while the computational complexity may increase exponentially for dense networks. Thus, this method is most applicable to sparse networks, where the degree of each node is bounded.

Lower bounds (tNB-LB): We first order the set of nodes {v_1, . . . , v_n} such that d(S_0, v_i) ≤ d(S_0, v_j) for every i < j. Given a non-negative integer t ≤ n, we obtain a subnetwork IC(G[V_t], P[V_t], S_0 ∩ V_t) of size t, where G[V_t] is the subgraph induced by the set of nodes V_t = {v_1, . . . , v_t}, and P[V_t] is the corresponding transmission probability matrix. For each v_i ∈ V_t, we compute the exact probability p_t(v_i) that node v_i is influenced in the subnetwork IC(G[V_t], P[V_t], S_0 ∩ V_t). Then, we start NB-LB from i = t+1 with the new initial conditions LB(v_k) = p_t(v_k) for all k ≤ t. For larger t, the algorithm yields tighter lower bounds. However, the computational complexity may increase exponentially with t, the size of the subnetwork.
This algorithm can also adopt Monte Carlo simulations on the subnetwork to avoid the large computational complexity. However, this modification yields probabilistic lower bounds rather than theoretically guaranteed ones. Nonetheless, it can still give a significant improvement, because Monte Carlo simulations on a smaller network require less computation to stabilize the estimate.

4 Experimental Results

In this section, we evaluate NB-UB and NB-LB on independent cascade models over a variety of classical synthetic networks.

Network Generation. We consider 4 classical random graph models with the following parameters: Erdős–Rényi random graphs ER(n = 1000, p = 0.003), scale-free networks SF(n = 1000, α = 2.5), random regular graphs Reg(n = 1000, d = 3), and random tree graphs with power-law degree distributions T(n = 1000, α = 3). For each graph model, we generate 100 networks IC(G, pA, {s}) as follows. The graph G is the largest connected component of a graph drawn from the graph model, the seed node s is a randomly selected vertex, and A is the adjacency matrix of G. The corresponding IC model has the same transmission probability p for every edge.

Evaluation of Bounds. For each network generated, we compute the following quantities for each p ∈ {0.1, 0.2, . . . , 0.9}.

· σ_mc: the estimate of the influence from 10^6 Monte Carlo simulations.
· σ⁺: the upper bound obtained by NB-UB.
· σ⁺_spec: the spectral upper bound from [17].
· σ⁻: the lower bound obtained by NB-LB.
· σ⁻_prob: the probabilistic lower bound obtained from 10 Monte Carlo simulations.

Figure 1: This figure compares the average relative gap of the bounds (NB-UB, the spectral upper bound in [17], NB-LB, and the probabilistic lower bound computed by MC simulations) for various types of networks.

The probabilistic lower bound is chosen for the experiments since, to our knowledge, no tight lower bound was previously available.
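The network generation described above can be reproduced along these lines. This is a stdlib-only illustration, not the experiment code: it builds an Erdős–Rényi graph with the text's parameters n = 1000, p = 0.003, extracts the largest connected component by BFS, and picks a random seed node in it.

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected G(n, p): each vertex pair becomes an edge independently w.p. p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def largest_component(adj):
    """Return the node set of the largest connected component (BFS)."""
    seen, best = set(), set()
    for s in adj:
        if s in seen:
            continue
        comp, frontier = {s}, [s]
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    frontier.append(v)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

rng = random.Random(0)
G = erdos_renyi(1000, 0.003, rng)  # ER(n = 1000, p = 0.003) as in the text
giant = largest_component(G)       # the graph actually used for IC(G, pA, {s})
seed = rng.choice(sorted(giant))   # a uniformly random seed node
```

With mean degree np ≈ 3, the giant component covers the large majority of the vertices, which is why the experiments restrict to it.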
The sample size of 10 is chosen so as to roughly match the computational complexity of the NB-LB algorithm. In Figure 1, we compare the average relative gap of the bounds for every network model and for each transmission probability, where the true value is taken to be σ_mc. For example, the average relative gap of NB-UB for 100 Erdős–Rényi networks {N_i}_{i=1}^{100} with transmission probability p is computed as

(1/100) ∑_{i ∈ [100]} (σ⁺[N_i] − σ_mc[N_i]) / σ_mc[N_i],

where σ⁺[N_i] and σ_mc[N_i] denote the NB-UB bound and the MC estimate, respectively, for the network N_i.

Results. Figure 1 shows that NB-UB outperforms the upper bound in [17] on the Erdős–Rényi and random 3-regular networks, and performs comparably on the scale-free networks. Also, NB-LB gives tighter bounds than the MC bounds on the Erdős–Rényi, scale-free, and random regular networks when the transmission probability is small, p < 0.4. Both NB-UB and NB-LB compute the exact influence on the tree networks, since both algorithms avoid backtracking walks. Next, we show the bounds on exemplary networks.

4.1 Upper Bounds

Selection of Networks. In order to illustrate the typical behavior of the bounds, we have chosen the network in Figure 2a as follows. First, we generate 100 random 3-regular graphs G with 1000 nodes and assign a random seed s. Then, the corresponding IC model is defined as IC(G, P = pA, S_0 = {s}). For each network, we compute NB-UB and the MC estimate. Then, we compute a score for each network, defined as the sum of the squared differences between the upper bounds and the MC estimates over the transmission probabilities p ∈ {0.1, 0.2, . . . , 0.9}. Finally, a graph whose score is the median of all 100 scores is chosen for Figure 2a.

Figure 2: (a) Various upper bounds on the influence in the 3-regular network of Section 4.1. The MC upper bounds are computed with various simulation sizes and shown as data points labeled MC(N), where N is the number of simulations. The spectral upper bound in [17] is shown as a red line, and NB-UB as a green line. (b) Lower bounds on the influence of a scale-free network of Section 4.2. The probabilistic lower bounds, shown as points labeled MC(N), are obtained from Monte Carlo simulations with N simulations each. NB-LB is shown as a green line.

Results. In Figure 2a, we compare 1) the upper bound introduced in [17] and 2) the probabilistic upper bounds obtained from Monte Carlo simulations at the 99% confidence level, against NB-UB. The MC upper bounds are computed with sample sizes N ∈ {5, 10, 30, 300, 3000}. It is evident from the figure that a larger sample size provides a tighter probabilistic upper bound. NB-UB outperforms the bound of [17] and the probabilistic MC bound when the transmission probability is relatively small. Further, it shows a trend similar to the MC simulations with a large sample size.

4.2 Lower Bounds

Selection of Networks. We adopt a selection process similar to the one for the upper bounds, but with scale-free networks with 3000 nodes and α = 2.5.

Results. We compare probabilistic lower bounds obtained by MC at the 99% confidence level to NB-LB. The lower bounds from Monte Carlo simulations are computed with sample sizes N ∈ {5, 12, 30, 300, 3000}, which correspond to a constant, log(|V|), 0.01|V|, 0.1|V|, and |V|.
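The Monte Carlo estimates σ_mc and the probabilistic bounds above can be reproduced with a simple percolation sketch (a minimal illustration, not the experiment code): each directed edge is sampled open independently with its transmission probability, and the influence is the average number of nodes reachable from the seeds through open edges.

```python
import random

def mc_influence(P, seeds, n_samples, rng):
    """Monte Carlo estimate of the influence sigma(S_0) in IC(G, P, S_0).

    P: dict u -> {v: P_uv}. Each directed edge is open independently
    with probability P_uv; the influenced nodes are exactly those
    reachable from the seeds through open edges.
    """
    total = 0
    for _ in range(n_samples):
        reached, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            # each edge (u, v) is examined (and hence sampled) at most once
            for v, puv in P.get(u, {}).items():
                if v not in reached and rng.random() < puv:
                    reached.add(v)
                    frontier.append(v)
        total += len(reached)
    return total / n_samples
```

On the path 0→1→2 with seed {0} and edge probabilities 0.5, the exact influence is 1.75, and the estimate concentrates around that value as the number of simulations grows, matching the behavior of the MC(N) curves.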
NB-LB outperforms the probabilistic bounds by MC with small sample sizes. Recall that the computational complexity of the lower bound in Algorithm 2 is O(|V| + |E|), which is the computational complexity of a constant number of Monte Carlo simulations. Figure 2b shows that NB-LB is tighter than the probabilistic lower bounds of the same computational complexity, and that it also follows the behavior of the MC simulations.

5 Conclusion

In this paper, we propose both upper and lower bounds on the influence in independent cascade models and provide algorithms to compute the bounds efficiently. We extend the results by proposing tunable bounds, which can adjust the trade-off between efficiency and accuracy. Finally, the tightness and the performance of the bounds are demonstrated experimentally. One can further improve the bounds by considering r-nonbacktracking walks, i.e. avoiding cycles of length r rather than just backtracks; we leave this for future study.

Acknowledgement. The authors thank Colin Sandon for helpful discussions. This research was partly supported by the NSF CAREER Award CCF-1552131 and the ARO grant W911NF-16-1-0051.

References

[1] E. Abbe and C. Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap. arXiv preprint arXiv:1512.09080, 2015.
[2] C. Bordenave, M. Lelarge, and L. Massoulié. Non-backtracking spectrum of random graphs: community detection and non-regular Ramanujan graphs. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 1347–1357. IEEE, 2015.
[3] W. Chen and S.-H. Teng. Interplay between social influence and network centrality: A comparative study on Shapley centrality and single-node-influence centrality. In Proceedings of the 26th International Conference on World Wide Web, pages 967–976. International World Wide Web Conferences Steering Committee, 2017.
[4] W.
Chen, Y. Wang, and S. Yang. Efficient influence maximization in social networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 199–208. ACM, 2009.
[5] W. Chen, Y. Yuan, and L. Zhang. Scalable influence maximization in social networks under the linear threshold model. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 88–97. IEEE, 2010.
[6] M. Draief, A. Ganesh, and L. Massoulié. Thresholds for virus spread on networks. In Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools, page 51. ACM, 2006.
[7] C. M. Fortuin, P. W. Kasteleyn, and J. Ginibre. Correlation inequalities on some partially ordered sets. Communications in Mathematical Physics, 22(2):89–103, 1971.
[8] A. Goyal, W. Lu, and L. V. Lakshmanan. CELF++: optimizing the greedy algorithm for influence maximization in social networks. In Proceedings of the 20th International Conference Companion on World Wide Web, pages 47–48. ACM, 2011.
[9] M. Granovetter. Threshold models of collective behavior. American Journal of Sociology, pages 1420–1443, 1978.
[10] B. Karrer, M. Newman, and L. Zdeborová. Percolation on sparse networks. Physical Review Letters, 113(20):208702, 2014.
[11] B. Karrer and M. E. Newman. Message passing approach for general epidemic models. Physical Review E, 82(1):016101, 2010.
[12] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146. ACM, 2003.
[13] A. Khelil, C. Becker, J. Tian, and K. Rothermel. An epidemic model for information diffusion in MANETs. In Proceedings of the 5th ACM International Workshop on Modeling Analysis and Simulation of Wireless and Mobile Systems, pages 54–60. ACM, 2002.
[14] J. T. Khim, V. Jog, and P.-L. Loh.
Computing and maximizing influence in linear threshold and triggering models. In Advances in Neural Information Processing Systems, pages 4538–4546, 2016.
[15] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang. Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences, 110(52):20935–20940, 2013.
[16] E. J. Lee, S. Kamath, E. Abbe, and S. R. Kulkarni. Spectral bounds for independent cascade model with sensitive edges. In 2016 Annual Conference on Information Science and Systems (CISS), pages 649–653, March 2016.
[17] R. Lemonnier, K. Scaman, and N. Vayatis. Tight bounds for influence in diffusion networks and application to bond percolation and epidemiology. In Advances in Neural Information Processing Systems, pages 846–854, 2014.
[18] J. Leskovec, L. A. Adamic, and B. A. Huberman. The dynamics of viral marketing. ACM Transactions on the Web (TWEB), 1(1):5, 2007.
[19] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 420–429. ACM, 2007.
[20] D. Lopez-Pintado and D. J. Watts. Social influence, binary decisions and collective dynamics. Rationality and Society, 20(4):399–443, 2008.
[21] B. Shulgin, L. Stone, and Z. Agur. Pulse vaccination strategy in the SIR epidemic model. Bulletin of Mathematical Biology, 60(6):1123–1148, 1998.
[22] Y. Tang, X. Xiao, and Y. Shi. Influence maximization: Near-optimal time complexity meets practical efficiency. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, pages 75–86. ACM, 2014.
[23] C. Wang, W. Chen, and Y. Wang. Scalable influence maximization for independent cascade model in large-scale social networks. Data Mining and Knowledge Discovery, 25(3):545–576, 2012.
[24] D. J. Watts. A simple model of global cascades on random networks.
Proceedings of the National Academy of Sciences, 99(9):5766–5771, 2002.
[25] J. Yang and S. Counts. Predicting the speed, scale, and range of information diffusion in Twitter. 2010.
Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls∗

Zeyuan Allen-Zhu (Microsoft Research, Redmond) zeyuan@csail.mit.edu
Elad Hazan (Princeton University) ehazan@cs.princeton.edu
Wei Hu (Princeton University) huwei@cs.princeton.edu
Yuanzhi Li (Princeton University) yuanzhil@cs.princeton.edu

Abstract

We propose a rank-k variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball. Our algorithm replaces the top singular-vector computation (1-SVD) in Frank-Wolfe with a top-k singular-vector computation (k-SVD), which can be done by repeatedly applying 1-SVD k times. Alternatively, our algorithm can be viewed as a rank-k restricted version of projected gradient descent. We show that our algorithm has a linear convergence rate when the objective function is smooth and strongly convex, and the optimal solution has rank at most k. This improves the convergence rate and the total time complexity of the Frank-Wolfe method and its variants.

1 Introduction

Minimizing a convex matrix function over a trace-norm ball, which is (recall that the trace norm ‖X‖_* of a matrix X equals the sum of its singular values)

min_{X ∈ R^{m×n}} f(X) : ‖X‖_* ≤ θ,   (1.1)

is an important optimization problem that serves as a convex surrogate to many low-rank machine learning tasks, including matrix completion [2, 10, 16], multiclass classification [4], phase retrieval [3], polynomial neural nets [12], and more. In this paper we assume without loss of generality that θ = 1.

One natural algorithm for Problem (1.1) is projected gradient descent (PGD). In each iteration, PGD first moves X in the direction of the gradient, and then projects it onto the trace-norm ball. Unfortunately, computing this projection requires the full singular value decomposition (SVD) of the matrix, which takes O(mn min{m, n}) time in general. This prevents PGD from being efficiently applied to problems with large m and n. Alternatively, one can use projection-free algorithms.
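For concreteness, the expensive projection step of PGD can be sketched as follows (a numpy illustration, not the paper's code): a full SVD, followed by the Euclidean projection of the singular values onto the ℓ1 ball of nonnegative vectors, a standard O(k log k) routine.

```python
import numpy as np

def project_simplex_ball(s, radius=1.0):
    """Euclidean projection of a nonnegative vector s onto {a >= 0, sum(a) <= radius}."""
    if s.sum() <= radius:
        return s.copy()
    # standard sort-based simplex projection
    u = np.sort(s)[::-1]
    css = np.cumsum(u) - radius
    rho = np.nonzero(u > css / np.arange(1, len(u) + 1))[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(s - theta, 0.0)

def project_trace_norm_ball(X, radius=1.0):
    """Project X onto {Y : ||Y||_* <= radius} via a full SVD, the
    O(mn * min(m, n)) step that projection-free methods avoid."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_proj = project_simplex_ball(s, radius)
    return (U * s_proj) @ Vt  # rescale the columns of U by the projected spectrum
```

The full SVD here is the bottleneck; Frank-Wolfe replaces it with a single top singular pair, and the rank-k variant discussed in this paper with a top-k SVD.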
As first proposed by Frank and Wolfe [5], one can select a search direction (usually the gradient direction) and perform a linear optimization over the constraint set in this direction. In the case of Problem (1.1), performing linear optimization over a trace-norm ball amounts to computing the top (left and right) singular vectors of a matrix, which can be done much faster than a full SVD. Therefore, projection-free algorithms become attractive for convex minimization over trace-norm balls. Unfortunately, despite its low per-iteration complexity, the Frank-Wolfe (FW) algorithm suffers from a slower convergence rate compared with PGD. When the objective f(X) is smooth, FW requires O(1/ε) iterations to converge to an ε-approximate minimizer, and this 1/ε rate is tight even if the objective is also strongly convex [6]. In contrast, PGD achieves a 1/√ε rate if f(X) is smooth (under Nesterov's acceleration [14]), and a log(1/ε) rate if f(X) is both smooth and strongly convex.

∗ The full version of this paper can be found at https://arxiv.org/abs/1708.02105.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Recently, there have been several results revising the FW method to improve its convergence rate for strongly convex functions. The log(1/ε) rate was obtained when the constraint set is a polyhedron [7, 11], and the 1/√ε rate was obtained when the constraint set is strongly convex [8] or is a spectrahedron [6]. Among these results, the spectrahedron constraint (i.e., the set of all positive semidefinite matrices X with Tr(X) = 1) studied by Garber [6] is almost identical to Problem (1.1), but slightly weaker.² When stating the result of Garber [6], we assume for simplicity that it also applies to Problem (1.1).

Our Question. In this paper, we propose to study the following general question:

Can we design a "rank-k variant" of Frank-Wolfe to improve the convergence rate?
(That is, in each iteration it computes the top k singular vectors, i.e. a k-SVD, of some matrix.) Our motivation to study the above question can be summarized as follows:

• Since FW computes a 1-SVD and PGD computes a full SVD in each iteration, is there a value k ≪ min{n, m} such that a rank-k variant of FW can achieve the convergence rate of PGD?
• Since computing a k-SVD costs roughly the same (sequential) time as computing a 1-SVD k times (see recent work [1, 13]),³ if using a rank-k variant of FW, can the number of iterations be reduced by a factor of more than k? If so, then we can improve the sequential running time of FW.
• A k-SVD can be computed in a more distributed manner than a 1-SVD. For instance, using block Krylov [13], one can distribute the computation of a k-SVD to k machines, each in charge of independent matrix-vector multiplications. Therefore, it is beneficial to study a rank-k variant of FW in such settings.

1.1 Our Results

We propose blockFW, a rank-k variant of Frank-Wolfe. Given a convex function f(X) that is β-smooth, in each iteration t, blockFW performs an update X_{t+1} ← X_t + η(V_t − X_t), where η > 0 is a constant step size and V_t is a rank-k matrix computed from the k-SVD of (−∇f(X_t) + βηX_t). If k = min{n, m}, blockFW can be shown to coincide with PGD, so it can also be viewed as a rank-k restricted version of PGD.

Convergence. Suppose f(X) is also α-strongly convex and the optimal solution X* of Problem (1.1) has rank k. Then we show that blockFW achieves linear convergence: it finds an ε-approximate minimizer within O((β/α) log(1/ε)) iterations, or equivalently, in T = O((kβ/α) log(1/ε)) computations of 1-SVD. We denote by T the number of 1-SVD computations throughout this paper. In contrast,

T_FW = O(β/ε) for Frank-Wolfe, and
T_Gar = O(min{ β/ε, (β/α)^{1/4} (β/ε)^{3/4} √k, (β/α)^{1/2} (β/ε)^{1/2} · 1/σ_min(X*) }) for Garber [6].

Above, σ_min(X*) is the minimum non-zero singular value of X*. Note that σ_min(X*) ≤ ‖X*‖_* / rank(X*) ≤ 1/k.
We note that T_Gar is always outperformed by min{T, T_FW}: ignoring the log(1/ε) factor, we have

• min{β/ε, kβ/α} ≤ (β/α)^{1/4} (β/ε)^{3/4} k^{1/4} < (β/α)^{1/4} (β/ε)^{3/4} √k, and
• min{β/ε, kβ/α} ≤ (β/α)^{1/2} (β/ε)^{1/2} k^{1/2} < (β/α)^{1/2} (β/ε)^{1/2} · 1/σ_min(X*).

² To the best of our knowledge, given an algorithm that works for the spectrahedron, to solve Problem (1.1) one has to define a function g(Y) over (n+m) × (n+m) matrices by setting g(Y) = f(2Y_{1:m, m+1:m+n}) [10]. After this transformation, the function g(Y) is no longer strongly convex, even if f(X) is strongly convex. In contrast, most algorithms for trace-norm balls, including FW and our proposed algorithm, also work for the spectrahedron after minor changes to the analysis.

³ Using block Krylov [13], Lanczos [1], or SVRG [1], at least when k is small, the time complexity of (approximately) computing the top k singular vectors of a matrix is no more than k times the complexity of (approximately) computing the top singular vector of the same matrix. We refer interested readers to [1] for details.

algorithm | rank | # iterations | time complexity per iteration
PGD [14] | min{m, n} | κ log(1/ε) | O(mn min{m, n})
accelerated PGD [14] | min{m, n} | √κ log(1/ε) | O(mn min{m, n})
FrankWolfe [9] | 1 | β/ε | Õ(nnz(∇)) × min{ ‖∇‖_2^{1/2}/ε^{1/2}, ‖∇‖_2^{1/2}/(σ_1(∇) − σ_2(∇))^{1/2} }
Garber [6] | 1 | κ^{1/4} (β/ε)^{3/4} √k, or κ^{1/2} (β/ε)^{1/2} / σ_min(X*) | Õ(nnz(∇) + (m+n)) × min{ ‖∇‖_2^{1/2}/ε^{1/2}, ‖∇‖_2^{1/2}/(σ_1(∇) − σ_2(∇))^{1/2} }
blockFW | k | κ log(1/ε) | k · Õ(nnz(∇) + k(m+n)κ) × min{ (‖∇‖_2 + α)^{1/2}/ε^{1/2}, κ(‖∇‖_2 + α)^{1/2}/(α^{1/2} σ_min(X*)) }

Table 1: Comparison of first-order methods to minimize a β-smooth, α-strongly convex function over the unit trace-norm ball in R^{m×n}. In the table, k is the rank of X*, κ = β/α is the condition number, ∇ = ∇f(X_t) is the gradient matrix, nnz(∇) is the complexity of multiplying ∇ by a vector, σ_i(X) is the i-th largest singular value of X, and σ_min(X) is the minimum non-zero singular value of X.

REMARK.
The low-rank assumption on X* should be reasonable: as we mentioned, in most applications of Problem (1.1), the ultimate reason for imposing a trace-norm constraint is to ensure that the optimal solution is low-rank; otherwise the minimization problem may not be interesting to solve in the first place. The immediate prior work [6] also assumes X* to have low rank.

k-SVD Complexity. For theoreticians concerned about the time complexity of k-SVD, we also compare it with the 1-SVD complexity of FW and Garber. If one uses LazySVD [1]⁴ to compute the k-SVD in each iteration of blockFW, then the per-iteration k-SVD complexity can be bounded by

k · Õ(nnz(∇) + k(m+n)κ) × min{ (‖∇‖_2 + α)^{1/2} / ε^{1/2}, κ(‖∇‖_2 + α)^{1/2} / (α^{1/2} σ_min(X*)) }.   (1.2)

Above, κ = β/α is the condition number of f, ∇ = ∇f(X_t) is the gradient matrix of the current iteration t, nnz(∇) is the complexity of multiplying ∇ by a vector, σ_min(X*) is the minimum non-zero singular value of X*, and Õ hides poly-logarithmic factors. In contrast, if using Lanczos, the 1-SVD complexity for FW and Garber can be bounded by (see [6])

Õ(nnz(∇)) × min{ ‖∇‖_2^{1/2} / ε^{1/2}, ‖∇‖_2^{1/2} / (σ_1(∇) − σ_2(∇))^{1/2} }.   (1.3)

Above, σ_1(∇) and σ_2(∇) are the top two singular values of ∇, and the gap σ_1(∇) − σ_2(∇) can be as small as zero. We emphasize that our k-SVD complexity (1.2) can be upper bounded by a quantity that depends only poly-logarithmically on 1/ε. In contrast, the worst-case 1-SVD complexity (1.3) of FW and Garber depends on ε^{−1/2}, because the gap σ_1 − σ_2 can be as small as zero. Therefore, if one takes this additional ε dependency into consideration for the convergence rate, then blockFW has rate polylog(1/ε), whereas FW and Garber have rates ε^{−3/2} and ε^{−1}, respectively. The convergence rates and per-iteration running times of the different algorithms for solving Problem (1.1) are summarized in Table 1.

Practical Implementation. Besides the theoretical results above, we also provide practical suggestions for implementing blockFW.
Roughly speaking, one can automatically select a different "good" rank k for each iteration. This can be done by iteratively finding the 1st, 2nd, 3rd, etc., top singular vectors of the underlying matrix, and stopping this process whenever the objective decrease is not worth further increasing the value of k. We discuss the details in Section 6.

⁴ In fact, LazySVD is a general framework that says, with meaningful theoretical support, one can apply a reasonable 1-SVD algorithm k times in order to compute a k-SVD. For simplicity, in this paper, whenever referring to LazySVD, we mean applying the Lanczos method k times.

2 Preliminaries and Notation

For a positive integer n, we define [n] := {1, 2, . . . , n}. For a matrix A, we denote by ‖A‖_F, ‖A‖_2, and ‖A‖_* respectively the Frobenius norm, the spectral norm, and the trace norm of A. We use ⟨·, ·⟩ to denote the (Euclidean) inner product between vectors, or the (trace) inner product between matrices (i.e., ⟨A, B⟩ = Tr(AB^⊤)). We denote by σ_i(A) the i-th largest singular value of a matrix A, and by σ_min(A) the minimum non-zero singular value of A. We use nnz(A) to denote the time complexity of multiplying the matrix A by a vector (which is at most the number of non-zero entries of A). We define the (unit) trace-norm ball B_{m,n} in R^{m×n} as B_{m,n} := {X ∈ R^{m×n} : ‖X‖_* ≤ 1}.

Definition 2.1. For a differentiable convex function f : K → R over a convex set K ⊆ R^{m×n}, we say

• f is β-smooth if f(Y) ≤ f(X) + ⟨∇f(X), Y − X⟩ + (β/2)‖X − Y‖_F² for all X, Y ∈ K;
• f is α-strongly convex if f(Y) ≥ f(X) + ⟨∇f(X), Y − X⟩ + (α/2)‖X − Y‖_F² for all X, Y ∈ K.

For Problem (1.1), we assume f is differentiable, β-smooth, and α-strongly convex over B_{m,n}. We denote by κ = β/α the condition number of f, and by X* the minimizer of f(X) over the trace-norm ball B_{m,n}. The strong convexity of f(X) implies:

Fact 2.2. f(X) − f(X*) ≥ (α/2)‖X − X*‖_F² for all X ∈ K.

Proof. The minimality of X* implies ⟨∇f(X*), X − X*⟩ ≥ 0 for all X ∈ K.
The fact then follows from the α-strong convexity of f. □

The Frank-Wolfe Algorithm. We now quickly review the Frank-Wolfe algorithm (see Algorithm 1) and its relation to PGD.

Algorithm 1 Frank-Wolfe
Input: step sizes {ηt}, t ≥ 1 (ηt ∈ [0, 1]), starting point X₁ ∈ B(m,n)
1: for t = 1, 2, . . . do
2:   Vt ← argmin over V ∈ B(m,n) of ⟨∇f(Xt), V⟩  ⋄ by finding the top left/right singular vectors ut, vt of −∇f(Xt), and taking Vt = ut vt⊤
3:   Xt+1 ← Xt + ηt(Vt − Xt)
4: end for

Let ht = f(Xt) − f(X∗) be the approximation error of Xt. The convergence analysis of Algorithm 1 is based on the following relation:

ht+1 = f(Xt + ηt(Vt − Xt)) − f(X∗)
  ≤ ht + ηt⟨∇f(Xt), Vt − Xt⟩ + (β/2)ηt²∥Vt − Xt∥F²   (by the β-smoothness of f)
  ≤ ht + ηt⟨∇f(Xt), X∗ − Xt⟩ + (β/2)ηt²∥Vt − Xt∥F²   (by the choice of Vt in Line 2)
  ≤ (1 − ηt)ht + (β/2)ηt²∥Vt − Xt∥F² .   (by the convexity of f)   (2.1)

Based on (2.1), a suitable choice of step size ηt = Θ(1/t) gives the convergence rate O(β/ε) for the Frank-Wolfe algorithm.

If f is also α-strongly convex, a linear convergence rate can be achieved if we replace the linear optimization step (Line 2) in Algorithm 1 with a constrained quadratic minimization:

Vt ← argmin over V ∈ B(m,n) of ⟨∇f(Xt), V − Xt⟩ + (β/2)ηt∥V − Xt∥F² .  (2.2)

In fact, if Vt is defined as above, we have the following relation similar to (2.1):

ht+1 ≤ ht + ηt⟨∇f(Xt), Vt − Xt⟩ + (β/2)ηt²∥Vt − Xt∥F²
  ≤ ht + ηt⟨∇f(Xt), X∗ − Xt⟩ + (β/2)ηt²∥X∗ − Xt∥F²
  ≤ (1 − ηt + κηt²)ht ,  (2.3)

where the last inequality follows from Fact 2.2. Given (2.3), we can choose ηt = 1/(2κ) to obtain a linear convergence rate, because then ht+1 ≤ (1 − 1/(4κ))ht. This is the main idea behind the projected gradient descent (PGD) method. Unfortunately, computing Vt from (2.2) requires a projection onto B(m,n), and this in turn requires a full singular value decomposition of the matrix ∇f(Xt) − βηtXt.

3 A Rank-k Variant of Frank-Wolfe

Our main idea comes from the following simple observation.
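As a concrete reference point before the rank-k variant, Algorithm 1 is short enough to sketch end to end. The toy instance below is our own code (not from the paper's experiments): it runs vanilla Frank-Wolfe on f(X) = (1/2)∥X − M∥F² over the unit trace-norm ball, using an exact top singular pair as the Line 2 oracle and the classical ηt = 2/(t + 1) step size.

```python
import numpy as np

# Vanilla Frank-Wolfe (Algorithm 1) over the unit trace-norm ball on a toy
# quadratic.  The linear oracle in Line 2 only needs the top singular-vector
# pair of -grad f(X_t); no projection (full SVD) is ever required.
rng = np.random.default_rng(0)
m, n = 8, 6
M = rng.standard_normal((m, n))

def grad_f(X):
    return X - M                          # gradient of 0.5 * ||X - M||_F^2

X = np.zeros((m, n))                      # X_1 = 0 lies in the ball
for t in range(1, 101):
    U, s, Vt = np.linalg.svd(-grad_f(X))  # exact 1-SVD, fine at toy scale
    V = np.outer(U[:, 0], Vt[0])          # V_t = u_t v_t^T, so ||V_t||_* = 1
    eta = 2.0 / (t + 1)                   # classical O(1/t) step size
    X = X + eta * (V - X)

f0 = 0.5 * np.linalg.norm(M, 'fro') ** 2           # objective at X_1 = 0
fT = 0.5 * np.linalg.norm(X - M, 'fro') ** 2       # objective after 100 steps
assert fT < f0                                     # the iterates make progress
```

Every iterate stays feasible because each Xt+1 is a convex combination of points in the trace-norm ball, which is the feasibility argument the paper's analysis also relies on.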
Suppose we choose ηt = η = 1/(2κ) for all iterations, and suppose rank(X∗) ≤ k. Then we can add a low-rank constraint to Vt in (2.2):

Vt ← argmin over V ∈ B(m,n) with rank(V) ≤ k of ⟨∇f(Xt), V − Xt⟩ + (β/2)η∥V − Xt∥F² .  (3.1)

Under this new choice of Vt, the same inequalities in (2.3) continue to hold, and thus the linear convergence rate of PGD is preserved. Let us now discuss how to solve (3.1).

3.1 Solving the Low-Rank Quadratic Minimization (3.1)

Although (3.1) is non-convex, we prove that it can be solved efficiently. To achieve this, we first show that Vt lies in the span of the top k singular vectors of βηXt − ∇f(Xt).

Lemma 3.1. The minimizer Vt of (3.1) can be written as Vt = Σ over i = 1, . . . , k of ai ui vi⊤, where a₁, . . . , ak are non-negative scalars, and (ui, vi) is the pair of left and right singular vectors of At := βηXt − ∇f(Xt) corresponding to its i-th largest singular value.

The proof of Lemma 3.1 is given in the full version of this paper. Owing to Lemma 3.1, we can perform a k-SVD on At to compute {(ui, vi)}, i ∈ [k], plug the expression Vt = Σ ai ui vi⊤ into the objective of (3.1), and then search for the optimal values {ai}, i ∈ [k]. The last step is equivalent to minimizing −Σ σi ai + (β/2)η Σ ai² (where σi = ui⊤ At vi) over the simplex Δ := {a ∈ R^k : a₁, . . . , ak ≥ 0, ∥a∥₁ ≤ 1}, which is the same as projecting the vector (1/(βη))(σ₁, . . . , σk) onto the simplex Δ. This can be solved in O(k log k) time (see for instance the applications in [15]).

3.2 Our Algorithm and Its Convergence

We summarize our algorithm in Algorithm 2 and call it blockFW.

Algorithm 2 blockFW
Input: rank parameter k, starting point X₁ = 0
1: η ← 1/(2κ)
2: for t = 1, 2, . . . do
3:   At ← βηXt − ∇f(Xt)
4:   (u₁, v₁, . . . , uk, vk) ← k-SVD(At)  ⋄ (ui, vi) is the i-th largest pair of left/right singular vectors of At
5:   a ← argmin over a ∈ R^k with a ≥ 0, ∥a∥₁ ≤ 1 of ∥a − (1/(βη))σ∥²  ⋄ where σ := (ui⊤ At vi) for i = 1, . . . , k
6:   Vt ← Σ over i = 1, . . . , k of ai ui vi⊤
7:   Xt+1 ← Xt + η(Vt − Xt)
8: end for

Since the state-of-the-art algorithms for k-SVD are iterative methods, which in theory only give approximate solutions, we now study the convergence of blockFW given approximate k-SVD solvers. We introduce the following notion of an approximate solution to the low-rank quadratic minimization problem (3.1).

Definition 3.2. Let gt(V) = ⟨∇f(Xt), V − Xt⟩ + (β/2)η∥V − Xt∥F² be the objective function in (3.1), and let gt∗ = gt(X∗). Given parameters γ ≥ 0 and ε ≥ 0, a feasible solution V to (3.1) is called (γ, ε)-approximate if it satisfies gt(V) ≤ (1 − γ)gt∗ + ε.

Note that the above multiplicative-additive definition makes sense because gt∗ ≤ 0:

Fact 3.3. If rank(X∗) ≤ k, then for our choice of step size η = 1/(2κ), we have gt∗ = gt(X∗) ≤ −(1 − κη)ht = −ht/2 ≤ 0 according to (2.3).

The next theorem gives the linear convergence of blockFW under such approximate solutions to (3.1). Its proof is simple and uses a variant of (2.3) (see the full version of this paper).

Theorem 3.4. Suppose rank(X∗) ≤ k and ε > 0. If each Vt computed in blockFW is a (1/2, ε/8)-approximate solution to (3.1), then for every t, the error ht = f(Xt) − f(X∗) satisfies

ht ≤ (1 − 1/(8κ))^(t−1) h₁ + ε/2 .

As a consequence, it takes O(κ log(h₁/ε)) iterations to achieve the target error ht ≤ ε. Based on Theorem 3.4, the per-iteration running time of blockFW is dominated by the time necessary to produce a (1/2, ε/8)-approximate solution Vt to (3.1), which we study in Section 4.

4 Per-Iteration Running Time Analysis

In this section, we study the running time necessary to produce a (1/2, ε)-approximate solution Vt to (3.1). In particular, we wish to show a running time that depends only poly-logarithmically on 1/ε.
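As an implementation aside before the running-time analysis: Line 5 of Algorithm 2 above is the O(k log k) Euclidean projection onto the capped simplex Δ from Section 3.1. A minimal sketch of that projection, assuming a sort-based scheme (the function name and code are our own, not the paper's):

```python
import numpy as np

def project_capped_simplex(y):
    """Euclidean projection of y onto {a : a >= 0, sum(a) <= 1}.

    This is the subproblem in Line 5 of blockFW with y = sigma / (beta * eta).
    Runs in O(k log k) due to the sort.  (Our own sketch.)"""
    a = np.maximum(y, 0.0)
    if a.sum() <= 1.0:
        return a                              # clipping alone is feasible
    # Otherwise the projection lies on the probability simplex {a >= 0, sum = 1}:
    u = np.sort(y)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    # largest index j (0-based) with u_j > (css_j - 1) / (j + 1)
    rho = np.max(np.nonzero(u * np.arange(1, len(y) + 1) > css - 1.0)[0])
    tau = (css[rho] - 1.0) / (rho + 1.0)      # uniform shift
    return np.maximum(y - tau, 0.0)

# Clipping suffices when the positive part already sums to <= 1:
assert np.allclose(project_capped_simplex(np.array([0.5, 0.2, -0.1])),
                   [0.5, 0.2, 0.0])
# Otherwise mass is shifted uniformly down onto the simplex boundary:
assert np.allclose(project_capped_simplex(np.array([1.0, 1.0])), [0.5, 0.5])
```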
The reason is that, since we are concerned with the linear convergence rate (i.e., log(1/ε)) in this paper, it is not meaningful to have a per-iteration complexity that scales polynomially with 1/ε.

Remark 4.1. To the best of our knowledge, the Frank-Wolfe method and Garber's method [6] have worst-case per-iteration complexities that scale polynomially with 1/ε. In theory, this also slows down their overall performance in terms of the dependency on 1/ε.

4.1 Step 1: The Necessary k-SVD Accuracy

We first show that if the k-SVD in Line 4 of blockFW is solved sufficiently accurately, then the Vt obtained in Line 6 is a sufficiently good approximate solution to (3.1). For notational simplicity, in this section we denote Gt := ∥∇f(Xt)∥₂ + α, and we let k∗ = rank(X∗) ≤ k.

Lemma 4.2. Suppose γ ∈ [0, 1] and ε ≥ 0. In each iteration t of blockFW, if the vectors u₁, v₁, . . . , uk, vk returned by the k-SVD in Line 4 satisfy ui⊤ At vi ≥ (1 − γ)σi(At) − ε for all i ∈ [k∗], then Vt = Σ ai ui vi⊤ obtained in Line 6 is ((6Gt/ht + 2)γ, ε)-approximate to (3.1).

The proof of Lemma 4.2 is given in the full version of this paper, and is based on our earlier characterization in Lemma 3.1.

4.2 Step 2: The Time Complexity of k-SVD

We recall the following complexity statement for k-SVD:

Theorem 4.3 ([1]). The running time to compute the k-SVD of A ∈ R^(m×n) using LazySVD is⁵

Õ((k·nnz(A) + k²(m + n)) / √γ)  or  Õ((k·nnz(A) + k²(m + n)) / √gap) .

In the former case, we can guarantee ui⊤ A vi ≥ (1 − γ)σi(A) for all i ∈ [k]; in the latter case, if gap ∈ (0, (σk∗(A) − σk∗+1(A)) / σk∗(A)] for some k∗ ∈ [k], then we can guarantee ui⊤ A vi ≥ σi(A) − ε for all i ∈ [k∗].

The First Attempt. Recall that we need a (1/2, ε)-approximate solution to (3.1). Using Lemma 4.2, it suffices to obtain a (1 − γ)-multiplicative approximation to the k-SVD of At (i.e., ui⊤ At vi ≥ (1 − γ)σi(At) for all i ∈ [k]), as long as γ ≤ 1/(12Gt/ht + 4). Therefore, we can directly apply the first running time in Theorem 4.3: Õ((k·nnz(At) + k²(m + n)) / √γ).
However, when ht is very small, this running time can be unbounded. In that case, we observe that γ = ε/Gt (independent of ht) also suffices: since

∥At∥₂ = ∥(α/2)Xt − ∇f(Xt)∥₂ ≤ α/2 + ∥∇f(Xt)∥₂ ≤ Gt ,

from ui⊤ At vi ≥ (1 − ε/Gt)σi(At) we have ui⊤ At vi ≥ σi(At) − (ε/Gt)σi(At) ≥ σi(At) − (ε/Gt)∥At∥₂ ≥ σi(At) − ε; then according to Lemma 4.2 we obtain a (0, ε)-approximation to (3.1), which is stronger than a (1/2, ε)-approximation. We summarize this running time (using γ = ε/Gt) in Claim 4.5; it depends polynomially on 1/ε.

The Second Attempt. To make our linear convergence rate (i.e., the log(1/ε) rate) meaningful, we want the k-SVD running time to depend poly-logarithmically on 1/ε. Therefore, when ht is small, we wish to instead apply the second running time in Theorem 4.3.

⁵The first is known as the gap-free result because it does not depend on the gap between any two singular values. The second is known as the gap-dependent result, and it requires a k×k full SVD after the k approximate singular vectors are computed one by one. The Õ notation hides poly-log factors in 1/ε, 1/γ, m, n, and 1/gap.

Recall that X∗ has rank k∗, so σk∗(X∗) − σk∗+1(X∗) = σmin(X∗). We can show that this implies A∗ := (α/2)X∗ − ∇f(X∗) also has a large gap σk∗(A∗) − σk∗+1(A∗). Now, according to Fact 2.2, when ht is small, Xt and X∗ are sufficiently close. This means At = (α/2)Xt − ∇f(Xt) is also close to A∗, and thus has a large gap σk∗(At) − σk∗+1(At). Then we can apply the second running time in Theorem 4.3.

4.2.1 Formal Running Time Statements

Fact 4.4. We can store Xt as a decomposition into at most rank(Xt) ≤ kt rank-1 components.⁶ Therefore, for At = (α/2)Xt − ∇f(Xt), we have nnz(At) ≤ nnz(∇f(Xt)) + (m + n)·rank(Xt) ≤ nnz(∇f(Xt)) + (m + n)kt.

If we always use the first running time in Theorem 4.3, then Fact 4.4 implies:

Claim 4.5. The k-SVD computation in the t-th iteration of blockFW can be implemented in Õ((k·nnz(∇f(Xt)) + k²(m + n)t)·√(Gt/ε)) time.

Remark 4.6. As long as (m + n)kt ≤ nnz(∇f(Xt)), the k-SVD running time in Claim 4.5 becomes Õ(k·nnz(∇f(Xt))·√(Gt/ε)), which is roughly k times the 1-SVD running time Õ(nnz(∇)·√(∥∇∥₂/ε)) of FW and Garber [6].
Since in practice it suffices to run blockFW and FW for a few hundred 1-SVD computations, the relation (m + n)kt ≤ nnz(∇f(Xt)) is often satisfied.

If, as discussed above, we apply the first running time in Theorem 4.3 only when ht is large, and the second running time in Theorem 4.3 when ht is small, then we obtain the following theorem, whose proof is given in the full version of this paper.

Theorem 4.7. The k-SVD computation in the t-th iteration of blockFW can be implemented in Õ((k·nnz(∇f(Xt)) + k²(m + n)t)·κ√(Gt/α) / σmin(X∗)) time.

Remark 4.8. Since according to Theorem 3.4 we only need to run blockFW for O(κ log(1/ε)) iterations, we can plug t = O(κ log(1/ε)) into Claim 4.5 and Theorem 4.7 and obtain the running time presented in (1.2). The per-iteration running time of blockFW depends poly-logarithmically on 1/ε. In contrast, the per-iteration running times of Garber [6] and FW depend polynomially on 1/ε, making their total running times even worse in terms of the dependency on 1/ε.

5 Maintaining Low-Rank Iterates

One of the main reasons to impose a trace-norm constraint is to produce low-rank solutions. However, the rank of the iterate Xt in our algorithm blockFW can be as large as kt, which is much larger than k, the rank of the optimal solution X∗. In this section, we show that a simple modification to blockFW ensures that the rank of Xt is O(kκ log κ) in all iterations t, without hurting the convergence rate much.

We modify blockFW as follows. Whenever t − 1 is a multiple of S = ⌈8κ(log κ + 1)⌉, we compute (note that this is the same as setting η = 1 in (3.1))

Wt ← argmin over W ∈ B(m,n) with rank(W) ≤ k of ⟨∇f(Xt), W − Xt⟩ + (β/2)∥W − Xt∥F² ,

and let the next iterate Xt+1 be Wt. In all other iterations the algorithm is unchanged. After this change, the function value f(Xt+1) may be greater than f(Xt), but it can be bounded as follows:

Lemma 5.1. Suppose rank(X∗) ≤ k. Then we have f(Wt) − f(X∗) ≤ κht.

Proof.
We have the following relation, similar to (2.3):

f(Wt) − f(X∗) ≤ ht + ⟨∇f(Xt), Wt − Xt⟩ + (β/2)∥Wt − Xt∥F²
  ≤ ht + ⟨∇f(Xt), X∗ − Xt⟩ + (β/2)∥X∗ − Xt∥F²
  ≤ ht − ht + (β/2)·(2/α)·ht = κht . □

⁶In Section 5, we show how to ensure that rank(Xt) is always O(kκ log κ), a quantity independent of t.

From Theorem 3.4 we know that hS+1 ≤ (1 − 1/(8κ))^S h₁ + ε/2 ≤ (1 − 1/(8κ))^(8κ(log κ+1)) h₁ + ε/2 ≤ e^(−(log κ+1)) h₁ + ε/2 = h₁/(eκ) + ε/2. Therefore, after setting XS+2 = WS+1, we still have hS+2 ≤ h₁/e + κε/2 (according to Lemma 5.1). Continuing this analysis (letting the κε here be the "new ε"), we conclude that this modified version of blockFW converges to an ε-approximate minimizer in O(κ log κ · log(h₁/ε)) iterations.

Remark 5.2. Since in each iteration the rank of Xt increases by at most k, if we perform the modified step every S = O(κ log κ) iterations, then throughout the algorithm rank(Xt) is never more than O(kκ log κ). Furthermore, we can always store Xt using O(kκ log κ) vectors, instead of storing all the singular vectors obtained in previous iterations.

6 Preliminary Empirical Evaluation

We conclude this paper with some preliminary experiments to test the performance of blockFW. We first recall two machine learning tasks that fall into Problem (1.1).

Matrix Completion. Suppose there is an unknown matrix M ∈ R^(m×n) that is close to low-rank, and we observe a subset Ω of its entries; that is, we observe M(i,j) for every (i, j) ∈ Ω. (Think of M(i,j) as user i's rating of movie j.) One can recover M by solving the following convex program:

min over X ∈ R^(m×n) of (1/2) Σ over (i,j) ∈ Ω of (X(i,j) − M(i,j))² , subject to ∥X∥∗ ≤ θ .  (6.1)

Although Problem (6.1) is not strongly convex, our experiments show the effectiveness of blockFW on this problem.

Polynomial Neural Networks. Polynomial networks are neural networks with the quadratic activation function σ(a) = a². Livni et al. [12] showed that such networks can express any function computed by a Turing machine, similar to networks with ReLU or sigmoid activations.
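Staying with the first task for a moment: the gradient of the matrix-completion objective (6.1) is supported only on the observed set Ω, which is precisely what keeps nnz(∇f(Xt)) small in the running-time bounds of Section 4. A toy sketch of the objective and its gradient (our own code and dimensions, not the paper's experimental setup):

```python
import numpy as np

# Objective and gradient for matrix completion (6.1).  The gradient is zero
# outside the observed set Omega, so nnz(grad f) <= |Omega| and multiplying
# the gradient by a vector is cheap -- the regime the k-SVD bounds exploit.
rng = np.random.default_rng(0)
m, n = 20, 15
M = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # rank-3 "truth"
mask = rng.random((m, n)) < 0.5                                # Omega indicator

def f(X):
    return 0.5 * np.sum(((X - M) * mask) ** 2)

def grad_f(X):
    return (X - M) * mask              # supported only on Omega

G = grad_f(np.zeros((m, n)))
assert np.all(G[~mask] == 0)           # gradient vanishes off the observed set
assert f(M) == 0.0                     # the true matrix has zero loss
```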
Following [12], we consider the class of 2-layer polynomial networks with inputs from R^d and k hidden neurons:

Pk = { x ↦ Σ over j = 1, . . . , k of aj (wj⊤ x)² : wj ∈ R^d and ∥wj∥₂ = 1 for all j ∈ [k], and a ∈ R^k } .

If we write A = Σ over j = 1, . . . , k of aj wj wj⊤, we have the following equivalent formulation:

Pk = { x ↦ x⊤Ax : A ∈ R^(d×d), rank(A) ≤ k } .

Therefore, if we replace the hard rank constraint with the trace-norm constraint ∥A∥∗ ≤ θ, the task of empirical risk minimization (ERM) given training data {(x₁, y₁), . . . , (xN, yN)} ⊂ R^d × R can be formulated as⁷

min over A ∈ R^(d×d) of (1/2) Σ over i = 1, . . . , N of (xi⊤ A xi − yi)² , subject to ∥A∥∗ ≤ θ .  (6.2)

Since f(A) = (1/2) Σ (xi⊤ A xi − yi)² is convex in A, the above problem falls into Problem (1.1). Again, this objective f(A) may not be strongly convex, but we still perform experiments on it.

6.1 Preliminary Evaluation 1: Matrix Completion on Synthetic Data

We consider the following synthetic experiment for matrix completion. We generate a random rank-10 matrix of dimension 1000 × 1000, plus some small noise. We include each entry in Ω with probability 1/2. We scale M so that ∥M∥∗ = 10000, and accordingly set θ = 10000 in (6.1). We compare blockFW with FW and Garber [6]. When implementing the three algorithms, we use exact line search. For Garber's algorithm, we tune its parameter ηt = c/t with different constant values c, and then search exactly for the optimal η̃t. When implementing blockFW, we use k = 10 and η = 0.2. We use the MATLAB built-in solver for 1-SVD and k-SVD. In Figure 1(a), we compare the numbers of 1-SVD computations for the three algorithms. The plot confirms that it suffices to apply a rank-k variant of FW in order to achieve linear convergence.

6.2 Auto Selection of k

In practice, it is often unrealistic to know k in advance. Although one can simultaneously try k = 1, 2, 4, 8, . . . and output the best solution found, this can be unpleasant to work with. We propose the following modification to blockFW which automatically chooses k. In each iteration t, we first run a 1-SVD and compute the objective decrease, denoted by d₁ ≥ 0.
Then, given any approximate k-SVD decomposition of the matrix At = βηXt − ∇f(Xt), we can compute its (k + 1)-SVD using one additional 1-SVD computation according to the LazySVD framework [1]. We compute the new objective decrease dk+1, and we stop this process and move to the next iteration t + 1 whenever dk+1/(k + 1) < dk/k. In other words, we stop whenever it "appears" not worth further increasing k. We count such an iteration t as using k + 1 computations of 1-SVD. All the experiments on real-life datasets are performed using this auto-k process.

⁷We consider the square loss for simplicity. It can be any loss function ℓ(xi⊤ A xi, yi) convex in its first argument.

[Figure 1: Partial experimental results, plotting log(error) against the number of 1-SVD computations for FW, Garber, and this paper: (a) matrix completion on synthetic data; (b) matrix completion on MOVIELENS1M, θ = 10000; (c) polynomial neural network on MNIST, θ = 0.03. The full 6 plots for MOVIELENS and 3 plots for MNIST are included in the full version of this paper.]

6.3 Preliminary Evaluation 2: Matrix Completion on MOVIELENS

We study the same experiment as Garber [6]: the matrix completion Problem (6.1) on the datasets MOVIELENS100K (m = 943, n = 1862 and |Ω| = 10⁵) and MOVIELENS1M (m = 6040, n = 3952 and |Ω| ≈ 10⁶). For the second dataset, following [6], we further subsample Ω so that it contains about half of the original entries. For each dataset, we run FW, Garber, and blockFW with three different choices of θ.⁸ We present the six plots side-by-side in the full version of this paper. We observe that when θ is large, there is no significant advantage to using blockFW. This is because the rank of the optimal solution X∗ is also high for large θ.
In contrast, when θ is small (so X∗ has low rank), as demonstrated for instance by Figure 1(b), it is indeed beneficial to apply blockFW.

6.4 Preliminary Evaluation 3: Polynomial Neural Network on MNIST

We use the 2-layer neural network Problem (6.2) to train a binary classifier on the MNIST dataset of handwritten digits, where the goal is to distinguish images of the digit "0" from images of other digits. The training set contains N = 60000 examples, each of dimension d = 28 × 28 = 784. We set yi = 1 if example i belongs to digit "0" and yi = 0 otherwise. We divide the original grey levels by 256 so that xi ∈ [0, 1]^d. We again try three different values of θ, and compare FW, Garber, and blockFW.⁹ We present the three plots side-by-side in the full version of this paper. The performance of our algorithm is comparable to FW and Garber for large θ, but as demonstrated for instance by Figure 1(c), when θ is small (so rank(X∗) is small), it is beneficial to use blockFW.

7 Conclusion

In this paper, we develop a rank-k variant of Frank-Wolfe for Problem (1.1) and show that: (1) it converges at a log(1/ε) rate for smooth and strongly convex functions, and (2) its per-iteration complexity scales with polylog(1/ε). Preliminary experiments suggest that the value k can also be selected automatically, and that our algorithm outperforms FW and Garber [6] when X∗ has relatively small rank. We hope more rank-k variants of Frank-Wolfe can be developed in the future.

Acknowledgments

Elad Hazan was supported by NSF grant 1523815 and a Google research award. The authors would like to thank Dan Garber for sharing his code for [6].

⁸We perform exact line search for all algorithms. For Garber [6], we tune the best ηt = c/t and search exactly for the optimal η̃t. For blockFW, we let k be chosen automatically and choose η = 0.01 for all six experiments.

⁹We perform exact line search for all algorithms. For Garber [6], we tune the best ηt = c/t and search exactly for the optimal η̃t. For blockFW, we let k be chosen automatically and choose η = 0.0005 for all three experiments.

References

[1] Zeyuan Allen-Zhu and Yuanzhi Li. LazySVD: Even faster SVD decomposition yet without agonizing pain. In NIPS, pages 974–982, 2016.
[2] Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
[3] Emmanuel J. Candes, Yonina C. Eldar, Thomas Strohmer, and Vladislav Voroninski. Phase retrieval via matrix completion. SIAM Review, 57(2):225–251, 2015.
[4] Miroslav Dudik, Zaid Harchaoui, and Jérôme Malick. Lifted coordinate descent for learning with trace-norm regularization. In AISTATS, pages 327–336, 2012.
[5] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[6] Dan Garber. Faster projection-free convex optimization over the spectrahedron. In NIPS, pages 874–882, 2016.
[7] Dan Garber and Elad Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.
[8] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In ICML, pages 541–549, 2015.
[9] Elad Hazan. Sparse approximate solutions to semidefinite programs. In Latin American Symposium on Theoretical Informatics, pages 306–316. Springer, 2008.
[10] Martin Jaggi and Marek Sulovský. A simple algorithm for nuclear norm regularized problems. In ICML, pages 471–478, 2010.
[11] Simon Lacoste-Julien and Martin Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe algorithms. arXiv preprint arXiv:1312.7864, 2013.
[12] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In NIPS, pages 855–863, 2014.
[13] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. In NIPS, pages 1396–1404, 2015.
[14] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[15] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, December 2005.
[16] Shai Shalev-Shwartz, Alon Gonen, and Ohad Shamir. Large-scale convex minimization with a low-rank constraint. arXiv preprint arXiv:1106.1622, 2011.
Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach

Roel Dobbe∗, Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94720, dobbe@eecs.berkeley.edu
David Fridovich-Keil∗, Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94720, dfk@eecs.berkeley.edu
Claire Tomlin, Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94720, tomlin@eecs.berkeley.edu

Abstract

Learning cooperative policies for multi-agent systems is often challenged by partial observability and a lack of coordination. In some settings, the structure of a problem allows a distributed solution with limited communication. Here, we consider a scenario where no communication is available, and instead we learn local policies for all agents that collectively mimic the solution to a centralized multi-agent static optimization problem. Our main contribution is an information theoretic framework based on rate distortion theory which facilitates analysis of how well the resulting fully decentralized policies are able to reconstruct the optimal solution. Moreover, this framework provides a natural extension that addresses which nodes an agent should communicate with to improve the performance of its individual policy.

1 Introduction

Finding optimal decentralized policies for multiple agents is often a hard problem hampered by partial observability and a lack of coordination between agents. The distributed multi-agent problem has been approached from a variety of angles, including distributed optimization [Boyd et al., 2011], game theory [Aumann and Dreze, 1974] and decentralized or networked partially observable Markov decision processes (POMDPs) [Oliehoek and Amato, 2016, Goldman and Zilberstein, 2004, Nair et al., 2005].
In this paper, we analyze a different approach consisting of a simple learning scheme that designs fully decentralized policies for all agents which collectively mimic the solution to a common optimization problem, while having no access to a global reward signal and either no or restricted access to other agents' local state. This algorithm is a generalization of that proposed in our prior work [Sondermeijer et al., 2016] related to decentralized optimal power flow (OPF). Indeed, the success of regression-based decentralization in the OPF domain motivated us to understand when and how well the method works in a more general decentralized optimal control setting.

The key contribution of this work is to view decentralization as a compression problem, and then apply classical results from information theory to analyze performance limits. More specifically, we treat the ith agent's optimal action in the centralized problem as a random variable u∗i, and model its conditional dependence on the global state variables x = (x₁, . . . , xn), i.e. p(u∗i|x), which we assume to be stationary in time. We now restrict each agent i to observe only the ith state variable xi. Rather than solving this decentralized problem directly, we train each agent to replicate what it would have done with full information in the centralized case. That is, the vector of state variables x is compressed, and the ith agent must decompress xi to compute some estimate ûi ≈ u∗i. In our approach, each agent learns a parameterized Markov control policy ûi = π̂i(xi) via regression. The π̂i are learned from a data set containing local states xi taken from historical measurements of the system state x and corresponding optimal actions u∗i computed by solving an offline centralized optimization problem for each x. In this context, we analyze the fundamental limits of compression.

∗Indicates equal contribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In particular, we are interested in unraveling the relationship between the dependence structure of u∗i and x and the corresponding ability of an agent with partial information to approximate the optimal solution, i.e. the difference, or distortion, between the decentralized action ûi = π̂i(xi) and u∗i. This type of relationship is well studied within the information theory literature as an instance of rate distortion theory [Cover and Thomas, 2012, Chapter 13]. Classical results in this field provide a means of finding a lower bound on the expected distortion as a function of the mutual information, or rate of communication, between u∗i and xi. This lower bound is valid for each specified distortion metric, and for any arbitrary strategy of computing ûi from the available data xi. Moreover, we are able to leverage a similar result to provide a conceptually simple algorithm for choosing a communication structure (letting the regressor π̂i depend on some other local states xj, j ≠ i) in such a way that the lower bound on the expected distortion is minimized. As such, our method generalizes [Sondermeijer et al., 2016] and provides a novel approach for the design and analysis of regression-based decentralized optimal policies for general multi-agent systems. We demonstrate these results on synthetic examples, and on a real example drawn from solving OPF in electrical distribution grids.

2 Related Work

Decentralized control has long been studied within the system theory literature, e.g. [Lunze, 1992, Siljak, 2011]. Recently, various decomposition based techniques have been proposed for distributed optimization based on primal or dual decomposition methods, which all require iterative computation and some form of communication with either a central node [Boyd et al., 2011] or neighbor-to-neighbor on a connected graph [Pu et al., 2014, Raffard et al., 2004, Sun et al., 2013].
Distributed model predictive control (MPC) optimizes a networked system composed of subsystems over a time horizon, which can be decentralized (no communication) if the dynamic interconnections between subsystems are weak in order to achieve closed-loop stability as well as performance [Christofides et al., 2013]. The work of Zeilinger et al. [2013] extended this to systems with strong coupling by employing time-varying distributed terminal set constraints, which requires neighbor-to-neighbor communication. Another class of methods model problems in which agents try to cooperate on a common objective without full state information as a decentralized partially observable Markov decision process (Dec-POMDP) [Oliehoek and Amato, 2016]. Nair et al. [2005] introduce networked distributed POMDPs, a variant of the Dec-POMDP inspired in part by the pairwise interaction paradigm of distributed constraint optimization problems (DCOPs). Although the specific algorithms in these works differ significantly from the regression-based decentralization scheme we consider in this paper, a larger difference is in problem formulation. As described in Sec. 3, we study a static optimization problem repeatedly solved at each time step. Much prior work, especially in optimal control (e.g. MPC) and reinforcement learning (e.g. Dec-POMDPs), poses the problem in a dynamic setting where the goal is to minimize cost over some time horizon. In the context of reinforcement learning (RL), the time horizon can be very long, leading to the well known tradeoff between exploration and exploitation; this does not appear in the static case. Additionally, many existing methods for the dynamic setting require an ongoing communication strategy between agents – though not all, e.g. [Peshkin et al., 2000]. Even one-shot static problems such as DCOPs tend to require complex communication strategies, e.g. [Modi et al., 2005]. 
Although the mathematical formulation of our approach is rather different from prior work, the policies we compute are similar in spirit to other learning and robotic techniques that have been proposed, such as behavioral cloning [Sammut, 1996] and apprenticeship learning [Abbeel and Ng, 2004], which aim to let an agent learn from examples. In addition, we see a parallel with recent work on information-theoretic bounded rationality [Ortega et al., 2015], which seeks to formalize decision-making with limited resources such as the time, energy, memory, and computational effort allocated for arriving at a decision. Our work is also related to swarm robotics [Brambilla et al., 2013], as it learns simple rules aimed at designing robust, scalable and flexible collective behaviors for coordinating a large number of agents or robots.

[Figure 1: (a) shows a connected graph corresponding to a distributed multi-agent system. The circles denote the local state xi of an agent, the dashed arrow denotes its action ui, and the double arrows denote the physical coupling between local state variables. (b) shows the Markov Random Field (MRF) graphical model of the dependency structure of all variables in the decentralized learning problem. Note that the state variables xi and the optimal actions u∗i form a fully connected undirected network, and the local policy ûi only depends on the local state xi.]

3 General Problem Formulation

Consider a distributed multi-agent problem defined by a graph G = (N, E), with N denoting the set of nodes in the network, with cardinality |N| = N, and E representing the set of edges between nodes. Fig. 1a shows a prototypical graph of this sort. Each node has a real-valued state vector xi ∈ R^αi, i ∈ N.
A subset of nodes C ⊂ N, with cardinality |C| = C, are controllable and hence are termed "agents." Each of these agents has an action variable ui ∈ R^βi, i ∈ C. Let x = (x₁, . . . , xN)⊤ ∈ R^(Σ over i ∈ N of αi) =: X denote the full network state vector, and let u ∈ R^(Σ over i ∈ C of βi) =: U denote the stacked network optimization variable. Physical constraints such as spatial coupling are captured through equality constraints g(x, u) = 0. In addition, the system is subject to inequality constraints h(x, u) ≤ 0 that incorporate limits due to capacity, safety, robustness, etc. We are interested in minimizing a convex scalar function fo(x, u) that encodes objectives to be pursued cooperatively by all agents in the network, i.e. we want to find

u∗ = argmin over u of fo(x, u) , s.t. g(x, u) = 0, h(x, u) ≤ 0 .  (1)

Note that (1) is static in the sense that it does not consider the future evolution of the state x or the corresponding future values of the cost fo. We apply this static problem to sequential control tasks by repeatedly solving (1) at each time step. Note that this simplification from an explicitly dynamic problem formulation (i.e. one in which the objective function incorporates future costs) is purely for ease of exposition and for consistency with the OPF literature as in [Sondermeijer et al., 2016]. We could also consider the optimal policy which solves a dynamic optimal control or RL problem, and the decentralized learning step in Sec. 3.1 would remain the same. Since (1) is static, applying the learned decentralized policies repeatedly over time may lead to dynamical instability. Identifying when this will and will not occur is a key challenge in verifying the regression-based decentralization method; however, it is beyond the scope of this work.

3.1 Decentralized Learning

We interpret the process of solving (1) as applying a well-defined function or stationary Markov policy π∗ : X → U that maps an input collective state x to the optimal collective control or action u∗.
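For a fixed state x, the centralized problem (1) is an ordinary constrained convex program. The toy instance below (our own numbers and constraint choices, not from the paper) hands a quadratic objective with one coupling equality and one inequality constraint to SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of problem (1) for one fixed state x: minimize a convex
# quadratic f_o over u subject to g(x, u) = 0 and h(x, u) <= 0.  All
# concrete numbers here are made up for illustration.
x = np.array([1.0, -2.0])

f0 = lambda u: float(np.sum((u - x) ** 2))                 # cooperative objective
constraints = [
    {'type': 'eq',   'fun': lambda u: u[0] + u[1]},        # g(x, u) = 0
    {'type': 'ineq', 'fun': lambda u: 1.0 - u[0] ** 2},    # h(x, u) <= 0, as -h >= 0
]
res = minimize(f0, x0=np.zeros(2), constraints=constraints, method='SLSQP')
u_star = res.x

assert res.success
assert abs(u_star[0] + u_star[1]) < 1e-5     # equality constraint satisfied
assert u_star[0] ** 2 <= 1.0 + 1e-5          # inequality constraint satisfied
```

Solving such an instance offline for each historical state x[t] is what produces the training pairs (x[t], u∗[t]) used in the decentralized learning step below.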
We presume that this solution exists and can be computed offline. Our objective is to learn C decentralized policies û_i = π̂_i(x_i), one for each agent i ∈ C, based on T historical measurements of the states {x[t]}_{t=1}^T and the offline computation of the corresponding optimal actions {u*[t]}_{t=1}^T. Although each policy π̂_i individually aims to approximate u*_i based on local state x_i, we are able to reason about how well their collective action can approximate π*. Figure 2 summarizes the decentralized learning setup.

Figure 2: A flow diagram explaining the key steps of the decentralized regression method, depicted for the example system in Fig. 1a. We first collect data from a multi-agent system, and then solve the centralized optimization problem using all the data. The data is then split into smaller training and test sets for all agents to develop individual decentralized policies π̂_i(x_i) that approximate the optimal solution of the centralized problem. These policies are then implemented in the multi-agent system to collectively achieve a common global behavior.

More formally, we describe the dependency structure of the individual policies π̂_i : R^{α_i} → R^{β_i} with a Markov Random Field (MRF) graphical model, as shown in Fig. 1b. The û_i are only allowed to depend on local state x_i, while the u*_i may depend on the full state x. With this model, we can determine how information is distributed among different variables and what information-theoretic constraints the policies {π̂_i}_{i∈C} are subject to when collectively trying to reconstruct the centralized policy π*.
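The split-and-regress pattern of Fig. 2 can be sketched on synthetic data as follows. The hidden map W, the node assignment, and the linear policy class are all invented for illustration; only the step of splitting {x[t], u*[t]} into local training sets and fitting one policy per agent follows the text.

```python
import numpy as np

# Decentralized learning sketch: given a history of states {x[t]} and
# centrally computed optima {u*[t]}, split the data into local training
# sets and fit one linear policy u_i ~ theta_i * x_i per agent.
rng = np.random.default_rng(1)
T, N = 500, 6                            # samples and state nodes
X = rng.standard_normal((T, N))          # full-state history {x[t]}
W = rng.standard_normal((N, 3))          # hidden toy map from x to u*
U_star = X @ W                           # "offline" optimal actions {u*[t]}

agent_nodes = [1, 4, 5]                  # local state observed by each agent
thetas = []
for i, node in enumerate(agent_nodes):
    xi = X[:, node]                      # local training set {x_i[t], u*_i[t]}
    theta, *_ = np.linalg.lstsq(xi[:, None], U_star[:, i], rcond=None)
    thetas.append(theta[0])

def pi_hat(i, xi):
    """Decentralized policy of agent i: depends only on local state x_i."""
    return thetas[i] * xi
```

Each π̂_i here sees only its own column of the state history, exactly the restriction encoded by the MRF of Fig. 1b.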
Note that although we may refer to π* as globally optimal, this is not actually required for us to reason about how closely the π̂_i approximate π*. That is, our analysis holds even if (1) is solved using approximate methods. In a dynamical reformulation of (1), for example, π* could be generated using techniques from deep RL.

3.2 A Rate-Distortion Framework

We approach the problem of how well the decentralized policies π̂_i can perform in theory from the perspective of rate distortion. Rate distortion theory is a sub-field of information theory which provides a framework for understanding and computing the minimal distortion incurred by any given compression scheme. In a rate distortion context, we can interpret the fact that the output of each individual policy π̂_i depends only on the local state x_i as a compression of the full state x. For a detailed overview, see [Cover and Thomas, 2012, Chapter 10]. We formulate the following variant of the classical rate distortion problem

D* = min_{p(û|u*)} E[d(û, u*)] , (2)
s.t. I(û_i; u*_j) ≤ I(x_i; u*_j) ≜ γ_ij , I(û_i; û_j) ≤ I(x_i; x_j) ≜ δ_ij , ∀i, j ∈ C ,

where I(·; ·) denotes mutual information and d(·, ·) an arbitrary non-negative distortion measure. As usual, the minimum distortion between random variable u* and its reconstruction û may be found by minimizing over conditional distributions p(û|u*). The novelty in (2) lies in the structure of the constraints. Typically, D* is written as a function D(R), where R is the maximum rate or mutual information I(û; u*). From Fig. 1b, however, we know that pairs of reconstructed and optimal actions cannot share more information than is contained in the intermediate nodes in the graphical model, e.g. û_1 and u*_1 cannot share more information than x_1 and u*_1. This is a simple consequence of the data processing inequality [Cover and Thomas, 2012, Thm. 2.8.1].
Similarly, the reconstructed optimal actions at two different nodes cannot be more closely related than the measurements x_i from which they are computed. The resulting constraints are fixed by the joint distribution of the state x and the optimal actions u*. That is, they are fully determined by the structure of the optimization problem (1) that we wish to solve.

We emphasize that we have made virtually no assumptions about the distortion function. For the remainder of this paper, we will measure distortion as the deviation between û_i and u*_i. However, we could also define it to be the suboptimality gap f_o(x, û) − f_o(x, u*), which may be much more complicated to compute. This definition could allow us to reason explicitly about the cost of decentralization, and it could address the valid concern that the optimal decentralized policy may bear no resemblance to π*. We leave further investigation for future work.

3.3 Example: Squared Error, Jointly Gaussian

To provide more intuition into the rate distortion framework, we consider an idealized example in which x_i, u_i ∈ R. Let d(û, u*) = ‖û − u*‖²₂ be the squared error distortion measure, and assume the state x and optimal actions u* to be jointly Gaussian. These assumptions allow us to derive an explicit formula for the optimal distortion D* and corresponding regression policies π̂_i. We begin by stating an identity for two jointly Gaussian X, Y ∈ R with correlation ρ:

I(X; Y) ≤ γ ⟺ ρ² ≤ 1 − e^{−2γ} ,

which follows immediately from the definition of mutual information and the formula for the entropy of a Gaussian random variable.
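The identity is easy to check numerically: for jointly Gaussian scalars, I(X; Y) = −(1/2) log(1 − ρ²) in nats, so the constraint I(X; Y) ≤ γ is tight exactly when ρ² = 1 − e^{−2γ}. The particular value of ρ below is an arbitrary test value.

```python
import numpy as np

# For jointly Gaussian scalars with correlation rho,
# I(X; Y) = -0.5 * log(1 - rho^2) (in nats), hence
# I(X; Y) <= gamma  <=>  rho^2 <= 1 - exp(-2 * gamma).
def gaussian_mi(rho):
    return -0.5 * np.log(1.0 - rho**2)

rho = 0.8
gamma = gaussian_mi(rho)   # the constraint is tight at this correlation
```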
Taking ρ_{û_i,u*_i} to be the correlation between û_i and u*_i, σ²_{û_i} and σ²_{u*_i} to be the variances of û_i and u*_i respectively, and assuming that u*_i and û_i are of equal mean (unbiased policies π̂_i), we can show that the minimum distortion attainable is

D* = min_{p(û|u*)} { E‖u* − û‖²₂ : ρ²_{û_i,u*_i} ≤ 1 − e^{−2γ_ii} = ρ²_{u*_i,x_i} , ∀i ∈ C } (3)
   = min_{{ρ_{û_i,u*_i}},{σ_{û_i}}} { Σ_i σ²_{u*_i} + σ²_{û_i} − 2 ρ_{û_i,u*_i} σ_{u*_i} σ_{û_i} : ρ²_{û_i,u*_i} ≤ ρ²_{u*_i,x_i} } (4)
   = min_{{σ_{û_i}}} Σ_i σ²_{u*_i} + σ²_{û_i} − 2 ρ_{u*_i,x_i} σ_{u*_i} σ_{û_i} (5)
   = Σ_i σ²_{u*_i} (1 − ρ²_{u*_i,x_i}) . (6)

In (4), we have solved for the optimal correlations ρ_{û_i,u*_i}. Unsurprisingly, the optimal value turns out to be the maximum allowed by the mutual information constraint, i.e. û_i should be as correlated to u*_i as possible, and in particular as much as u*_i is correlated to x_i. Similarly, in (5) we solve for the optimal σ_{û_i}, with the result that at optimum, σ_{û_i} = ρ_{u*_i,x_i} σ_{u*_i}. This means that as the correlation between the local state x_i and the optimal action u*_i decreases, the variance of the estimated action û_i decreases as well. As a result, the learned policy will increasingly "bet on the mean" or "listen less" to its local measurement to approximate the optimal action. Moreover, we may also provide a closed form expression for the regressor which achieves the minimum distortion D*. Since we have assumed that each u*_i and the state x are jointly Gaussian, we may write any u*_i as an affine function of x_i plus independent Gaussian noise. Thus, the minimum mean squared error estimator is given by the conditional expectation

û_i = π̂_i(x_i) = E[u*_i | x_i] = E[u*_i] + (ρ_{u*_i,x_i} σ_{u*_i} / σ_{x_i}) (x_i − E[x_i]) . (7)

Thus, we have found a closed form expression for the best regressor π̂_i to predict u*_i from only x_i in the joint Gaussian case with squared error distortion. This result comes as a direct consequence of knowing the true parameterization of the joint distribution p(u*, x) (in this case, as a Gaussian).
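A quick Monte-Carlo check of (6) and (7): sampling a jointly Gaussian pair with known correlation, the conditional-mean policy attains, up to sampling error, the distortion floor σ²_{u*}(1 − ρ²). The distribution parameters below are arbitrary test values.

```python
import numpy as np

# Monte-Carlo check of (6) and (7): for a jointly Gaussian pair
# (x_i, u*_i) with correlation rho, the conditional-mean policy attains
# D* = sigma_u^2 * (1 - rho^2).
rng = np.random.default_rng(2)
rho, sigma_x, sigma_u = 0.9, 2.0, 3.0
T = 200_000
x = sigma_x * rng.standard_normal(T)
# u* is an affine function of x plus independent Gaussian noise.
noise = sigma_u * np.sqrt(1 - rho**2) * rng.standard_normal(T)
u_star = (rho * sigma_u / sigma_x) * x + noise

u_hat = (rho * sigma_u / sigma_x) * x    # policy (7) with zero means
D_emp = np.mean((u_star - u_hat) ** 2)
D_theory = sigma_u**2 * (1 - rho**2)
```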
3.4 Determining Minimum Distortion in Practice

Often in practice, we do not know the parameterization of p(u*|x), and hence it may be intractable to determine D* and the corresponding decentralized policies π̂_i. However, if one can assume that p(u*|x) belongs to a family of parameterized functions (for instance universal function approximators such as deep neural networks), then it is theoretically possible to attain or at least approach minimum distortion for arbitrary non-negative distortion measures. Practically, one can compute the mutual information constraint I(u*_i; x_i) from (2) to understand how much information a regressor π̂_i(x_i) has available to reconstruct u*_i. In the Gaussian case, we were able to compute this mutual information in closed form. For data from general distributions, however, there is often no way to compute mutual information analytically. Instead, we rely on access to sufficient data {x[t], u*[t]}_{t=1}^T in order to estimate mutual information numerically. In such situations (e.g. Sec. 5), we discretize the data and then compute mutual information with a minimax risk estimator, as proposed by Jiao et al. [2014].

4 Allowing Restricted Communication

Suppose that a decentralized policy π̂_i suffers from insufficient mutual information between its local measurement x_i and the optimal action u*_i. In this case, we would like to quantify the potential benefits of communicating with other nodes j ≠ i in order to reduce the distortion limit D* from (2) and improve its ability to reconstruct u*_i. In this section, we present an information-theoretic solution to the problem of how to choose optimally which other data to observe, and we provide a lower-bound-achieving solution for the idealized Gaussian case introduced in Sec. 3.3. We assume that in addition to observing its own local state x_i, each π̂_i is allowed to depend on at most k other x_j, j ≠ i.

Theorem 1.
(Restricted Communication) If S_i is the set of k nodes j ≠ i ∈ N which û_i is allowed to observe in addition to x_i, then setting

S_i = arg max_S I(u*_i; x_i, {x_j : j ∈ S}) : |S| = k , (8)

minimizes the best-case expectation of any distortion measure. That is, this choice of S_i yields the smallest lower bound D* from (2) of any possible choice of S.

Proof. By assumption, S_i maximizes the mutual information between the observed local states {x_i, x_j : j ∈ S_i} and the optimal action u*_i. This mutual information is equivalent to the notion of rate R in the classical rate distortion theorem [Cover and Thomas, 2012]. It is well known that the distortion rate function D(R) is convex and monotone decreasing in R. Thus, by maximizing the mutual information R we are guaranteed to minimize the distortion D(R), and hence D*.

Theorem 1 provides a means of choosing a subset of the state {x_j : j ≠ i} to communicate to each decentralized policy π̂_i that minimizes the corresponding best expected distortion D*. Practically speaking, this result may be interpreted as formalizing the following intuition: "the best thing to do is to transmit the most information." In this case, "transmitting the most information" corresponds to allowing π̂_i to observe the set S of nodes {x_j : j ≠ i} which contains the most information about u*_i. Likewise, by "best" we mean that S_i minimizes the best-case expected distortion D*, for any distortion metric d. As in Sec. 3.3, without making some assumption about the structure of the distribution of x and u*, we cannot guarantee that any particular regressor π̂_i will attain D*. Nevertheless, in a practical situation where sufficient data {x[t], u*[t]}_{t=1}^T is available, we can solve (8) by estimating mutual information [Jiao et al., 2014].

4.1 Example: Joint Gaussian, Squared Error with Communication

Here, we reexamine the jointly Gaussian, mean squared error distortion case from Sec. 3.3 and apply Thm. 1.
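In the jointly Gaussian case, the mutual information in (8) has a closed form, I(u*; x_obs) = ½ log(var(u*) / var(u* | x_obs)), so (8) can be solved by brute force over size-k subsets. The sketch below uses an invented factor-loading covariance in which one extra node is obviously the most informative; the indexing convention (u* at index 0) is also ours.

```python
import numpy as np
from itertools import combinations

# Brute-force solution of (8) for jointly Gaussian (u*, x): maximize
# I(u*; x_local, x_S) = 0.5 * log(var(u*) / var(u* | observed)) over all
# size-k subsets S. Sigma is the joint covariance with u* at index 0 and
# x_1, ..., x_n at indices 1..n; local_idx is the always-observed state.
def best_subset(Sigma, k, local_idx=1):
    n = Sigma.shape[0]
    def mi(obs):
        obs = list(obs)
        S = Sigma[np.ix_(obs, obs)]
        c = Sigma[0, obs]
        cond_var = Sigma[0, 0] - c @ np.linalg.solve(S, c)
        return 0.5 * np.log(Sigma[0, 0] / cond_var)
    others = [j for j in range(1, n) if j != local_idx]
    return max(combinations(others, k), key=lambda S: mi((local_idx,) + S))

# Toy covariance from factor loadings (guaranteed PSD): u* = z, x_2 is a
# nearly noiseless copy of z, x_1 a noisy one, x_3 and x_4 pure noise.
L = np.array([
    [1.00, 0.0, 0.0, 0.0, 0.0],   # u*
    [0.30, 1.0, 0.0, 0.0, 0.0],   # x_1 (local state)
    [0.95, 0.0, 0.3, 0.0, 0.0],   # x_2 (most informative extra node)
    [0.00, 0.0, 0.0, 1.0, 0.0],   # x_3
    [0.00, 0.0, 0.0, 0.0, 1.0],   # x_4
])
Sigma = L @ L.T
```

Here `best_subset(Sigma, 1)` selects x_2, mirroring how x_9 is selected in the synthetic example of Fig. 3a.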
We will take u* ∈ R, x ∈ R^10, and u*, x jointly Gaussian with zero mean and arbitrary covariance. The specific covariance matrix Σ of the joint distribution p(u*, x) is visualized in Fig. 3a. For simplicity, we show the squared correlation coefficients of Σ, which lie in [0, 1]. The boxed cells in Σ in Fig. 3a indicate that x_9 solves (8), i.e. j = 9 maximizes I(u*; x_1, x_j), the mutual information between the observed data and the regression target u*. Intuitively, this choice of j is best because x_9 is highly correlated to u* and weakly correlated to x_1, which is already observed by û; that is, it conveys a significant amount of information about u* that is not already conveyed by x_1. Figure 3b shows empirical results. Along the horizontal axis we increase the value of k, the number of additional variables x_j which the regressor π̂_i observes. The vertical axis shows the resulting average distortion. We show results for a linear regressor of the form of (7), where we have chosen S_i optimally according to (8), as well as uniformly at random from all possible sets of unique indices. Note that the optimal choice of S_i yields the lowest average distortion D* for all choices of k. Moreover, the linear regressor of (7) achieves D* for all k, since we have assumed a Gaussian joint distribution.

Figure 3: Results for optimal communication strategies on a synthetic Gaussian example. (a) shows squared correlation coefficients between u* and all x_i. The boxed entries correspond to x_9, which was found to be optimal for k = 1. (b) shows that the optimal communication strategy of Thm. 1 achieves the lowest average distortion and outperforms the average over random strategies.
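Outside the Gaussian case, the mutual informations needed for (8) must be estimated from samples. The sketch below uses a simple histogram plug-in estimator after discretization rather than the minimax estimator of Jiao et al. [2014] that the paper relies on; it illustrates only the discretize-then-estimate step.

```python
import numpy as np

# Plug-in estimate of I(x; u*) from samples: discretize both variables
# into bins and apply the discrete mutual information formula. This is a
# deliberate simplification of the minimax estimator used in the paper.
def mi_plugin(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

rng = np.random.default_rng(3)
z = rng.standard_normal(100_000)
y_dep = z + 0.3 * rng.standard_normal(100_000)   # strongly dependent pair
y_ind = rng.standard_normal(100_000)             # independent pair
```

With enough samples, the estimator clearly separates the dependent pair from the independent one, which is all the subset selection in (8) needs.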
5 Application to Optimal Power Flow

In this case study, we aim to minimize the voltage variability in an electric grid caused by intermittent renewable energy sources and the increasing load caused by electric vehicle charging. We do so by controlling the reactive power output of distributed energy resources (DERs), while adhering to the physics of power flow and constraints due to energy capacity and safety. Recently, various approaches have been proposed, such as [Farivar et al., 2013] or [Zhang et al., 2014]. In these methods, DERs tend to rely on an extensive communication infrastructure, either with a central master node [Xu et al., 2017] or between agents leveraging local computation [Dall'Anese et al., 2014]. We apply regression-based decentralization as outlined in Sec. 3 and Fig. 2 to the optimal power flow (OPF) problem [Low, 2014], as initially proposed by Sondermeijer et al. [2016]. We apply Thm. 1 to determine the communication strategy that minimizes optimal distortion to further improve the reconstruction of the optimal actions u*_i.

Solving OPF requires a model of the electricity grid describing both topology and impedances; this is represented as a graph G = (N, E). For clarity of exposition and without loss of generality, we introduce the linearized power flow equations over radial networks, also known as the LinDistFlow equations [Baran and Wu, 1989]:

P_ij = Σ_{(j,k)∈E, k≠i} P_jk + p^c_j − p^g_j , (9a)
Q_ij = Σ_{(j,k)∈E, k≠i} Q_jk + q^c_j − q^g_j , (9b)
v_j = v_i − 2 (r_ij P_ij + ξ_ij Q_ij) . (9c)

In this model, capitals P_ij and Q_ij represent real and reactive power flow on a branch from node i to node j for all branches (i, j) ∈ E, lower case p^c_i and q^c_i are the real and reactive power consumption at node i, and p^g_i and q^g_i are its real and reactive power generation. Complex line impedances r_ij + √−1 ξ_ij have the same indexing as the power flows. The LinDistFlow equations use the squared voltage magnitude v_i, defined and indexed at all nodes i ∈ N.
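The structure of (9) can be illustrated on a tiny three-node radial feeder: a backward pass aggregates net power downstream per (9a)-(9b), and a forward pass propagates squared voltage drops per (9c). All the numbers below are made up for illustration.

```python
import numpy as np

# LinDistFlow sweep on a radial feeder 0 -> 1 -> 2.
# Backward pass: accumulate net injections p^c_j - p^g_j downstream (9a, 9b).
p_net = np.array([0.0, 0.4, 0.3])   # p^c_j - p^g_j at each node
q_net = np.array([0.0, 0.1, 0.2])   # q^c_j - q^g_j at each node
P12, Q12 = p_net[2], q_net[2]       # leaf branch carries only node 2's load
P01, Q01 = p_net[1] + P12, q_net[1] + Q12

# Forward pass: squared voltage magnitudes from the substation (9c).
r, xi = 0.01, 0.02                  # branch impedance r_ij + sqrt(-1)*xi_ij
v = np.empty(3)
v[0] = 1.0                          # substation reference (per unit)
v[1] = v[0] - 2 * (r * P01 + xi * Q01)
v[2] = v[1] - 2 * (r * P12 + xi * Q12)
```

Injecting reactive power at a node (increasing q^g) reduces the corresponding q_net entries and therefore raises the downstream voltages, which is exactly the control lever used in the OPF problem below.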
These equations are included as constraints in the optimization problem to enforce that the solution adheres to the laws of physics. To formulate our decentralized learning problem, we take x_i ≜ (p^c_i, q^c_i, p^g_i) to be the local state variable and, for all controllable nodes, i.e. agents i ∈ C, we have u_i ≜ q^g_i, i.e. the reactive power generation can be controlled (v_i, P_ij, Q_ij are treated as dummy variables). We assume that for all nodes i ∈ N, consumption p^c_i, q^c_i and real power generation p^g_i are predetermined respectively by the demand and the power generated by a potential photovoltaic (PV) system. The action space is constrained by the reactive power capacity |u_i| = q^g_i ≤ q̄_i. In addition, voltages are maintained within ±5% of 120 V, which is expressed as the constraint v̲ ≤ v_i ≤ v̄. The OPF problem now reads

u* = arg min_{q^g_i, ∀i∈C} Σ_{i∈N} |v_i − v_ref| , (10)
s.t. (9) , q^g_i ≤ q̄_i , v̲ ≤ v_i ≤ v̄ .

Figure 4: Results for decentralized learning on an OPF problem. (a) shows an example result of decentralized learning: the shaded region represents the range of all voltages in a network over a full day. As compared to no control, the fully decentralized regression-based control reduces voltage variation and prevents constraint violation (dashed line). (b) shows that the optimal communication strategy S_i outperforms the average for random strategies on the mean squared error distortion metric. The regressors used are stepwise linear policies π̂_i with linear or quadratic features.

Following Fig.
2, we employ models of real electrical distribution grids (including the IEEE Test Feeders [IEEE PES, 2017]), which we equip with T historical readings {x[t]}_{t=1}^T of load and PV data, composed of real smart meter measurements sourced from Pecan Street Inc. [2017]. We solve (10) for all data, yielding a set of minimizers {u*[t]}_{t=1}^T. We then separate the overall data set into C smaller data sets {x_i[t], u*_i[t]}_{t=1}^T, ∀i ∈ C, and train linear policies with feature kernels φ_i(·) and parameters θ_i of the form π̂_i(x_i) = θ_i^⊤ φ_i(x_i). Practically, the challenge is to select the best feature kernel φ_i(·). We extend earlier work which showed that decentralized learning for OPF can be done satisfactorily via a hybrid forward- and backward-stepwise selection algorithm [Friedman et al., 2001, Chapter 3] that uses a quadratic feature kernel. Figure 4a shows the result for an electric distribution grid model based on a real network from Arizona. This network has 129 nodes and, in simulation, 53 nodes were equipped with a controllable DER (i.e. N = 129, C = 53). In Fig. 4a we show the voltage deviation from a normalized setpoint on a simulated network with data not used during training. The improvement over the no-control baseline is striking, and performance is nearly identical to the optimum achieved by the centralized solution. Concretely, we observed: (i) no constraint violations, and (ii) a suboptimality deviation of 0.15% on average, with a maximum deviation of 1.6%, as compared to the optimal policy π*. In addition, we applied Thm. 1 to the OPF problem for a smaller network [IEEE PES, 2017], in order to determine the optimal communication strategy to minimize a squared error distortion measure. Fig. 4b shows the mean squared error distortion measure for an increasing number of observed nodes k and shows how the optimal strategy outperforms an average over random strategies.
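The per-agent regression step with a quadratic feature kernel can be sketched as follows. The stepwise forward/backward selection used in the paper is omitted; plain least squares on the full quadratic expansion stands in for it, and the toy target function is invented.

```python
import numpy as np

# Local policy pi_i(x_i) = theta_i^T phi_i(x_i) with a quadratic kernel.
def phi(X):
    """Quadratic features: [1, x, all pairwise products x_a * x_b]."""
    T, d = X.shape
    quad = np.einsum('ta,tb->tab', X, X).reshape(T, d * d)
    return np.hstack([np.ones((T, 1)), X, quad])

rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 3))                  # local state history
u_star = 0.5 * X[:, 0] - X[:, 1] * X[:, 2] + 1.0    # toy "optimal" action
theta, *_ = np.linalg.lstsq(phi(X), u_star, rcond=None)
u_hat = phi(X) @ theta
```

Because the toy target is itself quadratic in x, the fit is exact up to numerical precision; on real OPF data the residual corresponds to the distortion floor analyzed in Sec. 3.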
6 Conclusions and Future Work

This paper generalizes the approach of Sondermeijer et al. [2016] to solve multi-agent static optimal control problems with decentralized policies that are learned offline from historical data. Our rate distortion framework facilitates a principled analysis of the performance of such decentralized policies and the design of optimal communication strategies to improve individual policies. These techniques work well on a model of a sophisticated real-world OPF example.

There are still many open questions about regression-based decentralization. It is well known that strong interactions between different subsystems may lead to instability and suboptimality in decentralized control problems [Davison and Chang, 1990]. There are natural extensions of our work to address dynamic control problems more explicitly, and stability analysis is a topic of ongoing work. Also, analysis of the suboptimality of regression-based decentralization should be possible within our rate distortion framework. Finally, it is worth investigating the use of deep neural networks to parameterize both the distribution p(u*|x) and local policies π̂_i in more complicated decentralized control problems with arbitrary distortion measures.

Acknowledgments

The authors would like to acknowledge Roberto Calandra for his insightful suggestions and feedback on the manuscript. This research is supported by NSF under the CPS Frontiers VehiCal project (1545126), by the UC-Philippine-California Advanced Research Institute under projects IIID-2016005 and IIID-2015-10, and by the ONR MURI Embedded Humans (N00014-16-1-2206). David Fridovich-Keil was also supported by the NSF GRFP.

References

P. Abbeel and A. Y. Ng. Apprenticeship Learning via Inverse Reinforcement Learning. In International Conference on Machine Learning, New York, NY, USA, 2004. ACM.
R. J. Aumann and J. H. Dreze. Cooperative games with coalition structures. International Journal of Game Theory, 3(4):217–237, Dec.
1974.
M. Baran and F. Wu. Optimal capacitor placement on radial distribution systems. IEEE Transactions on Power Delivery, 4(1):725–734, Jan. 1989.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, July 2011.
M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence, 7(1):1–41, Mar. 2013.
P. D. Christofides, R. Scattolini, D. M. de la Pena, and J. Liu. Distributed model predictive control: A tutorial review and future research directions. Computers & Chemical Engineering, 51:21–41, 2013.
T. M. Cover and J. A. Thomas. Elements of information theory. John Wiley & Sons, 2012.
E. Dall'Anese, S. V. Dhople, and G. Giannakis. Optimal dispatch of photovoltaic inverters in residential distribution systems. Sustainable Energy, IEEE Transactions on, 5(2):487–497, 2014. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6719562.
E. J. Davison and T. N. Chang. Decentralized stabilization and pole assignment for general proper systems. IEEE Transactions on Automatic Control, 35(6):652–664, 1990.
M. Farivar, L. Chen, and S. Low. Equilibrium and dynamics of local voltage control in distribution systems. In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC), pages 4329–4334, Dec. 2013. doi: 10.1109/CDC.2013.6760555.
J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
C. V. Goldman and S. Zilberstein. Decentralized control of cooperative systems: Categorization and complexity analysis. J. Artif. Int. Res., 22(1):143–174, Nov. 2004. ISSN 1076-9757. URL http://dl.acm.org/citation.cfm?id=1622487.1622493.
IEEE PES. IEEE Distribution Test Feeders, 2017. URL http://ewh.ieee.org/soc/pes/dsacom/testfeeders/.
J. Jiao, K. Venkat, Y.
Han, and T. Weissman. Minimax Estimation of Functionals of Discrete Distributions. arXiv preprint, June 2014. arXiv:1406.6956.
S. Low. Convex Relaxation of Optimal Power Flow; Part I: Formulations and Equivalence. IEEE Transactions on Control of Network Systems, 1(1):15–27, Mar. 2014.
J. Lunze. Feedback Control of Large Scale Systems. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1992. ISBN 013318353X.
P. J. Modi, W.-M. Shen, M. Tambe, and M. Yokoo. Adopt: Asynchronous distributed constraint optimization with quality guarantees. Artif. Intell., 161(1-2):149–180, Jan. 2005. ISSN 0004-3702. doi: 10.1016/j.artint.2004.09.003. URL http://dx.doi.org/10.1016/j.artint.2004.09.003.
R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked Distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In AAAI, volume 5, pages 133–139, 2005.
F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. Springer International Publishing, 1 edition, 2016.
P. A. Ortega, D. A. Braun, J. Dyer, K.-E. Kim, and N. Tishby. Information-Theoretic Bounded Rationality. arXiv preprint, 2015. arXiv:1512.06789.
Pecan Street Inc. Dataport, 2017. URL http://www.pecanstreet.org/.
L. Peshkin, K.-E. Kim, N. Meuleau, and L. P. Kaelbling. Learning to cooperate via policy search. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, UAI'00, pages 489–496, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1-55860-709-9. URL http://dl.acm.org/citation.cfm?id=2073946.2074003.
Y. Pu, M. N. Zeilinger, and C. N. Jones. Inexact fast alternating minimization algorithm for distributed model predictive control. In Conference on Decision and Control, Los Angeles, CA, USA, 2014. IEEE.
R. L. Raffard, C. J. Tomlin, and S. P. Boyd. Distributed optimization for cooperative agents: Application to formation flight. In Conference on Decision and Control, Nassau, The Bahamas, 2004. IEEE.
C. Sammut.
Automatic construction of reactive control systems using symbolic machine learning. The Knowledge Engineering Review, 11(01):27–42, 1996.
D. D. Siljak. Decentralized control of complex systems. Dover Books on Electrical Engineering. Dover, New York, NY, 2011. URL http://cds.cern.ch/record/1985961.
O. Sondermeijer, R. Dobbe, D. B. Arnold, C. Tomlin, and T. Keviczky. Regression-based Inverter Control for Decentralized Optimal Power Flow and Voltage Regulation. In Power and Energy Society General Meeting, Boston, MA, USA, July 2016. IEEE.
A. X. Sun, D. T. Phan, and S. Ghosh. Fully decentralized AC optimal power flow algorithms. In Power and Energy Society General Meeting, Vancouver, Canada, July 2013. IEEE.
Y. Xu, Z. Y. Dong, R. Zhang, and D. J. Hill. Multi-Timescale Coordinated Voltage/Var Control of High Renewable-Penetrated Distribution Systems. IEEE Transactions on Power Systems, PP(99):1–1, 2017. ISSN 0885-8950. doi: 10.1109/TPWRS.2017.2669343.
M. N. Zeilinger, Y. Pu, S. Riverso, G. Ferrari-Trecate, and C. N. Jones. Plug and play distributed model predictive control based on distributed invariance and optimization. In Conference on Decision and Control, Florence, Italy, 2013. IEEE.
B. Zhang, A. Lam, A. Dominguez-Garcia, and D. Tse. An Optimal and Distributed Method for Voltage Regulation in Power Distribution Systems. IEEE Transactions on Power Systems, PP(99):1–13, 2014. ISSN 0885-8950. doi: 10.1109/TPWRS.2014.2347281.
Neural system identification for large populations separating "what" and "where"

David A. Klindt *1-3, Alexander S. Ecker *1,2,4,6, Thomas Euler 1-3, Matthias Bethge 1,2,4-6
* Authors contributed equally
1 Centre for Integrative Neuroscience, University of Tübingen, Germany
2 Bernstein Center for Computational Neuroscience, University of Tübingen, Germany
3 Institute for Ophthalmic Research, University of Tübingen, Germany
4 Institute for Theoretical Physics, University of Tübingen, Germany
5 Max Planck Institute for Biological Cybernetics, Tübingen, Germany
6 Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
klindt.david@gmail.com, alexander.ecker@uni-tuebingen.de, thomas.euler@cin.uni-tuebingen.de, matthias.bethge@bethgelab.org

Abstract

Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional methods for neural system identification do not capitalize on this separation of "what" and "where". Learning deep convolutional feature spaces that are shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: While new experimental techniques enable recordings from thousands of neurons, experimental time is limited so that one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations – a problem that has been scratched only at the surface thus far. We propose a CNN architecture with a sparse readout layer factorizing the spatial (where) and feature (what) dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification.
Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex.

1 Introduction

In neural system identification, we seek to construct quantitative models that describe how a neuron responds to arbitrary stimuli [1, 2]. In sensory neuroscience, the standard way to approach this problem is with a generalized linear model (GLM): a linear filter followed by a point-wise nonlinearity [3, 4]. However, neurons elicit complex nonlinear responses to natural stimuli even as early as in the retina [5, 6], and the degree of nonlinearity increases as one goes up the visual hierarchy. At the same time, neurons in the same brain area tend to perform similar computations at different positions in the visual field. This separability of what is computed from where it is computed is a key idea underlying the notion of functional cell types tiling the visual field in a retinotopic fashion.

For early visual processing stages like the retina or primary visual cortex, several nonlinear methods have been proposed, including energy models [7, 8], spike-triggered covariance methods [9, 10], linear-nonlinear (LN-LN) cascades [11, 12], convolutional subunit models [13, 14] and GLMs based on handcrafted nonlinear feature spaces [15]. While these models outperform the simple GLM, they still cannot fully account for the responses of even early visual processing stages (i.e. retina, V1), let alone higher-level areas such as V4 or IT. The main problem is that the expressiveness of the model (i.e. number of parameters) is limited by the amount of data that can be collected for each neuron.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The recent success of deep learning in computer vision and other fields has sparked interest in using deep learning methods for understanding neural computations in the brain [16, 17, 18], including promising first attempts to learn feature spaces for neural system identification [19, 20, 21, 22, 23]. In this study, we would like to achieve a better understanding of the possible advantages of deep learning methods over classical tools for system identification by analyzing their effectiveness on ground truth models.

Classical approaches have traditionally been framed as individual multivariate regression problems for each recorded neuron, without exploiting computational similarities between different neurons for regularization. One of the most obvious similarities between different neurons, however, is that the visual system simultaneously extracts similar features at many different locations. Because of this spatial equivariance, the same nonlinear subspace is spanned at many nearby locations and many neurons share similar nonlinear computations. Thus, we should be able to learn much more complex nonlinear functions by combining data from many neurons and learning a common feature space from which we can linearly predict the activity of each neuron.

We propose a convolutional neural network (CNN) architecture with a special readout layer that separates the problem of learning a common feature space from estimating each neuron's receptive field location and cell type, but can still be trained end-to-end on experimental data. We evaluate this model architecture using simple simulations and show its potential for developing a functional characterization of cell types. Moreover, we show that our model outperforms the current state-of-the-art on a publicly available dataset of mouse V1 responses to natural images [19].

2 Related work

Using artificial neural networks to predict neural responses has a long history [24, 25, 26].
Recently, two studies [13, 14] fit two-layer models with a convolutional layer and a pooling layer. They do find marked improvements over GLMs and spike-triggered covariance methods, but like most other previous studies they fit their model only to individual cells' responses and do not exploit computational similarities among neurons.

Antolik et al. [19] proposed learning a common feature space to improve neural system identification. They outperform GLM-based approaches by fitting a multi-layer neural network consisting of parameterized difference-of-Gaussian filters in the first layer, followed by two fully-connected layers. However, because they do not use a convolutional architecture, features are shared only locally. Thus, every hidden unit has to be learned 'from scratch' at each spatial location and the number of parameters in the fully-connected layers grows quadratically with population size.

McIntosh et al. [20] fit a CNN to retinal data. The bottleneck in their approach is the final fully-connected layer that maps the convolutional feature space to individual cells' responses. The number of parameters in this final readout layer grows very quickly and even for their small populations represents more than half of the total number of parameters.

Batty et al. [21] also advocate feature sharing and explore using recurrent neural networks to model the shared feature space. They use a two-step procedure, where they first estimate each neuron's location via the spike-triggered average, then crop the stimulus accordingly for each neuron and then learn a model with shared features. The performance of this approach depends critically on the accuracy of the initial location estimate, which can be problematic for nonlinear neurons with a weak spike-triggered average response (e.g. complex cells in primary visual cortex).
Our contribution is a novel network architecture consisting of a number of convolutional layers followed by a sparse readout layer that factorizes the spatial and feature dimensions. Our approach has two main advantages over prior work. First, it substantially reduces the effective number of parameters in the readout layer while remaining trainable end-to-end. Second, our readout forces all computations to be performed in the convolutional layers, while the factorized readout layer provides an estimate of the receptive field location and the cell type of each neuron. In addition, our work goes beyond the findings of these previous studies by providing a systematic evaluation, on ground truth models, of the advantages of feature sharing in neural system identification – in particular in settings with many neurons and few observations.

Figure 1: Feature sharing makes more efficient use of the available data. Red line: system identification performance with one recorded neuron. Blue lines: performance for a hypothetical population of 10 neurons with identical receptive field shapes whose locations we know. A shared model (solid blue) is equivalent to having 10× as much data, i.e. the performance curve shifts to the left. If we fit all neurons independently (dashed blue), we do not benefit from their similarity.

3 Learning a common feature space

We illustrate why learning a common feature space makes much more efficient use of the available data by considering a simple thought experiment. Suppose we record from ten neurons that all compute exactly the same function, except that they are located at different positions. If we know each neuron's position, we can pool their data to estimate a single model by shifting the stimulus such that it is centered on each neuron's receptive field. In this case we effectively have ten times as much data as in the single-neuron case (Fig. 1, red line) and we will achieve the same model performance with a tenth of the data (Fig.
1, solid blue line). In contrast, if we treat each neuron as an individual regression problem, the performance will on average be identical to the single-neuron case (Fig. 1, dashed blue line). Although this insight is well known from transfer learning in machine learning, it has so far not been applied widely in a neuroscience context. In practice, we neither know the receptive field locations of all neurons a priori, nor do all neurons implement exactly the same nonlinear function. However, the improvements from learning a shared feature space can still be substantial. First, estimating the receptive field location of an individual neuron is a much simpler task than estimating its entire nonlinear function from scratch. Second, we expect the functional response diversity within a cell type to be much smaller than the overall response diversity across cell types [27, 28]. Third, cells in later processing stages (e.g. V1) share the nonlinear computations of their upstream areas (retina, LGN), suggesting that equipping them with a common feature space will simplify learning their individual characteristics [19].

4 Feature sharing in a simple linear ground-truth model

We start by investigating the possible advantages of learning a common feature space with a simple ground truth model – a population of linear neurons with Poisson-like output noise:

$$r_n = a_n^\top s, \qquad y_n \sim \mathcal{N}\!\left(r_n, \sqrt{|r_n|}\right) \tag{1}$$

Here, $s$ is the (Gaussian white noise) stimulus, $r_n$ the firing rate of neuron $n$, $a_n$ its receptive field kernel and $y_n$ its noisy response. In this simple model, the classical GLM-based approach reduces to (regularized) multivariate linear regression, which we compare to a convolutional neural network.

4.1 Convolutional neural network model

Our neural network consists of a convolutional layer and a readout layer (Fig. 2). The first layer convolves the image with a number of kernels to produce K feature maps, followed by batch normalization [29]. There is no nonlinearity in the network (i.e.
the activation function is the identity). Batch normalization ensures that the output has fixed variance, which is important for the regularization in the second layer. The readout layer pools the output, $c$, of the convolutional layer by applying a sparse mask, $q$, for each neuron:

$$\hat{r}_n = \sum_{i,j,k} c_{ijk} \, q_{ijkn} \tag{2}$$

Here, $\hat{r}_n$ is the predicted firing rate of neuron $n$. The mask $q$ is factorized in the spatial and feature dimensions:

$$q_{ijkn} = m_{ijn} w_{kn}, \tag{3}$$

where $m$ is a spatial mask and $w$ is a set of K feature weights for each neuron. The spatial mask and feature weights encode each neuron's receptive field location and cell type, respectively. As we expect them to be highly sparse, we regularize both by an L1 penalty (with strengths $\lambda_m$ and $\lambda_w$).

Figure 2: Our proposed CNN architecture in its simplest form. It consists of a feature space module and a readout layer. The feature space is extracted via one or more convolutional layers (here one is shown). The readout layer computes for each neuron a weighted sum over the entire feature space. To keep the number of parameters tractable and facilitate interpretability, we factorize the readout into a location mask and a vector of feature weights, which are both encouraged to be sparse by regularizing with an L1 penalty.

By factorizing the spatial and feature dimensions in the readout layer, we achieve several useful properties: first, it reduces the number of parameters substantially compared to a fully-connected layer [20]; second, it limits the expressiveness of the layer, forcing the 'computations' down to the convolutional layers, while the readout layer performs only the selection; third, this separation of computation from selection facilitates the interpretation of the learned parameters in terms of functional cell types.
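To make the factorized readout of Eqs. (2)–(3) concrete, the following NumPy sketch (our illustration, not the authors' released code; function and variable names are ours) computes the predicted rates and compares the parameter counts of a factorized versus a fully-connected readout:

```python
import numpy as np

def factorized_readout(c, m, w):
    """Factorized readout: r_hat_n = sum_{i,j,k} c_ijk * m_ijn * w_kn.

    c : (H, W, K) feature maps from the convolutional layers
    m : (H, W, N) spatial mask per neuron
    w : (K, N)    feature weights per neuron
    Returns the (N,) vector of predicted firing rates.
    """
    return np.einsum('ijk,ijn,kn->n', c, m, w)

# Parameter counts for the readout layer (sizes as in Fig. 2 / Section 5)
H, W, K, N = 32, 32, 4, 1000
params_factorized = (H * W + K) * N   # one mask plus K weights per neuron
params_full = H * W * K * N           # one weight per feature-map unit and neuron
```

For these sizes the factorized readout has (32·32 + 4)·1000 ≈ 10^6 parameters, versus 32·32·4·1000 ≈ 4·10^6 for a fully-connected readout; the gap grows linearly with K.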
We minimize the following penalized mean-squared error using the Adam optimizer [30]:

$$\mathcal{L} = \frac{1}{B} \sum_{b,n} \left(y_{bn} - \hat{r}_{bn}\right)^2 + \lambda_m \sum_{i,j,n} |m_{ijn}| + \lambda_w \sum_{k,n} |w_{kn}| \tag{4}$$

where $b$ denotes the sample index and $B = 256$ is the minibatch size. We use an initial learning rate of 0.001 and early stopping based on a separate validation set consisting of 20% of the training set. When the validation error has not improved for 300 consecutive steps, we go back to the best parameter set and decrease the learning rate once by a factor of ten. After the second such decrease, we end training. We find the optimal regularization weights $\lambda_m$ and $\lambda_w$ via grid search. To achieve optimal performance, we found it useful to initialize the masks well. Shifting the convolution kernel by one pixel in one direction while shifting the mask in the opposite direction in principle produces the same output. However, because in practice the filter size is finite, poorly initialized masks can lead to suboptimal solutions with partially cropped filters (cf. Fig. 3C, CNN10). To initialize the masks, we calculated the spike-triggered average for each neuron, smoothed it with a large Gaussian kernel and took the pixel with the maximum absolute value as our initial guess for the neuron's location. We set this pixel to the standard deviation of the neuron's response (because the output of the convolutional layer has unit variance) and initialized the rest of the mask randomly from a Gaussian N(0, 0.001). We initialized the convolution kernels randomly from N(0, 0.01) and the feature weights from N(1/K, 0.01).

4.2 Baseline models

In the linear example studied here, the GLM reduces to simple linear regression. We used two forms of regularization: lasso (L1) and ridge (L2). To maximize the performance of these baseline models, we cropped the stimulus around each neuron's receptive field. Thus, the number of parameters these models have to learn is identical to the number in the convolution kernel of the CNN.
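A minimal NumPy version of the objective in Eq. (4) — our sketch; the function and argument names are ours, and the default regularization strengths are placeholders rather than the grid-searched values:

```python
import numpy as np

def penalized_mse(y, r_hat, m, w, lam_m=0.01, lam_w=0.01):
    """Penalized mean-squared error of Eq. (4).

    y, r_hat : (B, N) observed and predicted responses for a minibatch
    m        : (H, W, N) spatial masks
    w        : (K, N)    feature weights
    lam_m, lam_w : L1 strengths (found via grid search in practice)
    """
    B = y.shape[0]
    mse = ((y - r_hat) ** 2).sum() / B      # (1/B) * sum over b and n
    l1_masks = lam_m * np.abs(m).sum()      # sparsity of location masks
    l1_weights = lam_w * np.abs(w).sum()    # sparsity of feature weights
    return mse + l1_masks + l1_weights
```

In the paper this objective is minimized with Adam; here only the loss value itself is shown.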
Again, we cross-validated over the regularization strength.

4.3 Performance evaluation

To measure the models' performance we compute the fraction of explainable variance explained,

$$\mathrm{FEV} = 1 - \frac{\left\langle (\hat{r} - r)^2 \right\rangle}{\mathrm{Var}(r)}, \tag{5}$$

which is evaluated on the ground-truth firing rates $r$ without observation noise. A perfect model would achieve FEV = 1. We evaluate FEV on a held-out test set not seen during model fitting and cross-validation.

Figure 3: Feature sharing in a homogeneous linear population. A, Population of homogeneous, spatially shifted on-center/off-surround neurons. B, Model comparison: fraction of explainable variance explained vs. the number of samples used for fitting the models. Ordinary least squares (OLS), L1- (Lasso) and L2- (Ridge) regularized regression models are fit to individual neurons. CNN_N are convolutional models with N neurons fit jointly. The dashed line shows the performance (for N → ∞) of estimating the mask given the ground truth convolution kernel. C, Learned filters for different methods and numbers of samples.

4.4 Single cell type, homogeneous population

We first considered the idealized situation where all neurons share the same 17 × 17 px on-center/off-surround filter, but at different locations (Fig. 3A). In other words, there is only one feature map in the convolutional layer (K = 1). We used a 48 × 48 px Gaussian white noise stimulus and scaled the neurons' output such that ⟨|r|⟩ = 0.1, mimicking a neurally plausible signal-to-noise ratio at firing rates of 1 spike/s and an observation window of 100 ms. We simulated populations of N = 1, 10, 100 and 1000 neurons and varied the amount of training data. The CNN model consistently outperformed the linear regression models (Fig. 3B).
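The FEV metric defined in Section 4.3 is straightforward to compute when noiseless ground-truth rates are available; a small sketch (ours):

```python
import numpy as np

def fev(r_true, r_pred):
    """Fraction of explainable variance explained (Eq. 5).

    r_true : noiseless ground-truth firing rates
    r_pred : model predictions
    Returns 1.0 for a perfect model and 0.0 for predicting the mean rate.
    """
    mse = np.mean((r_pred - r_true) ** 2)
    return 1.0 - mse / np.var(r_true)
```

On real data, where only noisy responses are observed, the denominator has to be replaced by an estimate of the explainable (stimulus-driven) variance, which is why the V1 experiments below report correlations instead.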
The ridge-regularized linear regression explained around 60% of the explainable variance with 4,000 samples (i.e. pairs of stimulus and N-dimensional neural response vector). A CNN model pooling over 10 neurons achieved the same level of performance with less than a quarter of the data. The margin in performance increased with the number of neurons pooled over in the model, although the relative improvement started to level off when going from 100 to 1,000 neurons. With few observations, the bottleneck appears to be estimating each neuron's location mask. Two observations support this hypothesis. First, the CNN1000 model learned much 'cleaner' weights with 256 samples than ridge regression with 4,096 (Fig. 3C), although the latter achieved a higher predictive performance (FEV = 55% vs. 65%). This observation suggests that the feature space can be learned efficiently with few samples and many neurons, but that performance is limited by the estimation of the neurons' location masks. Second, when using the ground-truth kernel and optimizing solely the location masks, performance was only marginally better than for 1,000 neurons (Fig. 3B, blue dotted line), indicating an upper performance bound set by the problem of estimating the location masks.

4.5 Functional classification of cell types

Our next step was to investigate whether our model architecture can learn interpretable features and obtain a functional classification of cell types. Using the same simple linear model as above, we simulated two cell types with different filter kernels. To make the simulation a bit more realistic, we made the kernels heterogeneous within each cell type (Fig. 4A). We simulated a population of 1,000 neurons (500 of each type). With sparsity on the readout weights, every neuron has to select one of the two convolutional kernels.
As a consequence, the feature weights represent more or less directly the cell type identity of each neuron (Fig. 4C). This in turn forces the kernels to learn the average of each type (Fig. 4B). However, any other set of kernels spanning the same subspace would have achieved the same predictive performance. Thus, we find that sparsity on the feature weights facilitates interpretability: each neuron chooses one feature channel, which represents the essential computation of this type of neuron.

Figure 4: A, Example receptive fields of two types of neurons, differing in their average size. B, Learned filters of the CNN model. C, Scatter plot of the feature weights for the two cell types.

5 Learning nonlinear feature spaces

5.1 Ground truth model

Next, we investigated how our approach scales to more complex, nonlinear neurons and natural stimuli. To keep the benefits of having ground truth data available, we chose our model neurons from the VGG-19 network [31], a popular CNN trained on large-scale object recognition. We selected four random feature maps from layer conv2_2 as 'cell types'. For each cell type, we picked 250 units with random locations (32 × 32 possible locations). We computed ground-truth responses for all 1,000 cells on 44 × 44 px image patches obtained by randomly cropping images from the ImageNet (ILSVRC2012) dataset. As before, we rescaled the output to produce sparse, neurally plausible mean responses of 0.1 and added Poisson-like noise. We fit a CNN with three convolutional layers consisting of 32, 64 and 4 feature maps (kernel size 5 × 5), followed by our sparse, factorized readout layer (Fig. 5A). Each convolutional layer was followed by batch normalization and a ReLU nonlinearity. We trained the model using Adam with a batch size of 64 and the same initial step size, early stopping, cross-validation and initialization of the masks as described above.
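With three 5 × 5 'valid' convolutions and no pooling, the spatial size of the feature space follows directly from the input size; a quick arithmetic check (our sketch) reproduces the shapes shown in Fig. 5A:

```python
def conv_output_size(n, kernel):
    """Output size of a 'valid' (no padding, stride 1) convolution."""
    return n - kernel + 1

size, channels = 44, 3                # 44 x 44 x 3 input patches
for out_channels in (32, 64, 4):      # the three convolutional layers
    size = conv_output_size(size, 5)  # 5 x 5 kernels throughout
    channels = out_channels

print(size, channels)  # feature space is 32 x 32 with 4 channels
```

The resulting 32 × 32 spatial grid matches the set of possible unit locations used when sampling the VGG 'cell types' above.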
As a baseline, we fit a ridge-regularized GLM with a ReLU nonlinearity followed by an additional bias. To show that our sparse, factorized readout layer is an important feature of our architecture, we also implemented two alternative ways of choosing the readout, which have been proposed in previous work on learning common feature spaces for neural populations. The first approach is to estimate the receptive field location in advance based on the spike-triggered average of each neuron [21].¹ To do so, we determined the pixel with the strongest spike-triggered average. We set this pixel to one in the location mask and all other pixels to zero. We then kept the location mask fixed while optimizing convolution kernels and feature weights. The second approach is to use a fully-connected readout tensor [20] and regularize the activations of all neurons with an L1 penalty. In addition, we regularized the fully-connected readout tensor with L2 weight decay. We fit both models to populations of 1,000 neurons. Our CNN with the factorized readout outperformed all three baselines (Fig. 5B).² The performance of the GLM saturated at ≈20% FEV (Fig. 5B), highlighting the high degree of nonlinearity of our model neurons. Using a fully-connected readout [20] incurred a substantial performance penalty when the number of samples was small and only asymptotically (for a large number of samples) reached the same performance as our factorized readout.

¹ Note that they used a recurrent neural network for the shared feature space. Here we only reproduce their approach to defining the readout.
² It did not reach 100% performance, since the feature space we fit was smaller and the network shallower than the one used to generate the ground truth data.

Estimating the receptive field location in
advance [21] led to a drop in performance – even for large sample sizes. A likely explanation for this finding is the fact that the responses are quite nonlinear and, thus, estimates of the receptive field location via the spike-triggered average (a linear method) are not very reliable, even for large sample sizes. Note that the fact that we can fit the model at all is not trivial, even though the ground truth is a CNN: we have observations of noise-perturbed VGG units whose locations we do not know. Thus, we have to infer both the location of each unit and the complex, nonlinear feature space simultaneously. Our results show that our model solves this task more efficiently than both simpler (GLM) and equally expressive [20] models when the number of samples is relatively small.

Figure 5: Inferring a complex, nonlinear feature space. A, Model architecture. B, Dependence of model performance (FEV) on the number of samples used for training. C, Feature weights of the four cell types for CNN1000 with 2^15 samples cluster strongly. D, Learned location masks for four randomly chosen cells (one per type). E, Dependence of model performance (FEV) on the number of types of neurons in the population, with the number of samples fixed to 2^12.

In addition to fitting the data well, the model also recovered both the cell types and the receptive field locations correctly (Fig. 5C, D). When fit using 2^16 samples (2^10 for validation/test and the rest for training), the readout weights of the four cell types clustered nicely (Fig.
5C) and it successfully recovered the location masks (Fig. 5D). In fact, all cells were classified correctly based on their largest feature weight. Next, we investigated how our model and its competitors [20, 21] fare when scaling up to large recordings with many types of neurons. To simulate this scenario, we again sampled VGG units (from the same layer as above), taking 64 units with random locations from up to 16 different feature maps (i.e. cell types). Correspondingly, we increased the number of feature maps in the last convolutional layer of the models. We fixed the number of training samples to 2^12 to compare models in a challenging regime (cf. Fig. 5B) where performance can be high but is not yet asymptotic. Our CNN model scales gracefully to more diverse neural populations (Fig. 5E), remaining at roughly the same level of performance. Similarly, the CNN with the fixed location masks estimated in advance scales well, although with lower overall performance. In contrast, the performance of the fully-connected readout drops quickly, because the number of parameters in the readout layer grows very fast with the number of feature maps in the final convolutional layer. In fact, we were unable to fit models with more than 16 feature maps with this approach, because the size of the readout tensor became prohibitively large for GPU memory.

Table 1: Application to data from primary visual cortex (V1) of mice [19]. The table shows average correlations between model predictions and neural responses on the test set.

    Scan                                  1      2      3      Average
    Antolik et al. 2016 [19]              0.51   0.43   0.46   0.47
    LNP                                   0.37   0.30   0.38   0.36
    CNN with fully connected readout      0.47   0.34   0.43   0.43
    CNN with fixed mask                   0.45   0.38   0.41   0.42
    CNN with factorized readout (ours)    0.55   0.45   0.49   0.50

Finally, we asked how far we can push our model with long recordings and many neurons. We tested our model with 2^16 training samples from 128 different types of neurons (again 64 units each).
On this large dataset with ≈60,000 recordings from ≈8,000 neurons we were still able to fit the model on a single GPU, reaching 90% FEV (data not shown). Thus, we conclude that our model scales well to large-scale problems with thousands of nonlinear and diverse neurons.

5.2 Application to data from primary visual cortex

To test our approach on real data, going beyond the previously explored retinal data [20, 21], we used the publicly available dataset from Antolik et al. [19].³ The dataset was obtained by two-photon imaging in the primary visual cortex of sedated mice viewing natural images. It contains three scans with 103, 55 and 102 neurons, respectively, and their responses to static natural images. Each scan consists of a training set of images that were each presented once (1800, 1260 and 1800 images, respectively) as well as a test set of 50 images (each image repeated 10, 8 and 12 times, respectively). We use the data in the same form as the original study [19], to which we refer the reader for full details on data acquisition, post-processing and the visual stimulation paradigm. To fit this dataset, we used the same basic CNN architecture described above, with three small modifications. First, we replaced the ReLU activation functions by a soft-thresholding nonlinearity, f(x) = log(1 + exp(x)). Second, we replaced the mean-squared error loss by a Poisson loss (because neural responses are non-negative and the observation noise scales with the mean response). Third, we had to regularize the convolutional kernels, because the dataset is relatively limited in terms of recording length and number of neurons. We used two forms of regularization: smoothness and group sparsity.
Smoothness is achieved by an L2 penalty on the Laplacian of the convolution kernels:

$$\mathcal{L}_{\text{laplace}} = \lambda_{\text{laplace}} \sum_{i,j,k,l} \left( W_{:,:,kl} * L \right)_{ij}^2, \qquad L = \begin{bmatrix} 0.5 & 1 & 0.5 \\ 1 & -6 & 1 \\ 0.5 & 1 & 0.5 \end{bmatrix} \tag{6}$$

where $W_{ijkl}$ is the 4D tensor representing the convolution kernels, $i$ and $j$ index the two spatial dimensions of the filters, and $k$, $l$ the input and output channels. Group sparsity encourages filters to pool from only a small set of feature maps in the previous layer and is defined as

$$\mathcal{L}_{\text{group}} = \lambda_{\text{group}} \sum_{i,j} \sqrt{\sum_{k,l} W_{ijkl}^2}. \tag{7}$$

We fit CNNs with one, two and three layers. After an initial exploration of different CNN architectures (filter sizes, numbers of feature maps) on the first scan, we systematically cross-validated over different filter sizes, numbers of feature maps and regularization strengths via grid search on all three scans. We fit all models using 80% of the training dataset for training and the remaining 20% for validation, using Adam and early stopping as described above. For each scan, we selected the best model based on the likelihood on the validation set. In all three scans, the best model had 48 feature maps per layer and 13 × 13 px kernels in the first layer. The best model for the first two scans had 3 × 3 kernels in the subsequent layers, while for the third scan larger 8 × 8 kernels performed best. We compared our model to four baselines: (a) the Hierarchical Structural Model from the original paper publishing the dataset [19], (b) a regularized linear-nonlinear Poisson (LNP) model, (c) a CNN with fully-connected readout (as in [20]), and (d) a CNN with fixed spatial masks inferred from the spike-triggered averages of each neuron (as in [21]). We used a separate, held-out test set to compare the performance of the models.

³ See [22, 23] for concurrent work on primate V1.
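The two kernel regularizers of Eqs. (6)–(7) can be written compactly. The NumPy sketch below is our illustration (the function names and the hand-rolled 'valid' convolution are ours, and we follow the index grouping exactly as printed in Eq. 7):

```python
import numpy as np

# Discrete Laplacian kernel L from Eq. (6)
LAPLACE = np.array([[0.5,  1.0, 0.5],
                    [1.0, -6.0, 1.0],
                    [0.5,  1.0, 0.5]])

def conv2d_valid(x, kernel):
    """Minimal 'valid' 2D convolution with a small symmetric kernel."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for a in range(kh):
        for b in range(kw):
            out += kernel[a, b] * x[a:a + oh, b:b + ow]
    return out

def laplace_penalty(W, lam):
    """Smoothness penalty of Eq. (6); W has shape (H, W, K_in, K_out)."""
    return lam * sum(
        (conv2d_valid(W[:, :, k, l], LAPLACE) ** 2).sum()
        for k in range(W.shape[2]) for l in range(W.shape[3]))

def group_sparsity_penalty(W, lam):
    """Group-sparsity penalty of Eq. (7): L1 over spatial positions of
    the L2 norm taken across the channel dimensions."""
    return lam * np.sqrt((W ** 2).sum(axis=(2, 3))).sum()
```

Note that the Laplacian penalty vanishes for spatially constant kernels (the kernel L sums to zero), so only non-smooth spatial structure is penalized.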
On the test set, we computed the correlation coefficient between the response predicted by each model and the average observed response across repeats of the same image.⁴ Our CNN with factorized readout outperformed all four baselines on all three scans (Table 1). The other two CNNs, which either did not use a factorized readout (as in [20]) or did not jointly optimize the feature space and readout (as in [21]), performed substantially worse. Interestingly, they did not even reach the performance of [19], which uses a three-layer fully-connected neural network instead of a CNN. Thus, our model is the new state of the art for predicting neural responses in mouse V1, and the factorized readout was necessary to outperform an earlier (and simpler) neural network architecture that also learned a shared feature space for all neurons [19].

6 Discussion

Our results show that the benefits of learning a shared convolutional feature space can be substantial. Predictive performance increases, however, only up to an upper bound imposed by the difficulty of estimating each neuron's location in the visual field. We propose a CNN architecture with a sparse, factorized readout layer that separates these two problems effectively. It allows scaling up the complexity of the convolutional layers to many parallel channels (which are needed to describe diverse, nonlinear neural populations), while keeping the inference problem of each neuron's receptive field location and type identity tractable. Furthermore, our performance curves (see Figs. 3 and 5) may inform experimental designs by determining whether one should aim for longer recordings or more neurons. For instance, if we want to explain at least 80% of the variance in a very homogeneous population of neurons, we could choose to record either ≈2,000 responses from 10 cells or ≈500 responses from 1,000 cells.
Besides making more efficient use of the data to infer neurons' nonlinear computations, the main promise of our new regularization scheme for system identification with CNNs is that the explicit separation of "what" and "where" provides us with a principled way to functionally classify cells into different types: the feature weights of our model can be thought of as a "barcode" identifying each cell type. We are currently working on applying this approach to large-scale data from the retina and primary visual cortex. Later processing stages, such as primary visual cortex, could additionally benefit from similarly exploiting equivariance not only in the spatial domain, but also (approximately) in the orientation or direction-of-motion domain.

Availability of code

The code to fit the models and reproduce the figures is available online at: https://github.com/david-klindt/NIPS2017

Acknowledgements

We thank Philipp Berens, Katrin Franke, Leon Gatys, Andreas Tolias, Fabian Sinz, Edgar Walker and Christian Behrens for comments and discussions. This work was supported by the German Research Foundation (DFG) through Collaborative Research Center (CRC 1233) "Robust Vision" as well as DFG grant EC 479/1-1; the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 674901; and the German Excellence Initiative through the Centre for Integrative Neuroscience Tübingen (EXC307). The research was also supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

⁴ We used the correlation coefficient for evaluation (a) to facilitate comparison with the original study [19] and (b) because estimating FEV on data with a small number of repetitions per image is unreliable.

References

[1] Matteo Carandini, Jonathan B. Demb, Valerio Mante, David J. Tolhurst, Yang Dan, Bruno A. Olshausen, Jack L. Gallant, and Nicole C. Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577–10597, 2005.
[2] Michael C.-K. Wu, Stephen V. David, and Jack L. Gallant. Complete functional characterization of sensory neurons by system identification. Annual Review of Neuroscience, 29:477–505, 2006.
[3] Judson P. Jones and Larry A. Palmer. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1187–1211, 1987.
[4] Alison I. Weber and Jonathan W. Pillow. Capturing the dynamical repertoire of single neurons with generalized linear models. arXiv:1602.07389 [q-bio], 2016.
[5] Tim Gollisch and Markus Meister. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron, 65(2):150–164, 2010.
[6] Alexander Heitman, Nora Brackbill, Martin Greschner, Alexander Sher, Alan M. Litke, and E. J. Chichilnisky. Testing pseudo-linear models of responses to natural scenes in primate retina. bioRxiv, page 45336, 2016.
[7] David H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106, 1962.
[8] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–299, 1985.
[9] Nicole C.
Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, 2005.
[10] Jon Touryan, Gidon Felsen, and Yang Dan. Spatial structure of complex cell receptive fields measured with natural images. Neuron, 45(5):781–791, 2005.
[11] James M. McFarland, Yuwei Cui, and Daniel A. Butts. Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLOS Computational Biology, 9(7):e1003143, 2013.
[12] Esteban Real, Hiroki Asari, Tim Gollisch, and Markus Meister. Neural circuit inference from function to structure. Current Biology, 2017.
[13] Brett Vintch, J. Anthony Movshon, and Eero P. Simoncelli. A convolutional subunit model for neuronal responses in macaque V1. The Journal of Neuroscience, 35(44):14829–14841, 2015.
[14] Ryan J. Rowekamp and Tatyana O. Sharpee. Cross-orientation suppression in visual area V2. Nature Communications, 8, 2017.
[15] Ben Willmore, Ryan J. Prenger, Michael C.-K. Wu, and Jack L. Gallant. The Berkeley wavelet transform: a biologically inspired orthogonal wavelet transform. Neural Computation, 20(6):1537–1564, 2008.
[16] Daniel L. K. Yamins, Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
[17] Ari S. Benjamin, Hugo L. Fernandes, Tucker Tomlinson, Pavan Ramkumar, Chris VerSteeg, Lee Miller, and Konrad P. Kording. Modern machine learning far outperforms GLMs at predicting spikes. bioRxiv, page 111450, 2017.
[18] Seyed-Mahdi Khaligh-Razavi, Linda Henriksson, Kendrick Kay, and Nikolaus Kriegeskorte. Explaining the hierarchy of visual representational geometries by remixing of features from many computational vision models. bioRxiv, page 9936, 2014.
[19] Ján Antolík, Sonja B. Hofer, James A. Bednar, and Thomas D. Mrsic-Flogel.
Model constrained by visual hierarchy improves prediction of neural responses to natural scenes. PLOS Computational Biology, 12(6):e1004927, 2016.
[20] Lane T. McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen A. Baccus. Deep learning models of the retinal response to natural scenes. arXiv:1702.01825 [q-bio, stat], 2017.
[21] Eleanor Batty, Josh Merel, Nora Brackbill, Alexander Heitman, Alexander Sher, Alan Litke, E. J. Chichilnisky, and Liam Paninski. Multilayer recurrent network models of primate retinal ganglion cell responses. In 5th International Conference on Learning Representations, 2017.
[22] William F. Kindel, Elijah D. Christensen, and Joel Zylberberg. Using deep learning to reveal the neural code for images in primary visual cortex. arXiv:1706.06208 [cs, q-bio], 2017.
[23] Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, and Alexander S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. bioRxiv, page 201764, 2017.
[24] S. R. Lehky, T. J. Sejnowski, and R. Desimone. Predicting responses of nonlinear neurons in monkey striate cortex to complex patterns. The Journal of Neuroscience, 12(9):3568–3581, 1992.
[25] Brian Lau, Garrett B. Stanley, and Yang Dan. Computational subunits of visual cortical neurons revealed by artificial neural networks. Proceedings of the National Academy of Sciences, 99(13):8974–8979, 2002.
[26] Ryan Prenger, Michael C. K. Wu, Stephen V. David, and Jack L. Gallant. Nonlinear V1 responses to natural scenes revealed by neural network analysis. Neural Networks, 17(5–6):663–679, 2004.
[27] Tom Baden, Philipp Berens, Katrin Franke, Miroslav R. Rosón, Matthias Bethge, and Thomas Euler. The functional diversity of retinal ganglion cells in the mouse. Nature, 529(7586):345–350, 2016.
[28] Katrin Franke, Philipp Berens, Timm Schubert, Matthias Bethge, Thomas Euler, and Tom Baden.
Inhibition decorrelates visual feature representations in the inner retina. Nature, 542(7642):439–444, 2017. [29] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs], 2015. [30] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014. [31] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. 11 | 2017 | 374 |
Learning Active Learning from Data Ksenia Konyushkova* (CVLab, EPFL, Lausanne, Switzerland; ksenia.konyushkova@epfl.ch), Raphael Sznitman (ARTORG Center, University of Bern, Bern, Switzerland; raphael.sznitman@artorg.unibe.ch), Pascal Fua (CVLab, EPFL, Lausanne, Switzerland; pascal.fua@epfl.ch) Abstract In this paper, we suggest a novel data-driven approach to active learning (AL). The key idea is to train a regressor that predicts the expected error reduction for a candidate sample in a particular learning state. By formulating the query selection procedure as a regression problem we are not restricted to working with existing AL heuristics; instead, we learn strategies based on experience from previous AL outcomes. We show that a strategy can be learnt either from simple synthetic 2D datasets or from a subset of domain-specific data. Our method yields strategies that work well on real data from a wide range of domains. 1 Introduction Many modern machine learning techniques require large amounts of training data to reach their full potential. However, annotated data is hard and expensive to obtain, notably in specialized domains where only experts, whose time is scarce and precious, can provide reliable labels. Active learning (AL) aims to ease the data collection process by automatically deciding which instances an annotator should label to train an algorithm as quickly and effectively as possible. Over the years many AL strategies have been developed for various classification tasks, without any one of them clearly outperforming the others in all cases. Consequently, a number of meta-AL approaches have been proposed to automatically select the best strategy. Recent examples include bandit algorithms [2, 11, 3] and reinforcement learning approaches [5]. A common limitation of these methods is that they cannot go beyond combining pre-existing hand-designed heuristics.
Besides, they require reliable assessment of the classification performance, which is problematic because the annotated data is scarce. In this paper, we overcome these limitations thanks to two features of our approach. First, we look at a whole continuum of AL strategies instead of combinations of pre-specified heuristics. Second, we bypass the need to evaluate the classification quality from application-specific data because we rely on experience from previous tasks and can seamlessly transfer strategies to new domains. More specifically, we formulate Learning Active Learning (LAL) as a regression problem. Given a trained classifier and its output for a specific sample without a label, we predict the reduction in generalization error that can be expected by adding the label to that datapoint. In practice, we show that we can train this regression function on synthetic data by using simple features, such as the variance of the classifier output or the predicted probability distribution over possible labels for a specific datapoint. (*http://ksenia.konyushkova.com. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.) The features for the regression are not domain-specific and this enables us to apply the regressor trained on synthetic data directly to other classification problems. Furthermore, if a sufficiently large annotated set can be provided initially, the regressor can be trained on it instead of on synthetic data. The resulting AL strategy is then tailored to the particular problem at hand. We show that LAL works well on real data from several different domains such as biomedical imaging, economics, molecular biology and high energy physics. This query selection strategy outperforms competing methods without requiring hand-crafted heuristics and at a comparatively low computational cost. 2 Related work The extensive development of AL in the last decade has resulted in various strategies.
They include uncertainty sampling [32, 15, 27, 34], query-by-committee [7, 13], expected model change [27, 30, 33], expected error or variance minimization [14, 9] and information gain [10]. Among these, uncertainty sampling is both simple and computationally efficient. This makes it one of the most popular strategies in real applications. In short, it suggests labeling the samples that the current classifier is most uncertain about, i.e., closest to its decision boundary. The above methods work very well in cases such as the ones depicted in the top row of Fig. 2, but often fail in the more difficult ones depicted in the bottom row [2]. Among AL methods, some cater to specific classifiers, such as those relying on Gaussian processes [16], or to specific applications, such as natural language processing [32, 25], sequence labeling tasks [28], visual recognition [21, 18], semantic segmentation [33], foreground-background segmentation [17], and preference learning [29, 22]. Moreover, various query strategies aim to maximize different performance metrics, as evidenced in the case of multi-class classification [27]. However, no one algorithm consistently outperforms all others in all applications [28]. Meta-learning algorithms have been gaining in popularity in recent years [31, 26], but few of them tackle the problem of learning AL strategies. Baram et al. [2] combine several known heuristics with the help of a bandit algorithm. This is made possible by the maximum entropy criterion, which estimates the classification performance without labels. Hsu et al. [11] improve on it by moving the focus from data samples as arms to heuristics as arms in the bandit, and by using a new unbiased estimator of the test error. Chu and Lin [3] go further and transfer the bandit-learnt combination of AL heuristics between different tasks. Another approach is introduced by Ebert et al. [5]. It involves balancing exploration and exploitation in the choice of samples with a Markov decision process.
The two main limitations of these approaches are as follows. First, they are restricted to combining already existing techniques and, second, their success depends on the ability to estimate the classification performance from scarce annotated data. The data-driven nature of LAL helps to overcome these limitations. Sec. 5 shows that it outperforms several baselines, including those of Hsu et al. [11] and Kapoor et al. [16]. 3 Towards data-driven active learning In this section we briefly introduce the active learning framework along with uncertainty sampling (US), the most frequently-used AL heuristic. Then, we motivate why a data-driven approach can improve AL strategies and how it can deal with the situations where US fails. We select US as a representative method because it is popular and widely applicable; however, the behavior that we describe is typical for a wide range of AL strategies. 3.1 Active learning (AL) Given a machine learning model and a pool of unlabeled data, the goal of AL is to select which data should be annotated in order to learn the model as quickly as possible. In practice, this means that instead of asking experts to annotate all the data, we select iteratively and adaptively which datapoints should be annotated next. In this paper we are interested in classifying datapoints from a target dataset Z = {(x1, y1), . . . , (xN, yN)}, where xi is a D-dimensional feature vector and yi ∈ {0, 1} is its binary label. We choose a probabilistic classifier f that can be trained on some Lt ⊂ Z to map features to labels, ft(xi) = ŷi, through the predicted probability pt(yi = y | xi). The standard AL procedure unfolds as follows. 1. The algorithm starts with a small labeled training dataset Lt ⊂ Z and a large pool of unannotated data Ut = Z \ Lt, with t = 0. 2. A classifier ft is trained using Lt. 3. A query selection procedure picks an instance x* ∈ Ut to be annotated at the next iteration. 4. x* is given a label y* by an oracle.
The labeled and unlabeled sets are updated. 5. t is incremented, and steps 2–5 iterate until the desired accuracy is achieved or the number of iterations has reached a predefined limit. Uncertainty sampling (US) US has been reported to be successful in numerous scenarios and settings and, despite its simplicity, it often works remarkably well [32, 15, 27, 34, 17, 24]. It focuses its selection on samples which the current classifier is the least certain about. There are several definitions of maximum uncertainty, but one of the most widely used is to select a sample x* that maximizes the entropy H over the probability of predicted classes: x* = arg max_{xi ∈ Ut} H[pt(yi = y | xi)]. (1) 3.2 Success, failure, and motivation We now motivate the need for LAL by presenting two toy examples. In the first one, US is empirically observed to be the best greedy approach, but in the second it makes suboptimal decisions. Let us consider simple two-dimensional datasets Z and Z' drawn from the same distribution with an equal number of points in each class (Fig. 1, left). The data in each class comes from a Gaussian distribution with a different mean and the same isotropic covariance. We can initialize the AL procedure of Sec. 3.1 with one sample from each class and its respective label: L0 = {(x1, 0), (x2, 1)} ⊂ Z and U0 = Z \ L0. Here we train a simple logistic regression classifier f on L0 and then test it on Z'. If |Z'| is large, the test error can be considered a good approximation of the generalization error: ℓ0 = Σ_{(x', y') ∈ Z'} ℓ(ŷ, y'), where ŷ = f0(x'). Let us try to label every point x from U0 one by one, form a new labeled set Lx = L0 ∪ {(x, y)} and check what error a new classifier fx yields on Z', that is, ℓx = Σ_{(x', y') ∈ Z'} ℓ(ŷ, y'), where ŷ = fx(x'). The difference between the errors obtained with classifiers trained on L0 and Lx indicates how much the addition of a new datapoint x reduces the generalization error: δx = ℓ0 − ℓx.
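To make the US criterion concrete, here is a minimal sketch of the selection rule in Eq. (1), assuming a scikit-learn-style probabilistic classifier; the function name and the toy two-cloud data are illustrative stand-ins, not the paper's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(clf, X_unlabeled):
    """Eq. (1): return the index of the unlabeled sample whose predictive
    class distribution has maximum entropy."""
    proba = clf.predict_proba(X_unlabeled)                    # shape (n, n_classes)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)  # per-sample entropy
    return int(np.argmax(entropy))

# toy usage: two Gaussian clouds, classifier fit on two labeled seeds per class
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
seed_idx = [0, 1, 50, 51]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[seed_idx], y[seed_idx])
query = uncertainty_sampling(clf, X)   # index of the most uncertain point
```

In a real AL loop, `X` would be restricted to the current unlabeled pool Ut and the returned index sent to the oracle.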
We plot δx for the 0/1 loss function, averaged over 10,000 experiments, as a function of the predicted probability p0 (Fig. 1, left). By design, US would select a datapoint with probability of class 0 close to 0.5. We observe that in this experiment, the datapoint with p0 closest to 0.5 is indeed the one that yields the greatest error reduction. (Figure 1: Balanced vs unbalanced. Left: two Gaussian clouds of the same size. Right: two Gaussian clouds with class 0 twice as big as class 1. The test error reduction as a function of the predicted probability of class 0 in the respective datasets.) In the next experiment, class 0 contains twice as many datapoints as the other class, see Fig. 1 (right). As before, we plot the average error reduction as a function of p0. We observe this time that the value of p0 that corresponds to the largest expected error reduction is different from 0.5, and thus the choice of US becomes suboptimal. Also, the reduction in error is no longer symmetric for the two classes. The more imbalanced the two classes are, the further from the optimum the choice made by US is. In a complex realistic scenario, there are many other factors such as label noise, outliers and the shape of the distribution that further compound the problem. Although query selection procedures can take into account statistical properties of the datasets and classifier, there is no simple way to foresee the influence of all possible factors. Thus, in this paper, we suggest Learning Active Learning (LAL). It uses properties of classifiers and data to predict the potential error reduction. We tackle the query selection problem by using a regression model; this perspective enables us to construct new AL strategies in a flexible way. For instance, in the example of Fig. 1 (right), we expect LAL to learn a model that automatically adapts its selection to the relative prevalence of the two classes without having to explicitly state such a rule.
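The toy computation behind Fig. 1 can be sketched as follows: for a candidate point, retrain the classifier with and without its label and record the reduction in 0/1 test error. This is a simplified re-creation under assumed Gaussian data, not the authors' experiment code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def error_reduction(L_X, L_y, x, y_x, test_X, test_y):
    """delta_x = ell_0 - ell_x: drop in 0/1 test error from labeling (x, y_x)."""
    base = LogisticRegression().fit(L_X, L_y)
    ell_0 = np.mean(base.predict(test_X) != test_y)
    aug = LogisticRegression().fit(np.vstack([L_X, x[None]]),
                                   np.append(L_y, y_x))
    ell_x = np.mean(aug.predict(test_X) != test_y)
    return ell_0 - ell_x

# two Gaussian clouds; one labeled seed per class, as in Sec. 3.2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
delta = error_reduction(X[[0, 100]], y[[0, 100]], X[50], y[50], X, y)
```

Averaging `delta` over many random draws and binning by the predicted probability p0 recovers curves of the kind shown in Fig. 1.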
Moreover, having learnt the error reduction prediction function, we can seamlessly transfer the LAL strategy to other domains with very little annotated data. 4 Monte-Carlo LAL Our approach to AL is data-driven and can be formulated as a regression problem. Given a representative dataset with ground truth, we simulate an online learning procedure using a Monte-Carlo technique. We propose two versions of AL strategies that differ in the way the datasets for learning the regressor are constructed. When building the first one, LALINDEPENDENT, we incorporate unused labels individually and at random to retrain the classifier. Our goal is to correlate the change in test performance with the properties of the classifier and of the newly added datapoint. To build the LALITERATIVE strategy, we further extend our method with a sequential procedure to account for the selection bias caused by AL. We formalize our LAL procedures in the remainder of the section. 4.1 Independent LAL Let the representative dataset2 consist of a training set D and a testing set D'. Let f be a classifier with a given training procedure. We start collecting data for the regressor by splitting D into a labeled set Lτ of size τ and an unlabeled set Uτ containing the remaining points (Alg. 1 DATAMONTECARLO). We then train a classifier f on Lτ, resulting in a function fτ that we use to predict class labels for elements x' from the test set D' and estimate the test classification loss ℓτ. We characterize the classifier state by K parameters φτ = {φτ^1, . . . , φτ^K}, which are specific to the particular classifier type and are sensitive to changes in the training set while being relatively invariant to the stochasticity of the optimization procedure. For example, they can be the parameters of the kernel function if f is kernel-based, the average depths of the trees if f is a tree-based method, or the prediction variability if f is an ensemble classifier. The above steps are summarized in lines 3–5 of Alg. 1.
Algorithm 1 DATAMONTECARLO
1: Input: training set D and test set D', classification procedure f, partitioning function SPLIT, size τ
2: Initialize: Lτ, Uτ ← SPLIT(D, τ)
3: train a classifier fτ
4: estimate the test set loss ℓτ
5: compute the classifier state parameters φτ ← {φτ^1, . . . , φτ^K}
6: for m = 1 to M do
7:   select x ∈ Uτ at random
8:   form a new labeled dataset Lx ← Lτ ∪ {x}
9:   compute the datapoint parameters {ψx^1, . . . , ψx^R}
10:  train a classifier fx
11:  estimate the new test loss ℓx
12:  compute the loss reduction δx ← ℓτ − ℓx
13:  ξm ← [φτ^1 · · · φτ^K ψx^1 · · · ψx^R], δm ← δx
14: Ξ ← {ξm}, Δ ← {δm}, 1 ≤ m ≤ M
15: Return: matrix of learning states Ξ ∈ R^{M×(K+R)}, vector of reductions in error Δ ∈ R^M
(2The representative dataset is an annotated dataset that does not need to come from the domain of interest. In Sec. 5 we show that a simple synthetic dataset is sufficient for learning strategies that can be applied to various real tasks across various domains.)
Algorithm 2 BUILDLALINDEPENDENT
1: Input: iteration range {τmin, . . . , τmax}, classification procedure f
2: SPLIT ← random partitioning function
3: Initialize: generate train set D and test set D'
4: for τ in {τmin, . . . , τmax} do
5:   for q = 1 to Q do
6:     Ξτq, Δτq ← DATAMONTECARLO(D, D', f, SPLIT, τ)
7: Ξ, Δ ← {Ξτq}, {Δτq}
8: train a regressor g : ξ ↦ δ on data Ξ, Δ
9: construct LALINDEPENDENT A(g): x* = arg max_{x ∈ Ut} g(ξt,x)
10: Return: LALINDEPENDENT
Algorithm 3 BUILDLALITERATIVE
1: Input: iteration range {τmin, . . . , τmax}, classification procedure f
2: SPLIT ← random partitioning function
3: Initialize: generate train set D and test set D'
4: for τ in {τmin, . . . , τmax} do
5:   for q = 1 to Q do
6:     Ξτq, Δτq ← DATAMONTECARLO(D, D', f, SPLIT, τ)
7:   Ξτ, Δτ ← {Ξτq, Δτq}
8:   train a regressor gτ : ξ ↦ δ on Ξτ, Δτ
9:   SPLIT ← A(gτ)
10: Ξ, Δ ← {Ξτ, Δτ}
11: train a regressor g : ξ ↦ δ on Ξ, Δ
12: construct LALITERATIVE A(g)
13: Return: LALITERATIVE
Next, we randomly select a new datapoint x from Uτ which is characterized by R parameters ψx = {ψx^1, . . . , ψx^R}. For example, they can include the predicted probability of belonging to class y, the distance to the closest point in the dataset or the distance to the closest labeled point, but they do not include the features of x. We form a new labeled set Lx = Lτ ∪ {x} and retrain f (lines 7–13 of Alg. 1). The new classifier fx results in the test-set loss ℓx. Finally, we record the difference between the previous and new loss, δx = ℓτ − ℓx, which is associated with the learning state in which it was received. The learning state is characterized by a vector ξτ^x = [φτ^1 · · · φτ^K ψx^1 · · · ψx^R] ∈ R^{K+R}, whose elements depend both on the state of the current classifier fτ and on the datapoint x. To build an AL strategy LALINDEPENDENT we repeat the DATAMONTECARLO procedure for Q different initializations Lτ^1, Lτ^2, . . . , Lτ^Q and T various labeled subset sizes τ = 2, . . . , T + 1 (Alg. 2, lines 4 and 5). For each initialization q and iteration τ, we sample M different datapoints x, each of which yields a classifier/datapoint state pair with an associated reduction in error (Alg. 1, line 13). This results in a matrix Ξ ∈ R^{(QMT)×(K+R)} of observations ξ and a vector Δ ∈ R^{QMT} of labels δ (Alg. 2, line 9). Our insight is that observations ξ should lie on a smooth manifold and that similar states of the classifier result in similar behaviors when annotating similar samples. From this, a regression function can predict the potential error reduction of annotating a specific sample in a given classifier state. Line 8 of the BUILDLALINDEPENDENT algorithm looks for a mapping g : ξ ↦ δ. This mapping is not specific to the dataset D, and thus can be used to detect samples that promise the greatest increase in classifier performance in other target domains Z.
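A compressed sketch of the DATAMONTECARLO data-collection loop follows. For brevity it uses only three hand-picked state features (two classifier-state stand-ins for φ and one datapoint feature for ψ) and reuses the training pool as the test set, so it illustrates the bookkeeping rather than reproducing the paper's full feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def data_monte_carlo(X, y, test_X, test_y, tau, M, rng):
    """One DATAMONTECARLO pass: returns (Xi, Delta) with M rows, where each
    row pairs a learning-state vector with the observed loss reduction."""
    order = rng.permutation(len(y))
    lab, unl = order[:tau], order[tau:]
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[lab], y[lab])
    loss_tau = np.mean(clf.predict(test_X) != test_y)
    phi = [float(np.mean(y[lab] == 0)), float(len(lab))]    # classifier-state stand-ins
    Xi, Delta = [], []
    for i in rng.choice(unl, size=M, replace=False):
        psi = [float(clf.predict_proba(X[i][None])[0, 0])]  # datapoint feature
        new_lab = np.append(lab, i)
        clf_x = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[new_lab], y[new_lab])
        loss_x = np.mean(clf_x.predict(test_X) != test_y)
        Xi.append(phi + psi)
        Delta.append(loss_tau - loss_x)
    return np.array(Xi), np.array(Delta)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 2)), rng.normal(3.0, 1.0, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
Xi, Delta = data_monte_carlo(X, y, X, y, tau=8, M=5, rng=rng)
```

Stacking the (Xi, Delta) pairs over many values of τ and many initializations q gives the training set for the regressor g.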
The resulting LALINDEPENDENT strategy greedily selects the datapoint with the highest potential error reduction at iteration t by taking the maximum of the value predicted by the regressor g: x* = arg max_{x ∈ Ut} g(φt, ψx). (2) 4.2 Iterative LAL For any AL strategy at iteration t > 0, the labeled set Lt consists of samples selected at previous iterations, which is clearly not random. However, in Sec. 4.1 the dataset D is split into Lτ and Uτ randomly, no matter how many labeled samples τ are available. To account for this, we modify the approach of Sec. 4.1 in Alg. 3 BUILDLALITERATIVE. Instead of partitioning the dataset D into Lτ and Uτ randomly, we suggest simulating the AL procedure which selects datapoints according to the strategy learnt on the previously collected data (Alg. 3, line 10). It first learns a strategy A(g2) based on a regression function g2 which selects the most promising 3rd datapoint when 2 random points are available. In the next iteration, it learns a strategy A(g3) that selects the 4th datapoint given 2 random points and 1 selected by A(g2), etc. In this way, samples at each iteration depend on the samples at the previous iteration, and the sampling bias of AL is represented in the data Ξ, Δ from which the final strategy LALITERATIVE is learnt. The resulting strategies LALINDEPENDENT and LALITERATIVE are both reasonably fast during the online steps of AL: they just require evaluating the RF regressor. The offline part, generating datasets to learn a regression function, can induce a significant computational cost depending on the parameters of the algorithm. For this reason, LALINDEPENDENT is preferred to LALITERATIVE when an application-specific strategy is needed. 5 Experiments Implementation details We test AL strategies in two possible settings: a) cold start, where we start with one sample from each of two classes, and b) warm start, where a larger dataset of size N0 ≪ N is available to train the initial classifier.
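The greedy selection in Eq. (2) then reduces to one regressor evaluation per candidate. A minimal sketch, with the regressor g fit on stand-in random state/error-reduction pairs purely to make the snippet runnable:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lal_select(g, phi, Psi):
    """Eq. (2): tile the classifier-state features phi onto every candidate's
    datapoint features Psi, score with g, and return the argmax index."""
    states = np.hstack([np.tile(phi, (len(Psi), 1)), Psi])
    return int(np.argmax(g.predict(states)))

rng = np.random.default_rng(0)
Xi = rng.normal(size=(200, 3))     # stand-in learning states (K=2, R=1)
Delta = rng.normal(size=200)       # stand-in error reductions
g = RandomForestRegressor(n_estimators=30, random_state=0).fit(Xi, Delta)
best = lal_select(g, phi=np.array([0.5, 10.0]), Psi=rng.normal(size=(20, 1)))
```

Note that φt is shared by all candidates at iteration t, so only the ψx part of each state vector changes across the pool.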
In cold start we take the representative dataset to be a 2D synthetic dataset where the class-conditional data distributions are Gaussian, and we use the same LAL regressor in all 7 classification tasks. While we mostly concentrate on the cold start scenario, we look at a few examples of warm start because we believe that it is largely overlooked in the literature, yet it has significant practical interest. Learning a classifier for a real-life application with AL rarely starts from scratch; rather, a small initial annotated set is provided to understand whether a learning-based approach is applicable at all. While a small set is good for providing an initial insight, a real working prototype still requires much more training data. In this situation, we can benefit from the available training data to learn a specialized AL strategy for an application. In most of the experiments, we use Random Forest (RF) classifiers for f and a RF regressor for g. The state of the learning process ξt at time t consists of the following features: a) predicted probability p(y = 0 | Lt, x); b) proportion of class 0 in Lt; c) out-of-bag cross-validated accuracy of ft; d) variance of the feature importances of ft; e) forest variance, computed as the variance of the trees' predictions on Ut; f) average tree depth of the forest; g) size of Lt. For additional implementation details, including examples of the synthetic datasets, parameters of the data generation algorithm and features in the case of GP classification, we refer the reader to the supplementary material. The code is made available at https://github.com/ksenia-konyushkova/LAL. Baselines and protocol We consider three versions of our approach: a) LAL-independent-2D, the LALINDEPENDENT strategy trained on a synthetic dataset of cold start; b) LAL-iterative-2D, the LALITERATIVE strategy trained on a synthetic dataset of cold start; c) LAL-independent-WS, the LALINDEPENDENT strategy trained on warm start representative data.
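The seven RF state features a)–g) can be assembled roughly as below, assuming a scikit-learn RandomForestClassifier fitted with oob_score=True; the authors' exact feature code may differ, so treat this as an illustrative sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def learning_state(clf, y_lab, X_unl, x):
    """Features a)-g) for a fitted RF and one candidate point x."""
    per_tree = np.stack([t.predict(X_unl) for t in clf.estimators_])
    return np.array([
        clf.predict_proba(x[None])[0, 0],                   # a) p(y = 0 | L_t, x)
        np.mean(y_lab == 0),                                # b) class-0 proportion in L_t
        clf.oob_score_,                                     # c) out-of-bag accuracy
        np.var(clf.feature_importances_),                   # d) variance of feature importances
        np.mean(np.var(per_tree, axis=0)),                  # e) forest variance on U_t
        np.mean([t.get_depth() for t in clf.estimators_]),  # f) average tree depth
        float(len(y_lab)),                                  # g) size of L_t
    ])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 2)), rng.normal(3.0, 1.0, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
clf = RandomForestClassifier(n_estimators=50, oob_score=True, random_state=0).fit(X[:60], y[:60])
state = learning_state(clf, y[:60], X[60:], X[60])
```

Concatenating this vector over all candidates and feeding it to the trained regressor yields the per-candidate scores used in Eq. (2).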
We compare them against the following 4 baselines: a) Rs, random sampling; b) Us, uncertainty sampling; c) Kapoor [16], an algorithm that balances exploration and exploitation by incorporating mean and variance estimation of the GP classifier; d) ALBE [11], a recent example of meta-AL that adaptively uses a combination of strategies, including Us, Rs and that of Huang et al. [12] (a strategy that uses the topology of the feature space in the query selection). The method of Hsu et al. [11] is chosen as our main baseline because it is a recent example of meta-AL and is known to outperform several benchmarks. In all AL experiments we select samples from a training set and report the classification performance on an independent test set. We repeat each experiment 50–100 times with random permutations of the training and testing splits and different initializations. We then report the average test performance as a function of the number of labeled samples. The performance metrics are task-specific and include classification accuracy, IOU [6], dice score [8], AMS score [1], as well as area under the ROC curve (AUC). 5.1 Synthetic data Two-Gaussian-clouds experiments In this dataset we test our approach with two classifiers: RF and a Gaussian Process classifier (GPC). Due to the computational cost of GPC, it is only tested in this experiment. We generate 100 new unseen synthetic datasets of the form shown in the top row of Fig. 2 and use them for testing AL strategies. In both cases the proposed LAL strategies select datapoints that help to construct better classifiers faster than Rs, Us, Kapoor and ALBE. XOR-like experiments XOR-like datasets are known to be challenging for many machine learning methods, and AL is no exception. It was reported in Baram et al.
[2] that various AL algorithms struggle with tasks such as those depicted in the bottom row of Fig. 2, namely Checkerboard 2×2 and Checkerboard 4×4. (Figure 2: Experiments on the synthetic data. Top row: RF and GP on 2 Gaussian clouds. Bottom row, from left to right: experiments on Checkerboard 2×2, Checkerboard 4×4, and Rotated Checkerboard 2×2 datasets.) Additionally, we consider the Rotated Checkerboard 2×2 dataset (Fig. 2, bottom row, right). The task for RF becomes more difficult in this case because the discriminating features are no longer axis-aligned. As previously observed [2], Us loses to Rs in these cases. ALBE does not suffer from such adversarial conditions as much as Us, but LAL-iterative-2D outperforms it on all XOR-like datasets. 5.2 Real data We now turn to real data from domains where annotating is hard because it requires special training to do correctly: Striatum, a 3D Electron Microscopy stack of rat neural tissue, where the task is to detect and segment mitochondria [20, 17]; MRI, brain scans obtained from the BRATS competition [23], where the task is to segment brain tumors in T1, T2, FLAIR, and post-Gadolinium T1 MR images; Credit card [4], a dataset of credit card transactions made in 2013 by European cardholders, where the task is to detect fraudulent transactions; Splice, a molecular biology dataset with the task of detecting splice junctions in DNA sequences [19]; Higgs, a high energy physics dataset that contains measurements simulating the ATLAS experiment [1], where the task is to detect the Higgs boson in the noise signal.
Additional details about the above datasets, including sizes, dimensionalities and preprocessing techniques, can be found in the supplementary materials. Cold Start AL The top row of Fig. 3 depicts the results of applying Rs, Us, LAL-independent-2D, and LAL-iterative-2D on the Striatum, MRI, and Credit card datasets. Both LAL strategies outperform Us, with LAL-iterative-2D being the better of the two. The best score of Us in these complex real-life tasks is reached 2.2–5 times faster by LAL-iterative-2D. Considering that the LAL regressor was learned using a simple synthetic 2D dataset, it is remarkable that it works effectively on such complex and high-dimensional tasks. Due to the high computational cost of ALBE, we downsample the Striatum and MRI datasets to 2000 datapoints (referred to as Striatum mini and MRI mini). Downsampling was not possible for the Credit card dataset due to the sparsity of positive labels (0.17%). We see in the bottom row of Fig. 3 that ALBE performs worse than Us but better than Rs. (Figure 3: Experiments on real data. Top row: IOU for Striatum, dice score for MRI and AUC for Credit card as a function of the number of labeled points. Bottom row: comparison with ALBE on the Striatum mini and MRI mini datasets.) We ascribe this to the lack of labeled data, which ALBE needs to estimate classification accuracy (see Sec. 2). Warm Start AL In Fig. 4 we compare LAL-independent-WS on the Splice and Higgs datasets by initializing BUILDLALINDEPENDENT with 100 and 200 datapoints from the corresponding tasks. Notice that this is the only experiment where a significant amount of labelled data in the domain of interest is available prior to AL.
We tested ALBE on the Splice dataset; in the Higgs dataset, however, the number of iterations in the experiment is too large for it. LAL-independent-WS outperforms the other methods, with ALBE delivering competitive performance, albeit at a high computational cost, only after many AL iterations. (Figure 4: Experiments on the real datasets in the warm start scenario. Accuracy for Splice is on the left, AMS score for Higgs is on the right.) 5.3 Analysis of LAL strategies and time comparison To better understand LAL strategies, we show in Fig. 5 (left) the relative importance of the features of the regressor g for LALITERATIVE. We observe that both classifier state parameters and datapoint parameters influence the AL selection, giving evidence that both of them are important for selecting a point to label. In order to understand what kind of selection LALINDEPENDENT and LALITERATIVE perform, we record the predicted probability of the chosen datapoint, p(y* = 0 | Dt, x*), in 10 cold start experiments with the same initialization on the MRI dataset. Fig. 5 (right) shows the histograms of these probabilities for Us, LAL-independent-2D and LAL-iterative-2D. (Figure 5: Left: feature importances of the RF regressor representing the LALITERATIVE strategy. Right: histograms of the selected probability for different AL strategies in experiments with the MRI dataset.) The LAL strategies have high variance and modes different from 0.5. Not only does the selection by LAL strategies differ significantly from standard Us, but the independent and iterative approaches also differ from each other.
Computational costs While collecting synthetic data can be slow, it must only be done once, offline, for all applications. Besides, Alg. 1, 2 and 3 can be trivially parallelised thanks to a number of independent loops. Collecting application-specific data offline for warm start took us approximately 2.7h and 1.9h for the Higgs and Splice datasets, respectively. By contrast, the online user-interaction part is fast: it simply consists of learning ft, extracting the learning state parameters and evaluating the regressor g. The LAL run time depends on the parameters of the random forest regressor, which are estimated via cross-validation (discussed in the supplementary materials). Run times of a Python-based implementation running on 1 core are given in Tab. 1 for a typical parameter set (±20% depending on exact parameter values). Real-time performance can be attained by parallelising and optimising the code, even in applications with large amounts of high-dimensional data.
Table 1: Time in seconds for one iteration of AL for various strategies and tasks.
Dataset        Dimensions  # samples  Us    ALBE   LAL
Checkerboard   2           1000       0.11  13.12  0.54
MRI mini       188         2000       0.11  64.52  0.55
MRI            188         22,934     0.12  —      0.88
Striatum mini  272         2000       0.11  75.64  0.59
Striatum       272         276,130    2.05  —      19.50
Credit         30          142,404    0.43  —      4.73
6 Conclusion In this paper we introduced a new approach to AL that is driven by data: Learning Active Learning. We found that Learning Active Learning from simple 2D data generalizes remarkably well to challenging new domains. Learning from a subset of application-specific data further extends the applicability of our approach. Finally, LAL demonstrated robustness to the choice of the type of classifier and features. In future work we would like to address the issues of multi-class classification and batch-mode AL. We would also like to experiment with training the LAL regressor to predict the change in various performance metrics, and with different families of classifiers.
Another interesting direction is to transfer a LAL strategy between different real datasets, for example by training the regressor on multiple real datasets and evaluating its performance on unseen datasets. Finally, we would like to go beyond constructing greedy strategies by using reinforcement learning. Acknowledgements This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1). We would like to thank Carlos Becker and Helge Rhodin for their comments on the text, and Lucas Maystre for his discussions and attention to detail. References [1] C. Adam-Bourdarios, G. Cowan, C. Germain, I. Guyon, B. Kégl, and D. Rousseau. The Higgs boson machine learning challenge. In NIPS 2014 Workshop on High-energy Physics and Machine Learning, 2015. [2] Y. Baram, R. El-Yaniv, and K. Luz. Online choice of active learning algorithms. Journal of Machine Learning Research, 2004. [3] H.-M. Chu and H.-T. Lin. Can active learning experience be transferred? arXiv preprint arXiv:1608.00667, 2016. [4] A. Dal Pozzolo, O. Caelen, R. A. Johnson, and G. Bontempi. Calibrating probability with undersampling for unbalanced classification. In IEEE Symposium Series on Computational Intelligence, 2015. [5] S. Ebert, M. Fritz, and B. Schiele. RALF: A reinforced active learning formulation for object class recognition. In Conference on Computer Vision and Pattern Recognition, 2012. [6] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 2010. [7] R. Gilad-Bachrach, A. Navot, and N. Tishby. Query by committee made real. In Advances in Neural Information Processing Systems, 2005. [8] N. Gordillo, E. Montseny, and P. Sobrevilla. State of the art survey on MRI brain tumor segmentation. Magnetic Resonance in Medicine, 2013. [9] S. C. H. Hoi, R. Jin, J. Zhu, and M. R. Lyu.
Batch mode active learning and its application to medical image classification. In International Conference on Machine Learning, 2006. [10] N. Houlsby, F. Huszár, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011. [11] W.-N. Hsu, , and H.-T. Lin. Active learning by learning. American Association for Artificial Intelligence Conference, 2015. [12] S.-J. Huang, R. Jin, and Z.-H. Zhou. Active learning by querying informative and representative examples. In Advances in Neural Information Processing Systems, 2010. [13] J.E. Iglesias, E. Konukoglu, A. Montillo, Z. Tu, and A. Criminisi. Combining generative and discriminative models for semantic segmentation. In Information Processing in Medical Imaging, 2011. [14] A. J. Joshi, F. Porikli, and N. P. Papanikolopoulos. Scalable active learning for multiclass image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012. [15] A.J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classification. In Conference on Computer Vision and Pattern Recognition, 2009. [16] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active learning with Gaussian Processes for object categorization. In International Conference on Computer Vision, 2007. [17] K. Konyushkova, R. Sznitman, and P. Fua. Introducing geometry into active learning for image segmentation. In International Conference on Computer Vision, 2015. 10 [18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Conference on Computer Vision and Pattern Recognition, 2015. [19] A. C. Lorena, G. E. A. P. A. Batista, A. C. P. L. F. de Carvalho, and M. C. Monard. Splice junction recognition using machine learning techniques. In Brazilian Workshop on Bioinformatics, 2002. [20] A. Lucchi, Y. Li, K. Smith, and P. Fua. Structured image segmentation using kernelized features. 
In European Conference on Computer Vision, 2012. [21] T. Luo, K. Kramer, S. Samson, A. Remsen, D. B. Goldgof, L. O. Hall, and T. Hopkins. Active learning to recognize multiple types of plankton. In International Conference on Pattern Recognition, 2004. [22] L. Maystre and M. Grossglauser. Just sort it! A simple and effective approach to active preference learning. In International Conference on Machine Learning, 2017. [23] B. Menza, A. Jacas, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 2014. [24] A. Mosinska, R. Sznitman, P. Glowacki, and P. Fua. Active learning for delineation of curvilinear structures. In Conference on Computer Vision and Pattern Recognition, 2016. [25] F. Olsson. A literature survey of active machine learning in the context of natural language processing. Swedish Institute of Computer Science, 2009. [26] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016. [27] B. Settles. Active learning literature survey. Technical report, University of Wisconsin–Madison, 2010. [28] B. Settles and M. Craven. An analysis of active learning strategies for sequence labeling tasks. In Conference on Empirical Methods in Natural Language Processing, 2008. [29] A. Singla, S. Tschiatschek, and A. Krause. Actively learning hemimetrics with applications to eliciting user preferences. In International Conference on Machine Learning, 2016. [30] R. Sznitman and B. Jedynak. Active testing for face detection and localization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010. [31] A. Tamar, Y. WU, G. Thomas, S. Levine, and P. Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, 2016. [32] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Machine Learning, 2002. [33] A. 
Vezhnevets, V. Ferrari, and J.M. Buhmann. Weakly supervised structured output learning for semantic segmentation. In Conference on Computer Vision and Pattern Recognition, 2012. [34] Y. Yang, Z. Ma, F. Nie, X. Chang, and A. G. Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 2015. 11 | 2017 | 375 |
Controllable Invariance through Adversarial Feature Learning

Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig
Language Technologies Institute, Carnegie Mellon University
{qizhex, dzihang, yulund, hovy, gneubig}@cs.cmu.edu

Abstract
Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation and leads to better generalization, as evidenced by the improved performance.

1 Introduction
How to produce a data representation that maintains meaningful variations of data while eliminating noisy signals is a consistent theme of machine learning research. In the last few years, the dominant paradigm for finding such a representation has shifted from manual feature engineering based on specific domain knowledge to representation learning that is fully data-driven, and often powered by deep neural networks [Bengio et al., 2013]. Being universal function approximators [Cybenko, 1989], deep neural networks can easily uncover the complicated variations in data [Zhang et al., 2017], leading to powerful representations. However, how to systematically incorporate a desired invariance into the learned representation in a controllable way remains an open problem.
A possible avenue towards the solution is to devise a dedicated neural architecture that by construction has the desired invariance property. As a typical example, the parameter sharing scheme and pooling mechanism in modern deep convolutional neural networks (CNNs) [LeCun et al., 1998] take advantage of the spatial structure of image processing problems, allowing them to induce more generic feature representations than fully connected networks. Since the invariance we care about can vary greatly across tasks, this approach requires us to design a new architecture each time a new invariance desideratum shows up, which is time-consuming and inflexible. When our belief about invariance is specific to some attribute of the input data, an alternative approach is to build a probabilistic model with a random variable corresponding to the attribute, and explicitly reason about the invariance. For instance, the variational fair auto-encoder (VFAE) [Louizos et al., 2016] employs the maximum mean discrepancy (MMD) to eliminate the negative influence of specific "nuisance variables", such as removing the lighting conditions of images to predict a person's identity. Similarly, under the setting of domain adaptation, the standard binary adversarial cost [Ganin and Lempitsky, 2015, Ganin et al., 2016] and the central moment discrepancy (CMD) [Zellinger et al., 2017] have been utilized to learn features that are domain invariant. However, all these invariance-inducing criteria suffer from a similar drawback, namely that they are defined to measure the divergence between a pair of distributions. Consequently, they can only express the invariance belief w.r.t. a pair of values of the random variable at a time.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
When the attribute is a multinomial variable that takes more than two values, a combinatorial number of pairs (specifically, $O(n^2)$) has to be added to express the belief that the representation should be invariant to the attribute. The problem is even more dramatic when the attribute represents a structure with exponentially many possible values (e.g. the parse tree of a sentence), or when the attribute is simply a continuous variable. Motivated by these drawbacks and difficulties, in this work we consider the problem of learning a feature representation with a desired invariance. We aim to create a unified framework that is (1) generic enough that it can be easily plugged into different models, and (2) flexible enough to express an invariance belief over quantities beyond discrete variables with limited value choices. Specifically, inspired by the recent advances in adversarial learning [Goodfellow et al., 2014], we formulate representation learning as a minimax game among three players: an encoder, which maps the observed data deterministically into a feature space; a discriminator, which looks at the representation and tries to identify a specific type of variation we hope to eliminate from the feature; and a predictor, which makes use of the invariant representation to make predictions, as in typical discriminative models. We provide a theoretical analysis of the equilibrium condition of the minimax game and give an intuitive interpretation. On three benchmark tasks from different domains, we show that the proposed approach not only improves upon vanilla discriminative approaches that do not encourage invariance, but also outperforms existing approaches that enforce invariant features.

2 Adversarial Invariant Feature Learning
In this section, we formulate our problem and then present the proposed framework for learning invariant features.
Figure 1: Dependencies between x, s, y, where x is the observation and y is the target to be predicted; s is the attribute to which the prediction should be invariant. (a) y and s are marginally independent. (b) y and s are not marginally independent.

Given observation/input x, we are interested in predicting the target y based on the value of x using a discriminative approach. In addition, we have access to some intrinsic attribute s of x, as well as a prior belief that the prediction result should be invariant to s. There are two possible dependency scenarios for x, s and y: (1) s and y can be marginally independent. For example, in image classification, the lighting condition s and the identity of the person y are independent. The data generation process is s ∼ p(s), y ∼ p(y), x ∼ p(x | s, y). (2) In some cases, s and y are not marginally independent. For example, in fair classification, s is a sensitive factor such as age or gender, and y can be the savings, credit or health condition of a person; s and y are related due to inherent bias in the data. Using a latent variable z to model the dependency between s and y, the data generation process is z ∼ p(z), s ∼ p(s | z), y ∼ p(y | z), x ∼ p(x | s, y). The corresponding dependency graphs are shown in Figure 1. Unlike vanilla discriminative models that output the conditional distribution p(y | x), we model p(y | x, s) to make predictions invariant to s. Our intuition is that, due to the explaining-away effect, y and s are not independent when conditioned on x, even when they are marginally independent. Consequently, p(y | x, s) is a more accurate estimate of y than p(y | x). Intuitively, this can inform and guide the model to remove information about undesired variations. For example, if we want to learn a representation of image x that is invariant to the lighting condition s, the model can learn to "brighten" the input if it knows the original picture is dark, and vice versa.
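The explaining-away argument can be checked numerically on a toy version of the first generation process (a sketch with made-up binary variables, not the paper's data): s and y are sampled independently, the observation x depends on both, and conditioning on x makes s informative about y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = rng.integers(0, 2, n)   # attribute, s ~ p(s)
y = rng.integers(0, 2, n)   # target,   y ~ p(y), drawn independently of s
x = s + y                   # observation depends on both parents

# Marginally, s carries no information about y:
marginal_gap = abs(y[s == 0].mean() - y[s == 1].mean())
print(marginal_gap)  # close to 0

# Conditioned on x = 1, exactly one of s and y must be 1, so knowing s
# completely determines y (explaining away):
mask = x == 1
print(y[mask & (s == 0)].mean())  # 1.0
print(y[mask & (s == 1)].mean())  # 0.0
```

This is exactly why p(y | x, s) can be a sharper estimate than p(y | x), even when s and y are marginally independent.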
Also, in multi-lingual machine translation, a word with the same surface form may have different meanings in different languages. For instance, "gift" means "present" in English but means "poison" in German. Hence knowing the language of a source sentence helps in inferring the meaning of the sentence and conducting translation. As the input x can have a highly complicated structure, we employ a dedicated model or algorithm to extract an expressive representation h from x. When we extract the representation h from x, we want h to preserve the variations that are necessary to predict y while eliminating the information about s. To achieve this goal, we employ a deterministic encoder E to obtain the representation by encoding x and s into h, namely, h = E(x, s). Note that we are using s as an additional input. Given the obtained representation h, the target y is predicted by a predictor M, which effectively models the distribution qM(y | h). By construction, instead of modeling p(y | x) directly, the discriminative model we formulate captures the conditional distribution p(y | x, s), with additional information coming from s. Of course, feeding s into the encoder by no means guarantees that the induced feature h will be invariant to s. Thus, in order to enforce the desired invariance and eliminate variations of the factor s from h, we set up an adversarial game by introducing a discriminator D which inspects the representation h and ensures that it is invariant to s. Concretely, the discriminator D is trained to predict s based on the encoded representation h, which effectively maximizes the likelihood qD(s | h). Simultaneously, the encoder fights to minimize the likelihood of the discriminator inferring the correct s. Intuitively, the discriminator and the encoder form an adversarial game where the discriminator tries to detect an attribute of the data while the encoder learns to conceal it.
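The adversarial value just described can be evaluated directly once the discriminator's and predictor's likelihoods of the true s and y are known. The sketch below (hypothetical array names, not the paper's code) computes a Monte-Carlo estimate of that value; the discriminator pushes it up, the encoder and predictor push it down.

```python
import numpy as np

def game_value(q_d_true_s, q_m_true_y, gamma=1.0):
    """Monte-Carlo estimate of the adversarial objective: gamma times the
    discriminator's log-likelihood of the true attribute s minus the
    predictor's log-likelihood of the true target y, averaged over examples.
    Inputs are the probabilities each model assigns to the true s / true y
    for every training example."""
    q_d = np.asarray(q_d_true_s, dtype=float)
    q_m = np.asarray(q_m_true_y, dtype=float)
    return float(np.mean(gamma * np.log(q_d) - np.log(q_m)))

# A representation that fully conceals a binary s pins the discriminator at
# probability 0.5, while a confident predictor keeps q_M high:
v_concealed = game_value([0.5, 0.5], [0.9, 0.9])
v_leaky = game_value([0.95, 0.95], [0.9, 0.9])
print(v_concealed, v_leaky)  # the leaky representation has the higher value
```

From the encoder's perspective, a lower value means s has been hidden more successfully without sacrificing the prediction of y.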
Note that under our framework, in theory, s can be any type of data as long as it represents an attribute of x. For example, s can be a real-valued scalar/vector, which may take many possible values, or a complex sub-structure such as the parse tree of a natural language sentence. In this paper, however, we focus mainly on instances where s is a discrete label with multiple choices. We plan to extend our framework to deal with continuous s and structured s in the future. Formally, E, M and D jointly play the following minimax game:

$$\min_{E,M}\;\max_{D}\; J(E, M, D), \quad J(E, M, D) = \mathbb{E}_{x,s,y \sim p(x,s,y)}\left[\gamma \log q_D(s \mid h = E(x, s)) - \log q_M(y \mid h = E(x, s))\right] \qquad (1)$$

where γ is a hyper-parameter that adjusts the strength of the invariance constraint, and p(x, s, y) is the true underlying distribution from which the empirical observations are drawn. Note that the problem of domain adaptation can be seen as a special case of our problem, where s is a Bernoulli variable representing the domain and the model only has access to the target y when s = "source domain" during training.

3 Theoretical Analysis
In this section, we theoretically analyze whether, given enough capacity and training time, such a minimax game will converge to an equilibrium where variations of y are preserved and variations of s are removed. The theoretical analysis is done in the non-parametric limit, i.e., we assume a model with infinite capacity. In addition, we discuss the equilibria of the minimax game when s is independent of or dependent on y. Since both the discriminator and the predictor only use h, which is transformed deterministically from x and s, we can substitute x with h and define a joint distribution $\tilde p(h, s, y)$ of h, s and y as follows:

$$\tilde p(h, s, y) = \int_x \tilde p(x, s, h, y)\,dx = \int_x p(x, s, y)\, p_E(h \mid x, s)\,dx = \int_x p(x, s, y)\,\delta(E(x, s) = h)\,dx$$

Here, we have used the fact that the encoder is a deterministic transformation, so the distribution $p_E(h \mid x, s)$ is merely a delta function, denoted by δ(·).
Intuitively, h absorbs the randomness in x and has an implicit distribution of its own. Also, note that the joint distribution $\tilde p(h, s, y)$ depends on the transformation defined by the encoder. Thus, we can equivalently rewrite objective (1) as

$$J(E, M, D) = \mathbb{E}_{h,s,y \sim \tilde p(h,s,y)}\left[\gamma \log q_D(s \mid h) - \log q_M(y \mid h)\right] \qquad (2)$$

To analyze the equilibrium condition of the new objective (2), we first deduce the optimal discriminator D and the optimal predictor M for a given encoder E, and then prove the global optimality of the minimax game.

Claim 1. Given a fixed encoder E, the optimal discriminator outputs $q^*_D(s \mid h) = \tilde p(s \mid h)$ and the optimal predictor corresponds to $q^*_M(y \mid h) = \tilde p(y \mid h)$.

Proof. The proof uses the fact that the objective is functionally convex w.r.t. each distribution; by taking variations we obtain the stationary points for $q_D$ and $q_M$ as functions of $\tilde p$. The detailed proof is included in supplementary material A.

Note that the optimal $q^*_D(s \mid h)$ and $q^*_M(y \mid h)$ given in Claim 1 are both functions of the encoder E. Thus, by plugging $q^*_D$ and $q^*_M$ into the original minimax objective (2), it can be simplified into a minimization problem w.r.t. the encoder E alone:

$$\min_E J(E) = \min_E \mathbb{E}_{h,s,y \sim \tilde p(h,s,y)}\left[\gamma \log \tilde p(s \mid h) - \log \tilde p(y \mid h)\right] = \min_E\; -\gamma H(\tilde p(s \mid h)) + H(\tilde p(y \mid h)) \qquad (3)$$

where $H(\tilde p(s \mid h))$ is the conditional entropy of the distribution $\tilde p(s \mid h)$.

Equilibrium Analysis As we can see, objective (3) consists of two conditional entropies with opposite signs. Optimizing the first term amounts to maximizing the uncertainty of inferring s from h, which essentially filters any information about s out of the representation. On the contrary, optimizing the second term increases the certainty of predicting y from h. Implicitly, the objective defines the equilibrium of the minimax game.
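The simplification from the expected log-likelihoods under the optimal players of Claim 1 to the two conditional entropies can be verified numerically on a small discrete example (the joint tables below are made up for illustration):

```python
import numpy as np

def cond_entropy(joint):
    """H(A | B) for a joint table p(b, a) with rows indexed by b."""
    p_b = joint.sum(axis=1, keepdims=True)
    return float(-(joint * np.log(joint / p_b)).sum())

gamma = 2.0
# Toy joint tables p(h, s) and p(h, y) over a two-valued code h (rows).
p_hs = np.array([[0.2, 0.1],
                 [0.3, 0.4]])
p_hy = np.array([[0.25, 0.05],
                 [0.10, 0.60]])

# Expected log-likelihoods under the optimal players of Claim 1:
# E[log p(s|h)] and E[log p(y|h)].
e_s = (p_hs * np.log(p_hs / p_hs.sum(axis=1, keepdims=True))).sum()
e_y = (p_hy * np.log(p_hy / p_hy.sum(axis=1, keepdims=True))).sum()

# Objective (3): gamma * E[log p(s|h)] - E[log p(y|h)]
#              = -gamma * H(s|h) + H(y|h)
lhs = gamma * e_s - e_y
rhs = -gamma * cond_entropy(p_hs) + cond_entropy(p_hy)
print(lhs, rhs)  # identical up to floating point
```

The identity holds term by term: the expected log-likelihood of the true value under the posterior is exactly the negative conditional entropy.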
• Win-win equilibrium: Firstly, for cases where the attribute s is entirely irrelevant to the prediction task (corresponding to the dependency graph in Figure 1a), the two terms can reach their optima at the same time, leading to a win-win equilibrium. For example, with the lighting condition of an image removed, we can still (or even better) classify the identity of the person in that image. With enough model capacity, the optimal equilibrium solution would be the same regardless of the value of γ.
• Competing equilibrium: However, there are cases where these two objectives compete. For example, in fair classification, sensitive factors such as gender and age may help the overall prediction accuracy due to inherent biases in the data. In other words, knowing s may help in predicting y, since s and y are not marginally independent (corresponding to the dependency graph in Figure 1b); learning a fair/invariant representation is then harmful to predictions. In this case, the optima of the two entropies cannot be achieved simultaneously, and γ defines the relative strength of the two objectives in the final equilibrium.

4 Parametric Instantiation of the Proposed Framework
4.1 Models
To show the general applicability of our framework, we experiment on three different tasks: sentence generation, image classification and fair classification. Due to the different natures of the data x and y, we present the specific model instantiations we use for each.

Sentence Generation We use multi-lingual machine translation as the testbed for sentence generation. Concretely, we have translation pairs between several source languages and a target language. x is the source sentence to be translated and s is a scalar denoting which source language x belongs to. y is the translated sentence in the target language. Recall that s is used as an input of E to obtain a language-invariant representation.
To make full use of s, we employ a separate encoder Enc_s for sentences in each language s. In other words, h = E(s, x) = Enc_s(x), where each Enc_s is a different encoder. The representation of a sentence is captured by the hidden states of an LSTM encoder [Hochreiter and Schmidhuber, 1997] at each time step. We employ a single LSTM predictor shared across encoders. As is common in language generation, the probability $q_M$ output by the predictor is parametrized by an autoregressive process, i.e.,

$$q_M(y_{1:T} \mid h) = \prod_{t=1}^{T} q_M(y_t \mid y_{<t}, h)$$

where we use an LSTM with an attention model [Bahdanau et al., 2015] to compute $q_M(y_t \mid y_{<t}, h)$. The discriminator is also parameterized as an LSTM, which gives it enough capacity to deal with input spanning multiple timesteps. $q_D(s \mid h)$ is instantiated with the multinomial distribution computed by a softmax layer on the last hidden state of the discriminator LSTM.

Classification For our classification experiments, the input is either a picture or a feature vector. All three players in the minimax game are constructed as feedforward neural networks. We feed s to the encoder as an embedding vector.

4.2 Optimization
There are two possible approaches to optimize our framework in an adversarial setting. The first is similar to the alternating approach used in Generative Adversarial Nets (GANs) [Goodfellow et al., 2014]: we can alternately train the two adversarial components while freezing the third. This approach gives more control in balancing the encoder and the discriminator, which effectively avoids saturation. Another method is to train all three components together with a gradient reversal layer [Ganin and Lempitsky, 2015]. In particular, the encoder receives gradients from both the discriminator and the predictor, with the gradient from the discriminator negated to push the encoder in the opposite direction desired by the discriminator. Chen et al.
[2016b] found the second approach easier to optimize, since the discriminator and the encoder stay fully in sync by being optimized together. Hence we adopt the latter approach. In all of our experiments, we use Adam [Kingma and Ba, 2014] with a learning rate of 0.001.

5 Experiments
In this section, we perform empirical experiments to evaluate the effectiveness of the proposed framework. We first introduce the tasks and corresponding datasets. Then, we present quantitative results showing the superior performance of our framework, and discuss qualitative analysis verifying that the learned representations have the desired invariance property.

5.1 Datasets
Our experiments include three tasks in different domains: (1) fair classification, in which predictions should be unaffected by nuisance factors; (2) language-independent generation, conducted on the multi-lingual machine translation problem; (3) lighting-independent image classification.

Fair Classification For fair classification, we use three datasets to predict the savings, credit ratings and health conditions of individuals, with variables such as gender or age specified as the "nuisance variable" that we would like to not consider in our decisions [Zemel et al., 2013, Louizos et al., 2016]. The German dataset [Frank et al., 2010] is a small dataset with 1,000 samples describing whether a person has a good credit rating; the sensitive nuisance variable to be factored out is gender. The Adult income dataset [Frank et al., 2010] has 45,222 data points, and the objective is to predict whether a person has savings of over 50,000 dollars, with the sensitive factor being age. The task of the health dataset¹ is to predict whether a person will spend any days in the hospital in the following year; the sensitive variable is again age, and the dataset contains 147,473 entries.
We follow the same 5-fold train/validation/test splits and feature preprocessing used in [Zemel et al., 2013, Louizos et al., 2016]. Both the encoder and the predictor are parameterized by single-layer neural networks. A three-layer neural network with batch normalization [Ioffe and Szegedy, 2015] is employed for the discriminator. We use a batch size of 16 and 64 hidden units. γ is set to 1 in these experiments.

¹ www.heritagehealthprize.com

Multi-lingual Machine Translation For the multi-lingual machine translation task we use French-to-English (fr-en) and German-to-English (de-en) pairs from the IWSLT 2015 dataset [Cettolo et al., 2012]. There are 198,435 pairs of fr-en sentences and 188,661 pairs of de-en sentences in the training set. In the test set, there are 4,632 pairs of fr-en sentences and 7,054 pairs of de-en sentences. We evaluate BLEU scores [Papineni et al., 2002] using the standard Moses multi-bleu.perl script. Here, s indicates the language of the source sentence. We use OpenNMT [Klein et al., 2017] in our multi-lingual MT experiments². The encoder is a two-layer bidirectional LSTM with 256 units per direction. The discriminator is a one-layer unidirectional LSTM with 256 units. The predictor is a two-layer LSTM with 512 units and an attention mechanism [Bahdanau et al., 2015]. We follow Johnson et al. [2016] and use Byte Pair Encoding (BPE) subword units [Sennrich et al., 2016] as the cross-lingual input. Every model is run for 20 epochs. γ is set to 8 and the batch size to 64.

Image Classification We use the Extended Yale B dataset [Georghiades et al., 2001] for our image classification task. It comprises face images of 38 people under 5 different lighting conditions: upper right, lower right, lower left, upper left, or front. The variable s to be purged is the lighting condition; the label y is the identity of the person. We follow Li et al. [2014], Louizos et al.
[2016]'s train/test split and use no validation set: 38 × 5 = 190 samples are used for training and all other 1,096 data points for testing. We use a one-layer neural network for the encoder and a one-layer neural network for prediction. γ is set to 2. The discriminator is a two-layer neural network with batch normalization. The batch size is set to 16 and the hidden size to 100.

5.2 Results
Fair Classification The results on the three fairness tasks are shown in Figure 2. We compare our model with two prior works on learning fair representations: Learning Fair Representations (LFR) [Zemel et al., 2013] and the Variational Fair Autoencoder (VFAE) [Louizos et al., 2016]. Results of a VAE and of directly using x as the representation are also shown. We first study how much information about s is retained in the learned representation h by using a logistic regression to predict the factor s. In the top row, we see that s cannot be recognized from the representations learned by the three models targeting fair representations: the accuracy of classifying s is similar to the trivial baseline of predicting the majority label, shown by the black line. The performance on predicting the label y is shown in the second row. We see that LFR and VFAE suffer on the Adult and German datasets after removing information about s. In comparison, our model's performance does not suffer even when making fair predictions. Specifically, on German, our model's accuracy is 0.744 compared to 0.727 and 0.723 achieved by VFAE and LFR. On Adult, our model's accuracy is 0.844 while VFAE and LFR have accuracies of 0.813 and 0.823 respectively. On the health dataset, all models' performances are barely better than the majority baseline. The unsatisfactory performance of all models may be due to the extreme imbalance of the dataset, in which 85% of the data has the same label. We also investigate how fair representations would alleviate biases of machine learning models.
We measure unbiasedness by evaluating models' performance at identifying minority groups. For instance, suppose the task is to predict savings with the nuisance factor being age, with savings above a threshold of $50,000 being adequate, otherwise insufficient. If people of advanced age generally have fewer savings, then a biased model would tend to predict insufficient savings for those of advanced age. In contrast, an unbiased model can better factor out age information and recognize people who do not fit these stereotypes. Concretely, for the groups pooled by each possible value of y, we find the minority s within each group and define it as the biased category for that group. We then calculate the accuracy on each biased category and report the average performance over all categories. We do not compute the instance-level average performance, since one category may hold the dominant share of the data among all categories.

² Our MT code is available at https://github.com/qizhex/Controllable-Invariance

[Figure 2 bar charts, one panel per dataset (Adult, German, Health), comparing x, LFR, VAE, VFAE and our model against the majority baseline:]
(a) Accuracy on predicting s. The closer the result is to the majority line, the better the model is at eliminating the effect of nuisance variables.
(b) Accuracy on predicting y. High accuracy in predicting y is desirable.
(c) Overall performance and performance on biased categories. Fair representations lead to high accuracy on biased categories.
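The biased-category metric just described can be made concrete as follows (a minimal sketch with hypothetical array names, not the authors' evaluation code): group the examples by the true label y, find the minority attribute value s within each group, compute the accuracy on each such biased category, and average at the category level.

```python
import numpy as np

def biased_category_accuracy(y_true, y_pred, s):
    """For each value of y, locate the minority attribute value s within
    that group (the 'biased category') and compute prediction accuracy on
    it; return the average over categories. Averaging at category level
    keeps a dominant category from swamping the others."""
    accs = []
    for y_val in np.unique(y_true):
        group = y_true == y_val
        s_vals, counts = np.unique(s[group], return_counts=True)
        minority_s = s_vals[np.argmin(counts)]
        cat = group & (s == minority_s)
        accs.append(float((y_pred[cat] == y_true[cat]).mean()))
    return float(np.mean(accs))
```

An unbiased model keeps this number high even when the minority value of s cuts against the stereotype encoded in the data.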
Figure 2: Fair classification results on different representations. x denotes directly using the observation x as the representation. The black lines in the first and second rows show the performance of predicting the majority label. "Biased categories" in the third row are explained in the fourth paragraph of Section 5.2.

As shown in the third row of Figure 2, on German and Adult we achieve higher accuracy on the biased categories, even though our overall accuracy is similar to or lower than that of the baseline, which does not employ fairness constraints. Specifically, on Adult, our performance on the biased categories is 0.788 while the baseline's accuracy is 0.748. On German, our accuracy on biased categories is 0.676 while the baseline achieves 0.648. These results show that our model learns a more unbiased representation.

Multi-lingual Machine Translation The results on multi-lingual machine translation are shown in Table 1. We compare our model with an attention-based encoder-decoder trained on bilingual data [Bahdanau et al., 2015] and on multi-lingual data [Johnson et al., 2016]. The encoder-decoder trained on multi-lingual data employs a single encoder for both source languages.

Table 1: Results on multi-lingual machine translation.

Model                                          test (fr-en)   test (de-en)
Bilingual Enc-Dec [Bahdanau et al., 2015]      35.2           27.3
Multi-lingual Enc-Dec [Johnson et al., 2016]   35.5           27.7
Our model                                      36.1           28.1
  w.o. discriminator                           35.3           27.6
  w.o. separate encoders                       35.4           27.7

Firstly, both multi-lingual systems outperform the bilingual encoder-decoder, even though the multi-lingual systems use a similar number of parameters to translate two languages, which shows that learning an invariant representation leads to better generalization in this case. The better generalization may be due to transferring statistical strength between the data in the two languages. Comparing the two multi-lingual systems, our model outperforms the baseline multi-lingual system on both languages, where the improvement on French-to-English is 0.6 BLEU points.

We also verify the design decisions in our framework through ablation studies. Firstly, without the discriminator, the model's performance is worse than that of the standard multi-lingual system, which rules out the possibility that the gain of our model comes from the extra parameters of the separate encoders. Secondly, when we do not employ separate encoders, the model's performance deteriorates and it is more difficult to learn a cross-lingual representation, which
• verifies the theoretical advantage of modeling p(y | x, s) instead of p(y | x), as discussed in Section 2. Intuitively, German and French have different grammars and vocabularies, so it is hard to obtain a unified semantic representation by performing the same operations on both;
• means that the encoder needs enough capacity to reach the equilibrium in the minimax game.
We also observe that the discriminator needs enough capacity to provide faithful gradients towards the equilibrium. Specifically, instantiating the discriminator with a feedforward neural network, with or without an attention mechanism [Bahdanau et al., 2015], does not work in our experiments.

Table 2: Results on the Extended Yale B dataset. A better representation has lower accuracy of classifying the factor s and higher accuracy of classifying the label y.

Method                        Accuracy of classifying s   Accuracy of classifying y
Logistic regression           0.96                        0.78
NN + MMD [Li et al., 2014]    —                           0.82
VFAE [Louizos et al., 2016]   0.57                        0.85
Ours                          0.57                        0.89

Figure 3: t-SNE visualizations of images in the Extended Yale B dataset. (a) Using the original image x as the representation. (b) Representation learned by our model. The original pictures are clustered by lighting condition, while the representation learned by our model is clustered by the identity of each individual.
Image Classification  We report the results in Table 2 along with two baselines [Li et al., 2014, Louizos et al., 2016] that use MMD regularization to remove lighting conditions. The advantage of factoring out the lighting conditions is shown by the improved accuracy of 89% for classifying identities, while the best baseline achieves an accuracy of 85%. In terms of removing s, our framework can filter out the lighting conditions, since the accuracy of classifying s drops from 0.96 to 0.57, as shown in Table 2. We also visualize the learned representation with t-SNE [Maaten and Hinton, 2008], in comparison to a visualization of the original pictures, in Figure 3. We see that, without removing lighting conditions, the images are clustered based on the lighting conditions. After removing the lighting information, the images are clustered according to the identity of each person.

6 Related Work

As a specific case of our problem in which s takes two values, domain adaptation has attracted a large amount of research interest. Domain adaptation aims to learn domain-invariant representations that are transferable to other domains. For example, in image classification, adversarial training has been shown to be able to learn an invariant representation across domains [Ganin and Lempitsky, 2015, Ganin et al., 2016, Bousmalis et al., 2016, Tzeng et al., 2017], which enables classifiers trained on the source domain to be applied to the target domain. Moment discrepancy regularizations can also effectively remove domain-specific information [Zellinger et al., 2017, Bousmalis et al., 2016] for the same purpose. By learning language-invariant representations, classifiers trained on a source language can be applied to the target language [Chen et al., 2016b, Xu and Yang, 2017].
Works targeting the development of fair, bias-free classifiers also aim to learn representations invariant to "nuisance variables" that could induce bias, and hence to make the predictions fair, since data-driven models trained on historical data easily inherit the bias exhibited in the data. Zemel et al. [2013] propose to regularize the ℓ1 distance between the representation distributions of data with different nuisance variables to enforce fairness. The Variational Fair Autoencoder [Louizos et al., 2016] targets the problem with a Variational Autoencoder [Kingma and Welling, 2014, Rezende et al., 2014] approach combined with a maximum mean discrepancy regularization.

Our work is also related to learning disentangled representations, where the aim is to separate the different factors influencing the input data into different parts of the representation. Ideally, each part of the learned representation is marginally independent of the others. An early work by Tenenbaum and Freeman [1997] proposed a bilinear model to learn a representation with style and content disentangled. From an information-theoretic perspective, Chen et al. [2016a] augment standard generative adversarial networks with an inference network whose objective is to infer the part of the latent code that led to the generated sample. This way, the information carried by the chosen part of the latent code is retained in the generated sample, leading to a disentangled representation. As discussed in Section 1, these methods share the drawback that the cost used to regularize the representation is pairwise, which does not scale well when the number of values the attribute can take is large. Louppe et al. [2016] propose an adversarial training framework to learn representations independent of a categorical or continuous variable.
A basic assumption in their theoretical analysis is that the attribute is irrelevant to the prediction, which limits its applicability to the analysis of fair classification.

7 Conclusion

In sum, we propose a generic framework to learn representations invariant to a specified factor or trait. We cast the representation learning problem as an adversarial game among an encoder, a discriminator, and a predictor. We theoretically analyze the optimal equilibrium of the minimax game and empirically evaluate the performance of our framework on three tasks from different domains. We show that an invariant representation is learned, resulting in better generalization and improvements on the three tasks.

Acknowledgements

We thank Shi Feng, Di Wang and Zhilin Yang for insightful discussions. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.

Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In NIPS, 2016.

Mauro Cettolo, Christian Girardi, and Marcello Federico. WIT3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), volume 261, page 268, 2012.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016a.

Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. Adversarial deep averaging networks for cross-lingual sentiment classification.
arXiv preprint arXiv:1606.01614, 2016b.

Andrew Frank, Arthur Asuncion, et al. UCI Machine Learning Repository, 2010. URL http://archive.ics.uci.edu/ml.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. ICML, 2015.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.

Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.

G. Cybenko. Approximation by superposition of sigmoidal functions. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. Google's multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.

G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint, 2017.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.
Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yujia Li, Kevin Swersky, and Richard Zemel. Learning unbiased features. arXiv preprint arXiv:1412.5244, 2014.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. ICLR, 2016.

Gilles Louppe, Michael Kagan, and Kyle Cranmer. Learning to pivot with adversarial networks. arXiv preprint arXiv:1611.01046, 2016.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 2008.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. ACL, 2016.

Joshua B. Tenenbaum and William T. Freeman. Separating style and content. NIPS, 1997.

Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. arXiv preprint arXiv:1702.05464, 2017.

Ruochen Xu and Yiming Yang. Cross-lingual distillation for text classification. ACL, 2017.

Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, and Susanne Saminger-Platz. Central moment discrepancy (CMD) for domain-invariant representation learning. ICLR, 2017.

Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning fair representations. ICML, 2013.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. ICLR, 2017.

A Supplementary Material: Proofs

The proof of Claim 1:

Claim. Given a fixed encoder $E$, the optimal discriminator outputs $q_D^*(s \mid h) = \tilde{p}(s \mid h)$. The optimal predictor corresponds to $q_M^*(y \mid h) = \tilde{p}(y \mid h)$.

Proof.
We first prove the optimal solution of the discriminator. With a fixed encoder, we have the following optimization problem:

$$\min_{q_D} \; -J(E, M, D) \qquad \text{s.t.} \quad \sum_s q_D(s \mid h) = 1, \;\; \forall h.$$

Then
$$L = -J(E, M, D) - \sum_h \lambda(h) \Big( \sum_s q_D(s \mid h) - 1 \Big)$$
is the Lagrangian of the above optimization problem, where the $\lambda(h)$ are the dual variables introduced for the equality constraints. The optimal $D$ satisfies

$$0 = \frac{\partial L}{\partial q_D^*(s \mid h)}
\;\iff\; 0 = -\frac{\partial J}{\partial q_D^*(s \mid h)} - \lambda(h)
\;\iff\; \lambda(h) = -\frac{\sum_y \tilde{q}(h, s, y)}{q_D^*(s \mid h)}
\;\iff\; \lambda(h) \, q_D^*(s \mid h) = -\tilde{q}(s, h). \tag{4}$$

Summing over $s$ on both sides of the last line of Eqn. (4) and using the fact that $\sum_s q_D^*(s \mid h) = 1$, we get
$$\lambda(h) = -\tilde{q}(h). \tag{5}$$

Substituting Eqn. (5) back into Eqn. (4), we conclude that the optimal discriminator is
$$q_D^*(s \mid h) = \tilde{q}(s \mid h).$$

Similarly, taking the derivative with respect to $q_M(y \mid h)$ and setting it to 0, we can prove $q_M^*(y \mid h) = \tilde{q}(y \mid h)$.
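As a sanity check (not part of the paper), the optimality of $q_D^*(s \mid h) = \tilde{q}(s \mid h)$ can be verified numerically: for a fixed encoder, the discriminator term $\sum_{h,s} \tilde{q}(h, s) \log q_D(s \mid h)$ should not be exceeded by any other valid conditional distribution. The distribution below is a randomly generated stand-in for $\tilde{q}$.

```python
import numpy as np

rng = np.random.default_rng(42)
q_joint = rng.random((5, 3))   # stand-in for q~(h, s): 5 codes, 3 factor values
q_joint /= q_joint.sum()

def disc_term(q_cond):
    """Discriminator part of J for a fixed encoder: sum_{h,s} q~(h,s) log q_D(s|h)."""
    return float((q_joint * np.log(q_cond)).sum())

# The claimed optimum: q_D(s | h) = q~(s | h).
q_star = q_joint / q_joint.sum(axis=1, keepdims=True)

# No randomly drawn valid conditional should beat it (Gibbs' inequality).
best = disc_term(q_star)
for _ in range(1000):
    alt = rng.random((5, 3))
    alt /= alt.sum(axis=1, keepdims=True)
    assert disc_term(alt) <= best + 1e-12
```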
Visual Interaction Networks: Learning a Physics Simulator from Video

Nicholas Watters, Andrea Tacchetti, Théophane Weber, Razvan Pascanu, Peter Battaglia, Daniel Zoran
DeepMind, London, United Kingdom
{nwatters, atacchet, theophane, razp, peterbattaglia, danielzoran}@google.com

Abstract

From just a glance, humans can make rich predictions about the future of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains or require information about the underlying state. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.

1 Introduction

Physical reasoning is a core domain of human knowledge [22] and among the earliest topics in AI [24, 25]. However, we still do not have a system for physical reasoning that can approach the abilities of even a young child.
A key obstacle is that we lack a general-purpose mechanism for making physical predictions about the future from sensory observations of the present. Overcoming this challenge will help close the gap between human and machine performance on important classes of behavior that depend on physical reasoning, such as model-based decision-making [3], physical inference [13], and counterfactual reasoning [10, 11].

We introduce the Visual Interaction Network (VIN), a general-purpose model for predicting future physical states from video data. The VIN is learnable and can be trained from supervised data sequences which consist of input image frames and target object state values. It can learn to approximate a range of different physical systems which involve interacting entities by implicitly internalizing the rules necessary for simulating their dynamics and interactions.

The VIN model is comprised of two main components: a visual encoder based on convolutional neural networks (CNNs) [17], and a recurrent neural network (RNN) with an interaction network (IN) [2] as its core, for making iterated physical predictions. Using this architecture we are able to learn a model which infers object states and can make accurate predictions about these states in future time steps. We show that this model outperforms various baselines and can generate compelling future rollout trajectories.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1.1 Related work

One approach to learning physical reasoning is to train models to make state-to-state predictions. One early algorithm using this approach was the "NeuroAnimator" [12], which was able to simulate articulated bodies. Ladicky et al. [16] proposed a learned model for simulating fluid dynamics based on regression forests. Battaglia et al.
[2] introduced a general-purpose learnable physics engine, termed an Interaction Network (IN), which could learn to predict gravitational systems, rigid body dynamics, and mass-spring systems. Chang et al. [7] introduced a similar model in parallel that could likewise predict rigid body dynamics.

Another class of approaches learns to predict summary physical judgments and produce simple actions from images. There have been several efforts [18, 19] which used CNN-based models to predict whether a stack of blocks would fall. Mottaghi et al. [20, 21] predicted coarse, image-space motion trajectories of objects in real images. Several efforts [4, 6, 26, 27] have fit the parameters of Newtonian mechanics equations to systems depicted in images and videos, though the dynamic equations themselves were not learned. Agrawal et al. [1] trained a system that learns to move objects by poking.

A third class of methods [5, 8, 9, 23], like our Visual Interaction Network, has been used to predict future state descriptions from pixels. However, in contrast to the Visual Interaction Network, these models have to be tailored to the particular physical domain of interest, are only effective over a few time steps, or use side information such as object locations and physical constraints at test time.

2 Model

The Visual Interaction Network (VIN) learns to produce future trajectories of objects in a physical system from video frames of that system. The VIN is depicted in Figure 1 and consists of the following components:

• The visual encoder takes a triplet of frames as input and outputs a state code. A state code is a list of vectors, one for each object in the scene. Each of these vectors is a distributed representation of the position and velocity of its corresponding object. We apply the encoder in a sliding window over a sequence of frames, producing a sequence of state codes. See Section 2.1 and Figure 2a for details.
• The dynamics predictor takes a sequence of state codes (output from a visual encoder applied in a sliding-window manner to a sequence of frames) and predicts a candidate state code for the next frame. The dynamics predictor is comprised of several interaction-net cores, each taking input at a different temporal offset and producing candidate state codes. These candidates are aggregated by an MLP to produce a predicted state code for the next frame. See Section 2.2 and Figure 2b for details.

• The state decoder converts a state code to a state. A state is a list of each object's position/velocity vector. The training targets for the system are ground-truth states. See Section 2.3 for details.

2.1 Visual Encoder

The visual encoder is a CNN that produces a state code from a sequence of 3 images. It has a frame pair encoder Epair, shown in Figure 2a, which takes a pair of consecutive frames and outputs a candidate state code. This frame pair encoder is applied to both consecutive pairs of frames in a sequence of 3 frames. The two resulting candidate state codes are aggregated by a shared MLP applied to the concatenation of each pair of slots. The result is an encoded state code. Epair itself applies a CNN with two different kernel sizes to a channel-stacked pair of frames, appends constant x, y coordinate channels, and applies a CNN with alternating convolutional and max-pooling layers until unit width and height. The resulting matrix of shape 1 × 1 × (Nobject · Lcode) is reshaped into a state code of shape Nobject × Lcode, where Nobject is the number of objects in the scene and Lcode is the length of each state code slot. The two state codes are fed into an MLP to produce the final

Figure 1: Visual Interaction Network: The general architecture is depicted here (see legend on the bottom right). The visual encoder takes triplets of consecutive frames and produces a state code for the third frame in each triplet.
The visual encoder is applied in a sliding window over the input sequence to produce a sequence of state codes. Auxiliary losses applied to the decoded output of the encoder help in training. The state code sequence is then fed into the dynamics predictor, which has several Interaction Net cores (2 in this example) working on different temporal offsets. The outputs of these Interaction Nets are then fed into an aggregator to produce the prediction for the next time step. The core is applied in a sliding-window manner as depicted in the figure. The predicted state codes are linearly decoded and are used in the prediction loss when training.

encoder output from the triplet. See the Supplementary Material for further details of the visual encoder model.

One important feature of this visual encoder architecture is its weight sharing, given by applying the same Epair on both pairs of frames, which approximates a temporal convolution over the input sequence. Another important feature is the inclusion of constant coordinate channels (an x- and y-coordinate meshgrid over the image), which allows positions to be incorporated throughout much of the processing. Without the coordinate channels, such a convolutional architecture would have to infer position from the boundaries of the image, a more challenging task.

2.2 Dynamics Predictor

The dynamics predictor is a variant of an Interaction Network (IN) [2]. An IN, summarized in Figure 2b, is a state-to-state physical predictor model that uses a shared relation net on pairs of objects, as well as shared self-dynamics and global affector nets, to predict per-object dynamics. The main difference between our predictor and a vanilla IN is aggregation over multiple temporal offsets. Our predictor has a set of temporal offsets (in practice we use {1, 2, 4}), with one IN core for each. Given an input state code sequence, for each offset t a separate IN core computes a candidate predicted state code from the input state code at index t.
An MLP aggregator transforms the list of candidate state codes into a predicted state code. This aggregator is applied independently to the concatenation over candidate state codes of each slot and is shared across slots to enforce some consistency of object representations. See the Supplementary Material for further details of the dynamics predictor model.

As with the visual encoder, we explored many dynamics predictor architectures (some of which we compare as baselines below). The temporal offset aggregation of this architecture enhances its power by allowing it to accommodate both fast and slow movements by different objects within a sequence of frames. See the Supplementary Material for an exploration of the importance of temporal offset aggregation. The factorized representation of INs, which allows efficient learning of interactions even in scenes with many objects, is an important contributor to our predictor architecture's performance.

Figure 2: Frame Pair Encoder and Interaction Net. (a) The frame pair encoder is a CNN which transforms two consecutive input frames into a state code. Important features are the concatenation of coordinate channels before pooling to unit width and height. The pooled output is reshaped into a state code. (b) An Interaction Net (IN) is used for each temporal offset by the dynamics predictor. For each slot, a relation net is applied to the slot's concatenation with each other slot. A self-dynamics net is applied to the slot itself. Both of these results are summed and post-processed by the affector to produce the predicted slot.

2.3 State Decoder

The state decoder is simply a linear layer with input size Lcode and output size 4 (for a position/velocity vector). This linear layer is applied independently to each slot of the state code. We explored more complicated architectures, but this yielded the best performance.
The state decoder is applied to both encoded state codes (for the auxiliary encoding loss) and predicted state codes (for the prediction loss).

3 Experiments

3.1 Physical Systems Simulations

We focused on five types of physical systems with high dynamic complexity but low visual complexity, namely 2-dimensional simulations of colored objects on natural-image backgrounds interacting with a variety of forces (see the Supplementary Material for details). In each system the force law is applied pair-wise to all objects, and all objects have the same mass and density unless otherwise stated.

• Spring: Each pair of objects has an invisible spring connection with non-zero equilibrium. All springs share the same equilibrium and Hooke's constant.

• Gravity: Objects are massive and obey Newton's law of gravity.

• Billiards: No long-distance forces are present, but the billiards bounce off each other and off the boundaries of the field of vision.

• Magnetic Billiards: All billiards are positively charged, so instead of bouncing, they repel each other according to Coulomb's law. They still bounce off the boundaries.

• Drift: No forces of any kind are present. Objects drift with their initial velocities.

These systems include previously studied gravitational and billiards systems [3, 1] with the added challenge of natural-image backgrounds. For example videos of these systems, see the Supplementary Material or visit https://goo.gl/yVQbUa.

One limitation of the above systems is that the positions, masses, and radii of all objects are either visually observable in every frame or global constants. Furthermore, while occlusion is allowed, the objects have the same radius, so total occlusion never occurs. In contrast, systems with hidden quantities that influence dynamics abound in the real world. To mimic this, we explored a few challenging additional systems:

• Springs with Invisibility: In each simulation a random object is not rendered.
In this way a model must infer the location of the invisible object from its effects on the other objects.

• Springs and Billiards with Variable Mass: In each simulation, each object's radius is randomly generated. This not only causes total occlusion (in the Spring system), but, since density is held constant, a model must also determine each object's mass from its radius.

To simulate each system, we initialized the position and velocity of each ball randomly and used a physics engine to simulate the resulting dynamics. See the Supplementary Material for more details. To generate video data, we rendered the system state on top of a CIFAR-10 natural image background. The background was randomized between simulations. Importantly, we rendered the objects with 15-fold anti-aliasing so the visual encoder could learn to distinguish object positions much more finely than pixel resolution, as evidenced by the visual encoder accuracy described in Section 4.1.

For each system we generated a dataset with 3 objects and a dataset with 6 objects. Each dataset had a training set of 2.5 · 10^5 simulations and a test set of 2.5 · 10^4 simulations, with each simulation 64 frames long. Since we trained on sequences of 14 frames, this ensures we had more than 10^7 training samples with distinct dynamics. We rendered natural image backgrounds online from separate training and testing CIFAR-10 sets.

3.2 Baseline Models

We compared the VIN to a suite of baseline and competitor models, including ablation experiments. For each model, we performed hyperparameter sweeps across all datasets and chose the hyperparameter set with the lowest average test loss.

The Visual RNN has the same visual encoder as the VIN, but the core of its dynamics predictor is an MLP instead of an IN. Each state code is flattened before being passed to the dynamics predictor.
The dynamics predictor is still treated as a recurrent network with temporal offset aggregation, but the dynamics predictor no longer supports the factorized representation of the IN core. Without the weight-sharing of the IN, this model is forced to learn the same force law for each pair of objects, which is not scalable as the object number increases. The Visual LSTM has the same visual encoder as the VIN, but its dynamics predictor is an LSTM [14] with MLP pre- and post-processors. It has no temporal offset aggregation, since the LSTM implicitly integrates temporal information through state updates. During rollouts, the output state code from the post-processor MLP is fed into the pre-processor MLP. The VIN Without Relations is an ablation modification of the VIN. The only difference between this and the VIN is an omitted relation network in the dynamics predictor cores. Note that there is still ample opportunity to compute relations between objects (both in the visual encoder and the dynamics predictor’s temporal offset aggregator), just not specifically through the relation network. Note that we performed a second ablation experiment to isolate the effect of temporal offset aggregation. See the Supplementary Material for details. The Vision With Ground-Truth Dynamics model uses a visual encoder and a miniature version of the dynamics predictor to predict not the next-step state but the current-step state (i.e. the state corresponding to the last observed frame). Since this predicts static dynamics, we did not train it on rollouts. However, when testing, we fed the static state estimation into a ground-truth physics engine to generate rollouts. This model is not a fair comparison to the other models because it does not learn dynamics. It serves instead as a performance bound imposed by the visual encoder. We normalized our results by the performance of this model, as described in Section 4. All models described above learn state from pixels. 
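The IN core that the VIN Without Relations ablates, described in Section 2.2 and Figure 2b (a shared relation net over slot pairs, a self-dynamics net, and an affector), can be sketched as follows. This is a minimal illustration with caller-supplied nets rather than the paper's architecture: the real nets are learned, shared MLPs, and the aggregation details are in the Supplementary Material.

```python
import numpy as np

def in_core(slots, relation_net, self_net, affector):
    """One Interaction Net core over state-code slots (sketch of Figure 2b).

    slots: (n_objects, d) array, one slot per object.
    relation_net: shared net on a concatenated slot pair, (2d,) -> (d,).
    self_net: shared self-dynamics net, (d,) -> (d,).
    affector: post-processor on the summed effect, (d,) -> (d,).
    """
    n, d = slots.shape
    predicted = []
    for i in range(n):
        # Relation net applied to slot i concatenated with every other slot.
        rel = sum((relation_net(np.concatenate([slots[i], slots[j]]))
                   for j in range(n) if j != i), np.zeros(d))
        # Summed with the self-dynamics effect, then post-processed.
        predicted.append(affector(rel + self_net(slots[i])))
    return np.stack(predicted)

# With zero relations and identity self/affector nets, slots pass through,
# mirroring the intuition that relations are unnecessary in the Drift system.
slots = np.arange(12, dtype=float).reshape(3, 4)
out = in_core(slots,
              relation_net=lambda pair: np.zeros(4),
              self_net=lambda x: x,
              affector=lambda e: e)
assert np.allclose(out, slots)
```

Because the relation and self-dynamics nets are shared across objects, the same interaction function is reused for every pair, which is what makes the factorized representation scale with the number of objects.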
However, we also trained two baselines with privileged information: the IN from State and LSTM from State models, which have the IN and LSTM dynamics predictors but make their predictions directly from state to state. Hence, they do not have a visual encoder but instead have access to the ground-truth states for the observed frames. These, in combination with the Vision with Ground-Truth Dynamics, allowed us to comprehensively test our model in part and in full.

3.3 Training Procedure

Our goal was for the models to accurately predict physical dynamics into the future. As shown in Figure 1, the VIN lends itself well to long-term predictions because the dynamics predictor can be treated as a recurrent net and rolled out on state codes. We trained the model to predict a sequence of 8 consecutive unseen future states from 6 frames of input video. Our prediction loss was a normalized weighted sum of the corresponding 8 error terms. The sum was weighted by a discount factor that started at 0.0 and approached 1.0 throughout training, so at the start of training the model must only predict the first unseen state and at the end it must predict an average of all 8 future states. Our training loss was the sum of this prediction loss and an auxiliary encoding loss, as indicated in Figure 1. The model was trained by backpropagation with an Adam optimizer [15]. See the Supplementary Material for full training parameters.

4 Results

Our results show that the VIN predicts dynamics accurately, outperforming baselines on all datasets (see Figures 3 and 4). It is scalable, can accommodate forces with a variety of strengths and distance ranges, and can infer visually unobservable quantities (invisible object location) from dynamics. Our model also generates long rollout sequences that are both visually plausible and similar to ground-truth physics, even outperforming state-of-the-art state-to-state models on this measure.
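The discounted weighting of the 8-step prediction loss described in Section 3.3 can be sketched as below. This is an assumed functional form (`rollout_loss` is a hypothetical helper, and the paper's exact normalization may differ): with a discount of 0.0 only the first unseen state contributes, and with a discount of 1.0 the loss is a plain average over all 8 future states, matching the behavior described in the text.

```python
import numpy as np

def rollout_loss(step_errors, discount):
    """Normalized weighted sum of per-step prediction errors.

    step_errors: errors for the 8 unseen future states, shape (8,).
    discount: annealed from 0.0 to 1.0 over the course of training.
    """
    k = np.arange(len(step_errors), dtype=float)
    w = discount ** k   # discount = 0.0 keeps only the first step (0**0 == 1)
    return float((w * step_errors).sum() / w.sum())

errors = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
assert rollout_loss(errors, 0.0) == 2.0               # start of training: first step only
assert abs(rollout_loss(errors, 1.0) - 9.0) < 1e-12  # end of training: plain average
```

Annealing the discount this way eases the model into long-horizon prediction: early updates are dominated by the easiest (one-step) target, and later updates spread the gradient over the full rollout.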
4.1 Inverse Normalized Loss

We evaluated the performance of each model with the Inverse Normalized Loss, defined as L_bound / L_model. Here L_bound is the test loss of the Vision with Ground-Truth Dynamics and L_model is the test loss of the model in question (see Section 3.3). We used this metric because it is much more interpretable than L_model itself. The Vision with Ground-Truth Dynamics produces the best possible predictions given the visual encoder's error, so the Inverse Normalized Loss always lies in [0, 1], where a value closer to 1.0 indicates better performance. The visual encoder learned position predictions accurate to within 0.15% of the frame width (0.048 times the pixel width), so we have no concerns about the accuracy of the Vision with Ground-Truth Dynamics.

Figure 3 shows the Inverse Normalized Loss on all test datasets after 3 × 10^5 training steps. The VIN outperforms all baselines on nearly all systems. The only baseline with comparable performance is the VIN Without Relations on Drift, which matches the VIN's performance. This makes sense because the objects do not interact in the Drift system, so the relation net should be unnecessary. Of particular note is the performance of the VIN on the invisible dataset (the spring system with a random invisible object), where its performance is comparable to the fully visible 3-object Spring system. It can locate the invisible object's position to within 4% of the frame width (1.3 times the pixel width) for the first 8 rollout steps.

Figure 3: Performance. We compare our model's Inverse Normalized Loss to that of the baselines on all test datasets. 3-object datasets are in the upper row, and 6-object datasets are in the lower row. By definition of the Inverse Normalized Loss, all values are in [0, 1], with 1.0 being the performance of a ground-truth simulator given the visual encoder.
The VIN (red) outperforms every baseline on every dataset (except the VIN Without Relations on Drift, the system with no object interactions).

4.2 Euclidean Prediction Error of Rollout Positions

One important desirable feature of a physical predictor is the ability to extrapolate from a short input video. We addressed this by comparing the performance of all models on long rollout sequences and observing the Euclidean Prediction Error. To compute the Euclidean Prediction Error from a predicted state and a ground-truth state, we calculated the mean over objects of the Euclidean norm between the predicted and true position vectors. We computed the Euclidean Prediction Error at each step over a 50-timestep rollout sequence. Figure 4 shows the average of this quantity over all 3-object test datasets with respect to both timestep and object distance traveled. The VIN outperforms all other models, including the IN from State and LSTM from State, even though they have access to privileged information. This demonstrates the remarkable robustness and generalization power of the VIN. We hypothesize that it outperforms state-to-state models in part because its dynamics predictor must tolerate visual encoder noise during training. This noise-robustness translates to rollouts, where the dynamics predictor remains accurate even as its predictions deviate from true physical dynamics. The state-to-state methods are not trained on noisy state inputs, so they exhibit poorer generalization. See the Supplementary Material for a dataset-specific quantification of these results.

Figure 4: Euclidean Prediction Error on 3-object datasets, (a) Distance Comparison and (b) Time Comparison. We compute the mean over all test datasets of the Euclidean Prediction Error for 50-timestep rollouts. The VIN outperforms all other pixel-to-state models (solid lines) and state-to-state models (dashed lines). Error bars show 95% confidence intervals.
(a) Mean Euclidean Prediction Error with respect to object distance traveled (measured as a fraction of the frame width). The VIN is accurate to within 6% after objects have traversed 0.72 times the frame width. (b) Mean Euclidean Prediction Error with respect to timestep. The VIN is accurate to within 7.5% after 50 timesteps. The optimal information-less predictor (predicting all objects to be at the frame's center) has an error of 37%, higher than all models.

4.3 Visualized Rollouts

To qualitatively evaluate the plausibility of the VIN's rollout predictions, we generated videos by rendering the rollout predictions. These are best seen in video format, though we show them as trajectory-trail images here as well. The backgrounds made trajectory trails difficult to see, so we masked the background (only for rendering purposes). Trajectory trails are shown for rollouts between 40 and 60 time steps, depending on the dataset. We encourage the reader to view the videos at https://goo.gl/RjE3ey. These include the CIFAR backgrounds and show very long rollouts of up to 200 timesteps, which demonstrate the VIN's extremely realistic predictions. We find no reason to doubt that the predictions would continue to be visually realistic (if not exactly tracking the ground-truth simulator) ad infinitum.

5 Conclusion

Here we introduced the Visual Interaction Network and showed that it can infer the physical states of multiple objects from video input and make accurate predictions about their future trajectories. The model uses a CNN-based visual encoder to obtain accurate measurements of object states in the scene. The model also harnesses the prediction abilities and relational computation of Interaction Networks, providing accurate predictions far into the future. We have demonstrated that our model performs well on a variety of physical systems and is robust to visual complexity and partially observable data.
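The two evaluation metrics from Sections 4.1 and 4.2 can be sketched in code. This is a minimal sketch; the function names are ours, not from the paper's (unreleased) implementation:

```python
import numpy as np

def inverse_normalized_loss(loss_bound: float, loss_model: float) -> float:
    """Inverse Normalized Loss = L_bound / L_model (Section 4.1).

    L_bound is the test loss of the Vision with Ground Truth Dynamics,
    which lower-bounds any model's loss, so the ratio lies in [0, 1]
    and values near 1.0 indicate better performance.
    """
    return loss_bound / loss_model

def euclidean_prediction_error(pred_pos: np.ndarray, true_pos: np.ndarray) -> float:
    """Mean over objects of the Euclidean norm between predicted and
    true position vectors (Section 4.2). Inputs: (num_objects, 2)."""
    return float(np.linalg.norm(pred_pos - true_pos, axis=1).mean())
```

In practice these would be averaged over rollout steps and test datasets, as in Figures 3 and 4.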
Table 1: Rollout Trajectories. For each of our datasets (Spring, Gravity, Magnetic Billiards, Billiards, and Drift), we show a sample frame, an example true future trajectory, and a corresponding predicted rollout trajectory (for 40-60 frames, depending on the dataset). The left half shows the 3-object regime and the right half shows the 6-object regime. For visual clarity, all objects are rendered at a higher resolution here than in the training input.

One property of our model is the inherent presence of noise from the visual encoder. In contrast to state-to-state models such as the Interaction Net, here the dynamics predictor's input is inherently noisy due to the discretization of our synthetic dataset rendering. Surprisingly, this noise seemed to confer an advantage because it helped the model learn to overcome temporally compounding errors generated by inaccurate predictions. This is especially notable in long-term rollouts, where we achieve performance that surpasses even a pure state-to-state Interaction Net. Since this dependence on noise would be inherent in any model operating on visual input, we postulate that this is an important feature of any prediction model and warrants further research.

While experimentation with a variable number of objects falls outside the scope of the material presented here, this is an important direction that could be explored in further work. Importantly, INs generalize out of the box to scenes with a variable number of objects. Should the present form of the perceptual encoder be insufficient to support this type of generalization, this could be addressed by using an attentional encoder and an order-agnostic loss function. Our Visual Interaction Network provides a step toward understanding how representations of objects, relations, and physics can be learned from raw data.
This is part of a broader effort toward understanding how perceptual models support physical predictions and how the structure of the physical world influences our representations of sensory input, which will help AI research better capture the powerful object- and relation-based system of reasoning that supports humans' powerful and flexible general intelligence.

Acknowledgments

We thank Max Jaderberg, David Reichert, Daan Wierstra, and Koray Kavukcuoglu for helpful discussions and insights.
Repeated Inverse Reinforcement Learning

Kareem Amin∗ Google Research, New York, NY 10011, kamin@google.com
Nan Jiang∗ Satinder Singh Computer Science & Engineering, University of Michigan, Ann Arbor, MI 48104, {nanjiang,baveja}@umich.edu

Abstract

We introduce a novel repeated Inverse Reinforcement Learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks in which it surprises the human by acting suboptimally with respect to how the human would have acted. Each time the human is surprised, the agent is provided a demonstration of the desired behavior by the human. We formalize this problem, including how the sequence of tasks is chosen, in a few different ways and provide some foundational results.

1 Introduction

One challenge in building AI agents that learn from experience is how to set their goals or rewards. In the Reinforcement Learning (RL) setting, one interesting answer to this question is inverse RL (or IRL), in which the agent infers the rewards of a human by observing the human's policy in a task [2]. Unfortunately, the IRL problem is ill-posed, for there are typically many reward functions for which the observed behavior is optimal in a single task [3]. While the use of heuristics to select from among the set of feasible reward functions has led to successful applications of IRL to the problem of learning from demonstration [e.g., 4], not identifying the reward function poses fundamental challenges to the question of how well and how safely the agent will perform when using the learned reward function in other tasks.

We formalize multiple variations of a new repeated IRL problem in which the agent and (the same) human face multiple tasks over time. We separate the reward function into two components, one which is invariant across tasks and can be viewed as intrinsic to the human, and a second that is task specific.
As a motivating example, consider a human doing tasks throughout a work day, e.g., getting coffee, driving to work, interacting with co-workers, and so on. Each of these tasks has a task-specific goal, but the human brings to each task intrinsic goals that correspond to maintaining health, financial well-being, not violating moral and legal principles, etc. In our repeated IRL setting, the agent presents a policy for each new task that it thinks the human would do. If the agent’s policy “surprises” the human by being sub-optimal, the human presents the agent with the optimal policy. The objective of the agent is to minimize the number of surprises to the human, i.e., to generalize the human’s behavior to new tasks. In addition to addressing generalization across tasks, the repeated IRL problem we introduce and our results are of interest in resolving the question of unidentifiability of rewards from observations in standard IRL. Our results are also of interest to a particular aspect of the concern about how to make sure that the AI systems we build are safe, or AI safety. Specifically, the issue of reward misspecification is often mentioned in AI safety articles [e.g., 5, 6, 7]. These articles mostly discuss broad ethical concerns and possible research directions, while our paper develops mathematical formulations and algorithmic solutions to a specific way of addressing reward misspecification. *This paper extends an unpublished arXiv paper by the authors [1]. ∗Equal contribution. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
In summary form, our contributions include: (1) an efficient reward-identification algorithm when the agent can choose the tasks in which it observes human behavior; (2) an upper bound on the number of total surprises when no assumptions are made on the tasks, along with a corresponding lower bound; (3) an extension to the setting where the human provides sample trajectories instead of complete behavior; and (4) identification guarantees when the agent can only choose the task rewards but is given a fixed task environment.

2 Markov Decision Processes (MDPs)

An MDP is specified by its state space S, action space A, initial state distribution µ ∈ ∆(S), transition function (or dynamics) P : S × A → ∆(S), reward function Y : S → R, and discount factor γ ∈ [0, 1). We assume finite S and A, and ∆(S) is the space of all distributions over S. A policy π : S → A describes an agent's behavior by specifying the action to take in each state. The (normalized) value function or long-term utility of π is defined as $V^\pi(s) = (1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} Y(s_t) \,\big|\, s_0 = s; \pi\big]$.² Similarly, the Q-value function is $Q^\pi(s,a) = (1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} Y(s_t) \,\big|\, s_0 = s, a_0 = a; \pi\big]$. Where necessary we will use the notation $V^\pi_{P,Y}$ to avoid ambiguity about the dynamics and the reward function. Let π⋆ : S → A be an optimal policy, which maximizes V^π and Q^π in all states (and actions) simultaneously.

Given an initial distribution over states, µ, a scalar value that measures the goodness of π is defined as $\mathbb{E}_{s\sim\mu}[V^\pi(s)]$. We introduce some further notation to express this quantity in vector-matrix form. Let $\eta^\pi_{\mu,P} \in \mathbb{R}^{|S|}$ be the normalized state occupancy under initial distribution µ, dynamics P, and policy π, whose s-th entry is $(1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1}\,\mathbb{I}(s_t = s) \,\big|\, s_0 \sim \mu; \pi\big]$ ($\mathbb{I}(\cdot)$ is the indicator function). This vector can be computed in closed form as $\eta^\pi_{\mu,P} = (1-\gamma)\big(\mu^\top P^\pi (I_{|S|} - \gamma P^\pi)^{-1}\big)^\top$, where $P^\pi$ is an |S| × |S| matrix whose (s, s′)-th element is P(s′|s, π(s)), and $I_{|S|}$ is the |S| × |S| identity matrix.
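As a numerical sanity check of the closed-form occupancy, on a toy random MDP (all quantities below are illustrative; only the formulas come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma = 4, 0.9

# Toy quantities: a policy-induced transition matrix P_pi (rows sum
# to 1), an initial distribution mu, and a reward vector Y.
P_pi = rng.random((S, S))
P_pi /= P_pi.sum(axis=1, keepdims=True)
mu = np.full(S, 1.0 / S)
Y = rng.random(S)

# Closed-form normalized occupancy (the sum starts at t = 1 because
# reward occurs after the transition):
#   eta = (1 - gamma) * (mu^T P_pi (I - gamma P_pi)^{-1})^T
eta = (1 - gamma) * (mu @ P_pi @ np.linalg.inv(np.eye(S) - gamma * P_pi))
assert np.isclose(eta.sum(), 1.0)  # eta is a distribution over states

# The expected value under mu equals Y . eta, with V^pi computed
# directly as (1 - gamma) * P_pi (I - gamma P_pi)^{-1} Y.
V = (1 - gamma) * (P_pi @ np.linalg.inv(np.eye(S) - gamma * P_pi) @ Y)
assert np.isclose(mu @ V, Y @ eta)
```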
For convenience we will also treat the reward function Y as a vector in $\mathbb{R}^{|S|}$, and we have
$$\mathbb{E}_{s\sim\mu}[V^\pi(s)] = Y^\top \eta^\pi_{\mu,P}. \tag{1}$$

3 Problem setup

Here we define the repeated IRL problem. The human's reward function θ⋆ captures his/her safety concerns and intrinsic/general preferences. This θ⋆ is unknown to the agent and is the object of interest herein, i.e., if θ⋆ were known to the agent, the concerns addressed in this paper would be solved. We assume that the human cannot directly communicate θ⋆ to the agent but can evaluate the agent's behavior in a task as well as demonstrate optimal behavior. Each task comes with an external reward function R, and the goal is to maximize the reward with respect to Y := θ⋆ + R in each task.

As a concrete example, consider an agent for an autonomous vehicle. In this case, θ⋆ represents the cross-task principles that define good driving (e.g., courtesy towards pedestrians and other vehicles), which are often difficult to explicitly describe. In contrast, R, the task-specific reward, could reward the agent for successfully completing parallel parking. While R is easier to construct, it may not completely capture what a human deems good driving. (For example, an agent might successfully parallel park while still boxing in neighboring vehicles.)

More formally, a task is defined by a pair (E, R), where E = (S, A, µ, P, γ) is the task environment (i.e., a controlled Markov process) and R is the task-specific reward function (task reward). We assume that all tasks share the same S, A, γ, with |A| ≥ 2, but may differ in the initial distribution µ, dynamics P, and task reward R; all of the task-specifying quantities are known to the agent. In any task, the human's optimal behavior is always with respect to the reward function Y = θ⋆ + R. We emphasize again that θ⋆ is intrinsic to the human and remains the same across all tasks.
Our use of task-specific reward functions R allows for greater generality than the usual IRL setting, and most of our results apply equally to the case where R ≡ 0. While θ⋆ is private to the human, the agent has some prior knowledge of θ⋆, represented as a set of possible parameters $\Theta_0 \subset \mathbb{R}^{|S|}$ that contains θ⋆. Throughout, we assume that the human's reward has bounded and normalized magnitude, that is, $\|\theta_\star\|_\infty \le 1$.

²Here we differ (w.l.o.g.) from common IRL literature in assuming that reward occurs after transition.

A demonstration in (E, R) reveals π⋆, optimal for Y = θ⋆ + R under environment E, to the agent. A common assumption in the IRL literature is that the full mapping is revealed, which can be unrealistic if some states are unreachable from the initial distribution. We address the issue by requiring only the state occupancy vector $\eta^{\pi^\star}_{\mu,P}$. In Section 7 we show that this also allows an easy extension to the setting where the human only demonstrates trajectories instead of providing a policy.

Under the above framework for repeated IRL, we consider two settings that differ in how the sequence of tasks is chosen. In both settings, we will want to minimize the number of demonstrations needed.

1. (Section 5) The agent chooses the tasks, observes the human's behavior in each of them, and infers the reward function. In this setting, where the agent is powerful enough to choose tasks arbitrarily, we will show that the agent will be able to identify the human's reward function, which of course implies the ability to generalize to new tasks.

2. (Section 6) Nature chooses the tasks, and the agent proposes a policy in each task. The human demonstrates a policy only if the agent's policy is significantly suboptimal (i.e., a mistake). In this setting we will derive upper and lower bounds on the number of mistakes our agent will make.

4 The challenge of identifying rewards

Note that it is impossible to identify θ⋆ from watching human behavior in a single task.
This is because any θ⋆ is fundamentally indistinguishable from an infinite set of reward functions that yield exactly the policy observed in the task. We introduce the idea of behavioral equivalence below to tease apart two separate issues wrapped up in the challenge of identifying rewards.

Definition 1. Two reward functions θ, θ′ ∈ $\mathbb{R}^{|S|}$ are behaviorally equivalent in all MDP tasks if, for any (E, R), the sets of optimal policies for (R + θ) and (R + θ′) are the same.

We argue that the task of identifying the reward function should amount only to identifying the (behavioral) equivalence class to which θ⋆ belongs. In particular, identifying the equivalence class is sufficient to get perfect generalization to new tasks. Any remaining unidentifiability is merely representational and of no real consequence. Next we present a constraint that captures the reward functions that belong to the same equivalence class.

Proposition 1. Two reward functions θ and θ′ are behaviorally equivalent in all MDP tasks if and only if $\theta - \theta' = c \cdot \mathbf{1}_{|S|}$ for some c ∈ R, where $\mathbf{1}_{|S|}$ is an all-1 vector of length |S|.

The proof is elementary and deferred to Appendix A. For any class of θ's that are equivalent to each other, we can choose a canonical element to represent the class. For example, we can fix an arbitrary reference state sref ∈ S, and fix the reward of this state to 0 for θ⋆ and all candidate θ's. In the rest of the paper, we will always assume such canonicalization in the MDP setting, hence $\theta_\star \in \Theta_0 \subseteq \{\theta \in [-1,1]^{|S|} : \theta(s_{\mathrm{ref}}) = 0\}$.

5 Agent chooses the tasks

In this section, the protocol is that the agent chooses a sequence of tasks {(Et, Rt)}. For each task (Et, Rt), the human reveals $\pi^\star_t$, which is optimal for environment Et and reward function θ⋆ + Rt. Our goal is to design an algorithm that chooses {(Et, Rt)} and identifies θ⋆ to a desired accuracy ϵ using as few tasks as possible.
Theorem 1 shows that a simple algorithm can identify θ⋆ after only O(log(1/ϵ)) tasks, if any tasks may be chosen. Roughly speaking, the algorithm amounts to a binary search on each component of θ⋆ by manipulating the task reward Rt.³ See the proof for the algorithm specification. As noted before, once the agent has identified θ⋆ within an appropriate tolerance, it can compute a sufficiently-near-optimal policy for all tasks, thus completing the generalization objective through the far stronger identification objective in this setting.

Theorem 1. If $\theta_\star \in \Theta_0 \subseteq \{\theta \in [-1,1]^{|S|} : \theta(s_{\mathrm{ref}}) = 0\}$, there exists an algorithm that outputs $\theta \in \mathbb{R}^{|S|}$ satisfying $\|\theta - \theta_\star\|_\infty \le \epsilon$ after O(log(1/ϵ)) demonstrations.

Proof. The algorithm chooses the following fixed environment in all tasks: for each s ∈ S \ {sref}, let one action be a self-loop, and let the other action transition to sref. In sref, all actions cause self-loops. The initial distribution over states is uniform over S \ {sref}. The tasks differ only in the task reward Rt (where Rt(sref) ≡ 0 always). After observing the state occupancy of the optimal policy, for each s we check whether the occupancy is equal to 0. If so, the demonstrated optimal policy chooses to go to sref from s in the first time step, and θ⋆(s) + Rt(s) ≤ θ⋆(sref) + Rt(sref) = 0; if not, we have θ⋆(s) + Rt(s) ≥ 0. Consequently, after each task we learn the relationship between θ⋆(s) and −Rt(s) for each s ∈ S \ {sref}, so conducting a binary search by manipulating Rt(s) identifies θ⋆ to ϵ-accuracy after O(log(1/ϵ)) tasks.

³While we present a proof that manipulates Rt, an only slightly more complex proof applies to the setting where all the Rt are exactly zero and the manipulation is limited to the environment [1].

6 Nature chooses the tasks

While Theorem 1 yields a strong identification guarantee, it also relies on a strong assumption: that {(Et, Rt)} may be chosen by the agent in an arbitrary manner.
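The binary search in the proof of Theorem 1 can be simulated in a few lines. This toy sketch abstracts the environment away and keeps only the sign feedback that each demonstration provides (the paper's proof probes all states in parallel in each task; we search one coordinate at a time for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_star = rng.uniform(-1, 1, size=5)  # hidden human reward, ||.||_inf <= 1
eps = 1e-3

def identify(theta, eps):
    """Binary search on each component, as in the proof of Theorem 1.

    Each comparison stands in for one demonstration: with task reward
    R_t(s) = -mid, the occupancy of s is nonzero iff theta(s) - mid >= 0.
    """
    est = np.zeros_like(theta)
    for s in range(len(theta)):
        lo, hi = -1.0, 1.0
        while hi - lo > eps:            # O(log(1/eps)) demonstrations per state
            mid = (lo + hi) / 2.0
            if theta[s] - mid >= 0:     # sign revealed by the demonstration
                lo = mid
            else:
                hi = mid
        est[s] = (lo + hi) / 2.0
    return est

theta_hat = identify(theta_star, eps)
assert np.max(np.abs(theta_hat - theta_star)) <= eps
```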
In this section, we let nature, who is allowed to be adversarial for the purpose of the analysis, choose {(Et, Rt)}. Generally speaking, we cannot obtain identification guarantees in such an adversarial setup. As an example, if Rt ≡ 0 and Et remains the same over time, we are essentially back to the classical IRL setting and suffer from the degeneracy issue. However, generalization to future tasks, which is our ultimate goal, is easy in this special case: after the initial demonstration, the agent can mimic it to behave optimally in all subsequent tasks without requiring further demonstrations. More generally, if nature repeats similar tasks, then the agent obtains little new information, but presumably it knows how to behave in most cases; if nature chooses a task unfamiliar to the agent, then the agent is likely to err, but it may learn about θ⋆ from the mistake.

To formalize this intuition, we consider the following protocol: nature chooses a sequence of tasks {(Et, Rt)} in an arbitrary manner. For every task (Et, Rt), the agent proposes a policy πt. The human examines the policy's value under µt, and if the loss
$$l_t = \mathbb{E}_{s\sim\mu_t}\Big[V^{\pi^\star_t}_{E_t,\,\theta_\star+R_t}(s)\Big] - \mathbb{E}_{s\sim\mu_t}\Big[V^{\pi_t}_{E_t,\,\theta_\star+R_t}(s)\Big] \tag{2}$$
is less than some ϵ, then the human is satisfied and no demonstration is needed; otherwise a mistake is counted and $\eta^{\pi^\star_t}_{\mu_t,P_t}$ is revealed to the agent (note that $\eta^{\pi^\star_t}_{\mu_t,P_t}$ can be computed by the agent if needed from $\pi^\star_t$ and its knowledge of the task). The main goal of this section is to design an algorithm that has a provable guarantee on the total number of mistakes.

On human supervision. Here we require the human to evaluate the agent's policies in addition to providing demonstrations. We argue that this is a reasonable assumption because (1) only a binary signal I(lt > ϵ) is needed, as opposed to the precise value of lt, and (2) if a policy is suboptimal but the human fails to realize it, arguably it should not be treated as a mistake.
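A minimal sketch of this supervision signal, with policies represented directly by toy occupancy vectors and the value gap computed via identity (1); all names and values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
S, eps = 4, 0.1
theta_star = rng.uniform(-1, 1, S)               # human's hidden reward
R_t = rng.uniform(-1, 1, S)                      # task reward
occupancies = rng.dirichlet(np.ones(S), size=3)  # one toy policy per row

# The human's policy maximizes (theta_star + R_t)^T eta; the agent's
# loss l_t is the value gap between that policy and the agent's pick.
values = occupancies @ (theta_star + R_t)
eta_opt = occupancies[values.argmax()]           # occupancy of pi*_t
eta_agent = occupancies[0]                       # occupancy of agent's pi_t

l_t = (theta_star + R_t) @ (eta_opt - eta_agent)
mistake = bool(l_t > eps)   # the only feedback signal the agent receives
```

Because eta_opt is the maximizer over the candidate set, l_t is always nonnegative.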
Meanwhile, we will also provide identification guarantees in Section 6.4, as the human will be relieved of the supervision duty once θ⋆ is identified.

Before describing and analyzing our algorithm, we first notice that Equation 2 can be rewritten as
$$l_t = (\theta_\star + R_t)^\top\big(\eta^{\pi^\star_t}_{\mu_t,P_t} - \eta^{\pi_t}_{\mu_t,P_t}\big), \tag{3}$$
using Equation 1. So effectively, the given environment Et in each round induces a set of state occupancy vectors $\{\eta^\pi_{\mu_t,P_t} : \pi \in (S \to A)\}$, and we want the agent to choose the vector that has the largest dot product with θ⋆ + R. The exponential size of the set will not be a concern, because our main result (Theorem 2) has no dependence on the number of vectors and only depends on their dimension. The result is enabled by studying the linear bandit version of the problem, which subsumes the MDP setting for our purpose and is also a model of independent interest.

6.1 The linear bandit setting

In the linear bandit setting, D is a finite action space with size |D| = K. Each task is denoted as a pair (X, R), where R is the task-specific reward function as before. $X = [x^{(1)} \cdots x^{(K)}]$ is a d × K feature matrix, where $x^{(i)}$ is the feature vector for the i-th action, and $\|x^{(i)}\|_1 \le 1$. When we reduce MDPs to linear bandits, each element of D corresponds to an MDP policy, and the feature vector is the state occupancy of that policy. As before, $R, \theta_\star \in \mathbb{R}^d$ are the task reward and the human's unknown reward, respectively. The initial uncertainty set for θ⋆ is $\Theta_0 \subseteq [-1,1]^d$. The value of the i-th action is calculated as $(\theta_\star + R)^\top x^{(i)}$,

Algorithm 1 Ellipsoid Algorithm for Repeated Inverse Reinforcement Learning
1: Input: Θ0.
2: Θ1 ← MVEE(Θ0).
3: for t = 1, 2, . . . do
4:   Nature reveals (Xt, Rt).
5:   Learner plays $a_t = \arg\max_{a\in D} c_t^\top x_t^a$, where $c_t$ is the center of $\Theta_t$. $\Theta_{t+1} \leftarrow \Theta_t$.
6:   if lt > ϵ then
7:     Human reveals $a^\star_t$. $\Theta_{t+1} \leftarrow \mathrm{MVEE}\big(\{\theta \in \Theta_t : (\theta - c_t)^\top (x_t^{a^\star_t} - x_t^{a_t}) \ge 0\}\big)$.
8:   end if
9: end for
Every round the agent proposes an action a ∈D, whose loss is defined as lt = (θ⋆+ R)⊤(xa⋆−xa). As before, a mistake is counted when lt > ϵ, in which case the optimal demonstration xa⋆is provided to the agent. We reiterate here that the agent only receives a binary signal I(lt > ϵ) in addition to the demonstration. We use the term “linear bandit” to refer to the generative process, but our interaction protocol differs from those in the standard bandit literature where reward or cost is revealed [8, 9]. We now show how to embed the previous MDP setting in the linear bandit setting. Example 1. Given an MDP problem with variables S, A, γ, θ⋆, sref, Θ0, {(Et, Rt)}, we can convert it into a linear bandit problem as follows: (all variables with prime belong to the linear bandit problem, and we use v\i to denote the vector v with the i-th coordinate removed) • D = {π : S →A}, d = |S| −1, θ′ ⋆= θ\sref ⋆ , Θ′ 0 = {θ\sref : θ ∈Θ0}. • xπ t = (ηπ µt,Pt)\sref. R′ t = R\sref t −Rt(sref) · 1d. Note that there is a more straightforward conversion by letting d = |S|, θ′ ⋆= θ⋆, Θ′ 0 = Θ0, xπ t = ηπ µt,Pt, R′ t = Rt, which also preserves losses. We perform a more succinct conversion in Example 1 by canonicalizing both θ⋆(already assumed) and Rt (explicitly done here) and dropping the coordinate for sref in all relevant vectors. MDPs with linear rewards In IRL literature, a generalization of the MDP setting is often considered, that reward is linear in state features φ(s) ∈Rd [2, 3]. In this new setting, θ⋆and R are reward parameters, and the actual reward is (θ⋆+ R)⊤φ(s). This new setting can also be reduced to the linear bandit setting similarly to Example 1, except that the state occupancy is replaced by the discounted sum of expected feature values. Our main result, Theorem 2, will still apply automatically, but now the guarantee will only depend on the dimension of the feature space and has no dependence on |S|. 
We include the conversion below but do not further discuss this setting in the rest of the paper.

Example 2. Consider an MDP problem with state features, defined by S, A, γ, d ∈ Z+, $\theta_\star \in \mathbb{R}^d$, $\Theta_0 \subseteq [-1,1]^d$, $\{(E_t, \varphi_t \in \mathbb{R}^d, R_t \in \mathbb{R}^d)\}$, where the task reward and background reward in state s are $\theta_\star^\top \varphi_t(s)$ and $R^\top \varphi_t(s)$ respectively, and θ⋆ ∈ Θ0. Suppose $\|\varphi_t(s)\|_\infty \le 1$ always holds; then we can convert it into a linear bandit problem as follows: D = {π : S → A}; d, θ⋆, and Rt remain the same; $x^\pi_t = (1-\gamma)\sum_{h=1}^{\infty} \gamma^{h-1}\,\mathbb{E}[\varphi(s_h) \mid \mu_t, P_t, \pi]\,/\,d$. Note that the division by d in $x^\pi_t$ is for normalization, so that $\|x^\pi_t\|_1 \le \|\varphi\|_1/d \le \|\varphi\|_\infty \le 1$.

6.2 Ellipsoid Algorithm for Repeated Inverse Reinforcement Learning

We propose Algorithm 1 and provide the mistake bound in the following theorem.

Theorem 2. For $\Theta_0 = [-1,1]^d$, the number of mistakes made by Algorithm 1 is guaranteed to be $O(d^2 \log(d/\epsilon))$.

To prove Theorem 2, we quote a result from the linear programming literature in Lemma 1, which can be found in standard lecture notes (e.g., [10], Theorem 8.8; see also [11], Lemma 3.1.34).

Lemma 1 (Volume reduction in ellipsoid algorithm). Given any non-degenerate ellipsoid B in $\mathbb{R}^d$ centered at $c \in \mathbb{R}^d$, and any non-zero vector $v \in \mathbb{R}^d$, let $B^+$ be the minimum-volume enclosing ellipsoid (MVEE) of $\{u \in B : (u - c)^\top v \ge 0\}$. We have $\mathrm{vol}(B^+)/\mathrm{vol}(B) \le e^{-\frac{1}{2(d+1)}}$.

Proof of Theorem 2. Whenever a mistake is made, we can induce the constraint $(R_t + \theta_\star)^\top\big(x_t^{a^\star_t} - x_t^{a_t}\big) > \epsilon$. Meanwhile, since $a_t$ is greedy w.r.t. $c_t$, we have $(R_t + c_t)^\top\big(x_t^{a^\star_t} - x_t^{a_t}\big) \le 0$, where $c_t$ is the center of $\Theta_t$ as in Line 5. Taking the difference of the two inequalities, we obtain
$$(\theta_\star - c_t)^\top\big(x_t^{a^\star_t} - x_t^{a_t}\big) > \epsilon. \tag{4}$$
Therefore, the update rule on Line 7 of Algorithm 1 preserves θ⋆ in $\Theta_{t+1}$. Since the update makes a central cut through the ellipsoid, Lemma 1 applies and the volume shrinks every time a mistake is made. To prove the theorem, it remains to upper bound the initial volume and lower bound the terminal volume of $\Theta_t$.
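The volume reduction just invoked (Lemma 1) can be checked numerically. The explicit central-cut update below is the standard one from the ellipsoid-method literature; the paper only invokes the lemma, so the formulas here are our addition:

```python
import numpy as np

# Ellipsoid {u : (u - c)^T A^{-1} (u - c) <= 1}; we keep the half
# where (u - c)^T v >= 0 and take its MVEE, as on Line 7 of Algorithm 1.
def central_cut(A, c, v):
    d = len(c)
    Av = A @ v
    norm = np.sqrt(v @ Av)
    c_new = c + Av / (norm * (d + 1))
    A_new = (d * d / (d * d - 1.0)) * (
        A - (2.0 / (d + 1)) * np.outer(Av, Av) / norm**2
    )
    return A_new, c_new

d = 5
A, c = np.eye(d), np.zeros(d)      # start from the unit ball in R^5
v = np.ones(d)                     # cut direction, e.g. x_t^{a*} - x_t^{a}
A_new, c_new = central_cut(A, c, v)

# vol(B+)/vol(B) = sqrt(det(A_new)/det(A)), and Lemma 1's bound holds:
ratio = np.sqrt(np.linalg.det(A_new) / np.linalg.det(A))
assert ratio <= np.exp(-1.0 / (2 * (d + 1)))
```

For d = 5 the computed ratio is well below the bound e^{-1/12}, consistent with the lemma.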
We first show that an update never eliminates $B_\infty(\theta_\star, \epsilon/2)$, the ℓ∞ ball centered at θ⋆ with radius ϵ/2. This is because any eliminated θ satisfies $(\theta - c_t)^\top\big(x_t^{a^\star_t} - x_t^{a_t}\big) < 0$. Combining this with Equation 4, we have
$$\epsilon < (\theta_\star - \theta)^\top\big(x_t^{a^\star_t} - x_t^{a_t}\big) \le \|\theta_\star - \theta\|_\infty\,\big\|x_t^{a^\star_t} - x_t^{a_t}\big\|_1 \le 2\,\|\theta_\star - \theta\|_\infty.$$
The last step follows from $\|x\|_1 \le 1$. We conclude that any eliminated θ must be at least ϵ/2 away from θ⋆ in ℓ∞ distance. Hence, we can lower bound the volume of $\Theta_t$ for any t by that of $\Theta_0 \cap B_\infty(\theta_\star, \epsilon/2)$, which contains an ℓ∞ ball of radius ϵ/4 at its smallest (when θ⋆ is one of Θ0's vertices). To simplify calculation, we relax this lower bound (the volume of the ℓ∞ ball) to the volume of the inscribed ℓ2 ball.

Finally we put everything together: let $M_T$ be the number of mistakes made from round 1 to T, let $C_d$ be the volume of the unit hypersphere in $\mathbb{R}^d$ (i.e., the ℓ2 ball with radius 1), and let vol(·) denote the volume of an ellipsoid. We have
$$\frac{M_T}{2(d+1)} \le \log(\mathrm{vol}(\Theta_1)) - \log(\mathrm{vol}(\Theta_{T+1})) \le \log\!\big(C_d(\sqrt{d})^d\big) - \log\!\big(C_d(\epsilon/4)^d\big) = d\log\frac{4\sqrt{d}}{\epsilon}.$$
So $M_T \le 2d(d+1)\log\frac{4\sqrt{d}}{\epsilon} = O\big(d^2\log\frac{d}{\epsilon}\big)$.

6.3 Lower bound

In Section 5, we obtained an O(log(1/ϵ)) upper bound on the number of demonstrations, with no dependence on |S| (which corresponds to d + 1 in the linear bandit setting). Comparing Theorem 2 to Theorem 1, one may wonder whether the polynomial dependence on d is an artifact of the inefficiency of Algorithm 1. We clarify this issue by proving a lower bound, showing that Ω(d log(1/ϵ)) mistakes are inevitable in the worst case when nature chooses the tasks. We provide a proof sketch below; the complete proof is deferred to Appendix E.

Theorem 3. For any randomized algorithm⁴ in the linear bandit setting, there always exists $\theta_\star \in [-1,1]^d$ and an adversarial sequence of $\{(X_t, R_t)\}$, potentially adapting to the algorithm's previous decisions, such that the expected number of mistakes made by the algorithm is Ω(d log(1/ϵ)).

Proof Sketch. We randomize θ⋆ by sampling each element i.i.d. from Unif([−1, 1]).
We will prove that there exists a strategy of choosing (X_t, R_t) such that any algorithm's expected number of mistakes is Ω(d log(1/ϵ)), which proves the theorem since the maximum is no less than the average. In our construction, X_t = [0_d, e_{j_t}], where j_t is some index to be specified. Hence, in every round the agent is essentially asked to decide whether θ(j_t) ≥ −R_t(j_t). The adversary's strategy goes in phases, and R_t remains the same during each phase. Every phase has d rounds where j_t is enumerated over {1, . . . , d}. The adversary will use R_t to shift the posterior on θ(j_t) + R_t(j_t) so that it is centered around the origin; in this way, the agent has about 1/2 probability of making an error (regardless of the algorithm), and the posterior interval is halved. Overall, the agent makes d/2 mistakes in each phase, and there are about log(1/ϵ) phases in total, which gives the lower bound.

Applying the lower bound to MDPs. The above lower bound is stated for the linear bandit setting. In principle, we need to prove a lower bound for MDPs separately, because linear bandits are more general than MDPs for our purpose, and the hard instances in linear bandits may not have corresponding MDP instances. In Lemma 2 below, we show that a certain type of linear bandit instance can always be emulated by an MDP with the same number of actions, and the hard instances constructed in Theorem 3 indeed satisfy the conditions for such a type; in particular, we require the feature vectors to be non-negative and have ℓ_1 norm bounded by 1. As a corollary, an Ω(|S| log(1/ϵ)) lower bound for the MDP setting (even with a small action space |A| = 2) follows directly from Theorem 3. The proof of Lemma 2 is deferred to Appendix B.

⁴While our Algorithm 1 is deterministic, randomization is often crucial for online learning in general [12].

Lemma 2 (Linear bandit to MDP conversion). Let (X, R) be a linear bandit task, and K be the number of actions.
If every x^a is non-negative and ∥x^a∥_1 ≤ 1, then there exists an MDP task (E, R′) with d + 1 states and K actions, such that under some choice of s_ref, converting (E, R′) as in Example 1 recovers the original problem.

6.4 On identification when nature chooses tasks

While Theorem 2 successfully controls the number of total mistakes, it completely sidesteps the identification problem and does not guarantee recovery of θ⋆. In this section we explore further conditions under which we can obtain identification guarantees when nature chooses the tasks. The first condition, stated in Proposition 2, implies that if we have made all the possible mistakes, then we have indeed identified θ⋆, where the identification accuracy is determined by the tolerance parameter ϵ that defines what counts as a mistake. Due to space limits, the proof is deferred to Appendix C.

Proposition 2. Consider the linear bandit setting. If there exists T_0 such that for any round t ≥ T_0, no more mistakes can ever be made by the algorithm for any choice of (E_t, R_t) and any tie-breaking mechanism, then we have θ⋆ ∈ B_∞(c_{T_0}, ϵ).

While the above proposition shows that identification is guaranteed if the agent exhausts the mistakes, the agent has no way to actively fulfill this condition when nature chooses tasks. For a stronger identification guarantee, we may need to grant the agent some freedom in choosing the tasks.

Identification with a fixed environment. Here we consider a setting that sits in between Section 5 (completely active) and Section 6.1 (completely passive), where the environment E (hence the induced feature vectors {x^(1), x^(2), . . . , x^(K)}) is given and fixed, and the agent can arbitrarily choose the task reward R_t. The goal is to obtain an identification guarantee in this intermediate setting. Unfortunately, a degenerate case can easily be constructed that prevents the revelation of any information about θ⋆. In particular, if x^(1) = x^(2) = · · ·
= x^(K), i.e., the environment is completely uncontrolled, then all actions are equally optimal and nothing can be learned. More generally, if for some v ≠ 0 we have v^⊤ x^(1) = v^⊤ x^(2) = · · · = v^⊤ x^(K), then we may never recover θ⋆ along the direction of v. In fact, Proposition 1 can be viewed as an instance of this result where v = 1_{|S|} (recall that 1_{|S|}^⊤ η^π_{μ,P} ≡ 1), and that is why we have to remove such redundancy in Example 1 in order to discuss identification in MDPs. Therefore, to guarantee identification in a fixed environment, the feature vectors must have significant variation in all directions. We capture this intuition by defining a diversity score spread(X) (Definition 2) and showing that the identification accuracy depends inversely on the score (Theorem 4).

Definition 2. Given the feature matrix X = [x^(1) x^(2) · · · x^(K)] of size d × K, define spread(X) as the d-th largest singular value of X̃ := X(I_K − (1/K) 1_K 1_K^⊤).

Theorem 4. For a fixed feature matrix X, if spread(X) > 0, then there exists a sequence R_1, R_2, . . . , R_T with T = O(d² log(d/ϵ)) and a sequence of tie-break choices of the algorithm, such that after round T we have ∥c_T − θ⋆∥_∞ ≤ ϵ √((K − 1)/2) / spread(X).

The proof is deferred to Appendix D. The √K dependence in Theorem 4 may be of concern, as K can be exponentially large. However, Theorem 4 also holds if we replace X by any matrix consisting of a subset of X's columns, so we may choose a small yet maximally diverse set of columns so as to optimize the bound.

7 Working with trajectories

In previous sections, we have assumed that the human evaluates the agent's performance based on the state occupancy of the agent's policy, and demonstrates the optimal policy in terms of state occupancy

Algorithm 2 Trajectory version of Algorithm 1 for MDPs
1: Input: Θ_0, H, n.
2: Θ_1 ← MVEE(Θ_0), i ← 0, Z̄ ← 0, Z̄⋆ ← 0.
3: for t = 1, 2, . . . do
4:   Nature reveals (E_t, R_t). Agent rolls out a trajectory using π_t, greedy w.r.t. c_t + R_t.
5:   Θ_{t+1} ← Θ_t.
6:   if agent takes a in s with Q⋆(s, a) < V⋆(s) − ϵ then
7:     Human produces an H-step trajectory from s. Let the empirical state occupancy be ẑ_i^{⋆,H}.
8:     i ← i + 1, Z̄⋆ ← Z̄⋆ + ẑ_i^{⋆,H}.
9:     Let z_i be the state occupancy of π_t from initial state s, and Z̄ ← Z̄ + z_i.
10:    if i = n then
11:      Θ_{t+1} ← MVEE({θ ∈ Θ_t : (θ − c_t)^⊤ (Z̄⋆ − Z̄) ≥ 0}). i ← 0, Z̄ ← 0, Z̄⋆ ← 0.
12:    end if
13:  end if
14: end for

as well. In practice, we would like to instead assume that for each task, the agent rolls out a trajectory, and the human shows an optimal trajectory if he/she finds the agent's trajectory unsatisfying. We are still concerned with upper bounding the number of total mistakes, and aim to provide a parallel version of Theorem 2. Unlike in traditional IRL, in our setting the agent is also acting, which gives rise to many subtleties. First, the total reward on the agent's single trajectory is a random variable, and may deviate from the expected value of its policy. Therefore, it is generally impossible to decide whether the agent's policy is near-optimal; instead, we assume that the human can check whether each action the agent takes in the trajectory is near-optimal: when the agent takes a at state s, an error is counted if and only if Q⋆(s, a) < V⋆(s) − ϵ. This criterion can be viewed as a noisy version of the one used in previous sections, as taking the expectation of V⋆(s) − Q⋆(s, π(s)) over the occupancy induced by π recovers Equation 2. While this resolves the issue on the agent's side, how should the human provide his/her optimal trajectory? The most straightforward protocol is that the human rolls out a trajectory from the initial distribution of the task, μ_t.
We argue that this is not a reasonable protocol for two reasons: (1) in expectation, the reward collected by the human may be less than that collected by the agent, because conditioning on the event that an error is spotted may introduce a selection bias; (2) the human may not encounter the problematic state in his/her own trajectory, hence the information provided in the trajectory may be irrelevant. To resolve this issue, we consider a different protocol where the human rolls out a trajectory using an optimal policy from the very state where the agent errs.

Now we discuss how we can prove a parallel of Theorem 2 under this new protocol. First, let us assume that the demonstration were still given in the form of a state occupancy vector starting at the problematic state. In this case, we can reduce to the setting of Section 6 by changing μ_t to a point mass on the problematic state.⁵ To apply the algorithm and the analysis in Section 6, it remains to show that the notion of error in this section (a suboptimal action) implies the notion of error in Section 6 (a suboptimal policy): letting s be the problematic state and π the agent's policy, we have V^π(s) = Q^π(s, π(s)) ≤ Q⋆(s, π(s)) < V⋆(s) − ϵ. So whenever a suboptimal action is spotted in state s, it indeed implies that the agent's policy is suboptimal for s as the initial state. Hence, we can run Algorithm 1 as-is and Theorem 2 immediately applies. To tackle the remaining issue that the demonstration is in terms of a single trajectory, we do not update Θ_t after each mistake as in Algorithm 1, but only make an update after every mini-batch of mistakes, aggregating them to form accurate update rules. See Algorithm 2. The formal guarantee of the algorithm is stated in Theorem 5, whose proof is deferred to Appendix G.
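Line 7 of Algorithm 2 needs an empirical state occupancy computed from a single H-step trajectory. The following is a minimal sketch (ours; the (1 − γ) normalization mirrors Example 2 and is our assumption, since the paper defers the estimator's details to Appendix G):

```python
import numpy as np

def empirical_occupancy(states, n_states, gamma, H):
    """Discounted empirical state occupancy of a single H-step trajectory,
    z_hat(s) = (1 - gamma) * sum_{h=1}^{H} gamma^(h-1) * 1{s_h = s}.
    The (1 - gamma) normalization is our assumption, mirroring Example 2."""
    z = np.zeros(n_states)
    for h, s in enumerate(states[:H]):
        z[s] += (1.0 - gamma) * gamma ** h
    return z

# Mini-batch aggregation as on lines 8-11 of Algorithm 2: average n estimates.
gamma, H, n_states, n = 0.9, 50, 4, 16
rng = np.random.default_rng(1)
batch = [empirical_occupancy(rng.integers(0, n_states, size=H), n_states, gamma, H)
         for _ in range(n)]
z_bar = np.mean(batch, axis=0)
assert abs(z_bar.sum() - (1 - gamma ** H)) < 1e-12  # truncation loses gamma^H mass
```

Averaging over a mini-batch concentrates the estimate, which is why Theorem 5 chooses the batch size n as a function of ϵ and δ.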
⁵At first glance this might seem suspicious: the problematic state is random and depends on the learner's current policy, whereas in RL the initial distribution is usually fixed and the learner has no control over it. This concern is removed thanks to our adversarial setup on (E_t, R_t) (of which μ_t is a component).

Theorem 5. For all δ ∈ (0, 1), with probability at least 1 − δ, the number of mistakes made by Algorithm 2 with parameters Θ_0 = [−1, 1]^d, H = ⌈log(12/ϵ) / (1 − γ)⌉, and n = ⌈log(4d(d + 1) log(6√d / ϵ) / δ) / (32ϵ²)⌉, where d = |S|,⁶ is at most Õ((d²/ϵ²) log(d/(δϵ))).⁷

8 Related work & Conclusions

Most existing work in IRL focuses on inferring the reward function⁸ using data acquired from a fixed environment [2, 3, 18, 19, 20, 21, 22]. There is prior work on using data collected from multiple (but exogenously fixed) environments to predict agent behavior [23]. There are also applications where methods for single-environment MDPs have been adapted to multiple environments [19]. Nevertheless, all these works consider the objective of mimicking an optimal behavior in the presented environment(s), and do not aim at generalization to new tasks, which is the main contribution of this paper. Recently, Hadfield-Menell et al. [24] proposed cooperative inverse reinforcement learning, where the human and the agent act in the same environment, allowing the human to actively resolve the agent's uncertainty about the reward function. However, they only consider a single environment (or task), and the unidentifiability issue of IRL still exists. Combining their framework with our resolution to unidentifiability (via multiple tasks) is an interesting future direction.

Acknowledgement

This work was supported in part by NSF grant IIS 1319365 (Singh & Jiang) and in part by a Rackham Predoctoral Fellowship from the University of Michigan (Jiang).
Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.

References
[1] Kareem Amin and Satinder Singh. Towards resolving unidentifiability in inverse reinforcement learning. arXiv preprint arXiv:1601.06569, 2016.
[2] Andrew Y Ng and Stuart J Russell. Algorithms for inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 663–670, 2000.
[3] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning, page 1. ACM, 2004.
[4] Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y Ng. An application of reinforcement learning to aerobatic helicopter flight. Advances in Neural Information Processing Systems, 19:1, 2007.
[5] Nick Bostrom. Ethical issues in advanced artificial intelligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, pages 277–284, 2003.
[6] Stuart Russell, Daniel Dewey, and Max Tegmark. Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4):105–114, 2015.
[7] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[8] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397–422, 2002.

⁶Here we use the simpler conversion explained right after Example 1. We could improve the dimension to d = |S| − 1 by dropping the s_ref coordinate in all relevant vectors, but that complicates the presentation.
⁷A log log(1/ϵ) term is suppressed in Õ(·).
⁸While we do not discuss it here, in the economics literature the problem of inferring an agent's utility from behavior queries has long been studied under the heading of utility or preference elicitation [13, 14, 15, 16, 17].
While our result in Section 5 uses similar techniques to elicit the reward function, we do so purely by observing the human's behavior, without an external source of information (e.g., query responses).
[9] Varsha Dani, Thomas P Hayes, and Sham M Kakade. Stochastic linear optimization under bandit feedback. In COLT, pages 355–366, 2008.
[10] Ryan O'Donnell. 15-859(E) – Linear and semidefinite programming: lecture notes. Carnegie Mellon University, 2011. https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture08.pdf.
[11] Martin Grötschel, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2. Springer Science & Business Media, 2012.
[12] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[13] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In AAAI/IAAI, pages 363–369, 2000.
[14] John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition). Princeton University Press, 2007.
[15] Kevin Regan and Craig Boutilier. Regret-based reward elicitation for Markov decision processes. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 444–451. AUAI Press, 2009.
[16] Kevin Regan and Craig Boutilier. Eliciting additive reward functions for Markov decision processes. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 2159, 2011.
[17] Constantin A Rothkopf and Christos Dimitrakakis. Preference elicitation and inverse reinforcement learning. In Machine Learning and Knowledge Discovery in Databases, pages 34–48. Springer, 2011.
[18] Adam Coates, Pieter Abbeel, and Andrew Y Ng. Learning for control from multiple demonstrations. In Proceedings of the 25th International Conference on Machine Learning, pages 144–151. ACM, 2008.
[19] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.
[20] Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In IJCAI, 2007.
[21] Umar Syed and Robert E Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems, pages 1449–1456, 2007.
[22] Kevin Regan and Craig Boutilier. Robust policy computation in reward-uncertain MDPs using nondominated policies. In AAAI, 2010.
[23] Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning, pages 729–736. ACM, 2006.
[24] Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems, pages 3909–3917, 2016.
Inference in Graphical Models via Semidefinite Programming Hierarchies

Murat A. Erdogdu (Microsoft Research, erdogdu@cs.toronto.edu), Yash Deshpande (MIT and Microsoft Research, yash@mit.edu), Andrea Montanari (Stanford University, montanari@stanford.edu)

Abstract

Maximum A posteriori Probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxations within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy, based on semidefinite programming (SDP), can provide superior guarantees. Unfortunately, SOS relaxations for a graph with n vertices require solving an SDP with n^Θ(d) variables, where d is the degree in the hierarchy. In practice, for d ≥ 4, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy, with two innovations focused on computational efficiency. First, in analogy with BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Second, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.

1 Introduction

Graphical models provide a powerful framework for analyzing systems comprising a large number of interacting variables.
Inference in graphical models is crucial in scientific methodology, with countless applications in a variety of fields including causal inference, computer vision, statistical physics, information theory, and genome research [WJ08, KF09, MM09]. In this paper, we propose a class of inference algorithms for pairwise undirected graphical models. Such models are fully specified by assigning: (i) a finite domain X for the variables; (ii) a finite graph G = (V, E), with V = [n] ≡ {1, . . . , n}, capturing the interactions of the basic variables; (iii) a collection of functions θ = ({θ_i^v}_{i∈V}, {θ_ij^e}_{(i,j)∈E}) that quantify the vertex potentials and the interactions between the variables, whereby for each vertex i ∈ V we have θ_i^v : X → ℝ and for each edge (i, j) ∈ E we have θ_ij^e : X × X → ℝ (an arbitrary ordering is fixed on the pair of vertices {i, j}). These parameters can be used to form a probability distribution on X^V for the random vector x = (x_1, x_2, ..., x_n) ∈ X^V by letting

p(x|θ) = (1/Z(θ)) e^{U(x;θ)},   U(x; θ) = Σ_{(i,j)∈E} θ_ij^e(x_i, x_j) + Σ_{i∈V} θ_i^v(x_i),   (1.1)

where Z(θ) is the normalization constant, commonly referred to as the partition function. While such models can encode a rich class of multivariate probability distributions, basic inference tasks are

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
intractable except for very special graph structures such as trees or small-treewidth graphs [CD+06]. In this paper, we will focus on MAP estimation, which amounts to solving the combinatorial optimization problem

x̂(θ) ≡ arg max_{x∈X^V} U(x; θ).   (1.2)

Intractability plagues other classes of graphical models as well (e.g., Bayesian networks, factor graphs), and has motivated the development of a wide array of heuristics. One of the simplest such heuristics is loopy belief propagation (BP) [WJ08, KF09, MM09]. In its max-product version (which is well-suited for MAP estimation), BP is intimately related to the linear programming (LP) relaxation of the combinatorial problem max_{x∈X^V} U(x; θ). Denoting the decision variables by b = ({b_i}_{i∈V}, {b_ij}_{(i,j)∈E}), the LP relaxation form of BP can be written as

maximize_b   Σ_{(i,j)∈E} Σ_{x_i,x_j∈X} θ_ij(x_i, x_j) b_ij(x_i, x_j) + Σ_{i∈V} Σ_{x_i∈X} θ_i(x_i) b_i(x_i),   (1.3)
subject to   Σ_{x_j∈X} b_ij(x_i, x_j) = b_i(x_i)   ∀(i, j) ∈ E,   (1.4)
             b_i ∈ Δ_X  ∀i ∈ V,   b_ij ∈ Δ_{X×X}  ∀(i, j) ∈ E,   (1.5)

where Δ_S denotes the simplex of probability distributions over the set S. The decision variables are referred to as 'beliefs', and their feasible set is a relaxation of the polytope of marginals of distributions. The beliefs satisfy the constraints on marginals involving at most two variables connected by an edge. Loopy belief propagation is successful in some applications, e.g., on the sparse, locally tree-like graphs that arise, for instance, in decoding modern error-correcting codes [RU08] or in random constraint satisfaction problems [MM09]. However, on more structured instances (arising, for example, in computer vision) BP can be substantially improved by accounting for local dependencies within subsets of more than two variables. This is achieved by generalized belief propagation (GBP) [YFW05], where the decision variables are beliefs b_R defined on subsets of vertices (a 'region') R ⊆ [n], representing the marginal distributions of the variables in that region. The basic constraint on the beliefs is the linear marginalization constraint Σ_{x_{R\S}} b_R(x_R) = b_S(x_S), holding whenever S ⊆ R. Hence GBP itself is closely related to an LP relaxation of the polytope of marginals of probability distributions. The relaxation becomes tighter as larger regions are incorporated. In a prototypical application, G is a two-dimensional grid, and regions are squares induced by four contiguous vertices (plaquettes), see Figure 1, left frame.
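To make the feasible set of (1.3)-(1.5) concrete, the small sketch below (ours, not the authors' code) builds beliefs from an exact joint distribution on a binary triangle model and checks that they satisfy the marginalization constraint (1.4) and reproduce the expected objective:

```python
import itertools
import numpy as np

# Tiny pairwise model on a triangle, X = {0, 1}; parameters chosen at random
# (our illustration -- any theta works).
V, E = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
rng = np.random.default_rng(0)
theta_v = {i: rng.normal(size=2) for i in V}
theta_e = {e: rng.normal(size=(2, 2)) for e in E}

def U(x):
    return (sum(theta_e[(i, j)][x[i], x[j]] for (i, j) in E)
            + sum(theta_v[i][x[i]] for i in V))

# Exact joint p(x) = exp(U) / Z and its single/pairwise marginals ("beliefs").
xs = list(itertools.product([0, 1], repeat=3))
w = np.array([np.exp(U(x)) for x in xs])
p = w / w.sum()
b_i = {i: np.array([sum(p[k] for k, x in enumerate(xs) if x[i] == v) for v in (0, 1)])
       for i in V}
b_ij = {(i, j): np.array([[sum(p[k] for k, x in enumerate(xs) if (x[i], x[j]) == (u, v))
                           for v in (0, 1)] for u in (0, 1)])
        for (i, j) in E}

# True marginals are feasible for the LP relaxation:
for (i, j) in E:
    assert np.allclose(b_ij[(i, j)].sum(axis=1), b_i[i])  # constraint (1.4)
    assert np.allclose(b_ij[(i, j)].sum(axis=0), b_i[j])  # symmetric counterpart

# ...and the LP objective (1.3) at these beliefs equals E_p[U(x)]:
obj = (sum((theta_e[e] * b_ij[e]).sum() for e in E)
       + sum(theta_v[i] @ b_i[i] for i in V))
assert np.isclose(obj, sum(p[k] * U(x) for k, x in enumerate(xs)))
```

The relaxation is a relaxation precisely because the feasible set also contains belief vectors that are not the marginals of any joint distribution.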
Alternatively, in the right frame of the same figure, the regions correspond to triangles. The LP relaxations that correspond to GBP are closely related to the Sherali-Adams hierarchy [SA90]. Similar to GBP, the variables within this hierarchy are beliefs over subsets of variables, b_R = (b_R(x_R))_{x_R∈X^R}, which are consistent under marginalization: Σ_{x_{R\S}} b_R(x_R) = b_S(x_S). However, the two approaches differ in an important point: the Sherali-Adams hierarchy uses beliefs over all subsets of |R| ≤ d variables, where d is the degree in the hierarchy; this leads to an LP of size Θ(n^d). In contrast, GBP only retains regions that are contiguous in G. If G has maximum degree k, this produces an LP of size O(n k^d), a reduction which is significant for large-scale problems. Given the broad empirical success of GBP, it is natural to develop better methods for inference in graphical models using tighter convex relaxations. Within combinatorial optimization, it is well understood that semidefinite programming (SDP) relaxations provide superior approximation guarantees with respect to LP [GW95]. Nevertheless, SDP has found limited application to inference tasks in graphical models, for at least two reasons. A structural reason: standard SDP relaxations (e.g., [GW95]) do not account exactly for correlations between neighboring vertices in the graph, which is essential for structured graphical models. As a consequence, BP or GBP often outperforms basic SDPs. A computational reason: basic SDP relaxations involve Θ(n²) decision variables, and generic interior point solvers do not scale well to large-scale applications. An exception is [WJ04], which employs the simplest SDP relaxation (degree-2 Sum-Of-Squares, see below) in conjunction with a relaxation of the entropy and interior point methods; higher-order relaxations are briefly discussed there without implementation, as the resulting program suffers from the aforementioned limitations.
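Since all the relaxations discussed here are judged against the exact maximizer of Eq. (1.2), a brute-force reference for tiny binary instances is handy (our sketch; enumeration is exponential in n, so this is a ground-truth check only):

```python
import itertools
import numpy as np

def map_bruteforce(theta_e, theta_v):
    """Exact MAP (Eq. 1.2) for a small binary pairwise model by enumerating all
    x in {-1, +1}^n -- only feasible for tiny n, but a useful ground truth."""
    n = len(theta_v)
    best_x, best_val = None, -np.inf
    for x in itertools.product([-1, 1], repeat=n):
        val = sum(theta_e[i][j] * x[i] * x[j]
                  for i in range(n) for j in range(i + 1, n))
        val += sum(theta_v[i] * x[i] for i in range(n))
        if val > best_val:
            best_x, best_val = x, val
    return np.array(best_x), best_val

# Ferromagnetic couplings plus a small positive field align all spins at +1:
n = 5
x_star, val = map_bruteforce(np.ones((n, n)), 0.1 * np.ones(n))
assert (x_star == 1).all()
```

Comparisons of this kind are what underlie the tightness claims for PSOS(4) on small grids in Section 4.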
In this paper, we revisit MAP inference in graphical models via SDPs, and propose an approach that carries the favorable performance guarantees of SDPs over to inference tasks. For simplicity, we focus on models with binary variables, but we believe that many of the ideas developed here can be naturally extended to other finite domains. We present the following contributions:

Figure 1: A two-dimensional grid, and two typical choices of regions for GBP and PSOS. Left: Regions are plaquettes comprising four vertices. Right: Regions are triangles.

Partial Sum-Of-Squares relaxations. We use SDP hierarchies, specifically the Sum-Of-Squares (SOS) hierarchy [Sho87, Las01, Par03], to formulate tighter SDP relaxations for binary MAP inference that account exactly for the joint distributions of small subsets of variables x_R, for R ⊆ V. However, SOS introduces decision variables for all subsets R ⊆ V with |R| ≤ d/2 (d a fixed even integer), and hence scales poorly for large-scale inference problems. We propose a modification similar to GBP's. Instead of accounting for all subsets R with |R| ≤ d/2, we only introduce decision variables to represent a certain family of such subsets (regions) of vertices in G. The resulting SDP has (for d and the maximum degree of G bounded) only O(n²) decision variables, which is suitable for practical implementations. We refer to these relaxations as Partial Sum-Of-Squares (PSOS), cf. Section 2.

Theoretical analysis. In Section 2.1, we prove that suitable PSOS relaxations are tight for certain classes of graphs, including planar graphs, with θ^v = 0. While this falls short of explaining the empirical results (which use simpler relaxations, and θ^v ≠ 0), it points in the right direction.

Optimization algorithm and rounding. Despite the simplification afforded by PSOS, interior-point solvers still scale poorly to large instances.
In order to overcome this problem, we adopt a non-convex approach proposed by Burer and Monteiro [BM03]. We constrain the rank of the SDP matrix in PSOS to be at most r, and solve the resulting non-convex problem using a trust-region coordinate ascent method, cf. Section 3.1. Further, we develop a rounding procedure called Confidence Lift and Project (CLAP), which iteratively uses PSOS relaxations to obtain an integer solution, cf. Section 3.2.

Numerical experiments. In Section 4, we present numerical experiments with PSOS, solving problems of size up to 10,000 within several minutes. While additional work is required to scale this approach to massive sizes, we view this as an exciting proof of concept. To the best of our knowledge, no earlier attempt was successful in scaling higher-order SOS relaxations beyond tens of dimensions. More specifically, we carry out experiments with two-dimensional grids: an image denoising problem, and Ising spin glasses. We demonstrate through extensive numerical studies that PSOS significantly outperforms BP and GBP on the inference tasks we consider.

2 Partial Sum-Of-Squares Relaxations

For concreteness, throughout the paper we focus on pairwise models with binary variables. We do not expect fundamental problems in extending the same approach to other domains. For binary variables x = (x_1, x_2, ..., x_n), MAP estimation amounts to solving the following optimization problem:

maximize_x   Σ_{(i,j)∈E} θ_ij^e x_i x_j + Σ_{i∈V} θ_i^v x_i,   (INT)
subject to   x_i ∈ {+1, −1},   ∀i ∈ V,

where θ^e = (θ_ij^e)_{1≤i,j≤n} and θ^v = (θ_i^v)_{1≤i≤n} are the parameters of the graphical model. For the reader's convenience, we recall a few basic facts about SOS relaxations, referring to [BS16] for further details. For an even integer d, SOS(d) is an SDP relaxation of INT with decision variable X : ([n]; ≤d) →
ℝ, where ([n]; ≤d) denotes the set of subsets S ⊆ [n] of size |S| ≤ d; it is given as

maximize_X   Σ_{(i,j)∈E} θ_ij^e X({i, j}) + Σ_{i∈V} θ_i^v X({i}),   (SOS)
subject to   X(∅) = 1,   M(X) ⪰ 0.

The moment matrix M(X) is indexed by sets S, T ⊆ [n], |S|, |T| ≤ d/2, and has entries M(X)_{S,T} = X(S△T), with △ denoting the symmetric difference of two sets. Note that M(X)_{S,S} = X(∅) = 1.

Figure 2: Effect of the rank constraint r on an n = 400 square lattice (20 × 20), for ranks r ∈ {2, 3, 5, 10, 20}: the left plot shows the change in the objective value at each iteration; the right plot shows the duality gap of the Lagrangian.

We can equivalently represent M(X) as a Gram matrix by letting M(X)_{S,T} = ⟨σ_S, σ_T⟩ for a collection of vectors σ_S ∈ ℝ^r indexed by S ∈ ([n]; ≤d/2). The case r = |([n]; ≤d/2)| can represent any semidefinite matrix; however, in what follows it is convenient from a computational perspective to consider smaller choices of r. The constraint M(X)_{S,S} = 1 is equivalent to ∥σ_S∥ = 1, and the condition M(X)_{S,T} = X(S△T) can be equivalently written as

⟨σ_{S_1}, σ_{T_1}⟩ = ⟨σ_{S_2}, σ_{T_2}⟩,   ∀ S_1△T_1 = S_2△T_2.   (2.1)

In the case d = 2, SOS(2) recovers the classical Goemans-Williamson SDP relaxation [GW95]. In the following, we consider the simplest higher-order SDP, namely SOS(4), for which the general constraints in Eq. (2.1) can be listed explicitly. Fixing a region R ⊆ V, and defining the Gram vectors σ_∅, (σ_i)_{i∈V}, (σ_ij)_{{i,j}⊆V}, we list the constraints that involve vectors σ_S for S ⊆ R and |S| = 1, 2:

∥σ_i∥ = 1   ∀i ∈ S ∪ {∅},   (Sphere)
⟨σ_i, σ_j⟩ = ⟨σ_ij, σ_∅⟩   ∀i, j ∈ S,   (Undirected i−j)
⟨σ_i, σ_ij⟩ = ⟨σ_j, σ_∅⟩   ∀i, j ∈ S,   (Directed i→j)
⟨σ_i, σ_jk⟩ = ⟨σ_k, σ_ij⟩   ∀i, j, k ∈ S,   (V-shaped)
⟨σ_ij, σ_jk⟩ = ⟨σ_ik, σ_∅⟩   ∀i, j, k ∈ S,   (Triangle)
⟨σ_ij, σ_kl⟩ = ⟨σ_ik, σ_jl⟩   ∀i, j, k, l ∈ S.
(Loop)

Given an assignment of the Gram vectors σ = (σ_∅, (σ_i)_{i∈V}, (σ_ij)_{{i,j}⊆V}), we denote by σ|_R its restriction to R, namely σ|_R = (σ_∅, (σ_i)_{i∈R}, (σ_ij)_{{i,j}⊆R}). We denote by Ω(R) the set of vectors σ|_R that satisfy the above constraints. With these notations, the SOS(4) SDP can be written as

maximize_σ   Σ_{(i,j)∈E} θ_ij^e ⟨σ_i, σ_j⟩ + Σ_{i∈V} θ_i^v ⟨σ_i, σ_∅⟩,   (SOS(4))
subject to   σ ∈ Ω(V).

A specific Partial SOS (PSOS) relaxation is defined by a collection of regions R = {R_1, R_2, . . . , R_m}, R_i ⊆ V. We will require R to be a covering, i.e., ∪_{i=1}^m R_i = V and, for each (i, j) ∈ E, there exists ℓ ∈ [m] such that {i, j} ⊆ R_ℓ. Given such a covering, the PSOS(4) relaxation is

maximize_σ   Σ_{(i,j)∈E} θ_ij^e ⟨σ_i, σ_j⟩ + Σ_{i∈V} θ_i^v ⟨σ_i, σ_∅⟩,   (PSOS(4))
subject to   σ|_{R_ℓ} ∈ Ω(R_ℓ)   ∀ℓ ∈ {1, 2, . . . , m}.

Notice that the variables σ_ij only enter the above program if {i, j} ⊆ R_ℓ for some ℓ. As a consequence, the dimension of the above optimization problem is O(r Σ_{ℓ=1}^m |R_ℓ|²), which is O(nr) if the regions have bounded size; this will be the case in our implementation. Of course, the specific choice of regions R is crucial for the quality of this relaxation. A natural heuristic is to choose each region R_ℓ to be a subset of contiguous vertices in G, which is generally the case for GBP algorithms.

Algorithm 1: Partial-SOS
Input: G = (V, E), θ^e ∈ ℝ^{n×n}, θ^v ∈ ℝ^n, σ ∈ ℝ^{r×(1+|V|+|E|)}, Reliables = ∅
Actives = V ∪ E \ Reliables, and Δ = 1
while Δ > tol do
  Δ = 0
  for s ∈ Actives do
    if s ∈ V then   /* s ∈ V is a vertex */
      c_s = Σ_{t∈∂s} θ_{st}^e σ_t + θ_s^v σ_∅
    else   /* s = (s_1, s_2) ∈ E is an edge */
      c_s = θ_{s_1 s_2}^e σ_∅ + θ_{s_1}^v σ_{s_2} + θ_{s_2}^v σ_{s_1}
    Form the matrix A_s, the vector b_s, and the corresponding Lagrange multipliers λ_s (see text).
    σ_s^new ← arg max_{∥σ∥=1} { ⟨c_s, σ⟩ + (ρ/2) ∥A_s σ − b_s + λ_s∥² }   /* sub-problem */
    Δ ← Δ + ∥σ_s^new − σ_s∥² + ∥A_s σ_s − b_s∥²
    σ_s ← σ_s^new   /* update variables */
    λ_s ← λ_s + A_s σ_s − b_s

2.1 Tightness guarantees

Solving INT exactly is NP-hard even if G is a three-dimensional grid [Bar82].
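A useful sanity check is that every integer assignment x ∈ {−1, +1}^n embeds as a rank-one feasible point of these constraints (σ_∅ = e₁, σ_i = x_i e₁, σ_ij = x_i x_j e₁), so SOS(4)/PSOS(4) always upper-bound INT. The snippet below (ours, not the authors' code) verifies all six constraint families numerically:

```python
import itertools
import numpy as np

# Every integer assignment embeds as a rank-1 feasible point of the SOS(4)
# constraints: sigma_empty = e1, sigma_i = x_i e1, sigma_ij = x_i x_j e1.
r, n = 3, 4
e1 = np.eye(r)[0]
x = np.random.default_rng(0).choice([-1, 1], size=n)
s0 = e1
s = {i: x[i] * e1 for i in range(n)}
sp = {(i, j): x[i] * x[j] * e1 for i, j in itertools.combinations(range(n), 2)}
pair = lambda i, j: sp[(min(i, j), max(i, j))]

for i, j, k, l in itertools.permutations(range(n), 4):
    assert np.isclose(s[i] @ s[i], 1.0)                          # Sphere
    assert np.isclose(s[i] @ s[j], pair(i, j) @ s0)              # Undirected i-j
    assert np.isclose(s[i] @ pair(i, j), s[j] @ s0)              # Directed i->j
    assert np.isclose(s[i] @ pair(j, k), s[k] @ pair(i, j))      # V-shaped
    assert np.isclose(pair(i, j) @ pair(j, k), pair(i, k) @ s0)  # Triangle
    assert np.isclose(pair(i, j) @ pair(k, l), pair(i, k) @ pair(j, l))  # Loop
```

Tightness, discussed next, is the converse question: when does the relaxation's optimum coincide with one of these integer embeddings?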
Therefore, we do not expect PSOS(4) to be tight for general graphs G. On the other hand, in our experiments (cf. Section 4), PSOS(4) systematically achieves the exact maximum of INT for two-dimensional grids with random edge and vertex parameters (θ_ij^e)_{(i,j)∈E}, (θ_i^v)_{i∈V}. This finding is quite surprising and calls for a theoretical explanation. While a full understanding remains an open problem, we present here partial results in that direction. Recall that a cycle in G is a sequence of distinct vertices (i_1, . . . , i_ℓ) such that, for each j ∈ [ℓ] ≡ {1, 2, . . . , ℓ}, (i_j, i_{j+1}) ∈ E (where ℓ + 1 is identified with 1). The cycle is chordless if there is no j, k ∈ [ℓ], with j − k ≠ ±1 mod ℓ, such that (i_j, i_k) ∈ E. We say that a collection of regions R on graph G is circular if for each chordless cycle in G there exists a region R ∈ R such that all vertices of the cycle belong to R. We also need the following straightforward notion of contractibility. A contraction of G is a new graph obtained by identifying two vertices connected by an edge in G. G is contractible to H if there exists a sequence of contractions transforming G into H. The following theorem is a direct consequence of a result of Barahona and Mahjoub [BM86] (see the Supplement for a proof).

Theorem 1. Consider the problem INT with θ^v = 0. If G is not contractible to K₅ (the complete graph on 5 vertices), then PSOS(4) with a circular covering R is tight.

The assumption that θ^v = 0 can be made without loss of generality (see the Supplement for the reduction from the general case). Furthermore, INT can be solved in polynomial time if G is planar and θ^v = 0 [Bar82]. Note, however, that the reduction from θ^v ≠ 0 to θ^v = 0 can transform a planar graph into a non-planar graph. This theorem implies that (full) SOS(4) is also tight if G is not contractible to K₅. Since planar graphs are not contractible to K₅, we recover the fact that INT can be solved in polynomial time for planar G with θ^v = 0.
This result falls short of explaining the empirical findings in Section 4, for at least two reasons. First, the reduction to θ^v = 0 induces K_5 subhomomorphisms for grids. Second, the collection of regions R described in the previous section does not include all chordless cycles. Theoretically understanding the empirical performance of PSOS(4) as stated remains open. However, similar cycle constraints have proved useful in analyzing LP relaxations [WRS16].

3 Optimization Algorithm and Rounding

3.1 Solving PSOS(4) via Trust-Region Coordinate Ascent

We will approximately solve PSOS(4) while keeping r = O(1). Earlier work implies that (under a suitable genericity condition on the SDP) there exists an optimal solution with rank at most √(2 · #constraints) [Pat98]. Recent work [BVB16] shows that for r > √(2 · #constraints), the non-convex optimization problem has no non-global local maxima. For SOS(2), [MM+17] proves that setting r = O(1) is sufficient for achieving O(1/r) relative error from the global maximum for specific choices of potentials θ^e, θ^v. We find that there is little or no improvement beyond r = 10 (cf. Figure 2).

Algorithm 2: CLAP: Confidence Lift And Project
Input: G = (V, E), θ^e ∈ R^{n×n}, θ^v ∈ R^n, regions R = {R_1, ..., R_m}
Initialize the variable matrix σ ∈ R^{r×(1+|V|+|E|)} and set Reliables = ∅
while Reliables ≠ V ∪ E do
  Run Partial-SOS on inputs G = (V, E), θ^e, θ^v, σ, Reliables  /* lift procedure */
  Promotions = ∅ and Confidence = 0.9
  while Confidence > 0 and Promotions = ∅ do
    for s ∈ V ∪ E \ Reliables do  /* find promotions */
      if |⟨σ_∅, σ_s⟩| > Confidence then
        σ_s = sign(⟨σ_∅, σ_s⟩) · σ_∅  /* project procedure */
        Promotions ← Promotions ∪ {s}
    if Promotions = ∅ then  /* decrease confidence level */
      Confidence ← Confidence − 0.1
  Reliables ← Reliables ∪ Promotions
Output: (⟨σ_i, σ_∅⟩)_{i∈V} ∈ {−1, +1}^n

We will assume that R = (R_1, ...
, R_m) is a covering of G (in the sense introduced in the previous section), and, without loss of generality, we will assume that the edge set is

E = { (i, j) ∈ V × V : ∃ ℓ ∈ [m] such that {i, j} ⊆ R_ℓ }.   (3.1)

In other words, E is the maximal set of edges compatible with R being a covering. This can always be achieved by adding new edges (i, j) to the original edge set with θ^e_ij = 0. Hence, the decision variables σ_s are indexed by s ∈ S = {∅} ∪ V ∪ E. Apart from the norm constraints, all other consistency constraints take the form ⟨σ_s, σ_r⟩ = ⟨σ_t, σ_p⟩ for some 4-tuple of indices (s, r, t, p). We denote the set of all such 4-tuples by C, and construct the augmented Lagrangian of PSOS(4) as

L(σ, λ) = Σ_{i∈V} θ^v_i ⟨σ_i, σ_∅⟩ + Σ_{(i,j)∈E} θ^e_ij ⟨σ_i, σ_j⟩ + (ρ/2) Σ_{(s,r,t,p)∈C} ( ⟨σ_s, σ_r⟩ − ⟨σ_t, σ_p⟩ + λ_{s,r,t,p} )².

At each step, our algorithm executes two operations: (i) maximize the cost function with respect to one of the vectors σ_s; (ii) perform one step of gradient descent with respect to the corresponding subset of Lagrangian parameters, denoted λ_s. More precisely, fixing s ∈ S \ {∅} (by rotational invariance, it is not necessary to update σ_∅), we note that σ_s appears linearly in the constraints (or does not appear at all). Hence, we can write these constraints in the form A_s σ_s = b_s, where A_s and b_s depend on (σ_r)_{r≠s} but not on σ_s. We stack the corresponding Lagrangian parameters in a vector λ_s; the Lagrangian term involving σ_s therefore reads (ρ/2)‖A_s σ_s − b_s + λ_s‖². On the other hand, the graphical model contribution (the first two terms in L(σ, λ)) is linear in σ_s, and hence can be written as ⟨c_s, σ_s⟩. Summarizing, we have

L(σ, λ) = ⟨c_s, σ_s⟩ + (ρ/2)‖A_s σ_s − b_s + λ_s‖² + L̃( (σ_r)_{r≠s}, λ ).   (3.2)

It is straightforward to compute A_s, b_s, c_s; in particular, for (s, r, t, p) ∈ C, the rows of A_s and the entries of b_s are indexed by r: the vectors σ_r form the rows of A_s, and ⟨σ_t, σ_p⟩ forms the corresponding entry of b_s.
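To make the structure of L(σ, λ) concrete, here is a small evaluation sketch in Python with NumPy. The data layout (dictionaries keyed by vertex, edge tuple, or 0 for σ_∅) is our own illustrative assumption, not the authors' implementation:

```python
import numpy as np

def augmented_lagrangian(theta_v, theta_e, sigma, constraints, lam, rho):
    """Evaluate L(sigma, lambda): the graphical-model terms plus (rho/2) times
    the squared residuals of the consistency constraints
    <sigma_s, sigma_r> = <sigma_t, sigma_p>, shifted by the multipliers."""
    val = sum(th * float(sigma[i] @ sigma[0]) for i, th in theta_v.items())
    val += sum(th * float(sigma[i] @ sigma[j]) for (i, j), th in theta_e.items())
    val += rho / 2 * sum(
        (float(sigma[s] @ sigma[r]) - float(sigma[t] @ sigma[p])
         + lam[(s, r, t, p)]) ** 2
        for (s, r, t, p) in constraints)
    return val
```

With r = 2, a single constraint ⟨σ_a, σ_b⟩ = ⟨σ_ab, σ_∅⟩, and hand-picked unit vectors, the value can be checked against a manual computation.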
Further, if s is a vertex and ∂s are its neighbors, we set c_s = Σ_{t∈∂s} θ^e_st σ_t + θ^v_s σ_∅, while if s = (s_1, s_2) is an edge, we set c_s = θ^e_{s_1 s_2} σ_∅ + θ^v_{s_1} σ_{s_2} + θ^v_{s_2} σ_{s_1}. Note that we are using the equivalent representations ⟨σ_i, σ_j⟩ = ⟨σ_ij, σ_∅⟩, ⟨σ_ij, σ_j⟩ = ⟨σ_i, σ_∅⟩, and ⟨σ_ij, σ_i⟩ = ⟨σ_j, σ_∅⟩. Finally, we maximize Eq. (3.2) with respect to σ_s by a Moré-Sorensen style method [MS83].

                        True    Noisy   BP-SP   BP-MP   GBP     PSOS(2)  PSOS(4)
Bernoulli p = 0.2
  U(x):                 25815   19237   26165   26134   26161   26015    26194
  Time:                 -       -       2826s   2150s   7894s   454s     5059s
Blockwise p = 0.006
  U(x):                 27010   26808   27230   27012   27232   26942    27252
  Time:                 -       -       1674s   729s    8844s   248s     4457s

Figure 3: Denoising a binary image by maximizing the objective function Eq. (4.1). Top row: i.i.d. Bernoulli error with flip probability p = 0.2 and θ_0 = 1.26. Bottom row: blockwise noise where each pixel is the center of a 3 × 3 error block independently with probability p = 0.006 and θ_0 = 1.

3.2 Rounding via Confidence Lift and Project

After Algorithm 1 generates an approximate optimizer σ for PSOS(4), we reduce its rank to produce a solution of the original combinatorial optimization problem INT. To this end, we interpret ⟨σ_i, σ_∅⟩ as our belief about the value of x_i in the optimal solution of INT, and ⟨σ_ij, σ_∅⟩ as our belief about the value of x_i x_j. This intuition can be formalized using the notion of pseudo-probability [BS16]. We then recursively round the variables about which we have strong beliefs; we fix rounded variables in the next iteration, and solve the induced PSOS(4) on the remaining ones. More precisely, we set a confidence threshold Confidence. For any variable σ_s such that |⟨σ_s, σ_∅⟩| > Confidence, we let x_s = sign(⟨σ_s, σ_∅⟩) and fix σ_s = x_s σ_∅. These variables σ_s are no longer updated, and instead the reduced SDP is solved. If no variable satisfies the confidence condition, the threshold is reduced until variables are found that satisfy it.
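The lift-and-project loop can be sketched as follows. This is a schematic of ours: `lift` is an assumed callable standing in for a Partial-SOS re-solve, the belief dictionary layout is ours, and we add a safeguard (round the remainder by sign once the threshold is exhausted) that the paper does not spell out:

```python
def clap_round(beliefs, lift, step=0.1):
    """Confidence Lift And Project: repeatedly re-solve (lift), then round and
    fix every variable whose belief |<sigma_s, sigma_0>| exceeds a confidence
    threshold that decays from 0.9 whenever no variable qualifies."""
    fixed = {}
    while len(fixed) < len(beliefs):
        beliefs = lift(beliefs, fixed)            # lift: rerun Partial-SOS
        confidence, promotions = 0.9, []
        while confidence > 0 and not promotions:  # find promotions
            promotions = [s for s, b in beliefs.items()
                          if s not in fixed and abs(b) > confidence]
            if not promotions:
                confidence -= step                # decrease confidence level
        if not promotions:                        # safeguard: round the rest by sign
            promotions = [s for s in beliefs if s not in fixed]
        for s in promotions:
            fixed[s] = 1 if beliefs[s] >= 0 else -1   # project onto ±sigma_0
    return fixed
```

With an identity `lift`, the variables are fixed in decreasing order of belief magnitude, one confidence level at a time.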
After the first iteration, most variables yield strong beliefs and are fixed; hence the subsequent iterations have fewer variables and are faster.

4 Numerical Experiments

In this section, we validate the performance of the Partial SOS relaxation and the CLAP rounding scheme on models defined on two-dimensional grids. Grid-like graphical models are common in a variety of fields, such as computer vision [SSZ02] and statistical physics [MM09]. In Section 4.1, we study an image denoising example, and in Section 4.2 we consider the Ising spin glass, a model in statistical mechanics that has been used as a benchmark for inference in graphical models. Our main objective is to demonstrate that Partial SOS can be used successfully on large-scale graphical models, and is competitive with the following popular inference methods:

• Belief Propagation - Sum Product (BP-SP): Pearl's belief propagation computes exact marginal distributions on trees [Pea86]. Given a graph-structured objective function U(x), we apply BP-SP to the Gibbs-Boltzmann distribution p(x) = exp{U(x)}/Z using the standard sum-product update rules with an inertia of 0.5 to help convergence [YFW05], and threshold the marginals at 0.5.

• Belief Propagation - Max Product (BP-MP): By replacing the marginal probabilities in the sum-product updates with max-marginals, we obtain BP-MP, which can be used for exact inference on trees [MM09]. For general graphs, BP-MP is closely related to an LP relaxation of the combinatorial problem INT [YFW05, WF01]. As with BP-SP, we use an inertia of 0.5. Note that the Max-Product updates can be equivalently written as Min-Sum updates [MM09].

• Generalized Belief Propagation (GBP): The decision variables in GBP are beliefs (joint probability distributions) over larger subsets of variables in the graph G, and they are updated in a message passing fashion [YFW00, YFW05].
We use plaquettes in the grid (contiguous groups of four vertices) as the largest regions, and apply message passing with inertia 0.1 [WF01].

• Partial SOS - Degree 2 (PSOS(2)): By defining regions as single vertices and enforcing only the sphere constraints, we recover the classical Goemans-Williamson SDP relaxation [GW95]. The non-convex Burer-Monteiro approach is extremely efficient in this case [BM03]. We round the SDP solution by x̂_i = sign(⟨σ_i, σ_∅⟩), which is closely related to the classical approach of [GW95].

• Partial SOS - Degree 4 (PSOS(4)): This is the algorithm developed in the present paper. We take the regions R_ℓ to be triangles, cf. Figure 1, right frame. In a √n × √n grid, we have 2(√n − 1)² such regions, resulting in O(n) constraints. In Figures 3 and 4, PSOS(4) refers to the CLAP rounding scheme applied together with PSOS(4) in the lift procedure.

4.1 Image Denoising via Markov Random Fields

Given a √n × √n binary image x_0 ∈ {+1, −1}^n, we generate a corrupted version of the same image y ∈ {+1, −1}^n. We then try to denoise y by maximizing the following objective function:

U(x) = Σ_{(i,j)∈E} x_i x_j + θ_0 Σ_{i∈V} y_i x_i,   (4.1)

Figure 4: Solving the MAP inference problem INT for Ising spin glasses on two-dimensional grids. The plot compares PSOS(4), PSOS(2), GBP, BP-SP, and BP-MP; the vertical axis is the ratio to the best algorithm. U and N represent uniform and normal distributions. Each bar contains 100 independent realizations.
We plot the ratio between the objective value achieved by each algorithm and the exact optimum for n ∈ {16, 25}, or the best value achieved by any of the 5 algorithms for n ∈ {100, 400, 900}.

where the graph G is the √n × √n grid, i.e., V = {i = (i_1, i_2) : i_1, i_2 ∈ {1, ..., √n}} and E = {(i, j) : ‖i − j‖_1 = 1}. In applying Algorithm 1, we add diagonals to the grid (see the right plot in Figure 1) in order to satisfy condition (3.1), with corresponding weights θ^e_ij = 0. In Figure 3, we report the output of various algorithms for a 100 × 100 binary image. We are not aware of any earlier implementation of SOS(4) beyond tens of variables, while PSOS(4) is applied here to n = 10,000 variables. Running times for the CLAP rounding scheme (which requires several runs of PSOS(4)) are of the order of an hour, and are reported in Figure 3. We consider two noise models: i.i.d. Bernoulli noise and blockwise noise. The model parameter θ_0 is chosen in each case so as to approximately optimize the performance under BP denoising. In these (as well as in 4 other experiments of the same type reported in the supplement), PSOS(4) consistently gives the best reconstruction (often tied with GBP), in reasonable time. It also consistently achieves the largest value of the objective function among all algorithms.

4.2 Ising Spin Glass

The Ising spin glass (also known as the Edwards-Anderson model [EA75]) is one of the most studied models in statistical physics. It is given by an objective function of the form INT, with G a d-dimensional grid and i.i.d. parameters {θ^e_ij}_{(i,j)∈E}, {θ^v_i}_{i∈V}. Following earlier work [YFW05], we use Ising spin glasses as a testing ground for our algorithm. Denoting the uniform and normal distributions by U and N respectively, we consider two-dimensional grids (i.e.
d = 2), and the following parameter distributions: (i) θ^e_ij ∼ U({+1, −1}) and θ^v_i ∼ U({+1, −1}); (ii) θ^e_ij ∼ U({+1, −1}) and θ^v_i ∼ U({+1/2, −1/2}); (iii) θ^e_ij ∼ N(0, 1) and θ^v_i ∼ N(0, σ²) with σ = 0.1 (the setting considered in [YFW05]); and (iv) θ^e_ij ∼ N(0, 1) and θ^v_i ∼ N(0, σ²) with σ = 1. For each of these settings, we considered grids of size n ∈ {16, 25, 100, 400, 900}. In Figure 4, we report the results of 8 experiments as a box plot. We ran the five inference algorithms described above on 100 realizations; a total of 800 experiments are reported in Figure 4. For each realization, we record the ratio of the value achieved by an algorithm to the exact maximum (for n ∈ {16, 25}), or to the best value achieved among these algorithms (for n ∈ {100, 400, 900}). This is because for lattices of size 16 and 25 we are able to run an exhaustive search to determine the true maximizer of the integer program. Further details are reported in the supplement. In every single one of the 800 experiments, PSOS(4) achieved the largest objective value, and whenever this could be verified by exhaustive search (i.e., for n ∈ {16, 25}) it achieved an exact maximizer of the integer program.

References

[Bar82] Francisco Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241, 1982.
[BM86] Francisco Barahona and Ali Ridha Mahjoub. On the cut polytope. Mathematical Programming, 36(2):157–173, 1986.
[BM03] Samuel Burer and Renato DC Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
[BS16] Boaz Barak and David Steurer. Proofs, beliefs, and algorithms through the lens of sum-of-squares. Course notes: http://www.sumofsquares.org/public/index.html, 2016.
[BVB16] Nicolas Boumal, Vlad Voroninski, and Afonso Bandeira. The non-convex Burer-Monteiro approach works on smooth semidefinite programs.
In Advances in Neural Information Processing Systems, pages 2757–2765, 2016. [CD+06] Robert G Cowell, Philip Dawid, Steffen L Lauritzen, and David J Spiegelhalter. Probabilistic networks and expert systems: Exact computational methods for Bayesian networks. Springer Science & Business Media, 2006. [EA75] Samuel Frederick Edwards and Phil W Anderson. Theory of spin glasses. Journal of Physics F: Metal Physics, 5(5):965, 1975. [EM15] Murat A Erdogdu and Andrea Montanari. Convergence rates of sub-sampled newton methods. In Advances in Neural Information Processing Systems, pages 3052–3060, 2015. [GW95] Michel X Goemans and David P Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995. [KF09] Daphne Koller and Nir Friedman. Probabilistic graphical models. MIT press, 2009. [Las01] Jean B Lasserre. An explicit exact SDP relaxation for nonlinear 0-1 programs. In International Conference on Integer Programming and Combinatorial Optimization, pages 293–303, 2001. [MM09] Marc Mézard and Andrea Montanari. Information, physics, and computation. Oxford Press, 2009. [MM+17] Song Mei, Theodor Misiakiewicz, Andrea Montanari, and Roberto I Oliveira. Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality. arXiv preprint arXiv:1703.08729, 2017. [MS83] Jorge J Moré and Danny C Sorensen. Computing a trust region step. SIAM Journal on Scientific and Statistical Computing, 4(3):553–572, 1983. [Par03] Pablo A Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical programming, 96(2):293–320, 2003. [Pat98] Gábor Pataki. On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Mathematics of operations research, 23(2):339–358, 1998. [Pea86] Judea Pearl. Fusion, propagation, and structuring in belief networks. Artificial intelligence, 29(3):241– 288, 1986. 
[RU08] Tom Richardson and Ruediger Urbanke. Modern coding theory. Cambridge University Press, 2008.
[SA90] Hanif D Sherali and Warren P Adams. A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM Journal on Discrete Mathematics, 3(3):411–430, 1990.
[Sho87] Naum Z Shor. Class of global minimum bounds of polynomial functions. Cybernetics and Systems Analysis, 23(6):731–734, 1987.
[SSZ02] Jian Sun, Heung-Yeung Shum, and Nan-Ning Zheng. Stereo matching using belief propagation. In European Conference on Computer Vision, pages 510–524. Springer, 2002.
[WF01] Yair Weiss and William T Freeman. On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47(2):736–744, 2001.
[WJ04] Martin J Wainwright and Michael I Jordan. Semidefinite relaxations for approximate inference on graphs with cycles. In Advances in Neural Information Processing Systems, pages 369–376, 2004.
[WJ08] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[WRS16] Adrian Weller, Mark Rowland, and David Sontag. Tightness of LP relaxations for almost balanced models. In Artificial Intelligence and Statistics, pages 47–55, 2016.
[YFW00] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems, pages 689–695, 2000.
[YFW05] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005. | 2017 | 379 |
6,873 | Learning Spherical Convolution for Fast Features from 360° Imagery Yu-Chuan Su Kristen Grauman The University of Texas at Austin Abstract While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution. 1 Introduction Unlike a traditional perspective camera, which samples a limited field of view of the 3D scene projected onto a 2D plane, a 360° camera captures the entire viewing sphere surrounding its optical center, providing a complete picture of the visual world—an omnidirectional field of view. As such, viewing 360° imagery provides a more immersive experience of the visual content compared to traditional media. 
360° cameras are gaining popularity as part of the rising trend of virtual reality (VR) and augmented reality (AR) technologies, and will also be increasingly influential for wearable cameras, autonomous mobile robots, and video-based security applications. Consumer-level 360° cameras are now common on the market, and media sharing sites such as Facebook and YouTube have enabled support for 360° content. For consumers and artists, 360° cameras free the photographer from making real-time composition decisions. For VR/AR, 360° data is essential to content creation. As a result of this great potential, computer vision problems targeting 360° content are capturing the attention of both the research community and application developers. Immediately, this raises the question: how should we compute features from 360° images and videos?

Arguably the most powerful tools in computer vision today are convolutional neural networks (CNN). CNNs are responsible for state-of-the-art results across a wide range of vision problems, including image recognition [17,42], object detection [12,30], image and video segmentation [16,21,28], and action detection [10,32]. Furthermore, significant research effort over the last five years (and really decades [27]) has led to well-honed CNN architectures that, when trained with massive labeled image datasets [8], produce "pre-trained" networks broadly useful as feature extractors for new problems.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Two existing strategies for applying CNNs to 360° images. Top: The first strategy unwraps the 360° input into a single planar image using a global projection (most commonly equirectangular projection), then applies the CNN to the distorted planar image.
Bottom: The second strategy samples multiple tangent planar projections to obtain multiple perspective images, to which the CNN is applied independently to obtain local results for the original 360° image. Strategy I is fast but inaccurate; Strategy II is accurate but slow. The proposed approach learns to replicate flat filters on spherical imagery, offering both speed and accuracy.

Indeed, such networks are widely adopted as off-the-shelf feature extractors for other algorithms and applications (cf. VGG [33], ResNet [17], and AlexNet [25] for images; C3D [36] for video). However, thus far, powerful CNN features are awkward if not off limits in practice for 360° imagery. The problem is that the underlying projection models of current CNNs and 360° data are different. Both the existing CNN filters and the expensive training data that produced them are "flat", i.e., the product of perspective projection to a plane. In contrast, a 360° image is projected onto the unit sphere surrounding the camera's optical center.

To address this discrepancy, there are two common, though flawed, approaches. In the first, the spherical image is projected to a planar one,¹ then the CNN is applied to the resulting 2D image [19,26] (see Fig. 1, top). However, any sphere-to-plane projection introduces distortion, making the resulting convolutions inaccurate. In the second existing strategy, the 360° image is repeatedly projected to tangent planes around the sphere, each of which is then fed to the CNN [34,35,38,41] (Fig. 1, bottom). In the extreme of sampling every tangent plane, this solution is exact and therefore accurate. However, it suffers from very high computational cost. Not only does it incur the cost of rendering each planar view, but it also prevents amortization of convolutions: the intermediate representation cannot be shared across perspective images because they are projected to different planes.
We propose a learning-based solution that, unlike the existing strategies, sacrifices neither accuracy nor efficiency. The main idea is to learn a CNN that processes a 360° image in its equirectangular projection (fast) but mimics the "flat" filter responses that an existing network would produce on all tangent plane projections for the original spherical image (accurate). Because convolutions are indexed by spherical coordinates, we refer to our method as spherical convolution (SPHCONV). We develop a systematic procedure to adjust the network structure in order to account for distortions. Furthermore, we propose a kernel-wise pre-training procedure which significantly accelerates the training process.

In addition to providing fast general feature extraction for 360° imagery, our approach provides a bridge from 360° content to existing heavily supervised datasets dedicated to perspective images. In particular, training requires no new annotations: only the target CNN model (e.g., VGG [33] pre-trained on millions of labeled images) and an arbitrary collection of unlabeled 360° images. We evaluate SPHCONV on the Pano2Vid [35] and PASCAL VOC [9] datasets, both for raw convolution accuracy as well as impact on an object detection task. We show that it produces more precise outputs than baseline methods requiring similar computational cost, and similarly precise outputs as the exact solution while using orders of magnitude less computation. Furthermore, we demonstrate that SPHCONV can successfully replicate the widely used Faster-RCNN [30] detector on 360° data when training with only 1,000 unlabeled 360° images containing unrelated objects. For a similar cost as the baselines, SPHCONV generates better object proposals and recognition rates.

¹ e.g., with equirectangular projection, where latitudes are mapped to horizontal lines of uniform spacing.

2 Related Work

360° vision. Vision for 360° data is quickly gaining interest in recent years.
The SUN360 project samples multiple perspective images to perform scene viewpoint recognition [38]. PanoContext [41] parses 360° images using 3D bounding boxes, applying algorithms like line detection on perspective images then backprojecting results to the sphere. Motivated by the limitations of existing interfaces for viewing 360° video, several methods study how to automate field-of-view (FOV) control for display [19,26,34,35], adopting one of the two existing strategies for convolutions (Fig. 1). In these methods, a noted bottleneck is feature extraction cost, which is hampered by repeated sampling of perspective images/frames, e.g., to represent the space-time “glimpses" of [34,35]. This is exactly where our work can have positive impact. Prior work studies the impact of panoramic or wide angle images on hand-crafted features like SIFT [11,14,15]. While not applicable to CNNs, such work supports the need for features specific to 360° imagery, and thus motivates SPHCONV. Knowledge distillation Our approach relates to knowledge distillation [3, 5, 13, 18, 29, 31, 37], though we explore it in an entirely novel setting. Distillation aims to learn a new model given existing model(s). Rather than optimize an objective function on annotated data, it learns the new model that can reproduce the behavior of the existing model, by minimizing the difference between their outputs. Most prior work explores distillation for model compression [3, 5, 18, 31]. For example, a deep network can be distilled into a shallower [3] or thinner [31] one, or an ensemble can be compressed to a single model [18]. Rather than compress a model in the same domain, our goal is to learn across domains, namely to link networks on images with different projection models. Limited work considers distillation for transfer [13,29]. In particular, unlabeled target-source paired data can help learn a CNN for a domain lacking labeled instances (e.g., RGB vs. 
depth images) [13], and multi-task policies can be learned to simulate action value distributions of expert policies [29]. Our problem can also be seen as a form of transfer, though for a novel task motivated strongly by image processing complexity as well as supervision costs. Different from any of the above, we show how to adapt the network structure to account for geometric transformations caused by different projections. Also, whereas most prior work uses only the final output for supervision, we use the intermediate representation of the target network as both input and target output to enable kernel-wise pre-training. Spherical image projection Projecting a spherical image into a planar image is a long studied problem. There exists a large number of projection approaches (e.g., equirectangular, Mercator, etc.) [4]. None is perfect; every projection must introduce some form of distortion. The properties of different projections are analyzed in the context of displaying panoramic images [40]. In this work, we unwrap the spherical images using equirectangular projection because 1) this is a very common format used by camera vendors and researchers [1,35,38], and 2) it is equidistant along each row and column so the convolution kernel does not depend on the azimuthal angle. Our method in principle could be applied to other projections; their effect on the convolution operation remains to be studied. CNNs with geometric transformations There is an increasing interest in generalizing convolution in CNNs to handle geometric transformations or deformations. Spatial transformer networks (STNs) [20] represent a geometric transformation as a sampling layer and predict the transformation parameters based on input data. STNs assume the transformation is invertible such that the subsequent convolution can be performed on data without the transformation. This is not possible in spherical images because it requires a projection that introduces no distortion. 
Active convolution [22] learns the kernel shape together with the weights for a more general receptive field, and deformable convolution [7] goes one step further by predicting the receptive field location. These methods are too restrictive for spherical convolution, because they require a fixed kernel size and weight. In contrast, our method adapts the kernel size and weight based on the transformation to achieve better accuracy. Furthermore, our method exploits problem-specific geometric information for efficient training and testing. Some recent work studies convolution on a sphere [6,24] using spectral analysis, but those methods require manually annotated spherical images as training data, whereas our method can exploit existing models trained on perspective images as supervision. Also, it is unclear whether CNNs in the spectral domain can reach the same accuracy and efficiency as CNNs on a regular grid.

3 Approach

We describe how to learn spherical convolutions in equirectangular projection given a target network trained on perspective images. We define the objective in Sec. 3.1. Next, we introduce how to adapt the structure from the target network in Sec. 3.2. Finally, Sec. 3.3 presents our training process.

Figure 2: Inverse perspective projections P⁻¹ to equirectangular projections at different polar angles (θ = 36°, 108°, 180°). The same square image will distort to different sizes and shapes depending on θ. Because equirectangular projection unwraps the 180° longitude, a line will be split into two if it passes through the 180° longitude, which causes the double curve at θ = 36°.

3.1 Problem Definition

Let I_s be the input spherical image defined on spherical coordinates (θ, φ), and let I_e ∈ I^{W_e × H_e × 3} be the corresponding flat RGB image in equirectangular projection. I_e is defined by pixels on the image coordinates (x, y) ∈ D_e, where each (x, y) is linearly mapped to a unique (θ, φ).
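The content expansion shown in Fig. 2 can be quantified with a quick back-of-the-envelope calculation: in equirectangular projection, the row at polar angle θ represents a circle whose circumference is proportional to sin θ, so a tangent-plane patch of fixed angular width covers roughly α/sin θ degrees of longitude. The sketch below is our own small-angle approximation, not the paper's exact inverse projection P⁻¹:

```python
import math

def equirect_width_px(alpha_deg, theta_deg, W_e):
    """Approximate horizontal footprint, in equirectangular pixels, of an
    alpha-degree field of view centered at polar angle theta (degrees).
    Longitude lines bunch together by a factor sin(theta) away from the
    equator, so the footprint grows like 1/sin(theta) toward the poles."""
    stretch = 1.0 / max(math.sin(math.radians(theta_deg)), 1e-6)
    return alpha_deg * stretch / 360.0 * W_e
```

For instance, on a 360-pixel-wide map, a 60° field of view spans about 60 pixels at the equator (θ = 90°) but about twice as many at θ = 30°, which is why a single shared kernel cannot fit all rows.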
We define the perspective projection operator P, which projects an α-degree field of view (FOV) from I_s to W pixels on the tangent plane n̂ = (θ, φ). That is, P(I_s, n̂) = I_p ∈ I^{W × W × 3}. The projection operator is characterized by the pixel size Δ_pθ = α/W in I_p, and I_p denotes the resulting perspective image. Note that we assume Δθ = Δφ, following common digital imagery. Given a target network² N_p trained on perspective images I_p with receptive field (Rf) R × R, we define the output on the spherical image I_s at n̂ = (θ, φ) as

N_p(I_s)[θ, φ] = N_p(P(I_s, (θ, φ))),   (1)

where w.l.o.g. we assume W = R for simplicity. Our goal is to learn a spherical convolution network N_e that takes an equirectangular map I_e as input and, for every image position (x, y), produces as output the result of applying the perspective projection network to the corresponding tangent plane of the spherical image I_s:

N_e(I_e)[x, y] ≈ N_p(I_s)[θ, φ],  ∀(x, y) ∈ D_e,  (θ, φ) = (180° × y/H_e, 360° × x/W_e).   (2)

This can be seen as a domain adaptation problem where we want to transfer the model from the domain of I_p to that of I_e. However, unlike typical domain adaptation problems, the difference between I_p and I_e is characterized by a geometric projection transformation rather than a shift in data distribution. Note that the training data to learn N_e requires no manual annotations: it consists of arbitrary 360° images coupled with the "true" N_p outputs computed by exhaustive planar reprojections, i.e., evaluating the rhs of Eq. 1 for every (θ, φ). Furthermore, at test time, only a single equirectangular projection of the entire 360° input will be computed using N_e to obtain the dense (inferred) N_p outputs, which would otherwise require multiple projections and evaluations of N_p.

3.2 Network Structure

The main challenge in transferring N_p to N_e is the distortion introduced by equirectangular projection.
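Generating the training targets in Eq. (2) amounts to a double loop over output positions, projecting a tangent plane and running the perspective network at each one. A schematic sketch of ours (`project` and `net_p` are assumed callables standing in for P and N_p, not the authors' code):

```python
def make_targets(img_s, net_p, project, H_e, W_e):
    """Exhaustively evaluate Eq. (1) to build targets for training N_e:
    for each output position (x, y), map to (theta, phi), project the
    tangent plane, and record the perspective network's output there."""
    targets = {}
    for y in range(H_e):
        for x in range(W_e):
            theta = 180.0 * y / H_e   # polar angle, as in Eq. (2)
            phi = 360.0 * x / W_e     # azimuthal angle, as in Eq. (2)
            targets[(x, y)] = net_p(project(img_s, theta, phi))
    return targets
```

Note this is exactly the expensive exhaustive reprojection that spherical convolution amortizes away at test time, where a single pass over I_e replaces the W_e × H_e projections.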
The distortion is location dependent: a k × k square in perspective projection will not be a square in the equirectangular projection, and its shape and size will depend on the polar angle θ (see Fig. 2). The convolution kernel should transform accordingly. Our approach 1) adjusts the shape of the convolution kernel to account for the distortion, in particular the content expansion, and 2) reduces the number of max-pooling layers to match the pixel sizes in Ne and Np, as we detail next.

We adapt the architecture of Ne from Np using the following heuristic. The goal is to ensure each kernel receives enough information from the input in order to compute the target output. First, we untie the weights of the convolution kernels at different θ by learning one kernel Ke^y for each output row y. Next, we adjust the shape of Ke^y such that it covers the Rf of the original kernel. We consider Ke^y ∈ Ne to cover Kp ∈ Np if more than 95% of the pixels in the Rf of Kp are also in the Rf of Ke in Ie. The Rf of Kp in Ie is obtained by backprojecting the R × R grid to n̂ = (θ, 0) using P^{-1}, where the center of the grid aligns on n̂. Ke should be large enough to cover Kp, but it should also be as small as possible to avoid overfitting. Therefore, we optimize the shape of Ke^{l,y} for layer l as follows. The shape of Ke^{l,y} is initialized as 3 × 3. We first adjust the height kh, increasing it by 2 until the height of the Rf is larger than that of Kp in Ie. We then adjust the width kw in the same manner.

² e.g., Np could be AlexNet [25] or VGG [33] pre-trained for a large-scale recognition task.

Figure 3: Spherical convolution. The kernel weight in spherical convolution is tied only along each row of the equirectangular image (i.e., φ), and each kernel convolves along the row to generate a 1D output. Note that the kernel size differs across rows and layers, and it expands near the top and bottom of the image.
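The height/width search just described can be sketched as a simple loop. This is a minimal illustration: `covers` is a stand-in predicate for the 95% receptive-field coverage test, and the defaults mirror the 3 × 3 initialization and 7 × 7 cap mentioned in the text.

```python
def grow_kernel_dim(covers, upper=7, init=3):
    """Grow one kernel dimension by 2 until the resulting Rf covers the
    back-projected target Rf, never exceeding the upper bound (Sec. 3.2).
    `covers(k)` is a hypothetical coverage test."""
    k = init
    while not covers(k) and k + 2 <= upper:
        k += 2
    return k
```

The same routine would be run first for kh and then for kw, proceeding layer by layer from the bottom of the network.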
Furthermore, we restrict the kernel size kh × kw to be smaller than an upper bound Uk. See Fig. 4. Because the Rf of Ke^l depends on Ke^{l-1}, we search for the kernel size starting from the bottom layer. It is important to relax the kernel from being square to being rectangular, because equirectangular projection expands content horizontally near the poles of the sphere (see Fig. 2). If we restrict the kernel to be square, the Rf of Ke can easily be taller but narrower than that of Kp, which leads to overfitting. It is also important to restrict the kernel size; otherwise the kernel can grow wide rapidly near the poles and eventually cover the entire row. Although cutting off the kernel size may lead to information loss, the loss is not significant in practice: pixels in equirectangular projection do not distribute uniformly on the unit sphere; they are denser near the poles, so the pixels are by nature redundant in the region where the kernel size expands dramatically.

Besides adjusting the kernel sizes, we also adjust the number of pooling layers to match the pixel size ∆θ in Ne and Np. We define ∆θe = 180°/He and restrict We = 2He to ensure ∆θe = ∆φe. Because max-pooling introduces shift invariance up to kw pixels in the image, which corresponds to kw × ∆θ degrees on the unit sphere, the physical meaning of max-pooling depends on the pixel size. Since the pixel size is usually larger in Ie and max-pooling increases the pixel size by a factor of kw, we remove a pooling layer in Ne if ∆θe ≥ ∆θp. Fig. 3 illustrates how spherical convolution differs from an ordinary CNN.

Note that we approximate one layer in Np by one layer in Ne, so the number of layers and the number of output channels in each layer are exactly the same as in the target network. However, this does not have to be the case. For example, we could use two or more layers to approximate each layer in Np.
Although doing so may improve accuracy, it would also introduce significant overhead, so we stick with the one-to-one mapping.

3.3 Training Process

Given the goal in Eq. 2 and the architecture described in Sec. 3.2, we would like to learn the network Ne by minimizing the L2 loss E[(Ne(Ie) − Np(Is))²]. However, the network converges slowly, possibly due to the large number of parameters. Instead, we propose a kernel-wise pre-training process that disassembles the network and initially learns each kernel independently. To perform kernel-wise pre-training, we further require Ne to generate the same intermediate representation as Np in all layers l:

Ne^l(Ie)[x, y] ≈ Np^l(Is)[θ, φ], ∀l ∈ Ne. (3)

Given Eq. 3, every layer l ∈ Ne is independent of the others. In fact, every kernel is independent and can be learned separately. We learn each kernel by taking the “ground truth” value of the previous layer Np^{l-1}(Is) as input and minimizing the L2 loss E[(Ne^l(Ie) − Np^l(Is))²], except for the first layer. Note that Np^l refers to the convolution output of layer l before applying any non-linear operation, e.g., ReLU, max-pooling, etc. It is important to learn the target value before applying ReLU because it provides more information. We combine the non-linear operation with Ke^{l+1} during kernel-wise pre-training, and we use dilated convolution [39] to increase the Rf size instead of performing max-pooling on the input feature map.

For the first convolution layer, we derive the analytic solution directly. The projection operator P is linear in the pixels of the equirectangular projection: P(Is, n̂)[x, y] = Σ_ij c_ij Ie[i, j] for some coefficients c_ij.

Figure 4: Method to select the kernel height kh.
We project the receptive field of the target kernel to the equirectangular projection Ie and increase kh until it is taller than the target kernel in Ie. The kernel width kw is determined by the same procedure after kh is set. We restrict the kernel size kw × kh by an upper bound Uk.

The coefficients c_ij come from, e.g., bilinear interpolation. Because convolution is a weighted sum of input pixels, Kp ∗ Ip = Σ_xy w_xy Ip[x, y], we can combine the weight w_xy and the interpolation coefficient c_ij into a single convolution operator:

Kp^1 ∗ Is[θ, φ] = Σ_xy w_xy Σ_ij c_ij Ie[i, j] = Σ_ij (Σ_xy w_xy c_ij) Ie[i, j] = Ke^1 ∗ Ie. (4)

The output value of Ne^1 will be exact and requires no learning. Of course, the same is not possible for l > 1 because of the non-linear operations between layers. After kernel-wise pre-training, we can further fine-tune the network jointly across layers and kernels by minimizing the L2 loss of the final output. Because the pre-trained kernels cannot fully recover the intermediate representation, fine-tuning can help adjust the weights to account for residual errors. We ignore the constraint introduced in Eq. 3 when performing fine-tuning. Although Eq. 3 is necessary for kernel-wise pre-training, it restricts the expressive power of Ne and degrades the performance if we only care about the final output. Nevertheless, the weights learned by kernel-wise pre-training are a very good initialization in practice, and we typically only need to fine-tune the network for a few epochs.

One limitation of SPHCONV is that it cannot handle very close objects that span a large FOV. Because the goal of SPHCONV is to reproduce the behavior of models trained on perspective images, the capability and performance of the model are bounded by the target model Np. However, perspective cameras can only capture a small portion of a very close object in the FOV, and very close objects are usually not available in the training data of the target model Np.
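Since both the projection and the convolution are linear, the composed first-layer kernel of Eq. 4 can be formed by contracting the convolution weights with the interpolation coefficients. A small numerical check of this identity (shapes and names are ours; `c` holds assumed per-output-pixel interpolation coefficients of P):

```python
import numpy as np

def analytic_first_layer(w, c):
    """Eq. 4: K_e[i, j] = sum_xy w[x, y] * c[x, y, i, j].
    w is the target kernel (W x W); c maps each tangent-plane pixel
    (x, y) to its interpolation coefficients over Ie (W x W x Hi x Wi)."""
    return np.einsum('xy,xyij->ij', w, c)
```

Applying the target kernel to the interpolated tangent-plane values then gives the same number as applying the composed kernel directly to Ie.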
Therefore, even though 360° images offer a much wider FOV, SPHCONV inherits the limitations of Np and may not recognize very close, large objects. Another limitation of SPHCONV is the resulting model size. Because it unties the kernel weights along θ, the model size grows linearly with the equirectangular image height. The model size can easily grow to tens of gigabytes as the image resolution increases.

4 Experiments

To evaluate our approach, we consider both the accuracy of its convolutions and its applicability to object detection in 360° data. We use the VGG architecture³ and the Faster-RCNN [30] model as our target network Np. We learn a network Ne to produce the topmost (conv5_3) convolution output.

Datasets. We use two datasets: Pano2Vid for training, and Pano2Vid and PASCAL for testing.

Pano2Vid: We sample frames from the 360° videos in the Pano2Vid dataset [35] for both training and testing. The dataset consists of 86 videos crawled from YouTube using four keywords: “Hiking,” “Mountain Climbing,” “Parade,” and “Soccer.” We sample frames at 0.05 fps to obtain 1,056 frames for training and 168 frames for testing. We use “Mountain Climbing” for testing and the others for training, so the training and testing frames are from disjoint videos. See Supp. for the sampling process. Because the supervision is on a per-pixel basis, this corresponds to N × We × He ≈ 250M (non-i.i.d.) samples. Note that most object categories targeted by the Faster-RCNN detector do not appear in Pano2Vid, meaning that our experiments test the content-independence of our approach.

PASCAL VOC: Because the target model was originally trained and evaluated on PASCAL VOC 2007, we “360-ify” it to evaluate the object detector application. We test with the 4,952 PASCAL images, which contain 12,032 bounding boxes. We transform them to equirectangular images as if they originated from a 360° camera.

³ https://github.com/rbgirshick/py-faster-rcnn
In particular, each object bounding box is backprojected to 3 different scales {0.5R, 1.0R, 1.5R} and 5 different polar angles θ ∈ {36°, 72°, 108°, 144°, 180°} on the 360° image sphere using the inverse perspective projection, where R is the resolution of the target network's Rf. Regions outside the bounding box are zero-padded. See Supp. for details. Backprojection allows us to evaluate the performance at different levels of distortion in the equirectangular projection.

Metrics. We generate the output widely used in the literature (conv5_3) and evaluate it with the following metrics.

Network output error measures the difference between Ne(Ie) and Np(Is). In particular, we report the root-mean-square error (RMSE) over all pixels and channels. For PASCAL, we measure the error over the Rf of the detector network.

Detector network performance measures the performance of the detector network in Faster-RCNN using multi-class classification accuracy. We replace the ROI-pooling in Faster-RCNN by pooling over the bounding box in Ie. Note that the bounding box is backprojected to equirectangular projection and is no longer a square region.

Proposal network performance evaluates the proposal network in Faster-RCNN using average Intersection-over-Union (IoU). For each bounding box centered at n̂, we project the conv5_3 output to the tangent plane n̂ using P and apply the proposal network at the center of the bounding box on the tangent plane. Given the predicted proposals, we compute the IoUs between foreground proposals and the bounding box and take the maximum. The IoU is set to 0 if there is no foreground proposal. Finally, we average the IoU over bounding boxes.

We stress that our goal is not to build a new object detector; rather, we aim to reproduce the behavior of existing 2D models on 360° data with lower computational cost. Thus, the metrics capture how accurately and how quickly we can replicate the exact solution.
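The two simplest metrics above can be made concrete as follows. This is a minimal sketch; the axis-aligned box convention (x1, y1, x2, y2) is ours, whereas the actual proposal boxes in the evaluation are backprojected and not axis-aligned.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error over all pixels and channels, as used for
    the network output error metric."""
    return np.sqrt(np.mean((a - b) ** 2))

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2),
    the quantity averaged for the proposal network metric."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```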
Baselines. We compare our method with the following baselines.

• EXACT — Compute the true target value Np(Is)[θ, φ] for every pixel. This serves as an upper bound in performance and does not consider the computational cost.
• DIRECT — Apply Np on Ie directly. We replace max-pooling with dilated convolution to produce a full resolution output. This is Strategy I in Fig. 1 and is used in 360° video analysis [19,26].
• INTERP — Compute Np(Is)[θ, φ] every S pixels and interpolate the values for the others. We set S such that the computational cost is roughly the same as our SPHCONV. This is a more efficient variant of Strategy II in Fig. 1.
• PERSPECTIVE — Project Is onto a cube map [2] and then apply Np on each face of the cube, which is a perspective image with 90° FOV. The result is backprojected to Ie to obtain the feature on Ie. We use W = 960 for the cube map resolution, so ∆θ is roughly the same as in Ip. This is a second variant of Strategy II in Fig. 1, used in PanoContext [41].

SPHCONV variants. We evaluate three variants of our approach:

• OPTSPHCONV — To compute the output for each layer l, OPTSPHCONV computes the exact output for layer l−1 using Np(Is) and then applies spherical convolution for layer l. OPTSPHCONV serves as an upper bound for our approach, as it avoids accumulating any error across layers.
• SPHCONV-PRE — Uses the weights from kernel-wise pre-training directly, without fine-tuning.
• SPHCONV — The full spherical convolution with joint fine-tuning of all layers.

Implementation details. We set the resolution of Ie to 640×320. For the projection operator P, we map α = 65.5° to W = 640 pixels following SUN360 [38]. The pixel size is therefore ∆θe = 360°/640 for Ie and ∆θp = 65.5°/640 for Ip. Accordingly, we remove the first three max-pooling layers so Ne has only one max-pooling layer following conv4_3. The kernel size upper bound is Uk = 7 × 7, following the maximum kernel size in VGG. We insert batch normalization for conv4_1 to conv5_3. See Supp. for details.
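As a toy sanity check on the kernel-wise pre-training objective of Sec. 3.3: because each kernel is fit independently to pre-activation targets, with the exact previous-layer features as input, fitting a single kernel reduces to an ordinary least-squares problem. The sizes and data below are synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = rng.normal(size=5)            # unknown 1D row kernel, width 5
X = rng.normal(size=(200, 5))          # input patches from the exact previous layer
y = X @ k_true                         # pre-activation "ground truth" targets
k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimize the L2 loss of Eq. 3
```

In the paper's setting the fit is done with gradient descent on the L2 loss rather than in closed form; the point is only that each kernel's sub-problem is small and independent.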
4.1 Network output accuracy and computational cost

Fig. 5a shows the output error of layers conv3_3 and conv5_3 on the Pano2Vid [35] dataset (see Supp. for similar results on other layers). The error is normalized by that of the mean predictor. We evaluate the error at 5 polar angles θ uniformly sampled from the northern hemisphere, since the error is roughly symmetric about the equator.

Figure 5: (a) Network output error on Pano2Vid vs. polar angle; lower is better. Note the error of EXACT is 0 by definition. Our method's convolutions are much closer to the exact solution than the baselines'. (b) Computational cost vs. accuracy on PASCAL. Our approach yields accuracy closest to the exact solution while requiring orders of magnitude less computation time (left plot). Our cost is similar to the other approximations tested (right plot). Plot titles indicate the y-labels, and error is measured by root-mean-square error (RMSE).

Figure 6: Three AlexNet conv1 kernels (left squares) and their corresponding four SPHCONV-PRE kernels at θ ∈ {9°, 18°, 36°, 72°} (left to right).

First we discuss the three variants of our method. OPTSPHCONV performs the best in all layers and at all θ, validating our main idea of spherical convolution. It performs particularly well in the lower layers, because the Rf is larger in higher layers and the distortion becomes more significant. Overall, SPHCONV-PRE performs second best, but, as expected, the gap with OPTSPHCONV becomes larger in higher layers because of error propagation. SPHCONV outperforms SPHCONV-PRE in conv5_3 at the cost of larger error in lower layers (as seen here for conv3_3). It also has larger error at θ = 18°, for two possible reasons.
First, the learning curve indicates that the network learns more slowly near the pole, possibly because the Rf is larger and the pixels degenerate. Second, we optimize the joint L2 loss, which may trade the error near the pole for that at the center.

Compared with the baselines, ours achieves the lowest errors. DIRECT performs the worst among all methods, underscoring that convolutions on the flattened sphere, though fast, are inadequate. INTERP performs better than DIRECT, and its error decreases in higher layers. This is because the Rf is larger in the higher layers, so the S-pixel shift in Ie causes relatively smaller changes in the Rf and therefore in the network output. PERSPECTIVE performs similarly across layers and outperforms INTERP in lower layers. The error of PERSPECTIVE is particularly large at θ = 54°, which is close to the boundary of the perspective image and has larger perspective distortion.

Fig. 5b shows the accuracy vs. cost tradeoff. We measure computational cost by the number of Multiply-Accumulate (MAC) operations. The leftmost plot shows cost on a log scale. Here we see that EXACT, whose outputs we wish to replicate, is about 400 times slower than SPHCONV, and SPHCONV approaches EXACT's detector accuracy much better than all baselines. The second plot shows that SPHCONV is about 34% faster than INTERP (while performing better on all metrics). PERSPECTIVE is the fastest among all methods and is 60% faster than SPHCONV, followed by DIRECT, which is 23% faster than SPHCONV. However, both baselines are noticeably inferior in accuracy compared to SPHCONV.

To visualize what our approach has learned, we learn the first layer of the AlexNet [25] model provided by the Caffe package [23] and examine the resulting kernels. Fig. 6 shows the original kernel Kp and the corresponding kernels Ke at different polar angles θ.
Ke is usually a re-scaled version of Kp, but the weights are often amplified because multiple pixels in Kp fall to the same pixel in Ke, as in the second example. We also observe situations where the high-frequency signal in the kernel is reduced, as in the third example, possibly because the kernel is smaller. Note that we learn the first convolution layer for visualization purposes only, since l = 1 (only) has an analytic solution (cf. Sec. 3.3). See Supp. for the complete set of kernels.

4.2 Object detection and proposal accuracy

Having established that our approach provides accurate and efficient Ne convolutions, we now examine how important that accuracy is to object detection on 360° inputs. Fig. 7a shows the result of the Faster-RCNN detector network on PASCAL in 360° format. OPTSPHCONV performs almost as well as EXACT. The performance degrades in SPHCONV-PRE because of error accumulation, but it still significantly outperforms DIRECT and is better than INTERP and PERSPECTIVE in most regions.

Figure 7: Faster-RCNN object detection accuracy on a 360° version of PASCAL across polar angles θ, for both the (a) detector network and (b) proposal network (IoU, at scales 0.5R and 1.0R). R refers to the Rf of Np. Best viewed in color.

Figure 8: Object detection examples on 360° PASCAL test images. Images show the top 40% of the equirectangular projection; black regions are undefined pixels. Text gives the predicted label, multi-class probability, and IoU, respectively. Our method successfully detects objects undergoing severe distortion, some of which are barely recognizable even for a human viewer.
Although joint training (SPHCONV) improves the output error near the equator, the error is larger near the pole, which degrades the detector performance. Note that the Rf of the detector network spans multiple rows, so the error is the weighted sum of the errors at different rows. The result, together with Fig. 5a, suggests that SPHCONV reduces the conv5_3 error in parts of the Rf but increases it in other parts. The detector network needs accurate conv5_3 features throughout the Rf in order to generate good predictions.

DIRECT again performs the worst. In particular, its performance drops significantly at θ = 18°, showing that it is sensitive to the distortion. In contrast, INTERP performs better near the pole because the samples are denser on the unit sphere; in fact, INTERP should converge to EXACT at the pole. PERSPECTIVE outperforms INTERP near the equator but is worse in other regions. Note that θ ∈ {18°, 36°} falls on the top face of the cube map, and θ = 54° is near the border of the face. The result suggests that PERSPECTIVE is still sensitive to the polar angle, and it performs best when the object is near the center of a face, where the perspective distortion is small.

Fig. 7b shows the performance of the object proposal network for two scales (see Supp. for more). Interestingly, the result differs from that of the detector network. OPTSPHCONV still performs almost the same as EXACT, and SPHCONV-PRE performs better than the baselines. However, DIRECT now outperforms the other baselines, suggesting that the proposal network is not as sensitive as the detector network to the distortion introduced by equirectangular projection. The performance of the methods is similar when the object is larger (right plot), even though the output error is significantly different. The only exception is PERSPECTIVE, which performs poorly for θ ∈ {54°, 72°, 90°} regardless of the object scale. This again suggests that objectness is sensitive to which perspective image is sampled. Fig.
8 shows examples of objects successfully detected by our approach in spite of severe distortions. See Supp. for more examples.

5 Conclusion

We propose to learn spherical convolutions for 360° images. Our solution entails a new form of distillation across camera projection models. Compared to current practices for feature extraction on 360° images/video, spherical convolution benefits efficiency by avoiding multiple perspective projections, and it benefits accuracy by adapting the kernels to the distortions of equirectangular projection. Results on two datasets demonstrate how it successfully transfers state-of-the-art vision models from the realm of limited-FOV 2D imagery into the realm of omnidirectional data. Future work will explore SPHCONV in the context of other dense prediction problems like segmentation, as well as the impact of different projection models within our basic framework.

References
[1] https://facebook360.fb.com/editing-360-photos-injecting-metadata/.
[2] https://code.facebook.com/posts/1638767863078802/under-the-hood-building-360-video/.
[3] J. Ba and R. Caruana. Do deep nets really need to be deep? In NIPS, 2014.
[4] A. Barre, A. Flocon, and R. Hansen. Curvilinear perspective, 1987.
[5] C. Buciluă, R. Caruana, and A. Niculescu-Mizil. Model compression. In ACM SIGKDD, 2006.
[6] T. Cohen, M. Geiger, J. Köhler, and M. Welling. Convolutional networks for spherical signals. arXiv preprint arXiv:1709.04893, 2017.
[7] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017.
[8] J. Deng, W. Dong, R. Socher, L. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
[9] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, Jan. 2015.
[10] C. Feichtenhofer, A. Pinz, and A. Zisserman.
Convolutional two-stream network fusion for video action recognition. In CVPR, 2016.
[11] A. Furnari, G. M. Farinella, A. R. Bruna, and S. Battiato. Affine covariant features for fisheye distortion local modeling. IEEE Transactions on Image Processing, 26(2):696–710, 2017.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[13] S. Gupta, J. Hoffman, and J. Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
[14] P. Hansen, P. Corke, W. Boles, and K. Daniilidis. Scale-invariant features on the sphere. In ICCV, 2007.
[15] P. Hansen, P. Corke, W. Boles, and K. Daniilidis. Scale invariant feature matching with wide angle images. In IROS, 2007.
[16] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[18] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[19] H.-N. Hu, Y.-C. Lin, M.-Y. Liu, H.-T. Cheng, Y.-J. Chang, and M. Sun. Deep 360 Pilot: Learning a deep agent for piloting through 360° sports video. In CVPR, 2017.
[20] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
[21] S. Jain, B. Xiong, and K. Grauman. FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in video. In CVPR, 2017.
[22] Y. Jeon and J. Kim. Active convolution: Learning the shape of convolution for image classification. In CVPR, 2017.
[23] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, 2014.
[24] R. Khasanova and P. Frossard. Graph-based classification of omnidirectional images. arXiv preprint arXiv:1707.08301, 2017.
[25] A. Krizhevsky, I. Sutskever, and G. Hinton.
ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[26] W.-S. Lai, Y. Huang, N. Joshi, C. Buehler, M.-H. Yang, and S. B. Kang. Semantic-driven generation of hyperlapse from 360° video. IEEE Transactions on Visualization and Computer Graphics, PP(99):1–1, 2017.
[27] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proc. of the IEEE, 1998.
[28] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[29] E. Parisotto, J. Ba, and R. Salakhutdinov. Actor-Mimic: Deep multitask and transfer reinforcement learning. In ICLR, 2016.
[30] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[31] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. In ICLR, 2015.
[32] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
[33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[34] Y.-C. Su and K. Grauman. Making 360° video watchable in 2D: Learning videography for click free viewing. In CVPR, 2017.
[35] Y.-C. Su, D. Jayaraman, and K. Grauman. Pano2Vid: Automatic cinematography for watching 360° videos. In ACCV, 2016.
[36] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.
[37] Y.-X. Wang and M. Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016.
[38] J. Xiao, K. A. Ehinger, A. Oliva, and A. Torralba. Recognizing scene viewpoint using panoramic place representation. In CVPR, 2012.
[39] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
[40] L. Zelnik-Manor, G. Peters, and P. Perona.
Squaring the circle in panoramas. In ICCV, 2005.
[41] Y. Zhang, S. Song, P. Tan, and J. Xiao. PanoContext: A whole-room 3D context model for panoramic scene understanding. In ECCV, 2014.
[42] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
Gauging Variational Inference

Sungsoo Ahn∗ Michael Chertkov† Jinwoo Shin∗
∗School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
†1 Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
†2 Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
∗{sungsoo.ahn, jinwoos}@kaist.ac.kr †chertkov@lanl.gov

Abstract

Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments indeed confirm that the proposed algorithms outperform and generalize MF and BP.

1 Introduction

Graphical Models (GM) express factorization of joint multivariate probability distributions in statistics via a graph of relations between variables. The concept of GM has been developed and/or used successfully in information theory [1, 2], physics [3, 4, 5, 6, 7], artificial intelligence [8], and machine learning [9, 10]. Of the many inference problems one can formulate using a GM, computing the partition function (normalization), or equivalently computing marginal probability distributions, is the most important and universal inference task of interest.
However, this paradigmatic problem is known to be computationally intractable in general, i.e., it is #P-hard even to approximate [11]. Markov chain Monte Carlo (MCMC) [12] is a classical approach to the inference task, but it typically suffers from exponentially slow mixing or large variance. Variational inference is an approach that states the inference task as an optimization. Hence, it does not have such issues of MCMC and is often more favorable. Mean-field (MF) [6] and belief propagation (BP) [13] are arguably the most popular algorithms of the variational type. They are distributed, fast, and overall very successful in practical applications, even though they are heuristics lacking systematic error control. This has motivated researchers to seek methods with some guarantees, e.g., providing lower bounds [14, 15] and upper bounds [16, 17, 15] for the partition function of a GM. In another line of research, which this paper extends and contributes to, the so-called re-parametrizations [18], gauge transformations (GT) [19, 20] and holographic transformations [21, 22] were explored. This class of distinct, but related, transformations consists in modifying a GM by changing factors, associated with elements of the graph, continuously such that the partition function stays invariant.¹ In this paper, we choose to work with GT as the most general one among the three approaches. Once applied to a GM, it transforms the original partition function, defined as a weighted series/sum over states, to a new one, dependent on the choice of gauges. In particular, a fixed point of BP minimizes the so-called Bethe free energy [26], and it can also be understood as an optimal GT [19, 20, 27, 28].

¹ See [23, 24, 25] for discussions of relations between the aforementioned techniques.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Moreover, fixing GT in accordance with BP results in the so-called loop series expression for the partition function [19, 20]. In this paper we generalize [19, 20] and explore a more general class of GT: we develop a new gauge-optimization approach which results in ‘better’ variational inference schemes than MF, BP and other related methods.

Contribution. The main contribution of this paper consists in developing two novel variational methods, called Gauged-MF (G-MF) and Gauged-BP (G-BP), providing lower bounds on the partition function of a GM. While MF minimizes the (exact) Gibbs free energy under (reduced) product distributions, G-MF does the same task by introducing an additional GT. Due to the additional degree of freedom in optimization, G-MF improves the lower bound of the partition function provided by MF systematically. Similarly, G-BP generalizes BP, extending the interpretation of the latter as an optimization of the Bethe free energy over GT [19, 20, 27, 28], by imposing additional constraints on GT, thus forcing all the terms in the resulting series for the partition function to remain non-negative. Consequently, G-BP results in a provable lower bound for the partition function, while BP does not (except for log-supermodular models [29]). We prove that both G-MF and G-BP are exact for GMs defined over a single cycle, which we call an ‘alternating cycle/loop’, as well as over line graphs. The alternating cycle case is surprising, as it represents the simplest ‘counter-example’ from [30] illustrating the failures of MF and BP. For general GMs, we also establish that G-MF is better than, or at least as good as, G-BP. However, we also develop novel error correction schemes for G-BP such that the lower bound of the partition function provided by G-BP is improved systematically/sequentially, eventually outperforming G-MF at the expense of increased computational complexity.
Such an error-correction scheme has previously been studied for improving BP by accounting for the loop series, which consists of both positive and negative terms [31, 32]. By our design of G-BP, the corresponding series consists of only non-negative terms, which makes systematic corrections to G-BP easier. We also show that the proposed GT-based optimizations can be restated as smooth and unconstrained, thus allowing efficient solutions via gradient-descent-type algorithms or any generic optimization solver, such as IPOPT [33]. We experiment with IPOPT on complete GMs of relatively small size and on large GMs (up to 300 variables) of fixed degree. Our experiments indeed confirm that the newly proposed algorithms outperform and generalize MF and BP. Finally, we remark that all statements of the paper are made within the framework of so-called Forney-style GMs [34], which is general in that it allows interactions beyond pair-wise (i.e., high-order GMs) and includes alternative GM formulations based on factor graphs [35].

2 Preliminaries

2.1 Graphical model

Factor-graph model. Given an (undirected) bipartite factor graph G = (X, F, E), a joint distribution of (binary) random variables x = [x_v \in \{0,1\} : v \in X] is called a factor-graph Graphical Model (GM) if it factorizes as follows:

p(x) = \frac{1}{Z} \prod_{a \in F} f_a(x_{\partial a}),

where the f_a are non-negative functions called factor functions, \partial a \subseteq X consists of the variable nodes neighboring factor a, and the normalization constant

Z := \sum_{x \in \{0,1\}^X} \prod_{a \in F} f_a(x_{\partial a})

is called the partition function. A factor-graph GM is called pair-wise if |\partial a| \le 2 for all a \in F, and high-order otherwise. It is known that approximating the partition function is #P-hard in general [11].

Forney-style model. In this paper, we primarily use the Forney-style GM [34] instead of the factor-graph GM. Elementary random variables in a Forney-style GM are associated with the edges of an undirected graph G = (V, E).
The random vector x = [x_{ab} \in \{0,1\} : \{a,b\} \in E] is then realized with the probability distribution

p(x) = \frac{1}{Z} \prod_{a \in V} f_a(x_a),  (1)

where x_a = [x_{ab} : b \in \partial a] collects the variables on the edges neighboring node a, and Z := \sum_{x \in \{0,1\}^E} \prod_{a \in V} f_a(x_a). As argued in [19, 20], the Forney-style GM constitutes a more universal/compact description for gauge transformations without any restriction of generality: given any factor-graph GM, one can construct an equivalent Forney-style GM (see the supplementary material).

2.2 Mean-field and belief propagation

We now introduce the two most popular methods for approximating the partition function: the mean-field and the Bethe (i.e., belief propagation) approximations. Given any (Forney-style) GM p(x) defined as in (1) and any distribution q(x) over all variables, the Gibbs free energy is defined as

F_{\mathrm{Gibbs}}(q) := \sum_{x \in \{0,1\}^E} q(x) \log \frac{q(x)}{\prod_{a \in V} f_a(x_a)}.  (2)

The partition function is related to the Gibbs free energy according to -\log Z = \min_q F_{\mathrm{Gibbs}}(q), where the optimum is achieved at q = p [35]. This optimization is over all valid probability distributions in an exponentially large space, and is obviously intractable. In the mean-field (MF) approximation, we minimize the Gibbs free energy over the family of tractable product distributions q(x) = \prod_{\{a,b\} \in E} q_{ab}(x_{ab}), where each independent q_{ab}(x_{ab}) is a proper probability distribution, acting as a (mean-field) proxy for the marginal of q(x) over x_{ab}. By construction, the MF approximation provides a lower bound on \log Z. In the Bethe approximation, the so-called Bethe free energy approximates the Gibbs free energy [36]:

F_{\mathrm{Bethe}}(b) = \sum_{a \in V} \sum_{x_a \in \{0,1\}^{\partial a}} b_a(x_a) \log \frac{b_a(x_a)}{f_a(x_a)} - \sum_{\{a,b\} \in E} \sum_{x_{ab} \in \{0,1\}} b_{ab}(x_{ab}) \log b_{ab}(x_{ab}),  (3)

where the beliefs b = [b_a, b_{ab} : a \in V, \{a,b\} \in E] should satisfy the following 'consistency' constraints:

0 \le b_a, b_{ab} \le 1,  \quad \sum_{x_{ab} \in \{0,1\}} b_{ab}(x_{ab}) = 1,  \quad \sum_{x'_a \setminus x_{ab} \in \{0,1\}^{\partial a}} b_a(x'_a) = b_{ab}(x_{ab})  \quad \forall \{a,b\} \in E.
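The two quantities just introduced, the partition function Z and the Gibbs free energy F_Gibbs(q), can be checked by brute force on a toy model. The sketch below (our own illustrative construction: the triangle graph, the random factor values, and all function names are not from the paper) builds a small Forney-style GM with binary variables on edges, computes Z by exact enumeration, and verifies the variational bound exp(-F_Gibbs(q)) <= Z for a product distribution q:

```python
import itertools
import math
import random

# Tiny Forney-style GM on a triangle: variables live on edges, factors on nodes.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
nbr = {a: [e for e in E if a in e] for a in V}  # edges incident to node a

random.seed(0)
# f[a] maps the tuple of incident edge values to a positive weight
f = {a: {xs: math.exp(random.uniform(-1, 1))
         for xs in itertools.product([0, 1], repeat=len(nbr[a]))}
     for a in V}

def factor_value(a, x):
    """Evaluate f_a on a full edge assignment x (a dict edge -> {0,1})."""
    return f[a][tuple(x[e] for e in nbr[a])]

def partition_function():
    """Z = sum over all edge assignments of prod_a f_a(x_a)."""
    Z = 0.0
    for vals in itertools.product([0, 1], repeat=len(E)):
        x = dict(zip(E, vals))
        w = 1.0
        for a in V:
            w *= factor_value(a, x)
        Z += w
    return Z

def gibbs_free_energy(q):
    """F_Gibbs(q) = sum_x q(x) log( q(x) / prod_a f_a(x_a) ) for a product
    distribution with q[e] = P(x_e = 1). Since -log Z = min over all q,
    exp(-F_Gibbs(q)) <= Z for every q."""
    F = 0.0
    for vals in itertools.product([0, 1], repeat=len(E)):
        x = dict(zip(E, vals))
        qx = 1.0
        for e in E:
            qx *= q[e] if x[e] == 1 else 1 - q[e]
        if qx > 0:
            w = 1.0
            for a in V:
                w *= factor_value(a, x)
            F += qx * (math.log(qx) - math.log(w))
    return F

Z = partition_function()
q = {e: 0.5 for e in E}  # a crude mean-field candidate
assert math.exp(-gibbs_free_energy(q)) <= Z + 1e-9
```

The bound holds for any distribution q, not only product ones; the mean-field approximation simply restricts the minimization of F_Gibbs to product distributions, which is what makes it tractable.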
Here, x'_a \setminus x_{ab} denotes the vector x'_a with its component x'_{ab} = x_{ab} held fixed, and \min_b F_{\mathrm{Bethe}}(b) is the Bethe estimate of -\log Z. The popular belief propagation (BP) distributed heuristic solves this optimization iteratively [36]. The Bethe approximation is exact over trees, i.e., -\log Z = \min_b F_{\mathrm{Bethe}}(b). In the case of a general loopy graph, however, the BP estimate lacks approximation guarantees. It is known, though, that the result of the BP optimization lower bounds the log-partition function \log Z if the factors are log-supermodular [29].

2.3 Gauge transformation

Gauge transformation (GT) [19, 20] is a family of linear transformations of the factor functions in (1) which leaves the partition function Z invariant. It is defined with respect to the following set of invertible 2 \times 2 matrices G_{ab}, for \{a,b\} \in E, coined gauges:

G_{ab} = \begin{bmatrix} G_{ab}(0,0) & G_{ab}(0,1) \\ G_{ab}(1,0) & G_{ab}(1,1) \end{bmatrix}.

The GM gauge transformed with respect to G = [G_{ab}, G_{ba} : \{a,b\} \in E] consists of factors expressed as

f_{a,G}(x_a) = \sum_{x'_a \in \{0,1\}^{\partial a}} f_a(x'_a) \prod_{b \in \partial a} G_{ab}(x_{ab}, x'_{ab}).

Here one treats the independent copies x_{ab} and x_{ba} equivalently for notational convenience, and \{G_{ab}, G_{ba}\} is a conjugated pair of distinct matrices satisfying the gauge constraint G_{ab}^{\top} G_{ba} = I, where I is the identity matrix. One can then prove invariance of the partition function under the transformation:

Z = \sum_{x \in \{0,1\}^{|E|}} \prod_{a \in V} f_a(x_a) = \sum_{x \in \{0,1\}^{|E|}} \prod_{a \in V} f_{a,G}(x_a).  (4)

Consequently, GT results in the gauge transformed distribution p_G(x) = \frac{1}{Z} \prod_{a \in V} f_{a,G}(x_a). Note that some components of p_G(x) can be negative, in which case it is not a valid distribution. We remark that the Bethe/BP approximation can be interpreted as a specific choice of GT [19, 20]: any fixed point of BP corresponds to a special set of gauges making an arbitrarily chosen configuration/state x least sensitive to local variations of the gauge.
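The invariance (4) is easy to verify numerically: pick any invertible gauge G_ab per edge, set G_ba = (G_ab^T)^{-1} so that the constraint G_ab^T G_ba = I holds, transform the factors, and compare partition functions. The sketch below does this on a toy triangle GM (the graph, the gauge sampling ranges, and all names are our own illustrative choices, not the paper's):

```python
import itertools
import math
import random

# Tiny Forney-style GM on a triangle.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
nbr = {a: [e for e in E if a in e] for a in V}

random.seed(1)
f = {a: {xs: math.exp(random.uniform(-1, 1))
         for xs in itertools.product([0, 1], repeat=len(nbr[a]))}
     for a in V}

def Z_of(factors):
    """Brute-force partition function for a given set of node factors."""
    Z = 0.0
    for vals in itertools.product([0, 1], repeat=len(E)):
        x = dict(zip(E, vals))
        w = 1.0
        for a in V:
            w *= factors[a][tuple(x[e] for e in nbr[a])]
        Z += w
    return Z

def inv_T(G):
    """(G^T)^{-1} of a 2x2 matrix G, so that G^T @ inv_T(G) = I."""
    (a, b), (c, d) = G
    det = a * d - b * c
    return [[d / det, -c / det], [-b / det, a / det]]

# One gauge per directed edge; the conjugate pair satisfies G_ab^T G_ba = I.
G = {}
for (u, v) in E:
    M = [[random.uniform(1.0, 1.5), random.uniform(-0.3, 0.3)],
         [random.uniform(-0.3, 0.3), random.uniform(1.0, 1.5)]]
    G[(u, v)] = M          # gauge on the (u -> v) side
    G[(v, u)] = inv_T(M)   # conjugate gauge on the (v -> u) side

def gauge_transform(a):
    """f_{a,G}(x_a) = sum_{x'_a} f_a(x'_a) prod_{b in da} G_ab(x_ab, x'_ab)."""
    out = {}
    for xs in itertools.product([0, 1], repeat=len(nbr[a])):
        s = 0.0
        for xp in itertools.product([0, 1], repeat=len(nbr[a])):
            w = f[a][xp]
            for k, e in enumerate(nbr[a]):
                b = e[0] if e[1] == a else e[1]
                w *= G[(a, b)][xs[k]][xp[k]]
            s += w
        out[xs] = s
    return out

# Transformed factors may in general have negative entries; Z is preserved anyway.
fG = {a: gauge_transform(a) for a in V}
Z0, ZG = Z_of(f), Z_of(fG)
assert abs(Z0 - ZG) < 1e-8 * Z0
```

The check works because summing a product G_ab(x_e, x'^a_e) G_ba(x_e, x'^b_e) over the shared edge variable x_e produces exactly (G_ab^T G_ba)(x'^a_e, x'^b_e) = I, which glues the two independent copies of each edge variable back together.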
Formally, the following non-convex optimization is known to be equivalent to the Bethe approximation:

maximize_G \quad \sum_{a \in V} \log f_{a,G}(0, 0, \ldots)
subject to \quad G_{ab}^{\top} G_{ba} = I, \quad \forall \{a,b\} \in E,  (5)

and the set of BP gauges corresponds to stationary points of (5), whose objective equals the respective (negative) Bethe free energy, i.e., \sum_{a \in V} \log f_{a,G}(0, 0, \ldots) = -F_{\mathrm{Bethe}}.

3 Gauge optimization for approximating partition functions

We are now ready to describe two novel gauge-optimization schemes (different from (5)) that provide guaranteed lower-bound approximations for \log Z. Our first GT scheme, coined Gauged-MF (G-MF), can be viewed as modifying and improving the MF approximation, while our second GT scheme, coined Gauged-BP (G-BP), modifies and improves the Bethe approximation so that it provides a provable lower bound on \log Z, which bare BP does not guarantee. The G-BP scheme also allows further improvement (in terms of output quality) at the expense of making the underlying algorithm/computation more complex.

3.1 Gauged mean-field

We first propose the following optimization, inspired by, and also improving upon, the MF approximation:

maximize_{q,G} \quad \sum_{a \in V} \sum_{x_a \in \{0,1\}^{\partial a}} q_a(x_a) \log f_{a,G}(x_a) - \sum_{\{a,b\} \in E} \sum_{x_{ab} \in \{0,1\}} q_{ab}(x_{ab}) \log q_{ab}(x_{ab})
subject to \quad G_{ab}^{\top} G_{ba} = I, \quad \forall \{a,b\} \in E,
\quad f_{a,G}(x_a) \ge 0, \quad \forall a \in V, \; \forall x_a \in \{0,1\}^{\partial a},
\quad q(x) = \prod_{\{a,b\} \in E} q_{ab}(x_{ab}), \quad q_a(x_a) = \prod_{b \in \partial a} q_{ab}(x_{ab}), \quad \forall a \in V.  (6)

Recall that the MF approximation optimizes the Gibbs free energy with respect to q given the original GM, i.e., the original factors. In contrast, (6) jointly optimizes over q and G. Since the partition function of the gauge transformed GM equals that of the original GM, (6) also outputs a lower bound on the (original) partition function, and always outperforms MF due to the additional degree of freedom in G. The non-negativity constraints f_{a,G}(x_a) \ge 0 for each factor enforce that the gauge transformed GM defines a valid probability distribution (all components are non-negative).
To solve (6), we propose a strategy alternating between two optimizations, formally stated in Algorithm 1. The alternation is between updating q in Step A and updating G in Step C. The optimization in Step A is simple, as one can apply any solver for the mean-field approximation. Step C, on the other hand, requires a new solver and, at first glance, looks complicated due to the nonlinear constraints. However, the constraints can actually be eliminated. Indeed, one observes that the non-negativity constraint f_{a,G}(x_a) \ge 0 is redundant, because each term q_a(x_a) \log f_{a,G}(x_a) in the objective already prevents the factors from approaching zero, thus keeping them positive. Equivalently, once the current G satisfies the non-negativity constraints, the objective term q_a(x_a) \log f_{a,G}(x_a) acts as a log-barrier forcing the constraints to remain satisfied at the next step of an iterative optimization procedure. Furthermore, the gauge constraint G_{ab}^{\top} G_{ba} = I can also be removed by simply expressing one of the two gauges via the other, e.g., G_{ba} via (G_{ab}^{\top})^{-1}. Step C can then be solved by any unconstrained iterative optimization method of gradient-descent type. Finally, the additional (intermediate) Step B handles the extreme cases where q_{ab}(x_{ab}) = 0 at the optimum for some \{a,b\}. We resolve the singularity by perturbing the distribution, setting zero probabilities to a small value q_{ab}(x_{ab}) = \delta, where \delta > 0 is sufficiently small.

Algorithm 1 Gauged mean-field
1: Input: GM defined over graph G = (V, E) with factors \{f_a\}_{a \in V}; a sequence of decreasing barrier terms \delta_1 > \delta_2 > \cdots > \delta_T > 0 (to handle extreme cases).
2: for t = 1, 2, \cdots, T do
3: Step A. Update q by solving the mean-field approximation, i.e., the following optimization:
maximize_q \quad \sum_{a \in V} \sum_{x_a \in \{0,1\}^{\partial a}} q_a(x_a) \log f_{a,G}(x_a) - \sum_{\{a,b\} \in E} \sum_{x_{ab} \in \{0,1\}} q_{ab}(x_{ab}) \log q_{ab}(x_{ab})
subject to \quad q(x) = \prod_{\{a,b\} \in E} q_{ab}(x_{ab}), \quad q_a(x_a) = \prod_{b \in \partial a} q_{ab}(x_{ab}), \quad \forall a \in V.
4: Step B.
For factors with zero values, i.e., q_{ab}(x_{ab}) = 0, perturb the distribution by setting

q_{ab}(x'_{ab}) = \delta_t if x'_{ab} = x_{ab}, and 1 - \delta_t otherwise.

5: Step C. Update G by solving the following optimization:
maximize_G \quad \sum_{x \in \{0,1\}^E} q(x) \log \prod_{a \in V} f_{a,G}(x_a)
subject to \quad G_{ab}^{\top} G_{ba} = I, \quad \forall \{a,b\} \in E.
6: end for
7: Output: Set of gauges G and product distribution q.

In summary, it is straightforward to check that Algorithm 1 converges to a local optimum of (6), similarly to other solvers developed for the mean-field and Bethe approximations. We also provide an important class of GMs on which Algorithm 1 provably outperforms both the MF and BP (Bethe) approximations. Specifically, we prove that the optimization (6) is exact when the graph is a line (a special case of a tree) and, somewhat surprisingly, when it is a single loop/cycle with an odd number of factors represented by negative definite matrices. The latter case is the so-called 'alternating cycle' example, which was introduced in [30] as the simplest loopy example on which the MF and BP approximations perform quite badly. Formally, we state the following theorem, whose proof is given in the supplementary material.

Theorem 1. For a GM defined on any line graph or alternating cycle, the optimal objective of (6) is equal to the exact log partition function, i.e., \log Z.

3.2 Gauged belief propagation

We start the discussion of the G-BP scheme by noticing that, according to [37], the G-MF gauge optimization (6) can be reduced to the BP/Bethe gauge optimization (5) by eliminating the non-negativity constraints f_{a,G}(x_a) \ge 0 and replacing the product distribution q(x) by

q(x) = 1 if x = (0, 0, \cdots), and 0 otherwise.  (7)

Motivated by this observation, we propose the following G-BP optimization:

maximize_G \quad \sum_{a \in V} \log f_{a,G}(0, 0, \cdots)
subject to \quad G_{ab}^{\top} G_{ba} = I, \quad \forall \{a,b\} \in E,
\quad f_{a,G}(x_a) \ge 0, \quad \forall a \in V, \; \forall x_a \in \{0,1\}^{\partial a}.  (8)

The only difference between (5) and (8) is the addition of the non-negativity constraints on the factors in (8).
Hence, (8) outputs a lower bound on the partition function, while the output of (5) can be larger or smaller than \log Z. It is also easy to verify that (8) (for G-BP) is equivalent to (6) (for G-MF) with q fixed to (7). We therefore propose the algorithmic procedure for solving (8) formally described in Algorithm 2; it should be viewed as a modification of Algorithm 1 with q replaced by (7) in Step A, and with a properly chosen log-barrier term in Step C. As discussed for Algorithm 1, it is straightforward to verify that Algorithm 2 also converges to a local optimum of (8), and one can replace G_{ba} by (G_{ab}^{\top})^{-1} for each pair of conjugated matrices in order to build a convergent gradient-descent implementation of the optimization.

Algorithm 2 Gauged belief propagation
1: Input: GM defined over graph G = (V, E) with factors \{f_a\}_{a \in V}; a sequence of decreasing barrier terms \delta_1 > \delta_2 > \cdots > \delta_T > 0.
2: for t = 1, 2, \cdots, T do
3: Update G by solving the following optimization:
maximize_G \quad \sum_{a \in V} \log f_{a,G}(0, 0, \cdots) + \delta_t \sum_{x \in \{0,1\}^E} q(x) \log \prod_{a \in V} f_{a,G}(x_a)
subject to \quad G_{ab}^{\top} G_{ba} = I, \quad \forall \{a,b\} \in E.
4: end for
5: Output: Set of gauges G.

Since fixing q(x) eliminates a degree of freedom in (6), G-BP should perform worse than G-MF, i.e., (8) \le (6). However, G-BP is still meaningful, for the following reasons. First, Theorem 1 still holds for (8), i.e., the optimal q of (6) is achieved at (7) for any line graph or alternating cycle (see the proof of Theorem 1 in the supplementary material). More importantly, G-BP can be corrected systematically. At a high level, the "error-correction" strategy consists in correcting the approximation error of (8) sequentially while maintaining the desired lower-bounding guarantee. The key idea is to decompose the error of (8) into partition functions of multiple GMs, and then repeatedly lower bound each partition function.
Formally, we fix an arbitrary ordering of the edges e_1, \cdots, e_{|E|} and define a corresponding GM for each e_i as follows:

p(x) = \frac{1}{Z_i} \prod_{a \in V} f_{a,G}(x_a) \quad \text{for } x \in \mathcal{X}_i,

where Z_i := \sum_{x \in \mathcal{X}_i} \prod_{a \in V} f_{a,G}(x_a) and

\mathcal{X}_i := \{x : x_{e_i} = 1, \; x_{e_j} = 0, \; x_{e_k} \in \{0,1\}, \; \forall j, k \text{ such that } 1 \le j < i < k \le |E|\}.

Namely, we consider the GMs obtained by sequentially conditioning on x_{e_1}, \cdots, x_{e_i} in the gauge transformed GM. Next, recall that (8) maximizes and outputs the single-configuration term \prod_a f_{a,G}(0, 0, \cdots). Then, since \mathcal{X}_i \cap \mathcal{X}_j = \emptyset and \bigcup_{i=1}^{|E|} \mathcal{X}_i = \{0,1\}^E \setminus \{(0, 0, \cdots)\}, the error of (8) can be decomposed as follows:

Z - \prod_{a \in V} f_{a,G}(0, 0, \cdots) = \sum_{i=1}^{|E|} \sum_{x \in \mathcal{X}_i} \prod_{a \in V} f_{a,G}(x_a) = \sum_{i=1}^{|E|} Z_i.  (9)

Now, one can run G-MF, G-BP, or any other method (e.g., MF) again to obtain a lower bound \hat{Z}_i of Z_i for each i, and then output \prod_{a \in V} f_{a,G}(0, 0, \cdots) + \sum_{i=1}^{|E|} \hat{Z}_i. However, such additional runs of the optimization inevitably increase the overall complexity. Instead, one can also pick the single term \prod_a f_{a,G}(x^{(i)}_a), with x^{(i)} = [x_{e_i} = 1, x_{e_j} = 0, \forall j \ne i], from \mathcal{X}_i as the choice of \hat{Z}_i just after solving (8) once, and output

\prod_{a \in V} f_{a,G}(0, 0, \cdots) + \sum_{i=1}^{|E|} \prod_{a \in V} f_{a,G}(x^{(i)}_a), \quad x^{(i)} = [x_{e_i} = 1, \; x_{e_j} = 0, \; \forall j \ne i],  (10)

as a better lower bound on Z than \prod_{a \in V} f_{a,G}(0, 0, \cdots). This choice is based on the intuition that configurations only partially different from (0, 0, \cdots) may be significant too, as they share most of their factor values with the zero configuration maximized in (8). In fact, one can even include more configurations (partially different from (0, 0, \cdots)) at the cost of additional complexity, which is always

Figure 1: Averaged log-partition approximation error vs. interaction strength \beta for generic (non-log-supermodular) GMs on complete graphs of size 4, 5, and 6 (left, middle, right); the average is taken over 20 random models.
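The decomposition (9) and the cheap single-term correction (10) can be sanity-checked by enumeration on a small model. In the sketch below (our own toy construction, not from the paper), we take the gauges to be identity for simplicity, so f_{a,G} = f_a:

```python
import itertools
import math
import random

# Toy Forney-style GM on a triangle, with identity gauges (f_{a,G} = f_a).
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
nbr = {a: [e for e in E if a in e] for a in V}

random.seed(2)
f = {a: {xs: math.exp(random.uniform(-1, 1))
         for xs in itertools.product([0, 1], repeat=len(nbr[a]))}
     for a in V}

def weight(x):
    """prod_a f_a(x_a) for a full edge assignment x."""
    return math.prod(f[a][tuple(x[e] for e in nbr[a])] for a in V)

assignments = [dict(zip(E, vals))
               for vals in itertools.product([0, 1], repeat=len(E))]
Z = sum(weight(x) for x in assignments)
zero = {e: 0 for e in E}

def Z_i(i):
    """Partition function restricted to X_i: x_{e_i} = 1, x_{e_j} = 0 for
    j < i, and the remaining edges free."""
    return sum(weight(x) for x in assignments
               if x[E[i]] == 1 and all(x[E[j]] == 0 for j in range(i)))

# Decomposition (9): Z - prod_a f_a(0,...) = sum_i Z_i.
lhs = Z - weight(zero)
rhs = sum(Z_i(i) for i in range(len(E)))
assert abs(lhs - rhs) < 1e-9 * Z

# Single-term correction (10): keep one configuration x^{(i)} from each X_i.
bound = weight(zero)
for i in range(len(E)):
    x_i = dict(zero)
    x_i[E[i]] = 1
    bound += weight(x_i)
assert weight(zero) <= bound <= Z + 1e-12
```

The sets X_i partition all nonzero configurations by the position of the first edge set to 1, which is why (9) holds with equality; dropping all but one configuration from each X_i keeps every retained term non-negative, which is why (10) stays a valid lower bound.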
Figure 2: Averaged log-partition approximation error vs. interaction strength \beta for log-supermodular GMs on complete graphs of size 4, 5, and 6 (left, middle, right); the average is taken over 20 random models.

better, as it brings the approximation closer to the true partition function. In our experiments, we consider the additional configurations \{x : x_{e_i} = 1, x_{e_{i'}} = 1, x_{e_j} = 0, \forall j \ne i, i'\}, i.e., we output

\prod_{a \in V} f_{a,G}(0, 0, \cdots) + \sum_{i=1}^{|E|} \sum_{i'=i}^{|E|} \prod_{a \in V} f_{a,G}(x^{(i,i')}_a), \quad x^{(i,i')} = [x_{e_i} = 1, \; x_{e_{i'}} = 1, \; x_{e_j} = 0, \; \forall j \ne i, i'],  (11)

as a better lower bound on Z than (10).

4 Experimental results

We report results of our experiments with the G-MF and G-BP schemes introduced in Section 3. We also experiment with improved versions of G-BP that correct errors by accounting for single (10) and multiple (11) terms, as well as by applying G-BP (again) sequentially to each residual partition function Z_i. The error decreases, while the evaluation complexity increases, as we move from G-BP-single to G-BP-multiple and then to G-BP-sequential. To solve the proposed gauge optimizations, e.g., Step C of Algorithm 1, we use the generic optimization solver IPOPT [33]. Even though the gauge optimizations can be formulated as unconstrained problems, IPOPT runs faster on the original constrained versions in our experiments.² However, the unconstrained formulations have strong future potential for the development of fast gradient-descent algorithms. We generate random GMs with factors depending on 'interaction strength' parameters \{\beta_a\}_{a \in V} (akin to an inverse temperature):

f_a(x_a) = \exp(-\beta_a |h_0(x_a) - h_1(x_a)|),

where h_0 and h_1 count the numbers of 0s and 1s in x_a, respectively. Intuitively, we expect that as |\beta_a| increases, it becomes more difficult to approximate the partition function. See the supplementary material for additional information on how we generate the random models.
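The random factor family above is simple to generate. A minimal sketch (the sampling range for beta is our own guess; the paper's exact generation protocol is in its supplementary material):

```python
import itertools
import math
import random

def make_factor(beta, degree):
    """f_a(x_a) = exp(-beta * |h0(x_a) - h1(x_a)|), where h0 and h1 count
    the numbers of 0s and 1s in x_a."""
    out = {}
    for xs in itertools.product([0, 1], repeat=degree):
        h1 = sum(xs)
        h0 = degree - h1
        out[xs] = math.exp(-beta * abs(h0 - h1))
    return out

random.seed(3)
beta = random.uniform(0.0, 2.0)  # interaction strength (our own sampling choice)
fa = make_factor(beta, degree=3)

# At odd degree, |h0 - h1| >= 1, so every entry lies in (0, 1] for beta >= 0,
# and more unbalanced configurations get smaller weight.
assert all(0.0 < v <= 1.0 for v in fa.values())
assert fa[(1, 1, 0)] >= fa[(1, 1, 1)]  # |h0 - h1| = 1 vs. 3
```

Larger beta makes the factors more sharply peaked on balanced configurations, which is the intuition behind the claim that larger |beta_a| makes the partition function harder to approximate.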
In the first set of experiments, we consider relatively small complete graphs with two types of factors: random generic (non-log-supermodular) factors and log-supermodular (positive/ferromagnetic) factors. Recall that bare BP also provides a lower bound in the log-supermodular case [29], which makes the comparison between each proposed algorithm and BP informative. To quantify performance, we use the log-partition approximation error defined as |\log Z - \log Z_{LB}| / |\log Z|, where Z_{LB} is the algorithm output (a lower bound on Z).² In this first set of experiments, the graphs are small enough that explicit computation of Z (and hence of the approximation error) is feasible. The results for the small graphs are illustrated in Figure 1 and Figure 2 for the non-log-supermodular and log-supermodular cases, respectively. Figure 1 shows that, as expected, G-MF always outperforms MF. Moreover, we observe that G-MF typically provides the tightest lower bound, unless it is outperformed by G-BP-multiple or G-BP-sequential. We remark that BP is not shown in Figure 1 because, in the non-log-supermodular case, it does not in general provide a lower bound. According to Figure 2, showing the log-supermodular case, both G-MF and G-BP outperform MF, while G-BP-sequential outperforms all other algorithms.

²The running times of the implemented algorithms are reported in the supplementary material.

Figure 3: Averaged ratio of the log partition function compared to MF vs. graph size (i.e., number of factors) for generic (non-log-supermodular) GMs on 3-regular graphs (left) and grid graphs (right); the average is taken over 20 random models.

Figure 4: Averaged ratio of the log partition function compared to MF vs. interaction strength \beta for log-supermodular GMs on 3-regular graphs of size 200 (left) and grid graphs of size 100 (right); the average is taken over 20 random models.
Notice that G-BP performs rather similarly to BP in the log-supermodular case, suggesting that the constraints distinguishing (8) from (5) are only mildly violated. In the second set of experiments, we consider sparser, larger graphs of two types: 3-regular graphs and grid graphs, with up to 200 factors / 300 variables. As in the first set of experiments, the same non-log-supermodular and log-supermodular factors are considered. Since computing the exact approximation error is not feasible for the large graphs, we instead measure the ratio of the estimate produced by a proposed algorithm to that of MF, i.e., \log(Z_{LB}/Z_{MF}), where Z_{MF} is the output of MF. Note that a larger ratio indicates better performance. The results are reported in Figure 3 and Figure 4 for the non-log-supermodular and log-supermodular cases, respectively. In Figure 3, we observe that G-MF and G-BP-sequential outperform MF significantly, e.g., up to e^{14} times better on 3-regular graphs of size 200. We also observe that even bare G-BP outperforms MF. In Figure 4, the algorithms associated with G-BP outperform G-MF and MF (by up to e^{25} times). This is because the choice of q(x) for G-BP is favored by log-supermodular models: most of the probability mass is concentrated around (0, 0, \cdots), similar to the choice (7) of q(x) for G-BP. One observes (again) that the performance of G-BP in this log-supermodular case is almost on par with BP. This suggests that G-BP generalizes BP well: the former provides a lower bound on Z for any GM, while the latter does so only for log-supermodular GMs.

5 Conclusion

We explore the freedom in gauge transformations of GMs and develop novel variational inference methods which significantly improve partition-function estimation. We note that the GT methodology, applied here to improve MF and BP, can also be used to improve and extend the utility of other variational methods.
Acknowledgments

This work was supported in part by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-15-05-ETRI), by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework), and by the ICT R&D program of MSIP/IITP [2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion].

References

[1] Robert Gallager. Low-density parity-check codes. IRE Transactions on Information Theory, 8(1):21–28, 1962.

[2] Frank R. Kschischang and Brendan J. Frey. Iterative decoding of compound codes by probability propagation in graphical models. IEEE Journal on Selected Areas in Communications, 16(2):219–230, 1998.

[3] Hans A. Bethe. Statistical theory of superlattices. Proceedings of the Royal Society of London A, 150:552, 1935.

[4] Rudolf E. Peierls. Ising's model of ferromagnetism. Proceedings of the Cambridge Philosophical Society, 32:477–481, 1936.

[5] Marc Mézard, Giorgio Parisi, and M. A. Virasoro. Spin Glass Theory and Beyond. Singapore: World Scientific, 1987.

[6] Giorgio Parisi. Statistical field theory, 1988.

[7] Marc Mézard and Andrea Montanari. Information, Physics, and Computation. Oxford University Press, Inc., New York, NY, USA, 2009.

[8] Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, 2014.

[9] Michael Irwin Jordan. Learning in graphical models, volume 89. Springer Science & Business Media, 1998.

[10] William T. Freeman, Egon C. Pasztor, and Owen T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000.

[11] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.

[12] Ethem Alpaydin. Introduction to machine learning.
MIT Press, 2014.

[13] Judea Pearl. Reverend Bayes on inference engines: a distributed hierarchical approach. Cognitive Systems Laboratory, School of Engineering and Applied Science, University of California, Los Angeles, 1982.

[14] Qiang Liu and Alexander T. Ihler. Negative tree reweighted belief propagation. arXiv preprint arXiv:1203.3494, 2012.

[15] Stefano Ermon, Ashish Sabharwal, Bart Selman, and Carla P. Gomes. Density propagation and improved bounds on the partition function. In Advances in Neural Information Processing Systems, pages 2762–2770, 2012.

[16] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.

[17] Qiang Liu and Alexander T. Ihler. Bounding the partition function using Hölder's inequality. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 849–856, 2011.

[18] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-based reparametrization framework for approximate estimation on graphs with cycles. Information Theory, IEEE Transactions on, 49(5):1120–1146, 2003.

[19] Michael Chertkov and Vladimir Chernyak. Loop calculus in statistical physics and information science. Physical Review E, 73:065102(R), 2006.

[20] Michael Chertkov and Vladimir Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical Mechanics, page P06009, 2006.

[21] Leslie G. Valiant. Holographic algorithms. SIAM Journal on Computing, 37(5):1565–1594, 2008.

[22] Ali Al-Bashabsheh and Yongyi Mao. Normal factor graphs and holographic transformations. IEEE Transactions on Information Theory, 57(2):752–763, 2011.

[23] Martin J. Wainwright and Michael E. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1):1–305, 2008.

[24] G. David Forney Jr. and Pascal O. Vontobel. Partition functions of normal factor graphs.
arXiv preprint arXiv:1102.0316, 2011.

[25] Michael Chertkov. Lecture notes on "Statistical inference in structured graphical models: gauge transformations, belief propagation & beyond", 2016.

[26] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. Information Theory, IEEE Transactions on, 51(7):2282–2312, 2005.

[27] Vladimir Y. Chernyak and Michael Chertkov. Loop calculus and belief propagation for q-ary alphabet: loop tower. In Information Theory, 2007. ISIT 2007. IEEE International Symposium on, pages 316–320. IEEE, 2007.

[28] Ryuhei Mori. Holographic transformation, belief propagation and loop calculus for generalized probabilistic theories. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 1099–1103. IEEE, 2015.

[29] Nicholas Ruozzi. The Bethe partition function of log-supermodular graphical models. In Advances in Neural Information Processing Systems, pages 117–125, 2012.

[30] Adrian Weller, Kui Tang, Tony Jebara, and David Sontag. Understanding the Bethe approximation: when and how can it go wrong? In UAI, pages 868–877, 2014.

[31] Michael Chertkov, Vladimir Y. Chernyak, and Razvan Teodorescu. Belief propagation and loop series on planar graphs. Journal of Statistical Mechanics: Theory and Experiment, 2008(05):P05003, 2008.

[32] Sung-Soo Ahn, Michael Chertkov, and Jinwoo Shin. Synthesis of MCMC and belief propagation. In Advances in Neural Information Processing Systems, pages 1453–1461, 2016.

[33] Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.

[34] G. David Forney. Codes on graphs: normal realizations. IEEE Transactions on Information Theory, 47(2):520–548, 2001.

[35] Martin Wainwright and Michael Jordan. Graphical models, exponential families, and variational inference.
Technical Report 649, UC Berkeley, Department of Statistics, 2003.

[36] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. Advances in Neural Information Processing Systems, 13, 2001.

[37] Michael Chertkov and Vladimir Y. Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical Mechanics: Theory and Experiment, 2006(06):P06009, 2006.
Teaching Machines to Describe Images via Natural Language Feedback Huan Ling1, Sanja Fidler1,2 University of Toronto1, Vector Institute2 {linghuan,fidler}@cs.toronto.edu Abstract Robots will eventually be part of every household. It is thus critical to enable algorithms to learn from and be guided by non-expert users. In this paper, we bring a human in the loop, and enable a human teacher to give feedback to a learning agent in the form of natural language. We argue that a descriptive sentence can provide a much stronger learning signal than a numeric reward in that it can easily point to where the mistakes are and how to correct them. We focus on the problem of image captioning, in which the quality of the output can easily be judged by non-experts. In particular, we first train a captioning model on a subset of images paired with human-written captions. We then let the model describe new images and collect human feedback on the generated descriptions. We propose a hierarchical phrase-based captioning model, and design a feedback network that provides reward to the learner by conditioning on the human-provided feedback. We show that by exploiting descriptive feedback on new images our model learns to perform better than when given human-written captions on these images. 1 Introduction In the era where A.I. is slowly finding its way into everyone's lives, be it in the form of social bots [36, 2], personal assistants [24, 13, 32], or household robots [1], it becomes critical to allow non-expert users to teach and guide their robots [37, 18]. For example, if a household robot keeps bringing food served on an ashtray thinking it's a plate, one should ideally be able to educate the robot about its mistakes, possibly without needing to dig into the underlying software. Reinforcement learning has become a standard way of training artificial agents that interact with an environment.
There have been significant advances in a variety of domains such as games [31, 25], robotics [17], and even fields like vision and NLP [30, 19]. RL agents optimize their action policies so as to maximize the expected reward received from the environment. Training typically requires a large number of episodes, particularly in environments with large action spaces and sparse rewards. Several works have explored the idea of incorporating humans in the learning process, in order to help the reinforcement learning agent learn faster [35, 12, 11, 6, 5]. In most cases, a human teacher observes the agent act in an environment, and is allowed to give additional guidance to the learner. This feedback typically comes in the form of a simple numerical (or "good"/"bad") reward which is used to either shape the MDP reward [35] or directly shape the policy of the learner [5]. In this paper, we aim to exploit natural language as a way to guide an RL agent. We argue that a sentence provides a much stronger learning signal than a numeric reward in that it can easily point to where the mistakes occur and suggest how to correct them. Such descriptive feedback can thus naturally help solve the credit assignment problem as well as guide exploration. Despite its clear benefits, very few approaches have aimed at incorporating language into Reinforcement Learning. In pioneering work, [22] translated natural language advice into a short program which was used to bias action selection. While this is possible in limited domains such as navigating a maze [22] or learning to play a soccer game [15], it can hardly scale to real scenarios with large action spaces requiring versatile language feedback. Machine ( a cat ) ( sitting ) ( on a sidewalk ) ( next to a street . ) Human Teacher Feedback: There is a dog on a sidewalk, not a cat. Type of mistake: wrong object Select the mistake area: ( a cat ) ( sitting ) ( on a sidewalk ) ( next to a street .
) Correct the mistake: ( a dog ) ( sitting ) ( on a sidewalk ) ( next to a street . ) Figure 1: Our model accepts feedback from a human teacher in the form of natural language. We generate captions using the current snapshot of the model and collect feedback via AMT. The annotators are requested to focus their feedback on a single word/phrase at a time. Phrases, indicated with brackets in the example, are part of our captioning model's output. We also collect information about which word the feedback applies to and its suggested correction. This information is used to train our feedback network. Here our goal is to allow a non-expert human teacher to give feedback to an RL agent in the form of natural language, just as one would to a learning child. We focus on the problem of image captioning, in which the quality of the output can easily be judged by non-experts. Towards this goal, we make several contributions. We propose a hierarchical phrase-based RNN as our image captioning model, as it can be naturally integrated with human feedback. We design a web interface which allows us to collect natural language feedback from human "teachers" for a snapshot of our model, as in Fig. 1. We show how to incorporate this information in Policy Gradient RL [30], and show that we can improve over RL that has access to the same amount of ground-truth captions. Our code and data will be released (http://www.cs.toronto.edu/~linghuan/feedbackImageCaption/) to facilitate more human-like training of captioning models. 2 Related Work Several works incorporate human feedback to help an RL agent learn faster. [35] exploits humans in the loop to teach an agent to cook in a virtual kitchen. The users watch the agent learn and may intervene at any time to give a scalar reward. Reward shaping [26] is used to incorporate this information in the MDP.
[6] iterates between “practice”, during which the agent interacts with the real environment, and a critique session where a human labels any subset of the chosen actions as good or bad. In [12], the authors compare different ways of incorporating human feedback, including reward shaping, Q augmentation, action biasing, and control sharing. The same authors implement their TAMER framework on a real robotic platform [11]. [5] proposes policy shaping, which incorporates right/wrong feedback by utilizing it as direct policy labels. These approaches mostly assume that humans provide a numeric reward, unlike in our work where the feedback is given in natural language. A few attempts have been made to advise an RL agent using language. [22]’s pioneering work translated advice into a short program which was then implemented as a neural network. The units in this network represent Boolean concepts, which recognize whether the observed state satisfies the constraints given by the program. In such a case, the advice network will encourage the policy to take the suggested action. [15] incorporated natural language advice for a RoboCup simulated soccer task. They too translate the advice into a formal language, which is then used to bias action selection. Parallel to our work, [7] exploits textual advice to improve the training time of the A3C algorithm in playing an Atari game. Recently, [37, 18] incorporated human feedback to improve a text-based QA agent. Our work shares similar ideas, but applies them to the problem of image captioning. In [27], the authors incorporate human feedback in an active learning scenario, though not in an RL setting. Captioning represents a natural way of demonstrating to a non-expert observer that our algorithm understands a photograph. This domain has received significant attention [8, 39, 10], achieving impressive performance on standard benchmarks.
Our phrase model shares the most similarity with [16], but differs in that it exploits attention [39], linguistic information, and RL for training. Several recent approaches trained the captioning model with policy gradients in order to directly optimize for the desired performance metrics [21, 30, 3]. We follow this line of work. However, to the best of our knowledge, our work is the first to incorporate natural language feedback into a captioning model.

Figure 2: Our hierarchical phrase-based captioning model, composed of a phrase-RNN at the top level, and a word-level RNN which outputs a sequence of words for each phrase. The useful property of this model is that it directly produces an output sentence segmented into linguistic phrases. We exploit this information while collecting and incorporating human feedback into the model. Our model also exploits attention, and linguistic information (phrase labels such as noun, preposition, verb, and conjunction phrase). Please see text for details.

Related to our efforts is also work on dialogue-based visual representation learning [40, 41]; however, that work tackles a simpler scenario and employs a slightly more engineered approach. We stress that our work differs from the recent efforts in conversation modeling [19] or visual dialog [4] using Reinforcement Learning. Those models aim to mimic human-to-human conversations, while in our work the human converses with and guides an artificial learning agent.

3 Our Approach

Our framework consists of a new phrase-based captioning model trained with Policy Gradients that incorporates natural language feedback provided by a human teacher. While a number of captioning methods exist, we design our own which is phrase-based, allowing for natural guidance by a non-expert. In particular, we argue that the strongest learning signal is provided when the feedback describes one mistake at a time, e.g. a single wrong word or phrase in a caption. An example can be seen in Fig. 1.
This is also how one most effectively teaches a learning child. To avoid parsing the generated sentences at test time, we aim to predict phrases directly with our captioning model. We first describe our phrase-based captioner, then describe our feedback collection process, and finally propose how to exploit feedback as a guiding signal in policy gradient optimization.

3.1 Phrase-based Image Captioning

Our captioning model, forming the base of our approach, uses a hierarchical Recurrent Neural Network, similar to [34, 14]. In [14], the authors use a two-level LSTM to generate paragraphs, while [34] uses it to generate sentences as a sequence of phrases. The latter model shares a similar overall structure to ours; however, our model additionally reasons about the type of phrases and exploits the attention mechanism over the image.

The structure of our model is best explained through Fig. 2. The model receives an image as input and outputs a caption. It is composed of a phrase RNN at the top level, and a word RNN that generates a sequence of words for each phrase. One can think of the phrase RNN as providing a “topic” at each time step, which instructs the word RNN what to talk about. Following [39], we use a convolutional neural network in order to extract a set of feature vectors $a = (a_1, \dots, a_n)$, with $a_j$ a feature in location $j$ in the input image. We denote the hidden state of the phrase RNN at time step $t$ with $h_t$, and with $h_{t,i}$ the $i$-th hidden state of the word RNN for the $t$-th phrase. Computation in our model can be expressed with the following equations:

phrase-RNN:
$h_t = f_{\text{phrase}}(h_{t-1}, l_{t-1}, c_{t-1}, e_{t-1})$
$l_t = \text{softmax}(f_{\text{phrase-label}}(h_t))$
$c_t = f_{\text{att}}(h_t, l_t, a)$
$h_{t,0} = f_{\text{phrase-word}}(h_t, l_t, c_t)$

word-RNN:
$h_{t,i} = f_{\text{word}}(h_{t,i-1}, c_t, w_{t,i})$
$w_{t,i} = f_{\text{out}}(h_{t,i}, c_t, w_{t,i-1})$
$e_t = f_{\text{word-phrase}}(w_{t,1}, \dots, w_{t,\text{end}})$

The functions are chosen as follows: $f_{\text{phrase}}$: LSTM, dim 256; $f_{\text{phrase-label}}$: 3-layer MLP; $f_{\text{att}}$: 2-layer MLP with ReLU; $f_{\text{phrase-word}}$: 3-layer MLP with ReLU; $f_{\text{word}}$: LSTM, dim 256; $f_{\text{out}}$: deep output decoder [28]; $f_{\text{word-phrase}}$: mean pooling + 3-layer MLP with ReLU.

Table 1: Examples of collected feedback. Reference captions come from the MLE model.

Ref. caption: ( a woman ) ( is sitting ) ( on a bench ) ( with a plate ) ( of food . ) | Feedback: What the woman is sitting on is not visible. | Corr. caption: ( a woman ) ( is sitting ) ( with a plate ) ( of food . )
Ref. caption: ( a horse ) ( is standing ) ( in a barn ) ( in a field . ) | Feedback: There is no barn. There is a fence. | Corr. caption: ( a horse ) ( is standing ) ( in a fence ) ( in a field . )
Ref. caption: ( a man ) ( riding a motorcycle ) ( on a city street . ) | Feedback: There is a man and a woman. | Corr. caption: ( a man and a woman ) ( riding a motorcycle ) ( on a city street . )
Ref. caption: ( a man ) ( is swinging a baseball bat ) ( on a field . ) | Feedback: The baseball player is not swinging a bate. | Corr. caption: ( a man ) ( is playing baseball ) ( on a field . )

Table 2: Statistics for our collected feedback information (feedback round: number of correction rounds for the same example; description: the natural language feedback sentence).

Num. of evaluated examples (annot. round 1): 9000
Evaluated as containing errors: 5150
To ask for feedback (annot. round 2): 4174
Avg. num. of feedback rounds per image: 2.22
Avg. num. of words in feedback sent.: 8.04
Avg. num. of words needing correction: 1.52
Avg. num. of modified words: 1.46

How often the feedback sentences mention the words to be corrected and suggest the correction:
Something should be replaced: 2999 (mistake word is in description: 2664; correct word is in description: 2674)
Something is missing: 334 (missing word is in description: 246)
Something should be removed: 841 (removed word is in description: 779)

Figure 3: Caption quality evaluation by the human annotators (counts over the categories perfect, acceptable, grammar, minor_error, major_error).
Plot on the left shows evaluation for captions generated with our reference model (MLE). The right plot shows evaluation of the human-corrected captions (after completing at least one round of feedback).

As in [39], $c_t$ denotes a context vector obtained by applying the attention mechanism to the image. This context vector essentially represents the image area that the model “looks at” in order to generate the $t$-th phrase. This information is passed both to the word-RNN as well as to the next hidden state of the phrase-RNN. We found that computing two different context vectors, one passed to the phrase and one to the word RNN, improves generation by 0.6 points (in the weighted metric, see Table 4), mainly helping the model avoid repetition of words. Furthermore, we noticed that the quality of attention significantly improves (1.5 points, Table 4) if we provide it with additional linguistic information. In particular, at each time step $t$ our phrase RNN also predicts a phrase label $l_t$, following the standard definition from the Penn Tree Bank. For each phrase, we predict one out of four possible phrase labels, i.e., a noun (NP), preposition (PP), verb (VP), or conjunction phrase (CP). We use an additional <EOS> token to indicate the end of the sentence. By conditioning on the NP label, we help the model look at the objects in the image, while VP may focus on more global image information. Above, $w_{t,i}$ denotes the $i$-th word output of the word-RNN in the $t$-th phrase, encoded with a one-hot vector. Note that we use an additional <EOP> token in the word-RNN’s vocabulary, which signals the end-of-phrase. Further, $e_t$ encodes the generated phrase via simple mean-pooling over the words, which provides additional word-level context to the next phrase. Details about the choices of the functions are given above. Following [39], we use a deep output layer [28] in the LSTM and double stochastic attention.

Implementation details.
To train our hierarchical model, we first process the MS-COCO image caption data [20] using the Stanford Core NLP toolkit [23]. We flatten each parse tree, separate a sentence into parts, and label each part with a phrase label (<NP>, <PP>, <CP>, <VP>). To simplify the phrase structure, we merge an NP into the preceding phrase if that phrase is not itself an NP.

Pre-training. We pre-train our model using the standard cross-entropy loss. We use the ADAM optimizer [9] with learning rate 0.001. We discuss Policy Gradient optimization in Subsec. 3.4.

3.2 Crowd-sourcing Human Feedback

We aim to bring a human in the loop when training the captioning model. Towards this, we create a web interface that allows us to collect feedback information on a larger scale via AMT. Our interface is akin to that depicted in Fig. 1, and we provide further visualizations in the Appendix. We also provide it online on our project page.

Figure 4: The architecture of our feedback network (FBN) that classifies each phrase (bottom left) in a sampled sentence (top left) as either correct, wrong or not relevant, by conditioning on the feedback sentence.

In particular, we take a snapshot of our model and generate captions for a subset of MS-COCO images [20] using greedy decoding. In our experiments, we take the model trained with the MLE objective. We do two rounds of annotation. In the first round, the annotator is shown a captioned image and is asked to assess the quality of the caption by choosing between: perfect, acceptable, grammar mistakes only, minor or major errors. We asked the annotators to choose minor or major error if the caption contained errors in semantics, i.e., indicating that the “robot” is not understanding the photo correctly. We advised them to choose minor for small errors such as wrong or missing attributes or awkward prepositions, and to go with major errors whenever any object or action naming is wrong.
For the next (more detailed, and thus more costly) round of annotation, we only select captions which were not marked as either perfect or acceptable in the first round. Since these captions contain errors, the new annotator is required to provide detailed feedback about the mistakes. We found that some of the annotators did not find errors in some of these captions, pointing to annotator noise in the process. The annotator is shown the generated caption, with the different phrases delineated by the “(” and “)” tokens. We ask the annotator to: 1) choose the type of required correction, 2) write feedback in natural language, 3) mark the type of mistake, 4) highlight the word/phrase that contains the mistake, 5) correct the chosen word/phrase, and 6) evaluate the quality of the caption after correction. We allow the annotator to submit the HIT after one correction, even if her/his evaluation still points to errors. However, we appeal to the good will of the annotators to continue providing feedback. In the latter case, we reset the webpage and replace the generated caption with their current correction. The annotator first chooses the type of error, i.e., something “should be replaced”, “is missing”, or “should be deleted”. (S)he then writes a sentence providing feedback about the mistake and how it should be corrected. We require that the feedback is provided sequentially, describing a single mistake at a time. We enforce this by restricting the annotator to only select mistaken words within a single phrase (in step 4). In step 3), the annotator marks further details about the mistake, indicating whether it corresponds to an error in object, action, attribute, preposition, counting, or grammar. For steps 4) and 5), we let the annotator highlight the area of the mistake in the caption and replace it with a correction. The statistics of the data are provided in Table 2, with examples shown in Table 1.
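Concretely, one completed annotation round can be thought of as a structured record. The sketch below is our illustration, not the released data format: all field names are hypothetical, and the example values are taken from the horse/barn/fence row of Table 1.

```python
# A minimal sketch of one collected feedback record (hypothetical field names;
# values follow the horse/barn/fence example of Table 1).
record = {
    "caption": ["( a horse )", "( is standing )", "( in a barn )", "( in a field . )"],
    "error_type": "should be replaced",  # or "is missing" / "should be deleted" (step 1)
    "feedback": "There is no barn. There is a fence.",  # step 2
    "mistake_category": "object",        # object/action/attribute/preposition/counting/grammar (step 3)
    "marked_phrase": 2,                  # index of the highlighted phrase (step 4)
    "correction": "( in a fence )",      # replacement for the marked phrase (step 5)
    "quality_after": "acceptable",       # re-evaluation after correction (step 6)
}

def apply_correction(rec):
    """Replace the marked phrase with its correction, as the interface does
    between consecutive feedback rounds."""
    fixed = list(rec["caption"])
    fixed[rec["marked_phrase"]] = rec["correction"]
    return fixed

corrected = apply_correction(record)
```

Applying the correction yields `["( a horse )", "( is standing )", "( in a fence )", "( in a field . )"]`, matching the corrected caption in Table 1; this is the caption shown to the annotator in the next feedback round.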
An interesting fact is that the feedback sentences in most cases mention both the wrong word from the caption as well as the correction word. Fig. 3 (left) shows the evaluation of caption quality for the reference (MLE) model. Out of 9000 captions, 5150 are marked as containing errors (either semantic or grammar), and we randomly chose 4174 of these for the second round of annotation (detailed feedback). Fig. 3 (right) shows the quality of all the captions after correction, i.e. good reference captions as well as the 4174 corrected captions as submitted by the annotators. Note that we only paid for one round of feedback, thus some of the captions still contained errors even after correction. Interestingly, on average the annotators still did 2.2 rounds of feedback per image (Table 2).

3.3 Feedback Network

Our goal is to incorporate natural language feedback into the learning process. The collected feedback contains rich information about how the caption can be improved: it conveys the location of the mistake and typically suggests how to correct it, as seen in Table 2. This provides a strong supervisory signal which we want to exploit in our RL framework. In particular, we design a neural network which will provide an additional reward based on the feedback sentence. We refer to it as the feedback network (FBN). We first explain our feedback network, and then show how to integrate its output in RL.

Table 3: Example classification of each phrase in a newly sampled caption into correct/wrong/not relevant, conditioned on the feedback sentence “There is a dog on a sidewalk not a cat.” Notice that we do not need the image to judge the correctness/relevance of a phrase.

Sampled caption: A cat on a sidewalk. | Phrase: A cat | Prediction: wrong
Sampled caption: A dog on a sidewalk. | Phrase: A dog | Prediction: correct
Sampled caption: A cat on a sidewalk. | Phrase: on a sidewalk | Prediction: not relevant

Note that RL training will require us to generate samples (captions) from the model.
Thus, during training, the sampled captions for each training image will change (i.e., they will differ from the reference MLE caption for which we obtained feedback). The goal of the feedback network is to read a newly sampled caption and judge the correctness of each phrase conditioned on the feedback. We make our FBN depend only on text (and not on the image), making its learning task easier. In particular, our FBN performs the following computation:

$h^{\text{caption}}_t = f_{\text{sent}}(h^{\text{caption}}_{t-1}, w^c_t)$  (1)
$h^{\text{feedback}}_t = f_{\text{sent}}(h^{\text{feedback}}_{t-1}, w^f_t)$  (2)
$q_i = f_{\text{phrase}}(w^c_{i,1}, \dots, w^c_{i,N})$  (3)
$o_i = f_{\text{fbn}}(h^c_T, h^f_{T'}, q_i, m)$  (4)

where $f_{\text{sent}}$ is an LSTM of dim 256, $f_{\text{phrase}}$ is a linear layer followed by mean pooling, and $f_{\text{fbn}}$ is a 3-layer MLP with dropout followed by a 3-way softmax. Here, $w^c_t$ and $w^f_t$ denote the one-hot encoding of words in the sampled caption and feedback sentence, respectively. By $w^c_{i,\cdot}$ we denote the words in the $i$-th phrase of the sampled caption. FBN thus encodes both the caption and feedback using an LSTM (with shared parameters), performs mean pooling over the words in a phrase to represent the phrase $i$, and passes this information through a 3-layer MLP. The MLP additionally accepts information about the mistake type (e.g., wrong object/action) encoded as a one-hot vector $m$ (denoted as “extra information” in Fig. 4). The output layer of the MLP is a 3-way classification layer that predicts whether the phrase $i$ is correct, wrong, or not relevant (with respect to the feedback sentence). An example output from FBN is shown in Table 3.

Implementation details. We train our FBN with the ground-truth data that we collected. In particular, we use (reference caption, feedback, marked phrase in the reference caption) as an example of a wrong phrase, (corrected caption, feedback, marked phrase in the corrected caption) as an example of a correct phrase, and treat the remaining phrases as examples of the not relevant label.
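This label construction, together with the label-to-reward mapping used later in Sec. 3.4 (correct → 1, wrong → −1, not relevant → 0), can be sketched as follows. This is our illustrative reconstruction, not the authors' code: the function names are hypothetical, and the learned LSTM+MLP classifier is replaced by the ground-truth labeling rule used to build its training set.

```python
def fbn_training_examples(ref_phrases, marked_ref, corr_phrases, marked_corr, feedback):
    """Turn one feedback record into (phrase, feedback, label) training triples:
    the phrase marked in the reference caption is 'wrong', the marked phrase in
    the corrected caption is 'correct', and all other phrases are 'not relevant'."""
    triples = []
    for i, ph in enumerate(ref_phrases):
        triples.append((ph, feedback, "wrong" if i == marked_ref else "not relevant"))
    for i, ph in enumerate(corr_phrases):
        triples.append((ph, feedback, "correct" if i == marked_corr else "not relevant"))
    return triples

# FBN's 3-way decision is later converted into a phrase-level reward (Sec. 3.4).
LABEL_TO_REWARD = {"correct": 1.0, "wrong": -1.0, "not relevant": 0.0}

# The Table 3 example: "a cat" was marked wrong, "a dog" is its correction.
fb = "There is a dog on a sidewalk not a cat."
triples = fbn_training_examples(
    ["a cat", "on a sidewalk"], 0, ["a dog", "on a sidewalk"], 0, fb)
labels = [t[2] for t in triples]
```

For the Table 3 example this produces the labels wrong ("a cat"), not relevant ("on a sidewalk"), correct ("a dog"), not relevant ("on a sidewalk"), mirroring the table.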
Reference here means the generated caption for which we collected the feedback, and marked phrase means the phrase that the annotator highlighted in either the reference or the corrected caption. We use the standard cross-entropy loss to train our model, with ADAM [9], learning rate 0.001, and a batch size of 256. When a reference caption has several feedback sentences, we treat each one as an independent training example.

3.4 Policy Gradient Optimization using Natural Language Feedback

We follow [30, 29] to directly optimize for the desired image captioning metrics using the Policy Gradient technique. For completeness, we briefly summarize it here [30]. One can think of a caption decoder as an agent following a parameterized policy $p_\theta$ that selects an action at each time step. An “action” in our case requires choosing a word from the vocabulary (for the word RNN), or a phrase label (for the phrase RNN). The “agent” (our captioning model) then receives the reward after generating the full caption, i.e., the reward can be any of the automatic metrics, their weighted sum [30, 21], or, in our case, will also include the reward from feedback. The objective for learning the parameters of the model is the expected reward received when completing the caption $w^s = (w^s_1, \dots, w^s_T)$, where $w^s_t$ is the word sampled from the model at time step $t$:

$L(\theta) = -\mathbb{E}_{w^s \sim p_\theta}[r(w^s)]$  (5)

To optimize this objective, we follow the REINFORCE algorithm [38], as also used in [30, 29]. The gradient of (5) can be computed as

$\nabla_\theta L(\theta) = -\mathbb{E}_{w^s \sim p_\theta}[r(w^s)\, \nabla_\theta \log p_\theta(w^s)],$  (6)

which is typically estimated using a single Monte-Carlo sample:

$\nabla_\theta L(\theta) \approx -r(w^s)\, \nabla_\theta \log p_\theta(w^s)$  (7)

We follow [30] to define the baseline $b$ as the reward obtained by performing greedy decoding:

$b = r(\hat{w}), \quad \hat{w}_t = \arg\max_{w_t} p(w_t \mid h_t)$

$\nabla_\theta L(\theta) \approx -(r(w^s) - r(\hat{w}))\, \nabla_\theta \log p_\theta(w^s)$  (8)

Note that the baseline does not change the expected gradient, since $\mathbb{E}_{w^s \sim p_\theta}[b\, \nabla_\theta \log p_\theta(w^s)] = b\, \nabla_\theta \sum_{w^s} p_\theta(w^s) = b\, \nabla_\theta 1 = 0$, but it can drastically reduce its variance.
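On a toy categorical policy, the self-critical estimate in Eq. (8) can be sketched as below. This is a minimal illustration under our own assumptions (a fixed two-step policy over three words, and a simple unigram-overlap reward standing in for the metric-based reward), not the authors' implementation.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(logits_per_step, rng):
    """Sample a caption w^s and accumulate log p_theta(w^s)."""
    words, logp = [], 0.0
    for logits in logits_per_step:
        probs = softmax(logits)
        w = rng.choices(range(len(probs)), weights=probs)[0]
        words.append(w)
        logp += math.log(probs[w])
    return words, logp

def greedy(logits_per_step):
    """Greedy decoding, used for the self-critical baseline b = r(w_hat)."""
    return [max(range(len(l)), key=l.__getitem__) for l in logits_per_step]

def reward(words, ref):
    """Toy stand-in for the metric-based reward r(.): fraction of matching words."""
    return sum(w == r for w, r in zip(words, ref)) / len(ref)

rng = random.Random(0)
logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]  # fixed 2-step policy over 3 words
ref = [0, 1]                                  # toy reference caption

ws, logp = sample(logits, rng)
advantage = reward(ws, ref) - reward(greedy(logits), ref)  # r(w^s) - r(w_hat)
surrogate_loss = -advantage * logp  # its gradient matches Eq. (8)
```

Minimizing `surrogate_loss` with automatic differentiation reproduces the estimator $-(r(w^s) - r(\hat{w}))\nabla_\theta \log p_\theta(w^s)$: samples that do no better than the greedy decode receive a non-positive advantage and are pushed down rather than up.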
We define two different rewards, one at the sentence level (optimizing for the desired performance metric), and one at the phrase level. We use the human feedback information in both. We first define the sentence reward with respect to a reference caption as a weighted sum of the BLEU scores:

$r(w^s) = \beta \sum_i \lambda_i \cdot \text{BLEU}_i(w^s, \text{ref})$  (9)

In particular, we choose $\lambda_1 = \lambda_2 = 0.5$, $\lambda_3 = \lambda_4 = 1$, $\lambda_5 = 0.3$. As reference captions to compute the reward, we either use the reference captions generated by a snapshot of our model which were evaluated as not having minor or major errors, or ground-truth captions. The details are given in the experimental section. We weigh the reward by the caption quality as provided by the annotators: $\beta = 1$ for perfect (or GT), $0.8$ for acceptable, and $0.6$ for grammar/fluency issues only.

We further incorporate the reward provided by the feedback network. In particular, our FBN allows us to define the reward at the phrase level (thus helping with the credit assignment problem). Since our generated sentence is segmented into phrases, i.e., $w^s = w^p_1 w^p_2 \dots w^p_P$, where $w^p_t$ denotes the (sequence of words in the) $t$-th phrase, we define the combined phrase reward as:

$r(w^p_t) = r(w^s) + \lambda_f\, f_{\text{fbn}}(w^s, \text{feedback}, w^p_t)$  (10)

Note that FBN produces a classification of each phrase. We convert this into a reward by assigning 1 to correct, $-1$ to wrong, and 0 to not relevant. We do not weigh the reward by the confidence of the network, which might be worth exploring in the future. Our final gradient takes the following form:

$\nabla_\theta L(\theta) = -\sum_{p=1}^{P} (r(w^p) - r(\hat{w}^p))\, \nabla_\theta \log p_\theta(w^p)$  (11)

Implementation details. We use ADAM with learning rate $10^{-6}$ and batch size 50. As in [29], we follow an annealing schedule. We first optimize the cross-entropy loss for the first K epochs, then for the following t = 1, . . .
, T epochs, we use the cross-entropy loss for the first $P - \lfloor t/m \rfloor$ phrases (where $P$ denotes the number of phrases), and the policy gradient algorithm for the remaining $\lfloor t/m \rfloor$ phrases. We choose $m = 5$; e.g., for epochs $t = 5, \dots, 9$ only the last phrase is trained with the policy gradient, for $t = 10, \dots, 14$ the last two phrases, and so on. When a caption has multiple feedback sentences, we take the sum of the FBN’s outputs (converted to rewards) as the reward for each phrase. When a sentence does not have any feedback, we assign it a zero reward.

4 Experimental Results

To validate our approach we use the MS-COCO dataset [20]. We use 82K images for training, 2K for validation, and 4K for testing. In particular, we randomly chose the 2K val and 4K test images from the official validation split. To collect feedback, we randomly chose 7K images from the training set, as well as all 2K images from our validation set. In all experiments, we report performance on our (held-out) test set. For all the models (including baselines) we used a pre-trained VGG [33] network to extract image features. We use a word vocabulary size of 23,115.

Phrase-based captioning model. We analyze different instantiations of our phrase-based captioning model in Table 4, showing the importance of predicting phrase labels. To sanity-check our model, we compare it to a flat approach (word-RNN only) [39]. Overall, our model performs slightly worse than [39] (by 0.66 points). However, the main strength of our model is that it allows a more natural integration with feedback. Note that these results are reported for the models trained with MLE.

Feedback network. As reported in Table 2, our dataset with detailed feedback (descriptions) contains 4173 images. We randomly select 9/10 of them to serve as a training set for our feedback network, and use the remaining 1/10 as our test set. The classification performance of our FBN is reported in Table 5. We tried exploiting additional information in the network.
The second line reports the result for an FBN which also exploits the reference caption (for which the feedback was written) as input, represented with an LSTM. The model in the third line uses the type of error, i.e., whether the phrase is “missing”, “wrong”, or “redundant”. We found that using information about what kind of mistake the reference caption had (e.g., corresponding to misnaming an object, action, etc.) achieves the best performance. We use this model as our FBN in the following experiments.

Table 4: Comparing performance of the flat captioning model [39] and different instantiations of our phrase-based captioning model. All these models were trained using the cross-entropy loss. Columns: BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, Weighted metric.

flat (word level) with att: 65.36, 44.03, 29.68, 20.40, 51.04, 104.78
phrase with att.: 64.69, 43.37, 28.80, 19.31, 50.80, 102.14
phrase with att + phrase label: 65.46, 44.59, 29.36, 19.25, 51.40, 103.64
phrase with 2 att + phrase label: 65.37, 44.02, 29.51, 19.91, 50.90, 104.12

Table 5: Classification accuracy of our feedback network (FBN) on held-out feedback data. The FBN predicts correct/wrong/not relevant for each phrase in a caption. See text for details.

no extra information: 73.30
use reference caption: 73.24
use “missing”/“wrong”/“redundant”: 72.92
use “action”/“object”/“preposition”/etc.: 74.66

RL with Natural Language Feedback. In Table 6 we report the performance of several instantiations of the RL models. All models have been pre-trained using the cross-entropy loss (MLE) on the full MS-COCO training set. For the next rounds of training, all the models are trained only on the 9K images that comprise our full evaluation+feedback dataset from Table 2. In particular, we distinguish two cases. In the first, standard case, the “agent” has access to 5 captions for each image. We experiment with different types of captions, e.g. ground-truth captions (provided by MS-COCO), as well as feedback data.
For a fair comparison, we ensure that each model has access to (roughly) the same amount of data. This means that we count a feedback sentence as one source of information, and a human-corrected reference caption as yet another source. We also exploit reference (MLE) captions which were evaluated as correct, as well as corrected captions obtained from the annotators. In particular, we tried two types of experiments. We define “C” captions as all captions that were corrected by the annotators and were not evaluated as containing minor or major errors, plus ground-truth captions for the rest of the images. For “A”, we use all captions (including reference MLE captions) that did not have minor or major errors, and GT captions for the rest. A detailed break-down of these captions is reported in Table 7. We first test a model using the standard cross-entropy loss, but which now also has access to the corrected captions in addition to the 5 GT captions. This model (MLEC) is able to improve over the original MLE model by 1.4 points. We then test the RL model by optimizing the metric with respect to the 5 GT captions (as in [30]). This brings an additional point, achieving 2.4 over the MLE model. Our RL agent with feedback is given access to 3 GT captions, the “C” captions, and the feedback sentences. We show that this model outperforms the no-feedback baseline by 0.5 points. Interestingly, with the “A” captions we get an additional 0.3-point boost. If our RL agent has access to 4 GT captions and feedback descriptions, we achieve a total of 1.1 points over the baseline RL model and 3.5 over the MLE model. Examples of generated captions are shown in Fig. 6. We also conducted a human evaluation using AMT. In particular, Turkers are shown an image captioned by the baseline RL and by our method, and are asked to choose the better caption. As shown in Fig. 5, our RL with feedback is 4.7 percent higher than the RL baseline.
We additionally count how much human interaction is required for the baseline RL and for our approach. In particular, we count every interaction with the keyboard as one click. In evaluation, choosing the quality of the caption counts as 1 click, and for captions/feedback, every letter counts as a click. The main saving comes from the first evaluation round, in which we only ask for the quality of the captions. Overall, almost half the clicks are saved in our setting.

We also test a more realistic scenario, in which the models have access to either a single GT caption, or, in our case, a “C” (or “A”) caption and feedback. This mimics a scenario in which the human teacher observes the agent and either gives feedback about the agent’s mistakes or, if the agent’s caption is completely wrong, writes a new caption. Interestingly, RL performs better when provided with the corrected captions than when given GT captions. Overall, our model outperforms the base RL (no feedback) by 1.2 points. We note that our RL agents are trained (not counting pre-training) only on a small (9K) subset of the full MS-COCO training set. Further improvements are thus possible.

Discussion. These experiments make an important point. Instead of giving the RL agent a completely new target (caption), a better strategy is to “teach” the agent about the mistakes it is making and suggest a correction. Natural language thus offers itself as a rich modality for providing such guidance, not only to humans but also to artificial agents.

Table 6: Comparison of our RL with feedback information to the baseline RL and MLE models. Columns: BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, Weighted metric. GT: ground-truth captions; FB: feedback; MLEC (C/A): MLE model using five GT sentences + either C or A captions (see text and Table 7); RLB: baseline RL (no feedback network); RLF: RL with feedback (here we also use C or A captions as well as FBN).

5 sentences:
MLE (5 GT): 65.37, 44.02, 29.51, 19.91, 50.90, 104.12
MLEC (5 GT + C): 66.85, 45.19, 29.89, 19.79, 51.20, 105.58
MLEC (5 GT + A): 66.14, 44.87, 30.17, 20.27, 51.32, 105.47
RLB (5 GT): 66.90, 45.10, 30.10, 20.30, 51.10, 106.55
RLF (3 GT + FB + C): 66.52, 45.23, 30.48, 20.66, 51.41, 107.02
RLF (3 GT + FB + A): 66.98, 45.54, 30.52, 20.53, 51.54, 107.31
RLF (4 GT + FB): 67.10, 45.50, 30.60, 20.30, 51.30, 107.67

1 sentence:
RLB (1 GT): 65.68, 44.58, 29.81, 19.97, 51.07, 104.93
RLB (C): 65.84, 44.64, 30.01, 20.23, 51.06, 105.50
RLB (A): 65.81, 44.58, 29.87, 20.24, 51.28, 105.31
RLF (C + FB): 65.76, 44.65, 30.20, 20.62, 51.35, 106.03
RLF (A + FB): 66.23, 45.00, 30.15, 20.34, 51.58, 106.12

Table 7: Detailed break-down of which captions were used as “A” or “C” in Table 6 for computing additional rewards in RL. Columns: ground-truth, perfect, acceptable, grammar error only.

A: 3107, 2661, 2790, 442
C: 6326, 1502, 1502, 234

Figure 5: (a) Human preferences: RL baseline (47.7%) vs. RL with feedback, our approach (52.3%). (b) Number of human “clicks” required for MLE/baseline RL and for ours. A click is counted when an annotator hits the keyboard: in evaluation, choosing the quality of the caption counts as 1 click, and for captions/feedback, every letter counts as a click. The main saving comes from the first evaluation round, in which we only ask for the quality of the captions.

MLE: ( a man ) ( walking ) ( in front of a building ) ( with a cell phone . ) RLB: ( a man ) ( is standing ) ( on a sidewalk ) ( with a cell phone . ) RLF: ( a man ) ( wearing a black suit ) ( and tie ) ( on a sidewalk . )
MLE: ( two giraffes ) ( are standing ) ( in a field ) ( in a field . ) RLB: ( a giraffe ) ( is standing ) ( in front of a large building . ) RLF: ( a giraffe ) ( is ) ( in a green field ) ( in a zoo . )
MLE: ( a clock tower ) ( with a clock ) ( on top . ) RLB: ( a clock tower ) ( with a clock ) ( on top of it . ) RLF: ( a clock tower ) ( with a clock ) ( on the front .
)
MLE: ( two birds ) ( are standing ) ( on the beach ) ( on a beach . ) RLB: ( a group ) ( of birds ) ( are ) ( on the beach . ) RLF: ( two birds ) ( are standing ) ( on a beach ) ( in front of water . )

Figure 6: Qualitative examples of captions from the MLE and RLB models (baselines), and our RLF model.

5 Conclusion

In this paper, we enable a human teacher to provide feedback to the learning agent in the form of natural language. We focused on the problem of image captioning. We proposed a hierarchical phrase-based RNN as our captioning model, which allowed natural integration with human feedback. We crowd-sourced feedback for a snapshot of our model, and showed how to incorporate it in Policy Gradient optimization. We showed that by exploiting descriptive feedback our model learns to perform better than when given independently written captions.

Acknowledgment

We gratefully acknowledge the support from NVIDIA for their donation of the GPUs used for this research. This work was partially supported by NSERC. We also thank Relu Patrascu for infrastructure support.

References

[1] CMU’s HERB robotic platform, http://www.cmu.edu/herb-robot/.
[2] Microsoft’s Tay, https://twitter.com/tayandyou.
[3] Bo Dai, Dahua Lin, Raquel Urtasun, and Sanja Fidler. Towards diverse and natural image descriptions via a conditional GAN. In arXiv:1703.06029, 2017.
[4] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual dialog. In arXiv:1611.08669, 2016.
[5] Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, and Andrea Lockerd Thomaz. Policy shaping: Integrating human feedback with reinforcement learning. In NIPS, 2013.
[6] K. Judah, S. Roy, A. Fern, and T. Dietterich. Reinforcement learning via practice and critique advice. In AAAI, 2010.
[7] Russell Kaplan, Christopher Sauer, and Alexander Sosa. Beating Atari with natural language guided reinforcement learning. In arXiv:1704.05539, 2017.
[8] A. Karpathy and L. Fei-Fei.
Associative Embedding: End-to-End Learning for Joint Detection and Grouping

Alejandro Newell, Computer Science and Engineering, University of Michigan, Ann Arbor, MI, alnewell@umich.edu
Zhiao Huang*, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China, hza14@mails.tsinghua.edu.cn
Jia Deng, Computer Science and Engineering, University of Michigan, Ann Arbor, MI, jiadeng@umich.edu

Abstract

We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner, including multi-person pose estimation, instance segmentation, and multi-object tracking. The grouping of detections is usually achieved with multi-stage pipelines; instead, we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.

1 Introduction

Many computer vision tasks can be viewed in the context of detection and grouping: detecting smaller visual units and grouping them into larger structures. For example, in multi-person pose estimation we detect body joints and group them into individual people; in instance segmentation we detect pixels belonging to a semantic class and group them into object instances; in multi-object tracking we detect objects across video frames and group them into tracks. In all of these cases, the output is a variable number of visual units and their assignment into a variable number of visual groups. Such tasks are often approached with two-stage pipelines that perform detection first and grouping second.
But such approaches may be suboptimal because detection and grouping are tightly coupled: for example, in multiperson pose estimation, the same features used to recognize wrists or elbows in an image would also suggest whether a wrist and elbow belong to the same limb. In this paper we ask whether it is possible to jointly perform detection and grouping using a single-stage deep network trained end-to-end. We propose associative embedding, a novel method to express output for joint detection and grouping. The basic idea is to introduce, for each detection, a vector embedding that serves as a "tag" to identify its group assignment. All detections associated with the same tag value belong to the same group. Concretely, the network outputs a heatmap of per-pixel detection scores and a set of per-pixel embeddings. The detections and groups are decoded by extracting the corresponding embeddings from pixel locations with top detection scores.

* Work done while a visiting student at the University of Michigan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

To train a network to produce the correct tags, we use a loss function that encourages pairs of tags to have similar values if the corresponding detections belong to the same group or dissimilar values otherwise. It is important to note that we have no "ground truth" tags for the network to predict, because what matters is not the particular tag values, only the differences between them. The network has the freedom to decide on the tag values as long as they agree with the ground truth grouping. We apply our approach to multiperson pose estimation, an important task for understanding humans in images. Given an input image, multi-person pose estimation seeks to detect each person and localize their body joints. Unlike single-person pose estimation, there are no prior assumptions of a person's location or size.
Multi-person pose systems must scan the whole image, detecting all people and their corresponding keypoints. For this task, we integrate associative embedding with a stacked hourglass network [31], which produces a detection heatmap and a tagging heatmap for each body joint, and then group body joints with similar tags into individual people. Experiments demonstrate that our approach outperforms all recent methods and achieves state-of-the-art results on MS-COCO [27] and MPII Multiperson Pose [3]. Our contributions are twofold: (1) we introduce associative embedding, a new method for single-stage, end-to-end joint detection and grouping. This method is simple and generic; it works with any network architecture that produces pixel-wise predictions; (2) we apply associative embedding to multiperson pose estimation and achieve state-of-the-art results on two standard benchmarks.

2 Related Work

Vector Embeddings  Our method is related to many prior works that use vector embeddings. Works in image retrieval have used vector embeddings to measure similarity between images [12, 43]. Works in image classification, image captioning, and phrase localization have used vector embeddings to connect visual features and text features by mapping them to the same vector space [11, 14, 22]. Works in natural language processing have used vector embeddings to represent the meaning of words, sentences, and paragraphs [30, 24]. Our work differs from these prior works in that we use vector embeddings as identity tags in the context of joint detection and grouping.

Perceptual Organization  Work in perceptual organization aims to group the pixels of an image into regions, parts, and objects. Perceptual organization encompasses a wide range of tasks of varying complexity, from figure-ground segmentation [28] to hierarchical image parsing [15]. Prior works typically use a two-stage pipeline [29], detecting basic visual units (patches, superpixels, parts, etc.) first and grouping them second.
Common grouping approaches include spectral clustering [41, 36], conditional random fields (e.g. [23]), and generative probabilistic models (e.g. [15]). These grouping approaches all assume pre-detected basic visual units and pre-computed affinity measures between them but differ among themselves in the process of converting affinity measures into groups. In contrast, our approach performs detection and grouping in one stage using a generic network that includes no special design for grouping. It is worth noting a close connection between our approach and those using spectral clustering. Spectral clustering techniques (e.g. normalized cuts [36]) take as input pre-computed affinities (such as predicted by a deep network) between visual units and solve a generalized eigenproblem to produce embeddings (one per visual unit) that are similar for visual units with high affinity. Angular Embedding [28, 37] extends spectral clustering by embedding depth ordering as well as grouping. Our approach differs from spectral clustering in that we have no intermediate representation of affinities, nor do we solve any eigenproblems. Instead, our network directly outputs the final embeddings. Our approach is also related to the work by Harley et al. on learning dense convolutional embeddings [16], which trains a deep network to produce pixel-wise embeddings for the task of semantic segmentation. Our work differs from theirs in that our network produces not only pixel-wise embeddings but also pixel-wise detection scores. Our novelty lies in the integration of detection and grouping into a single network; to the best of our knowledge such an integration has not been attempted for multiperson human pose estimation.

Multiperson Pose Estimation  Recent methods have made great progress improving human pose estimation in images, in particular for single-person pose estimation [40, 38, 42, 31, 8, 5, 32, 4, 9, 13, 26, 18, 7, 39, 34].

Figure 1: We use the stacked hourglass architecture from Newell et al. [31]. The network performs repeated bottom-up, top-down inference, producing a series of intermediate predictions (marked in blue) until the last "hourglass" produces a final result (marked in green). Each box represents a 3x3 convolutional layer. Features are combined across scales by upsampling and performing elementwise addition. The same ground truth is enforced across all predictions made by the network.

For multiperson pose, prior and concurrent work can be categorized as either top-down or bottom-up. Top-down approaches [33, 17, 10] first detect individual people and then estimate each person's pose. Bottom-up approaches [35, 20, 21, 6] instead detect individual body joints and then group them into individuals. Our approach more closely resembles bottom-up approaches but differs in that there is no separation of a detection and grouping stage. The entire prediction is done at once in a single stage. This does away with the need for complicated post-processing steps required by other methods [6, 20].

3 Approach

To introduce associative embedding for joint detection and grouping, we first review the basic formulation of visual detection. Many visual tasks involve detection of a set of visual units. These tasks are typically formulated as scoring of a large set of candidates. For example, single-person human pose estimation can be formulated as scoring candidate body joint detections at all possible pixel locations. Object detection can be formulated as scoring candidate bounding boxes at various pixel locations, scales, and aspect ratios. The idea of associative embedding is to predict an embedding for each candidate in addition to the detection score. The embeddings serve as tags that encode grouping: detections with similar tags should be grouped together. In multiperson pose estimation, body joints with similar tags should be grouped to form a single person.
It is important to note that the absolute values of the tags do not matter, only the distances between tags. That is, a network is free to assign arbitrary values to the tags as long as the values are the same for detections belonging to the same group. To train a network to predict the tags, we enforce a loss that encourages similar tags for detections from the same group and different tags for detections across different groups. Specifically, this tagging loss is enforced on candidate detections that coincide with the ground truth. We compare pairs of detections and define a penalty based on the relative values of the tags and whether the detections should be from the same group.

3.1 Network Architecture

Our approach requires that a network produce dense output to define a detection score and vector embedding at each pixel of the input image. In this work we use the stacked hourglass architecture, a model used previously for single-person pose estimation [31]. Each "hourglass" is composed of a standard set of convolutional and pooling layers that process features down to a low resolution, capturing the full global context of the image. These features are upsampled and combined with outputs from higher resolutions until reaching a final output resolution. Stacking multiple hourglasses enables repeated bottom-up and top-down inference to produce a more accurate final prediction. Intermediate predictions are made by the network after each hourglass (Fig. 1). We refer the reader to [31] for more details of the network architecture. The stacked hourglass model was originally developed for single-person human pose estimation and designed to output a heatmap for each body joint of a target person. The pixel with the highest heatmap activation is used as the predicted location for that joint.
The network consolidates global and local features to capture information about the full structure of the body while preserving fine details for precise localization. This balance between global and local context is just as important when predicting poses of multiple people. We make some modifications to the network architecture to increase its capacity and accommodate the increased difficulty of multi-person pose estimation. We increase the number of features at each drop in resolution of the hourglass (256 → 384 → 512 → 640 → 768). In addition, individual layers are composed of 3x3 convolutions instead of residual modules. Residual links are still included across each hourglass as well as skip connections at each resolution.

Figure 2: An overview of our approach for producing multi-person pose estimates. For each joint of the body, the network simultaneously produces detection heatmaps and predicts associative embedding tags. We take the top detections for each joint and match them to other detections that share the same embedding tag to produce a final set of individual pose predictions.

3.2 Detection and Grouping

For multiperson pose estimation, we train the network to detect joints in a similar manner to prior work on single-person pose estimation [31]. The model predicts a detection score at each pixel location for each body joint ("left wrist", "right shoulder", etc.) regardless of person identity. The difference from single-person pose is that an ideal heatmap for multiple people should have multiple peaks (e.g. to identify multiple left wrists belonging to different people), as opposed to just a single peak for a single target person. During training, we impose a detection loss on the output heatmaps. The detection loss computes mean square error between each predicted detection heatmap and its "ground truth" heatmap, which consists of a 2D Gaussian activation at each keypoint location. This loss is the same as the one used by Newell et al. [31].
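As a concrete sketch of the detection target and loss just described, the following builds a 2D Gaussian "ground truth" heatmap around a keypoint and scores a prediction with mean squared error. The function names, sizes, and σ value here are illustrative choices of ours, not the paper's exact settings:

```python
import numpy as np

def gaussian_heatmap(height, width, cx, cy, sigma=2.0):
    """Target heatmap: a 2D Gaussian activation centered on a keypoint."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def detection_loss(pred, target):
    """Mean squared error between predicted and target heatmaps."""
    return np.mean((pred - target) ** 2)

target = gaussian_heatmap(64, 64, cx=20, cy=30)
print(target[30, 20])  # 1.0: the peak sits exactly on the keypoint
```

In the multi-person setting, the target heatmap for a joint would be the combination of one such Gaussian per visible person, so an ideal prediction has multiple peaks.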
Given the top activating detections from these heatmaps we need to pull together all joints that belong to the same individual. For this, we turn to the associative embeddings. For each joint of the body, the network produces additional channels to define an embedding vector at every pixel. Note that the dimension of the embeddings is not critical. If a network can successfully predict high-dimensional embeddings to separate the detections into groups, it should also be able to learn to project those high-dimensional embeddings to lower dimensions, as long as there is enough network capacity. In practice we have found that a 1D embedding is sufficient for multiperson pose estimation, and higher dimensions do not lead to significant improvement. Thus throughout this paper we assume 1D embeddings. We think of these 1D embeddings as "tags" indicating which person a detected joint belongs to. Each detection heatmap has its own corresponding tag heatmap, so if there are m body joints to predict then the network will output a total of 2m channels: m for detection and m for grouping. To parse detections into individual people, we get the peak detections for each joint and retrieve their corresponding tags at the same pixel location (illustrated in Fig. 2). We then group detections across body parts by comparing the tag values of detections and matching up those that are close enough. A group of detections now forms the pose estimate for a single person.

Figure 3: Tags produced by our network on a held-out validation image from the MS-COCO training set. The tag values are already well separated and decoding the groups is straightforward.

The grouping loss assesses how well the predicted tags agree with the ground truth grouping. Specifically, we retrieve the predicted tags for all body joints of all people at their ground truth locations; we then compare the tags within each person and across people.
Tags within a person should be the same, while tags across people should be different. Rather than enforce the loss across all possible pairs of keypoints, we produce a reference embedding for each person. This is done by taking the mean of the output embeddings of all joints belonging to a single person. Within an individual, we compute the squared distance between the reference embedding and the predicted embedding for each joint. Then, between pairs of people, we compare their reference embeddings to each other with a penalty that drops exponentially to zero as the distance between the two tags increases.

Formally, let $h_k \in \mathbb{R}^{W \times H}$ be the predicted tagging heatmap for the $k$-th body joint, where $h_k(x)$ is the tag value at pixel location $x$. Given $N$ people, let the ground truth body joint locations be $T = \{(x_{nk})\}$, $n = 1, \dots, N$, $k = 1, \dots, K$, where $x_{nk}$ is the ground truth pixel location of the $k$-th body joint of the $n$-th person. Assuming all $K$ joints are annotated, the reference embedding for the $n$-th person would be

$$\bar{h}_n = \frac{1}{K} \sum_k h_k(x_{nk}).$$

The grouping loss $L_g$ is then defined as

$$L_g(h, T) = \frac{1}{NK} \sum_n \sum_k \left( \bar{h}_n - h_k(x_{nk}) \right)^2 + \frac{1}{N^2} \sum_n \sum_{n'} \exp\left\{ -\frac{1}{2\sigma^2} \left( \bar{h}_n - \bar{h}_{n'} \right)^2 \right\}.$$

The first half of the loss pulls together all of the embeddings belonging to an individual, and the second half pushes apart embeddings across people. We use a $\sigma$ value of 1 in our training.

3.3 Parsing Network Output

Once the network has been trained, decoding is straightforward. We perform non-maximum suppression on the detection heatmaps and threshold to get a set of detections for each body joint. Then, for each detection we retrieve its corresponding associative embedding tag. To give an impression of the types of tags produced by the network and the trivial nature of grouping we refer to Figure 3; we plot a set of detections where the y-axis indicates the class of body joint and the x-axis the assigned embedding. To produce a final set of predictions we iterate through each joint one by one.
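As a concrete reference, the grouping loss above can be written in a few lines of NumPy. This is an illustrative sketch with 1D tags (the function and argument names are ours); note that the push term, as written, also sums over the constant $n = n'$ diagonal:

```python
import numpy as np

def grouping_loss(tags, sigma=1.0):
    """Associative embedding grouping loss for 1D tags.

    tags: array of shape (N, K); tags[n, k] is the predicted tag for
    joint k of person n, read off at its ground-truth location.
    """
    ref = tags.mean(axis=1)                        # reference embedding per person
    # pull: squared distance of each joint's tag to its person's reference
    pull = np.mean((tags - ref[:, None]) ** 2)
    # push: penalty that decays exponentially as references move apart
    diff = ref[:, None] - ref[None, :]
    push = np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2)))
    return pull + push
```

With two people whose joints carry tags 0 and 5 respectively, the pull term vanishes and the push term is dominated by the constant diagonal, so the loss is close to 0.5; tags that collapse onto the same value for both people yield a strictly larger loss.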
An ordering is determined by first considering joints around the head and torso and gradually moving out to the limbs. We use the detections from the first joint (the neck, for example) to form our initial pool of detected people. Then, given the next joint, say the left shoulder, we have to figure out how to best match its detections to the current pool of people. Each detection is defined by its score and embedding tag, and each person is defined by the mean embedding of their current joints.

Figure 4: Qualitative results on MS-COCO validation images.

We compare the distance between these embeddings, and for each person we greedily assign a new joint based on the detection with the highest score whose embedding falls within some distance threshold. New detections that are not matched are used to start a new person instance. This accounts for cases where perhaps only a leg or hand is visible for a particular person. We repeat this process for each joint of the body until every detection has been assigned to a person. No steps are taken to ensure anatomical correctness or reasonable spatial relationships between pairs of joints.

Missing joints: In some evaluation settings we may need to ensure that each person has a prediction for all joints, but our parsing does not guarantee this. Missing joints are usually fine, as in cases with truncation and extreme occlusion, but when it is necessary to produce complete predictions we introduce an additional processing step: given a missing joint, we identify all pixels whose embedding falls close enough to the target person, and choose the pixel location with the highest activation. This score may be lower than our usual cutoff threshold for detections.

Multiscale Evaluation: While it is feasible to train a network to predict poses for people of all scales, there are some drawbacks.
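The greedy joint-by-joint matching described above can be sketched as follows. This is a simplified toy version with 1D tags and a fixed distance threshold (names and values are ours), omitting the score-based refinements of the actual decoder:

```python
import numpy as np

def parse_poses(joint_dets, threshold=1.0):
    """Greedy joint-by-joint grouping (a simplified sketch of the decoder).

    joint_dets: list over body joints, in matching order (e.g. neck
    first); each entry is a list of (score, tag) detections.
    Returns people as lists of (joint_index, score, tag) tuples.
    """
    people = []
    for j, dets in enumerate(joint_dets):
        # consider higher-scoring detections first
        for score, tag in sorted(dets, key=lambda d: -d[0]):
            match = None
            for person in people:
                mean_tag = np.mean([t for _, _, t in person])
                has_joint = any(ji == j for ji, _, _ in person)
                if not has_joint and abs(tag - mean_tag) < threshold:
                    match = person
                    break
            if match is not None:
                match.append((j, score, tag))
            else:
                people.append([(j, score, tag)])  # start a new person instance

    return people

# two people: neck tags 0.0 / 5.0, shoulder tags 0.1 / 5.1
dets = [[(0.9, 0.0), (0.8, 5.0)], [(0.7, 0.1), (0.6, 5.1)]]
print(len(parse_poses(dets)))  # 2
```

Unmatched detections start new person instances, mirroring the behavior described above for partially visible people.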
Extra capacity is required of the network to learn the necessary scale invariance, and the precision of predictions for small people will suffer due to issues of low resolution after pooling. To account for this, we evaluate images at test time at multiple scales. We take the heatmaps produced at each scale and resize and average them together. Then, to combine tags across scales, we concatenate the set of tags at a pixel location into a vector $v \in \mathbb{R}^m$ (assuming $m$ scales). The decoding process remains unchanged.

4 Experiments

Datasets  We evaluate on two datasets: MS-COCO [27] and MPII Human Pose [3]. MPII Human Pose consists of about 25k images and contains around 40k total annotated people (three-quarters of which are available for training). Evaluation is performed on MPII Multi-Person, a set of 1758 groups of multiple people taken from the test set as outlined in [35]. The groups for MPII Multi-Person are usually a subset of the total people in a particular image, so some information is provided to make sure predictions are made on the correct targets. This includes a general bounding box and scale term used to indicate the occupied region. No information is provided on the number of people or the scales of individual figures. We use the evaluation metric outlined by Pishchulin et al. [35], calculating average precision of joint detections.

MS-COCO [27] consists of around 60K training images with more than 100K people with annotated keypoints. We report performance on two test sets, a development test set (test-dev) and a standard test set (test-std). We use the official evaluation metric that reports average precision (AP) and average recall (AR) in a manner similar to object detection, except that a score based on keypoint distance is used instead of bounding box overlap. We refer the reader to the MS-COCO website for details [1].

                                    Head   Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Total
Iqbal & Gall, ECCV16 [21]           58.4   53.9      44.5   35.0   42.2  36.7  31.1   43.1
Insafutdinov et al., ECCV16 [20]    78.4   72.5      60.2   51.0   57.2  52.0  45.4   59.5
Insafutdinov et al., arXiv16a [35]  89.4   84.5      70.4   59.3   68.9  62.7  54.6   70.0
Levinkov et al., CVPR17 [25]        89.8   85.2      71.8   59.6   71.1  63.0  53.5   70.6
Insafutdinov et al., CVPR17 [19]    88.8   87.0      75.9   64.9   74.2  68.8  60.5   74.3
Cao et al., CVPR17 [6]              91.2   87.6      77.7   66.8   75.4  68.9  61.7   75.6
Fang et al., ICCV17 [10]            88.4   86.5      78.6   70.4   74.4  73.0  65.8   76.7
Our method                          92.1   89.3      78.9   69.8   76.2  71.6  64.7   77.5
Table 1: Results (AP) on MPII Multi-Person.

              AP     AP50   AP75   APM    APL    AR     AR50   AR75   ARM    ARL
CMU-Pose [6]  0.611  0.844  0.667  0.558  0.684  0.665  0.872  0.718  0.602  0.749
G-RMI [33]    0.643  0.846  0.704  0.614  0.696  0.698  0.885  0.755  0.644  0.771
Our method    0.663  0.865  0.727  0.613  0.732  0.715  0.897  0.772  0.662  0.787
Table 2: Results on MS-COCO test-std, excluding systems trained with external data.

                AP     AP50   AP75   APM    APL    AR     AR50   AR75   ARM    ARL
CMU-Pose [6]    0.618  0.849  0.675  0.571  0.682  0.665  0.872  0.718  0.606  0.746
Mask-RCNN [17]  0.627  0.870  0.684  0.574  0.711  –      –      –      –      –
G-RMI [33]      0.649  0.855  0.713  0.623  0.700  0.697  0.887  0.755  0.644  0.771
Our method      0.655  0.868  0.723  0.606  0.726  0.702  0.895  0.760  0.646  0.781
Table 3: Results on MS-COCO test-dev, excluding systems trained with external data.

Implementation Details  The network used for this task consists of four stacked hourglass modules, with an input size of 512 × 512 and an output resolution of 128 × 128. We train the network using a batch size of 32 with a learning rate of 2e-4 (dropped to 1e-5 after about 150k iterations) using TensorFlow [2]. The associative embedding loss is weighted by a factor of 1e-3 relative to the MSE loss of the detection heatmaps. The loss is masked to ignore crowds with sparse annotations.
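Two of the implementation points above, masking the detection loss to ignore sparsely annotated crowd regions and down-weighting the embedding loss by 1e-3 relative to the heatmap MSE, amount to a masked MSE plus a weighted sum. A minimal sketch (the helper names are ours):

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Heatmap MSE that ignores masked-out regions (e.g. crowd areas).

    mask: 1.0 where the loss applies, 0.0 where it is ignored.
    """
    sq_err = (pred - target) ** 2 * mask
    return sq_err.sum() / max(mask.sum(), 1.0)

def total_loss(det_loss, tag_loss, tag_weight=1e-3):
    """Detection loss plus down-weighted associative embedding loss."""
    return det_loss + tag_weight * tag_loss

pred, target = np.ones((4, 4)), np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2] = 1.0  # only the top half contributes to the loss
print(masked_mse(pred, target, mask))  # 1.0
```

With the 1e-3 weighting, even a grouping loss an order of magnitude larger than the detection loss contributes only a small fraction of the total, matching the weighting stated above.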
At test time an input image is run at multiple scales; the output detection heatmaps are averaged across scales, and the tags across scales are concatenated into higher-dimensional tags. Following prior work [6], we apply a single-person pose model [31] trained on the same dataset to investigate further refinement of predictions. We run each detected person through the single-person model, and average the output with the predictions from our multiperson pose model. Table 5 shows that the benefit of this refinement is most pronounced in the single-scale setting on small figures. This suggests output resolution is a limit on performance at a single scale. Using our method for evaluation at multiple scales, the benefits of single-person refinement are almost entirely mitigated, as illustrated in Tables 4 and 5.

MPII Results  Average precision results can be seen in Table 1, demonstrating an improvement over state-of-the-art methods in overall AP. Associative embedding proves to be an effective method for teaching the network to group keypoint detections into individual people. It requires no assumptions about the number of people present in the image, and also offers a mechanism for the network to express confusion of joint assignments. For example, if the same joint of two people overlaps at the exact same pixel location, the predicted associative embedding will be a tag somewhere between the respective tags of each person. We can get a better sense of the associative embedding output with visualizations of the embedding heatmap (Figure 5). We put particular focus on the difference in the predicted embeddings when people overlap heavily, as the severe occlusion and close spacing of detected joints make it much more difficult to parse out the poses of individual people.

Figure 5: Here we visualize the associative embedding channels for different joints. The change in embedding predictions across joints is particularly apparent in these examples where there is significant overlap of the two target figures.

                      Head  Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Total
multi scale           92.9  90.9      81.0   71.0   79.3  70.6  63.4   78.5
multi scale + refine  93.1  90.3      81.9   72.1   80.2  72.0  67.8   79.6
Table 4: Effect of single person refinement on a held-out validation set on MPII.

                       AP     AP50   AP75   APM    APL
single scale           0.566  0.818  0.618  0.498  0.670
single scale + refine  0.628  0.846  0.692  0.575  0.706
multi scale            0.650  0.867  0.713  0.597  0.725
multi scale + refine   0.655  0.868  0.723  0.606  0.726
Table 5: Effect of multi-scale evaluation and single person refinement on MS-COCO test-dev.

MS-COCO Results  Tables 2 and 3 report our results on MS-COCO. We report results on both test-std and test-dev because not all recent methods report on test-std. We see that on both sets we achieve state-of-the-art performance. An illustration of the network's predictions can be seen in Figure 4. Typical failure cases of the network stem from overlapping and occluded joints in cluttered scenes. Table 5 reports performance of ablated versions of our full pipeline, showing the contributions from applying our model at multiple scales and from further refinement using a single-person pose estimator. We see that simply applying our network at multiple scales already achieves competitive performance against prior state-of-the-art methods, demonstrating the effectiveness of our end-to-end joint detection and grouping. We perform an additional experiment on MS-COCO to gauge the relative difficulty of detection versus grouping, that is, which part is the main bottleneck of our system. We evaluate our system on a held-out set of 500 training images. In this evaluation, we replace the predicted detections with the ground truth detections but still use the predicted tags. Using the ground truth detections improves AP from 59.2 to 94.0.
This shows that keypoint detection is the main bottleneck of our system, whereas the network has learned to produce high-quality groupings. This is also supported by qualitative inspection of the predicted tag values, as shown in Figure 3: the tags are well separated, and decoding the grouping is straightforward. 5 Conclusion In this work we introduce associative embeddings to supervise a convolutional neural network such that it can simultaneously generate and group detections. We apply this method to multi-person pose estimation and demonstrate that it can be trained to achieve state-of-the-art performance. Our method is general enough to apply to other vision problems as well, for example instance segmentation and multi-object tracking in video. The associative embedding loss can be implemented given any network that produces pixelwise predictions, so it can be easily integrated with other state-of-the-art architectures. 6 Acknowledgements This work is partially supported by the National Science Foundation under Grant No. 1734266. ZH is partially supported by the Institute for Interdisciplinary Information Sciences, Tsinghua University. References [1] COCO: Common Objects in Context. http://mscoco.org/home/. [2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[3] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 3686–3693. IEEE, 2014. [4] Vasileios Belagiannis and Andrew Zisserman. Recurrent human pose estimation. In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pages 468–475. IEEE, 2017. [5] Adrian Bulat and Georgios Tzimiropoulos. Human pose estimation via convolutional part heatmap regression. In ECCV, 2016. [6] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, 2017. [7] Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, and Jitendra Malik. Human pose estimation with iterative error feedback. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4733–4742, 2016. [8] Xiao Chu, Wei Yang, Wanli Ouyang, Cheng Ma, Alan L Yuille, and Xiaogang Wang. Multi-context attention for human pose estimation. 2017. [9] Xiaochuan Fan, Kang Zheng, Yuewei Lin, and Song Wang. Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1347–1355, 2015. [10] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017. [11] Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121–2129, 2013. [12] Andrea Frome, Yoram Singer, Fei Sha, and Jitendra Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. 
In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007. [13] Georgia Gkioxari, Alexander Toshev, and Navdeep Jaitly. Chained predictions using convolutional neural networks. In European Conference on Computer Vision, pages 728–743. Springer, 2016. [14] Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In European Conference on Computer Vision, pages 529–545. Springer, 2014. [15] Feng Han and Song-Chun Zhu. Bottom-up/top-down image parsing with attribute grammar. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1):59–73, 2009. [16] Adam W Harley, Konstantinos G Derpanis, and Iasonas Kokkinos. Learning dense convolutional embeddings for semantic segmentation. In International Conference on Learning Representations (Workshop), 2016. [17] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. 2017. [18] Peiyun Hu and Deva Ramanan. Bottom-up and top-down reasoning with hierarchical rectified gaussians. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5600–5609, 2016. [19] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Bjoern Andres, and Bernt Schiele. Articulated multi-person tracking in the wild. arXiv preprint arXiv:1612.01465, 2016. [20] Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In European Conference on Computer Vision (ECCV), May 2016. [21] Umar Iqbal and Juergen Gall. Multi-person pose estimation with local joint-to-person associations. arXiv preprint arXiv:1608.08526, 2016. [22] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[23] Vladlen Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. Adv. Neural Inf. Process. Syst, 2011. [24] Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188–1196, 2014. [25] Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, and Bjoern Andres. Joint graph decomposition & node labeling: Problem, algorithms, applications. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [26] Ita Lifshitz, Ethan Fetaya, and Shimon Ullman. Human pose estimation using deep consensus voting. In European Conference on Computer Vision, pages 246–260. Springer, 2016. [27] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014. [28] Michael Maire. Simultaneous segmentation and figure/ground organization using angular embedding. In European Conference on Computer Vision, pages 450–464. Springer, 2010. [29] Michael Maire, X Yu Stella, and Pietro Perona. Object detection and segmentation from joint embedding of parts and pixels. In 2011 International Conference on Computer Vision, pages 2142–2149. IEEE, 2011. [30] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013. [31] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. ECCV, 2016. [32] Guanghan Ning, Zhi Zhang, and Zhihai He. Knowledge-guided deep fractal neural networks for human pose estimation. arXiv preprint arXiv:1705.02407, 2017. 
[33] George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, and Kevin Murphy. Towards accurate multi-person pose estimation in the wild. arXiv preprint arXiv:1701.01779, 2017. [34] Leonid Pishchulin, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele. Poselet conditioned pictorial structures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 588–595, 2013. [35] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. [36] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [37] X Yu Stella. Angular embedding: from jarring intensity differences to perceived luminance. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 2302–2309. IEEE, 2009. [38] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 648–656, 2015. [39] Jonathan J Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in neural information processing systems, pages 1799–1807, 2014. [40] Alexander Toshev and Christian Szegedy. Deeppose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653–1660. IEEE, 2014. [41] Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007. [42] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines.
In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016. [43] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in neural information processing systems, pages 1473–1480, 2005.
Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications Linus Hamilton∗ Frederic Koehler† Ankur Moitra‡ Abstract Markov random fields are a popular model for high-dimensional probability distributions. Over the years, many mathematical, statistical and algorithmic problems on them have been studied. Until recently, the only known algorithms for provably learning them relied on exhaustive search, correlation decay or various incoherence assumptions. Bresler [4] gave an algorithm for learning general Ising models on bounded degree graphs. His approach was based on a structural result about mutual information in Ising models. Here we take a more conceptual approach to proving lower bounds on the mutual information. Our proof generalizes well beyond Ising models, to arbitrary Markov random fields with higher order interactions. As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on n nodes with r-order interactions in n^r time and log n sample complexity. Our algorithms also extend to various partial observation models. 1 Introduction 1.1 Background Markov random fields are a popular model for defining high-dimensional distributions by using a graph to encode conditional dependencies among a collection of random variables. More precisely, the distribution is described by an undirected graph G = (V, E), where to each of the n nodes u ∈ V we associate a random variable X_u which takes on one of k_u different states. The crucial property is that the conditional distribution of X_u should depend only on the states of u's neighbors. It turns out that, as long as every configuration has positive probability, the distribution can be written as

    Pr(a_1, ..., a_n) = exp( Σ_{ℓ=1}^{r} Σ_{i_1 < i_2 < ··· < i_ℓ} θ_{i_1···i_ℓ}(a_{i_1}, ..., a_{i_ℓ}) − C )    (1)

Here θ_{i_1···i_ℓ} : [k_{i_1}] × ··· × [k_{i_ℓ}] → R is a function that takes as input the configuration of states on the nodes i_1, i_2, ..., i_ℓ and is assumed to be zero on non-cliques.
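As a concrete illustration of (1) (a toy example of ours, not from the paper), the following sketch evaluates such a distribution for a three-node binary model with hypothetical singleton and pairwise clique potentials, computing the normalizing constant C by brute-force enumeration:

```python
import itertools
import math

# Hypothetical clique potentials for a 3-node model with binary states {0, 1}:
# singleton potentials theta_i and one pairwise potential theta_{01}.
theta_single = {0: [0.0, 0.5], 1: [0.0, -0.3], 2: [0.0, 0.1]}
theta_pair = {(0, 1): [[0.8, -0.8], [-0.8, 0.8]]}

def log_weight(a):
    """Sum of clique potentials for configuration a: the exponent in (1), before -C."""
    s = sum(theta_single[i][a[i]] for i in theta_single)
    s += sum(t[a[i]][a[j]] for (i, j), t in theta_pair.items())
    return s

# Log-partition function C, via exhaustive enumeration (feasible only for tiny n).
C = math.log(sum(math.exp(log_weight(a))
                 for a in itertools.product((0, 1), repeat=3)))

def prob(a):
    return math.exp(log_weight(a) - C)
```

Subtracting C makes the eight configuration probabilities sum to one; configurations where nodes 0 and 1 agree are favored by the positive coupling.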
These functions are referred to as clique potentials. In the equation above, C is a constant that ensures the distribution is normalized; it is called the log-partition function. Such distributions are also called Gibbs measures; they arise frequently in statistical physics and have numerous applications in computer vision, computational biology, social networks and signal processing. [∗Massachusetts Institute of Technology. Department of Mathematics. Email: luh@mit.edu. This work was supported in part by a Hertz Fellowship. †Massachusetts Institute of Technology. Department of Mathematics. Email: fkoehler@mit.edu. ‡Massachusetts Institute of Technology. Department of Mathematics and the Computer Science and Artificial Intelligence Lab. Email: moitra@mit.edu. This work was supported in part by NSF CAREER Award CCF-1453261, NSF Large CCF-1565235, a David and Lucile Packard Fellowship and an Alfred P. Sloan Fellowship. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.] The Ising model corresponds to the special case where every node has two possible states and the only non-zero clique potentials correspond to single nodes or to pairs of nodes. Over the years, many sorts of mathematical, statistical and algorithmic problems have been studied on Markov random fields. Such models first arose in the context of statistical physics, where they were used to model systems of interacting particles and predict the temperatures at which phase transitions occur [6]. A rich body of work in mathematical physics aims to rigorously understand such phenomena. It is also natural to seek algorithms for sampling from the Gibbs distribution when given its clique potentials. There is a natural Markov chain to do so, and a number of works have identified a critical temperature (in our model this is part of the clique potentials) above which the Markov chain mixes rapidly and below which it mixes slowly [14, 15].
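The natural Markov chain referred to above resamples one node at a time from its conditional distribution given its neighbors (Glauber dynamics). A minimal sketch for an Ising model with ±1 spins and no external field; the inverse temperature beta and the 4-cycle graph below are illustrative choices of ours:

```python
import math
import random

def glauber_step(spins, adj, beta, rng):
    """One step of Glauber dynamics: pick a uniformly random node and resample
    it from its conditional distribution, which depends only on its neighbors."""
    u = rng.randrange(len(spins))
    field = sum(spins[v] for v in adj[u])
    # P(X_u = +1 | neighbors) for the Ising model with no external field.
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[u] = 1 if rng.random() < p_up else -1

# Usage: a 4-cycle at inverse temperature beta = 0.5.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(0)
spins = [1, -1, 1, -1]
for _ in range(1000):
    glauber_step(spins, adj, beta=0.5, rng=rng)
```

Run long enough, the chain's state is (approximately) a sample from the Gibbs distribution; how long "long enough" is depends on the temperature, as discussed above.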
Remarkably, in some cases these critical temperatures also demarcate where approximate sampling goes from being computationally easy to being computationally hard [19, 20]. Finally, various inference problems on Markov random fields lead to graph partitioning problems such as the metric labelling problem [12]. In this paper, we are primarily concerned with the structure learning problem. Given samples from a Markov random field, our goal is to learn the underlying graph G with high probability. The study of structure learning was initiated by Chow and Liu [7], who gave an algorithm for learning Markov random fields whose underlying graph is a tree: compute the maximum-weight spanning tree where the weight of each edge is the mutual information of the variables at its endpoints. The running time and sample complexity are on the order of n^2 and log n respectively. Since then, a number of works have sought algorithms for more general families of Markov random fields. There have been generalizations to polytrees [10], hypertrees [21] and tree mixtures [2]. Other works construct the neighborhood by exhaustive search [1, 8, 5], impose certain incoherence conditions [13, 17, 11] or require that there are no long-range correlations (e.g. between nodes at large distance in the underlying graph) [3, 5]. In a breakthrough work, Bresler [4] gave a simple greedy algorithm that provably works for any bounded degree Ising model, even one with long-range correlations. This work used mutual information as its underlying progress measure and constructed the neighborhood of each node. For a set S of nodes, let X_S denote the random variable representing their joint state. Then the key fact is the following: Fact 1.1. For any node u and any S ⊆ V \ {u} that does not contain all of u's neighbors, there is a node v ≠ u which has non-negligible conditional mutual information (conditioned on X_S) with u. This fact is simultaneously surprising and not surprising.
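Fact 1.1 is phrased in terms of conditional mutual information I(X_u; X_v | X_S). As a rough illustration (our sketch, not part of the paper's algorithm), this quantity can be estimated from samples by computing the empirical mutual information within each observed configuration of X_S and averaging:

```python
import math
from collections import Counter

def conditional_mutual_information(samples, u, v, S):
    """Empirical I(X_u; X_v | X_S) in nats; samples is a list of state tuples,
    u and v are coordinate indices, S is a list of conditioning coordinates."""
    n = len(samples)
    groups = Counter(tuple(x[j] for j in S) for x in samples)
    cmi = 0.0
    for s_val, n_s in groups.items():
        block = [x for x in samples if tuple(x[j] for j in S) == s_val]
        puv = Counter((x[u], x[v]) for x in block)
        pu = Counter(x[u] for x in block)
        pv = Counter(x[v] for x in block)
        # Mutual information within this block, weighted by the empirical P(X_S).
        inner = sum((c / n_s) * math.log((c * n_s) / (pu[a] * pv[b]))
                    for (a, b), c in puv.items())
        cmi += (n_s / n) * inner
    return cmi

# Toy chain X_0 = X_1 = X_2: node 1 separates nodes 0 and 2.
samples = [(0, 0, 0)] * 50 + [(1, 1, 1)] * 50
```

On this toy chain, the unconditional estimate I(X_0; X_2) equals log 2, while conditioning on the separating node drives the estimate to zero, matching the intuition that conditioning on a full neighborhood kills the dependence.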
When S contains all the neighbors of u, X_u has zero conditional mutual information (again conditioned on X_S) with any other node, because X_u depends only on X_S. Conversely, shouldn't we expect that if S does not contain the entire neighborhood of u, then some neighbor has nonzero conditional mutual information with u? The difficulty is that the influence of a neighbor on u can be cancelled out indirectly by the other neighbors of u. The key fact above tells us that it is impossible for the influences to all cancel out. But is this fact true only for Ising models, or is it an instance of a more general phenomenon that holds over any Markov random field? 1.2 Our Techniques In this work, we give a vast generalization of Bresler's [4] lower bound on the conditional mutual information. We prove that it holds in general Markov random fields with higher order interactions, provided that we look at sets of nodes. More precisely, we prove the following fundamental fact for Markov random fields with non-binary states and interactions of order up to r: Fact 1.2. For any node u and any S ⊆ V \ {u} that does not contain all of u's neighbors, there is a set I of at most r − 1 nodes, not containing u, such that X_u and X_I have non-negligible conditional mutual information (conditioned on X_S). Our approach goes through a two-player game between Alice and Bob that we call the GUESSINGGAME. Alice samples a configuration X_1, X_2, ..., X_n and reveals I and X_I for a randomly chosen set I of u's neighbors with |I| ≤ r − 1. Bob's goal is to guess X_u with non-trivial advantage over its marginal distribution. We give an explicit strategy for Bob that achieves positive expected value. Our approach is quite general because we base Bob's guess on the contribution of X_I to the overall clique potentials that X_u participates in, in a way that the expectation over I yields an unbiased estimator of the total clique potential.
The fact that the strategy has positive expected value is then immediate, and all that remains is to prove a quantitative lower bound on it using the law of total variance. From here, the intuition is that if the mutual information I(X_u; X_I) were zero for all sets I, then Bob could not have positive expected value in the GUESSINGGAME. 1.3 Our Results Let Γ(u) denote the neighbors of u. We require certain conditions (Definition 2.3) on the clique potentials to hold, which we call α, β-non-degeneracy, to ensure that the presence or absence of each hyperedge can be information-theoretically determined from few samples (essentially, that no clique potential is too large and no non-zero clique potential is too small). Under this condition, we prove: Theorem 1.3. Fix any node u in an α, β-non-degenerate Markov random field of bounded degree and a subset S of the vertices which does not contain the entire neighborhood of u. Then, taking I uniformly at random from the subsets of the neighbors of u not contained in S of size s = min(r − 1, |Γ(u) \ S|), we have E_I[I(X_u; X_I | X_S)] ≥ C. See Theorem 4.3, which gives the precise dependence of C on all of the constants, including α, β, the maximum degree D, the order of the interactions r and the upper bound K on the number of states of each node. We remark that C is exponentially small in D, r and β, and there are examples where this dependence is necessary [18]. Next we apply our structural result within Bresler's [4] greedy framework for structure learning to obtain our main algorithmic result: an algorithm for learning Markov random fields on bounded degree graphs with a logarithmic number of samples, which is information-theoretically optimal [18]. More precisely, we prove: Theorem 1.4. Fix any α, β-non-degenerate Markov random field on n nodes with r-order interactions and bounded degree. There is an algorithm for learning G that succeeds with high probability given C′ log n samples and runs in time polynomial in n^r.
Remark 1.5. It is easy to encode an (r − 1)-sparse parity with noise as a Markov random field with order-r interactions. This means that improving the running time to n^{o(r)} would yield the first n^{o(k)} algorithm for learning k-sparse parities with noise, which is a long-standing open question. The best known algorithm, due to Valiant [22], runs in time n^{0.8k}. See Theorem 5.1 for a more precise statement. The constant C′ depends doubly exponentially on D. In the special case of Ising models with no external field, Vuffray et al. [23] gave an algorithm based on convex programming that reduces the dependence on D to singly exponential. In greedy approaches based on mutual information, like the one we consider here, doubly exponential dependence on D seems intrinsic. As in Bresler's [4] work, we construct a superset of the neighborhood that contains roughly 1/C nodes, where C comes from Theorem 1.3. Recall that C is exponentially small in D. Then, to accurately estimate conditional mutual information when conditioning on the states of this many nodes, we need a number of samples doubly exponential in D. Our results extend to a model in which we are allowed only partial observations. More precisely, for each sample we may specify a set J of size at most C′′, and all we observe is X_J. We prove: Theorem 1.6. Fix any α, β-non-degenerate Markov random field on n nodes with r-order interactions and bounded degree. There is an algorithm for learning G with C′′-bounded queries that succeeds with high probability given C′ log n samples and runs in time polynomial in n^r. See Theorem 5.3 for a more precise statement. This is a natural scenario that arises when it is too expensive to obtain a sample where the states of all nodes are known. We also consider a model where each node's state is erased (and unobserved) independently with some fixed probability p. See the supplementary material for a precise statement.
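The erasure model just described is easy to simulate; a trivial sketch (our illustration), where None marks an erased coordinate:

```python
import random

def erase(sample, p, rng):
    """Each coordinate is erased (None) independently with probability p; the
    choice of which nodes are revealed is independent of the sample itself."""
    return [None if rng.random() < p else x for x in sample]

obs = erase([1, 0, 1, 1, 0], p=0.3, rng=random.Random(1))
```

Setting p = 0 recovers full observations, and p = 1 erases everything; the learning guarantees in the supplementary material interpolate between these extremes.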
The fact that we can straightforwardly obtain algorithms for these alternative settings demonstrates the flexibility of greedy, information-theoretic approaches to learning. 2 Preliminaries For reference, all fundamental parameters of the graphical model (max degree, etc.) are defined in the next two subsections. In terms of these fundamental parameters, we define additional parameters γ and δ in (3), C′(γ, K, α) in Theorem 4.3, τ in (5) and L in (6). 2.1 Markov Random Fields and the Canonical Form Let K be an upper bound on the maximum number of states of any node. Recall the joint probability distribution of the model, given in (1). For notational convenience, even when i_1, ..., i_ℓ are not sorted in increasing order, we define θ_{i_1···i_ℓ}(a_1, ..., a_ℓ) = θ_{i′_1···i′_ℓ}(a′_1, ..., a′_ℓ), where i′_1, ..., i′_ℓ is the sorted version of i_1, ..., i_ℓ and a′_1, ..., a′_ℓ are the correspondingly reordered copies of a_1, ..., a_ℓ. The parameterization in (1) is not unique, and it will be helpful to put it in a normal form, as follows. A tensor fiber is the vector given by fixing all of the indices of the tensor except one; this generalizes the notion of a row/column of a matrix. For example, for any 1 ≤ m ≤ ℓ, i_1 < ··· < i_m < ··· < i_ℓ and fixed a_1, ..., a_{m−1}, a_{m+1}, ..., a_ℓ, the corresponding tensor fiber is the set of elements θ_{i_1···i_ℓ}(a_1, ..., a_m, ..., a_ℓ) where a_m ranges from 1 to k_{i_m}. Definition 2.1. We say that the weights θ are in canonical form if, for every tensor θ_{i_1···i_ℓ}, the entries of each of its tensor fibers sum to zero. Moreover, we say that a tensor with this property is a centered tensor. Hence a Markov random field is in canonical form exactly when all of the tensors corresponding to its clique potentials are centered. We observe that every Markov random field can be put in canonical form: Claim 2.2.
Every Markov random field can be put in canonical form. [Footnote: This is the same as writing the log of the probability mass function according to the Efron-Stein decomposition with respect to the uniform measure on colors; this decomposition is known to be unique. See e.g. Chapter 8 of [16].] 2.2 Non-Degeneracy Let H = (V, H) denote the hypergraph obtained from the Markov random field as follows: for every non-zero tensor θ_{i_1···i_ℓ} we associate a hyperedge (i_1, ..., i_ℓ). We say that a hyperedge h is maximal if no other hyperedge of strictly larger size contains h. Now G = (V, E) can be obtained by replacing every hyperedge with a clique. Let D be a bound on the maximum degree, and recall that Γ(u) denotes the neighbors of u. We require the following conditions in order to ensure that the presence and absence of every maximal hyperedge is information-theoretically determined: Definition 2.3. We say that a Markov random field is α, β-non-degenerate if (a) every edge (i, j) in the graph G is contained in some hyperedge h ∈ H whose corresponding tensor is non-zero; (b) every maximal hyperedge h ∈ H has at least one entry lower bounded by α in absolute value; (c) every entry of θ_{i_1 i_2···i_ℓ} is upper bounded by a constant β in absolute value. We refer to a hyperedge h with an entry lower bounded by α in absolute value as α-nonvanishing. 2.3 Bounds on Conditional Probabilities First we review properties of the conditional probabilities in a Markov random field and introduce some convenient notation which we will use later on. Fix a node u and its neighborhood U = Γ(u). Then for any R ∈ [k_u] we have

    P(X_u = R | X_U) = exp(E^X_{u,R}) / Σ_{B=1}^{k_u} exp(E^X_{u,B})    (2)

where we define

    E^X_{u,R} = Σ_{ℓ=1}^{r} Σ_{i_2 < ··· < i_ℓ} θ_{u i_2···i_ℓ}(R, X_{i_2}, ..., X_{i_ℓ})

and i_2, ..., i_ℓ range over elements of the neighborhood U; when ℓ = 1 the inner sum is just θ_u(R). Let X_{∼u} = X_{[n]\{u}}.
To see that (2) holds, first condition on X_{∼u} and observe that the probability of a given value of X_u is proportional to exp(E^X_{u,R}), which gives the right-hand side of (2); then apply the tower property of conditional probabilities. Therefore, if we define (where |T|_max denotes the maximum entry of a tensor T)

    γ := sup_u Σ_{ℓ=1}^{r} Σ_{i_2 < ··· < i_ℓ} |θ_{u i_2···i_ℓ}|_max ≤ β Σ_{ℓ=1}^{r} (D choose ℓ−1),    δ := (1/K) exp(−2γ)    (3)

then for any R

    P(X_u = R | X_U) ≥ exp(−γ) / (K exp(γ)) = (1/K) exp(−2γ) = δ.    (4)

Observe that if we pick any node i and consider the new Markov random field given by conditioning on a fixed value of X_i, then the value of γ for the new Markov random field is non-increasing. 3 The Guessing Game Here we introduce a game-theoretic framework for understanding mutual information in general Markov random fields. The GUESSINGGAME is defined as follows:
1. Alice samples X = (X_1, ..., X_n) and X′ = (X′_1, ..., X′_n) independently from the Markov random field.
2. Alice samples R uniformly at random from [k_u].
3. Alice samples a set I of size s = min(r − 1, d_u) uniformly at random from the neighbors of u.
4. Alice tells Bob I, X_I and R.
5. Bob wagers w with |w| ≤ γK (D choose r−1).
6. Bob receives ∆ = w·1{X_u = R} − w·1{X′_u = R}.
Bob's goal is to guess X_u given knowledge of the states of some of u's neighbors. The Markov random field (including all of its parameters) is common knowledge. The intuition is that if Bob can obtain positive expected value, then there must be some set I of neighbors of u which has non-zero mutual information with u. In this section, we show that there is a simple, explicit strategy for Bob that yields positive expected value. 3.1 A Good Strategy for Bob Here we exhibit an explicit strategy for Bob that has positive expected value. Our analysis rests on the following key lemma: Lemma 3.1. There is a strategy for Bob that wagers at most γK (D choose r−1) in absolute value and satisfies

    E_{I,X_I}[w | X_{∼u}, R] = E^X_{u,R} − Σ_{B≠R} E^X_{u,B}.

Proof. First we explicitly define Bob's strategy.
Let

    Φ(R, I, X_I) = Σ_{ℓ=1}^{s} C_{u,ℓ,s} Σ_{i_1 < i_2 < ··· < i_ℓ} 1{ {i_1, ..., i_ℓ} ⊆ I } · θ_{u i_1···i_ℓ}(R, X_{i_1}, ..., X_{i_ℓ})

where C_{u,ℓ,s} = (d_u choose s) / (d_u − ℓ choose s − ℓ). Then Bob wagers

    w = Φ(R, I, X_I) − Σ_{B≠R} Φ(B, I, X_I).

Notice that the strategy depends only on X_I, because all terms in the summation where {i_1, ..., i_ℓ} is not a subset of I contribute zero. The intuition behind this strategy is that the weighting term satisfies

    C_{u,ℓ,s} = 1 / Pr[{i_1, ..., i_ℓ} ⊆ I].

Thus, when we take the expectation over I and X_I, we get

    E_{I,X_I}[Φ(R, I, X_I) | X_{∼u}, R] = Σ_{ℓ=1}^{r} Σ_{i_2 < ··· < i_ℓ} θ_{u i_2···i_ℓ}(R, X_{i_2}, ..., X_{i_ℓ}) = E^X_{u,R}

and hence E_{I,X_I}[w | X_{∼u}, R] = E^X_{u,R} − Σ_{B≠R} E^X_{u,B}. To complete the proof, notice that C_{u,ℓ,s} ≤ (D choose r−1), which together with the definition of γ implies that |Φ(B, I, X_I)| ≤ γ (D choose r−1) for any state B, and thus Bob wagers at most the desired amount in absolute value. Now we are ready to analyze the strategy: Theorem 3.2. There is a strategy for Bob that wagers at most γK (D choose r−1) in absolute value and satisfies

    E[∆] ≥ 4α²δ^{r−1} / (r 2^r e^{2γ}).

Proof. We use the strategy from Lemma 3.1. First fix X_{∼u}, X′_{∼u} and R. Then

    E_{I,X_I}[∆ | X_{∼u}, X′_{∼u}, R] = E_{I,X_I}[w | X_{∼u}, R] · ( Pr[X_u = R | X_{∼u}, R] − Pr[X′_u = R | X′_{∼u}, R] ),

which follows because ∆ = w·1{X_u = R} − w·1{X′_u = R}, and because w and X_u do not depend on X′_{∼u} and similarly X′_u does not depend on X_{∼u}. Now, using (2), we calculate:

    Pr[X_u = R | X_{∼u}, R] − Pr[X′_u = R | X′_{∼u}, R]
      = exp(E^X_{u,R}) / Σ_B exp(E^X_{u,B}) − exp(E^{X′}_{u,R}) / Σ_B exp(E^{X′}_{u,B})
      = (1/D) Σ_{B≠R} [ exp(E^X_{u,R} + E^{X′}_{u,B}) − exp(E^X_{u,B} + E^{X′}_{u,R}) ],

where D = ( Σ_B exp(E^X_{u,B}) ) · ( Σ_B exp(E^{X′}_{u,B}) ) (not to be confused with the maximum degree). Thus, putting it all together, we have

    E_{I,X_I}[∆ | X_{∼u}, X′_{∼u}, R] = (1/D) ( E^X_{u,R} − Σ_{B≠R} E^X_{u,B} ) Σ_{B≠R} [ exp(E^X_{u,R} + E^{X′}_{u,B}) − exp(E^X_{u,B} + E^{X′}_{u,R}) ].

Now it is easy to see that

    Σ_{distinct R,G,B} E^X_{u,B} [ exp(E^X_{u,R} + E^{X′}_{u,G}) − exp(E^X_{u,G} + E^{X′}_{u,R}) ] = 0,

which follows because interchanging R and G multiplies each term by negative one, so the terms in the summation can be paired up so that they exactly cancel.
Using this identity we get

    E_{I,X_I}[∆ | X_{∼u}, X′_{∼u}] = (1/(k_u D)) Σ_R Σ_{B≠R} ( E^X_{u,R} − E^X_{u,B} ) [ exp(E^X_{u,R} + E^{X′}_{u,B}) − exp(E^X_{u,B} + E^{X′}_{u,R}) ],

where we have also used the fact that R is uniform on [k_u]. Finally, since X_{∼u} and X′_{∼u} are identically distributed, we can sample Y_{∼u} and Z_{∼u} and flip a coin to decide whether to set X_{∼u} = Y_{∼u} and X′_{∼u} = Z_{∼u} or vice versa. Now we have

    E_{I,X_I}[∆ | Y_{∼u}, Z_{∼u}] = (1/(2 k_u D)) Σ_R Σ_{B≠R} ( E^Y_{u,R} − E^Y_{u,B} − E^Z_{u,R} + E^Z_{u,B} ) ( e^{E^Y_{u,R} + E^Z_{u,B}} − e^{E^Y_{u,B} + E^Z_{u,R}} ).

With the appropriate notation it is easy to see that this sum is strictly positive. Let a_{R,B} = E^Y_{u,R} + E^Z_{u,B} and b_{R,B} = E^Z_{u,R} + E^Y_{u,B}. With this notation:

    E_{I,X_I}[∆ | Y_{∼u}, Z_{∼u}] = (1/(2 k_u D)) Σ_R Σ_{B≠R} ( a_{R,B} − b_{R,B} ) ( exp(a_{R,B}) − exp(b_{R,B}) ).

Since exp(x) is a strictly increasing function, it follows that as long as a_{R,B} ≠ b_{R,B} for some term in the sum, the sum is positive. In Lemma 3.3 we prove that the expectation over Y and Z of this sum is at least 4α²δ^{r−1} / (r 2^r e^{2γ}), which completes the proof. In the supplementary material we show how to use the law of total variance to give a quantitative lower bound on the sum that arose in the proof of Theorem 3.2. More precisely, we show: Lemma 3.3.

    E_{Y,Z}[ Σ_R Σ_{B≠R} ( E^Y_{u,R} − E^Y_{u,B} − E^Z_{u,R} + E^Z_{u,B} ) ( exp(E^Y_{u,R} + E^Z_{u,B}) − exp(E^Y_{u,B} + E^Z_{u,R}) ) ] ≥ 4α²δ^{r−1} / (r 2^r e^{2γ}).

4 Implications for Mutual Information In this section we show that Bob's strategy implies a lower bound on the mutual information between node u and a subset I of its neighbors of size at most r − 1. We then extend the argument to conditional mutual information. 4.1 Mutual Information in Markov Random Fields Recall that the goal of the GUESSINGGAME is for Bob to use information about the states of the nodes in I to guess the state of node u. Intuitively, if X_I conveyed no information about X_u, this would contradict the fact that Bob has a strategy with positive expected value. We make this precise below. Our argument proceeds in two steps. First we upper bound the expected value of any strategy.
Lemma 4.1. For any strategy,

    E[∆] ≤ γK (D choose r−1) · E_{I,X_I,R}[ | Pr[X_u = R | X_I] − Pr[X_u = R] | ].

Intuitively, this holds because Bob's optimal strategy given I, X_I and R is to wager

    w = sgn( Pr[X_u = R | X_I] − Pr[X_u = R] ) · γK (D choose r−1).

Next we lower bound the mutual information using (essentially) the same quantity. We prove: Lemma 4.2.

    sqrt( (1/2) I(X_u; X_I) ) ≥ (1/K^r) · E_{X_I,R}[ | Pr(X_u = R | X_I) − Pr(X_u = R) | ].

Together, these bounds yield a lower bound on the mutual information. In the supplementary material, we show how to extend the lower bound from mutual information to conditional mutual information. The main idea is to show that there is a setting of X_S such that the hyperedges do not completely cancel out in the Markov random field obtained by conditioning on X_S. Theorem 4.3. Fix a vertex u such that all of the maximal hyperedges containing u are α-nonvanishing, and a subset S of the vertices which does not contain the entire neighborhood of u. Then, taking I uniformly at random from the subsets of the neighbors of u not contained in S of size s = min(r − 1, |Γ(u) \ S|),

    E_I[ sqrt( (1/2) I(X_u; X_I | X_S) ) ] ≥ C′(γ, K, α), where explicitly C′(γ, K, α) := 4α²δ^{r+D−1} / ( r 2^r K^{r+1} (D choose r−1) γ e^{2γ} ).

5 Applications We now employ the greedy approach of Bresler [4], which was previously used to learn Ising models on bounded degree graphs. Suppose we are given m independent samples from the Markov random field. Let P̂r denote the empirical distribution and Ê the expectation under it. We compute empirical estimates of a certain information-theoretic quantity ν_{u,I|S} (defined in the supplementary material) as follows:

    ν̂_{u,I|S} := E_{R,G} Ê_{X_S}[ | P̂r(X_u = R, X_I = G | X_S) − P̂r(X_u = R | X_S) · P̂r(X_I = G | X_S) | ],

where R is a state drawn uniformly at random from [k_u] and G is an |I|-tuple of states drawn independently and uniformly at random from [k_{i_1}] × [k_{i_2}] × ··· × [k_{i_|I|}], where I = (i_1, i_2, ..., i_{|I|}).
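A rough sketch of how such an estimate might be computed from samples (our illustration, not the paper's code; for simplicity it replaces the random draws of R and G by a uniform average, and assumes every node has the same number of states k):

```python
from collections import Counter
from itertools import product

def empirical_nu(samples, u, I, S, k=2):
    """Empirical estimate in the spirit of nu-hat_{u,I|S}: average, over a
    uniform state R, a uniform tuple G, and the empirical distribution of X_S,
    the gap |P(X_u=R, X_I=G | X_S) - P(X_u=R | X_S) P(X_I=G | X_S)|."""
    n = len(samples)
    by_s = Counter(tuple(x[j] for j in S) for x in samples)
    total = 0.0
    for s_val, n_s in by_s.items():
        block = [x for x in samples if tuple(x[j] for j in S) == s_val]
        gap = 0.0
        for R in range(k):
            for G in product(range(k), repeat=len(I)):
                p_joint = sum(x[u] == R and tuple(x[j] for j in I) == G
                              for x in block) / n_s
                p_u = sum(x[u] == R for x in block) / n_s
                p_I = sum(tuple(x[j] for j in I) == G for x in block) / n_s
                gap += abs(p_joint - p_u * p_I)
        # Weight by the empirical P(X_S = s_val); average uniformly over R and G.
        total += (n_s / n) * gap / (k * k ** len(I))
    return total
```

On perfectly correlated binary pairs the estimate is bounded away from zero, while on independent coordinates it vanishes, which is exactly the behavior the thresholding step of the algorithm relies on.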
Also we define τ (which will be used as a thresholding constant) as

τ := C′(γ, K, α) / 2   (5)

and L, which is an upper bound on the size of the superset of a neighborhood of u that the algorithm will construct,

L := (8/τ²) log K = (32 / C′(γ, K, α)²) log K.   (6)

Then the algorithm MRFNBHD at node u is:
1. Fix input vertex u. Set S := ∅.
2. While |S| ≤ L and there exists a set of vertices I ⊂ [n] \ S of size at most r − 1 such that ν̂_{u,I|S} > τ, set S := S ∪ I.
3. For each i ∈ S, if ν̂_{u,i|S\i} < τ then remove i from S.
4. Return the set S as our estimate of the neighborhood of u.

Theorem 5.1. Fix ω > 0. Suppose we are given m samples from an α, β-non-degenerate Markov random field with r-order interactions where the underlying graph has maximum degree at most D and each node takes on at most K states. Suppose that

m ≥ (60 K^{2L} / (τ² δ^{2L})) ( log(1/ω) + log(L + r) + (L + r) log(nK) + log 2 ).

Then with probability at least 1 − ω, MRFNBHD, when run starting from each node u, recovers the correct neighborhood of u, and thus recovers the underlying graph G. Furthermore, each run of the algorithm takes O(m L n^r) time.

In many situations it is too expensive to obtain full samples from a Markov random field (e.g., this could involve needing to measure every potential symptom of a patient). Here we consider a model where we are allowed only partial observations in the form of a C-bounded query:

Definition 5.2. A C-bounded query to a Markov random field is specified by a set S with |S| ≤ C, and we observe X_S.

Our algorithm MRFNBHD can be made to work with C-bounded queries instead of full observations. We prove:

Theorem 5.3. Fix an α, β-non-degenerate Markov random field with r-order interactions where the underlying graph has maximum degree at most D and each node takes on at most K states.
The bounded-queries modification of the algorithm returns the correct neighborhood of every vertex u using m′ L r n^r bounded queries of size at most L + r, where

m′ = (60 K^{2L} / (τ² δ^{2L})) ( log(L r n^r / ω) + log(L + r) + (L + r) log(nK) + log 2 ),

with probability at least 1 − ω.

In the supplementary material, we extend our results to the setting where we observe partial samples in which the state of each node is revealed independently with probability p, and the choice of which nodes to reveal is independent of the sample.

Acknowledgements: We thank Guy Bresler for valuable discussions and feedback.

References
[1] Pieter Abbeel, Daphne Koller, and Andrew Y. Ng. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7(Aug):1743–1788, 2006.
[2] Anima Anandkumar, Daniel J. Hsu, Furong Huang, and Sham M. Kakade. Learning mixtures of tree graphical models. In Advances in Neural Information Processing Systems, pages 1052–1060, 2012.
[3] Animashree Anandkumar, Vincent Y. F. Tan, Furong Huang, and Alan S. Willsky. High-dimensional structure estimation in Ising models: Local separation criterion. The Annals of Statistics, pages 1346–1375, 2012.
[4] Guy Bresler. Efficiently learning Ising models on arbitrary graphs. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 771–782. ACM, 2015.
[5] Guy Bresler, Elchanan Mossel, and Allan Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques, pages 343–356. Springer, 2008.
[6] Stephen G. Brush. History of the Lenz–Ising model. Reviews of Modern Physics, 39(4):883, 1967.
[7] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[8] Imre Csiszár and Zsolt Talata. Consistent estimation of the basic neighborhood of Markov random fields.
In Information Theory, 2004. ISIT 2004. Proceedings. International Symposium on, page 170. IEEE, 2004.
[9] Gautam Dasarathy, Aarti Singh, Maria-Florina Balcan, and Jong Hyuk Park. Active learning algorithms for graphical model selection. Journal of Machine Learning Research, pages 199–207, 2016.
[10] Sanjoy Dasgupta. Learning polytrees. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 134–141. Morgan Kaufmann Publishers Inc., 1999.
[11] Ali Jalali, Pradeep Ravikumar, Vishvas Vasuki, and Sujay Sanghavi. On learning discrete graphical models using group-sparse regularization. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 378–387, 2011.
[12] Jon Kleinberg and Eva Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. Journal of the ACM (JACM), 49(5):616–639, 2002.
[13] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient structure learning of Markov networks using ℓ1-regularization. In Proceedings of the 19th International Conference on Neural Information Processing Systems, pages 817–824. MIT Press, 2006.
[14] Fabio Martinelli and Enzo Olivieri. Approach to equilibrium of Glauber dynamics in the one phase region. Communications in Mathematical Physics, 161(3):447–486, 1994.
[15] Elchanan Mossel, Dror Weitz, and Nicholas Wormald. On the hardness of sampling independent sets beyond the tree threshold. Probability Theory and Related Fields, 143(3):401–439, 2009.
[16] Ryan O'Donnell. Analysis of Boolean Functions. Cambridge University Press, New York, NY, USA, 2014.
[17] Pradeep Ravikumar, Martin J. Wainwright, and John D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[18] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions.
IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
[19] Allan Sly. Computational transition at the uniqueness threshold. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 287–296. IEEE, 2010.
[20] Allan Sly and Nike Sun. The computational hardness of counting in two-spin models on d-regular graphs. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 361–369. IEEE, 2012.
[21] Nathan Srebro. Maximum likelihood bounded tree-width Markov networks. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 504–511. Morgan Kaufmann Publishers Inc., 2001.
[22] Gregory Valiant. Finding correlations in subquadratic time, with applications to learning parities and juntas. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 11–20. IEEE, 2012.
[23] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov. Interaction screening: Efficient and sample-optimal learning of Ising models. In Advances in Neural Information Processing Systems, pages 2595–2603, 2016.
Subset Selection and Summarization in Sequential Data

Ehsan Elhamifar
Computer and Information Science College, Northeastern University, Boston, MA 02115
eelhami@ccs.neu.edu

M. Clara De Paolis Kaluza
Computer and Information Science College, Northeastern University, Boston, MA 02115
clara@ccs.neu.edu

Abstract

Subset selection, which is the task of finding a small subset of representative items from a large ground set, finds numerous applications in different areas. Sequential data, including time-series and ordered data, contain important structural relationships among items, imposed by underlying dynamic models of data, that should play a vital role in the selection of representatives. However, nearly all existing subset selection techniques ignore the underlying dynamics of data and treat items independently, leading to incompatible sets of representatives. In this paper, we develop a new framework for sequential subset selection that finds a set of representatives compatible with the dynamic models of data. To do so, we equip items with transition dynamic models and pose the problem as an integer binary optimization over assignments of sequential items to representatives, which leads to high encoding, diversity and transition potentials. Our formulation generalizes the well-known facility location objective to deal with sequential data, incorporating transition dynamics among facilities. As the proposed formulation is non-convex, we derive a max-sum message passing algorithm to solve the problem efficiently. Experiments on synthetic and real data, including instructional video summarization, show that our sequential subset selection framework not only achieves better encoding and diversity than the state of the art, but also successfully incorporates the dynamics of data, leading to compatible representatives.

1 Introduction

Subset selection is the task of finding a small subset of the most informative items from a ground set.
Besides helping to reduce the computational time and memory of algorithms, by allowing them to work on a much smaller representative set [1], it has found numerous applications, including image and video summarization [2, 3, 4], speech and document summarization [5, 6, 7], clustering [8, 9, 10, 11, 12], feature and model selection [13, 14, 15, 16], sensor placement [17, 18], social network marketing [19] and product recommendation [20]. Compared to dictionary learning methods such as K-means [21], K-SVD [22] and HMMs [23], which learn centers/atoms in the input space, subset selection methods choose centers/atoms from the given set of items. Sequential data, including time-series such as video, speech, audio and sensor measurements, as well as ordered data such as text, form a large and important part of modern datasets, requiring effective subset selection techniques. Such datasets contain important structural relationships among items, often imposed by underlying dynamic models, that should play a vital role in the selection of representatives. For example, there is a logical way in which segments of a video or sentences of a document are connected together, and treating segments/sentences as a bag of randomly permutable items results in losing the semantic content of the video/document. However, existing subset selection methods ignore these relationships and treat items independently of each other.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: We propose a framework, based on a generalization of the facility location problem, for the summarization of sequential data. Given a source set of items {x_1, ..., x_M} with a dynamic transition model and a target set of sequential items (y_1, ..., y_T), we propose a framework to find a sequence of representatives from the source set that has a high global transition probability and well encodes the target set.
Thus, there is a need for sequential subset selection methods that, instead of treating items independently, use the underlying dynamic models of data to select high-quality, diverse and compatible representatives.

Prior Work: A subset selection framework consists of three main components: i) the inputs to the algorithm; ii) the objective function to optimize, characterizing the informativeness and diversity of the selected items; iii) the algorithm used to optimize the objective function. The inputs to subset selection algorithms are in the form of either feature-vector representations or pairwise similarities between items. Several subset selection criteria have been studied in the literature, including the maximum cut objective [24, 25], maximum marginal relevance [26], capacitated and uncapacitated facility location objectives [27, 28], multi-linear coding [29, 30] and the maximum volume subset [6, 31], all of which try to characterize the informativeness/value of a subset of items in terms of its ability to represent the entire distribution and/or its having minimum information overlap among the selected items. On the other hand, optimizing almost all subset selection criteria is, in general, NP-hard and non-convex [25, 32, 33, 34], which has motivated the development and study of approximate methods for optimizing these criteria. These include greedy approximate algorithms [28] for maximizing submodular functions, such as graph cuts and facility location, which have worst-case approximation guarantees, as well as sampling methods based on the Determinantal Point Process (DPP) [6, 31], a probability measure on the set of all subsets of a ground set, for approximately finding the maximum volume subset. Motivated by the maturity of convex optimization and advances in sparse and low-rank recovery, recent methods have focused on convex relaxation-based approaches to subset selection [8, 9, 2, 35, 36].
When it comes to sequential data, however, the majority of subset selection methods ignore the underlying dynamics and relationships among items and treat items independently of each other. Recent results in [37, 3] have developed interesting extensions of DPP-based subset selection, capturing representatives in a sequential order such that newly selected representatives are diverse with respect to the previously selected ones. However, sequential diversity by itself is generally insufficient, especially when the sequence of diverse selected items is unlikely to follow the underlying dynamic models. For example, in a video/document on a specific topic that contains intermediate scenes/sentences irrelevant to the topic, promoting sequential diversity results in selecting the irrelevant scenes/sentences. [38] extends submodular functions to capture ordered preferences among items, where ordered preferences are represented by a directed acyclic graph over items, and presents a greedy algorithm that picks edges instead of items. The method, however, cannot deal with arbitrary graphs, such as Markov chains with cycles. On the other hand, while Hidden Markov Models (HMMs) [23, 39] and dynamical systems [40, 41] have been extensively studied for modeling sequential data, they have not been properly exploited in the context of subset selection.

Paper Contributions: In this paper we develop a new framework for sequential subset selection that incorporates the dynamic model of sequential data into subset selection. We develop a new class of objective functions that promotes selecting not only high-quality and diverse items, but also a sequence of representatives that is compatible with the dynamic model of data.
To do so, we propose a dynamic subset selection framework, in which we equip items with transition probabilities and design objective functions to select representatives that well capture the data distribution while having a high overall transition probability in the sequence of representatives; see Figure 1. Our formulation generalizes the facility location objective [27, 28] to sequential data by incorporating transition dynamics among facilities. Since the proposed integer binary optimization is non-convex, we develop a max-sum message passing framework to solve the problem efficiently. Through experiments on synthetic and real data, including instructional video summarization, we show that our method outperforms the state of the art in terms of selecting representatives with better encoding, diversity and dynamic compatibility.

2 Subset Selection for Sequential Data

Sequential data, including time-series and ordered data, contain important structural relationships among items, often imposed by underlying dynamic models of data, that should play a vital role in the selection of representatives. In this section, we develop a new framework for sequential subset selection that incorporates the underlying dynamic models and relationships among items into subset selection. More specifically, we propose a dynamic subset selection framework, where we equip items with transition probabilities and design objectives to select representatives that capture the data distribution with a high transition probability in the sequence of representatives. In the next section, we develop an efficient algorithm to solve the proposed optimization problem.

2.1 Sequential Subset Selection Formulation

Assume we have a source set of items X = {x_1, ..., x_M}, equipped with a transition model p(x_{i_0} | x_{i_1}, ..., x_{i_n}) between items, and a target set of sequential items Y = (y_1, ..., y_T).
Our goal is to find a small representative subset of X that well encodes Y, while the selected representatives are compatible with the dynamic model of X. Let x_{r_t} be the representative of y_t for t ∈ {1, ..., T}. We propose a potential function ψ(r_1, ..., r_T) whose maximization over all possible assignments (r_1, ..., r_T) ∈ {1, ..., M}^T, i.e.,

max_{(r_1,...,r_T) ∈ {1,...,M}^T} ψ(r_1, ..., r_T),   (1)

achieves the three goals of i) minimizing the encoding cost of Y via the representative set; ii) selecting a small set of representatives from X; iii) selecting an ordered set of representatives (x_{r_1}, ..., x_{r_T}) that is compatible with the dynamics on X. To tackle the problem, we consider a decomposition of the potential function into the product of three potentials, corresponding to the three aforementioned objectives,

ψ(r_1, ..., r_T) ≜ Φ_enc(r_1, ..., r_T) × Φ_card(r_1, ..., r_T) × Φ_dyn(r_1, ..., r_T),   (2)

where Φ_enc denotes the encoding potential that favors selecting a representative set from X that well encodes Y, and Φ_card denotes the cardinality potential that favors selecting a small number of distinct representatives. Finally, Φ_dyn denotes the dynamic potential that favors selecting an ordered set of representatives that is likely to be generated by the underlying dynamic model on X. Next, we study each of the three potentials.

Encoding Potential: Since the encoding of each item of Y depends only on its own representative, we assume that the encoding potential factorizes as

Φ_enc(r_1, ..., r_T) = ∏_{t=1}^{T} φ_{enc,t}(r_t),   (3)

where φ_{enc,t}(i) characterizes how well x_i encodes y_t and becomes larger when x_i better represents y_t. In this paper, we assume that φ_{enc,t}(i) = exp(−d_{i,t}), where d_{i,t} indicates the dissimilarity of x_i to y_t.¹ A lower dissimilarity d_{i,t} means that x_i better encodes/represents y_t.
Cardinality Potential: Notice that maximizing the encoding potential alone results in selecting many representatives. Hence, we consider a cardinality potential to restrict the total number of representatives. Denoting the number of distinct representatives by |{r_1, ..., r_T}|, we consider

Φ_card(r_1, ..., r_T) = exp(−λ |{r_1, ..., r_T}|),   (4)

which promotes selecting a small number of representatives. The parameter λ > 0 controls the effect of the cardinality on the global potential ψ: a λ close to zero ignores the effect of the cardinality potential, resulting in many representatives, while a larger λ results in a smaller representative set.

Dynamic Potential: While the encoding and cardinality potentials together promote selecting a few representatives from X that well encode Y, there is no guarantee that the sequence of representatives (x_{r_1}, ..., x_{r_T}) is compatible with the underlying dynamics of X. Thus, we introduce a dynamic potential that measures the compatibility of the sequence of representatives. To do so, we consider an n-th order Markov model to represent the dynamic relationships among the items in X, where the selection of the representative x_{r_t} depends on the n previously selected representatives, i.e., x_{r_{t−1}}, ..., x_{r_{t−n}}. More precisely, we consider

Φ_dyn(r_1, ..., r_T) = ( p_1(x_{r_1}) × ∏_{t=2}^{n} p_t(x_{r_t} | x_{r_{t−1}}, ..., x_{r_1}) × ∏_{t=n+1}^{T} p_t(x_{r_t} | x_{r_{t−1}}, ..., x_{r_{t−n}}) )^β,   (5)

where p_t(x_i) indicates the probability of selecting x_i as the representative of y_t, and p_t(x_{i_0} | x_{i_1}, ..., x_{i_n}) denotes the probability of selecting x_{i_0} as the representative of y_t given that x_{i_1}, ..., x_{i_n} have been selected as the representatives of y_{t−1}, ..., y_{t−n}, respectively. The regularization parameter β > 0 determines the effect of the dynamic potential on the overall potential ψ, where a β close to zero discounts the effect of the dynamics of X.

¹We can also use similarities s_{i,t} instead of dissimilarities, in which case we set φ_{enc,t}(i) = exp(s_{i,t}).
As a result, maximizing the dynamic potential promotes selecting a sequence of representatives that is highly likely to follow the dynamic model on the source set. In this paper, we assume that the transition dynamic model on the source set is given and known. In the experiments on video summarization, we learn the dynamic model by fitting a Hidden Markov Model to data.

2.2 Optimization Framework for Sequential Subset Selection

In the rest of the paper, we consider a first-order Markov model, which performs well in the application studied in the paper (our proposed optimization can be generalized to n-th order Markov models as well). Putting all three potentials together, we consider maximization of the global potential function

ψ = ∏_{t=1}^{T} φ_{enc,t}(r_t) × Φ_card(r_1, ..., r_T) × ( p_1(x_{r_1}) × ∏_{t=2}^{T} p_t(x_{r_t} | x_{r_{t−1}}) )^β   (6)

over all possible assignments (r_1, ..., r_T) ∈ {1, ..., M}^T. To do so, we cast the problem as an integer binary optimization. We define binary assignment variables {z_{i,t}}, i = 1, ..., M, t = 1, ..., T, where z_{i,t} ∈ {0, 1} indicates whether x_i is the representative of y_t. Since each item y_t is associated with a single representative, we have ∑_{i=1}^{M} z_{i,t} = 1. Also, we define variables {δ_i}, i = 1, ..., M, and {u^t_{i′,i}}, i, i′ = 1, ..., M, t = 1, ..., T, where δ_i ∈ {0, 1} indicates whether x_i is the representative of y_1, and u^t_{i′,i} ∈ {0, 1} indicates whether x_{i′} is the representative of y_t given that x_i is the representative of y_{t−1}. As we will show, {δ_i} and {u^t_{i′,i}} are determined by {z_{i,t}}, hence the final optimization only depends on {z_{i,t}}. Using the variables defined above, we can rewrite the global potential function in (6) as

ψ = ∏_{t=1}^{T} ∏_{i=1}^{M} φ_{enc,t}(i)^{z_{i,t}} × Φ_card(r_1, ..., r_T) × ∏_{i=1}^{M} p_1(x_i)^{β δ_i} × ∏_{t=2}^{T} ∏_{i′=1}^{M} ∏_{i=1}^{M} p_t(x_{i′} | x_i)^{β u^t_{i′,i}}.   (7)

We can equivalently maximize the logarithm of ψ, which is to maximize

∑_{t=1}^{T} ∑_{i=1}^{M} −z_{i,t} d_{i,t} + log Φ_card(r_1, ..., r_T) + β ( ∑_{i=1}^{M} δ_i log p_1(x_i) + ∑_{t=2}^{T} ∑_{i,i′=1}^{M} u^t_{i′,i} log p_t(x_{i′} | x_i) ),   (8)

where we used log φ_{enc,t}(i) = −d_{i,t}.
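To make the objective concrete, here is an illustrative sketch (our own code, not the authors') that evaluates the log-potential for a given assignment r = (r_1, ..., r_T), taking a dissimilarity matrix d, an initial probability vector p1, and a transition matrix P as inputs; all names are hypothetical:

```python
import numpy as np
from itertools import product

def log_potential(r, d, p1, P, lam, beta):
    """log psi for assignment r: encoding term - sum_t d[r_t, t],
    cardinality term - lam * |{r_1, ..., r_T}|, and dynamic term
    beta * (log p1[r_1] + sum_{t>=2} log P[r_{t-1}, r_t])."""
    T = len(r)
    enc = -sum(d[r[t], t] for t in range(T))
    card = -lam * len(set(r))
    dyn = np.log(p1[r[0]]) + sum(np.log(P[r[t - 1], r[t]]) for t in range(1, T))
    return enc + card + beta * dyn

# Exhaustive maximization is only feasible for tiny M and T, but it is a
# handy sanity check against any approximate solver.
M, T = 3, 4
rng = np.random.default_rng(0)
d = rng.random((M, T))
p1 = np.full(M, 1.0 / M)
P = np.full((M, M), 1.0 / M)
best = max(product(range(M), repeat=T),
           key=lambda r: log_potential(r, d, p1, P, lam=0.1, beta=0.02))
```

With lam = 0 and beta = 0 the maximizer simply picks, for each t, the item with the smallest dissimilarity d[i, t]; the two regularization terms are what trade this off against parsimony and dynamic compatibility.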
Notice that {δ_i} and {u^t_{i′,i}} can be written as functions of the assignment variables {z_{i,t}}. Denoting by 1(·) the indicator function, which is one when its argument is true and zero otherwise, we can write δ_i = 1(r_1 = i) and u^t_{i′,i} = 1(r_t = i′, r_{t−1} = i). Hence, we have

δ_i = z_{i,1},   u^t_{i′,i} = z_{i,t−1} z_{i′,t}.   (9)

As a result, we can rewrite the maximization in (8) as the equivalent optimization

max_{{z_{i,t}}} ∑_{t=1}^{T} ∑_{i=1}^{M} −z_{i,t} d_{i,t} + log Φ_card(r_1, ..., r_T) + β ( ∑_{i=1}^{M} z_{i,1} log p_1(x_i) + ∑_{t=2}^{T} ∑_{i,i′=1}^{M} z_{i,t−1} z_{i′,t} log p_t(x_{i′} | x_i) )
s.t.  z_{i,t} ∈ {0, 1},  ∑_{i=1}^{M} z_{i,t} = 1,  ∀ i, t.   (10)

Figure 2: Left: Factor graph representing (12). Right: Messages from each factor to a variable node z_{i,t}.

It is important to note that if x_i becomes the representative of some items in Y, then ‖[z_{i,1} ⋯ z_{i,T}]‖_∞ equals 1. Hence, the number of representatives is given by ∑_{i=1}^{M} ‖[z_{i,1} ⋯ z_{i,T}]‖_∞. As a result, we can rewrite the cardinality potential in (4) as

Φ_card(r_1, ..., r_T) = exp( −λ ∑_{i=1}^{M} ‖[z_{i,1} ⋯ z_{i,T}]‖_∞ ).   (11)

Finally, considering a homogeneous Markov model for the dynamics of the source set, where p_t(·|·) = p(·|·), i.e., transitioning from x_i as the representative of y_{t−1} to x_{i′} as the representative of y_t does not depend on t, we propose to solve the optimization

max_{{z_{i,t}}} ∑_{t=1}^{T} ∑_{i=1}^{M} −z_{i,t} d_{i,t} − λ ∑_{i=1}^{M} ‖[z_{i,1} ⋯ z_{i,T}]‖_∞ + β ( ∑_{i=1}^{M} z_{i,1} log p_1(x_i) + ∑_{t=2}^{T} ∑_{i,i′=1}^{M} z_{i,t−1} z_{i′,t} log p(x_{i′} | x_i) )
s.t.  z_{i,t} ∈ {0, 1},  ∑_{i=1}^{M} z_{i,t} = 1,  ∀ i, t.   (12)

In our proposed formulation above, we assume that the dissimilarities {d_{i,t}} and the dynamic models, i.e., the probabilities p_1(·) and p(·|·), are known. These models can be given by prior knowledge or learned from training data, as we show in the experiments. It is important to notice that the optimization in (12) is non-convex, due to the binary optimization variables and the quadratic terms in the objective function, whose coefficient matrix is not necessarily positive semi-definite (this can easily be seen when p(x_{i′}|x_i) ≠ p(x_i|x_{i′}) for some i, i′). In the next section, we treat (12) as MAP inference over binary random variables and develop a message passing algorithm to find the hidden values {z_{i,t}}. Once we solve the optimization in (12), we can obtain the representatives as the items of X for which z_{i,t} is non-zero for some t. Moreover, we can obtain the segmentation of the sequential items in Y according to their assignments to the representatives. In fact, the sequence of representatives obtained by our proposed optimization in (12) not only corresponds to diverse items that well encode the sequential target data, but is also compatible with the underlying dynamics of the source data.

Remark 1 Without the dynamic potential, i.e., with β = 0, our proposed optimization in (12) reduces to the uncapacitated facility location objective. Hence, our framework generalizes facility location to sequential data by considering transition dynamics among facilities (source set items). On the other hand, if we assume uniform distributions for the initial and transition probabilities, the dynamic term (the last term) in our objective function becomes a constant, and our formulation again reduces to uncapacitated facility location. As a result, our framework generalizes facility location by allowing arbitrary initial and transition probabilities on X instead of a uniform distribution.
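As an aside on the structure of (12): if the cardinality term is dropped (λ = 0), the remaining encoding-plus-dynamics objective decomposes along a chain and can be maximized exactly by dynamic programming. The sketch below (our own illustration, not the paper's message passing algorithm) does exactly that; it helps build intuition for why the ℓ∞ cardinality term is what couples all time steps together and makes the full problem hard:

```python
import numpy as np

def viterbi_assignment(d, log_p1, log_P, beta):
    """Exact maximizer of (12) in the special case lam = 0, via Viterbi.
    d: (M, T) dissimilarities; log_p1: (M,) initial log-probabilities;
    log_P: (M, M) transition log-probabilities."""
    M, T = d.shape
    score = -d[:, 0] + beta * log_p1          # best score ending in each state at t = 0
    back = np.zeros((T, M), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + beta * log_P  # cand[i, j]: come from i, move to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) - d[:, t]
    r = [int(score.argmax())]                 # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        r.append(int(back[t, r[-1]]))
    return r[::-1]
```

With λ > 0, no such chain decomposition exists, which is what motivates the max-sum message passing algorithm developed in the next section.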
3 Message Passing for Sequential Subset Selection

In this section, we develop an efficient message passing algorithm to solve the proposed optimization in (12). To do so, we treat sequential subset selection as MAP inference, where {z_{i,t}} correspond to binary random variables whose joint log-likelihood is given by the objective function in (12). We represent the log-likelihood, i.e., the objective function in (12), with a factor graph [42], shown in Figure 2. Recall that a factor graph is a bipartite graph that consists of variable nodes and factor nodes, where every factor evaluates a potential function over the variables it is connected to. The log-likelihood is then proportional to the sum of all factor potentials. To form the factors corresponding to the objective function in (12), we define m_{i,i′} ≜ log p(x_{i′} | x_i), and d̄_{i,t} ≜ d_{i,t} − log p_1(x_i) if t = 1 and d̄_{i,t} ≜ d_{i,t} for all other values of t. Denoting z_{i,:} ≜ [z_{i,1} ⋯ z_{i,T}]^⊤ and z_{:,t} ≜ [z_{1,t} ⋯ z_{M,t}]^⊤, we define factor potentials corresponding to our framework, shown in Figure 2. More specifically, we define the encoding and dynamic potentials, respectively, as

θ_{i,t}(z_{i,t}) ≜ −d̄_{i,t} z_{i,t},   θ^D_{i,t;i′,t+1}(z_{i,t}, z_{i′,t+1}) ≜ m_{i,i′} z_{i,t} z_{i′,t+1}.

Moreover, we define the cardinality and constraint potentials, respectively, as

θ^R_i(z_{i,:}) ≜ { −λ if ‖z_{i,:}‖_∞ > 0;  0 otherwise },   θ^C_t(z_{:,t}) ≜ { 0 if ∑_{i=1}^{M} z_{i,t} = 1;  −∞ otherwise }.

The MAP formulation of our sequential subset selection is then given by

max_{{z_{i,t}}} ∑_{t=1}^{T} ∑_{i=1}^{M} θ_{i,t}(z_{i,t}) + ∑_{i=1}^{M} θ^R_i(z_{i,:}) + ∑_{t=1}^{T} θ^C_t(z_{:,t}) + β ∑_{t=1}^{T−1} ∑_{i′=1}^{M} ∑_{i=1}^{M} θ^D_{i,t;i′,t+1}(z_{i,t}, z_{i′,t+1}).   (13)

To perform MAP inference, we use the max-sum message passing algorithm, which iteratively updates messages between variable and factor nodes in the graph. In our framework, the incoming messages to each variable node z_{i,t} are illustrated in Figure 2. Messages are computed as follows (please see the supplementary materials for the derivations).
σ_{i,t} ← −d̄_{i,t}   (14)
γ_{i,t;j,t+1} ← max{0, m_{i,j} + ρ} − max{0, ρ}   (15)
γ′_{i,t−1;j,t} ← max{0, m_{i,j} + ρ′} − max{0, ρ′}   (16)
η_{i,t} ← −max_{i′≠i} { α_{i′,t} − d̄_{i′,t} + ∑_{j=1}^{M} γ_{i′,t;j,t+1} + ∑_{j=1}^{M} γ′_{j,t−1;i′,t} }   (17)
α_{i,t} ← min{ 0, −λ + ∑_{k≠t} max{0, −d̄_{i,k} + η_{i,k} + ∑_{j=1}^{M} (γ_{i,k;j,k+1} + γ′_{j,k−1;i,k})} }   (18)

where, for brevity of notation, we have defined ρ and ρ′ as

ρ ≜ −d̄_{j,t+1} + α_{j,t+1} + η_{j,t+1} + ∑_{k=1}^{M} γ_{j,t+1;k,t+2} + ∑_{k≠i} γ′_{k,t;j,t+1},   (19)
ρ′ ≜ −d̄_{i,t−1} + α_{i,t−1} + η_{i,t−1} + ∑_{k≠j} γ_{i,t−1;k,t} + ∑_{k=1}^{M} γ′_{k,t−2;i,t−1}.   (20)

The update of messages continues until convergence, when each variable z_{i,t} is assigned the value that maximizes the sum of its incoming messages. It is important to note that the max-sum algorithm always converges to the optimal MAP assignment on trees, and has shown good performance on graphs with cycles in many applications, including ours. We also use a dampening factor ζ ∈ [0, 1) on the message updates, so that a message µ is computed as µ^(new) ← ζ µ^(old) + (1 − ζ) µ^(update).

4 Experiments

In this section, we evaluate the performance of our proposed method as well as the state of the art for subset selection on synthetic and real sequential data. For the real application, we consider the task of summarizing instructional videos to learn the key steps of the tasks described in the videos. In addition to our proposed message passing (MP) algorithm, we have implemented the optimization in (12) using an ADMM framework [43], where we relax the integer binary constraints to z_{i,t} ∈ [0, 1]. In practice both the MP and ADMM algorithms achieve similar results; hence, we report the performance of our method using the MP algorithm. We compare our proposed method, Sequential Facility Location (SeqFL), with several subset selection algorithms.
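The damped message update rule described at the end of Section 3 can be sketched generically as follows (our own illustration with a made-up update function standing in for one full sweep of the max-sum updates, not the paper's actual message schedule):

```python
import numpy as np

def run_damped(update_fn, mu0, zeta=0.5, tol=1e-6, max_iter=500):
    """Iterate mu <- zeta * mu + (1 - zeta) * update_fn(mu) until the messages
    stop changing; update_fn stands in for one sweep of the max-sum updates,
    and zeta in [0, 1) is the dampening factor."""
    mu = np.asarray(mu0, dtype=float)
    for _ in range(max_iter):
        mu_new = zeta * mu + (1.0 - zeta) * update_fn(mu)
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new
        mu = mu_new
    return mu

# A toy contraction with fixed point 2.0, standing in for a message sweep:
messages = run_damped(lambda m: 0.5 * m + 1.0, np.zeros(1))
```

Damping does not change the fixed points of the updates; it only interpolates toward them, which is what helps convergence on loopy factor graphs like the one in Figure 2.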
Figure 3: Encoding cost, dynamic cost, total cost and diversity score of different algorithms as a function of the number of selected representatives. The size of the source set is M = 50.

Since we study the performance of methods as a function of the size of the representative set, we use the fixed-size variant of DPP, called kDPP [44]. In addition to kDPP, we evaluate the performance of Markov kDPP (M-kDPP) [37], in which successive representatives are diverse among themselves and with respect to the previously selected representatives, as well as Sequential kDPP (Seq-kDPP) [3], which divides a time-series into multiple windows and successively selects diverse samples from each window conditioned on the previous window.² We also compare our method against DS3 [8] and the standard greedy method [28], which optimize the facility location objective function (which has no dynamic cost) via convex relaxation and greedy selection, respectively. To compare the performance of different methods, we evaluate several costs and scores that demonstrate the effectiveness of each method in terms of encoding, diversity and dynamic compatibility of the set of selected representatives.
More specifically, given the dissimilarities {d_{i,t}}, the dynamic model p_1(·) and p(·|·), the representative set Λ, and the assignment of points to representatives {z*_{i,t}}, we compute the encoding cost as ∑_{t=1}^{T} ∑_{i=1}^{M} d_{i,t} z*_{i,t}, the dynamic cost as −∑_{i=1}^{M} z*_{i,1} log p_1(x_i) − ∑_{t=2}^{T} ∑_{i,i′=1}^{M} z*_{i,t−1} z*_{i′,t} log p(x_{i′} | x_i), and the total cost as the sum of the encoding cost and the dynamic cost multiplied by β. We also compute the diversity score as det(K_Λ), where K denotes the kernel matrix used in DPP and its variants, and K_Λ denotes the submatrix of K indexed by Λ. We use Euclidean distances as dissimilarities and compute the corresponding inner-product kernel to run DPPs. Notice that the diversity score, which is the volume of the parallelotope spanned by the representatives, is what DPP methods aim to (approximately) maximize. As DPP methods only find representatives and not the assignment of points, we compute the z*_{i,t} by assigning each point to the closest representative in Λ, according to the kernel.

4.1 Synthetic Data

To demonstrate the effectiveness of our proposed method for sequential subset selection, we generate synthetic data where, for a source set X with M items corresponding to the means of M Gaussians, we generate a transition probability matrix among items and an initial probability vector. We draw a sequence of length T from the corresponding Markov model to form the target set Y and run different algorithms to generate k representatives. We then compute the average encoding and transition costs as well as the diversity scores for sequences drawn from the Markov model, as a function of k ∈ {1, 2, ..., M}. In the experiments we set M = 50 and T = 100. For a fixed β, we run SeqFL for different values of λ to select different numbers of representatives. Figure 3 illustrates the encoding, dynamic and total costs as well as the diversity scores of different methods, where for SeqFL we have set β = 0.02.
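The evaluation costs defined above are straightforward to compute from a one-hot assignment matrix; here is a small sketch (our own notation, with z of shape M × T, not the authors' code):

```python
import numpy as np

def evaluation_costs(z, d, p1, P, beta):
    """Encoding cost sum_{i,t} d[i,t] z[i,t], dynamic cost
    -log p1[r_1] - sum_{t>=2} log P[r_{t-1}, r_t], and total cost
    encoding + beta * dynamic, for a one-hot assignment matrix z (M x T)."""
    enc = float((d * z).sum())
    r = z.argmax(axis=0)  # representative index at each time step
    dyn = -float(np.log(p1[r[0]]))
    dyn -= float(sum(np.log(P[r[t - 1], r[t]]) for t in range(1, len(r))))
    return enc, dyn, enc + beta * dyn
```

The same function applies to the baselines' outputs, since each method's representatives are first converted to assignments by mapping every target item to its closest representative.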
Notice that our proposed method consistently obtains lower encoding, dynamic and total costs for all numbers of representatives, demonstrating its effectiveness for obtaining a sequence of informative representatives that are compatible according to the underlying dynamics. It is important to notice that although our method does not maximize the diversity score, used and optimized in kDPP and its variants, it achieves slightly better diversity scores (higher is better) than kDPP and M-kDPP.

2 To have a fair comparison and to select a fixed number of representatives, we modify the SeqDPP method [3] and implement Seq-kDPP, where k representatives are chosen in each window.

[Figure 4: Number of representatives, encoding cost, dynamic cost and diversity score of our proposed method (SeqFL) as a function of the parameters (β, λ).]

Table 1: Precision (P), Recall (R) and F-score for the summarization of instructional videos for five tasks.

Task                     kDPP          M-kDPP        Seq-kDPP      DS3           SeqFL
Change tire   (P, R)     (0.56, 0.50)  (0.55, 0.60)  (0.44, 0.40)  (0.56, 0.50)  (0.60, 0.60)
              F-score    0.53          0.57          0.42          0.53          0.60
Make coffee   (P, R)     (0.38, 0.33)  (0.50, 0.44)  (0.63, 0.56)  (0.50, 0.56)  (0.50, 0.56)
              F-score    0.35          0.47          0.59          0.53          0.53
CPR           (P, R)     (0.71, 0.71)  (0.71, 0.71)  (0.71, 0.71)  (0.71, 0.71)  (0.83, 0.71)
              F-score    0.71          0.71          0.71          0.71          0.77
Jump car      (P, R)     (0.50, 0.50)  (0.56, 0.50)  (0.56, 0.50)  (0.50, 0.50)  (0.60, 0.60)
              F-score    0.50          0.53          0.53          0.50          0.60
Repot plant   (P, R)     (0.57, 0.67)  (0.60, 0.50)  (0.57, 0.67)  (0.57, 0.67)  (0.80, 0.67)
              F-score    0.62          0.55          0.62          0.62          0.73
All tasks     (P, R)     (0.54, 0.54)  (0.58, 0.55)  (0.58, 0.57)  (0.57, 0.59)  (0.67, 0.63)
              F-score    0.54          0.57          0.57          0.58          0.65
Figure 4 demonstrates the effect of the parameters (β, λ) on the solution of our proposed method. Notice that for a fixed β, as λ increases, we select a smaller number of representatives, hence the encoding cost increases. Also, for a fixed λ, as β increases, we put more emphasis on the dynamic compatibility of representatives, hence the dynamic cost decreases. On the other hand, the diversity score decreases for smaller λ, as we select more representatives, which become more redundant. The results in Figure 4 also demonstrate the robustness of our method to changes of the parameters.

4.2 Summarization of Instructional Videos

People learn how to perform tasks, such as assembling a device or cooking a recipe, by watching instructional videos, of which a large number are available on the internet. Summarization of instructional videos helps to learn the grammar of a task in terms of the key activities or procedures that need to be performed. Moreover, there is a logical way in which the key actions or procedures are connected, which emphasizes the importance of using the dynamic model of the data when performing summarization. We apply SeqFL to the task of summarizing instructional videos, in order to automatically learn the sequence of key actions needed to perform a task. We use videos from the instructional video dataset [45], which consists of 30 instructional videos for each of five activities. The dataset also provides labels for the frames that contain the main steps required to perform each task. We preprocess the videos by segmenting each video into superframes [46] and obtain features using a deep neural network that we have constructed for feature extraction for summarization tasks. We use 60% of the videos from each task as the training set to build an HMM model whose states form the source set, X.
[Figure 5: Ground-truth and the automatic summarization result of our method (SeqFL) for the task ‘CPR’. SeqFL: Give Compression, Check Breathing, Give Breath, Give Compression, Give Breath, Give Compression. Ground truth: Check Response, Open Airway, Check Breathing, Give Breath, Give Compression, Give Breath, Give Compression.]

[Figure 6: Ground-truth and the summarization result of our method (SeqFL) for the task ‘Repot a Plant’. SeqFL: Put Soil, Add Top, Loosen Root, Place Plant, Add Top. Ground truth: Put Soil, Tap Pot, Take Plant, Loosen Root, Place Plant, Add Top.]

For each of the remaining 40% of the videos, we set Y to be the sequence of features extracted from the superframes of the video. Using the learned dynamic model, we apply our method to summarize each of these remaining videos. The summary for each video is the set of representative elements of X, i.e., selected states of the HMM. The assignment of representatives to superframes gives the ordering of the representatives, i.e., the order in which the key actions are performed. For evaluation, we map each representative state to an action label. To do so, we use the ground-truth labels of the training videos, assigning a label to each representative state based on its five nearest neighbors in the training set. The summary for each video is then an assignment of each superframe in the video to one of the representative action labels. Since each video may show each action performed for a different length of time, we remove consecutive repeated labels to form the list of actions performed, hence discarding the duration of each action. To construct the final summary for each method on a given task, we align the lists of summary actions of all the test videos using the alignment method of [45] for several numbers of slots. For each method, we choose the number of HMM states and the number of alignment slots that achieve the best performance.
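The two label post-processing steps described above — labeling a representative state via its five nearest neighbors and collapsing consecutive repeated labels — can be sketched as follows. This is a minimal illustration under our own naming; the feature arrays and label strings are hypothetical, not from the authors' code:

```python
import numpy as np

def collapse_repeats(labels):
    """Remove consecutive repeated labels, keeping only the action order."""
    out = []
    for lab in labels:
        if not out or out[-1] != lab:
            out.append(lab)
    return out

def label_state(state_feat, train_feats, train_labels, k=5):
    """Label a representative state by majority vote among its k nearest
    neighbors (Euclidean distance) in the labeled training set."""
    dists = np.linalg.norm(train_feats - state_feat, axis=1)
    votes = [train_labels[i] for i in np.argsort(dists)[:k]]
    return max(set(votes), key=votes.count)
```

For example, a per-superframe label sequence such as ["pour", "pour", "stir", "stir", "pour"] collapses to the action list ["pour", "stir", "pour"].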
Given ground-truth summaries, we compute the precision, recall and F-score of the various methods (see the supplementary materials for details). Table 1 shows the results. Notice that existing methods, which do not incorporate the dynamics of the data for summarization, perform similarly to each other on most tasks. In particular, the results show that the sequential diversity promoted by Seq-kDPP and M-kDPP is not sufficient for capturing the important steps of tasks. On the other hand, for most tasks and over the entire dataset, our method (SeqFL) significantly outperforms the other algorithms, better recovering the sequence of important steps needed to perform a task, thanks to the ability of our framework to incorporate the underlying dynamics of the data. Figures 5 and 6 show the ground truth and the summaries produced by our method for the tasks ‘CPR’ and ‘Repot a Plant’, respectively. Notice that SeqFL captures the main steps and their ordering sufficiently well for these tasks. However, for each task, SeqFL misses two of the ground-truth steps. We believe this can be overcome using larger datasets and more effective feature extraction methods for summarization.

5 Conclusions and Future Work

We developed a new framework for sequential subset selection that takes advantage of the underlying dynamic models of data, promoting the selection of a set of representatives that are compatible with those dynamic models. Through experiments on synthetic and real data, we showed the effectiveness of our method for the summarization of sequential data. Our ongoing research includes the development of fast greedy algorithms for our sequential subset selection formulation, the investigation of the theoretical guarantees of our method, as well as the development of more effective summarization-oriented feature extraction techniques and working with larger datasets for the task of instructional video summarization.
Acknowledgements

This work is supported by NSF award IIS-1657197 and startup funds from Northeastern University, College of Computer and Information Science.

References

[1] S. Garcia, J. Derrac, J. R. Cano, and F. Herrera, “Prototype selection for nearest neighbor classification: Taxonomy and empirical study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417–435, 2012.
[2] E. Elhamifar and M. C. De-Paolis-Kaluza, “Online summarization via submodular and convex optimization,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[3] B. Gong, W. Chao, K. Grauman, and F. Sha, “Diverse sequential subset selection for supervised video summarization,” in Neural Information Processing Systems, 2014.
[4] I. Simon, N. Snavely, and S. M. Seitz, “Scene summarization for online image collections,” in IEEE International Conference on Computer Vision, 2007.
[5] H. Lin and J. Bilmes, “Learning mixtures of submodular shells with application to document summarization,” in Conference on Uncertainty in Artificial Intelligence, 2012.
[6] A. Kulesza and B. Taskar, “Determinantal point processes for machine learning,” Foundations and Trends in Machine Learning, vol. 5, 2012.
[7] B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, 2007.
[8] E. Elhamifar, G. Sapiro, and S. S. Sastry, “Dissimilarity-based sparse subset selection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[9] E. Elhamifar, G. Sapiro, and R. Vidal, “Finding exemplars from pairwise dissimilarities via simultaneous sparse recovery,” Neural Information Processing Systems, 2012.
[10] G. Kim, E. Xing, L. Fei-Fei, and T. Kanade, “Distributed cosegmentation via submodular optimization on anisotropic diffusion,” in International Conference on Computer Vision, 2011.
[11] A. Shah and Z.
Ghahramani, “Determinantal clustering process – a nonparametric bayesian approach to kernel based semi-supervised clustering,” in Conference on Uncertainty in Artificial Intelligence, 2013.
[12] R. Reichart and A. Korhonen, “Improved lexical acquisition through dpp-based verb clustering,” in Conference of the Association for Computational Linguistics, 2013.
[13] E. Elhamifar, S. Burden, and S. S. Sastry, “Adaptive piecewise-affine inverse modeling of hybrid dynamical systems,” in World Congress of the International Federation of Automatic Control (IFAC), 2014.
[14] E. Elhamifar and S. S. Sastry, “Energy disaggregation via learning ‘powerlets’ and sparse coding,” in AAAI Conference on Artificial Intelligence, 2015.
[15] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, 2003.
[16] I. Misra, A. Shrivastava, and M. Hebert, “Data-driven exemplar model selection,” in Winter Conference on Applications of Computer Vision, 2014.
[17] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta, “Robust submodular observation selection,” Journal of Machine Learning Research, vol. 9, 2008.
[18] S. Joshi and S. Boyd, “Sensor selection via convex optimization,” IEEE Transactions on Signal Processing, vol. 57, 2009.
[19] J. Hartline, V. S. Mirrokni, and M. Sundararajan, “Optimal marketing strategies over social networks,” in World Wide Web Conference, 2008.
[20] D. McSherry, “Diversity-conscious retrieval,” in Advances in Case-Based Reasoning, 2002.
[21] R. Duda, P. Hart, and D. Stork, Pattern Classification. Wiley-Interscience, October 2004.
[22] M. Aharon, M. Elad, and A. M. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[23] L. Rabiner, “A tutorial on hidden markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, 1989.
[24] F.
Hadlock, “Finding a maximum cut of a planar graph in polynomial time,” SIAM Journal on Computing, vol. 4, 1975.
[25] R. Motwani and P. Raghavan, Randomized Algorithms. Cambridge University Press, New York, 1995.
[26] J. Carbonell and J. Goldstein, “The use of mmr, diversity-based reranking for reordering documents and producing summaries,” in SIGIR, 1998.
[27] P. B. Mirchandani and R. L. Francis, Discrete Location Theory. Wiley, 1990.
[28] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions,” Mathematical Programming, vol. 14, 1978.
[29] E. Elhamifar, G. Sapiro, and R. Vidal, “See all by looking at a few: Sparse modeling for finding representative objects,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[30] E. Esser, M. Moller, S. Osher, G. Sapiro, and J. Xin, “A convex model for non-negative matrix factorization and dimensionality reduction on physical space,” IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3239–3252, 2012.
[31] A. Borodin and G. Olshanski, “Distributions on partitions, point processes, and the hypergeometric kernel,” Communications in Mathematical Physics, vol. 211, 2000.
[32] U. Feige, “A threshold of ln n for approximating set cover,” Journal of the ACM, 1998.
[33] T. Gonzalez, “Clustering to minimize the maximum intercluster distance,” Theoretical Computer Science, vol. 38, 1985.
[34] A. Civril and M. Magdon-Ismail, “On selecting a maximum volume sub-matrix of a matrix and related problems,” Theoretical Computer Science, vol. 410, 2009.
[35] P. Awasthi, A. S. Bandeira, M. Charikar, R. Krishnaswamy, S. Villar, and R. Ward, “Relax, no need to round: Integrality of clustering formulations,” in Conference on Innovations in Theoretical Computer Science (ITCS), 2015.
[36] A. Nellore and R. Ward, “Recovery guarantees for exemplar-based clustering,” in Information and Computation, 2015.
[37] R. H. Affandi, A. Kulesza, and E. B.
Fox, “Markov determinantal point processes,” in Conference on Uncertainty in Artificial Intelligence, 2012.
[38] S. Tschiatschek, A. Singla, and A. Krause, “Selecting sequences of items via submodular maximization,” in AAAI Conference on Artificial Intelligence, 2017.
[39] Z. Ghahramani and M. I. Jordan, “Factorial hidden markov models,” Machine Learning, vol. 29, no. 2-3, 1997.
[40] Z. Ghahramani and S. Roweis, “Learning nonlinear dynamical systems using an em algorithm,” in Neural Information Processing Systems, 2008.
[41] C. Bishop, Pattern Recognition and Machine Learning. New York: Springer, 2007.
[42] F. Kschischang, B. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
[43] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
[44] A. Kulesza and B. Taskar, “k-dpps: Fixed-size determinantal point processes,” in International Conference on Machine Learning, 2011.
[45] J.-B. Alayrac, P. Bojanowski, N. Agrawal, I. Laptev, J. Sivic, and S. Lacoste-Julien, “Unsupervised learning from narrated instruction videos,” in Computer Vision and Pattern Recognition (CVPR), 2016.
[46] M. Gygli, H. Grabner, H. Riemenschneider, and L. V. Gool, “Creating summaries from user videos,” in European Conference on Computer Vision, 2014.
Z-Forcing: Training Stochastic Recurrent Networks

Anirudh Goyal (MILA, Université de Montréal), Alessandro Sordoni (Microsoft Maluuba), Marc-Alexandre Côté (Microsoft Maluuba), Nan Rosemary Ke (MILA, Polytechnique Montréal), Yoshua Bengio (MILA, Université de Montréal)

Abstract

Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortised variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.

1 Introduction

Due to their ability to capture long-term dependencies, autoregressive models such as recurrent neural networks (RNN) have become generative models of choice for dealing with sequential data.
By leveraging weight sharing across timesteps, they can model variable-length sequences within a fixed parameter space. RNN dynamics involve a hidden state that is updated at each timestep to summarize all the information seen previously in the sequence. Given the hidden state at the current timestep, the network predicts the desired output, which in many cases corresponds to the next input in the sequence. Due to the deterministic evolution of the hidden state, RNNs capture the entropy in the observed sequences by shaping conditional output distributions for each step, which are usually of simple parametric form, i.e. unimodal or mixtures of unimodal distributions. This may be insufficient for highly structured natural sequences, where there are correlations between output variables at the same step, i.e. simultaneities (Boulanger-Lewandowski et al., 2012), and complex dependencies between variables at different timesteps, i.e. long-term dependencies. For these reasons, recent efforts resort to highly multi-modal output distributions by augmenting the RNN with stochastic latent variables trained by amortised variational inference, i.e. the variational auto-encoding (VAE) framework (Kingma and Welling, 2014; Fraccaro et al., 2016). The VAE framework allows efficient approximate inference by parametrizing the approximate posterior and generative model with neural networks trainable end-to-end by backpropagation.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Another motivation for including stochastic latent variables in autoregressive models is to infer, from the observed variables in the sequence (e.g. pixels or sound-waves), higher-level abstractions (e.g. objects or speakers).
Disentangling the factors of variation in this way is appealing, as it would increase high-level control during generation, ease semi-supervised and transfer learning, and enhance the interpretability of the trained model (Kingma et al., 2014; Hu et al., 2017). Stochastic recurrent models proposed in the literature vary in the way they use the stochastic variables to perform output prediction and in how they parametrize the posterior approximation for variational inference. In this paper, we propose a stochastic recurrent generative model that incorporates into a single framework successful techniques from earlier models. We associate a latent variable with each timestep in the generation process. Similar to Fraccaro et al. (2016), we use a (deterministic) RNN that runs backwards through the sequence to form our approximate posterior, allowing it to capture the future of the sequence. However, akin to Chung et al. (2015) and Bayer and Osendorfer (2014), the latent variables are used to condition the recurrent dynamics for future steps, thus injecting high-level decisions about the upcoming elements of the output sequence. Our architectural choices are motivated by interpreting the latent variables as encoding a “plan” for the future of the sequence. The latent plan is injected into the recurrent dynamics in order to shape the distribution of future hidden states. We show that mixing a stochastic forward pass, a conditional prior and a backward recognition network helps build effective stochastic recurrent models. The recent surge in generative models suggests that extracting meaningful latent representations is difficult when using a powerful autoregressive decoder, i.e. the latter captures well enough most of the entropy in the data distribution (Bowman et al., 2015; Kingma et al., 2016; Chen et al., 2017; Gulrajani et al., 2017).
We show that by using an auxiliary, task-agnostic loss, we ease the training of the latent variables, which, in turn, helps achieve higher performance on the tasks at hand. The latent variables in our model are forced to contain useful information by predicting the state of the backward encoder, i.e. by predicting the future information in the sequence. Our work provides the following contributions:

• We unify several successful architectural choices into one generative stochastic model for sequences: a backward posterior, a conditional prior, and latent variables that condition the hidden dynamics of the network. Our model achieves state-of-the-art results in speech modeling.

• We propose a simple way of improving model performance by providing the latent variables with an auxiliary, task-agnostic objective. In the explored tasks, the auxiliary cost yielded better performance than other strategies such as KL annealing. Finally, we show that the auxiliary signal helps the model learn interpretable representations in a language modeling task.

2 Background

We operate in the well-known VAE framework (Kingma and Ba, 2014; Burda et al., 2015; Rezende and Mohamed, 2015), a neural-network-based approach for training generative latent variable models. Let x be an observation of a random variable, taking values in X. We assume that the generation of x involves a latent variable z, taking values in Z, by means of a joint density pθ(x, z), parametrized by θ. Given a set of observed datapoints D = {x1, . . . , xn}, the goal of maximum likelihood estimation (MLE) is to estimate the parameters θ that maximize the marginal log-likelihood L(θ; D):

$\theta^{\ast} = \arg\max_{\theta} \mathcal{L}(\theta; \mathcal{D}) = \sum_{i=1}^{n} \log \int_{z} p_{\theta}(x_i, z)\, dz.$ (1)

Optimizing the marginal log-likelihood is usually intractable, due to the integration over the latent variables. A common approach is to maximize a variational lower bound on the marginal log-likelihood.
The evidence lower bound (ELBO) is obtained by introducing an approximate posterior qφ(z|x), yielding:

$\log p_{\theta}(x) \geq \mathbb{E}_{q_{\phi}(z|x)}\left[\log \frac{p_{\theta}(x, z)}{q_{\phi}(z|x)}\right] = \log p_{\theta}(x) - D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p_{\theta}(z|x)\right) = \mathcal{F}(x; \theta, \phi),$ (2)

where KL denotes the Kullback–Leibler divergence. The ELBO is particularly appealing because the bound is tight when the approximate posterior matches the true posterior, i.e. it reduces to the marginal log-likelihood.

[Figure 1: Computation graphs for generative models of sequences that use latent variables: STORN (Bayer and Osendorfer, 2014), VRNN (Chung et al., 2015), SRNN (Fraccaro et al., 2016) and our model. In this picture, we consider that the task of the generative model consists in predicting the next observation in the sequence, given previous ones. Diamonds represent deterministic states; zt and xt are respectively the latent variables and the sequence input at step t. Dashed lines represent the computation that is part of the inference model. Double lines indicate auxiliary predictions implied by the proposed auxiliary cost. Differently from VRNN and SRNN, in STORN and our model the latent variable zt participates in the prediction of the next step xt+1.]

The ELBO can also be rewritten as a minimum description length loss function (Honkela and Valpola, 2004):

$\mathcal{F}(x; \theta, \phi) = \mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right] - D_{\mathrm{KL}}\left(q_{\phi}(z|x) \,\|\, p_{\theta}(z)\right),$ (3)

where the second term measures the degree of dependence between x and z, i.e. if $D_{\mathrm{KL}}(q_{\phi}(z|x) \,\|\, p_{\theta}(z))$ is zero then z is independent of x. Usually, the parameters of the generative model pθ(x|z), the prior pθ(z) and the inference model qφ(z|x) are computed using neural networks. In this case, the ELBO can be maximized by gradient ascent on a Monte Carlo approximation of the expectation. For particularly simple parametric forms of qφ(z|x), e.g.
multivariate diagonal Gaussians or, more generally, for reparametrizable distributions (Kingma and Welling, 2014), one can backpropagate through the sampling process z ∼ qφ(z|x) by applying the reparametrization trick, which simulates sampling from qφ(z|x) by first sampling from a fixed distribution u, ϵ ∼ u(ϵ), and then applying a deterministic transformation z = fφ(x, ϵ). This makes the approach appealing in comparison to other approximate inference approaches. In order to obtain a better generative model overall, much effort has been put into augmenting the capacity of the approximate posteriors (Rezende and Mohamed, 2015; Kingma et al., 2016; Louizos and Welling, 2017), the prior distribution (Chen et al., 2017; Serban et al., 2017a) and the decoder (Gulrajani et al., 2017; Oord et al., 2016). By having more powerful decoders pθ(x|z), one can model more complex distributions over X. This idea has been explored when applying VAEs to sequences x = (x1, . . . , xT ), where the decoding distribution pθ(x|z) is modeled by an autoregressive model, $p_{\theta}(x|z) = \prod_{t} p_{\theta}(x_t \mid z, x_{1:t-1})$ (Bayer and Osendorfer, 2014; Chung et al., 2015; Fraccaro et al., 2016). In these models, z typically decomposes as a sequence of latent variables, z = (z1, . . . , zT ), yielding $p_{\theta}(x|z) = \prod_{t} p_{\theta}(x_t \mid z_{1:t-1}, x_{1:t-1})$. We operate in this setting and, in the following section, we present our choices for parametrizing the generative model, the prior and the inference model.

3 Proposed Approach

In Figure 1, we report the dependencies in the inference and generative parts of our model, compared to existing models. From a broad perspective, we use a backward recurrent network for the approximate posterior (akin to SRNN (Fraccaro et al., 2016)), we condition the recurrent state of the forward autoregressive model on the stochastic variables, and we use a conditional prior (akin to VRNN (Chung et al., 2015) and STORN (Bayer and Osendorfer, 2014)).
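The reparametrization trick for a diagonal Gaussian can be written in a few lines. This is a generic NumPy sketch, not code from the paper:

```python
import numpy as np

def sample_gaussian(mu, log_sigma, rng):
    """Reparametrized sample z = mu + sigma * eps, with eps ~ N(0, I).

    All randomness lives in the fixed-noise draw eps, so z is a
    deterministic (hence differentiable) function of (mu, log_sigma).
    """
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(log_sigma) * eps

rng = np.random.default_rng(0)
z = sample_gaussian(np.ones(4), np.log(0.5) * np.ones(4), rng)  # one draw from N(1, 0.25 I)
```

Averaging many such draws recovers the target mean and standard deviation, while gradients with respect to mu and log_sigma flow through the deterministic transformation.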
In order to make better use of the latent variables, we use auxiliary costs (double arrows in Figure 1) to force the latent variables to encode information about the future. In the following, we describe each of these components.

3.1 Generative Model

Decoder. Given a sequence of observations x = (x1, . . . , xT ) and a desired set of labels or predictions y = (y1, . . . , yT ), we assume that there exists a corresponding set of stochastic latent variables z = (z1, . . . , zT ). In the following, without loss of generality, we suppose that the set of predictions corresponds to a shifted version of the input sequence, i.e. the model tries to predict the next observation given the previous ones, a common setting in language and speech modeling (Fraccaro et al., 2016; Chung et al., 2015). The generative model couples observations and latent variables by using an autoregressive model, i.e. an LSTM architecture (Hochreiter and Schmidhuber, 1997), that runs through the sequence:

$h_t = \overrightarrow{f}(x_t, h_{t-1}, z_t).$ (4)

The parameters of the conditional probability distribution over the next observation, pθ(xt+1|x1:t, z1:t), are computed by a multi-layered feed-forward network that conditions on ht, f(o)(ht). In the case of continuous-valued observations, f(o) may output the µ, log σ parameters of a Gaussian distribution, or the categorical proportions in the case of one-hot predictions. Note that, even if f(o) is a simple unimodal distribution, the marginal distribution pθ(xt+1|x1:t) may be highly multimodal, due to the integration over the sequence of latent variables z. Note also that f(o) does not condition on zt, i.e. zt is not directly used in the computation of the output conditional probabilities. We observed better performance by preventing the latent variables from directly producing the next output.
Prior. The parameters of the prior distribution pθ(zt|x1:t, z1:t−1) over each latent variable are obtained by a non-linear transformation of the previous hidden state of the forward network. A common choice in the VAE framework is to use Gaussian latent variables. Therefore, f(p) produces the parameters of a diagonal multivariate Gaussian distribution:

$p_{\theta}(z_t \mid x_{1:t}, z_{1:t-1}) = \mathcal{N}(z_t; \mu^{(p)}_t, \sigma^{(p)}_t) \quad \text{where} \quad [\mu^{(p)}_t, \log \sigma^{(p)}_t] = f^{(p)}(h_{t-1}).$ (5)

This type of conditional prior has proven to be useful in previous work (Chung et al., 2015).

3.2 Inference Model

The inference model is responsible for approximating the true posterior over the latent variables p(z1, . . . , zT |x) in order to provide a tractable lower bound on the log-likelihood. Our posterior approximation uses an LSTM processing the sequence x backwards:

$b_t = \overleftarrow{f}(x_{t+1}, b_{t+1}).$ (6)

Each state bt contains information about the future of the sequence and can be used to shape the approximate posterior for the latent variable zt. As the forward LSTM uses zt to condition future predictions, the latent variable can directly inform the recurrent dynamics about the future states, acting as a “plan” for the future of the sequence. This information is channeled into the posterior distribution by a feed-forward neural network f(q) taking as input both the previous forward state ht−1 and the backward state bt:

$q_{\phi}(z_t \mid x) = \mathcal{N}(z_t; \mu^{(q)}_t, \sigma^{(q)}_t) \quad \text{where} \quad [\mu^{(q)}_t, \log \sigma^{(q)}_t] = f^{(q)}(h_{t-1}, b_t).$ (7)

By injecting stochasticity into the hidden state of the forward recurrent model, the true posterior distribution for a given variable zt depends on all the variables zt+1:T after zt through the dependence on ht+1:T . In order to formulate an efficient posterior approximation, we drop the dependence on zt+1:T . This is at the cost of introducing an intrinsic bias in the posterior approximation, e.g. we may exclude the true posterior from the space of functions modelled by our function approximator.
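One step of the prior/posterior computation in Eqs. (5)–(7) can be sketched as follows. The linear maps below stand in for the networks f(p) and f(q); all sizes, weights and names are hypothetical placeholders of our own, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
H, B, Z = 8, 8, 4          # forward, backward and latent sizes (arbitrary)

# Hypothetical linear maps standing in for f^(p) and f^(q).
W_p = rng.normal(scale=0.1, size=(H, 2 * Z))
W_q = rng.normal(scale=0.1, size=(H + B, 2 * Z))

def prior_params(h_prev):
    """[mu, log_sigma] of the conditional prior p(z_t | x_1:t, z_1:t-1)."""
    out = h_prev @ W_p
    return out[:Z], out[Z:]

def posterior_params(h_prev, b_t):
    """[mu, log_sigma] of the approximate posterior q(z_t | x)."""
    out = np.concatenate([h_prev, b_t]) @ W_q
    return out[:Z], out[Z:]

h_prev = np.tanh(rng.normal(size=H))   # previous forward state
b_t = np.tanh(rng.normal(size=B))      # backward state summarizing the future
mu_q, log_sig_q = posterior_params(h_prev, b_t)
z_t = mu_q + np.exp(log_sig_q) * rng.standard_normal(Z)  # reparametrized draw
```

The sampled z_t would then be fed back into the forward recurrence of Eq. (4), while the prior parameters from prior_params enter the KL term of the training objective.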
This is in contrast with SRNN (Fraccaro et al., 2016), in which the posterior distribution factorizes in a tractable manner at the cost of not including the latent variables in the forward autoregressive dynamics, i.e. the latent variables do not condition the hidden state, but only help in shaping a multi-modal distribution for the current prediction.

3.3 Auxiliary Cost

In various domains, such as text and images, it has been empirically observed that it is difficult to make use of latent variables when they are coupled with a strong autoregressive decoder (Bowman et al., 2015; Gulrajani et al., 2017; Chen et al., 2017). The difficulty in learning meaningful latent variables is, in many cases of interest, related to the fact that the abstractions underlying the observed data may be encoded with a smaller number of bits than the observed variables. For example, there are multiple ways of picturing a particular “cat” (e.g. different poses, colors or lighting) without varying the more abstract properties of the concept “cat”. In these cases, the maximum-likelihood training objective may not be sensitive to how well abstractions are encoded, causing the latent variables to “shut off”, i.e. the local correlations at the pixel level may be too strong and bias the learning process towards finding parameter solutions for which the latent variables are unused. In these cases, the posterior approximation tends to provide too weak or noisy a signal, due to the variance induced by the stochastic gradient approximation. As a result, the decoder may learn to ignore z and instead rely solely on the autoregressive properties of x, causing x and z to be independent, i.e. the KL term in Eq. 2 vanishes. Recent solutions to this problem generally propose to reduce the capacity of the autoregressive decoder (Bowman et al., 2015; Bachman, 2016; Chen et al., 2017; Semeniuta et al., 2017).
The constraints on the decoder capacity inherently bias the learning towards finding parameter solutions for which z and x are dependent. One of the shortcomings of this approach is that, in general, it may be hard to achieve the desired solutions by architecture search. Instead, we investigate whether it is useful to keep the expressiveness of the autoregressive decoder but force the latent variables to encode useful information by adding an auxiliary training signal for the latent variables alone. In practice, our results show that this auxiliary cost, albeit simple, helps achieve better performance on the objective of interest. Specifically, we consider training an additional conditional generative model of the backward states b = (b1, . . . , bT ) given the forward states:

$\log p_{\xi}(b \mid h) = \log \int_{z} p_{\xi}(b, z \mid h)\, dz \;\geq\; \mathbb{E}_{q_{\xi}(z \mid b, h)}\left[\log p_{\xi}(b \mid z) + \log p_{\xi}(z \mid h) - \log q_{\xi}(z \mid b, h)\right].$

This additional model is also trained through amortized variational inference. However, we share its prior pξ(z|h) and approximate posterior qξ(z|b, h) with those of the “primary” model (b is a deterministic function of x per Eq. 6, and the approximate posterior is conditioned on b). In practice, we solely learn the additional parameters ξ of the decoding model $p_{\xi}(b \mid z) = \prod_{t} p_{\xi}(b_t \mid z_t)$. The auxiliary reconstruction model trains zt to contain relevant information about the future of the sequence contained in the hidden state of the backward network, bt:

$p_{\xi}(b_t \mid z_t) = \mathcal{N}(\mu^{(a)}_t, \sigma^{(a)}_t) \quad \text{where} \quad [\mu^{(a)}_t, \log \sigma^{(a)}_t] = f^{(a)}(z_t).$ (8)

By means of the auxiliary reconstruction cost, the approximate posterior and prior of the primary model are trained with an additional signal that may help escape local minima due to the short-term reconstructions appearing in the lower bound, similarly to what has been recently noted in Karl et al. (2016).
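Both latent-variable terms used in this model have simple closed forms for diagonal Gaussians: the auxiliary reconstruction term log pξ(bt|zt) of Eq. (8) is a Gaussian log-density, and the KL between the approximate posterior of Eq. (7) and the conditional prior of Eq. (5) is analytic. A minimal NumPy sketch of both quantities (generic formulas, not the authors' code):

```python
import numpy as np

def gaussian_log_lik(b, mu, log_sigma):
    """log N(b; mu, diag(sigma^2)) — the form taken by the auxiliary
    reconstruction term log p_xi(b_t | z_t)."""
    var = np.exp(2.0 * log_sigma)
    return float(np.sum(-0.5 * np.log(2.0 * np.pi * var)
                        - (b - mu) ** 2 / (2.0 * var)))

def kl_diag_gaussians(mu_q, log_sig_q, mu_p, log_sig_p):
    """Closed-form D_KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for
    diagonal Gaussians, summed over latent dimensions."""
    var_q = np.exp(2.0 * log_sig_q)
    var_p = np.exp(2.0 * log_sig_p)
    return float(np.sum(log_sig_p - log_sig_q
                        + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p)
                        - 0.5))
```

In the training objective, the reconstruction term is scaled by α and added to the data log-likelihood, while the KL term enters with a negative sign.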
3.4 Learning

The training objective is a regularized version of the lower bound on the data log-likelihood based on the variational free energy, where the regularization is imposed by the auxiliary cost:

\mathcal{L}(x; \theta, \phi, \xi) = \sum_t \mathbb{E}_{q_\phi(z_t|x)}\big[\log p_\theta(x_{t+1}|x_{1:t}, z_{1:t}) + \alpha \log p_\xi(b_t|z_t)\big] - D_{\mathrm{KL}}\big(q_\phi(z_t|x_{1:T}) \,\|\, p_\theta(z_t|x_{1:t}, z_{1:t-1})\big). (9)

We learn the parameters of our model by backpropagation through time (Rumelhart et al., 1988) and approximate the expectation with one sample from the posterior q_\phi(z|x) by using reparametrization. When optimizing Eq. 9, we disconnect the gradients of the auxiliary prediction from the backward network, i.e. we do not use the gradients \nabla_\phi \log p_\xi(b_t|z_t) to train the parameters \phi of the approximate posterior: intuitively, the backward network should be agnostic about the auxiliary task assigned to the latent variables. This choice also performed better empirically. As the approximate posterior is trained only with the gradient flowing through the ELBO, the backward states b may receive a weak training signal early in training, which may hamper the usefulness of the auxiliary generative cost, e.g. all the backward states may be concentrated around the zero vector. Therefore, we additionally train the backward network to predict the output variables in reverse (see Figure 1):

\mathcal{L}(x; \theta, \phi, \xi) = \sum_t \Big( \mathbb{E}_{q_\phi(z_t|x)}\big[\log p_\theta(x_{t+1}|x_{1:t}, z_{1:t}) + \alpha \log p_\xi(b_t|z_t)\big] + \beta \log p_\xi(x_t|b_t) - D_{\mathrm{KL}}\big(q_\phi(z_t|x_{1:T}) \,\|\, p_\theta(z_t|x_{1:t}, z_{1:t-1})\big) \Big). (10)

3.5 Connection to previous models

Our model is similar to several previous stochastic recurrent models: as in STORN (Bayer and Osendorfer, 2014) and VRNN (Chung et al., 2015), the latent variables are provided as input to the autoregressive decoder. Unlike STORN, we use the conditional prior parametrization proposed in Chung et al. (2015). However, the generation process in the VRNN differs from our approach: in VRNN, z_t is used directly, along with h_{t-1}, to produce the next output x_t.
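Stepping back to the objective of Eq. 10 above: it is assembled from four per-timestep terms. The sketch below shows that assembly on already-evaluated log-probabilities; the hyperparameter values are illustrative, and the stop-gradient on the α-term (a framework-level detail) is only noted in a comment since plain numpy does not model gradients:

```python
import numpy as np

def z_forcing_objective(log_p_x, aux_log_p_b, back_log_p_x, kl, alpha, beta):
    """Per-sequence objective of Eq. 10 (to be maximized), from
    already-evaluated per-timestep terms:
      reconstruction + alpha * auxiliary cost
      + beta * backward reconstruction - KL.
    In a real implementation, the alpha-term's gradient is detached from
    the backward network, as described in the text."""
    return float(np.sum(log_p_x + alpha * aux_log_p_b
                        + beta * back_log_p_x - kl))

T = 5
log_p_x = np.full(T, -1.0)   # log p_theta(x_{t+1} | x_{1:t}, z_{1:t})
aux     = np.full(T, -2.0)   # log p_xi(b_t | z_t)
back    = np.full(T, -3.0)   # log p_xi(x_t | b_t)
kl      = np.full(T, 0.5)    # KL(q_phi(z_t | x) || p_theta(z_t | ...))

bound = z_forcing_objective(log_p_x, aux, back, kl, alpha=0.0025, beta=0.0025)
```

Setting α = β = 0 recovers the plain sequential ELBO.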
We found that the model performed better if we relieved the latent variables from producing the next output. VRNN has a "myopic" posterior, in the sense that the latent variables are not informed about the whole future of the sequence. SRNN (Fraccaro et al., 2016) addresses this issue by running the posterior backward in the sequence, thus providing future context for the current prediction. However, the autoregressive decoder is not informed about the future of the sequence through the latent variables. Several efforts have been made to bias the learning process towards parameter solutions for which the latent variables are used (Bowman et al., 2015; Karl et al., 2016; Kingma et al., 2016; Chen et al., 2017; Zhao et al., 2017). Bowman et al. (2015) tackle the problem in a language modeling setting by dropping words from the input at random, in order to weaken the autoregressive decoder, and by annealing the KL divergence term during training. We achieve similar latent interpolations by using our auxiliary cost. Similarly, Chen et al. (2017) propose to restrict the receptive field of the pixel-level decoder for image generation tasks. Kingma et al. (2016) propose to reserve some free bits of KL divergence. In parallel to our work, the idea of using a task-agnostic loss for the latent variables alone has also been considered in (Zhao et al., 2017). The authors force the latent variables to predict a bag-of-words representation of a dialog utterance. Instead, we work in a sequential setting, with one latent variable per timestep in the sequence.

4 Experiments

In this section, we evaluate our proposed model on diverse modeling tasks (speech, images and text). We show that our model can achieve state-of-the-art results on two speech modeling datasets: the Blizzard (King and Karaiskos, 2013) and TIMIT raw audio datasets (also used in Chung et al. (2015)).
Our approach also gives competitive results on sequential generation of MNIST (Salakhutdinov and Murray, 2008). For text, we show that the auxiliary cost helps the latent variables capture information about the latent structure of language (e.g. sequence length, sentiment). In all experiments, we used the ADAM optimizer (Kingma and Ba, 2014).

4.1 Speech Modeling and Sequential MNIST

Blizzard and TIMIT We test our model on two speech modeling datasets. Blizzard consists of 300 hours of English, spoken by a single female speaker. TIMIT has been widely used in speech recognition and consists of 6300 English sentences read by 630 speakers. We train the model directly on raw sequences represented as sequences of 200 real-valued amplitudes, normalized using the global mean and standard deviation of the training set. We adopt the same train, validation and test split as Chung et al. (2015). For Blizzard, we report the average log-likelihood for half-second sequences (Fraccaro et al., 2016), while for TIMIT we report the average log-likelihood for the sequences in the test set. In this setting, our models use a fully factorized multivariate Gaussian distribution as the output distribution for each timestep. To keep our model comparable with the state of the art, we keep the number of parameters comparable to those of SRNN (Fraccaro et al., 2016). Our forward/backward networks are LSTMs with 2048 recurrent units for Blizzard and 1024 recurrent units for TIMIT. The dimensionality of the Gaussian latent variables is 256. The prior f^(p), inference f^(q) and auxiliary f^(a) networks have a single hidden layer, with 1024 units for Blizzard and 512 units for TIMIT, and use leaky rectified nonlinearities with leakiness 1/3, clipped at ±3 (Fraccaro et al., 2016).
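The raw-audio preprocessing described above (global normalization, then slicing into 200-amplitude frames, one per model timestep) can be sketched as follows; the waveform here is a synthetic stand-in, not real Blizzard/TIMIT data:

```python
import numpy as np

def make_frames(waveform, train_mean, train_std, frame_len=200):
    """Normalize a raw waveform with the global training-set statistics
    and slice it into non-overlapping frames of `frame_len` real-valued
    amplitudes, one frame per model timestep."""
    x = (np.asarray(waveform) - train_mean) / train_std
    n = len(x) // frame_len
    return x[: n * frame_len].reshape(n, frame_len)

rng = np.random.default_rng(1)
wav = rng.normal(loc=3.0, scale=2.0, size=1050)   # synthetic stand-in audio
frames = make_frames(wav, train_mean=3.0, train_std=2.0)
```

Each row of `frames` is then modeled with the fully factorized Gaussian output distribution described in the text.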
For Blizzard, we use a learning rate of 0.0003 and a batch size of 128; for TIMIT, they are 0.001 and 32 respectively. Previous work anneals the KL term in the ELBO via a temperature weight during training (KL annealing) (Fraccaro et al., 2016; Chung et al., 2015). We report the results obtained by our model when trained both with and without KL annealing. When KL annealing is used, the temperature is linearly annealed from 0.2 to 1, with increments of 0.00005 after each update (Fraccaro et al., 2016). We show our results in Table 1 (left), along with results that were obtained by models of comparable size to SRNN. Similar to (Fraccaro et al., 2016; Chung et al., 2015), we report the conservative evidence lower bound on the log-likelihood.

Model                 Blizzard    TIMIT
RNN-Gauss             3539        -1900
RNN-GMM               7413        26643
VRNN-I-Gauss          ≥8933       ≥28340
VRNN-Gauss            ≥9223       ≥28805
VRNN-GMM              ≥9392       ≥28982
SRNN (smooth+resq)    ≥11991      ≥60550
Ours                  ≥14435      ≥68132
Ours + kla            ≥14226      ≥68903
Ours + aux            ≥15430      ≥69530
Ours + kla, aux       ≥15024      ≥70469

Models                                    MNIST
DBN 2hl (Germain et al., 2015)            ≈84.55
NADE (Uria et al., 2016)                  88.33
EoNADE-5 2hl (Raiko et al., 2014)         84.68
DLGM 8 (Salimans et al., 2014)            ≈85.51
DARN 1hl (Gregor et al., 2015)            ≈84.13
DRAW (Gregor et al., 2015)                ≤80.97
PixelVAE (Gulrajani et al., 2016)         ≈79.02▼
P-Forcing (3-layer) (Goyal et al., 2016)  79.58▼
PixelRNN (1-layer) (Oord et al., 2016)    80.75
PixelRNN (7-layer) (Oord et al., 2016)    79.20▼
MatNets (Bachman, 2016)                   78.50▼
Ours (1 layer)                            ≤80.60
Ours + aux (1 layer)                      ≤80.09

Table 1: On the left, we report the average log-likelihood per sequence on the test sets for the Blizzard and TIMIT datasets. "kla" and "aux" denote respectively KL annealing and the use of the proposed auxiliary cost. On the right, we report the test set negative log-likelihood for sequential MNIST, where ▼ denotes lower performance of our model with respect to the marked baselines. For MNIST, we observed that KL annealing hurts overall performance.
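The KL-annealing schedule quoted above (linear from 0.2 to 1, in increments of 0.00005 per update) is simple enough to write down exactly; the function name is ours:

```python
def kl_temperature(update, start=0.2, end=1.0, increment=0.00005):
    """Linear KL-annealing weight: starts at `start` and grows by
    `increment` after each parameter update, capped at `end`
    (the schedule values reported in the text)."""
    return min(end, start + increment * update)

# The weight multiplies the KL term of the ELBO during training and
# reaches 1 after (1.0 - 0.2) / 0.00005 = 16000 updates.
temps = [kl_temperature(u) for u in (0, 8000, 16000, 100000)]
```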
On Blizzard, the KL annealing strategy (Ours + kla) is effective in the first training iterations, but eventually converges to a slightly lower log-likelihood than the model trained without KL annealing (Ours). We explored different annealing strategies but did not observe any improvement in performance. Models trained with the proposed auxiliary cost outperform models trained with the KL annealing strategy on both datasets. On TIMIT, there appears to be a slight synergistic effect between KL annealing and the auxiliary cost. Though not explicitly reported in the table, similar performance gains were observed on the training sets.

Sequential MNIST The task consists of the pixel-by-pixel generation of binarized MNIST digits. We use the standard binarized MNIST dataset of Larochelle and Murray (2011). Both forward and backward networks are LSTMs with one layer of 1024 hidden units. We use a learning rate of 0.001 and a batch size of 32. We report the results in Table 1 (right). In this setting, we observed that KL annealing hurt the performance of the model. Although architecturally flat, our model is competitive with strong baselines, e.g. DRAW (Gregor et al., 2015), and is outperformed by deeper versions of autoregressive models with latent variables, i.e. PixelVAE (gated) (Gulrajani et al., 2016), and by deep autoregressive models such as PixelRNN (Oord et al., 2016) and MatNets (Bachman, 2016).

4.2 Language modeling

A well-known result in language modeling tasks is that the generative model tends to fit the observed data without storing information in the latent variables, i.e. the KL divergence term in the ELBO becomes zero (Bowman et al., 2015; Zhao et al., 2017; Serban et al., 2017b). We test our proposed stochastic recurrent model trained with the auxiliary cost on a medium-sized IMDB text corpus containing 350K movie reviews (Diao et al., 2014). Following the setting described in Hu et al.
(2017), we keep only sentences with fewer than 16 words and fix the vocabulary size to 16K words. We split the dataset into train/valid/test sets with ratios 85%, 5% and 10% respectively. Special delimiter tokens were added at the beginning and end of each sentence, but we only learn to generate the end-of-sentence token.

Figure 2: Evolution of the KL divergence term (measured in nats) in the ELBO with and without auxiliary cost during training for Blizzard (left) and TIMIT (right). We plot curves for models that performed best after hyper-parameter (KL annealing and auxiliary cost weights) selection on the validation set. The auxiliary cost puts pressure on the latent variables, resulting in higher KL divergence. Models trained with the auxiliary cost (Ours + aux) exhibit a more stable evolution of the KL divergence. Models trained with the auxiliary cost alone achieve better performance than using KL annealing alone (Ours + kla), and similar, or better for Blizzard, performance compared to using both KL annealing and the auxiliary cost (Ours + kla, aux).

Model        α, β     KL    Valid ELBO  Valid IWAE  Test ELBO  Test IWAE
Ours         0        0.12  53.93       52.40       54.67      53.11
Ours + aux   0.0025   3.03  55.71       52.54       56.57      53.37
Ours + aux   0.005    9.82  65.03       58.13       65.84      58.83

Table 2: IMDB language modeling results for models trained by maximizing the standard evidence lower bound. We report word perplexity as evaluated by both the ELBO and the IWAE bound, and the KL divergence between the approximate posterior and the prior distribution, for different values of the auxiliary cost hyperparameters α, β. The gap in perplexity between the ELBO and IWAE (evaluated with 25 samples) increases with greater KL divergence values.
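The two perplexity columns of Table 2 come from two bounds on the same log-likelihood: the ELBO averages k log importance weights, while the IWAE bound takes the log of their average, which by Jensen's inequality is never looser. A self-contained sketch (synthetic log-weights, not real model outputs):

```python
import numpy as np

def elbo_estimate(log_w):
    """ELBO estimate: the average of log importance weights
    log_w[i] = log p(x, z_i) - log q(z_i | x)."""
    return float(np.mean(log_w))

def iwae_estimate(log_w):
    """IWAE bound from the same k weights: log of their average,
    computed stably via the log-sum-exp trick."""
    m = float(np.max(log_w))
    return m + float(np.log(np.mean(np.exp(log_w - m))))

def perplexity(total_log_bound, n_words):
    """Word perplexity implied by a log-likelihood (lower) bound
    summed over n_words tokens."""
    return float(np.exp(-total_log_bound / n_words))

rng = np.random.default_rng(2)
log_w = rng.normal(loc=-40.0, scale=3.0, size=25)  # 25 samples, as in Table 2
```

Since a tighter bound gives a smaller negative log-likelihood, IWAE perplexities in Table 2 are at most the ELBO ones.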
We use a single-layer LSTM with 500 recurrent hidden units, fix the dimensionality of the word embeddings to 300, and use 64-dimensional latent variables. All the f^(·) networks are single-layered with 500 hidden units and leaky ReLU activations. We used a learning rate of 0.001 and a batch size of 32. Results are shown in Table 2. As expected, it is hard to obtain better perplexity than a baseline model when latent variables are used in language models. We found that evaluating with the IWAE (Importance Weighted Autoencoder) bound (Burda et al., 2015) gave large improvements in perplexity. This observation highlights the fact that, in the text domain, the ELBO may severely underestimate the likelihood of the model: the approximate posterior may loosely match the true posterior, and the IWAE bound can correct for this mismatch by tightening the posterior approximation, i.e. the IWAE bound can be interpreted as the standard VAE lower bound with an implicit posterior distribution (Bachman and Precup, 2015). On the basis of this observation, we attempted training our models with the IWAE bound, but observed no noticeable improvement in validation perplexity.

We analyze whether the latent variables capture characteristics of language by interpolating in the latent space (Bowman et al., 2015). Given a sentence, we first infer the latent variables at each step by running the approximate posterior, and then concatenate them to form a contiguous latent encoding for the input sentence. We then perform linear interpolation in the latent space between the latent encodings of two sentences. At each step of the interpolation, the latent encoding is run through the decoder network to generate a sentence. We show the results in Table 3.

Interpolation between "this movie is so terrible . never watch ever" and "this movie is great . i want to watch it again !":

a    Argmax                                                Sampling
0.0  it ’s a movie that does n’t work !                    this film is more of a “ classic ”
0.1  it ’s a movie that does n’t work !                    i give it a 5 out of 10
0.2  it ’s a movie that does n’t work !                    i felt that the movie did n’t have any
0.3  it ’s a very powerful piece of film !                 i do n’t know what the film was about
0.4  it ’s a very powerful story about it !                the acting is good and the acting is very good
0.5  it ’s a very powerful story about a movie about life  the acting is great and the acting is good too
0.6  it ’s a very dark part of the film , eh ?             i give it a 7 out of 10 , kids
0.7  it ’s a very dark movie with a great ending ! !       the acting is pretty good and the story is great
0.8  it ’s a very dark movie with a great message here !   the best thing i ’ve seen before is in the film
0.9  it ’s a very dark one , but a great one !             funny movie , with some great performances
1.0  it ’s a very dark movie , but a great one !           but the acting is good and the story is really interesting

Interpolation between "( 1 / 10 ) violence : yes ." and "there was a lot of fun in this movie !":

a    Argmax                                        Sampling
0.0  greetings again from the darkness .           greetings again from the darkness .
0.1  “ oh , and no .                               “ let ’s screenplay it .
0.2  “ oh , and it is .                            rating : **** out of 5 .
0.3  well ... i do n’t know .                      i do n’t know what the film was about
0.4  so far , it ’s watchable .                    ( pg-13 ) violence , no .
0.5  so many of the characters are likable .       just give this movie a chance .
0.6  so many of the characters were likable .      so far , but not for children
0.7  so many of the characters have been there .   so many actors were excellent as well .
0.8  so many of them have fun with it .            there are a lot of things to describe .
0.9  so many of the characters go to the house !   so where ’s the title about the movie ?
1.0  so many of the characters go to the house !   as much though it ’s going to be funny !

Table 3: Results of linear interpolation in the latent space. The left column reports greedy argmax decoding, obtained by selecting, at each step of the decoding, the word with maximum probability under the model distribution, while the right column reports random samples from the model.
a is the interpolation parameter. In general, the latent variables seem to capture the length of the sentences.

5 Conclusion

In this paper, we proposed a recurrent stochastic generative model that builds upon recent architectures that use latent variables to condition the recurrent dynamics of the network. We augmented the inference network with a recurrent network that runs backward through the input sequence, and added a new auxiliary cost that forces the latent variables to reconstruct the state of that backward network, thus explicitly encoding a summary of future observations. The model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard. The proposed auxiliary cost, albeit simple, appears to promote the use of latent variables more effectively than other similar strategies such as KL annealing. In future work, it would be interesting to use a multi-task learning setting, e.g. sentiment analysis as in Hu et al. (2017). It would also be interesting to combine the proposed approach with more powerful autoregressive models, e.g. PixelRNN/PixelCNN (Oord et al., 2016).

Acknowledgments

The authors would like to thank Phil Bachman, Alex Lamb and Adam Trischler for the useful discussions. AG and YB would also like to thank NSERC, CIFAR, Google, Samsung, IBM and the Canada Research Chairs for funding, and Compute Canada and NVIDIA for computing resources. The authors would also like to express their gratitude towards those who contributed to Theano over the years (as it is no longer maintained), making it such a great tool.

References

Bachman, P. (2016). An architecture for deep, hierarchical generative models. In Advances in Neural Information Processing Systems, pages 4826–4834.

Bachman, P. and Precup, D. (2015). Training deep generative models: Variations on a theme.

Bayer, J. and Osendorfer, C. (2014). Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610.
Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.

Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., and Bengio, S. (2015). Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.

Burda, Y., Grosse, R., and Salakhutdinov, R. (2015). Importance weighted autoencoders. arXiv preprint arXiv:1509.00519.

Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. (2017). Variational lossy autoencoder. In Proc. of ICLR.

Chung, J., Kastner, K., Dinh, L., Goel, K., Courville, A. C., and Bengio, Y. (2015). A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988.

Diao, Q., Qiu, M., Wu, C.-Y., Smola, A. J., Jiang, J., and Wang, C. (2014). Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 193–202.

Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. (2016). Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems, pages 2199–2207.

Germain, M., Gregor, K., Murray, I., and Larochelle, H. (2015). MADE: Masked autoencoder for distribution estimation. In ICML, pages 881–889.

Goyal, A., Lamb, A., Zhang, Y., Zhang, S., Courville, A. C., and Bengio, Y. (2016). Professor forcing: A new algorithm for training recurrent networks. In Advances in Neural Information Processing Systems 29, pages 4601–4609.

Gregor, K., Danihelka, I., Graves, A., Rezende, D. J., and Wierstra, D. (2015). DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623.
Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., and Courville, A. (2016). PixelVAE: A latent variable model for natural images. arXiv preprint arXiv:1611.05013.

Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., and Courville, A. (2017). PixelVAE: A latent variable model for natural images. In Proc. of ICLR.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.

Honkela, A. and Valpola, H. (2004). Variational learning and bits-back coding: an information-theoretic view to Bayesian learning. IEEE Transactions on Neural Networks, 15(4):800–810.

Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., and Xing, E. P. (2017). Controllable text generation. arXiv preprint arXiv:1703.00955.

Karl, M., Soelch, M., Bayer, J., and van der Smagt, P. (2016). Deep variational Bayes filters: Unsupervised learning of state space models from raw data. arXiv preprint arXiv:1605.06432.

King, S. and Karaiskos, V. (2013). The Blizzard challenge 2013. The Ninth Annual Blizzard Challenge.

Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. (2014). Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589.

Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. (2016). Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743–4751.

Kingma, D. P. and Welling, M. (2014). Stochastic gradient VB and the variational auto-encoder. In 2nd International Conference on Learning Representations (ICLR), pages 1–14.

Larochelle, H. and Murray, I. (2011). The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 29–37.

Louizos, C. and Welling, M. (2017). Multiplicative normalizing flows for variational Bayesian neural networks. arXiv preprint arXiv:1703.01961.

Oord, A. v. d., Kalchbrenner, N., and Kavukcuoglu, K. (2016). Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759.

Raiko, T., Li, Y., Cho, K., and Bengio, Y. (2014). Iterative neural autoregressive distribution estimator NADE-k. In Advances in Neural Information Processing Systems, pages 325–333.

Rezende, D. J. and Mohamed, S. (2015). Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770.

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1.

Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879. ACM.

Salimans, T., Kingma, D. P., and Welling, M. (2014). Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460.

Semeniuta, S., Severyn, A., and Barth, E. (2017). A hybrid convolutional variational autoencoder for text generation. arXiv preprint arXiv:1702.02390.

Serban, I. V., Ororbia II, A. G., Pineau, J., and Courville, A. C. (2017a). Piecewise latent variables for neural variational text processing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 422–432.

Serban, I. V., Sordoni, A., Lowe, R., Charlin, L., Pineau, J., Courville, A. C., and Bengio, Y. (2017b). A hierarchical latent variable encoder-decoder model for generating dialogues. In Proc. of AAAI.

Uria, B., Côté, M.-A., Gregor, K., Murray, I., and Larochelle, H. (2016). Neural autoregressive distribution estimation. Journal of Machine Learning Research, 17(205):1–37.

Zhao, T., Zhao, R., and Eskenazi, M. (2017).
Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960.
Regret Minimization in MDPs with Options without Prior Knowledge

Ronan Fruit, Sequel Team - Inria Lille, ronan.fruit@inria.fr
Matteo Pirotta, Sequel Team - Inria Lille, matteo.pirotta@inria.fr
Alessandro Lazaric, Sequel Team - Inria Lille, alessandro.lazaric@inria.fr
Emma Brunskill, Stanford University, ebrun@cs.stanford.edu

Abstract

The option framework integrates temporal abstraction into the reinforcement learning model through the introduction of macro-actions (i.e., options). Recent works leveraged the mapping of Markov decision processes (MDPs) with options to semi-MDPs (SMDPs) and introduced SMDP versions of exploration-exploitation algorithms (e.g., RMAX-SMDP and UCRL-SMDP) to analyze the impact of options on the learning performance. Nonetheless, the PAC-SMDP sample complexity of RMAX-SMDP can hardly be translated into equivalent PAC-MDP theoretical guarantees, while the regret analysis of UCRL-SMDP requires prior knowledge of the distributions of the cumulative reward and duration of each option, which are hardly available in practice. In this paper, we remove this limitation by combining the SMDP view together with the inner Markov structure of options into a novel algorithm whose regret performance matches UCRL-SMDP's up to an additive regret term. We show scenarios where this term is negligible and the advantage of temporal abstraction is preserved. We also report preliminary empirical results supporting the theoretical findings.

1 Introduction

Tractable learning of how to make good decisions in complex domains over many time steps almost definitely requires some form of hierarchical reasoning. One powerful and popular framework for incorporating temporally-extended actions in the context of reinforcement learning is the options framework [1].
Creating and leveraging options has been the subject of many papers over the last two decades (see e.g., [2, 3, 4, 5, 6, 7, 8]) and has been of particular interest recently in combination with deep reinforcement learning, with a number of impressive empirical successes (see e.g., [9] for an application to Minecraft). Intuitively (and empirically), temporal abstraction can help speed up learning (i.e., reduce the amount of experience needed to learn a good policy) by shaping the selected actions towards more promising sequences of actions [10], and it can reduce planning computation by reducing the need to evaluate all possible actions (see e.g., Mann and Mannor [11]). However, incorporating options does not always improve learning efficiency, as shown by Jong et al. [12]: limiting action selection to temporally-extended options only might hamper the exploration of the environment by restricting the policy space. Therefore, we argue that in addition to the exciting work being done on heuristic and algorithmic approaches that leverage and/or dynamically discover options, it is important to build a formal understanding of how and when options may help or hurt reinforcement learning performance; such insights may also help inform empirically motivated options-RL research.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

There has been fairly limited work on formal performance bounds of RL with options. Brunskill and Li [13] derived sample complexity bounds for an RMAX-like exploration-exploitation algorithm for semi-Markov decision processes (SMDPs). While MDPs with options can be mapped to SMDPs, their analysis cannot be immediately translated into the PAC-MDP sample complexity of learning with options, which makes it harder to evaluate their potential benefit.
Fruit and Lazaric [14] analyzed an SMDP variant of UCRL [15], showing how its regret can be mapped to the regret of learning in the original MDP with options. The resulting analysis explicitly showed how options can be beneficial whenever the navigability among the states in the original MDP is not compromised (i.e., the MDP diameter is not significantly increased), the level of temporal abstraction is high (i.e., options have long durations, thus reducing the number of decision steps), and the optimal policy with options performs as well as the optimal policy using primitive actions. While this result makes the impact of options on the learning performance explicit, the proposed algorithm (UCRL-SMDP, or SUCRL for short) needs prior knowledge of the parameters of the distributions of cumulative rewards and durations of each option in order to construct confidence intervals and compute optimistic solutions. In practice this is often a strong requirement, and any incorrect parametrization (e.g., loose upper bounds on the true parameters) directly translates into poorer regret performance. Furthermore, even if a hand-designed set of options may come with accurate estimates of its parameters, this would not be possible for automatically generated options, which are of increasing interest to the deep RL community. Finally, this prior work views each option as a distinct and atomic macro-action, thus losing the potential benefit of considering the inner structure of options and the interaction between them, which could be used to significantly improve sample efficiency. In this paper we remove the limitations of prior theoretical analyses. In particular, we combine the semi-Markov decision process view on options and the intrinsic MDP structure underlying their execution to achieve temporal abstraction without relying on parameters that are typically unknown.
We introduce a transformation mapping each option to an associated irreducible Markov chain, and we show that optimistic policies can be computed using only the stationary distributions of the irreducible chains and the SMDP dynamics (i.e., state-to-state transition probabilities through options). This approach does not need to explicitly estimate the cumulative rewards and durations of options or their confidence intervals. We propose two alternative implementations of a general algorithm (FREE-SUCRL, or FSUCRL for short) that differ in whether the stationary distributions of the options' irreducible Markov chains and their confidence intervals are computed explicitly or implicitly, through an ad-hoc extended value iteration algorithm. We derive regret bounds for FSUCRL that match the regret of SUCRL up to an additional term accounting for the complexity of estimating the stationary distribution of an irreducible Markov chain starting from its transition matrix. This additional regret is the, possibly unavoidable, cost to pay for not having prior knowledge of the options. We complement the theoretical findings with a series of simple grid-world experiments in which we compare FSUCRL to SUCRL and UCRL (i.e., learning without options).

2 Preliminaries

Learning in MDPs with options. A finite MDP is a tuple M = ⟨S, A, p, r⟩, where S is the set of states, A is the set of actions, p(s′|s, a) is the probability of transitioning from state s to state s′ through action a, and r(s, a) is the random reward associated to (s, a), with expectation r̄(s, a). A deterministic policy π : S → A maps states to actions. We define an option as a tuple o = ⟨s_o, β_o, π_o⟩, where s_o ∈ S is the state where the option can be initiated¹, π_o : S → A is the associated stationary Markov policy, and β_o : S → [0, 1] is the probability of termination. As proved by Sutton et al.
[1], when primitive actions are replaced by a set of options O, the resulting decision process is a semi-Markov decision process (SMDP) M_O = ⟨S_O, O_s, p_O, R_O, τ_O⟩, where S_O ⊆ S is the set of states where options can start and end, O_s is the set of options available at state s, p_O(s′|s, o) is the probability of terminating in s′ when starting o from s, R_O(s, o) is the (random) cumulative reward obtained by executing option o from state s until interruption at s′, with expectation R̄_O(s, o), and τ_O(s, o) is the duration (i.e., the number of actions executed to go from s to s′ by following π_o), with expectation τ̄(s, o).² Throughout the rest of the paper, we assume that options are well defined.

¹ Restricting the standard initiation set to one state s_o is without loss of generality (see App. A).
² Notice that R_O(s, o) (similarly for τ_O) is well defined only when s = s_o, that is, when o ∈ O_s.

Assumption 1. The set of options O is admissible, that is: 1) all options terminate in finite time with probability 1; 2) in all possible terminal states there exists at least one option that can start, i.e., ∪_{o∈O} {s : β_o(s) > 0} ⊆ ∪_{o∈O} {s_o}; 3) the resulting SMDP M_O is communicating.

Lem. 3 in [14] shows that under Asm. 1 the family of SMDPs induced by using options in MDPs is such that, for any option o, the distributions of the cumulative reward and the duration are sub-exponential with bounded parameters (σ_r(o), b_r(o)) and (σ_τ(o), b_τ(o)) respectively. The maximal expected duration is denoted by τ_max = max_{s,o} {τ̄_O(s, o)}. Let t denote primitive action steps and let i index decision steps at the option level. The number of decision steps up to (primitive) step t is N(t) = max{n : T_n ≤ t}, where T_n = \sum_{i=1}^n τ_i is the number of primitive steps executed over n decision steps, and τ_i is the (random) number of steps before the termination of the option chosen at decision step i. Under Asm.
1 there exists a policy π∗ : S → O over options that achieves the largest gain (per-step reward)

$$\rho^*_{O} \stackrel{\text{def}}{=} \max_{\pi} \rho^{\pi}_{O} = \max_{\pi} \lim_{t\to+\infty} \mathbb{E}_{\pi}\left[\frac{\sum_{i=1}^{N(t)} R_i}{t}\right], \qquad (1)$$

where Ri is the reward cumulated by the option executed at step i. The optimal gain also satisfies the optimality equation of an equivalent MDP obtained by data-transformation (Lem. 2 in [16]), i.e., for all s ∈ S,

$$\rho^*_{O} = \max_{o \in O_s}\left\{ \frac{\bar{R}_{O}(s,o)}{\bar{\tau}_{O}(s,o)} + \frac{1}{\bar{\tau}_{O}(s,o)}\left( \sum_{s' \in S} p_{O}(s'|s,o)\, u^*_{O}(s') - u^*_{O}(s)\right)\right\}, \qquad (2)$$

where u∗O is the optimal bias and Os is the set of options that can be started in s (i.e., o ∈ Os ⇔ so = s). In the following sections, we drop the dependency on the option set O from all previous terms whenever it is clear from the context. Given the optimal average reward ρ∗O, we evaluate the performance of a learning algorithm A by its cumulative (SMDP) regret over n decision steps, $\Delta(A, n) = \big(\sum_{i=1}^{n} \tau_i\big)\rho^*_{O} - \sum_{i=1}^{n} R_i$. In [14] it is shown that ∆(A, n) is equal to the MDP regret up to a linear “approximation” regret accounting for the difference between the optimal gains of M on primitive actions and the associated SMDP MO.

3 Parameter-free SUCRL for Learning with Options

Optimism in SUCRL. At each episode, SUCRL runs a variant of extended value iteration (EVI) [17] to solve the “optimistic” version of the data-transformation optimality equation in Eq. 2, i.e.,

$$\tilde{\rho}^* = \max_{o \in O_s}\left\{ \max_{\tilde{R},\tilde{\tau}} \frac{\tilde{R}(s,o)}{\tilde{\tau}(s,o)} + \frac{1}{\tilde{\tau}(s,o)} \max_{\tilde{p}}\left\{\sum_{s' \in S} \tilde{p}(s'|s,o)\, \tilde{u}^*(s')\right\} - \tilde{u}^*(s)\right\}, \qquad (3)$$

where R̃ and τ̃ are the vectors of cumulative rewards and durations for all state-option pairs, and they belong to confidence intervals constructed using parameters (σr(o), br(o)) and (στ(o), bτ(o)) (see Sect. 3 in [14] for the exact expression). Similarly, confidence intervals need to be computed for p̃, but this does not require any prior knowledge on the SMDP since the transition probabilities naturally belong to the simplex over states.
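As a concrete illustration of Eq. 2, the data-transformed optimality equation can be solved by relative value iteration once the SMDP quantities R, τ, and p are known. Below is a minimal sketch on a hypothetical two-state SMDP (all numbers are invented for illustration): state 0 can either start an option that reaches state 1 in one step with reward 1, or a self-loop option with reward 0.2, while state 1 has a single option of duration 2 and reward 0 returning to state 0; the optimal behavior alternates between the states and earns gain 1/3.

```python
import numpy as np

# Hypothetical 2-state SMDP. options[s] is a list of (R, tau, p) triples,
# where R is the expected cumulative reward, tau the expected duration,
# and p the transition distribution over next states.
options = {
    0: [(1.0, 1.0, np.array([0.0, 1.0])),   # go to state 1, reward 1, 1 step
        (0.2, 1.0, np.array([1.0, 0.0]))],  # self-loop, reward 0.2, 1 step
    1: [(0.0, 2.0, np.array([1.0, 0.0]))],  # back to state 0, reward 0, 2 steps
}

def solve_gain(options, n_states, tol=1e-8, max_iter=100_000):
    """Relative value iteration on the data-transformed optimality equation (Eq. 2):
    u_{j+1}(s) = max_o { R/tau + (1/tau)(sum_s' p(s'|s,o) u_j(s') - u_j(s)) } + u_j(s)."""
    u = np.zeros(n_states)
    for _ in range(max_iter):
        u_next = np.array([
            max(R / tau + (p @ u - u[s]) / tau for (R, tau, p) in options[s]) + u[s]
            for s in range(n_states)
        ])
        diff = u_next - u
        if diff.max() - diff.min() < tol:       # span-based stopping rule
            return 0.5 * (diff.max() + diff.min())  # gain estimate g(u_{j+1} - u_j)
        u = u_next - u_next.min()               # shift to keep u bounded (relative VI)
    raise RuntimeError("value iteration did not converge")

gain = solve_gain(options, n_states=2)
```

At convergence the per-step increment u_{j+1} − u_j is constant across states and equals the optimal gain, so the midpoint g of its span recovers ρ∗.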
As a result, without any prior knowledge, such confidence intervals (on the cumulative reward R̃ and the duration τ̃) cannot be directly constructed and SUCRL cannot be run. In the following, we see how constructing an irreducible Markov chain (MC) associated to each option avoids this problem.

3.1 Irreducible Markov Chains Associated to Options

Options as absorbing Markov chains. A natural way to address SUCRL’s limitations is to avoid considering options as atomic operations (as in SMDPs) and to take into consideration their inner (MDP) structure instead. Since options terminate in finite time (Asm. 1), they can be seen as an absorbing Markov reward process whose state space contains all states that are reachable by the option and where option terminal states are absorbing states of the MC (see Fig. 1). More formally, for any option o the set of inner states So includes the initial state so and all states s with βo(s) < 1 that are reachable by executing πo from so (e.g., So = {s0, s1} in Fig. 1), while the set of absorbing states S^abs_o includes all states with βo(s) > 0 (e.g., S^abs_o = {s0, s1, s2} in Fig. 1).

Figure 1: (upper-left) MDP with an option o starting from s0 and executing a0 in all states, with termination probabilities βo(s0) = β0, βo(s1) = β1 and βo(s2) = 1. (upper-right) SMDP dynamics associated to option o. (lower-left) Absorbing MC associated to option o. (lower-right) Irreducible MC obtained by transforming the associated absorbing MC, with p′ = (1 − β0)(1 − p) + β0(1 − p) + pβ1 and p′′ = β1(1 − p) + p.

The absorbing MC associated to o is
characterized by a transition matrix Po of dimension (|So| + |Sabs o |) × (|So| + |Sabs o |) defined as3 Po = Qo Vo 0 I with Qo(s, s′) = (1 −βo(s′))p(s′|s, πo(s)) for any s, s′ ∈So Vo(s, s′) = βo(s′)p(s′|s, πo(s)) for any s ∈So, s′ ∈Sabs o , where Qo is the transition matrix between inner states (dim. |So| × |So|), Vo is the transition matrix from inner states to absorbing states (dim. |So| × |Sabs o |), and I is the identity matrix (dim. |Sabs o | × |Sabs o |). As proved in Lem. 3 in [14], the expected cumulative rewards R(s, o), the duration τ(s, o), and the sub-Exponential parameters (σr(o), br(o)) and (στ(o), bτ(o)) are directly related to the transition matrices Qo and Vo of the associated absorbing chain Po. This suggests that, given an estimate of Po, we could directly derive the corresponding estimates of R(s, o) and τ(s, o). Following this idea, we could “propagate” confidence intervals on the entries of Po to obtain confidence intervals on rewards and duration estimates without any prior knowledge on their parameters and thus solve Eq. 3 without any prior knowledge. Nonetheless, intervals on Po do not necessarily translate into compact bounds for R and τ. For example, if the value eVo = 0 belongs to the confidence interval of ePo (no state in Sabs o can be reached), the corresponding optimistic estimates eR(s, o) and eτ(s, o) are unbounded and Eq. 3 is ill-defined. Options as irreducible Markov chains. We first notice from Eq. 2 that computing the optimal policy only requires computing the ratio R(s, o)/τ(s, o) and the inverse 1/τ(s, o). Starting from Po, we can construct an irreducible MC whose stationary distribution is directly related to these terms. We proceed as illustrated in Fig. 1: all terminal states are “merged” together and their transitions are “redirected” to the initial state so. 
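The absorbing-chain view above makes the option's expected duration and termination behavior easy to compute numerically: with N = (I − Qo)⁻¹ the fundamental matrix of the absorbing chain, N·1 gives the expected number of steps to absorption from each inner state and N·Vo gives the absorption (i.e., SMDP transition) probabilities. A minimal numpy sketch on an invented option with two inner and two absorbing states:

```python
import numpy as np

# Hypothetical absorbing chain for one option (numbers are illustrative only).
# Q: transitions among inner states; V: transitions from inner to absorbing states.
Q = np.array([[0.0, 0.5],
              [0.2, 0.3]])
V = np.array([[0.5, 0.0],
              [0.1, 0.4]])
assert np.allclose(np.hstack([Q, V]).sum(axis=1), 1.0)  # rows of [Q V] are distributions

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix of the absorbing chain
durations = N @ np.ones(2)         # expected steps to absorption from each inner state
absorption = N @ V                 # probability of terminating in each absorbing state

# Full absorbing transition matrix P_o = [[Q, V], [0, I]] as in the text:
P_o = np.block([[Q, V], [np.zeros((2, 2)), np.eye(2)]])
```

This is exactly the route the paper describes for relating estimates of Po to estimates of R(s, o) and τ(s, o); the sketch just makes the linear algebra explicit.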
More formally, let 1 be the all-one vector of dimension |Sabs o |, then vo = Vo1 ∈R|So| contains the cumulative probability to transition from an inner state to any terminal state. Then the chain Po can be transformed into a MC with transition matrix P ′ o = [vo Q′ o] ∈RSo×So, where Q′ o contains all but the first column of Qo. P ′ o is now an irreducible MC as any state can be reached starting from any other state and thus it admits a unique stationary distribution µo. In order to relate µo to the optimality equation in Eq. 2, we need an additional assumption on the options. Assumption 2. For any option o ∈O, the starting state so is also a terminal state (i.e., βo (so) = 1) and any state s′ ∈S with βo(s′) < 1 is an inner state (i.e., s′ ∈So). 3In the following we only focus on the dynamics of the process; similar definitions apply for the rewards. 4 Input: Confidence δ ∈]0, 1[, rmax, S, A, O For episodes k = 1, 2, ... do 1. Set ik := i, t = tk and episode counters νk(s, a) = 0, νk(s, o) = 0 2. Compute estimates bpk(s′|s, o), bP ′ o,k, brk(s, a) and their confidence intervals in Eq. 6 3. Compute an ϵk-approximation of the optimal optimistic policy eπk of Eq. 5 4. While ∀l ∈[t + 1, t + τi], νk(sl, al) < Nk(sl, al) do (a) Execute option oi = eπk(si), obtain primitive rewards r1 i , ..., rτi i and visited states s1 i , ..., sτi i = si+1 (b) Set νk(si, oi) += 1, i += 1, t += τi and νk(s, πoi(s)) += 1 for all s ∈{s1 i , ..., sτi i } 5. Set Nk(s, o) += νk(s, o) and Nk(s, a) += νk(s, a) Figure 2: The general structure of FSUCRL. While the first part has a very minor impact on the definition of O, the second part of the assumption guarantees that options are “well designed” as it requires the termination condition to be coherent with the true inner states of the option, so that if βo(s′) < 1 then s′ should be indeed reachable by the option. Further discussion about Asm. 2 is reported in App. A. We then obtain the following property. Lemma 1. Under Asm. 
2, let µo ∈ [0, 1]^{So} be the unique stationary distribution of the irreducible MC P′o associated to option o; then⁴

$$\forall s \in S,\ \forall o \in O_s,\quad \frac{1}{\bar{\tau}(s,o)} = \mu_o(s) \quad\text{and}\quad \frac{\bar{R}(s,o)}{\bar{\tau}(s,o)} = \sum_{s' \in S_o} \bar{r}(s', \pi_o(s'))\, \mu_o(s'). \qquad (4)$$

This lemma illustrates the relationship between the stationary distribution of P′o and the key terms in Eq. 2.⁵ As a result, we can apply Lem. 1 to Eq. 3 and obtain the optimistic optimality equation: for all s ∈ S,

$$\tilde{\rho}^* = \max_{o \in O_s}\left\{ \max_{\tilde{\mu}_o, \tilde{r}_o} \sum_{s' \in S_o} \tilde{r}_o(s')\, \tilde{\mu}_o(s') + \tilde{\mu}_o(s)\left( \max_{\tilde{b}_o} \tilde{b}_o^\top \tilde{u}^* - \tilde{u}^*(s)\right)\right\}, \qquad (5)$$

where r̃o(s′) = r̃(s′, πo(s′)) and b̃o = (p̃(s′|s, o))_{s′∈S}. Unlike in the absorbing MC case, where compact confidence sets for Po may lead to unbounded optimistic estimates for R̃ and τ̃, in this formulation µo(s) can be equal to 0 (i.e., infinite duration and cumulative reward) without compromising the solution of Eq. 5. Furthermore, estimating µo implicitly leverages the correlation between cumulative reward and duration, which is ignored when estimating R(s, o) and τ(s, o) separately. Finally, we prove the following result.

Lemma 2. Let r̃o ∈ R, b̃o ∈ P, and µ̃o ∈ M, with R, P, M compact sets containing the true parameters ro, bo and µo. Then the optimality equation in Eq. 5 always admits a unique solution ρ̃∗ and ρ̃∗ ≥ ρ∗ (i.e., the solution of Eq. 5 is an optimistic gain).

Now, we need to provide an explicit algorithm to compute the optimistic optimal gain ρ̃∗ of Eq. 5 and its associated optimistic policy. In the next section, we introduce two alternative algorithms that are guaranteed to compute an ϵ-optimistic policy.

3.2 SUCRL with Irreducible Markov Chains

The structure of the UCRL-like algorithm for learning with options but with no prior knowledge on distribution parameters (called FREE-SUCRL, or FSUCRL) is reported in Fig. 2.
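Lemma 1 is easy to check numerically on a toy option (all numbers below are invented): build the absorbing chain, redirect the absorption mass back to the initial state s0 to obtain the irreducible chain P′o, and compare its stationary distribution with the expected duration and reward computed from the fundamental matrix.

```python
import numpy as np

# Hypothetical option with inner states {s0, s1} (s0 is the initial state).
# Under Asm. 2, beta(s0) = 1, so no inner transition returns to s0: Q's first column is 0.
Q = np.array([[0.0, 0.5],
              [0.0, 0.3]])        # transitions among inner states
v = np.array([0.5, 0.7])          # total probability of terminating from each inner state
r = np.array([1.0, 2.0])          # per-step rewards r(s, pi_o(s)) inside the option

# Irreducible chain P'_o: absorption mass is redirected to the initial state s0
# (the zero first column of Q is replaced by v, as in the text).
P = Q.copy()
P[:, 0] = v

def stationary(P):
    """Solve mu^T P = mu^T with sum(mu) = 1 via a linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                # replace one equation by the normalization constraint
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

mu = stationary(P)

# Expected duration and cumulative reward of the option from s0 (absorbing-chain view).
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix
tau0 = (N @ np.ones(2))[0]        # expected duration tau(s0, o)
R0 = (N @ r)[0]                   # expected cumulative reward R(s0, o)

# Lemma 1: 1/tau(s0, o) = mu(s0)  and  R(s0, o)/tau(s0, o) = sum_s r(s) mu(s).
```

On this example τ(s0, o) = 12/7 and µo(s0) = 7/12, so both identities of Eq. 4 hold exactly.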
Unlike SUCRL we do not directly estimate the expected cumulative reward and duration of options but we estimate the SMDP transition probabilities p(s′|s, o), the irreducible MC P ′ o associated to each option, and the state-action reward r(s, a). For all these terms we can compute confidence intervals (Hoeffding and empirical Bernstein) without any prior knowledge as 4Notice that since option o is defined in s, then s = so. Furthermore r is the MDP expected reward. 5Lem. 4 in App. D extends this result by giving an interpretation of µo(s′), ∀s′ ∈So. 5 r(s, a) −brk(s, a) ≤βr k(s, a) ∝rmax s log(SAtk/δ) Nk(s, a) , (6a) p(s′|s, o) −bpk(s′|s, o) ≤βp k(s, o, s′) ∝ s 2bpk(s′|s, o) 1 −bpk(s′|s, o))ctk,δ Nk(s, o) + 7ctk,δ 3Nk(s, o), (6b) P ′ o(s, s′) −bP ′ o,k(s, s′) ≤βP k (s, o, s′) ∝ s 2 bP ′ o,k(s, s′) 1 −bP ′ o,k(s, s′))dtk,δ Nk(s, πo(s)) + 7dtk,δ 3Nk(s, πo(s)), (6c) where Nk(s, a) (resp. Nk(s, o)) is the number of samples collected at state-action s, a (resp. stateoption s, o) up to episode k, Eq. 6a coincides with the one used in UCRL, in Eq. 6b s = so and s′ ∈S, and in Eq. 6c s, s′ ∈So. Finally, we set ctk,δ = O (log (SOtk)/δ)) and dtk,δ = O (log (|So| log(tk)/δ)) [18, Eq. 31]. To obtain an actual implementation of the algorithm reported on Fig. 2 we need to define a procedure to compute an approximation of Eq. 5 (step 3). Similar to UCRL and SUCRL, we define an EVI algorithm starting from a function u0(s) = 0 and computing at each iteration j uj+1(s)= max o∈Os ( max eµo ( X s′∈So ero (s′) eµo(s′) + eµo(s) max ebo n eb⊺ ouj o −uj(s) ) ) +uj(s), (7) where ero(s′) is the optimistic reward (i.e., estimate plus the confidence bound of Eq. 6a) and the optimistic transition probability vector ebo is computed using the algorithm introduced in [19, App. A] for Bernstein bound as in Eqs. 6b, 6c or in [15, Fig. 2] for Hoeffding bound (see App. B). 
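The empirical-Bernstein intervals of Eqs. 6b–6c have the generic form √(2 p̂(1 − p̂) c / n) + 7c/(3n), where p̂ is the empirical transition probability, n the number of samples, and c the logarithmic confidence term; the paper only specifies them up to proportionality constants. A sketch of this bound, taking the constants literally for illustration:

```python
import numpy as np

def bernstein_bound(p_hat, n, c):
    """Empirical-Bernstein deviation bound for an estimated probability p_hat
    from n samples, with confidence term c (e.g. c ~ log(S*O*t_k / delta)).
    Shape follows Eqs. 6b-6c; the constants are treated as exact here."""
    return np.sqrt(2.0 * p_hat * (1.0 - p_hat) * c / n) + 7.0 * c / (3.0 * n)

c = np.log(100.0)
wide = bernstein_bound(0.5, 100, c)        # few samples -> wide interval
narrow = bernstein_bound(0.5, 10_000, c)   # many samples -> narrow interval
degenerate = bernstein_bound(0.0, 100, c)  # zero empirical variance: only the 1/n term remains
```

Note that the variance-dependent term scales as 1/√n while the correction term scales as 1/n, so for near-deterministic transitions (p̂ close to 0 or 1) the interval shrinks at the fast 1/n rate.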
Depending on whether confidence intervals for µo are computed explicitly or implicitly we can define two alternative implementations that we present below. Explicit confidence intervals. Given the estimate bP ′ o, let bµo be the solution of bµ⊺ o = bµ⊺ o bP ′ o under constraint bµ⊺ oe = e. Such a bµo always exists and is unique since bP ′ o is computed after terminating the option at least once and is thus irreducible. The perturbation analysis in [20] can be applied to derive the confidence interval ∥µo −bµo∥1 ≤βµ k (o) := bκo,min∥P ′ o −bP ′ o∥∞,1, (8) where ∥·∥∞,1 is the maximum of the ℓ1-norm of the rows of the transition matrix, bκo,min is the smallest condition number6 for the ℓ1-norm of µo. Let ζo ∈R|So| be such that ζo(so) = ero(so) + maxebo eb⊺ ouj −uj(so) and ζo(s) = ero(s), then the maximum over eµo in Eq. 7 has the same form as the innermost maximum over bo (with Hoeffding bound) and thus we can directly apply Alg. [15, Fig. 2] with parameters bµo, βµ k (o), and states So ordered descendingly according to ζo. The resulting value is then directly plugged into Eq. 7 and uj+1 is computed. We refer to this algorithm as FSUCRLV1. Nested extended value iteration. An alternative approach builds on the observation that the maximum over µo in Eq. 7 can be seen as the optimization of the average reward (gain) eρ∗ o(uj) = max eµo ( X s′∈So ζo(s′)eµo(s′) ) , (9) where ζo is defined as above. Eq. 9 is indeed the optimal gain of a bounded-parameter MDP with state space So, an action space composed of the option action (i.e., πo(s)), and transitions eP ′ o in the confidence intervals 7 of Eq. 6c, and thus we can write its optimality equation eρ∗ o(uj) = max e P ′o ( ζo(s) + X s′ eP ′ o(s, s′) ew∗ o(s′) ) −ew∗ o(s), (10) 6The provably smallest condition number (refer to [21, Th. 2.3]) is the one provided by Seneta [22]: bκo,min = τ1( bZo) = maxi,j 1 2∥bZo(i, :)−bZo(j, :)∥1 where bZo(i, :) is the i-th row of bZo = (I −bP ′ o +1⊺bµo)−1. 
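The condition number in footnote 6 is directly computable from an estimated chain: form Z = (I − P′ + 1µᵀ)⁻¹ and take κ = τ₁(Z) = max_{i,j} ½‖Z(i,:) − Z(j,:)‖₁. A small sketch (on an invented 2-state chain) that also checks the resulting perturbation bound ‖µ − µ̂‖₁ ≤ κ‖P′ − P̂′‖∞,1 of Eq. 8 on a concrete perturbation:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible chain via a linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

def seneta_condition_number(P):
    """kappa = tau_1(Z) with Z = (I - P + 1 mu^T)^{-1}, as in footnote 6 (Seneta [22])."""
    n = P.shape[0]
    mu = stationary(P)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), mu))
    return max(0.5 * np.abs(Z[i] - Z[j]).sum() for i in range(n) for j in range(n))

# Invented irreducible 2-state chain and a perturbed version of it.
P = np.array([[0.5, 0.5],
              [0.7, 0.3]])
P_hat = np.array([[0.6, 0.4],
                  [0.7, 0.3]])

kappa = seneta_condition_number(P)
err = np.abs(stationary(P) - stationary(P_hat)).sum()  # ||mu - mu_hat||_1
pert = np.abs(P - P_hat).sum(axis=1).max()             # ||P - P_hat||_{inf,1}
```

On this chain κ = 5/6, and the perturbation inequality holds with room to spare, illustrating why FSUCRLV1's explicit intervals can be conservative: κ is a worst-case amplification factor.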
7The confidence intervals on eP ′ o can never exclude a non-zero transition between any two states of So. Therefore, the corresponding bounded-parameter MDP is always communicating and ρ∗ o(uj) is state-independent. 6 where ew∗ o is an optimal bias. For any input function v we can compute ρ∗ o(v) by using EVI on the bounded-parameter MDP, thus avoiding to explicitly construct the confidence intervals of eµo. As a result, we obtain two nested EVI algorithms where, starting from an initial bias function v0(s) = 0, 8 at any iteration j we set the bias function of the inner EVI to wo j,0(s) = 0 and we compute (see App. C.3 for the general EVI for bounded-parameter MDPs and its guarantees) wo j,l+1(s′) = max e Po n ζo(s) + e Po(·|s′)⊺wo j,l o , (11) until the stopping condition lo j = inf{l ≥0 : sp{wo j,l+1−wo j,l} ≤εj} is met, where (εj)j≥0 is a vanishing sequence. As wo j,l+1 −wo j,l converges to ρ∗ o(vj) with l, the outer EVI becomes vj+1(s) = max o∈Os n g wo j,lo j +1 −wo j,lo j o + vj(s), (12) where g : v 7→1 2 (max{v} + min{v}). In App. C.4 we show that this nested scheme, that we call FSUCRLV2, converges to the solution of Eq. 5. Furthermore, if the algorithm is stopped when sp {vj+1 −vj} + εj ≤ε then |eρ∗−g(vj+1 −vj)| ≤ε/2. One of the interesting features of this algorithm is its hierarchical structure. Nested EVI is operating on two different time scales by iteratively considering every option as an independent optimistic planning sub-problem (EVI of Eq. 11) and gathering all the results into a higher level planning problem (EVI of Eq. 12). This idea is at the core of the hierarchical approach in RL, but it is not always present in the algorithmic structure, while nested EVI naturally arises from decomposing Eq. 7 in two value iteration algorithms. It is also worth to underline that the confidence intervals implicitly generated for eµo are never worse than those in Eq. 8 and they are often much tighter. In practice the bound of Eq. 
8 may be actually worse because of the worst-case scenario considered in the computation of the condition numbers (see Sec. 5 and App. F). 4 Theoretical Analysis Before stating the guarantees for FSUCRL, we recall the definition of diameter of M and MO: D = max s,s′∈S min π:S→A E τπ(s, s′) , DO = max s,s′∈SO min π:S→O E τπ(s, s′) , where τπ(s, s′) is the (random) number of primitive actions to move from s to s′ following policy π. We also define a pseudo-diameter characterizing the “complexity” of the inner dynamics of options: eDO = r∗κ1 ∗+ τmaxκ∞ ∗ √µ∗ where we define: r∗= max o∈O {sp(ro)} , κ1 ∗= max o∈O κ1 o , κ∞ ∗= max o∈O {κ∞ o } , and µ∗= min o∈O min s∈So µo(s) with κ1 o and κ∞ o the condition numbers of the irreducible MC associated to options o (for the ℓ1 and ℓ∞-norm respectively [20]) and sp(ro) the span of the reward of the option. In App. D we prove the following regret bound. Theorem 1. Let M be a communicating MDP with reward bounded between 0 and rmax = 1 and let O be a set of options satisfying Asm. 1 and 2 such that σr(s, o) ≤σr, στ(s, o) ≤στ, and τ(s, o) ≤τmax. We also define BO = maxs,o supp(p(·|s, o)) (resp. B = maxs,a supp(p(·|s, a)) as the largest support of the SMDP (resp. MDP) dynamics. Let Tn be the number of primitive steps executed when running FSUCRLV2 over n decision steps, then its regret is bounded as ∆(FSUCRL, n) = eO DO √ SBOOn | {z } ∆p + (σr + στ)√n | {z } ∆R,τ + √ SATn + eDO √ SBOTn | {z } ∆µ (13) 8We use vj instead of uj since the error in the inner EVI directly affects the value of the function at the outer EVI, which thus generates a sequence of functions different from (uj). 7 Comparison to SUCRL. Using the confidence intervals of Eq. 
6b and a slightly tighter analysis than the one by Fruit and Lazaric [14] (Bernstein bounds and higher accuracy for EVI) leads to a regret bound for SUCRL of

$$\Delta(\text{SUCRL}, n) = \widetilde{O}\Big(\Delta_p + \Delta_{R,\tau} + \underbrace{(\sigma_r^+ + \sigma_\tau^+)\sqrt{SAn}}_{\Delta'_{R,\tau}}\Big), \qquad (14)$$

where σ+r and σ+τ are upper bounds on σr and στ that are used in defining the confidence intervals for τ and R actually used in SUCRL. The term ∆p is the regret induced by errors in estimating the SMDP dynamics p(s′|s, o), while ∆R,τ summarizes the randomness in the cumulative reward and duration of options. Both these terms scale as √n, thus taking advantage of the temporal abstraction (i.e., the ratio between the number of primitive steps Tn and the decision steps n). The main difference between the two bounds is then in the last term, which accounts for the regret due to the optimistic estimation of the behavior of the options. In SUCRL this regret is linked to the upper bounds on the parameters of R and τ. As shown in Thm. 2 in [14], when σ+r = σr and σ+τ = στ, the bound of SUCRL is nearly optimal as it almost matches the lower bound, thus showing that ∆′R,τ is unavoidable. In FSUCRL, however, the additional regret ∆µ comes from the estimation errors of the per-time-step rewards ro and the dynamics P′o. Similar to ∆p, these errors are amplified by the pseudo-diameter D̃O. While ∆µ may actually be the unavoidable cost to pay for removing the prior knowledge about options, it is interesting to analyze how D̃O changes with the structure of the options (see App. E for a concrete example). The stationary probability µo(s) decreases with the probability of visiting an inner state s ∈ So under the option policy. When this probability is low, few samples of the inner transitions are collected, which leads to large estimation errors for P′o. These errors are then propagated to the stationary distribution µo through the condition numbers κ (e.g., κ1o directly follows from a non-empirical version of Eq. 8).
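This sample-starvation effect is easy to reproduce in simulation. Below is an invented 3-state irreducible chain in which state 2 carries small stationary mass (≈ 0.043); estimating the transition matrix from a single trajectory and plugging it into the stationary-distribution computation gives a plug-in estimate µ̂ whose error is driven by the rarely visited rows (the chain, trajectory length, and seed are all illustrative choices):

```python
import numpy as np

# Invented irreducible chain: state 2 is reached rarely (stationary mass ~0.043).
P = np.array([[0.9, 0.1, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

def stationary(P):
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

def estimate_mu(T, seed=0):
    """Estimate the stationary distribution from a single length-T trajectory."""
    rng = np.random.default_rng(seed)
    counts = np.zeros_like(P)
    s = 0
    for _ in range(T):
        s_next = rng.choice(3, p=P[s])
        counts[s, s_next] += 1
        s = s_next
    # Empirical transition matrix; unvisited rows fall back to uniform.
    row_sums = counts.sum(axis=1, keepdims=True)
    P_hat = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / 3)
    return stationary(P_hat)

mu = stationary(P)
err = np.abs(mu - estimate_mu(T=50_000)).sum()  # l1 error of the plug-in estimate
```

The per-row estimation error scales roughly as 1/√(T µo(s)), which is the mechanism behind the 1/µ∗ dependence in the pseudo-diameter D̃O.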
Furthermore, we notice that 1/µo(s) ≥τo(s) ≥|So|, suggesting that “long” or “big” options are indeed more difficult to estimate. On the other hand, ∆µ becomes smaller whenever the transition probabilities under policy πo are supported over a few states (B small) and the rewards are similar within the option (sp(ro) small). While in the worst case ∆µ may actually be much bigger than ∆′ R,τ when the parameters of R and τ are accurately known (i.e., σ+ τ ≈στ and σ+ r ≈σr), in Sect. 5 we show scenarios in which the actual performance of FSUCRL is close or better than SUCRL and the advantage of learning with options is preserved. To explain why FSUCRL can perform better than SUCRL we point out that FSUCRL’s bound is somewhat worst-case w.r.t. the correlation between options. In fact, in Eq. 6c the error in estimating P ′ o in a state s does not scale with the number of samples obtained while executing option o but those collected by taking the primitive action prescribed by πo. This means that even if o has a low probability of reaching s starting from so (i.e., µo(s) is very small), the true error may still be small as soon as another option o′ executes the same action (i.e., πo(s) = πo′(s)). In this case the regret bound is loose and the actual performance of FSUCRL is much better. Therefore, although it is not apparent in the regret analysis, not only is FSUCRL leveraging on the correlation between the cumulative reward and duration of a single option, but it is also leveraging on the correlation between different options that share inner state-action pairs. Comparison to UCRL. We recall that the regret of UCRL is bounded as O(D√SBATn), where Tn is to the total number of steps. As discussed by [14], the major advantage of options is in terms of temporal abstraction (i.e., Tn ≫n) and reduction of the state-action space (i.e., SO < S and O < A). 
Eq. (13) also reveals that options can improve the learning speed by reducing the size of the support BO of the dynamics of the environment w.r.t. primitive actions. This can lead to a huge improvement, e.g., when options are designed so as to reach a specific goal. This potential advantage is new compared to [14] and matches the intuition on “good” options often presented in the literature (see, e.g., the concept of “funnel” actions introduced by Dietterich [23]).

Bound for FSUCRLV1. Bounding the regret of FSUCRLV1 requires bounding the empirical κ̂ in Eq. (8) with the true condition number κ. Since κ̂ tends to κ as the number of samples of the option increases, the overall regret would only be increased by a lower-order term. In practice, however, FSUCRLV2 is preferable to FSUCRLV1. The latter suffers from the true condition numbers (κ1o)o∈O since they are used to compute the confidence bounds on the stationary distributions (µo)o∈O, while for FSUCRLV2 they appear only in the analysis. Much like the dependency on the diameter in the analysis of UCRL, the condition numbers may be loose in practice, although tight from a theoretical perspective. See App. D.6 and the experiments for further insights.

Figure 3: (Left) Regret after 1.2 · 10⁸ steps normalized w.r.t. UCRL for different option durations in a 20x20 grid-world. (Right) Evolution of the regret as Tn increases for a 14x14 four-rooms maze.

5 Numerical Simulations

In this section we compare the regret of FSUCRL to SUCRL and UCRL to empirically verify the impact of removing prior knowledge about options and estimating their structure through the irreducible MC transformation.
We consider the toy domain presented in [14] that was specifically designed to show the advantage of temporal abstraction and the classical 4-rooms maze [1]. To be able to reproduce the results of [14], we run our algorithm with Hoeffding confidence bounds for the ℓ1-deviation of the empirical distribution (implying that BO has no impact). We consider settings where ∆R,τ is the dominating term of the regret (refer to App. F for details). When comparing the two versions of FSUCRL to UCRL on the grid domain (see Fig. 3 (left)), we empirically observe that the advantage of temporal abstraction is indeed preserved when removing the knowledge of the parameters of the option. This shows that the benefit of temporal abstraction is not just a mere artifact of prior knowledge on the options. Although the theoretical bound in Thm. 1 is always worse than its SMDP counterpart (14), we see that FSUCRL performs much better than SUCRL in our examples. This can be explained by the fact that the options we use greatly overlap. Even if our regret bound does not make explicit the fact that FSUCRL exploits the correlation between options, this can actually significantly impact the result in practice. The two versions of SUCRL differ in the amount of prior knowledge given to the algorithm to construct the parameters σ+ r and σ+ τ that are used in building the confidence intervals.In v3 we provide a tight upper-bound rmax on the rewards and distinct option-dependent parameters for the duration (τo and στ(o)), in v2 we only provide a global (option-independent) upper bound on τo and σo. Unlike FSUCRL which is “parameter-free”, SUCRL is highly sensitive to the prior knowledge about options and can perform even worse than UCRL. A similar behaviour is observed in Fig. 3 (right) where both the versions of SUCRL fail to beat UCRL but FSUCRLV2 has nearly half the regret of UCRL. On the contrary, FSUCRLV1 suffers a linear regret due to a loose dependency on the condition numbers (see App. F.2). 
This shows that the condition numbers appearing in the bound of FSUCRLV2 are actually loose. In both experiments, UCRL and FSUCRL had similar running times meaning that the improvement in cumulative regret is not at the expense of the computational complexity. 6 Conclusions We introduced FSUCRL, a parameter-free algorithm to learn in MDPs with options by combining the SMDP view to estimate the transition probabilities at the level of options (p(s′|s, o)) and the MDP structure of options to estimate the stationary distribution of an associated irreducible MC which allows to compute the optimistic policy at each episode. The resulting regret matches SUCRL bound up to an additive term. While in general, this additional regret may be large, we show both theoretically and empirically that FSUCRL is actually competitive with SUCRL and it retains the advantage of temporal abstraction w.r.t. learning without options. Since FSUCRL does not require strong prior knowledge about options and its regret bound is partially computable, we believe the results of this paper could be used as a basis to construct more principled option discovery algorithms that explicitly optimize the exploration-exploitation performance of the learning algorithm. 9 Acknowledgments This research was supported in part by French Ministry of Higher Education and Research, Nord-Pasde-Calais Regional Council and French National Research Agency (ANR) under project ExTra-Learn (n.ANR-14-CE24-0010-01). References [1] Richard S. Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1): 181 – 211, 1999. [2] Amy McGovern and Andrew G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 361–368, 2001. [3] Ishai Menache, Shie Mannor, and Nahum Shimkin. 
Q-cut—dynamic discovery of sub-goals in reinforcement learning. In Proceedings of the 13th European Conference on Machine Learning, Helsinki, Finland, August 19–23, 2002, pages 295–306. Springer Berlin Heidelberg, 2002. [4] Özgür ¸Sim¸sek and Andrew G. Barto. Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML ’04, 2004. [5] Pablo Samuel Castro and Doina Precup. Automatic construction of temporally extended actions for mdps using bisimulation metrics. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, EWRL’11, pages 140–152, Berlin, Heidelberg, 2012. Springer-Verlag. [6] Kfir Y. Levy and Nahum Shimkin. Unified inter and intra options learning using policy gradient methods. In EWRL, volume 7188 of Lecture Notes in Computer Science, pages 153–164. Springer, 2011. [7] Munu Sairamesh and Balaraman Ravindran. Options with exceptions. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, EWRL’11, pages 165–176, Berlin, Heidelberg, 2012. Springer-Verlag. [8] Timothy Arthur Mann, Daniel J. Mankowitz, and Shie Mannor. Time-regularized interrupting options (TRIO). In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 1350–1358. JMLR.org, 2014. [9] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 1553–1561. AAAI Press, 2017. [10] Martin Stolle and Doina Precup. Learning options in reinforcement learning. In SARA, volume 2371 of Lecture Notes in Computer Science, pages 212–223. Springer, 2002. [11] Timothy A. Mann and Shie Mannor. 
Scaling up approximate value iteration with options: Better policies with fewer iterations. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 127–135. JMLR.org, 2014. [12] Nicholas K. Jong, Todd Hester, and Peter Stone. The utility of temporal abstraction in reinforcement learning. In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008. [13] Emma Brunskill and Lihong Li. PAC-inspired Option Discovery in Lifelong Reinforcement Learning. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Proceedings, pages 316–324. JMLR.org, 2014. [14] Ronan Fruit and Alessandro Lazaric. Exploration–exploitation in mdps with options. In Proceedings of Machine Learning Research, volume 54: Artificial Intelligence and Statistics, 20-22 April 2017, Fort Lauderdale, FL, USA, pages 576–584, 2017. [15] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010. 10 [16] A. Federgruen, P.J. Schweitzer, and H.C. Tijms. Denumerable undiscounted semi-markov decision processes with unbounded rewards. Mathematics of Operations Research, 8(2):298– 313, 1983. [17] Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, December 2008. [18] Daniel J. Hsu, Aryeh Kontorovich, and Csaba Szepesvári. Mixing time estimation in reversible markov chains from a single sample path. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS 15, pages 1459–1467. MIT Press, 2015. [19] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. 
In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS 15, pages 2818–2826. MIT Press, 2015. [20] Grace E. Cho and Carl D. Meyer. Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications, 335(1):137–150, 2001. [21] Stephen J. Kirkland, Michael Neumann, and Nung-Sing Sze. On optimal condition numbers for Markov chains. Numerische Mathematik, 110(4):521–537, Oct 2008. [22] E. Seneta. Sensitivity of finite Markov chains under perturbation. Statistics & Probability Letters, 17(2):163–168, May 1993. [23] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000. [24] Ronald Ortner. Optimism in the face of uncertainty should be refutable. Minds and Machines, 18(4):521–526, 2008. [25] Pierre Bremaud. Applied Probability Models with Optimization Applications, chapter 3: Recurrence and Ergodicity. Springer-Verlag Inc, Berlin; New York, 1999. [26] Pierre Bremaud. Applied Probability Models with Optimization Applications, chapter 2: Discrete-Time Markov Models. Springer-Verlag Inc, Berlin; New York, 1999. [27] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. [28] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI ’09, pages 35–42. AUAI Press, 2009. [29] Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20, 2015. [30] Martin Wainwright. Course on Mathematical Statistics, chapter 2: Basic tail and concentration bounds. University of California at Berkeley, Department of Statistics, 2015.
6,881 | Learning Identifiable Gaussian Bayesian Networks in Polynomial Time and Sample Complexity Asish Ghoshal and Jean Honorio Department of Computer Science, Purdue University, West Lafayette, IN - 47906 {aghoshal, jhonorio}@purdue.edu Abstract Learning the directed acyclic graph (DAG) structure of a Bayesian network from observational data is a notoriously difficult problem for which many non-identifiability and hardness results are known. In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance — a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data — under high-dimensional settings. We show that O(k⁴ log p) samples suffice for our method to recover the true DAG structure with high probability, where p is the number of variables and k is the maximum Markov blanket size. We obtain our theoretical guarantees under a condition called restricted strong adjacency faithfulness (RSAF), which is strictly weaker than strong faithfulness — a condition that other methods based on conditional independence testing need for their success. The sample complexity of our method matches the information-theoretic limits in terms of the dependence on p. We validate our theoretical findings through synthetic experiments. 1 Introduction and Related Work Motivation. The problem of learning the directed acyclic graph (DAG) structure of Bayesian networks (BNs) in general, and Gaussian Bayesian networks (GBNs) — or equivalently linear Gaussian structural equation models (SEMs) — in particular, from observational data has a long history in the statistics and machine learning community. This is, in part, motivated by the desire to uncover causal relationships between entities in domains as diverse as finance, genetics, medicine, neuroscience and artificial intelligence, to name a few.
Although in general the DAG structure of a GBN or linear Gaussian SEM cannot be uniquely identified from purely observational data (i.e., multiple structures can encode the same conditional independence relationships present in the observed data set), under certain restrictions on the generative model the DAG structure can be uniquely determined. Furthermore, the problem of learning the structure of BNs exactly is known to be NP-complete even when the number of parents of a node is at most q, for q > 1 [1]. It is also known that approximating the log-likelihood to a constant factor, even when the model class is restricted to polytrees with at most two parents per node, is NP-hard [2]. Peters and Bühlmann [3] recently showed that if the noise variances are the same, then the structure of a GBN can be uniquely identified from observational data. As observed by them, this "assumption of equal error variances seems natural for applications with variables from a similar domain and is commonly used in time series models". Unfortunately, even for the equal noise-variance case, no polynomial-time algorithm is known. Contribution. In this paper we develop a polynomial-time algorithm for learning a subclass of BNs exactly: sparse GBNs with equal noise variance. This problem has been considered by [3], who proposed an exponential-time algorithm based on ℓ₀-penalized maximum likelihood estimation (MLE) and a heuristic greedy search method without any guarantees. Our algorithm involves estimating a p-dimensional inverse covariance matrix and solving 2(p − 1) at-most-k-dimensional ordinary least squares problems, where p is the number of nodes and k is the maximum Markov blanket size of a variable. (31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.)
We show that O((k⁴/α²) log(p/δ)) samples suffice for our algorithm to recover the true DAG structure and to approximate the parameters to at most α additive error, with probability at least 1 − δ, for some δ > 0. The sample complexity of O(k⁴ log p) is close to the information-theoretic limit of Ω(k log p) for learning sparse GBNs as obtained by [4]. The main assumption under which we obtain our theoretical guarantees is a condition that we refer to as α-restricted strong adjacency faithfulness (RSAF). We show that RSAF is a strictly weaker condition than strong faithfulness, which methods based on independence testing require for their success. In this identifiable regime, given enough samples, our method can recover the exact DAG structure of any Gaussian distribution. However, existing exact algorithms like the PC algorithm [5] can fail to recover the correct skeleton for distributions that are not faithful, and fail to orient a number of edges that are not covered by the Meek orientation rules [6, 7]. Of independent interest is our analysis of OLS regression under the random design setting, for which we obtain elementwise (ℓ∞) error bounds. Related Work. In this section we first discuss some identifiability results for GBNs known in the literature and then survey relevant algorithms for learning GBNs and Gaussian SEMs. [3] proved identifiability of distributions drawn from a restricted SEM with additive noise, where in the restricted SEM the functions are assumed to be non-linear and thrice continuously differentiable. It is also known that SEMs with linear functions and strictly non-Gaussian noise are identifiable [8]. Identifiability of the DAG structure for the linear-function, Gaussian-noise case was proved by [9] when the noise variables are assumed to have equal variance. Algorithms for learning BNs typically fall into two distinct categories, namely independence-test-based methods and score-based methods. This dichotomy also extends to the Gaussian case.
Score-based methods assign a score to a candidate DAG structure based on how well it explains the observed data, and then attempt to find the highest-scoring structure. Popular examples for the Gaussian distribution are the log-likelihood-based BIC and AIC scores and the ℓ₀-penalized log-likelihood score of [10]. However, given that the number of DAGs and sparse DAGs is exponential in the number of variables [4, 11], exhaustively searching for the highest-scoring DAG in the combinatorial space of all DAGs, which is a feature of existing exact search-based algorithms, is prohibitive for all but a small number of variables. [12] propose a score-based method, based on concave penalization of a reparameterized negative log-likelihood function, which can learn a GBN over 1000 variables in an hour. However, the resulting optimization problem is neither convex — and therefore not guaranteed to find a globally optimal solution — nor solvable in polynomial time. In light of these shortcomings, approximation algorithms have been proposed for learning BNs which can be used to learn GBNs in conjunction with a suitable score function; notable methods are Greedy Equivalence Search (GES), proposed by [13], and an LP-relaxation based method proposed by [14]. Among independence-test-based methods for learning GBNs, [15] extended the PC algorithm, originally proposed by [5], to learn the Markov equivalence class of GBNs from observational data. The computational complexity of the PC algorithm is bounded by O(p^k) with high probability, where k is the maximum neighborhood size of a node, so it is only efficient for learning very sparse DAGs. For the non-linear Gaussian SEM case, [3] developed a two-stage algorithm called RESIT, which works by first learning the causal ordering of the variables and then performing regressions to learn the DAG structure. As we formally show in Appendix C.1, RESIT does not work for the linear Gaussian case. Moreover, Peters et al.
proved the correctness of RESIT only in the population setting. Lastly, [16] developed an algorithm, similar in spirit to ours, for efficiently learning Poisson Bayesian networks. They exploit a property specific to the Poisson distribution called overdispersion to learn the causal ordering of variables. Finally, the max-min hill-climbing (MMHC) algorithm of [17] is a state-of-the-art hybrid algorithm for BNs that combines ideas from constraint-based and score-based learning. While MMHC works well in practice, it is inherently a heuristic algorithm and is not guaranteed to recover the true DAG structure even when it is uniquely identifiable. 2 Preliminaries In this section, we formalize the problem of learning Gaussian Bayesian networks from observational data. First, we introduce some notation and definitions. We denote the set {1, . . . , p} by [p]. Vectors and matrices are denoted by lowercase and uppercase boldfaced letters, respectively. Random variables (including random vectors) are denoted by italicized uppercase letters. Let s_r, s_c ⊆ [p] be any two non-empty index sets. Then for any matrix A ∈ R^{p×p}, we denote by A_{s_r,s_c} the R^{|s_r|×|s_c|} sub-matrix formed by selecting the s_r rows and s_c columns of A. With a slight abuse of notation, we allow the index sets s_r and s_c to be a single index, e.g., i, and we denote the index set of all rows (or columns) by ∗. Thus, A_{∗,i} and A_{i,∗} denote the i-th column and row of A, respectively. For any vector v ∈ R^p, we denote its support set by S(v) = {i ∈ [p] : |v_i| > 0}. Vector ℓ_p-norms are denoted by ‖·‖_p. For matrices, ‖·‖_p denotes the induced (or operator) ℓ_p-norm and |·|_p denotes the element-wise ℓ_p-norm, i.e., |A|_p := (Σ_{i,j} |A_{i,j}|^p)^{1/p}. Finally, we denote the set [p] \ {i} by −i. Let G = (V, E) be a directed acyclic graph (DAG), where the vertex set V = [p] and E is the set of directed edges, with (i, j) ∈ E denoting the edge i → j.
We denote by π_G(i) and φ_G(i) the parent set and the set of children of the i-th node, respectively, in the graph G, and drop the subscript G when the intended graph is clear from context. A vertex i ∈ [p] is a terminal vertex in G if φ_G(i) = ∅. For each i ∈ [p] we have a random variable X_i ∈ R; X = (X_1, . . . , X_p) is the p-dimensional vector of random variables, and x = (x_1, . . . , x_p) is a joint assignment to X. Without loss of generality, we assume that E[X_i] = 0 for all i ∈ [p]. Every DAG G = (V, E) defines a set T_G of topological orderings over [p] that are compatible with the DAG G, i.e., T_G = {τ ∈ S_p : τ(j) < τ(i) if (i, j) ∈ E}, where S_p is the set of all possible permutations of [p]. A Gaussian Bayesian network (GBN) is a tuple (G, P(W, S)), where G = (V, E) is a DAG structure, W = {w_{i,j} ∈ R : (i, j) ∈ E and |w_{i,j}| > 0} is the set of edge weights, S = {σ_i² ∈ R₊}_{i=1}^p is the set of noise variances, and P is a multivariate Gaussian distribution over X = (X_1, . . . , X_p) that is Markov with respect to the DAG G and is parameterized by W and S. In other words, P = N(x; 0, Σ) factorizes as follows:

P(x; W, S) = ∏_{i=1}^p P_i(x_i; w_i, x_{π(i)}, σ_i²),   (1)
P_i(x_i; w_i, x_{π(i)}, σ_i²) = N(x_i; w_iᵀ x_{π(i)}, σ_i²),   (2)

where w_i := (w_{i,j})_{j∈π(i)} ∈ R^{|π(i)|} is the weight vector for the i-th node, 0 is a vector of zeros of appropriate dimension (in this case p), x_{π(i)} = {x_j : j ∈ π(i)}, Σ is the covariance matrix for X, and P_i is the conditional distribution of X_i given its parents — which is also Gaussian. We will also extensively use an alternative, but equivalent, view of a GBN: the linear structural equation model (SEM). Let B = (w_{i,j} 1[(i, j) ∈ E])_{(i,j)∈[p]×[p]} be the matrix of weights created from the set of edge weights W. A GBN (G, P(W, S)) corresponds to a SEM where each variable X_i can be written as

X_i = Σ_{j∈π(i)} B_{i,j} X_j + N_i,  ∀i ∈ [p],   (3)

with N_i ∼ N(0, σ_i²) (for all i ∈ [p]) being independent noise variables and |B_{i,j}| > 0 for all j ∈ π(i).
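A minimal sketch of sampling from the SEM in (3), on a hypothetical 3-node chain (the weights and the noise variance here are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical chain GBN X1 -> X2 -> X3; weights and variance illustrative only.
B = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],   # X2 = 0.5 X1 + N2
              [0.0, 0.5, 0.0]])  # X3 = 0.5 X2 + N3
sigma2 = 0.8
rng = np.random.default_rng(0)
n, p = 5, 3
N = rng.normal(scale=np.sqrt(sigma2), size=(n, p))

# Ancestral sampling in a topological order, X_i = sum_j B_{i,j} X_j + N_i.
X = np.zeros((n, p))
for i in range(p):                 # (1, 2, 3) is already topological here
    X[:, i] = X @ B[i] + N[:, i]

# Equivalent closed form X = (I - B)^{-1} N.
X_closed = np.linalg.solve(np.eye(p) - B, N.T).T
assert np.allclose(X, X_closed)
```

Because B is strictly lower triangular for a topologically sorted DAG, I − B is invertible and the two sampling routes coincide exactly.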
The joint distribution of X as given by the SEM corresponds to the distribution P in (1), and the graph associated with the SEM, which has a directed edge (i, j) whenever j ∈ π(i), corresponds to the DAG G. Denoting by N = (N_1, . . . , N_p) the noise vector, (3) can be rewritten in vector form as X = BX + N. Given a GBN (G, P(W, S)), with B being the weight matrix corresponding to W, we define the effective influence between two nodes i, j ∈ [p] as

w̃_{i,j} := B_{∗,i}ᵀ B_{∗,j} − B_{i,j} − B_{j,i}.   (4)

The effective influence w̃_{i,j} between two nodes i and j is zero if: (a) i and j do not have an edge between them and do not have common children, or (b) i and j have an edge between them but the dot product between the weights to the children (B_{∗,i}ᵀ B_{∗,j}) exactly equals the edge weight between i and j (B_{i,j} + B_{j,i}). The effective influence determines the Markov blanket of each node: for all i ∈ [p], the Markov blanket is given by S_i = {j ∈ −i : w̃_{i,j} ≠ 0}. (Our definition of the Markov blanket differs from the commonly used graph-theoretic definition in that the latter includes the parents, children and all the co-parents of the children of node i in the Markov blanket S_i.) Furthermore, a node is conditionally independent of all other nodes not in its Markov blanket, i.e., Pr{X_i | X_{−i}} = Pr{X_i | X_{S_i}}. Next, we present a few definitions that will be useful later. Definition 1 (Causal Minimality [18]). A distribution P is causal minimal with respect to a DAG structure G if it is not Markov with respect to a proper subgraph of G. Definition 2 (Faithfulness [5]). Given a GBN (G, P), P is faithful to the DAG G = (V, E) if for any i, j ∈ V and any V′ ⊆ V \ {i, j}: i is d-separated from j given V′ ⟺ corr(X_i, X_j | X_{V′}) = 0, where corr(X_i, X_j | X_{V′}) is the partial correlation between X_i and X_j given X_{V′}. Definition 3 (Strong Faithfulness [19]).
Given a GBN (G, P), the multivariate Gaussian distribution P is λ-strongly faithful to the DAG G, for some λ ∈ (0, 1), if min{|corr(X_i, X_j | X_{V′})| : i is not d-separated from j given V′, ∀i, j ∈ [p], ∀V′ ⊆ V \ {i, j}} ≥ λ. Strong faithfulness is a stronger version of the faithfulness assumption that requires that for all triples (X_i, X_j, X_{V′}) such that i is not d-separated from j given V′, the partial correlation corr(X_i, X_j | X_{V′}) is bounded away from 0. It is known that while the set of distributions P that are Markov to a DAG G but not faithful to it has Lebesgue measure zero, the set of distributions P that are not strongly faithful to G has nonzero Lebesgue measure, and in fact can be quite large [20]. The problem of learning a GBN from observational data corresponds to recovering the DAG structure G and parameters W from a matrix X ∈ R^{n×p} of n i.i.d. samples drawn from P(W, S). In this paper we consider the problem of learning GBNs over p variables where the size of the Markov blanket of a node is at most k. This is in general not possible without making additional assumptions on the GBN (G, P(W, S)) and the distribution P, as we describe next. Assumptions. Here we enumerate our technical assumptions. Assumption 1 (Causal Minimality). Let (G, P(W, S)) be a GBN; then ∀w_{i,j} ∈ W, |w_{i,j}| > 0. The above assumption ensures that all edge weights are strictly nonzero, which results in each variable X_i being a non-constant function of its parents X_{π(i)}. Given Assumption 1, the distribution P is causal minimal with respect to G [3] and therefore identifiable under equal noise variances [9], i.e., σ_1 = · · · = σ_p = σ. Throughout the rest of the paper we denote such Bayesian networks by (G, P(W, σ²)). Assumption 2 (Restricted Strong Adjacency Faithfulness). Let (G, P(W, σ²)) be a GBN with G = (V, E).
For every τ ∈ T_G, consider the sequence of graphs G[m, τ] = (V[m, τ], E[m, τ]) indexed by (m, τ), where G[m, τ] is the induced subgraph of G over the first m vertices in the topological ordering τ, i.e., V[m, τ] := {i ∈ [p] : τ(i) ≤ m} and E[m, τ] := {(i, j) ∈ E : i ∈ V[m, τ] and j ∈ V[m, τ]}. The multivariate Gaussian distribution P is restricted α-strongly adjacency faithful to G provided that: (i) min{|w_{i,j}| : (i, j) ∈ E} > 3α, and (ii) |w̃_{i,j}| > 3α/ψ(α) for all i ∈ V[m, τ], j ∈ S_i[m, τ], m ∈ [p] and τ ∈ T_G, where α > 0 is a constant, w̃_{i,j} is the effective influence between i and j in the induced subgraph G[m, τ] as defined in (4), and S_i[m, τ] denotes the Markov blanket of node i in G[m, τ]. (Both our definition of the Markov blanket and the standard graph-theoretic one are equivalent under faithfulness; since we allow non-faithful distributions, ours is the more appropriate notion.) The factor ψ(α) = 1 − 2/(1 + 9|φ_{G[m,τ]}(i)| α²) if i is a non-terminal vertex in G[m, τ], where |φ_{G[m,τ]}(i)| is the number of children of i in G[m, τ], and ψ(α) = 1 if i is a terminal vertex. Simply stated, the RSAF assumption requires that the absolute value of each edge weight is at least 3α and that the absolute value of the effective influence between two nodes, whenever it is non-zero, is at least 3α for terminal nodes and 3α/ψ(α) for non-terminal nodes. Moreover, the above should hold not only for the original DAG, but also for each DAG obtained by sequentially removing terminal vertices. The constant α is related to the statistical error and is of order k²√((log p)/n). Figure 1: A GBN with noise variance set to 1 that is RSAF, but is neither faithful, nor strongly faithful, nor adjacency faithful to the DAG structure. This GBN is not faithful because corr(X_4, X_5 | X_2, X_3) = 0 even though (2, 3) do not d-separate 4 and 5. Other violations of faithfulness include corr(X_1, X_4 | ∅) = 0 and corr(X_1, X_5 | ∅) = 0. Therefore, a CI-test-based method will fail to recover the true structure.
In Appendix B.1, we show that the PC algorithm fails to recover the structure of this GBN while our method recovers the structure exactly. Note that in the regime α ∈ (0, 1/(3√|φ_{G[m,τ]}(i)|)), which occurs for sufficiently large n, the condition on w̃_{i,j} is satisfied trivially. As we will show later, Assumption 2 is equivalent to the following, for some constant α₀: min{|corr(X_i, X_j | X_{V[m,τ]\{i,j}})| : i ∈ V[m, τ], j ∈ S_i[m, τ], m ∈ [p], τ ∈ T_G} ≥ α₀. At this point, it is worthwhile to compare our assumptions with those made by other methods for learning GBNs. Methods based on conditional independence (CI) tests, e.g., the PC algorithm for learning the equivalence class of GBNs developed by [15], require strong faithfulness. While strong faithfulness requires that for a node pair (i, j) that are adjacent in the DAG, the partial correlation corr(X_i, X_j | X_S) is bounded away from zero for all sets S ⊆ [p] \ {i, j}, RSAF only requires non-zero partial correlations with respect to a subset of the sets in {S ⊆ [p] \ {i, j}}. Thus, RSAF is strictly weaker than strong faithfulness. The set of non-zero partial correlations needed by RSAF is also a strict subset of those needed by the faithfulness condition. Figure 1 shows a GBN which is RSAF but neither faithful, nor strongly faithful, nor adjacency faithful (see [20] for a definition). We conclude this section with one last remark. At first glance, it might appear that the assumption of equal variance together with our assumptions implies a simple causal ordering of variables in which the marginal variance of the variables increases strictly monotonically with the causal ordering. However, this is not the case. For instance, in the GBN shown in Figure 1, the marginal variance of the causally ordered nodes (1, 2, 3, 4, 5) is (1, 2, 2, 2, 2.125). We also perform extensive simulation experiments to further investigate this case in Appendix B.6.
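A small numerical illustration of how path cancellation produces such faithfulness violations. This toy 3-node GBN is hypothetical (it is not the network in Figure 1): with edges X1 → X2, X1 → X3 and X2 → X3 weighted so that the direct and indirect effects cancel, the marginal correlation corr(X1, X3) vanishes even though the edge 1 → 3 exists, while the corresponding precision-matrix entry, and hence the effective influence used by RSAF, stays non-zero.

```python
import numpy as np

# Hypothetical unfaithful GBN: X1 -> X2 (0.5), X1 -> X3 (-0.5), X2 -> X3 (1.0),
# unit noise variance. The direct effect of X1 on X3 (-0.5) exactly cancels the
# indirect one through X2 (0.5 * 1.0).
B = np.array([[ 0.0, 0.0, 0.0],
              [ 0.5, 0.0, 0.0],
              [-0.5, 1.0, 0.0]])
A = np.eye(3) - B
Sigma = np.linalg.inv(A) @ np.linalg.inv(A).T   # covariance of the SEM, sigma^2 = 1
Omega = A.T @ A                                  # precision matrix

# Faithfulness fails: corr(X1, X3) = 0 although the edge 1 -> 3 exists ...
assert abs(Sigma[0, 2]) < 1e-12
# ... yet Omega_{1,3} = 0.5 != 0, so the partial correlation given all other
# nodes is bounded away from zero, as RSAF requires.
assert abs(Omega[0, 2] - 0.5) < 1e-12
```

A CI test on the marginal pair (X1, X3) would wrongly drop the edge, while a method driven by the precision matrix still sees it.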
3 Results We start by characterizing the covariance and precision matrix of a GBN (G, P(W, σ²)). Let B be the weight matrix corresponding to the edge weights W; then from (3) it follows that the covariance and precision matrix are, respectively:

Σ = σ² (I − B)⁻¹ (I − B)⁻ᵀ,   Ω = (1/σ²) (I − B)ᵀ (I − B),   (5)

where I is the p × p identity matrix. Remark 1. The elements of the inverse covariance matrix are related to the partial correlations as follows: corr(X_i, X_j | X_{V\{i,j}}) = −Ω_{i,j}/√(Ω_{i,i} Ω_{j,j}). We thus have that |w̃_{i,j}| ≥ cα, for some constant c (Assumption 2), implies |corr(X_i, X_j | X_{V\{i,j}})| ≥ cα/√(Ω_{i,i} Ω_{j,j}) > 0. Next, we describe a key property of homoscedastic-noise GBNs in the lemma below, which will be the driving force behind our algorithm. Lemma 1. Let (G, P(W, σ²)) be a GBN, with Ω being the inverse covariance matrix over X and θ_i the i-th vector of regression coefficients, defined by E[X_i | X_{−i} = x_{−i}] = θ_iᵀ x_{−i}. Under Assumption 1, we have that i is a terminal vertex in G ⟺ θ_{i,j} = −σ² Ω_{i,j} for all j ∈ −i. Detailed proofs can be found in Appendix A in the supplementary material. Lemma 1 states that, in the population setting, one can identify a terminal vertex, and therefore the causal ordering, just by assuming causal minimality (Assumption 1). However, to identify terminal vertices from a finite number of samples, one needs additional assumptions. We use Lemma 1 to develop our algorithm for learning GBNs which, at a high level, works as follows. Given data X drawn from a GBN, we first estimate the inverse covariance matrix Ω̂. Then we perform a series of ordinary least squares (OLS) regressions to compute the estimators θ̂_i for all i ∈ [p]. We then identify terminal vertices using the property described in Lemma 1 and remove the corresponding variables (columns) from X. We repeat the process of identifying and removing terminal vertices and obtain the causal ordering of the vertices. Then, we perform a final set of OLS regressions to learn the structure and parameters of the DAG.
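In the population setting, Lemma 1 yields a simple peeling procedure: since θ_{i,j} = −Ω_{i,j}/Ω_{i,i} for a Gaussian, the ratio |−Ω_{i,j}/θ_{i,j}| equals Ω_{i,i}, which is exactly 1/σ² precisely at terminal vertices; so the minimum diagonal entry of Ω identifies a terminal vertex. A sketch on a hypothetical 3-node chain (weights illustrative, not from the paper):

```python
import numpy as np

# Hypothetical chain GBN X1 -> X2 -> X3; weights and variance illustrative only.
B = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
sigma2 = 0.8
p = 3
A = np.eye(p) - B
Omega = A.T @ A / sigma2            # population precision matrix

# A terminal vertex has a zero column in B, so Omega_{i,i} = 1/sigma^2 there,
# and it is strictly larger at non-terminal vertices: argmin of the diagonal
# picks a terminal vertex.
assert np.argmin(np.diag(Omega)) == 2      # X3 is the only terminal vertex

# Peel terminal vertices with the rank-1 precision update and repeat.
order, idx = [], list(range(p))
while idx:
    t = int(np.argmin(np.diag(Omega)))
    order.append(idx.pop(t))
    keep = [k for k in range(Omega.shape[0]) if k != t]
    Omega = Omega[np.ix_(keep, keep)] - np.outer(Omega[keep, t], Omega[t, keep]) / Omega[t, t]
print(order)   # reverse topological order: [2, 1, 0]
```

The rank-1 update is the same one used in line 15 of Algorithm 1 below; with estimated rather than population quantities, the finite-sample assumptions above are what keep the argmin reliable.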
The two main operations performed by our algorithm are: (a) estimating the inverse covariance matrix, and (b) estimating the regression coefficients θ_i. In what follows, we discuss these two steps in more detail and obtain theoretical guarantees for our algorithm. Inverse covariance matrix estimation. The first part of our algorithm requires an estimate Ω̂ of the true inverse covariance matrix Ω*. Due in part to its role in undirected graphical model selection, the problem of inverse covariance matrix estimation has received significant attention over the years. A popular approach for inverse covariance estimation under high-dimensional settings is the ℓ₁-penalized Gaussian MLE studied by [21–28], among others. While, technically, these algorithms can be used in the first phase of our algorithm to estimate the inverse covariance matrix, in this paper we use the method called CLIME, developed by Cai et al. [29], since its theoretical guarantees do not require the rather restrictive edge-based mutual incoherence condition of [24]. Further, CLIME is computationally attractive because it computes Ω̂ column-wise by solving p independent linear programs. Even though the CLIME estimator Ω̂ is not guaranteed to be positive definite (it is positive definite with high probability), it is suitable for our purpose since we use Ω̂ only for identifying terminal vertices. Next, we briefly describe the CLIME method for inverse covariance estimation and instantiate the theoretical results of [29] for our purpose. The CLIME estimator Ω̂ is obtained as follows. First, we compute a potentially non-symmetric estimate Ω̄ = (ω̄_{i,j}) by solving

Ω̄ = argmin_{Ω∈R^{p×p}} |Ω|₁  s.t.  |Σ_n Ω − I|_∞ ≤ λ_n,   (6)

where λ_n > 0 is the regularization parameter and Σ_n := (1/n) Xᵀ X is the empirical covariance matrix. Finally, the symmetric estimator is obtained by selecting the smaller entry among ω̄_{i,j} and ω̄_{j,i}, i.e., Ω̂ = (ω̂_{i,j}), where ω̂_{i,j} = ω̄_{i,j} 1[|ω̄_{i,j}| < |ω̄_{j,i}|] + ω̄_{j,i} 1[|ω̄_{j,i}| ≤ |ω̄_{i,j}|].
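Each column of (6) is a small linear program. A sketch of solving one column with scipy's LP solver, using the standard variable split ω = u − v with u, v ≥ 0; the covariance matrix here is an arbitrary well-conditioned example, not one from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, i, lam):
    """Solve min ||w||_1 s.t. ||S w - e_i||_inf <= lam as an LP in (u, v), w = u - v."""
    p = S.shape[0]
    e = np.zeros(p)
    e[i] = 1.0
    c = np.ones(2 * p)                       # objective: sum(u) + sum(v) = ||w||_1
    A_ub = np.block([[S, -S], [-S, S]])      # S w - e <= lam and -(S w - e) <= lam
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v

# Illustrative covariance matrix (hypothetical, chosen only to be well conditioned).
S = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
w0 = clime_column(S, 0, lam=0.05)
# Feasibility of the infinity-norm constraint at the solution:
assert np.max(np.abs(S @ w0 - np.array([1.0, 0.0, 0.0]))) <= 0.05 + 1e-7
```

Running this for i = 1, …, p and symmetrizing as described above gives the full estimator; for large p, specialized CLIME solvers are used instead of a generic LP routine.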
It is easy to see that (6) can be decomposed into p linear programs as follows. Let Ω̄ = (ω̄_1, . . . , ω̄_p); then

ω̄_i = argmin_{ω∈R^p} ‖ω‖₁  s.t.  |Σ_n ω − e_i|_∞ ≤ λ_n,   (7)

where e_i = (e_{i,j}) with e_{i,j} = 1 for j = i and e_{i,j} = 0 otherwise. The following lemma, which follows from the results of [29] and [24], bounds the maximum elementwise difference between Ω̂ and the true precision matrix Ω*. Lemma 2. Let (G*, P(W*, σ²)) be a GBN satisfying Assumption 1, with Σ* and Ω* being the "true" covariance and inverse covariance matrix over X, respectively. Given a data matrix X ∈ R^{n×p} of n i.i.d. samples drawn from P(W*, σ²), compute Ω̂ by solving (6). Then, if the regularization parameter and number of samples satisfy λ_n ≥ ‖Ω*‖₁ √((C₁/n) log(4p²/δ)) and n ≥ ((16σ⁴ ‖Ω*‖₁⁴ C₁)/α²) log(4p²/δ), with probability at least 1 − δ we have |Ω* − Ω̂|_∞ ≤ α/σ², where C₁ = 3200 max_i (Σ*_{i,i})² and δ ∈ (0, 1). Further, thresholding Ω̂ at the level 4‖Ω*‖₁ λ_n, we have S(Ω*) = S(Ω̂). Remark 2. Note that in each column of the true precision matrix Ω*, at most k entries are non-zero, where k is the maximum Markov blanket size of a node in G. Therefore, the induced (or operator) ℓ₁-norm satisfies ‖Ω*‖₁ = O(k), and the sufficient number of samples required for the estimator Ω̂ to be within α distance from Ω*, elementwise, with probability at least 1 − δ is O((1/α²) k⁴ log(p/δ)). Estimating regression coefficients. Given a GBN (G, P(W, σ²)) with the covariance and precision matrix over X being Σ and Ω, respectively, the conditional distribution of X_i given the variables in its Markov blanket is X_i | (X_{S_i} = x) ∼ N((θ^i_{S_i})ᵀ x, 1/Ω_{i,i}), where θ^i_{S_i} := (θ_i)_{S_i}. This leads to the following generative model for X_{∗,i}:

X_{∗,i} = X_{∗,S_i} θ^i_{S_i} + ε_i,   (8)

where the entries of ε_i are i.i.d. N(0, 1/Ω_{i,i}) and X_{l,S_i} ∼ N(0, Σ_{S_i,S_i}) for all l ∈ [n].
Therefore, for all i ∈ [p], we obtain the estimator θ̂^i_{S_i} of θ^i_{S_i} by solving the following ordinary least squares (OLS) problem:

θ̂^i_{S_i} = argmin_{β∈R^{|S_i|}} (1/(2n)) ‖X_{∗,i} − X_{∗,S_i} β‖₂² = (Σ^n_{S_i,S_i})⁻¹ Σ^n_{S_i,i}.   (9)

The following lemma bounds the approximation error between the true regression coefficients and those obtained by solving the OLS problem. OLS regression has been previously analyzed by [30] under the random design setting. However, they obtain bounds on the prediction error, i.e., (θ^i_{S_i} − θ̂^i_{S_i})ᵀ Σ* (θ^i_{S_i} − θ̂^i_{S_i}), while the following lemma bounds ‖θ^i_{S_i} − θ̂^i_{S_i}‖_∞. Lemma 3. Let (G*, P(W*, σ²)) be a GBN with Σ* and Ω* being the true covariance and inverse covariance matrix over X. Let X ∈ R^{n×p} be the data matrix of n i.i.d. samples drawn from P(W*, σ²). Let E[X_i | X_{S_i} = x] = xᵀ θ^i_{S_i}, and let θ̂^i_{S_i} be the OLS solution obtained by solving (9) for some i ∈ [p]. Then, assuming Σ* is non-singular, if the number of samples satisfies

n ≥ c |S_i|^{3/2} (‖θ^i_{S_i}‖_∞ + 1/|S_i|) / (λ_min(Σ*_{S_i,S_i}) α) · log(4|S_i|/δ),

we have ‖θ^i_{S_i} − θ̂^i_{S_i}‖_∞ ≤ α with probability at least 1 − δ, for some δ ∈ (0, 1), with c being an absolute constant. Our algorithm. Algorithm 1 presents our algorithm for learning GBNs. Throughout the algorithm we use as indices the true labels of the nodes. We first estimate the inverse covariance matrix Ω̂ (line 5). In line 7 we estimate the Markov blanket of each node. Then, we estimate θ̂_{i,j} for all i and j ∈ Ŝ_i, and compute the maximum per-node ratios r_i = max_j |−Ω̂_{i,j}/θ̂_{i,j}| (lines 8–11). We then identify as terminal vertex the node for which r_i is minimum and remove it from the collection of variables (lines 13 and 14). Each time a variable is removed, we perform a rank-1 update of the precision matrix (line 15) and also update the regression coefficients of the nodes in its Markov blanket (lines 16–20). We repeat this process of identifying and removing terminal vertices until the causal order has been completely determined.
Finally, we compute the DAG structure and parameters by regressing each variable against the variables that are in its Markov blanket and also precede it in the causal order (lines 23–28).

Algorithm 1 Gaussian Bayesian network structure learning algorithm.
Input: Data matrix X ∈ R^{n×p}. Output: (Ĝ, Ŵ).
1: B̂ ← 0 ∈ R^{p×p}.
2: z ← ∅, r ← ∅. (z stores the causal order.)
3: V ← [p]. (Remaining vertices.)
4: Σ_n ← (1/n) Xᵀ X.
5: Compute Ω̂ using the CLIME estimator.
6: Ω̂⁰ ← Ω̂.
7: Compute Ŝ_i = {j ∈ −i : |Ω̂_{i,j}| > 0}, ∀i ∈ [p].
8: for i ∈ 1, . . . , p do
9:   Compute θ̂^i_{Ŝ_i} = (Σ^n_{Ŝ_i,Ŝ_i})⁻¹ Σ^n_{Ŝ_i,i}.
10:  r_i ← max{|−Ω̂_{i,j}/θ̂_{i,j}| : j ∈ Ŝ_i}.
11: end for
12: for t ∈ 1, . . . , p − 1 do
13:  i ← argmin(r). (i is a terminal vertex.)
14:  Append i to z; V ← V \ {i}; r_i ← +∞.
15:  Ω̂ ← Ω̂_{−i,−i} − (1/Ω̂_{i,i}) Ω̂_{−i,i} Ω̂_{i,−i}.
16:  for j ∈ Ŝ_i do
17:    Ŝ_j ← {l ≠ j : |Ω̂_{j,l}| > 0}.
18:    Compute θ̂^j_{Ŝ_j} = (Σ^n_{Ŝ_j,Ŝ_j})⁻¹ Σ^n_{Ŝ_j,j}.
19:    r_j ← max{|−Ω̂_{j,l}/θ̂_{j,l}| : l ∈ Ŝ_j}.
20:  end for
21: end for
22: Append the remaining vertex in V to z.
23: for i ∈ 2, . . . , p do
24:  Ŝ_{z_i} ← {z_j : j ∈ [i − 1]} ∩ {j ∈ [p] : j ≠ z_i and |Ω̂⁰_{z_i,j}| > 0}.
25:  Compute θ̂ = (Σ^n_{Ŝ_{z_i},Ŝ_{z_i}})⁻¹ Σ^n_{Ŝ_{z_i},z_i}.
26:  π̂(z_i) ← S(θ̂).
27:  B̂_{z_i,π̂(z_i)} ← θ̂_{π̂(z_i)}.
28: end for
29: Ê ← {(i, j) : B̂_{i,j} ≠ 0}, Ŵ ← {B̂_{i,j} : (i, j) ∈ Ê}, and Ĝ ← ([p], Ê).

In order to obtain our main result for learning GBNs, we first derive the following technical lemma, which states that if the data come from a GBN satisfying Assumptions 1–2, then removing a terminal vertex results in a GBN that still satisfies Assumptions 1–2. Lemma 4. Let (G, P(W, σ²)) be a GBN satisfying Assumptions 1–2, and let Σ, Ω be the (non-singular) covariance and precision matrix, respectively. Let X ∈ R^{n×p} be a data matrix of n i.i.d. samples drawn from P(W, σ²), and let i be a terminal vertex in G. Denote by G′ = (V′, E′) and W′ = {w_{i,j} ∈ W : (i, j) ∈ E′} the graph and set of edge weights, respectively, obtained by removing the node i from G.
Then X_{j,−i} ∼ P(W′, σ²) for all j ∈ [n], and the GBN (G′, P(W′, σ²)) satisfies Assumptions 1–2. Further, the inverse covariance matrix Ω′ and the covariance matrix Σ′ of the GBN (G′, P(W′, σ²)) satisfy, respectively, Ω′ = Ω − (1/Ω_{i,i}) Ω_{∗,i} Ω_{i,∗} and Σ′ = Σ_{−i,−i}. Theorem 1. Let Ĝ = ([p], Ê) and Ŵ be the DAG and edge weights, respectively, returned by Algorithm 1. Assume that the data matrix X was drawn from a GBN (G*, P(W*, σ²)) with G* = ([p], E*), with Σ* and Ω* being the "true" covariance and inverse covariance matrix, respectively, and satisfying Assumptions 1–2. If the regularization parameter is set according to Lemma 2, and if the number of samples satisfies

n ≥ c ((σ⁴ ‖Ω*‖₁⁴ C_max)/α² + (k^{3/2} (w̃_max + 1/k))/(C_min α)) log(24p²(p − 1)/δ),

where c is an absolute constant, w̃_max := max{|w̃_{i,j}| : i ∈ V[m, τ], j ∈ S_i[m, τ], m ∈ [p], τ ∈ T_G} with w̃_{i,j} being the effective influence (4) between i and j, C_max = max_{i∈[p]} (Σ*_{i,i})², and C_min = min_{i∈[p]} λ_min(Σ*_{S_i,S_i}), then Ê ⊇ E* and, for all (i, j) ∈ Ê, |ŵ_{i,j} − w*_{i,j}| ≤ α with probability at least 1 − δ, for some δ ∈ (0, 1) and α > 0. Further, thresholding Ŵ at the level α, we get Ê = E*. The CLIME estimator of the precision matrix can be computed in polynomial time and the OLS steps take O(pk³) time; therefore our algorithm runs in polynomial time (please see Appendix C.2). 4 Experiments In this section, we validate our theoretical findings through synthetic experiments. We use a class of Erdős–Rényi GBNs, with edge weights set to ±1/2 with probability 1/2 and noise variance σ² = 0.8. For each value of p ∈ {50, 100, 150, 200}, we sampled 30 random GBNs and estimated the probability Pr{G* = Ĝ} by computing the fraction of times the learned DAG structure Ĝ matched the true DAG structure G* exactly. The number of samples was set to Ck² log p, where C was the control parameter and k the maximum Markov blanket size (please see Appendix B.2 for more details).
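A miniature version of such a recovery experiment can be sketched as follows. It uses a fixed 4-node graph instead of random Erdős–Rényi GBNs, and two simplifications flagged as assumptions: the precision matrix is estimated by inverting the empirical covariance (valid here since p ≪ n) rather than by CLIME, and the terminal-vertex ratio test is replaced by the minimum diagonal entry of the precision estimate, which Lemma 1 justifies under equal noise variances.

```python
import numpy as np

# Hypothetical 4-node GBN: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 (weights illustrative).
rng = np.random.default_rng(1)
p, n, sigma2 = 4, 50_000, 0.8
B = np.zeros((p, p))
B[1, 0], B[2, 0], B[3, 1], B[3, 2] = 0.5, -0.5, 0.5, 0.5
N = rng.normal(scale=np.sqrt(sigma2), size=(n, p))
X = np.linalg.solve(np.eye(p) - B, N.T).T           # samples from the SEM

Omega = np.linalg.inv(X.T @ X / n)                  # empirical precision
order, idx = [], list(range(p))
while idx:                                          # peel terminal vertices
    t = int(np.argmin(np.diag(Omega)))
    order.append(idx.pop(t))
    keep = [k for k in range(Omega.shape[0]) if k != t]
    Omega = Omega[np.ix_(keep, keep)] - np.outer(Omega[keep, t], Omega[t, keep]) / Omega[t, t]
z = order[::-1]                                     # estimated causal order

# Final OLS pass: regress each node on its predecessors in z, then threshold
# small coefficients (the true non-zero weights here are +/- 0.5).
B_hat = np.zeros((p, p))
for pos in range(1, p):
    i, pred = z[pos], z[:pos]
    coef, *_ = np.linalg.lstsq(X[:, pred], X[:, i], rcond=None)
    B_hat[i, pred] = np.where(np.abs(coef) > 0.25, coef, 0.0)

edges = set(zip(*np.nonzero(B_hat)))
assert edges == {(1, 0), (2, 0), (3, 1), (3, 2)}    # true edge set recovered
```

With the CLIME estimator in place of the plain inverse and the full ratio test and blanket-restricted regressions of Algorithm 1, this becomes the paper's procedure; the version above is only meant to make the peel-then-regress structure concrete.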
Figure 2 shows the results of the structure and parameter recovery experiments. We can see that the log p scaling prescribed by Theorem 1 holds in practice. Our method outperforms various state-of-the-art methods like PC, GES and MMHC on this class of Erdős–Rényi GBNs (Appendix B.3), works when the noise variables have unequal, but similar, variances (Appendix B.4), and also works for high-dimensional gene expression data (Appendix B.5). Concluding Remarks. There are several ways of extending our current work. While the algorithm developed in this paper is specific to the equal noise-variance case, we believe our theoretical analysis can be extended to the non-identifiable case to show that our algorithm, under suitable conditions, can recover one of the Markov-equivalent DAGs. It would also be interesting to explore whether some of the ideas developed herein can be extended to binary or discrete Bayesian networks. Figure 2: (Left) Probability of correct structure recovery vs. number of samples, where the latter is set to Ck² log p with C being the control parameter and k the maximum Markov blanket size. (Right) The maximum absolute difference between the true parameters and the learned parameters vs. number of samples. References [1] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from Data, pages 121–130. Springer, 1996. [2] Sanjoy Dasgupta. Learning polytrees. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 134–141. Morgan Kaufmann Publishers Inc., 1999. [3] Jonas Peters, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. Causal discovery with continuous additive noise models. Journal of Machine Learning Research, 15(June):2009–2053, 2014. [4] Asish Ghoshal and Jean Honorio. Information-theoretic limits of Bayesian network structure learning.
Learning Neural Representations of Human Cognition across Many fMRI Studies

Arthur Mensch∗ Inria arthur.mensch@m4x.org
Julien Mairal† Inria julien.mairal@inria.fr
Danilo Bzdok Department of Psychiatry, RWTH danilo.bzdok@rwth-aachen.de
Bertrand Thirion∗ Inria bertrand.thirion@inria.fr
Gaël Varoquaux∗ Inria gael.varoquaux@inria.fr

Abstract

Cognitive neuroscience is enjoying a rapid increase in extensive public brain-imaging datasets. This opens the door to large-scale statistical models. Finding a unified perspective for all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations, cognitive processes and psychological tasks to brain networks? We cast this challenge as a machine-learning problem: predicting conditions from statistical brain maps across different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli. Our multi-dataset classification model achieves the best prediction performance on several large reference datasets, compared to models without cognition-aware low-dimensional representations; it brings a substantial performance boost to the analysis of small datasets, and can be introspected to identify universal template cognitive concepts.

With the advent of functional brain-imaging technologies, cognitive neuroscience is accumulating quantitative maps of neural activity responses to specific tasks or stimuli. A rapidly increasing number of neuroimaging studies are publicly shared (e.g., the human connectome project, HCP [1]), opening the door to applying large-scale statistical approaches [2]. Yet, it remains a major challenge to formally extract structured knowledge from heterogeneous neuroscience repositories.
As stressed in [3], aggregating knowledge across cognitive neuroscience experiments is intrinsically difficult due to the diverse nature of the hypotheses and conclusions of the investigators. Cognitive neuroscience experiments aim at isolating brain effects underlying specific psychological processes: they yield statistical maps of brain activity that measure the neural responses to carefully designed stimuli. Unfortunately, neither regional brain responses nor experimental stimuli can be considered atomic: a given experimental stimulus recruits a spatially distributed set of brain regions [4], while each brain region is observed to react to diverse stimuli. Taking advantage of the resulting data richness to build formal models describing psychological processes requires describing each cognitive conclusion on a common basis for brain response and experimental study design. Uncovering atomic basis functions that capture the neural building blocks underlying cognitive processes is therefore a primary goal of neuroscience [5], for which we propose a new data-driven approach.

∗Inria, CEA, Université Paris-Saclay, 91191 Gif sur Yvette, France. †Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Several statistical approaches have been proposed to tackle the problem of knowledge aggregation in functional imaging. A first set of approaches relies on coordinate-based meta-analysis to define robust neural correlates of cognitive processes: those are extracted from the descriptions of experiments — based on categories defined by text mining [6] or experts [7] — and correlated with brain coordinates related to these experiments.
Although quantitative meta-analysis techniques provide useful summaries of the existing literature, they are hindered by label noise in the experiment descriptions and by weak information on brain activation, as the maps are reduced to a few coordinates [8]. A second, more recent set of approaches directly models brain maps across studies, either focusing on studies of similar cognitive processes [9] or tackling the entire scope of cognition [10, 11]. Decoding, i.e., predicting the cognitive process from brain activity, across many different studies touching different cognitive questions is a key goal for cognitive neuroimaging, as it provides a principled answer to reverse inference [12]. However, a major roadblock to scaling this approach is the necessity to label cognitive tasks across studies in a rich but consistent way, e.g., by building an ontology [13]. We follow a more automated approach and cast dataset accumulation into a multi-task learning problem: our model is trained to decode different datasets simultaneously, using a shared architecture. Machine-learning techniques can indeed learn universal representations of inputs that give good performance in multiple supervised problems [14, 15]. They have been successful, especially with the development of deep neural networks [see, e.g., 16], in sharing representations and transferring knowledge from one dataset prediction model to another (e.g., in computer vision [17] and audio processing [18]). A popular approach is to simultaneously learn to represent the inputs of the different datasets in a low-dimensional space and to predict the outputs from the low-dimensional representatives. Using very deep model architectures in functional MRI is currently thwarted by the signal-to-noise ratio of the available recordings and the relatively small size of the datasets [19] compared to computer vision and text corpora.
Yet, we show that multi-dataset representation learning is a fertile ground for identifying cognitive systems with predictive power for mental operations.

Contribution. We introduce a new model architecture dedicated to multi-dataset classification, which performs two successive linear dimension reductions of the input statistical brain images and predicts psychological conditions from a learned low-dimensional representation of these images, linked to cognitive processes. In contrast to previous ontology-based approaches, our model does not need to impose a structure across different cognitive experiments: the representation of brain images is learned using the raw set of experimental conditions for each dataset. To our knowledge, this work is the first to propose knowledge aggregation and transfer learning between functional MRI studies with such a modest level of supervision. We demonstrate the performance of our model on several openly accessible and rich reference datasets in the brain-imaging domain. The different aspects of its architecture bring a substantial increase in out-of-sample accuracy compared to models that forgo learning a cognition-aware low-dimensional representation of brain maps. Our model remains simple enough to be interpretable: it can be collapsed into a collection of classification maps, while the space of low-dimensional representatives can be explored to uncover a set of meaningful latent components.

1 Model: multi-dataset classification of brain statistical images

Our general goal is to extract and integrate biological knowledge across many brain-imaging studies within the same statistical learning framework. We first outline how analyzing large repositories of fMRI experiments can be cast as a classification problem. Here, success in capturing brain-behavior relationships is measured by out-of-sample prediction accuracy.
The proposed model (Figure 1) solves a range of these classification problems within a single statistical estimation and imposes a shared latent structure across the single-dataset classification parameters. These shared model parameters may be viewed as a chain of two dimension reductions. The first reduction layer leverages knowledge about brain spatial regularities; it is learned from resting-state data and designed to capture neural activity patterns at different coarseness levels. The second reduction layer projects data on directions generally relevant for cognitive-state prediction. The combination of both reductions yields low-dimensional representatives that are less affected by noise and subject variance than the high-dimensional samples: classification is expected to have better out-of-sample prediction performance.

Figure 1: Model architecture: three-layer multi-dataset classification. The first layer (orange) is learned from data acquired outside of cognitive experiments and captures a spatially coherent signal at multiple scales; the second layer (blue) embeds these representations in a space common to all datasets, from which the conditions are predicted (pink) by multinomial models.

1.1 Problem setting: predicting conditions from brain activity in multiple studies

We first introduce our notation and terminology, and formalize a general prediction problem applicable to any task fMRI dataset. In a single fMRI study, each subject performs different experiments in the scanner. During such an experiment, the subjects are presented a set of sensory stimuli (i.e., conditions) that aim at recruiting a target set of cognitive processes. We fit a first-level general linear model for every record to obtain z-score maps that quantify the importance of each condition in explaining each voxel. Formally, the $n$ statistical maps $(x_i)_{i\in[n]}$ of a given study form a sequence in $\mathbb{R}^p$, where $p$ is the number of voxels in the brain.
Each such observation is labelled by a condition $c_i$ in $[1, k]$ whose effect is captured by $x_i$. A single study typically features one or a few (if experiments are repeated) statistical maps per condition and per subject, and may present up to $k = 30$ conditions. Across the studies, the observed brain maps can be modeled as generated from an unknown joint distribution of brain activity and associated cognitive conditions $((x_i, c_i))_{i\in[n]}$, where variability across trials and subjects acts as confounding noise. In this context, we wish to learn a decoding model that predicts the condition $c$ from brain activity $x$ measured on new subjects or new studies. Inspired by recent work [10, 20, 21], we frame the condition prediction problem as the estimation of a multinomial classification model. Our models estimate the probability vector of $x$ being labeled by each condition in $C$. This vector is modeled as a function of $(W, b)$ in $\mathbb{R}^{p\times k} \times \mathbb{R}^k$ that takes the softmax form. For all $j$ in $[1, k]$, its $j$-th coordinate is defined as
$$ p(x, W, b)_j \triangleq \mathbb{P}[c = j \mid x, W, b] = \frac{e^{W^{(j)\top} x + b_j}}{\sum_{l \in C} e^{W^{(l)\top} x + b_l}}. \qquad (1) $$
Fitting the model weights is done by minimizing the cross-entropy between $(p(x_i))_i$ and the true labels $([c_i = j]_{j\in[k]})_i$ with respect to $(W, b)$, with or without imposing parameter regularization. In this model, an input image is classified among all conditions presented in the whole study. It is possible to restrict this classification to the set of conditions used in a given experiment — the empirical results of this study can be reproduced in this setting.

The challenge of model parameter estimation. A major inconvenience of the vanilla multinomial model lies in the ratio between the limited number of samples provided by a typical fMRI dataset and the overwhelming number of model weights to be estimated. Fitting the model amounts to estimating $k$ discriminative brain maps, i.e.
millions of parameters (4M for the 23 conditions of HCP), whereas most brain-imaging studies yield less than a hundred observations and therefore only a few thousand samples. This makes it hard to approximate the population parameters well enough for successful generalization, especially because the variance between subjects is high compared to the variance between conditions. The obstacle is usually tackled in one of two major ways in brain imaging: 1) we can impose sparsity or a-priori structure over the model weights; alternatively, 2) we can reduce the dimension of the input data by performing spatial clustering or univariate feature selection by ANOVA. However, we note that, on the one hand, regularization strategies frequently incur costly computational budgets if one wants to obtain interpretable weights [22], and they introduce artificial bias. On the other hand, existing dimension-reduction techniques developed for fMRI analysis can lead to distorted signal and accuracy losses [23]. Most importantly, previous statistical approaches are not tuned to identifying conditions from task fMRI data. We therefore propose to use a dimension reduction that is estimated from data and tuned to capture the common hidden aspects shared by statistical maps across studies — we aggregate several classification models that share parameters.

1.2 Learning a shared representation across studies for decoding

We now consider several fMRI studies. $(x_i)_{i\in[n]}$ is the union of all statistical maps from all datasets. We write $D$ for the set of all studies, $C_d$ for the set of all $k_d$ conditions of study $d$, $k \triangleq \sum_d k_d$ for the total number of conditions, and $S_d$ for the subset of $[n]$ that indexes samples of study $d$. For each study $d$, we estimate the parameters $(W_d, b_d)$ for the classification problem defined above.
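The per-study multinomial model of Section 1.1 (Eq. 1) is simple enough to write down directly. Below is a toy NumPy illustration (not the authors' pytorch implementation; all names and sizes are ours):

```python
import numpy as np

def condition_probabilities(x, W, b):
    """Softmax probabilities P[c = j | x, W, b] for a brain map x in R^p,
    weights W in R^{p x k} and intercepts b in R^k, as in Eq. (1)."""
    logits = W.T @ x + b
    logits -= logits.max()          # subtract max for numerical stability
    expl = np.exp(logits)
    return expl / expl.sum()

rng = np.random.default_rng(0)
p, k = 1000, 23                     # toy voxel count; HCP has 23 conditions
x = rng.normal(size=p)              # stand-in for a z-scored statistical map
W = rng.normal(size=(p, k)) * 0.01
b = np.zeros(k)
probs = condition_probabilities(x, W, b)
print(probs.sum())                  # probabilities sum to 1
```

Fitting then minimizes the cross-entropy between these probabilities and the one-hot labels, as stated above; the sections that follow replace the unconstrained `W` with the factored, dimension-reduced parametrization.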
Adapting the multi-task learning framework of [14], we constrain the weights $(W_d)_d$ to share a common latent structure: namely, we fix a latent dimension $l \leq p$ and enforce that, for all datasets $d$,
$$ W_d = W_e W'_d, \qquad (2) $$
where the matrix $W_e$ in $\mathbb{R}^{p\times l}$ is shared across datasets and $(W'_d)_d$ are dataset-specific classification matrices over an $l$-dimensional input space. Intuitively, $W_e$ should be a "consensus" projection matrix that projects every sample $x_i$ from every dataset onto a lower-dimensional representation $W_e^\top x_i$ in $\mathbb{R}^l$ that is easy to label correctly. The latent dimension $l$ may be chosen larger than $k$. In this case, regularization is necessary to ensure that the factorization (2) is indeed useful, i.e., that the multi-dataset classification problem does not reduce to separate multinomial regressions on each dataset. To regularize our model, we apply Dropout [24] to the projected data representation: during successive training iterations, we set a random fraction $r$ of the reduced data features to 0. This prevents the co-adaptation of the matrices $W_e$ and $(W'_d)_d$ and ensures that every direction of $W_e$ is useful for classifying every dataset. Formally, Dropout amounts to sampling binary diagonal matrices $M$ in $\mathbb{R}^{l\times l}$ during training, with Bernoulli-distributed coefficients; for all datasets $d$, $W'_d$ is estimated through the task of classifying the Dropout-corrupted reduced data $(M W_e^\top x_i)_{i \in S_d,\, M \sim \mathcal{M}}$.
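The factorization (2), with Dropout applied to the latent representation, maps naturally onto a shared linear layer followed by per-dataset read-out heads. The following is a minimal PyTorch sketch under illustrative sizes (the class name, head layout and dimensions are ours, not the released implementation; 592 = 16 + 64 + 512 anticipates the multi-scale input of Section 1.3):

```python
import torch
import torch.nn as nn

class FactoredClassifier(nn.Module):
    """Shared projection W_e plus per-dataset softmax heads W'_d (Eq. 2),
    with Dropout playing the role of the random masks M on W_e^T x."""

    def __init__(self, in_dim, latent_dim, conditions_per_dataset, dropout=0.75):
        super().__init__()
        self.embed = nn.Linear(in_dim, latent_dim, bias=False)   # W_e
        self.dropout = nn.Dropout(dropout)                       # masks M
        self.heads = nn.ModuleList(                              # (W'_d, b_d)
            [nn.Linear(latent_dim, k_d) for k_d in conditions_per_dataset]
        )

    def forward(self, x, dataset):
        z = self.dropout(self.embed(x))      # Dropout-corrupted latent code
        return self.heads[dataset](z)        # logits; softmax lives in the loss

model = FactoredClassifier(in_dim=592, latent_dim=100,
                           conditions_per_dataset=[23, 30])
x = torch.randn(8, 592)                      # a mini-batch of reduced maps
logits = model(x, dataset=0)
print(logits.shape)                          # -> torch.Size([8, 23])
```

Training each head only on its own dataset's cross-entropy, while the shared `embed` layer receives gradients from all of them, is exactly what makes the latent space a "consensus" representation.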
In practice, the matrices $W_e$ and $(W'_d)_d$ are learned by jointly minimizing the following expected risk, where the objective is the sum of the single-study cross-entropies, averaged over the Dropout noise:
$$ \min_{W_e,\, (W'_d)_d} \; \sum_{d\in D} \frac{1}{|S_d|} \sum_{i\in S_d} \sum_{j\in C_d} \mathbb{E}_M\big[ -\delta_{j=c_i} \log p_d[x_i, W_e M W'_d, b_d]_j \big]. \qquad (3) $$
Imposing a common structure on the classification matrices $(W_d)_d$ is natural, as the classes to be distinguished do share some common neural organization — brain maps have a correlated spatial structure, while the psychological conditions of the different datasets may trigger shared cognitive primitives underlying human cognition [21, 20]. With our design, we aim at learning a matrix $W_e$ that captures these common aspects and thus benefits the generalization performance of all the classifiers. As $W_e$ is estimated from data, brain maps from one study are enriched by the maps from all the other studies, even if the conditions to be classified are not shared among studies. In so doing, our modeling approach allows transfer learning among all the classification tasks. Unfortunately, the estimators provided by solving (3) may have limited generalization performance, as $n$ remains relatively small (∼20,000) compared to the number of parameters. We address this issue by performing an initial dimension reduction that captures the spatial structure of brain maps.

1.3 Initial dimension reduction using localized rest-fMRI activity patterns

The projection expressed by $W_e$ ignores the signal structure of statistical brain maps. Acknowledging this structure in commonly acquired brain measurements should allow us to reduce the dimensionality of the data with little signal loss, possibly with the additional benefit of a denoising effect. Several recent studies [25] in the brain-imaging domain suggest using fMRI data acquired in experiment-free studies for such dimension reduction. For this reason, we introduce a first reduction of dimension that is not estimated from statistical maps, but from resting-state data.
Formally, we enforce $W_e = W_g W'_e$, where $g > l$ ($g \sim 300$), $W_g \in \mathbb{R}^{p\times g}$ and $W'_e \in \mathbb{R}^{g\times l}$. Intuitively, multiplication by the matrix $W_g$ should summarize the spatial distribution of brain maps, while multiplication by $W'_e$, which is estimated by solving (3), should find low-dimensional representations able to capture cognitive features. $W'_e$ is now of reasonable size ($g \times l \sim 15000$): solving (3) should estimate parameters with better generalization performance. Defining an appropriate matrix $W_g$ is the purpose of the next paragraphs.

Resting-state decomposition. The initial dimension reduction determines the relative contribution of statistical brain maps over what is commonly interpreted by neuroscience investigators as functional networks. We discover such macroscopical brain networks by performing a sparse matrix factorization over the massive resting-state dataset provided in the HCP900 release [1]: such a decomposition technique, described e.g. in [26, 27], efficiently provides (i.e., in the order of a few hours) a given number of sparse spatial maps that decompose the resting-state signal with good reconstruction performance. That is, it finds a sparse and positive matrix $D$ in $\mathbb{R}^{p\times g}$ and loadings $A$ in $\mathbb{R}^{g\times m}$ such that the $m$ resting-state brain images $X^{\mathrm{rs}}$ in $\mathbb{R}^{p\times m}$ are well approximated by $DA$. $D$ is thus a set of slightly overlapping networks — each voxel belongs to at most two networks. To maximally preserve Euclidean distance when performing the reduction, we perform an orthogonal projection, which amounts to setting $W_g \triangleq D(D^\top D)^{-1}$. Replacing in (3), we obtain the reduced expected risk minimization problem, where the input dimension is now the number $g$ of dictionary components:
$$ \min_{W'_e \in \mathbb{R}^{g\times l},\, (W'_d)_d} \; \sum_{d\in D} \frac{1}{|S_d|} \sum_{i\in S_d} \sum_{j\in C_d} \mathbb{E}_M\big[ -\delta_{j=c_i} \log p_d[W_g^\top x_i, W'_e M W'_d, b_d]_j \big]. \qquad (4) $$

Multiscale projection. Selecting the "best" number of brain networks $g$ is an ill-posed problem [28]: the size of the functional networks that will prove relevant for condition classification is unknown to the investigator.
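The orthogonal projection $W_g = D(D^\top D)^{-1}$ above has the defining property $W_g^\top D = I$: a map that lies in the span of the dictionary is mapped exactly to its loadings. A small NumPy sketch, with toy dimensions and a random stand-in for the sparse, positive dictionary (the real $p$ is the number of voxels, orders of magnitude larger):

```python
import numpy as np

rng = np.random.default_rng(0)
p, g = 200, 16                        # toy sizes; illustrative only
D = np.abs(rng.normal(size=(p, g)))   # stand-in for a sparse positive dictionary
D[rng.random(size=D.shape) < 0.8] = 0.0   # make it sparse

W_g = D @ np.linalg.inv(D.T @ D)      # W_g = D (D^T D)^{-1}

x = rng.normal(size=p)                # a statistical map
loadings = W_g.T @ x                  # reduced representation in R^g

# Sanity check: W_g^T D = I, so a map in the span of D recovers its
# loadings exactly under the projection.
a = rng.normal(size=g)
assert np.allclose(W_g.T @ (D @ a), a)
print(loadings.shape)
```

The multi-scale variant of the next paragraph simply stacks three such projection matrices, one per dictionary, so that the reduced vector concatenates the loadings at every scale.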
To address this issue, we propose to reduce the high-resolution data $(x_i)_i$ in a multi-scale fashion: we initially extract 3 sparse spatial dictionaries $(D_j)_{j\in[3]}$ with 16, 64 and 512 components respectively. Then, we project the statistical maps onto each of the dictionaries and concatenate the loadings, in a process analogous to projecting on an overcomplete dictionary in computer vision [e.g., 29]. This amounts to defining the matrix $W_g$ as the concatenation
$$ W_g \triangleq \big[\, D_1(D_1^\top D_1)^{-1} \;\; D_2(D_2^\top D_2)^{-1} \;\; D_3(D_3^\top D_3)^{-1} \,\big] \in \mathbb{R}^{p\times(16+64+512)}. \qquad (5) $$
With this definition, the reduced data $(W_g^\top x_i)_i$ carry information about the network activations at different scales. This makes the classification maps learned by the model more regular than when using a single-scale dictionary, and indeed yields more interpretable classification maps. However, it brings only a small improvement in terms of predictive accuracy compared to using a single dictionary of size 512. We further discuss multi-scale decomposition in Appendix A.2.

1.4 Training with stochastic gradient descent

As illustrated in Figure 1, our model may be interpreted as a three-layer neural network with linear activations and several read-out heads, each corresponding to a specific dataset. The model can be trained using stochastic gradient descent, following a previously employed alternated training scheme [18]: we cycle through the datasets $d \in D$ and select, at each iteration, a mini-batch of samples $(x_i)_{i\in B}$, where $B \subset S_d$ has the same size for all datasets. We perform a gradient step — the weights $W'_d$, $b_d$ and $W'_e$ are updated, while the others are left unchanged. The optimizer thus sees the same number of samples for each dataset, and the expected stochastic gradient is the gradient of (4), so that the empirical risk decreases in expectation and we find a critical point of (4) asymptotically. We use the Adam solver [30] as a flavor of stochastic gradient descent, as it allows faster convergence.

Computational cost.
Training the model on the projected data $(W_g^\top x_i)_i$ takes 10 minutes on a conventional single-CPU machine with an Intel Xeon 3.21 GHz. The initial step of computing the dictionaries $(D_1, D_2, D_3)$ from all HCP900 resting-state records (4 TB of data) takes 5 hours using [27], while transforming the data from all the studies with the $W_g$ projection takes around 1 hour. Adding a new dataset with 30 subjects to our model and performing the joint training takes no more than 20 minutes. This is much less than the cost of fitting a first-level GLM on such a dataset (∼1 h per subject).

2 Experiments

We characterize the behavior and performance of our model on several large, publicly available brain-imaging datasets. First, to validate the relevance of all the elements of our model, we perform an ablation study. It shows that the multi-scale spatial dimension reduction and the use of multi-dataset classification substantially improve classification performance, and suggests that the proposed model captures a new and interesting latent structure of brain images. We further illustrate the effect of transfer learning by systematically varying the number of subjects in a single dataset: we show how multi-dataset learning helps mitigate the decrease in accuracy due to smaller train size — a result of much use for analysing cognitive experiments on small cohorts. Finally, we illustrate the interpretability of our model and show how the latent "cognitive space" can be explored to uncover template brain maps associated with related conditions in different datasets.

2.1 Datasets and tools

Datasets. Our experimental study features 5 publicly available task fMRI studies. We use all resting-state records from the HCP900 release [1] to compute the sparse dictionaries that are used in the first dimension reduction materialized by $W_g$. We succinctly describe the conditions of each dataset — we refer the reader to the original publications for further details.
• HCP: gambling, working memory, motor, language, social and relational tasks. 800 subjects.
• Archi [31]: localizer protocol, motor, social and relational tasks. 79 subjects.
• Brainomics [32]: localizer protocol. 98 subjects.
• Camcan [33]: audio-video task, with frequency variation. 606 subjects.
• LA5c consortium [34]: task-switching, balloon analog risk taking, stop-signal and spatial working memory capacity tasks — high-level tasks. 200 subjects.

The last four datasets are target datasets, on which we measure out-of-sample prediction performance. The larger HCP dataset serves as a knowledge-transferring dataset, which should boost this performance when considered in the multi-dataset model. We register the task time-series in the reference MNI space before fitting a general linear model (GLM) and computing the maps (standardized by z-scoring) associated with each base condition — no manual design of contrasts is involved. More details on the pipeline used for z-map extraction are provided in Appendix A.1.

Tools. We use pytorch¹ to define and train the proposed models, nilearn [35] to handle brain datasets, along with scikit-learn [36] to design the experimental pipelines. Sparse brain decompositions were computed from the whole HCP900 resting-state data. The code for reproducing the experiments is available at http://github.com/arthurmensch/cogspaces. Our model involves a few non-critical hyperparameters: we use batches of size 256, set the latent dimension l = 100 and use a Dropout rate r = 0.75 in the latent cognitive space — this value performs slightly better than r = 0.5. We use a multi-scale dictionary with 16, 64 and 512 components, as it yields the best quantitative and qualitative results.² Finally, test accuracy is measured on half of the subjects of each dataset, which are removed from the training sets beforehand. Benchmarks are repeated 20 times with random split folds to estimate the variance in performance.
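Using the hyperparameters above, the alternated training scheme of Section 1.4 amounts to cycling over datasets and taking one Adam step per mini-batch. A self-contained toy PyTorch sketch (synthetic data and illustrative sizes — a sketch of the scheme, not the cogspaces code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from itertools import cycle

torch.manual_seed(0)

# Toy multi-dataset setup: reduced inputs in R^g, k_d conditions per dataset.
g, l, batch = 592, 100, 32
datasets = [
    (torch.randn(256, g), torch.randint(0, 23, (256,))),   # e.g. a large study
    (torch.randn(128, g), torch.randint(0, 30, (128,))),   # e.g. a target study
]

W_e = nn.Linear(g, l, bias=False)                  # shared projection W'_e
heads = nn.ModuleList([nn.Linear(l, 23), nn.Linear(l, 30)])  # (W'_d, b_d)
dropout = nn.Dropout(0.75)
opt = torch.optim.Adam(list(W_e.parameters()) + list(heads.parameters()),
                       lr=1e-3)

for step, d in zip(range(200), cycle(range(len(datasets)))):
    X, y = datasets[d]
    idx = torch.randint(0, len(X), (batch,))       # same batch size everywhere
    logits = heads[d](dropout(W_e(X[idx])))
    loss = F.cross_entropy(logits, y[idx])         # single-study cross-entropy
    opt.zero_grad()
    loss.backward()                                # only W_e and head d get grads
    opt.step()

print(float(loss))
```

Because each step back-propagates only through the shared layer and the current dataset's head, the other heads are left unchanged, matching the alternation described above.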
2.2 Dimension reduction and transfer improve test accuracy

For the four benchmark studies, the proposed model brings between +1.3% and +13.4% extra test accuracy compared to simple multinomial classification. To further quantify which aspects of the model improve performance, we perform an ablation study: we measure the prediction accuracy of six models, from the simplest to the most complete model described in Section 1. The first three experiments study the effect of the initial dimension reduction and of regularization³. The last three experiments measure the performance of the proposed factored model and the effect of multi-dataset classification.

¹http://pytorch.org/
²Note that using only the 512-component dictionary yields comparable predictive accuracy. Quantitatively, the multi-scale approach is beneficial when using dictionaries with fewer components (e.g., 16, 64, 128) — see Appendix A.2 for a quantitative validation of the multi-scale approach.
³For these models, the ℓ2 and Dropout regularization parameters are estimated by nested cross-validation.

[Figure: bar plots of test accuracy on Brainomics, CamCan, LA5C and Archi for the six model variants: full input + ℓ2, dimension reduction + ℓ2, dimension reduction + Dropout, factored model + Dropout, transfer from HCP, transfer from all datasets.]
Figure 2: Ablation results. Each dimension reduction of the model has a relevant contribution. Dropout regularization is very effective when applied to the cognitive latent space. Learning this latent space allows knowledge transfer between datasets.

[Figure: test accuracy vs. number of training subjects on Archi, Brainomics and Camcan, without transfer, with transfer from HCP, and with transfer from all datasets.]
Figure 3: Learning curves in the single-dataset and multi-dataset settings.
Estimating the latent cognitive space from multiple datasets is very useful for studying small cohorts.

1. Baseline ℓ2-penalized multinomial classification, where we predict c from x ∈ R^p directly.
2. Multinomial classification after projection on a dictionary, i.e., predicting c from Wg x.
3. Same as experiment 2, using Dropout noise on the projected data Wg x.
4. Factored model in the single-study case: solving (4) with the target study only.
5. Factored model in a two-study case: using the target study alongside HCP.
6. Factored model in the multi-study case: using the target study alongside all other studies.

The results are summarized in Figure 2. On average, both dimension reductions, introduced by Wg and W′e, are beneficial to generalization performance. Using many datasets for prediction brings a further increase in performance, providing evidence of transfer learning between datasets. In detail, the comparison between experiments 1, 2 and 3 confirms that projecting brain images onto functional networks of interest is a good strategy to capture cognitive information [20, 25]. Note that in addition to improving the statistical properties of the estimators, the projection drastically reduces the computational complexity of training our full model. Experiments 2 and 3 measure the impact of the regularization method without learning a further latent projection. Using Dropout on the input space consistently performs better than ℓ2 regularization (+1% to +5%); this can be explained in view of [37], which interprets input-Dropout as an ℓ2 regularization on the natural model parametrization. Experiment 4 shows that Dropout regularization becomes much more powerful when learning a second dimension reduction, i.e., when solving problem (4). Even when using a single study for learning, we observe a significant improvement (+3% to +7%) in performance on three out of four datasets.
Learning a latent space projection together with Dropout-based data augmentation in this space is thus a much better regularization strategy than simple ℓ2 or input-Dropout regularization. Finally, the comparison between experiments 4, 5 and 6 exhibits the expected transfer effect. On three out of four target studies, learning the projection matrix W′e using several datasets leads to an accuracy gain of +1.1% to +1.6%, consistent across folds. The more datasets are used, the higher the accuracy gain; note also that this gain increases as the train size shrinks. Jointly classifying images on several datasets thus brings extra information to the cognitive model, which allows it to find better representative brain maps for the target study. In particular, we conjecture that the large number of subjects in HCP helps model inter-subject noise. On the other hand, we observe a negative transfer effect on LA5c, as the tasks of this dataset share few cognitive aspects with the tasks of the other datasets. This encourages us to use richer dataset repositories for further improvement.

Figure 4: Classification maps from our model are more specific to higher-level functions: they focus more on the FFA for faces, and on the left intraparietal sulci for calculations. (Panels compare the multi-scale spatial projection, the single-study latent cognitive space and the multi-study latent cognitive space, for face viewing at z = -10 mm and audio calculation at z = 46 mm.)

Figure 5: The latent space of our model can be explored to unveil template brain statistical maps that correspond to bags of conditions related across color-coded datasets.
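The input-Dropout baseline compared above multiplies each projected feature by independent Bernoulli noise. A minimal numpy sketch of inverted Dropout (the helper name is ours) illustrates that this noise is unbiased, which is the starting point of the adaptive-ℓ2 interpretation of [37]:

```python
import numpy as np

rng = np.random.default_rng(0)

def input_dropout(X, rate, rng):
    """Inverted Dropout: zero each entry with prob. `rate`, rescale survivors."""
    mask = rng.random(X.shape) >= rate
    return X * mask / (1.0 - rate)

X = rng.standard_normal((4, 6))

# The rescaling makes the noise unbiased: averaging many Dropout draws
# approximately recovers X. It is this unbiased multiplicative noise that
# [37] re-interprets as an adaptive quadratic penalty.
avg = np.mean([input_dropout(X, 0.5, rng) for _ in range(20000)], axis=0)
assert np.abs(avg - X).mean() < 0.05
```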
2.3 Transfer learning is very effective on small datasets

To further demonstrate the benefits of the multi-dataset model, we vary the size of the target datasets (Archi, Brainomics and CamCan) and compare the performance of the single-study model with the model that aggregates the Archi, Brainomics, CamCan and HCP studies. Figure 3 shows that the effect of transfer learning increases as we reduce the training size of the target dataset. This suggests that the learned data embedding WgW′e does capture some universal cognitive information, and can be learned from different data sources. As a consequence, aggregating a larger study mitigates the small number of training samples in the target dataset. With only 5 subjects, the gain in accuracy due to transfer is +13% on Archi, +8% on Brainomics, and +6% on CamCan. Multi-study learning should thus prove very useful for classifying conditions in studies with ten or so subjects, which are still very common in neuroimaging.

2.4 Introspecting classification maps

At prediction time, our multi-dataset model can be collapsed into one multinomial model per dataset. Each dataset d is then classified using the matrix WgW′eW′d. As in the linear models classically used for decoding, the model weights for each condition can be represented as a brain map. Figure 4 shows the maps associated with digit computation and face viewing for the Archi dataset. Models 2, 4 and 5 from the ablation study are compared. Although it is hard to assess the intrinsic quality of the maps, we can see that the introduction of the second projection layer and the multi-study problem formulation (here, appending the HCP dataset) yields maps with more weight on the high-level functional regions known to be specific to the task: for face viewing, the FFA stands out more compared to primary visual cortices; for calculations, the weights of the intraparietal sulci become left-lateralized, as has been reported for symbolic number processing [38].
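The collapsing step described above is a plain matrix product: the chained projections reduce to a single weight matrix per dataset, whose columns can be read as brain maps. A small numpy check, with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: p voxels, k dictionary atoms, l latent dims, c conditions.
p, k, l, c = 500, 64, 100, 10
W_g = rng.standard_normal((p, k))  # spatial projection (shared across datasets)
W_e = rng.standard_normal((k, l))  # latent cognitive projection (shared)
W_d = rng.standard_normal((l, c))  # classification head of one dataset d

# At prediction time the chain of projections collapses into a single
# multinomial model for dataset d.
W_collapsed = W_g @ W_e @ W_d      # shape (p, c): one brain map per condition

x = rng.standard_normal(p)
logits_factored = ((x @ W_g) @ W_e) @ W_d
logits_collapsed = x @ W_collapsed
assert np.allclose(logits_factored, logits_collapsed)
```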
2.5 Exploring the latent space

Within our model, classification is performed in the same l-dimensional space E for all datasets, which is learned during training. To further show that this space captures cognitive information, we extract from E template brain images associated with general cognitive concepts. Fitting our model on the Archi, Brainomics, CamCan and HCP studies, we extract representative vectors of E with k-means clustering over the projected data and consider the centroids (yj)j of 50 clusters. Each centroid yj can be associated with a brain image tj ∈ R^p that lies in the span of D1, D2 and D3. In doing so, we go backward through the model and obtain a representative of yj with well-delineated spatial regions. Going forward, we compute the classification probability vectors W⊤d yj = W′⊤d W′⊤e W⊤g tj for each study d. Together, these probability vectors give an indication of the cognitive functions that tj captures. Figure 5 represents six template images, associated with their probability vectors, shown as word clouds. We clearly obtain interpretable pairs of brain images and cognitive concepts. These pairs capture, across datasets, clusters of experimental conditions with similar brain representations.

3 Discussion

We compare our model to a previously proposed formulation for brain image classification. We show how our model differs from convex multi-task learning, and stress the importance of Dropout.

Task fMRI classification. Our model is related to a previous semi-supervised classification model [20] that also performs multinomial classification of conditions in a low-dimensional space: the dimension reduction they propose is the equivalent of our projection Wg. Our approach differs in two aspects. First, we replace the initial semi-supervised dimension reduction with an unsupervised analysis of resting-state data, using a much more tractable approach that we have shown to preserve cognitive signals.
Second, we introduce the additional cognitive-aware projection W′e, learned on multiple studies. It substantially improves out-of-sample prediction performance, especially on small datasets, and above all allows us to uncover a cognitive-aware latent space, as we have shown in our experiments.

Convex multi-task learning. Due to the Dropout regularization and the fact that l is allowed to be larger than k, our formulation differs from the classical approach [39] to the multi-task problem, which would estimate Θ = W′e [W′1, . . . , W′d]d ∈ R^{g×k} by solving a convex empirical risk minimization problem with a trace-norm penalization that encourages Θ to be low-rank. We tested this formulation, which does not perform better than the explicit factorization formulation with Dropout regularization. Trace-norm regularized regression has the further drawback of being slower to train, as it typically operates with full gradients, e.g., using FISTA [40]. In contrast, the non-convex explicit factorization model is easily amenable to large-scale stochastic optimization; hence our focus.

Importance of Dropout. The use of Dropout regularization is crucial in our model. Without Dropout, in the single-study case with l > k, solving the factored problem (4) yields a solution worse in terms of empirical risk than solving the simple multinomial problem on (W⊤g xi)i, which finds a global minimizer of (4). Yet, Figure 2 shows that the model enriched with a latent space (red) has better test accuracy than the simple model (orange), thanks to the Dropout noise applied to the latent-space representation of the input data. Dropout is thus a promising novel way of regularizing fMRI models.

4 Conclusion

We proposed and characterized a novel cognitive neuroimaging modeling scheme that blends latent factor discovery and transfer learning. It can be applied to many different cognitive studies jointly, without requiring explicit correspondences between the cognitive tasks.
The model helps identify the fundamental building blocks underlying the diversity of cognitive processes that the human mind can realize. It produces a basis of cognitive processes whose generalization power is validated quantitatively, and extracts representations of brain activity that ground the transfer of knowledge from existing fMRI repositories to newly acquired task data. The captured cognitive representations will improve as we provide the model with a growing number of studies and cognitive conditions.

5 Acknowledgments

This project has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under grant agreement No 720270 (Human Brain Project SGA1). Julien Mairal was supported by the ERC grant SOLARIS (No 714381) and a grant from ANR (MACARON project ANR-14-CE23-0003-01). We thank Olivier Grisel for his most helpful insights.

References

[1] David Van Essen, Kamil Ugurbil, et al. The Human Connectome Project: A data acquisition perspective. NeuroImage, 62(4):2222–2231, 2012.
[2] Russell A. Poldrack, Chris I. Baker, Joke Durnez, Krzysztof J. Gorgolewski, Paul M. Matthews, Marcus R. Munafò, Thomas E. Nichols, Jean-Baptiste Poline, Edward Vul, and Tal Yarkoni. Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2):115–126, 2017.
[3] Allen Newell. You can't play 20 questions with nature and win: Projective comments on the papers of this symposium. 1973.
[4] John D. Medaglia, Mary-Ellen Lynall, and Danielle S. Bassett. Cognitive Network Neuroscience. Journal of Cognitive Neuroscience, 27(8):1471–1491, 2015.
[5] Lisa Feldman Barrett. The future of psychology: Connecting mind to brain. Perspectives on Psychological Science, 4(4):326–339, 2009.
[6] Tal Yarkoni, Russell A. Poldrack, Thomas E. Nichols, David C. Van Essen, and Tor D. Wager. Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8):665–670, 2011.
[7] Angela R. Laird, Jack J. Lancaster, and Peter T. Fox. BrainMap. Neuroinformatics, 3(1):65–77, 2005.
[8] Gholamreza Salimi-Khorshidi, Stephen M. Smith, John R. Keltner, Tor D. Wager, and Thomas E. Nichols. Meta-analysis of neuroimaging data: A comparison of image-based and coordinate-based pooling of studies. NeuroImage, 45(3):810–823, 2009.
[9] Tor D. Wager, Lauren Y. Atlas, Martin A. Lindquist, Mathieu Roy, Choong-Wan Woo, and Ethan Kross. An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine, 368(15):1388–1397, 2013.
[10] Yannick Schwartz, Bertrand Thirion, and Gael Varoquaux. Mapping paradigm ontologies to and from the brain. In Advances in Neural Information Processing Systems, pages 1673–1681, 2013.
[11] Oluwasanmi Koyejo and Russell A. Poldrack. Decoding cognitive processes from functional MRI. In NIPS Workshop on Machine Learning for Interpretable Neuroimaging, pages 5–10, 2013.
[12] Russell A. Poldrack, Yaroslav O. Halchenko, and Stephen José Hanson. Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychological Science, 20(11):1364–1372, 2009.
[13] Jessica A. Turner and Angela R. Laird. The cognitive paradigm ontology: Design and application. Neuroinformatics, 10(1):57–66, 2012.
[14] Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817–1853, 2005.
[15] Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classification with Dirichlet process priors. Journal of Machine Learning Research, 8(Jan):35–63, 2007.
[16] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[17] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition.
In International Conference on Machine Learning, volume 32, pages 647–655, 2014.
[18] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, pages 160–167, 2008.
[19] Danilo Bzdok and B. T. Thomas Yeo. Inference in the age of big data: Future perspectives on neuroscience. NeuroImage, 155(Supplement C):549–564, 2017.
[20] Danilo Bzdok, Michael Eickenberg, Olivier Grisel, Bertrand Thirion, and Gaël Varoquaux. Semi-supervised factored logistic regression for high-dimensional neuroimaging data. In Advances in Neural Information Processing Systems, pages 3348–3356, 2015.
[21] Timothy Rubin, Oluwasanmi O. Koyejo, Michael N. Jones, and Tal Yarkoni. Generalized Correspondence-LDA Models (GC-LDA) for Identifying Functional Regions in the Brain. In Advances in Neural Information Processing Systems, pages 1118–1126, 2016.
[22] Alexandre Gramfort, Bertrand Thirion, and Gaël Varoquaux. Identifying Predictive Regions from fMRI with TV-L1 Prior. In International Workshop on Pattern Recognition in Neuroimaging, pages 17–20, 2013.
[23] Bertrand Thirion, Gaël Varoquaux, Elvis Dohmatob, and Jean-Baptiste Poline. Which fMRI clustering gives good brain parcellations? Frontiers in Neuroscience, 8:167, 2014.
[24] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[25] Thomas Blumensath, Saad Jbabdi, Matthew F. Glasser, David C. Van Essen, Kamil Ugurbil, Timothy E. J. Behrens, and Stephen M. Smith. Spatially constrained hierarchical parcellation of the brain with resting-state fMRI. NeuroImage, 76:313–324, 2013.
[26] Arthur Mensch, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. Dictionary learning for massive matrix factorization.
In International Conference on Machine Learning, pages 1737–1746, 2016.
[27] Arthur Mensch, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. Stochastic Subsampling for Factorizing Huge Matrices. IEEE Transactions on Signal Processing, to appear, 2017.
[28] Simon B. Eickhoff, Bertrand Thirion, Gaël Varoquaux, and Danilo Bzdok. Connectivity-based parcellation: Critique and implications. Human Brain Mapping, 36(12):4771–4792, 2015.
[29] Stéphane G. Mallat and Zhifeng Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[30] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
[31] Philippe Pinel, Bertrand Thirion, Sébastien Meriaux, Antoinette Jobert, Julien Serres, Denis Le Bihan, Jean-Baptiste Poline, and Stanislas Dehaene. Fast reproducible identification and large-scale databasing of individual functional cognitive networks. BMC Neuroscience, 8(1):91, 2007.
[32] Dimitri Papadopoulos Orfanos, Vincent Michel, Yannick Schwartz, Philippe Pinel, Antonio Moreno, Denis Le Bihan, and Vincent Frouin. The Brainomics/Localizer database. NeuroImage, 144:309–314, 2017.
[33] Meredith A. Shafto, Lorraine K. Tyler, Marie Dixon, Jason R. Taylor, James B. Rowe, Rhodri Cusack, Andrew J. Calder, William D. Marslen-Wilson, John Duncan, Tim Dalgleish, Richard N. Henson, Carol Brayne, and Fiona E. Matthews. The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: A cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC Neurology, 14:204, 2014.
[34] R. A. Poldrack, Eliza Congdon, William Triplett, K. J. Gorgolewski, K. H. Karlsgodt, J. A. Mumford, F. W. Sabb, N. B. Freimer, E. D. London, T. D. Cannon, et al. A phenome-wide examination of neural and cognitive function. Scientific Data, 3:160110, 2016.
[35] Alexandre Abraham, Fabian Pedregosa, Michael Eickenberg, Philippe Gervais, Andreas Mueller, Jean Kossaifi, Alexandre Gramfort, Bertrand Thirion, and Gael Varoquaux. Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8:14, 2014.
[36] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[37] Stefan Wager, Sida Wang, and Percy S. Liang. Dropout Training as Adaptive Regularization. In Advances in Neural Information Processing Systems, pages 351–359, 2013.
[38] Stephanie Bugden, Gavin R. Price, D. Adam McLean, and Daniel Ansari. The role of the left intraparietal sulcus in the relationship between symbolic number processing and children's arithmetic competence. Developmental Cognitive Neuroscience, 2(4):448–457, 2012.
[39] Nathan Srebro, Jason Rennie, and Tommi S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems, pages 1329–1336, 2004.
[40] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
Conic Scan-and-Cover algorithms for nonparametric topic modeling

Mikhail Yurochkin, Department of Statistics, University of Michigan, moonfolk@umich.edu
Aritra Guha, Department of Statistics, University of Michigan, aritra@umich.edu
XuanLong Nguyen, Department of Statistics, University of Michigan, xuanlong@umich.edu

Abstract

We propose new algorithms for topic modeling when the number of topics is unknown. Our approach relies on an analysis of the concentration of mass and the angular geometry of the topic simplex, a convex polytope constructed by taking the convex hull of vertices representing the latent topics. In practice, our algorithms are shown to have topic-estimation accuracy comparable to that of a Gibbs sampler, which requires the number of topics to be given. Moreover, they are among the fastest of several state-of-the-art parametric techniques.1 Statistical consistency of our estimator is established under some conditions.

1 Introduction

A well-known challenge associated with topic modeling inference can be succinctly summed up by the statement that sampling-based approaches may be accurate but computationally very slow, e.g., Pritchard et al. (2000); Griffiths & Steyvers (2004), while variational inference approaches are faster but their estimates may be inaccurate, e.g., Blei et al. (2003); Hoffman et al. (2013). For nonparametric topic inference, i.e., when the number of topics is a priori unknown, the problem becomes more acute. The Hierarchical Dirichlet Process model (Teh et al., 2006) is an elegant Bayesian nonparametric approach which allows the number of topics to grow with the data size, but its sampling-based inference is much less efficient than its parametric counterpart.
As pointed out by Yurochkin & Nguyen (2016), the root of the inefficiency can be traced to the need for approximating the posterior distributions of the latent variables representing the topic labels; these are not geometrically intrinsic, as any permutation of the labels yields the same likelihood.

A promising approach to addressing the aforementioned challenges is to take a convex geometric perspective, where topic learning and inference may be formulated as a convex geometric problem: the observed documents correspond to points randomly drawn from a topic polytope, a convex set whose vertices represent the topics to be inferred. This perspective has been adopted to establish posterior contraction behavior of the topic polytope in both theory and practice (Nguyen, 2015; Tang et al., 2014). A method for topic estimation that exploits convex geometry, the Geometric Dirichlet Means (GDM) algorithm, was proposed by Yurochkin & Nguyen (2016); it demonstrates attractive behavior both in terms of running time and estimation accuracy. In this paper we shall continue to amplify this viewpoint to address nonparametric topic modeling, a setting in which the number of topics is unknown, as is (in some situations) the distribution inside the topic polytope. We will propose algorithms for topic estimation by explicitly accounting for the concentration of mass and the angular geometry of the topic polytope, typically a simplex in topic modeling applications. The geometric intuition is fairly clear: each vertex of the topic simplex can be identified by a ray emanating from its center (to be defined formally), while the concentration of mass can be quantified for the cones hinging on the apex positioned at the center.

1Code is available at https://github.com/moonfolk/Geometric-Topic-Modeling.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Such cones can be rotated around the center to scan for high-density regions inside the topic simplex; under mild conditions such cones can be constructed efficiently to recover both the number of vertices and their estimates. We also mention another fruitful approach, which casts topic estimation as a matrix factorization problem (Deerwester et al., 1990; Xu et al., 2003; Anandkumar et al., 2012; Arora et al., 2012). A notable recent algorithm coming from the matrix factorization perspective is RecoverKL (Arora et al., 2012), which solves non-negative matrix factorization (NMF) efficiently under assumptions on the existence of so-called anchor words. RecoverKL remains a parametric technique; we will extend it to a nonparametric setting and show that the anchor-word assumption appears to limit the number of topics one can efficiently learn.

Our paper is organized as follows. In Section 2 we discuss recent developments in geometric topic modeling and introduce our approach; Sections 3 and 4 deliver the contributions outlined above; Section 5 demonstrates experimental performance; we conclude with a discussion in Section 6.

2 Geometric topic modeling

Background and related work. In this section we present the convex geometry of the Latent Dirichlet Allocation (LDA) model of Blei et al. (2003), along with related theoretical and algorithmic results that motivate our work. Let V be the vocabulary size and ∆^{V−1} the corresponding vocabulary probability simplex. Sample K topics (i.e., distributions on words) β_k ∼ Dir_V(η), k = 1, . . . , K, where η ∈ R_+^V. Next, sample M document-word probabilities p_m residing in the topic simplex B := Conv(β_1, . . . , β_K) (cf. Nguyen (2015)), by first generating their barycentric coordinates (i.e., topic proportions) θ_m ∼ Dir_K(α) and then setting p_m := Σ_k β_k θ_mk for m = 1, . . . , M, with α ∈ R_+^K. Finally, the word counts of the m-th document are sampled as w_m ∼ Mult(p_m, N_m), where N_m ∈ N is the number of words in document m.
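The generative process just described can be sketched in a few lines of numpy; the vocabulary size, topic number and Dirichlet parameters below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, M, N = 50, 4, 100, 200  # vocabulary, topics, documents, words per doc

beta = rng.dirichlet(np.full(V, 0.1), size=K)   # K topics on the vocab simplex
theta = rng.dirichlet(np.full(K, 0.5), size=M)  # barycentric coords theta_m
p = theta @ beta                                # p_m = sum_k theta_mk * beta_k
w = np.array([rng.multinomial(N, pm) for pm in p])  # word counts per document

assert np.allclose(p.sum(axis=1), 1.0)  # every p_m lies on the simplex
```

The rows of `p` are exactly the points of the topic simplex B that the geometric algorithms below operate on, and `w / N` are their noisy empirical versions.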
The above model is equivalent to LDA when the individual word-to-topic label assignments are marginalized out. Nguyen (2015) established posterior contraction rates of the topic simplex, provided that α_k ≤ 1 for all k and either the number of topics K is known or the topics are sufficiently separated in terms of the Euclidean distance. Yurochkin & Nguyen (2016) devised an estimate for B, taken to be a fixed unknown quantity, by formulating a geometric objective function which is minimized when the topic simplex B is close to the normalized documents w̄_m := w_m/N_m. They showed that the estimation of topic proportions θ_m given B simply reduces to taking the barycentric coordinates of the projection of w̄_m onto B. To estimate B given K, they proposed the Geometric Dirichlet Means (GDM) algorithm, which operates by performing k-means clustering on the normalized documents, followed by a geometric correction of the cluster centroids. The resulting algorithm is remarkably fast and accurate, supporting the potential of the geometric approach. The GDM is not applicable when K is unknown, but it provides the motivation on which our approach is built.

The Conic Scan-and-Cover approach. To enable the inference of B when K is not known, we need to investigate the concentration of mass inside the topic simplex. It suffices to focus on two types of geometric objects, cones and spheres, which provide the basis for a complete coverage of the simplex. To gain intuition for our procedure, which we call the Conic Scan-and-Cover (CoSAC) approach, imagine someone standing at a center point of a triangular dark room, trying to find all of its corners with a portable flashlight that produces a cone of light. A room corner can be identified with the direction of the farthest visible data objects. Once a corner is found, one can turn the flashlight in another direction to scan for the next one. See Fig. 1a, where red denotes the scanned area.
To make sure that all corners are detected, the cones of light have to be open to an appropriate range of angles so that enough data objects can be captured and removed from the room. To make sure no false corners are declared, we also need a suitable stopping criterion, relying only on data points that lie beyond a certain spherical radius; see Fig. 1b. Hence, we need to be able to gauge the concentration of mass for suitable cones and spherical balls in ∆^{V−1}. This is the subject of the next section.

3 Geometric estimation of the topic simplex

We start by representing B in terms of its convex and angular geometry. First, B is centered at a point denoted by Cp. The centered probability simplex is denoted by ∆_0^{V−1} := {x ∈ R^V | x + Cp ∈ ∆^{V−1}}.

Figure 1: Complete coverage of the topic simplex by cones and a spherical ball for K = 3, V = 3. (a) An incomplete coverage using 3 cones (containing red points). (b) Complete coverage using 3 cones (red) and a ball (yellow). (c) Cap Λ_c(v_1) and cone S_ω(v_1).

Then, write b_k := β_k − Cp ∈ ∆_0^{V−1} for k = 1, . . . , K and p̃_m := p_m − Cp ∈ ∆_0^{V−1} for m = 1, . . . , M. Note that re-centering leaves the corresponding barycentric coordinates θ_m ∈ ∆^{K−1} unchanged. Moreover, the extreme points of the centered topic simplex B̃ := Conv{b_1, . . . , b_K} can now be represented by their directions v_k ∈ R^V and corresponding radii R_k ∈ R_+ such that b_k = R_k v_k for any k = 1, . . . , K.

3.1 Coverage of the topic simplex

The first step toward formulating a CoSAC approach is to show how B̃ can be covered with exactly K cones and one spherical ball positioned at Cp. A cone is defined as the set S_ω(v) := {p ∈ ∆_0^{V−1} | d_cos(v, p) < ω}, where we employ the angular distance (a.k.a.
cosine distance) d_cos(v, p) := 1 − cos(v, p), where cos(v, p) is the cosine of the angle ∠(v, p) formed by vectors v and p.

Conical coverage. It is possible to choose ω so that the topic simplex can be covered with exactly K cones, that is, ∪_{k=1}^K S_ω(v_k) ⊇ B̃. Moreover, each cone contains exactly one vertex. Suppose that Cp is the incenter of the topic simplex B̃, with r being the inradius. The incenter and inradius correspond to the maximum-volume sphere contained in B̃. Let a_{i,k} denote the distance between the i-th and k-th vertex of B̃, with a_min ≤ a_{i,k} ≤ a_max for all i, k, and let R_max, R_min be such that R_min ≤ R_k := ∥b_k∥_2 ≤ R_max for all k = 1, . . . , K. Then we can establish the following.

Proposition 1. For the simplex B̃ and ω ∈ (ω_1, ω_2), where ω_1 = 1 − r/R_max and ω_2 = max{a_min^2/(2 R_max^2), max_{i,k=1,...,K}(1 − cos(b_i, b_k))}, the cone S_ω(v) around any vertex direction v of B̃ contains exactly one vertex. Moreover, complete coverage holds: ∪_{k=1}^K S_ω(v_k) ⊇ B̃.

We say there is angular separation if cos(b_i, b_k) ≤ 0 for any i, k = 1, . . . , K (i.e., the angles for all pairs are at least π/2); then ω ∈ (1 − r/R_max, 1) ≠ ∅. Thus, under angular separation, the range of ω that allows for full coverage is nonempty independently of K. Our result is in agreement with that of Nguyen (2015), whose result suggested that the topic simplex B can be consistently estimated without knowing K, provided there is a minimum edge length a_min > 0. The notion of angular separation leads naturally to the Conic Scan-and-Cover algorithm. Before getting there, we show a series of results allowing us to further extend the range of admissible ω. The inclusion of a spherical ball centered at Cp allows us to expand substantially the range of ω for which conical coverage continues to hold. In particular, we can reduce the lower bound on ω in Proposition 1, since we only need to cover the regions near the vertices of B̃ with cones, using the following proposition. Fig. 1b provides an illustration.

Proposition 2.
Let B(Cp, R) = {p̃ ∈ R^V | ∥p̃ − Cp∥_2 ≤ R} with R > 0; let ω_1, ω_2 be given as in Prop. 1, and let

ω_3 := 1 − min{ min_{i,k} [ (R_k sin^2(b_i, b_k))/R + cos(b_i, b_k) √(1 − R_k^2 sin^2(b_i, b_k)/R^2) ], 1 };   (1)

then ∪_{k=1}^K S_ω(v_k) ∪ B(Cp, R) ⊇ B̃ whenever ω ∈ (min{ω_1, ω_3}, ω_2).

Notice that as R → R_max, the value of ω_3 → 0. Hence if R ≤ R_min ≈ R_max, the admissible range for ω in Prop. 2 is a substantial strengthening of Prop. 1. It is worth noting that the above two geometric propositions do not require any distributional properties inside the simplex.

Coverage leftovers. In practice complete coverage may fail if ω and R are chosen outside of the corresponding ranges suggested by the previous two propositions. In that case, it is useful to note that the leftover regions will have very low mass. Next we quantify the mass inside a cone that does contain a vertex, which allows us to reject a cone of low mass as not containing a vertex.

Proposition 3. The cone S_ω(v_1) whose axis is a topic direction v_1 has mass

P(S_ω(v_1)) > P(Λ_c(b_1)) = [∫_{1−c}^1 θ_1^{α_1−1}(1 − θ_1)^{Σ_{i≠1} α_i − 1} dθ_1] / [∫_0^1 θ_1^{α_1−1}(1 − θ_1)^{Σ_{i≠1} α_i − 1} dθ_1]
= [c^{Σ_{i≠1} α_i} (1 − c)^{α_1} Γ(Σ_{i=1}^K α_i)] / [(Σ_{i≠1} α_i) Γ(α_1) Γ(Σ_{i≠1} α_i)] × [1 + c (Σ_{i=1}^K α_i)/(Σ_{i≠1} α_i + 1) + c^2 (Σ_{i=1}^K α_i)(Σ_{i=1}^K α_i + 1)/((Σ_{i≠1} α_i + 1)(Σ_{i≠1} α_i + 2)) + · · ·],   (2)

where Λ_c(b_1) is the simplicial cap of S_ω(v_1), composed of vertex b_1 and a base parallel to the corresponding base of B̃, cutting the adjacent edges of B̃ in the ratio c : (1 − c). See Fig. 1c for an illustration of the simplicial cap described in the proposition. Given this lower bound for the mass of a cone containing a vertex, we arrive at the following guarantee.

Proposition 4. For λ ∈ (0, 1), let c_λ be such that λ = min_k P(Λ_{c_λ}(b_k)) and let ω_λ be such that

c_λ = [ 2 √(1 − r^2/R_max^2) (sin(d) cot(arccos(1 − ω_λ)) + cos(d)) ]^{−1},   (3)

where the angle d ≤ min_{i,k} ∠(b_k, b_k − b_i). Then, as long as

ω ∈ ( ω_λ, max{ a_min^2/(2 R_max^2), max_{i,k=1,...,K}(1 − cos(b_i, b_k)) } ),   (4)

the bound P(S_ω(v_k)) ≥ λ holds for all k = 1, . . . , K.
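Computationally, the geometric objects used above reduce to simple angular and radial tests on centered points. A minimal sketch (the helper names are ours) of the angular distance d_cos, cone membership, and ball membership:

```python
import numpy as np

def d_cos(v, p):
    """Angular distance 1 - cos(v, p) used to define the cones S_omega(v)."""
    return 1.0 - float(v @ p) / (np.linalg.norm(v) * np.linalg.norm(p))

def in_cone(v, p, omega):
    """Membership test for the cone S_omega(v) = {p : d_cos(v, p) < omega}."""
    return d_cos(v, p) < omega

def in_ball(p, R):
    """Membership test for the central ball B(C_p, R); points assumed centered."""
    return np.linalg.norm(p) <= R

v = np.array([1.0, 0.0])
assert in_cone(v, np.array([2.0, 0.1]), 0.6)       # nearly aligned point
assert not in_cone(v, np.array([-1.0, 0.0]), 0.6)  # opposite direction: d_cos = 2
```

Note that cone membership depends only on direction, not on the norm of `p`, which is why the central ball is needed to account for points near the center.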
3.2 CoSAC: Conic Scan-and-Cover algorithm

Having laid out the geometric foundations, we are ready to present the Conic Scan-and-Cover (CoSAC) algorithm, a scanning procedure for detecting the presence of simplicial vertices based on data drawn randomly from the simplex. The idea is simple: iteratively pick the farthest point from the center estimate $\hat{C}_p := \frac{1}{M}\sum_m p_m$, say $v$; construct a cone $S_\omega(v)$ for a suitably chosen $\omega$; and remove all the data residing in this cone. Repeat until no data point is left. Specifically, let $A = \{1,\dots,M\}$ be the index set of the initially unseen data; then set $v := \operatorname{argmax}_{\tilde{p}_m : m \in A} \|\tilde{p}_m\|_2$ and update $A := A \setminus S_\omega(v)$. The parameter $\omega$ needs to be sufficiently large to ensure that the farthest point is a good estimate of a true vertex and that the scan completes in exactly $K$ iterations; it must not be too large, so that $S_\omega(v)$ does not contain more than one vertex. The existence of such an $\omega$ is guaranteed by Prop. 1. In particular, for an equilateral $\tilde{B}$, the condition of Prop. 1 is satisfied as long as $\omega \in (1 - 1/\sqrt{K-1},\ 1 + 1/(K-1))$. In our setting, $K$ is unknown. A smaller $\omega$ is a more robust choice, but accordingly the set $A$ will likely remain non-empty after $K$ iterations; see the illustration of Fig. 1a, where the blue regions correspond to $A$ after $K = 3$ iterations of the scan. As a result, we adopt a stopping criterion based on Prop. 2: the procedure stops as soon as $\|\tilde{p}_m\|_2 < R$ for all $m \in A$, which allows us to complete the scan in $K$ iterations (as in Fig. 1b for $K = 3$). The CoSAC algorithm is formally presented in Algorithm 1. Its running is illustrated in Fig. 2, where we show iterations 1, 26, 29, 30 of the algorithm by plotting the norms of the centered documents in the active set $A$ and the cone $S_\omega(v)$ against the cosine distance to the chosen topic direction. Iteration 30 (right) satisfies the stopping criterion, and CoSAC therefore recovered the correct $K = 30$.
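The scanning loop just described can be sketched as follows (a minimal sketch of our own, assuming the documents are already centered, i.e., $\tilde p_m = p_m - \hat C_p$; the helper names are ours):

```python
import math

def cos_dist(v, p):
    # cosine distance 1 - cos(v, p)
    dot = sum(a * b for a, b in zip(v, p))
    nv = math.sqrt(sum(a * a for a in v))
    np_ = math.sqrt(sum(b * b for b in p))
    return 1.0 - dot / (nv * np_)

def cosac_scan(points, omega, R):
    """Conic Scan-and-Cover on already-centered points.

    Returns the list of topic directions found (Algorithm 1 without
    the final un-centering step beta_k = v_k + C_p)."""
    norm = lambda p: math.sqrt(sum(a * a for a in p))
    active = set(range(len(points)))
    topics = []
    # stop once every remaining point lies inside the ball of radius R
    while any(norm(points[m]) > R for m in active):
        far = max(active, key=lambda m: norm(points[m]))  # farthest point
        vk = points[far]
        in_cone = {m for m in active if cos_dist(vk, points[m]) < omega}
        active -= in_cone  # remove the data covered by this cone
        topics.append(vk)
    return topics
```

On a toy data set with two well-separated directions and a few small-norm points near the center, the scan terminates after exactly two cones.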
Note that this type of visual representation can be useful in practice to verify choices of $\omega$ and $R$. The following theorem establishes the consistency of the CoSAC procedure.

Theorem 1. Suppose $\{\beta_1,\dots,\beta_K\}$ are the true topics, the incenter $C_p$ is given, $\theta_m \sim \mathrm{Dir}_K(\alpha)$ and $p_m := \sum_k \beta_k \theta_{mk}$ for $m = 1,\dots,M$, with $\alpha \in \mathbb{R}_+^K$. Let $\hat{K}$ be the estimated number of topics and $\{\hat\beta_1,\dots,\hat\beta_{\hat K}\}$ be the output of Algorithm 1 trained with $\omega$ and $R$ as in Prop. 2. Then, for all $\epsilon > 0$,
$$P\Bigl(\bigl\{\min_{j\in\{1,\dots,\hat K\}} \|\beta_i - \hat\beta_j\| > \epsilon \ \text{for some } i\bigr\} \cup \{K \ne \hat K\}\Bigr) \to 0 \quad \text{as } M \to \infty.$$

Remark. We found the choices $\omega = 0.6$ and $R$ equal to the median of $\{\|\tilde p_1\|_2,\dots,\|\tilde p_M\|_2\}$ to be robust in practice and in agreement with our theoretical results. From Prop. 3 it follows that choosing $R$ as the median length is equivalent to choosing $\omega$ resulting in an edge-cut ratio $c$ such that $1 - \frac{K}{K-1}\bigl(\frac{c}{1-c}\bigr)^{1-1/K} \ge 1/2$, i.e., $c \le \bigl(\frac{K-1}{2K}\bigr)^{K/(K-1)}$, which, for any equilateral topic simplex $B$, is satisfied by setting $\omega \in (0.3, 1)$, provided that $K \le 2000$, based on Eq. (3).

4 Document Conic Scan-and-Cover algorithm

In the topic modeling problem, the $p_m$, $m = 1,\dots,M$, are not given. Instead, under the bag-of-words assumption, we are given the frequencies of words in documents $w_1,\dots,w_M$, which provide a point estimate $\bar w_m := w_m / N_m$ of $p_m$. Clearly, if the number of documents $M \to \infty$ and the document lengths $N_m \to \infty$ for all $m$, we can use Algorithm 1 with the plug-in estimates $\bar w_m$ in place of $p_m$, since $\bar w_m \to p_m$. Moreover, $C_p$ is estimated by $\hat C_p := \frac{1}{M}\sum_m \bar w_m$. In practice, $M$ and $N_m$ are finite, and some may take relatively small values. Taking the topic direction to be the farthest point in the topic simplex, i.e., $v = \operatorname{argmax}_{\tilde w_m : m \in A} \|\tilde w_m\|_2$, where $\tilde w_m := \bar w_m - \hat C_p \in \Delta_0^{V-1}$, may no longer yield a robust estimate, because the variance of this topic direction estimator can be quite high (in the Supplement we show that it is upper bounded by $(1 - 1/V)/N_m$). To obtain improved estimates, we propose a technique that we call "mean-shifting".
Instead of taking the farthest point in the simplex, this technique shifts the estimate of a topic toward a high-density region, where true topics are likely to be found. Precisely, given a (current) cone $S_\omega(v)$, we re-position the cone by updating $v := \operatorname{argmin}_v \sum_{m \in S_\omega(v)} \|\tilde w_m\|_2 \bigl(1 - \cos(\tilde w_m, v)\bigr)$. In other words, we re-position the cone by centering it around the mean direction of the cone weighted by the norms of the data points inside, which is simply given by $v \propto \sum_{m \in S_\omega(v)} \tilde w_m / \operatorname{card}(S_\omega(v))$. This reduces the variance of the topic direction estimate, thanks to the averaging over the data residing in the cone.

The mean-shifting technique may be slightly modified and used as a local update for a subsequent optimization that cycles through the entire set of documents and iteratively updates the cones. The optimization is with respect to the following weighted spherical k-means objective:
$$\min_{\|v_k\|_2 = 1,\ k=1,\dots,K}\ \sum_{k=1}^{K} \sum_{m \in S_k(v_k)} \|\tilde w_m\|_2 \bigl(1 - \cos(v_k, \tilde w_m)\bigr), \quad (5)$$
where the cones $S_k(v_k) = \{m \mid d_{\cos}(v_k, \tilde p_m) < d_{\cos}(v_l, \tilde p_m)\ \forall l \ne k\}$ yield a disjoint data partition $\bigsqcup_{k=1}^{K} S_k(v_k) = \{1,\dots,M\}$ (this is different from $S_\omega(v_k)$). The rationale of the spherical k-means optimization is to use the full data for the estimation of topic directions, hence further reducing the variance due to short documents. The connection between objective function (5) and topic simplex estimation is given in the Supplement. Finally, we obtain the topic norms $R_k$ along the directions $v_k$ using maximum projection: $R_k := \max_{m \in S_k(v_k)} \langle v_k, \tilde w_m\rangle$. Our entire procedure is summarized in Algorithm 2.

Remark. In Step 9 of the algorithm, a cone $S_\omega(v)$ with very low cardinality, i.e., $\operatorname{card}(S_\omega(v)) < \lambda M$ for some small constant $\lambda$, is discarded, because such a region is likely an outlier region that does not actually contain a true vertex. The choice of $\lambda$ is governed by the results of Prop. 4. For small $\alpha_k = 1/K\ \forall k$, $\lambda \le P(\Lambda_c) \approx \frac{c^{(K-1)/K}}{(K-1)(1-c)}$, and for an equilateral $\tilde B$ we can choose $d$ such that $\cos(d) = \sqrt{\frac{K+1}{2K}}$.
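The mean-shifting update (re-centering the cone on the mean direction of the points it contains) can be sketched as follows (our own minimal sketch, reusing a cosine-distance helper; names are not from the paper):

```python
import math

def cos_dist(v, p):
    dot = sum(a * b for a, b in zip(v, p))
    nv = math.sqrt(sum(a * a for a in v))
    np_ = math.sqrt(sum(b * b for b in p))
    return 1.0 - dot / (nv * np_)

def mean_shift(points, v, omega, max_iter=50, tol=1e-9):
    """Iterate v <- mean of the centered documents inside S_omega(v)."""
    dim = len(v)
    for _ in range(max_iter):
        in_cone = [p for p in points if cos_dist(v, p) < omega]
        if not in_cone:
            break
        new_v = [sum(p[d] for p in in_cone) / len(in_cone) for d in range(dim)]
        if all(abs(a - b) < tol for a, b in zip(v, new_v)):
            return new_v
        v = new_v
    return v
```

Starting from the farthest point, the direction settles on the average of the cluster it covers, which is what reduces the variance relative to using a single extreme document.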
Plugging these values into Eq. (3) leads to
$$c = \left[2\sqrt{1 - \frac{1}{K^2}}\left(\sqrt{\frac{K-1}{2K}}\,\frac{1-\omega}{\sqrt{1-(1-\omega)^2}} + \sqrt{\frac{K+1}{2K}}\right)\right]^{-1}.$$
Now, plugging in $\omega = 0.6$, we obtain $\lambda \le K^{-1}$ for large $K$. Our approximations were based on large $K$ to get a sense of $\lambda$; we now make a conservative choice $\lambda = 0.001$, so that $K^{-1} > \lambda$ for all $K < 1000$. As a result, a topic is rejected if the corresponding cone contains less than 0.1% of the data.

Finding anchor words using Conic Scan-and-Cover. Another approach to reducing the noise is to consider the problem from a different viewpoint, where Algorithm 1 will prove itself useful. RecoverKL by Arora et al. (2012) can identify topics with diminishing errors (in the number of documents $M$), provided that the topics contain anchor words. The problem of finding anchor words geometrically reduces to identifying rows of the word-to-word co-occurrence matrix that form a simplex containing the other rows of the same matrix (cf. Arora et al. (2012) for details). An advantage of this approach is that the noise in the word-to-word co-occurrence matrix goes to zero as $M \to \infty$ regardless of the document lengths; hence we can use Algorithm 1 with "documents" being the rows of the word-to-word co-occurrence matrix to learn anchor words nonparametrically, and then run RecoverKL to obtain topic estimates. We call this procedure cscRecoverKL.

Algorithm 1 Conic Scan-and-Cover (CoSAC)
Input: document generating distributions $p_1,\dots,p_M$, angle threshold $\omega$, norm threshold $R$
Output: topics $\beta_1,\dots,\beta_k$
1: $\hat C_p = \frac{1}{M}\sum_m p_m$ {find center}; $\tilde p_m := p_m - \hat C_p$ for $m = 1,\dots,M$ {center the data}
2: $A_1 = \{1,\dots,M\}$ {initialize active set}; $k = 1$ {initialize topic count}
3: while $\exists m \in A_k : \|\tilde p_m\|_2 > R$ do
4:   $v_k = \operatorname{argmax}_{\tilde p_m : m \in A_k} \|\tilde p_m\|_2$ {find topic}
5:   $S_\omega(v_k) = \{m : d_{\cos}(\tilde p_m, v_k) < \omega\}$ {find cone of near documents}
6:   $A_k = A_k \setminus S_\omega(v_k)$ {update active set}
7:   $\beta_k = v_k + \hat C_p$, $k = k + 1$ {compute topic}
8: end while

Figure 2: Iterations 1, 26, 29, 30 of Algorithm 1 (norms $\|\tilde p_i\|_2$ plotted against cosine distance $d_{\cos}(v_k, \tilde p_i)$, with $\omega = 0.60$ and, in the last panel, $R = 0.047$). Red are the documents in the cone $S_\omega(v_k)$; blue are the documents in the active set $A_{k+1}$ for the next iteration. Yellow are documents with $\|\tilde p_m\|_2 < R$.

5 Experimental results

5.1 Simulation experiments

In the simulation studies we compare CoSAC (Algorithm 2) and cscRecoverKL (based on Algorithm 1), neither of which has access to the true $K$, against popular parametric topic modeling approaches (trained with the true $K$): Stochastic Variational Inference (SVI), the collapsed Gibbs sampler, RecoverKL, and GDM (more details in the Supplement). The comparisons are made on the basis of the minimum-matching Euclidean distance, which quantifies the distance between topic simplices (Tang et al., 2014), and running times (a perplexity comparison is given in the Supplement). Lastly, we demonstrate the ability of CoSAC to recover the correct number of topics for varying $K$.

Algorithm 2 CoSAC for documents
Input: normalized documents $\bar w_1,\dots,\bar w_M$, angle threshold $\omega$, norm threshold $R$, outlier threshold $\lambda$
Output: topics $\beta_1,\dots,\beta_k$
1: $\hat C_p = \frac{1}{M}\sum_m \bar w_m$ {find center}; $\tilde w_m := \bar w_m - \hat C_p$ for $m = 1,\dots,M$ {center the data}
2: $A_1 = \{1,\dots,M\}$ {initialize active set}; $k = 1$ {initialize topic count}
3: while $\exists m \in A_k : \|\tilde w_m\|_2 > R$ do
4:   $v_k = \operatorname{argmax}_{\tilde w_m : m \in A_k} \|\tilde w_m\|_2$ {initialize direction}
5:   while $v_k$ not converged do {mean-shifting}
6:     $S_\omega(v_k) = \{m : d_{\cos}(\tilde w_m, v_k) < \omega\}$ {find cone of near documents}
7:     $v_k = \sum_{m \in S_\omega(v_k)} \tilde w_m / \operatorname{card}(S_\omega(v_k))$ {update direction}
8:   end while
9:   $A_k = A_k \setminus S_\omega(v_k)$ {update active set}; if $\operatorname{card}(S_\omega(v_k)) > \lambda M$ then $k = k + 1$ {record topic direction}
10: end while
11: $v_1,\dots,v_k$ = weighted spherical k-means$(v_1,\dots,v_k,\ \tilde w_1,\dots,\tilde w_M)$
12: for $l$ in $\{1,\dots,k\}$ do
13:   $R_l := \max_{m \in S_l(v_l)} \langle v_l, \tilde w_m\rangle$ {find topic length along direction $v_l$}
14:   $\beta_l = R_l v_l + \hat C_p$ {compute topic}
15: end for

Figure 3: Minimum matching Euclidean distance for (a) varying corpus size and (b) varying document length; (c) running times for varying corpus size; (d) estimation of the number of topics.
Figure 4: Gibbs sampler convergence analysis: (a) minimum matching Euclidean distance for corpora of size 1000 and 5000; (b) perplexity for corpora of size 1000 and 5000; (c) perplexity for the NYTimes data.

Estimation of the LDA topics. First we evaluate the ability of CoSAC and cscRecoverKL to estimate topics $\beta_1,\dots,\beta_K$, fixing $K = 15$. Fig. 3(a) shows performance for the case of fewer ($M \in [100, 10000]$) but longer ($N_m = 500$) documents (e.g., scientific articles, novels, legal documents). CoSAC demonstrates accuracy comparable to the Gibbs sampler and GDM. Next we consider larger corpora ($M = 30000$) of shorter ($N_m \in [25, 300]$) documents (e.g., news articles, social media posts). Fig. 3(b) shows that this scenario is harder, and CoSAC matches the performance of the Gibbs sampler for $N_m \ge 75$. Indeed, across both experiments CoSAC only made mistakes in terms of $K$ for the case of $N_m = 25$, when it underestimated by 4 topics on average, and for $N_m = 50$, when it was off by around 1, which explains the earlier observation. Experiments with varying $V$ and $\alpha$ are given in the Supplement. It is worth noting that cscRecoverKL appears to be strictly better than its predecessor. This suggests that our procedure for the selection of anchor words is more accurate, in addition to being nonparametric.

Running time. A notable advantage of the CoSAC algorithm is its speed. In Fig.
3(c) we see that Gibbs, SVI, GDM and CoSAC all have complexity growing linearly in $M$, but the slopes are very different: approximately $I N_m$ for SVI and Gibbs (where $I$ is the number of iterations, which has to be large enough for convergence), the number of k-means iterations to convergence for GDM, and of order $K$ for the CoSAC procedure, making CoSAC the fastest algorithm under consideration. Next we compare CoSAC against the per-iteration quality of the Gibbs sampler trained with 500 iterations for $M = 1000$ and $M = 5000$. Fig. 4(b) shows that the Gibbs sampler, when the true $K$ is given, can achieve a good perplexity score as fast as CoSAC and outperforms it as training continues, although Fig. 4(a) suggests that a much longer training time is needed for the Gibbs sampler to achieve good topic estimates with small estimation variance.

Estimating the number of topics. Model selection in the LDA context is a challenging task and, to the best of our knowledge, there is no "go to" procedure. One possible approach is based on refitting LDA with multiple choices of $K$ and using the Bayes factor for model selection (Griffiths & Steyvers, 2004). Another option is to adopt the Hierarchical Dirichlet Process (HDP) model, but we should understand that it is not a procedure to estimate $K$ of the LDA model; rather, it is a particular prior on the number of topics that assumes $K$ grows with the data. A more recent suggestion is to slightly modify LDA and use Bayes moment matching (Hsu & Poupart, 2016), but, as can be seen from Figure 2 of their paper, the estimation variance is high and the method is not very accurate (we tried it with true $K = 15$; it took over an hour to fit and found 35 topics). Next we compare Bayes factor model selection against CoSAC and cscRecoverKL for $K \in [5, 50]$. Fig. 3(d) shows that CoSAC consistently recovers the exact number of topics over a wide range. We also observe that cscRecoverKL underestimates $K$ in the higher range.
This is expected because cscRecoverKL finds the number of anchor words, not topics; the former decreases as the latter increases. Attempting to fit RecoverKL with more topics than there are anchor words may lead to deteriorating performance, and our modification addresses this limitation of the RecoverKL method.

5.2 Real data analysis

In this section we demonstrate the CoSAC algorithm for topic modeling on one of the standard bag-of-words datasets: NYTimes news articles. After preprocessing, we obtained $M \approx 130{,}000$ documents over $V = 5320$ words. The Bayes factor for LDA selected the smallest model among $K \in [80, 195]$, while CoSAC selected 159 topics. We attribute the disagreement between the two procedures to misspecification of the LDA model on real data, which affects the Bayes factor, whereas CoSAC is largely based on the geometry of the topic simplex. The results are summarized in Table 1: CoSAC found 159 topics in less than 20 min; cscRecoverKL estimated the number of anchor words in the data to be 27, leading to fewer topics. Fig. 4(c) compares the CoSAC perplexity score to the per-iteration test perplexity of the LDA (1000 iterations) and HDP (100 iterations) Gibbs samplers. Text files with the top 20 words of all topics are included in the Supplementary material. We note that the CoSAC procedure recovered meaningful topics, contextually similar to those of LDA and HDP (e.g., elections, terrorist attacks, the Enron scandal, etc.), and also recovered more specific topics about Mike Tyson, boxing, and the case of Timothy McVeigh, which were present among the HDP topics but not the LDA ones. We conclude that CoSAC is a practical procedure for topic modeling on large-scale corpora, able to find meaningful topics in a short amount of time.

6 Discussion

We have analyzed the problem of estimating the topic simplex without assuming the number of vertices (i.e., topics) to be known.
We showed that it is possible to cover the topic simplex using two types of geometric shapes, cones and a sphere, leading to a class of Conic Scan-and-Cover algorithms. We then proposed several geometric correction techniques to account for the noisy data. Our procedure is accurate in recovering the true number of topics, while remaining practical due to its computational speed. We think that the angular geometric approach might allow for fast and elegant solutions to other clustering problems, although as of now it does not immediately offer a unifying problem-solving framework like MCMC or variational inference. An interesting direction in a geometric framework is building models based on geometric quantities such as distances and angles.

Table 1: Modeling topics of NYTimes articles

Method         K          Perplexity    Coherence    Time
cscRecoverKL   27         2603          -238         37 min
HDP Gibbs      221 ± 5    1477 ± 1.6    -442 ± 1.7   35 hours
LDA Gibbs      80         1520 ± 1.5    -300 ± 0.7   5.3 hours
CoSAC          159        1568          -322         19 min

Acknowledgments

This research is supported in part by grants NSF CAREER DMS-1351362 and NSF CNS-1409303, a research gift from Adobe Research, and a Margaret and Herman Sokol Faculty Award.

References

Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S. M., and Liu, Y. A spectral algorithm for Latent Dirichlet Allocation. NIPS, 2012.
Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. arXiv preprint arXiv:1212.4777, 2012.
Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet Allocation. J. Mach. Learn. Res., 3:993–1022, March 2003.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, September 1990.
Griffiths, T. L. and Steyvers, M. Finding scientific topics. PNAS, 101(suppl. 1):5228–5235, 2004.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J.
Stochastic variational inference. J. Mach. Learn. Res., 14(1):1303–1347, May 2013.
Hsu, W.-S. and Poupart, P. Online Bayesian moment matching for topic modeling with unknown number of topics. In Advances in Neural Information Processing Systems, pp. 4529–4537, 2016.
Nguyen, X. Posterior contraction of the population polytope in finite admixture models. Bernoulli, 21(1):618–646, 2015.
Pritchard, J. K., Stephens, M., and Donnelly, P. Inference of population structure using multilocus genotype data. Genetics, 155(2):945–959, 2000.
Tang, J., Meng, Z., Nguyen, X., Mei, Q., and Zhang, M. Understanding the limiting factors of topic modeling via posterior contraction analysis. In Proceedings of the 31st International Conference on Machine Learning, pp. 190–198. ACM, 2014.
Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476), 2006.
Xu, W., Liu, X., and Gong, Y. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '03, pp. 267–273. ACM, 2003.
Yurochkin, M. and Nguyen, X. Geometric Dirichlet means algorithm for topic inference. In Advances in Neural Information Processing Systems, pp. 2505–2513, 2016.
Approximate Supermodularity Bounds for Experimental Design

Luiz F. O. Chamon and Alejandro Ribeiro
Electrical and Systems Engineering, University of Pennsylvania
{luizf,aribeiro}@seas.upenn.edu

Abstract

This work provides performance guarantees for the greedy solution of experimental design problems. In particular, it focuses on A- and E-optimal designs, for which typical guarantees do not apply since the mean-square error and the maximum eigenvalue of the estimation error covariance matrix are not supermodular. To do so, it leverages the concept of approximate supermodularity to derive nonasymptotic worst-case suboptimality bounds for these greedy solutions. These bounds reveal that as the SNR of the experiments decreases, these cost functions behave increasingly as supermodular functions. As such, greedy A- and E-optimal designs approach $(1 - e^{-1})$-optimality. These results reconcile the empirical success of greedy experimental design with the non-supermodularity of the A- and E-optimality criteria.

1 Introduction

Experimental design consists of selecting which experiments to run or measurements to observe in order to estimate some variable of interest. Finding good designs is a ubiquitous problem with applications in regression, semi-supervised learning, multivariate analysis, and sensor placement [1–10]. Nevertheless, selecting a set of $k$ experiments that optimizes a generic figure of merit is NP-hard [11, 12]. In some situations, however, an approximate solution with optimality guarantees can be obtained in polynomial time. For example, this is possible when the cost function possesses a diminishing-returns property known as supermodularity, in which case greedy search is near-optimal. Greedy solutions are particularly attractive for large-scale problems due to their iterative nature and because they have lower computational complexity than typical convex relaxations [11, 12].
Supermodularity, however, is a stringent condition not met by important performance metrics. For instance, it is well known that neither the mean-square error (MSE) nor the maximum eigenvalue of the estimation error covariance matrix is supermodular [1, 13, 14]. Nevertheless, greedy algorithms have been successfully used to minimize these functions despite the lack of theoretical guarantees. The goal of this paper is to reconcile these observations by showing that these figures of merit, used in A- and E-optimal experimental designs, are approximately supermodular. To do so, it introduces different measures of approximate supermodularity and derives near-optimality results for these classes of functions. It then bounds how much the MSE and the maximum eigenvalue of the error covariance matrix violate supermodularity, leading to performance guarantees for greedy A- and E-optimal designs. More to the point, the main results of this work are:

1. The greedy solution of the A-optimal design problem is within a multiplicative $(1 - e^{-\alpha})$ factor of the optimal with $\alpha \ge [1 + \mathcal{O}(\gamma)]^{-1}$, where $\gamma$ upper bounds the signal-to-noise ratio (SNR) of the experiments (Theorem 3).

2. The value of the greedy solution of an E-optimal design problem is at most $(1 - e^{-1})(f(\mathcal{D}^\star) + k\epsilon)$, where $\epsilon \le \mathcal{O}(\gamma)$ (Theorem 4).

3. As the SNR of the experiments decreases, the performance guarantees for greedy A- and E-optimal designs approach the classical $1 - 1/e$.

This last observation is particularly interesting since careful selection of experiments is more important in low-SNR scenarios. In fact, unless experiments are highly correlated, designs have similar performances at high SNR. Also, note that the guarantees in this paper are not asymptotic and hold in the worst case, i.e., they hold for problems of any dimension and for designs of any size.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Notation. Lowercase boldface letters represent vectors ($x$), uppercase boldface letters are matrices ($X$), and calligraphic letters denote sets/multisets ($\mathcal{A}$). We write $\#\mathcal{A}$ for the cardinality of $\mathcal{A}$ and $\mathcal{P}(\mathcal{B})$ for the collection of all finite multisets of the set $\mathcal{B}$. To say $X$ is a positive semidefinite (PSD) matrix we write $X \succeq 0$, so that for $X, Y \in \mathbb{R}^{n\times n}$, $X \preceq Y \Leftrightarrow b^T X b \le b^T Y b$ for all $b \in \mathbb{R}^n$. Similarly, we write $X \succ 0$ when $X$ is positive definite.

2 Optimal experimental design

Let $\mathcal{E}$ be a pool of possible experiments. The outcome of experiment $e \in \mathcal{E}$ is a multivariate measurement $y_e \in \mathbb{R}^{n_e}$ defined as
$$y_e = A_e \theta + v_e, \quad (1)$$
where $\theta \in \mathbb{R}^p$ is a parameter vector with a prior distribution such that $\mathbb{E}[\theta] = \bar\theta$ and $\mathbb{E}\bigl[(\theta - \bar\theta)(\theta - \bar\theta)^T\bigr] = R_\theta \succ 0$; $A_e$ is an $n_e \times p$ observation matrix; and $v_e \in \mathbb{R}^{n_e}$ is a zero-mean random variable with arbitrary covariance matrix $R_e = \mathbb{E}[v_e v_e^T] \succ 0$ that represents the experiment uncertainty. The $\{v_e\}$ are assumed to be uncorrelated across experiments, i.e., $\mathbb{E}[v_e v_f^T] = 0$ for all $e \ne f$, and independent of $\theta$. These experiments aim to estimate
$$z = H\theta, \quad (2)$$
where $H$ is an $m \times p$ matrix. Appropriately choosing $H$ is important, given that the best experiments to estimate $\theta$ are not necessarily the best experiments to estimate $z$. For instance, if $\theta$ is to be used for classification, then $H$ can be chosen so as to optimize the design with respect to the output of the classifier. Alternatively, transductive experimental design can be performed by taking $H$ to be a collection of data points from a test set [6]. Finally, $H = I$, the identity matrix, recovers the classical $\theta$-estimation case. The experiments to be used in the estimation of $z$ are collected in a multiset $\mathcal{D}$ called a design; note that $\mathcal{D}$ contains elements of $\mathcal{E}$ with repetitions. Given a design $\mathcal{D}$, one can compute an optimal Bayesian estimate $\hat z_\mathcal{D}$, whose estimation error is measured by the error covariance matrix $K(\mathcal{D})$.
An expression for the estimator and its error matrix in terms of the problem constants is given in the following proposition.

Proposition 1 (Bayesian estimator). Let the experiments be defined as in (1). For $M_e = A_e^T R_e^{-1} A_e$ and a design $\mathcal{D} \in \mathcal{P}(\mathcal{E})$, the unbiased affine estimator of $z$ with the smallest error covariance matrix in the PSD cone is given by
$$\hat z_\mathcal{D} = H\left[R_\theta^{-1} + \sum_{e\in\mathcal{D}} M_e\right]^{-1}\left[\sum_{e\in\mathcal{D}} A_e^T R_e^{-1} y_e + R_\theta^{-1}\bar\theta\right]. \quad (3)$$
The corresponding error covariance matrix $K(\mathcal{D}) = \mathbb{E}\bigl[(z - \hat z_\mathcal{D})(z - \hat z_\mathcal{D})^T \mid \theta, \{M_e\}_{e\in\mathcal{D}}\bigr]$ is given by
$$K(\mathcal{D}) = H\left[R_\theta^{-1} + \sum_{e\in\mathcal{D}} M_e\right]^{-1} H^T. \quad (4)$$

Proof. See extended version [15].

The experimental design problem consists of selecting a design $\mathcal{D}$ of cardinality at most $k$ that minimizes the overall estimation error. This can be stated explicitly as the problem of choosing $\mathcal{D}$ with $\#\mathcal{D} \le k$ that minimizes the error covariance $K(\mathcal{D})$ given in (4). Note that (4) can account for unregularized (non-Bayesian) experimental design by removing $R_\theta$ and using a pseudo-inverse [16]. However, the error covariance matrix is no longer monotone in that case (see Lemma 1); providing guarantees for this scenario is the subject of future work. The minimization of the PSD matrix $K(\mathcal{D})$ in experimental design is typically attempted using scalarization procedures generically known as alphabetical design criteria, the most common of which are A-, D-, and E-optimal design [17]. These are tantamount to selecting different figures of merit to compare the matrices $K(\mathcal{D})$. Our focus in this paper is mostly on A- and E-optimal designs, but we also consider D-optimal designs for comparison. A design $\mathcal{D}$ with $k$ experiments is said to be A-optimal if it minimizes the estimation MSE, which is given by the trace of the covariance matrix:
$$\underset{|\mathcal{D}| \le k}{\text{minimize}}\quad \operatorname{Tr}\bigl[K(\mathcal{D})\bigr] - \operatorname{Tr}\bigl[H R_\theta H^T\bigr]. \tag{P-A}$$
Notice that it is customary to say a design is A-optimal when $H = I$ in (P-A), whereas the notation V-optimal is reserved for the case when $H$ is arbitrary [17].
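To make (4) concrete, here is a minimal numerical sketch (a toy example of our own, not from the paper) for a two-dimensional parameter with scalar experiments, where each $M_e = a_e a_e^T / \sigma_e^2$, $H = I$, and the $2\times 2$ inverse is available in closed form:

```python
def mat_inv2(A):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def error_cov(design, a_vecs, sigma2, R_theta_inv):
    """K(D) = [R_theta^{-1} + sum_e M_e]^{-1} for H = I (Eq. (4))."""
    S = [row[:] for row in R_theta_inv]
    for e in design:
        a = a_vecs[e]
        for i in range(2):
            for j in range(2):
                S[i][j] += a[i] * a[j] / sigma2[e]  # M_e = a_e a_e^T / sigma_e^2
    return mat_inv2(S)

# Two candidate experiments measuring each coordinate; prior R_theta = I.
a_vecs = [(1.0, 0.0), (0.0, 1.0)]
sigma2 = [1.0, 1.0]
I2 = [[1.0, 0.0], [0.0, 1.0]]
K = error_cov([0, 0, 1], a_vecs, sigma2, I2)  # design repeats experiment 0
mse = K[0][0] + K[1][1]  # the A-optimality objective Tr K(D)
```

Running experiment 0 twice and experiment 1 once gives $K(\mathcal{D}) = \mathrm{diag}(1/3, 1/2)$, illustrating how repeated experiments keep shrinking the covariance along their direction.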
We do not make this distinction here for conciseness. A design is E-optimal if, instead of minimizing the MSE as in (P-A), it minimizes the largest eigenvalue of the covariance matrix $K(\mathcal{D})$, i.e.,
$$\underset{|\mathcal{D}| \le k}{\text{minimize}}\quad \lambda_{\max}\bigl[K(\mathcal{D})\bigr] - \lambda_{\max}\bigl[H R_\theta H^T\bigr]. \tag{P-E}$$
Since the trace of a matrix is the sum of its eigenvalues, we can think of (P-E) as a robust version of (P-A): while the design in (P-A) seeks to reduce the estimation error in all directions, the design in (P-E) seeks to reduce the estimation error in the worst direction. Equivalently, given that $\lambda_{\max}(X) = \max_{\|u\|_2 = 1} u^T X u$, we can interpret (P-E) with $H = I$ as minimizing the MSE for an adversarial choice of $z$. A D-optimal design is one in which the objective is to minimize the log-determinant of the estimator's covariance matrix:
$$\underset{|\mathcal{D}| \le k}{\text{minimize}}\quad \log\det\bigl[K(\mathcal{D})\bigr] - \log\det\bigl[H R_\theta H^T\bigr]. \tag{P-D}$$
The motivation for the objective in (P-D) is that the log-determinant of $K(\mathcal{D})$ is proportional to the volume of the confidence ellipsoid when the data are Gaussian. Note that the trace, maximum eigenvalue, and determinant of $H R_\theta H^T$ in (P-A), (P-E), and (P-D) are constants and do not affect the respective optimization problems. They are subtracted so that the objectives vanish when $\mathcal{D} = \emptyset$, the empty set, which simplifies the exposition in Section 4. Although the problem formulations (P-A), (P-E), and (P-D) are integer programs known to be NP-hard, greedy methods are widely used for their solution, with good performance in practice. In the case of D-optimal design, this is justified theoretically because the objective of (P-D) is supermodular, which implies that greedy methods are $(1 - e^{-1})$-optimal [2, 11, 12]. The objectives in (P-A) and (P-E), on the other hand, are not supermodular in general [1, 13, 14], and it is not known why their greedy optimization yields good results in practice; conditions for the MSE to be supermodular exist but are restrictive [1].
The goal of this paper is to derive performance guarantees for greedy solutions of A- and E-optimal design problems. We do so by developing different notions of approximate supermodularity and showing that A- and E-optimal design problems are not far from supermodular.

Remark 1. Besides its intrinsic value as a minimizer of the volume of the confidence ellipsoid, (P-D) is often used as a surrogate for (P-A) when A-optimality (MSE) is considered the appropriate metric. It is important to point out that this is only justified when the problem has some inherent structure suggesting that the minimum-volume ellipsoid is somewhat symmetric. Otherwise, since the volume of an ellipsoid can be reduced by decreasing the length of a single principal axis, using (P-D) can lead to designs that perform well, in the MSE sense, along a few directions of the parameter space and poorly along all others. Formally, this can be seen by comparing the variation of the log-determinant and trace functions with respect to the eigenvalues of the PSD matrix $K$:
$$\frac{\partial \log\det(K)}{\partial \lambda_j(K)} = \frac{1}{\lambda_j(K)} \quad\text{and}\quad \frac{\partial \operatorname{Tr}(K)}{\partial \lambda_j(K)} = 1.$$
The gradient of the log-determinant is largest in the direction of the smallest eigenvalue of the error covariance matrix. In contrast, the MSE gives equal weight to all directions of the space. The latter therefore yields balanced designs, whereas the former tends to flatten the confidence ellipsoid unless the problem has a specific structure.

3 Approximate supermodularity

Consider a multiset function $f : \mathcal{P}(\mathcal{E}) \to \mathbb{R}$ whose value at an arbitrary multiset $\mathcal{D} \in \mathcal{P}(\mathcal{E})$ is denoted $f(\mathcal{D})$. We say the function $f$ is normalized if $f(\emptyset) = 0$, and monotone decreasing if $f(\mathcal{A}) \ge f(\mathcal{B})$ for all multisets $\mathcal{A} \subseteq \mathcal{B}$. Observe that if a function is normalized and monotone decreasing, then $f(\mathcal{D}) \le 0$ for all $\mathcal{D}$.
The objectives of (P-A), (P-E), and (P-D) are normalized and monotone decreasing multiset functions, since adding experiments to a design decreases the covariance matrix uniformly in the PSD cone (see Lemma 1). We say that a multiset function $f$ is supermodular if for all pairs of multisets $\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E})$ with $\mathcal{A} \subseteq \mathcal{B}$ and all elements $u \in \mathcal{E}$ it holds that $f(\mathcal{A}) - f(\mathcal{A} \cup \{u\}) \ge f(\mathcal{B}) - f(\mathcal{B} \cup \{u\})$. Supermodular functions encode a notion of diminishing returns as the sets grow. Their relevance in this paper is due to the celebrated bound on the suboptimality of their greedy minimization [18]. Specifically, construct a greedy solution by starting with $\mathcal{G}_0 = \emptyset$ and incorporating elements (experiments) $e \in \mathcal{E}$ greedily, so that at the $h$-th iteration we incorporate the element whose addition to $\mathcal{G}_{h-1}$ results in the largest reduction in the value of $f$:
$$\mathcal{G}_h = \mathcal{G}_{h-1} \cup \{e\}, \quad\text{with } e = \operatorname*{argmin}_{u \in \mathcal{E}}\ f(\mathcal{G}_{h-1} \cup \{u\}). \quad (5)$$
The recursion in (5) is repeated for $k$ steps to obtain a greedy solution with $k$ elements. Then, if $f$ is normalized, monotone decreasing, and supermodular,
$$f(\mathcal{G}_k) \le (1 - e^{-1}) f(\mathcal{D}^\star), \quad (6)$$
where $\mathcal{D}^\star \triangleq \operatorname{argmin}_{|\mathcal{D}| \le k} f(\mathcal{D})$ is the optimal design of cardinality not larger than $k$ [18]. We emphasize that, in contrast to the classical greedy algorithm, (5) allows the same element to be selected multiple times. The optimality guarantee in (6) applies to (P-D) because its objective is supermodular. This is not true of the cost functions of (P-A) and (P-E). We address this issue by postulating that if a function does not violate supermodularity too much, then its greedy minimization should perform close to the supermodular case. To formalize this idea, we introduce two measures of approximate supermodularity and derive near-optimal bounds based on these properties. It is worth noting that, as intuitive as this may be, such results are not straightforward: in fact, [19] showed that even functions $\delta$-close to supermodular cannot be optimized in polynomial time.
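The greedy recursion (5) applied to the A-optimality objective can be sketched as follows (a toy instance of our own making: two-dimensional $\theta$, scalar experiments with $M_e = a_e a_e^T/\sigma_e^2$, $H = I$, prior $R_\theta = I$, so the MSE is the trace of a $2\times 2$ inverse available in closed form; repetitions of an experiment are allowed, as in (5)):

```python
def mse(design, a_vecs, sigma2):
    # Tr K(D) with K(D) = [I + sum_e a_e a_e^T / sigma_e^2]^{-1}
    S = [[1.0, 0.0], [0.0, 1.0]]
    for e in design:
        a = a_vecs[e]
        for i in range(2):
            for j in range(2):
                S[i][j] += a[i] * a[j] / sigma2[e]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return (S[0][0] + S[1][1]) / det  # trace of the 2x2 inverse

def greedy_design(a_vecs, sigma2, k):
    design = []
    for _ in range(k):  # recursion (5): pick the largest one-step decrease
        best = min(range(len(a_vecs)),
                   key=lambda u: mse(design + [u], a_vecs, sigma2))
        design.append(best)
    return design

a_vecs = [(1.0, 0.0), (0.0, 1.0)]
sigma2 = [1.0, 1.0]
design = greedy_design(a_vecs, sigma2, 2)
```

With two orthogonal candidate experiments and a budget of $k = 2$, the greedy rule picks each once, since after the first selection the unmeasured direction offers the larger MSE reduction.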
We start with the following multiplicative relaxation of the supermodular property.

Definition 1 (α-supermodularity). A multiset function f : P(E) → R is α-supermodular, for α : N × N → R, if for all multisets A, B ∈ P(E), A ⊆ B, and all u ∈ E it holds that

f(A) − f(A ∪ {u}) ≥ α(#A, #B) [f(B) − f(B ∪ {u})].   (7)

Notice that for α ≥ 1, (7) reduces to the original definition of supermodularity, in which case we refer to the function simply as supermodular [11, 12]. On the other hand, when α < 1, f is said to be approximately supermodular. Notice that if f is decreasing, then (7) always holds for α ≡ 0. We are therefore interested in the largest α for which (7) holds, i.e.,

α(a, b) = min_{A, B ∈ P(E) : A ⊆ B, #A = a, #B = b, u ∈ E} [f(A) − f(A ∪ {u})] / [f(B) − f(B ∪ {u})].   (8)

Interestingly, α not only measures how much f violates supermodularity, but it also quantifies the loss in performance guarantee incurred from these violations.

Theorem 1. Let f be a normalized, monotone decreasing, and α-supermodular multiset function. Then, for ᾱ = min_{a < ℓ, b < ℓ+k} α(a, b), the greedy solution from (5) obeys

f(G_ℓ) ≤ [1 − ∏_{h=0}^{ℓ−1} (1 − (∑_{s=0}^{k−1} α(h, h + s)^{−1})^{−1})] f(D⋆) ≤ (1 − e^{−ᾱℓ/k}) f(D⋆).   (9)

Proof. See extended version [15].

Theorem 1 bounds the suboptimality of the greedy solution from (5) when its objective is α-supermodular. At the same time, it quantifies the effect of relaxing the supermodularity hypothesis typically used to provide performance guarantees in these settings. In fact, if f is supermodular (α ≡ 1) and for ℓ = k, we recover the 1 − e^{−1} ≈ 0.63 guarantee from [18]. On the other hand, for an approximately supermodular function (ᾱ < 1), the result in (9) shows that the 63% guarantee can be recovered by selecting a set of size ℓ = ᾱ^{−1} k. Thus, α not only measures how much f violates supermodularity, but also gives a factor by which the cardinality constraint must be violated to obtain a supermodular near-optimal certificate.
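When α(a, b) = ᾱ is constant, the inner sum in (9) equals k/ᾱ and the product collapses to (1 − ᾱ/k)^ℓ, which can be compared against the looser exponential form. A small sketch (the constant-ᾱ simplification is made only for illustration):

```python
import math

def greedy_certificate(alpha_bar, ell, k):
    """Approximation factors from (9) when alpha(a, b) = alpha_bar is constant:
    the inner sum equals k / alpha_bar, so the product collapses to
    (1 - alpha_bar/k)**ell.  Both factors multiply f(D*) <= 0."""
    exact = 1 - (1 - alpha_bar / k) ** ell
    loose = 1 - math.exp(-alpha_bar * ell / k)
    return exact, loose

# alpha_bar = 1 and ell = k recovers the 1 - 1/e ~ 0.63 guarantee of [18]
exact, loose = greedy_certificate(1.0, 40, 40)
```

For ᾱ = 0.5, taking ℓ = 2k brings the loose factor back to 1 − e^{−1}, matching the ℓ = ᾱ^{−1}k discussion above.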
Similar to the original bound in [18], it is worth noting that (9) is not tight and that better results are typical in practice (see Section 5). Although α-supermodularity gives a multiplicative approximation factor, finding meaningful bounds on α can be challenging for certain multiset functions, such as the E-optimality criterion in (P-E). It is therefore useful to look at approximate supermodularity from a different perspective, as in the following definition.

Definition 2 (ϵ-supermodularity). A multiset function f : P(E) → R is ϵ-supermodular, for ϵ : N × N → R, if for all multisets A, B ∈ P(E), A ⊆ B, and all u ∈ E it holds that

f(A) − f(A ∪ {u}) ≥ f(B) − f(B ∪ {u}) − ϵ(#A, #B).   (10)

Again, we say f is supermodular if ϵ(a, b) ≤ 0 for all a, b and approximately supermodular otherwise. As with α, we want the best ϵ that satisfies (10), which is given by

ϵ(a, b) = max_{A, B ∈ P(E) : A ⊆ B, #A = a, #B = b, u ∈ E} f(B) − f(B ∪ {u}) − f(A) + f(A ∪ {u}).   (11)

In contrast to α-supermodularity, we obtain an additive approximation guarantee for the greedy minimization of ϵ-supermodular functions.

Theorem 2. Let f be a normalized, monotone decreasing, and ϵ-supermodular multiset function. Then, for ϵ̄ = max_{a < ℓ, b < ℓ+k} ϵ(a, b), the greedy solution from (5) obeys

f(G_ℓ) ≤ [1 − (1 − 1/k)^ℓ] f(D⋆) + (1/k) ∑_{s=0}^{k−1} ∑_{h=0}^{ℓ−1} ϵ(h, h + s) (1 − 1/k)^{ℓ−1−h} ≤ (1 − e^{−ℓ/k}) (f(D⋆) + k ϵ̄).   (12)

Proof. See extended version [15].

As before, ϵ quantifies the loss in performance guarantee due to relaxing supermodularity. Indeed, (12) reveals that ϵ-supermodular functions have the same guarantees as a supermodular function up to an additive factor of Θ(k ϵ̄). In fact, if ϵ̄ ≤ (ek)^{−1} |f(D⋆)| (recall that f(D⋆) ≤ 0 due to normalization), then taking ℓ = 3k recovers the 63% approximation factor of supermodular functions. This same factor is obtained for α ≥ 1/3-supermodular functions. With the certificates of Theorems 1 and 2 in hand, we now proceed with the study of the A- and E-optimality criteria.
In the next section, we derive explicit bounds on their α- and ϵ-supermodularity, respectively, thus providing near-optimal performance guarantees for greedy A- and E-optimal designs.

4 Near-optimal experimental design

Theorems 1 and 2 apply to functions that are (i) normalized, (ii) monotone decreasing, and (iii) approximately supermodular. By construction, the objectives of (P-A) and (P-E) are normalized [(i)]. The following lemma establishes that they are also monotone decreasing [(ii)] by showing that K is a decreasing set function in the PSD cone. The definition of the Loewner order and the monotonicity of the trace operator readily give the desired results [16].

Lemma 1. The matrix-valued set function K(D) in (4) is monotonically decreasing with respect to the PSD cone, i.e., A ⊆ B ⇒ K(A) ⪰ K(B).

Proof. See extended version [15].

The main results of this section provide the final ingredient [(iii)] for Theorems 1 and 2 by bounding the approximate supermodularity of the A- and E-optimality criteria. We start by showing that the objective of (P-A) is α-supermodular.

Theorem 3. The objective of (P-A) is α-supermodular with

α(a, b) ≥ (1 / κ(H)^2) · λ_min(R_θ^{−1}) / (λ_max(R_θ^{−1}) + a · ℓ_max),  for all b ∈ N,   (13)

where ℓ_max = max_{e ∈ E} λ_max(M_e), M_e = A_e^T R_e^{−1} A_e, and κ(H) = σ_max / σ_min is the ℓ_2-norm condition number of H, with σ_max and σ_min denoting the largest and smallest singular values of H, respectively.

Proof. See extended version [15].

Theorem 3 bounds the α-supermodularity of the objective of (P-A) in terms of the condition number of H, the prior covariance matrix, and the measurement SNRs. To facilitate the interpretation of this result, let the SNR of the e-th experiment be γ_e = Tr[M_e] and suppose R_θ = σ_θ^2 I, H = I, and γ_e ≤ γ for all e ∈ E. Then, for ℓ = k greedy iterations, (13) implies

ᾱ ≥ 1 / (1 + 2k σ_θ^2 γ),

for ᾱ as in Theorem 1. This deceptively simple bound reveals that the MSE behaves as a supermodular function at low SNRs. Formally, α → 1 as γ → 0.
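To make the A-optimal setting concrete, the loop below sketches a greedy design under the standard Bayesian error covariance K(D) = (R_θ^{−1} + ∑_{e∈D} M_e)^{−1} with M_e = A_e^T R_e^{−1} A_e; this form is an assumption consistent with the definition of M_e above, and all dimensions and variances are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_exp, ne = 5, 30, 1                 # parameters, candidate pool, obs. size
sigma_theta2, sigma_v2 = 1.0, 10.0      # prior variance, noise variance (low SNR)
A = rng.normal(scale=1/np.sqrt(p), size=(n_exp, ne, p))

def K(D):
    """Assumed error covariance of the Bayesian estimator for a multiset
    design D: inverse of the prior information plus the design information."""
    info = np.eye(p) / sigma_theta2
    for e in D:
        info += A[e].T @ A[e] / sigma_v2   # M_e = A_e^T R_e^{-1} A_e
    return np.linalg.inv(info)

def f_A(D):
    """Normalized A-optimality objective: Tr K(D) - Tr K(empty) <= 0."""
    return np.trace(K(D)) - np.trace(K([]))

# Greedy A-optimal design with replacement, as in recursion (5)
D = []
for _ in range(10):
    D.append(min(range(n_exp), key=lambda e: f_A(D + [e])))
```

The trace of the error covariance never increases along the greedy path, reflecting Lemma 1.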
In contrast, the performance guarantee from Theorem 3 degrades in high SNR scenarios. In this case, however, greedy methods are expected to give good results, since designs yield similar estimation errors (as illustrated in Section 5). The greedy solution of (P-A) also approaches the 1 − 1/e guarantee when the prior on θ is concentrated (σ_θ^2 ≪ 1), i.e., when the problem is heavily regularized. These observations also hold for a generic H as long as it is well-conditioned. Even if κ(H) ≫ 1, we can replace H by H̃ = DH for some diagonal matrix D ≻ 0 without affecting the design, since z is arbitrarily scaled. The scaling D can be designed to minimize the condition number of H̃ by leveraging preconditioning and balancing methods [20, 21].

Proceeding, we derive guarantees for E-optimal designs using ϵ-supermodularity.

Theorem 4. The cost function of (P-E) is ϵ-supermodular with

ϵ(a, b) ≤ (b − a) σ_max(H)^2 λ_max(R_θ)^2 ℓ_max,   (14)

where ℓ_max = max_{e ∈ E} λ_max(M_e), M_e = A_e^T R_e^{−1} A_e, and σ_max(H) is the largest singular value of H.

Proof. See extended version [15].

Figure 1: A-optimal design: (a) Thm. 3; (b) A-optimality (low SNR); (c) A-optimality (high SNR). The plots show the unnormalized A-optimality value for clarity.

Figure 2: E-optimal design: (a) Thm. 4; (b) E-optimality (low SNR); (c) E-optimality (high SNR). The plots show the unnormalized E-optimality value for clarity.

Under the same assumptions as above, Theorem 4 gives

ϵ̄ ≤ 2k σ_θ^4 γ,

for ϵ̄ as in Theorem 2. Thus, ϵ̄ → 0 as γ → 0.
In other words, the behavior of the objective of (P-E) approaches that of a supermodular function as the SNR decreases. The same holds for concentrated priors, i.e., lim_{σ_θ^2 → 0} ϵ̄ = 0. Once again, it is worth noting that when the SNRs of the experiments are large, almost every design has the same E-optimal performance as long as the experiments are not too correlated. Thus, greedy design is also expected to give good results under these conditions. Finally, the proofs of Theorems 3 and 4 suggest that better bounds can be found when the designs are constructed without replacement, i.e., when only one of each experiment is allowed in the design.

5 Numerical examples

In this section, we illustrate the previous results in some numerical examples. To do so, we draw the elements of A_e from an i.i.d. zero-mean Gaussian random variable with variance 1/p and p = 20. The noise {v_e} are also Gaussian random variables with R_e = σ_v^2 I. We take σ_v^2 = 10^{−1} in high SNR and σ_v^2 = 10 in low SNR simulations. The experiment pool contains #E = 200 experiments.

Starting with A-optimal design, we display the bound from Theorem 3 in Figure 1a for multivariate measurements of size n_e = 5 and designs of size k = 40. Here, “equivalent α” is the single α̂ that gives the same near-optimal certificate (9) as using (13). As expected, α̂ approaches 1 as the SNR decreases. In fact, at −10 dB it is already close to 0.75, which means that by selecting a design of size ℓ = 55 we would be within 1 − 1/e of the optimal design of size k = 40. Figures 1b and 1c compare greedy A-optimal designs with the convex relaxation of (P-A) in low and high SNR scenarios. The designs are obtained from the continuous solutions using the hard-constraint, with-replacement method of [10] and a simple design truncation as in [22]. Therefore, these simulations consider univariate measurements (n_e = 1). For comparison, a design sampled uniformly at random with replacement from E is also presented.
Note that, as mentioned before, the performance difference across designs is small for high SNR (notice the scale in Figures 1c and 2c), so that even random designs perform well. For the E-optimality criterion, the bound from Theorem 4 is shown in Figure 2a, again for multivariate measurements of size n_e = 5 and designs of size k = 40. Once again, “equivalent ϵ” is the single value ϵ̂ that yields the same guarantee as using (14). In this case, the bound degradation in high SNR is more pronounced. This reflects the difficulty in bounding the approximate supermodularity of the E-optimality cost function. Still, Figures 2b and 2c show that greedy E-optimal designs have good performance when compared to convex relaxations or random designs. Note that, though it is not intended for E-optimal designs, we again display the results of the sampling post-processing from [10]. In Figure 2b, the random design is omitted due to its poor performance.

5.1 Cold-start survey design for recommender systems

Recommender systems use semi-supervised learning methods to predict user ratings based on few rated examples. These methods are useful, for instance, to streaming service providers who are interested in using predicted ratings of movies to provide recommendations. For new users, these systems suffer from a “cold-start problem,” which refers to the fact that it is hard to provide accurate recommendations without knowing a user’s preferences on at least a few items. For this reason, services explicitly ask users for ratings in initial surveys before emitting any recommendation. Selecting which movies should be rated to better predict a user’s preferences can be seen as an experimental design problem. In the following example, we use a subset of the EachMovie dataset [23] to illustrate how greedy experimental design can be applied to address this problem. We randomly selected a training and a test set containing 9000 and 3000 users, respectively.
Following the notation from Section 2, each experiment in E represents a movie (#E = 1622) and the observation vector A_e collects the ratings of movie e for each user in the training set. The parameter θ is used to express the ratings of a new user in terms of those in the training set. Our hope is that we can extrapolate the observed ratings, i.e., {y_e}_{e ∈ D}, to obtain the rating for a movie f ∉ D as ŷ_f = A_f θ̂. Since the mean absolute error (MAE) is commonly used in this setting, we choose to work with the A-optimality criterion. We also let H = I and take a non-informative prior θ̄ = 0 and R_θ = σ_θ^2 I with σ_θ^2 = 100. As expected, greedy A-optimal design is able to find small sets of movies that lead to good predictions. For k = 10, for example, MAE = 2.3, steadily reducing until MAE < 1.8 for k ≥ 35. These are considerably better results than a random movie selection, for which the MAE varies between 2.8 and 3.3 for k between 10 and 50. Instead of focusing on the raw ratings, we may be interested in predicting the user’s favorite genre. This is a challenging task due to the heavily skewed dataset. For instance, 32% of the movies are dramas whereas only 0.02% are animations. Still, we use the simplest possible classifier by selecting the category with the highest average estimated ratings. By using greedy design, we can obtain a misclassification rate of approximately 25% by observing 100 ratings, compared to an error rate of over 45% for a random design.

6 Related work

Optimal experimental design. Classical experimental design typically relies on convex relaxations to solve optimal design problems [17, 22]. However, because these are semidefinite programs (SDPs) or sequential second-order cone programs (SOCPs), their computational complexity can hinder their use in large-scale problems [5, 7, 22, 24]. Another issue with these relaxations is that some sort of post-processing is required to extract a valid design from their continuous solutions [5, 22].
For D-optimal designs, this can be done with (1 − e^{−1})-optimality [25, 26]. For A-optimal designs, [10] provides near-optimal randomized schemes for large enough k.

Greedy optimization guarantees. The (1 − e^{−1})-suboptimality of greedy search for supermodular minimization under cardinality constraints was established in [18]. To deal with the fact that the MSE is not supermodular, α-supermodularity with constant α was introduced in [27] along with explicit lower bounds. This concept is related to the submodularity ratio introduced by [3] to obtain guarantees similar to Theorem 1 for dictionary selection and forward regression. However, the bounds on the submodularity ratio from [3, 28] depend on the sparse eigenvalues of K or restricted strong convexity constants of the A-optimal objective, which are NP-hard to compute. Explicit bounds for the submodularity ratio of A-optimal experimental design were recently obtained in [29]. Nevertheless, neither [27] nor [29] considers multisets. Hence, to apply their results we must operate on an extended ground set containing k unique copies of each experiment, which makes the bounds uninformative. For instance, in the setting of Section 5, Theorem 3 guarantees 0.1-optimality at 0 dB SNR whereas [29] guarantees 2.5 × 10^{−6}-optimality. The concept of ϵ-supermodularity was first explored in [30] for a constant ϵ. There, guarantees for dictionary selection were derived by bounding ϵ using an incoherence assumption on the A_e. Finally, a more stringent definition of approximately submodular functions was put forward in [19] by requiring the function to be upper and lower bounded by a submodular function. They show strong impossibility results unless the function is O(1/k)-close to submodular. Approximate submodularity is sometimes referred to as weak submodularity (e.g., [28]), though it is not related to the weak submodularity concept from [31].
7 Conclusions

Greedy search is known to be an empirically effective method to find A- and E-optimal experimental designs, despite the fact that these objectives are not supermodular. We reconciled these observations by showing that the A- and E-optimality criteria are approximately supermodular and deriving near-optimal guarantees for this class of functions. By quantifying their supermodularity violations, we showed that the behavior of the MSE and the maximum eigenvalue of the error covariance matrix becomes increasingly supermodular as the SNR decreases. An important open question is whether these results can be improved using additional knowledge. Can we exploit some structure of the observation matrices (e.g., Fourier, random)? What if the parameter vector is sparse but with unknown support (e.g., compressive sensing)? Are there practical experiment properties other than the SNR that lead to small supermodularity violations? Finally, we hope that this approximate supermodularity framework can be extended to other problems.

Acknowledgments

This work was supported by the National Science Foundation CCF 1717120 and in part by the ARO W911NF1710438.

References

[1] A. Das and D. Kempe, “Algorithms for subset selection in linear regression,” in ACM Symp. on Theory of Comput., 2008, pp. 45–54.
[2] A. Krause, A. Singh, and C. Guestrin, “Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies,” J. Mach. Learning Research, vol. 9, pp. 235–284, 2008.
[3] A. Das and D. Kempe, “Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection,” in Int. Conf. on Mach. Learning, 2011.
[4] Y. Washizawa, “Subset kernel principal component analysis,” in Int. Workshop on Mach. Learning for Signal Process., 2009.
[5] S. Joshi and S. Boyd, “Sensor selection via convex optimization,” IEEE Trans. Signal Process., vol. 57, no. 2, pp. 451–462, 2009.
[6] K. Yu, J. Bi, and V.
Tresp, “Active learning via transductive experimental design,” in Int. Conf. on Mach. Learning, 2006, pp. 1081–1088.
[7] P. Flaherty, A. Arkin, and M.I. Jordan, “Robust design of biological experiments,” in Advances in Neural Information Processing Systems, 2006, pp. 363–370.
[8] X. Zhu, “Semi-supervised learning literature survey,” 2008, http://pages.cs.wisc.edu/~jerryzhu/research/ssl/semireview.html.
[9] S. Liu, S.P. Chepuri, M. Fardad, E. Maşazade, G. Leus, and P.K. Varshney, “Sensor selection for estimation with correlated measurement noise,” IEEE Trans. Signal Process., vol. 64, no. 13, pp. 3509–3522, 2016.
[10] Y. Wang, A.W. Yu, and A. Singh, “On computationally tractable selection of experiments in regression models,” 2017, arXiv:1601.02068v5.
[11] F. Bach, “Learning with submodular functions: A convex optimization perspective,” Foundations and Trends in Machine Learning, vol. 6, no. 2–3, pp. 145–373, 2013.
[12] A. Krause and D. Golovin, “Submodular function maximization,” in Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014.
[13] G. Sagnol, “Approximation of a maximum-submodular-coverage problem involving spectral functions, with application to experimental designs,” Discrete Appl. Math., vol. 161, no. 1–2, pp. 258–276, 2013.
[14] T.H. Summers, F.L. Cortesi, and J. Lygeros, “On submodularity and controllability in complex dynamical networks,” IEEE Trans. Contr. Netw. Syst., vol. 3, no. 1, pp. 91–101, 2016.
[15] L.F.O. Chamon and A. Ribeiro, “Approximate supermodularity bounds for experimental design,” 2017, arXiv:1711.01501.
[16] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 2013.
[17] F. Pukelsheim, Optimal Design of Experiments, SIAM, 2006.
[18] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher, “An analysis of approximations for maximizing submodular set functions—I,” Mathematical Programming, vol. 14, no. 1, pp. 265–294, 1978.
[19] T. Horel and Y.
Singer, “Maximization of approximately submodular functions,” in Advances in Neural Information Processing Systems, 2016, pp. 3045–3053.
[20] M. Benzi, “Preconditioning techniques for large linear systems: A survey,” Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002.
[21] R.D. Braatz and M. Morari, “Minimizing the Euclidean condition number,” SIAM Journal on Control and Optimization, vol. 32, no. 6, pp. 1763–1768, 1994.
[22] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[23] Digital Equipment Corporation, “EachMovie dataset,” http://www.gatsby.ucl.ac.uk/~chuwei/data/EachMovie/.
[24] G. Sagnol, “Computing optimal designs of multiresponse experiments reduces to second-order cone programming,” Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1684–1708, 2011.
[25] T. Horel, S. Ioannidis, and S. Muthukrishnan, “Budget feasible mechanisms for experimental design,” in Latin American Theoretical Informatics Symposium, 2014.
[26] A.A. Ageev and M.I. Sviridenko, “Pipage rounding: A new method of constructing algorithms with proven performance guarantee,” Journal of Combinatorial Optimization, vol. 8, no. 3, pp. 307–328, 2004.
[27] L.F.O. Chamon and A. Ribeiro, “Near-optimality of greedy set selection in the sampling of graph signals,” in Global Conf. on Signal and Inform. Process., 2016.
[28] E.R. Elenberg, R. Khanna, A.G. Dimakis, and S. Negahban, “Restricted strong convexity implies weak submodularity,” 2016, arXiv:1612.00804.
[29] A. Bian, J.M. Buhmann, A. Krause, and S. Tschiatschek, “Guarantees for greedy maximization of non-submodular functions with applications,” in ICML, 2017.
[30] A. Krause and V. Cevher, “Submodular dictionary selection for sparse representation,” in Int. Conf. on Mach. Learning, 2010.
[31] A. Borodin, D.T.M. Le, and Y. Ye, “Weakly submodular functions,” 2014, arXiv:1401.6697v5.
Online Learning for Multivariate Hawkes Processes

Yingxiang Yang∗ Jalal Etesami† Niao He† Negar Kiyavash∗†
University of Illinois at Urbana-Champaign, Urbana, IL 61801
{yyang172,etesami2,niaohe,kiyavash}@illinois.edu

Abstract

We develop a nonparametric and online learning algorithm that estimates the triggering functions of a multivariate Hawkes process (MHP). The approach we take approximates the triggering function f_{i,j}(t) by functions in a reproducing kernel Hilbert space (RKHS), and maximizes a time-discretized version of the log-likelihood with Tikhonov regularization. Theoretically, our algorithm achieves an O(log T) regret bound. Numerical results show that our algorithm offers a competitive performance to that of the nonparametric batch learning algorithm, with a run time comparable to parametric online learning algorithms.

1 Introduction

Multivariate Hawkes processes (MHPs) are counting processes where an arrival in one dimension can affect the arrival rates of other dimensions. They were originally proposed to statistically model the arrival patterns of earthquakes [16]. However, the MHP’s ability to capture mutual excitation between dimensions of a process also makes it a popular model in many other areas, including high frequency trading [3], modeling neural spike trains [24], modeling diffusion in social networks [28], and capturing causality [12, 18]. For a p-dimensional MHP, the intensity function of the i-th dimension takes the following form:

λ_i(t) = μ_i + ∑_{j=1}^p ∫_0^t f_{i,j}(t − τ) dN_j(τ),   (1)

where the constant μ_i is the base intensity of the i-th dimension, N_j(t) counts the number of arrivals in the j-th dimension within [0, t], and f_{i,j}(t) is the triggering function that embeds the underlying causal structure of the model. In particular, one arrival in the j-th dimension at time τ will affect the intensity of the arrivals in the i-th dimension at time t by the amount f_{i,j}(t − τ) for t > τ.
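For a finite record of past arrivals, the integral in (1) reduces to a sum over events. A minimal sketch (the two-dimensional example, parameter values, and exponential triggering functions below are illustrative, not taken from the paper):

```python
import numpy as np

def intensity(i, t, events, mu, f):
    """Evaluate lambda_i(t) in (1): base intensity plus the triggering
    contributions f[i][j](t - tau) of all past arrivals tau < t."""
    lam = mu[i]
    for j, taus in enumerate(events):
        past = taus[taus < t]
        lam += np.sum(f[i][j](t - past))
    return lam

# Illustrative two-dimensional MHP with exponential triggering functions
mu = [0.2, 0.1]
f = [[lambda s: 0.5 * np.exp(-s), lambda s: 0.1 * np.exp(-2 * s)],
     [lambda s: 0.3 * np.exp(-s), lambda s: 0.2 * np.exp(-s)]]
events = [np.array([0.5, 1.2]), np.array([0.9])]  # arrival times per dimension
lam0 = intensity(0, 2.0, events, mu, f)
```

Events after t contribute nothing, so before any arrival the intensity equals the base intensity μ_i.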
Therefore, learning the triggering functions is the key to learning an MHP model. In this work, we consider the problem of estimating the f_{i,j}(t)s using nonparametric online learning techniques.

1.1 Motivations

Why nonparametric? Most existing works consider exponential triggering functions:

f_{i,j}(t) = α_{i,j} exp{−β_{i,j} t} 1{t > 0},   (2)

where α_{i,j} is unknown while β_{i,j} is given a priori. Under this assumption, learning f_{i,j}(t) is equivalent to learning a real number, α_{i,j}. However, there are many scenarios where (2) fails to describe the correct mutual influence pattern between dimensions. For example, [20] and [11] have reported delayed and bell-shaped triggering functions when applying the MHP model to neural spike train datasets. Moreover, when the f_{i,j}(t)s are not exponential, or when the β_{i,j}s are inaccurate, the formulation in (2) is prone to model mismatch [15].

Why online learning? There are many reasons to consider an online framework. (i) Batch learning algorithms do not scale well due to high computational complexity [15]. (ii) The data can be costly to observe, and can be streaming in nature, for example, in criminology. The above concerns motivate us to design an online learning algorithm in the nonparametric regime.

∗Department of Electrical and Computer Engineering. †Department of Industrial and Enterprise Systems Engineering. This work was supported in part by MURI grant ARMY W911NF-15-1-0479 and ONR grant W911NF-15-1-0479. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1.2 Related Works

Earlier works on learning the triggering functions can be largely categorized into three classes.

Batch and parametric. The simplest way to learn the triggering functions is to assume that they possess a parametric form, e.g., (2), and learn the coefficients. The most widely used estimators include the maximum likelihood estimator [23] and the minimum mean-square error estimator [2].
These estimators can also be generalized to the high-dimensional case when the coefficient matrix is sparse and low-rank [2]. More generally, one can assume that the f_{i,j}(t)s lie within the span of a given set of basis functions S = {e_1(t), ..., e_{|S|}(t)}: f_{i,j}(t) = ∑_{l=1}^{|S|} c_l e_l(t), where the e_l(t)s have a given parametric form [13, 27]. The state of the art of such algorithms is [27], where |S| is adaptively chosen, which sometimes requires a significant portion of the data to determine the optimal S.

Batch and nonparametric. A more sophisticated approach towards finding the set S is explored in [29], where the coefficients and the basis functions are iteratively updated and refined. Unlike [27], where the basis functions take a predetermined form, [29] updates the basis functions by solving a set of Euler-Lagrange equations in the nonparametric regime. However, the formulation of [29] is nonconvex, and therefore optimality is not guaranteed. The method also requires more than 10^5 arrivals for each dimension in order to obtain good results, on networks of fewer than 5 dimensions. Another way to estimate the f_{i,j}(t)s nonparametrically is proposed in [4], which solves a set of p Wiener-Hopf systems, each of dimension at least p^2. The algorithm works well on small dimensions; however, it requires inverting a p^2 × p^2 matrix, which is costly, if not altogether infeasible, when p is large.

Online and parametric. To the best of our knowledge, learning the triggering functions in an online setting seems largely unexplored. Under the assumption that the f_{i,j}(t)s are exponential, [15] proposes an online algorithm using gradient descent, exploiting the evolutionary dynamics of the intensity function. The time axis is discretized into small intervals, and the updates are performed at the end of each interval.
While the authors provide the online solution for the parametric case, their work cannot readily extend to the nonparametric setting where the triggering functions are not exponential, mainly because the evolutionary dynamics of the intensity functions no longer hold. Learning triggering functions nonparametrically remains an open problem.

1.3 Challenges and Our Contributions

Designing an online algorithm in the nonparametric regime is not without its challenges: (i) It is not clear how to represent the f_{i,j}(t)s. In this work, we relate f_{i,j}(t) to an RKHS. (ii) Although online learning with kernels is a well-studied subject in other scenarios [19], a typical choice of loss function for learning an MHP usually involves the integral of the f_{i,j}(t)s, which prevents the direct application of the representer theorem. (iii) The outputs of the algorithm at each step require a projection step to ensure positivity of the intensity function. This requires solving a quadratic programming problem, which can be computationally expensive. How to circumvent this computational complexity issue is another challenge of this work. In this paper, we design, to the best of our knowledge, the first online learning algorithm for the triggering functions in the nonparametric regime. In particular, we tackle the challenges mentioned above, and the only assumptions we make are that the triggering functions f_{i,j}(t) are positive, have a decreasing tail, and belong to an RKHS. Theoretically, our algorithm achieves a regret bound of O(log T), and numerical experiments show that our approach outperforms the previous approaches despite the fact that they are tailored to a less general setting. In particular, our algorithm attains a similar performance to the nonparametric batch learning maximum likelihood estimators while reducing the run time extensively.

1.4 Notations

Prior to discussing our results, we introduce the basic notations used in the paper.
Detailed notations will be introduced along the way. For a p-dimensional MHP, we denote the intensity function of the i-th dimension by λ_i(t). We use λ(t) to denote the vector of intensity functions, and we use F = [f_{i,j}(t)] to denote the matrix of triggering functions. The i-th row of F is denoted by f_i. The number of arrivals in the i-th dimension up to t is denoted by the counting process N_i(t). We set N(t) = ∑_{i=1}^p N_i(t). The estimates of these quantities are denoted by their “hatted” versions. The arrival time of the n-th event in the j-th dimension is denoted by τ_{j,n}. Lastly, define ⌊x⌋_y = y⌊x/y⌋.

2 Problem Formulation

In this section, we introduce our assumptions and definitions, followed by the formulation of the loss function. We omit the basics on MHPs and instead refer the readers to [22] for details.

Assumption 2.1. We assume that the constant base intensity μ_i is lower bounded by a given threshold μ_min > 0. We also assume bounded and stationary increments for the MHP [16, 9]: for any t, z > 0, N_i(t) − N_i(t − z) ≤ κ_z = O(z). See Appendix A for more details.

Definition 2.1. Suppose that {t_k}_{k=0}^∞ is an arbitrary time sequence with t_0 = 0 and sup_{k≥1}(t_k − t_{k−1}) ≤ δ ≤ 1. Let ε_f : [0, ∞) → [0, ∞) be a continuous and bounded function such that lim_{t→∞} ε_f(t) = 0. Then f(x) satisfies the decreasing tail property with tail function ε_f(t) if

∑_{k=m}^∞ (t_k − t_{k−1}) sup_{x ∈ (t_{k−1}, t_k]} |f(x)| ≤ ε_f(t_{m−1}),  ∀m > 0.

Assumption 2.2. Let H be an RKHS associated with a kernel K(·, ·) that satisfies K(x, x) ≤ 1. Let L^1[0, ∞) be the space of functions whose absolute value is Lebesgue integrable. For any i, j ∈ {1, ..., p}, we assume that f_{i,j}(t) ∈ H and f_{i,j}(t) ∈ L^1[0, ∞), with both f_{i,j}(t) and df_{i,j}(t)/dt satisfying the decreasing tail property of Definition 2.1.

Assumption 2.1 is common and has been adopted in existing literature [22]. It ensures that the MHP is not “explosive” by assuming that N(t)/t is bounded.
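Definition 2.1 is concrete enough to verify numerically. For f(t) = e^{−βt} on a uniform grid t_k = kδ, the left-hand side can be summed directly and compared against the exponential tail function ε_f(t) = β^{−1} e^{−β(t − δ)} given for this f in Example 1 below; the grid, constants, and truncation level in this sketch are illustrative:

```python
import math

def tail_sum(beta, delta, m, K=4000):
    """Left-hand side of the decreasing-tail inequality in Definition 2.1
    for f(t) = exp(-beta t) on the uniform grid t_k = k*delta; the sup of
    a decreasing function over (t_{k-1}, t_k] is attained at t_{k-1}.
    K truncates the (rapidly converging) infinite sum."""
    return sum(delta * math.exp(-beta * delta * (k - 1)) for k in range(m, K))

def tail_bound(beta, delta, m):
    """Tail function eps_f(t) = exp(-beta*(t - delta))/beta evaluated at
    t_{m-1} = (m - 1)*delta."""
    return math.exp(-beta * (delta * (m - 1) - delta)) / beta

# e.g. beta = 3, delta = 0.05: the sum is dominated by the bound for every m
lhs, rhs = tail_sum(3.0, 0.05, 5), tail_bound(3.0, 0.05, 5)
```

The inequality βδ ≤ e^{βδ} − 1 guarantees the bound holds for every m on this grid.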
Assumption 2.2 restricts the tail behaviors of both f_{i,j}(t) and df_{i,j}(t)/dt. Complicated as it may seem, functions with exponentially decaying tails satisfy this assumption, as is illustrated by the following example (see Appendix B for proof):

Example 1. Functions f_1(t) = exp{−βt} 1{t > 0} and f_2(t) = exp{−(t − γ)^2} 1{t > 0} satisfy Assumption 2.2 with tail functions β^{−1} exp{−β(t − δ)} and √(2π) erfc(t/√2 − γ) exp{δ^2/2}, respectively.

2.1 A Discretized Loss Function for Online Learning

A common approach for learning the parameters of an MHP is to perform regularized maximum likelihood estimation. As such, we introduce a loss function comprised of the negative of the log-likelihood function and a penalty term to enforce desired structural properties, e.g., sparsity of the triggering matrix F or smoothness of the triggering functions (see, e.g., [2, 29, 27]). The negative of the log-likelihood function of an MHP over a time interval [0, t] is given by

L_t(λ) := − ∑_{i=1}^p (∫_0^t log λ_i(τ) dN_i(τ) − ∫_0^t λ_i(τ) dτ).   (3)

Let {τ_1, ..., τ_{N(t)}} denote the arrival times of all the events within [0, t] and let {t_0, ..., t_{M(t)}} be a finite partition of the time interval [0, t] such that t_0 = 0 and t_{k+1} := min_{τ_i ≥ t_k} {⌊t_k⌋_δ + δ, τ_i}. Using this partitioning, it is straightforward to see that the function in (3) can be written as

L_t(λ) = ∑_{i=1}^p ∑_{k=1}^{M(t)} (∫_{t_{k−1}}^{t_k} λ_i(τ) dτ − x_{i,k} log λ_i(t_k)) := ∑_{i=1}^p L_{i,t}(λ_i),   (4)

where x_{i,k} := N_i(t_k) − N_i(t_{k−1}). By the definition of t_k, we know that x_{i,k} ∈ {0, 1}. In order to learn the f_{i,j}(t)s using an online kernel method, we require a result similar to the representer theorem in [25] that specifies the form of the optimizer. This theorem requires the regularized version of the loss in (4) to be a function of only the f_{i,j}(t)s. However, due to the integral part, L_t(λ) is a function of both the f_{i,j}(t)s and their integrals, which prevents us from applying the representer theorem directly.
To resolve this issue, several approaches can be applied, such as adjusting the Hilbert space as proposed in [14] in the context of Poisson processes, or approximating the log-likelihood function as in [15]. Here, we adopt a method similar to [15] and approximate (4) by discretizing the integral:
$$L^{(\delta)}_t(\lambda) := \sum_{i=1}^{p} \sum_{k=1}^{M(t)} \left( (t_k - t_{k-1})\, \lambda_i(t_k) - x_{i,k} \log \lambda_i(t_k) \right) := \sum_{i=1}^{p} L^{(\delta)}_{i,t}(\lambda_i). \quad (5)$$
Intuitively, if δ is small enough and the triggering functions are bounded, it is reasonable to expect that $L_{i,t}(\lambda)$ is close to $L^{(\delta)}_{i,t}(\lambda)$. Below, we characterize the accuracy of the above discretization together with a truncation of the intensity function. First, we require the following definition.

Definition 2.2. We define the truncated intensity function as
$$\lambda^{(z)}_i(t) := \mu_i + \sum_{j=1}^{p} \int_0^t \mathbf{1}\{t - \tau < z\}\, f_{i,j}(t - \tau)\, \mathrm{d}N_j(\tau). \quad (6)$$

Proposition 1. Under Assumptions 2.1 and 2.2, for any i ∈ {1, ..., p}, we have
$$L^{(\delta)}_{i,t}(\lambda^{(z)}_i) - L_{i,t}(\lambda_i) \le (1 + \kappa_1 \mu_{\min}^{-1})\, N(t - z)\, \varepsilon(z) + \delta\, N(t)\, \varepsilon'(0),$$
where μ_min is the lower bound for μ_i, κ_1 is the upper bound for $N_i(t) - N_i(t-1)$ from Assumption 2.1, and ε and ε′ are two tail functions that uniformly capture the decreasing tail property of all the f_{i,j}(t)s and all the df_{i,j}(t)/dts, respectively.

The first term in the bound characterizes the approximation error incurred when one truncates λ_i(t) with $\lambda^{(z)}_i(t)$. The second term describes the approximation error caused by the discretization. When z = ∞, $\lambda_i(t) = \lambda^{(z)}_i(t)$, and the approximation error is contributed solely by the discretization. Note that, in many cases, a small enough truncation error can be obtained by setting a relatively small z. For example, for $f_{i,j}(t) = \exp\{-3t\}\,\mathbf{1}\{t > 0\}$, setting z = 10 results in a truncation error of less than $10^{-13}$. Meanwhile, truncating λ_i(t) greatly simplifies the computation of its value. Hence, in our algorithm, we focus on $\lambda^{(z)}_i$ instead of λ_i.
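The truncated intensity in (6) only sums triggering contributions from events inside the window [t − z, t). A minimal sketch for a single dimension with an exponential triggering function; all numbers are illustrative:

```python
import math

def truncated_intensity(t, mu, arrivals_by_dim, f, z):
    # lambda_i^{(z)}(t) = mu_i + sum_j sum_{tau: tau < t, t - tau < z} f_{i,j}(t - tau)
    lam = mu
    for arrivals, f_ij in zip(arrivals_by_dim, f):
        for tau in arrivals:
            if tau < t and t - tau < z:
                lam += f_ij(t - tau)
    return lam

f = [lambda s: math.exp(-2.5 * s)]   # hypothetical f_{i,1}(s) = e^{-2.5 s}
arrivals = [[1.0, 2.0]]
lam = truncated_intensity(3.0, mu=0.05, arrivals_by_dim=arrivals, f=f, z=1.5)
# only the event at tau = 2.0 falls inside the window [t - z, t) = [1.5, 3.0)
```

Enlarging z brings in more past events, recovering the untruncated intensity in the limit z → ∞.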
In the following, we consider the regularized instantaneous loss function with Tikhonov regularization for the f_{i,j}(t)s and μ_i:
$$l_{i,k}(\lambda_i) := (t_k - t_{k-1})\, \lambda_i(t_k) - x_{i,k} \log \lambda_i(t_k) + \frac{\omega_i}{2} \mu_i^2 + \sum_{j=1}^{p} \frac{\zeta_{i,j}}{2} \|f_{i,j}\|_{\mathcal{H}}^2, \quad (7)$$
and aim at producing a sequence of estimates $\{\hat\lambda_i(t_k)\}_{k=1}^{M(t)}$ of λ_i(t) with minimal regret:
$$\sum_{k=1}^{M(t)} l_{i,k}(\hat\lambda_i(t_k)) - \min_{\mu_i \ge \mu_{\min},\, f_{i,j}(t) \ge 0}\ \sum_{k=1}^{M(t)} l_{i,k}(\lambda_i(t_k)). \quad (8)$$
Each regularized instantaneous loss function in (7) is jointly strongly convex with respect to the f_{i,j}s and μ_i. Combined with the representer theorem in [25], the minimizer of (8) is a linear combination of a finite set of kernels. In addition, by setting ζ_{i,j} = O(1), our algorithm achieves β-stability with $\beta = O((\zeta_{i,j} t)^{-1})$, which is typical for a learning algorithm in an RKHS (Theorem 22 of [8]).

3 Online Learning for MHPs

We introduce our NonParametric OnLine Estimation for MHP (NPOLE-MHP) in Algorithm 1. The most important components of the algorithm are (i) the computation of the gradients and (ii) the projections in lines 6 and 8.

Algorithm 1 NonParametric OnLine Estimation for MHP (NPOLE-MHP)
1: input: a sequence of step sizes $\{\eta_k\}_{k=1}^{\infty}$ and a set of regularization coefficients ζ_{i,j}, along with positive values of μ_min, z and σ. output: $\hat\mu^{(M(t))}$ and $\hat F^{(M(t))}$.
2: Initialize $\hat f^{(0)}_{i,j}$ and $\hat\mu^{(0)}_i$ for all i, j.
3: for k = 0, ..., M(t) − 1 do
4:   Observe the interval [t_k, t_{k+1}), and compute x_{i,k} for i ∈ {1, ..., p}.
5:   for i = 1, ..., p do
6:     Set $\hat\mu^{(k+1)}_i \leftarrow \max\big\{ \hat\mu^{(k)}_i - \eta_{k+1}\, \partial_{\mu_i} l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big),\ \mu_{\min} \big\}$.
7:     for j = 1, ..., p do
8:       Set $\hat f^{(k+\frac{1}{2})}_{i,j} \leftarrow \hat f^{(k)}_{i,j} - \eta_{k+1}\, \partial_{f_{i,j}} l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big)$, and $\hat f^{(k+1)}_{i,j} \leftarrow \Pi\big[\hat f^{(k+\frac{1}{2})}_{i,j}\big]$.
9:     end for
10:   end for
11: end for

For the partial derivative with respect to μ_i, recall the definition of l_{i,k} in (7) and of $\lambda^{(z)}_i$ in (6).
Since $\lambda^{(z)}_i$ is a linear function of μ_i, we have
$$\partial_{\mu_i} l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big) = (t_k - t_{k-1}) - x_{i,k}\, \big[\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big]^{-1} + \omega_i \hat\mu^{(k)}_i \triangleq \rho_k + \omega_i \hat\mu^{(k)}_i,$$
where ρ_k is the simplified notation for the first two terms. Upon performing gradient descent, the algorithm makes sure that $\hat\mu^{(k+1)}_i \ge \mu_{\min}$, which further ensures that $\hat\lambda^{(z)}_i(\hat\mu^{(k+1)}_i, \hat f^{(k+1)}_i) \ge \mu_{\min}$. For the update step of $\hat f^{(k)}_{i,j}(t)$, notice that $\lambda^{(z)}_i$ is also a linear function of f_{i,j}. Since $\partial_{f_{i,j}} f_{i,j}(x) = K(x, \cdot)$, which holds by the reproducing property of the kernel, we thus have
$$\partial_{f_{i,j}} l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big) = \rho_k \sum_{\tau_{j,n} \in [t_k - z,\, t_k)} K(t_k - \tau_{j,n}, \cdot) + \zeta_{i,j}\, \hat f^{(k)}_{i,j}(\cdot). \quad (9)$$
Once again, a projection Π[·] is necessary to ensure that the estimated triggering functions are positive.

3.1 Projection of the Triggering Functions

For any kernel, the projection step for a triggering function can be executed by solving a quadratic programming problem: minimize $\|f - \hat f^{(k+\frac{1}{2})}_{i,j}\|_{\mathcal{H}}^2$ subject to f ∈ H and f(t) ≥ 0. Ideally, the positivity constraint has to hold for every t > 0, but in order to simplify the computation, one can approximate the solution by relaxing the constraint so that f(t) ≥ 0 holds only on a finite set of ts within [0, z].

Semi-Definite Programming (SDP). When the reproducing kernel is polynomial, the problem is much simpler, as the projection step can be formulated as an SDP problem [26]:

Proposition 2. Let $S = \cup_{r \le k} \{t_r - \tau_{j,n} : t_r - z \le \tau_{j,n} < t_r\}$ be the set of the $t_r - \tau_{j,n}$s. Let $K(x, y) = (1 + xy)^{2d}$ and $K'(x, y) = (1 + xy)^{d}$ be two polynomial kernels with d ≥ 1. Furthermore, let K and G denote the Gramian matrices whose (i, j)-th elements correspond to K(s, s′) and K′(s, s′), respectively, with s and s′ being the i-th and j-th elements of S. Suppose that $a \in \mathbb{R}^{|S|}$ is the coefficient vector such that $\hat f^{(k+\frac{1}{2})}_{i,j}(\cdot) = \sum_{s \in S} a_s K(s, \cdot)$, and that the projection step returns $\hat f^{(k+1)}_{i,j}(\cdot) = \sum_{s \in S} b^*_s K(s, \cdot)$.
Then the coefficient vector b* can be obtained by
$$b^* = \operatorname*{argmin}_{b \in \mathbb{R}^{|S|}}\ -2 a^\top K b + b^\top K b, \quad \text{s.t.}\ G \cdot \mathrm{diag}(b) + \mathrm{diag}(b) \cdot G \succeq 0. \quad (10)$$

Non-convex approach. Alternatively, we can assume that $f_{i,j}(t) = g^2_{i,j}(t)$, where $g_{i,j}(t) \in \mathcal{H}$. By minimizing the loss with respect to g_{i,j}(t), one naturally guarantees that f_{i,j}(t) ≥ 0. This method was adopted in [14] for estimating the intensity function of non-homogeneous Poisson processes. While this approach breaks the convexity of the loss function, it works relatively well when the initialization is close to the global minimum. It is also interestingly related to a recent line of work on non-convex SDP [6], as well as to phase retrieval with Wirtinger flow [10]. Deriving regret bounds and convergence guarantees for this approach is a future direction implied by the results of this work.

4 Theoretical Properties

We now discuss the theoretical properties of NPOLE-MHP. We start with defining the regret.

Definition 4.1. The regret of Algorithm 1 at time t is given by
$$R^{(\delta)}_t\big(\lambda^{(z)}_i(\mu_i, f_i)\big) := \sum_{k=1}^{M(t)} \Big( l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big) - l_{i,k}\big(\lambda^{(z)}_i(\mu_i, f_i)\big) \Big),$$
where $\hat\mu^{(k)}_i$ and $\hat f^{(k)}_i$ denote the estimated base intensity and triggering functions, respectively.

Theorem 1. Suppose that the observations are generated from a p-dimensional MHP that satisfies Assumptions 2.1 and 2.2. Let $\zeta = \min_{i,j}\{\zeta_{i,j}, \omega_i\}$ and $\eta_k = 1/(\zeta k + b)$ for some positive constant b. Then
$$R^{(\delta)}_t\big(\lambda^{(z)}_i(\mu_i, f_i)\big) \le C_1 (1 + \log M(t)),$$
where $C_1 = 2(1 + p\kappa_z^2)\, \zeta^{-1} |\delta - \mu_{\min}^{-1}|^2$.

The regret bound of Theorem 1 resembles the regret bound of a typical online learning algorithm with a strongly convex loss function (see, for example, Theorem 3.3 of [17]). When δ, ζ and $\mu_{\min}^{-1}$ are fixed, C_1 = O(p), which is intuitive as one needs to update p functions at a time. Note that the regret in Definition 4.1 encodes the performance of Algorithm 1 by comparing its loss with the approximated loss. Below, we compare the loss of Algorithm 1 with the original loss in (4).

Corollary 1.
Under the same assumptions as Theorem 1, we have
$$\sum_{k=1}^{M(t)} \Big( l_{i,k}\big(\lambda^{(z)}_i(\hat\mu^{(k)}_i, \hat f^{(k)}_i)\big) - l_{i,k}\big(\lambda_i(\mu_i, f_i)\big) \Big) \le C_1 [1 + \log M(t)] + C_2 N(t), \quad (11)$$
where C_1 is defined in Theorem 1 and $C_2 = (1 + \kappa_1 \mu_{\min}^{-1})\, \varepsilon(z) + \delta\, \varepsilon'(0)$. Note that the term C_2 N(t) is due to the discretization and truncation steps, and it can be made arbitrarily small for any given t by setting δ small enough and z large enough.

Computational Complexity. Since the $\hat f_i$s can be estimated in parallel, we restrict our analysis to the case of a fixed i ∈ {1, ..., p} in a single iteration. For each iteration, the computational complexity comes from evaluating the intensity function and from the projection. Since the number of arrivals within the interval [t_k − z, t_k) is bounded by pκ_z and κ_z = O(1), evaluating the intensity costs O(p²) operations. For the projection in each step, one can truncate the number of kernels used to represent f_{i,j}(t) to O(1) with controllable error (Proposition 1 of [19]), and therefore the computation cost is O(1). Hence, the per-iteration computation cost of NPOLE-MHP is O(p²). By comparison, parametric online algorithms (DMD and OGD of [15]) also require O(p²) operations per iteration, while the batch learning algorithms (MLE-SGLP and MLE of [27]) require O(p²t³) operations.

5 Numerical Experiments

We evaluate the performance of NPOLE-MHP on both synthetic and real data, from multiple aspects: (i) visual assessment of the goodness-of-fit compared to the ground truth; (ii) the "average L1 error", defined as the average of $\sum_{i=1}^{p} \sum_{j=1}^{p} \|f_{i,j} - \hat f_{i,j}\|_{L^1[0,z]}$ over multiple trials; (iii) scalability over both the dimension p and the time horizon T. For benchmarks, we compare NPOLE-MHP's performance to that of online parametric algorithms (DMD and OGD of [15]) and nonparametric batch learning algorithms (MLE-SGLP and MLE of [27]).

5.1 Synthetic Data

Consider a 5-dimensional MHP with μ_i = 0.05 for all dimensions.
We set the matrix of triggering functions as
$$F = \begin{bmatrix} e^{-2.5t} & 0 & 0 & e^{-10(t-1)^2} & 0 \\ 2^{-5t} & (1 + \cos(\pi t))\, e^{-t/2} & e^{-5t} & 0 & 0 \\ 0 & 2e^{-3t} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.6\, e^{-3t^2} + 0.4\, e^{-3(t-1)^2} & e^{-4t} \\ 0 & 0 & t\, e^{-5(t-1)^2} & 0 & e^{-3t} \end{bmatrix}.$$

[Figure 1: panels showing the estimates of f_{2,2}(t), f_{3,2}(t), f_{1,4}(t), etc., each comparing the true f_{i,j}(t) with NPOLE-MHP, DMD, OGD, MLE-SGLP and MLE.] Figure 1: Performances of different algorithms for estimating F. The complete set of results can be found in Appendix F. For each subplot, the horizontal axis covers [0, z] and the vertical axis covers [0, 1]. The performances are similar between DMD and OGD, and between MLE and MLE-SGLP.

The design of F allows us to test NPOLE-MHP's ability to detect (i) exponential triggering functions with various decay rates; (ii) zero functions; (iii) functions with delayed peaks and tail behaviors different from an exponential function.

Goodness-of-fit. We run NPOLE-MHP over a set of data with T = 10⁵ and around 4 × 10⁴ events in each dimension. The parameters are chosen by grid search over a small portion of the data, and the parameters of the benchmark algorithms are fine-tuned (see Appendix F for details). In particular, we set the discretization level δ = 0.05, the window size z = 3, the step size $\eta_k = (k\delta/20 + 100)^{-1}$, and the regularization coefficient $\zeta_{i,j} \equiv \zeta = 10^{-8}$. The performances of NPOLE-MHP and the benchmarks are shown in Figure 1. We see that NPOLE-MHP captures the shape of the functions much better than the DMD and OGD algorithms, whose assumed forms of the triggering functions are mismatched. This is especially visible for f_{1,4}(t) and f_{2,2}(t). In fact, our algorithm scores a performance similar to that of the batch learning MLE estimator, which is optimal for any given set of data. We next plot the average loss per iteration for this dataset in Figure 2. On the left-hand side of the plot, the loss is high due to the initialization.
However, the effect of the initialization quickly diminishes as the number of events increases.

Run time comparison. The simulation of the DMD and OGD algorithms took 2 minutes combined on a Macintosh with two 6-core Intel Xeon processors at 2.4 GHz, while NPOLE-MHP took 3 minutes. The batch learning algorithms MLE-SGLP and MLE in [27] each took about 1.5 hours. Therefore, our algorithm achieves performance similar to the batch learning algorithms with a run time close to that of the parametric online learning algorithms.

Effects of the hyperparameters: δ, ζ_{i,j}, and η_k. We investigate the sensitivity of NPOLE-MHP with respect to the hyperparameters, measuring the "average L1 error" defined at the beginning of this section. We independently generate 100 sets of data with the same parameters, and a smaller T = 10⁴ for faster data generation. The results are shown in Table 1. For NPOLE-MHP, we fix η_k = 1/(k/2000 + 10). MLE and MLE-SGLP score around 1.949 with 5/5 inner/outer rounds of iterations. NPOLE-MHP's performance is robust when the regularization coefficient and the discretization level are sufficiently small. It surpasses MLE and MLE-SGLP on large datasets, in which case the iterations of MLE and MLE-SGLP are limited due to computational considerations. As ζ increases, the error first decreases before rising drastically, a phenomenon caused by the mismatch between the loss functions. For the step size, the error varies under different choices of η_k, which can be selected via grid search on a small portion of the data, as in most other online algorithms.

5.2 Real Data: Inferring Impact Between News Agencies with Memetracker Data

We test the performance of NPOLE-MHP on the memetracker data [21], which collects from the internet a set of popular phrases, including their content, the times they were posted, and the URL addresses of the articles that included them.
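The "average L1 error" used throughout this section sums $\|f_{i,j} - \hat f_{i,j}\|_{L^1[0,z]}$ over all pairs and can be approximated on a grid. A minimal sketch for a single pair; the estimate f̂ below is a made-up illustration, not an output of the algorithm:

```python
import math

def l1_error(pairs, z, n_grid=10000):
    # Riemann approximation of sum_{i,j} int_0^z |f_{i,j}(t) - fhat_{i,j}(t)| dt
    h = z / n_grid
    return sum(
        sum(abs(f(k * h) - fhat(k * h)) for k in range(n_grid)) * h
        for f, fhat in pairs
    )

f_true = lambda t: math.exp(-2.5 * t)
f_hat = lambda t: math.exp(-2.0 * t)   # hypothetical estimate
err = l1_error([(f_true, f_hat)], z=3.0)
# closed form of int_0^3 (e^{-2t} - e^{-2.5t}) dt is approximately 0.098982
```

Averaging this quantity over independently generated datasets gives the numbers reported in Table 1.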
We study the relationship between different news agencies, modeling the data with a p-dimensional MHP where each dimension corresponds to a news website. Unlike [15], which conducted a similar experiment where all the data was used, we focus on only the 20 websites that are most active, using 18 days of data. We plot the cumulative losses in Figure 3, using a window size of 3 hours, an update interval δ = 0.2 seconds, and a step size $\eta_k = 1/(k\zeta + 800)$ with ζ = 10⁻¹⁰ for NPOLE-MHP. For DMD and OGD, we set $\eta_k = 5/\sqrt{T/\delta}$. The results show that NPOLE-MHP accumulates a smaller loss per step compared to OGD and DMD.

Table 1: Effect of the hyperparameters ζ and δ, measured by the "average L1 error".

           log10 ζ:   −8     −6     −4     −2      0
  δ = 0.01           1.83   1.83   1.84   4.15   4.64
  δ = 0.05           1.86   1.86   1.86   3.10   4.64
  δ = 0.1            1.92   1.92   1.88   2.73   4.64
  δ = 0.5            4.80   4.80   4.64   2.19   4.62
  δ = 1              5.73   5.73   5.58   2.38   4.59

Table 2: Average CPU time for estimating one triggering function (seconds).

      Horizon T (days):   1.8    3.6    5.4
  Dimension p = 20        3.9    9.1   15.3
  p = 40                  4.6   10.4   17.0
  p = 60                  4.6   10.2   16.7
  p = 80                  4.5   10.0   16.4
  p = 100                 4.5    9.7   15.9

[Figure 2: average loss per iteration over the time axis; curves for δ = 0.05 with the true f_{i,j}(t)s, NPOLE-MHP with δ = 0.05, 0.10 and 0.50, and DMD with δ = 0.05.] Figure 2: Effect of discretization in NPOLE-MHP.

[Figure 3: cumulative loss over the time axis for NPOLE-MHP, DMD and OGD.] Figure 3: Cumulative loss on memetracker data of 20 dimensions.

Scalability and generalization error. Finally, we evaluate the scalability of NPOLE-MHP using the average CPU time for estimating one triggering function. The results in Table 2 show that the computation cost of NPOLE-MHP scales almost linearly with the dimension and the data size. When scaling the data to 100 dimensions and 2 × 10⁵ events, NPOLE-MHP scores an average 0.01 loss per iteration on both training and test data, while OGD and DMD score 0.005 on training data and 0.14 on test data.
This shows a much better generalization performance of NPOLE-MHP.

6 Conclusion

We developed a nonparametric method for learning the triggering functions of a multivariate Hawkes process (MHP) given time series observations. To formulate the instantaneous loss function, we adopted the method of discretizing the time axis into small intervals of length at most δ, and we derived the corresponding upper bound on the approximation error. From this point, we proposed an online learning algorithm, NPOLE-MHP, based on the framework of online kernel learning, which exploits the interarrival time statistics under the MHP setup. Theoretically, we derived the regret bound for NPOLE-MHP, which is O(log T) when the time horizon T is known a priori, and we showed that the per-iteration cost of NPOLE-MHP is O(p²). Numerically, we compared NPOLE-MHP's performance with parametric online learning algorithms and nonparametric batch learning algorithms. Results on both synthetic and real data showed that we are able to achieve performance similar to that of the nonparametric batch learning algorithms with a run time comparable to that of the parametric online learning algorithms.

References
[1] Emmanuel Bacry, Khalil Dayri, and Jean-François Muzy. Non-parametric kernel estimation for symmetric Hawkes processes. Application to high frequency financial data. The European Physical Journal B - Condensed Matter and Complex Systems, 85(5):1-12, 2012.
[2] Emmanuel Bacry, Stéphane Gaïffas, and Jean-François Muzy. A generalization error bound for sparse and low-rank multivariate Hawkes processes, 2015.
[3] Emmanuel Bacry, Iacopo Mastromatteo, and Jean-François Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
[4] Emmanuel Bacry and Jean-François Muzy. First- and second-order statistics characterization of Hawkes processes and non-parametric estimation. IEEE Transactions on Information Theory, 62(4):2184-2202, 2016.
[5] J. Andrew Bagnell and Amir-massoud Farahmand. Learning positive functions in a Hilbert space, 2015.
[6] Srinadh Bhojanapalli, Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semi-definite optimization. Conference on Learning Theory, pages 530-582, 2016.
[7] Jacek Bochnak, Michel Coste, and Marie-Françoise Roy. Real Algebraic Geometry, volume 36. Springer Science & Business Media, 2013.
[8] Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499-526, 2002.
[9] Pierre Brémaud and Laurent Massoulié. Stability of nonlinear Hawkes processes. The Annals of Probability, pages 1563-1588, 1996.
[10] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985-2007, 2015.
[11] Michael Eichler, Rainer Dahlhaus, and Johannes Dueck. Graphical modeling for multivariate Hawkes processes with nonparametric link functions. Journal of Time Series Analysis, 38(2):225-242, 2017.
[12] Jalal Etesami and Negar Kiyavash. Directed information graphs: A generalization of linear dynamical graphs. In American Control Conference (ACC), 2014, pages 2563-2568. IEEE, 2014.
[13] Jalal Etesami, Negar Kiyavash, Kun Zhang, and Kushagra Singhal. Learning network of multivariate Hawkes processes: A time series approach. Conference on Uncertainty in Artificial Intelligence, 2016.
[14] Seth Flaxman, Yee Whye Teh, and Dino Sejdinovic. Poisson intensity estimation with reproducing kernels. International Conference on Artificial Intelligence and Statistics, 2017.
[15] Eric C. Hall and Rebecca M. Willett. Tracking dynamic point processes on networks. IEEE Transactions on Information Theory, 62(7):4327-4346, 2016.
[16] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971.
[17] Elad Hazan et al. Introduction to online convex optimization.
Foundations and Trends in Optimization, 2(3-4):157-325, 2016.
[18] Sanggyun Kim, Christopher J. Quinn, Negar Kiyavash, and Todd P. Coleman. Dynamic and succinct statistical analysis of neuroscience data. Proceedings of the IEEE, 102(5):683-698, 2014.
[19] Jyrki Kivinen, Alexander J. Smola, and Robert C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165-2176, 2004.
[20] Michael Krumin, Inna Reutsky, and Shy Shoham. Correlation-based analysis and generation of multiple spike trains using Hawkes models with an exogenous input. Frontiers in Computational Neuroscience, 4, 2010.
[21] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. International Conference on Knowledge Discovery and Data Mining, pages 497-506, 2009.
[22] Thomas Josef Liniger. Multivariate Hawkes Processes. PhD thesis, Eidgenössische Technische Hochschule ETH Zürich, 2009.
[23] Tohru Ozaki. Maximum likelihood estimation of Hawkes' self-exciting point processes. Annals of the Institute of Statistical Mathematics, 31(1):145-155, 1979.
[24] Patricia Reynaud-Bouret, Sophie Schbath, et al. Adaptive estimation for Hawkes processes; application to genome analysis. The Annals of Statistics, 38(5):2781-2822, 2010.
[25] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. International Conference on Computational Learning Theory, pages 416-426, 2001.
[26] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, 1996.
[27] Hongteng Xu, Mehrdad Farajtabar, and Hongyuan Zha. Learning Granger causality for Hawkes processes. International Conference on Machine Learning, 48:1717-1726, 2016.
[28] Shuang-Hong Yang and Hongyuan Zha. Mixture of mutually exciting processes for viral diffusion. International Conference on Machine Learning, 28:1-9, 2013.
[29] Ke Zhou, Hongyuan Zha, and Le Song. Learning triggering kernels for multi-dimensional Hawkes processes.
International Conference on Machine Learning, 28:1301-1309, 2013.
An Empirical Study on The Properties of Random Bases for Kernel Methods

Maximilian Alber, Pieter-Jan Kindermans, Kristof T. Schütt (Technische Universität Berlin, maximilian.alber@tu-berlin.de); Klaus-Robert Müller (Technische Universität Berlin, Korea University, Max Planck Institut für Informatik); Fei Sha (University of Southern California, feisha@usc.edu)

Abstract

Kernel machines as well as neural networks possess universal function approximation properties. Nevertheless, in practice their ways of choosing the appropriate function class differ. Specifically, neural networks learn a representation by adapting their basis functions to the data and the task at hand, while kernel methods typically use a basis that is not adapted during training. In this work, we contrast random features of approximated kernel machines with learned features of neural networks. Our analysis reveals how these random and adaptive basis functions affect the quality of learning. Furthermore, we present basis adaptation schemes that allow for a more compact representation, while retaining the generalization properties of kernel machines.

1 Introduction

Recent work on scaling kernel methods using random basis functions has shown that their performance on challenging tasks such as speech recognition can closely match that of deep neural networks [22, 6, 35]. However, research has also highlighted two disadvantages of random basis functions. First, a large number of basis functions, i.e., features, is needed to obtain useful representations of the data. In a recent empirical study [22], a kernel machine matching the performance of a deep neural network required a much larger number of parameters. Second, a finite number of random basis functions leads to a kernel approximation error that is data-specific [30, 32, 36]. Deep neural networks learn representations that are adapted to the data using end-to-end training.
Kernel methods, on the other hand, can only achieve this by selecting the optimal kernels to represent the data, a challenge that persistently remains. Furthermore, there are interesting cases in which learning with deep architectures is advantageous, as they require exponentially fewer examples [25]. Yet arguably both paradigms have the same modeling power as the number of training examples goes to infinity. Moreover, empirical studies suggest that for real-world applications the advantage of one method over the other is somewhat limited [22, 6, 35, 37].

Understanding the differences between approximated kernel methods and neural networks is crucial to using them optimally in practice. In particular, there are two aspects that require investigation: (1) How much performance is lost due to the kernel approximation error of the random basis? (2) What is the possible gain of adapting the features to the task at hand? Since these effects are expected to be data-dependent, we argue that an empirical study is needed to complement the existing theoretical contributions [30, 36, 20, 32, 8].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this work, we investigate these issues by making use of the fact that approximated kernel methods can be cast as shallow, one-hidden-layer neural networks. The bottom layers of these networks are random basis functions that are generated in a data-agnostic manner and are not adapted during training [30, 31, 20, 8]. This stands in stark contrast to even the conventional single-layer neural network, where the bottom-layer parameters are optimized with respect to the data distribution and the loss function. Specifically, we designed our experiments to distinguish four cases:

• Random Basis (RB): we use the (approximated) kernel machine in its traditional formulation [30, 8].
• Unsupervised Adapted Basis (UAB): we adapt the basis functions to better approximate the true kernel function.
• Supervised Adapted Basis (SAB): we adapt the basis functions using kernel target alignment [5] to incorporate label information.
• Discriminatively Adapted Basis (DAB): we adapt the basis functions with a discriminative loss function, i.e., we optimize jointly over the basis and classifier parameters. This corresponds to conventional neural network optimization.

These experiments allow us to isolate the effect of the randomness of the basis and contrast it with data- and task-dependent adaptations. We found that adapted bases consistently outperform random ones: an unsupervised basis adaptation leads to a better kernel approximation than a random approximation, and, when considering the task at hand, a supervised kernel basis leads to an even more compact model while showing superior performance compared to the task-agnostic bases. Remarkably, this performance is retained after transferring the basis to another task, which makes this adaptation scheme a viable alternative to a discriminatively adapted basis.

The remainder is structured as follows. After a presentation of related work, we explain approximated kernel machines in the context of neural networks and describe our propositions in Sec. 3. In Sec. 4 we quantify the benefit of adapted basis functions in contrast to their random counterparts empirically. Finally, we conclude in Sec. 5.

2 Related work

To overcome the limitations of kernel learning, several approximation methods have been proposed. In addition to Nyström methods [34, 7], random Fourier features [30, 31] have gained a lot of attention. Random features and (faster) enhancements [20, 9, 39, 8] were successfully applied in many applications [6, 22, 14, 35], and were theoretically analyzed [36, 32]. They inspired scalable approaches to learning kernels with Gaussian processes [35, 38, 23]. Notably, [2, 24] explore kernels in the context of neural networks, and, in the field of RBF networks, basis functions were adapted to the data by [26, 27].
Our work contributes in several ways: we view kernel machines from a neural network perspective and delineate the influence of different adaptation schemes. None of the above does this. The related work [36] compares the data-dependent Nyström approximation to random features. While our approach generalizes to structured matrices, i.e., fast kernel machines, Nyström does not. Most similar to our work is [37], which interprets the Fastfood kernel approximation as a neural network, with the aim of reducing the number of parameters in a convolutional neural network.

3 Methods

In this section we will detail the relation between kernel approximations with random basis functions and neural networks. Then, we discuss the different approaches to adapt the basis in order to perform our analysis.

3.1 Casting kernel approximations as shallow, random neural networks

Kernels are pairwise similarity functions $k(x, x') : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ between two data points $x, x' \in \mathbb{R}^d$. They are equivalent to inner products in an intermediate, potentially infinite-dimensional feature space produced by a function $\phi : \mathbb{R}^d \to \mathbb{R}^D$:
$$k(x, x') = \phi(x)^\top \phi(x'). \quad (1)$$
Non-linear kernel machines typically avoid using φ explicitly by applying the kernel trick. They work in the dual space with the (Gram) kernel matrix. This imposes a quadratic dependence on the number of samples n and prevents their application in large-scale settings. Several methods have been proposed to overcome this limitation by approximating a kernel machine with the following functional form:
$$f(x) = W^\top \hat\phi(x) + b, \quad (2)$$
where $\hat\phi(x)$ is the approximated kernel feature map. Now, we will explain how to obtain this approximation for the Gaussian and the ArcCos kernel [2]. We chose the Gaussian kernel because it is the default choice for many tasks. The ArcCos kernel, on the other hand, yields an approximation consisting of rectified, piece-wise linear units (ReLU) as used in deep learning [28, 11, 19].
Gaussian kernel. To obtain the approximation of the Gaussian kernel, we use the following property [30]. Given a smooth, shift-invariant kernel $k(x - x') = k(z)$ with Fourier transform p(w), we have
$$k(z) = \int_{\mathbb{R}^d} p(w)\, e^{j w^\top z}\, \mathrm{d}w. \quad (3)$$
Using the Gaussian distribution $p(w) = N(0, \sigma^{-1})$, we obtain the Gaussian kernel $k(z) = \exp\big(-\frac{\|z\|_2^2}{2\sigma^2}\big)$. Thus, the kernel value k(x, x′) can be approximated by the inner product between $\hat\phi(x)$ and $\hat\phi(x')$, where $\hat\phi$ is defined as
$$\hat\phi(x) = \sqrt{\tfrac{1}{D}}\, [\sin(W_B^\top x), \cos(W_B^\top x)] \quad (4)$$
and $W_B \in \mathbb{R}^{d \times D/2}$ is a random matrix with its entries drawn from $N(0, \sigma^{-1})$. The resulting features are then used to approximate the kernel machine with the implicitly infinite-dimensional feature space:
$$k(x, x') \approx \hat\phi(x)^\top \hat\phi(x'). \quad (5)$$

ArcCos kernel. To draw a better connection to state-of-the-art neural networks, we use the ArcCos kernel [2]
$$k(x, x') = \frac{1}{\pi} \|x\| \|x'\|\, J(\theta) \quad \text{with} \quad J(\theta) = \sin\theta + (\pi - \theta)\cos\theta$$
and $\theta = \cos^{-1}\big(\frac{x \cdot x'}{\|x\| \|x'\|}\big)$, the angle between x and x′. The approximation is not based on a Fourier transform, but is given by
$$\hat\phi(x) = \sqrt{\tfrac{1}{D}}\, \max(0, W_B^\top x) \quad (6)$$
with $W_B \in \mathbb{R}^{d \times D}$ being a random Gaussian matrix. This makes the approximated feature map of the ArcCos kernel closely related to ReLUs in deep neural networks.

Neural network interpretation. The approximated kernel features $\hat\phi(x)$ can be interpreted as the output of the hidden layer of a shallow neural network. To obtain the neural network interpretation, we rewrite Eq. 2 as
$$f(x) = W^\top h(W_B^\top x) + b, \quad (7)$$
with $W \in \mathbb{R}^{D \times c}$, where c is the number of classes, and $b \in \mathbb{R}^c$. Here, the non-linearity h corresponds to the obtained kernel approximation map. Substituting $z = W_B^\top x$ in Eqs. 4 and 6 yields $h(z) = \sqrt{1/D}\, [\sin(z), \cos(z)]^\top$ for the Gaussian kernel and $h(z) = \sqrt{1/D}\, \max(0, z)$ for the ArcCos kernel.

3.2 Adapting random kernel approximations

Having introduced the neural network interpretation of random features, we can now state the key difference between both methods: which parameters are trained.
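The random Fourier construction of Eqs. 3-5 can be sketched numerically. Note that in the sketch below the D/2 sine/cosine pairs are scaled by √(2/D), rather than the √(1/D) of Eq. 4, so that the inner product is an unbiased Monte Carlo estimate of the kernel; the dimensions, σ and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 20000, 2.0

# D/2 random frequencies w ~ N(0, sigma^{-2} I), the Fourier dual of the kernel
WB = rng.standard_normal((d, D // 2)) / sigma

def phi_hat(x):
    z = WB.T @ x
    # sqrt(2/D) normalizes the D/2 sin/cos pairs into an unbiased estimate
    return np.sqrt(2.0 / D) * np.concatenate([np.sin(z), np.cos(z)])

x, y = rng.standard_normal(d), rng.standard_normal(d)
k_true = np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))
k_approx = float(phi_hat(x) @ phi_hat(y))
# k_approx converges to k_true as D grows
```

By construction, φ̂(x)·φ̂(x) = 1 exactly, since sin² + cos² = 1 for every frequency pair.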
For the neural network, one optimizes the parameters in the bottom layer and those in the upper layers jointly. For kernel machines, however, W_B is fixed, i.e., the features are not adapted to the data. Hyperparameters (such as σ, defining the bandwidth of the Gaussian kernel) are selected with cross-validation or heuristics [12, 6, 8]. Consequently, the basis is not directly adapted to the data, loss, and task at hand.

In our experiments, we consider the classification setting, where for the given data $X \in \mathbb{R}^{n \times d}$, containing n samples with d input dimensions, one seeks to predict the target labels $Y \in [0, 1]^{n \times c}$ with a one-hot encoding for c classes. We use accuracy as the performance measure and the multinomial-logistic loss as its surrogate. All our models have the same generic form shown in Eq. 7. However, we use different types of basis functions to analyze varying degrees of adaptation. In particular, we study whether data-dependent basis functions improve over data-agnostic basis functions. On top of that, we examine how well label-informed, and thus task-adapted, basis functions can perform in contrast to the data-agnostic basis. Finally, we use end-to-end learning of all parameters to connect to neural networks.

Random Basis - RB: For data-agnostic kernel approximation, we use the current state of the art of random features. Orthogonal random features [8, ORF] improve the convergence properties of the Gaussian kernel approximation over random Fourier features [30, 31]. Practically, we substitute W_B with $\frac{1}{\sigma} G_B$, sample $G_B \in \mathbb{R}^{d \times D/2}$ from N(0, 1), and orthogonalize the matrix as given in [8] to approximate the Gaussian kernel. The ArcCos kernel is applied as described above. We also use these features as the initialization of the following adaptive approaches. When adapting the Gaussian kernel, we optimize G_B while keeping the scale 1/σ fixed.
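A sketch of the orthogonalization step, assuming the construction of [8] in which a square Gaussian matrix is replaced by a uniformly random orthogonal matrix with chi-distributed row scales (for D > d the construction is repeated in d × d blocks, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

G = rng.standard_normal((d, d))
Q, _ = np.linalg.qr(G)                 # uniformly random orthogonal matrix
S = np.sqrt(rng.chisquare(d, size=d))  # chi-distributed row norms, so each row
W_ort = np.diag(S) @ Q                 # marginally matches a Gaussian row

# the rows remain mutually orthogonal after the row-wise rescaling
gram = W_ort @ W_ort.T
```

The Gram matrix of W_ort is diagonal, which is exactly the structural property that reduces the variance of the kernel estimate relative to i.i.d. frequencies.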
Unsupervised Adapted Basis - UAB: While the introduced random bases converge towards the true kernel with an increasing number of features, it is to be expected that an optimized approximation will yield a more compact representation. We address this by optimizing the sampled parameters W_B w.r.t. the kernel approximation error (KAE):

L̂(x, x′) = ½ (k(x, x′) − φ̂(x)^T φ̂(x′))².   (8)

This objective is kernel- and data-dependent, but agnostic to the classification task.

Supervised Adapted Basis - SAB: As an intermediate step between task-agnostic kernel approximations and end-to-end learning, we propose to use kernel-target alignment [5] to inject label information. This is achieved by a target kernel function k_Y with k_Y(x, x′) = +1 if x and x′ belong to the same class and k_Y(x, x′) = 0 otherwise. We maximize the alignment between the approximated kernel k and the target kernel k_Y for a given data set X:

Â(X, k, k_Y) = ⟨K, K_Y⟩ / √(⟨K, K⟩ ⟨K_Y, K_Y⟩)   (9)

with ⟨K_a, K_b⟩ = Σ_{i,j=1}^n k_a(x_i, x_j) k_b(x_i, x_j).

Discriminatively Adapted Basis - DAB: The previous approach uses label information, but is oblivious to the final classifier. In contrast, a discriminatively adapted basis is trained jointly with the classifier to minimize the classification objective, i.e., W_B, W, and b are optimized at the same time. This is the end-to-end optimization performed in neural networks.

4 Experiments In the following, we present the empirical results of our study, starting with a description of the experimental setup. Then, we present the results of using data-dependent and task-dependent basis approximations. Finally, we bridge our analysis to deep learning and fast kernel machines.
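The two surrogate objectives behind UAB and SAB (Eqs. 8 and 9 in Sec. 3.2) are straightforward to write down; a minimal sketch:

```python
import numpy as np

def kae_loss(k_true, phi_x, phi_xp):
    """Eq. 8, averaged over a batch of pairs:
    L(x, x') = 0.5 * (k(x, x') - phi(x)^T phi(x'))^2."""
    return 0.5 * np.mean((k_true - np.sum(phi_x * phi_xp, axis=1)) ** 2)

def target_kernel(y):
    """Target kernel k_Y: +1 if x and x' share a label, 0 otherwise."""
    return (y[:, None] == y[None, :]).astype(float)

def alignment(K, K_Y):
    """Eq. 9: <K, K_Y> / sqrt(<K, K> <K_Y, K_Y>) with Frobenius inner products."""
    return np.sum(K * K_Y) / np.sqrt(np.sum(K * K) * np.sum(K_Y * K_Y))
```

UAB minimizes `kae_loss` with respect to W_B, while SAB maximizes `alignment` between the approximated kernel matrix and `target_kernel(y)`.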
[Figure 1 consists of panels for the data sets Gisette, MNIST, CoverType, and CIFAR10, each showing, for the Gaussian and ArcCos kernels, the KAE and the classification accuracy as a function of the number of features (10 to 10,000) for the bases RB, UAB, SAB, and DAB.]

Figure 1: Adapting bases. The plots show the relationship between the number of features (x-axis), the KAE in logarithmic spacing (left, dashed lines), and the classification accuracy (right, solid lines). Typically, the KAE decreases with a higher number of features, while the accuracy increases. The KAE for SAB and DAB (orange and red dotted lines) hints at how much the adaptation deviates from its initialization (blue dashed line). Best viewed in digital and color.

4.1 Experimental setup We used the following seven data sets for our study: Gisette [13], MNIST [21], CoverType [1], CIFAR10 features from [4], Adult [18], Letter [10], USPS [15]. The results for the last three can be found in the supplement. We center the data sets and scale them feature-wise into the range [−1, +1].
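The preprocessing step can be sketched as follows. Estimating the centering and scaling statistics on the training set only is our assumption; the text does not state this detail.

```python
import numpy as np

def preprocess(X_train, X_test):
    """Center the data and scale it feature-wise into [-1, +1] (Sec. 4.1).
    Assumption: mean and scale are estimated on the training set only."""
    mu = X_train.mean(axis=0)
    Xc_train, Xc_test = X_train - mu, X_test - mu
    scale = np.abs(Xc_train).max(axis=0)
    scale[scale == 0] = 1.0  # guard against constant features
    return Xc_train / scale, Xc_test / scale
```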
We use validation sets of size 1,000 for Gisette, 10,000 for MNIST, 50,000 for CoverType, 5,000 for CIFAR10, 3,560 for Adult, 4,500 for Letter, and 1,290 for USPS. We repeat every test three times and report the mean over these trials.

Optimization We train all models with mini-batch stochastic gradient descent. The batch size is 64, and as the update rule we use Adam [17]. We use early stopping, where we stop when the respective loss on the validation set does not decrease for ten epochs. We use Keras [3], Scikit-learn [29], NumPy [33], and SciPy [16]. We set the hyper-parameter σ for the Gaussian kernel heuristically according to [39, 8]. The UAB and SAB learning problems scale quadratically in the number of samples n. Therefore, to reduce memory requirements, we optimize by sampling mini-batches from the kernel matrix. A batch for UAB consists of 64 sample pairs x and x′ as input and the respective value of the kernel function k(x, x′) as target value. Similarly, for SAB, we sample 64 data points as input and generate the target kernel matrix as target value. For each training epoch we randomly generate 10,000 training and 1,000 validation batches, and eventually evaluate the performance on 1,000 unseen, random batches.

4.2 Analysis Tab. 1 gives an overview of the best performances achieved by each basis on each data set.

               Gaussian                     ArcCos
Dataset     RB    UAB   SAB   DAB      RB    UAB   SAB   DAB
Gisette    98.1  97.9  98.1  97.9     97.7  97.8  97.8  97.8
MNIST      98.2  98.2  98.3  98.3     97.2  97.4  97.7  97.9
CoverType  91.9  91.9  90.4  95.2     83.6  83.1  88.7  92.9
CIFAR10    76.4  76.8  79.0  77.3     74.9  76.3  79.4  75.3

Table 1: Best accuracy in % for different bases.

Data-adapted kernel approximations First, we evaluate the effect of choosing a data-dependent basis (UAB) over a random basis (RB). In Fig. 1, we show the kernel approximation error (KAE) and the classification accuracy for a range from 10 to 30,000 features (in logarithmic scale).
The first striking observation is that a data-dependent basis can approximate the kernel equally well with up to two orders of magnitude fewer features compared to the random baseline. This holds for both the Gaussian and the ArcCos kernel. However, the advantage diminishes as the number of features increases. When we relate the kernel approximation error to the accuracy, we observe that initially a decrease in KAE correlates well with an increase in accuracy. However, once the kernel is approximated sufficiently well, using more features does not impact accuracy anymore. We conclude that the choice between a random or a data-dependent basis strongly depends on the application. When a short training procedure is required, optimizing the basis could be too costly. On the other hand, if the focus lies on fast inference, we argue for optimizing the basis to obtain a compact representation. In settings with restricted resources, e.g., mobile devices, this can be a key advantage.

Task-adapted kernels A key difference between kernel methods and neural networks originates from the training procedure. In kernel methods, the feature representation is fixed while the classifier is optimized. In contrast, deep learning relies on end-to-end training, such that the feature representation is tightly coupled to the classifier. Intuitively, this allows the representation to be tailor-made for the task at hand. Therefore, one would expect that this allows for an even more compact representation than the previously examined data-adapted basis. In Sec. 3, we proposed a task-adapted kernel (SAB). Fig. 1 shows that this approach is comparable in terms of classification accuracy to the discriminatively trained basis (DAB). Only for the CoverType data set does SAB perform significantly worse, due to limited model capacity, which we discuss below. Both task-adapted bases improve significantly in accuracy compared to the random and data-adapted kernel approximations.
Transfer learning The beauty of kernel methods is, however, that a kernel function can be used across a wide range of tasks and consistently result in good performance. Therefore, in the next experiment, we investigate whether the resulting kernel retains this generalization capability when it is task-adapted. To investigate the influence of task-dependent information, we randomly separate the classes of MNIST into two distinct subsets. The first task is to classify five randomly sampled classes and their respective data points, while the second task is to do the same with the remaining classes. We train the previously presented model variants on task 1 and transfer their bases to task 2, where we only learn the classifier. The experiment is repeated with five different splits and the mean accuracy is reported.

[Figure 2 shows accuracy over the number of features for task 1 (left) and the transferred task 2 (right), for the bases RB, UAB, SAB, and DAB.]

Figure 2: Transfer learning. We train to discriminate a random subset of 5 classes on the MNIST data set (left) and then transfer the basis function to a new task (right), i.e., train with the fixed basis from task 1 to classify between the remaining classes.

Fig. 2 shows that on the transfer task, the random and the data-adapted bases RB and UAB approximately retain the accuracy achieved on task 1. The performance of the end-to-end trained basis DAB drops significantly; however, it still yields better performance than the default random basis. Surprisingly, the supervised basis SAB using kernel-target alignment retains its performance and achieves the highest accuracy on task 2. This shows that using label information can indeed be
exploited in order to improve the efficiency and performance of kernel approximations without having to sacrifice generalization. That is, a target-driven kernel (SAB) can be an efficient and still general alternative to the universal Gaussian kernel.

[Figure 3 plots classification accuracy over the number of features on MNIST and CoverType: the first part compares the bases RB, UAB, SAB, and DAB for the ArcCos2 and ArcCos3 kernels; the second part compares the ArcCos, ArcCos2, and ArcCos3 kernels for each basis.]

Figure 3: Deep kernel machines. The plots show the classification performance of the ArcCos kernels with respect to the kernel (first part) and with respect to the number of layers (second part). Best viewed in digital and color.

Deep kernel machines We extend our analysis and draw a link to deep learning by adding two deep kernels [2]. As outlined in the aforementioned paper, stacking the Gaussian kernel is not useful; instead, we use ArcCos kernels, which are related to deep learning as described below. Recall the ArcCos kernel from Sec. 3.1 as k1(x, x′).
Then the kernels ArcCos2 and ArcCos3 are defined by the inductive step

k_{i+1}(x, x′) = (1/π) [k_i(x, x) k_i(x′, x′)]^{1/2} J(θ_i), with θ_i = cos^{−1}( k_i(x, x′) [k_i(x, x) k_i(x′, x′)]^{−1/2} ).

Similarly, the feature map of the ArcCos kernel is approximated by a one-layer neural network with the ReLU activation function and a random weight matrix W_B,

φ̂_ArcCos(x) = φ̂_B(x) = √(1/D) max(0, W_B^T x),   (10)

and the feature maps of the ArcCos2 and ArcCos3 kernels are then given by a 2- or 3-layer neural network with ReLU activations, i.e., φ̂_ArcCos2(x) = φ̂_{B1}(φ̂_{B0}(x)) and φ̂_ArcCos3(x) = φ̂_{B2}(φ̂_{B1}(φ̂_{B0}(x))). The training procedure for the ArcCos2 and ArcCos3 kernels remains identical to the training of the ArcCos kernel, i.e., the random matrices W_{Bi} are simultaneously adapted.

[Figure 4 plots the KAE and classification accuracy over the number of features for the Gaussian kernel on MNIST and CoverType, comparing the exact basis GB with fast approximations of 1, 2, or 3 structured blocks HD for each of RB, UAB, SAB, and DAB.]

Figure 4: Fast kernel machines. The plots show how replacing the basis GB with a fast approximation influences the performance of a Gaussian kernel, i.e., GB is replaced by 1, 2, or 3 structured blocks HDi. Fast approximations with 2 and 3 blocks might overlap with GB. Best viewed in digital and color.
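The inductive step can be sketched numerically as follows (a minimal sketch; depth=1 recovers the ArcCos kernel of Sec. 3.1, while depth=2 and depth=3 give ArcCos2 and ArcCos3):

```python
import numpy as np

def J(theta):
    """Angular part of the ArcCos kernel: J(theta) = sin(theta) + (pi - theta) cos(theta)."""
    return np.sin(theta) + (np.pi - theta) * np.cos(theta)

def arccos_kernel(x, xp, depth=1):
    """Depth-`depth` ArcCos kernel via the inductive step
    k_{i+1}(x, x') = (1/pi) sqrt(k_i(x, x) k_i(x', x')) J(theta_i)."""
    k_xx, k_pp, k_xp = x @ x, xp @ xp, x @ xp
    for _ in range(depth):
        norm = np.sqrt(k_xx * k_pp)
        theta = np.arccos(np.clip(k_xp / norm, -1.0, 1.0))
        k_xp = norm * J(theta) / np.pi
        # Self-similarities have theta = 0 and J(0) = pi, so they are preserved.
        k_xx = k_xx * J(0.0) / np.pi
        k_pp = k_pp * J(0.0) / np.pi
    return k_xp
```

For orthogonal inputs, θ = π/2 and J(θ) = 1, so the depth-1 kernel value is ‖x‖‖x′‖/π; self-similarities k(x, x) = ‖x‖² are preserved at every depth.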
Only now the basis consists of more than one layer, and, to remain comparable for a given number of features, we split these features evenly over two layers for a 2-layer kernel and over three layers for a 3-layer kernel. In the following we describe our results on the MNIST and CoverType data sets. We observed that the relationship described so far between the cases RB, UAB, SAB, and DAB also generalizes to deep models (see Fig. 3, first part, and Fig. 7 in the supplement). That is, UAB approximates the true kernel function up to several orders of magnitude better than RB and leads to better classification performance. Furthermore, SAB and DAB perform similarly well and clearly outperform the task-agnostic bases RB and UAB. We now compare the results across the ArcCos kernels. Consider the third row of Fig. 3, which depicts the performance of RB and UAB on the CoverType data set. For a limited number of features, i.e., fewer than 3,000, the deeper kernels perform worse than the shallow ones. Only given enough capacity are the deep kernels able to perform as well as or better than the single-layer bases. On the other hand, for the CoverType data set, the task-related bases, i.e., SAB and DAB, benefit significantly from a deeper structure and are thus more efficient. Comparing SAB with DAB, for the ArcCos kernel with only one layer SAB leads to worse results than DAB. With two layers the gap diminishes, and with three layers it vanishes (see Fig. 3). This suggests that, for this data set, the evaluated shallow models are not expressive enough to extract the task-related kernel information.

Fast kernel machines By using structured matrices, one can speed up approximated kernel machines [20, 8]. We will now investigate how this important technique influences the presented basis schemes. The approximation is achieved by replacing random Gaussian matrices with an approximation composed of diagonal and structured Hadamard matrices.
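The product Hx can be computed without materializing H via the fast Walsh-Hadamard transform; a minimal sketch of one structured block HD (the dimension must be a power of two):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(d log d); len(x) must be a power of two.
    Equivalent to multiplying by the (unnormalized) Sylvester Hadamard matrix H."""
    x = np.asarray(x, dtype=float).copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def hd_block(x, signs):
    """One structured block HD: a Rademacher diagonal D (stored as a sign vector,
    O(d) memory) followed by the Hadamard matrix H, applied in O(d log d)."""
    return fwht(signs * x)
```

A chain of such blocks, scaled by 1/σ, replaces the dense Gaussian matrix GB; as the text notes, adapting such a basis then modifies only the diagonal matrices.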
The advantage of these matrix types is that they allow for low storage costs as well as fast multiplications. Recall that the input dimension is d and the number of features is D. By using the fast Hadamard transform, these algorithms only need to store O(D) instead of O(dD) parameters, and the kernel approximation can be computed in O(D log d) rather than O(Dd). We use the approximation from [8] and replace the random Gaussian matrix W_B = (1/σ) G_B in Eq. 4 with a chain of random, structured blocks, W_B ≈ (1/σ) HD_1 · · · HD_i. Each block HD_i consists of a diagonal matrix D_i with entries sampled from the Rademacher distribution and a Hadamard matrix H. More blocks lead to a better approximation, but consequently require more computation. We found that the optimization is slightly more unstable, and we therefore stop early only after 20 epochs without improvement. When adapting a basis, we only modify the diagonal matrices. We re-conducted our previous experiments for the Gaussian kernel on the MNIST and CoverType data sets (Fig. 4). First, one notices that in most cases the approximation exhibits no decline in performance, and that it is a viable alternative for all basis adaptation schemes. There are two major exceptions. Consider first the left part of the second row, which depicts an approximated, random kernel machine (RB). The convergence of the kernel approximation stalls when using a random basis with only one block. As a result, the classification performance drops drastically. This is not the case when the basis is adapted in an unsupervised fashion, which is shown in the right part of the second row. Here one cannot notice a major difference between one or more blocks. This means that for fast kernel machines an unsupervised adaptation can lead to more effective model utilization, which is crucial for resource-aware settings. Furthermore, a discriminatively trained basis, i.e., a neural network, can be affected similarly by this re-parameterization (see Fig.
4, bottom row). Here, an order of magnitude more features is needed to achieve the same accuracy compared to an exact representation, regardless of how many blocks are used. In contrast, when adapting the kernel in a supervised fashion, no decline in performance is noticeable. This shows that this procedure uses parameters very efficiently.

5 Conclusions Our analysis shows how random and adaptive bases affect the quality of learning. For random features, good performance comes at the cost of a large number of features, which suggests that two issues severely limit approximated kernel machines: the basis being (1) agnostic to the data distribution and (2) agnostic to the task. We have found that data-dependent optimization of the kernel approximation consistently results in a more compact representation for a given kernel approximation error. Moreover, task-adapted features can further improve upon this. Even with fast, structured matrices, the adaptive features allow for a further reduction of the number of required parameters. This presents a promising strategy when fast and computationally cheap inference is required, e.g., on mobile devices. Beyond that, we have evaluated the generalization capabilities of the adapted variants on a transfer learning task. Remarkably, all adapted bases outperform the random baseline here. We have found that kernel-target alignment works particularly well in this setting, achieving almost the same performance on the transfer task as on the original task. At the junction of kernel methods and deep learning, this shows that incorporating label information can indeed be beneficial for performance without having to sacrifice generalization capability. Investigating this in more detail appears highly promising and suggests a path for future work.

Acknowledgments MA, KS, KRM, and FS acknowledge support by the Federal Ministry of Education and Research (BMBF) under 01IS14013A.
PJK has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 657679. KRM further acknowledges partial funding by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451), BK21, and by DFG. FS is partially supported by NSF IIS-1065243, 1451412, 1513966/1632803, 1208500, CCF-1139148, a Google Research Award, an Alfred P. Sloan Research Fellowship, and ARO #W911NF-12-1-0241 and #W911NF-15-1-0484. This work was supported by NVIDIA with a hardware donation.

References

[1] Jock A. Blackard and Denis J. Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 2000.
[2] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342–350, 2009.
[3] François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
[4] Adam Coates, Andrew Y. Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pages 215–223, 2011.
[5] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandola. On kernel-target alignment. In Advances in Neural Information Processing Systems, 2001.
[6] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F. Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[7] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005.
[8] Felix X. Yu, Ananda Theertha Suresh, Krzysztof M. Choromanski, Daniel N. Holtmann-Rice, and Sanjiv Kumar.
Orthogonal random features. In Advances in Neural Information Processing Systems, pages 1975–1983, 2016.
[9] Chang Feng, Qinghua Hu, and Shizhong Liao. Random feature mapping with signed circulant matrix projection. In IJCAI, pages 3490–3496, 2015.
[10] Peter W. Frey and David J. Slate. Letter recognition using Holland-style adaptive classifiers. Machine Learning, 6(2):161–182, 1991.
[11] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315–323, 2011.
[12] Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63–77. Springer, 2005.
[13] Isabelle Guyon, Steve R. Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the NIPS 2003 feature selection challenge. In NIPS, volume 4, pages 545–552, 2004.
[14] Po-Sen Huang, Haim Avron, Tara N. Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on TIMIT. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 205–209. IEEE, 2014.
[15] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.
[16] Eric Jones, Travis Oliphant, and Pearu Peterson. SciPy: Open source scientific tools for Python. 2014.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization, 2015.
[18] Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.
[19] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] Quoc Le, Tamas Sarlos, and Alexander Smola.
Fastfood – computing Hilbert space expansions in loglinear time. Journal of Machine Learning Research, 28:244–252, 2013.
[21] Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998.
[22] Zhiyun Lu, Avner May, Kuan Liu, Alireza Bagheri Garakani, Dong Guo, Aurélien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, et al. How to scale up kernel methods to be as good as deep neural nets. arXiv preprint arXiv:1411.4000, 2014.
[23] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
[24] Grégoire Montavon, Mikio L. Braun, and Klaus-Robert Müller. Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep):2563–2581, 2011.
[25] Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pages 2924–2932, 2014.
[26] John Moody and Christian J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2):281–294, 1989.
[27] Klaus-Robert Müller, A. Smola, Gunnar Rätsch, B. Schölkopf, Jens Kohlmorgen, and Vladimir Vapnik. Using support vector machines for time series prediction. Advances in Kernel Methods—Support Vector Learning, pages 243–254, 1999.
[28] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
[29] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[30] Ali Rahimi and Benjamin Recht.
Random features for large-scale kernel machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1177–1184. Curran Associates, Inc., 2008.
[31] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1313–1320. Curran Associates, Inc., 2009.
[32] Dougal J. Sutherland and Jeff Schneider. On the error of random Fourier features. AUAI, 2015.
[33] Stéfan van der Walt, S. Chris Colbert, and Gael Varoquaux. The NumPy array: A structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22–30, 2011.
[34] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 13th International Conference on Neural Information Processing Systems, pages 661–667. MIT Press, 2000.
[35] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning. arXiv preprint arXiv:1511.02222, 2015.
[36] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Advances in Neural Information Processing Systems, pages 476–484, 2012.
[37] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. June 2015.
[38] Zichao Yang, Andrew Wilson, Alex Smola, and Le Song. A la carte – learning fast kernels. Journal of Machine Learning Research, 38:1098–1106, 2015.
[39] Felix X. Yu, Sanjiv Kumar, Henry Rowley, and Shih-Fu Chang. Compact nonlinear maps and circulant extensions. arXiv preprint arXiv:1503.03893, 2015.
Nearest-Neighbor Sample Compression: Efficiency, Consistency, Infinite Dimensions

Aryeh Kontorovich, Department of Computer Science, Ben-Gurion University of the Negev, karyeh@cs.bgu.ac.il
Sivan Sabato, Department of Computer Science, Ben-Gurion University of the Negev, sabatos@bgu.ac.il
Roi Weiss, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, roiw@weizmann.ac.il

Abstract We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm. This algorithm is derived from sample compression bounds and enjoys the statistical advantages of tight, fully empirical generalization bounds, as well as the algorithmic advantages of a faster runtime and memory savings. We prove that this algorithm is strongly Bayes-consistent in metric spaces with finite doubling dimension — the first consistency result for an efficient nearest-neighbor sample compression scheme. Rather surprisingly, we discover that this algorithm continues to be Bayes-consistent even in a certain infinite-dimensional setting, in which the basic measure-theoretic conditions on which classic consistency proofs hinge are violated. This is all the more surprising, since it is known that k-NN is not Bayes-consistent in this setting. We pose several challenging open problems for future research.

1 Introduction This paper deals with Nearest-Neighbor (NN) learning algorithms in metric spaces. Initiated by Fix and Hodges in 1951 [16], this seemingly naive learning paradigm remains competitive against more sophisticated methods [8, 46] and, in its celebrated k-NN version, has been placed on a solid theoretical foundation [11, 44, 13, 47]. Although the classic 1-NN is well known to be inconsistent in general, in recent years a series of papers has presented variations on the theme of a regularized 1-NN classifier, as an alternative to the Bayes-consistent k-NN. Gottlieb et al.
[18] showed that approximate nearest neighbor search can act as a regularizer, actually improving generalization performance rather than just injecting noise. In a follow-up work, [27] showed that applying Structural Risk Minimization to (essentially) the margin-regularized data-dependent bound in [18] yields a strongly Bayes-consistent 1-NN classifier. A further development has seen margin-based regularization analyzed through the lens of sample compression: a near-optimal nearest neighbor condensing algorithm was presented [20] and later extended to cover semimetric spaces [21]; an activized version also appeared [25]. As detailed in [27], margin-regularized 1-NN methods enjoy a number of statistical and computational advantages over the traditional k-NN classifier. Salient among these are explicit data-dependent generalization bounds, and considerable runtime and memory savings. Sample compression affords additional advantages, in the form of tighter generalization bounds and increased efficiency in time and space.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this work we study the Bayes-consistency of a compression-based 1-NN multiclass learning algorithm, in both finite-dimensional and infinite-dimensional metric spaces. The algorithm is essentially the passive component of the active learner proposed by Kontorovich, Sabato, and Urner in [25], and we refer to it in the sequel as KSU; for completeness, we present it here in full (Alg. 1). We show that in finite-dimensional metric spaces, KSU is both computationally efficient and Bayes-consistent. This is the first compression-based multiclass 1-NN algorithm proven to possess both of these properties. We further exhibit a surprising phenomenon in infinite-dimensional spaces, where we construct a distribution for which KSU is Bayes-consistent while k-NN is not.

Main results.
Our main contributions consist of analyzing the performance of KSU in finite- and infinite-dimensional settings, and comparing it to the classical k-NN learner. Our key findings are summarized below.

• In Theorem 2, we show that KSU is computationally efficient and strongly Bayes-consistent in metric spaces with a finite doubling dimension. This is the first (strong or otherwise) Bayes-consistency result for an efficient sample compression scheme for a multiclass (or even binary)¹ 1-NN algorithm. This result should be contrasted with the one in [27], where margin-based regularization was employed, but not compression; the proof techniques from [27] do not carry over to the compression-based scheme. Instead, novel arguments are required, as we discuss below. The new sample compression technique provides a Bayes-consistency proof for multiple (even countably many) labels; this is contrasted with the multiclass 1-NN algorithm in [28], which is not compression-based, and requires solving a minimum vertex cover problem, thereby imposing a 2-approximation factor whenever there are more than two labels.

• In Theorem 4, we make the surprising discovery that KSU continues to be Bayes-consistent in a certain infinite-dimensional setting, even though this setting violates the basic measure-theoretic conditions on which classic consistency proofs hinge, including that of Theorem 2. This is all the more surprising, since it is known that k-NN is not Bayes-consistent for this construction [9]. We are currently unaware of any separable² metric probability space on which KSU fails to be Bayes-consistent; this is posed as an intriguing open problem.

Our results indicate that in finite dimensions, an efficient, compression-based, Bayes-consistent multiclass 1-NN algorithm exists, and hence can be offered as an alternative to k-NN, which is well known to be Bayes-consistent in finite dimensions [12, 41].
In contrast, in infinite dimensions, our results show that the condition characterizing the Bayes-consistency of k-NN does not extend to all NN algorithms. It is an open problem to characterize the necessary and sufficient conditions for the existence of a Bayes-consistent NN-based algorithm in infinite dimensions. Related work. Following the pioneering work of [11] on nearest-neighbor classification, it was shown by [13, 47, 14] that the k-NN classifier is strongly Bayes consistent in Rd. These results made extensive use of the Euclidean structure of Rd, but in [41] a weak Bayes-consistency result was shown for metric spaces with a bounded diameter and a bounded doubling dimension, and additional distributional smoothness assumptions. More recently, some of the classic results on k-NN risk decay rates were refined by [10] in an analysis that captures the interplay between the metric and the sampling distribution. The worst-case rates have an exponential dependence on the dimension (i.e., the so-called curse of dimensionality), and Pestov [33, 34] examines this phenomenon closely under various distributional and structural assumptions. Consistency of NN-type algorithms in more general (and in particular infinite-dimensional) metric spaces was discussed in [1, 5, 6, 9, 30]. In [1, 9], characterizations of Bayes-consistency were given in terms of Besicovitch-type conditions (see Eq. (3)). In [1], a generalized “moving window” classification rule is used and additional regularity conditions on the regression function are imposed. The filtering technique (i.e., taking the first d coordinates in some basis representation) was shown to be universally consistent in [5]. However, that algorithm suffers from the cost of cross-validating over both the dimension d and number of neighbors k. Also, the technique is only applicable in 1 An efficient sample compression algorithm was given in [20] for the binary case, but no Bayes-consistency guarantee is known for it. 
2Cérou and Guyader [9] gave a simple example of a nonseparable metric on which all known nearest-neighbor methods, including k-NN and KSU, obviously fail. 2 Hilbert spaces (as opposed to more general metric spaces) and provides only asymptotic consistency, without finite-sample bounds such as those provided by KSU. The insight of [5] is extended to the more general Banach spaces in [6] under various regularity assumptions. None of the aforementioned generalization results for NN-based techniques are in the form of fully empirical, explicitly computable sample-dependent error bounds. Rather, they are stated in terms of the unknown Bayes-optimal rate, and some involve additional parameters quantifying the well-behavedness of the unknown distribution (see [27] for a detailed discussion). As such, these guarantees do not enable a practitioner to compute a numerical generalization error estimate for a given training sample, much less allow for a data-dependent selection of k, which must be tuned via cross-validation. The asymptotic expansions in [43, 37, 23, 40] likewise do not provide a computable finite-sample bound. The quest for such bounds was a key motivation behind the series of works [18, 28, 20], of which KSU [25] is the latest development. The work of Devroye et al. [14, Theorem 21.2] has implications for 1-NN classifiers in Rd that are defined based on data-dependent majority-vote partitions of the space. It is shown that under some conditions, a fixed mapping from each sample size to a data-dependent partition rule induces a strongly Bayes-consistent algorithm. This result requires the partition rule to have a bounded VC dimension, and since this rule must be fixed in advance, the algorithm is not fully adaptive. Theorem 19.3 ibid. proves weak consistency for an inefficient compression-based algorithm, which selects among all the possible compression sets of a certain size, and maintains a certain rate of compression relative to the sample size. 
The generalizing power of sample compression was independently discovered by [31], and later elaborated upon by [22]. In the context of NN classification, [14] lists various condensing heuristics (which have no known performance guarantees) and leaves open the algorithmic question of how to minimize the empirical risk over all subsets of a given size. The first compression-based 1-NN algorithm with provable optimality guarantees was given in [20]; it was based on constructing γ-nets in spaces with a finite doubling dimension. The compression size of this construction was shown to be nearly unimprovable by an efficient algorithm unless P=NP. With γ-nets as its algorithmic engine, KSU inherits this near-optimality. The compression-based 1-NN paradigm was later extended to semimetrics in [21], where it was shown to survive violations of the triangle inequality, while the hierarchy-based search methods that have become standard for metric spaces (such as [4, 18] and related approaches) all break down. It was shown in [27] that a margin-regularized 1-NN learner (essentially, the one proposed in [18], which, unlike [20], did not involve sample compression) becomes strongly Bayes-consistent when the margin is chosen optimally in an explicitly prescribed sample-dependent fashion. The margin-based technique developed in [18] for the binary case was extended to multiclass in [28]. Since that algorithm relied on computing a minimum vertex cover, it was not possible to make it both computationally efficient and Bayes-consistent when the number of labels exceeds two. An additional improvement over [28] is that the generalization bounds presented there had an explicit (logarithmic) dependence on the number of labels, while our compression scheme extends seamlessly to countable label spaces. Paper outline. After fixing the notation and setup in Sec. 2, in Sec. 3 we present KSU, the compression-based 1-NN algorithm we analyze in this work. Sec.
4 discusses our main contributions regarding KSU, together with some open problems. High-level proof sketches are given in Sec. 5 for the finite-dimensional case, and Sec. 6 for the infinite-dimensional case. Full detailed proofs can be found in [26]. 2 Setting and Notation Our instance space is the metric space (X, ρ), where X is the instance domain and ρ is the metric. (See Appendix A in [26] for relevant background on metric measure spaces.) We consider a countable label space Y. The unknown sampling distribution is a probability measure µ̄ over X × Y, with marginal µ over X. Denote by (X, Y) ∼ µ̄ a pair drawn according to µ̄. The generalization error of a classifier f : X → Y is given by $\mathrm{err}_{\bar\mu}(f) := P_{\bar\mu}(Y \neq f(X))$, and its empirical error with respect to a labeled set S′ ⊆ X × Y is given by $\widehat{\mathrm{err}}(f, S') := \frac{1}{|S'|} \sum_{(x,y) \in S'} \mathbb{1}[y \neq f(x)]$. The optimal Bayes risk of µ̄ is $R^*_{\bar\mu} := \inf \mathrm{err}_{\bar\mu}(f)$, where the infimum is taken over all measurable classifiers f : X → Y. We say that µ̄ is realizable when $R^*_{\bar\mu} = 0$. We omit the overline in µ̄ in the sequel when there is no ambiguity. For a finite labeled set S ⊆ X × Y and any x ∈ X, let X_nn(x, S) be the nearest neighbor of x with respect to S and let Y_nn(x, S) be the nearest neighbor label of x with respect to S: $(X_{\mathrm{nn}}(x, S), Y_{\mathrm{nn}}(x, S)) := \operatorname{argmin}_{(x', y') \in S} \rho(x, x')$, where ties are broken arbitrarily. The 1-NN classifier induced by S is denoted by h_S(x) := Y_nn(x, S). The set of points in S, denoted by X = {X_1, . . . , X_|S|} ⊆ X, induces a Voronoi partition of X, V(X) := {V_1(X), . . . , V_|S|(X)}, where each Voronoi cell is $V_i(X) := \{x \in \mathcal{X} : \operatorname{argmin}_{j \in \{1,\dots,|S|\}} \rho(x, X_j) = i\}$. By definition, ∀x ∈ V_i(X), h_S(x) = Y_i. A 1-NN algorithm is a mapping from an i.i.d. labeled sample S_n ∼ µ̄^n to a labeled set S′_n ⊆ X × Y, yielding the 1-NN classifier h_{S′_n}. While the classic 1-NN algorithm sets S′_n := S_n, in this work we study a compression-based algorithm which sets S′_n adaptively, as discussed further below.
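To make the notation concrete, the definitions above translate directly into code. The following Python sketch is our own illustration (the names `h`, `empirical_error`, and the toy metric are not from the paper): it implements the 1-NN classifier h_S and the empirical error for an arbitrary metric ρ.

```python
# Our own illustration (not the paper's code): the 1-NN classifier h_S and
# its empirical error, for an arbitrary metric rho on the instance space.

def nearest_neighbor(x, S, rho):
    """(X_nn(x, S), Y_nn(x, S)): the labeled point of S closest to x;
    ties are broken by order of appearance in S."""
    return min(S, key=lambda point: rho(x, point[0]))

def h(x, S, rho):
    """The 1-NN classifier induced by S: h_S(x) := Y_nn(x, S)."""
    return nearest_neighbor(x, S, rho)[1]

def empirical_error(S, S_prime, rho):
    """Empirical error of h_S on S': fraction of (x, y) with y != h_S(x)."""
    return sum(1 for x, y in S_prime if h(x, S, rho) != y) / len(S_prime)

rho = lambda u, v: abs(u - v)              # a metric on X = R
S = [(0.0, 0), (1.0, 1), (2.0, 1)]
print(h(1.4, S, rho))                      # nearest point is 1.0, so: 1
print(empirical_error(S, S, rho))          # each point is its own NN: 0.0
```

Note that the classic 1-NN algorithm is recovered by taking S′_n = S_n in `empirical_error`'s first argument; the compression-based algorithm studied here instead passes a much smaller labeled set.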
A 1-NN algorithm is strongly Bayes-consistent on ¯µ if err(hS′n) converges to R∗almost surely, that is P[limn→∞err(hS′n) = R∗] = 1. An algorithm is weakly Bayes-consistent on ¯µ if err(hS′n) converges to R∗in expectation, limn→∞E[err(hS′n)] = R∗. Obviously, the former implies the latter. We say that an algorithm is Bayes-consistent on a metric space if it is Bayes-consistent on all distributions in the metric space. A convenient property that is used when studying the Bayes-consistency of algorithms in metric spaces is the doubling dimension. Denote the open ball of radius r around x by Br(x) := {x′ ∈ X : ρ(x, x′) < r} and let ¯Br(x) denote the corresponding closed ball. The doubling dimension of a metric space (X, ρ) is defined as follows. Let n be the smallest number such that every ball in X can be covered by n balls of half its radius, where all balls are centered at points of X. Formally, n := min{n ∈N : ∀x ∈X, r > 0, ∃x1, . . . , xn ∈X s.t. Br(x) ⊆∪n i=1Br/2(xi)}. Then the doubling dimension of (X, ρ) is defined by ddim(X, ρ) := log2 n. For an integer n, let [n] := {1, . . . , n}. Denote the set of all index vectors of length d by In,d := [n]d. Given a labeled set Sn = (Xi, Yi)i∈[n] and any i = {i1, . . . , id} ∈In,d, denote the subsample of Sn indexed by i by Sn(i) := {(Xi1, Yi1), . . . , (Xid, Yid)}. Similarly, for a vector Y ′ = {Y ′ 1, . . . , Y ′ d} ∈Yd, denote by Sn(i, Y ′) := {(Xi1, Y ′ 1), . . . , (Xid, Y ′ d)}, namely the sub-sample of Sn as determined by i where the labels are replaced with Y ′. Lastly, for i, j ∈In,d, we denote Sn(i; j) := {(Xi1, Yj1), . . . , (Xid, Yjd)}. 3 1-NN majority-based compression In this work we consider the 1-NN majority-based compression algorithm proposed in [25], which we refer to as KSU. This algorithm is based on constructing γ-nets at different scales; for γ > 0 and A ⊆X, a set X ⊆A is said to be a γ-net of A if ∀a ∈A, ∃x ∈X : ρ(a, x) ≤γ and for all x ̸= x′ ∈X, ρ(x, x′) > γ.3 The algorithm (see Alg. 
1) operates as follows. Given an input sample S_n, whose set of points is denoted X_n = {X_1, . . . , X_n}, KSU considers all possible scales γ > 0. For each such scale it constructs a γ-net of X_n. Denote this γ-net by X(γ) := {X_{i_1}, . . . , X_{i_m}}, where m ≡ m(γ) denotes its size and i ≡ i(γ) := {i_1, . . . , i_m} ∈ I_{n,m} denotes the indices selected from S_n for this γ-net. For every such γ-net, the algorithm attaches the labels Y′ ≡ Y′(γ) ∈ Y^m, which are the empirical majority-vote labels in the respective Voronoi cells in the partition V(X(γ)) = {V_1, . . . , V_m}. Formally, for i ∈ [m], $Y'_i \in \operatorname{argmax}_{y \in \mathcal{Y}} |\{j \in [n] \mid X_j \in V_i, Y_j = y\}|$, (1) where ties are broken arbitrarily. This procedure creates a labeled set S′_n(γ) := S_n(i(γ), Y′(γ)) for every relevant γ ∈ {ρ(X_i, X_j) | i, j ∈ [n]} \ {0}. The algorithm then selects a single γ, denoted γ∗ ≡ γ∗_n, and outputs h_{S′_n(γ∗)}. The scale γ∗ is selected so as to minimize a generalization error bound, which upper bounds err(S′_n(γ)) with high probability. This error bound, denoted Q in the algorithm, can be derived using a compression-based analysis, as described below. 3 For technical reasons, having to do with the construction in Sec. 6, we depart slightly from the standard definition of a γ-net X ⊆ A. The classic definition requires that (i) ∀a ∈ A, ∃x ∈ X : ρ(a, x) < γ and (ii) ∀x ≠ x′ ∈ X : ρ(x, x′) ≥ γ. In our definition, the relations < and ≥ in (i) and (ii) are replaced by ≤ and >.

Algorithm 1 KSU: 1-NN compression-based algorithm
Require: Sample S_n = (X_i, Y_i)_{i∈[n]}, confidence δ
Ensure: A 1-NN classifier
1: Let Γ := {ρ(X_i, X_j) | i, j ∈ [n]} \ {0}
2: for γ ∈ Γ do
3:   Let X(γ) be a γ-net of {X_1, . . . , X_n}
4:   Let m(γ) := |X(γ)|
5:   For each i ∈ [m(γ)], let Y′_i be the majority label in V_i(X(γ)) as defined in Eq. (1)
6:   Set S′_n(γ) := (X(γ), Y′(γ))
7: end for
8: Set α(γ) := $\widehat{\mathrm{err}}$(h_{S′_n(γ)}, S_n)
9: Find γ∗_n ∈ argmin_{γ∈Γ} Q(n, α(γ), 2m(γ), δ), where Q is, e.g., as in Eq. (2)
10: Set S′_n := S′_n(γ∗_n)
11: return h_{S′_n}

We say that a mapping S_n ↦ S′_n is a compression scheme if there is a function C : ∪_{m=0}^∞ (X × Y)^m → 2^{X×Y}, from sub-samples to subsets of X × Y, such that for every S_n there exists an m and a sequence i ∈ I_{n,m} such that S′_n = C(S_n(i)). Given a compression scheme S_n ↦ S′_n and a matching function C, we say that a specific S′_n is an (α, m)-compression of a given S_n if S′_n = C(S_n(i)) for some i ∈ I_{n,m} and $\widehat{\mathrm{err}}(h_{S'_n}, S_n) \le \alpha$. The generalization power of compression was recognized by [17] and [22]. Specifically, it was shown in [21, Theorem 8] that if the mapping S_n ↦ S′_n is a compression scheme, then with probability at least 1 − δ, for any S′_n which is an (α, m)-compression of S_n ∼ µ̄^n, we have (omitting the constants, explicitly provided therein, which do not affect our analysis) $$\mathrm{err}(h_{S'_n}) \le \frac{n}{n-m}\,\alpha + O\!\left(\frac{m \log n + \log(1/\delta)}{n-m}\right) + O\!\left(\sqrt{\frac{n}{n-m}\,\alpha \cdot \frac{m \log n + \log(1/\delta)}{n-m}}\right). \qquad (2)$$ Defining Q(n, α, m, δ) as the RHS of Eq. (2) provides KSU with a compression bound. The following proposition shows that KSU is a compression scheme, which enables us to use Eq. (2) with the appropriate substitution.4 Proposition 1. The mapping S_n ↦ S′_n defined by Alg. 1 is a compression scheme whose output S′_n is a ($\widehat{\mathrm{err}}(h_{S'_n})$, 2|S′_n|)-compression of S_n. Proof. Define the function C by C((X̄_i, Ȳ_i)_{i∈[2m]}) = (X̄_i, Ȳ_{i+m})_{i∈[m]}, and observe that for all S_n, we have S′_n = C(S_n(i(γ); j(γ))), where i(γ) is the γ-net index set as defined above, and j(γ) = {j_1, . . . , j_{m(γ)}} ∈ I_{n,m(γ)} is some index vector such that Y′_i = Y_{j_i} for every i ∈ [m(γ)]. Since Y′_i is an empirical majority vote, clearly such a j exists. Under this scheme, the output S′_n of this algorithm is a ($\widehat{\mathrm{err}}(h_{S'_n})$, 2|S′_n|)-compression. KSU is efficient, for any countable Y. Indeed, Alg. 1 has a naive runtime complexity of O(n^4), since O(n^2) values of γ are considered and a γ-net is constructed for each one in time O(n^2) (see [20, Algorithm 1]).
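For intuition, the whole loop of Alg. 1 can be rendered as a short, deliberately naive Python sketch. This is our own toy rendition, not the authors' code: `Q` is supplied by the caller (a crude stand-in is used in the demo, rather than the RHS of Eq. (2)), γ-nets are built greedily, and no attempt is made to reach the runtime discussed above.

```python
from collections import Counter

def gamma_net_indices(X, gamma, rho):
    """Greedy gamma-net over the points X (returns indices into X): kept
    points are pairwise > gamma apart, and every point of X lies within
    gamma of some kept point, matching the (<=, >) definition in the text."""
    idx = []
    for j, x in enumerate(X):
        if all(rho(x, X[i]) > gamma for i in idx):
            idx.append(j)
    return idx

def ksu(S, rho, Q, delta):
    """Toy rendition of Alg. 1: try every interpoint distance as the scale,
    label each net point by majority vote over its Voronoi cell (Eq. (1)),
    and keep the net whose bound Q(n, alpha, 2m, delta) is smallest."""
    X = [x for x, _ in S]
    n = len(S)
    scales = sorted({rho(X[i], X[j]) for i in range(n) for j in range(n)} - {0.0})
    best, best_bound = None, float("inf")
    for gamma in scales:
        net = gamma_net_indices(X, gamma, rho)
        votes = [Counter() for _ in net]          # label counts per Voronoi cell
        for x, y in S:
            cell = min(range(len(net)), key=lambda k: rho(x, X[net[k]]))
            votes[cell][y] += 1
        S_prime = [(X[net[k]], votes[k].most_common(1)[0][0])
                   for k in range(len(net))]
        h = lambda x: min(S_prime, key=lambda p: rho(x, p[0]))[1]
        alpha = sum(1 for x, y in S if h(x) != y) / n   # empirical error alpha(gamma)
        bound = Q(n, alpha, 2 * len(S_prime), delta)
        if bound < best_bound:
            best, best_bound = S_prime, bound
    return best

rho = lambda u, v: abs(u - v)
Q = lambda n, alpha, m, delta: alpha + m / n     # crude stand-in bound for the demo
S = [(0.0, 0), (0.1, 0), (0.2, 0), (1.0, 1), (1.1, 1), (1.2, 1)]
print(sorted(ksu(S, rho, Q, delta=0.05)))        # [(0.0, 0), (1.0, 1)]
```

On this one-dimensional sample with two well-separated pure clusters, the bound's minimizer is the two-point net, i.e., exactly one labeled representative per cluster: finer nets pay the size penalty, coarser nets pay in empirical error.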
Improved runtimes can be obtained, e.g., using the methods in [29, 18]. In this work we focus on the Bayes-consistency of KSU, rather than on optimizing its computational complexity. Our Bayes-consistency results below hold for KSU whenever the generalization bound Q(n, α, m, δ_n) satisfies the following properties: Property 1 For any integer n and δ ∈ (0, 1), with probability 1 − δ over the i.i.d. random sample S_n ∼ µ̄^n, for all α ∈ [0, 1] and m ∈ [n]: If S′_n is an (α, m)-compression of S_n, then err(h_{S′_n}) ≤ Q(n, α, m, δ). Property 2 Q is monotonically increasing in α and in m. Property 3 There is a sequence {δ_n}_{n=1}^∞, δ_n ∈ (0, 1), such that $\sum_{n=1}^\infty \delta_n < \infty$ and, for all m, $\lim_{n\to\infty} \sup_{\alpha \in [0,1]} \left(Q(n, \alpha, m, \delta_n) - \alpha\right) = 0$. 4 In [25] the analysis was based on compression with side information, and does not extend to infinite Y. The compression bound in Eq. (2) clearly satisfies these properties. Note that Property 3 is satisfied by Eq. (2) using any convergent series $\sum_{n=1}^\infty \delta_n < \infty$ such that δ_n = e^{−o(n)}; in particular, the decay of δ_n cannot be too rapid. 4 Main results In this section we describe our main results. The proofs appear in subsequent sections. First, we show that KSU is Bayes-consistent if the instance space has a finite doubling dimension. This contrasts with classical 1-NN, which is only Bayes-consistent if the distribution is realizable. Theorem 2. Let (X, ρ) be a metric space with a finite doubling dimension. Let Q be a generalization bound that satisfies Properties 1–3, and let δ_n be as stipulated by Property 3 for Q. If the input confidence δ for input size n is set to δ_n, then the 1-NN classifier h_{S′_n(γ∗_n)} calculated by KSU is strongly Bayes-consistent on (X, ρ): P(lim_{n→∞} err(h_{S′_n}) = R∗) = 1. The proof, provided in Sec. 5, closely follows the line of reasoning in [27], where the strong Bayes-consistency of an adaptive margin-regularized 1-NN algorithm was proved, but with several crucial differences.
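To see Property 3 in action, one can instantiate Eq. (2) numerically. In the sketch below (our own illustration; the O(·) constants are arbitrarily set to 1, so the numbers are only illustrative), the gap sup_α (Q(n, α, m, δ_n) − α) shrinks as n grows when δ_n = n^{−2}, a summable sequence satisfying δ_n = e^{−o(n)}:

```python
import math

def Q(n, alpha, m, delta):
    """RHS of Eq. (2) with both O(.) constants set to 1 (illustrative only)."""
    pen = (m * math.log(n) + math.log(1.0 / delta)) / (n - m)
    return n / (n - m) * alpha + pen + math.sqrt(n / (n - m) * alpha * pen)

# Property 2: Q is monotone in alpha and in m (easy to spot-check numerically).
# Property 3: with delta_n = n**-2, the gap sup_alpha (Q(n, alpha, m, delta_n)
# - alpha) vanishes as n grows, for any fixed compression size m.
gaps = []
for n in (10**3, 10**5, 10**7):
    delta_n = n ** -2.0
    gaps.append(max(Q(n, a / 100, 5, delta_n) - a / 100 for a in range(101)))
print(gaps)  # strictly decreasing toward 0
```

Since Q − α is increasing in α here, the supremum is attained at α = 1; the dominant vanishing term is the square root, of order $\sqrt{m \log n / n}$.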
In particular, the generalization bounds used by KSU are purely compression-based, as opposed to the Rademacher-based generalization bounds used in [27]. The former can be much tighter in practice and guarantee Bayes-consistency of KSU even for countably many labels. This, however, requires novel technical arguments, which are discussed in detail in Appendix B.1 in [26]. Moreover, since the compression-based bounds do not explicitly depend on ddim, they can be used even when ddim is infinite, as we do in Theorem 4 below. To underscore the subtle nature of Bayes-consistency, we note that the proof technique given here does not carry over to an earlier algorithm, suggested in [20, Theorem 4], which also uses γ-nets. It is an open question whether the latter is Bayes-consistent. Next, we study Bayes-consistency of KSU in infinite dimensions (i.e., with ddim = ∞), in particular in a setting where k-NN was shown by [9] not to be Bayes-consistent. Indeed, a straightforward application of [9, Lemma A.1] yields the following result. Theorem 3 (Cérou and Guyader [9]). There exists an infinite-dimensional separable metric space (X, ρ) and a realizable distribution µ̄ over X × {0, 1} such that no k_n-NN learner satisfying k_n/n → 0 as n → ∞ is Bayes-consistent under µ̄. In particular, this holds for any space and realizable distribution µ̄ that satisfy the following condition: The set C of points labeled 1 by µ̄ satisfies µ(C) > 0 and $$\forall x \in C, \quad \lim_{r \to 0} \frac{\mu(C \cap \bar{B}_r(x))}{\mu(\bar{B}_r(x))} = 0. \qquad (3)$$ Since µ(C) > 0, Eq. (3) constitutes a violation of the Besicovitch covering property. In doubling spaces, the Besicovitch covering theorem precludes such a violation [15]. In contrast, as [35, 36] show, in infinite-dimensional spaces this violation can in fact occur. Moreover, this is not an isolated pathology, as this property is shared by Gaussian Hilbert spaces [45]. At first sight, Eq. (3) might appear to thwart any 1-NN algorithm applied to such a distribution.
However, the following result shows that this is not the case: KSU is Bayes-consistent on a distribution with this property. Theorem 4. There is a metric space equipped with a realizable distribution for which KSU is weakly Bayes-consistent, while any k-NN classifier necessarily is not. The proof relies on a classic construction of Preiss [35] which satisfies Eq. (3). We show that the structure of the construction, combined with the packing and covering properties of γ-nets, implies that the majority-vote classifier induced by any γ-net with a sufficiently small γ approaches the Bayes error. To contrast with Theorem 4, we next show that on the same construction, not all majority-vote Voronoi partitions succeed. Indeed, if the packing property of γ-nets is relaxed, partition sequences obstructing Bayes-consistency exist. Theorem 5. For the example constructed in Theorem 4, there exists a sequence of Voronoi partitions with a vanishing diameter such that the induced true majority-vote classifiers are not Bayes-consistent. The above result also stands in contrast to [14, Theorem 21.2], showing that, unlike in finite dimensions, the partitions' vanishing diameter is insufficient to establish consistency when ddim = ∞. We conclude the main results by posing intriguing open problems. Open problem 1. Does there exist a metric probability space on which some k-NN algorithm is consistent while KSU is not? Does there exist any separable metric space on which KSU fails? Open problem 2. Cérou and Guyader [9] distill a certain Besicovitch condition which is necessary and sufficient for k-NN to be Bayes-consistent in a metric space. Our Theorem 4 shows that the Besicovitch condition is not necessary for KSU to be Bayes-consistent. Is it sufficient? What is a necessary condition? 5 Bayes-consistency of KSU in finite dimensions In this section we give a high-level proof of Theorem 2, showing that KSU is strongly Bayes-consistent in finite-dimensional metric spaces.
A fully detailed proof is given in Appendix B in [26]. Recall the optimal empirical error α∗ n ≡α(γ∗ n) and the optimal compression size m∗ n ≡m(γ∗ n) as computed by KSU. As shown in Proposition 1, the sub-sample S′ n(γ∗ n) is an (α∗ n, 2m∗ n)-compression of Sn. Abbreviate the compression-based generalization bound used in KSU by Qn(α, m) := Q(n, α, 2m, δn). To show Bayes-consistency, we start by a standard decomposition of the excess error over the optimal Bayes into two terms: err(hS′n(γ∗ n)) −R∗= err(hS′n(γ∗ n)) −Qn(α∗ n, m∗ n) + Qn(α∗ n, m∗ n) −R∗ =: TI(n) + TII(n), and show that each term decays to zero with probability one. For the first term, Property 1 for Q, together with the Borel-Cantelli lemma, readily imply lim supn→∞TI(n) ≤0 with probability one. The main challenge is showing that lim supn→∞TII(n) ≤0 with probability one. We do so in several stages: 1. Loosely speaking, we first show (Lemma 10) that the Bayes error R∗can be well approximated using 1-NN classifiers defined by the true (as opposed to empirical) majority-vote labels over fine partitions of X. In particular, this holds for any partition induced by a γ-net of X with a sufficiently small γ > 0. This approximation guarantee relies on the fact that in finite-dimensional spaces, the class of continuous functions with compact support is dense in L1(µ) (Lemma 9). 2. Fix ˜γ > 0 sufficiently small such that any true majority-vote classifier induced by a ˜γ-net has a true error close to R∗, as guaranteed by stage 1. Since for bounded subsets of finitedimensional spaces the size of any γ-net is finite, the empirical error of any majority-vote γ-net almost surely converges to its true majority-vote error as the sample size n →∞. Let n(˜γ) sufficiently large such that Qn(˜γ)(α(˜γ), m(˜γ)) as computed by KSU for a sample of size n(˜γ) is a reliable estimate for the true error of hS′ n(˜γ)(˜γ). 3. Let ˜γ and n(˜γ) be as in stage 2. 
Given a sample of size n = n(γ̃), recall that KSU selects an optimal γ∗ such that Q_n(α(γ), m(γ)) is minimized over all γ > 0. For margins γ ≪ γ̃, which are prone to over-fitting, Q_n(α(γ), m(γ)) is not a reliable estimate for h_{S′_n(γ)}, since compression may not yet have taken place for samples of size n. Nevertheless, these margins are discarded by KSU due to the penalty term in Q. On the other hand, for γ-nets with margin γ ≫ γ̃, which are prone to under-fitting, the true error is well estimated by Q_n(α(γ), m(γ)). It follows that KSU selects γ∗_n ≈ γ̃ and Q_n(α∗_n, m∗_n) ≈ R∗, implying lim sup_{n→∞} T_II(n) ≤ 0 with probability one. As one can see, the assumption that X is finite-dimensional plays a major role in the proof. A simple argument shows that the family of continuous functions with compact support is no longer dense in L1 in infinite-dimensional spaces. In addition, γ-nets of bounded subsets in infinite-dimensional spaces need no longer be finite. 6 On Bayes-consistency of NN algorithms in infinite dimensions In this section we study the Bayes-consistency properties of 1-NN algorithms on a classic infinite-dimensional construction of Preiss [35], which we describe below in detail. [Figure 1: Preiss's construction. Encircled is the closed ball B̄_{γ_{k−1}}(z) for some z ∈ C.] This construction was first introduced as a concrete example showing that in infinite-dimensional spaces the Besicovitch covering theorem [15] can be strongly violated, as manifested in Eq. (3). Example 1 (Preiss's construction). The construction (see Figure 1) defines an infinite-dimensional metric space (X, ρ) and a realizable measure µ̄ over X × Y with the binary label set Y = {0, 1}. It relies on two sequences: a sequence of natural numbers {N_k}_{k∈N} and a sequence of positive numbers {a_k}_{k∈N}. The two sequences should satisfy the following: $\sum_{k=1}^\infty a_k N_1 \cdots N_k = 1$; $\lim_{k\to\infty} a_k N_1 \cdots N_{k+1} = \infty$; and $\lim_{k\to\infty} N_k = \infty$.
(4) These properties are satisfied, for instance, by setting N_k := k! and $a_k := 2^{-k} / \prod_{i \in [k]} N_i$. Let Z_0 be the set of all finite sequences (z_1, . . . , z_k), k ∈ N, of natural numbers such that z_i ≤ N_i, and let Z_∞ be the set of all infinite sequences (z_1, z_2, . . . ) of natural numbers such that z_i ≤ N_i. Define the example space X := Z_0 ∪ Z_∞ and denote γ_k := 2^{−k}, where γ_∞ := 0. The metric ρ over X is defined as follows: for x, y ∈ X, denote by x ∧ y their longest common prefix. Then, $\rho(x, y) = (\gamma_{|x \wedge y|} - \gamma_{|x|}) + (\gamma_{|x \wedge y|} - \gamma_{|y|})$. It can be shown (see [35]) that ρ(x, y) is a metric; in fact, it embeds isometrically into the square norm metric of a Hilbert space. To define µ, the marginal measure over X, let ν_∞ be the uniform product distribution measure over Z_∞, that is: for all i ∈ N, each z_i in the sequence z = (z_1, z_2, . . . ) ∈ Z_∞ is independently drawn from a uniform distribution over [N_i]. Let ν_0 be an atomic measure on Z_0 such that for all z ∈ Z_0, ν_0(z) = a_{|z|}. Clearly, the first condition in Eq. (4) implies ν_0(Z_0) = 1. Define the marginal probability measure µ over X by ∀A ⊆ Z_0 ∪ Z_∞, µ(A) := αν_∞(A) + (1 − α)ν_0(A). In words, an infinite sequence is drawn with probability α (and all such sequences are equally likely), or else a finite sequence is drawn (and all finite sequences of the same length are equally likely). Define the realizable distribution µ̄ over X × Y by setting the marginal over X to µ, and by setting the label of z ∈ Z_∞ to be 1 with probability 1 and the label of z ∈ Z_0 to be 0 with probability 1. As shown in [35], this construction satisfies Eq. (3) with C = Z_∞ and µ(C) = α > 0. It follows from Theorem 3 that no k-NN algorithm is Bayes-consistent on it. In contrast, the following theorem shows that KSU is weakly Bayes-consistent on this distribution. Theorem 4 immediately follows from this result. Theorem 6. Assume (X, ρ), Y and µ̄ as in Example 1. KSU is weakly Bayes-consistent on µ̄.
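Example 1 is concrete enough to implement directly. The sketch below (our own illustration, not from the paper) encodes finite sequences as Python tuples, implements the metric ρ via longest common prefixes, and numerically checks the first two conditions of Eq. (4) for the suggested choice N_k = k!, a_k = 2^{−k}/∏_{i∈[k]} N_i, using exact rational arithmetic to avoid floating-point overflow in the factorial products:

```python
from fractions import Fraction
from math import factorial, isclose

def gamma(k):
    """gamma_k = 2**-k (gamma_0 = 1, corresponding to the empty prefix)."""
    return 2.0 ** -k

def rho(x, y):
    """Preiss's metric on finite sequences x, y (tuples of naturals):
    rho(x, y) = (gamma_|x^y| - gamma_|x|) + (gamma_|x^y| - gamma_|y|),
    where x^y is the longest common prefix of x and y."""
    l = 0
    while l < min(len(x), len(y)) and x[l] == y[l]:
        l += 1
    return (gamma(l) - gamma(len(x))) + (gamma(l) - gamma(len(y)))

def prod_N(k):
    """N_1 * ... * N_k for N_i = i!, as an exact integer."""
    p = 1
    for i in range(1, k + 1):
        p *= factorial(i)
    return p

def a(k):
    """a_k = 2**-k / (N_1 ... N_k), as an exact rational."""
    return Fraction(1, 2 ** k * prod_N(k))

# First condition of Eq. (4): sum_k a_k N_1...N_k = sum_k 2**-k -> 1.
print(float(sum(a(k) * prod_N(k) for k in range(1, 41))))  # ~1.0
# Second condition: a_k N_1...N_{k+1} = 2**-k (k+1)! grows without bound.
print([float(a(k) * prod_N(k) * factorial(k + 1)) for k in (5, 10, 15)])
# Siblings are far apart; extending a sequence by one symbol moves it a little.
print(rho((1,), (2,)), rho((1,), (1, 2)))  # 1.0 0.25
```

The last line illustrates the geometry behind Eq. (3): a finite sequence sits at distance γ_{|z|} − γ_{|z|+1} from each of its many children, so small balls around a point of C are dominated by the atomic mass of nearby finite sequences.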
The proof, provided in Appendix C in [26], first characterizes the Voronoi cells for which the true majority-vote yields a significant error for the cell (Lemma 15). In finite-dimensional spaces, the total measure of all such “bad” cells can be made arbitrarily close to zero by taking γ to be sufficiently small, as shown in Lemma 10 of Theorem 2. However, it is not immediately clear whether this can be achieved for the infinite dimensional construction above. Indeed, we expect such bad cells, due to the unintuitive property that for any x ∈C, we have µ( ¯Bγ(x) ∩C)/µ( ¯Bγ(x)) →0 when γ →0, and yet µ(C) > 0. Thus, if for example a significant 8 portion of the set C (whose label is 1) is covered by Voronoi cells of the form V = ¯Bγ(x) with x ∈C, then for all sufficiently small γ, each one of these cells will have a true majority-vote 0. Thus a significant portion of C would be misclassified. However, we show that by the structure of the construction, combined with the packing and covering properties of γ-nets, we have that in any γ-net, the total measure of all these “bad” cells goes to 0 when γ →0, thus yielding a consistent classifier. Lastly, the following theorem shows that on the same construction above, when the Voronoi partitions are allowed to violate the packing property of γ-nets, Bayes-consistency does not necessarily hold. Theorem 5 immediately follows from the following result. Theorem 7. Assume (X, ρ), Y and ¯µ as in Example 1. There exists a sequence of Voronoi partitions (Pk)k∈N of X with maxV ∈Pk diam(V ) ≤γk such that the sequence of true majority-vote classifiers (hPk)k∈N induced by these partitions is not Bayes consistent: lim infk→∞err(hPk) = α > 0. The proof, provided in Appendix D, constructs a sequence of Voronoi partitions, where each partition Pk has all of its impure Voronoi cells (those with both 0 and 1 labels) being bad. In this case, C is incorrectly classified by hPk, yielding a significant error. 
Thus, in infinite-dimensional metric spaces, the shape of the Voronoi cells plays a fundamental role in the consistency of the partition. Acknowledgments. We thank Frédéric Cérou for the numerous fruitful discussions and helpful feedback on an earlier draft. Aryeh Kontorovich was supported in part by the Israel Science Foundation (grant No. 755/15), Paypal and IBM. Sivan Sabato was supported in part by the Israel Science Foundation (grant No. 555/15). References [1] Christophe Abraham, Gérard Biau, and Benoît Cadre. On the kernel rule for function classification. Ann. Inst. Statist. Math., 58(3):619–633, 2006. [2] Daniel Berend and Aryeh Kontorovich. The missing mass problem. Statistics & Probability Letters, 82(6):1102–1110, 2012. [3] Daniel Berend and Aryeh Kontorovich. On the concentration of the missing mass. Electronic Communications in Probability, 18(3):1–7, 2013. [4] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In ICML ’06: Proceedings of the 23rd international conference on Machine learning, pages 97–104, New York, NY, USA, 2006. ACM. [5] Gérard Biau, Florentina Bunea, and Marten H. Wegkamp. Functional classification in Hilbert spaces. IEEE Trans. Inform. Theory, 51(6):2163–2172, 2005. [6] Gérard Biau, Frédéric Cérou, and Arnaud Guyader. Rates of convergence of the functional k-nearest neighbor estimate. IEEE Trans. Inform. Theory, 56(4):2034–2040, 2010. [7] V. I. Bogachev. Measure theory. Vol. I, II. Springer-Verlag, Berlin, 2007. [8] Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008. [9] Frédéric Cérou and Arnaud Guyader. Nearest neighbor classification in infinite dimension. ESAIM: Probability and Statistics, 10:340–355, 2006. [10] Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. In NIPS, 2014. [11] Thomas M. Cover and Peter E. Hart. Nearest neighbor pattern classification. 
IEEE Transactions on Information Theory, 13:21–27, 1967. [12] Luc Devroye. On the inequality of Cover and Hart in nearest neighbor discrimination. IEEE Trans. Pattern Anal. Mach. Intell., 3(1):75–78, 1981. [13] Luc Devroye and László Györfi. Nonparametric density estimation: the L1 view. Wiley Series in Probability and Mathematical Statistics: Tracts on Probability and Statistics. John Wiley & Sons, Inc., New York, 1985. [14] Luc Devroye, László Györfi, and Gábor Lugosi. A probabilistic theory of pattern recognition, volume 31. Springer Science & Business Media, 2013. [15] Herbert Federer. Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag New York Inc., New York, 1969. [16] Evelyn Fix and J. L. Hodges, Jr. Discriminatory analysis. Nonparametric discrimination: Consistency properties. International Statistical Review / Revue Internationale de Statistique, 57(3):238–247, 1989. [17] Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995. [18] Lee-Ad Gottlieb, Aryeh Kontorovich, and Robert Krauthgamer. Efficient classification for metric data (extended abstract COLT 2010). IEEE Transactions on Information Theory, 60(9):5750–5759, 2014. [19] Lee-Ad Gottlieb, Aryeh Kontorovich, and Robert Krauthgamer. Adaptive metric dimensionality reduction. Theoretical Computer Science, 620:105–118, 2016. [20] Lee-Ad Gottlieb, Aryeh Kontorovich, and Pinhas Nisnevitch. Near-optimal sample compression for nearest neighbors. In Neural Information Processing Systems (NIPS), 2014. [21] Lee-Ad Gottlieb, Aryeh Kontorovich, and Pinhas Nisnevitch. Nearly optimal classification for semimetrics (extended abstract AISTATS 2016). Journal of Machine Learning Research, 2017. [22] Thore Graepel, Ralf Herbrich, and John Shawe-Taylor. PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification.
Machine Learning, 59(1):55–76, 2005. [23] Peter Hall and Kee-Hoon Kang. Bandwidth choice for nonparametric classification. Ann. Statist., 33(1):284–306, 2005. [24] Olav Kallenberg. Foundations of modern probability. Second edition. Probability and its Applications. Springer-Verlag, 2002. [25] Aryeh Kontorovich, Sivan Sabato, and Ruth Urner. Active nearest-neighbor learning in metric spaces. In Advances in Neural Information Processing Systems, pages 856–864, 2016. [26] Aryeh Kontorovich, Sivan Sabato, and Roi Weiss. Nearest-neighbor sample compression: Efficiency, consistency, infinite dimensions. CoRR, abs/1705.08184, 2017. [27] Aryeh Kontorovich and Roi Weiss. A Bayes consistent 1-NN classifier. In Artificial Intelligence and Statistics (AISTATS 2015), 2014. [28] Aryeh Kontorovich and Roi Weiss. Maximum margin multiclass nearest neighbors. In International Conference on Machine Learning (ICML 2014), 2014. [29] Robert Krauthgamer and James R. Lee. Navigating nets: Simple algorithms for proximity search. In 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 791–801, January 2004. [30] Sanjeev R. Kulkarni and Steven E. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Trans. Inform. Theory, 41(4):1028–1039, 1995. [31] Nick Littlestone and Manfred K. Warmuth. Relating data compression and learnability. Unpublished manuscript, 1986. [32] James R. Munkres. Topology: a first course. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1975. [33] Vladimir Pestov. On the geometry of similarity search: dimensionality curse and concentration of measure. Inform. Process. Lett., 73(1-2):47–51, 2000. [34] Vladimir Pestov. Is the k-NN classifier in high dimensions affected by the curse of dimensionality? Comput. Math. Appl., 65(10):1427–1437, 2013. [35] David Preiss. Invalid Vitali theorems. Abstracta. 7th Winter School on Abstract Analysis, pages 58–60, 1979. [36] David Preiss. Gaussian measures and the density theorem. Comment. Math.
Univ. Carolin., 22(1):181–193, 1981. [37] Demetri Psaltis, Robert R. Snapp, and Santosh S. Venkatesh. On the finite sample performance of the nearest neighbor classifier. IEEE Transactions on Information Theory, 40(3):820–837, 1994. [38] Walter Rudin. Principles of mathematical analysis. McGraw-Hill Book Co., New York, third edition, 1976. International Series in Pure and Applied Mathematics. [39] Walter Rudin. Real and Complex Analysis. McGraw-Hill, 1987. [40] Richard J. Samworth. Optimal weighted nearest neighbour classifiers. Ann. Statist., 40(5):2733– 2763, 10 2012. [41] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. [42] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998. [43] Robert R. Snapp and Santosh S. Venkatesh. Asymptotic expansions of the k nearest neighbor risk. Ann. Statist., 26(3):850–878, 1998. [44] Charles J. Stone. Consistent nonparametric regression. The Annals of Statistics, 5(4):595–620, 1977. [45] Jaroslav Tišer. Vitali covering theorem in Hilbert space. Trans. Amer. Math. Soc., 355(8):3277– 3289, 2003. [46] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009. [47] Lin Cheng Zhao. Exponential bounds of mean error for the nearest neighbor estimates of regression functions. J. Multivariate Anal., 21(1):168–178, 1987. 11 | 2017 | 392 |
Causal Effect Inference with Deep Latent-Variable Models

Christos Louizos (University of Amsterdam, TNO Intelligent Imaging) c.louizos@uva.nl
Uri Shalit (New York University, CIMS) uas1@nyu.edu
Joris Mooij (University of Amsterdam) j.m.mooij@uva.nl
David Sontag (Massachusetts Institute of Technology, CSAIL & IMES) dsontag@mit.edu
Richard Zemel (University of Toronto, CIFAR∗) zemel@cs.toronto.edu
Max Welling (University of Amsterdam, CIFAR∗) m.welling@uva.nl

Abstract

Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurements of proxies for confounders. We build on recent advances in latent-variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAE), which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state of the art on previous benchmarks focused on individual treatment effects.

1 Introduction

Understanding the causal effect of an intervention t on an individual with features X is a fundamental problem across many domains. Examples include understanding the effect of medications on a patient’s health, or of teaching methods on a student’s chance of graduation.
With the availability of large datasets in domains such as healthcare and education, there is much interest in developing methods for learning individual-level causal effects from observational data [42, 53, 25, 43].

The most crucial aspect of inferring causal relationships from observational data is confounding. A variable which affects both the intervention and the outcome is known as a confounder of the effect of the intervention on the outcome. On the one hand, if such a confounder can be measured, the standard way to account for its effect is by “controlling” for it, often through covariate adjustment or propensity score re-weighting [39]. On the other hand, if a confounder is hidden or unmeasured, it is impossible in the general case (i.e. without further assumptions) to estimate the effect of the intervention on the outcome [40]. For example, socio-economic status can affect both the medication a patient has access to, and the patient’s general health. Therefore socio-economic status acts as a confounder between the medication and health outcomes, and without measuring it we cannot in general isolate the causal effect of medications on health measures. Henceforth we will denote observed potential confounders by X, and unobserved confounders by Z.

Figure 1: Example of a proxy variable. t is a treatment, e.g. medication; y is an outcome, e.g. mortality. Z is an unobserved confounder, e.g. socio-economic status; and X is a set of noisy views on the hidden confounder Z, say income in the last year and place of residence.

In most real-world observational studies we cannot hope to measure all possible confounders. For example, in many studies we cannot measure variables such as personal preferences or most genetic and environmental factors.

∗Canadian Institute For Advanced Research

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
An extremely common practice in these cases is to rely on so-called “proxy variables” [38, 6, 36, Ch. 11]. For example, we cannot measure the socio-economic status of patients directly, but we might be able to get a proxy for it by knowing their zip code and job type. One of the promises of using big data for causal inference is the existence of myriad proxy variables for unmeasured confounders.

How should one use these proxy variables? The answer depends on the relationship between the hidden confounders, their proxies, the intervention and the outcome [31, 37]. Consider for example the causal graph in Figure 1: it is well known [20, 15, 18, 31, 41] that it is often incorrect to treat the proxies X as if they were ordinary confounders, as this would induce bias. See the Appendix for a simple example of this phenomenon. The aforementioned papers give methods which are guaranteed to recover the true causal effect when proxies are observed. However, the strong guarantees these methods enjoy rely on strong assumptions. In particular, it is assumed that the hidden confounder is either categorical with a known number of categories, or that the model is linear-Gaussian.

In practice, we cannot know the exact nature of the hidden confounder Z: whether it is categorical or continuous, or, if categorical, how many categories it includes. Consider socio-economic status (SES) and health. Should we conceive of SES as a continuous or ordinal variable? Perhaps SES as a confounder is comprised of two dimensions, the economic one (related to wealth and income) and the social one (related to education and cultural capital). Z might even be a mix of continuous and categorical, or be high-dimensional itself. This uncertainty makes causal inference a very hard problem even with proxies available.
We propose an alternative approach to causal effect inference tailored to the surrogate-rich setting where many proxies are available: estimation of a latent-variable model in which we simultaneously discover the hidden confounders and infer how they affect treatment and outcome. Specifically, we focus on (approximate) maximum-likelihood-based methods. Although in many cases learning latent-variable models is computationally intractable [50, 7], the machine learning community has made significant progress in the past few years developing computationally efficient algorithms for latent-variable modeling. These include methods with provable guarantees, typically based on the method of moments (e.g. Anandkumar et al. [4]), as well as robust, fast heuristics such as variational autoencoders (VAEs) [27, 46], based on stochastic optimization of a variational lower bound on the likelihood, using so-called recognition networks for approximate inference.

Our paper builds upon VAEs. This has the disadvantage that little theory is currently available to justify when learning with VAEs can identify the true model. However, they have the significant advantage that they make substantially weaker assumptions about the data generating process and the structure of the hidden confounders. Since their recent introduction, VAEs have been shown to be remarkably successful in capturing latent structure across a wide range of previously difficult problems, such as modeling images [19], volumes [24], time-series [10] and fairness [34].

(Footnote 2: X also includes observed covariates which do not affect the intervention or outcome, and therefore are not truly confounders.)

We show that in the presence of noisy proxies, our method is more robust against hidden confounding, in experiments where we successively add noise to known confounders. Towards that end we introduce a new causal inference benchmark using data about twin births and mortalities in the USA.
We further show that our method is competitive on two existing causal inference benchmarks. Finally, we note that our method does not currently deal with the related problem of selection bias, which we leave to future work.

Related work. Proxy variables and the challenges of using them correctly have long been considered in the causal inference literature [54, 14]. Understanding the best way to derive and measure possible proxy variables is an important part of many observational studies [13, 29, 55]. Recent work by Cai and Kuroki [9] and Greenland and Lash [18], building on the work of Greenland and Kleinbaum [17] and Selén [47], has studied conditions for causal identifiability using proxy variables. The general idea is that in many cases one should first attempt to infer the joint distribution p(X, Z) between the proxy and the hidden confounders, and then use that knowledge to adjust for the hidden confounders [55, 41, 32, 37, 12]. For the example in Figure 1, Cai and Kuroki [9], Greenland and Lash [18] and Pearl [41] show that if Z and X are categorical, with X having at least as many categories as Z, and with the matrix p(X, Z) being full-rank, one can identify the causal effect of t on y using a simple matrix inversion formula, an approach called “effect restoration”. Conditions under which one can identify more general and complicated proxy models were recently given by [37].

2 Identification of causal effect

Throughout this paper we assume the causal model in Figure 1. For simplicity and compatibility with prior benchmarks we assume that the treatment t is binary, but our proposed method does not rely on that. We further assume that the joint distribution p(Z, X, t, y) of the latent confounders Z and the observed confounders X can be approximately recovered solely from the observations (X, t, y).
While this is impossible if the hidden confounder has no relation to the observed variables, there are many cases where this is possible, as mentioned in the introduction. For example: if X includes three independent views of Z [4, 22, 16, 2]; if Z is categorical and X is a Gaussian mixture model with components determined by Z [5]; or if Z is comprised of binary variables and X are so-called “noisy-or” functions of Z [23, 8]. Recent results show that certain VAEs can recover a very large class of latent-variable models [51] as the minimizer of an optimization problem; the caveat is that the optimization process is not guaranteed to achieve the true minimum even if it is within the capacity of the model, similar to classic universal approximation results for neural networks.

2.1 Identifying individual treatment effect

Our goal in this paper is to recover the individual treatment effect (ITE), also known as the conditional average treatment effect (CATE), of a treatment t, as well as the average treatment effect (ATE):

ITE(x) := E[y | X = x, do(t = 1)] − E[y | X = x, do(t = 0)],    ATE := E[ITE(X)]

Identification in our case is an immediate result of Pearl’s back-door adjustment formula [40]:

Theorem 1. If we recover p(Z, X, t, y) then we recover the ITE under the causal model in Figure 1.

Proof. We will prove that p(y | X, do(t = 1)) is identifiable under the premise of the theorem. The case for t = 0 is identical, and the expectations in the definition of ITE above are readily recovered from the probability function. ATE is identified if ITE is identified. We have that:

p(y | X, do(t = 1)) = ∫_Z p(y | X, do(t = 1), Z) p(Z | X, do(t = 1)) dZ
                    = ∫_Z p(y | X, t = 1, Z) p(Z | X) dZ,    (1)

where the second equality is by the rules of do-calculus applied to the causal graph in Figure 1 [40]. This completes the proof, since the quantities in the final expression of Eq. (1) can be identified from the distribution p(Z, X, t, y), which we know by the theorem’s premise.
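To make Eq. (1) concrete, the adjustment can be approximated by simple Monte Carlo whenever p(Z | X) can be sampled and E[y | X, t, Z] evaluated. The sketch below is illustrative only: `p_z_given_x` and `e_y` are hypothetical toy conditionals, not part of the paper's model.

```python
import random

# Toy discrete confounder Z in {0, 1}; both functions below are hypothetical.
def p_z_given_x(x):
    """Posterior P(Z = 1 | X = x) over the confounder given proxies."""
    return 0.8 if x > 0 else 0.2

def e_y(x, t, z):
    """Outcome model E[y | X = x, t, Z = z] with an additive effect of 0.4."""
    return 0.3 + 0.4 * t + 0.2 * z

def ite(x, n_samples=1000, seed=0):
    """Eq. (1): ITE(x) = E_{Z ~ p(Z|X=x)}[ E[y|x,t=1,Z] - E[y|x,t=0,Z] ]."""
    rng = random.Random(seed)
    pz1 = p_z_given_x(x)
    total = 0.0
    for _ in range(n_samples):
        z = 1 if rng.random() < pz1 else 0
        total += e_y(x, 1, z) - e_y(x, 0, z)
    return total / n_samples
```

Because the toy effect is additive, the estimate is 0.4 for every x; with a non-additive outcome model the same estimator averages the z-dependent contrasts under p(Z | X).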
Note that the proof and the resulting estimator in Eq. (1) would be identical whether or not there is an edge from X to t. This is because we intervene on t. Also note that for the model in Figure 1, y is independent of X given Z, and we obtain: p(y | X, do(t = 1)) = ∫_Z p(y | t = 1, Z) p(Z | X) dZ. In the next section we will show how we estimate p(Z, X, t, y) from observations of (X, t, y).

3 Causal effect variational autoencoder

Figure 2: Overall architecture of the model and inference networks for the Causal Effect Variational Autoencoder (CEVAE). (a) Inference network, q(z, t, y|x). (b) Model network, p(x, z, t, y). White nodes correspond to parametrized deterministic neural network transitions, gray nodes correspond to drawing samples from the respective distribution, and white circles correspond to switching paths according to the treatment t.

The approach we take in this paper to the problem of learning the latent-variable causal model is to use variational autoencoders [27, 46] to infer the complex non-linear relationships between X and (Z, t, y) and approximately recover p(Z, X, t, y). Recent work has dramatically increased the range and type of distributions which can be captured by VAEs [51, 45, 28]. The drawback of these methods is that, because of the difficulty of guaranteeing global optima in neural network optimization, one cannot ensure that any given instance will find the true model even if it is within the model class. We believe this drawback is offset by the strong empirical performance of deep neural networks in general, and VAEs in particular, across many domains. Specifically, we propose to parametrize the causal graph of Figure 1 as a latent-variable model with neural network functions connecting the variables of interest. The flexible non-linear nature of neural networks will hopefully allow us to approximate well the true interactions between the treatment and its effect.
Our design choices are mostly typical for VAEs: we assume the observations factorize conditioned on the latent variables, and use an inference network [27, 46] which follows a factorization of the true posterior. For the generative model we use an architecture inspired by TARnet [48], but instead of conditioning on observations we condition on the latent variables z; see details below. In the following, x_i corresponds to an input datapoint (e.g. the feature vector of a given subject), t_i corresponds to the treatment assignment, y_i to the outcome of the particular treatment, and z_i corresponds to the latent hidden confounder. Each of the corresponding factors is described as:

p(z_i) = ∏_{j=1}^{D_z} N(z_{ij} | 0, 1);    p(x_i | z_i) = ∏_{j=1}^{D_x} p(x_{ij} | z_i);    p(t_i | z_i) = Bern(σ(f_1(z_i))),    (2)

with p(x_{ij} | z_i) being an appropriate probability distribution for covariate j, σ(·) the logistic function, D_x the dimension of x and D_z the dimension of z. For a continuous outcome we parametrize the probability distribution as a Gaussian with its mean given by a TARnet [48] architecture, i.e. a treatment-specific function, and its variance fixed to v̂, whereas for a discrete outcome we use a Bernoulli distribution similarly parametrized by a TARnet:

p(y_i | t_i, z_i) = N(µ = µ̂_i, σ² = v̂),    µ̂_i = t_i f_2(z_i) + (1 − t_i) f_3(z_i)    (3)
p(y_i | t_i, z_i) = Bern(π = π̂_i),    π̂_i = σ(t_i f_2(z_i) + (1 − t_i) f_3(z_i)).    (4)

Note that each of the f_k(·) is a neural network parametrized by its own parameters θ_k for k = 1, 2, 3. As we do not know the hidden confounder z a priori, we have to marginalize over it in order to learn the parameters of the model θ_k. Since the non-linear neural network functions make inference intractable we employ variational inference along with inference networks; these are neural networks that output the parameters of a fixed-form posterior approximation over the latent variables z, e.g. a Gaussian, given the observed variables.
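The generative factors in Eqs. (2)–(3) can be sketched numerically. The snippet below is a minimal illustration, not the paper's TensorFlow/Edward implementation: `mlp` and `make_net` are hypothetical stand-ins for the networks f_1, f_2, f_3, initialized with arbitrary random weights.

```python
import numpy as np

rng = np.random.default_rng(0)
Dz, H = 20, 64  # latent dimension and hidden width (illustrative choices)

def mlp(w1, w2, z):
    # One-hidden-layer network with ELU, standing in for the paper's f_k.
    h = z @ w1
    h = np.where(h > 0, h, np.exp(h) - 1.0)  # ELU nonlinearity
    return h @ w2

def make_net():
    return rng.normal(0, 0.1, (Dz, H)), rng.normal(0, 0.1, (H, 1))

f1, f2, f3 = make_net(), make_net(), make_net()
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def p_t_given_z(z):
    """Eq. (2): p(t|z) = Bern(sigma(f1(z)))."""
    return sigmoid(mlp(*f1, z))

def mean_y_given_tz(t, z):
    """Eq. (3): TARnet-style treatment-specific heads f2 (t=1) and f3 (t=0)."""
    return t * mlp(*f2, z) + (1 - t) * mlp(*f3, z)

z = rng.standard_normal((5, Dz))                        # samples from the N(0, I) prior
t = (rng.random((5, 1)) < p_t_given_z(z)).astype(float)  # sampled treatments
mu = mean_y_given_tz(t, z)                               # outcome means
```

The switch `t * f2(z) + (1 - t) * f3(z)` is what the white circles in Figure 2 depict: the treatment routes each sample through one of two outcome heads.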
By the definition of the model in Figure 1 we can see that the true posterior over Z depends on X, t and y. Therefore we employ the following posterior approximation:

q(z_i | x_i, t_i, y_i) = ∏_{j=1}^{D_z} N(µ_j = µ̄_{ij}, σ²_j = σ̄²_{ij})    (5)
µ̄_i = t_i µ_{t=1,i} + (1 − t_i) µ_{t=0,i},    σ̄²_i = t_i σ²_{t=1,i} + (1 − t_i) σ²_{t=0,i}
µ_{t=0,i}, σ²_{t=0,i} = g_2 ∘ g_1(x_i, y_i),    µ_{t=1,i}, σ²_{t=1,i} = g_3 ∘ g_1(x_i, y_i),

where we similarly use a TARnet [48] architecture for the inference network, i.e. split it for each treatment group in t after a shared representation g_1(x_i, y_i), and each g_k(·) is a neural network with variational parameters φ_k. We can now form a single objective for the inference and model networks, the variational lower bound of this graphical model [27, 46]:

L = Σ_{i=1}^N E_{q(z_i | x_i, t_i, y_i)}[log p(x_i, t_i | z_i) + log p(y_i | t_i, z_i) + log p(z_i) − log q(z_i | x_i, t_i, y_i)].    (6)

Notice that for out-of-sample predictions, i.e. new subjects, we would need to know the treatment assignment t along with its outcome y before inferring the distribution over z. For this reason we introduce two auxiliary distributions that help us predict t_i, y_i for new samples. More specifically, we employ the following distributions for the treatment assignment t and outcome y:

q(t_i | x_i) = Bern(π = σ(g_4(x_i)))    (7)
q(y_i | x_i, t_i) = N(µ = µ̄_i, σ² = v̄),    µ̄_i = t_i (g_6 ∘ g_5(x_i)) + (1 − t_i)(g_7 ∘ g_5(x_i))    (8)
q(y_i | x_i, t_i) = Bern(π = π̄_i),    π̄_i = t_i (g_6 ∘ g_5(x_i)) + (1 − t_i)(g_7 ∘ g_5(x_i)),    (9)

where we choose Eq. (8) for continuous and Eq. (9) for discrete outcomes. To estimate the parameters of these auxiliary distributions we add two extra terms to the variational lower bound:

F_CEVAE = L + Σ_{i=1}^N [log q(t_i = t*_i | x*_i) + log q(y_i = y*_i | x*_i, t*_i)],    (10)

with x*_i, t*_i, y*_i being the observed values for the input, treatment and outcome random variables in the training set. We coin the name Causal Effect Variational Autoencoder (CEVAE) for our method.
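The expectation inside the bound of Eq. (6) is estimated in practice with the reparameterization trick. The following is a minimal, generic sketch (not the paper's code): `log_joint` is a hypothetical placeholder for log p(x_i, t_i | z) + log p(y_i | t_i, z) + log p(z). As a sanity check, when q equals the standard-normal prior and the joint is just that prior, every Monte Carlo term is exactly zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo_terms(mu_q, logvar_q, log_joint, n_samples=1000):
    """Monte Carlo estimate of E_q[log p(x,t,y,z) - log q(z|x,t,y)] for a
    diagonal-Gaussian q, via the reparameterization trick z = mu + std * eps."""
    Dz = mu_q.shape[0]
    std = np.exp(0.5 * logvar_q)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(Dz)
        z = mu_q + std * eps  # z ~ q(z | x, t, y)
        log_q = -0.5 * np.sum(logvar_q + eps**2 + np.log(2 * np.pi))
        total += log_joint(z) - log_q
    return total / n_samples

# Hypothetical joint consisting of the standard-normal prior only:
log_prior = lambda z: -0.5 * np.sum(z**2 + np.log(2 * np.pi))

# With q = p(z) = N(0, I) the bound equals -KL(q || p) = 0.
val = elbo_terms(np.zeros(2), np.zeros(2), log_prior)
```

With a shifted posterior, e.g. q = N(1, 1) in one dimension, the estimator concentrates around −KL(q ‖ p) = −0.5, illustrating that the bound penalizes posteriors that stray from the prior without a compensating likelihood term.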
4 Experiments

Evaluating causal inference methods is always challenging because we usually lack ground truth for the causal effects. Common evaluation approaches include creating synthetic or semi-synthetic datasets, where real data is modified in a way that allows us to know the true causal effect, or real-world data where a randomized experiment was conducted. Here we compare with two existing benchmark datasets where there is no need to model proxies, IHDP [21] and Jobs [33], often used for evaluating individual-level causal inference. In order to specifically explore the role of proxy variables, we create a synthetic toy dataset, and introduce a new benchmark based on data of twin births and deaths in the USA.

For the implementation of our model we used Tensorflow [1] and Edward [52]. For the neural network architecture choices we closely followed [48]; unless otherwise specified we used 3 hidden layers with ELU [11] nonlinearities for the approximate posterior over the latent variables q(Z|X, t, y), the generative model p(X|Z) and the outcome models p(y|t, Z), q(y|t, X). For the treatment models p(t|Z), q(t|X) we used a single-hidden-layer neural network with ELU nonlinearities. Unless mentioned otherwise, we used a 20-dimensional latent variable z and a small weight decay term for all of the parameters, with λ = .0001. Optimization was done with Adamax [26] and a learning rate of 0.01, which was annealed with an exponential decay schedule. We further performed early stopping according to the lower bound on a validation set. To compute the outcomes p(y|X, do(t = 1)) and p(y|X, do(t = 0)) we averaged over 100 samples from the approximate posterior q(Z|X) = Σ_t ∫ q(Z|t, y, X) q(y|t, X) q(t|X) dy.

Throughout this section we compare with several baseline methods. LR1 is logistic regression, LR2 is two separate logistic regressions fit to the treated (t = 1) and control (t = 0) populations. TARnet is a feed-forward neural network architecture for causal inference [48].
4.1 Benchmark datasets

For the first benchmark task we consider estimating the individual and population causal effects on a benchmark dataset introduced by [21]; it is constructed from data obtained from the Infant Health and Development Program (IHDP). Briefly, the confounders x correspond to collected measurements of the children and their mothers used during a randomized experiment that studied the effect of home visits by specialists on future cognitive test scores. The treatment assignment is then “de-randomized” by removing from the treated set children with non-white mothers; for each unit a treated and a control outcome are then simulated, thus allowing us to know the “true” individual causal effects of the treatment. We follow [25, 48] and use 1000 replications of the simulated outcome, along with the same train/validation/testing splits. To measure the accuracy of the individual treatment effect estimation we use the Precision in Estimation of Heterogeneous Effect (PEHE) [21],

PEHE = (1/N) Σ_{i=1}^N ((y_{i1} − y_{i0}) − (ŷ_{i1} − ŷ_{i0}))²,

where y_1, y_0 correspond to the true outcomes under t = 1 and t = 0, respectively, and ŷ_1, ŷ_0 correspond to the outcomes estimated by our model. For the population causal effect we report the absolute error on the Average Treatment Effect (ATE). The results can be seen in Table 1. As we can see, CEVAE has decent performance, comparable to the Balancing Neural Network (BNN) of [25].

Table 1: Within-sample and out-of-sample mean and standard errors for the metrics for the various models on the IHDP dataset.

Method   √ε_PEHE (within-s.)  ε_ATE (within-s.)  √ε_PEHE (out-of-s.)  ε_ATE (out-of-s.)
OLS-1    5.8±.3               .73±.04            5.8±.3               .94±.06
OLS-2    2.4±.1               .14±.01            2.5±.1               .31±.02
BLR      5.8±.3               .72±.04            5.8±.3               .93±.05
k-NN     2.1±.1               .14±.01            4.1±.2               .79±.05
TMLE     5.0±.2               .30±.01            –                    –
BART     2.1±.1               .23±.01            2.3±.1               .34±.02
RF       4.2±.2               .73±.05            6.6±.3               .96±.06
CF       3.8±.2               .18±.01            3.8±.2               .40±.03
BNN      2.2±.1               .37±.03            2.1±.1               .42±.03
CFRW     .71±.0               .25±.01            .76±.0               .27±.01
CEVAE    2.7±.1               .34±.01            2.6±.1               .46±.02

Table 2: Within-sample and out-of-sample policy risk and error on the average treatment effect on the treated (ATT) for the various models on the Jobs dataset.

Method   R_pol (within-s.)  ε_ATT (within-s.)  R_pol (out-of-s.)  ε_ATT (out-of-s.)
LR-1     .22±.0             .01±.00            .23±.0             .08±.04
LR-2     .21±.0             .01±.01            .24±.0             .08±.03
BLR      .22±.0             .01±.01            .25±.0             .08±.03
k-NN     .02±.0             .21±.01            .26±.0             .13±.05
TMLE     .22±.0             .02±.01            –                  –
BART     .23±.0             .02±.00            .25±.0             .08±.03
RF       .23±.0             .03±.01            .28±.0             .09±.04
CF       .19±.0             .03±.01            .20±.0             .07±.03
BNN      .20±.0             .04±.01            .24±.0             .09±.04
CFRW     .17±.0             .04±.01            .21±.0             .09±.03
CEVAE    .15±.0             .02±.01            .26±.0             .03±.01

For the second benchmark we consider the task described in [48] and follow their procedure closely. It uses a dataset obtained from the study of [33, 49], which concerns the effect of job training (treatment) on employment after training (outcome). Due to the fact that a part of the dataset comes from a randomized control trial we can estimate the “true” causal effect. Following [48] we report the absolute error on the Average Treatment effect on the Treated (ATT), which is E[ITE(X) | t = 1]. For the individual causal effect we use the policy risk, which acts as a proxy to the individual treatment effect. The results after averaging over 10 train/validation/test splits can be seen in Table 2. As we can observe, CEVAE is competitive with the state of the art, while overall achieving the best estimate on the out-of-sample ATT.
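For reference, the two evaluation metrics defined above can be written in a few lines. This is a straightforward sketch of the definitions, not the paper's evaluation code; note that Table 1 reports the square root of the PEHE value computed here.

```python
import numpy as np

def pehe(y1, y0, y1_hat, y0_hat):
    """Precision in Estimation of Heterogeneous Effect (Hill, 2011):
    mean squared error between true and estimated individual effects."""
    tau_true = np.asarray(y1) - np.asarray(y0)
    tau_hat = np.asarray(y1_hat) - np.asarray(y0_hat)
    return float(np.mean((tau_true - tau_hat) ** 2))

def abs_ate_error(y1, y0, y1_hat, y0_hat):
    """Absolute error on the Average Treatment Effect."""
    tau_true = np.asarray(y1) - np.asarray(y0)
    tau_hat = np.asarray(y1_hat) - np.asarray(y0_hat)
    return float(abs(tau_true.mean() - tau_hat.mean()))

# A perfect estimator scores zero on both metrics:
y1, y0 = np.array([3.0, 2.0]), np.array([1.0, 1.0])
assert pehe(y1, y0, y1, y0) == 0.0
assert abs_ate_error(y1, y0, y1, y0) == 0.0
```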
4.2 Synthetic experiment on toy data

To illustrate that our model better handles hidden confounders we experiment on a toy simulated dataset where the marginal distribution of X is a mixture of Gaussians, with the hidden variable Z determining the mixture component. We generate synthetic data by the following process:

z_i ~ Bern(0.5);    x_i | z_i ~ N(z_i, σ²_{z1} z_i + σ²_{z0}(1 − z_i))
t_i | z_i ~ Bern(0.75 z_i + 0.25(1 − z_i));    y_i | t_i, z_i ~ Bern(Sigmoid(3(z_i + 2(2t_i − 1)))),    (11)

where σ_{z0} = 3, σ_{z1} = 5 and Sigmoid is the logistic sigmoid function. This generation process introduces hidden confounding between t and y, as t and y depend on the mixture assignment z for x. Since there is significant overlap between the two Gaussian mixture components, we expect that methods which do not model the hidden confounder z will not produce accurate estimates for the treatment effects. We experiment with both a binary z for CEVAE, which is close to the true model, as well as a 5-dimensional continuous z, in order to investigate the robustness of CEVAE w.r.t. model misspecification. We evaluate across sample sizes N ∈ {1000, 3000, 5000, 10000, 30000} and provide the results in Figure 3. We see that no matter how many samples are given, LR1, LR2 and TARnet are not able to improve their error in estimating the ATE directly from the proxies. On the other hand, CEVAE achieves significantly lower error. When the latent model is correctly specified (CEVAE bin) we do better even with a small sample size; when it is not (CEVAE cont) we require more samples for the latent space to imitate more closely the true binary latent variable.

Figure 3: Absolute error of estimating the ATE on samples from the generative process (11). CEVAE bin and CEVAE cont are CEVAE with respectively a binary or a continuous 5-dim latent z. See text above for a description of the other methods.
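The generative process (11) is straightforward to reproduce. The sketch below follows the stated equations directly (`simulate_toy` is our naming for illustration, not from the paper):

```python
import numpy as np

def simulate_toy(n, sigma_z0=3.0, sigma_z1=5.0, rng=None):
    """Sample (x, t, y, z) from the generative process of Eq. (11)."""
    rng = rng or np.random.default_rng(0)
    z = rng.binomial(1, 0.5, size=n)                    # hidden mixture assignment
    scale = np.where(z == 1, sigma_z1, sigma_z0)        # component std deviations
    x = rng.normal(loc=z, scale=scale)                  # proxy: Gaussian mixture over z
    t = rng.binomial(1, 0.75 * z + 0.25 * (1 - z))      # treatment confounded by z
    logits = 3.0 * (z + 2.0 * (2.0 * t - 1.0))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))  # binary outcome
    return x, t, y, z

x, t, y, z = simulate_toy(10000)
# Hidden confounding: t is far more likely when z = 1, and y depends on both.
```

Because the two mixture components overlap heavily, x carries only partial information about z, which is exactly what makes the naive proxy-as-confounder baselines fail in Figure 3.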
4.3 Binary treatment outcome on Twins

We introduce a new benchmark task that utilizes data from twin births in the USA between 1989–1991 [3]. The treatment t = 1 is being born the heavier twin, whereas the outcome corresponds to the mortality of each of the twins in their first year of life. Since we have records for both twins, their outcomes can be considered as the two potential outcomes with respect to the treatment of being born heavier. We only chose twins of the same sex. Since the outcome is thankfully quite rare (3.5% first-year mortality), we further focused on twins such that both were born weighing less than 2kg. We thus have a dataset of 11984 pairs of twins. The mortality rate for the lighter twin is 18.9%, and for the heavier 16.4%, for an average treatment effect of −2.5%. For each twin pair we obtained 46 covariates relating to the parents, the pregnancy and the birth: mother and father education, marital status, race and residence; number of previous births; pregnancy risk factors such as diabetes, renal disease, smoking and alcohol use; quality of care during pregnancy; whether the birth was at a hospital, clinic or home; and number of gestation weeks prior to birth.

In this setting, for each twin pair we observed both the case t = 0 (lighter twin) and t = 1 (heavier twin). In order to simulate an observational study, we selectively hide one of the two twins; if we were to choose at random this would be akin to a randomized trial. In order to simulate the case of hidden confounding with proxies, we based the treatment assignment on a single variable which is highly correlated with the outcome: GESTAT10, the number of gestation weeks prior to birth. It is ordinal with values from 0 to 9, indicating birth before 20 weeks of gestation, birth after 20–27 weeks of gestation, and so on. We then set

t_i | x_i, z_i ~ Bern(σ(w_o⊤ x + w_h (z/10 − 0.1))),    w_o ~ N(0, 0.1 · I),    w_h ~ N(5, 0.1),

where z is GESTAT10 and x are the 45 other features.
We created proxies for the hidden confounder as follows: we coded the 10 GESTAT10 categories with one-hot encoding, replicated 3 times. We then randomly and independently flipped each of these 30 bits. We varied the probability of flipping from 0.05 to 0.5, the latter indicating that there is no direct information about the confounder. We chose three replications following the well-known result that three independent views of a latent feature are what is needed to guarantee that it can be recovered [30, 2, 5]. (Footnote 3: Data taken from the denominator file at http://www.nber.org/data/linked-birth-infant-death-data-vitalstatistics-data.html. Footnote 4: The GESTAT10 partition is given in the original dataset from NBER.) We note that there might still be proxies for the confounder in the other variables, such as the incompetent cervix covariate, which is a known risk factor for early birth.

Having created the dataset, we focus our attention on two tasks: inferring the mortality of the unobserved twin (counterfactual), and inferring the average treatment effect. We compare with TARnet, LR1 and LR2. We vary the number of hidden layers for TARnet and CEVAE (nh in the figures). We note that while TARnet with 0 hidden layers is equivalent to LR2, CEVAE with 0 hidden layers still infers a latent space and is thus different. The results are given respectively in Figures 4(a) (higher is better) and 4(b) (lower is better). For the counterfactual task, we see that for small proxy noise all methods perform similarly. This is probably due to the gestation length feature being very informative; for LR1, the noisy codings of this feature form 6 of the top 10 most predictive features for mortality, the others being sex (males are more at risk), and 3 risk factors: incompetent cervix, mother lung disease, and abnormal amniotic fluid.
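The proxy-construction procedure above can be sketched directly: one-hot encode the 10 GESTAT10 categories, replicate the code three times, and flip each bit independently. `make_noisy_proxies` is our naming for illustration, not the paper's code.

```python
import numpy as np

def make_noisy_proxies(z, flip_prob, n_replications=3, n_categories=10, rng=None):
    """One-hot encode the confounder, replicate it, and independently flip
    each bit with probability flip_prob (0.5 destroys all direct information)."""
    rng = rng or np.random.default_rng(0)
    one_hot = np.eye(n_categories, dtype=int)[z]      # shape (n, 10)
    proxies = np.tile(one_hot, (1, n_replications))   # shape (n, 30)
    flips = rng.random(proxies.shape) < flip_prob
    return np.where(flips, 1 - proxies, proxies)

z = np.array([0, 3, 9])
clean = make_noisy_proxies(z, flip_prob=0.0)
# With no noise, each 10-bit replication is the exact one-hot code of z.
```

At flip probability 0.5 each bit is an unbiased coin regardless of z, so any remaining signal about the confounder must come through other covariates, which is the regime where the experiments above separate CEVAE from the proxy-as-confounder baselines.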
For higher noise, TARnet, LR1 and LR2 see roughly similar degradation in performance; CEVAE, on the other hand, is much more robust to increasing proxy noise because of its ability to infer a cleaner latent state from the noisy proxies. Of particular interest is CEVAE nh = 0, which does much better for counterfactual inference than the equivalent LR2, probably because LR2 is forced to rely directly on the noisy proxies instead of the inferred latent state. For inference of the average treatment effect, we see that at the low noise levels CEVAE does slightly worse than the other methods, with CEVAE nh = 0 doing noticeably worse. However, similar to the counterfactual case, CEVAE is significantly more robust to proxy noise, achieving quite a low error even when the direct proxies are completely useless at noise level 0.5.

Figure 4: Results on the Twins dataset. (a) Area under the curve (AUC) for predicting the mortality of the unobserved twin in a hidden confounding experiment; higher is better. (b) Absolute error of the ATE estimate; lower is better. The dashed black line indicates the error of the naive ATE estimator: the difference between the average treated and average control outcomes. LR1 is logistic regression, LR2 is two separate logistic regressions fit on the treated and control. “nh” is the number of hidden layers used. TARnet with nh = 0 is identical to LR2 and not shown, whereas CEVAE with nh = 0 has a latent space component.

5 Conclusion

In this paper we draw a connection between causal inference with proxy variables and the groundbreaking work in the machine learning community on latent-variable models.
Since almost all observational studies rely on proxy variables, this connection is highly relevant. We introduce a model which is the first attempt at tying these two ideas together: the Causal Effect Variational Autoencoder (CEVAE), a neural network latent-variable model used for estimating individual and population causal effects. In extensive experiments we showed that it is competitive with the state of the art on benchmark datasets, and more robust to hidden confounding, both on a toy artificial dataset as well as on modifications of real datasets, such as the newly introduced Twins dataset. For future work, we plan to employ the expanding set of tools available for latent-variable models (e.g. Kingma et al. [28], Tran et al. [51], Maaløe et al. [35], Ranganath et al. [44]), as well as to further explore connections between method-of-moments approaches such as Anandkumar et al. [5] and the methods for effect restoration given by Kuroki and Pearl [32] and Miao et al. [37].

Acknowledgements

We would like to thank Fredrik D. Johansson for valuable discussions, feedback and for providing the data for IHDP and Jobs. We would also like to thank Maggie Makar for helping with the Twins dataset. Christos Louizos and Max Welling were supported by TNO, NWO and Google. Joris Mooij was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 639466).

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] E. S. Allman, C. Matias, and J. A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, pages 3099–3132, 2009.
[3] D. Almond, K. Y. Chay, and D. S. Lee. The costs of low birth weight.
The Quarterly Journal of Economics, 120(3):1031–1083, 2005. [4] A. Anandkumar, D. J. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, volume 1, page 4, 2012. [5] A. Anandkumar, R. Ge, D. J. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014. [6] J. D. Angrist and J.-S. Pischke. Mostly harmless econometrics: An empiricist’s companion. Princeton University Press, 2008. [7] S. Arora and R. Kannan. Learning mixtures of separated nonspherical Gaussians. Annals of Applied Probability, pages 69–92, 2005. [8] S. Arora, R. Ge, T. Ma, and A. Risteski. Provable learning of noisy-or networks. CoRR, abs/1612.08795, 2016. URL http://arxiv.org/abs/1612.08795. [9] Z. Cai and M. Kuroki. On identifying total effects in the presence of latent variables and selection bias. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, pages 62–69. AUAI Press, 2008. [10] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015. [11] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015. [12] J. K. Edwards, S. R. Cole, and D. Westreich. All your data are always missing: incorporating bias due to measurement error into the potential outcomes framework. International Journal of Epidemiology, 44(4):1452, 2015. [13] D. Filmer and L. H. Pritchett. Estimating wealth effects without expenditure data—or tears: an application to educational enrollments in states of India. Demography, 38(1):115–132, 2001. [14] P. A. Frost. Proxy variables and specification bias. The Review of Economics and Statistics, pages 323–325, 1979. [15] W. Fuller. Measurement error models.
Wiley Series in Probability and Mathematical Statistics. Wiley, 1987. [16] L. A. Goodman. Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61(2):215–231, 1974. [17] S. Greenland and D. G. Kleinbaum. Correcting for misclassification in two-way tables and matched-pair studies. International Journal of Epidemiology, 12(1):93–97, 1983. [18] S. Greenland and T. Lash. Bias analysis. In Modern Epidemiology, 3rd ed., pages 345–380. Lippincott Williams and Wilkins, 2008. [19] K. Gregor, I. Danihelka, A. Graves, D. Jimenez Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. ArXiv e-prints, Feb. 2015. [20] Z. Griliches and J. A. Hausman. Errors in variables in panel data. Journal of Econometrics, 31(1):93–118, 1986. [21] J. L. Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240, 2011. [22] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460–1480, 2012. [23] Y. Jernite, Y. Halpern, and D. Sontag. Discovering hidden variables in noisy-or networks using quartet tests. In Advances in Neural Information Processing Systems, pages 2355–2363, 2013. [24] D. Jimenez Rezende, S. M. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3D structure from images. ArXiv e-prints, July 2016. [25] F. D. Johansson, U. Shalit, and D. Sontag. Learning representations for counterfactual inference. International Conference on Machine Learning (ICML), 2016. [26] D. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), San Diego, 2015. [27] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR), 2014. [28] D. P. Kingma, T. Salimans, and M. Welling.
Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016. [29] S. Kolenikov and G. Angeles. Socioeconomic status measurement with discrete proxy variables: Is principal component analysis a reliable answer? Review of Income and Wealth, 55(1):128–165, 2009. [30] J. B. Kruskal. More factors than subjects, tests and treatments: an indeterminacy theorem for canonical decomposition and individual differences scaling. Psychometrika, 41(3):281–293, 1976. [31] M. Kuroki and J. Pearl. Measurement bias and effect restoration in causal inference. Technical report, DTIC Document, 2011. [32] M. Kuroki and J. Pearl. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423, 2014. [33] R. J. LaLonde. Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, pages 604–620, 1986. [34] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel. The variational fair autoencoder. International Conference on Learning Representations (ICLR), 2016. [35] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016. [36] G. S. Maddala and K. Lahiri. Introduction to econometrics, volume 2. Macmillan, New York, 1992. [37] W. Miao, Z. Geng, and E. Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. arXiv preprint arXiv:1609.08816, 2016. [38] M. R. Montgomery, M. Gragnolati, K. A. Burke, and E. Paredes. Measuring living standards with proxy variables. Demography, 37(2):155–174, 2000. [39] S. L. Morgan and C. Winship. Counterfactuals and causal inference. Cambridge University Press, 2014. [40] J. Pearl. Causality. Cambridge University Press, 2009. [41] J. Pearl. On measurement bias in causal inference. arXiv preprint arXiv:1203.3504, 2012. [42] J. Pearl. Detecting latent heterogeneity. Sociological Methods & Research, page 0049124115600597, 2015. [43] A.
Peysakhovich and A. Lada. Combining observational and experimental data to find heterogeneous treatment effects. arXiv preprint arXiv:1611.02385, 2016. [44] R. Ranganath, D. Tran, J. Altosaar, and D. Blei. Operator variational inference. In Advances in Neural Information Processing Systems, pages 496–504, 2016. [45] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015. [46] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1278–1286, 2014. [47] J. Selén. Adjusting for errors in classification and measurement in the analysis of partly and purely categorical data. Journal of the American Statistical Association, 81(393):75–81, 1986. [48] U. Shalit, F. Johansson, and D. Sontag. Estimating individual treatment effect: generalization bounds and algorithms. ArXiv e-prints, June 2016. [49] J. A. Smith and P. E. Todd. Does matching overcome LaLonde’s critique of nonexperimental estimators? Journal of Econometrics, 125(1):305–353, 2005. [50] B. Thiesson, C. Meek, D. M. Chickering, and D. Heckerman. Learning mixtures of DAG models. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pages 504–513. Morgan Kaufmann Publishers Inc., 1998. [51] D. Tran, R. Ranganath, and D. M. Blei. The variational Gaussian process. International Conference on Learning Representations (ICLR), 2015. [52] D. Tran, A. Kucukelbir, A. B. Dieng, M. Rudolph, D. Liang, and D. M. Blei. Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787, 2016. [53] S. Wager and S. Athey. Estimation and inference of heterogeneous treatment effects using random forests. arXiv preprint arXiv:1510.04342, 2015. [54] M. R. Wickens. A note on the use of proxy variables.
Econometrica: Journal of the Econometric Society, pages 759–761, 1972. [55] J. M. Wooldridge. On estimating firm-level production functions using proxy variables to control for unobservables. Economics Letters, 104(3):112–114, 2009.
Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach

Emmanouil A. Platanios, Carnegie Mellon University, Pittsburgh, PA (e.a.platanios@cs.cmu.edu); Hoifung Poon, Microsoft Research, Redmond, WA (hoifung@microsoft.com); Tom M. Mitchell, Carnegie Mellon University, Pittsburgh, PA (tom.mitchell@cs.cmu.edu); Eric Horvitz, Microsoft Research, Redmond, WA (horvitz@microsoft.com)

Abstract

We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.

1 Introduction

Estimating the accuracy of classifiers is central to machine learning and many other fields. Accuracy is defined as the probability of a system’s output agreeing with the true underlying output, and thus is a measure of the system’s performance. Most existing approaches to estimating accuracy are supervised, meaning that a set of labeled examples is required for the estimation.
Being able to estimate the accuracies of classifiers using only unlabeled data is important for many applications, including: (i) any autonomous learning system that operates under no supervision, as well as (ii) crowdsourcing applications, where multiple workers provide answers to questions, for which the correct answer is unknown. Furthermore, tasks that involve making several predictions which are tied together by logical constraints are abundant in machine learning. As an example, we may have two classifiers in the Never Ending Language Learning (NELL) project [Mitchell et al., 2015] which predict whether noun phrases represent animals or cities, respectively, and we know that something cannot be both an animal and a city (i.e., the two categories are mutually exclusive). In such cases, it is not hard to observe that if the predictions of the system violate at least one of the constraints, then at least one of the system’s components must be wrong. This paper extends this intuition and presents an unsupervised approach (i.e., only unlabeled data are needed) for estimating accuracies that is able to use the information provided by such logical constraints. Furthermore, the proposed approach is also able to use any available labeled data, thus also being applicable to semi-supervised settings.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: System overview diagram. The classifier outputs (corresponding to the function approximation outputs) and the logical constraints make up the system inputs. The representation of the logical constraints in terms of the function approximation error rates is described in section 3.2. In the logical constraints box, blue arrows represent subsumption constraints, and labels connected by a red dashed line represent a mutually exclusive set. Given the inputs, the first step is grounding (computing all feasible ground predicates and rules that the system will need to perform inference over) and is described in section 3.3.2.
In the ground rules box, ∧, ¬, and → correspond to the logical AND, NOT, and IMPLIES. Then, inference is performed in order to infer the most likely truth values of the unobserved ground predicates, given the observed ones and the ground rules (described in detail in section 3.3). The results constitute the outputs of our system and they include: (i) the estimated error rates, and (ii) the most likely target function outputs (i.e., combined predictions).

We consider a “multiple approximations” problem setting in which we have several different approximations, $\hat{f}^d_1, \ldots, \hat{f}^d_{N^d}$, to a set of target boolean classification functions, $f^d : \mathcal{X} \mapsto \{0, 1\}$ for $d = 1, \ldots, D$, and we wish to know the true accuracies of each of these different approximations, using only unlabeled data, as well as the response of the true underlying functions, $f^d$. Each value of $d$ characterizes a different domain (or problem setting) and each domain can be interpreted as a class or category of objects. Similarly, the function approximations can be interpreted as classifying inputs as belonging or not to these categories. We consider the case where we may have a set of logical constraints defined over the domains. Note that, in contrast with related work, we allow the function approximations to provide soft responses in the interval $[0, 1]$ (as opposed to only allowing binary responses — i.e., they can now return the probability for the response being 1), thus allowing modeling of their “certainty”. As an example of this setting, to which we will often refer throughout this paper, let us consider a part of NELL, where the input space of our functions, $\mathcal{X}$, is the space of all possible noun phrases (NPs). Each target function, $f^d$, returns a boolean value indicating whether the input NP belongs to a category, such as “city” or “animal”, and these categories correspond to our domains.
There also exist logical constraints between these categories that may be hard (i.e., strongly enforced) or soft (i.e., enforced in a probabilistic manner). For example, “city” and “animal” may be mutually exclusive (i.e., if an object belongs to “city”, then it is unlikely that it also belongs to “animal”). In this case, the function approximations correspond to different classifiers (potentially using a different set of features / different views of the input data), which may return a probability for a NP belonging to a class, instead of a binary value. Our goal is to estimate the accuracies of these classifiers using only unlabeled data. In order to quantify accuracy, we define the error rate of classifier $j$ in domain $d$ as $e^d_j \triangleq P_{\mathcal{D}}[\hat{f}^d_j(X) \neq f^d(X)]$, for the binary case, for $j = 1, \ldots, N^d$, where $\mathcal{D}$ is the true underlying distribution of the input data. Note that accuracy is equal to one minus error rate. This definition may be relaxed for the case where $\hat{f}^d_j(X) \in [0, 1]$ represents a probability: $e^d_j \triangleq \hat{f}^d_j(X) P_{\mathcal{D}}[f^d(X) \neq 1] + (1 - \hat{f}^d_j(X)) P_{\mathcal{D}}[f^d(X) \neq 0]$, which resembles an expected probability of error. Even though our work is motivated by the use of logical constraints defined over the domains, we also consider the setting where there are no such constraints.

2 Related Work

The literature covers many projects related to estimating accuracy from unlabeled data. The setting we are considering was previously explored by Collins and Singer [1999], Dasgupta et al. [2001], Bengio and Chapados [2003], Madani et al. [2004], Schuurmans et al. [2006], Balcan et al. [2013], and Parisi et al. [2014], among others. Most of their approaches made some strong assumptions, such as assuming independence given the outputs, or assuming knowledge of the true distribution of the outputs. None of the previous approaches incorporated knowledge in the form of logical constraints.
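To make these definitions concrete, here is a minimal sketch with hypothetical toy data; the function names `binary_error_rate` and `relaxed_error_rate` are illustrative, not the paper’s code:

```python
def binary_error_rate(preds, labels):
    """Empirical e = P[f_hat(X) != f(X)] for hard 0/1 predictions."""
    return sum(int(p != y) for p, y in zip(preds, labels)) / len(labels)

def relaxed_error_rate(probs, labels):
    """Expected probability of error for soft predictions in [0, 1]:
    an instance contributes p when its true label is 0, and (1 - p)
    when its true label is 1, matching the relaxed definition above."""
    return sum(p if y == 0 else 1.0 - p for p, y in zip(probs, labels)) / len(labels)

# Hypothetical data: true labels and one classifier's hard / soft outputs.
labels = [1, 0, 1, 0]
hard = [1, 1, 1, 0]
soft = [0.9, 0.8, 0.6, 0.1]
print(binary_error_rate(hard, labels))   # 0.25 (one of four mismatches)
print(relaxed_error_rate(soft, labels))  # close to 0.35
```

Note that the relaxed definition reduces to the binary one when all soft outputs are exactly 0 or 1.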
Collins and Huynh [2014] review many methods that were proposed for estimating the accuracy of medical tests in the absence of a gold standard. This is effectively the same problem that we are considering, applied to the domains of medicine and biostatistics. They present a method for estimating the accuracy of tests, where these tests are applied in multiple different populations (i.e., different input data), while assuming that the accuracies of the tests are the same across the populations, and that the test results are independent conditional on the true “output”. These are similar assumptions to the ones made by several of the other papers already mentioned, but the idea of applying the tests to multiple populations is new and interesting. Platanios et al. [2014] proposed a method relaxing some of these assumptions. They formulated the problem of estimating the error rates of several approximations to a function as an optimization problem that uses agreement rates of these approximations over unlabeled data. Dawid and Skene [1979] were the first to formulate the problem in terms of a graphical model and Moreno et al. [2015] proposed a nonparametric extension to that model applied to crowdsourcing. Tian and Zhu [2015] proposed an interesting max-margin majority voting scheme for combining classifier outputs, also applied to crowdsourcing. However, all of these approaches were outperformed by the models of Platanios et al. [2016], which are most similar to the work of Dawid and Skene [1979] and Moreno et al. [2015]. To the best of our knowledge, our work is the first to use logic for estimating accuracy from unlabeled data and, as shown in our experiments, outperforms all competing methods. Logical constraints provide additional information to the estimation method and this partially explains the performance boost. 
3 Proposed Method

Our method consists of: (i) defining a set of logic rules for modeling the logical constraints between the $f^d$ and the $\hat{f}^d_j$, in terms of the error rates $e^d_j$ and the known logical constraints, and (ii) performing probabilistic inference using these rules as priors, in order to obtain the most likely values of the $e^d_j$ and the $f^d$, which are not observed. The intuition behind the method is that if the constraints are violated for the function approximation outputs, then at least one of these functions has to be making an error. For example, in the NELL case, if two function approximations respond that a NP belongs to the “city” and the “animal” categories, respectively, then at least one of them has to be making an error. We define the form of the logic rules in section 3.2 and then describe how to perform probabilistic inference over them in section 3.3. An overview of our system is shown in figure 1. In the next section we introduce the notion of probabilistic logic, which fuses classical logic with probabilistic reasoning and forms the backbone of our method.

3.1 Probabilistic Logic

In classical logic, we have a set of predicates (e.g., mammal(x) indicating whether x is a mammal, where x is a variable) and a set of rules defined in terms of these predicates (e.g., mammal(x) → animal(x), where “→” can be interpreted as “implies”). We refer to predicates and rules defined for a particular instantiation of their variables as ground predicates and ground rules, respectively (e.g., mammal(whale) and mammal(whale) → animal(whale)). These ground predicates and rules take boolean values (i.e., are either true or false — for rules, the value is true if the rule holds). Our goal is to infer the most likely values for a set of unobserved ground predicates, given a set of observed ground predicate values and logic rules.
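As a small illustration of grounding in classical logic, the rule mammal(x) → animal(x) can be grounded over a toy knowledge base; the dictionary-based representation here is our own sketch, not the paper’s implementation:

```python
# Observed ground predicate values for a toy knowledge base.
facts = {
    ("mammal", "whale"): True,
    ("animal", "whale"): True,
    ("mammal", "rock"): False,
    ("animal", "rock"): False,
}

def implies(p, q):
    """Boolean IMPLIES: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Ground the rule mammal(x) -> animal(x) for every entity we know about.
entities = {e for (_, e) in facts}
grounded = {e: implies(facts[("mammal", e)], facts[("animal", e)]) for e in entities}
print(grounded)  # both ground rules hold for this knowledge base
```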
In probabilistic logic, we are instead interested in inferring the probabilities of these ground predicates and rules being true, given a set of observed ground predicates and rules. Furthermore, the truth values of ground predicates and rules may be continuous and lie in the interval [0, 1], instead of being boolean, representing the probability that the corresponding ground predicate or rule is true. In this case, boolean logic operators, such as AND (∧), OR (∨), NOT (¬), and IMPLIES (→), need to be redefined. For the next section, we will assume their classical logical interpretation.

3.2 Model

As described earlier, our goal is to estimate the true accuracies of each of the function approximations, $\hat{f}^d_1, \ldots, \hat{f}^d_{N^d}$ for $d = 1, \ldots, D$, using only unlabeled data, as well as the response of the true underlying functions, $f^d$. We now define the logic rules that we perform inference over in order to achieve that goal. The rules are defined in terms of the following predicates, for $d = 1, \ldots, D$:

• Function Approximation Outputs: $\hat{f}^d_j(X)$, defined over all approximations $j = 1, \ldots, N^d$, and inputs $X \in \mathcal{X}$, for which the corresponding function approximation has provided a response. Note that the values of these ground predicates lie in [0, 1] due to their probabilistic nature (i.e., they do not have to be binary, as in related work), and some of them are observed.

• Target Function Outputs: $f^d(X)$, defined over all inputs $X \in \mathcal{X}$. Note that, in the purely unsupervised setting, none of these ground predicate values are observed, in contrast with the semi-supervised setting.

• Function Approximation Error Rates: $e^d_j$, defined over all approximations $j = 1, \ldots, N^d$. Note that none of these ground predicate values are observed. The primary goal of this paper is to infer their values.
The goal of the logic rules we define is two-fold: (i) to combine the function approximation outputs in a single output value, and (ii) to account for the logical constraints between the domains. We aim to achieve both goals while accounting for the error rates of the function approximations. We first define a set of rules that relate the function approximation outputs with the true underlying function output. We call this set of rules the ensemble rules and we describe them in the following section. We then discuss how to account for the logical constraints between the domains.

3.2.1 Ensemble Rules

This first set of rules specifies a relation between the target function outputs, $f^d(X)$, and the function approximation outputs, $\hat{f}^d_j(X)$, independent of the logical constraints:

$\hat{f}^d_j(X) \wedge \neg e^d_j \to f^d(X)$, and $\neg\hat{f}^d_j(X) \wedge \neg e^d_j \to \neg f^d(X)$, (1)

$\hat{f}^d_j(X) \wedge e^d_j \to \neg f^d(X)$, and $\neg\hat{f}^d_j(X) \wedge e^d_j \to f^d(X)$, (2)

for $d = 1, \ldots, D$, $j = 1, \ldots, N^d$, and $X \in \mathcal{X}$. In words: (i) the first set of rules states that if a function approximation is not making an error, its output should match the output of the target function, and (ii) the second set of rules states that if a function approximation is making an error, its output should not match the output of the target function. An interesting point to make is that the ensemble rules effectively constitute a weighted majority vote for combining the function approximation outputs, where the weights are determined by the error rates of the approximations. These error rates are implicitly computed based on agreement between the function approximations. This is related to the work of Platanios et al. [2014]. There, the authors try to answer the question of whether consistency in the outputs of the approximations implies correctness. They directly use the agreement rates of the approximations in order to estimate their error rates.
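The weighted-majority-vote reading of rules (1)-(2) can be sketched as follows; this is only an illustrative aggregation with made-up error rates, not the joint probabilistic inference the paper actually performs:

```python
def weighted_vote(outputs, error_rates):
    """Combine soft outputs of several approximations of one target f^d(X).
    Per rules (1)-(2), an approximation with error rate e effectively votes
    (1 - e) for its own output and e for the flipped output."""
    votes = [(1.0 - e) * p + e * (1.0 - p) for p, e in zip(outputs, error_rates)]
    return sum(votes) / len(votes)

# Two approximations on the same input: outputs 0.9 and 0.2, with
# (hypothetical) inferred error rates 0.1 and 0.4.
combined = weighted_vote([0.9, 0.2], [0.1, 0.4])
print(round(combined, 4))  # 0.63
```

A reliable approximation (low error rate) dominates the vote, while one with error rate near 0.5 contributes almost nothing, which is the behavior the ensemble rules encode.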
Thus, there exists an interesting connection in our work in that we also implicitly use agreement rates to estimate error rates, and our results, even though improving upon theirs significantly, reinforce their claim.

Identifiability. Let us consider flipping the values of all error rates (i.e., setting them to one minus their value) and the target function responses. Then, the ensemble logic rules would evaluate to the same value as before (e.g., satisfied or unsatisfied). Therefore, the error rates and the target function values are not identifiable when there are no logical constraints. As we will see in the next section, the constraints may sometimes help resolve this issue as, often, the corresponding logic rules do not exhibit that kind of symmetry. However, for cases where that symmetry exists, we can resolve it by assuming that most of the function approximations have error rates better than chance (i.e., < 0.5). This can be done by considering the two rules $\hat{f}^d_j(X) \to f^d(X)$ and $\neg\hat{f}^d_j(X) \to \neg f^d(X)$, for $d = 1, \ldots, D$, $j = 1, \ldots, N^d$, and $X \in \mathcal{X}$. Note that all these rules imply is that $\hat{f}^d_j(X) = f^d(X)$ (i.e., they represent the prior belief that the function approximations are correct). As will be discussed in section 3.3, in probabilistic frameworks where rules are weighted with a real value in [0, 1], these rules will be given a weight that represents their significance or strength. In such a framework, we can use a smaller weight for these prior belief rules, compared to the remainder of the rules, which would simply correspond to a regularization weight. This weight can be a tunable or even learnable parameter.

3.2.2 Constraints

The space of possible logical constraints is huge; we do not deal with every possible constraint in this paper.
Instead, we focus our attention on two types of constraints that are abundant in structured prediction problems in machine learning, and which are motivated by the use of our method in the context of NELL:

• Mutual Exclusion: If domains $d_1$ and $d_2$ are mutually exclusive, then $f^{d_1} = 1$ implies that $f^{d_2} = 0$. For example, in the NELL setting, if a NP belongs to the “city” category, then it cannot also belong to the “animal” category.

• Subsumption: If $d_1$ subsumes $d_2$, then if $f^{d_2} = 1$, we must have that $f^{d_1} = 1$. For example, in the NELL setting, if a NP belongs to the “cat” category, then it must also belong to the “animal” category.

This set of constraints is sufficient to model most ontology constraints between categories in NELL, as well as a big subset of the constraints more generally used in practice.

Mutual Exclusion Rule. We first define the predicate $\mathrm{ME}(d_1, d_2)$, indicating that domains $d_1$ and $d_2$ are mutually exclusive¹. This predicate has value 1 if domains $d_1$ and $d_2$ are mutually exclusive, and value 0 otherwise, and its truth value is observed for all values of $d_1$ and $d_2$. Furthermore, note that it is symmetric, meaning that if $\mathrm{ME}(d_1, d_2)$ is true, then $\mathrm{ME}(d_2, d_1)$ is also true. We define the mutual exclusion logic rule as:

$\mathrm{ME}(d_1, d_2) \wedge \hat{f}^{d_1}_j(X) \wedge f^{d_2}(X) \to e^{d_1}_j$, (3)

for $d_1 \neq d_2 = 1, \ldots, D$, $j = 1, \ldots, N^{d_1}$, and $X \in \mathcal{X}$. In words, this rule says that if $f^{d_2}(X) = 1$ and domains $d_1$ and $d_2$ are mutually exclusive, then $\hat{f}^{d_1}_j(X)$ must be equal to 0, as it is an approximation to $f^{d_1}(X)$ and ideally we want that $\hat{f}^{d_1}_j(X) = f^{d_1}(X)$. If that is not the case, then $\hat{f}^{d_1}_j$ must be making an error.

Subsumption Rule. We first define the predicate $\mathrm{SUB}(d_1, d_2)$, indicating that domain $d_1$ subsumes domain $d_2$. This predicate has value 1 if domain $d_1$ subsumes domain $d_2$, and 0 otherwise, and its truth value is always observed. Note that, unlike mutual exclusion, this predicate is not symmetric.
We define the subsumption logic rule as:

$\mathrm{SUB}(d_1, d_2) \wedge \neg\hat{f}^{d_1}_j(X) \wedge f^{d_2}(X) \to e^{d_1}_j$, (4)

for $d_1, d_2 = 1, \ldots, D$, $j = 1, \ldots, N^{d_1}$, and $X \in \mathcal{X}$. In words, this rule says that if $f^{d_2}(X) = 1$ and $d_1$ subsumes $d_2$, then $\hat{f}^{d_1}_j(X)$ must be equal to 1, as it is an approximation to $f^{d_1}(X)$ and ideally we want that $\hat{f}^{d_1}_j(X) = f^{d_1}(X)$. If that is not the case, then $\hat{f}^{d_1}_j$ must be making an error.

Having defined all of the logic rules that comprise our model, we now describe how to perform inference under such a probabilistic logic model, in the next section. Inference in this case comprises determining the most likely truth values of the unobserved ground predicates, given the observed predicates and the set of rules that comprise our model.

¹A set of mutually-exclusive domains can be reduced to pairwise ME constraints for all pairs in that set.

3.3 Inference

In section 3.1 we introduced the notion of probabilistic logic and we defined our model in terms of probabilistic predicates and rules. In this section we discuss in more detail the implications of using probabilistic logic, and the way in which we perform inference in our model. There exist various probabilistic logic frameworks, each making different assumptions. In what is arguably the most popular such framework, Markov Logic Networks (MLNs) [Richardson and Domingos, 2006], inference is performed over a constructed Markov Random Field (MRF) based on the model logic rules. Each potential function in the MRF corresponds to a ground rule and takes an arbitrary positive value when the ground rule is satisfied and the value 0 otherwise (the positive values are often called rule weights and can be either fixed or learned). Each variable is boolean-valued and corresponds to a ground predicate. MLNs are thus a direct probabilistic extension to boolean logic. It turns out that due to the discrete nature of the variables in MLNs, inference is NP-hard and can thus be very inefficient.
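To illustrate how rules (3) and (4) attribute errors, here is a boolean sketch; the actual model works with soft truth values and joint inference, and the function names are ours:

```python
def me_rule_body(me, fhat_d1, f_d2):
    """Body of rule (3): ME(d1, d2) AND fhat_j^{d1}(X) AND f^{d2}(X).
    When it holds, the rule forces the error predicate e_j^{d1} to be true."""
    return me and fhat_d1 and f_d2

def sub_rule_body(sub, fhat_d1, f_d2):
    """Body of rule (4): SUB(d1, d2) AND NOT fhat_j^{d1}(X) AND f^{d2}(X)."""
    return sub and (not fhat_d1) and f_d2

# "city" and "animal" are mutually exclusive; a classifier claims X is a
# city while the true label says X is an animal -> the classifier erred.
print(me_rule_body(True, True, True))    # True
# "animal" subsumes "cat"; a classifier denies that X is an animal even
# though X is a cat -> the classifier erred.
print(sub_rule_body(True, False, True))  # True
```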
Part of our goal in this paper is for our method to be applicable at a very large scale (e.g., for systems like NELL). We thus resorted to Probabilistic Soft Logic (PSL) [Bröcheler et al., 2010], which can be thought of as a convex relaxation of MLNs. Note that the model proposed in the previous section, which is also the primary contribution of this paper, can be used with various probabilistic logic frameworks. Our choice, which is described in this section, was motivated by scalability. One could just as easily perform inference for our model using MLNs, or any other such framework.

3.3.1 Probabilistic Soft Logic (PSL)

In PSL, models, which are composed of a set of logic rules, are represented using hinge-loss Markov random fields (HL-MRFs) [Bach et al., 2013]. In this case, inference amounts to solving a convex optimization problem. Variables of the HL-MRF correspond to soft truth values of ground predicates. Specifically, a HL-MRF, $f$, is a probability density over $m$ random variables, $\mathbf{Y} = \{Y_1, \ldots, Y_m\}$ with domain $[0, 1]^m$, corresponding to the unobserved ground predicate values. Let $\mathbf{X} = \{X_1, \ldots, X_n\}$ be an additional set of variables with known values in the domain $[0, 1]^n$, corresponding to observed ground predicate values. Let $\phi = \{\phi_1, \ldots, \phi_k\}$ be a finite set of $k$ continuous potential functions of the form $\phi_j(\mathbf{X}, \mathbf{Y}) = (\max\{\ell_j(\mathbf{X}, \mathbf{Y}), 0\})^{p_j}$, where $\ell_j$ is a linear function of $\mathbf{X}$ and $\mathbf{Y}$, and $p_j \in \{1, 2\}$. We will soon see how these functions relate to the ground rules of the model. Given the above, for a set of non-negative free parameters $\lambda = \{\lambda_1, \ldots, \lambda_k\}$ (i.e., the equivalent of MLN rule weights), the HL-MRF density is defined as:

$f(\mathbf{Y}) = \frac{1}{Z} \exp\left(-\sum_{j=1}^{k} \lambda_j \phi_j(\mathbf{X}, \mathbf{Y})\right)$, (5)

where $Z$ is a normalizing constant so that $f$ is a proper probability density function. Our goal is to infer the most probable explanation (MPE), which consists of the values of $\mathbf{Y}$ that maximize the likelihood of our data².
This is equivalent to solving the following convex problem:

$$\min_{\mathbf{Y} \in [0,1]^m} \sum_{j=1}^{k} \lambda_j \phi_j(\mathbf{X}, \mathbf{Y}). \quad (6)$$

Each variable $X_i$ or $Y_i$ corresponds to a soft truth value (i.e., $Y_i \in [0, 1]$) of a ground predicate. Each function $\ell_j$ corresponds to a measure of the distance to satisfiability of a logic rule. The set of rules used is what characterizes a particular PSL model; the rules represent prior knowledge we might have about the problem we are trying to solve. For our model, these rules were defined in section 3.2. As mentioned above, variables are allowed to take values in the interval $[0, 1]$. We thus need to define what we mean by the truth value of a rule and its distance to satisfiability. For the logical operators AND ($\wedge$), OR ($\vee$), NOT ($\neg$), and IMPLIES ($\rightarrow$), we use the definitions from Łukasiewicz logic [Klir and Yuan, 1995]:

$$P \wedge Q \triangleq \max\{P + Q - 1, 0\}, \quad P \vee Q \triangleq \min\{P + Q, 1\}, \quad \neg P \triangleq 1 - P, \quad P \rightarrow Q \triangleq \min\{1 - P + Q, 1\}.$$

Note that these operators are a simple continuous relaxation of the corresponding boolean operators: for boolean-valued variables, with 0 corresponding to FALSE and 1 to TRUE, they are equivalent. By writing all logic rules in the form $B_1 \wedge B_2 \wedge \cdots \wedge B_s \rightarrow H_1 \vee H_2 \vee \cdots \vee H_t$, it is easy to observe that the distance to satisfiability (i.e., 1 minus the truth value) of a rule evaluates to $\max\{0, \sum_{i=1}^{s} B_i - \sum_{j=1}^{t} H_j + 1 - s\}$.

²As opposed to performing marginal inference, which aims to infer the marginal distribution of these values.

Figure 2: Illustration of the NELL-11 data set constraints. Each box represents a label, each blue arrow represents a subsumption constraint, and each set of labels connected by a red dashed line represents a mutually exclusive set of labels. For example, Animal subsumes Vertebrate, and Bird, Fish, and Mammal are mutually exclusive. (Labels shown: Animal, Vertebrate, Invertebrate, Bird, Fish, Mammal, Arthropod, Mollusk, Location, River, Lake, City, Country.)
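The Łukasiewicz operators and the distance-to-satisfiability formula above translate directly into code; this is an illustrative sketch with our own function names:

```python
def luk_and(p, q):
    """P AND Q = max{P + Q - 1, 0}."""
    return max(p + q - 1.0, 0.0)

def luk_or(p, q):
    """P OR Q = min{P + Q, 1}."""
    return min(p + q, 1.0)

def luk_not(p):
    """NOT P = 1 - P."""
    return 1.0 - p

def luk_implies(p, q):
    """P IMPLIES Q = min{1 - P + Q, 1}."""
    return min(1.0 - p + q, 1.0)

def distance_to_satisfiability(body, head):
    """Distance to satisfiability of B1 AND ... AND Bs -> H1 OR ... OR Ht:
    max{0, sum(B) - sum(H) + 1 - s}, where s = len(body)."""
    return max(0.0, sum(body) - sum(head) + 1.0 - len(body))
```

For boolean inputs (0 or 1) these operators reduce to their classical counterparts, as noted in the text.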
Note that any set of rules of first-order predicate logic can be represented in this form [Bröcheler et al., 2010], and that minimizing this quantity amounts to making the rule "more satisfied". In order to complete our system description we need to describe: (i) how to obtain a set of ground rules and predicates from a set of logic rules of the form presented in section 3.2 and a set of observed ground predicates, and define the objective function of equation 6; and (ii) how to solve the optimization problem of that equation to obtain the most likely truth values for the unobserved ground predicates. These two steps are described in the following two sections.

3.3.2 Grounding

Grounding is the process of computing all possible groundings of each logic rule to construct the inference problem variables and the objective function. As already described in section 3.3.1, the variables $\mathbf{X}$ and $\mathbf{Y}$ correspond to ground predicates and the functions $\ell_j$ correspond to ground rules. The easiest way to ground a set of logic rules would be to go through each one and create a ground rule instance of it, for each possible value of its arguments. However, if a rule depends on $n$ variables and each variable can take $m$ possible values, then $m^n$ ground rules would be generated. For example, the mutual exclusion rule of equation 3 depends on $d_1$, $d_2$, $j$, and $X$, meaning that $D^2 \times N^{d_1} \times |\mathcal{X}|$ ground rule instances would be generated, where $|\mathcal{X}|$ denotes the number of values that $X$ can take. The same applies to predicates; $\hat{f}^{d_1}_j(X)$ would result in $D \times N^{d_1} \times |\mathcal{X}|$ ground instances, which would become variables in our optimization problem. This approach would thus result in a huge optimization problem, rendering it impractical when dealing with large scale problems such as NELL.
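To illustrate the blow-up described above, a naive grounding routine simply enumerates the Cartesian product of the argument domains (a toy sketch with hypothetical sizes, not the paper's implementation):

```python
from itertools import product

def naive_groundings(argument_domains):
    """Enumerate every assignment of a rule's arguments.

    With n arguments of m values each this yields m**n ground rules, which
    is what makes naive grounding impractical at NELL scale.
    """
    return list(product(*argument_domains))

# Toy sizes (hypothetical): D domains, N classifiers per domain, |X| inputs.
# The ME rule has arguments (d1, d2, j, X), so D * D * N * |X| groundings.
D, N, num_inputs = 3, 2, 4
me_rule_groundings = naive_groundings([range(D), range(D), range(N), range(num_inputs)])
```

With the real sizes quoted in the text ($|\mathcal{X}| \approx 554{,}000$ noun phrases), this count becomes enormous, motivating the heuristic procedure described next.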
The key to scaling up the grounding procedure is to notice that many of the possible ground rules are always satisfied (i.e., have distance to satisfiability equal to 0), irrespective of the values of the unobserved ground predicates that they depend upon. These ground rules therefore do not influence the optimization problem solution and can be safely ignored. Since in our model we are only dealing with a small set of predefined logic rule forms, we devised a heuristic grounding procedure that only generates those ground rules and predicates that may influence the optimization. Our grounding algorithm is shown in the supplementary material and is based on the idea that a ground rule is only useful if the function approximation predicate that appears in its body is observed. This approach is orders of magnitude faster than existing state-of-the-art solutions, such as the grounding procedure used by Niu et al. [2011].

3.3.3 Solving the Optimization Problem

For large problems, the objective function of equation 6 will be a sum of potentially millions of terms, each of which involves only a small set of variables. In PSL, this optimization problem is solved using the consensus Alternating Direction Method of Multipliers (ADMM). The approach consists of handling each term in that sum as a separate optimization problem using copies of the corresponding variables, while adding the constraint that all copies of each variable must be equal. This allows the subproblems to be solved completely in parallel and is thus scalable. The algorithm is summarized in the supplementary material; more details on this algorithm and on its convergence properties can be found in the latest PSL paper [Bach et al., 2015]. We propose a stochastic variation of this consensus ADMM method that is even more scalable.
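A minimal sketch of one synchronous consensus ADMM iteration as described above; `local_solve` and the quadratic example in the usage are our own simplifications, not the PSL implementation:

```python
import numpy as np

def consensus_admm(local_solve, k, dim, rho=1.0, iters=500):
    """Consensus ADMM (scaled dual form) over k subproblems sharing one variable.

    Each subproblem j keeps a copy x_j of the shared variable and a dual u_j;
    local_solve(j, v) must return argmin_x [ term_j(x) + (rho/2) ||x - v||^2 ].
    """
    copies = [np.zeros(dim) for _ in range(k)]
    duals = [np.zeros(dim) for _ in range(k)]
    z = np.zeros(dim)
    for _ in range(iters):
        for j in range(k):                       # solvable fully in parallel
            copies[j] = local_solve(j, z - duals[j])
        z = np.mean([x + u for x, u in zip(copies, duals)], axis=0)  # consensus
        for j in range(k):                       # dual (disagreement) update
            duals[j] += copies[j] - z
    return z
```

For instance, for terms $(x - a_j)^2$ the local solve has the closed form $x = (2a_j + \rho v)/(2 + \rho)$, and the consensus variable converges to the mean of the $a_j$.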
During each iteration, instead of solving all subproblems and aggregating their solutions in the consensus variables, we sample $K \ll k$ subproblems to solve. The probability of sampling each subproblem is proportional to the distance of its variable copies from the respective consensus variables.

Table 1: Mean absolute deviation (MAD) of the error rate rankings and the error rate estimates (lower MAD is better), and area under the curve (AUC) of the label estimates (higher AUC is better). The best results for each experiment, across all methods, are shown in bold text and the results for our proposed method (LEE) are highlighted in blue.

           NELL-7                              NELL-11
           MADerr-rank  MADerr   AUCtarget     MADerr-rank  MADerr   AUCtarget
MAJ        7.71         0.238    0.372         7.54         0.303    0.447
AR-2       12.0         0.261    0.378         10.8         0.350    0.455
AR         11.4         0.260    0.374         11.1         0.350    0.477
BEE        6.00         0.231    0.314         5.69         0.291    0.368
CBEE       6.00         0.232    0.314         5.69         0.291    0.368
HCBEE      5.03         0.229    0.452         5.14         0.324    0.462
LEE        3.71         0.152    0.508         4.77         0.180    0.615

           uNELL-All (values ×10⁻²)            uNELL-10% (values ×10⁻²)
           MADerr-rank  MADerr   AUCtarget     MADerr-rank  MADerr   AUCtarget
MAJ        23.3         0.47     99.9          33.3         0.54     87.7
GIBBS-SVM  102.0        2.05     28.6          101.7        2.15     28.2
GD-SVM     26.7         0.42     71.3          93.3         1.90     67.8
DS         170.0        7.08     12.1          180.0        6.96     12.3
AR-2       48.3         2.63     96.7          50.0         2.56     96.4
AR         48.3         2.60     96.7          48.3         2.52     96.4
BEE        40.0         0.60     99.8          31.7         0.64     79.5
CBEE       40.0         0.61     99.8          118.0        45.40    55.4
HCBEE      81.7         2.53     99.4          81.7         2.45     84.9
LEE        30.0         0.37     96.5          30.0         0.43     97.3

           uBRAIN-All (values ×10⁻¹)           uBRAIN-10% (values ×10⁻¹)
           MADerr-rank  MADerr   AUCtarget     MADerr-rank  MADerr   AUCtarget
MAJ        8.76         0.57     8.49          1.52         0.68     7.84
GIBBS-SVM  7.77         0.43     4.65          1.51         0.66     5.28
GD-SVM     7.60         0.44     5.24          1.50         0.68     8.56
DS         7.77         0.44     8.76          1.32         0.63     4.59
AR-2       16.40        0.87     9.71          2.28         0.97     9.89
BEE        7.98         0.40     9.32          1.38         0.63     9.35
CBEE       10.90        0.43     9.34          1.77         0.89     9.30
HCBEE      28.10        0.85     9.20          3.25         0.97     9.37
LEE        7.60         0.38     9.95          1.32         0.47     9.98
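The sampling rule of the proposed stochastic variation can be sketched as follows (our own simplified rendering, not the authors' implementation):

```python
import numpy as np

def sample_subproblems(copies, consensus, K, rng):
    """Choose K of the k subproblems with probability proportional to the
    distance between each subproblem's variable copies and the corresponding
    consensus variables (uniform if all copies already agree)."""
    dist = np.array([np.linalg.norm(x - z) for x, z in zip(copies, consensus)])
    if dist.sum() > 0:
        probs = dist / dist.sum()
    else:
        probs = np.full(len(copies), 1.0 / len(copies))
    return rng.choice(len(copies), size=K, replace=False, p=probs)
```

Subproblems whose copies already agree with the consensus get zero (or uniform) probability, which matches the intuition that only disagreeing subproblems can move the solution.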
The intuition and motivation behind this approach is that, at the solution of the optimization problem, all variable copies should be in agreement with the consensus variables. Therefore, prioritizing subproblems whose variables are in greater disagreement with the consensus variables might facilitate faster convergence. Indeed, this modification to the inference algorithm allowed us to apply our method to the NELL data set and obtain results within minutes instead of hours.

4 Experiments

Our implementation as well as the experiment data sets are available at https://github.com/eaplatanios/makina.

Data Sets. First, we considered the following two data sets with logical constraints:

• NELL-7: Classify noun phrases (NPs) as belonging to a category or not (categories correspond to domains in this case). The categories considered for this data set are Bird, Fish, Mammal, City, Country, Lake, and River. The only constraint considered is that all these categories are mutually exclusive.
• NELL-11: Perform the same task, but with the categories and constraints illustrated in figure 2.

For both of these data sets, we have a total of 553,940 NPs and 6 classifiers, which act as our function approximations and are described in [Mitchell et al., 2015]. Not all of the classifiers provide a response for every input NP. In order to show the applicability of our method in cases where there are no logical constraints between the domains, we also replicated the experiments of Platanios et al. [2014]:

• uNELL: Same task as NELL-7, but without considering the constraints and using 15 categories, 4 classifiers, and about 20,000 NPs per category.
• uBRAIN: Classify which of two 40-second-long story passages corresponds to an unlabeled 40-second time series of functional magnetic resonance imaging (fMRI) neural activity. 11 classifiers were used, and the domain in this case is defined by 11 different locations in the brain, for each of which we have 924 examples.
Additional details can be found in [Wehbe et al., 2014].

Methods. Some of the methods we compare against do not explicitly estimate error rates; rather, they combine the classifier outputs to produce a single label. For these methods, we produce an estimate of the error rate using these labels and compare against this estimate.

1. Majority Vote (MAJ): This is the most intuitive method; it takes the most common output among the provided function approximation responses as the combined output.
2. GIBBS-SVM/GD-SVM: Methods of Tian and Zhu [2015].
3. DS: Method of Dawid and Skene [1979].
4. Agreement Rates (AR): This is the method of Platanios et al. [2014]. It estimates error rates but does not infer the combined label. To that end, we use a weighted majority vote, where the classifiers' predictions are weighted according to their error rates in order to produce a single output label. We also compare against a method denoted AR-2 in our experiments, which is the same method except that only pairwise function approximation agreements are considered.
5. BEE/CBEE/HCBEE: Methods of Platanios et al. [2016].

In the results, LEE stands for Logic Error Estimation and refers to the proposed method of this paper.

Evaluation. We compute the sample error rate estimates using the true target function labels (which are always provided), and we then compute three metrics for each domain and average over domains:

• Error Rank MAD: We rank the function approximations by our estimates and by the sample estimates to produce two vectors with the ranks. We then compute the mean absolute deviation (MAD) between the two vectors, where by MAD we mean the $\ell_1$ norm of the vectors' difference.
• Error MAD: MAD between the vector of our estimates and the vector of the sample estimates, where each vector is indexed by the function approximation index.
• Target AUC: Area under the precision-recall curve for the inferred target function values, relative to the true function values that are observed.

Results. First, note that the largest execution time of our method among all data sets was about 10 minutes, using a 2013 15-inch MacBook Pro; the second best performing method, HCBEE, required about 100 minutes. This highlights the scalability of our approach. Results are shown in table 1.

1. NELL-7 and NELL-11 Data Sets: In this case we have logical constraints, so this set of results is most relevant to the central research claims of this paper (our method was motivated by the use of such logical constraints). Our method outperforms all existing methods, including the state-of-the-art, by a significant margin: both the MADs of the error rate estimation and the AUCs of the target function response estimation are significantly better.
2. uNELL and uBRAIN Data Sets: In this case there exist no logical constraints between the domains. Our method still almost always outperforms the competing methods and, more specifically, it always does so in terms of error rate estimation MAD. This set of results makes it clear that our method can also be used effectively in cases where there are no logical constraints.

Acknowledgements

We would like to thank Abulhair Saparov and Otilia Stretcu for the useful feedback they provided on early versions of this paper. This research was performed during an internship at Microsoft Research, and was also supported in part by NSF under award IIS1250956, and in part by a Presidential Fellowship from Carnegie Mellon University.

References

S. H. Bach, B. Huang, B. London, and L. Getoor. Hinge-loss Markov Random Fields: Convex Inference for Structured Prediction. In Conference on Uncertainty in Artificial Intelligence, 2013.

S. H. Bach, M. Broecheler, B. Huang, and L. Getoor. Hinge-Loss Markov Random Fields and Probabilistic Soft Logic. CoRR, abs/1505.04406, 2015.
URL http://dblp.uni-trier.de/db/journals/corr/corr1505.html#BachBHG15.

M.-F. Balcan, A. Blum, and Y. Mansour. Exploiting Ontology Structures and Unlabeled Data for Learning. In International Conference on Machine Learning, pages 1112–1120, 2013.

Y. Bengio and N. Chapados. Extensions to Metric-Based Model Selection. Journal of Machine Learning Research, 3:1209–1227, 2003.

M. Bröcheler, L. Mihalkova, and L. Getoor. Probabilistic Similarity Logic. In Conference on Uncertainty in Artificial Intelligence, pages 73–82, 2010.

J. Collins and M. Huynh. Estimation of Diagnostic Test Accuracy Without Full Verification: A Review of Latent Class Methods. Statistics in Medicine, 33(24):4141–4169, June 2014.

M. Collins and Y. Singer. Unsupervised Models for Named Entity Classification. In Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.

S. Dasgupta, M. L. Littman, and D. McAllester. PAC Generalization Bounds for Co-training. In Neural Information Processing Systems, pages 375–382, 2001.

A. P. Dawid and A. M. Skene. Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20–28, 1979.

G. J. Klir and B. Yuan. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1995. ISBN 0-13-101171-5.

O. Madani, D. Pennock, and G. Flake. Co-Validation: Using Model Disagreement on Unlabeled Data to Validate Classification Algorithms. In Neural Information Processing Systems, 2004.

T. Mitchell, W. W. Cohen, E. Hruschka Jr., P. Pratim Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. A. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. Never-Ending Learning. In Association for the Advancement of Artificial Intelligence, 2015.

P. G. Moreno, A. Artés-Rodríguez, Y. W. Teh, and F. Perez-Cruz. Bayesian Nonparametric Crowdsourcing. Journal of Machine Learning Research, 16, 2015.

F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up Statistical Inference in Markov Logic Networks Using an RDBMS. Proceedings of the VLDB Endowment, 4(6):373–384, March 2011. ISSN 2150-8097. doi: 10.14778/1978665.1978669. URL http://dx.doi.org/10.14778/1978665.1978669.

F. Parisi, F. Strino, B. Nadler, and Y. Kluger. Ranking and Combining Multiple Predictors Without Labeled Data. Proceedings of the National Academy of Sciences, 2014.

E. A. Platanios, A. Blum, and T. M. Mitchell. Estimating Accuracy from Unlabeled Data. In Conference on Uncertainty in Artificial Intelligence, 2014.

E. A. Platanios, A. Dubey, and T. M. Mitchell. Estimating Accuracy from Unlabeled Data: A Bayesian Approach. In International Conference on Machine Learning, pages 1416–1425, 2016.

M. Richardson and P. Domingos. Markov Logic Networks. Machine Learning, 62(1-2):107–136, 2006.

D. Schuurmans, F. Southey, D. Wilkinson, and Y. Guo. Metric-Based Approaches for Semi-Supervised Regression and Classification. In Semi-Supervised Learning, 2006.

T. Tian and J. Zhu. Max-Margin Majority Voting for Learning from Crowds. In Neural Information Processing Systems, 2015.

L. Wehbe, B. Murphy, P. Talukdar, A. Fyshe, A. Ramdas, and T. Mitchell. Predicting Brain Activity During Story Processing. In review, 2014.
A Decomposition of Forecast Error in Prediction Markets

Miroslav Dudík (Microsoft Research, New York, NY; mdudik@microsoft.com), Sébastien Lahaie (Google, New York, NY; slahaie@google.com), Ryan Rogers (University of Pennsylvania, Philadelphia, PA; rrogers386@gmail.com), Jennifer Wortman Vaughan (Microsoft Research, New York, NY; jenn@microsoft.com)

Abstract

We analyze sources of error in prediction market forecasts in order to bound the difference between a security's price and the ground truth it estimates. We consider cost-function-based prediction markets in which an automated market maker adjusts security prices according to the history of trade. We decompose the forecasting error into three components: sampling error, arising because traders only possess noisy estimates of ground truth; market-maker bias, resulting from the use of a particular market maker (i.e., cost function) to facilitate trade; and convergence error, arising because, at any point in time, market prices may still be in flux. Our goal is to make explicit the tradeoffs between these error components, influenced by design decisions such as the functional form of the cost function and the amount of liquidity in the market. We consider a specific model in which traders have exponential utility and exponential-family beliefs representing noisy estimates of ground truth. In this setting, sampling error vanishes as the number of traders grows, but there is a tradeoff between the other two components. We provide both upper and lower bounds on market-maker bias and convergence error, and demonstrate via numerical simulations that these bounds are tight. Our results yield new insights into the question of how to set the market's liquidity parameter and into the forecasting benefits of enforcing coherent prices across securities.

1 Introduction

A prediction market is a marketplace in which participants can trade securities with payoffs that depend on the outcomes of future events [19].
Consider the simple setting in which we are interested in predicting the outcome of a political election: whether the incumbent or challenger will win. A prediction market might issue a security that pays out $1 per share if the incumbent wins, and $0 otherwise. The market price p of this security should always lie between 0 and 1, and can be construed as an event probability. If a trader believes that the likelihood of the incumbent winning is greater than p, she will buy shares with the expectation of making a profit. Market prices increase when there is more interest in buying and decrease when there is more interest in selling. By this process, the market aggregates traders' information into a consensus forecast, represented by the market price. With sufficient activity, prediction markets are competitive with alternative forecasting methods such as polls [4], but while there is a mature literature on sources of error and bias in polls, the impact of prediction market structure on forecast accuracy is still an active area of research [17].

We consider prediction markets in which all trades occur through a centralized entity known as a market maker. Under this market structure, security prices are dictated by a fixed cost function and the current number of outstanding shares [6]. The basic conditions that a cost function should satisfy to correctly elicit beliefs, while bounding the market maker's loss, are now well understood, chief among them being convexity [1]. Nonetheless, the class of allowable cost functions remains broad, and the literature so far provides little formal guidance on the specific form of cost function to use in order to achieve good forecast accuracy, including how to set the liquidity parameter which controls price responsiveness to trade.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
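The expected-profit reasoning in the election example can be made concrete with a one-line sketch (the function name is ours, not from the paper):

```python
def expected_profit_per_share(belief, price):
    """Expected profit from buying one share of a $1-if-event security at the
    quoted price, under the trader's subjective probability `belief`.

    Positive when belief > price (trader buys), negative when belief < price
    (trader sells or abstains).
    """
    return belief * 1.0 - price
```

For example, a trader who believes the incumbent wins with probability 0.6 expects a positive profit per share at any market price below 0.6.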
In practice, the impact of the liquidity parameter is difficult to quantify a priori, so implementations typically resort to calibrations based on market simulations [8, 18]. Prior work also suggests that maintaining coherence among prices of logically related securities has informational advantages [8], but there has been little work aimed at understanding why. This paper provides a framework to quantify the impact of the choice of cost function on forecast accuracy. We introduce a decomposition of forecast error, in analogy with the bias-variance decomposition familiar from statistics or the approximation-estimation-optimization decomposition for large-scale machine learning [5]. Our decomposition consists of three components. First, there is the sampling error resulting from the fact that the market consists of a finite population of traders, each holding a noisy estimate of ground truth. Second, there is a market-maker bias which stems from the use of a cost function to provide liquidity and induce trade. Third, there is convergence error due to the fact that the market prices may not have fully converged to their equilibrium point. The central contribution of this paper is a theoretical characterization of the market-maker bias and convergence error, the two components of this decomposition that depend on market structure as defined by the form of the cost function and level of liquidity. We consider a tractable model of agent behavior, originally studied by Abernethy et al. [2], in which traders have exponential utility functions and beliefs drawn from an exponential family. Under this model it is possible to characterize the market’s equilibrium prices in terms of the traders’ belief and risk aversion parameters, and thereby quantify the discrepancy between current market prices and ground truth. 
To analyze market convergence, we consider the trader dynamics introduced by Frongillo and Reid [9], under which trading can be viewed as randomized block-coordinate descent on a suitable potential function. Our analysis is local in that the bounds depend on the market equilibrium prices. This allows us to exactly identify the main asymptotic terms of error. We demonstrate via numerical experiments that these asymptotic bounds are accurate early on and therefore can be used to compare market designs. We make the following specific contributions:

1. We precisely define the three components of the forecasting error.
2. We show that the market-maker bias equals $cb \pm O(b^2)$ as $b \to 0$, where $b$ is the liquidity parameter, and $c$ is an explicit constant that depends on the cost function and trader beliefs.
3. We show that the convergence error decreases with the number of trades $t$ as $\gamma^t$ with $\gamma = 1 - \Theta(b)$. We provide explicit upper and lower bounds on $\gamma$ that depend on the cost function and trader beliefs. In the process, we prove a new local convergence bound for block-coordinate descent.
4. We use our explicit formulas for bias and convergence error to compare two common cost functions: independent markets (IND), under which security prices vary independently, and the logarithmic market scoring rule (LMSR) [10], which enforces logical relationships between security prices. We show that at the same value of the market-maker bias, IND requires at least half as many and at most twice as many trades as LMSR to achieve the same convergence error.

We consider a specific utility model (exponential utility), but our bias and convergence analysis immediately carries over if we assume that each trader is optimizing a risk measure (rather than an exponential utility function), similar to the setup of Frongillo and Reid [9]. Exponential utility was chosen because it was previously well studied and allowed us to focus on the analysis of the cost function and liquidity.
The role of the liquidity parameter in trading off the bias and convergence error has been informally recognized in the literature [7, 10, 13], but our precise definition of market-maker bias and explicit formulas for the bias and convergence error are novel. Abernethy et al. [2] provide results that can be used to derive the bias for LMSR, but not for generic cost functions, so they do not enable comparison of biases of different costs. Frongillo and Reid [9] observe that the convergence error can be locally bounded as $\gamma^t$, but they only provide an upper bound and do not show how $\gamma$ is related to the liquidity or cost function. Our analysis establishes both upper and lower bounds on convergence and relates $\gamma$ explicitly to the liquidity and cost function. This is necessary for a meaningful comparison of cost function families. Thus our framework provides the first meaningful way to compare the error tradeoffs inherent in different choices of cost functions and liquidity levels.

2 Preliminaries

We use the notation $[N]$ to denote the set $\{1, \dots, N\}$. Given a convex function $f : \mathbb{R}^d \to \mathbb{R} \cup \{\infty\}$, its effective domain, denoted $\mathrm{dom}\, f$, is the set of points where $f$ is finite. Whenever $\mathrm{dom}\, f$ is non-empty, the conjugate $f^* : \mathbb{R}^d \to \mathbb{R} \cup \{\infty\}$ is defined by $f^*(v) := \sup_{u \in \mathbb{R}^d} [v^\top u - f(u)]$. We write $\|\cdot\|$ for the Euclidean norm. A centralized mathematical reference is provided in Appendix A.¹

Cost-function-based market makers. We study cost-function-based prediction markets [1]. Let $\Omega$ be a finite set of mutually exclusive and exhaustive states of the world. A market administrator, known as the market maker, wishes to elicit information about the likelihood of various states $\omega \in \Omega$, and to that end offers to buy and sell any number of shares of $K$ securities. Securities are associated with coordinates of a payoff function $\varphi : \Omega \to \mathbb{R}^K$, where each share of the $k$th security is worth $\varphi_k(\omega)$ in the event that the true state of the world is $\omega \in \Omega$.
Traders arrive in the market sequentially and trade with the market maker. The market price is fully determined by a convex potential function $C$ called the cost function. In particular, if the market maker has previously sold $s_k \in \mathbb{R}$ shares of each security $k$ and a trader would like to purchase a bundle consisting of $\delta_k \in \mathbb{R}$ shares of each, the trader is charged $C(s + \delta) - C(s)$. The instantaneous price of security $k$ is then $\partial C(s) / \partial s_k$. Note that negative values of $\delta_k$ are allowed and correspond to the trader (short) selling security $k$. Let $\mathcal{M} := \mathrm{conv}\{\varphi(\omega) : \omega \in \Omega\}$ be the convex hull of the set of payoff vectors. It is exactly the set of expectations $\mathbb{E}[\varphi(\omega)]$ across all possible probability distributions over $\Omega$, which we call beliefs. We refer to elements of $\mathcal{M}$ as coherent prices. Abernethy et al. [1] characterize the conditions that a cost function must satisfy in order to guarantee important properties such as bounded loss for the market maker and no possibility of arbitrage. To start, we assume only that $C : \mathbb{R}^K \to \mathbb{R}$ is convex and differentiable and that $\mathcal{M} \subseteq \mathrm{dom}\, C^*$, which corresponds to the bounded loss property.

Example 2.1 (Logarithmic Market Scoring Rule: LMSR [10]). Consider a complete market with a single security for each outcome, worth $1 if that outcome occurs and $0 otherwise, i.e., $\Omega = [K]$ and $\varphi_k(\omega) = \mathbb{1}\{k = \omega\}$ for all $k$. The LMSR cost function and instantaneous security prices are given by

$$C(s) = \log \sum_{k=1}^{K} e^{s_k} \quad \text{and} \quad \frac{\partial C(s)}{\partial s_k} = \frac{e^{s_k}}{\sum_{\ell=1}^{K} e^{s_\ell}}, \quad \forall k \in [K]. \quad (1)$$

Its conjugate is the entropy function, $C^*(\mu) = \sum_k \mu_k \log \mu_k + \mathbb{I}\{\mu \in \Delta_K\}$, where $\Delta_K$ is the simplex in $\mathbb{R}^K$ and $\mathbb{I}\{\cdot\}$ is the convex indicator, equal to zero if its argument is true and infinity if false. Thus, in this case $\mathcal{M} = \Delta_K = \mathrm{dom}\, C^*$. Notice that the LMSR security prices are coherent because they always sum to one. This prevents arbitrage opportunities for traders. Our second running example does not have this property.

Example 2.2 (Sum of Independent LMSRs: IND). Let $\Omega = [K]$ and $\varphi_k(\omega) = \mathbb{1}\{k = \omega\}$ for all $k$.
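A sketch of Eq. (1) in code, stabilized with the usual max-subtraction trick (function names are ours, not from the paper):

```python
import numpy as np

def lmsr_cost(s):
    """LMSR cost C(s) = log sum_k exp(s_k), computed stably."""
    s = np.asarray(s, dtype=float)
    m = s.max()
    return float(m + np.log(np.exp(s - m).sum()))

def lmsr_prices(s):
    """Instantaneous prices dC/ds_k = exp(s_k) / sum_l exp(s_l): a softmax,
    so the prices are coherent (they always sum to one)."""
    s = np.asarray(s, dtype=float)
    e = np.exp(s - s.max())
    return e / e.sum()

def trade_cost(cost_fn, s, delta):
    """Amount charged for buying bundle delta in market state s:
    C(s + delta) - C(s)."""
    return cost_fn(np.asarray(s) + np.asarray(delta)) - cost_fn(s)
```

At the empty state $s = 0$ the prices are uniform, matching the intuition of a market with no trade history.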
The cost function and instantaneous security prices for the sum of independent LMSRs are given by

$$C(s) = \sum_{k=1}^{K} \log(1 + e^{s_k}) \quad \text{and} \quad \frac{\partial C(s)}{\partial s_k} = \frac{e^{s_k}}{1 + e^{s_k}}, \quad \forall k \in [K], \quad (2)$$

with $C^*(\mu) = \sum_k [\mu_k \log \mu_k + (1 - \mu_k) \log(1 - \mu_k)] + \mathbb{I}\{\mu \in [0, 1]^K\}$, $\mathcal{M} = \Delta_K$, and $\mathrm{dom}\, C^* = [0, 1]^K$.

When choosing a cost function, one important consideration is liquidity, that is, how quickly prices change in response to trades. Any cost function $C$ can be viewed as a member of a parametric family of cost functions of the form $C_b(s) := b\, C(s/b)$ across all $b > 0$. With larger values of $b$, larger trades are required to move market prices by some fixed amount, and the worst-case loss of the market maker is larger; with smaller values, small purchases can result in big changes to the market price.

Basic model. In our analysis of error we assume that there exists an unknown true probability distribution $p^{\text{true}} \in \Delta_{|\Omega|}$ over the outcome set $\Omega$. The true expected payoffs of the $K$ market securities are then given by the vector $\mu^{\text{true}} := \mathbb{E}_{\omega \sim p^{\text{true}}}[\varphi(\omega)]$.

¹A longer version of this paper containing the appendix is available on arXiv and the authors' websites.

We assume that there are $N$ traders and that each trader $i \in [N]$ has a private belief $\tilde{p}_i$ over outcomes. We additionally assume that each trader $i$ has a utility function $u_i : \mathbb{R} \to \mathbb{R}$ for wealth and would like to maximize expected utility subject to her beliefs. For now we assume that $u_i$ is differentiable and concave, meaning that each trader is risk averse, though later we focus on exponential utility. The expected utility of trader $i$ owning a security bundle $r_i \in \mathbb{R}^K$ and cash $c_i$ is $U_i(r_i, c_i) := \mathbb{E}_{\omega \sim \tilde{p}_i}[u_i(c_i + \varphi(\omega) \cdot r_i)]$. We assume that each trader begins with zero cash. This is without loss of generality because we could incorporate any initial cash holdings into $u_i$.

3 A Decomposition of Error

In this section, we decompose the market's forecast error into three major components.
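Eq. (2) and the liquidity-scaled family $C_b(s) = b\, C(s/b)$ can be sketched similarly (names are ours, not from the paper):

```python
import numpy as np

def ind_cost(s):
    """Sum of independent LMSRs: C(s) = sum_k log(1 + exp(s_k))."""
    return float(np.logaddexp(0.0, np.asarray(s, dtype=float)).sum())

def ind_prices(s):
    """dC/ds_k = exp(s_k) / (1 + exp(s_k)): a sigmoid per security, so
    prices move independently and need not sum to one (incoherent)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(s, dtype=float)))

def scale_liquidity(cost_fn, b):
    """Liquidity-scaled member of the cost family: C_b(s) = b * C(s / b)."""
    return lambda s: b * cost_fn(np.asarray(s, dtype=float) / b)
```

At $s = 0$ every IND price is 0.5, so for $K > 1$ the prices sum to more than one, illustrating the incoherence that LMSR rules out.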
The first is sampling error, which arises because traders have only noisy observations of the ground truth. The second is market-maker bias, which arises because the shape of the cost function impacts the traders' willingness to invest. Finally, convergence error arises due to the fact that, at any particular point in time, the market prices may not have fully converged. To formalize our decomposition, we introduce two new notions of equilibrium.

Our first notion of equilibrium, called a market-clearing equilibrium, does not assume the existence of a market maker, but rather assumes that traders trade only among themselves, and so no additional securities or cash are available beyond the traders' initial allocations. This equilibrium is described by security prices $\bar{\mu} \in \mathbb{R}^K$ and allocations $(\bar{r}_i, \bar{c}_i)$ of security bundles and cash to each trader $i$ such that, given her allocation, no trader wants to buy or sell any bundle of securities at those prices. Trader bundles and cash are summarized as $\bar{r} = (\bar{r}_i)_{i \in [N]}$ and $\bar{c} = (\bar{c}_i)_{i \in [N]}$.

Definition 3.1 (Market-clearing equilibrium). A triple $(\bar{r}, \bar{c}, \bar{\mu})$ is a market-clearing equilibrium if $\sum_{i=1}^{N} \bar{r}_i = 0$, $\sum_{i=1}^{N} \bar{c}_i = 0$, and for all $i \in [N]$, $0 \in \arg\max_{\delta \in \mathbb{R}^K} U_i(\bar{r}_i + \delta, \bar{c}_i - \delta \cdot \bar{\mu})$. We call $\bar{\mu}$ market-clearing prices if there exist $\bar{r}$ and $\bar{c}$ such that $(\bar{r}, \bar{c}, \bar{\mu})$ is a market-clearing equilibrium. Similarly, we call $\bar{r}$ a market-clearing allocation if there exists a corresponding equilibrium.

The requirements on $\sum_{i=1}^{N} \bar{r}_i$ and $\sum_{i=1}^{N} \bar{c}_i$ guarantee that no additional securities or cash have been created. In other words, there exists some set of trades among traders that would lead to the market-clearing allocation, although the definition says nothing about how the equilibrium is reached. Since we rely on a market maker to orchestrate trade, our markets generally do not reach the market-clearing equilibrium. Instead, we introduce the notion of market-maker equilibrium.
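The three components just described telescope to the total forecast error by construction; a toy numerical sketch (all names and values are hypothetical):

```python
import numpy as np

def decompose_error(mu_true, mu_clearing, mu_star, mu_t):
    """Split mu_true - mu_t into sampling error, market-maker bias, and
    convergence error; the three pieces sum exactly to the total error."""
    sampling = mu_true - mu_clearing       # ground truth vs. market-clearing prices
    bias = mu_clearing - mu_star           # market-clearing vs. market-maker equilibrium
    convergence = mu_star - mu_t           # equilibrium vs. prices at round t
    return sampling, bias, convergence
```

The decomposition is purely algebraic; the substance of the analysis lies in bounding each term separately.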
This equilibrium is again described by a set of security prices $\mu^\star$ and trader allocations $(r^\star_i, c^\star_i)$, summarized as $(r^\star, c^\star)$, such that no trader wants to trade at these prices given her allocation. The difference is that we now require $r^\star$ and $c^\star$ to be reachable via some sequence of trade with the market maker instead of via trade among only the traders, and $\mu^\star$ must be the market prices after such a sequence of trade.

Definition 3.2 (Market-maker equilibrium). A triple $(r^\star, c^\star, \mu^\star)$ is a market-maker equilibrium for cost function $C_b$ if, for the market state $s^\star = \sum_{i=1}^N r^\star_i$, we have $\sum_{i=1}^N c^\star_i = C_b(0) - C_b(s^\star)$, $\mu^\star = \nabla C_b(s^\star)$, and for all $i \in [N]$, $0 \in \arg\max_{\delta\in\mathbb{R}^K} U_i\left(r^\star_i + \delta,\ c^\star_i - C_b(s^\star + \delta) + C_b(s^\star)\right)$.

We call $\mu^\star$ market-maker equilibrium prices if there exist $r^\star$ and $c^\star$ such that $(r^\star, c^\star, \mu^\star)$ is a market-maker equilibrium. Similarly, we call $r^\star$ a market-maker equilibrium allocation if there exists a corresponding equilibrium. We sometimes write $\mu^\star(b; C)$ to show the dependence of $\mu^\star$ on $C$ and $b$. The market-clearing prices $\bar\mu$ and the market-maker equilibrium prices $\mu^\star(b; C)$ are not unique in general, but are unique for the specific utility functions that we study in this paper.

Using these notions of equilibrium, we can formally define our error components. Sampling error is the difference between the true security values and the market-clearing equilibrium prices. The bias is the difference between the market-clearing equilibrium prices and the market-maker equilibrium prices. Finally, the convergence error is the difference between the market-maker equilibrium prices and the market prices $\mu^t(b; C)$ at a particular round $t$. Putting this together, we have that
$$\mu^{\mathrm{true}} - \mu^t(b;C) = \underbrace{\mu^{\mathrm{true}} - \bar\mu}_{\text{Sampling Error}} + \underbrace{\bar\mu - \mu^\star(b;C)}_{\text{Bias}} + \underbrace{\mu^\star(b;C) - \mu^t(b;C)}_{\text{Convergence Error}}. \qquad (3)$$

4 The Exponential Trader Model

For the remainder of the paper, we work with the exponential trader model introduced by Abernethy et al.
[2] in which traders have exponential utility functions and exponential-family beliefs. Under this model, both the market-clearing prices and market-maker equilibrium prices are unique and can be expressed cleanly in terms of potential functions [9], yielding a tractable analysis. The results of this section are immediate consequences of prior work [2, 9], but our equilibrium concepts bring them into a common framework.

We consider a specific exponential family [3] of probability distributions over $\Omega$ defined as $p(\omega; \theta) = e^{\phi(\omega)\cdot\theta - T(\theta)}$, where $\theta \in \mathbb{R}^K$ is the natural parameter of the distribution, and $T$ is the log partition function, $T(\theta) := \log\left(\sum_{\omega\in\Omega} e^{\phi(\omega)\cdot\theta}\right)$. The gradient $\nabla T(\theta)$ coincides with the expectation of $\phi$ under $p(\cdot;\theta)$, and $\operatorname{dom} T^* = \operatorname{conv}\{\phi(\omega) : \omega \in \Omega\} = \mathcal{M}$.

Following Abernethy et al. [2], we assume that each trader $i$ has exponential-family beliefs with natural parameter $\tilde\theta_i$. From the perspective of trader $i$, the expected payoffs of the $K$ market securities can then be expressed as the vector $\tilde\mu_i$ with $\tilde\mu_{i,k} := \sum_{\omega\in\Omega} \phi_k(\omega)\, p(\omega; \tilde\theta_i)$. As in Abernethy et al. [2], we also assume that traders are risk averse with exponential utility for wealth, so the utility of trader $i$ for wealth $W$ is $u_i(W) = -(1/a_i)e^{-a_i W}$, where $a_i$ is the trader's risk aversion coefficient. We assume that the traders' risk aversion coefficients are fixed. Using the definitions of the expected utility $U_i$, the exponential family distribution $p(\cdot;\tilde\theta_i)$, the log partition function $T$, and the exponential utility $u_i$, it is straightforward to show [2] that
$$U_i(r_i, c_i) = -\frac{1}{a_i}\, e^{-T(\tilde\theta_i) - a_i c_i} \sum_{\omega\in\Omega} e^{\phi(\omega)\cdot(\tilde\theta_i - a_i r_i)} = -\frac{1}{a_i}\, e^{T(\tilde\theta_i - a_i r_i) - T(\tilde\theta_i) - a_i c_i}. \qquad (4)$$

Under this trader model, we can use the techniques of Frongillo and Reid [9] to construct potential functions which yield alternative characterizations of the equilibria as solutions of minimization problems. Consider first a market-clearing equilibrium. Define $F_i(s) := \frac{1}{a_i} T(\tilde\theta_i + a_i s)$ for each trader $i$. From Eq.
(4) we can observe that $-F_i(-r_i) + c_i$ is a monotone transformation of trader $i$'s utility. Since each trader's utility is locally maximized at a market-clearing equilibrium, the sum of traders' utilities is also locally maximized, as is $\sum_{i=1}^N(-F_i(-r_i) + c_i)$. Since the equilibrium conditions require that $\sum_{i=1}^N c_i = 0$, the security allocation associated with any market-clearing equilibrium must be a local minimum of $\sum_{i=1}^N F_i(-r_i)$. This idea is formalized in the following theorem. The proof follows from an analysis of the KKT conditions of the equilibrium. (See the appendix for all omitted proofs.)

Theorem 4.1. Under the exponential trader model, a market-clearing equilibrium always exists and market-clearing prices are unique. Market-clearing allocations and prices are exactly the solutions of the following optimization problems:
$$\bar r \in \operatorname*{argmin}_{r:\ \sum_{i=1}^N r_i = 0}\left[\sum_{i=1}^N F_i(-r_i)\right], \qquad \bar\mu = \operatorname*{argmin}_{\mu\in\mathbb{R}^K}\left[\sum_{i=1}^N F_i^*(\mu)\right]. \qquad (5)$$

Using a similar argument, we can show that the allocation associated with any market-maker equilibrium is a local minimum of the function $F(r) := \sum_{i=1}^N F_i(-r_i) + C_b\left(\sum_{i=1}^N r_i\right)$.

Theorem 4.2. Under the exponential trader model, a market-maker equilibrium always exists and equilibrium prices are unique. Market-maker equilibrium allocations and prices are exactly the solutions of the following optimization problems:
$$r^\star \in \operatorname*{argmin}_{r}\, F(r), \qquad \mu^\star = \operatorname*{argmin}_{\mu\in\mathbb{R}^K}\left[\sum_{i=1}^N F_i^*(\mu) + bC^*(\mu)\right]. \qquad (6)$$

Sampling error. We finish this section with an analysis of the first component of error identified in Section 3: the sampling error. We begin by deriving a more explicit form of market-clearing prices:

Theorem 4.3. Under the exponential trader model, the unique market-clearing equilibrium prices can be written as $\bar\mu = \mathbb{E}_{\bar\theta}[\phi(\omega)]$, where $\bar\theta := \left(\sum_{i=1}^N \tilde\theta_i/a_i\right) / \left(\sum_{i=1}^N 1/a_i\right)$ is the risk-aversion-weighted average belief and $\mathbb{E}_{\bar\theta}$ is the expectation under $p(\cdot;\bar\theta)$.

The sampling error arises because the beliefs $\tilde\theta_i$ are only noisy signals of the ground truth.
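Theorem 4.3 is easy to evaluate numerically. The sketch below assumes a complete market, i.e. $\phi(\omega) = e_\omega$, so that $\mathbb{E}_\theta[\phi(\omega)] = \mathrm{softmax}(\theta)$; the function names are ours.

```python
import math

def softmax(v):
    m = max(v)
    z = sum(math.exp(x - m) for x in v)
    return [math.exp(x - m) / z for x in v]

def clearing_prices(thetas, a):
    """Theorem 4.3 for a complete market (phi(omega) = e_omega, so E_theta[phi] = softmax):
    mu_bar = softmax(theta_bar), with theta_bar the risk-aversion-weighted average belief."""
    w = [1.0 / ai for ai in a]          # trader i's weight is 1/a_i
    W = sum(w)
    K = len(thetas[0])
    theta_bar = [sum(wi * th[k] for wi, th in zip(w, thetas)) / W for k in range(K)]
    return softmax(theta_bar)
```

When all beliefs coincide, the clearing prices are each trader's own expected payoffs regardless of the risk aversions; otherwise less risk-averse traders (larger $1/a_i$) pull prices toward their beliefs.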
From Theorem 4.3 we see that this error may be compounded by the weighting according to risk aversions, which can skew the prices. To obtain a concrete bound on the error term $\|\mu^{\mathrm{true}} - \bar\mu\|$, we need to make some assumptions about risk aversion coefficients, the true distribution of the outcome, and how this distribution is related to trader beliefs. For instance, suppose risk aversion coefficients are bounded both from below and above, the true outcome is drawn from an exponential-family distribution with natural parameter $\theta^{\mathrm{true}}$, and the beliefs $\tilde\theta_i$ are independent samples with mean $\theta^{\mathrm{true}}$ and a bounded covariance matrix. Under these assumptions, one can show using standard concentration bounds that with high probability, $\|\mu^{\mathrm{true}} - \bar\mu\| = O(\sqrt{1/N})$ as $N \to \infty$. In other words, market-clearing prices approach the ground truth as the number of traders increases. In Appendix B.4 we make the dependence on risk aversion and belief noise more explicit. The analysis of other information structures (e.g., biased or correlated beliefs) is beyond the scope of this paper; instead, we focus on the two error components that depend on the market design.

5 Market-maker Bias

We now analyze the market-maker bias—the difference between the market-maker equilibrium prices $\mu^\star$ and market-clearing prices $\bar\mu$. We first state a global bound that depends on the liquidity $b$ and cost function $C$, but not on trader beliefs, and show that $\mu^\star \to \bar\mu$ at the rate $O(b)$ as $b \to 0$. The proof builds on Theorems 4.1 and 4.2 and uses the facts that $C^*$ is bounded on $\mathcal{M}$ (by our assumptions on $C$), and the conjugates $F_i^*$ are strongly convex on $\mathcal{M}$ (from properties of the log partition function).

Theorem 5.1 (Global Bias Bound). Under the exponential trader model, for any $C$, there exists a constant $c$ such that $\|\mu^\star(b; C) - \bar\mu\| \le cb$ for all $b \ge 0$.

This result makes use of strong convexity constants that are valid over the entire set $\mathcal{M}$, which can be overly conservative when $\mu^\star$ is close to $\bar\mu$.
Furthermore, it gives us only an upper bound, which cannot be used to compare different cost function families. In the rest of this section we pursue a tighter local analysis, based on the properties of $F_i^*$ and $C^*$ at $\bar\mu$.

Our local analysis requires assumptions that go beyond convexity and differentiability of the cost function. We call the class of functions that satisfy these assumptions convex+ functions. (See Appendix A.3 for their complete treatment and a more general definition than provided here.) These functions are related to functions of Legendre type (see Sec. 26 of Rockafellar [15]). Informally, they are smooth functions that are strictly convex along directions in a certain space (the gradient space) and linear in orthogonal directions. For cost functions, strict convexity means that prices change in response to arbitrarily small trades, while the linear directions correspond to bundles with constant payoffs, whose prices are therefore fixed.

Definition 5.2. Let $f : \mathbb{R}^d \to \mathbb{R}$ be differentiable and convex. Its gradient space is the linear space parallel to the affine hull of its gradients, denoted as $\mathcal{G}(f) := \operatorname{span}\{\nabla f(u) - \nabla f(u') : u, u' \in \mathbb{R}^d\}$.

Definition 5.3. We say that a convex function $f : \mathbb{R}^d \to \mathbb{R}$ is convex+ if it has continuous third derivatives and $\operatorname{range}(\nabla^2 f(u)) = \mathcal{G}(f)$ for all $u \in \mathbb{R}^d$.

It can be checked that if $P$ is a projection on $\mathcal{G}(f)$ then there exists some $a$ such that $f(u) = f(Pu) + a^\intercal u$, so $f$ is, up to a linear term, fully described by its values on $\mathcal{G}(f)$. The condition on the range of the Hessian ensures that $f$ is strictly convex over $\mathcal{G}(f)$, so its gradient map is invertible over $\mathcal{G}(f)$. This means that the Hessian can be expressed as a function of the gradient, i.e., there exists a matrix-valued function $H_f$ such that $\nabla^2 f(u) = H_f(\nabla f(u))$ (see Proposition A.8). The cost functions $C$ for both the LMSR and the sum of independent LMSRs (IND) are convex+.

Example 5.4 (LMSR as a convex+ function).
For LMSR, the gradient space of $C$ is parallel to the simplex: $\mathcal{G}(C) = \{u : 1^\intercal u = 0\}$. The gradients of $C$ are points in the relative interior of the simplex. Given such a point $\mu = \nabla C(s)$, the corresponding Hessian is $\nabla^2 C(s) = H_C(\mu) = (\operatorname{diag}_{k\in[K]} \mu_k) - \mu\mu^\intercal$, where $\operatorname{diag}_{k\in[K]} \mu_k$ denotes the diagonal matrix with values $\mu_k$ on the diagonal. The null space of $H_C(\mu)$ is $\{c1 : c \in \mathbb{R}\}$, so $C$ is linear in the all-ones direction (buying one share of each security always has cost one), but strictly convex in directions from $\mathcal{G}(C)$.

Example 5.5 (IND as a convex+ function). For IND, the gradient space is $\mathbb{R}^K$ and the gradients are the points in $(0,1)^K$. In this case, $H_C(\mu) = \operatorname{diag}_k[\mu_k(1-\mu_k)]$. This matrix has full rank.

Our next theorem shows that for an appropriate vector $u$, which depends on $\bar\mu$ and $C$, we have $\mu^\star(b; C) = \bar\mu + bu + \varepsilon_b$, where $\|\varepsilon_b\| = O(b^2)$. Here, the $O(\cdot)$ is taken as $b \to 0$, so the error term $\varepsilon_b$ goes to zero faster than the term $bu$, which we call the asymptotic bias. Our analysis is local in the sense that the constants hiding within $O(\cdot)$ may depend on $\bar\mu$. This analysis fully uncovers the main asymptotic term and therefore allows comparison of cost families. In our experiments, we show that the asymptotic bias is an accurate estimate of the bias even for moderately large values of $b$.

Theorem 5.6 (Local Bias Bound). Assume that the cost function $C$ is convex+. Then $\mu^\star(b; C) = \bar\mu - b(\bar a/N)\, H_T(\bar\mu)\,\partial C^*(\bar\mu) + \varepsilon_b$, where $\|\varepsilon_b\| = O(b^2)$. In the expression above, $\bar a = N / \left(\sum_{i=1}^N 1/a_i\right)$ is the harmonic mean of risk-aversion coefficients and $H_T(\bar\mu)\,\partial C^*(\bar\mu)$ is guaranteed to consist of a single point even when $\partial C^*(\bar\mu)$ is a set.

The theorem is proved by a careful application of Taylor's Theorem and crucially uses properties of conjugates of convex+ functions, which we derive in Appendix A.3. It gives us a formula to calculate the asymptotic bias for any cost function for a particular value of $\bar\mu$, or evaluate the worst-case bias against some set of possible market-clearing prices.
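Theorem 5.6's leading term can be computed directly in the complete-market case, where $H_T(\mu) = \operatorname{diag}(\mu) - \mu\mu^\intercal$. The subgradient choices below, $\log\mu$ for LMSR and the coordinatewise logit for IND, are our reading of the conjugates in Examples 5.4 and 5.5; constant shifts along the all-ones direction are annihilated by $H_T$, so the choice within $\partial C^*$ does not matter.

```python
import math

def ht_apply(mu, v):
    """H_T(mu) v with H_T(mu) = diag(mu) - mu mu^T (complete-market log-partition Hessian)."""
    dot = sum(m * x for m, x in zip(mu, v))
    return [m * (x - dot) for m, x in zip(mu, v)]

def asymptotic_bias(mu, v, b, a_bar, N):
    """Leading term of Theorem 5.6: mu* - mu_bar ~ -b*(a_bar/N)*H_T(mu) v, v in dC*(mu)."""
    coef = -b * a_bar / N
    return [coef * x for x in ht_apply(mu, v)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def eta(mu):
    """Liquidity-matching ratio of Theorem 5.7: ||H_T dC*_IND|| / ||H_T dC*_LMSR||."""
    v_lmsr = [math.log(m) for m in mu]             # subgradient of sum mu log mu (up to 1-shifts)
    v_ind = [math.log(m / (1.0 - m)) for m in mu]  # gradient of sum [mu log mu + (1-mu) log(1-mu)]
    return norm(ht_apply(mu, v_ind)) / norm(ht_apply(mu, v_lmsr))
```

For $K = 2$ the logit vector differs from $2\log\mu$ by an all-ones shift, so $\eta = 2$ exactly, the extreme of the $[1,2]$ range in Theorem 5.7.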
It also constitutes an important step in comparing cost function families. To compare the convergence error of two costs $C$ and $C'$ in the next section, we require that their liquidities $b$ and $b'$ be set so that they have (approximately) the same bias, i.e., $\|\mu^\star(b'; C') - \bar\mu\| \approx \|\mu^\star(b; C) - \bar\mu\|$. Theorem 5.6 tells us that this can be achieved by the linear rule $b' = b/\eta$ where $\eta = \|H_T(\bar\mu)\,\partial C'^*(\bar\mu)\| \,/\, \|H_T(\bar\mu)\,\partial C^*(\bar\mu)\|$. For $C = \mathrm{LMSR}$ and $C' = \mathrm{IND}$, we prove that the corresponding $\eta \in [1, 2]$. Equivalently, this means that for the same value of $b$ the asymptotic bias of IND is at least as large as that of LMSR, but no more than twice as large:

Theorem 5.7. For any $\bar\mu$ there exists $\eta \in [1,2]$ such that for all $b$, $\|\mu^\star(b/\eta; \mathrm{IND}) - \bar\mu\| = \|\mu^\star(b; \mathrm{LMSR}) - \bar\mu\| \pm O(b^2)$. For this same $\eta$, also $\|\mu^\star(b; \mathrm{IND}) - \bar\mu\| = \eta\,\|\mu^\star(b; \mathrm{LMSR}) - \bar\mu\| \pm O(b^2)$.

Theorem 5.6 also captures an intuitive relationship which can guide the market maker in adjusting the market liquidity $b$ as the number of traders $N$ and their risk aversion coefficients $a_i$ vary. In particular, holding $\bar\mu$ and the cost function fixed, we can maintain the same amount of bias by setting $b \propto N/\bar a$. Note that $1/a_i$ plays the role of the budget of trader $i$ in the sense that at fixed prices, the trader will spend an amount of cash proportional to $1/a_i$. Thus $N/\bar a = \sum_i (1/a_i)$ corresponds to the total amount of available cash among the traders in the market. Similarly, the market maker's worst-case loss, amounting to the market maker's cash, is proportional to $b$, so setting $b \propto \sum_i (1/a_i)$ is natural.

6 Convergence Error

We now study the convergence error, namely the difference between the prices $\mu^t$ at round $t$ and the market-maker equilibrium prices $\mu^\star$. To do so, we must posit a model of how the traders interact with the market. Following Frongillo and Reid [9], we assume that in each round, a trader $i \in [N]$, chosen uniformly at random, buys a bundle $\delta \in \mathbb{R}^K$ that optimizes her utility given the current market state $s$ and her existing security and cash allocations, $r_i$ and $c_i$.
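For a complete-market LMSR with exponential-utility traders, this per-round best response has a closed form (our derivation under the assumption $\phi(\omega) = e_\omega$; the paper does not state it in this form): the first-order condition $\mathrm{softmax}(\tilde\theta_i - a_i(r_i + \delta)) = \mathrm{softmax}((s+\delta)/b)$ is solved exactly by $\delta = \frac{b}{a_i b + 1}\,(\tilde\theta_i - a_i r_i - s/b)$.

```python
import math

def softmax(v):
    m = max(v)
    z = sum(math.exp(x - m) for x in v)
    return [math.exp(x - m) / z for x in v]

def best_response(theta_i, a_i, r_i, s, b):
    """Utility-maximizing bundle for trader i against a complete-market LMSR C_b:
    equates the trader's post-trade belief prices with the post-trade market prices."""
    k = b / (a_i * b + 1.0)
    return [k * (t - a_i * ri - si / b) for t, ri, si in zip(theta_i, r_i, s)]
```

Each such trade exactly minimizes the potential $F$ of Section 4 over the single block $r_i$, which is what makes the block-coordinate-descent view below precise.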
The resulting updates of the allocation vector $r = (r_i)_{i=1}^N$ correspond to randomized block-coordinate descent on the potential function $F(r)$ with blocks $r_i$ (see Appendix D.1 and Frongillo and Reid [9]). We refer to this model as the all-security (trader) dynamics (ASD).^2 We apply and extend the analysis of block-coordinate descent to this setting. We focus on convex+ functions and conduct local convergence analysis around the minimizer of $F$. Our experiments demonstrate that the local analysis accurately estimates the convergence rate.

Let $r^\star$ denote an arbitrary minimizer of $F$ and let $F^\star$ be the minimum value of $F$. Also, let $r^t$ denote the allocation vector and $\mu^t$ the market price vector after the $t$th trade. Instead of directly analyzing the convergence error $\|\mu^t - \mu^\star\|$, we bound the suboptimality $F(r^t) - F^\star$ since $\|\mu^t - \mu^\star\|^2 = \Theta(F(r^t) - F^\star)$ for convex+ costs $C$ under ASD (see Appendix D.7.1).

Convex+ functions are locally strongly convex and have a Lipschitz-continuous gradient, so the standard analysis of block-coordinate descent [9, 11] implies linear convergence, i.e., $\mathbb{E}[F(r^t)] - F^\star \le O(\gamma^t)$ for some $\gamma < 1$, where the expectation is under the randomness of the algorithm. We refine the standard analysis by (1) proving not only upper, but also lower bounds on the convergence rate, and (2) proving an explicit dependence of $\gamma$ on the cost function $C$ and the liquidity $b$. These two refinements are crucial for comparison of cost families, as we demonstrate with the comparison of LMSR and IND. We begin by formally defining bounds on local convergence of any randomized iterative algorithm that minimizes a function $F(r)$ via a sequence of iterates $r^t$.

^2 In Appendix D, we also analyze the single-security (trader) dynamics (SSD), in which a randomly chosen trader randomly picks a single security to trade, corresponding to randomized coordinate descent on $F$.

Definition 6.1.
We say that $\gamma_{\mathrm{high}}$ is an upper bound on the local convergence rate of an algorithm if, with probability 1 under the randomness of the algorithm, the algorithm reaches an iteration $t_0$ such that for some $c > 0$ and all $t \ge t_0$, $\mathbb{E}\left[F(r^t) \mid r^{t_0}\right] - F^\star \le c\,\gamma_{\mathrm{high}}^{\,t - t_0}$. We say that $\gamma_{\mathrm{low}}$ is a lower bound on the local convergence rate if $\gamma_{\mathrm{high}} \ge \gamma_{\mathrm{low}}$ holds for all upper bounds $\gamma_{\mathrm{high}}$.

To state explicit bounds, we use the notation $D := \operatorname{diag}_{i\in[N]} a_i$ and $P := I_N - 11^\intercal/N$, where $I_N$ is the $N \times N$ identity matrix and $1$ is the all-ones vector. We write $M^+$ for the pseudoinverse of a matrix $M$, and $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ for its smallest and largest positive eigenvalues.

Theorem 6.2 (Local Convergence Bound). Assume that $C$ is convex+. Let $H_T := H_T(\bar\mu)$ and $H_C := H_C(\bar\mu)$. For the all-securities dynamics, the local convergence rate is bounded between
$$\gamma^{\mathrm{ASD}}_{\mathrm{high}} = 1 - \frac{2b}{N} \cdot \lambda_{\min}(PDP) \cdot \lambda_{\min}\!\left(H_T^{1/2} H_C^+ H_T^{1/2}\right) + O(b^2),$$
$$\gamma^{\mathrm{ASD}}_{\mathrm{low}} = 1 - \frac{2b}{N} \cdot \lambda_{\max}(PDP) \cdot \lambda_{\max}\!\left(H_T^{1/2} H_C^+ H_T^{1/2}\right) - O(b^2).$$

In our proof, we first establish both lower and upper bounds on convergence of a generic block-coordinate descent that extend the results of Nesterov [11]. We then analyze the behavior of the algorithm for the specific structure of our objective to obtain explicit lower and upper bounds. Our bounds prove linear convergence with the rate $\gamma = 1 - \Theta(b)$. Since the convergence gets worse as $b \to 0$, there is a trade-off with the bias, which decreases as $b \to 0$.

Theorems 5.6 and 6.2 enable systematic quantitative comparisons of cost families. For simplicity, assume that $N \ge 2$ and all risk aversions are $a$, so $\lambda_{\min}(PDP) = \lambda_{\max}(PDP) = a$. To compare convergence rates of two costs $C$ and $C'$, we need to control for bias. As discussed after Theorem 5.6, their biases are (asymptotically) equal if their liquidities are linearly related as $b' = b/\eta$ for a suitable $\eta$.
Theorem 6.2 then states that $C'_{b'}$ requires (asymptotically) at most a factor of $\rho$ as many trades as $C_b$ to achieve the same convergence error, where $\rho := \eta \cdot \lambda_{\max}(H_T^{1/2} H_C^+ H_T^{1/2}) \,/\, \lambda_{\min}(H_T^{1/2} H_{C'}^+ H_T^{1/2})$. Similarly, $C_b$ requires at most a factor of $\rho'$ as many trades as $C'_{b'}$, with $\rho'$ defined symmetrically to $\rho$. For $C = \mathrm{LMSR}$ and $C' = \mathrm{IND}$, we can show that $\rho \le 2$ and $\rho' \le 2$, yielding the following result:

Theorem 6.3. Assume that $N \ge 2$ and all risk aversions are equal to $a$. Consider running LMSR with liquidity $b$ and IND with liquidity $b' = b/\eta$ such that their asymptotic biases are equal. Denote the iterates of the two runs of the market as $\mu^t_{\mathrm{LMSR}}$ and $\mu^t_{\mathrm{IND}}$ and the respective market-maker equilibria as $\mu^\star_{\mathrm{LMSR}}$ and $\mu^\star_{\mathrm{IND}}$. Then, with probability 1, there exist $t_0$ and $t_1 \ge t_0$ such that for all $t \ge t_1$ and sufficiently small $b$,
$$\mathbb{E}_{t_0}\left\|\mu^{2t(1+\varepsilon)}_{\mathrm{LMSR}} - \mu^\star_{\mathrm{LMSR}}\right\|^2 \le \mathbb{E}_{t_0}\left\|\mu^{t}_{\mathrm{IND}} - \mu^\star_{\mathrm{IND}}\right\|^2 \le \mathbb{E}_{t_0}\left\|\mu^{(t/2)(1-\varepsilon)}_{\mathrm{LMSR}} - \mu^\star_{\mathrm{LMSR}}\right\|^2,$$
where $\varepsilon = O(b)$ and $\mathbb{E}_{t_0}[\cdot] = \mathbb{E}[\,\cdot \mid r^{t_0}]$ conditions on the $t_0$th iterate of a given run.

This result means that LMSR and IND are roughly equivalent (up to a factor of two) in terms of the number of trades required to achieve a given accuracy. This is somewhat surprising, as it implies that maintaining price coherence does not offer strong informational advantages (at least when traders are individually coherent, as assumed here). However, while there is little difference between the two costs in terms of accuracy, there is a difference in terms of the worst-case loss. For $K$ securities, the worst-case loss of LMSR with the liquidity $b$ is $b \log K$, and the worst-case loss of IND with the liquidity $b'$ is $b' K \log 2$. If liquidities are chosen as in Theorem 6.3, so that $b'$ is up to a factor of two smaller than $b$, then the worst-case loss of IND is at least $(bK/2)\log 2$, which is always worse than the LMSR's loss of $b \log K$, and the ratio of the two losses increases as $K$ grows.

When all risk aversion coefficients are equal to some constant $a$, the dependence of Theorem 6.2 on the number of traders $N$ and their risk aversion is similar to the dependence in Theorem 5.6. For instance, to guarantee that $\gamma$ stays below a certain level for varying $N$ and $a$ requires $b = \Omega(N/a)$.

7 Numerical Experiments

We evaluate the tightness of our theoretical bounds via numerical simulation. We consider a complete market over $K = 5$ securities and simulate $N = 10$ traders with risk aversion coefficients equal to 1. These values of $N$ and $K$ are large enough to demonstrate the tightness of our results, but small enough that simulations are tractable.
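A bare-bones version of such a simulation can be assembled from the ingredients above: a complete-market LMSR, exponential-utility traders, and the potential $F(r)$ of Section 4. The closed-form best response used here is our own derivation for this special case; this is an illustrative sketch, not the authors' experimental code.

```python
import math
import random

def logsumexp(v):
    m = max(v)
    return m + math.log(sum(math.exp(x - m) for x in v))

def potential(thetas, a, r, b):
    """F(r) = sum_i (1/a_i) T(theta_i - a_i r_i) + C_b(sum_i r_i), with T = C = logsumexp."""
    K = len(thetas[0])
    s = [sum(ri[k] for ri in r) for k in range(K)]
    val = b * logsumexp([x / b for x in s])
    for th, ai, ri in zip(thetas, a, r):
        val += (1.0 / ai) * logsumexp([t - ai * x for t, x in zip(th, ri)])
    return val

def simulate_asd(thetas, a, b, rounds, seed=0):
    """All-security dynamics: each round a uniformly random trader plays her exact
    best response delta = b/(a_i b + 1) * (theta_i - a_i r_i - s/b) (closed form for LMSR)."""
    rng = random.Random(seed)
    N, K = len(thetas), len(thetas[0])
    r = [[0.0] * K for _ in range(N)]
    s = [0.0] * K
    history = []
    for _ in range(rounds):
        i = rng.randrange(N)
        coef = b / (a[i] * b + 1.0)
        delta = [coef * (t - a[i] * ri - si / b)
                 for t, ri, si in zip(thetas[i], r[i], s)]
        r[i] = [ri + di for ri, di in zip(r[i], delta)]
        s = [si + di for si, di in zip(s, delta)]
        history.append(potential(thetas, a, r, b))
    return s, r, history
```

Market prices at any point are $\mathrm{softmax}(s/b)$, and the suboptimality $F(r^t) - F^\star$ decays roughly geometrically along the run, which is the linear convergence that Theorem 6.2 quantifies.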
While our theory comprehensively covers heterogeneous risk aversions and the dependence on the number of traders and securities, we have chosen to keep these values fixed, so that we can more cleanly explore the impact of liquidity and number of trades.

Figure 1: (Left) The tradeoff between market-maker bias and convergence. Solid lines are for LMSR, dashed for IND, the color indicates the number of trades. (Center) Market-maker bias as a function of $b$. (Right) Convergence in the objective. Shading indicates 95% confidence based on 20 trading sequences.

We consider the two most commonly studied cost functions: LMSR and IND. We fix the ground-truth natural parameter $\theta^{\mathrm{true}}$ and independently sample the belief $\tilde\theta_i$ of each trader from $\mathrm{Normal}(\theta^{\mathrm{true}}, \sigma^2 I_K)$, with $\sigma = 5$. We consider a single-peaked ground truth distribution with $\theta^{\mathrm{true}}_1 = \log(1 - \nu(K-1))$ and $\theta^{\mathrm{true}}_k = \log \nu$ for $k \ne 1$, with $\nu = 0.02$. Trading is simulated according to the all-security dynamics (ASD) as described at the start of Section 6. In Appendix E, we show qualitatively similar results using a uniform ground truth distribution and single-security dynamics (SSD).

We first examine the tradeoff that arises between market-maker bias and convergence error as the liquidity parameter is adjusted. Fig. 1 (left) shows the combined bias and convergence error, $\|\mu^t - \bar\mu\|$, as a function of liquidity and the number of trades $t$ (indicated by the color of the line) for the two cost functions, averaged over twenty random trading sequences.
The minimum point on each curve tells us the optimal value of the liquidity parameter $b$ for the particular cost function and particular number of trades. When the market is run for a short time, larger values of $b$ lead to lower error. On the other hand, smaller values of $b$ are preferable as the number of trades grows, with the combined error approaching 0 for small $b$.

In Fig. 1 (center) we plot the bias $\|\mu^\star(b; C) - \bar\mu\|$ as a function of $b$ for both LMSR and IND. We compare this with the theoretical approximation $\|\mu^\star(b; C) - \bar\mu\| \approx b(\bar a/N)\,\|H_T(\bar\mu)\,\partial C^*(\bar\mu)\|$ from Theorem 5.6. Although Theorem 5.6 only gives an asymptotic guarantee as $b \to 0$, the approximation is fairly accurate even for moderate values of $b$. In agreement with Theorem 5.7, the bias of IND is higher than that of LMSR at any fixed value of $b$, but by no more than a factor of two.

In Fig. 1 (right) we plot the log of $\hat{\mathbb{E}}[F(r^t)] - F^\star$ as a function of the number of trades $t$ for our two cost functions and several liquidity levels. Even for small $t$ the curves are close to linear, showing that the local linear convergence rate kicks in essentially from the start of trade in our simulations. In other words, there exist some $\hat c$ and $\hat\gamma$ such that, empirically, we have $\hat{\mathbb{E}}[F(r^t)] - F^\star \approx \hat c\,\hat\gamma^t$, or equivalently, $\log(\hat{\mathbb{E}}[F(r^t)] - F^\star) \approx \log\hat c + t\log\hat\gamma$. Plugging the belief values into Theorem 6.2, the slope of the curve for LMSR should be $\log_{10}\hat\gamma \approx -0.087b$ for sufficiently small $b$, and the slope for IND should be between $-0.088b$ and $-0.164b$. In Appendix E, we verify that this is the case.

8 Conclusion

Our theoretical framework provides a meaningful way to quantitatively evaluate the error tradeoffs inherent in different choices of cost functions and liquidity levels. We find, for example, that to maintain a fixed amount of bias, one should set the liquidity parameter $b$ proportional to a measure of the amount of cash that traders are willing to spend.
We also find that, although the LMSR maintains coherent prices while IND does not, the two are equivalent up to a factor of two in terms of the number of trades required to reach any fixed accuracy, though LMSR has lower worst-case loss.

We have assumed that traders' beliefs are individually coherent. Experimental evidence suggests that LMSR might have additional informational advantages over IND when traders' beliefs are incoherent or each trader is informed about only a subset of events [12]. We touch on this in Appendix C.2, but leave a full exploration of the impact of different assumptions on trader beliefs to future work.

References

[1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2):Article 12, 2013.
[2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proceedings of the 15th ACM Conference on Economics and Computation (EC), 2014.
[3] Ole Barndorff-Nielsen. Exponential Families. Wiley Online Library, 1982.
[4] Joyce Berg, Robert Forsythe, Forrest Nelson, and Thomas Rietz. Results from a dozen years of election futures markets research. Handbook of Experimental Economics Results, 1:742–751, 2008.
[5] Olivier Bousquet and Léon Bottou. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems (NIPS), 2008.
[6] Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.
[7] Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC), 2010.
[8] Miroslav Dudík, Sébastien Lahaie, David M. Pennock, and David Rothschild. A combinatorial prediction market for the US elections.
In Proceedings of the 14th ACM Conference on Electronic Commerce (EC), 2013.
[9] Rafael Frongillo and Mark D. Reid. Convergence analysis of prediction markets via randomized subspace descent. In Advances in Neural Information Processing Systems (NIPS), 2015.
[10] Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105–119, 2003.
[11] Yu. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[12] Kenneth C. Olson, Charles R. Twardy, and Kathryn B. Laskey. Accuracy of simulated flat, combinatorial, and penalized prediction markets. Presented at Collective Intelligence, 2015.
[13] Abraham Othman, David M. Pennock, Daniel M. Reeves, and Tuomas Sandholm. A practical liquidity-sensitive automated market maker. ACM Transactions on Economics and Computation, 1(3):14, 2013.
[14] Kaare Brandt Petersen and Michael Syskind Pedersen. The matrix cookbook. Technical report, Technical University of Denmark, Nov 2012.
[15] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 1970.
[16] R. Tyrrell Rockafellar and Roger J-B Wets. Variational Analysis. Springer-Verlag, 2009.
[17] David Rothschild. Forecasting elections: comparing prediction markets, polls, and their biases. Public Opinion Quarterly, 73(5):895–916, 2009.
[18] Christian Slamka, Bernd Skiera, and Martin Spann. Prediction market performance and market liquidity: A comparison of automated market makers. IEEE Transactions on Engineering Management, 60(1):169–185, 2013.
[19] Justin Wolfers and Eric Zitzewitz. Prediction markets. The Journal of Economic Perspectives, 18(2):107–126, 2004.
Ranking Data with Continuous Labels through Oriented Recursive Partitions

Stephan Clémençon, Mastane Achab
LTCI, Télécom ParisTech, Université Paris-Saclay
75013 Paris, France
first.last@telecom-paristech.fr

Abstract

We formulate a supervised learning problem, referred to as continuous ranking, where a continuous real-valued label $Y$ is assigned to an observable r.v. $X$ taking its values in a feature space $\mathcal{X}$ and the goal is to order all possible observations $x$ in $\mathcal{X}$ by means of a scoring function $s : \mathcal{X} \to \mathbb{R}$ so that $s(X)$ and $Y$ tend to increase or decrease together with highest probability. This problem generalizes bi/multi-partite ranking to a certain extent and the task of finding optimal scoring functions $s(x)$ can be naturally cast as optimization of a dedicated functional criterion, called the IROC curve here, or as maximization of the Kendall $\tau$ related to the pair $(s(X), Y)$. From the theoretical side, we describe the optimal elements of this problem and provide statistical guarantees for empirical Kendall $\tau$ maximization under appropriate conditions for the class of scoring function candidates. We also propose a recursive statistical learning algorithm tailored to empirical IROC curve optimization and producing a piecewise constant scoring function that is fully described by an oriented binary tree. Preliminary numerical experiments highlight the difference in nature between regression and continuous ranking and provide strong empirical evidence of the performance of empirical optimizers of the criteria proposed.

1 Introduction

The predictive learning problem considered in this paper can be easily stated in an informal fashion, as follows. Given a collection of objects of arbitrary cardinality, $N \ge 1$ say, respectively described by characteristics $x_1, \ldots, x_N$ in a feature space $\mathcal{X}$, the goal is to learn how to order them by increasing order of magnitude of a certain unknown continuous variable $y$.
To fix ideas, the attribute $y$ can represent the 'size' of the object and be difficult to measure, as for the physical measurement of microscopic bodies in chemistry and biology or the cash flow of companies in quantitative finance, and the features $x$ may then correspond to indirect measurements. The most convenient way to define a preorder on a feature space $\mathcal{X}$ is to transport the natural order on the real line onto it by means of a (measurable) scoring function $s : \mathcal{X} \to \mathbb{R}$: an object with characteristics $x$ is then said to be 'larger' ('strictly larger', respectively) than an object described by $x'$ according to the scoring rule $s$ when $s(x') \le s(x)$ (when $s(x') < s(x)$). Statistical learning boils down here to building a scoring function $s(x)$, based on a training data set $D_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ of objects for which the values of all variables (direct and indirect measurements) have been jointly observed, such that $s(X)$ and $Y$ tend to increase or decrease together with highest probability or, in other words, such that the ordering of new objects induced by $s(x)$ matches that defined by their true measures as well as possible. This problem, which shall be referred to as continuous ranking throughout the article, can be viewed as an extension of bipartite ranking, where the output variable $Y$ is assumed to be binary and the objective can be naturally formulated as a functional M-estimation problem by means of the concept of ROC curve, see [7].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Refer also to [4], [11], [1] for approaches based on the optimization of summary performance measures such as the AUC criterion in the binary context. Generalization to the situation where the random label is ordinal and may take a finite number $K \ge 3$ of values is referred to as multipartite ranking and has been recently investigated in [16] (see also e.g.
[14]), where distributional conditions guaranteeing that the ROC surface and the VUS criterion can be used to determine optimal scoring functions are exhibited in particular.

It is the major purpose of this paper to formulate the continuous ranking problem in a quantitative manner and explore the connection between the latter and bi/multi-partite ranking. Intuitively, optimal scoring rules would be also optimal for any bipartite subproblem defined by thresholding the continuous variable $Y$ with cut-off $t > 0$, separating the observations $X$ such that $Y < t$ from those such that $Y > t$. Viewing continuous ranking this way, as a continuum of nested bipartite ranking problems, we provide here sufficient conditions for the existence of such (optimal) scoring rules and we introduce a concept of integrated ROC curve (IROC curve in abbreviated form) that may serve as a natural performance measure for continuous ranking, as well as the related notion of integrated AUC criterion, a summary scalar criterion, akin to Kendall $\tau$. Generalization properties of empirical Kendall $\tau$ maximizers are discussed in the Supplementary Material. The paper also introduces a novel recursive algorithm that solves a discretized version of the empirical integrated ROC curve optimization problem, producing a scoring function that can be computed by means of a hierarchical combination of binary classification rules. Numerical experiments providing strong empirical evidence of the relevance of the approach promoted in this paper are also presented.

The paper is structured as follows. The probabilistic framework we consider is described and key concepts of bi/multi-partite ranking are briefly recalled in section 2. Conditions under which optimal solutions of the problem of ranking data with continuous labels exist are next investigated in section 3, while section 4 introduces a dedicated quantitative (functional) performance measure, the IROC curve.
The algorithmic approach we propose in order to learn scoring functions with nearly optimal IROC curves is presented at length in section 5. Numerical results are displayed in section 6. Some technical proofs are deferred to the Supplementary Material.

2 Notation and Preliminaries

Throughout the paper, the indicator function of any event E is denoted by I{E}. The pseudo-inverse of any cdf F(t) on R is denoted by F^{-1}(u) = inf{s ∈ R : F(s) ≥ u}, while U([0, 1]) denotes the uniform distribution on the unit interval [0, 1].

2.1 The probabilistic framework

Given a continuous real-valued r.v. Y representing an attribute of an object, its 'size' say, and a random vector X taking its values in a (typically high-dimensional Euclidean) feature space X modelling other observable characteristics of the object (e.g. 'indirect measurements' of the size of the object), hopefully useful for predicting Y, the statistical learning problem considered here is to learn, from n ≥ 1 independent training observations Dn = {(X1, Y1), . . . , (Xn, Yn)}, drawn as the pair (X, Y), a measurable mapping s : X → R, which shall be referred to as a scoring function throughout the paper, so that the variables s(X) and Y tend to increase or decrease together: ideally, the larger the score s(X), the higher the size Y. For simplicity, we assume throughout the article that X = R^d with d ≥ 1 and that the support of Y's distribution is compact, equal to [0, 1] say. For any q ≥ 1, we denote by λ_q the Lebesgue measure on R^q equipped with its Borelian σ-algebra and suppose that the joint distribution F_{X,Y}(dx dy) of the pair (X, Y) has a density f_{X,Y}(x, y) w.r.t. the tensor product measure λ_d ⊗ λ_1. We also introduce the marginal distributions F_Y(dy) = f_Y(y) λ_1(dy) and F_X(dx) = f_X(x) λ_d(dx), where f_Y(y) = ∫_{x∈X} f_{X,Y}(x, y) λ_d(dx) and f_X(x) = ∫_{y∈[0,1]} f_{X,Y}(x, y) λ_1(dy), as well as the conditional densities f_{X|Y=y}(x) = f_{X,Y}(x, y)/f_Y(y) and f_{Y|X=x}(y) = f_{X,Y}(x, y)/f_X(x).
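As a concrete illustration of the pseudo-inverse F^{-1}(u) = inf{s ∈ R : F(s) ≥ u} recalled above, here is a minimal numpy sketch for an empirical cdf; the function name and the sample are our own illustration, not part of the paper.

```python
import numpy as np

def cdf_pseudo_inverse(samples, u):
    """Empirical version of F^{-1}(u) = inf{s : F(s) >= u}.

    With the empirical cdf F_n(s) = (1/n) #{X_i <= s}, the infimum is attained
    at the k-th order statistic, where k = ceil(n * u).
    """
    xs = np.sort(np.asarray(samples, dtype=float))
    n = len(xs)
    k = int(np.ceil(n * u))        # smallest k with F_n(x_(k)) >= u
    return xs[max(k, 1) - 1]

q = cdf_pseudo_inverse([3.0, 1.0, 2.0, 4.0], 0.5)   # second order statistic: 2.0
```

For u = 0.5 and four points, the infimum is attained at the second order statistic, matching the definition above.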
Observe incidentally that the probabilistic framework of the continuous ranking problem is quite similar to that of distribution-free regression. However, as shall be seen in the subsequent analysis, even if the regression function m(x) = E[Y | X = x] can be optimal under appropriate conditions, just like for regression, measuring ranking performance involves criteria that are of a different nature than the expected least squares error, and plug-in rules may not be relevant for the goal pursued here, as depicted by Fig. 2 in the Supplementary Material.

Scoring functions. The set of all scoring functions is denoted by S here. Any scoring function s ∈ S defines a total preorder on the space X: ∀(x, x′) ∈ X², x ⪯_s x′ ⇔ s(x) ≤ s(x′). We also set x ≺_s x′ when s(x) < s(x′) and x =_s x′ when s(x) = s(x′) for (x, x′) ∈ X².

2.2 Bi/multi-partite ranking

Suppose that Z is a binary label, taking its values in {−1, +1} say, assigned to the r.v. X. In bipartite ranking, the goal is to pick s in S so that, ideally, the larger s(X), the greater the probability that Z is equal to +1. In other words, the objective is to learn s(x) such that the r.v. s(X) given Z = +1 is as stochastically larger¹ as possible than the r.v. s(X) given Z = −1: the difference between Ḡ_s(t) = P{s(X) ≥ t | Z = +1} and H̄_s(t) = P{s(X) ≥ t | Z = −1} should thus be maximal for all t ∈ R. This can be naturally quantified by means of the notion of ROC curve of a candidate s ∈ S, i.e. the parametrized curve t ∈ R ↦ (H̄_s(t), Ḡ_s(t)), which can be viewed as the graph of a mapping ROC_s : α ∈ (0, 1) ↦ ROC_s(α), connecting possible discontinuity points by linear segments (so that ROC_s(α) = Ḡ_s ◦ H_s^{-1}(1 − α) when H_s has no flat part in H_s^{-1}(1 − α), where H_s = 1 − H̄_s). A basic Neyman-Pearson theory argument shows that the optimal elements s∗(x) related to this natural (functional) bipartite ranking criterion (i.e.
scoring functions whose ROC curve dominates any other ROC curve everywhere on (0, 1)) are transforms (T ◦ η)(x) of the posterior probability η(x) = P{Z = +1 | X = x}, where T : SUPP(η(X)) → R is any strictly increasing Borelian mapping. Optimization of the curve in sup norm has been considered in [7] or in [8] for instance. However, given its functional nature, in practice the ROC curve of any s ∈ S is often summarized by the area under it, a performance measure that can be interpreted in a probabilistic manner, as the theoretical rate of concordant pairs:

AUC(s) = P{s(X) < s(X′) | Z = −1, Z′ = +1} + (1/2) P{s(X) = s(X′) | Z = −1, Z′ = +1},   (1)

where (X′, Z′) denotes an independent copy of (X, Z). A variety of algorithms aiming at maximizing the AUC criterion or surrogate pairwise criteria have been proposed and studied in the literature, among which [11], [15] or [3], whereas generalization properties of empirical AUC maximizers have been studied in [5], [1] and [12]. An analysis of the relationship between the AUC and the error rate is given in [9]. Extension to the situation where the label Y takes at least three ordinal values (i.e. multipartite ranking) has also been investigated, see e.g. [14] or [6]. In [16], it is shown that, in contrast to the bipartite setup, the existence of optimal solutions cannot be guaranteed in general, and conditions on (X, Y)'s distribution ensuring that optimal solutions do exist and that extensions of bipartite ranking criteria such as the ROC manifold and the volume under it can be used for learning optimal scoring rules have been exhibited. An analogous analysis in the context of continuous ranking is carried out in the next section.

3 Optimal elements in ranking data with continuous labels

In this section, a natural definition of the set of optimal elements for continuous ranking is first proposed. Existence and characterization of such optimal scoring functions are next discussed.
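The bipartite notions just recalled, the empirical ROC curve t ↦ (H̄_s(t), Ḡ_s(t)) and the concordant-pair form of the AUC in eq. (1), can be computed directly from a finite sample. The following is a small numpy sketch under our own naming; it is an illustration, not the authors' code.

```python
import numpy as np

def roc_curve(scores, z):
    """Empirical ROC curve of a scoring function s for labels z in {-1, +1}:
    sweeping the threshold t downward traces t -> (H_bar(t), G_bar(t)), with
    H_bar(t) = P{s(X) >= t | Z = -1} and G_bar(t) = P{s(X) >= t | Z = +1}."""
    s, z = np.asarray(scores, float), np.asarray(z)
    order = np.argsort(-s)                     # decreasing score
    pos = (z[order] == 1).astype(float)
    tpr = np.concatenate(([0.0], np.cumsum(pos) / pos.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - pos) / (1 - pos).sum()))
    return fpr, tpr

def auc(scores, z):
    """Rate of concordant pairs, eq. (1), with half weight on score ties."""
    s = np.asarray(scores, float); z = np.asarray(z)
    diff = s[z == 1][:, None] - s[z == -1][None, :]   # all (pos, neg) pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

z = np.array([-1, -1, 1, 1])
fpr, tpr = roc_curve([0.1, 0.4, 0.35, 0.8], z)
```

The area under the piecewise-linear curve returned by `roc_curve` coincides with the pairwise estimate returned by `auc` (0.75 on this toy sample: three of the four positive-negative pairs are concordant).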
3.1 Optimal scoring rules for continuous ranking

Considering a threshold value y ∈ [0, 1], a considerably weakened (and discretized) version of the problem stated informally above would consist in finding s so that the r.v. s(X) given Y > y is as stochastically larger than s(X) given Y < y as possible. This subproblem coincides with the bipartite ranking problem related to the pair (X, Z_y), where Z_y = 2I{Y > y} − 1. As briefly recalled in subsection 2.2, the optimal set S∗_y is composed of the scoring functions that induce the same ordering as η_y(X) = P{Y > y | X} = 1 − (1 − p_y)/(1 − p_y + p_y Φ_y(X)), where p_y = 1 − F_Y(y) = P{Y > y} and Φ_y(X) = (dF_{X|Y>y}/dF_{X|Y<y})(X).

¹ Given two real-valued r.v.'s U and U′, recall that U is said to be stochastically larger than U′ when P{U ≥ t} ≥ P{U′ ≥ t} for all t ∈ R.

A continuum of bipartite ranking problems. The rationale behind the definition of the set S∗ of optimal scoring rules for continuous ranking is that any element s∗ should score observations x in the same order as η_y (or equivalently as Φ_y).

Definition 1. (OPTIMAL SCORING RULE) An optimal scoring rule for the continuous ranking problem related to the random pair (X, Y) is any element s∗ that fulfills: ∀y ∈ (0, 1), ∀(x, x′) ∈ X², η_y(x) < η_y(x′) ⇒ s∗(x) < s∗(x′).   (2)

In other words, the set of optimal rules is defined as S∗ = ∩_{y∈(0,1)} S∗_y. It is noteworthy that, although the definition above is natural, the set S∗ can be empty in the absence of any distributional assumption, as shown by the following example.

Example 1. As a counter-example, consider the distributions F_{X,Y} such that F_Y = U([0, 1]) and F_{X|Y=y} = N(|2y − 1|, (2y − 1)²). Observe that (X, 1 − Y) =_d (X, Y), so that Φ_{1−t} = Φ_t^{-1} for all t ∈ (0, 1), and there exists t ≠ 0 s.t. Φ_t is not constant. Hence, there exists no s∗ in S such that (2) holds true for all t ∈ (0, 1).

Remark 1.
(INVARIANCE) We point out that the class S∗ of optimal elements for continuous ranking thus defined is invariant under strictly increasing transforms of the 'size' variable Y (in particular, a change of unit has no impact on the definition of S∗): for any Borelian and strictly increasing mapping H : (0, 1) → (0, 1), any scoring function s∗(x) that is optimal for the continuous ranking problem related to the pair (X, Y) is still optimal for that related to (X, H(Y)) (since, under these hypotheses, for any y ∈ (0, 1): Y > y ⇔ H(Y) > H(y)).

3.2 Existence and characterization of optimal scoring rules

We now investigate conditions guaranteeing the existence of optimal scoring functions for the continuous ranking problem.

Proposition 1. The following assertions are equivalent.
1. For all 0 < y < y′ < 1, for all (x, x′) ∈ X²: Φ_y(x) < Φ_y(x′) ⇒ Φ_{y′}(x) ≤ Φ_{y′}(x′).
2. There exists an optimal scoring rule s∗ (i.e. S∗ ≠ ∅).
3. The regression function m(x) = E[Y | X = x] is an optimal scoring rule.
4. The collection of probability distributions F_{X|Y=y}(dx) = f_{X|Y=y}(x) λ_d(dx), y ∈ (0, 1), satisfies the monotone likelihood ratio property: there exist s∗ ∈ S and, for all 0 < y < y′ < 1, an increasing function ϕ_{y,y′} : R → R+ such that: ∀x ∈ R^d, f_{X|Y=y′}(x) / f_{X|Y=y}(x) = ϕ_{y,y′}(s∗(x)).

Refer to the Appendix section for the technical proof. Truth be told, checking that Assertion 1 holds is a very challenging statistical task. However, through important examples, we now describe (not uncommon) situations where the conditions stated in Proposition 1 are fulfilled.

Example 2. We give a few important examples of probabilistic models fulfilling the properties listed in Proposition 1.
• Regression model. Suppose that Y = m(X) + ϵ, where m : X → R is a Borelian function and ϵ is a centered r.v. independent of X. One may easily check that m ∈ S∗.
• Exponential families.
Suppose that f_{X|Y=y}(x) = exp(κ(y)T(x) − ψ(y)) f(x) for all x ∈ R^d, where f : R^d → R+ is Borelian, κ : [0, 1] → R is a Borelian strictly increasing function, and T : R^d → R is a Borelian mapping such that ψ(y) = log ∫_{x∈R^d} exp(κ(y)T(x)) f(x) dx < +∞.

We point out that, although the regression function m(x) is an optimal scoring function when S∗ ≠ ∅, the continuous ranking problem does not coincide with distribution-free regression (notice incidentally that, in this case, any strictly increasing transform of m(x) belongs to S∗ as well). As depicted by Fig. 2, the least-squares criterion is not relevant to evaluate continuous ranking performance and naive plug-in strategies should be avoided, see Remark 3 below. Dedicated performance criteria are proposed in the next section.

4 Performance measures for continuous ranking

We now investigate quantitative criteria for assessing the performance in the continuous ranking problem, which practical machine-learning algorithms may rely on. We place ourselves in the situation where the set S∗ is not empty, see Proposition 1 above.

A functional performance measure. It follows from the view developed in the previous section that, for any (s, s∗) ∈ S × S∗ and for all y ∈ (0, 1), we have: ∀α ∈ (0, 1), ROC_{s,y}(α) ≤ ROC_{s∗,y}(α) = ROC∗_y(α),   (3) denoting by ROC_{s,y} the ROC curve of any s ∈ S related to the bipartite ranking subproblem (X, Z_y) and by ROC∗_y the corresponding optimal ROC curve, i.e. the ROC curve of strictly increasing transforms of η_y(x). Based on this observation, it is natural to design a dedicated performance measure by aggregating these 'sub-criteria'. Integrating over y w.r.t. a σ-finite measure µ with support equal to [0, 1], this leads to the following definition: IROC_{µ,s}(α) = ∫ ROC_{s,y}(α) µ(dy). The functional criterion thus defined inherits properties from the ROC_{s,y}'s (e.g. monotonicity, concavity). In addition, the curve IROC_{µ,s∗} with s∗ ∈ S∗ dominates everywhere on (0, 1) any other curve IROC_{µ,s} for s ∈ S.
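Empirically, the 'sub-criteria' underlying this aggregation, the AUCs of the thresholded subproblems (X, Z_t), can be evaluated on a grid of cut-offs. The sketch below is our own construction; for a noiseless strictly increasing link the AUC equals 1 at every cut-off, consistent with the domination property just stated.

```python
import numpy as np

def auc_at_threshold(scores, y, t):
    """AUC of s for the bipartite subproblem (X, Z_t) with Z_t = 2*I{Y > t} - 1,
    in the concordant-pairs form of eq. (1) (ties get half weight)."""
    s = np.asarray(scores, float); y = np.asarray(y, float)
    s_pos, s_neg = s[y > t], s[y < t]
    if len(s_pos) == 0 or len(s_neg) == 0:
        return float("nan")                     # degenerate cut-off
    diff = s_pos[:, None] - s_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

x = np.linspace(0.0, 1.0, 50)
y = x ** 3                         # noiseless strictly increasing link
aucs = [auc_at_threshold(x, y, t) for t in (0.125, 0.5)]   # 1.0 at each cut-off
```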
However, except in pathological situations (e.g. when s(x) is constant), the curve IROC_{µ,s} is not invariant when replacing Y's distribution by that of a strictly increasing transform H(Y). In order to guarantee that this desirable property is fulfilled (see Remark 1), one should integrate w.r.t. Y's distribution (which boils down to replacing Y by the uniformly distributed r.v. F_Y(Y)).

Definition 2. (INTEGRATED ROC/AUC CRITERIA) The integrated ROC curve of any scoring rule s ∈ S is defined as: ∀α ∈ (0, 1),
IROC_s(α) = ∫₀¹ ROC_{s,y}(α) F_Y(dy) = E[ROC_{s,Y}(α)].   (4)
The integrated AUC criterion is defined as the area under the integrated ROC curve: ∀s ∈ S,
IAUC(s) = ∫₀¹ IROC_s(α) dα.   (5)

The following result reveals the relevance of the functional/summary criteria defined above for the continuous ranking problem. Additional properties of IROC curves are listed in the Supplementary Material.

Theorem 1. Let s∗ ∈ S. The following assertions are equivalent.
1. The assertions of Proposition 1 are fulfilled and s∗ is an optimal scoring function in the sense given by Definition 1.
2. For all α ∈ (0, 1), IROC_{s∗}(α) = E[ROC∗_Y(α)].
3. We have IAUC(s∗) = E[AUC∗_Y], where AUC∗_y = ∫₀¹ ROC∗_y(α) dα for all y ∈ (0, 1).
If S∗ ≠ ∅, then we have, for any α ∈ (0, 1): ∀s ∈ S, IROC_s(α) ≤ IROC∗(α) := E[ROC∗_Y(α)] and IAUC(s) ≤ IAUC∗ := E[AUC∗_Y]. In addition, for any Borelian and strictly increasing mapping H : (0, 1) → (0, 1), replacing Y by H(Y) leaves the curves IROC_s, s ∈ S, unchanged.

Equipped with the notion defined above, a scoring rule s1 is said to be more accurate than another one s2 if IROC_{s2}(α) ≤ IROC_{s1}(α) for all α ∈ (0, 1). The IROC curve criterion thus provides a partial preorder on S. Observe also that, by virtue of Fubini's theorem, we have IAUC(s) = ∫ AUC_y(s) F_Y(dy) for all s ∈ S, denoting by AUC_y(s) the AUC of s related to the bipartite ranking subproblem (X, Z_y).
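The Fubini identity IAUC(s) = ∫ AUC_y(s) F_Y(dy) suggests a direct plug-in estimate: average the bipartite AUC of the thresholded subproblems over cut-offs drawn from the observed sample of Y itself. A sketch under our own naming (an illustration, not the authors' implementation):

```python
import numpy as np

def iauc(scores, y):
    """Plug-in IAUC: by Fubini, IAUC(s) = E_{t~F_Y}[AUC_t(s)], so we average
    the bipartite AUC of (X, Z_t) over thresholds t taken from the observed Y's."""
    s = np.asarray(scores, float); y = np.asarray(y, float)
    vals = []
    for t in y:
        s_pos, s_neg = s[y > t], s[y < t]
        if len(s_pos) and len(s_neg):
            diff = s_pos[:, None] - s_neg[None, :]
            vals.append((diff > 0).mean() + 0.5 * (diff == 0).mean())
    return float(np.mean(vals))

rng = np.random.default_rng(0)
x = rng.uniform(size=200)
y = x ** 3                          # noiseless increasing link: s(x) = x is optimal
iauc_opt = iauc(x, y)               # 1.0 for the optimal scoring rule
```

This naive double loop over thresholds and pairs is O(n³), matching the degree-3 U-statistic structure mentioned in Lemma 1 below; it is meant only to make the definition concrete.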
Just like the AUC for bipartite ranking, the scalar IAUC criterion defines a full preorder on S for continuous ranking. Based on a training dataset Dn of independent copies of (X, Y), statistical versions of the IROC/IAUC criteria can be straightforwardly computed by replacing the distributions F_Y, F_{X|Y>t} and F_{X|Y<t} by their empirical counterparts in (3)-(5), see the Supplementary Material for further details. The lemma below provides a probabilistic interpretation of the IAUC criterion.

Lemma 1. Let (X′, Y′) be a copy of the random pair (X, Y) and Y′′ a copy of the r.v. Y. Suppose that (X, Y), (X′, Y′) and Y′′ are defined on the same probability space and are independent. For all s ∈ S, we have:
IAUC(s) = P{s(X) < s(X′) | Y < Y′′ < Y′} + (1/2) P{s(X) = s(X′) | Y < Y′′ < Y′}.   (6)

This result shows in particular that a natural statistical estimate of IAUC(s) based on Dn involves U-statistics of degree 3. Its proof is given in the Supplementary Material for completeness.

The Kendall τ statistic. The quantity (6) is akin to another popular way to measure, in a summary fashion, the tendency of two variables to define the same ordering on the statistical population:
d_τ(s) := P{(s(X) − s(X′)) · (Y − Y′) > 0} + (1/2) P{s(X) = s(X′)}   (7)
        = P{s(X) < s(X′) | Y < Y′} + (1/2) P{X =_s X′},
where (X′, Y′) denotes an independent copy of (X, Y), observing that P{Y < Y′} = 1/2. The empirical counterpart of (7) based on the sample Dn, given by
d̂_n(s) = (2/(n(n − 1))) Σ_{i<j} I{(s(X_i) − s(X_j)) · (Y_i − Y_j) > 0} + (1/(n(n − 1))) Σ_{i<j} I{s(X_i) = s(X_j)},   (8)
is known as the Kendall τ statistic and is widely used in the context of statistical hypothesis testing. The quantity (7) shall thus be referred to as the (theoretical or true) Kendall τ. Notice that d_τ(s) is invariant under strictly increasing transformations of s(x) and thus describes properties of the order it defines. The following result reveals that the class S∗, when nonempty, is the set of maximizers of the theoretical Kendall τ.
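The statistic (8) is a one-liner with numpy; the snippet below (our own naming) also checks the invariance of d̂_n under strictly increasing transforms of the scores just noted.

```python
import numpy as np

def kendall_tau(s, y):
    """Empirical Kendall tau of eq. (8): fraction of concordant pairs among
    the n(n-1)/2 pairs i < j, plus half the fraction of score ties."""
    s = np.asarray(s, float); y = np.asarray(y, float)
    i, j = np.triu_indices(len(s), k=1)
    ds, dy = s[i] - s[j], y[i] - y[j]
    return (ds * dy > 0).mean() + 0.5 * (ds == 0).mean()

s = np.array([0.1, 0.5, 0.4, 0.9])
y = np.array([1.0, 2.0, 3.0, 4.0])
tau = kendall_tau(s, y)                        # 5 of 6 pairs concordant
same = kendall_tau(np.exp(3 * s), y) == tau    # invariant under increasing transforms of s
```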
Refer to the Supplementary Material for the technical proof.

Proposition 2. Suppose that S∗ ≠ ∅. For any (s, s∗) ∈ S × S∗, we have: d_τ(s) ≤ d_τ(s∗).

Equipped with these criteria, the objective expressed above in an informal manner can now be formulated in a quantitative manner as a (possibly functional) M-estimation problem. In practice, the goal pursued is to find a reasonable approximation of a solution to the optimization problem max_{s∈S} d_τ(s) (respectively max_{s∈S} IAUC(s)), where the supremum is taken over the set of all scoring functions s : X → R. Of course, these criteria are unknown in general, just like (X, Y)'s probability distribution, and the empirical risk minimization (ERM in abbreviated form) paradigm (see [10]) calls for maximizing the statistical version (8) over a class S0 ⊂ S of controlled complexity when considering the criterion d_τ(s) for instance. The generalization capacity of empirical maximizers of the Kendall τ can be straightforwardly established using results in [5]. More details are given in the Supplementary Material. Before describing a practical algorithm for recursive maximization of the IROC curve, a few remarks are in order.

Remark 2. (ON KENDALL τ AND AUC) We point out that, in the bipartite ranking problem as well (i.e. when the output variable Z takes its values in {−1, +1}, see subsection 2.2), the AUC criterion can be expressed as a function of the Kendall τ related to the pair (s(X), Z) when the r.v. s(X) is continuous. Indeed, we have in this case 2p(1 − p) AUC(s) = d_τ(s), where p = P{Z = +1} and d_τ(s) = P{(s(X) − s(X′)) · (Z − Z′) > 0}, denoting by (X′, Z′) an independent copy of (X, Z).

Remark 3. (CONNECTION TO DISTRIBUTION-FREE REGRESSION) Consider the nonparametric regression model Y = m(X) + ϵ, where ϵ is a centered r.v. independent of X. In this case, it is well known that the regression function m(X) = E[Y | X] is the (unique) solution of the expected least squares minimization.
However, although m ∈ S∗, the least squares criterion is far from appropriate to evaluate ranking performance, as depicted by Fig. 2. Observe additionally that, in contrast to the criteria introduced above, an increasing transformation of the output variable Y may have a strong impact on the least squares minimizer: except for linear transforms, E[H(Y) | X] is not an increasing transform of m(X).

Remark 4. (ON DISCRETIZATION) Bi/multi-partite algorithms are not directly applicable to the continuous ranking problem. Indeed, a discretization of the interval [0, 1] would first be required, but this would raise a difficult question outside our scope: how to choose this discretization based on the training data? We believe that this approach is less efficient than ours, which relies on problem-specific criteria, namely IROC and IAUC.

Figure 1: A scoring function described by an oriented binary subtree T. For any element x ∈ X, one may compute the quantity s_T(x) very fast in a top-down fashion by means of the heap structure: starting from the initial value 2^J at the root node, at each internal node C_{j,k}, the score remains unchanged if x moves down to the left child, whereas one subtracts 2^{J−(j+1)} from it if x moves down to the right.

5 Continuous Ranking through Oriented Recursive Partitioning

It is the purpose of this section to introduce the algorithm CRANK, a specific tree-structured learning algorithm for continuous ranking.

5.1 Ranking trees and Oriented Recursive Partitions

Decision trees undeniably figure among the most popular techniques, in supervised and unsupervised settings; refer to [2] or [13] for instance.
This is essentially due to the visual model summary they provide, in the form of a binary tree graphic that permits to describe predictions by means of a hierarchical combination of elementary rules of the type "X^{(j)} ≤ κ" or "X^{(j)} > κ", comparing the value taken by a (quantitative) component of the input vector X (the split variable) to a certain threshold (the split value). In contrast to local learning problems such as classification or regression, predictive rules for a global problem such as ranking cannot be described by a (tree-structured) partition of the feature space: cells (corresponding to the terminal leaves of the binary decision tree) must be ordered so as to define a scoring function. This leads to the definition of ranking trees as binary trees equipped with a "left-to-right" orientation, defining a tree-structured collection of scoring functions, as depicted by Fig. 1. Binary ranking trees have been used in the context of bipartite ranking in [7] or in [3], and in [16] in the context of multipartite ranking. The root node of a ranking tree T_J of depth J ≥ 0 represents the whole feature space X: C_{0,0} = X, while each internal node (j, k) with j < J and k ∈ {0, . . . , 2^j − 1} corresponds to a subset C_{j,k} ⊂ X, whose left and right children respectively correspond to disjoint subsets C_{j+1,2k} and C_{j+1,2k+1} such that C_{j,k} = C_{j+1,2k} ∪ C_{j+1,2k+1}. Equipped with the left-to-right orientation, any subtree T ⊂ T_J defines a preorder on X, elements lying in the same terminal cell of T being equally ranked. The scoring function related to the oriented tree T can be written as:
s_T(x) = Σ_{C_{j,k}: terminal leaf of T} 2^J (1 − k/2^j) · I{x ∈ C_{j,k}}.   (9)

5.2 The CRANK algorithm

Based on Proposition 2, as mentioned in the Supplementary Material, one can try to build from the training dataset Dn a ranking tree by recursive empirical Kendall τ maximization.
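The formula (9) and the top-down evaluation described in the caption of Fig. 1 can be sketched as follows; the `splits` encoding of the tree (a predicate per internal node sending x to the left child) is our own illustrative choice, not the paper's data structure.

```python
def tree_score(x, splits, J):
    """Top-down computation of s_T(x) for a complete oriented ranking tree of
    depth J, as in Fig. 1: start at 2^J; moving to the right child at depth j
    subtracts 2^(J-(j+1)).  `splits[(j, k)]` returns True if x goes to the
    LEFT child of internal node C_{j,k}."""
    score, k = 2 ** J, 0
    for j in range(J):
        if splits[(j, k)](x):
            k = 2 * k                       # left child: score unchanged
        else:
            score -= 2 ** (J - (j + 1))     # right child
            k = 2 * k + 1
    return score

# depth-2 tree on [0, 1] with dyadic splits: scores decrease from left to right
splits = {(0, 0): lambda x: x < 0.5,
          (1, 0): lambda x: x < 0.25,
          (1, 1): lambda x: x < 0.75}
scores = [tree_score(x, splits, 2) for x in (0.1, 0.3, 0.6, 0.9)]   # 4, 3, 2, 1
```

The four terminal cells receive the scores 2^J(1 − k/2^j) = 4, 3, 2, 1 for k = 0, 1, 2, 3 at depth j = 2, in agreement with eq. (9).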
We propose below an alternative tree-structured recursive algorithm, relying on a (dyadic) discretization of the 'size' variable Y. At each iteration, the local sample (i.e. the data lying in the cell described by the current node) is split into two halves (the highest/smallest halves, depending on Y), and the algorithm calls a binary classification algorithm A to learn how to divide the node into right/left children. The theoretical analysis of this algorithm and its connection with approximation of IROC∗ are difficult questions that will be addressed in future work. Indeed, we found out that the IROC cannot be represented as a parametric curve, contrary to the ROC, which renders proofs much more difficult than in the bipartite case.

THE CRANK ALGORITHM
1. Input. Training data Dn, depth J ≥ 1, binary classification algorithm A.
2. Initialization. Set C_{0,0} = X.
3. Iterations. For j = 0, . . . , J − 1 and k = 0, . . . , 2^j − 1:
 (a) Compute a median y_{j,k} of the values {Y_i : X_i ∈ C_{j,k}} and assign the binary label Z_i = 2I{Y_i > y_{j,k}} − 1 to any data point i lying in C_{j,k}, i.e. such that X_i ∈ C_{j,k}.
 (b) Solve the binary classification problem related to the input space C_{j,k} and the training set {(X_i, Z_i) : 1 ≤ i ≤ n, X_i ∈ C_{j,k}}, producing a classifier g_{j,k} : C_{j,k} → {−1, +1}.
 (c) Set C_{j+1,2k} = {x ∈ C_{j,k} : g_{j,k}(x) = +1} = C_{j,k} \ C_{j+1,2k+1}.
4. Output. Ranking tree T_J = {C_{j,k} : 0 ≤ j ≤ J, 0 ≤ k < 2^j}.

Of course, the depth J should be chosen such that 2^J ≤ n. One may also consider continuing to split the nodes until the number of data points within a cell has reached a minimum specified in advance. In addition, it is well known that recursive partitioning methods fragment the data, and that the instability of splits increases with the depth. For this reason, a ranking subtree must be selected.
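Below is a compact sketch of the CRANK recursion on one-dimensional data, with a decision stump standing in for the generic classifier A; all names are ours and this is an illustration of the steps above, not the authors' implementation.

```python
import numpy as np

def best_stump(X, Z):
    """A minimal stand-in for the binary classifier A: a decision stump on a
    1-d feature, choosing the threshold/orientation that minimizes errors."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(X):
        for sign in (1, -1):
            pred = np.where(sign * (X - t) > 0, 1, -1)
            err = np.mean(pred != Z)
            if err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: 1 if sign * (x - t) > 0 else -1

def crank(X, Y, J, cells=None, j=0, k=0, tree=None):
    """CRANK recursion: in each cell, relabel by the median split of Y, call
    A, and send the predicted-large-Y side to the left child C_{j+1,2k}."""
    if tree is None:
        tree, cells = {}, np.ones(len(X), dtype=bool)
    if j == J or cells.sum() < 2:
        tree[(j, k)] = None                 # terminal leaf
        return tree
    y_med = np.median(Y[cells])
    Z = np.where(Y[cells] > y_med, 1, -1)   # step (a)
    g = best_stump(X[cells], Z)             # step (b)
    tree[(j, k)] = g
    go_left = np.zeros(len(X), dtype=bool)
    go_left[cells] = [g(x) == 1 for x in X[cells]]
    crank(X, Y, J, cells & go_left, j + 1, 2 * k, tree)        # C_{j+1,2k}
    crank(X, Y, J, cells & ~go_left, j + 1, 2 * k + 1, tree)   # C_{j+1,2k+1}
    return tree

def crank_score(x, tree, J):
    """Left-to-right score of eq. (9), computed top-down as in Fig. 1."""
    score, j, k = 2 ** J, 0, 0
    while tree.get((j, k)) is not None:
        if tree[(j, k)](x) == 1:
            k = 2 * k                       # left child: score unchanged
        else:
            score -= 2 ** (J - (j + 1))     # right child
            k = 2 * k + 1
        j += 1
    return score

X = np.linspace(0.0, 1.0, 16)
Y = X.copy()                                # noiseless monotone toy data
tree = crank(X, Y, 2)
scores = [crank_score(x, tree, 2) for x in X]   # non-decreasing in x
```

On this noiseless monotone toy sample, the learned tree recovers the quartiles of Y and the resulting scores are non-decreasing in x, as expected.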
The growing procedure above should be classically followed by a pruning stage, where children of the same parent are progressively merged until the root T_0 is reached, and a subtree among the sequence T_0 ⊂ . . . ⊂ T_J with nearly maximal IAUC should be chosen using cross-validation. Issues related to the implementation of the CRANK algorithm and variants (e.g. exploiting randomization/aggregation) will be investigated in a forthcoming paper.

6 Numerical Experiments

In order to illustrate the idea conveyed by Fig. 2 that the least squares criterion is not appropriate for the continuous ranking problem, we compared CRANK with CART on a toy example. Recall that the latter is a regression decision tree algorithm which minimizes the MSE (Mean Squared Error). We also ran an alternative version of CRANK which maximizes the empirical Kendall τ instead of the empirical IAUC: this method is referred to as KENDALL from now on. The experimental setting is composed of a unidimensional feature space X = [0, 1] (for visualization reasons) and a simple regression model without any noise: Y = m(X). Intuitively, a least squares strategy can miss slight oscillations of the regression function, which are critical in ranking when they occur in high-probability regions, as they affect the order over the feature space. The results are presented in Table 1. See the Supplementary Material for further details.

          IAUC    Kendall τ   MSE
CRANK     0.95    0.92        0.10
KENDALL   0.94    0.93        0.10
CART      0.61    0.58        7.4 × 10⁻⁴

Table 1: IAUC, Kendall τ and MSE empirical measures

7 Conclusion

This paper considers the problem of learning how to order objects by increasing 'size', modeled as a continuous r.v. Y, based on indirect measurements X. We provided a rigorous mathematical formulation of this problem, which finds many applications (e.g. quality control, chemistry) and is referred to as continuous ranking.
In particular, necessary and sufficient conditions on (X, Y)'s distribution for the existence of optimal solutions are exhibited, and appropriate criteria have been proposed for evaluating the performance of scoring rules in these situations. In contrast to distribution-free regression, where the goal is to recover the local values taken by the regression function, continuous ranking aims at reproducing the preorder it defines on the feature space as accurately as possible. The numerical results obtained via the algorithmic approaches we proposed for optimizing the aforementioned criteria highlight the difference in nature between these two statistical learning tasks.

Acknowledgments

This work was supported by the industrial chair Machine Learning for Big Data from Télécom ParisTech and by a public grant (Investissement d'avenir project, reference ANR-11-LABX-0056LMH, LabEx LMH).

References
[1] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. J. Mach. Learn. Res., 6:393–425, 2005.
[2] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, 1984.
[3] S. Clémençon, M. Depecker, and N. Vayatis. Ranking Forests. J. Mach. Learn. Res., 14:39–73, 2013.
[4] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In Proceedings of COLT 2005, volume 3559, pages 1–15. Springer, 2005.
[5] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36:844–874, 2008.
[6] S. Clémençon and S. Robbiano. The TreeRank Tournament algorithm for multipartite ranking. Journal of Nonparametric Statistics, 25(1):107–126, 2014.
[7] S. Clémençon and N. Vayatis. Tree-based ranking methods. IEEE Transactions on Information Theory, 55(9):4316–4336, 2009.
[8] S. Clémençon and N. Vayatis.
The RankOver algorithm: overlaid classification rules for optimal ranking. Constructive Approximation, 32:619–648, 2010.
[9] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In Advances in Neural Information Processing Systems, pages 313–320, 2004.
[10] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[11] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[12] A. K. Menon and R. C. Williamson. Bipartite ranking: a risk-theoretic perspective. Journal of Machine Learning Research, 17(195):1–102, 2016.
[13] J. R. Quinlan. Induction of Decision Trees. Machine Learning, 1(1):1–81, 1986.
[14] S. Rajaram and S. Agarwal. Generalization bounds for k-partite ranking. In NIPS 2005 Workshop on Learn to Rank, 2005.
[15] A. Rakotomamonjy. Optimizing Area Under ROC Curve with SVMs. In Proceedings of the First Workshop on ROC Analysis in AI, 2004.
[16] S. Robbiano, S. Clémençon, and N. Vayatis. Ranking data with ordinal labels: optimality and pairwise aggregation. Machine Learning, 91(1):67–104, 2013.
Scalable Log Determinants for Gaussian Process Kernel Learning

Kun Dong¹, David Eriksson¹, Hannes Nickisch², David Bindel¹, Andrew Gordon Wilson¹
¹ Cornell University, ² Philips Research Hamburg

Abstract

For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an n × n positive definite matrix, and its derivatives – leading to prohibitive O(n³) computations. We propose novel O(n) approaches to estimating these quantities from only fast matrix vector multiplications (MVMs). These stochastic approximations are based on Chebyshev, Lanczos, and surrogate models, and converge quickly even for kernel matrices that have challenging spectra. We leverage these approximations to develop a scalable Gaussian process approach to kernel learning. We find that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.

1 Introduction

There is a pressing need for scalable machine learning approaches to extract rich statistical structure from large datasets. A common bottleneck – arising in determinantal point processes [1], Bayesian neural networks [2], model comparison [3], graphical models [4], and Gaussian process kernel learning [5] – is computing a log determinant over a large positive definite matrix. While we can approximate log determinants by existing stochastic expansions relying on matrix vector multiplications (MVMs), these approaches make assumptions, such as near-uniform eigenspectra [6], which are unsuitable in machine learning contexts. For example, the popular RBF kernel gives rise to rapidly decaying eigenvalues.
Moreover, while standard approaches, such as stochastic power series, have reasonable asymptotic complexity in the rank of the matrix, they require too many terms (MVMs) for the precision necessary in machine learning applications. Gaussian processes (GPs) provide a principled probabilistic kernel learning framework, for which a log determinant is of foundational importance. Specifically, the marginal likelihood of a Gaussian process is the probability of the data given only kernel hyper-parameters. This utility function for kernel learning compartmentalizes into automatically calibrated model fit and complexity terms – called automatic Occam's razor – such that the simplest models which explain the data are automatically favoured [7, 5], without the need for approaches such as cross-validation or regularization, which can be costly, heuristic, and involve substantial hand-tuning and human intervention. The automatic complexity penalty, called the Occam's factor [3], is a log determinant of a kernel (covariance) matrix, related to the volume of solutions that can be expressed by the Gaussian process. Many current approaches to scalable Gaussian processes [e.g., 8–10] focus on inference assuming a fixed kernel, or use approximations that do not allow for very flexible kernel learning [11], due to poor scaling with the number of basis functions or inducing points. Alternatively, approaches which exploit algebraic structure in kernel matrices can provide highly expressive kernel learning [12], but are essentially limited to grid-structured data. Recently, Wilson and Nickisch [13] proposed the structured kernel interpolation (SKI) framework, which generalizes structure-exploiting methods to arbitrarily located data.
SKI works by providing accurate and fast matrix vector multiplies (MVMs) with kernel matrices, which can then be used in iterative solvers such as linear conjugate gradients for scalable GP inference. However, evaluating the marginal likelihood and its derivatives, for kernel learning, has followed a scaled eigenvalue approach [12, 13] instead of iterative MVM approaches. This approach can be inaccurate, and relies on a fast eigendecomposition of a structured matrix, which is not available in many consequential situations where fast MVMs are available, including: (i) additive covariance functions, (ii) multi-task learning, (iii) change-points [14], and (iv) diagonal corrections to kernel approximations [15]. Fiedler [16] and Weyl [17] bounds have been used to extend the scaled eigenvalue approach [18, 14], but are similarly limited. These extensions are often very approximate, and do not apply beyond sums of two and three matrices, where each matrix in the sum must have a fast eigendecomposition. In machine learning there has recently been renewed interest in MVM-based approaches to approximating log determinants, such as methods based on Chebyshev [19] and Lanczos [20], although these approaches go back at least two decades in quantum chemistry computations [21]. Independently, several authors have proposed various methods to compute derivatives of log determinants [22, 23]. But both the log determinant and the derivatives are needed for efficient GP marginal likelihood learning: the derivatives are required for gradient-based optimization, while the log determinant itself is needed for model comparison, comparisons between the likelihoods at local maximizers, and fast and effective choices of starting points and step sizes in a gradient-based optimization algorithm.
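The basic building block behind these MVM-based estimators is Hutchinson's stochastic trace estimator, tr(A) = E[z⊤Az] for Rademacher probe vectors z. A minimal sketch (our own code, not the paper's):

```python
import numpy as np

def hutchinson_trace(mvm, n, probes=30, seed=0):
    """Estimate tr(A) = E[z^T A z] using only matrix-vector products `mvm`,
    with i.i.d. Rademacher probe vectors z (entries +/-1)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(probes):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ mvm(z)
    return total / probes

# for a diagonal matrix the estimator is exact, since z_i^2 = 1 for every probe
d = np.arange(1.0, 6.0)
est = hutchinson_trace(lambda v: d * v, 5)    # 15.0
```

Applied to f(A) in place of A (with f = log), the same probes yield tr(log A) = log det A, which is the route taken by the stochastic Chebyshev and Lanczos methods.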
In this paper, we develop novel scalable and general purpose Chebyshev, Lanczos, and surrogate approaches for efficiently and accurately computing both the log determinant and its derivatives simultaneously. Our methods use only fast MVMs, and re-use the same MVMs for both computations. In particular:

• We derive fast methods for simultaneously computing the log determinant and its derivatives by stochastic Chebyshev, stochastic Lanczos, and surrogate models, from MVMs alone. We also perform an error analysis and extend these approaches to higher order derivatives.

• These methods enable fast GP kernel learning whenever fast MVMs are possible, including applications where alternatives such as scaled eigenvalue methods (which rely on fast eigendecompositions) are not, such as for (i) diagonal corrections for better kernel approximations, (ii) additive covariances, (iii) multi-task approaches, and (iv) non-Gaussian likelihoods.

• We illustrate the performance of our approach on several large, multi-dimensional datasets, including a consequential crime prediction problem, and a precipitation problem with n = 528,474 training points. We consider a variety of kernels, including deep kernels [24], diagonal corrections, and both Gaussian and non-Gaussian likelihoods.

• We have released code and tutorials as an extension to the GPML library [25] at https://github.com/kd383/GPML_SLD. A Python implementation of our approach is also available through the GPyTorch library: https://github.com/jrg365/gpytorch.

When using our approach in conjunction with SKI [13] for fast MVMs, GP kernel learning is O(n + g(m)), for m inducing points and n training points, where g(m) ≤ m log m. With algebraic approaches such as SKI we also do not need to worry about quadratic storage in inducing points, since symmetric Toeplitz and Kronecker matrices can be stored with at most linear cost, without needing to explicitly construct a matrix.
Although we here use SKI for fast MVMs, we emphasize that the proposed iterative approaches are generally applicable, and can easily be used in conjunction with any method that admits fast MVMs, including classical inducing point methods [8], finite basis expansions [9], and the popular stochastic variational approaches [10]. Moreover, stochastic variational approaches can naturally be combined with SKI to further accelerate MVMs [26].

We start in §2 with an introduction to GPs and kernel approximations. In §3 we introduce stochastic trace estimation and Chebyshev (§3.1) and Lanczos (§3.2) approximations. In §4, we describe the different sources of error in our approximations. In §5 we consider experiments on several large real-world data sets. We conclude in §6. The supplementary materials also contain several additional experiments and details.

2 Background

A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution [e.g., 5]. A GP can be used to define a distribution over functions $f(x) \sim \mathcal{GP}(\mu(x), k(x, x'))$, where each function value is a random variable indexed by $x \in \mathbb{R}^d$, and $\mu : \mathbb{R}^d \to \mathbb{R}$ and $k : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ are the mean and covariance functions of the process. The covariance function is often chosen to be an RBF or Matérn kernel (see the supplementary material for more details). We denote any kernel hyperparameters by the vector $\theta$. To be concise we will generally not explicitly denote the dependence of $k$ and associated matrices on $\theta$. For any locations $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^d$, $f_X \sim \mathcal{N}(\mu_X, K_{XX})$, where $f_X$ and $\mu_X$ represent the vectors of function values for $f$ and $\mu$ evaluated at each of the $x_i \in X$, and $K_{XX}$ is the matrix whose $(i,j)$ entry is $k(x_i, x_j)$. Suppose we have a vector of corresponding function values $y \in \mathbb{R}^n$, where each entry is contaminated by independent Gaussian noise with variance $\sigma^2$.
Under a Gaussian process prior depending on the covariance hyperparameters $\theta$, the log marginal likelihood is given by
$$\mathcal{L}(\theta \mid y) = -\tfrac{1}{2}\left[(y-\mu_X)^T \alpha + \log|\tilde K_{XX}| + n\log 2\pi\right] \quad (1)$$
where $\alpha = \tilde K_{XX}^{-1}(y-\mu_X)$ and $\tilde K_{XX} = K_{XX} + \sigma^2 I$. Optimization of (1) is expensive, since the cheapest way of evaluating $\log|\tilde K_{XX}|$ and its derivatives without taking advantage of the structure of $\tilde K_{XX}$ involves computing the $O(n^3)$ Cholesky factorization of $\tilde K_{XX}$. $O(n^3)$ computations are too expensive for inference and learning beyond even just a few thousand points.

A popular approach to GP scalability is to replace the exact kernel $k(x,z)$ by an approximate kernel that admits fast computations [8]. Several methods approximate $k(x,z)$ via inducing points $U = \{u_j\}_{j=1}^m \subset \mathbb{R}^d$. An example is the subset of regressors (SoR) kernel, a low-rank approximation [27]:
$$k_{\text{SoR}}(x,z) = K_{xU} K_{UU}^{-1} K_{Uz}.$$
The SoR matrix $K^{\text{SoR}}_{XX} \in \mathbb{R}^{n\times n}$ has rank at most $m$, allowing us to solve linear systems involving $\tilde K^{\text{SoR}}_{XX} = K^{\text{SoR}}_{XX} + \sigma^2 I$ and to compute $\log|\tilde K^{\text{SoR}}_{XX}|$ in $O(m^2 n + m^3)$ time. Another popular kernel approximation is the fully independent training conditional (FITC), which is a diagonal correction of SoR so that the diagonal is the same as for the original kernel [15]. Thus kernel matrices from FITC have low-rank plus diagonal structure. This modification has had exceptional practical significance, leading to improved point predictions and much more realistic predictive uncertainty [8, 28], making FITC arguably the most popular approach for scalable Gaussian processes.

Wilson and Nickisch [13] provide a mechanism for fast MVMs by proposing the structured kernel interpolation (SKI) approximation,
$$K_{XX} \approx W K_{UU} W^T \quad (2)$$
where $W$ is an $n \times m$ matrix of interpolation weights; the authors of [13] use local cubic interpolation so that $W$ is sparse.
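The exact evaluation of Eq. (1) via the $O(n^3)$ Cholesky factorization, which the methods in this paper are designed to avoid at scale, can be sketched in a few lines of NumPy. This is our own illustrative sketch, with a zero mean function; `rbf_kernel` and the synthetic data are our choices, not from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, outputscale=1.0):
    """RBF (squared exponential) kernel matrix, a common choice of k(x, z)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return outputscale * np.exp(-0.5 * d2 / lengthscale**2)

def log_marginal_likelihood(X, y, lengthscale, outputscale, noise):
    """Exact evaluation of Eq. (1) with a zero mean function,
    via the O(n^3) Cholesky factorization the paper seeks to avoid."""
    n = len(y)
    K = rbf_kernel(X, X, lengthscale, outputscale) + noise * np.eye(n)
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    logdet = 2.0 * np.log(np.diag(L)).sum()              # log|K| from Cholesky
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(log_marginal_likelihood(X, y, lengthscale=1.0, outputscale=1.0, noise=0.01))
```

The $\log|\tilde K_{XX}|$ term computed here from the Cholesky diagonal is exactly the quantity the stochastic estimators of §3 replace with MVM-only computations.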
The sparsity in $W$ makes it possible to naturally exploit algebraic structure (such as Kronecker or Toeplitz structure) in $K_{UU}$ when the inducing points $U$ are on a grid, for extremely fast matrix-vector multiplications with the approximate $K_{XX}$, even if the data inputs $X$ are arbitrarily located. For instance, if $K_{UU}$ is Toeplitz, then each MVM with the approximate $K_{XX}$ costs only $O(n + m\log m)$. By contrast, placing the inducing points $U$ on a grid for classical inducing point methods, such as SoR or FITC, does not result in substantial performance gains, due to the costly cross-covariance matrices $K_{xU}$ and $K_{Uz}$.

3 Methods

Our goal is to estimate, for a symmetric positive definite matrix $\tilde K$,
$$\log|\tilde K| = \operatorname{tr}(\log(\tilde K)) \quad \text{and} \quad \frac{\partial}{\partial\theta_i}\left[\log|\tilde K|\right] = \operatorname{tr}\!\left(\tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_i}\right),$$
where $\log$ is the matrix logarithm [29]. We compute the traces involved in both the log determinant and its derivative via stochastic trace estimators [30], which approximate the trace of a matrix using only matrix-vector products.

The key idea is that for a given matrix $A$ and a random probe vector $z$ with independent entries with mean zero and variance one, $\operatorname{tr}(A) = \mathbb{E}[z^T A z]$; a common choice is to let the entries of the probe vectors be Rademacher random variables. In practice, we estimate the trace by the sample mean over $n_z$ independent probe vectors. Often surprisingly few probe vectors suffice.

To estimate $\operatorname{tr}(\log(\tilde K))$, we need to multiply $\log(\tilde K)$ by probe vectors. We consider two ways to estimate $\log(\tilde K)z$: by a polynomial approximation of $\log$, or by using the connection between the Gaussian quadrature rule and the Lanczos method [19, 20]. In both cases, we show how to re-use the same probe vectors for an inexpensive coupled estimator of the derivatives. In addition, we may use standard radial basis function interpolation of the log determinant, evaluated at a few systematically chosen points in the hyperparameter space, as an inexpensive surrogate for the log determinant.
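The stochastic trace estimator at the heart of the method can be sketched generically; this is our own minimal Hutchinson-style sketch, assuming only a `matvec` closure, not the paper's released code.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=30, rng=None):
    """Stochastic trace estimator: tr(A) = E[z^T A z], using Rademacher
    probe vectors and only matrix-vector products with A."""
    rng = np.random.default_rng(rng)
    estimates = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        estimates.append(z @ matvec(z))
    return np.mean(estimates)

# Illustration on a dense SPD matrix (in the paper's setting, matvec would
# be a fast structured MVM and A an implicit matrix such as log(K)).
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
A = A @ A.T / 500 + np.eye(500)               # symmetric positive definite
est = hutchinson_trace(lambda v: A @ v, 500, num_probes=200, rng=2)
print(est, np.trace(A))                       # the two should be close
```

The estimator never forms $A$ explicitly; it only needs $n_z$ matrix-vector products, which is what makes it compatible with SKI-style fast MVMs.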
3.1 Chebyshev

Chebyshev polynomials are defined by the recursion
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_{j+1}(x) = 2xT_j(x) - T_{j-1}(x) \ \text{for } j \ge 1.$$
For $f : [-1,1] \to \mathbb{R}$, the Chebyshev interpolant of degree $m$ is
$$f(x) \approx p_m(x) := \sum_{j=0}^m c_j T_j(x), \quad \text{where} \quad c_j = \frac{2-\delta_{j0}}{m+1}\sum_{k=0}^m f(x_k)T_j(x_k),$$
where $\delta_{j0}$ is the Kronecker delta and $x_k = \cos(\pi(k+1/2)/(m+1))$ for $k = 0, 1, 2, \ldots, m$; see [31]. Using the Chebyshev interpolant of $\log(1+\alpha x)$, we approximate $\log|\tilde K|$ via
$$\log|\tilde K| - n\log\beta = \log|I + \alpha B| \approx \sum_{j=0}^m c_j \operatorname{tr}(T_j(B))$$
when $B = (\tilde K/\beta - I)/\alpha$ has eigenvalues $\lambda_i \in (-1, 1)$.

For stochastic estimation of $\operatorname{tr}(T_j(B))$, we only need to compute $z^T T_j(B)z$ for each given probe vector $z$. We compute the vectors $w_j = T_j(B)z$ and $\partial w_j/\partial\theta_i$ via the coupled recurrences
$$w_0 = z, \quad w_1 = Bz, \quad w_{j+1} = 2Bw_j - w_{j-1} \ \text{for } j \ge 1,$$
$$\frac{\partial w_0}{\partial\theta_i} = 0, \quad \frac{\partial w_1}{\partial\theta_i} = \frac{\partial B}{\partial\theta_i}z, \quad \frac{\partial w_{j+1}}{\partial\theta_i} = 2\left(\frac{\partial B}{\partial\theta_i}w_j + B\frac{\partial w_j}{\partial\theta_i}\right) - \frac{\partial w_{j-1}}{\partial\theta_i} \ \text{for } j \ge 1.$$
This gives the estimators
$$\log|\tilde K| \approx \mathbb{E}\left[\sum_{j=0}^m c_j z^T w_j\right] \quad \text{and} \quad \frac{\partial}{\partial\theta_i}\log|\tilde K| \approx \mathbb{E}\left[\sum_{j=0}^m c_j z^T \frac{\partial w_j}{\partial\theta_i}\right].$$
Thus, each derivative of the approximation costs two extra MVMs per term.

3.2 Lanczos

We can also approximate $z^T\log(\tilde K)z$ via a Lanczos decomposition; see [32] for a discussion of Lanczos-based computation of $z^T f(\tilde K)z$, and [20, 21] for stochastic Lanczos estimation of log determinants. We run $m$ steps of the Lanczos algorithm, which computes the decomposition
$$\tilde K Q_m = Q_m T + \beta_m q_{m+1} e_m^T$$
where $Q_m = [q_1\ q_2\ \ldots\ q_m] \in \mathbb{R}^{n\times m}$ is a matrix with orthonormal columns such that $q_1 = z/\|z\|$, $T \in \mathbb{R}^{m\times m}$ is tridiagonal, $\beta_m$ is the residual, and $e_m$ is the $m$th Cartesian unit vector. We estimate
$$z^T f(\tilde K)z \approx e_1^T f(\|z\|^2 T)e_1 \quad (3)$$
where $e_1$ is the first column of the identity. The Lanczos algorithm is numerically unstable; several practical implementations resolve this issue [33, 34]. The approximation (3) corresponds to a Gauss quadrature rule for the Riemann–Stieltjes integral of the measure associated with the eigenvalue distribution of $\tilde K$. It is exact when $f$ is a polynomial of degree up to $2m-1$.
This approximation is also exact when $\tilde K$ has at most $m$ distinct eigenvalues, which is particularly relevant to Gaussian process regression, since kernel matrices frequently have only a small number of eigenvalues that are not close to zero.

The Lanczos decomposition also allows us to estimate derivatives of the log determinant at minimal cost. Via the Lanczos decomposition, we have
$$\hat g = Q_m(T^{-1}e_1\|z\|) \approx \tilde K^{-1}z.$$
This approximation requires no additional matrix-vector multiplications beyond those used to compute the Lanczos decomposition, which we already used to estimate $\log(\tilde K)z$; in exact arithmetic, this is equivalent to $m$ steps of CG. Computing $\hat g$ in this way takes $O(mn)$ additional time; subsequently, we only need one matrix-vector multiply by $\partial\tilde K/\partial\theta_i$ for each probe vector to estimate
$$\operatorname{tr}\!\left(\tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_i}\right) = \mathbb{E}\left[(\tilde K^{-1}z)^T\frac{\partial\tilde K}{\partial\theta_i}z\right].$$

3.3 Diagonal correction to SKI

The SKI approximation may provide a poor estimate of the diagonal entries of the original kernel matrix for kernels with limited smoothness, such as the Matérn kernel. In general, diagonal corrections to scalable kernel approximations can lead to great performance gains. Indeed, the popular FITC method [15] is exactly a diagonal correction of subset of regressors (SoR). We thus modify the SKI approximation to add a diagonal matrix $D$,
$$K_{XX} \approx WK_{UU}W^T + D, \quad (4)$$
such that the diagonal of the approximated $K_{XX}$ is exact. In other words, $D$ subtracts the diagonal of $WK_{UU}W^T$ and adds the true diagonal of $K_{XX}$. This modification is not possible for the scaled eigenvalue method for approximating log determinants in [13], since adding a diagonal matrix makes it impossible to approximate the eigenvalues of $K_{XX}$ from the eigenvalues of $K_{UU}$. However, Eq. (4) still admits fast MVMs and thus works with our approach for estimating the log determinant and its derivatives. Computing $D$ with SKI costs only $O(n)$ flops, since $W$ is sparse for local cubic interpolation: each entry $(W^T e_i)^T K_{UU}(W^T e_i)$ can be computed in $O(1)$ flops.
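The two estimators of §3.1 and §3.2 can be sketched in NumPy as follows. This is our own illustrative sketch, not the paper's released code: we form the kernel matrix densely and obtain spectral bounds with `eigvalsh` (in production one would use only fast MVMs and a few Lanczos steps for the bounds), and the Lanczos estimate uses the standard Gauss-quadrature form $\|z\|^2\sum_k \tau_k^2 \log\theta_k$ of Eq. (3).

```python
import numpy as np

def chebyshev_logdet(K, num_probes=50, degree=80, rng=None):
    """Stochastic Chebyshev estimate of log|K| (Section 3.1 style):
    interpolate log(1 + a*x) on [-1, 1] and apply it via the three-term
    recurrence w_{j+1} = 2*B*w_j - w_{j-1}, using only MVMs with B."""
    n = K.shape[0]
    rng = np.random.default_rng(rng)
    ev = np.linalg.eigvalsh(K)     # spectral bounds; in practice: a few Lanczos steps
    beta = (ev[-1] + ev[0]) / 2
    a = (ev[-1] - ev[0]) / (ev[-1] + ev[0]) * 1.01   # margin keeps spec(B) in (-1, 1)
    B = (K / beta - np.eye(n)) / a
    k = np.arange(degree + 1)
    nodes = np.cos(np.pi * (k + 0.5) / (degree + 1))  # Chebyshev nodes x_k
    fk = np.log(1 + a * nodes)
    c = np.array([(2.0 - (j == 0)) / (degree + 1)
                  * np.sum(fk * np.cos(j * np.pi * (k + 0.5) / (degree + 1)))
                  for j in range(degree + 1)])        # coefficients c_j
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        w_prev, w = z, B @ z
        acc = c[0] * (z @ z) + c[1] * (z @ w)
        for j in range(2, degree + 1):
            w_prev, w = w, 2 * (B @ w) - w_prev       # w_j = T_j(B) z
            acc += c[j] * (z @ w)
        total += acc
    return n * np.log(beta) + total / num_probes

def lanczos_logdet(K, num_probes=50, steps=30, rng=None):
    """Stochastic Lanczos quadrature for log|K| (Section 3.2 style),
    with full reorthogonalization for numerical stability."""
    n = K.shape[0]
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        Q = np.zeros((n, steps))
        a_diag, b_off = np.zeros(steps), np.zeros(steps - 1)
        q = z / np.linalg.norm(z)
        Q[:, 0] = q
        r = K @ q
        a_diag[0] = q @ r
        r -= a_diag[0] * q
        k = steps
        for j in range(1, steps):
            b = np.linalg.norm(r)
            if b < 1e-10:                # lucky breakdown: Krylov space exhausted
                k = j
                break
            q = r / b
            q -= Q[:, :j] @ (Q[:, :j].T @ q)   # full reorthogonalization
            q /= np.linalg.norm(q)
            Q[:, j] = q
            b_off[j - 1] = b
            r = K @ q - b * Q[:, j - 1]
            a_diag[j] = q @ r
            r -= a_diag[j] * q
        T = (np.diag(a_diag[:k]) + np.diag(b_off[:k - 1], 1)
             + np.diag(b_off[:k - 1], -1))
        theta, V = np.linalg.eigh(T)      # Ritz values and vectors
        # Gauss quadrature: z^T log(K) z ~= ||z||^2 * sum_k tau_k^2 log(theta_k)
        total += (z @ z) * np.sum(V[0, :] ** 2 * np.log(theta))
    return total / num_probes

rng = np.random.default_rng(3)
A = rng.standard_normal((300, 300))
K = A @ A.T / 300 + np.eye(300)           # well-conditioned SPD test matrix
exact = np.linalg.slogdet(K)[1]
print(exact, chebyshev_logdet(K, rng=4), lanczos_logdet(K, rng=5))
```

Both functions touch $K$ only through matrix-vector products inside the probe loop, so a dense `K` could be replaced by any fast MVM routine, e.g. a SKI-structured operator.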
3.4 Estimating higher derivatives

We have already described how to use stochastic estimators to compute the log marginal likelihood and its first derivatives. The same approach applies to computing higher-order derivatives for a Newton-like iteration, to understand the sensitivity of the maximum likelihood parameters, or for similar tasks. The first derivatives of the full log marginal likelihood are
$$\frac{\partial\mathcal{L}}{\partial\theta_i} = -\frac{1}{2}\left[\operatorname{tr}\!\left(\tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_i}\right) - \alpha^T\frac{\partial\tilde K}{\partial\theta_i}\alpha\right]$$
and the second derivatives of the two terms are
$$\frac{\partial^2}{\partial\theta_i\partial\theta_j}\left[\log|\tilde K|\right] = \operatorname{tr}\!\left(\tilde K^{-1}\frac{\partial^2\tilde K}{\partial\theta_i\partial\theta_j} - \tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_i}\tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_j}\right),$$
$$\frac{\partial^2}{\partial\theta_i\partial\theta_j}\left[(y-\mu_X)^T\alpha\right] = 2\alpha^T\frac{\partial\tilde K}{\partial\theta_i}\tilde K^{-1}\frac{\partial\tilde K}{\partial\theta_j}\alpha - \alpha^T\frac{\partial^2\tilde K}{\partial\theta_i\partial\theta_j}\alpha.$$
Superficially, evaluating the second derivatives would appear to require several additional solves above and beyond those used to estimate the first derivatives of the log determinant. In fact, we can get an unbiased estimator for the second derivatives with no additional solves, but only fast products with the derivatives of the kernel matrices. Let $z$ and $w$ be independent probe vectors, and define $g = \tilde K^{-1}z$ and $h = \tilde K^{-1}w$. Then
$$\frac{\partial^2}{\partial\theta_i\partial\theta_j}\left[\log|\tilde K|\right] = \mathbb{E}\left[g^T\frac{\partial^2\tilde K}{\partial\theta_i\partial\theta_j}z - \left(g^T\frac{\partial\tilde K}{\partial\theta_i}w\right)\left(h^T\frac{\partial\tilde K}{\partial\theta_j}z\right)\right],$$
$$\frac{\partial^2}{\partial\theta_i\partial\theta_j}\left[(y-\mu_X)^T\alpha\right] = 2\,\mathbb{E}\left[\left(z^T\frac{\partial\tilde K}{\partial\theta_i}\alpha\right)\left(g^T\frac{\partial\tilde K}{\partial\theta_j}\alpha\right)\right] - \alpha^T\frac{\partial^2\tilde K}{\partial\theta_i\partial\theta_j}\alpha.$$
Hence, if we use the stochastic Lanczos method to compute the log determinant and its derivatives, the additional work required to obtain a second derivative estimate is one MVM by each second partial of the kernel for each probe vector and for $\alpha$, one MVM of each first partial of the kernel with $\alpha$, and a few dot products.

3.5 Radial basis functions

Another way to deal with the log determinant and its derivatives is to evaluate the log determinant term at a few systematically chosen points in the space of hyperparameters and fit an interpolation approximation to these values.
This is particularly useful when the kernel depends on a modest number of hyperparameters (e.g., half a dozen), so that the number of points we need to precompute is relatively small. We refer to this method as a surrogate, since it provides an inexpensive substitute for the log determinant and its derivatives. For our surrogate approach, we use radial basis function (RBF) interpolation with a cubic kernel and a linear tail. See e.g. [35–38] and the supplementary material for more details on RBF interpolation.

4 Error properties

In addition to the usual errors from sources such as solver termination criteria and floating point arithmetic, our approach to kernel learning involves several additional sources of error: we approximate the true kernel with one that enables fast MVMs, we approximate traces using stochastic estimation, and we approximate the actions of $\log(\tilde K)$ and $\tilde K^{-1}$ on probe vectors.

We can compute first-order estimates of the sensitivity of the log likelihood to perturbations in the kernel using the same stochastic estimators we use for the derivatives with respect to hyperparameters. For example, if $\mathcal{L}_{\text{ref}}$ is the likelihood for a reference kernel $\tilde K_{\text{ref}} = \tilde K + E$, then
$$\mathcal{L}_{\text{ref}}(\theta\mid y) = \mathcal{L}(\theta\mid y) - \frac{1}{2}\,\mathbb{E}\left[g^T E z - \alpha^T E\alpha\right] + O(\|E\|^2),$$
and we can bound the change in likelihood at first order by $\|E\|\left(\|g\|\|z\| + \|\alpha\|^2\right)$. Given bounds on the norms of $\partial E/\partial\theta_i$, we can similarly estimate changes in the gradient of the likelihood, allowing us to bound how the marginal likelihood hyperparameter estimates depend on kernel approximations.

If $\tilde K = U\Lambda U^T + \sigma^2 I$, the Hutchinson trace estimator has known variance [39]
$$\operatorname{Var}[z^T\log(\tilde K)z] = \sum_{i\neq j}[\log(\tilde K)]_{ij}^2 \le \sum_{i=1}^n \log(1+\lambda_i/\sigma^2)^2.$$
If the eigenvalues of the kernel matrix without noise decay rapidly enough compared to $\sigma$, the variance will be small compared to the magnitude of $\operatorname{tr}(\log\tilde K) = 2n\log\sigma + \sum_{i=1}^n \log(1+\lambda_i/\sigma^2)$.
Hence, we need fewer probe vectors to obtain reasonable accuracy than one would expect from bounds that are blind to the matrix structure. In our experiments, we typically only use 5–10 probes, and we use the sample variance across these probes to estimate a posteriori the stochastic component of the error in the log likelihood computation. If we are willing to estimate the Hessian of the log likelihood, we can increase the rate of convergence for finding kernel hyperparameters.

The Chebyshev approximation scheme requires $O(\sqrt{\kappa}\log(\kappa/\epsilon))$ steps to obtain an $O(\epsilon)$ approximation error in computing $z^T\log(\tilde K)z$, where $\kappa = \lambda_{\max}/\lambda_{\min}$ is the condition number of $\tilde K$ [19]. This behavior is independent of the distribution of eigenvalues within the interval $[\lambda_{\min}, \lambda_{\max}]$, and is close to optimal when eigenvalues are spread quasi-uniformly across the interval. Nonetheless, when the condition number is large, convergence may be quite slow. The Lanczos approach converges at least twice as fast as Chebyshev in general [20, Remark 1], and converges much more rapidly when the eigenvalues are not uniform within the interval, as is the case with log determinants of many kernel matrices. Hence, we recommend the Lanczos approach over the Chebyshev approach in general. In all of our experiments, the error associated with approximating $z^T\log(\tilde K)z$ by Lanczos was dominated by other sources of error.
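The claim that few probes suffice when the noiseless eigenvalues decay quickly can be checked numerically. The following is a small illustration of ours (not an experiment from the paper): for a synthetic kernel with geometrically decaying spectrum plus noise, batches of only 5 Rademacher probes already pin down $\operatorname{tr}(\log\tilde K)$ to within a fraction of a percent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 400, 0.1
# Synthetic K = U diag(lam) U^T + sigma^2 I with rapidly decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = 10.0 * 0.5 ** np.arange(n)              # fast eigenvalue decay
K = (U * lam) @ U.T + sigma2 * np.eye(n)

# log(K) formed exactly from the eigendecomposition, for reference only.
logK = (U * np.log(lam + sigma2)) @ U.T
exact = np.trace(logK)

# Hutchinson estimates from batches of only 5 Rademacher probes each.
batches = []
for _ in range(20):
    probes = rng.choice([-1.0, 1.0], size=(5, n))
    batches.append(np.mean(np.einsum('pi,ij,pj->p', probes, logK, probes)))
print(exact, np.mean(batches), np.std(batches))
```

The spread across batches mirrors the variance bound above: because $[\log\tilde K]$ is dominated by its diagonal when the spectrum decays fast relative to $\sigma^2$, the off-diagonal mass that drives the Hutchinson variance is small.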
5 Experiments

We test our stochastic trace estimator with both Chebyshev and Lanczos approximation schemes on: (1) a sound time series with missing data, using a GP with an RBF kernel; (2) a three-dimensional space-time precipitation data set with over half a million training points, using a GP with an RBF kernel; (3) a two-dimensional tree growth data set, using a log-Gaussian Cox process model with an RBF kernel; (4) a three-dimensional space-time crime dataset, using a log-Gaussian Cox model with Matérn 3/2 and spectral mixture kernels; and (5) a high-dimensional feature space, using the deep kernel learning framework [24]. In the supplementary material we also include several additional experiments to illustrate particular aspects of our approach, including kernel hyperparameter recovery, diagonal corrections (Section 3.3), and surrogate methods (Section 3.5). Throughout we use the SKI method [13] of Eq. (2) for fast MVMs. We find that the Lanczos and surrogate methods are able to do kernel recovery and inference significantly faster and more accurately than competing methods.

5.1 Natural sound modeling

Here we consider the natural sound benchmark in [13], shown in Figure 1(a). Our goal is to recover contiguous missing regions in a waveform with n = 59,306 training points. We exploit Toeplitz structure in the K_UU matrix of our SKI approximate kernel for accelerated MVMs. The experiment in [13] only considered scalable inference and prediction, but not hyperparameter learning, since the scaled eigenvalue approach requires all the eigenvalues of an m × m Toeplitz matrix, which can be computationally prohibitive with cost O(m^2). However, evaluating the marginal likelihood on this training set is not an obstacle for Lanczos and Chebyshev, since we can use fast MVMs with the SKI approximation at a cost of O(n + m log m).
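The O(n + m log m) cost rests on fast Toeplitz MVMs with K_UU. A standard circulant-embedding sketch (ours, not code from the paper) shows how such an MVM is done with FFTs:

```python
import numpy as np

def toeplitz_matvec(c, v):
    """Multiply a symmetric Toeplitz matrix (first column c) by v in
    O(m log m), by embedding it in a circulant matrix and using the FFT.
    This is the kind of fast MVM SKI exploits when K_UU comes from a
    regular 1D grid of inducing points."""
    m = len(c)
    # First column of the (2m-2)-circulant embedding: [c_0..c_{m-1}, c_{m-2}..c_1].
    circ = np.concatenate([c, c[-2:0:-1]])
    v_pad = np.concatenate([v, np.zeros(m - 2)])
    out = np.fft.irfft(np.fft.rfft(circ) * np.fft.rfft(v_pad), n=2 * m - 2)
    return out[:m]   # leading block of the circulant product is the Toeplitz product

# Example: an RBF kernel on a regular 1D grid gives a Toeplitz K_UU.
grid = np.linspace(0, 1, 512)
c = np.exp(-0.5 * (grid - grid[0]) ** 2 / 0.1**2)   # first column of K_UU
v = np.sin(7 * grid)
fast = toeplitz_matvec(c, v)
```

Only the first column of K_UU is ever stored, which is also why the quadratic storage mentioned earlier never materializes for Toeplitz structure.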
In Figure 1(b), we show how the Lanczos, Chebyshev and surrogate approaches scale with the number of inducing points m compared to the scaled eigenvalue method and FITC. We use 5 probe vectors and 25 iterations for Lanczos, both when building the surrogate and for hyperparameter learning with Lanczos. We also use 5 probe vectors and 100 moments for Chebyshev. Figure 1(b) shows the runtime of the hyperparameter learning phase for different numbers of inducing points m, where Lanczos and the surrogate are clearly more efficient than scaled eigenvalues and Chebyshev. For hyperparameter learning, FITC took several hours to run, compared to minutes for the alternatives; we therefore exclude FITC from Figure 1(b). Figure 1(c) shows the time to do inference on the 691 test points, while Figure 1(d) shows the standardized mean absolute error (SMAE) on the same test points. As expected, Lanczos and the surrogate make accurate predictions much faster than Chebyshev, scaled eigenvalues, and FITC. In short, Lanczos and the surrogate approach are much faster than the alternatives for hyperparameter learning with a large number of inducing points and training points.

[Figure 1: Sound modeling using 59,306 training points and 691 test points. (a) The intensity of the time series. (b) Training time for RBF kernel hyperparameters. (c) Time for inference. (d) Standardized mean absolute error (SMAE) as a function of time for an evaluation of the marginal likelihood and all derivatives. Surrogate is (——), Lanczos is (- - -), Chebyshev is (— ⋄ —), scaled eigenvalues is (— + —), and FITC is (— o —).]
5.2 Daily precipitation prediction

This experiment involves precipitation data from the year of 2010, collected from around 5500 weather stations in the US.¹ The hourly precipitation data is preprocessed into daily data if full information for the day is available. The dataset has 628,474 entries in terms of precipitation per day given the date, longitude and latitude. We randomly select 100,000 data points as test points and use the remaining points for training. We then perform hyperparameter learning and prediction with the RBF kernel, using Lanczos, scaled eigenvalues, and exact methods.

For Lanczos and scaled eigenvalues, we optimize the hyperparameters on the subset of data for January 2010, with an induced grid of 100 points per spatial dimension and 300 in the temporal dimension. Due to memory constraints we only use a subset of 12,000 entries for training with the exact method. While scaled eigenvalues can perform well when fast eigendecompositions are possible, as in this experiment, Lanczos nonetheless still runs faster and with slightly lower MSE.

Method               n      m     MSE     Time [min]
Lanczos              528k   3M    0.613   14.3
Scaled eigenvalues   528k   3M    0.621   15.9
Exact                12k    --    0.903   11.8

Table 1: Prediction comparison for the daily precipitation data, showing the number of training points n, the number of induced grid points m, the mean squared error, and the inference time.

Incidentally, we are able to use 3 million inducing points for Lanczos and scaled eigenvalues, which is enabled by the SKI representation [13] of covariance matrices, for a very accurate approximation. This number of inducing points m is unprecedented for typical alternatives, which scale as O(m^3).

¹https://catalog.data.gov/dataset/u-s-hourly-precipitation-data

5.3 Hickory data

In this experiment, we apply Lanczos to the log-Gaussian Cox process model with a Laplace approximation for the posterior distribution. We use the RBF kernel and the Poisson likelihood in our model.
The scaled eigenvalue method does not apply directly to non-Gaussian likelihoods; we thus applied the scaled eigenvalue method in [13] in conjunction with the Fiedler bound in [18] for the scaled eigenvalue comparison. Indeed, a key advantage of the Lanczos approach is that it can be applied whenever fast MVMs are available, which means no additional approximations, such as the Fiedler bound, are required for non-Gaussian likelihoods.

This dataset, which comes from the R package spatstat, is a point pattern of 703 hickory trees in a forest in Michigan. We discretize the area into a 60 × 60 grid and fit our model with exact, scaled eigenvalue, and Lanczos methods. We see in Table 2 that Lanczos recovers hyperparameters that are much closer to the exact values than the scaled eigenvalue approach. Figure 2 shows that the predictions by Lanczos are also indistinguishable from the exact computation.

Method               sf      ℓ1      ℓ2      −log p(y|θ)   Time [s]
Exact                0.696   0.063   0.085   1827.56       465.9
Lanczos              0.693   0.066   0.096   1828.07       21.4
Scaled eigenvalues   0.543   0.237   0.112   1851.69       2.5

Table 2: Hyperparameters recovered on the Hickory dataset.

[Figure 2: Predictions by exact, scaled eigenvalues, and Lanczos on the Hickory dataset: (a) point pattern data, (b) prediction by exact, (c) scaled eigenvalues, (d) Lanczos.]

5.4 Crime prediction

In this experiment, we apply Lanczos with the spectral mixture kernel to the crime forecasting problem considered in [18]. This dataset consists of 233,088 incidents of assault in Chicago from January 1, 2004 to December 31, 2013. We use the first 8 years for training and attempt to predict the crime rate for the last 2 years. For the spatial dimensions, we use the log-Gaussian Cox process model, with the Matérn-5/2 kernel, the negative binomial likelihood, and the Laplace approximation for the posterior. We use a spectral mixture kernel with 20 components and an extra constant component for the temporal dimension.
We discretize the data into a 17 × 26 spatial grid corresponding to 1-by-1 mile grid cells. In the temporal dimension we aggregate our data by weeks, for a total of 522 weeks. After removing the cells that are outside Chicago, we have a total of 157,644 observations.

The results for Lanczos and scaled eigenvalues (in conjunction with the Fiedler bound, due to the non-Gaussian likelihood) can be seen in Table 3. The Lanczos method used 5 Hutchinson probe vectors and 30 Lanczos steps. For both methods we allow 100 iterations of LBFGS to recover the hyperparameters, and we often observe early convergence. While the RMSE for Lanczos and scaled eigenvalues happen to be close on this example, the recovered hyperparameters using scaled eigenvalues are very different from those for Lanczos. For example, the scaled eigenvalue method learns a much larger σ² than Lanczos, indicating model misspecification. In general, as the data become increasingly non-Gaussian, the Fiedler bound (used for fast scaled eigenvalues on non-Gaussian likelihoods) will become increasingly misspecified, while Lanczos will be unaffected.

Method               ℓ1     ℓ2     σ²       T_recovery [s]   T_prediction [s]   RMSE_train   RMSE_test
Lanczos              0.65   0.67   69.72    264              10.30              1.17         1.33
Scaled eigenvalues   0.32   0.10   191.17   67               3.75               1.19         1.36

Table 3: Hyperparameters recovered, recovery time, and RMSE for Lanczos and scaled eigenvalues on the Chicago assault data. Here ℓ1 and ℓ2 are the length scales in the spatial dimensions and σ² is the noise level. T_recovery is the time for recovering hyperparameters. T_prediction is the time for prediction at all 157,644 observations (including training and testing).

5.5 Deep kernel learning

To handle high-dimensional datasets, we bring our methods into the deep kernel learning framework [24] by replacing the final layer of a pre-trained deep neural network (DNN) with a GP. This experiment uses the gas sensor dataset from the UCI machine learning repository. It has 2565 instances with 128 dimensions.
We pre-train a DNN, then attach a Gaussian process with an RBF kernel to the two-dimensional output of the second-to-last layer. We then further train all parameters of the resulting kernel, including the weights of the DNN, through the GP marginal likelihood. In this example, Lanczos and the scaled eigenvalue approach perform similarly well. Nonetheless, we see that Lanczos can effectively be used with SKI on a high-dimensional problem to train hundreds of thousands of kernel parameters.

Method     DNN               Lanczos           Scaled eigenvalues
RMSE       0.1366 ± 0.0387   0.1053 ± 0.0248   0.1045 ± 0.0228
Time [s]   0.4438            2.0680            1.6320

Table 4: Prediction RMSE and per-training-iteration runtime.

6 Discussion

There are many cases in which fast MVMs can be achieved, but it is difficult or impossible to efficiently compute a log determinant. We have developed a framework for scalable and accurate estimates of the log determinant and its derivatives, relying only on MVMs. We particularly consider scalable kernel learning, showing the promise of stochastic Lanczos estimation combined with a pre-computed surrogate model.

We have shown the scalability and flexibility of our approach through experiments with kernel learning for several real-world data sets, using both Gaussian and non-Gaussian likelihoods, and highly parametrized deep kernels. Iterative MVM approaches have great promise for future exploration. We have only begun to explore their significant generality. In addition to log determinants, the methods presented here could be adapted to fast posterior sampling, diagonal estimation, matrix square roots, and many other standard operations. The proposed methods depend only on fast MVMs, and the structure necessary for fast MVMs often exists or can be readily created. We have here made use of SKI [13] to create such structure, but other approaches, such as stochastic variational methods [10], could be used or combined with SKI for fast MVMs, as in [26].
Moreover, iterative MVM methods naturally harmonize with GPU acceleration, and are therefore likely to increase in their future applicability and popularity. Finally, one could explore the ideas presented here for scalable higher-order derivatives, making use of Hessian methods for greater convergence rates.

References

[1] Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3):123–286, 2012.

[2] David JC MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1992.

[3] David JC MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003.

[4] Havard Rue and Leonhard Held. Gaussian Markov random fields: theory and applications. CRC Press, 2005.

[5] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.

[6] Christos Boutsidis, Petros Drineas, Prabhanjan Kambadur, Eugenia-Maria Kontopoulou, and Anastasios Zouzias. A randomized algorithm for approximating the log determinant of a symmetric positive definite matrix. arXiv preprint arXiv:1503.00374, 2015.

[7] Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In Neural Information Processing Systems (NIPS), 2001.

[8] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939–1959, 2005.

[9] Q. Le, T. Sarlos, and A. Smola. Fastfood: computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, pages 244–252, 2013.

[10] J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence (UAI). AUAI Press, 2013.

[11] Andrew Gordon Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[12] Andrew Gordon Wilson, Elad Gilboa, Nehorai Arye, and John P Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pages 3626–3634, 2014.

[13] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). International Conference on Machine Learning (ICML), 2015.

[14] William Herlands, Andrew Wilson, Hannes Nickisch, Seth Flaxman, Daniel Neill, Wilbert Van Panhuis, and Eric Xing. Scalable Gaussian processes for characterizing multidimensional change surfaces. Artificial Intelligence and Statistics, 2016.

[15] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems (NIPS), volume 18, page 1257. MIT Press, 2006.

[16] M. Fiedler. Hankel and Loewner matrices. Linear Algebra and Its Applications, 58:75–95, 1984.

[17] Hermann Weyl. Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71(4):441–479, 1912.

[18] Seth Flaxman, Andrew Wilson, Daniel Neill, Hannes Nickisch, and Alex Smola. Fast Kronecker inference in Gaussian processes with non-Gaussian likelihoods. In International Conference on Machine Learning, pages 607–616, 2015.

[19] Insu Han, Dmitry Malioutov, and Jinwoo Shin. Large-scale log-determinant computation through stochastic Chebyshev expansions. In ICML, pages 908–917, 2015.

[20] Shashanka Ubaru, Jie Chen, and Yousef Saad. Fast estimation of tr(F(A)) via stochastic Lanczos quadrature.

[21] Zhaojun Bai, Mark Fahey, Gene H Golub, M Menon, and E Richter. Computing partial eigenvalue sums in electronic structure calculations. Technical Report SCCM-98-03, Stanford University, 1998.

[22] D MacKay and MN Gibbs. Efficient implementation of Gaussian processes. Neural Computation, 1997.
[23] Michael L Stein, Jie Chen, Mihai Anitescu, et al. Stochastic approximation of score functions for gaussian processes. The Annals of Applied Statistics, 7(2):1162–1191, 2013. [24] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 370–378, 2016. [25] Carl Edward Rasmussen and Hannes Nickisch. Gaussian processes for machine learning (GPML) toolbox. Journal of Machine Learning Research (JMLR), 11:3011–3015, Nov 2010. [26] Andrew G Wilson, Zhiting Hu, Ruslan R Salakhutdinov, and Eric P Xing. Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems, pages 2586–2594, 2016. [27] Bernhard W Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society. Series B (Methodological), pages 1–52, 1985. [28] Joaquin Quinonero-Candela, Carl Edward Rasmussen, and Christopher KI Williams. Approximation methods for Gaussian process regression. Large-scale kernel machines, pages 203–223, 2007. [29] Nicholas J Higham. Functions of matrices: theory and computation. SIAM, 2008. [30] Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 19(2):433–450, 1990. [31] Amparo Gil, Javier Segura, and Nico Temme. Numerical Methods for Special Functions. SIAM, 2007. [32] Gene Golub and Gérard Meurant. Matrices, Moments and Quadrature with Applications. Princeton University Press, 2010. [33] Jane K Cullum and Ralph A Willoughby. Lanczos algorithms for large symmetric eigenvalue computations: Vol. I: Theory. SIAM, 2002. [34] Youcef Saad. Numerical methods for large eigenvalue problems. Manchester University Press, 1992. [35] Martin Dietrich Buhmann. Radial basis functions. Acta Numerica 2000, 9:1–38, 2000. 
[36] Gregory E Fasshauer. Meshfree approximation methods with MATLAB, volume 6. World Scientific, 2007. [37] Robert Schaback and Holger Wendland. Kernel techniques: from machine learning to meshless methods. Acta Numerica, 15:543–639, 2006. [38] Holger Wendland. Scattered data approximation, volume 17. Cambridge university press, 2004. [39] Haim Avron and Sivan Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. J. ACM, 58(2):8:1–8:34, 2011. doi: 10.1145/1944345. 1944349. URL http://dx.doi.org/10.1145/1944345.1944349. 11 | 2017 | 397 |
Fair Clustering Through Fairlets

Flavio Chierichetti (Dipartimento di Informatica, Sapienza University, Rome, Italy), Ravi Kumar (Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043), Silvio Lattanzi (Google Research, 76 9th Ave, New York, NY 10011), Sergei Vassilvitskii (Google Research, 76 9th Ave, New York, NY 10011)

Abstract

We study the question of fair clustering under the disparate impact doctrine, where each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the k-median objectives, and show that even with two protected classes the problem is challenging, as the optimum solution can violate common conventions: for instance, a point may no longer be assigned to its nearest cluster center! En route we introduce the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms. While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow. We empirically demonstrate the price of fairness by quantifying the value of fair clustering on real-world datasets with sensitive attributes.

1 Introduction

From self-driving cars to smart thermostats and digital assistants, machine learning is behind many of the technologies we use and rely on every day. Machine learning is also increasingly used to aid with decision making, for example in awarding home loans or in sentencing recommendations in courts of law (Kleinberg et al., 2017a). While the learning algorithms are not inherently biased, or unfair, they may pick up and amplify biases already present in the training data available to them. Thus a recent line of work has emerged on designing fair algorithms.
The first challenge is to formally define the concept of fairness, and indeed recent work shows that some natural conditions for fairness cannot be simultaneously achieved (Kleinberg et al., 2017b; Corbett-Davies et al., 2017). In our work we follow the notion of disparate impact as articulated by Feldman et al. (2015), following the Griggs v. Duke Power Co. US Supreme Court case. Informally, the doctrine codifies the notion that not only should protected attributes, such as race and gender, not be explicitly used in making decisions, but even after the decisions are made they should not be disproportionately different for applicants in different protected classes. In other words, if an unprotected feature, for example height, is closely correlated with a protected feature, such as gender, then decisions made based on height may still be unfair, as they can be used to effectively discriminate based on gender.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: A colorblind k-center clustering algorithm would group points a, b, c into one cluster, and x, y, z into a second cluster, with centers at a and z respectively. A fair clustering algorithm, on the other hand, may give a partition indicated by the dashed line. Observe that in this case a point is no longer assigned to its nearest cluster center. For example, x is assigned to the same cluster as a even though z is closer.

While much of the previous work deals with supervised learning, in this work we consider the most common unsupervised learning problem, that of clustering. In modern machine learning systems, clustering is often used for feature engineering, for instance augmenting each example in the dataset with the id of the cluster it belongs to, in an effort to bring expressive power to simple learning methods. In this setting, we want to make sure that the generated features are themselves fair.
As in the standard clustering literature, we are given a set X of points lying in some metric space, and our goal is to find a partition of X into k different clusters, optimizing a particular objective function. We assume that the coordinates of each point x ∈ X are unprotected; however, each point also has a color, which identifies its protected class. The notion of disparate impact and fair representation then translates to that of color balance in each cluster. We study the two color case, where each point is either red or blue, and show that even this simple version has a lot of underlying complexity.

We formalize these views and define a fair clustering objective that incorporates both fair representation and the traditional clustering cost; see Section 2 for exact definitions. A clustering algorithm that is colorblind, and thus does not take a protected attribute into its decision making, may still result in very unfair clusterings; see Figure 1. This means that we must explicitly use the protected attribute to find a fair solution. Moreover, this implies that a fair clustering solution could be strictly worse (with respect to an objective function) than a colorblind solution.

Finally, the example in Figure 1 also shows the main technical hurdle in looking for fair clusterings. Unlike the classical formulation, where every point is assigned to its nearest cluster center, this may no longer be the case. Indeed, a fair clustering is defined not just by the position of the centers, but also by an assignment function that assigns a cluster label to each input.

Our contributions. In this work we show how to reduce the problem of fair clustering to that of classical clustering via a pre-processing step that ensures that any resulting solution will be fair. In this way, our approach is similar to that of Zemel et al.
(2013), although we formulate the first step as an explicit combinatorial problem, and show approximation guarantees that translate to approximation guarantees on the optimal solution. Specifically we:

(i) Define fair variants of classical clustering problems such as k-center and k-median;
(ii) Define the concepts of fairlets and fairlet decompositions, which encapsulate minimal fair sets;
(iii) Show that any fair clustering problem can be reduced to first finding a fairlet decomposition, and then using a classical (not necessarily fair) clustering algorithm;
(iv) Develop approximation algorithms for finding fair decompositions for a large range of fairness values, and complement these results with NP-hardness; and
(v) Empirically quantify the price of fairness, i.e., the ratio of the cost of traditional clustering to the cost of fair clustering.

Related work. Data clustering is a classic problem in unsupervised learning that takes on many forms, from partition clustering, to soft clustering, hierarchical clustering, and spectral clustering, among many others. See, for example, the books by Aggarwal & Reddy (2013) and Xu & Wunsch (2009) for an extensive list of problems and algorithms. In this work, we focus our attention on the k-center and k-median problems. Both of these problems are NP-hard but have known efficient approximation algorithms. The state of the art approaches give a 2-approximation for k-center (Gonzalez, 1985) and a (1 + √3 + ϵ)-approximation for k-median (Li & Svensson, 2013).

Unlike clustering, the exploration of fairness in machine learning is relatively nascent. There are two broad lines of work. The first is in codifying what it means for an algorithm to be fair. See for example the work on statistical parity (Luong et al., 2011; Kamishima et al., 2012), disparate impact (Feldman et al., 2015), and individual fairness (Dwork et al., 2012). More recent work by Corbett-Davies et al. (2017) and Kleinberg et al.
(2017b) also shows that some of the desired properties of fairness may be incompatible with each other. A second line of work takes a specific notion of fairness and looks for algorithms that achieve fair outcomes. Here the focus has largely been on supervised learning (Luong et al., 2011; Hardt et al., 2016) and online learning (Joseph et al., 2016). The direction that is most similar to our work is that of learning intermediate representations that are guaranteed to be fair; see for example the work by Zemel et al. (2013) and Kamishima et al. (2012). However, unlike their work, we give strong guarantees on the relationship between the quality of the fairlet representation and the quality of any fair clustering solution.

In this paper we use the notion of fairness known as disparate impact, introduced by Feldman et al. (2015). This notion is also closely related to the p%-rule as a measure for fairness. The p%-rule is a generalization of the 80%-rule advocated by the US Equal Employment Opportunity Commission (Biddle, 2006) and was used in a recent paper on mechanisms for fair classification (Zafar et al., 2017). In particular, our paper addresses an open question of Zafar et al. (2017) by presenting a framework to solve an unsupervised learning task respecting the p%-rule.

2 Preliminaries

Let X be a set of points in a metric space equipped with a distance function d : X^2 → R≥0. For an integer k, let [k] denote the set {1, . . . , k}. We first recall standard concepts in clustering. A k-clustering C is a partition of X into k disjoint subsets, C_1, . . . , C_k, called clusters. We can evaluate the quality of a clustering C with different objective functions. In the k-center problem, the goal is to minimize

φ(X, C) = max_{C∈C} min_{c∈C} max_{x∈C} d(x, c),

and in the k-median problem, the goal is to minimize

ψ(X, C) = Σ_{C∈C} min_{c∈C} Σ_{x∈C} d(x, c).

A clustering C can be equivalently described via an assignment function α : X → [k].
The points in cluster C_i are simply the pre-image of i under α, i.e., C_i = {x ∈ X | α(x) = i}.

Throughout this paper we assume that each point in X is colored either red or blue; let χ : X → {RED, BLUE} denote the color of a point. For a subset Y ⊆ X and for c ∈ {RED, BLUE}, let c(Y) = {x ∈ Y | χ(x) = c} and let #c(Y) = |c(Y)|. We first define a natural notion of balance.

Definition 1 (Balance). For a subset ∅ ≠ Y ⊆ X, the balance of Y is defined as:

balance(Y) = min( #RED(Y) / #BLUE(Y), #BLUE(Y) / #RED(Y) ) ∈ [0, 1].

The balance of a clustering C is defined as balance(C) = min_{C∈C} balance(C).

A subset with an equal number of red and blue points has balance 1 (perfectly balanced) and a monochromatic subset has balance 0 (fully unbalanced). To gain more intuition about the notion of balance, we investigate some basic properties that follow from its definition.

Lemma 2 (Combination). Let Y, Y′ ⊆ X be disjoint. If C is a clustering of Y and C′ is a clustering of Y′, then balance(C ∪ C′) = min(balance(C), balance(C′)).

It is easy to see that for any clustering C of X, we have balance(C) ≤ balance(X). In particular, if X is not perfectly balanced, then no clustering of X can be perfectly balanced. We next show an interesting converse, relating the balance of X to the balance of a well-chosen clustering.

Lemma 3. Let balance(X) = b/r for some integers 1 ≤ b ≤ r such that gcd(b, r) = 1. Then there exists a clustering Y = {Y_1, . . . , Y_m} of X such that (i) |Y_j| ≤ b + r for each Y_j ∈ Y, i.e., each cluster is small, and (ii) balance(Y) = b/r = balance(X).

Fairness and fairlets. Balance encapsulates a specific notion of fairness, where a clustering with a monochromatic cluster (i.e., fully unbalanced) is considered unfair. We call the clustering Y as described in Lemma 3 a (b, r)-fairlet decomposition of X and call each cluster Y ∈ Y a fairlet.

Equipped with the notion of balance, we now revisit the clustering objectives defined earlier.
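Both the balance measure of Definition 1 and the two clustering objectives from Section 2 are straightforward to compute directly. The following is a minimal stdlib-only sketch (not the authors' code), using Euclidean distance and representing a clustering as lists of point ids:

```python
# Sketch: the balance of Definition 1 and the k-center / k-median objectives,
# for points in the plane with Euclidean distance d.
from math import dist

def balance(cluster, color):
    """min(#RED/#BLUE, #BLUE/#RED); 0 for a monochromatic cluster."""
    red = sum(1 for x in cluster if color[x] == "RED")
    blue = len(cluster) - red
    if red == 0 or blue == 0:
        return 0.0
    return min(red / blue, blue / red)

def clustering_balance(clusters, color):
    # The balance of a clustering is the balance of its worst cluster.
    return min(balance(C, color) for C in clusters)

def k_center_cost(clusters, points):
    # For each cluster, pick the center minimizing the max distance; take the max.
    return max(
        min(max(dist(points[x], points[c]) for x in C) for c in C)
        for C in clusters
    )

def k_median_cost(clusters, points):
    # For each cluster, pick the center minimizing the summed distance; then sum.
    return sum(
        min(sum(dist(points[x], points[c]) for x in C) for c in C)
        for C in clusters
    )
```

For example, with two red points at 0 and 10 and two blue points at 1 and 11 on a line, pairing each red point with its nearby blue point gives a clustering of balance 1, while grouping by color gives balance 0.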
The objectives do not consider the color of the points, so they can lead to solutions with monochromatic clusters. We now extend them to incorporate fairness.

Definition 4 ((t, k)-fair clustering problems). In the (t, k)-fair center (resp., (t, k)-fair median) problem, the goal is to partition X into C such that |C| = k, balance(C) ≥ t, and φ(X, C) (resp., ψ(X, C)) is minimized.

Traditional formulations of k-center and k-median eschew the notion of an assignment function. Instead it is implicit through a set {c_1, . . . , c_k} of centers, where each point is assigned to its nearest center, i.e., α(x) = arg min_{i∈[k]} d(x, c_i). Without fairness as an issue, the two formulations are equivalent; however, with fairness, we need an explicit assignment function (see Figure 1). Missing proofs are deferred to the full version of the paper.

3 Fairlet decomposition and fair clustering

At first glance, the fair version of a clustering problem appears harder than its vanilla counterpart. In this section we prove, interestingly, a reduction from the former to the latter. We do this by first clustering the original points into small clusters preserving the balance, and then applying vanilla clustering on these smaller clusters instead of on the original points. As noted earlier, there are different ways to partition the input to obtain a fairlet decomposition. We will show next that the choice of the partition directly impacts the approximation guarantees of the final clustering algorithm.

Before proving our reduction we need to introduce some additional notation. Let Y = {Y_1, . . . , Y_m} be a fairlet decomposition. For each cluster Y_j, we designate an arbitrary point y_j ∈ Y_j as its center. Then we let β : X → [1, m] map each point x to the index of the fairlet to which it belongs. We are now ready to define the cost of a fairlet decomposition.

Definition 5 (Fairlet decomposition cost).
For a fairlet decomposition, we define its k-median cost as Σ_{x∈X} d(x, y_{β(x)}), and its k-center cost as max_{x∈X} d(x, y_{β(x)}). We say that a (b, r)-fairlet decomposition is optimal if it has minimum cost among all (b, r)-fairlet decompositions.

Since (X, d) is a metric, we have from the triangle inequality that for any other point c ∈ X, d(x, c) ≤ d(x, y_{β(x)}) + d(y_{β(x)}, c).

Now suppose that we aim to obtain a (t, k)-fair clustering of the original points X. (As we observed earlier, necessarily t ≤ balance(X).) To solve the problem we can instead cluster the centers of the fairlets, i.e., the set Y = {y_1, . . . , y_m}, into k clusters. In this way we obtain a set of centers {c_1, . . . , c_k} and an assignment function α_Y : Y → [k]. We can then define the overall assignment function as α(x) = α_Y(y_{β(x)}) and denote the clustering induced by α as C_α. From the definition of Y and the properties of fairlets and balance, we get that balance(C_α) ≥ t. We now need to bound its cost. Let Ỹ be a multiset in which each y_i appears |Y_i| times.

Lemma 6. ψ(X, C_α) ≤ ψ(X, Y) + ψ(Ỹ, C_α) and φ(X, C_α) ≤ φ(X, Y) + φ(Ỹ, C_α).

Therefore in both cases we can reduce the fair clustering problem to the problem of finding a good fairlet decomposition and then solving the vanilla clustering problem on the centers of the fairlets. We refer to ψ(X, Y) and φ(X, Y) as the k-median and k-center costs of the fairlet decomposition.

4 Algorithms

In the previous section we presented a reduction from the fair clustering problem to its regular counterpart. In this section we use it to design efficient algorithms for fair clustering. We first focus on the k-center objective, and show in Section 4.3 how to adapt the reasoning to solve the k-median objective. We begin with the most natural case, in which we require the clusters to be perfectly balanced, and give efficient algorithms for the (1, k)-fair center problem. Then we analyze the more challenging (t, k)-fair center problem for t < 1.
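The reduction of Section 3 can be sketched end to end: take a fairlet decomposition as given, run a vanilla k-center algorithm (here the greedy furthest-point 2-approximation of Gonzalez (1985), which the paper also uses in its experiments) on the fairlet centers, and extend the assignment via α(x) = α_Y(y_{β(x)}). This is an illustrative sketch, not the authors' implementation; the convention that the first point of each fairlet is its designated center y_j is an assumption of this sketch.

```python
# Sketch of the fairlet reduction: `fairlets` is a list of point-id lists;
# the first element of each fairlet is taken as its designated center y_j.
from math import dist

def gonzalez_k_center(points, ids, k):
    """Greedy furthest-point 2-approximation for k-center (Gonzalez, 1985)."""
    centers = [ids[0]]
    while len(centers) < k:
        # Pick the point furthest from the current set of centers.
        far = max(ids, key=lambda i: min(dist(points[i], points[c]) for c in centers))
        centers.append(far)
    return centers

def fair_clustering(points, fairlets, k):
    y = [f[0] for f in fairlets]                  # fairlet centers y_1, ..., y_m
    centers = gonzalez_k_center(points, y, k)
    # alpha_Y: assign each fairlet center to its nearest cluster center ...
    assign_y = {c_y: min(range(k), key=lambda i: dist(points[c_y], points[centers[i]]))
                for c_y in y}
    # ... and extend to all points via alpha(x) = alpha_Y(y_beta(x)),
    # so every point inherits the cluster of its fairlet's center.
    alpha = {x: assign_y[f[0]] for f in fairlets for x in f}
    return centers, alpha
```

Because the assignment is inherited from fairlet centers, every fairlet stays intact inside a single cluster, which is exactly what preserves the balance guarantee.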
Let B = BLUE(X) and R = RED(X).

4.1 Fair k-center warmup: (1, 1)-fairlets

Suppose balance(X) = 1, i.e., |R| = |B|, and we wish to find a perfectly balanced clustering. We now show how to obtain it using a good (1, 1)-fairlet decomposition.

Lemma 7. An optimal (1, 1)-fairlet decomposition for k-center can be found in polynomial time.

Proof. To find the best decomposition, we first relate this question to a graph covering problem. Consider a bipartite graph G = (B ∪ R, E) where we create an edge (b_i, r_j) with weight w_ij = d(b_i, r_j) between every bichromatic pair of nodes. In this case a decomposition into fairlets corresponds to a perfect matching in the graph. Each edge in the matching represents a fairlet Y_i. Let Y = {Y_i} be the set of edges in the matching. Observe that the k-center cost φ(X, Y) is exactly the weight of the heaviest edge in the matching; therefore our goal is to find a perfect matching that minimizes the weight of the maximum edge. This can be done by defining a threshold graph G_τ that has the same nodes as G but only those edges of weight at most τ. We then look for the minimum τ at which the corresponding graph has a perfect matching, which can be done by (binary) searching through the O(n^2) values. Finally, for each fairlet (edge) Y_i we can arbitrarily set one of the two nodes as the center y_i.

Since any fair solution to the clustering problem induces a set of minimal fairlets (as described in Lemma 3), the cost of the fairlet decomposition found is at most the cost of the clustering solution.

Lemma 8. Let Y be the partition found above, and let φ*_t be the cost of the optimal (t, k)-fair center clustering. Then φ(X, Y) ≤ φ*_t.

This, combined with the fact that the best approximation algorithm for k-center yields a 2-approximation (Gonzalez, 1985), gives us the following.

Theorem 9. The algorithm that first finds fairlets and then clusters them is a 3-approximation for the (1, k)-fair center problem.
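The construction in the proof of Lemma 7 can be sketched concretely: binary-search the O(n^2) bichromatic distances for the smallest threshold τ at which the threshold graph G_τ has a perfect matching. This stdlib-only sketch (not the authors' code) tests for a perfect matching with Kuhn's augmenting-path algorithm:

```python
# Sketch of Lemma 7: min-max perfect matching via a binary search over
# candidate thresholds, using Kuhn's augmenting-path bipartite matching.
from math import dist

def perfect_matching(adj, n_blue, n_red):
    """Kuhn's algorithm; returns a red->blue matching if perfect, else None."""
    match = [-1] * n_red
    def try_augment(b, seen):
        for r in adj[b]:
            if r not in seen:
                seen.add(r)
                if match[r] == -1 or try_augment(match[r], seen):
                    match[r] = b
                    return True
        return False
    for b in range(n_blue):
        if not try_augment(b, set()):
            return None
    return match

def one_one_fairlets(blue, red):
    """Pairs of (blue index, red index) minimizing the maximum pair distance."""
    assert len(blue) == len(red)
    n = len(blue)
    taus = sorted({dist(b, r) for b in blue for r in red})
    lo, hi, best = 0, len(taus) - 1, None
    while lo <= hi:                     # binary search over the O(n^2) thresholds
        mid = (lo + hi) // 2
        adj = [[j for j in range(n) if dist(blue[i], red[j]) <= taus[mid]]
               for i in range(n)]
        m = perfect_matching(adj, n, n)
        if m is not None:
            best, hi = m, mid - 1       # feasible: try a smaller threshold
        else:
            lo = mid + 1
    # each matched pair (blue[best[r]], red[r]) is one (1, 1)-fairlet
    return [(best[r], r) for r in range(n)]
```

Feasibility is monotone in τ (adding edges can only help the matching), which is what makes the binary search valid.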
4.2 Fair k-center: (1, t′)-fairlets

Now suppose that instead we look for a clustering with balance t < 1. In this section we assume t = 1/t′ for some integer t′ > 1. We show how to extend the intuition in the matching construction above to find approximately optimal (1, t′)-fairlet decompositions for integral t′ > 1. In this case, we transform the problem into a minimum cost flow (MCF) problem.[1]

Let τ > 0 be a parameter of the algorithm. Given the points B, R, and an integer t′, we construct a directed graph H_τ = (V, E). Its node set V is composed of two special nodes β and ρ, all of the nodes in B ∪ R, and t′ additional copies of each node v ∈ B ∪ R. More formally,

V = {β, ρ} ∪ B ∪ R ∪ {b_i^j | b_i ∈ B and j ∈ [t′]} ∪ {r_i^j | r_i ∈ R and j ∈ [t′]}.

The directed edges of H_τ are as follows:
(i) A (β, ρ) edge with cost 0 and capacity min(|B|, |R|).
(ii) A (β, b_i) edge for each b_i ∈ B, and an (r_i, ρ) edge for each r_i ∈ R. All of these edges have cost 0 and capacity t′ − 1.
(iii) For each b_i ∈ B and each j ∈ [t′], a (b_i, b_i^j) edge, and for each r_i ∈ R and each j ∈ [t′], an (r_i^j, r_i) edge. All of these edges have cost 0 and capacity 1.
(iv) Finally, for each b_i ∈ B, r_j ∈ R, and each 1 ≤ k, ℓ ≤ t′, a (b_i^k, r_j^ℓ) edge with capacity 1. The cost of this edge is 1 if d(b_i, r_j) ≤ τ and ∞ otherwise.

To finish the description of this MCF instance, we now specify the supply and demand at every node. Each node in B has a supply of 1, each node in R has a demand of 1, β has a supply of |R|, and ρ has a demand of |B|. Every other node has zero supply and demand. In Figure 2 we show an example of this construction for a small graph.

[1] Given a graph with edge costs and capacities, a source, and a sink, the goal is to push a given amount of flow from the source to the sink, respecting flow conservation at the nodes and capacity constraints on the edges, at the least possible cost.

Figure 2: The construction of the MCF instance for the bipartite graph, for t′ = 2.
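As a concrete illustration, the instance above can be assembled as plain edge lists. This is a sketch, not the authors' code; the node naming is invented for illustration, and the orientation of the red-copy edges in item (iii) is chosen to match the flow directions used in the proof of Lemma 10. In a real solver one would typically drop the infinite-cost edges rather than encode them.

```python
# Sketch of the Section 4.2 MCF instance: nodes, capacitated edges (u, v,
# capacity, cost), and supplies (negative values encode demands).
from math import dist, inf

def build_mcf_instance(blue, red, t_prime, tau):
    edges = []
    edges.append(("beta", "rho", min(len(blue), len(red)), 0))          # (i)
    for i in range(len(blue)):                                          # (ii)
        edges.append(("beta", ("b", i), t_prime - 1, 0))
    for i in range(len(red)):
        edges.append((("r", i), "rho", t_prime - 1, 0))
    for i in range(len(blue)):                                          # (iii)
        for j in range(t_prime):
            edges.append(((("b", i)), ("b", i, j), 1, 0))
    for i in range(len(red)):
        for j in range(t_prime):
            edges.append((("r", i, j), ("r", i), 1, 0))                 # copy -> r_i
    for i in range(len(blue)):                                          # (iv)
        for j in range(len(red)):
            cost = 1 if dist(blue[i], red[j]) <= tau else inf
            for k in range(t_prime):
                for l in range(t_prime):
                    edges.append((("b", i, k), ("r", j, l), 1, cost))
    supply = {("b", i): 1 for i in range(len(blue))}
    supply.update({("r", i): -1 for i in range(len(red))})              # demand 1
    supply["beta"], supply["rho"] = len(red), -len(blue)
    return edges, supply
```

Note that total supply equals total demand by construction, a prerequisite for feasibility.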
Note that in Figure 2 the only nodes with positive demands or supplies are β, ρ, b_1, b_2, b_3, r_1, and r_2, and all the dotted edges have cost 0.

The MCF problem can be solved in polynomial time, and since all of the demands and capacities are integral, there exists an optimal solution that sends integral flow on each edge. In our case, the solution is a set of edges of H_τ that have non-zero flow, together with the total flow on the (β, ρ) edge. In the rest of this section we assume for simplicity that any two distinct elements of the metric are at a positive distance apart, and we show that starting from a solution to the described MCF instance we can build a low cost (1, t′)-fairlet decomposition. We start by showing that every (1, t′)-fairlet decomposition can be used to construct a feasible solution for the MCF instance, and then prove that an optimal solution for the MCF instance can be used to obtain a (1, t′)-fairlet decomposition.

Lemma 10. Let Y be a (1, t′)-fairlet decomposition of cost C for the (1/t′, k)-fair center problem. Then it is possible to construct a feasible solution of cost 2C to the MCF instance.

Proof. We begin by building a feasible solution and then bound its cost. Consider each fairlet in the (1, t′)-fairlet decomposition. Suppose the fairlet contains 1 red node and c blue nodes, with c ≤ t′, i.e., the fairlet is of the form {r_1, b_1, . . . , b_c}. For any such fairlet we send a unit of flow from each node b_i to b_i^1, for i ∈ [c], and a unit of flow from nodes b_1^1, . . . , b_c^1 to nodes r_1^1, . . . , r_1^c. Furthermore, we send a unit of flow from each of r_1^1, . . . , r_1^c to r_1, and c − 1 units of flow from r_1 to ρ. Note that in this way we saturate the demands of all nodes in this fairlet.

Similarly, suppose the fairlet contains c red nodes and 1 blue node, with c ≤ t′, i.e., the fairlet is of the form {r_1, . . . , r_c, b_1}. For any such fairlet, we send c − 1 units of flow from β to b_1. Then we send a unit of flow from b_1 to each of b_1^1, . . .
, b_1^c, and a unit of flow from nodes b_1^1, . . . , b_1^c to nodes r_1^1, . . . , r_c^1. Furthermore, we send a unit of flow from each of r_1^1, . . . , r_c^1 to the nodes r_1, . . . , r_c. Note that also in this case we saturate the demands of all nodes in this fairlet.

Since every node v ∈ B ∪ R is contained in some fairlet, all of the demands of these nodes are satisfied. Hence, the only nodes that can still have unsatisfied demand are β and ρ, but we can use the direct (β, ρ) edge to route the excess demand, since the total demand is equal to the total supply. In this way we obtain a feasible solution for the MCF instance starting from a (1, t′)-fairlet decomposition.

To bound the cost of the solution, note that the only edges with positive cost in the constructed solution are the edges between nodes b_i^j and r_k^ℓ. Furthermore, such an edge is part of the solution only if the nodes b_i and r_k are contained in the same fairlet F. Given that the k-center cost of the fairlet decomposition is C, the cost of the edges between nodes in F in the constructed feasible solution for the MCF instance is at most twice this cost. The claim follows.

Now we show that given an optimal solution for the MCF instance of cost C, we can construct a (1, t′)-fairlet decomposition of cost no bigger than C.

Lemma 11. Let Y be an optimal solution of cost C to the MCF instance. Then it is possible to construct a (1, t′)-fairlet decomposition for the (1/t′, k)-fair center problem of cost at most C.

Combining Lemma 10 and Lemma 11 yields the following.

Lemma 12. By reducing the (1, t′)-fairlet decomposition problem to an MCF problem, it is possible to compute a 2-approximation of the optimal (1, t′)-fairlet decomposition for the (1/t′, k)-fair center problem.

Note that the cost of a (1, t′)-fairlet decomposition is necessarily smaller than the cost of a (1/t′, k)-fair clustering. Our main theorem follows.

Theorem 13.
The algorithm that first finds fairlets and then clusters them is a 4-approximation for the (1/t′, k)-fair center problem for any positive integer t′.

Figure 3: Empirical performance of the classical and fair clustering median and center algorithms on the three datasets (panels for Bank, Census, and Diabetes under each of the k-center and k-median objectives; each panel plots Fair Cost, Fair Balance, Unfair Cost, Unfair Balance, and Fairlet Cost against the number of clusters). The cost of each solution is on the left axis, and its balance on the right axis.

4.3 Fair k-median

The results in the previous section can be modified to yield results for the (t, k)-fair median problem with minor changes that we describe below. For the perfectly balanced case, as before, we look for a perfect matching on the bichromatic graph. Unlike the k-center case, we let the weight of a (b_i, r_j) edge be the distance between the two points. Our goal is to find a perfect matching of minimum total cost, since that exactly represents the cost of the fairlet decomposition. Since the best known approximation for k-median is (1 + √3 + ϵ) (Li & Svensson, 2013), we have:

Theorem 14. The algorithm that first finds fairlets and then clusters them is a (2 + √3 + ϵ)-approximation for the (1, k)-fair median problem.
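For the perfectly balanced k-median case above, the fairlet decomposition is a minimum total-cost perfect matching. The sketch below (not the authors' code) uses brute-force enumeration over permutations, which is only viable for tiny inputs; in practice one would use a polynomial-time method such as the Hungarian algorithm.

```python
# Sketch: optimal (1, 1)-fairlet decomposition for k-median as a minimum
# total-cost perfect matching, by brute force over all pairings.
from itertools import permutations
from math import dist

def min_cost_perfect_matching(blue, red):
    """Returns ((blue index, red index) pairs, total matching cost)."""
    n = len(blue)
    best = min(permutations(range(n)),
               key=lambda p: sum(dist(blue[i], red[p[i]]) for i in range(n)))
    pairs = [(i, best[i]) for i in range(n)]
    return pairs, sum(dist(blue[i], red[best[i]]) for i in range(n))
```

Note the contrast with the k-center warm-up: there the objective was the heaviest matched edge, while here it is the sum of the matched edge weights.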
To find (1, t′)-fairlet decompositions for integral t′ > 1, we again resort to MCF and create an instance as in the k-center case, but for each b_i ∈ B, r_j ∈ R, and each 1 ≤ k, ℓ ≤ t′, we set the cost of the edge (b_i^k, r_j^ℓ) to d(b_i, r_j).

Theorem 15. The algorithm that first finds fairlets and then clusters them is a (t′ + 1 + √3 + ϵ)-approximation for the (1/t′, k)-fair median problem for any positive integer t′.

4.4 Hardness

We complement our algorithmic results with a discussion of the computational hardness of fair clustering. We show that the question of finding a good fairlet decomposition is itself computationally hard. Thus, ensuring fairness causes hardness, regardless of the underlying clustering objective.

Theorem 16. For each fixed t′ ≥ 3, finding an optimal (1, t′)-fairlet decomposition is NP-hard. Also, finding the minimum cost (1/t′, k)-fair median clustering is NP-hard.

5 Experiments

In this section we illustrate our algorithm by performing experiments on real data. The goal of our experiments is two-fold: first, we show that traditional algorithms for k-center and k-median tend to produce unfair clusters; second, we show that by using our algorithms one can obtain clusters that respect the fairness guarantees. We show that in the latter case, the cost of the solution tends to converge to the cost of the fairlet decomposition, which serves as a lower bound on the cost of the optimal solution.

Datasets. We consider three datasets from the UCI repository (Lichman, 2013) for experimentation.

Diabetes. This dataset [2] represents the outcomes of patients pertaining to diabetes. We chose numeric attributes, such as age and time in hospital, to represent points in Euclidean space, and gender as the sensitive dimension, i.e., we aim to balance gender. We subsampled the dataset to 1000 records.

Bank. This dataset [3] contains one record for each phone call in a marketing campaign run by a Portuguese banking institution (Moro et al., 2014).
Each record contains information about the client that was contacted by the institution. We chose numeric attributes such as age, balance, and duration to represent points in Euclidean space, and we aim to cluster so as to balance married and unmarried clients. We subsampled the dataset to 1000 records.

Census. This dataset [4] contains census records extracted from the 1994 US census (Kohavi, 1996). Each record contains information about individuals, including education, occupation, hours worked per week, etc. We chose numeric attributes such as age, fnlwgt, education-num, capital-gain, and hours-per-week to represent points in Euclidean space, and we aim to cluster the dataset so as to balance gender. We subsampled the dataset to 600 records.

Algorithms. We implement the flow-based fairlet decomposition algorithm as described in Section 4. To solve the k-center problem we augment it with the greedy furthest-point algorithm due to Gonzalez (1985), which is known to obtain a 2-approximation. To solve the k-median problem we use the single-swap algorithm due to Arya et al. (2004), which gets a 5-approximation in the worst case, but performs much better in practice (Kanungo et al., 2002).

Results. Figure 3 shows the results on the three datasets for the k-center objective and for the k-median objective. In all of the cases, we run with t′ = 2, that is, we aim for a balance of at least 0.5 in each cluster. Observe that the balance of the solutions produced by the classical algorithms is very low, and in four out of the six cases the balance is 0 for larger values of k, meaning that the optimal solution has monochromatic clusters. Moreover, this is not an isolated incident; for instance, the k-median instance of the Bank dataset has three monochromatic clusters starting at k = 12. Finally, left unchecked, the balance in all datasets keeps decreasing as the clustering becomes more discriminative, with increased k.
On the other hand, the fair clustering solutions maintain a balanced solution even as k increases. Not surprisingly, the balance comes with a corresponding increase in cost, and the fair solutions are costlier than their unfair counterparts. In each plot we also show the cost of the fairlet decomposition, which represents the limit of the cost of the fair clustering; in all of the scenarios the overall cost of the clustering converges to the cost of the fairlet decomposition.

6 Conclusions

In this work we initiate the study of fair clustering algorithms. Our main result is a reduction of fair clustering to classical clustering via the notion of fairlets. We gave efficient approximation algorithms for finding fairlet decompositions, and proved lower bounds showing that fairness can introduce a computational bottleneck. An immediate future direction is to tighten the gap between lower and upper bounds by improving the approximation ratio of the decomposition algorithms, or giving stronger hardness results. A different avenue is to extend these results to situations where the protected class is not binary, but can take on multiple values. Here there are multiple challenges, including defining an appropriate version of fairness.

Acknowledgments

Flavio Chierichetti was supported in part by the ERC Starting Grant DMAP 680153, by a Google Focused Research Award, and by the SIR Grant RBSI14Q743.

[2] https://archive.ics.uci.edu/ml/datasets/diabetes
[3] https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
[4] https://archive.ics.uci.edu/ml/datasets/adult

References

Aggarwal, Charu C., & Reddy, Chandan K. 2013. Data Clustering: Algorithms and Applications. 1st edn. Chapman & Hall/CRC.

Arya, Vijay, Garg, Naveen, Khandekar, Rohit, Meyerson, Adam, Munagala, Kamesh, & Pandit, Vinayaka. 2004. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3), 544–562.

Biddle, Dan. 2006.
Adverse Impact and Test Validation: A Practitioner's Guide to Valid and Defensible Employment Testing. Gower Publishing, Ltd.
Corbett-Davies, Sam, Pierson, Emma, Feller, Avi, Goel, Sharad, & Huq, Aziz. 2017. Algorithmic Decision Making and the Cost of Fairness. Pages 797–806 of: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '17. New York, NY, USA: ACM.
Dwork, Cynthia, Hardt, Moritz, Pitassi, Toniann, Reingold, Omer, & Zemel, Richard. 2012. Fairness through awareness. Pages 214–226 of: ITCS.
Feldman, Michael, Friedler, Sorelle A., Moeller, John, Scheidegger, Carlos, & Venkatasubramanian, Suresh. 2015. Certifying and removing disparate impact. Pages 259–268 of: KDD.
Gonzalez, T. 1985. Clustering to minimize the maximum intercluster distance. TCS, 38, 293–306.
Hardt, Moritz, Price, Eric, & Srebro, Nati. 2016. Equality of opportunity in supervised learning. Pages 3315–3323 of: NIPS.
Joseph, Matthew, Kearns, Michael, Morgenstern, Jamie H., & Roth, Aaron. 2016. Fairness in learning: Classic and contextual bandits. Pages 325–333 of: NIPS.
Kamishima, Toshihiro, Akaho, Shotaro, Asoh, Hideki, & Sakuma, Jun. 2012. Fairness-aware classifier with prejudice remover regularizer. Pages 35–50 of: ECML/PKDD.
Kanungo, Tapas, Mount, David M., Netanyahu, Nathan S., Piatko, Christine D., Silverman, Ruth, & Wu, Angela Y. 2002. An efficient k-means clustering algorithm: Analysis and implementation. PAMI, 24(7), 881–892.
Kleinberg, Jon, Lakkaraju, Himabindu, Leskovec, Jure, Ludwig, Jens, & Mullainathan, Sendhil. 2017a. Human decisions and machine predictions. Working Paper 23180. NBER.
Kleinberg, Jon M., Mullainathan, Sendhil, & Raghavan, Manish. 2017b. Inherent trade-offs in the fair determination of risk scores. In: ITCS.
Kohavi, Ron. 1996. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. Pages 202–207 of: KDD.
Li, Shi, & Svensson, Ola. 2013. Approximating k-median via pseudo-approximation.
Pages 901–910 of: STOC.
Lichman, M. 2013. UCI Machine Learning Repository.
Luong, Binh Thanh, Ruggieri, Salvatore, & Turini, Franco. 2011. k-NN as an implementation of situation testing for discrimination discovery and prevention. Pages 502–510 of: KDD.
Moro, Sérgio, Cortez, Paulo, & Rita, Paulo. 2014. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62, 22–31.
Xu, Rui, & Wunsch, Don. 2009. Clustering. Wiley-IEEE Press.
Zafar, Muhammad Bilal, Valera, Isabel, Gomez-Rodriguez, Manuel, & Gummadi, Krishna P. 2017. Fairness constraints: Mechanisms for fair classification. Pages 259–268 of: AISTATS.
Zemel, Richard S., Wu, Yu, Swersky, Kevin, Pitassi, Toniann, & Dwork, Cynthia. 2013. Learning fair representations. Pages 325–333 of: ICML.
A Linear-Time Kernel Goodness-of-Fit Test

Wittawat Jitkrittum, Gatsby Unit, UCL, wittawatj@gmail.com
Wenkai Xu, Gatsby Unit, UCL, wenkaix@gatsby.ucl.ac.uk
Zoltán Szabó∗, CMAP, École Polytechnique, zoltan.szabo@polytechnique.edu
Kenji Fukumizu, The Institute of Statistical Mathematics, fukumizu@ism.ac.jp
Arthur Gretton∗, Gatsby Unit, UCL, arthur.gretton@gmail.com

Abstract

We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein's method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness-of-fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.

1 Introduction

The goal of goodness-of-fit testing is to determine how well a model density $p(x)$ fits an observed sample $\mathcal{D} = \{x_i\}_{i=1}^n \subset \mathcal{X} \subseteq \mathbb{R}^d$ from an unknown distribution $q(x)$. This goal may be achieved via a hypothesis test, where the null hypothesis $H_0: p = q$ is tested against $H_1: p \neq q$. The problem of testing goodness of fit has a long history in statistics [11], with a number of tests proposed for particular parametric models.
Such tests can require space partitioning [18, 3], which works poorly in high dimensions; or closed-form integrals under the model, which may be difficult to obtain, except in certain special cases [2, 5, 30, 26]. An alternative is to conduct a two-sample test using samples drawn from both p and q. This approach was taken by [23], using a test based on the (quadratic-time) Maximum Mean Discrepancy [16]; however, this does not take advantage of the known structure of p (quite apart from the increased computational cost of dealing with samples from p). More recently, measures of discrepancy with respect to a model have been proposed based on Stein's method [21]. A Stein operator for p may be applied to a class of test functions, yielding functions that have zero expectation under p. Classes of test functions can include the $W^{2,\infty}$ Sobolev space [14], and reproducing kernel Hilbert spaces (RKHS) [25]. Statistical tests have been proposed by [9, 22] based on classes of Stein-transformed RKHS functions, where the test statistic is the norm of the smoothness-constrained function with largest expectation under q. We will refer to this statistic as the Kernel Stein Discrepancy (KSD). For consistent tests, it is sufficient to use $C_0$-universal kernels [6, Definition 4.1], as shown by [9, Theorem 2.2], although inverse multiquadric kernels may be preferred if uniform tightness is required [15].2

∗Zoltán Szabó's ORCID ID: 0000-0001-6183-7603. Arthur Gretton's ORCID ID: 0000-0003-3169-7624.
2Briefly, [15] show that when an exponentiated quadratic kernel is used, a sequence of sets $\mathcal{D}$ may be constructed that does not correspond to any q, but for which the KSD nonetheless approaches zero. In a statistical testing setting, however, we assume identically distributed samples from q, and the issue does not arise.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The minimum variance unbiased estimate of the KSD is a U-statistic, with computational cost quadratic in the number n of samples from q. It is desirable to reduce the cost of testing, however, so that larger sample sizes may be addressed. A first approach is to replace the U-statistic with a running average with linear cost, as proposed by [22] for the KSD, but this results in an increase in variance and corresponding decrease in test power. An alternative approach is to construct explicit features of the distributions, whose empirical expectations may be computed in linear time. In the two-sample and independence settings, these features were initially chosen at random by [10, 8, 32]. More recently, features have been constructed explicitly to maximize test power in the two-sample [19] and independence testing [20] settings, resulting in tests that are not only more interpretable, but which can yield performance matching quadratic-time tests. We propose to construct explicit linear-time features for testing goodness of fit, chosen so as to maximize test power. These features further reveal where the model and data differ, in a readily interpretable way. Our first theoretical contribution is a derivation of the null and alternative distributions for tests based on such features, and a corresponding power optimization criterion. Note that the goodness-of-fit test requires somewhat different strategies to those employed for two-sample and independence testing [19, 20], which become computationally prohibitive in high dimensions for the Stein discrepancy (specifically, the normalization used in prior work to simplify the asymptotics would incur a cost cubic in the dimension d and the number of features in the optimization). Details may be found in Section 3. 
Our second theoretical contribution, given in Section 4, is an analysis of the relative Bahadur efficiency of our test vs. the linear-time test of [22]: this represents the relative rate at which the p-value decreases under $H_1$ as we observe more samples. We prove that our test has greater asymptotic Bahadur efficiency relative to the test of [22], for Gaussian distributions under the mean-shift alternative. This is shown to hold regardless of the bandwidth of the exponentiated quadratic kernel used for the earlier test. The proof techniques developed are of independent interest, and we anticipate that they may provide a foundation for the analysis of relative efficiency of linear-time tests in the two-sample and independence testing domains. In experiments (Section 5), our new linear-time test is able to detect subtle local differences between the density p(x) and the unknown q(x) as observed through samples. We show that our linear-time test constructed based on optimized features has comparable performance to the quadratic-time test of [9, 22], while uniquely providing an explicit visual indication of where the model fails to fit the data.

2 Kernel Stein Discrepancy (KSD) Test

We begin by introducing the Kernel Stein Discrepancy (KSD) and associated statistical test, as proposed independently by [9] and [22]. Assume that the data domain is a connected open set $\mathcal{X} \subseteq \mathbb{R}^d$. Consider a Stein operator $T_p$ that takes in a multivariate function $f(x) = (f_1(x), \dots, f_d(x))^\top \in \mathbb{R}^d$ and constructs a function $(T_p f)(x): \mathbb{R}^d \to \mathbb{R}$. The constructed function has the key property that for all f in an appropriate function class, $\mathbb{E}_{x\sim q}[(T_p f)(x)] = 0$ if and only if $q = p$. Thus, one can use this expectation as a statistic for testing goodness of fit. The function class $\mathcal{F}^d$ for the function f is chosen to be a unit-norm ball in a reproducing kernel Hilbert space (RKHS) in [9, 22]. More precisely, let $\mathcal{F}$ be an RKHS associated with a positive definite kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$.
Let $\varphi(x) = k(x, \cdot)$ denote a feature map of k, so that $k(x, x') = \langle \varphi(x), \varphi(x') \rangle_{\mathcal{F}}$. Assume that $f_i \in \mathcal{F}$ for all $i = 1, \dots, d$, so that $f \in \mathcal{F} \times \cdots \times \mathcal{F} := \mathcal{F}^d$, where $\mathcal{F}^d$ is equipped with the standard inner product $\langle f, g \rangle_{\mathcal{F}^d} := \sum_{i=1}^d \langle f_i, g_i \rangle_{\mathcal{F}}$. The kernelized Stein operator $T_p$ studied in [9] is

$$(T_p f)(x) := \sum_{i=1}^d \left( \frac{\partial \log p(x)}{\partial x_i} f_i(x) + \frac{\partial f_i(x)}{\partial x_i} \right) \overset{(a)}{=} \langle f, \xi_p(x, \cdot) \rangle_{\mathcal{F}^d},$$

where at (a) we use the reproducing property of $\mathcal{F}$, i.e., $f_i(x) = \langle f_i, k(x, \cdot) \rangle_{\mathcal{F}}$, and the fact that $\frac{\partial k(x, \cdot)}{\partial x_i} \in \mathcal{F}$ [28, Lemma 4.34]; hence $\xi_p(x, \cdot) := \frac{\partial \log p(x)}{\partial x} k(x, \cdot) + \frac{\partial k(x, \cdot)}{\partial x}$ is in $\mathcal{F}^d$. We note that the Stein operator presented in [22] is defined such that $(T_p f)(x) \in \mathbb{R}^d$. This distinction is not crucial and leads to the same goodness-of-fit test. Under appropriate conditions, e.g. that $\lim_{\|x\|\to\infty} p(x) f_i(x) = 0$ for all $i = 1, \dots, d$, it can be shown using integration by parts that $\mathbb{E}_{x\sim p}(T_p f)(x) = 0$ for any $f \in \mathcal{F}^d$ [9, Lemma 5.1]. Based on the Stein operator, [9, 22] define the kernelized Stein discrepancy as

$$S_p(q) := \sup_{\|f\|_{\mathcal{F}^d} \le 1} \mathbb{E}_{x\sim q} \langle f, \xi_p(x, \cdot) \rangle_{\mathcal{F}^d} \overset{(a)}{=} \sup_{\|f\|_{\mathcal{F}^d} \le 1} \langle f, \mathbb{E}_{x\sim q} \xi_p(x, \cdot) \rangle_{\mathcal{F}^d} = \|g(\cdot)\|_{\mathcal{F}^d}, \quad (1)$$

where at (a), $\xi_p(x, \cdot)$ is Bochner integrable [28, Definition A.5.20] as long as $\mathbb{E}_{x\sim q}\|\xi_p(x, \cdot)\|_{\mathcal{F}^d} < \infty$, and $g(y) := \mathbb{E}_{x\sim q} \xi_p(x, y)$ is what we refer to as the Stein witness function. The Stein witness function will play a crucial role in our new test statistic in Section 3. When a $C_0$-universal kernel is used [6, Definition 4.1], and as long as $\mathbb{E}_{x\sim q}\|\nabla_x \log p(x) - \nabla_x \log q(x)\|^2 < \infty$, it can be shown that $S_p(q) = 0$ if and only if $p = q$ [9, Theorem 2.2]. The KSD $S_p(q)$ can be written as $S_p^2(q) = \mathbb{E}_{x\sim q}\mathbb{E}_{x'\sim q} h_p(x, x')$, where

$$h_p(x, y) := s_p^\top(x) s_p(y) k(x, y) + s_p^\top(y) \nabla_x k(x, y) + s_p^\top(x) \nabla_y k(x, y) + \sum_{i=1}^d \frac{\partial^2 k(x, y)}{\partial x_i \partial y_i},$$

and $s_p(x) := \nabla_x \log p(x)$ is a column vector. An unbiased empirical estimator of $S_p^2(q)$, denoted by $\widehat{S^2} = \frac{2}{n(n-1)} \sum_{i<j} h_p(x_i, x_j)$ [22, Eq. 14], is a degenerate U-statistic under $H_0$. For the goodness-of-fit test, the rejection threshold can be computed by a bootstrap procedure.
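To make these quantities concrete, here is a minimal numpy sketch (ours, not the authors' implementation) of the U-statistic kernel $h_p$ and the quadratic-time estimator, together with the linear-time variant of [22], specialized to $p = N(0, I_d)$ so that $s_p(x) = -x$, with a Gaussian kernel of bandwidth `sigma2`:

```python
import numpy as np

def h_p(x, y, sigma2=1.0):
    """KSD U-statistic kernel h_p(x, y) for p = N(0, I_d) (score s_p(x) = -x)
    and Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma2))."""
    d = x.shape[0]
    diff = x - y
    k = np.exp(-(diff @ diff) / (2 * sigma2))
    grad_x_k = -diff / sigma2 * k             # nabla_x k(x, y)
    grad_y_k = diff / sigma2 * k              # nabla_y k(x, y)
    trace_term = k * (d / sigma2 - (diff @ diff) / sigma2 ** 2)
    return (x @ y) * k + (-y) @ grad_x_k + (-x) @ grad_y_k + trace_term

def ksd2_quadratic(X, sigma2=1.0):
    """Unbiased quadratic-time estimator 2/(n(n-1)) sum_{i<j} h_p(x_i, x_j)."""
    n = len(X)
    total = sum(h_p(X[i], X[j], sigma2) for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

def ksd2_linear(X, sigma2=1.0):
    """Linear-time incomplete U-statistic (2/n) sum_i h_p(x_{2i-1}, x_{2i})."""
    return float(np.mean([h_p(X[2 * i], X[2 * i + 1], sigma2)
                          for i in range(len(X) // 2)]))
```

Only the score of p enters `h_p`, so the estimators are computable even when the normalization constant of p is unknown.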
All these properties make $\widehat{S^2}$ a very flexible criterion to detect the discrepancy of p and q: in particular, it can be computed even if p is known only up to a normalization constant. Further studies on nonparametric Stein operators can be found in [25, 14].

Linear-Time Kernel Stein (LKS) Test. Computation of $\widehat{S^2}$ costs $O(n^2)$. To reduce this cost, a linear-time (i.e., $O(n)$) estimator based on an incomplete U-statistic is proposed in [22, Eq. 17], given by $\widehat{S^2_l} := \frac{2}{n} \sum_{i=1}^{n/2} h_p(x_{2i-1}, x_{2i})$, where we assume n is even for simplicity. Empirically, [22] observed that the linear-time estimator performs much worse (in terms of test power) than the quadratic-time U-statistic estimator, agreeing with our findings presented in Section 5.

3 New Statistic: The Finite Set Stein Discrepancy (FSSD)

Although shown to be powerful, the main drawback of the KSD test is its high computational cost of $O(n^2)$. The LKS test is one order of magnitude faster. Unfortunately, the decrease in the test power outweighs the computational gain [22]. We therefore seek a variant of the KSD statistic that can be computed in linear time, and whose test power is comparable to the KSD test.

Key Idea. The fact that $S_p(q) = 0$ if and only if $p = q$ implies that $g(v) = 0$ for all $v \in \mathcal{X}$ if and only if $p = q$, where g is the Stein witness function in (1). One can see g as a function witnessing the differences of p, q, in such a way that $|g_i(v)|$ is large when there is a discrepancy in the region around v, as indicated by the ith output of g. The test statistic of [22, 9] is essentially given by the degree of "flatness" of g as measured by the RKHS norm $\|\cdot\|_{\mathcal{F}^d}$. The core of our proposal is to use a different measure of flatness of g which can be computed in linear time. The idea is to use a real analytic kernel k which makes $g_1, \dots, g_d$ real analytic. If $g_i \neq 0$ is an analytic function, then the Lebesgue measure of the set of roots $\{x \mid g_i(x) = 0\}$ is zero [24].
This property suggests that one can evaluate $g_i$ at a finite set of locations $V = \{v_1, \dots, v_J\}$, drawn from a distribution with a density (w.r.t. the Lebesgue measure). If $g_i \neq 0$, then almost surely $g_i(v_1), \dots, g_i(v_J)$ will not be zero. This idea was successfully exploited in the recently proposed linear-time tests of [8] and [19, 20]. Our new test statistic based on this idea is called the Finite Set Stein Discrepancy (FSSD) and is given in Theorem 1. All proofs are given in the appendix.

Theorem 1 (The Finite Set Stein Discrepancy (FSSD)). Let $V = \{v_1, \dots, v_J\} \subset \mathbb{R}^d$ be random vectors drawn i.i.d. from a distribution $\eta$ which has a density. Let $\mathcal{X}$ be a connected open set in $\mathbb{R}^d$. Define $\mathrm{FSSD}_p^2(q) := \frac{1}{dJ} \sum_{i=1}^d \sum_{j=1}^J g_i^2(v_j)$. Assume that 1) $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is $C_0$-universal [6, Definition 4.1] and real analytic, i.e., for all $v \in \mathcal{X}$, $f(x) := k(x, v)$ is a real analytic function on $\mathcal{X}$; 2) $\mathbb{E}_{x\sim q}\mathbb{E}_{x'\sim q} h_p(x, x') < \infty$; 3) $\mathbb{E}_{x\sim q}\|\nabla_x \log p(x) - \nabla_x \log q(x)\|^2 < \infty$; 4) $\lim_{\|x\|\to\infty} p(x) g(x) = 0$. Then, for any $J \ge 1$, $\eta$-almost surely, $\mathrm{FSSD}_p^2(q) = 0$ if and only if $p = q$.

This measure depends on a set of J test locations (or features) $\{v_i\}_{i=1}^J$ used to evaluate the Stein witness function, where J is fixed and is typically small. A kernel which is $C_0$-universal and real analytic is the Gaussian kernel $k(x, y) = \exp\left(-\frac{\|x-y\|_2^2}{2\sigma_k^2}\right)$ (see [20, Proposition 3] for the result on analyticity). Throughout this work, we will assume all the conditions stated in Theorem 1, and consider only the Gaussian kernel. Besides the requirement that the kernel be real and analytic, the remaining conditions in Theorem 1 are the same as given in [9, Theorem 2.2]. Note that if the FSSD is to be employed in a setting other than testing, for instance to obtain pseudo-samples converging to p, then stronger conditions may be needed [15].
3.1 Goodness-of-Fit Test with the FSSD Statistic

Given a significance level $\alpha$ for the goodness-of-fit test, the test can be constructed so that $H_0$ is rejected when $n\widehat{\mathrm{FSSD}^2} > T_\alpha$, where $T_\alpha$ is the rejection threshold (critical value), and $\widehat{\mathrm{FSSD}^2}$ is an empirical estimate of $\mathrm{FSSD}_p^2(q)$. The threshold which guarantees that the type-I error (i.e., the probability of rejecting $H_0$ when it is true) is bounded above by $\alpha$ is given by the $(1-\alpha)$-quantile of the null distribution, i.e., the distribution of $n\widehat{\mathrm{FSSD}^2}$ under $H_0$. In the following, we start by giving the expression for $\widehat{\mathrm{FSSD}^2}$, and summarize its asymptotic distributions in Proposition 2.

Let $\Xi(x) \in \mathbb{R}^{d \times J}$ be such that $[\Xi(x)]_{i,j} = \xi_{p,i}(x, v_j)/\sqrt{dJ}$. Define $\tau(x) := \mathrm{vec}(\Xi(x)) \in \mathbb{R}^{dJ}$, where $\mathrm{vec}(M)$ concatenates the columns of the matrix M into a column vector. We note that $\tau(x)$ depends on the test locations $V = \{v_j\}_{j=1}^J$. Let $\Delta(x, y) := \tau(x)^\top \tau(y) = \mathrm{tr}(\Xi(x)^\top \Xi(y))$. Given an i.i.d. sample $\{x_i\}_{i=1}^n \sim q$, a consistent, unbiased estimator of $\mathrm{FSSD}_p^2(q)$ is

$$\widehat{\mathrm{FSSD}^2} = \frac{1}{dJ} \sum_{l=1}^d \sum_{m=1}^J \frac{1}{n(n-1)} \sum_{i=1}^n \sum_{j \neq i} \xi_{p,l}(x_i, v_m)\, \xi_{p,l}(x_j, v_m) = \frac{2}{n(n-1)} \sum_{i<j} \Delta(x_i, x_j), \quad (2)$$

which is a one-sample second-order U-statistic with $\Delta$ as its U-statistic kernel [27, Section 5.1.1]. Being a U-statistic, its asymptotic distribution can easily be derived. We use $\overset{d}{\to}$ to denote convergence in distribution.

Proposition 2 (Asymptotic distributions of $\widehat{\mathrm{FSSD}^2}$). Let $Z_1, \dots, Z_{dJ} \overset{\text{i.i.d.}}{\sim} N(0, 1)$. Let $\mu := \mathbb{E}_{x\sim q}[\tau(x)]$, $\Sigma_r := \mathrm{cov}_{x\sim r}[\tau(x)] \in \mathbb{R}^{dJ \times dJ}$ for $r \in \{p, q\}$, and let $\{\omega_i\}_{i=1}^{dJ}$ be the eigenvalues of $\Sigma_p = \mathbb{E}_{x\sim p}[\tau(x)\tau^\top(x)]$. Assume that $\mathbb{E}_{x\sim q}\mathbb{E}_{y\sim q} \Delta^2(x, y) < \infty$. Then, for any realization of $V = \{v_j\}_{j=1}^J$, the following statements hold.

1. Under $H_0: p = q$, $n\widehat{\mathrm{FSSD}^2} \overset{d}{\to} \sum_{i=1}^{dJ} (Z_i^2 - 1)\,\omega_i$.
2. Under $H_1: p \neq q$, if $\sigma_{H_1}^2 := 4\mu^\top \Sigma_q \mu > 0$, then $\sqrt{n}\,(\widehat{\mathrm{FSSD}^2} - \mathrm{FSSD}^2) \overset{d}{\to} N(0, \sigma_{H_1}^2)$.

Proof. Recognizing that (2) is a degenerate U-statistic, the results follow directly from [27, Section 5.5.1, 5.5.2].
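Estimator (2) can be sketched in a few lines of numpy. As before, this is our own illustration (not the released code), specialized to $p = N(0, I_d)$ so that the score is $-x$, with a Gaussian kernel:

```python
import numpy as np

def tau(x, V, sigma2=1.0):
    """Feature vector tau(x) = vec(Xi(x)) in R^{dJ}, for p = N(0, I_d)
    (score s_p(x) = -x) and a Gaussian kernel; V is a (J, d) array of test
    locations."""
    d, J = x.shape[0], V.shape[0]
    Xi = np.empty((d, J))
    for j in range(J):
        diff = x - V[j]
        k = np.exp(-(diff @ diff) / (2 * sigma2))
        # xi_p(x, v) = s_p(x) k(x, v) + d/dx k(x, v)
        Xi[:, j] = (-x) * k + (-diff / sigma2) * k
    return Xi.ravel(order="F") / np.sqrt(d * J)   # vec() stacks columns

def fssd2_hat(X, V, sigma2=1.0):
    """Unbiased estimator (2): 2/(n(n-1)) sum_{i<j} tau(x_i)^T tau(x_j)."""
    T = np.stack([tau(np.asarray(x, dtype=float), V, sigma2) for x in X])
    n = T.shape[0]
    s = T.sum(axis=0)
    cross = (s @ s - (T * T).sum()) / 2.0     # sum over i < j of tau_i . tau_j
    return 2.0 * cross / (n * (n - 1))
```

The pairwise sum is obtained from the identity $2\sum_{i<j} a_i^\top a_j = \|\sum_i a_i\|^2 - \sum_i \|a_i\|^2$, so the estimator costs $O(n)$ feature evaluations rather than $O(n^2)$ pairs.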
Claims 1 and 2 of Proposition 2 imply that under $H_1$, the test power (i.e., the probability of correctly rejecting $H_0$) goes to 1 asymptotically, if the threshold $T_\alpha$ is defined as above. In practice, simulating from the asymptotic null distribution in Claim 1 can be challenging, since the plug-in estimator of $\Sigma_p$ requires a sample from p, which is not available. A straightforward solution is to draw a sample from p, either by assuming that p can be sampled easily or by using a Markov chain Monte Carlo (MCMC) method, although this adds an additional computational burden to the test procedure. A more subtle issue is that when dependent samples from p are used in obtaining the test threshold, the test may become more conservative than required for i.i.d. data [7]. An alternative approach is to use the plug-in estimate $\hat\Sigma_q$ instead of $\Sigma_p$. The covariance matrix $\hat\Sigma_q$ can be directly computed from the data. This is the approach we take. Theorem 3 guarantees that the replacement of the covariance in the computation of the asymptotic null distribution still yields a consistent test. We write $P_{H_1}$ for the distribution of $n\widehat{\mathrm{FSSD}^2}$ under $H_1$.

Theorem 3. Let $\hat\Sigma_q := \frac{1}{n}\sum_{i=1}^n \tau(x_i)\tau^\top(x_i) - \left[\frac{1}{n}\sum_{i=1}^n \tau(x_i)\right]\left[\frac{1}{n}\sum_{j=1}^n \tau(x_j)\right]^\top$ with $\{x_i\}_{i=1}^n \sim q$. Suppose that the test threshold $T_\alpha$ is set to the $(1-\alpha)$-quantile of the distribution of $\sum_{i=1}^{dJ} (Z_i^2 - 1)\,\hat\nu_i$, where $\{Z_i\}_{i=1}^{dJ} \overset{\text{i.i.d.}}{\sim} N(0, 1)$ and $\hat\nu_1, \dots, \hat\nu_{dJ}$ are the eigenvalues of $\hat\Sigma_q$. Then, under $H_0$, asymptotically the false positive rate is $\alpha$. Under $H_1$, for $\{v_j\}_{j=1}^J$ drawn from a distribution with a density, the test power $P_{H_1}(n\widehat{\mathrm{FSSD}^2} > T_\alpha) \to 1$ as $n \to \infty$.

Remark 1. The proof of Theorem 3 relies on two facts. First, under $H_0$, $\hat\Sigma_q = \hat\Sigma_p$, i.e., the plug-in estimate of $\Sigma_p$. Thus, under $H_0$, the null distribution approximated with $\hat\Sigma_q$ is asymptotically correct, following the convergence of $\hat\Sigma_p$ to $\Sigma_p$. Second, the rejection threshold obtained from the approximated null distribution is asymptotically constant.
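Simulating the threshold of Theorem 3 is straightforward once the eigenvalues $\hat\nu_i$ of $\hat\Sigma_q$ are in hand; a hedged sketch (the function name and simulation count are our own choices):

```python
import numpy as np

def null_threshold(nu_hat, alpha=0.05, n_sim=200000, seed=0):
    """(1 - alpha)-quantile of sum_i (Z_i^2 - 1) nu_hat_i with Z_i ~ N(0, 1)
    i.i.d., where nu_hat are the eigenvalues of the plug-in covariance
    Sigma_hat_q (Theorem 3)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_sim, len(nu_hat)))
    # Monte Carlo draws from the approximated null distribution
    sims = (Z ** 2 - 1.0) @ np.asarray(nu_hat, dtype=float)
    return float(np.quantile(sims, 1.0 - alpha))
```

The test then rejects $H_0$ whenever $n\widehat{\mathrm{FSSD}^2}$ exceeds this threshold.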
Consequently, under $H_1$, claim 2 of Proposition 2 implies that $n\widehat{\mathrm{FSSD}^2} \overset{d}{\to} \infty$ as $n \to \infty$, and thus $P_{H_1}(n\widehat{\mathrm{FSSD}^2} > T_\alpha) \to 1$.

3.2 Optimizing the Test Parameters

Theorem 1 guarantees that the population quantity $\mathrm{FSSD}^2 = 0$ if and only if $p = q$ for any choice of $\{v_i\}_{i=1}^J$ drawn from a distribution with a density. In practice, we are forced to rely on the empirical $\widehat{\mathrm{FSSD}^2}$, and some test locations will give a higher detection rate (i.e., test power) than others for finite n. Following the approaches of [17, 20, 19, 29], we choose the test locations $V = \{v_j\}_{j=1}^J$ and kernel bandwidth $\sigma_k^2$ so as to maximize the test power, i.e., the probability of rejecting $H_0$ when it is false. We first give an approximate expression for the test power when n is large.

Proposition 4 (Approximate test power of $n\widehat{\mathrm{FSSD}^2}$). Under $H_1$, for large n and fixed r, the test power $P_{H_1}(n\widehat{\mathrm{FSSD}^2} > r) \approx 1 - \Phi\left(\frac{r}{\sqrt{n}\,\sigma_{H_1}} - \sqrt{n}\,\frac{\mathrm{FSSD}^2}{\sigma_{H_1}}\right)$, where $\Phi$ denotes the cumulative distribution function of the standard normal distribution, and $\sigma_{H_1}$ is defined in Proposition 2.

Proof. $P_{H_1}(n\widehat{\mathrm{FSSD}^2} > r) = P_{H_1}(\widehat{\mathrm{FSSD}^2} > r/n) = P_{H_1}\left(\sqrt{n}\,\frac{\widehat{\mathrm{FSSD}^2} - \mathrm{FSSD}^2}{\sigma_{H_1}} > \sqrt{n}\,\frac{r/n - \mathrm{FSSD}^2}{\sigma_{H_1}}\right)$. For sufficiently large n, the alternative distribution is approximately normal as given in Proposition 2. It follows that $P_{H_1}(n\widehat{\mathrm{FSSD}^2} > r) \approx 1 - \Phi\left(\frac{r}{\sqrt{n}\,\sigma_{H_1}} - \sqrt{n}\,\frac{\mathrm{FSSD}^2}{\sigma_{H_1}}\right)$.

Let $\zeta := \{V, \sigma_k^2\}$ be the collection of all tuning parameters. Assume that n is sufficiently large. Following the same argument as in [29], in $\frac{r}{\sqrt{n}\,\sigma_{H_1}} - \sqrt{n}\,\frac{\mathrm{FSSD}^2}{\sigma_{H_1}}$, we observe that the first term $\frac{r}{\sqrt{n}\,\sigma_{H_1}} = O(n^{-1/2})$ goes to 0 as $n \to \infty$, while the second term $\sqrt{n}\,\frac{\mathrm{FSSD}^2}{\sigma_{H_1}} = O(n^{1/2})$ dominates the first for large n. Thus, the best parameters that maximize the test power are given by $\zeta^* = \arg\max_\zeta P_{H_1}(n\widehat{\mathrm{FSSD}^2} > T_\alpha) \approx \arg\max_\zeta \frac{\mathrm{FSSD}^2}{\sigma_{H_1}}$. Since $\mathrm{FSSD}^2$ and $\sigma_{H_1}$ are unknown, we divide the sample $\{x_i\}_{i=1}^n$ into two disjoint training and test sets, and use the training set to compute $\frac{\widehat{\mathrm{FSSD}^2}}{\hat\sigma_{H_1} + \gamma}$, where a small regularization parameter $\gamma > 0$ is added for numerical stability.
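For intuition, the power criterion can be evaluated directly in the simplest setting of scalar data with a single test location ($dJ = 1$), $p = N(0, 1)$, and a Gaussian kernel. This is our own illustrative sketch, not the released implementation:

```python
import numpy as np

def power_criterion_1d(X, v, sigma2=1.0, gamma=1e-5):
    """Approximate power objective FSSD^2_hat / (sigma_hat_H1 + gamma) for
    scalar data and one test location v, with p = N(0, 1) (score -x) and a
    Gaussian kernel of bandwidth sigma2."""
    X = np.asarray(X, dtype=float)
    k = np.exp(-(X - v) ** 2 / (2 * sigma2))
    # tau(x) = xi_p(x, v) = score(x) k(x, v) + d/dx k(x, v)   (dJ = 1)
    t = (-X) * k - (X - v) / sigma2 * k
    n = len(t)
    # unbiased FSSD^2 estimate: 2/(n(n-1)) sum_{i<j} t_i t_j
    fssd2 = (t.sum() ** 2 - (t ** 2).sum()) / (n * (n - 1))
    # sigma_H1^2 = 4 mu^T Sigma_q mu, both estimated by plug-in
    sigma_h1 = np.sqrt(4.0 * t.mean() ** 2 * t.var())
    return fssd2 / (sigma_h1 + gamma)
```

Scanning v over a grid and taking the argmax of this criterion reproduces the shape of the objective plotted in Figure 1.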
The goodness-of-fit test is performed on the test set to avoid overfitting. The idea of splitting the data into training and test sets to learn good features for hypothesis testing was successfully used in [29, 20, 19, 17]. To find a local maximum of $\frac{\widehat{\mathrm{FSSD}^2}}{\hat\sigma_{H_1} + \gamma}$, we use gradient ascent for its simplicity. The initial points of $\{v_i\}_{i=1}^J$ are set to random draws from a normal distribution fitted to the training data, a heuristic we found to perform well in practice. The objective is non-convex in general, reflecting many possible ways to capture the differences of p and q. The regularization parameter $\gamma$ is not tuned, and is fixed to a small constant.

Assume that $\nabla_x \log p(x)$ costs $O(d^2)$ to evaluate. Computing $\nabla_\zeta \frac{\widehat{\mathrm{FSSD}^2}}{\hat\sigma_{H_1} + \gamma}$ costs $O(d^2 J^2 n)$. The computational complexity of $n\widehat{\mathrm{FSSD}^2}$ and $\hat\sigma_{H_1}^2$ is $O(d^2 J n)$. Thus, finding a local optimum via gradient ascent is still linear-time, for a fixed maximum number of iterations. Computing $\hat\Sigma_q$ costs $O(d^2 J^2 n)$, and obtaining all the eigenvalues of $\hat\Sigma_q$ costs $O(d^3 J^3)$ (required only once). If the eigenvalues decay to zero sufficiently rapidly, one can approximate the asymptotic null distribution with only a few eigenvalues. The cost to obtain the largest few eigenvalues alone can be much smaller.

Remark 2. Let $\hat\mu := \frac{1}{n}\sum_{i=1}^n \tau(x_i)$. It is possible to normalize the FSSD statistic to get a new statistic $\hat\lambda_n := n\hat\mu^\top(\hat\Sigma_q + \gamma I)^{-1}\hat\mu$, where $\gamma \ge 0$ is a regularization parameter that goes to 0 as $n \to \infty$. This was done in the case of the ME (mean embeddings) statistic of [8, 19]. The asymptotic null distribution of this statistic takes the convenient form of $\chi^2(dJ)$ (independent of p and q), eliminating the need to obtain the eigenvalues of $\hat\Sigma_q$. It turns out that the test power criterion for tuning the parameters in this case is the statistic $\hat\lambda_n$ itself. However, the optimization is computationally expensive, as $(\hat\Sigma_q + \gamma I)^{-1}$ (costing $O(d^3 J^3)$) needs to be re-evaluated in each gradient ascent iteration.
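The normalized statistic of Remark 2 reduces to a few lines of linear algebra given the matrix whose rows are the features $\tau(x_i)$; a sketch under the same plug-in definitions (ours, with a hypothetical function name):

```python
import numpy as np

def normalized_stat(T, gamma=1e-4):
    """Remark 2: lambda_hat_n = n mu_hat^T (Sigma_hat_q + gamma I)^{-1} mu_hat,
    where T is the (n, dJ) matrix of features tau(x_i). Under H0 this statistic
    is asymptotically chi^2(dJ), so no eigendecomposition is needed for the
    rejection threshold."""
    n, dJ = T.shape
    mu = T.mean(axis=0)
    # plug-in covariance Sigma_hat_q (biased, 1/n normalization)
    Sigma = np.cov(T, rowvar=False, bias=True).reshape(dJ, dJ)
    return float(n * mu @ np.linalg.solve(Sigma + gamma * np.eye(dJ), mu))
```

Because $\hat\Sigma_q + \gamma I$ is positive definite, the statistic is always nonnegative; the price, as noted above, is a solve per gradient step when tuning parameters.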
Such repeated inversion is not needed for our proposed FSSD statistic.

4 Relative Efficiency and Bahadur Slope

Both the linear-time kernel Stein (LKS) and FSSD tests have the same computational cost of $O(d^2 n)$, and are consistent, achieving maximum power of 1 as $n \to \infty$ under $H_1$. It is thus of theoretical interest to understand which test is more sensitive in detecting the differences of p and q. This can be quantified by the Bahadur slope of the test [1]. Two given tests can then be compared by computing the Bahadur efficiency (Theorem 7), which is given by the ratio of the slopes of the two tests. We note that the constructions and techniques in this section may be of independent interest, and can be generalised to other statistical testing settings. We start by introducing the concept of Bahadur slope for a general test, following the presentation of [12, 13].

Consider a hypothesis testing problem on a parameter $\theta$. The test proposes a null hypothesis $H_0: \theta \in \Theta_0$ against the alternative hypothesis $H_1: \theta \in \Theta \setminus \Theta_0$, where $\Theta, \Theta_0$ are arbitrary sets. Let $T_n$ be a test statistic computed from a sample of size n, such that large values of $T_n$ provide evidence to reject $H_0$. We use plim to denote convergence in probability, and write $\mathbb{E}_r$ for $\mathbb{E}_{x\sim r}\mathbb{E}_{x'\sim r}$.

Approximate Bahadur Slope (ABS). For $\theta_0 \in \Theta_0$, let the asymptotic null distribution of $T_n$ be $F(t) = \lim_{n\to\infty} P_{\theta_0}(T_n < t)$, where we assume that the CDF F is continuous and common to all $\theta_0 \in \Theta_0$. The continuity of F will be important later when Theorems 9 and 10 are used to compute the slopes of the LKS and FSSD tests. Assume that there exists a continuous, strictly increasing function $\rho: (0, \infty) \to (0, \infty)$ such that $\lim_{n\to\infty}\rho(n) = \infty$, and that $-2\,\operatorname{plim}_{n\to\infty} \frac{\log(1 - F(T_n))}{\rho(n)} = c(\theta)$ where $T_n \sim P_\theta$, for some function c such that $0 < c(\theta_A) < \infty$ for $\theta_A \in \Theta \setminus \Theta_0$, and $c(\theta_0) = 0$ when $\theta_0 \in \Theta_0$. The function $c(\theta)$ is known as the approximate Bahadur slope (ABS) of the sequence $T_n$.
The quantifier "approximate" comes from the use of the asymptotic null distribution instead of the exact one [1]. Intuitively, the slope $c(\theta_A)$, for $\theta_A \in \Theta \setminus \Theta_0$, is the rate of convergence of p-values (i.e., $1 - F(T_n)$) to 0 as n increases. The higher the slope, the faster the p-value vanishes, and thus the lower the sample size required to reject $H_0$ under $\theta_A$.

Approximate Bahadur Efficiency. Given two sequences of test statistics, $T_n^{(1)}$ and $T_n^{(2)}$, having the same $\rho(n)$ (see Theorem 10), the approximate Bahadur efficiency of $T_n^{(1)}$ relative to $T_n^{(2)}$ is defined as $E(\theta_A) := c^{(1)}(\theta_A)/c^{(2)}(\theta_A)$ for $\theta_A \in \Theta \setminus \Theta_0$. If $E(\theta_A) > 1$, then $T_n^{(1)}$ is asymptotically more efficient than $T_n^{(2)}$ in the sense of Bahadur, for the particular problem specified by $\theta_A \in \Theta \setminus \Theta_0$. We now give approximate Bahadur slopes for two sequences of linear-time test statistics: the proposed $n\widehat{\mathrm{FSSD}^2}$, and the LKS test statistic $\sqrt{n}\,\widehat{S^2_l}$ discussed in Section 2.

Theorem 5. The approximate Bahadur slope of $n\widehat{\mathrm{FSSD}^2}$ is $c^{(\mathrm{FSSD})} := \mathrm{FSSD}^2/\omega_1$, where $\omega_1$ is the maximum eigenvalue of $\Sigma_p := \mathbb{E}_{x\sim p}[\tau(x)\tau^\top(x)]$, and $\rho(n) = n$.

Theorem 6. The approximate Bahadur slope of the linear-time kernel Stein (LKS) test statistic $\sqrt{n}\,\widehat{S^2_l}$ is $c^{(\mathrm{LKS})} = \frac{1}{2}\,\frac{[\mathbb{E}_q h_p(x, x')]^2}{\mathbb{E}_p[h_p^2(x, x')]}$, where $h_p$ is the U-statistic kernel of the KSD statistic, and $\rho(n) = n$.

To make these results concrete, we consider the setting where $p = N(0, 1)$ and $q = N(\mu_q, 1)$. We assume that both tests use the Gaussian kernel $k(x, y) = \exp\left(-(x - y)^2/2\sigma_k^2\right)$, possibly with different bandwidths. We write $\sigma_k^2$ and $\kappa^2$ for the FSSD and LKS bandwidths, respectively. Under these assumptions, the slopes given in Theorem 5 and Theorem 6 can be derived explicitly. The full expressions of the slopes are given in Proposition 12 and Proposition 13 (in the appendix). By [12, 13] (recalled as Theorem 10 in the supplement), the approximate Bahadur efficiency can be computed by taking the ratio of the two slopes. The efficiency is given in Theorem 7.
Theorem 7 (Efficiency in the Gaussian mean-shift problem). Let $E_1(\mu_q, v, \sigma_k^2, \kappa^2)$ be the approximate Bahadur efficiency of $n\widehat{\mathrm{FSSD}^2}$ relative to $\sqrt{n}\,\widehat{S^2_l}$ for the case where $p = N(0, 1)$, $q = N(\mu_q, 1)$, and $J = 1$ (i.e., one test location v for $n\widehat{\mathrm{FSSD}^2}$). Fix $\sigma_k^2 = 1$ for $n\widehat{\mathrm{FSSD}^2}$. Then, for any $\mu_q \neq 0$, for some $v \in \mathbb{R}$, and for any $\kappa^2 > 0$, we have $E_1(\mu_q, v, \sigma_k^2, \kappa^2) > 2$.

When $p = N(0, 1)$ and $q = N(\mu_q, 1)$ for $\mu_q \neq 0$, Theorem 7 guarantees that our FSSD test is asymptotically at least twice as efficient as the LKS test in the Bahadur sense. We note that the efficiency is conservative in the sense that $\sigma_k^2 = 1$ regardless of $\mu_q$. Choosing $\sigma_k^2$ dependent on $\mu_q$ will likely improve the efficiency further.

5 Experiments

In this section, we demonstrate the performance of the proposed test on a number of problems. The primary goal is to understand the conditions under which the test can perform well.

[Figure 1: The power criterion $\mathrm{FSSD}^2/\sigma_{H_1}$ as a function of test location v.]

Sensitivity to Local Differences. We start by demonstrating that the test power objective $\mathrm{FSSD}^2/\sigma_{H_1}$ captures local differences of p and q, and that interpretable features v are found. Consider a one-dimensional problem in which $p = N(0, 1)$ and $q = \mathrm{Laplace}(0, 1/\sqrt{2})$, a zero-mean Laplace distribution with scale parameter $1/\sqrt{2}$. These parameters are chosen so that p and q have the same mean and variance. Figure 1 plots the (rescaled) objective as a function of v. The objective illustrates that the best features (indicated by $v^*$) are at the most discriminative locations.

Test Power. We next investigate the power of different tests on two problems:

1. Gaussian vs. Laplace: $p(x) = N(x \mid 0, I_d)$ and $q(x) = \prod_{i=1}^d \mathrm{Laplace}(x_i \mid 0, 1/\sqrt{2})$, where the dimension d will be varied. The two distributions have the same mean and variance. The main characteristic of this problem is local differences of p and q (see Figure 1). Set n = 1000.
2.
Restricted Boltzmann Machine (RBM): p(x) is the marginal distribution of $p(x, h) = \frac{1}{Z}\exp\left(x^\top B h + b^\top x + c^\top h - \frac{1}{2}\|x\|^2\right)$, where $x \in \mathbb{R}^d$, $h \in \{\pm 1\}^{d_h}$ is a random vector of hidden variables, and Z is the normalization constant. The exact marginal density $p(x) = \sum_{h \in \{-1, 1\}^{d_h}} p(x, h)$ is intractable when $d_h$ is large, since it involves summing over $2^{d_h}$ terms. Recall that the proposed test only requires the score function $\nabla_x \log p(x)$ (not the normalization constant), which can be computed in closed form in this case. In this problem, q is another RBM where entries of the matrix B are corrupted by Gaussian noise. This was the problem considered in [22]. We set d = 50 and $d_h = 40$, and generate samples by n independent chains (i.e., n independent samples) of blocked Gibbs sampling with 2000 burn-in iterations.

We evaluate the following six kernel-based nonparametric tests with $\alpha = 0.05$, all using the Gaussian kernel.

1. FSSD-rand: the proposed FSSD test where the test locations are set to random draws from a multivariate normal distribution fitted to the data. The kernel bandwidth is set by the commonly used median heuristic, i.e., $\sigma_k = \mathrm{median}(\{\|x_i - x_j\|, i < j\})$.
2. FSSD-opt: the proposed FSSD test where both the test locations and the Gaussian bandwidth are optimized (Section 3.2).
3. KSD: the quadratic-time Kernel Stein Discrepancy test with the median heuristic.
4. LKS: the linear-time version of KSD with the median heuristic.
5. MMD-opt: the quadratic-time MMD two-sample test of [16], where the kernel bandwidth is optimized by grid search to maximize a power criterion as described in [29].
6. ME-opt: the linear-time mean embeddings (ME) two-sample test of [19], where parameters are optimized.

We draw n samples from p to run the two-sample tests (MMD-opt, ME-opt). For FSSD tests, we use J = 5 (see Section A for an investigation of test power as J varies). All tests with optimization use 20% of the sample size n for parameter tuning.
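For this RBM, summing h out of the joint factorizes over the hidden units, $p(x) \propto \exp(b^\top x - \frac{1}{2}\|x\|^2)\prod_j 2\cosh\left((B^\top x + c)_j\right)$, which gives a closed-form score that never touches Z. A minimal sketch (ours):

```python
import numpy as np

def rbm_score(x, B, b, c):
    """Closed-form score grad_x log p(x) for the RBM marginal: summing h out of
    p(x, h) gives p(x) proportional to
    exp(b^T x - ||x||^2 / 2) * prod_j 2 cosh((B^T x + c)_j),
    so grad_x log p(x) = b - x + B tanh(B^T x + c); Z never appears."""
    return b - x + B @ np.tanh(B.T @ x + c)
```

This is the only model-dependent quantity the FSSD and KSD tests need in the RBM experiments.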
Code is available at https://github.com/wittawatj/kernel-gof.

Figure 2 shows the rejection rates of the six tests for the two problems, where each problem is repeated for 200 trials, resampling n points from q every time. In Figure 2a (Gaussian vs. Laplace), the high performance of FSSD-opt indicates that the test performs well when there are local differences between p and q. The low performance of FSSD-rand emphasizes the importance of the optimization in FSSD-opt to pinpoint regions where p and q differ. The power of KSD quickly drops as the dimension increases, which can be understood since KSD is the RKHS norm of a function witnessing differences in p and q across the entire domain, including where these differences are small.

We next consider the case of RBMs. Following [22], b, c are independently drawn from the standard multivariate normal distribution, and entries of $B \in \mathbb{R}^{50 \times 40}$ are drawn with equal probability from $\{\pm 1\}$, in each trial. The density q represents another RBM having the same b, c as in p, and with all entries of B corrupted by independent zero-mean Gaussian noise with standard deviation $\sigma_{per}$.

[Figure 2: Rejection rates of the six tests (FSSD-opt, FSSD-rand, KSD, LKS, MMD-opt, ME-opt). (a) Gaussian vs. Laplace, n = 1000, rejection rate vs. dimension d. (b) RBM, n = 1000, rejection rate vs. perturbation SD $\sigma_{per}$, perturbing all entries of B. (c) RBM, $\sigma_{per}$ = 0.1, rejection rate vs. sample size n, perturbing $B_{1,1}$. (d) Runtime (RBM) vs. sample size n. The proposed linear-time FSSD-opt has a comparable or higher test power in some cases than the quadratic-time KSD test.]

Figure 2b shows the test powers as $\sigma_{per}$ increases, for a fixed sample size n = 1000. We observe that all the tests have correct false positive rates (type-I errors) of roughly $\alpha = 0.05$ when there is no perturbation noise.
In particular, the optimization in FSSD-opt does not increase the false positive rate when H0 holds. We see that the performance of the proposed FSSD-opt matches that of the quadratic-time KSD at all noise levels. MMD-opt and ME-opt perform far worse than the goodness-of-fit tests when the difference between p and q is small (σper is low), since these tests simply represent p using samples, and do not take advantage of its structure. The advantage of having O(n) runtime can be clearly seen when the problem is much harder, requiring larger sample sizes to tackle. Consider a similar problem on RBMs in which the parameter $B \in \mathbb{R}^{50\times 40}$ in q is given by that of p, where only the first entry B1,1 is perturbed by random N(0, 0.1²) noise. The results are shown in Figure 2c, where the sample size n is varied. We observe that the two two-sample tests fail to detect this subtle difference even with large sample size. The test powers of KSD and FSSD-opt are comparable when n is relatively small. It appears that KSD has higher test power than FSSD-opt in this case for large n. However, this moderate gain in test power comes with an order of magnitude more computation. As shown in Figure 2d, the runtime of KSD is much larger than that of FSSD-opt, especially at large n. In these problems, the performance of the new test (even without optimization) far exceeds that of the LKS test. Further simulation results can be found in Section B.

Figure 3: Plots of the optimization objective as a function of test location $v \in \mathbb{R}^2$ in the Gaussian mixture model (GMM) evaluation task. (a) p = 2-component GMM. (b) p = 10-component GMM.

Interpretable Features. In the final simulation, we demonstrate that the learned test locations are informative in visualising where the model does not fit the data well.
We consider crime data from the Chicago Police Department, recording n = 11957 locations (latitude-longitude coordinates) of robbery events in Chicago in 2016.³ We address the situation in which a model p for the robbery location density is given, and we wish to visualise where it fails to match the data. We fit a Gaussian mixture model (GMM) with the expectation-maximization algorithm to a subsample of 5500 points. We then test the model on a held-out test set of the same size to obtain proposed locations of relevant features v. Figure 3a shows the test robbery locations in purple, the model with two Gaussian components in wireframe, and the optimization objective for v as a grayscale contour plot (a red star indicates the maximum). We observe that the 2-component model is a poor fit to the data, particularly in the right tail areas of the data, as indicated in dark gray (i.e., the objective is high). Figure 3b shows a similar plot with a 10-component GMM. The additional components appear to have eliminated some mismatch in the right tail; however, a discrepancy still exists in the left region. Here, the data have a sharp boundary on the right side, following the geography of Chicago, and do not exhibit exponentially decaying Gaussian-like tails. We note that tests based on a learned feature located at the maximum of the objective correctly reject H0 in both cases.

³Data can be found at https://data.cityofchicago.org.

Acknowledgement. WJ, WX, and AG thank the Gatsby Charitable Foundation for the financial support. ZSz was financially supported by the Data Science Initiative. KF has been supported by KAKENHI Innovative Areas 25120012.

References
[1] R. R. Bahadur. Stochastic comparison of tests. The Annals of Mathematical Statistics, 31(2):276–295, 1960.
[2] L. Baringhaus and N. Henze. A consistent test for multivariate normality based on the empirical characteristic function. Metrika, 35:339–348, 1988.
[3] J. Beirlant, L. Györfi, and G. Lugosi.
On the asymptotic normality of the l1- and l2-errors in histogram density estimation. Canadian Journal of Statistics, 22:309–318, 1994.
[4] R. Bhatia. Matrix analysis, volume 169. Springer Science & Business Media, 2013.
[5] A. Bowman and P. Foster. Adaptive smoothing and density based tests of multivariate normality. Journal of the American Statistical Association, 88:529–537, 1993.
[6] C. Carmeli, E. De Vito, A. Toigo, and V. Umanità. Vector valued reproducing kernel Hilbert spaces and universality. Analysis and Applications, 08(01):19–61, Jan. 2010.
[7] K. Chwialkowski, D. Sejdinovic, and A. Gretton. A wild bootstrap for degenerate kernel tests. In NIPS, pages 3608–3616, 2014.
[8] K. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton. Fast two-sample testing with analytic representations of probability measures. In NIPS, pages 1981–1989, 2015.
[9] K. Chwialkowski, H. Strathmann, and A. Gretton. A kernel test of goodness of fit. In ICML, pages 2606–2615, 2016.
[10] T. Epps and K. Singleton. An omnibus test for the two-sample problem using the empirical characteristic function. Journal of Statistical Computation and Simulation, 26(3–4):177–203, 1986.
[11] F. J. Massey Jr. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68–78, 1951.
[12] L. J. Gleser. On a measure of test efficiency proposed by R. R. Bahadur. 35(4):1537–1544, 1964.
[13] L. J. Gleser. The comparison of multivariate tests of hypothesis by means of Bahadur efficiency. 28(2):157–174, 1966.
[14] J. Gorham and L. Mackey. Measuring sample quality with Stein's method. In NIPS, pages 226–234, 2015.
[15] J. Gorham and L. Mackey. Measuring sample quality with kernels. In ICML, pages 1292–1301. PMLR, 2017.
[16] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 13:723–773, 2012.
[17] A. Gretton, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu, and B. K.
Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In NIPS, pages 1205–1213, 2012.
[18] L. Györfi and E. C. van der Meulen. A consistent goodness of fit test based on the total variation distance. In G. Roussas, editor, Nonparametric Functional Estimation and Related Topics, pages 631–645, 1990.
[19] W. Jitkrittum, Z. Szabó, K. P. Chwialkowski, and A. Gretton. Interpretable distribution features with maximum testing power. In NIPS, pages 181–189, 2016.
[20] W. Jitkrittum, Z. Szabó, and A. Gretton. An adaptive test of independence with analytic kernel embeddings. In ICML, pages 1742–1751. PMLR, 2017.
[21] C. Ley, G. Reinert, and Y. Swan. Stein's method for comparison of univariate distributions. Probability Surveys, 14:1–52, 2017.
[22] Q. Liu, J. Lee, and M. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In ICML, pages 276–284, 2016.
[23] J. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In NIPS, pages 829–837, 2015.
[24] B. Mityagin. The zero set of a real analytic function. Dec. 2015. arXiv:1512.07276.
[25] C. J. Oates, M. Girolami, and N. Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):695–718, 2017.
[26] M. L. Rizzo. New goodness-of-fit tests for Pareto distributions. ASTIN Bulletin: Journal of the International Association of Actuaries, 39(2):691–715, 2009.
[27] R. J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 2009.
[28] I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
[29] D. J. Sutherland, H.-Y. Tung, H. Strathmann, S. De, A. Ramdas, A. Smola, and A. Gretton. Generative models and model criticism via optimized Maximum Mean Discrepancy. In ICLR, 2016.
[30] G. J. Székely and M. L. Rizzo. A new test for multivariate normality. Journal of Multivariate Analysis, 93(1):58–80, 2005.
[31] A. W. van der Vaart.
Asymptotic Statistics. Cambridge University Press, 2000.
[32] Q. Zhang, S. Filippi, A. Gretton, and D. Sejdinovic. Large-scale kernel methods for independence testing. Statistics and Computing, pages 1–18, 2017.
Learning to Inpaint for Image Compression

Mohammad Haris Baig∗, Department of Computer Science, Dartmouth College, Hanover, NH
Vladlen Koltun, Intel Labs, Santa Clara, CA
Lorenzo Torresani, Dartmouth College, Hanover, NH

Abstract

We study the design of deep architectures for lossy image compression. We present two architectural recipes in the context of multi-stage progressive encoders and empirically demonstrate their importance on compression performance. Specifically, we show that: (a) predicting the original image data from residuals in a multi-stage progressive architecture facilitates learning and leads to improved performance at approximating the original content, and (b) learning to inpaint (from neighboring image pixels) before performing compression reduces the amount of information that must be stored to achieve a high-quality approximation. Incorporating these design choices in a baseline progressive encoder yields an average reduction of over 60% in file size with similar quality compared to the original residual encoder.

1 Introduction

Visual data constitutes most of the total information created and shared on the Web every day, and it forms the bulk of the demand for storage and network bandwidth [13]. It is customary to compress image data as much as possible as long as there is no perceptible loss in content. In recent years, deep learning has made it possible to design deep models for learning compact representations of image data [2, 16, 18, 19, 20]. Deep learning based approaches, such as the work of Rippel and Bourdev [16], significantly outperform traditional methods of lossy image compression. In this paper, we show how to improve the performance of deep models trained for lossy image compression. We focus on the design of models that produce progressive codes. Progressive codes are a sequence of representations that can be transmitted to improve the quality of an existing estimate (from a previously sent code) by adding missing detail.
This is in contrast to non-progressive codes, whereby the entire data for a certain quality approximation must be transmitted before the image can be viewed. Progressive codes improve the user's browsing experience by reducing the loading time of pages that are rich in images. Our main contributions in this paper are two-fold.

1. While traditional progressive encoders are optimized to compress residual errors in each stage of their architecture (residual-in, residual-out), we instead propose a model that is trained to predict at each stage the original image data from the residual of the previous stage (residual-in, image-out). We demonstrate that this leads to an easier optimization, resulting in better image compression. The resulting architecture reduces the amount of information that must be stored for reproducing images at similar quality by 18% compared to a traditional residual encoder.

2. Existing deep architectures do not exploit the high degree of spatial coherence exhibited by neighboring patches. We show how to design and train a model that can exploit dependences between adjacent regions by learning to inpaint from the available content. We introduce multi-scale convolutions that sample content at multiple scales to assist with inpainting. We jointly train our proposed inpainting and compression models and show that inpainting reduces the amount of information that must be stored by an additional 42%.

∗http://www.cs.dartmouth.edu/ haris/compression
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Approach

We begin by reviewing the architecture and the learning objective of a progressive multi-stage encoder-decoder with S stages. We adopt the convolutional-deconvolutional residual encoder proposed by Toderici et al. [19] as our reference model. The model extracts a compact binary representation B from an image patch P.
This binary representation, used to reconstruct an approximation of the original patch, consists of the sequence of representations extracted by the S stages of the model, $B = [B_1, B_2, \ldots, B_S]$. The first stage of the model extracts a binary code $B_1$ from the input patch P. Each of the subsequent stages learns to extract representations $B_s$ to model the compression residuals $R_{s-1}$ from the previous stage. The compression residuals $R_s$ are defined as $R_s = R_{s-1} - M_s(R_{s-1}|\Theta_s)$, where $M_s(R_{s-1}|\Theta_s)$ represents the reconstruction obtained by stage s when modelling the residuals $R_{s-1}$. The model at each stage is split into an encoder $B_s = E_s(R_{s-1}|\Theta^E_s)$ and a decoder $D_s(B_s|\Theta^D_s)$, such that $M_s(R_{s-1}|\Theta_s) = D_s(E_s(R_{s-1}|\Theta^E_s)|\Theta^D_s)$ and $\Theta_s = \{\Theta^E_s, \Theta^D_s\}$. The parameters for the s-th stage of the model are denoted by $\Theta_s$. The residual encoder-decoder is trained on a dataset $\mathcal{P}$, consisting of N image patches, according to the following objective:

$$\hat{L}(\mathcal{P}; \Theta_{1:S}) = \sum_{s=1}^{S} \sum_{i=1}^{N} \left\| R^{(i)}_{s-1} - M_s(R^{(i)}_{s-1}|\Theta_s) \right\|_2^2. \quad (1)$$

$R^{(i)}_s$ represents the compression residual for the i-th patch $P^{(i)}$ after stage s, and $R^{(i)}_0 = P^{(i)}$. Residual encoders are difficult to optimize, as gradients have to traverse long paths from later stages to affect change in the previous stages. When moving along longer paths, gradients tend to decrease in magnitude as they get to earlier stages. We address this shortcoming of residual encoders by studying a class of architectures we refer to as "Residual-to-Image" (R2I).

2.1 Residual-to-Image (R2I)

To address the issue of vanishing gradients, we add connections between subsequent stages and restate the loss to predict the original data at the end of each stage, thus performing residual-to-image prediction. This leads to the new objective:

$$L(\mathcal{P}; \Theta_{1:S}) = \sum_{s=1}^{S} \sum_{i=1}^{N} \left\| P^{(i)} - M_s(R^{(i)}_{s-1}|\Theta_s) \right\|_2^2. \quad (2)$$
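Objectives (1) and (2) differ only in the regression target of each stage. A toy numpy sketch of the two recursions (the stage functions below are stand-ins, not the paper's convolutional stages):

```python
import numpy as np

def residual_encoder_loss(P, stages):
    """Objective (1): stage s regresses the previous stage's residual."""
    R, loss = P.copy(), 0.0              # R_0 = P
    for M in stages:                     # M_s: residual-in, residual-out
        approx = M(R)
        loss += np.sum((R - approx) ** 2)
        R = R - approx                   # R_s = R_{s-1} - M_s(R_{s-1})
    return loss

def r2i_loss(P, stages):
    """Objective (2): stage s consumes residuals but regresses the original image."""
    R, loss = P.copy(), 0.0              # R_0 = P
    for M in stages:                     # M_s: residual-in, image-out
        recon = M(R)                     # estimate of the original patch P
        loss += np.sum((P - recon) ** 2)
        R = P - recon                    # residual w.r.t. the original data
    return loss
```

Note that in the actual R2I model each stage can reconstruct the full image only because of the cross-stage connections discussed next; the toy callables above merely illustrate the bookkeeping of the two losses.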
Stage s of this model takes as input the compression residuals $R_{s-1}$ computed with respect to the original data, $R_{s-1} = P - M_{s-1}(R_{s-2}|\Theta_{s-1})$, where $M_{s-1}(R_{s-2}|\Theta_{s-1})$ now approximates the reconstruction of the original data P at stage s−1. To allow complete image reconstructions to be produced at each stage while only feeding in residuals, we introduce connections between the layers of adjacent stages. These connections allow later stages to incorporate information that has been recovered by earlier stages into their estimate of the original image data. Consequently, these connections (between subsequent stages) allow for better optimization of the model. In addition to assisting with modeling the original image, these connections play two key roles. Firstly, they create residual blocks [10], which encourage explicit learning of how to reproduce information that could not be generated by the previous stage. Secondly, they reduce the length of the path along which information has to travel from later stages to impact the earlier stages, leading to a better joint optimization. This leads us to the question of where such connections should be introduced and how information should be propagated. We consider two types of connections to propagate information between successive stages. 1) Prediction connections are analogous to the identity shortcuts introduced by He et al. [10] for residual learning. They act as parameter-free additive connections: the output of each stage is produced by simply adding together the residual predictions of the current stage and all preceding stages (see Figure 1(b)) before applying a final non-linearity. 2) Parametric connections are referred to as projection shortcuts by He et al. [10]. Here we use them to connect corresponding layers in two consecutive stages of the compression model. The features of each layer from the previous stage are convolved with learned filters before being added to the features of the same layer in the current stage. A non-linearity is then applied on top.

Figure 1: Multiple approaches for introducing connections between successive stages: (a) residual encoder, (b) prediction connections, (c) full connections, (d) decoding connections. These designs for progressive architectures allow for varying degrees of information to be shared. Architectures (b-d) do not reconstruct residuals, but the original data at every stage. We call these architectures "residual-to-image" (R2I).

The prediction connections only yield the benefit of creating residual blocks, albeit very large ones that are thus difficult to optimize. In contrast, parametric connections allow the intermediate representations from previous stages to be passed to the subsequent stages. They also create a denser connectivity pattern, with gradients now moving along corresponding layers in adjacent stages. We consider two variants of parametric connections: "full" connections, which use parametric connections between all the layers in two successive stages (see Figure 1(c)), and "decoding" connections, which link only corresponding decoding layers (i.e., there are no connections between encoding layers of adjacent stages). We note that the LSTM-based model of Toderici et al. [20] represents a particular instance of an R2I network with full connections. In Section 3 we demonstrate that R2I models with decoding connections outperform those with full connections and provide an intuitive explanation for this result.

2.2 Inpainting Network

Image compression architectures learn to encode and decode an image patch-by-patch. Encoding all patches independently assumes that the regions contain truly independent content.
This assumption generally does not hold true when the patches being encoded are contiguous. We observe that the content of adjacent image patches is not independent, and we propose a new module for the compression model designed to exploit the spatial coherence between neighboring patches. We achieve this goal by training a model with the objective of predicting the content of each patch from information available in the neighboring regions. Deep models for inpainting, such as the one proposed by Pathak et al. [14], are trained to predict the values of pixels in a region $\hat{W}$ from a context region $\hat{C}$ (as shown in Figure 2). As there is data present all around the region to be inpainted, this imposes strong constraints on what the inpainted region should look like. We consider the scenario where images are encoded and decoded block-by-block, moving from left to right and from top to bottom (similar to how traditional codecs process images [1, 21]). At decoding time, only content above and to the left of each patch will have been reconstructed (see Figure 2(a)). This gives rise to the problem of "partial-context inpainting". We propose a model that, given an input region C, attempts to predict the content of the current patch P. We denote by $\hat{\mathcal{P}}$ the dataset which contains all the patches from the dataset $\mathcal{P}$ and the respective context regions C for each patch.

Figure 2: (a) The two kinds of inpainting problems: full-context inpainting (predict $\hat{W}$ from the surrounding context $\hat{C}$) and partial-context inpainting (predict P from the context C above and to the left). (b) A multi-scale convolutional layer with 3 dilation factors; the colored boxes represent pixels from which the content is sampled.

The loss function used to train our inpainting network is

$$L_{inp}(\hat{\mathcal{P}}; \Theta_I) = \sum_{i=1}^{N} \left\| P^{(i)} - M_I(C^{(i)}|\Theta_I) \right\|_2^2. \quad (3)$$

The output of the inpainting network is denoted by $M_I(C^{(i)}|\Theta_I)$, where $\Theta_I$ refers to the parameters of the inpainting network.
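The partial-context setup can be made concrete with a small sketch, consistent with the 64 × 64 context and 32 × 32 patch sizes given in Section 2.3 (the function name is ours):

```python
import numpy as np

def partial_context(image, y, x, patch=32):
    """Build the context C for the patch at (y, x): the 2*patch x 2*patch window
    whose bottom-right quadrant (the not-yet-decoded patch itself) is zeroed out."""
    C = image[y - patch:y + patch, x - patch:x + patch].copy()
    C[patch:, patch:] = 0.0          # region to be inpainted
    P = image[y:y + patch, x:x + patch].copy()
    return C, P
```

Only the top, left, and top-left neighbors of the patch survive in C, mirroring the left-to-right, top-to-bottom decoding order.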
2.2.1 Architecture of the Partial-Context Inpainting Network

Our inpainting network has a feed-forward architecture which propagates information from the context region C to the region being inpainted, P. To improve the ability of our model to predict content, we use a multi-scale convolutional layer as the basic building block of our inpainting network. We make use of the dilated convolutions described by Yu and Koltun [23] to allow for sampling at various scales. Each multi-scale convolutional layer is composed of k filters for each dilation factor being considered. Varying the dilation factor of the filters gives us the ability to analyze content at various scales. This structure of filters provides two benefits: first, it allows for a substantially denser and more diverse sampling of data from the context; second, it allows for better propagation of content at different spatial scales. A similarly designed layer was also used by Chen et al. [5] for sampling content at multiple scales for semantic segmentation. Figure 2(b) shows the structure of a multi-scale convolutional layer. The multi-scale convolutional layer also gives us the freedom to propagate content at full resolution (no striding or pooling), as only a few multi-scale layers suffice to cover the entire region. This allows us to train a relatively shallow yet highly expressive architecture which can propagate fine-grained information that might otherwise be lost due to sub-sampling. This light-weight and efficient design is needed to allow for joint training with a multi-stage compression model.

2.2.2 Connecting the Inpainting Network with the R2I Compression Model

Next, we describe how to use the prediction of the inpainting network to assist with compression.
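Before turning to that connection, the dilated sampling underlying the multi-scale layer of Section 2.2.1 can be sketched for a single channel and a single filter (a toy illustration, not the trained layer, which concatenates k filters per dilation factor):

```python
import numpy as np

def dilated_conv2d(x, w, dilation=1):
    """'Valid' 2-D convolution of a single-channel map x with a k x k kernel w
    whose taps are spaced `dilation` pixels apart.
    The effective receptive field is dilation * (k - 1) + 1 pixels wide."""
    k = w.shape[0]
    span = dilation * (k - 1)
    H, W = x.shape
    out = np.zeros((H - span, W - span))
    for i in range(k):
        for j in range(k):
            out += w[i, j] * x[i * dilation:i * dilation + H - span,
                               j * dilation:j * dilation + W - span]
    return out
```

With a 3 × 3 kernel, dilation factors 1, 2, 4, and 8 cover receptive fields of 3, 5, 9, and 17 pixels respectively, which is why a few such layers suffice to cover the whole context at full resolution.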
Whereas the inpainting network learns to predict the data as accurately as possible, we note that this is not sufficient to achieve good performance on compression, where it is also necessary that the "inpainting residuals" be easy to compress. We define the inpainting residuals as $R_0 = P - M_I(C|\Theta_I)$, where $M_I(C|\Theta_I)$ denotes the inpainting estimate. As we want to train our model to always predict the data, we add the inpainting estimate to the final prediction of each stage of our compression model. This allows us (a) to produce the original content at each stage and (b) to discover an inpainting that is beneficial for all stages of the model, because of joint training. We now train our complete model as

$$L_C(\hat{\mathcal{P}}; \Theta_I, \Theta_{1:S}) = L_{inp}(\hat{\mathcal{P}}; \Theta_I) + \sum_{i=1}^{N} \sum_{s=1}^{S} \left\| P^{(i)} - \left[ M_s(R^{(i)}_{s-1}|\Theta_s) + M_I(C^{(i)}|\Theta_I) \right] \right\|_2^2. \quad (4)$$

In this new objective $L_C$, the first term $L_{inp}$ corresponds to the original inpainting loss, and $R^{(i)}_0$ corresponds to the inpainting residual for example i. We note that each stage of this inpainting-based progressive coder directly affects what is learned by the inpainting network. We refer to the model trained with this joint objective as "Inpainting for Residual-to-Image Compression" (IR2I). Whereas we train our model to perform inpainting from the original image content, we use a lossy approximation of the context region C when encoding images with IR2I, because at decoding time our model does not have access to the original image data. We use the approximation from stage 2 of our model for performing inpainting at encoding and decoding time, and transmit the binary codes for the first two stages as a larger first code. This strategy allows us to leverage inpainting while performing progressive image compression.

2.3 Implementation Details

Our models were trained on 6,507 images from the ImageNet dataset [7], as proposed by Ballé et al. [2] to train their single-stage encoder-decoder architectures.
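As a toy sketch of the joint objective in Eq. (4), with simple callables standing in for the stages $M_s$ and the inpainting network $M_I$ (function and argument names are ours):

```python
import numpy as np

def ir2i_loss(P, C, inpaint, stages):
    """Sketch of objective (4): the inpainting estimate M_I(C) is added to every
    stage's prediction, and the inpainting loss and all stage losses are summed."""
    est = inpaint(C)                      # M_I(C | Theta_I)
    loss = np.sum((P - est) ** 2)         # L_inp term
    R = P - est                           # inpainting residual R_0
    for M in stages:
        recon = M(R) + est                # M_s(R_{s-1}) + M_I(C)
        loss += np.sum((P - recon) ** 2)
        R = P - recon                     # residual w.r.t. the original data
    return loss
```

Because `est` enters every stage's reconstruction, gradients from every stage flow back into the inpainting parameters, which is the mechanism behind the joint training described above.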
A full description of the R2I models and the inpainting network is provided in the supplementary material. We use the Caffe library [11] to train our models. The residual encoder and R2I models were trained for 60,000 iterations, whereas the jointly trained inpainting network was trained for 110,000 iterations. We used the Adam optimizer [12] for training our models and the MSRA initialization [9] for initializing all stages. We used an initial learning rate of 0.001; the learning rate was dropped by a factor of 10 after 30K and 45K iterations for the R2I models, and after 30K, 65K, and 90K iterations for the IR2I model. All of our models were trained to reproduce the content of 32 × 32 image patches. Each of our models has 8 stages, with each stage contributing 0.125 bits-per-pixel (bpp) to the total representation of a patch. Our models handle binary optimization by employing the biased estimators approach proposed by Raiko et al. [15], as was done by Toderici et al. [19, 20]. Our inpainting network has 8 multi-scale convolutional layers for content propagation and one standard convolutional layer for performing the final prediction. Each multi-scale convolutional layer consists of 24 filters for each of the dilation factors 1, 2, 4, and 8. Our inpainting network takes as input a context region C of size 64 × 64, where the bottom-right 32 × 32 region is zeroed out and represents the region to be inpainted.

3 Results

We investigate the improvement brought about by the presented techniques. We are interested in studying the reduction in bit-rate, for varying quality of reconstruction, achieved by our adaptations of the residual encoder proposed by Toderici et al. [19]. To evaluate performance, we compress images from the Kodak dataset [8] with our models. The dataset consists of 24 uncompressed color images of size 512 × 768. Quality is measured according to the MS-SSIM [22] metric (higher values indicate better quality).
We use the Bjontegaard-Delta metric [4] to compute the average reduction in bit-rate across all quality settings.

3.1 R2I - Design and Performance

The table in Figure 3(a) shows the percentage reduction in bit-rate achieved by the three variations of the Residual-to-Image models. As can be seen, adding side-connections and training for the more desirable objective (i.e., approximating the original data) at each stage helps each of our models. That said, having connections in the decoder only helps more than using a "full" connection approach or only sharing the final prediction.

Figure 3: (a) Average rate savings for each of the three R2I variants compared to the residual encoder proposed by Toderici et al. [19]:

Approach          Rate Savings (%)
                  SSIM     MS-SSIM
R2I Prediction    4.483    5.177
R2I Full          10.015   7.652
R2I Decoding      20.002   17.951

(b) The quality (MS-SSIM in dB) of images produced by each of the three R2I variants across a range of bit-rates.

Figure 4: The R2I training loss at 3 different stages (start, middle, end) viewed as a function of iterations for the "full" and the "decoding" connections models. We note that the decoding connections model converges faster, to a lower value, and shows less variance.
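The Bjontegaard-Delta rate savings used throughout this section can be sketched in one common formulation (a cubic fit in log bit-rate over the common quality range; implementations differ in minor details, so treat this as an assumption rather than the authors' exact code):

```python
import numpy as np

def bd_rate(rate_ref, qual_ref, rate_new, qual_new):
    """Bjontegaard-Delta bit-rate: average percentage rate difference between two
    rate-distortion curves over their overlapping quality range."""
    p_ref = np.polyfit(qual_ref, np.log10(rate_ref), 3)   # cubic fit in log-rate
    p_new = np.polyfit(qual_new, np.log10(rate_new), 3)
    lo = max(np.min(qual_ref), np.min(qual_new))          # common quality interval
    hi = min(np.max(qual_ref), np.max(qual_new))
    P_ref, P_new = np.polyint(p_ref), np.polyint(p_new)
    avg = (np.polyval(P_new, hi) - np.polyval(P_new, lo)
           - np.polyval(P_ref, hi) + np.polyval(P_ref, lo)) / (hi - lo)
    return (10.0 ** avg - 1.0) * 100.0                    # % rate change (negative = savings)
```

For instance, a curve that reaches every quality level at half the bit-rate of the reference yields a BD-rate of −50%.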
The model which shares only the prediction between stages performs poorly in comparison to the other two designs, as it does not allow features from earlier stages to be altered as efficiently as the full or decoding connections architectures do. The model with decoding connections does better than the architecture with full connections because, when connections exist only in the decoder, the binarization layer in each stage extracts a representation from the relevant information only (the residuals with respect to the data). In contrast, when connections are established in both the encoder and the decoder, the binary representation may include information that has already been captured by a previous stage, burdening each stage with identifying the information pertinent to improving the reconstruction and leading to a tougher optimization. Figure 4 shows that the model with full connections struggles to minimize the training error compared to the model with decoding connections. This difference in training error points to the fact that connections in the encoder make it harder for the model to do well at training time. This difficulty of optimization amplifies as stages are added, as can be seen from the gap between the full and decoding architectures (shown in Figure 3(b)), because the residuals become harder to compress. We note that R2I models significantly improve the quality of reconstruction at higher bit-rates but do not improve the estimates at lower bit-rates as much (see Figure 3(b)). This tells us that the overall performance can be improved by focusing on approaches that yield a significant improvement at lower bit-rates, such as inpainting, which is analyzed next.

(a) Impact of inpainting on compression performance:

Approach                 Rate Savings (%)
                         SSIM     MS-SSIM
R2I Decoding             20.002   17.951
R2I Decoding Sep-Inp     27.379   27.794
R2I Decoding Joint-Inp   63.353   60.446
All bit-rate savings are reported with respect to the residual encoder by Toderici et al. [19].

Figure 5: (a) Average rate savings with varying forms of inpainting. (b) The quality of images achieved with each of our proposed approaches at varying bit-rates.

3.2 Impact of Inpainting

We begin by analyzing the performance of the inpainting network and other approaches on partial-context inpainting. We compare the performance of the inpainting network with both traditional approaches and a learning-based baseline. Table 1 shows the average SSIM achieved by each approach when inpainting all non-overlapping patches in the Kodak dataset.

Table 1: Average SSIM for partial-context inpainting on the Kodak dataset [8]. The vanilla model is a feed-forward CNN with no multi-scale convolutions.

Approach   PDE-based [3]   Exemplar-based [6]   Vanilla network   Inpainting network
SSIM       0.4574          0.4611               0.4545            0.5165

The vanilla network corresponds to a 32-layer model (4 times as deep as the inpainting network) that does not use multi-scale convolutions (all filters have a dilation factor of 1), has the same number of parameters, and also operates at full resolution (as our inpainting network does). This points to the fact that the improvement of the inpainting network over the vanilla model is a consequence of using multi-scale convolutions. The inpainting network improves over traditional approaches because our model learns the best strategy for propagating content, as opposed to using hand-engineered principles of content propagation. The low performance of the vanilla network shows that learning by itself is not superior to traditional approaches; multi-scale convolutions play a key role in achieving better performance.
Whereas inpainting provides an initial estimate of the content within the region, it by no means generates a perfect reconstruction. This leads us to the question of whether this initial estimate is better than not having an estimate at all. The table in Figure 5(a) shows the performance on the compression task with and without inpainting. These results show that the greatest reduction in file size is achieved when the inpainting network is jointly trained with the R2I model. We note (from Figure 5(b)) that inpainting greatly improves the quality of results obtained at lower and at higher bit-rates. The baseline where the inpainting network is trained separately from the compression network is presented here to emphasize the role of joint training. Traditional codecs [1] use simple non-learning-based inpainting approaches, and their predefined methods of representing data are unable to compactly encode the inpainting residuals. Learning to inpaint separately improves the performance, as the inpainted estimate is better than not having any estimate. But given that the compression model has not been trained to optimize the compression residuals, the reduction in bit-rate for achieving high quality levels is low. We show that with joint training, we can not only train a model that does better inpainting but also ensure that the inpainting residuals can be represented compactly.

3.3 Comparison with Existing Approaches

Table 2 shows a comparison of the performance of various approaches relative to JPEG [21] in the 0.125 to 1 bits-per-pixel (bpp) range. We select this range because images from our models towards the end of this range show no perceptible artifacts of compression. The first part of the table evaluates the performance of learning-based progressive approaches. We note that our proposed model outperforms the multi-stage residual encoder proposed by Toderici et al.
[19] (trained on the same 6.5K dataset) by 17.9%, and IR2I outperforms the residual encoder by reducing file sizes by 60.4%. The residual-GRU, while similar in architecture to our "full" connections model, does not do better even when trained on a dataset that is 1000 times bigger and for 10 times more training time. The results shown here do not make use of entropy coding, as the goal of this work is to study how to improve the performance of deep networks for progressive image compression, and entropy coding makes it harder to understand where the performance improvements are coming from. As various approaches use different entropy coding methods, this further obfuscates the source of the improvements. The second part of the table shows the performance of existing codecs. Existing codecs use entropy coding and rate-distortion optimization. We note that even without using either of these powerful post-processing techniques, our final "IR2I" model is competitive with traditional methods for compression, which use both of these techniques. A comparison with recent non-progressive approaches [2, 18], which also use these post-processing techniques for image compression, is provided in the supplementary material.

Approach                     Training Images   Progressive   Rate Savings (%)
Residual Encoder [19]        6.5K              Yes            2.56
Residual-GRU [20]            6M                Yes           33.26
R2I (Decoding connections)   6.5K              Yes           18.53
IR2I                         6.5K              Yes           51.25
JPEG-2000 [17]               N/A               No            63.01
WebP [1]                     N/A               No            64.98

Table 2: Average rate savings compared to JPEG [21]. The savings are computed on the Kodak [8] dataset with rate-distortion profiles measuring MS-SSIM in the 0-1 bpp range.

We observe that a naive implementation of IR2I creates a linear dependence in content (as all regions used as context have to be decoded before being used for inpainting) and thus may be substantially slower.
In practice, this slowdown would be negligible, as one can use a diagonal scan pattern (similar to traditional codecs) to ensure high parallelism, thereby reducing run times. Furthermore, we perform inpainting using predictions from the first step only. Therefore, the dependence only exists when generating the first progressive code. For all subsequent stages, there is no dependence in content, and our approach is comparable in run time to similar approaches.

4 Conclusion and Future Work

We study a class of "Residual to Image" models and show that within this class, architectures with decoding connections perform better at approximating image data than designs with other forms of connectivity. We observe that our R2I decoding connections model struggles at low bit-rates, and we show how to exploit spatial coherence between the content of adjacent patches via inpainting to improve performance at approximating image content at low bit-rates. We design a new model for partial-context inpainting using multi-scale convolutions and show that the best way to leverage inpainting is by jointly training the inpainting network with our R2I Decoding model. One interesting extension of this work would be to incorporate entropy coding within our progressive compression framework to train models that produce binary codes which have low entropy and can be represented even more compactly. Another possible direction would be to extend our proposed framework to video data, where the gains from our discovery of recipes for improving compression may be even greater.

5 Acknowledgements

This work was funded in part by Intel Labs and NSF award CNS-120552. We gratefully acknowledge NVIDIA and Facebook for the donation of GPUs used for portions of this work. We would like to thank George Toderici, Nick Johnston, and Johannes Balle for providing us with information needed for accurate assessment.
We are grateful to members of the Visual Computing Lab at Intel Labs, and members of the Visual Learning Group at Dartmouth College for their feedback.

References

[1] WebP: a new image format for the web. https://developers.google.com/speed/webp/. Accessed: 2017-04-29.
[2] Johannes Ballé, Valero Laparra, and Eero P. Simoncelli. End-to-end optimized image compression. In ICLR, 2017.
[3] Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 417–424. ACM Press/Addison-Wesley Publishing Co., 2000.
[4] Gisle Bjontegaard. Improvements of the BD-PSNR model. In ITU-T SC16/Q6, 35th VCEG Meeting, Berlin, Germany, July 2008.
[5] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016.
[6] Antonio Criminisi, Patrick Pérez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9):1200–1212, 2004.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[8] Eastman Kodak Company. Kodak lossless true color image suite, 1999. http://r0k.us/graphics/kodak/.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[11] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[12] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
[13] Mary Meeker. Internet Trends Report 2016. KPCB, 2016.
[14] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536–2544, 2016.
[15] Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In ICLR, 2015.
[16] Oren Rippel and Lubomir Bourdev. Real-time adaptive image compression. In International Conference on Machine Learning, 2017.
[17] Athanassios Skodras, Charilaos Christopoulos, and Touradj Ebrahimi. The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine, 18(5):36–58, 2001.
[18] L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. In ICLR, 2017.
[19] George Toderici, Sean M. O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. In ICLR, 2016.
[20] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016.
[21] Gregory K. Wallace. The JPEG still picture compression standard. Communications of the ACM, 34(4), 1991.
[22] Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multiscale structural similarity for image quality assessment. In Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, volume 2, pages 1398–1402. IEEE, 2003.
[23] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
Differentiable Learning of Logical Rules for Knowledge Base Reasoning

Fan Yang, Zhilin Yang, William W. Cohen
School of Computer Science, Carnegie Mellon University
{fanyang1,zhiliny,wcohen}@cs.cmu.edu

Abstract

We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space. We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model. This approach is inspired by a recently-developed differentiable logic called TensorLog [5], where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose these operations. Empirically, our method outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies.

1 Introduction

A large body of work in AI and machine learning has considered the problem of learning models composed of sets of first-order logical rules. An example of such rules is shown in Figure 1. Logical rules are useful representations for knowledge base reasoning tasks because they are interpretable, which can provide insight into inference results. In many cases this interpretability leads to robustness in transfer tasks. For example, consider the scenario in Figure 1. If new facts about more companies or locations are added to the knowledge base, the rule about HasOfficeInCountry will still be usefully accurate without retraining. The same might not be true for methods that learn embeddings for specific knowledge base entities, as is done in TransE [3].
[Figure 1: Using logical rules (shown in the box) for knowledge base reasoning. The rule HasOfficeInCountry(Y, X) ← HasOfficeInCity(Z, X), CityInCountry(Y, Z) answers the query "In which country Y does X have an office?". Given the facts HasOfficeInCity(New York, Uber) and CityInCountry(USA, New York), it infers Y = USA for X = Uber; likewise, HasOfficeInCity(Paris, Lyft) and CityInCountry(France, Paris) give Y = France for X = Lyft.]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Learning collections of relational rules is a type of statistical relational learning [7], and when the learning involves proposing new logical rules, it is often called inductive logic programming [18]. Often the underlying logic is a probabilistic logic, such as Markov Logic Networks [22] or ProPPR [26]. The advantage of using a probabilistic logic is that by equipping logical rules with probability, one can better model statistically complex and noisy data. Unfortunately, this learning problem is quite difficult: it requires learning both the structure (i.e. the particular sets of rules included in a model) and the parameters (i.e. the confidence associated with each rule). Determining the structure is a discrete optimization problem, and one that involves search over a potentially large problem space. Many past learning systems have thus used optimization methods that interleave moves in a discrete structure space with moves in parameter space [12, 13, 14, 27]. In this paper, we explore an alternative approach: a completely differentiable system for learning models defined by sets of first-order rules. This allows one to use modern gradient-based programming frameworks and optimization methods for the inductive logic programming task. Our approach is inspired by a differentiable probabilistic logic called TensorLog [5].
TensorLog establishes a connection between inference using first-order rules and sparse matrix multiplication, which enables certain types of logical inference tasks to be compiled into sequences of differentiable numerical operations on matrices. However, TensorLog is limited as a learning system because it only learns parameters, not rules. In order to learn parameters and structure simultaneously in a differentiable framework, we design a neural controller system with an attention mechanism and memory to learn to sequentially compose the primitive differentiable operations used by TensorLog. At each stage of the computation, the controller uses attention to “softly” choose a subset of TensorLog’s operations, and then performs the operations with contents selected from the memory. We call our approach neural logic programming, or Neural LP. Experimentally, we show that Neural LP performs well on a number of tasks. It improves the performance in knowledge base completion on several benchmark datasets, such as WordNet18 and Freebase15K [3]. And it obtains state-of-the-art performance on Freebase15KSelected [25], a recent and more challenging variant of Freebase15K. Neural LP also performs well on standard benchmark datasets for statistical relational learning, including datasets about biomedicine and kinship relationships [12]. Since good performance on many of these datasets can be obtained using short rules, we also evaluate Neural LP on a synthetic task which requires longer rules. Finally, we show that Neural LP can perform well in answering partially structured queries, where the query is posed partially in natural language. In particular, Neural LP also obtains state-of-the-art results on the KB version of the WIKIMOVIES dataset [16] for question-answering against a knowledge base. In addition, we show that logical rules can be recovered by executing the learned controller on examples and tracking the attention. 
To summarize, the contributions of this paper include the following. First, we describe Neural LP, which is, to our knowledge, the first end-to-end differentiable approach to learning not only the parameters but also the structure of logical rules. Second, we experimentally evaluate Neural LP on several types of knowledge base reasoning tasks, illustrating that this new approach to inductive logic programming outperforms prior work. Third, we illustrate techniques for visualizing a Neural LP model as logical rules.

2 Related work

Structure embedding [3, 24, 29] has been a popular approach to reasoning with a knowledge base. This approach usually learns an embedding that maps knowledge base relations (e.g., CityInCountry) and entities (e.g., USA) to tensors or vectors in latent feature spaces. Though our Neural LP system can be used for similar tasks as structure embedding, the methods are quite different. Structure embedding focuses on learning representations of relations and entities, while Neural LP learns logical rules. In addition, logical rules learned by Neural LP can be applied to entities not seen at training time. This is not achievable by structure embedding, since its reasoning ability relies on entity-dependent representations. Neural LP differs from prior work on logical rule learning in that the system is end-to-end differentiable, thus enabling gradient-based optimization, while most prior work involves discrete search in the problem space. For instance, Kok and Domingos [12] interleave beam search, using discrete operators to alter a rule set, with parameter learning via numeric methods for rule confidences. Lao and Cohen [13] introduce all rules from a restricted set, then use lasso-style regression to select a subset of predictive rules. Wang et al.
[27] use an Iterative Structural Gradient algorithm that alternates gradient-based search for the parameters of a probabilistic logic, ProPPR [26], with structural additions suggested by the parameter gradients. Recent work on neural program induction [21, 20, 1, 8] has used attention mechanisms to "softly choose" differentiable operators, where the attentions are simply approximations to binary choices. The main difference in our work is that attentions are treated as confidences of the logical rules and have semantic meanings. In other words, Neural LP learns a distribution over logical rules, instead of an approximation to a particular rule. Therefore, we do not use hardmax to replace softmax during inference time.

3 Framework

3.1 Knowledge base reasoning

Knowledge bases are collections of relational data of the format Relation(head,tail), where head and tail are entities and Relation is a binary relation between entities. Examples of such data tuples are HasOfficeInCity(New York,Uber) and CityInCountry(USA,New York). The knowledge base reasoning task we consider here consists of a query¹, an entity tail that the query is about, and an entity head that is the answer to the query. The goal is to retrieve a ranked list of entities based on the query such that the desired answer (i.e. head) is ranked as high as possible. To reason over the knowledge base, for each query we are interested in learning weighted chain-like logical rules of the following form, similar to stochastic logic programs [19]:

    α  query(Y,X) ← R_n(Y,Z_n) ∧ ··· ∧ R_1(Z_1,X)    (1)

where α ∈ [0, 1] is the confidence associated with this rule, and R_1, ..., R_n are relations in the knowledge base. During inference, given an entity x, the score of each y is defined as the sum of the confidences of rules that imply query(y,x), and we return a ranked list of entities where a higher score implies a higher ranking.
3.2 TensorLog for KB reasoning

We next introduce TensorLog operators and then describe how they can be used for KB reasoning. Given a knowledge base, let E be the set of all entities and R be the set of all binary relations. We map all entities to integers, and each entity i is associated with a one-hot encoded vector v_i ∈ {0, 1}^|E| such that only the i-th entry is 1. TensorLog defines an operator M_R for each relation R. Concretely, M_R is a matrix in {0, 1}^{|E|×|E|} such that its (i, j) entry is 1 if and only if R(i,j) is in the knowledge base, where i is the i-th entity and similarly for j.

We now draw the connection between TensorLog operations and a restricted case of logical rule inference. Using the operators described above, we can imitate logical rule inference R(Y,X) ← P(Y,Z) ∧ Q(Z,X) for any entity X = x by performing the matrix multiplications M_P · M_Q · v_x =: s. In other words, the non-zero entries of the vector s equal the set of y such that there exists z for which P(y,z) and Q(z,x) are in the KB. Though we describe the case where the rule length is two, it is straightforward to generalize this connection to rules of any length.

Using TensorLog operations, what we want to learn for each query is shown in Equation 2,

    Σ_l α_l Π_{k∈β_l} M_{R_k}    (2)

where l indexes over all possible rules, α_l is the confidence associated with rule l, and β_l is an ordered list of all relations in this particular rule. During inference, given an entity v_x, the score of each retrieved entity is then equivalent to the entries in the vector s, as shown in Equation 3:

    s = Σ_l α_l (Π_{k∈β_l} M_{R_k} v_x),    score(y | x) = v_y^T s    (3)

To summarize, we are interested in the following learning problem for each query,

    max_{α_l, β_l} Σ_{(x,y)} score(y | x) = max_{α_l, β_l} Σ_{(x,y)} v_y^T ( Σ_l α_l (Π_{k∈β_l} M_{R_k} v_x) )    (4)

where {x, y} are entity pairs that satisfy the query, and {α_l, β_l} are to be learned.
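This correspondence can be checked concretely. The snippet below builds a four-entity toy KB (the entity ids are our own assumption, not from the paper) and performs the inference HasOfficeInCountry(Y, X) ← HasOfficeInCity(Z, X) ∧ CityInCountry(Y, Z) for X = Uber purely with matrix-vector products, i.e. Equation 3 with a single rule of confidence 1.

```python
import numpy as np

# Toy knowledge base over 4 entities: 0 = Uber, 1 = New York, 2 = USA, 3 = Paris
E = 4
def one_hot(i):
    v = np.zeros(E)
    v[i] = 1.0
    return v

# Operator matrices M_R: entry (i, j) = 1 iff R(i, j) is in the KB.
M_office = np.zeros((E, E))
M_office[1, 0] = 1.0        # HasOfficeInCity(New York, Uber)
M_country = np.zeros((E, E))
M_country[2, 1] = 1.0       # CityInCountry(USA, New York)

# Rule inference for X = Uber is imitated by M_P · M_Q · v_x:
s = M_country @ (M_office @ one_hot(0))
print(np.nonzero(s)[0])     # -> [2], i.e. Y = USA
```

The non-zero entries of s are exactly the entities y for which an intermediate z exists, matching the logical reading of the rule.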
¹In this work, the notion of query refers to relations, which differs from the conventional notion, where a query usually contains a relation and an entity.

[Figure 2: The neural controller system.]

3.3 Learning the logical rules

We will now describe the differentiable rule learning process, including the learnable parameters and the model architecture. As shown in Equation 2, for each query, we need to learn the set of rules that imply it and the confidences associated with these rules. However, it is difficult to formulate a differentiable process to directly learn the parameters and the structure {α_l, β_l}. This is because each parameter is associated with a particular rule, and enumerating rules is an inherently discrete task. To overcome this difficulty, we observe that a different way to write Equation 2 is to interchange the summation and product, resulting in the following formula with a different parameterization,

    Π_{t=1}^{T} Σ_{k}^{|R|} a_t^k M_{R_k}    (5)

where T is the max length of rules and |R| is the number of relations in the knowledge base. The key parameterization difference between Equation 2 and Equation 5 is that in the latter we associate each relation in the rule with a weight. This combines the rule enumeration and confidence assignment. However, the parameterization in Equation 5 is not sufficiently expressive, as it assumes that all rules are of the same length. We address this limitation in Equations 6-8, where we introduce a recurrent formulation similar to Equation 3.

In the recurrent formulation, we use auxiliary memory vectors u_t. Initially the memory vector is set to the given entity v_x. At each step, as described in Equation 7, the model first computes a weighted average of the previous memory vectors using the memory attention vector b_t. Then the model "softly" applies the TensorLog operators using the operator attention vector a_t. This formulation allows the model to apply the TensorLog operators on all previous partial inference results, instead of just the last step's.
    u_0 = v_x    (6)

    u_t = ( Σ_{k}^{|R|} a_t^k M_{R_k} ) ( Σ_{τ=0}^{t-1} b_t^τ u_τ )    for 1 ≤ t ≤ T    (7)

    u_{T+1} = Σ_{τ=0}^{T} b_{T+1}^τ u_τ    (8)

Finally, the model computes a weighted average of all memory vectors, thus using attention to select the proper rule length. Given the above recurrent formulation, the learnable parameters for each query are {a_t | 1 ≤ t ≤ T} and {b_t | 1 ≤ t ≤ T + 1}.

We now describe a neural controller system to learn the operator and memory attention vectors. We use recurrent neural networks not only because they fit with our recurrent formulation, but also because it is likely that the current step's attentions are dependent on previous steps'. At every step t ∈ [1, T + 1], the network predicts the operator and memory attention vectors using Equations 9, 10, and 11. The input is the query for 1 ≤ t ≤ T and a special END token when t = T + 1.

    h_t = update(h_{t-1}, input)    (9)
    a_t = softmax(W h_t + b)    (10)
    b_t = softmax([h_0, ..., h_{t-1}]^T h_t)    (11)

The system then performs the computation in Equation 7 and stores u_t into the memory. The memory holds each step's partial inference results, i.e. {u_0, ..., u_t, ..., u_{T+1}}. Figure 2 shows an overview of the system. The final inference result u is just the last vector in memory, i.e. u_{T+1}. As discussed in Equation 4, the objective is to maximize v_y^T u. In particular, we maximize log v_y^T u because the nonlinearity empirically improves the optimization performance. We also observe that normalizing the memory vectors (i.e. u_t) to have unit length sometimes improves the optimization.

To recover logical rules from the neural controller system, for each query we can write rules and their confidences {α_l, β_l} in terms of the attention vectors {a_t, b_t}. Based on the relationship between Equation 3 and Equations 6-8, we can recover rules by following Equation 7 and keeping track of the coefficients in front of each matrix M_{R_k}. The detailed procedure is presented in Algorithm 1.
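Equations 6-8 can be prototyped in a few lines of NumPy. The function below is an illustrative sketch (the name and interface are ours, not the paper's TensorFlow implementation); with one-hot attentions it reduces exactly to the hard matrix-product inference of Section 3.2.

```python
import numpy as np

def neural_lp_inference(ops, a, b, v_x):
    """Soft rule application (Equations 6-8).

    ops : list of |R| operator matrices M_Rk
    a   : operator attentions; a[t-1] has length |R|  (steps t = 1..T)
    b   : memory attentions; b[t-1] has length t      (steps t = 1..T+1)
    """
    memory = [v_x]                                   # u_0 = v_x
    T = len(a)
    for t in range(T):
        # weighted average of all previous memories (memory attention, Eq. 7)
        ctx = sum(b[t][tau] * memory[tau] for tau in range(t + 1))
        # soft application of the TensorLog operators (operator attention, Eq. 7)
        memory.append(sum(a[t][k] * (M @ ctx) for k, M in enumerate(ops)))
    # final memory attention selects the effective rule length (Eq. 8)
    return sum(b[T][tau] * memory[tau] for tau in range(T + 1))

# Hard (one-hot) attentions recover exact inference on a toy KB:
E = 4
M_office = np.zeros((E, E)); M_office[1, 0] = 1.0    # HasOfficeInCity(NY, Uber)
M_country = np.zeros((E, E)); M_country[2, 1] = 1.0  # CityInCountry(USA, NY)
v_uber = np.zeros(E); v_uber[0] = 1.0
u = neural_lp_inference([M_office, M_country],
                        a=[[1.0, 0.0], [0.0, 1.0]],
                        b=[[1.0], [0.0, 1.0], [0.0, 0.0, 1.0]],
                        v_x=v_uber)
```

Here the memory attentions always point at the most recent memory, so the two steps compose M_office and then M_country, and u is the one-hot vector for USA.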
Algorithm 1: Recover logical rules from attention vectors
Input: attention vectors {a_t | t = 1, ..., T} and {b_t | t = 1, ..., T + 1}
Notation: Let R_t = {r_1, ..., r_l} be the set of partial rules at step t. Each rule r_l is represented by a pair (α, β) as described in Equation 1, where α is the confidence and β is an ordered list of relation indexes.
Initialize: R_0 = {r_0} where r_0 = (1, ()).
for t ← 1 to T + 1 do
    Initialize: R̂_t = ∅, a placeholder for storing intermediate results.
    for τ ← 0 to t − 1 do
        for rule (α, β) in R_τ do
            Update α′ ← α · b_t^τ. Store the updated rule (α′, β) in R̂_t.
    if t ≤ T then
        Initialize: R_t = ∅
        for rule (α, β) in R̂_t do
            for k ← 1 to |R| do
                Update α′ ← α · a_t^k, β′ ← β append k. Add the updated rule (α′, β′) to R_t.
    else
        R_t = R̂_t
return R_{T+1}

4 Experiments

To test the reasoning ability of Neural LP, we conduct experiments on statistical relation learning, grid path finding, knowledge base completion, and question answering against a knowledge base. For all the tasks, the data used in the experiments are divided into three files: facts, train, and test. The facts file is used as the knowledge base to construct the TensorLog operators {M_{R_k} | R_k ∈ R}. The train and test files contain query examples query(head,tail). Unlike in the case of learning embeddings, we do not require the entities in train and test to overlap, since our system learns rules that are entity independent.

Our system is implemented in TensorFlow and can be trained end-to-end using gradient methods. The recurrent neural network used in the neural controller is a long short-term memory [9], and the hidden state dimension is 128. The optimization algorithm we use is mini-batch ADAM [11] with batch size 64 and learning rate initially set to 0.001. The maximum number of training epochs is 10, and validation sets are used for early stopping.

4.1 Statistical relation learning

We conduct experiments on two benchmark datasets [12] in statistical relation learning.
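Algorithm 1 transcribes directly into Python. The sketch below is our own rendering (relation indexes are 0-based here, unlike the 1-based k in the pseudocode), and it reuses the one-hot attentions of the earlier toy example to show that a single rule of confidence 1 is recovered.

```python
def recover_rules(a, b, top=None):
    """Recover (confidence, relations) rules from attention vectors (Algorithm 1).

    a[t-1] : operator attention at step t, length |R|  (t = 1..T)
    b[t-1] : memory attention at step t, length t      (t = 1..T+1)
    """
    T = len(a)
    rules = [[(1.0, ())]]                    # R_0 = {r_0} with r_0 = (1, ())
    for t in range(1, T + 2):
        merged = []                          # R-hat_t: re-weight rules from all past steps
        for tau in range(t):
            for alpha, beta in rules[tau]:
                merged.append((alpha * b[t - 1][tau], beta))
        if t <= T:
            step = []                        # extend every partial rule by one relation
            for alpha, beta in merged:
                for k, a_k in enumerate(a[t - 1]):
                    step.append((alpha * a_k, beta + (k,)))
            rules.append(step)
        else:
            rules.append(merged)             # R_{T+1} = R-hat_{T+1}
    final = sorted(rules[-1], key=lambda r: -r[0])
    return final[:top] if top is not None else final

# One-hot attentions: apply relation 0, then relation 1, with confidence 1.
top_rule = recover_rules([[1.0, 0.0], [0.0, 1.0]],
                         [[1.0], [0.0, 1.0], [0.0, 0.0, 1.0]], top=1)[0]
```

With soft attentions the same procedure yields a full distribution of weighted rules, from which the highest-confidence ones can be read off.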
The first dataset, Unified Medical Language System (UMLS), is from biomedicine. The entities are biomedical concepts (e.g. disease, antibiotic) and the relations include treats and diagnoses. The second dataset, Kinship, contains kinship relationships among members of the Alyawarra tribe from Central Australia [6]. Dataset statistics are shown in Table 1. We randomly split the datasets into facts, train, and test files as described above with ratio 6:2:1. The evaluation metric is Hits@10. Experiment results are shown in Table 2. Compared with Iterative Structural Gradient (ISG) [27], Neural LP achieves better performance on both datasets.² We conjecture that this is mainly because of the optimization strategy used in Neural LP, which is end-to-end gradient-based, while ISG's optimization alternates between structure and parameter search.

[Figure 3: Accuracy on grid path finding.]

Table 1: Dataset statistics.
          # Data   # Relation   # Entity
UMLS      5960     46           135
Kinship   9587     25           104

Table 2: Experiment results (Hits@10). T indicates the maximum rule length.
          ISG               Neural LP
          T = 2    T = 3    T = 2    T = 3
UMLS      43.5     43.3     92.0     93.2
Kinship   59.2     59.0     90.2     90.1

4.2 Grid path finding

Since in the previous tasks the rules learned are of length at most three, we design a synthetic task to test whether Neural LP can learn longer rules. The experiment setup includes a knowledge base that contains location information about a 16-by-16 grid, such as North((1,2),(1,1)) and SouthEast((0,2),(1,1)). The query is randomly generated by combining a series of directions, such as North_SouthWest. The train and test examples are pairs of start and end locations, which are generated by randomly choosing a location on the grid and then following the queries. We classify the queries into four classes based on the path length (i.e. the Hamming distance between start and end), ranging from two to ten. Figure 3 shows the inference accuracy of this task for learning logical rules using ISG [27] and Neural LP.
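The grid knowledge base and examples are easy to reproduce. The direction-to-offset convention below is our own assumption (North adds 1 to the second coordinate, so North((1,2),(1,1)) holds as in the text; the diagonal offsets are likewise assumptions), and facts follow the Relation(head, tail) format.

```python
import random

DIRS = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0),
        "NorthEast": (1, 1), "NorthWest": (-1, 1),
        "SouthEast": (1, -1), "SouthWest": (-1, -1)}

def grid_facts(n=16):
    """Facts Relation(head, tail): head is reached from tail by one move."""
    facts = []
    for x in range(n):
        for y in range(n):
            for name, (dx, dy) in DIRS.items():
                hx, hy = x + dx, y + dy
                if 0 <= hx < n and 0 <= hy < n:
                    facts.append((name, (hx, hy), (x, y)))
    return facts

def make_example(query_dirs, n=16, rng=random):
    """One train/test pair: follow a composed query (e.g. ['North', 'SouthWest'])
    from a random start, resampling if the walk leaves the grid."""
    while True:
        x, y = rng.randrange(n), rng.randrange(n)
        path = [(x, y)]
        for d in query_dirs:
            dx, dy = DIRS[d]
            x, y = x + dx, y + dy
            if not (0 <= x < n and 0 <= y < n):
                break                       # walked off the grid: resample
            path.append((x, y))
        if len(path) == len(query_dirs) + 1:
            return path[0], path[-1]        # (tail = start, head = end)
```

Composing k directions yields a query whose correct rule has length k, which is what makes this task a probe for learning longer rules.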
As the path length and learning difficulty increase, the results show that Neural LP can accurately learn rules of length 6-8 for this task, and is more robust than ISG in terms of handling longer rules.

4.3 Knowledge base completion

We also conduct experiments on the canonical knowledge base completion task as described in [3]. In this task, the query and tail are part of a missing data tuple, and the goal is to retrieve the related head. For example, if HasOfficeInCountry(USA,Uber) is missing from the knowledge base, then the goal is to reason over existing data tuples and retrieve USA when presented with the query HasOfficeInCountry and Uber. To represent the query as a continuous input to the neural controller, we jointly learn an embedding lookup table for each query. The embedding has dimension 128 and is randomly initialized to unit norm vectors.

The knowledge bases in our experiments are from WordNet [17, 10] and Freebase [2]. We use the datasets WN18 and FB15K, which are introduced in [3]. We also considered a more challenging dataset, FB15KSelected [25], which is constructed by removing near-duplicate and inverse relations from FB15K. We use the same train/validation/test split as in prior work and augment the data files with reversed data tuples, i.e. for each relation, we add its inverse inv_relation. In order to create a facts file which will be used as the knowledge base, we further split the original train file into facts and train with ratio 3:1.³ The dataset statistics are summarized in Table 3.

²We use the implementation of ISG available at https://github.com/TeamCohen/ProPPR. In Wang et al. [27], ISG is compared with other statistical relational learning methods in a different experiment setup, and ISG is superior to several methods including Markov Logic Networks [12].

Table 3: Knowledge base completion dataset statistics.
Dataset         # Facts   # Train   # Test   # Relation   # Entity
WN18            106,088   35,354    5,000    18           40,943
FB15K           362,538   120,604   59,071   1,345        14,951
FB15KSelected   204,087   68,028    20,466   237          14,541

The attention vector at each step is by default applied to all relations in the knowledge base. Sometimes this creates an unnecessarily large search space. In our experiment on FB15K, we use a subset of operators for each query. The subsets are chosen by including the top 128 relations that share common entities with the query. For all datasets, the max rule length T is 2.

The evaluation metrics we use are Mean Reciprocal Rank (MRR) and Hits@10. MRR computes the average of the reciprocal ranks of the desired entities. Hits@10 computes the percentage of desired entities that are ranked among the top ten. Following the protocol in Bordes et al. [3], we also use filtered rankings. We compare the performance of Neural LP with several models, summarized in Table 4.

Table 4: Knowledge base completion performance comparison. TransE [4] and Neural Tensor Network [24] results are extracted from [29]. Results on FB15KSelected are from [25].

                          WN18              FB15K             FB15KSelected
                          MRR    Hits@10    MRR    Hits@10    MRR    Hits@10
Neural Tensor Network     0.53   66.1       0.25   41.4       -      -
TransE                    0.38   90.9       0.32   53.9       -      -
DISTMULT [29]             0.83   94.2       0.35   57.7       0.25   40.8
Node+LinkFeat [25]        0.94   94.3       0.82   87.0       0.23   34.7
Implicit ReasoNets [23]   -      95.3       -      92.7       -      -
Neural LP                 0.94   94.5       0.76   83.7       0.24   36.2

Neural LP gives state-of-the-art results on WN18, and results that are close to the state of the art on FB15K. It has been noted [25] that many relations in WN18 and FB15K also have their inverses defined, which makes them easy to learn. FB15KSelected is a more challenging dataset, and on it, Neural LP substantially improves the performance over Node+LinkFeat [25] and achieves performance similar to DISTMULT [29] in terms of MRR.
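The two metrics, with the filtered ranking protocol of Bordes et al. [3], can be sketched as follows; the function names and the score-list interface are our own, not from the paper.

```python
def filtered_rank(scores, target, known_true):
    """Rank of `target` when all other known-true answers are filtered out.

    scores     : list of model scores, indexed by entity id
    target     : entity id of the desired answer
    known_true : set of entity ids that are also correct answers for this query
    """
    s = scores[target]
    return 1 + sum(1 for e, v in enumerate(scores)
                   if e != target and e not in known_true and v > s)

def mrr_and_hits10(ranks):
    """MRR: mean of reciprocal ranks; Hits@10: fraction of ranks in the top ten."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits10 = sum(1 for r in ranks if r <= 10) / len(ranks)
    return mrr, hits10

# Entity 2 is another true answer, so its higher score does not hurt the rank.
r = filtered_rank([0.9, 0.5, 0.8, 0.1], target=1, known_true={2})
metrics = mrr_and_hits10([1, 2, 10, 20])
```

Filtering matters because a query can have several correct heads; without it, correct answers would be penalized for outranking each other.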
We note that in FB15KSelected, since the test entities are rarely directly linked in the knowledge base, the models need to reason explicitly about compositions of relations. The logical rules learned by Neural LP can very naturally capture such compositions. Examples of rules learned by Neural LP are shown in Table 5. The number in front of each rule is the normalized confidence, which is computed by dividing by the maximum confidence of rules for each relation. From the examples we can see that Neural LP successfully combines structure learning and parameter learning. It not only induces multiple logical rules to capture the complex structure in the knowledge base, but also learns to distribute confidences over rules.

To demonstrate the inductive learning advantage of Neural LP, we conduct experiments where training and testing use disjoint sets of entities. To create such a setting, we first randomly select a subset of the test tuples to be the test set. Secondly, we filter the train set by excluding any tuples that share entities with the selected test tuples. Table 6 shows the experiment results in this inductive setting.

³We also make minimal adjustments to ensure that all query relations in test appear at least once in train and all entities in train and test are also in facts. For FB15KSelected, we also ensure that entities in train are not directly linked in facts.

Table 5: Examples of logical rules learned by Neural LP on FB15KSelected. The letters A, B, C are ungrounded logic variables.
1.00 partially_contains(C,A) ← contains(B,A) ∧ contains(B,C)
0.45 partially_contains(C,A) ← contains(A,B) ∧ contains(B,C)
0.35 partially_contains(C,A) ← contains(C,B) ∧ contains(B,A)

1.00 marriage_location(C,A) ← nationality(C,B) ∧ contains(B,A)
0.35 marriage_location(B,A) ← nationality(B,A)
0.24 marriage_location(C,A) ← place_lived(C,B) ∧ contains(B,A)

1.00 film_edited_by(B,A) ← nominated_for(A,B)
0.20 film_edited_by(C,A) ← award_nominee(B,A) ∧ nominated_for(B,C)

Table 6: Inductive knowledge base completion. The metric is Hits@10.

            WN18    FB15K   FB15KSelected
TransE       0.01    0.48        0.53
Neural LP   94.49   73.28       27.97

As expected, the inductive setting results in a huge decrease in performance for the TransE model4, which uses a transductive learning approach: for all three datasets, its Hits@10 drops to near zero. In contrast, Neural LP is much less affected by unseen entities and achieves performance on the same scale as in the non-inductive setting. This emphasizes that our Neural LP model has the advantage of being able to transfer to unseen entities.

4.4 Question answering against knowledge base

We also conduct experiments on a knowledge reasoning task where the query is “partially structured”, i.e., posed partially in natural language. An example of a partially structured query would be “in which country does x have an office” for a given entity x, instead of HasOfficeInCountry(Y, x). Neural LP handles queries of this sort very naturally, since the input to the neural controller is a vector which can encode either a structured query or natural language text.

We use the WIKIMOVIES dataset from Miller et al. [16]. The dataset contains a knowledge base and question–answer pairs. Each question (i.e., the query) is about an entity and the answers are sets of entities in the knowledge base. There are 196,453 train examples and 10,000 test examples. The knowledge base has 43,230 movie-related entities and nine relations. A subset of the dataset is shown in Table 7.
Table 7: A subset of the WIKIMOVIES dataset.

Knowledge base:
directed_by(Blade Runner, Ridley Scott)
written_by(Blade Runner, Philip K. Dick)
starred_actors(Blade Runner, Harrison Ford)
starred_actors(Blade Runner, Sean Young)

Questions:
What year was the movie Blade Runner released?
Who is the writer of the film Blade Runner?

We process the dataset to match the input format of Neural LP. For each question, we identify the tail entity by checking which words match entities in the knowledge base. We also filter the words in the question, keeping only the top 100 most frequent words. The length of each question is limited to six words. To represent the query in natural language as a continuous input for the neural controller, we jointly learn an embedding lookup table for all words appearing in the query. The query representation is computed as the arithmetic mean of the embeddings of the words in it.

4We use the implementation of TransE available at https://github.com/thunlp/KB2E.

We compare Neural LP with several embedding-based QA models. The main difference between these methods and ours is that Neural LP does not embed the knowledge base, but instead learns to compose operators defined on the knowledge base. The comparison is summarized in Table 8. Experiment results are extracted from Miller et al. [16].

Table 8: Performance comparison. Memory Network is from [28]. QA system is from [4].

Model                          Accuracy
Memory Network                   78.5
QA system                        93.5
Key-Value Memory Network [16]    93.9
Neural LP                        94.6

To visualize the learned model, we randomly sample 650 questions from the test dataset and compute the embeddings of each question. We use t-SNE [15] to reduce the embeddings to two dimensions and plot them in Figure 4. Most learned logical rules consist of one relation from the knowledge base, and we use different colors to indicate the different relations and label some clusters by relation.

Figure 4: Visualization of learned logical rules.
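The query representation described above, the arithmetic mean of jointly learned word embeddings, can be sketched as follows; the vocabulary, dimension, and embedding values are illustrative (in the paper the lookup table is trained jointly with the model):

```python
# Toy embedding lookup table (trained jointly in practice).
emb = {
    "which":   [0.1, -0.2, 0.3, 0.0],
    "country": [0.5,  0.1, -0.1, 0.2],
    "office":  [0.0,  0.4, 0.2, -0.3],
}

def embed_query(words, dim=4):
    """Mean of the embeddings of the query's in-vocabulary words."""
    vecs = [emb[w] for w in words if w in emb]  # drop out-of-vocabulary words
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

q = embed_query(["which", "country", "office"])
# Each coordinate of q is the mean of the three word vectors' coordinates.
```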
The experiment results show that Neural LP can successfully handle queries that are posed in natural language by jointly learning word representations as well as the logical rules.

5 Conclusions

We present an end-to-end differentiable method for learning the parameters as well as the structure of logical rules for knowledge base reasoning. Our method, Neural LP, is inspired by TensorLog [5], a recent differentiable probabilistic logic. Empirically, Neural LP improves performance on several knowledge base reasoning datasets. In the future, we plan to work on more problems where logical rules are essential and complementary to pattern recognition.

Acknowledgments

This work was funded by NSF under IIS1250956 and by Google Research.

References

[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In Proceedings of NAACL-HLT, pages 1545–1554, 2016.
[2] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. ACM, 2008.
[3] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013.
[4] Antoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676, 2014.
[5] William W. Cohen. TensorLog: A differentiable deductive database. arXiv preprint arXiv:1605.06523, 2016.
[6] Woodrow W. Denham. The detection of patterns in Alyawara nonverbal behavior. PhD thesis, University of Washington, Seattle, 1973.
[7] Lise Getoor. Introduction to Statistical Relational Learning. MIT Press, 2007.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[10] Adam Kilgarriff and Christiane Fellbaum. WordNet: An electronic lexical database, 2000.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Stanley Kok and Pedro Domingos. Statistical predicate invention. In Proceedings of the 24th International Conference on Machine Learning, pages 433–440. ACM, 2007.
[13] Ni Lao and William W. Cohen. Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81(1):53–67, 2010.
[14] Ni Lao, Tom Mitchell, and William W. Cohen. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 529–539. Association for Computational Linguistics, 2011.
[15] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
[16] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.
[17] George A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
[18] Stephen Muggleton, Ramon Otero, and Alireza Tamaddoni-Nezhad. Inductive Logic Programming, volume 38. Springer, 1992.
[19] Stephen Muggleton et al. Stochastic logic programs. Advances in Inductive Logic Programming, 32:254–264, 1996.
[20] Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever.
Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
[21] Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. Learning a natural language interface with neural programmer. arXiv preprint arXiv:1611.08945, 2016.
[22] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[23] Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and Jianfeng Gao. Implicit ReasoNet: Modeling large-scale structured relationships with shared memory. arXiv preprint arXiv:1611.04642, 2016.
[24] Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934, 2013.
[25] Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, 2015.
[26] William Yang Wang, Kathryn Mazaitis, and William W. Cohen. Programming with personalized pagerank: a locally groundable first-order probabilistic logic. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 2129–2138. ACM, 2013.
[27] William Yang Wang, Kathryn Mazaitis, and William W. Cohen. Structure learning via parameter learning. In CIKM 2014, 2014.
[28] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
[29] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.
Rotting Bandits

Nir Levine
Electrical Engineering Department, The Technion, Haifa 32000, Israel
levin.nir1@gmail.com

Koby Crammer
Electrical Engineering Department, The Technion, Haifa 32000, Israel
koby@ee.technion.ac.il

Shie Mannor
Electrical Engineering Department, The Technion, Haifa 32000, Israel
shie@ee.technion.ac.il

Abstract

The Multi-Armed Bandits (MAB) framework highlights the trade-off between acquiring new knowledge (Exploration) and leveraging available knowledge (Exploitation). In the classical MAB problem, a decision maker must choose an arm at each time step, upon which she receives a reward. The decision maker’s objective is to maximize her cumulative expected reward over the time horizon. The MAB problem has been studied extensively, specifically under the assumption that the arms’ reward distributions are stationary, or quasi-stationary, over time. We consider a variant of the MAB framework, which we term Rotting Bandits, where each arm’s expected reward decays as a function of the number of times it has been pulled. We are motivated by many real-world scenarios such as online advertising, content recommendation, crowdsourcing, and more. We present algorithms, accompanied by simulations, and derive theoretical guarantees.

1 Introduction

One of the most fundamental trade-offs in stochastic decision theory is the well-celebrated Exploration vs. Exploitation dilemma. Should one acquire new knowledge at the expense of a possible sacrifice in the immediate reward (Exploration), or leverage past knowledge in order to maximize instantaneous reward (Exploitation)? Solutions that have been demonstrated to perform well are those which succeed in balancing the two. First proposed by Thompson [1933] in the context of drug trials, and later formulated in a more general setting by Robbins [1985], MAB problems serve as a distilled framework for this dilemma.
In the classical setting of the MAB, at each time step the decision maker must choose (pull) between a fixed number of arms. After pulling an arm, she receives a reward which is a realization drawn from the arm’s underlying reward distribution. The decision maker’s objective is to maximize her cumulative expected reward over the time horizon. An equivalent, more typically studied objective is the regret, defined as the difference between the optimal cumulative expected reward (under full information) and that of the policy deployed by the decision maker.

The MAB formulation has been studied extensively and was leveraged to formulate many real-world problems. Some examples of such modeling are online advertising [Pandey et al., 2007], routing of packets [Awerbuch and Kleinberg, 2004], and online auctions [Kleinberg and Leighton, 2003]. Most past work (Section 6) on the MAB framework has been performed under the assumption that the underlying distributions are stationary, or possibly quasi-stationary. In many real-world scenarios this assumption may seem simplistic. Specifically, we are motivated by real-world scenarios where the expected reward of an arm decreases with the number of times it has been pulled. We term this variant Rotting Bandits. For motivational purposes, we present the following two examples.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

• Consider an online advertising problem where an agent must choose which ad (arm) to present (pull) to a user. It seems reasonable that the effectiveness (reward) of a specific ad on a user would deteriorate over exposures. Similarly, in the content recommendation context, Agarwal et al. [2009] showed that articles’ CTR decays with the number of exposures.
• Consider the problem of assigning projects through crowdsourcing systems [Tran-Thanh et al., 2012].
Given that the assignments primarily require human perception, subjects may fall into boredom and their performance would decay (e.g., license plate transcriptions [Du et al., 2013]).

As opposed to the stationary case, where the optimal policy is to always choose some specific arm, in the case of Rotting Bandits the optimal policy consists of choosing different arms. This results in the notion of adversarial regret vs. policy regret [Arora et al., 2012] (see Section 6). In this work we tackle the harder problem of minimizing the policy regret. The main contributions of this paper are the following:
• We introduce a novel, real-world oriented MAB formulation, termed Rotting Bandits.
• We present an easy-to-follow algorithm for the general case, accompanied by theoretical guarantees.
• We refine the theoretical guarantees for the case of existing prior knowledge on the rotting models, accompanied by suitable algorithms.

The rest of the paper is organized as follows: in Section 2 we present the model and relevant preliminaries. In Section 3 we present our algorithm along with theoretical guarantees for the general case. In Section 4 we do the same for the parameterized case, followed by simulations in Section 5. In Section 6 we review related work, and conclude with a discussion in Section 7.

2 Model and Preliminaries

We consider the problem of Rotting Bandits (RB): an agent is given K arms, and at each time step t = 1, 2, ... one of the arms must be pulled. We denote the arm that is pulled at time step t as i(t) ∈ [K] = {1, ..., K}. When arm i is pulled for the nth time, the agent receives a time-independent, σ²-sub-Gaussian random reward, r_t, with mean µ_i(n).¹ In this work we consider two cases: (1) There is no prior knowledge on the expected rewards, except for the ‘rotting’ assumption to be presented shortly, i.e., a non-parametric case (NPC).
(2) There is prior knowledge that the expected rewards are comprised of an unknown constant part and a rotting part which is known to belong to a set of rotting models, i.e., a parametric case (PC).

Let N_i(t) be the number of pulls of arm i at time t, not including this round’s choice (N_i(1) = 0), and Π the set of all sequences i(1), i(2), ..., where i(t) ∈ [K], ∀t ∈ N; i.e., π ∈ Π is an infinite sequence of actions (arms), also referred to as a policy. We denote the arm chosen by policy π at time t as π(t). The objective of an agent is to maximize the expected total reward in time T, defined for policy π ∈ Π by

J(T; π) = E[ Σ_{t=1}^{T} µ_{π(t)}(N_{π(t)}(t) + 1) ].   (1)

We consider the equivalent objective of minimizing the regret in time T, defined by

R(T; π) = max_{π̃∈Π} {J(T; π̃)} − J(T; π).   (2)

Assumption 2.1. (Rotting) ∀i ∈ [K], µ_i(n) is positive and non-increasing in n.

¹Our results hold for pull-number dependent variances σ²(n), by upper-bounding them: σ² ≥ σ²(n), ∀n. It is fairly straightforward to adapt the results to pull-number dependent variances, but we believe that the way presented conveys the setting in the clearest way.

2.1 Optimal Policy

Let π_max be a policy defined by

π_max(t) ∈ argmax_{i∈[K]} {µ_i(N_i(t) + 1)},   (3)

where, in the case of a tie, it is broken randomly.

Lemma 2.1. π_max is an optimal policy for the RB problem.
Proof: See Appendix B of the supplementary material.

3 Non-Parametric Case

In the NPC setting for the RB problem, the only information we have is that the expected reward sequences are positive and non-increasing in the number of pulls. The Sliding-Window Average (SWA) approach is a heuristic for ensuring with high probability that, at each time step, the agent did not sample significantly sub-optimal arms too many times. We note that, potentially, the optimal arm changes throughout the trajectory, as Lemma 2.1 suggests. We start by assuming that we know the time horizon, and later account for the case that we do not.
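Before describing SWA, the greedy policy of Lemma 2.1 can be sketched directly when the mean sequences are known: at each step, pull the arm whose next expected reward is largest. The toy reward means below are illustrative only.

```python
def pi_max(mus, T):
    """mus[i](n) = expected reward of arm i on its n-th pull (1-indexed)."""
    counts = [0] * len(mus)
    total = 0.0
    for _ in range(T):
        # Pull the arm with the largest *next* expected reward.
        i = max(range(len(mus)), key=lambda a: mus[a](counts[a] + 1))
        counts[i] += 1
        total += mus[i](counts[i])
    return total, counts

# Arm 0 is constant; arm 1 starts high and rots after three pulls.
mus = [lambda n: 0.5, lambda n: 1.0 if n <= 3 else 0.1]
total, counts = pi_max(mus, T=6)
print(total, counts)  # 4.5 [3, 3]
```

Note how the optimal pull sequence mixes arms: arm 1 is pulled while its mean is high, then the policy switches to arm 0.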
Known Horizon. The idea behind the SWA approach is that after we have pulled a significantly sub-optimal arm “enough” times, the empirical average of these “enough” pulls becomes distinguishable from the optimal arm for that time step; as such, at any given time step there is a bounded number of significantly sub-optimal pulls compared to the optimal policy. A pseudo-algorithm for SWA is given by Algorithm 1.

Algorithm 1 SWA
  Input: K, T, α > 0
  Initialize: M ← ⌈α 4^{2/3} σ^{2/3} K^{−2/3} T^{2/3} ln^{1/3}(√2 T)⌉, and N_i ← 0 for all i ∈ [K]
  for t = 1, 2, .., KM do
    Ramp up: choose i(t) by Round-Robin; receive r_t; set N_{i(t)} ← N_{i(t)} + 1, r_{i(t)}^{N_{i(t)}} ← r_t
  end for
  for t = KM + 1, ..., T do
    Balance: i(t) ∈ argmax_{i∈[K]} (1/M) Σ_{n=N_i−M+1}^{N_i} r_i^n
    Update: receive r_t; set N_{i(t)} ← N_{i(t)} + 1, r_{i(t)}^{N_{i(t)}} ← r_t
  end for

Theorem 3.1. Suppose Assumption 2.1 holds. The SWA algorithm achieves regret bounded by

R(T; π^SWA) ≤ (α max_{i∈[K]} µ_i(1) + α^{−1/2}) 4^{2/3} σ^{2/3} K^{1/3} T^{2/3} ln^{1/3}(√2 T) + 3K max_{i∈[K]} µ_i(1).   (4)

Proof: See Appendix C.1 of the supplementary material.

We note that the upper bound attains its minimum for α = (2 max_{i∈[K]} µ_i(1))^{−2/3}, which can serve as a way to choose α if max_{i∈[K]} µ_i(1) is known; but α can also be given as an input to SWA to allow control of the averaging window size.

Unknown Horizon. In this case we use the doubling trick in order to achieve the same horizon-dependent rate for the regret. We apply the SWA algorithm with a series of increasing horizons (powers of two, i.e., 1, 2, 4, ...) until reaching the (unknown) horizon. We term this algorithm wSWA (wrapper SWA).

Corollary 3.1.1. Suppose Assumption 2.1 holds. The wSWA algorithm achieves regret bounded by

R(T; π^wSWA) ≤ (α max_{i∈[K]} µ_i(1) + α^{−1/2}) 8 σ^{2/3} K^{1/3} T^{2/3} ln^{1/3}(√2 T) + 3K max_{i∈[K]} µ_i(1) (log₂ T + 1).   (5)

Proof: See Appendix C.2 of the supplementary material.
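A runnable sketch of SWA's two phases under a toy rotting instance follows. The reward functions and parameter values are stand-ins, not the paper's experimental setup; the window formula follows Algorithm 1 as reconstructed above.

```python
import math
import random

def swa(pull, K, T, alpha=0.2, sigma=0.2):
    # Window size from Algorithm 1:
    # M = ceil(alpha 4^{2/3} sigma^{2/3} K^{-2/3} T^{2/3} ln^{1/3}(sqrt(2) T))
    M = max(1, math.ceil(alpha * 4 ** (2 / 3) * sigma ** (2 / 3)
                         * K ** (-2 / 3) * T ** (2 / 3)
                         * math.log(math.sqrt(2) * T) ** (1 / 3)))
    rewards = [[] for _ in range(K)]
    for t in range(K * M):                 # ramp up by round-robin
        i = t % K
        rewards[i].append(pull(i, len(rewards[i]) + 1))
    for t in range(K * M, T):              # balance on sliding-window averages
        i = max(range(K), key=lambda a: sum(rewards[a][-M:]) / M)
        rewards[i].append(pull(i, len(rewards[i]) + 1))
    return rewards

# Toy rotting arms: arm 0 is constant; arm 1 rots after 50 pulls.
def pull(i, n):
    mean = 0.5 if i == 0 else (1.0 if n <= 50 else 0.1)
    return mean + random.gauss(0.0, 0.2)

random.seed(0)
pulls = [len(r) for r in swa(pull, K=2, T=500)]  # total pulls sum to T
```

Because only the last M rewards of an arm enter its score, a rotted arm stops looking attractive after about M more pulls, which is the mechanism behind the bound in Theorem 3.1.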
4 Parametric Case

In the PC setting for the RB problem, there is prior knowledge that the expected rewards are comprised of a sum of an unknown constant part and a rotting part known to belong to a set of models, Θ; i.e., the expected reward of arm i at its nth pull is given by µ_i(n) = µ^c_i + µ(n; θ*_i), where θ*_i ∈ Θ. We denote {θ*_i}_{i=1}^{K} by Θ*. We consider two cases: the first is the asymptotically vanishing case (AV), i.e., ∀i: µ^c_i = 0; the second is the asymptotically non-vanishing case (ANV), i.e., ∀i: µ^c_i ∈ R. We present a few definitions that will serve us in the following section.

Definition 4.1. For a function f: N → R, we define the function f*↓: R → N ∪ {∞} by the following rule: given ζ ∈ R, f*↓(ζ) returns the smallest N ∈ N such that ∀n ≥ N: f(n) ≤ ζ, or ∞ if no such N exists.

Definition 4.2. For any θ1 ≠ θ2 ∈ Θ², define det_{θ1,θ2}, Ddet_{θ1,θ2}: N → R as

det_{θ1,θ2}(n) = nσ² / ( Σ_{j=1}^{n} µ(j; θ1) − Σ_{j=1}^{n} µ(j; θ2) )²,

Ddet_{θ1,θ2}(n) = nσ² / ( Σ_{j=1}^{⌊n/2⌋} [µ(j; θ1) − µ(j; θ2)] − Σ_{j=⌊n/2⌋+1}^{n} [µ(j; θ1) − µ(j; θ2)] )².

Definition 4.3. Let bal: N ∪ {∞} → N ∪ {∞} be defined at each point n ∈ N as the solution of

min α  s.t.  max_{θ∈Θ} µ(α; θ) ≤ min_{θ∈Θ} µ(n; θ).

We define bal(∞) = ∞.

Assumption 4.1. (Rotting Models) µ(n; θ) is positive, non-increasing in n, and µ(n; θ) ∈ o(1), ∀θ ∈ Θ, where Θ is a discrete known set.

We present an example for which, in Appendix E, we demonstrate how the different following assumptions hold. By this we intend to achieve two things: (i) show that the assumptions are not too harsh, keeping the problem relevant and non-trivial, and (ii) present a simple example of how to verify the assumptions.

Example 4.1. The reward of arm i at its nth pull is distributed as N(µ^c_i + n^{−θ*_i}, σ²), where θ*_i ∈ Θ = {θ1, θ2, ..., θM} and ∀θ ∈ Θ: 0.01 ≤ θ ≤ 0.49.
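Example 4.1 can be simulated directly; the following sketch (illustrative parameter values) draws the n-th reward of an arm from a Gaussian with mean µ^c + n^{−θ} and variance σ²:

```python
import random

def make_arm(mu_c, theta, sigma=0.2, rng=random):
    """Arm from Example 4.1: the n-th pull returns mu_c + n**(-theta) + noise."""
    def pull(n):  # n is the 1-indexed pull count of this arm
        return mu_c + n ** (-theta) + rng.gauss(0.0, sigma)
    return pull

arm = make_arm(mu_c=0.0, theta=0.25)     # an AV arm (constant part zero)
rewards = [arm(n) for n in range((1), 6)]  # noisy; means are n**(-0.25)

noiseless = make_arm(mu_c=0.3, theta=0.25, sigma=0.0)
print(noiseless(1))  # 1.3 (constant part plus 1**(-0.25) = 1)
```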
4.1 Closest To Origin (AV)

The Closest To Origin (CTO) approach for RB is a heuristic that simply states that we hypothesize the true underlying model for an arm to be the one that best fits the past rewards. The fitting criterion is proximity to the origin of the sum of expected rewards shifted by the observed rewards. Let r^i_1, r^i_2, ..., r^i_{N_i(t)} be the sequence of rewards observed from arm i up until time t. Define

Y(i, t; Θ) = { Σ_{j=1}^{N_i(t)} r^i_j − Σ_{j=1}^{N_i(t)} µ(j; θ) }_{θ∈Θ}.   (6)

The CTO approach dictates that at each decision point, we assume that the true underlying rotting model corresponds to the following proximity-to-origin rule (hence the name):

θ̂_i(t) = argmin_{θ∈Θ} {|Y(i, t; θ)|}.   (7)

The CTOSIM version tackles the RB problem by simultaneously detecting the true rotting models and balancing between the expected rewards (following Lemma 2.1). In this approach, at every time step each arm’s rotting model is hypothesized according to the proximity rule (7). Then the algorithm simply follows an argmax rule, where the least number of pulls is used for tie breaking (randomly between an equal number of pulls). A pseudo-algorithm for CTOSIM is given by Algorithm 2.

Assumption 4.2. (Simultaneous Balance and Detection ability)

bal( max_{θ1≠θ2∈Θ²} det*↓_{θ1,θ2}( (1/16) ln^{−1}(ζ) ) ) ∈ o(ζ)

The above assumption ensures that, starting from some horizon T, the underlying models can be distinguished from the others, w.p. 1 − 1/T², by their sums of expected rewards, and the arms can then be balanced, all within the horizon.

Theorem 4.1. Suppose Assumptions 4.1 and 4.2 hold. There exists a finite step T*_SIM such that for all T ≥ T*_SIM, CTOSIM achieves regret upper bounded by o(1) (which is upper bounded by max_{θ∈Θ*} µ(1; θ)).
Furthermore, T*_SIM is upper bounded by the solution of the following:

min T  s.t.  T, b ∈ N ∪ {0}, t ∈ N^K,
  ∀b, ∃t: ‖t‖₁ ≤ T + b,
  t_i ≥ max_{θ∈Θ*} m*( 1/(K(T+b)²); θ ),
  µ(t_i + 1; θ*_i) ≤ min_{θ̃∈Θ} µ( max_{θ∈Θ*} m*( 1/(K(T+b)²); θ ); θ̃ ).   (8)

Proof: See Appendix D.1 of the supplementary material.

Regret upper bounded by o(1) is achieved by proving that w.p. 1 − 1/T the regret vanishes, and in any case it is still bounded by a decaying term. The shown optimization bound stems from ensuring that the arms will be pulled enough times to be correctly detected, and then balanced (following the optimal policy, Lemma 2.1). Another upper bound for T*_SIM can be found in Appendix D.1.

4.2 Differences Closest To Origin (ANV)

We tackle this problem by estimating both the rotting models and the constant terms of the arms. The Differences Closest To Origin (D-CTO) approach is composed of two stages: first, detecting the underlying rotting models; then, estimating and controlling the pulls due to the constant terms. We denote a* = argmax_{i∈[K]} {µ^c_i}, and ∆_i = µ^c_{a*} − µ^c_i.

Assumption 4.3. (D-Detection ability)

max_{θ1≠θ2∈Θ²} Ddet*↓_{θ1,θ2}(ε) ≤ D(ε) < ∞, ∀ε > 0

This assumption ensures that for any given probability, the models can be distinguished by the differences (in pulls) between the first and second halves of the models’ sums of expected rewards.

Models Detection. In order to detect the underlying rotting models, we cancel the influence of the constant terms; once we do this, we can detect the underlying models. Specifically, we define a criterion of proximity to the origin based on differences between the halves of the reward sequences, as follows. Define

Z(i, t; Θ) = { ( Σ_{j=1}^{⌊N_i(t)/2⌋} r^i_j − Σ_{j=⌊N_i(t)/2⌋+1}^{N_i(t)} r^i_j ) − ( Σ_{j=1}^{⌊N_i(t)/2⌋} µ(j; θ) − Σ_{j=⌊N_i(t)/2⌋+1}^{N_i(t)} µ(j; θ) ) }_{θ∈Θ}.   (9)

The D-CTO approach is that at each decision point, we assume that the true underlying model corresponds to the following rule:

θ̂_i(t) = argmin_{θ∈Θ} {|Z(i, t; θ)|}.   (10)

We define the following optimization problem, indicating the number of samples required for ensuring correct detection of the rotting models w.h.p. For some arm i with (unknown) rotting model θ*_i:

min m  s.t.  P(θ̂_i(l) ≠ θ*_i) ≤ p, ∀l ≥ m, while pulling only arm i.   (11)

We denote the solution of the above problem, when proximity rule (10) is used, by m*_diff(p; θ*_i), and define m*_diff(p) = max_{θ∈Θ} {m*_diff(p; θ)}.

Algorithm 2 CTOSIM
  Input: K, Θ
  Initialize: N_i = 0, ∀i ∈ [K]
  for t = 1, 2, .., K do
    Ramp up: i(t) = t, and update N_{i(t)}
  end for
  for t = K + 1, ..., do
    Detect: determine {θ̂_i} by Eq. (7)
    Balance: i(t) ∈ argmax_{i∈[K]} µ(N_i + 1; θ̂_i)
    Update: N_{i(t)} ← N_{i(t)} + 1
  end for

Algorithm 3 D-CTOUCB
  Input: K, Θ, δ
  Initialize: N_i = 0, ∀i ∈ [K]
  for t = 1, 2, .., K × m*_diff(δ/K) do
    Explore: choose i(t) by Round-Robin, update N_{i(t)}
  end for
  Detect: determine {θ̂_i} by Eq. (10)
  for t = K × m*_diff(δ/K) + 1, ..., do
    UCB: choose i(t) according to Eq. (12)
    Update: N_{i(t)} ← N_{i(t)} + 1
  end for

D-CTOUCB. We next describe an approach with one decision point, and later on remark on the possibility of having a decision point at each time step. As explained above, after detecting the rotting models, we move on to tackle the constant-term aspect of the expected rewards. This is done in a UCB1-like approach [Auer et al., 2002a]. Given a sequence of rewards from arm i, {r^i_k}_{k=1}^{N_i(t)}, we modify them using the estimated rotting model θ̂_i, then estimate the arm’s constant term, and finally choose the arm with the highest estimated expected reward plus an upper confidence term.
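A hedged sketch of the two D-CTO ingredients just described, detection via the differences rule (10) and the UCB1-like index, using the model family of Example 4.1. The data, parameter values, and helper names are illustrative, not from the paper's code.

```python
import math

def dcto_detect(rewards, thetas, mu):
    """Differences-closest-to-origin rule: the constant term cancels in the
    difference between the first and second halves of the reward sequence."""
    n = len(rewards)
    h = n // 2
    obs = sum(rewards[:h]) - sum(rewards[h:])
    def z(theta):
        model = (sum(mu(j, theta) for j in range(1, h + 1))
                 - sum(mu(j, theta) for j in range(h + 1, n + 1)))
        return abs(obs - model)
    return min(thetas, key=z)

def ucb_index(rewards, theta_hat, t, mu, sigma2=0.2):
    """UCB step: de-rot the rewards with the detected model, estimate the
    constant term, and add the bonus c_{t,s} = sqrt(8 ln(t) sigma^2 / s)."""
    s = len(rewards)
    mu_c_hat = sum(r - mu(j + 1, theta_hat) for j, r in enumerate(rewards)) / s
    return mu_c_hat + mu(s + 1, theta_hat) + math.sqrt(8 * math.log(t) * sigma2 / s)

mu = lambda n, theta: n ** (-theta)
thetas = [0.1, 0.25, 0.4]
# Noiseless rewards with constant term 0.3 and true rotting model 0.25:
rewards = [0.3 + mu(n, 0.25) for n in range(1, 21)]
print(dcto_detect(rewards, thetas, mu))  # 0.25 (the constant term cancels)
```

Note how the half-difference removes µ^c exactly, which is why detection here works even though the constant term is unknown.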
That is, at time t we pull arm i(t) according to the rule

i(t) ∈ argmax_{i∈[K]} [ µ̂^c_i(t) + µ(N_i(t) + 1; θ̂_i(t)) + c_{t,N_i(t)} ],   (12)

where θ̂_i(t) is the estimated rotting model (obtained in the first stage), and

µ̂^c_i(t) = ( Σ_{j=1}^{N_i(t)} [ r^i_j − µ(j; θ̂_i(t)) ] ) / N_i(t),   c_{t,s} = √( 8 ln(t) σ² / s ).

In the case of a tie in the UCB step, it may be arbitrarily broken. A pseudo-algorithm for D-CTOUCB is given by Algorithm 3, accompanied by the following theorem.

Theorem 4.2. Suppose Assumptions 4.1 and 4.3 hold. For δ ∈ (0, 1), with probability at least 1 − δ, the D-CTOUCB algorithm achieves regret bounded at time T by

Σ_{i∈[K], i≠a*} max{ m*_diff(δ/K), µ*↓(ε_i; θ*_i), 32σ² ln T / (∆_i − ε_i)² } × (∆_i + µ(1; θ*_{a*})) + C(Θ*, {µ^c_i}),   (13)

for any sequence ε_i ∈ (0, ∆_i), ∀i ≠ a*, where 32σ² ln T / (∆_i − ε_i)² is the only time-dependent factor.

Proof: See Appendix D.2 of the supplementary material.

A few notes on the result: instead of calculating m*_diff(δ/K), it is possible to use any upper bound (e.g., as shown in Appendix E, max_{θ1≠θ2∈Θ²} Ddet*↓_{θ1,θ2}( (1/8) ln^{−1}(2K/δ) ) rounded to the next even number). We cannot hope for a better rate than ln T, as stochastic MAB is a special case of the RB problem. Finally, we can convert the D-CTOUCB algorithm to have a decision point at each step: at each time step, determine the rotting models according to proximity rule (10), followed by pulling an arm according to Eq. (12). We term this version D-CTOSIM-UCB.

5 Simulations

We next compare the performance of the SWA and CTO approaches with benchmark algorithms.

Setups. For all the simulations we use Normal distributions with σ² = 0.2, and T = 30,000.

Non-Parametric: K = 2. As for the expected rewards: µ_1(n) = 0.5, ∀n, and µ_2(n) = 1 for its first 7,500 pulls and 0.4 afterwards.
This setup is aimed at showing the importance of not relying on the whole past rewards in the RB setting.

Table 1: Number of ‘wins’ and p-values between the different algorithms

NP:
        UCB1   DUCB   SWUCB  wSWA
UCB1     -     <1e-5  <1e-5  <1e-5
DUCB    100     -     <1e-5  <1e-5
SWUCB   100    100     -     <1e-5
wSWA    100    100    100     -

AV:
        UCB1   DUCB   SWUCB  wSWA   CTO
UCB1     -     0.81   <1e-5  <1e-5  <1e-5
DUCB     55     -     <1e-5  <1e-5  <1e-5
SWUCB    15     22     -     <1e-5  <1e-5
wSWA     98     99    100     -     <1e-5
CTO     100    100    100    100     -

ANV:
        UCB1   DUCB   SWUCB  wSWA   D-CTO
UCB1     -     0.54   0.83   <1e-5  <1e-5
DUCB     40     -     0.91   <1e-5  <1e-5
SWUCB    50     50     -     <1e-5  <1e-5
wSWA     97     98     97     -     <1e-5
D-CTO   100    100    100     66     -

Figure 1: Average regret. Left: non-parametric. Middle: parametric AV. Right: parametric ANV.

Parametric AV & ANV: K = 10. The rotting models are of the form µ(j; θ) = (int(j/100) + 1)^{−θ}, where int(·) is the lower rounded integer, and Θ = {0.1, 0.15, .., 0.4} (i.e., plateaus of length 100, with decay between plateaus according to θ). {θ*_i}_{i=1}^{K} were sampled with replacement from Θ, independently across arms and trajectories. {µ^c_i}_{i=1}^{K} (ANV) were sampled randomly from [0, 0.5]^K.

Algorithms. We implemented standard benchmark algorithms for non-stationary MAB: UCB1 by Auer et al. [2002a], and Discounted UCB (DUCB) and Sliding-Window UCB (SWUCB) by Garivier and Moulines [2008]. We implemented CTOSIM, D-CTOSIM-UCB, and wSWA for the relevant setups. We note that adversarial benchmark algorithms are not relevant in this case, as the rewards are unbounded.

Grid searches were performed to determine the algorithms’ parameters.
For DUCB, following Kocsis and Szepesvári [2006], the discount factor was chosen from γ ∈ {0.9, 0.99, .., 0.999999}; the window size for SWUCB from τ ∈ {1e3, 2e3, .., 20e3}; and α for wSWA from {0.2, 0.4, .., 1}.

Performance. For each of the cases, we present a plot of the average regret over 100 trajectories, specify the number of ‘wins’ of each algorithm over the others, and report the p-value of a paired T-test between the (end-of-trajectory) regrets of each pair of algorithms. For each trajectory and each pair of algorithms, the ‘winner’ is defined as the algorithm with the lesser regret at the end of the horizon.

Results. The parameters chosen by the grid search are as follows: γ = 0.999 for the non-parametric case, and 0.999999 for the parametric cases; τ = 4e3, 8e3, and 16e3 for the non-parametric, AV, and ANV cases, respectively; α = 0.2 was chosen for all cases. The average regret for the different algorithms is given by Figure 1. Table 1 shows the number of ‘wins’ and the p-values. The table is to be read as follows: the entries under the diagonal are the number of times the algorithms from the left column ‘won’ against the algorithms from the top row, and the entries above the diagonal are the p-values between the two. While there is no clear ‘winner’ among the three benchmark algorithms across the different cases, wSWA, which does not require any prior knowledge, consistently and significantly outperformed them. In addition, when prior knowledge was available and CTOSIM or D-CTOSIM-UCB could be deployed, they outperformed all the others, including wSWA.

6 Related Work

We turn to reviewing related work while emphasizing the differences from our problem.

Stochastic MAB. In the stochastic MAB setting [Lai and Robbins, 1985], the underlying reward distributions are stationary over time. The notion of regret is the same as in our work, but the optimal policy in this setting is one that pulls a fixed arm throughout the trajectory.
The two most common approaches to this problem are constructing Upper Confidence Bounds, which stem from the seminal work by Gittins [1979], in which he proved that index policies that compute upper confidence bounds on the expected rewards of the arms are optimal in this case (e.g., see Auer et al. [2002a], Garivier and Cappé [2011], Maillard et al. [2011]), and Bayesian heuristics such as Thompson Sampling, which was first presented by Thompson [1933] in the context of drug treatments (e.g., see Kaufmann et al. [2012], Agrawal and Goyal [2013], Gopalan et al. [2014]).

Adversarial MAB. In the Adversarial MAB setting (also referred to as the Experts Problem; see the book of Cesa-Bianchi and Lugosi [2006] for a review), the sequences of rewards are selected by an adversary (i.e., they can be arbitrary). In this setting the notion of adversarial regret is adopted [Auer et al., 2002b, Hazan and Kale, 2011], where the regret is measured against the best possible fixed action that could have been taken in hindsight. This is as opposed to the policy regret we adopt, where the regret is measured against the best sequence of actions in hindsight.

Hybrid models. Some past work considers settings between the Stochastic and the Adversarial settings. Garivier and Moulines [2008] consider the case where the reward distributions remain constant over epochs and change arbitrarily at unknown time instants, similarly to Yu and Mannor [2009], who consider the same setting, only with the availability of side observations. Chakrabarti et al. [2009] consider the case where arms can expire and be replaced with new arms with arbitrary expected rewards, but as long as an arm does not expire its statistics remain the same.

Non-Stationary MAB. Most related to our problem is the so-called Non-Stationary MAB. It was originally proposed by Jones and Gittins [1972], who considered a case where the reward distribution of a chosen arm can change, and gave rise to a sequence of works (e.g., Whittle et al.
[1981], Tekin and Liu [2012]), which were termed Restless Bandits and Rested Bandits. In the Restless Bandits setting, termed by Whittle [1988], the reward distributions change at each step according to a known stochastic process. Komiyama and Qin [2014] consider the case where each arm decays according to a linear combination of decaying basis functions. This is similar to our parametric case in that the reward distributions decay according to possible models, but differs fundamentally in that it belongs to the Restless Bandits setup (ours to the Rested Bandits). More examples in this line of work are Slivkins and Upfal [2008], who consider evolution of rewards according to Brownian motion, and Besbes et al. [2014], who consider bounded total variation of expected rewards. The latter is related to our setting in that it considers the case where the total variation is bounded by a constant, but differs significantly in that the (unknown) expected reward sequences are not affected by the actions taken, and in addition it requires bounded support as it uses EXP3 as a sub-routine. In the Rested Bandits setting, only the reward distribution of a chosen arm changes, which is the case we consider. An optimal control policy (reward processes are known, no learning required) for bandits with non-increasing rewards and a discount factor was previously presented (e.g., Mandelbaum [1987] and Kaspi and Mandelbaum [1998]). Heidari et al. [2016] consider the case where the reward decays (as we do), but with no statistical noise (deterministic rewards), which significantly simplifies the problem. Another closely related setting is suggested by Bouneffouf and Feraud [2016], in which statistical noise exists, but the expected reward shape is known up to a multiplicative factor. 7 Discussion We introduced a novel variant of the Rested Bandits framework, which we termed Rotting Bandits.
This setting deals with the case where the expected rewards generated by an arm decay (or generally do not increase) as a function of pulls of that arm, which is motivated by many real-world scenarios. We first tackled the non-parametric case, where there is no prior knowledge on the nature of the decay, and introduced an easy-to-follow algorithm accompanied by theoretical guarantees. We then tackled the parametric case and differentiated between two scenarios: expected rewards decay to zero (AV), and decay to different constants (ANV). For both scenarios we introduced suitable algorithms with stronger guarantees than for the non-parametric case: for the AV scenario, an algorithm ensuring, in expectation, regret upper bounded by a term that decays to zero with the horizon; for the ANV scenario, an algorithm ensuring, with high probability, regret upper bounded by a horizon-dependent rate which is optimal for the stationary case. We concluded with simulations that demonstrated our algorithms’ superiority over benchmark algorithms for non-stationary MAB. We note that since the RB setting is novel, there are no suitable available benchmarks, and so this paper also serves as one. For future work we see two main interesting directions: (i) show a lower bound on the regret for the non-parametric case, and (ii) extend the scope of the parametric case to continuous parameterization. Acknowledgment The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Program (FP/2007-2013) / ERC Grant Agreement n. 306638. References D. Agarwal, B.-C. Chen, and P. Elango. Spatio-temporal models for estimating click-through rate. In Proceedings of the 18th International Conference on World Wide Web, pages 21–30. ACM, 2009. S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. In AISTATS, pages 99–107, 2013. R. Arora, O. Dekel, and A. Tewari.
Online bandit learning against an adaptive adversary: from regret to policy regret. arXiv preprint arXiv:1206.6400, 2012. P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002a. P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002b. B. Awerbuch and R. D. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 45–53. ACM, 2004. O. Besbes, Y. Gur, and A. Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In Advances in Neural Information Processing Systems, pages 199–207, 2014. D. Bouneffouf and R. Feraud. Multi-armed bandit problem with known trend. Neurocomputing, 205:16–21, 2016. N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. D. Chakrabarti, R. Kumar, F. Radlinski, and E. Upfal. Mortal multi-armed bandits. In Advances in Neural Information Processing Systems, pages 273–280, 2009. S. Du, M. Ibrahim, M. Shehata, and W. Badawy. Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Transactions on Circuits and Systems for Video Technology, 23(2):311–325, 2013. A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, pages 359–376, 2011. A. Garivier and E. Moulines. On upper-confidence bound policies for non-stationary bandit problems. arXiv preprint arXiv:0805.3415, 2008. J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B (Methodological), pages 148–177, 1979. A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In ICML, volume 14, pages 100–108, 2014. E. Hazan and S. Kale. Better algorithms for benign bandits.
Journal of Machine Learning Research, 12(Apr):1287–1311, 2011. H. Heidari, M. Kearns, and A. Roth. Tight policy regret bounds for improving and decaying bandits. D. M. Jones and J. C. Gittins. A dynamic allocation index for the sequential design of experiments. University of Cambridge, Department of Engineering, 1972. H. Kaspi and A. Mandelbaum. Multi-armed bandits in discrete and continuous time. Annals of Applied Probability, pages 1270–1290, 1998. E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In International Conference on Algorithmic Learning Theory, pages 199–213. Springer, 2012. R. Kleinberg and T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003. L. Kocsis and C. Szepesvári. Discounted UCB. In 2nd PASCAL Challenges Workshop, pages 784–791, 2006. J. Komiyama and T. Qin. Time-decaying bandits for non-stationary systems. In International Conference on Web and Internet Economics, pages 460–466. Springer, 2014. T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985. O.-A. Maillard, R. Munos, G. Stoltz, et al. A finite-time analysis of multi-armed bandits problems with Kullback-Leibler divergences. In COLT, pages 497–514, 2011. A. Mandelbaum. Continuous multi-armed bandits and multiparameter processes. The Annals of Probability, pages 1527–1556, 1987. S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In SDM, pages 216–227. SIAM, 2007. H. Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985. A. Slivkins and E. Upfal. Adapting to a changing environment: the Brownian restless bandits. In COLT, pages 343–354, 2008. C. Tekin and M. Liu.
Online learning of rested and restless bandits. IEEE Transactions on Information Theory, 58(8):5588–5611, 2012. W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933. L. Tran-Thanh, S. Stein, A. Rogers, and N. R. Jennings. Efficient crowdsourcing of unknown experts using multi-armed bandits. In European Conference on Artificial Intelligence, pages 768–773, 2012. P. Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability, pages 287–298, 1988. P. Whittle et al. Arm-acquiring bandits. The Annals of Probability, 9(2):284–292, 1981. J. Y. Yu and S. Mannor. Piecewise-stationary bandit problems with side observations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1177–1184. ACM, 2009.
Scalable Planning with Tensorflow for Hybrid Nonlinear Domains Ga Wu, Buser Say, Scott Sanner Department of Mechanical & Industrial Engineering, University of Toronto, Canada email: {wuga,bsay,ssanner}@mie.utoronto.ca Abstract Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent optimization on GPUs, we ask in this paper whether symbolic gradient optimization tools such as Tensorflow can be effective for planning in hybrid (mixed discrete and continuous) nonlinear domains with high-dimensional state and action spaces? To this end, we demonstrate that hybrid planning with Tensorflow and RMSProp gradient descent is competitive with mixed integer linear program (MILP) based optimization on piecewise linear planning domains (where we can compute optimal solutions) and substantially outperforms state-of-the-art interior point methods for nonlinear planning domains. Furthermore, we remark that Tensorflow is highly scalable, converging to a strong plan on a large-scale concurrent domain with a total of 576,000 continuous action parameters distributed over a horizon of 96 time steps and 100 parallel instances in only 4 minutes. We provide a number of insights that clarify such strong performance, including observations that, despite long horizons, RMSProp avoids both the vanishing and exploding gradient problems. Together these results suggest a new frontier for highly scalable planning in nonlinear hybrid domains by leveraging GPUs and the power of recent advances in gradient descent with highly optimized toolkits like Tensorflow.
1 Introduction Many real-world hybrid (mixed discrete and continuous) planning problems such as Reservoir Control [Yeh, 1985], Heating, Ventilation and Air Conditioning (HVAC) [Erickson et al., 2009; Agarwal et al., 2010], and Navigation [Faulwasser and Findeisen, 2009] have highly nonlinear transition and (possibly nonlinear) reward functions to optimize. Unfortunately, existing state-of-the-art hybrid planners [Ivankovic et al., 2014; Löhr et al., 2012; Coles et al., 2013; Piotrowski et al., 2016] are not compatible with arbitrary nonlinear transition and reward models. While HD-MILP-PLAN [Say et al., 2017] supports arbitrary nonlinear transition and reward models, it also assumes the availability of data to learn the state transitions. Monte Carlo Tree Search (MCTS) methods [Coulom, 2006; Kocsis and Szepesvári, 2006; Keller and Helmert, 2013], including AlphaGo [Silver et al., 2016], that can use any (nonlinear) black box model of transition dynamics do not inherently work with continuous action spaces due to the infinite branching factor. While MCTS with continuous action extensions such as HOOT [Weinstein and Littman, 2012] have been proposed, their continuous partitioning methods do not scale to high-dimensional continuous action spaces (for example, 100’s or 1,000’s of dimensions as used in this paper). Finally, offline model-free reinforcement learning (for example, Q-learning) with function approximation [Sutton and Barto, 1998; Szepesvári, 2010] and deep extensions [Mnih et al., 2013] does not require any knowledge of the (nonlinear) transition model or reward, but it also does not directly apply to domains with high-dimensional continuous action spaces. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. That is, offline
learning methods like Q-learning [Watkins and Dayan, 1992] require action maximization for every update, but in high-dimensional continuous action spaces such nonlinear function maximization is non-convex and computationally intractable at the scale of millions or billions of updates. Figure 1: The evolution of RMSProp gradient descent based Tensorflow planning in a two-dimensional Navigation domain with nested central rectangles indicating nonlinearly increasing resistance to robot movement; panels show epochs 10, 20, 40, 80, 160, and 320. (top) In initial RMSProp epochs, the plan evolves directly towards the goal shown as a star. (bottom) As later epochs of RMSProp descend the objective cost surface, the fastest path evolves to avoid the central obstacle entirely. To address the above scalability and expressivity limitations of existing methods, we turn to Tensorflow [Abadi et al., 2015], which is a symbolic computation platform used in the machine learning community for deep learning due to its compilation of complex layered symbolic functions into a representation amenable to fast GPU-based reverse-mode automatic differentiation [Linnainmaa, 1970] for gradient-based optimization. Given recent results in gradient descent optimization with deep learning that demonstrate the ability to effectively optimize high-dimensional non-convex functions, we ask whether Tensorflow can be effective for planning in discrete time, hybrid (mixed discrete and continuous) nonlinear domains with high dimensional state and action spaces? Our results answer this question affirmatively, where we demonstrate that hybrid planning with Tensorflow and RMSProp gradient descent [Tieleman and Hinton, 2012] is surprisingly effective at planning in complex hybrid nonlinear domains1.
As evidence, we reference figure 1, where we show Tensorflow with RMSProp efficiently finding and optimizing a least-cost path in a two-dimensional nonlinear Navigation domain. In general, Tensorflow with RMSProp planning results are competitive with optimal MILP-based optimization on piecewise linear planning domains. The performance directly extends to nonlinear domains where Tensorflow with RMSProp substantially outperforms interior point methods for nonlinear function optimization. Furthermore, we remark that Tensorflow converges to a strong plan on a large-scale concurrent domain with 576,000 continuous actions distributed over a horizon of 96 time steps and 100 parallel instances in 4 minutes. To explain such excellent results, we note that gradient descent algorithms such as RMSProp are highly effective for non-convex function optimization that occurs in deep learning. Further, we provide an analysis of many transition functions in planning domains that suggest gradient descent on these domains will not suffer from either the vanishing or exploding gradient problems, and hence provide a strong signal for optimization over long horizons. Together these results suggest a new frontier for highly scalable planning in nonlinear hybrid domains by leveraging GPUs and the power of recent advances in gradient descent with Tensorflow and related toolkits. 2 Hybrid Nonlinear Planning via Tensorflow In this section, we present a general framework of hybrid nonlinear planning along with a compilation of the objective in this framework to a symbolic recurrent neural network (RNN) architecture with action parameter inputs directly amenable to optimization with the Tensorflow toolkit. 
2.1 Hybrid Planning A hybrid planning problem is a tuple ⟨S, A, T, R, C⟩, with S denoting the (infinite) set of hybrid states with a state represented as a mixed discrete and continuous vector, A the set of actions bounded by action constraints C, R : S × A → R the reward function, and T : S × A → S the transition function. There is also an initial state s0, and the planning objective is to maximize the cumulative reward over a decision horizon of H time steps. Before proceeding, we outline the necessary notation:
• st: mixed discrete, continuous state vector at time t.
• at: mixed discrete, continuous action vector at time t.
• R(st, at): a non-positive reward function (i.e., negated costs).
• T(st, at): a (nonlinear) transition function.
• $V = \sum_{t=1}^{H} r_t = \sum_{t=0}^{H-1} R(s_t, a_t)$: cumulative reward value to maximize.
1The approach in this paper is implemented in Tensorflow, but it is not specific to Tensorflow. While “scalable hybrid planning with symbolic representations, auto-differentiation, and modern gradient descent methods for non-convex functions implemented on a GPU” would make for a more general description of our contributions, we felt that “Tensorflow” succinctly imparts at least the spirit of all of these points in a single term. Figure 2: A recurrent neural network (RNN) encoding of a hybrid planning problem: a single-step reward and transition function of a discrete-time decision process are embedded in an RNN cell. RNN inputs correspond to the starting state and action; the outputs correspond to reward and next state. Rewards are additively accumulated in V. Since the entire specification of V is a symbolic representation in Tensorflow with action parameters as inputs, the sequential action plan can be directly optimized via gradient descent using the auto-differentiated representation of V.
In general, due to the stochastic nature of gradient descent, we will run a number of planning domain instances i in parallel (to take the best performing plan over all instances), so we additionally define instance-specific states and actions: • sitj: the jth dimension of the state vector of problem instance i at time t. • aitj: the jth dimension of the action vector of problem instance i at time t. 2.2 Planning through Backpropagation Backpropagation [Rumelhart et al.] is a standard method for optimizing parameters of large multilayer neural networks via gradient descent. Using the chain rule of derivatives, backpropagation propagates the derivative of the output error of a neural network back to each of its parameters in a single linear-time pass in the size of the network using what is known as reverse-mode automatic differentiation [Linnainmaa, 1970]. Despite its relative efficiency, backpropagation in large-scale (deep) neural networks is still computationally expensive, and it is only with the advent of recent GPU-based symbolic toolkits like Tensorflow [Abadi et al., 2015] that recent advances in training very large deep neural networks have become possible. In this paper, we reverse the idea of training parameters of the network given fixed inputs to instead optimize the inputs (i.e., actions) subject to fixed parameters (effectively the transition and reward parameterization assumed a priori known in planning). That is, as shown in figure 2, given transition T(st, at) and reward function R(st, at), we want to optimize the input at for all t to maximize the accumulated reward value V. Specifically, we want to optimize all actions a = (a1, . . . , aH−1) w.r.t.
a planning loss L (defined shortly) that we minimize via the following gradient update schema
$a' = a - \eta\,\frac{\partial L}{\partial a}$, (1)
where η is the optimization rate and the partial derivatives comprising the gradient-based optimization in problem instance i are computed as
$\frac{\partial L}{\partial a_{itj}} = \frac{\partial L}{\partial L_i}\frac{\partial L_i}{\partial a_{itj}} = \frac{\partial L}{\partial L_i}\frac{\partial L_i}{\partial s_{i(t+1)}}\frac{\partial s_{i(t+1)}}{\partial a_{itj}} = \frac{\partial L}{\partial L_i}\frac{\partial s_{i(t+1)}}{\partial a_{itj}}\sum_{\tau=t+2}^{T}\left[\frac{\partial L_i}{\partial r_{i\tau}}\frac{\partial r_{i\tau}}{\partial s_{i\tau}}\prod_{\kappa=t+2}^{\tau}\frac{\partial s_{i\kappa}}{\partial s_{i(\kappa-1)}}\right]$. (2)
We must now connect our planning objective to a standard Tensorflow loss function. First, however, let us assume that we have N structurally identical instances i of our planning domain given in Figure 2, each with objective value Vi; then let us define V = (. . . , Vi, . . .). In Tensorflow, we choose Mean Squared Error (MSE), which given two continuous vectors Y and Y∗ is defined as MSE(Y, Y∗) = (1/N)∥Y∗ − Y∥². We specifically choose to minimize L = MSE(0, V), with inputs of constant vector 0 and value vector V, in order to maximize our value for each instance i; we remark that here we want to independently maximize each non-positive Vi, which is achieved by minimizing each positive Vi² under MSE. We will further explain the use of MSE in a moment, but first we digress to explain why we need to solve multiple problem instances i. Since neither the transition nor the reward function is assumed to be convex, optimization on a domain with such dynamics could result in a local minimum. To mitigate this problem, we use randomly initialized actions in a batch optimization: we optimize multiple mutually independent planning problem instances i simultaneously, since the GPU can exploit their parallel computation, and then select the best-performing action sequence among the independently and simultaneously solved problem instances. MSE then has the dual effects of optimizing each problem instance i independently and providing fast convergence (faster than optimizing V directly).
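As an illustration of this schema, the following is a minimal NumPy sketch (ours, not the paper's Tensorflow implementation) that optimizes the action inputs of N parallel instances with hand-derived reverse-mode gradients. It assumes a stand-in linear transition s_{t+1} = s_t + a_t and a squared-distance reward to a hypothetical goal g; for simplicity it ascends each non-positive V_i directly rather than minimizing MSE(0, V), which has the same effect here, and finally selects the best-performing instance.

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, D = 8, 10, 2            # parallel instances, horizon, dimensions (made up)
g = np.array([2.0, 2.0])      # hypothetical goal state
eta = 0.01                    # optimization rate

# Randomly initialized action sequences, one per problem instance.
a = rng.normal(0.0, 0.1, size=(N, H, D))

def rollout(a):
    """Forward pass through the dynamics, returning states and values V_i."""
    s = np.zeros((a.shape[0], D))
    states, V = [], np.zeros(a.shape[0])
    for t in range(H):
        s = s + a[:, t]                      # stand-in linear transition T
        V += -np.sum((s - g) ** 2, axis=1)   # non-positive reward R
        states.append(s.copy())
    return np.stack(states, axis=1), V

for epoch in range(3000):
    states, V = rollout(a)
    # Reverse-mode pass by hand: dV/ds_tau = -2 (s_tau - g), and a_t influences
    # every later state, so dV/da_t is the suffix sum of the state gradients.
    grad_s = -2.0 * (states - g)
    grad_a = np.cumsum(grad_s[:, ::-1], axis=1)[:, ::-1]
    a = a + eta * grad_a                     # gradient ascent on each V_i

states, V = rollout(a)
best = int(np.argmax(V))                     # keep the best-performing plan
```

With these settings the best instance's value approaches the optimum of 0 (jump to the goal at the first step and stay there), mirroring the batch-and-select strategy described above.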
We remark that simply defining the objective V and the definition of all state variables in terms of predecessor state and action variables via the transition dynamics (back to the known initial state constants) is enough for Tensorflow to build the symbolic directed acyclic graph (DAG) representing the objective and take its gradient with respect to all free action parameters, as shown in (2), using reverse-mode automatic differentiation. 2.3 Planning over Long Horizons The Tensorflow compilation of a nonlinear planning problem reflects the same structure as a recurrent neural network (RNN) commonly used in deep learning. The connection here is not superficial, since a longstanding difficulty with training RNNs lies in the vanishing gradient problem: multiplying long sequences of gradients in the chain rule usually renders them extremely small and makes them irrelevant for weight updates, especially when using nonlinear transfer functions such as a sigmoid. However, in hybrid planning problems, continuous state updates often take the form $s_{i(t+1)j} = s_{itj} + \Delta$, for some Δ that is a function of the state and action at time t. Critically, we note that the transfer function here is linear in $s_{itj}$, which is the largest determiner of $s_{i(t+1)j}$, hence avoiding vanishing gradients. In addition, a gradient can explode with the chain rule through backpropagation if the elements of the Jacobian matrix of state transitions are too large. In this case, if the planning horizon is large enough, a simple Stochastic Gradient Descent (SGD) optimizer may suffer from overshooting the optimum and never converge (as our experiments appear to demonstrate for SGD).
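A toy numerical check of this claim (ours, not from the paper): for an additive update s_{t+1} = s_t + Δ(s_t) the per-step derivative stays near 1, so the product of derivatives over the horizon neither vanishes nor explodes, whereas a saturating sigmoid update drives the same product toward zero.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

H = 50                     # horizon (hypothetical)
s_add = s_sig = 0.3        # initial scalar states
jac_add = jac_sig = 1.0    # running products of ds_{t+1}/ds_t

for _ in range(H):
    # Additive planning-style update s' = s + delta(s), delta(s) = 0.01*sin(s):
    # the derivative 1 + 0.01*cos(s) stays close to 1.
    jac_add *= 1.0 + 0.01 * np.cos(s_add)
    s_add = s_add + 0.01 * np.sin(s_add)
    # Saturating RNN-style update s' = sigmoid(s): the derivative
    # sigmoid(s)*(1 - sigmoid(s)) is at most 1/4, so the product vanishes.
    jac_sig *= sigmoid(s_sig) * (1.0 - sigmoid(s_sig))
    s_sig = sigmoid(s_sig)
```

After 50 steps `jac_add` remains of order 1, while `jac_sig` is astronomically small, which is exactly the vanishing-gradient behavior the additive planning dynamics avoid.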
The RMSProp optimization algorithm has a significant advantage for backpropagation-based planning because of its ability to perform gradient normalization that avoids exploding gradients, and it additionally deals with the piecewise gradients [Balduzzi et al., 2016] that arise naturally as conditional transitions in many nonlinear domains (e.g., the Navigation domain of Figure 1 has different piecewise transition dynamics depending on the starting region). Specifically, instead of naively updating action $a_{itj}$ through equation 1, RMSProp maintains a decaying root mean squared gradient value G for each variable, which averages over squared gradients of previous epochs:
$G'_{a_{itj}} = 0.9\,G_{a_{itj}} + 0.1\left(\frac{\partial L}{\partial a_{itj}}\right)^2$, (3)
and updates each action variable through
$a'_{itj} = a_{itj} - \frac{\eta}{\sqrt{G_{a_{itj}} + \epsilon}}\,\frac{\partial L}{\partial a_{itj}}$. (4)
Here, the gradient is kept relatively small and consistent over iterations. Although the Adagrad [Duchi et al., 2011] and Adadelta [Zeiler, 2012] optimization algorithms have similar mechanisms, their learning rates can quickly reduce to an extremely small value when encountering large gradients. In support of these observations, we note the superior performance of RMSProp in Section 3. 2.4 Handling Constrained and Discrete Actions In most hybrid planning problems, there exist natural range constraints for actions. To handle these constraints, we use projected stochastic gradient descent (PSGD), a well-known descent method that handles constrained optimization problems by projecting the parameters (actions) into their feasible range after each gradient update. To this end, we clip all actions to their feasible range after each epoch of gradient descent. For planning problems with discrete actions, we use a one-hot encoding for optimization purposes and then use a {0, 1} projection for the maximal action to feed into the forward propagation.
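A minimal sketch of the RMSProp update (3)–(4) combined with the projection (clipping) step of Section 2.4; the action bounds and the one-dimensional quadratic objective are hypothetical, used only to keep the example self-contained.

```python
import numpy as np

def rmsprop_projected_step(a, grad, G, eta=0.01, eps=1e-8, lo=-1.0, hi=1.0):
    """One RMSProp update (eqs. 3-4) followed by projection onto [lo, hi]."""
    G = 0.9 * G + 0.1 * grad ** 2            # decaying mean of squared gradients
    a = a - eta / np.sqrt(G + eps) * grad    # normalized gradient step
    return np.clip(a, lo, hi), G             # clip actions to their feasible range

# Hypothetical usage: minimize (a - 2)^2 subject to a in [-1, 1];
# the projected optimum sits at the boundary a = 1.
a, G = np.array([0.0]), np.zeros(1)
for _ in range(500):
    grad = 2.0 * (a - 2.0)
    a, G = rmsprop_projected_step(a, grad, G)
```

The iterate converges to the clipped boundary a = 1, illustrating how the projection keeps actions feasible regardless of where the unconstrained optimum lies.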
In this paper, we focus on constrained continuous actions, which are representative of many hybrid nonlinear planning problems in the literature. 3 Experiments In this section, we introduce our three benchmark domains and then validate Tensorflow planning performance in the following steps. (1) We evaluate the optimality of Tensorflow backpropagation planning on linear and bilinear domains through comparison with the optimal solution given by Mixed Integer Linear Programming (MILP). (2) We evaluate the performance of Tensorflow backpropagation planning on nonlinear domains (that MILPs cannot handle) through comparison with the Matlab-based interior point nonlinear solver FMINCON. (3) We investigate the impact of several popular gradient descent optimizers on planning performance. (4) We evaluate optimization of the learning rate. (5) We investigate how other state-of-the-art hybrid planners perform. 3.1 Domain Descriptions Navigation: The Navigation domain is designed to test the optimization ability of Tensorflow in a relatively small environment that supports transitions of differing complexity. Navigation has a two-dimensional state of the agent location s and a two-dimensional action a. Both the state and action spaces are continuous and constrained by maximum and minimum boundaries. The objective of the domain is for an agent to move to the goal state as soon as possible (cf. figure 1). Therefore, we compute the reward based on the Manhattan distance from the agent to the goal state at each time step as R(st, at) = −∥st − g∥₁, where g is the goal state.
We designed three different transitions; from left to right, nonlinear, bilinear, and linear:
$d_t = \|s_t - z\|_2,\quad \lambda = \frac{2}{1 + \exp(-2 d_t)} - 0.99,\quad p = s_t + \lambda a_t,\quad T(s_t, a_t) = \min(u, \max(l, p))$, (5)
$d_t = \sum_{j=1}^{2} |s_{tj} - z_j|,\quad \lambda = \begin{cases} d_t/4 & d_t < 4 \\ 1 & d_t \ge 4 \end{cases},\quad p = s_t + \lambda a_t,\quad T(s_t, a_t) = \min(u, \max(l, p))$, (6)
$d_t = \|s_t - z\|_1,\quad \lambda = \begin{cases} 0.05 & d_t < 0.8 \\ 0.2 & 0.8 \le d_t < 1.6 \\ 0.4 & 1.6 \le d_t < 2.4 \\ 0.6 & 2.4 \le d_t < 3.6 \\ 0.8 & 3.6 \le d_t < 4 \\ 1 & d_t \ge 4 \end{cases},\quad p = s_t + \lambda a_t,\quad T(s_t, a_t) = \min(u, \max(l, p))$. (7)
The nonlinear transition has a velocity reduction zone based on the Euclidean distance to the center z. Here, $d_t$ is the distance from the deceleration zone z, p is the proposed next state, λ is the velocity reduction factor, and u, l are the upper and lower boundaries of the domain, respectively. The bilinear domain is designed to compare with the MILP where domain discretization is possible. In this setting, we evaluate the efficacy of approximately discretizing bilinear planning problems into MILPs. Equation 6 shows the bilinear transition function. The linear domain is the discretized version of the bilinear domain used for MILP optimization. We also test Tensorflow on this domain to assess the optimality of the Tensorflow solution. Equation 7 shows the linear transition function. Reservoir Control: Reservoir Control [Yeh, 1985] is a system to control multiple connected reservoirs. Each of the reservoirs in the system has a single state $s_j \in \mathbb{R}$ that denotes the water level of reservoir j and a corresponding action to permit a flow $a_j \in [0, s_j]$ from the reservoir to the next downstream reservoir. The objective of the domain is to maintain the target water level of each reservoir in a safe range and as close to half of its capacity as possible. Therefore, we compute the reward through
$c_j = \begin{cases} 0 & L_j \le s_j \le U_j \\ -5 & s_j < L_j \\ -100 & s_j > U_j \end{cases},\qquad R(s_t, a_t) = -\left\| c - 0.1 \left|\tfrac{u - l}{2} - s_t\right| \right\|_1$,
where $c_j$ is the cost value of Reservoir j that penalizes water levels outside a safe range. In this domain, we introduce two settings: namely, Nonlinear and Linear.
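Read concretely, the Reservoir reward above can be sketched as follows (our reconstruction, not the authors' code; the interpretation of (u − l)/2 as half capacity, and the tank levels and bounds in the usage line, are assumptions):

```python
import numpy as np

def reservoir_reward(s, L, U):
    """Per-step Reservoir reward: c_j is 0 in the safe range [L_j, U_j],
    -5 below it, -100 above it; a second term penalizes deviation of the
    level from half capacity, (U - L)/2 (assumed reading of (u - l)/2)."""
    c = np.where(s < L, -5.0, np.where(s > U, -100.0, 0.0))
    dev = np.abs((U - L) / 2.0 - s)
    return -np.sum(np.abs(c - 0.1 * dev))

# Hypothetical usage: three tanks with safe range [0, 10], levels 5, -1, 11.
r = reservoir_reward(np.array([5.0, -1.0, 11.0]), np.zeros(3), np.full(3, 10.0))
```

Here the first tank sits exactly at half capacity and contributes nothing, while the underflowing and overflowing tanks contribute the −5 and −100 penalties plus their deviation terms.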
For the nonlinear domain, nonlinearity due to the water loss $e_j$ for each reservoir j includes water usage and evaporation. The transition function is
$e_t = 0.5\, s_t \odot \sin(s_t/m),\qquad T(s_t, a_t) = s_t + r_t - e_t - a_t + a_t \Sigma$, (8)
where ⊙ represents an elementwise product, r is a rain quantity parameter, m is the maximum capacity of the largest tank, and Σ is a lower triangular adjacency matrix that indicates connections to upstream reservoirs. For the linear domain, we only replace the nonlinear water-loss function by a linear one:
$e_t = 0.1\, s_t,\qquad T(s_t, a_t) = s_t + r_t - e_t - a_t + a_t \Sigma$. (9)
Unlike Navigation, we do not limit the state dimension of the whole system to two dimensions. In the experiments, we use a network of 20 reservoirs. HVAC: Heating, Ventilation, and Air Conditioning [Erickson et al., 2009; Agarwal et al., 2010] is a centralized control problem with concurrent control of multiple rooms and multiple connected buildings. For each room j there is a state variable $s_j$ denoting the temperature and an action $a_j$ for sending a specified volume of heated air to room j via vent actuation. The objective of the domain is to maintain the temperature of each room in a comfortable range while consuming as little energy as possible in doing so. Therefore, we compute the reward through
$d_t = \left|\tfrac{u - l}{2} - s_t\right|,\qquad e_t = a_t \cdot C,\qquad R(s_t, a_t) = -\|e_t + d_t\|_1$,
where C is the unit electricity cost. Since thermal models for HVAC are inherently nonlinear, we only present one version with a nonlinear transition function:
$\theta_t = a_t \odot (F^{\mathrm{vent}} - s_t),\quad \varphi_t = \Big(s_t Q - s_t \odot \sum_{j=1}^{J} q_j\Big)/w_q,\quad \vartheta_t = (F^{\mathrm{out}}_t - s_t) \odot o/w_o,\quad \psi_t = (F^{\mathrm{hall}}_t - s_t) \odot h/w_h,\quad T(s_t, a_t) = s_t + \alpha\,(\theta_t + \varphi_t + \vartheta_t + \psi_t)$, (10)
where $F^{\mathrm{vent}}$, $F^{\mathrm{out}}_t$, and $F^{\mathrm{hall}}_t$ are the temperatures of the room vent, outside, and hallway, respectively; Q, o, and h are respectively the adjacency matrix of rooms, the adjacency vector of outside areas, and the adjacency vector of hallways.
$w_q$, $w_o$, and $w_h$ are the thermal resistances with adjacent rooms, the outside, and the hallway, respectively. In the experiments, we work with a building layout with five floors and 12 rooms on each floor, for a total of 60 rooms. For scalability testing, we apply batched backpropagation on 100 instances of this domain simultaneously, which requires planning 576,000 actions concurrently. 3.2 Planning Performance In this section, we investigate the performance of Tensorflow optimization through comparison with the MILP on linear domains and with Matlab’s fmincon nonlinear interior point solver on nonlinear domains. We ran our experiments on an Ubuntu Linux system with one E5-1620 v4 CPU, 16GB RAM, and one GTX1080 GPU. The Tensorflow version is beta 0.12.1, the Matlab version is R2016b, and the MILP solver is IBM ILOG CPLEX 12.6.3. 3.2.1 Performance in Linear Domains [Figure 3 panels: (a) Navigation Linear, (b) Navigation Bilinear, (c) Reservoir Linear; x-axis: Horizon ∈ {30, 60, 120}, y-axis: Total Reward.] Figure 3: The total reward comparison (values are negative, lower bars are better) among Tensorflow (Red), MILP optimization guided planning (Green) and domain-specific heuristic policy (Blue). Error bars show standard deviation across the parallel Tensorflow instances; most are too small to be visible. The heuristic policy is a manually designed baseline solution. In the linear domains (a) and (c), the MILP is optimal and Tensorflow is near-optimal for five out of six domains. In Figure 3, we show that Tensorflow backpropagation results in lower cost plans than domain-specific heuristic policies, and the overall cost is close to the MILP-optimal solution in five of six linear domains.
While Tensorflow backpropagation planning generally shows strong performance, when we compare Tensorflow to the MILP on the bilinear and linear Navigation domains (recall that the linear domain was discretized from the bilinear case), we notice that Tensorflow does much better relative to the MILP on the bilinear domain than on the discretized linear domain. The reason is simple: gradient optimization of the smooth bilinear functions is much easier for Tensorflow than of the piecewise linear discretized version, whose large piecewise steps make it hard for RMSProp to obtain a consistent, smooth gradient signal. We additionally note that the standard deviation on the linear Navigation domain is much larger than on the others. This is because the piecewise constant transition function computing the speed-reduction factor λ yields a flat loss surface with no curvature to aid gradient descent, leading to high variation depending on the random initial starting point of each instance.

3.2.2 Performance in Nonlinear Domains

In Figure 4, we show that Tensorflow backpropagation planning always achieves the best performance compared to the heuristic solution and the Matlab nonlinear optimizer fmincon. For relatively simple domains like Navigation, the fmincon nonlinear solver provides a very competitive solution, while for the complex HVAC domain with its large concurrent action space, fmincon completely fails to solve the problem in the given time period. In Figure 5(a), Tensorflow backpropagation planning optimizes 16 times faster: within the first 15s it is already close to the result that fmincon reaches only at 4mins.
[Figure 4 shows bar charts of total reward versus horizon for panels (a) Navigation Nonlinear (horizons 30/60/120), (b) Reservoir Nonlinear (30/60/120), and (c) HVAC Nonlinear (12/24/48/96).] Figure 4: The total reward comparison (values are negative, lower bars are better) among Tensorflow backpropagation planning (Red), Matlab nonlinear solver fmincon guided planning (Purple) and the domain-specific heuristic policy (Blue). We gathered the results after 16 minutes of optimization time to allow all algorithms to converge to their best solution.

[Figure 5 shows total reward versus optimization time (15s to 16m, log scale) for panels (a) Reservoir, Horizon 60 and (b) Reservoir, Horizon 120.] Figure 5: Optimization comparison between Tensorflow RMSProp gradient planning (Green) and Matlab nonlinear solver fmincon interior point optimization planning (Orange) on the Nonlinear Reservoir domains with horizon (a) 60 and (b) 120. As a function of the logarithmic time x-axis, Tensorflow is substantially faster and more optimal than fmincon.

In Figure 5(b), the optimization speed of Tensorflow shows it to be hundreds of times faster than the fmincon nonlinear solver at achieving the same value (if fmincon ever reaches it). These remarkable results demonstrate the power of the fast parallel GPU computation of the Tensorflow framework.

3.2.3 Scalability

In Table 1, we show the scalability of Tensorflow backpropagation planning via the running times required to converge on the different domains. The results demonstrate the extreme efficiency with which Tensorflow can converge on exceptionally large nonlinear hybrid planning domains.

Domain | Dim | Horizon | Batch | Actions | Time
Nav.   |   2 |     120 |   100 |  24,000 | < 1min
Res.   |  20 |     120 |   100 | 240,000 | 4mins
HVAC   |  60 |      96 |   100 | 576,000 | 4mins

Table 1: Timing evaluation of the largest instances of the three domains we tested. All of these tests were performed on the nonlinear versions of the respectively named domains.

3.2.4 Optimization Methods

In this experiment, we investigate the effect of different backpropagation optimizers. In Figure 6(a), we show that the RMSProp optimizer provides exceptionally fast convergence among the five standard optimizers of Tensorflow. This observation reflects the earlier analysis and discussion concerning Equation (4): RMSProp manages to avoid exploding gradients. As mentioned, although Adagrad and Adadelta have similar mechanisms, their normalization methods may cause vanishing gradients after several epochs, which corresponds to our observation of nearly flat curves for these methods. This is a strong indicator that exploding gradients are a significant concern for hybrid planning with gradient descent, and that RMSProp performs well despite this well-known problem for gradients over long horizons.

[Figure 6 shows total reward versus epoch (0 to 4000) in the HVAC domain for panels (a) five optimizers and (b) four RMSProp learning rates.] Figure 6: (a) Comparison of Tensorflow gradient methods in the HVAC domain. All of these optimizers use the same learning rate of 0.001. (b) Learning rate comparison of Tensorflow with the RMSProp optimizer on the HVAC domain. The rate 0.1 (Orange) gave the fastest initial convergence speed but was not able to reach the best score that rate 0.001 (Blue) found.
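The RMSProp rule credited above with avoiding exploding gradients is easy to state. A minimal NumPy sketch of the update, following Tieleman and Hinton [2012] (the decay constant 0.9 is the conventional default, not a value reported in this paper):

```python
import numpy as np

def rmsprop_step(a, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update on the action tensor `a`.

    `cache` is a running average of squared gradients. Dividing by its
    square root rescales every coordinate's step toward the learning
    rate, which damps the exploding gradients that plain SGD suffers
    from when backpropagating rewards over long planning horizons.
    """
    cache = decay * cache + (1.0 - decay) * grad ** 2
    a = a - lr * grad / (np.sqrt(cache) + eps)
    return a, cache
```

In the planner, `grad` would be the gradient of the total reward with respect to the entire action sequence, obtained by backpropagation through the unrolled transition function; note that the resulting step size is nearly independent of the raw gradient magnitude.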
3.2.5 Optimization Rate

In Figure 6(b), we show that the best learning rate for the HVAC domain is 0.01, since this rate converges to near-optimal extremely quickly. The overall trend is that smaller learning rates have a better chance of reaching a better final solution, but can be extremely slow, as shown for the rate 0.001. Hence, while larger learning rates may overshoot the optima, rates that are too small may simply converge too slowly for practical use. This suggests a critical need to tune the learning rate per planning domain.

3.3 Comparison to State-of-the-art Hybrid Planners

Finally, we discuss and test the scalability of state-of-the-art hybrid planners on our hybrid domains. We note that neither DiNo [Piotrowski et al., 2016], dReal [Bryce et al., 2015] nor SMTPlan [Cashmore et al., 2016] supports general metric optimization. We ran ENHSP [Scala et al., 2016] on a much smaller version of the HVAC domain with only 2 rooms over multiple horizon settings. We found that ENHSP returned a feasible solution to the instance with horizon 2 in 31 seconds, whereas all instances with larger horizons timed out at the one-hour limit.

4 Conclusion

We investigated the practical feasibility of using the Tensorflow toolbox for fast, large-scale planning in hybrid nonlinear domains. We worked with a direct symbolic (nonlinear) planning domain compilation to Tensorflow, through which we optimized planning actions directly by gradient-based backpropagation. We then investigated planning over long horizons, suggested that RMSProp avoids both the vanishing and exploding gradient problems, and showed experiments corroborating this finding.
Our key empirical results demonstrated that Tensorflow with RMSProp is competitive with MILPs on linear domains (where the optimal solution is known, indicating near-optimality of Tensorflow and RMSProp on these non-convex functions) and strongly outperforms Matlab's state-of-the-art interior point optimizer on nonlinear domains, optimizing up to 576,000 actions in under 4 minutes. These results suggest a new frontier for highly scalable planning in nonlinear hybrid domains that leverages GPUs and the power of recent advances in gradient descent, such as RMSProp, within highly optimized toolkits like Tensorflow. For future work, we plan to further investigate Tensorflow-based planning improvements for domains with discrete action and state variables, as well as for difficult domains with only terminal rewards, which provide little gradient signal to guide the optimizer.

References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

Yuvraj Agarwal, Bharathan Balaji, Rajesh Gupta, Jacob Lyles, Michael Wei, and Thomas Weng. Occupancy-driven energy management for smart building automation. In Proceedings of the 2nd ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Building, pages 1–6. ACM, 2010.

David Balduzzi, Brian McWilliams, and Tony Butler-Yeoman.
Neural Taylor approximations: Convergence and exploration in rectifier networks. arXiv preprint arXiv:1611.02345, 2016.

Daniel Bryce, Sicun Gao, David Musliner, and Robert Goldman. SMT-based nonlinear PDDL+ planning. In 29th AAAI, pages 3247–3253, 2015.

Michael Cashmore, Maria Fox, Derek Long, and Daniele Magazzeni. A compilation of the full PDDL+ language into SMT. In ICAPS, pages 79–87, 2016.

Amanda Jane Coles, Andrew Coles, Maria Fox, and Derek Long. A hybrid LP-RPG heuristic for modelling numeric resource flows in planning. J. Artif. Intell. Res. (JAIR), 46:343–412, 2013.

Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pages 72–83. Springer Berlin Heidelberg, 2006.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

Varick L. Erickson, Yiqing Lin, Ankur Kamthe, Rohini Brahme, Alberto E. Cerpa, Michael D. Sohn, and Satish Narayanan. Energy efficient building environment control strategies using real-time occupancy measurements. In Proceedings of the 1st ACM Workshop On Embedded Sensing Systems For Energy-Efficient Buildings (BuildSys 2009), pages 19–24, Berkeley, CA, USA, November 2009. ACM.

Timm Faulwasser and Rolf Findeisen. Nonlinear Model Predictive Path-Following Control. In Nonlinear Model Predictive Control - Towards New Challenging Applications, Lecture Notes in Control and Information Sciences, pages 335–343. Springer, Berlin, Heidelberg, 2009.

Franc Ivankovic, Patrik Haslum, Sylvie Thiebaux, Vikas Shivashankar, and Dana Nau. Optimal planning with global numerical state constraints. In International Conference on Automated Planning and Scheduling (ICAPS), pages 145–153, Portsmouth, New Hampshire, USA, June 2014.

Thomas Keller and Malte Helmert. Trial-based heuristic tree search for finite horizon MDPs.
In Proceedings of the 23rd International Conference on Automated Planning and Scheduling, ICAPS 2013, Rome, Italy, June 10-14, 2013, 2013.

Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the 17th European Conference on Machine Learning (ECML-06), pages 282–293, 2006.

Seppo Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, pages 6–7, 1970.

Johannes Löhr, Patrick Eyerich, Thomas Keller, and Bernhard Nebel. A planning based framework for controlling hybrid systems. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling, ICAPS 2012, Atibaia, São Paulo, Brazil, 2012.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

Wiktor Mateusz Piotrowski, Maria Fox, Derek Long, Daniele Magazzeni, and Fabio Mercorio. Heuristic planning for hybrid systems. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 4254–4255, 2016.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1.

Buser Say, Ga Wu, Yu Qing Zhou, and Scott Sanner. Nonlinear hybrid planning with deep net learned transition models and mixed-integer linear programming. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 750–756, 2017.

Enrico Scala, Patrik Haslum, Sylvie Thiébaux, and Miquel Ramírez. Interval-based relaxation for general numeric planning. In ECAI, pages 655–663, 2016.

David Silver, Aja Huang, Christopher J.
Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016.

Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.

Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan & Claypool, 2010.

Tijmen Tieleman and Geoffrey E Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26–31, 2012.

Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279–292, May 1992.

Ari Weinstein and Michael L. Littman. Bandit-based planning and learning in continuous-action Markov decision processes. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling, ICAPS 2012, Atibaia, São Paulo, Brazil, 2012.

William G Yeh. Reservoir management and operations models: A state-of-the-art review. Water Resources Research, 21(12):1797–1818, 1985.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Probabilistic Models for Integration Error in the Assessment of Functional Cardiac Models

Chris J. Oates^{1,5}, Steven Niederer^{2}, Angela Lee^{2}, François-Xavier Briol^{3}, Mark Girolami^{4,5}
(1) Newcastle University, (2) King's College London, (3) University of Warwick, (4) Imperial College London, (5) Alan Turing Institute

Abstract

This paper studies the numerical computation of integrals, representing estimates or predictions, over the output f(x) of a computational model with respect to a distribution p(dx) over uncertain inputs x to the model. For the functional cardiac models that motivate this work, neither f nor p possesses a closed-form expression, and evaluation of either requires ≈100 CPU hours, precluding standard numerical integration methods. Our proposal is to treat integration as an estimation problem, with a joint model for both the a priori unknown function f and the a priori unknown distribution p. The result is a posterior distribution over the integral that explicitly accounts for dual sources of numerical approximation error due to a severely limited computational budget. This construction is applied to account, in a statistically principled manner, for the impact of numerical errors that (at present) are confounding factors in functional cardiac model assessment.

1 Motivation: Predictive Assessment of Computer Models

This paper considers the problem of assessment for computer models [7], motivated by an urgent need to assess the performance of sophisticated functional cardiac models [25]. In concrete terms, the problem that we consider can be expressed as the numerical approximation of integrals

p(f) = ∫ f(x) p(dx),    (1)

where f(x) denotes a functional of the output from a computer model and x denotes unknown inputs (or 'parameters') of the model. The term p(x) denotes a posterior distribution over model inputs.
Although not our focus in this paper, we note that p(x) is defined based on a prior π_0(x) over these inputs and training data y assumed to follow the computer model π(y|x) itself. The integral p(f), in our context, represents a posterior prediction of actual cardiac behaviour. The computational model can be assessed through comparison of these predictions to test data generated from an experiment. The challenging nature of cardiac models, and indeed computer models in general, is such that a closed form for both f(x) and p(dx) is precluded [23]. Instead, it is typical to be provided with a finite collection of samples {x_i}_{i=1}^n obtained from p(dx) through Monte Carlo (or related) methods [32]. The integrand f(x) is then evaluated at these n input configurations, to obtain {f(x_i)}_{i=1}^n. Limited computational budgets necessitate that the number n is small and, in such situations, the error of an estimator for the integral p(f) based on the data {(x_i, f(x_i))}_{i=1}^n is subject to strict information-theoretic lower bounds [26]. The practical consequence is that an unknown (non-negligible) numerical error is introduced in the numerical approximation of p(f), unrelated to the performance of the model. If this numerical error is ignored, it will constitute a confounding factor in the assessment of predictive performance for the computer model. It is therefore unclear how a fair model assessment can proceed. This motivates an attempt to understand the extent of numerical error in any estimate of p(f). This is non-trivial; for example, the error distribution of the arithmetic mean (1/n) Σ_{i=1}^n f(x_i) depends on the unknown f and p, and attempts to estimate this distribution solely from data, e.g. via a bootstrap or a central limit approximation, cannot succeed in general when the number of samples n is small [27].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
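The failure of purely data-driven error estimates at small n is easy to reproduce. A minimal sketch (the specific values 0.500 and 0.501 are illustrative, not from the paper): with n = 2 evaluations that happen to nearly coincide, the bootstrap reports almost no uncertainty about the arithmetic-mean estimate of p(f), however large the true integration error may be.

```python
import numpy as np

def bootstrap_se(fx, n_boot=2000, seed=0):
    """Bootstrap standard error of the arithmetic-mean estimate of p(f),
    computed only from the evaluations fx = [f(x_1), ..., f(x_n)]."""
    rng = np.random.default_rng(seed)
    n = len(fx)
    means = [np.mean(rng.choice(fx, size=n, replace=True))
             for _ in range(n_boot)]
    return float(np.std(means))

# Two nearly identical evaluations: the reported uncertainty collapses,
# regardless of how far the two samples are from covering p(dx).
se = bootstrap_se(np.array([0.500, 0.501]))
```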
Our first contribution, in this paper, is to argue that approximation of p(f) from samples {x_i}_{i=1}^n and function evaluations {f(x_i)}_{i=1}^n can be cast as an estimation task. Our second contribution is to derive a posterior distribution over the unknown value p(f) of the integral. This distribution provides an interpretable quantification of the extent of numerical integration error that can be reasoned with and propagated through subsequent model assessment. Our third contribution is to establish theoretical properties of the proposed method. The method we present falls within the framework of Probabilistic Numerics, and our work can be seen as a contribution to this emerging area [16, 5]. In particular, the method proposed is reminiscent of Bayesian Quadrature (BQ) [9, 28, 29, 15]. In BQ, a Gaussian prior measure is placed on the unknown function f and is updated to a posterior when conditioned on the information {(x_i, f(x_i))}_{i=1}^n. This induces both a prior and a posterior over the value of p(f) as push-forward measures under the projection operator f ↦ p(f). Since its introduction, several authors have related BQ to other methods, such as the 'herding' approach from machine learning [17, 3], random feature approximations used in kernel methods [1], classical quadrature rules [33], and Quasi Monte Carlo (QMC) methods [4]. Most recently, [21] extended theoretical results for BQ to misspecified prior models, and [22] provided efficient matrix algebraic methods for the implementation of BQ. However, as an important point of distinction, notice that BQ pre-supposes that p(dx) is known in closed form; it does not apply in situations where p(dx) is instead sampled. In this latter case, p(dx) will be called an intractable distribution and, for model assessment, this scenario is typical. To extend BQ to intractable distributions, this paper proposes to use a Dirichlet process mixture prior to estimate the unknown distribution p(dx) from Monte Carlo samples {x_i}_{i=1}^n [12].
It will be demonstrated that this leads to a simple expression for the closed-form terms which are required to implement the usual BQ. The overall method, called Dirichlet process mixture Bayesian quadrature (DPMBQ), constructs a (univariate) distribution over the unknown integral p(f) that can be exploited to tease apart the intrinsic performance of a model from numerical integration error in model assessment. Note that BQ was used to estimate marginal likelihood in e.g. [30]. The present problem is distinct, in that we focus on predictive performance (of posterior expectations) rather than marginal likelihood, and its solution demands a correspondingly different methodological development. On the computational front, DPMBQ costs O(n^3). However, this cost is de-coupled from the often orders-of-magnitude larger costs involved in evaluating f(x) and p(dx), which form the main computational bottleneck. Indeed, in the modern computational cardiac models that motivate this research, the ≈100 CPU hour time required for a single simulation limits the number n of available samples to ≈10^3 [25]. At this scale, numerical integration error cannot be neglected in model assessment. This raises challenges when making assessments or comparisons between models, since the intrinsic performance of models cannot be separated from numerical error that is introduced into the assessment. Moreover, there is an urgent ethical imperative that the clinical translation of such models be accompanied by a detailed quantification of the unknown numerical error component in model assessment. Our contribution explicitly demonstrates how this might be achieved. The remainder of the paper proceeds as follows: In Section 2.1 we first recall the usual BQ method, then in Section 2.2 we present and analyse our novel DPMBQ method. Proofs of theoretical results are contained in the electronic supplement.
Empirical results are presented in Section 3 and the paper concludes with a discussion in Section 4.

2 Probabilistic Models for Numerical Integration Error

Consider a domain Ω ⊆ R^d, together with a distribution p(dx) on Ω. As in Eqn. 1, p(f) will be used to denote the integral of the argument f with respect to the distribution p(dx). All integrands are assumed to be (measurable) functions f : Ω → R such that the integral p(f) is well-defined. To begin, we recall details of the BQ method when p(dx) is known in closed form [9, 28]:

2.1 Probabilistic Integration for Tractable Distributions (BQ)

In standard BQ [9, 28], a Gaussian Process (GP) prior f ~ GP(m, k) is assigned to the integrand f, with mean function m : Ω → R and covariance function k : Ω × Ω → R [see 31, for further details on GPs]. The implied prior over the integral p(f) is then the push-forward of the GP prior through the projection f ↦ p(f):

p(f) ~ N( p(m), p ⊗ p(k) ),

where p ⊗ p : Ω × Ω → R is the measure formed by independent products of p(dx) and p(dx′), so that under our notational convention the so-called initial error p ⊗ p(k) is equal to ∬ k(x, x′) p(dx) p(dx′). Next, the GP is conditioned on the information in {(x_i, f(x_i))}_{i=1}^n. The conditional GP takes a conjugate form f | X, f(X) ~ GP(m_n, k_n), where we have written X = (x_1, ..., x_n) and f(X) = (f(x_1), ..., f(x_n))^⊤. Formulae for the mean function m_n : Ω → R and covariance function k_n : Ω × Ω → R are standard and can be found in [31, Eqns. 2.23, 2.24]. The BQ posterior over p(f) is the push-forward of the GP posterior:

p(f) | X, f(X) ~ N( p(m_n), p ⊗ p(k_n) )    (2)

Formulae for p(m_n) and p ⊗ p(k_n) were derived in [28]:

p(m_n) = f(X)^⊤ k(X, X)^{−1} µ(X)    (3)
p ⊗ p(k_n) = p ⊗ p(k) − µ(X)^⊤ k(X, X)^{−1} µ(X)    (4)

where k(X, X) is the n × n matrix with (i, j)th entry k(x_i, x_j), and µ(X) is the n × 1 vector with ith entry µ(x_i), where the function µ is called the kernel mean or kernel embedding [see e.g.
35]:

µ(x) = ∫ k(x, x′) p(dx′)    (5)

Computation of the kernel mean and of the initial error each requires that p(dx) be known, in general. The posterior in Eqn. 2 was studied in [4], where rates of posterior contraction were established under further assumptions on the smoothness of the covariance function k and the smoothness of the integrand. Note that the matrix inverse of k(X, X) incurs a (naive) computational cost of O(n^3); however, this cost is post-hoc and decoupled from the (more expensive) computation that involves the computer model. Sparse or approximate GP methods could also be used.

2.2 Probabilistic Integration for Intractable Distributions

The dependence of Eqns. 3 and 4 on both the kernel mean and the initial error means that BQ cannot be used for intractable p(dx) in general. To address this, we construct a second non-parametric model for the unknown p(dx), presented next.

Dirichlet Process Mixture Model. Consider an infinite mixture model

p(dx) = ∫ ψ(dx; φ) P(dφ),    (6)

where ψ : Ω × Φ → [0, ∞) is such that ψ(·; φ) is a distribution on Ω with parameter φ ∈ Φ, and P is a mixing distribution defined on Φ. In this paper, each data point x_i is modelled as an independent draw from p(dx) and is associated with a latent variable φ_i ∈ Φ according to the generative process of Eqn. 6, i.e. x_i ~ ψ(·; φ_i). To limit scope, the extension to correlated x_i is reserved for future work. The Dirichlet process (DP) is the natural conjugate prior for non-parametric discrete distributions [12]. Here we endow P(dφ) with a DP prior P ~ DP(α, P_b), where α > 0 is a concentration parameter and P_b(dφ) is a base distribution over Φ. The base distribution P_b coincides with the prior expectation E[P(dφ)] = P_b(dφ), while α determines the spread of the prior about P_b. The DP is characterised by the property that, for any finite partition Φ = Φ_1 ∪ · · · ∪ Φ_m, it holds that

(P(Φ_1), ..., P(Φ_m)) ~ Dir( αP_b(Φ_1), ..., αP_b(Φ_m) ),

where P(S) denotes the measure of the set S ⊆ Φ.
For α → 0, the DP is supported on the set of atomic distributions, while for α → ∞, the DP converges to an atom on the base distribution. This overall approach is called a DP mixture (DPM) model [13]. For a random variable Z, the notation [Z] will be used as shorthand to denote the density function of Z. It will be helpful to note that for φ_i ~ P independent, writing φ_{1:n} = (φ_1, ..., φ_n), standard conjugate results for DPs lead to the conditional

P | φ_{1:n} ~ DP( α + n, (α/(α + n)) P_b + (1/(α + n)) Σ_{i=1}^n δ_{φ_i} ),

where δ_{φ_i}(dφ) is an atomic distribution centred at the location φ_i of the ith sample in φ_{1:n}. In turn, this induces a conditional [dp | φ_{1:n}] for the unknown distribution p(dx) through Eqn. 6.

Kernel Means via Stick Breaking. The stick-breaking characterisation can be used to draw from the conditional DP [34]. A generic draw from [P | φ_{1:n}] can be characterised as

P(dφ) = Σ_{j=1}^∞ w_j δ_{ϕ_j}(dφ),    w_j = β_j Π_{j′=1}^{j−1} (1 − β_{j′}),    (7)

where randomness enters through the ϕ_j and β_j as follows:

ϕ_j ~iid (α/(α + n)) P_b + (1/(α + n)) Σ_{i=1}^n δ_{φ_i},    β_j ~iid Beta(1, α + n).

In practice the sum in Eqn. 7 may be truncated at a large finite number of terms, N, with negligible truncation error, since the weights w_j vanish at a geometric rate [18]. The truncated DP has been shown to provide an accurate approximation of integrals with respect to the original DP [19]. For a realisation P(dφ) from Eqn. 7, observe that the induced distribution p(dx) over Ω is

p(dx) = Σ_{j=1}^∞ w_j ψ(dx; ϕ_j).    (8)

Thus we have an alternative characterisation of [p | φ_{1:n}]. Our key insight is that one can take ψ and k to be a conjugate pair, such that both the kernel mean µ(x) and the initial error p ⊗ p(k) will be available in explicit form for the distribution in Eqn. 8 [see Table 1 in 4, for a list of conjugate pairs]. For instance, in the one-dimensional case, consider ϕ = (ϕ_1, ϕ_2) and ψ(dx; ϕ) = N(dx; ϕ_1, ϕ_2) for some location and scale parameters ϕ_1 and ϕ_2.
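The truncated version of the stick-breaking draw in Eqn. 7 can be sketched in a few lines. A minimal NumPy version (the function and argument names are ours; `base_sampler` stands in for a draw from the mixture (α P_b + Σ_i δ_{φ_i})/(α + n), so the Beta parameter becomes α + n in the conditional case):

```python
import numpy as np

def truncated_stick_breaking(concentration, base_sampler, N, rng):
    """Draw N atoms and stick-breaking weights approximating Eqn. 7.

    w_j = beta_j * prod_{j' < j} (1 - beta_j'), with
    beta_j ~ Beta(1, concentration). The leftover stick mass after N
    breaks decays geometrically, so truncation error is negligible
    for large N.
    """
    betas = rng.beta(1.0, concentration, size=N)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining    # w_1 = beta_1, w_2 = beta_2 (1 - beta_1), ...
    atoms = base_sampler(N, rng)   # phi_j, e.g. (location, scale) pairs
    return atoms, weights
```

For the DPM of Eqn. 8 with normal components, each atom would be a (ϕ_1, ϕ_2) pair, and the realised p(dx) is the weighted normal mixture Σ_j w_j N(dx; ϕ_{j,1}, ϕ_{j,2}).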
Then for the Gaussian kernel k(x, x′) = ζ exp(−(x − x′)^2 / 2λ^2), the kernel mean becomes

µ(x) = Σ_{j=1}^∞ [ ζλ w_j / (λ^2 + ϕ_{j,2})^{1/2} ] exp( −(x − ϕ_{j,1})^2 / (2(λ^2 + ϕ_{j,2})) )    (9)

and the initial variance can be expressed as

p ⊗ p(k) = Σ_{j=1}^∞ Σ_{j′=1}^∞ [ ζλ w_j w_{j′} / (λ^2 + ϕ_{j,2} + ϕ_{j′,2})^{1/2} ] exp( −(ϕ_{j,1} − ϕ_{j′,1})^2 / (2(λ^2 + ϕ_{j,2} + ϕ_{j′,2})) ).    (10)

Similar calculations for the multi-dimensional case are straightforward and provided in the Supplemental Information.

The Proposed Model. To put this all together, let θ denote all hyper-parameters that (a) define the GP prior mean and covariance function, denoted m_θ and k_θ below, and (b) define the DP prior, such as α and the base distribution P_b. It is assumed that θ ∈ Θ for some specified set Θ. The marginal posterior distribution for p(f) in the DPMBQ model is defined as

[p(f) | X, f(X)] = ∬ [p(f) | X, f(X), p, θ] [dp | X, θ] [dθ].    (11)

The first term in the integral is BQ for a fixed distribution p(dx). The second term represents the DPM model for the unknown p(dx), while the third term [dθ] represents a hyper-prior distribution over θ ∈ Θ. The DPMBQ distribution in Eqn. 11 does not admit a closed-form expression. However, it is straightforward to sample from this distribution without recourse to f(x) or p(dx). In particular, the second term can be accessed through the law of total probability:

[dp | X, θ] = ∫ [dp | φ_{1:n}] [φ_{1:n} | X, θ] dφ_{1:n},

where the first term [dp | φ_{1:n}] is the stick-breaking construction, and the term [φ_{1:n} | X, θ] can be targeted with a Gibbs sampler. Full details of the procedure we used to sample from Eqn. 11, which is de-coupled from the much larger costs associated with the computer model, are provided in the Supplemental Information.

Theoretical Analysis. The analysis reported below restricts attention to a fixed hyper-parameter θ and a one-dimensional state-space Ω = R. The extension of theoretical results to multiple dimensions was beyond the scope of this paper. Our aim in this section is to establish when DPMBQ is "consistent".
To be precise, a random distribution P_n over an unknown parameter ζ ∈ R, whose true value is ζ_0, is called consistent for ζ_0 at a rate r_n if, for all δ > 0, we have P_n[(−∞, ζ_0 − δ) ∪ (ζ_0 + δ, ∞)] = O_P(r_n). Below we denote by f_0 and p_0 the respective true values of f and p; our aim is to estimate ζ_0 = p_0(f_0). Denote by H the reproducing kernel Hilbert space whose reproducing kernel is k, and assume that the GP prior mean m is an element of H. Our main theoretical result below establishes that the DPMBQ posterior distribution in Eqn. 11, which is a random object due to the n independent draws x_i ~ p(dx), is consistent:

Theorem. Let P_0 denote the true mixing distribution. Suppose that:

1. f belongs to H and k is bounded on Ω × Ω.
2. ψ(dx; ϕ) = N(dx; ϕ_1, ϕ_2).
3. P_0 has compact support supp(P_0) ⊂ R × (σ_l, σ_u) for some fixed σ_l, σ_u ∈ (0, ∞).
4. P_b has a positive, continuous density on a rectangle R, s.t. supp(P_b) ⊆ R ⊆ R × [σ_l, σ_u].
5. P_b({(ϕ_1, ϕ_2) : |ϕ_1| > t}) ≤ c exp(−γ|t|^δ) for some γ, δ > 0 and for all t > 0.

Then the posterior P_n = [p(f) | X, f_0(X)] is consistent for the true value p_0(f_0) of the integral at the rate n^{−1/4+ε}, where the constant ε > 0 can be arbitrarily small.

The proof is provided in the Supplemental Information. Assumption (1) derives from results on consistent BQ [4] and can be relaxed further with the results in [21] (not discussed here), while assumptions (2-5) derive from previous work on consistent estimation with DPM priors [14]. For the case of BQ when p(dx) is known and H is a Sobolev space of order s > 1/2 on Ω = [0, 1], the corresponding posterior contraction rate is exp(−Cn^{2s−ε}) [4, Thm. 1]. Our work, while providing only an upper bound on the convergence rate, suggests that there is an increase in the fundamental complexity of estimation when p(dx) is unknown compared to when it is known. Interestingly, the n^{−1/4+ε} rate is slower than the classical Bernstein-von Mises rate n^{−1/2} [36].
However, a direct comparison between these two quantities is not straightforward, as the former involves the interaction of two distinct non-parametric statistical models. It is known that Bernstein-von Mises results can be delicate for non-parametric problems [see, for example, the counter-examples in 10]. Rather, this theoretical analysis guarantees consistent estimation in a regime that is non-standard.

3 Results

The remainder of the paper reports empirical results from the application of DPMBQ to simulated data and to computational cardiac models.

3.1 Simulation Experiments

To explore the empirical performance of DPMBQ, a series of detailed simulation experiments was performed. For this purpose, a flexible test bed was constructed wherein the true distribution p_0 was a normal mixture model (able to approximate any continuous density) and the true integrand f_0 was a polynomial (able to approximate any continuous function). In this set-up it is possible to obtain closed-form expressions for all integrals p_0(f_0), and these served as a gold-standard benchmark. To mimic the scenario of interest, a small number n of samples x_i were drawn from p_0(dx) and the integrand values f_0(x_i) were obtained. This information X, f_0(X) was provided to DPMBQ, and the output of DPMBQ, a distribution over p(f), was compared against the actual value p_0(f_0) of the integral. For all experiments in this paper the Gaussian kernel k defined in Sec. 2.2 was used; the integrand f was normalised, and the associated amplitude hyper-parameter ζ = 1 was fixed, whereas the length-scale hyper-parameter λ was assigned a Gam(2, 1) hyper-prior. For the DPM, the concentration parameter α was assigned an Exp(1) hyper-prior. These choices allowed DPMBQ to adapt to the smoothness of both f and p in accordance with the data presented to the method.
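The closed-form gold standard for this test bed follows from the raw moments of a Gaussian. A minimal sketch (the helper names are ours; the moments satisfy the standard recursion M_k = µ M_{k−1} + (k−1) σ^2 M_{k−2}):

```python
import numpy as np

def normal_raw_moments(mu, var, k_max):
    """Raw moments E[X^k], k = 0..k_max, for X ~ N(mu, var),
    via the recursion M_k = mu M_{k-1} + (k-1) var M_{k-2}."""
    M = [1.0, mu]
    for k in range(2, k_max + 1):
        M.append(mu * M[k - 1] + (k - 1) * var * M[k - 2])
    return M[:k_max + 1]

def mixture_poly_integral(coeffs, weights, mus, vars_):
    """Exact p0(f0) for a polynomial f0(x) = sum_k coeffs[k] x^k
    integrated against a normal mixture p0 = sum_c weights[c] N(mus[c], vars_[c])."""
    k_max = len(coeffs) - 1
    total = 0.0
    for w, mu, var in zip(weights, mus, vars_):
        M = normal_raw_moments(mu, var, k_max)
        total += w * sum(c * m for c, m in zip(coeffs, M))
    return total
```

For example, f_0(x) = x^2 against 0.5 N(0, 1) + 0.5 N(2, 1) integrates to 0.5 · 1 + 0.5 · (4 + 1) = 3 exactly, which can be checked against a long Monte Carlo run.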
The base distribution Pb for DPMBQ was taken to be normal inverse-gamma with hyper-parameters µ0 = 0, λ0 = α0 = β0 = 1, selected to facilitate a simplified Gibbs sampler. Full details of the simulation set-up and Gibbs sampler are reported in the Supplemental Information.

Figure 1: Simulated data results. (a) Comparison of coverage frequencies for the simulation experiments [Oracle, Student-t, DPMBQ], together with the test integrand f(x) and density p(x). (b) Convergence assessment: the Wasserstein distance (W) between the posterior in Eqn. 11 and the true value of the integral, presented as a function of the number n of data points. [Circles represent independent realisations and the linear trend is shown in red.]

For comparison, we considered the default 50% confidence interval description of numerical error

[ f̄ − t* s/√n , f̄ + t* s/√n ]   (12)

where f̄ = n⁻¹ Σᵢ₌₁ⁿ f(xᵢ), s² = (n − 1)⁻¹ Σᵢ₌₁ⁿ (f(xᵢ) − f̄)², and t* is the 50% level for a Student's t-distribution with n − 1 degrees of freedom. It is well known that Eqn. 12 is a poor description of numerical error when n is small [c.f. "Monte Carlo is fundamentally unsound" 27]. For example, with n = 2, in the extreme case where, due to chance, f(x1) ≈ f(x2), it follows that s ≈ 0 and no numerical error is acknowledged. This fundamental problem is resolved through the use of prior information on the form of both f and p in DPMBQ. The appropriateness of DPMBQ therefore depends crucially on the prior. The proposed method is further distinguished from Eqn. 12 in that the distribution over numerical error is fully non-parametric, not e.g. constrained to be Student-t.

Empirical Results Coverage frequencies are shown in Fig. 1a for a specific integration task (f0, p0) that was deliberately selected to be difficult for Eqn. 12 due to the rare event represented by the mass at x = 2.
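The pathology of Eqn. 12 at small n is easy to reproduce; a sketch (using scipy for the Student-t quantile; all names are our own, not from the paper):

```python
import numpy as np
from scipy import stats

def student_t_interval(fx, level=0.5):
    """Central `level` confidence interval of Eqn. 12 for the integral p(f)."""
    fx = np.asarray(fx, dtype=float)
    n = fx.size
    fbar = fx.mean()
    s = fx.std(ddof=1)                        # sample standard deviation
    tstar = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    half = tstar * s / np.sqrt(n)
    return fbar - half, fbar + half

# With n = 2 and f(x1) ~ f(x2) by chance, s ~ 0: the interval collapses
# and essentially no numerical error is acknowledged.
lo, hi = student_t_interval([0.300, 0.301])
print(hi - lo)  # interval width close to zero
```

DPMBQ avoids this failure mode because its error assessment draws on prior information about f and p rather than on the two observed function values alone.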
These were compared against central 50% posterior credible intervals produced under DPMBQ. Coverage frequencies are the proportion of trials in which the confidence/credible interval contains the true value of the integral, here estimated with 100 independent realisations for DPMBQ and 1000 for the (less computationally intensive) standard method (standard errors are shown for both). Whilst it offers correct coverage in the asymptotic limit, Eqn. 12 can be seen to be over-confident when n is small, with coverage often less than 50%. In contrast, DPMBQ accounts for the fact that p is being estimated and provides a conservative assessment of the extent of numerical error when n is small. To present results that do not depend on a fixed coverage level (e.g. 50%), we next measured convergence in the Wasserstein distance W = ∫ |p(f) − p0(f0)| d[p(f) | X, f0(X)]. In particular, we explored whether the theoretical rate of n^{−1/4+ϵ} was realised. (Note that the theoretical result applied just to fixed hyper-parameters, whereas the experimental results reported involved hyper-parameters that were marginalised, so this is a non-trivial experiment.) Results in Fig. 1b demonstrate that W scaled with n at a rate consistent with the theoretical rate claimed. Full experimental results on our polynomial test bed, reported in detail in the Supplemental Information, revealed that W was larger for higher-degree polynomials (i.e. more complex integrands f), while W was insensitive to the number of mixture components (i.e. to more complex distributions p). The latter observation may be explained by the fact that the kernel mean µ is a smoothed version of the distribution p and so is not expected to be acutely sensitive to variation in p itself.

3.2 Application to a Computational Cardiac Model

The Model The computational model considered in this paper is due to [24] and describes the mechanics of the left and right ventricles through a heart beat. In brief, the model geometry (Fig.
2a, top right) is described by fitting a C1 continuous cubic Hermite finite element mesh to segmented magnetic resonance images (MRI; Fig. 2a, top left).

Figure 2: Cardiac model results. (a) Computational cardiac model: A) segmentation of the cardiac MRI; B) computational model of the left and right ventricles; C) schematic image showing the features of pressure (left) and volume transient (right). (b) Comparison of coverage frequencies for each of 10 numerical integration tasks defined by functionals gj of the cardiac model output.

Cardiac electrophysiology is modelled separately by the solution of the mono-domain equations and provides a field of activation times across the heart. The passive material properties and afterload of the heart are described, respectively, by a transversely isotropic material law and a three-element Windkessel model. Active contraction is simulated using a phenomenological cellular model, with spatial variation arising from the local electrical activation times. The active contraction model is defined by five input parameters: tr and td are the respective constants for the rise and decay times, T0 is the reference tension, and a4 and a6 respectively govern the length dependence of tension rise time and peak tension. These five parameters were concatenated into a vector x ∈ ℝ⁵ and constitute the model inputs. The model is fitted on the basis of training data y that consist of functionals gj : ℝ⁵ → ℝ, j = 1, . . . , 10, of the pressure and volume transient morphology during baseline activation and when the heart is paced from two leads implanted in the right ventricle apex and the left ventricle lateral wall. These 10 functionals are defined in the Supplemental Information; a schematic of the model and fitted measurements are shown in Fig. 2a (bottom panel).
Test Functions The distribution p(dx) was taken to be the posterior distribution over model inputs x that results from an improper flat prior on x and a squared-error likelihood function: log p(x) = const. − 0.1⁻² Σⱼ₌₁¹⁰ (yj − gj(x))². The training data y = (y1, . . . , y10) were obtained from clinical experiment. The task we considered is to compute posterior expectations for functionals f(x) of the model output produced when the model input x is distributed according to p(dx). This represents the situation where a fitted model is used to predict the response to a causal intervention, representing a clinical treatment. For assessment of the DPMBQ method, which is our principal aim in this experiment, we simply took the test functions f to be each of the physically relevant model outputs gj in turn (corresponding to no causal intervention). This defined 10 separate numerical integration problems as a test bed. Benchmark values for p0(gj) were obtained, as described in the Supplemental Information, at a total cost of ≈10⁵ CPU hours, which would not be routinely practical.

Empirical Results For each of the 10 numerical integration problems in the test bed, we computed coverage probabilities, estimated with 100 independent realisations (standard errors are shown), in line with those discussed for the simulation experiments. These are shown in Fig. 2b, where we compared Eqn. 12 with central 50% posterior credible intervals produced under DPMBQ. It is seen that Eqn. 12 is usually reliable but can sometimes be over-confident, with coverage probabilities less than 50%. This over-confidence can lead to spurious conclusions on the predictive performance of the computational model. In contrast, DPMBQ provides a uniformly conservative quantification of numerical error (cover. prob. ≥ 50%). The DPMBQ method is further distinguished from Eqn.
12 in that it entails a joint distribution for the 10 integrals (the unknown p is shared across integrals, an instance of transfer learning across the 10 integration tasks). Fig. 2b also appears to show a correlation structure in the standard approach (black lines), but this is an artefact of the common sample set {xi}ⁿᵢ₌₁ that was used to simultaneously estimate all 10 integrals; Eqn. 12 is still applied independently to each integral.

4 Discussion

Numerical analysis often focuses on the convergence order of numerical methods, but in non-asymptotic regimes the language of probabilities can provide a richer, more intuitive and more useful description of numerical error. This paper cast the computation of integrals p(f) as an estimation problem amenable to Bayesian methods [20, 9, 5]. The difficulty of this problem depends on our level of prior knowledge (the problem is trivial if a closed-form solution is known a priori) and, in the general case, on how much information we are prepared to obtain on the objects f and p through numerical computation [16]. In particular, we distinguish between three states of prior knowledge: (1) f known, p unknown; (2) f unknown, p known; (3) both f and p unknown. Case (1) is the subject of Monte Carlo methods [32] and concerns classical problems in applied probability such as estimating confidence intervals for expectations based on Markov chains. Notable recent work in this direction is [8], who obtained a point estimate p̂ for p using a kernel smoother and then, in effect, used p̂(f) as an estimate for the integral. The decision-theoretic risk associated with error in p̂ was explored in [6]. Independent of integral estimation, there is a large literature on density estimation [37]. Our probabilistic approach provides a Bayesian solution to this problem, as a special case of our more general framework.
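The plug-in strategy of [8] — estimate p̂ with a kernel smoother, then use p̂(f) — can be sketched as follows; here the integral of f against a Gaussian kernel density estimate is itself approximated by sampling from p̂, and the bandwidth h and all names are our own illustrative choices.

```python
import numpy as np

def kde_plugin_integral(f, xs, h, n_draws=100_000, seed=0):
    """Approximate p(f) by integrating f against a Gaussian kernel density
    estimate of p built from samples xs; the outer integral is itself
    approximated by Monte Carlo draws from the KDE."""
    rng = np.random.default_rng(seed)
    xs = np.asarray(xs, dtype=float)
    centers = rng.choice(xs, size=n_draws)          # resample a kernel centre
    draws = centers + h * rng.normal(size=n_draws)  # perturb by the kernel
    return float(np.mean(f(draws)))

# Sanity check: with f(x) = x and all samples at 0, the estimate is near 0.
est = kde_plugin_integral(lambda x: x, xs=np.zeros(50), h=0.1)
print(est)
```

Unlike DPMBQ, this yields a single point estimate with no accompanying distribution over numerical error, which is the gap the present paper addresses.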
Case (2) concerns functional analysis, where [26] provide an extensive overview of theoretical results on the approximation of unknown functions in an information-complexity framework. As a rule of thumb, estimation improves when additional smoothness can be assumed a priori on the value of the unknown object [see 4]. The main focus of this paper was Case (3), until now unstudied, and a transparent, general statistical method called DPMBQ was proposed. The path-finding nature of this work raises several important questions for future theoretical and applied research. First, these methods should be extended to account for the low-rank phenomenon that is often encountered in multi-dimensional integrals [11]. Second, there is no reason, in general, to restrict attention to function values obtained at the locations in X. Indeed, one could first estimate p(dx), then select suitable locations X′ at which to evaluate f(X′) [2]. This touches on aspects of statistical experimental design; the practitioner seeks a set X′ that minimises an appropriate loss functional at the level of p(f); see again [6]. Third, whilst attention was restricted to Gaussians in our experiments, further methodological work will be required to establish guidance for the choice of kernel k in the GP and the choice of base distribution Pb in the DPM [c.f. chapter 4 of 31].

Acknowledgments

CJO and MG were supported by the Lloyds Register Foundation Programme on Data-Centric Engineering. SN was supported by an EPSRC Intermediate Career Fellowship. FXB was supported by the EPSRC grant [EP/L016710/1]. MG was supported by the EPSRC grants [EP/K034154/1, EP/R018413/1, EP/P020720/1, EP/L014165/1] and an EPSRC Established Career Fellowship [EP/J016934/1]. This material was based upon work partially supported by the National Science Foundation (NSF) under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute.
Opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

References

[1] F Bach. On the Equivalence Between Quadrature Rules and Random Features. Journal of Machine Learning Research, 18:1–38, 2017.
[2] F-X Briol, CJ Oates, J Cockayne, WY Chen, and M Girolami. On the sampling problem for kernel quadrature. In Proceedings of the 34th International Conference on Machine Learning, pages 586–595, 2017.
[3] F-X Briol, CJ Oates, M Girolami, and MA Osborne. Frank-Wolfe Bayesian quadrature: Probabilistic integration with theoretical guarantees. In Advances in Neural Information Processing Systems, pages 1162–1170, 2015.
[4] F-X Briol, CJ Oates, M Girolami, MA Osborne, and D Sejdinovic. Probabilistic Integration: A Role for Statisticians in Numerical Analysis? arXiv:1512.00933, 2015.
[5] J Cockayne, CJ Oates, T Sullivan, and M Girolami. Bayesian probabilistic numerical methods. arXiv:1702.03673, 2017.
[6] SN Cohen. Data-driven nonlinear expectations for statistical uncertainty in decisions. arXiv:1609.06545, 2016.
[7] PS Craig, M Goldstein, JC Rougier, and AH Seheult. Bayesian Forecasting for Complex Systems Using Computer Simulators. Journal of the American Statistical Association, 96(454):717–729, 2001.
[8] B Delyon and F Portier. Integral Approximation by Kernel Smoothing. Bernoulli, 22(4):2177–2208, 2016.
[9] P Diaconis. Bayesian Numerical Analysis. Statistical Decision Theory and Related Topics IV, 1:163–175, 1988.
[10] P Diaconis and D Freedman. On the Consistency of Bayes Estimates. Annals of Statistics, 14(1):1–26, 1986.
[11] J Dick, FY Kuo, and IH Sloan. High-Dimensional Integration: The Quasi-Monte Carlo Way. Acta Numerica, 22:133–288, 2013.
[12] TS Ferguson. A Bayesian Analysis of Some Nonparametric Problems. Annals of Statistics, 1(2):209–230, 1973.
[13] TS Ferguson. Bayesian Density Estimation by Mixtures of Normal Distributions.
Recent Advances in Statistics, 24(1983):287–302, 1983.
[14] S Ghosal and AW van der Vaart. Entropies and Rates of Convergence for Maximum Likelihood and Bayes Estimation for Mixtures of Normal Densities. Annals of Statistics, 29(5):1233–1263, 2001.
[15] T Gunter, MA Osborne, R Garnett, P Hennig, and SJ Roberts. Sampling for Inference in Probabilistic Models With Fast Bayesian Quadrature. In Advances in Neural Information Processing Systems, pages 2789–2797, 2014.
[16] P Hennig, MA Osborne, and M Girolami. Probabilistic Numerics and Uncertainty in Computations. Proceedings of the Royal Society A, 471(2179):20150142, 2015.
[17] F Huszár and D Duvenaud. Optimally-Weighted Herding is Bayesian Quadrature. In Uncertainty in Artificial Intelligence, volume 28, pages 377–386, 2012.
[18] H Ishwaran and LF James. Gibbs Sampling Methods for Stick-Breaking Priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[19] H Ishwaran and M Zarepour. Exact and Approximate Sum Representations for the Dirichlet Process. Canadian Journal of Statistics, 30(2):269–283, 2002.
[20] JB Kadane and GW Wasilkowski. Average case epsilon-complexity in computer science: A Bayesian view. Bayesian Statistics 2, Proceedings of the Second Valencia International Meeting, pages 361–374, 1985.
[21] M Kanagawa, BK Sriperumbudur, and K Fukumizu. Convergence Guarantees for Kernel-Based Quadrature Rules in Misspecified Settings. In Advances in Neural Information Processing Systems, volume 30, 2016.
[22] T Karvonen and S Särkkä. Fully symmetric kernel quadrature. arXiv:1703.06359, 2017.
[23] MC Kennedy and A O'Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B, 63(3):425–464, 2001.
[24] AWC Lee, A Crozier, ER Hyde, P Lamata, M Truong, M Sohal, T Jackson, JM Behar, S Claridge, A Shetty, E Sammut, G Plank, CA Rinaldi, and S Niederer.
Biophysical Modeling to Determine the Optimization of Left Ventricular Pacing Site and AV/VV Delays in the Acute and Chronic Phase of Cardiac Resynchronization Therapy. Journal of Cardiovascular Electrophysiology, 28(2):208–215, 2016.
[25] GR Mirams, P Pathmanathan, RA Gray, P Challenor, and RH Clayton. White paper: Uncertainty and Variability in Computational and Mathematical Models of Cardiac Physiology. The Journal of Physiology, 594(23):6833–6847, 2016.
[26] E Novak and H Woźniakowski. Tractability of Multivariate Problems, Volume II: Standard Information for Functionals. EMS Tracts in Mathematics 12, 2010.
[27] A O'Hagan. Monte Carlo is fundamentally unsound. Journal of the Royal Statistical Society, Series D, 36(2/3):247–249, 1987.
[28] A O'Hagan. Bayes–Hermite Quadrature. Journal of Statistical Planning and Inference, 29(3):245–260, 1991.
[29] M Osborne, R Garnett, S Roberts, C Hart, S Aigrain, and N Gibson. Bayesian quadrature for ratios. In Artificial Intelligence and Statistics, pages 832–840, 2012.
[30] MA Osborne, DK Duvenaud, R Garnett, CE Rasmussen, SJ Roberts, and Z Ghahramani. Active learning of model evidence using Bayesian quadrature. In Advances in Neural Information Processing Systems, 2012.
[31] C Rasmussen and C Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[32] C Robert and G Casella. Monte Carlo Statistical Methods. Springer Science & Business Media, 2013.
[33] S Särkkä, J Hartikainen, L Svensson, and F Sandblom. On the relation between Gaussian process quadratures and sigma-point methods. Journal of Advances in Information Fusion, 11(1):31–46, 2016.
[34] J Sethuraman. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4(2):639–650, 1994.
[35] A Smola, A Gretton, L Song, and B Schölkopf. A Hilbert Space Embedding for Distributions. Algorithmic Learning Theory, Lecture Notes in Computer Science, 4754:13–31, 2007.
[36] R Von Mises. Mathematical Theory of Probability and Statistics. Academic, London, 1974.
[37] MP Wand and MC Jones. Kernel Smoothing. CRC Press, 1994.