Reducing Reparameterization Gradient Variance

Andrew C. Miller∗ (Harvard University, acm@seas.harvard.edu), Nicholas J. Foti (University of Washington, nfoti@uw.edu), Alexander D'Amour (UC Berkeley, alexdamour@berkeley.edu), Ryan P. Adams (Google Brain and Princeton University, rpa@princeton.edu)

Abstract

Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to generate more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on a non-conjugate hierarchical model and a Bayesian neural net where our method attained orders-of-magnitude (20-2,000×) reductions in gradient variance, resulting in faster and more stable optimization.

1 Introduction

Representing massive datasets with flexible probabilistic models has been central to the success of many statistics and machine learning applications, but the computational burden of fitting these models is a major hurdle. For optimization-based fitting methods, a central approach to this problem has been replacing expensive evaluations of the gradient of the objective function with cheap, unbiased, stochastic estimates of the gradient. For example, stochastic gradient descent using small minibatches of (conditionally) i.i.d. data to estimate the gradient at each iteration is a popular approach with massive data sets.
Alternatively, some learning methods sample directly from a generative model or approximating distribution to estimate the gradients of interest, for example, in learning algorithms for implicit models [18, 30] and generative adversarial networks [2, 9]. Approximate Bayesian inference using variational techniques (variational inference, or VI) has also motivated the development of new stochastic gradient estimators, as the variational approach reframes the integration problem of inference as an optimization problem [4]. VI approaches seek out the distribution from a well-understood variational family of distributions that best approximates an intractable posterior distribution. The VI objective function itself is often intractable, but recent work has shown that it can be optimized with stochastic gradient methods that use Monte Carlo estimates of the gradient [19, 14, 22, 25], which we call Monte Carlo variational inference (MCVI). In MCVI, generating samples from an approximate posterior distribution is the source of gradient stochasticity. Alternatively, stochastic variational inference (SVI) [11] and other stochastic optimization procedures induce stochasticity through data subsampling; MCVI can also be augmented with data subsampling to accelerate computation for large data sets.

The two commonly used MCVI gradient estimators are the score function gradient [19, 22] and the reparameterization gradient [14, 25, 29, 8]. Broadly speaking, score function estimates can be applied to both discrete and continuous variables, but often have high variance and thus are frequently used in conjunction with variance reduction techniques. On the other hand, the reparameterization gradient often has lower variance, but is restricted to continuous random variables. See Ruiz et al. [28] for a unifying perspective on these two estimators.

∗ http://andymiller.github.io/

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Like other stochastic gradient methods, the success of MCVI depends on controlling the variance of the stochastic gradient estimator. In this work, we present a novel approach to controlling the variance of the reparameterization gradient estimator in MCVI. Existing MCVI methods control this variance naïvely by averaging several gradient estimates, which becomes expensive for large data sets and complex models, with error that only diminishes as $O(1/\sqrt{N})$. Our approach exploits the fact that, in MCVI, the randomness in the gradient estimator is completely determined by a known Monte Carlo generating process; this allows us to leverage knowledge about this generative procedure to de-noise the gradient estimator. In particular, we construct a computationally cheap control variate based on an analytical linear approximation to the gradient estimator. Taking a linear combination of a naïve gradient estimate with this control variate yields a new estimator for the gradient that remains unbiased but has lower variance. Applying the idea to Gaussian approximating families, we observe a 20-2,000× reduction in variance of the gradient norm under various conditions, and faster convergence and more stable behavior of optimization traces.

2 Background

Variational Inference. Given a model, $p(z, \mathcal{D}) = p(\mathcal{D}|z)p(z)$, of data $\mathcal{D}$ and parameters/latent variables $z$, the goal of VI is to approximate the posterior distribution $p(z|\mathcal{D})$. VI approximates this intractable posterior distribution with one from a simpler family, $\mathcal{Q} = \{q(z; \lambda), \lambda \in \Lambda\}$, parameterized by variational parameters $\lambda$. VI procedures seek out the member of that family, $q(\cdot; \lambda) \in \mathcal{Q}$, that minimizes some divergence between the approximation $q$ and the true posterior $p(z|\mathcal{D})$. Variational inference can be framed as an optimization problem, usually in terms of Kullback-Leibler (KL) divergence, of the following form

$$\lambda^* = \arg\min_{\lambda \in \Lambda} \mathrm{KL}(q(z; \lambda)\,||\,p(z|\mathcal{D})) = \arg\min_{\lambda \in \Lambda} \mathbb{E}_{z \sim q_\lambda}\left[\ln q(z; \lambda) - \ln p(z|\mathcal{D})\right].$$
The task is to find a setting of $\lambda$ that makes $q(z; \lambda)$ close to the posterior $p(z|\mathcal{D})$ in KL divergence.² Directly computing the KL divergence requires evaluating the posterior itself; therefore, VI procedures use the evidence lower bound (ELBO) as the optimization objective

$$\mathcal{L}(\lambda) = \mathbb{E}_{z \sim q_\lambda}\left[\ln p(z, \mathcal{D}) - \ln q(z; \lambda)\right], \quad (1)$$

which, when maximized, minimizes the KL divergence between $q(z; \lambda)$ and $p(z|\mathcal{D})$. In special cases, parts of the ELBO can be expressed analytically (e.g., the entropy form or KL-to-prior form [10]); we focus on the general form in Equation 1.

To maximize the ELBO with gradient methods, we need to compute the gradient of Eq. (1), $\partial\mathcal{L}/\partial\lambda \triangleq g_\lambda$. The gradient inherits the ELBO's form as an expectation, which is in general an intractable quantity to compute. In this work, we focus on reparameterization gradient estimators (RGEs) computed using the reparameterization trick. The reparameterization trick exploits the structure of the variational data generating procedure — the mechanism by which $z$ is simulated from $q_\lambda(z)$. To compute the RGE, we first express the sampling procedure from $q_\lambda(z)$ as a differentiable map applied to exogenous randomness:

$$\epsilon \sim q_0(\epsilon) \quad \text{independent of } \lambda, \quad (2)$$
$$z = T(\epsilon; \lambda) \quad \text{differentiable map}, \quad (3)$$

where the initial distribution $q_0$ and $T$ are jointly defined such that $z \sim q(z; \lambda)$ has the desired distribution. As a simple concrete example, if we set $q(z; \lambda)$ to be a diagonal Gaussian,

² We use $q(z; \lambda)$ and $q_\lambda(z)$ interchangeably.

Figure 1: Optimization traces (ELBO vs. wall clock, in seconds) for MCVI applied to a Bayesian neural network with various hyperparameter settings; (a) step size = .01, (b) step size = .1. Each trace is running adam [13]. The three lines in each plot correspond to three different numbers of samples, L (2, 10, and 50), used to estimate the gradient at each step.
(Left) small step size; (Right) step size 10 times larger. Large step sizes allow for quicker progress; however, noisier (i.e., small L) gradients combined with large step sizes result in chaotic optimization dynamics. The converging traces reach different ELBOs due to the illustrative constant learning rates; in practice, one decreases the step size over time to satisfy the convergence criteria in Robbins and Monro [26].

$\mathcal{N}(m_\lambda, \mathrm{diag}(s_\lambda^2))$, with $\lambda = [m_\lambda, s_\lambda]$, where $m_\lambda \in \mathbb{R}^D$ and $s_\lambda \in \mathbb{R}^D_+$ are the mean and scale parameters, the sampling procedure could then be defined as

$$\epsilon \sim \mathcal{N}(0, I_D), \quad z = T(\epsilon; \lambda) = m_\lambda + s_\lambda \odot \epsilon, \quad (4)$$

where $s \odot \epsilon$ denotes an element-wise product.³ Given this map, the reparameterization gradient estimator is simply the gradient of a Monte Carlo ELBO estimate with respect to $\lambda$. For a single sample, this is

$$\epsilon \sim q_0(\epsilon), \quad \hat{g}_\lambda \triangleq \nabla_\lambda \left[\ln p(T(\epsilon; \lambda), \mathcal{D}) - \ln q(T(\epsilon; \lambda); \lambda)\right],$$

and similarly the L-sample approximation can be computed by averaging the single-sample estimator over the individual samples:

$$\hat{g}^{(L)}_\lambda = \frac{1}{L} \sum_{\ell=1}^{L} \hat{g}_\lambda(\epsilon_\ell). \quad (5)$$

Crucially, the reparameterization gradient is unbiased, $\mathbb{E}[\hat{g}_\lambda] = \nabla_\lambda \mathcal{L}(\lambda)$, guaranteeing the convergence of stochastic gradient optimization procedures that use it [26].

Gradient Variance and Convergence. The efficiency of Monte Carlo variational inference hinges on the magnitude of gradient noise and the step size chosen for the optimization procedure. When the gradient noise is large, smaller gradient steps must be taken to avoid unstable dynamics of the iterates. However, a smaller step size increases the number of iterations that must be performed to reach convergence. We illustrate this trade-off in Figure 1, which shows realizations of an optimization procedure applied to a Bayesian neural network using reparameterization gradients. The posterior is over D = 653 parameters that we approximate with a diagonal Gaussian (see Appendix C.2). We compare the progress of the adam algorithm [13] using various numbers of samples, fixing the learning rate.
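The L-sample estimator of Eqs. (4)-(5) can be sketched numerically. Everything below — the quadratic log-joint, its parameters, and all variable names — is an illustrative assumption, not the paper's experimental setup; the model is chosen so the data term has a closed-form gradient.

```python
import numpy as np

# Minimal sketch of the L-sample reparameterization gradient, Eqs. (4)-(5),
# for a diagonal Gaussian q(z; m, s). The model is an illustrative assumption:
# ln p(z, D) = -0.5 z^T A z + b^T z, so the data term is f(z) = b - A z.
rng = np.random.default_rng(0)
D = 3
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, -1.0, 0.5])
f = lambda z: b - A @ z                      # grad_z ln p(z, D)

def reparam_gradient(m, s, L=10):
    """For a diagonal Gaussian, the score terms cancel or simplify, leaving
    g_m = f(z) and g_s = f(z) * eps + 1/s for each sample z = m + s * eps."""
    gm = np.zeros(D)
    gs = np.zeros(D)
    for _ in range(L):
        eps = rng.standard_normal(D)         # base randomness, eps ~ q0
        z = m + s * eps                      # Eq. (4): z = T(eps; lambda)
        gm += f(z)
        gs += f(z) * eps + 1.0 / s
    return gm / L, gs / L                    # Eq. (5): average over samples

m, s = np.zeros(D), np.ones(D)
gm, gs = reparam_gradient(m, s, L=2000)
# With m = 0 here, E[g_m] = b, so a large-L estimate should land near b.
```

Increasing L shrinks the estimator's variance at $O(1/L)$, which is exactly the expensive averaging that the control variate of Section 3 is designed to avoid.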
The noise present in the single-sample estimator causes extremely slow convergence, whereas the lower-noise 50-sample estimator quickly converges, albeit at 50 times the cost. The upshot is that with low-noise gradients we are able to safely take larger steps, enabling faster convergence to a local optimum. A natural question is: how can we reduce the variance of gradient estimates without introducing too much extra computation? Our approach is to use information about the variational model, $q(\cdot; \lambda)$, and carefully construct a control variate to the gradient.

Control Variates. Control variates are random quantities that reduce the variance of a statistical estimator, without introducing any bias, by incorporating additional information into the estimator [7]. Given an unbiased estimator $\hat{g}$ such that $\mathbb{E}[\hat{g}] = g$ (the quantity of interest), our goal is to construct another unbiased estimator with lower variance. We can do this by defining a control variate $\tilde{g}$ with known expectation $\tilde{m}$, and writing the new estimator as

$$g^{(cv)} = \hat{g} - C(\tilde{g} - \tilde{m}), \quad (6)$$

where $C \in \mathbb{R}^{D \times D}$ for D-dimensional $\hat{g}$. Clearly the new estimator has the same expectation as the original estimator, but it has a different variance. We can attain optimal variance reduction by appropriately setting C. Intuitively, the optimal C is very similar to a regression coefficient — it is related to the covariance between the control variate and the original estimator. See Appendix A for further details on optimally setting C.

³ We will also use $x/y$ and $x^2$ to denote pointwise division and squaring, respectively.

3 Method: Modeling Reparameterization Gradients

In this section we develop our main contribution, a new gradient estimator that can dramatically reduce reparameterization gradient variance. In MCVI, the reparameterization gradient estimator (RGE) is a Monte Carlo estimator of the true gradient — the estimator itself is a random variable.
This random variable is generated using the "reparameterization trick": we first generate some randomness $\epsilon$ and then compute the gradient of the ELBO with respect to $\lambda$ holding $\epsilon$ fixed. This results in a complex distribution from which we can generate samples, but which in general we cannot characterize, due to the complexity of the term arising from the gradient of the model term. However, we do have a lot of information about the sampling procedure — we know the variational distribution $\ln q(z; \lambda)$, the transformation $T$, and we can evaluate the model joint density $\ln p(z, \mathcal{D})$ pointwise. Furthermore, with automatic differentiation, it is often straightforward to obtain gradients and Hessian-vector products of our model $\ln p(z, \mathcal{D})$. We propose a scheme that uses the structure of $q_\lambda$ and the curvature of $\ln p(z, \mathcal{D})$ to construct a tractable approximation of the distribution of the RGE.⁴ This approximation has a known mean and is correlated with the RGE distribution, allowing us to use it as a control variate to reduce the RGE variance.

Given a variational family parameterized by $\lambda$, we can decompose the ELBO gradient into a few terms that reveal its "data generating procedure":

$$\epsilon \sim q_0, \quad z = T(\epsilon; \lambda), \quad (7)$$

$$\hat{g}_\lambda \triangleq \hat{g}(z; \lambda) = \underbrace{\frac{\partial \ln p(z, \mathcal{D})}{\partial z}}_{\text{data term}} \frac{\partial z}{\partial \lambda} - \underbrace{\frac{\partial \ln q_\lambda(z)}{\partial z}}_{\text{pathwise score}} \frac{\partial z}{\partial \lambda} - \underbrace{\frac{\partial \ln q_\lambda(z)}{\partial \lambda}}_{\text{parameter score}}. \quad (8)$$

Certain terms in Eq. (8) have tractable distributions. The Jacobian of $T(\cdot; \lambda)$, given by $\partial z/\partial \lambda$, is defined by our choice of $q(z; \lambda)$. For some transformations $T$ we can exactly compute the distribution of the Jacobian given the distribution of $\epsilon$. The pathwise and parameter score terms are gradients of our approximate distribution with respect to $\lambda$ (via $z$ or directly). If our approximation is tractable (e.g., a multivariate Gaussian), we can exactly characterize the distribution of these components.⁵ However, the data term in Eq.
(8) involves a potentially complicated function of the latent variable $z$ (and therefore a complicated function of $\epsilon$), resulting in a difficult-to-characterize distribution. Our goal is to construct an approximation to the distribution of $\partial \ln p(z, \mathcal{D})/\partial z$ and its interaction with $\partial z/\partial \lambda$ given a fixed distribution over $\epsilon$. If the approximation yields random variables that are highly correlated with $\hat{g}_\lambda$, then we can use it to reduce the variance of that RGE sample.

Linearizing the data term. To simplify notation, we write the data term of the gradient as

$$f(z') \triangleq \left.\frac{\partial \ln p(z, \mathcal{D})}{\partial z}\right|_{z=z'}, \quad (9)$$

where $f : \mathbb{R}^D \mapsto \mathbb{R}^D$ since $z \in \mathbb{R}^D$. We then linearize $f$ about some value $z_0$:

$$\tilde{f}(z) = f(z_0) + \frac{\partial f}{\partial z}(z_0)(z - z_0) = f(z_0) + H(z_0)(z - z_0), \quad (10)$$

where $H(z_0)$ is the Hessian of the model, $\ln p(z, \mathcal{D})$, with respect to $z$, evaluated at $z_0$:

$$H(z_0) = \frac{\partial f}{\partial z}(z_0) = \frac{\partial^2 \ln p(z, \mathcal{D})}{\partial z^2}(z_0). \quad (11)$$

Note that even though this uses second-order information about the model, it is a first-order approximation of the gradient. We also view this as a transformation of the random $\epsilon$ for a fixed $\lambda$:

$$\tilde{f}_\lambda(\epsilon) = f(z_0) + H(z_0)(T(\epsilon, \lambda) - z_0), \quad (12)$$

which is linear in $z = T(\epsilon, \lambda)$. For some forms of $T$ we can analytically derive the distribution of the random variable $\tilde{f}_\lambda(\epsilon)$. In Eq. (8), the data term interacts with the Jacobian of $T$, given by

$$J_{\lambda'}(\epsilon) \triangleq \frac{\partial z}{\partial \lambda} = \left.\frac{\partial T(\epsilon, \lambda)}{\partial \lambda}\right|_{\lambda = \lambda'}, \quad (13)$$

which importantly is a function of the same $\epsilon$ as in Eq. (12). We form our approximation of the first term in Eq. (8) by multiplying Eqs. (12) and (13), yielding

$$\tilde{g}^{(data)}_\lambda(\epsilon) \triangleq \tilde{f}_\lambda(\epsilon) J_\lambda(\epsilon). \quad (14)$$

The tractability of this approximation hinges on how Eq. (14) depends on $\epsilon$.

⁴ We require the model $\ln p(z, \mathcal{D})$ to be twice differentiable.

⁵ In fact, we know that the expectation of the parameter score term is zero, and removing that term altogether can sometimes be a source of variance reduction that we do not explore here [27].
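To make the linearization of Eqs. (10)-(12) concrete, here is a small numerical sketch; the toy model, its derivatives, and the parameter values are illustrative assumptions chosen so that $f$ and $H$ are available in closed form.

```python
import numpy as np

# Sketch of the linearized data term, Eqs. (10)-(12), for an assumed toy
# model ln p(z, D) = -sum(z^4)/4, so f(z) = -z^3 with Hessian diag(-3 z^2).
f = lambda z: -z**3
H = lambda z0: np.diag(-3.0 * z0**2)

def f_lin(eps, m, s, z0):
    """Linear approximation of f(T(eps; lambda)) about z0, Eq. (12)."""
    z = m + s * eps                  # T(eps; lambda) for a diagonal Gaussian
    return f(z0) + H(z0) @ (z - z0)

m, s = np.array([1.0, -0.5]), np.array([0.1, 0.1])
rng = np.random.default_rng(2)
eps = rng.standard_normal(2)
exact = f(m + s * eps)
approx = f_lin(eps, m, s, z0=m)      # linearize about the variational mean
# For small scales s, the linearization tracks the exact data term closely.
```

Because $z - z_0 = s \odot \epsilon$ when linearizing about the mean, the approximation error is governed by the Taylor remainder and shrinks as the variational scales shrink.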
When $q(z; \lambda)$ is multivariate normal, we show that this approximation has a computable mean and can be used to reduce variance in MCVI settings. In the following sections we describe and empirically test this variance reduction technique applied to diagonal Gaussian posterior approximations.

3.1 Gaussian Variational Families

Perhaps the most common choice of approximating distribution for MCVI is a diagonal Gaussian, parameterized by a mean $m_\lambda \in \mathbb{R}^D$ and scales $s_\lambda \in \mathbb{R}^D_+$.⁶ The log probability density function is

$$\ln q(z; m_\lambda, s_\lambda^2) = -\frac{1}{2}(z - m_\lambda)^\intercal \mathrm{diag}(s_\lambda^2)^{-1}(z - m_\lambda) - \frac{1}{2}\sum_d \ln s_{\lambda,d}^2 - \frac{D}{2}\ln(2\pi). \quad (15)$$

To generate a random variate $z$ from this distribution, we use the sampling procedure in Eq. (4). We denote the Monte Carlo RGE as $\hat{g}_\lambda \triangleq [\hat{g}_{m_\lambda}, \hat{g}_{s_\lambda}]$. From Eq. (15), it is straightforward to derive the distributions of the pathwise score, parameter score, and Jacobian terms in Eq. (8). The Jacobian term of the sampling procedure has two straightforward components:

$$\frac{\partial z}{\partial m_\lambda} = I_D, \quad \frac{\partial z}{\partial s_\lambda} = \mathrm{diag}(\epsilon). \quad (16)$$

The pathwise score term is the partial derivative of Eq. (15) with respect to $z$, ignoring variation due to the variational distribution parameters and noting that $z = m_\lambda + s_\lambda \odot \epsilon$:

$$\frac{\partial \ln q}{\partial z} = -\mathrm{diag}(s_\lambda^2)^{-1}(z - m_\lambda) = -\epsilon/s_\lambda. \quad (17)$$

The parameter score term is the partial derivative of Eq. (15) with respect to the variational parameters $\lambda$, ignoring variation due to $z$. The $m_\lambda$ and $s_\lambda$ components are given by

$$\frac{\partial \ln q}{\partial m_\lambda} = (z - m_\lambda)/s_\lambda^2 = \epsilon/s_\lambda, \quad (18)$$

$$\frac{\partial \ln q}{\partial s_\lambda} = -1/s_\lambda + (z - m_\lambda)^2/s_\lambda^3 = \frac{\epsilon^2 - 1}{s_\lambda}. \quad (19)$$

The data term, $f(z)$, multiplied by the Jacobian of $T$ is all that remains to be approximated in Eq. (8). We linearize $f$ around $z_0 = m_\lambda$, where the approximation is expected to be accurate:

$$\tilde{f}_\lambda(\epsilon) = f(m_\lambda) + H(m_\lambda)\left((m_\lambda + s_\lambda \odot \epsilon) - m_\lambda\right) \quad (20)$$
$$\sim \mathcal{N}\left(f(m_\lambda),\; H(m_\lambda)\,\mathrm{diag}(s_\lambda^2)\,H(m_\lambda)^\intercal\right). \quad (21)$$

⁶ For diagonal Gaussian $q$, we define $\lambda = [m_\lambda, s_\lambda]$.

[Figure 2: Relationship between the base randomness $\epsilon$, the RGE $\hat{g}$, and the approximation $\tilde{g}$. Arrows indicate deterministic functions. Sharing $\epsilon$ correlates the random variables.]
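The Gaussian claim in Eq. (21) can be checked by simulation. The toy model below (the same quartic log-joint form used illustratively above) and all parameter values are assumptions for the sake of a runnable sketch.

```python
import numpy as np

# Numerical check of Eq. (21): for z = m + s * eps, the linearized data term
# f(m) + H(m)(s * eps) is Gaussian with mean f(m) and covariance
# H(m) diag(s^2) H(m)^T. The model here is an illustrative assumption:
# ln p(z, D) = -sum(z^4)/4, so f(z) = -z^3 and H(z) = diag(-3 z^2).
f = lambda z: -z**3
H = lambda z: np.diag(-3.0 * z**2)

m = np.array([1.0, -2.0])
s = np.array([0.5, 0.25])
rng = np.random.default_rng(0)
eps = rng.standard_normal((100_000, 2))
f_lin = f(m) + (eps * s) @ H(m).T       # row l holds f(m) + H(m)(s * eps_l)

emp_mean = f_lin.mean(axis=0)
emp_cov = np.cov(f_lin.T)
true_cov = H(m) @ np.diag(s**2) @ H(m).T
```

The empirical mean and covariance of the simulated linearized term match the closed-form Gaussian parameters of Eq. (21) up to Monte Carlo error.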
We know the distribution of $\tilde{g}$, which allows us to use it as a control variate for $\hat{g}$.

Algorithm 1 Gradient descent with RV-RGE, with a diagonal Gaussian variational family
1: procedure RV-RGE-OPTIMIZE($\lambda_1$, $\ln p(z, \mathcal{D})$, $L$)
2:   $f(z) \leftarrow \nabla_z \ln p(z, \mathcal{D})$
3:   $H(z_a, z_b) \leftarrow [\nabla^2_z \ln p(z_a, \mathcal{D})]\, z_b$  ▷ Define Hessian-vector product function
4:   for $t = 1, \ldots, T$ do
5:     $\epsilon^{(\ell)} \sim \mathcal{N}(0, I_D)$ for $\ell = 1, \ldots, L$  ▷ Base randomness $q_0$
6:     $\hat{g}^{(\ell)}_{\lambda_t} \leftarrow \nabla_\lambda \ln p(z(\epsilon^{(\ell)}, \lambda_t), \mathcal{D})$  ▷ Reparameterization gradients
7:     $\tilde{g}^{(\ell)}_{m_{\lambda_t}} \leftarrow f(m_{\lambda_t}) + H(m_{\lambda_t}, s_{\lambda_t} \odot \epsilon^{(\ell)})$  ▷ Mean approx
8:     $\tilde{g}^{(\ell)}_{s_{\lambda_t}} \leftarrow \left(f(m_{\lambda_t}) + H(m_{\lambda_t}, s_{\lambda_t} \odot \epsilon^{(\ell)})\right) \odot \epsilon^{(\ell)} + \frac{1}{s_{\lambda_t}}$  ▷ Scale approx
9:     $\mathbb{E}[\tilde{g}_{m_{\lambda_t}}] \leftarrow f(m_{\lambda_t})$  ▷ Mean approx expectation
10:    $\mathbb{E}[\tilde{g}_{s_{\lambda_t}}] \leftarrow \mathrm{diag}(H(m_{\lambda_t})) \odot s_{\lambda_t} + 1/s_{\lambda_t}$  ▷ Scale approx expectation
11:    $\hat{g}^{(RV)}_{\lambda_t} \leftarrow \frac{1}{L}\sum_\ell \left[\hat{g}^{(\ell)}_{\lambda_t} - (\tilde{g}^{(\ell)}_{\lambda_t} - \mathbb{E}[\tilde{g}_{\lambda_t}])\right]$  ▷ Subtract control variate
12:    $\lambda_{t+1} \leftarrow$ grad-update($\lambda_t$, $\hat{g}^{(RV)}_{\lambda_t}$)  ▷ Gradient step (sgd, adam, etc.)
13:  return $\lambda_T$

Putting It Together: The Full RGE Approximation. We write the complete approximation of the RGE in Eq. (8) by combining Eqs. (16), (17), (18), (19), and (21), which results in two concatenated components, $\tilde{g}_\lambda = [\tilde{g}_{m_\lambda}, \tilde{g}_{s_\lambda}]$. Each component is defined as

$$\tilde{g}_{m_\lambda} = \tilde{f}_\lambda(\epsilon) + \epsilon/s_\lambda - \epsilon/s_\lambda = f(m_\lambda) + H(m_\lambda)(s_\lambda \odot \epsilon), \quad (22)$$

$$\tilde{g}_{s_\lambda} = \tilde{f}_\lambda(\epsilon) \odot \epsilon + (\epsilon/s_\lambda) \odot \epsilon - \frac{\epsilon^2 - 1}{s_\lambda} = \left(f(m_\lambda) + H(m_\lambda)(s_\lambda \odot \epsilon)\right) \odot \epsilon + \frac{1}{s_\lambda}. \quad (23)$$

To summarize, we have constructed an approximation, $\tilde{g}_\lambda$, of the reparameterization gradient, $\hat{g}_\lambda$, as a function of $\epsilon$. Because both $\tilde{g}_\lambda$ and $\hat{g}_\lambda$ are functions of the same random variable $\epsilon$, and because we have mimicked the random process that generates true gradient samples, the two gradient estimators will be correlated. This approximation yields two tractable distributions — a Gaussian for the mean parameter gradient, $g_{m_\lambda}$, and a location-shifted, scaled non-central $\chi^2$ for the scale parameter gradient $g_{s_\lambda}$. Importantly, we can compute the mean of each component:

$$\mathbb{E}[\tilde{g}_{m_\lambda}] = f(m_\lambda), \quad \mathbb{E}[\tilde{g}_{s_\lambda}] = \mathrm{diag}(H(m_\lambda)) \odot s_\lambda + 1/s_\lambda. \quad (24)$$

We use $\tilde{g}_\lambda$ (along with its expectation) as a control variate to reduce the variance of the RGE $\hat{g}_\lambda$.
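The whole construction can be sketched end to end: draw many single-sample gradients, form the control variate from Eqs. (22)-(23) sharing the same $\epsilon$, subtract it with its mean from Eq. (24), and compare variances. The quartic toy model and all constants below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Sketch of the control variate in action (Eqs. (22)-(24), with C = I):
# many single-sample gradients share eps with their linear approximations.
# Illustrative model: ln p(z, D) = -sum(z^4)/4, f(z) = -z^3, Hdiag = -3 z^2.
f = lambda z: -z**3
Hdiag = lambda z: -3.0 * z**2

m, s = np.array([0.8, -1.2]), np.array([0.2, 0.3])
rng = np.random.default_rng(5)
eps = rng.standard_normal((100_000, 2))
z = m + s * eps

# Single-sample RGEs for a diagonal Gaussian (score terms simplified).
g_hat_m = f(z)
g_hat_s = f(z) * eps + 1.0 / s

# Control variates sharing the same eps, Eqs. (22)-(23), with means, Eq. (24).
Hv = Hdiag(m) * (s * eps)            # H(m)(s * eps) for a diagonal Hessian
g_til_m = f(m) + Hv
g_til_s = (f(m) + Hv) * eps + 1.0 / s
mean_m = f(m)
mean_s = Hdiag(m) * s + 1.0 / s

# The RV-RGE: unbiased, but with far lower variance than the raw estimator.
g_rv_m = g_hat_m - (g_til_m - mean_m)
g_rv_s = g_hat_s - (g_til_s - mean_s)
```

What remains in `g_rv_m` is only the second-order Taylor remainder of $f$, which is why the reduction is largest when the variational scales are small.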
3.2 Reduced Variance Reparameterization Gradient Estimators

Now that we have constructed a tractable gradient approximation, $\tilde{g}_\lambda$, with high correlation to the original reparameterization gradient estimator, $\hat{g}_\lambda$, we can use it as a control variate as in Eq. (6):

$$\hat{g}^{(RV)}_\lambda = \hat{g}_\lambda - C(\tilde{g}_\lambda - \mathbb{E}[\tilde{g}_\lambda]). \quad (25)$$

The optimal value for C is related to the covariance between $\tilde{g}_\lambda$ and $\hat{g}_\lambda$ (see Appendix A). We can try to estimate the value of C (or a diagonal approximation to C) on the fly, or we can simply fix this value. In our case, because we are using an accurate linear approximation to the transformation of a spherical Gaussian, the optimal value of C will be close to the identity (see Appendix A.1).

High Dimensional Models. For models with high dimensional posteriors, direct manipulation of the Hessian is computationally intractable. However, our approximations in Eqs. (22) and (23) only require a Hessian-vector product, which can be computed nearly as efficiently as the gradient [21]. Modern automatic differentiation packages enable easy and efficient implementation of Hessian-vector products for nearly any differentiable model [1, 20, 15]. We note that the mean of the control variate $\tilde{g}_{s_\lambda}$ (Eq. (24)) depends on the diagonal of the Hessian matrix. While computing the Hessian diagonal may be tractable in some cases, in general it may cost the time equivalent of D function evaluations to compute [16]. Given a high dimensional problem, we can avoid this bottleneck in multiple ways. The first is simply to ignore the random variation in the Jacobian term due to $\epsilon$ — if we fix $z$ to be $m_\lambda$ (as we do with the data term), the portion of the Jacobian that corresponds to $s_\lambda$ will be zero (in Eq. (16)). This will result in the same Hessian-vector-product-based estimator for $\tilde{g}_{m_\lambda}$ but will set $\tilde{g}_{s_\lambda} = 0$, yielding variance reduction for the mean parameter but not the scale. Alternatively, we can estimate the Hessian diagonal on the fly.
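One way to do this with Hessian-vector products alone follows the randomized diagonal estimator of Bekas et al. [3]: for $\epsilon \sim \mathcal{N}(0, I)$, $\mathbb{E}[\epsilon \odot H\epsilon] = \mathrm{diag}(H)$. A small sketch of that identity — the dense stand-in Hessian below is an illustrative assumption, not a model Hessian:

```python
import numpy as np

# Randomized Hessian-diagonal estimation in the spirit of Bekas et al. [3]:
# for eps ~ N(0, I), E[eps * (H eps)] = diag(H), so the diagonal can be
# estimated from Hessian-vector products alone. The dense symmetric H here
# is an illustrative stand-in for a model Hessian.
rng = np.random.default_rng(4)
D = 5
A = rng.standard_normal((D, D))
H = (A + A.T) / 2.0                      # symmetric "Hessian"

L = 50_000
eps = rng.standard_normal((L, D))
hvps = eps @ H.T                         # row l holds the HVP H @ eps_l
diag_est = np.mean(eps * hvps, axis=0)   # average of eps_l * (H eps_l)
```

In the multi-sample setting of the next paragraph, the same identity is applied per sample using the other samples' HVPs, so no extra Hessian computation is needed.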
If we use L > 1 samples at each iteration, we can create a per-sample estimate of the $s_\lambda$-scaled diagonal of the Hessian using the other samples [3]. As the scaled diagonal estimator is unbiased, we can construct an unbiased estimate of the control variate mean to use in lieu of the actual mean. We will see that the resulting variance is not much higher than when using full Hessian information, and is computationally tractable to deploy on high-dimensional models. A similar local baseline strategy is used for variance reduction in Mnih and Rezende [17].

RV-RGE Estimators. We introduce three different estimators based on variations of the gradient approximation defined in Eqs. (22), (23), and (24), each addressing the Hessian operations differently:

• The Full Hessian estimator implements the three equations as written and can be used when it is computationally feasible to use the full Hessian.

• The Hessian Diagonal estimator replaces the Hessian in Eq. (22) with a diagonal approximation, useful for models with a cheap Hessian diagonal.

• The Hessian-vector product + local approximation (HVP+Local) estimator uses an efficient Hessian-vector product in Eqs. (22) and (23), while approximating the diagonal term in Eq. (24) using a local baseline.

The HVP+Local approximation is geared toward models where Hessian-vector products can be computed, but the exact diagonal of the Hessian cannot. We detail the RV-RGE procedure in Algorithm 1 and compare properties of these three estimators to the pure Monte Carlo estimator in the following section.

3.3 Related Work

Recently, Roeder et al. [27] introduced a variance reduction technique for reparameterization gradients that ignores the parameter score component of the gradient; it can be viewed as a type of control variate for the gradient throughout the optimization procedure.
This approach is complementary to our method — our approximation is typically more accurate near the beginning of the optimization procedure, whereas the estimator in Roeder et al. [27] is low-variance near convergence. We hope to incorporate information from both control variates in future work. Per-sample estimators in a multi-sample setting for variational inference were used in Mnih and Rezende [17]. We employ this technique in a different way: we use it to estimate computationally intractable quantities needed to keep the gradient estimator unbiased. Black box variational inference used control variates and Rao-Blackwellization to reduce the variance of score-function estimators [22]. Our development of variance reduction for reparameterization gradients complements their work. Other variance reduction techniques for stochastic gradient descent have focused on stochasticity due to data subsampling [12, 31]. Johnson and Zhang [12] cache statistics about the entire dataset at each epoch to use as a control variate for noisy mini-batch gradients. The variance reduction method described in Paisley et al. [19] is conceptually similar to ours; it uses first- or second-order derivative information to reduce the variance of the score function estimator. The score function estimator (and their reduced-variance version) often has much higher variance than the reparameterization gradient estimator that we improve upon in this work. Our variance measurement experiments in Table 1 include a comparison to the estimator featured in [19], which we found to have much higher variance than the baseline RGE.

4 Experiments and Analysis

In this section we empirically examine the variance properties of RV-RGEs and stochastic optimization for two real-data examples — a hierarchical Poisson GLM and a Bayesian neural network.⁷

• Hierarchical Poisson GLM: The frisk model is a hierarchical Poisson GLM, described in Appendix C.1.
This non-conjugate model has a D = 37 dimensional posterior.

• Bayesian Neural Network: The non-conjugate bnn model is a Bayesian neural network applied to the wine dataset (see Appendix C.2) and has a D = 653 dimensional posterior.

⁷ Code is available at https://github.com/andymiller/ReducedVarianceReparamGradients.

Table 1: Comparison of variances for RV-RGEs with L = 10-sample estimators. Variance measurements were taken for λ values at three points during the optimization algorithm (early, mid, late). The parenthetical rows labeled "(MC abs.)" give the absolute variance of the standard Monte Carlo reparameterization gradient estimator. The other rows report estimator variance relative to the pure MC RGE variance — a value of 100 indicates variance equal to the L = 10-sample MC estimator, and a value of 1 indicates a 100-fold decrease in variance (lower is better). Our new estimators (Full Hessian, Hessian Diag, HVP+Local) are described in Section 3.2. The Score Delta method is the gradient estimator described in [19]. Additional variance measurement results are in Appendix D.

| Iteration | Estimator | g_m: Ave V(·) | g_m: V(‖·‖) | ln g_s: Ave V(·) | ln g_s: V(‖·‖) | g_λ: Ave V(·) | g_λ: V(‖·‖) |
|---|---|---|---|---|---|---|---|
| early | (MC abs.) | (1.7e+02) | (5.4e+03) | (3e+04) | (2e+05) | (1.5e+04) | (5.9e+03) |
| | MC | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
| | Full Hessian | 1.279 | 1.139 | 0.001 | 0.002 | 0.008 | 1.039 |
| | Hessian Diag | 34.691 | 23.764 | 0.003 | 0.012 | 0.194 | 21.684 |
| | HVP + Local | 1.279 | 1.139 | 0.013 | 0.039 | 0.020 | 1.037 |
| | Score Delta [19] | 6069.668 | 718.430 | 1.395 | 0.931 | 34.703 | 655.105 |
| mid | (MC abs.) | (3.8e+03) | (1.3e+05) | (18) | (3.3e+02) | (1.9e+03) | (1.3e+05) |
| | MC | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
| | Full Hessian | 0.075 | 0.068 | 0.113 | 0.143 | 0.076 | 0.068 |
| | Hessian Diag | 38.891 | 21.283 | 6.295 | 7.480 | 38.740 | 21.260 |
| | HVP + Local | 0.075 | 0.068 | 30.754 | 39.156 | 0.218 | 0.071 |
| | Score Delta [19] | 4763.246 | 523.175 | 2716.038 | 700.100 | 4753.752 | 523.532 |
| late | (MC abs.) | (1.7e+03) | (1.3e+04) | (1.1) | (19) | (8.3e+02) | (1.3e+04) |
| | MC | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
| | Full Hessian | 0.042 | 0.030 | 1.686 | 0.431 | 0.043 | 0.030 |
| | Hessian Diag | 40.292 | 53.922 | 23.644 | 28.024 | 40.281 | 53.777 |
| | HVP + Local | 0.042 | 0.030 | 98.523 | 99.811 | 0.110 | 0.022 |
| | Score Delta [19] | 5183.885 | 1757.209 | 17355.120 | 3084.940 | 5192.270 | 1761.317 |

Quantifying Gradient Variance Reduction. We measure the variance reduction of the RGE observed at various iterates, λ_t, during execution of gradient descent. Both the gradient magnitude and the marginal variance of the gradient elements — using a sample of 1000 gradients — are reported. Further, we inspect the mean, m_λ, and log-scale, ln s_λ, parameters separately. Table 1 compares gradient variances on the frisk model for our four estimators: (i) pure Monte Carlo (MC), (ii) Full Hessian, (iii) Hessian Diagonal, and (iv) Hessian-vector product + local approximation (HVP+Local). Additionally, we compare our methods to the estimator described in [19], which is based on the score function estimator and a control variate method; we use a first-order delta method approximation of the model term, which admits a closed-form control variate term. Each entry in the table measures the percent of the variance of the pure Monte Carlo estimator. We show the average variance over each component, Ave V(·), and the variance of the norm, V(‖·‖). We separate out variance in the mean parameters, g_m, the log-scale parameters, ln g_s, and the entire vector, g_λ.

The reduction in variance is dramatic. Using HVP+Local, in the norm of the mean parameters we see between an 80× and a 3,000× reduction in variance, depending on the progress of the optimizer. The importance of the full Hessian-vector product for reducing mean parameter variance is also demonstrated, as the Hessian diagonal only reduces mean parameter variance by a factor of 2-5×.
For the variational scale parameters, ln g_s, we see that early on the HVP+Local approximation is able to reduce parameter variance by a large factor (≈2,000×). However, at later iterates the HVP+Local scale parameter variance is on par with the Monte Carlo estimator, while the full Hessian estimator still enjoys huge variance reduction. This indicates that, by this point, most of the remaining noise comes from the local Hessian-diagonal estimator. We also note that in this problem, most of the estimator variance is in the mean parameters. Because of this, the variance in the norm of the entire parameter gradient, g_λ, is reduced by 100-5,000×. We found that the score function estimator (with the delta method control variate) typically has much higher variance than the baseline reparameterization gradient estimator (often by a factor of 10-50×). In Appendix D we report results for other values of L.

Optimizer Convergence and Stability. We compare the optimization traces for the frisk and bnn models for the MC and HVP+Local estimators under various conditions. At each iteration we estimate the true ELBO value using 2000 Monte Carlo samples. We optimize the ELBO objective using adam [13] for two step sizes, each trace starting at the same value of λ_0.

Figure 3: MCVI optimization traces (ELBO vs. wall clock, in seconds) applied to the frisk model for two values of L and step size; (a) adam with step size = 0.05, (b) adam with step size = 0.10. We run the standard MC gradient estimator (solid line) and the RV-RGE with L = 2 and 10 samples.
Figure 4: MCVI optimization traces (ELBO vs. wall clock, in seconds) for the bnn model applied to the wine data for various L and step sizes; (a) adam with step size = 0.05, (b) adam with step size = 0.10. The standard MC gradient estimator (dotted) was run with 2, 10, and 50 samples; RV-RGE (solid) was run with 2 and 10 samples. In 4b the 2-sample MC estimator falls below the frame.

Figure 3 compares ELBO optimization traces for L = 2 and L = 10 samples and step sizes .05 and .1 for the frisk model. We see that the HVP+Local estimators make early progress and converge quickly. We also see that the L = 2 pure MC estimator results in noisy optimization paths. Figure 4 shows objective value as a function of wall clock time under various settings for the bnn model. The HVP+Local estimator does more work per iteration; however, it tends to converge faster. We observe the L = 10 HVP+Local estimator outperforming the L = 50 MC estimator.

5 Conclusion

Variational inference reframes an integration problem as an optimization problem, with the caveat that each step of the optimization procedure solves an easier integration problem. For general models, each sub-integration problem is itself intractable and must be estimated, typically with Monte Carlo samples. Our work has shown that we can use more information about the variational family to create tighter estimators of the ELBO gradient, which leads to faster and more stable optimization. The efficacy of our approach relies on the complexity of the RGE distribution being well captured by linear structure, which may not be true for all models. However, we found the idea effective for non-conjugate hierarchical Bayesian models and a neural network.
Our presentation is a specific instantiation of a more general idea: using cheap linear structure to remove variation from stochastic gradient estimates. The method described in this work is tailored to Gaussian approximating families for Monte Carlo variational inference, but could be easily extended to location-scale families. We plan to extend this idea to more flexible variational distributions, including flow distributions [24] and hierarchical distributions [23], which would require approximating different functional forms within the variational objective. We also plan to adapt our technique to model and inference schemes with recognition networks [14], which would require back-propagating de-noised gradients into the parameters of an inference network.

Acknowledgements

The authors would like to thank Finale Doshi-Velez, Mike Hughes, Taylor Killian, Andrew Ross, and Matt Hoffman for helpful conversations and comments on this work. ACM is supported by the Applied Mathematics Program within the Office of Science Advanced Scientific Computing Research of the U.S. Department of Energy under contract No. DE-AC02-05CH11231. NJF is supported by a Washington Research Foundation Innovation Postdoctoral Fellowship in Neuroengineering and Data Science. RPA is supported by NSF IIS-1421780 and the Alfred P. Sloan Foundation.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017. [3] Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214–1229, 2007. [4] David M Blei, Alp Kucukelbir, and Jon D McAuliffe.
Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017. [5] Andrew Gelman and Jennifer Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006. [6] Andrew Gelman, Jeffrey Fagan, and Alex Kiss. An analysis of the NYPD’s stop-and-frisk policy in the context of claims of racial bias. Journal of the American Statistical Association, 102:813–823, 2007. [7] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer Science & Business Media, 2004. [8] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2013. [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. [10] Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. 2016. [11] Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013. [12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013. [13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015. [14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, 2014. [15] Dougal Maclaurin, David Duvenaud, Matthew Johnson, and Ryan P. Adams. Autograd: Reverse-mode differentiation of native Python, 2015. URL http://github.com/HIPS/autograd. [16] James Martens, Ilya Sutskever, and Kevin Swersky. 
Estimating the Hessian by back-propagating curvature. In Proceedings of the International Conference on Machine Learning, 2012. [17] Andriy Mnih and Danilo Rezende. Variational inference for Monte Carlo objectives. In Proceedings of the 33rd International Conference on Machine Learning, pages 2188–2196, 2016. [18] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016. [19] John Paisley, David M Blei, and Michael I Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, pages 1363–1370. Omnipress, 2012. [20] Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. PyTorch. https://github.com/pytorch/pytorch, 2017. [21] Barak A Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160, 1994. [22] Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pages 814–822, 2014. [23] Rajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. In International Conference on Machine Learning, 2016. [24] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1530–1538, 2015. [25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014. [26] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951. [27] Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: An asymptotically zero-variance gradient estimator for variational inference. arXiv preprint arXiv:1703.09194, 2017. [28] Francisco R Ruiz, Michalis Titsias RC AUEB, and David Blei.
The generalized reparameterization gradient. In Advances in Neural Information Processing Systems, pages 460–468, 2016. [29] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971–1979, 2014. [30] Dustin Tran, Matthew D Hoffman, Rif A Saurous, Eugene Brevdo, Kevin Murphy, and David M Blei. Deep probabilistic programming. In Proceedings of the International Conference on Learning Representations, 2017. [31] Chong Wang, Xi Chen, Alexander J Smola, and Eric P Xing. Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pages 181–189, 2013.
Min-Max Propagation

Christopher Srinivasa (University of Toronto; Borealis AI) christopher.srinivasa@gmail.com, Inmar Givoni (University of Toronto) inmar.givoni@gmail.com, Siamak Ravanbakhsh (University of British Columbia) siamakx@cs.ubc.ca, Brendan J. Frey (University of Toronto; Vector Institute; Deep Genomics) frey@psi.toronto.edu

Abstract

We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for “any” high-order function that can be minimized in O(ω), the min-max message update can be obtained using an efficient O(K(ω + log(K))) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks over machines such that the worst-case load is minimized.

1 Introduction

Min-max is a common optimization problem that involves minimizing a function with respect to some variables X and maximizing it with respect to others Z: min_X max_Z f(X, Z). For example, f(X, Z) may be the cost or loss incurred by a system X under different operating conditions Z, in which case the goal is to select the system whose worst-case cost is lowest. In Section 2, we show that factor graphs present a desirable framework for solving min-max problems, and in Section 3 we review min-max propagation, a min-max based belief propagation algorithm. Sum-product and min-sum inference using message passing has repeatedly produced groundbreaking results in various fields, from low-density parity-check codes in communication theory (Kschischang et al., 2001), to satisfiability in combinatorial optimization and latent-factor analysis in machine learning.
An important question is whether “min-max” propagation can also yield good approximate solutions when dealing with NP-hard problems. In this paper we answer this question in two parts. I) Our main contribution is the introduction of an efficient min-max message passing procedure for a generic family of high-order factors in Section 4. This enables us to approach new problems through their factor graph formulation. Section 5.2 leverages our solution for high-order factors to efficiently approximate the problem of makespan minimization using min-max propagation. II) To better understand the pros and cons of min-max propagation, Section 5.1 compares it with the alternative approach of reducing min-max inference to a sequence of Constraint Satisfaction Problems (CSPs). The feasibility of “exact” inference in a min-max semiring using the junction-tree method goes back to (Aji and McEliece, 2000). More recent work of (Vinyals et al., 2013) presents the application of min-max junction trees in a particular setting of the makespan problem. In this paper, we investigate the usefulness of min-max propagation in the loopy case and, more importantly, provide an efficient and generic algorithm to perform message passing with high-order factors.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Min-Max Optimization on Factor Graphs

We are interested in factorizable min-max problems min_X max_Z f(X, Z), i.e. min-max problems that can be efficiently factored into a group of simpler functions. These have the following properties:

1. The cardinality of either X or Z (say Z) is linear in available computing resources (e.g. Z is an indexing variable a whose cardinality is linear in the number of indices).
2. The other variable can be decomposed, so that X = (x1, . . . , xN).
3.
Given Z, the function f() depends on only a subset of the variables in X and/or exhibits a form which is easier to minimize individually than when combined in f(X, Z).

Using a ∈ F = {1, . . . , F} to index the values of Z and X_∂a to denote the subset of variables that f() depends on when Z = a, the min-max problem can be formulated as

min_X max_a f_a(X_∂a).   (1)

In the following we use i, j ∈ N = {1, . . . , N} to denote variable indices and a, b ∈ {1, . . . , F} for factor indices. A Factor Graph (FG) is a bipartite graphical representation of the above factorization properties. In it, each function (i.e. factor f_a) is represented by a square node and each variable is represented by a circular node. Each factor node is connected via individual edges to the variables on which it depends. We use ∂i to denote the set of neighbouring factor indices for variable i, and similarly we use ∂a to denote the index set of variables connected to factor a. This problem is related to the problems commonly analyzed using FGs (Bishop, 2006): the sum-product problem, Σ_X Π_a f_a(X_∂a), the min-sum problem, min_X Σ_a f_a(X_∂a), and the max-product problem, max_X Π_a f_a(X_∂a), in which we would respectively take the product, sum, and product of the factors in the FG rather than their max. When dealing with NP-hard problems, the FG contains one or more loops. While NP-hard problems have been represented and (approximately) solved directly using message passing on FGs in the sum-product, min-sum, and max-product cases, to our knowledge this has never been done in the min-max case.
3 Min-Max Propagation

An important question is how min-max can be computed on FGs. Consider the sum-product algorithm on FGs, which relies on the sum and product operations satisfying the distributive law a(b + c) = ab + ac (Aji and McEliece, 2000). Min and max operators also satisfy the distributive law: min(max(α, β), max(α, γ)) = max(α, min(β, γ)). Using the (min, max, ℜ) semiring, the belief propagation updates are as follows. Note that these updates are analogous to sum-product belief propagation updates, where the sum is replaced by min and the product operation is replaced by max.

Figure 1: Variable-to-factor message.

Variable-to-Factor Messages. The message sent from variable x_i to function f_a is

µ_ia(x_i) = max_{b ∈ ∂i\a} η_bi(x_i)   (2)

where η_bi(x_i) is the message sent from function f_b to variable x_i (as shown in Fig. 1) and ∂i \ a is the set of all neighbouring factors of variable i, with a removed.

Figure 2: Factor-to-variable message.

Factor-to-Variable Messages. The message sent from function f_a to variable x_i is computed using

η_ai(x_i) = min_{X_∂a\i} max( f_a(X_∂a), max_{j ∈ ∂a\i} µ_ja(x_j) )   (3)

Initialization Using the Identity. In the sum-product algorithm, messages are usually initialized using knowledge of the identity of the product operation. For example, if the FG is a tree with some node chosen as a root, messages can be passed from the leaves to the root and back to the leaves. The initial message sent from a variable that is a leaf involves taking the product over an empty set of incoming messages, and therefore the message is initialized to the identity of the group (ℜ+, ×), which is 1_× = 1. In our case, we need the identity of the (ℜ, max) semigroup, where max(1_max, x) = x for all x ∈ ℜ, that is, 1_max = −∞. Examining Eq. (3), we see that the message sent from a function that is a leaf will involve maximizing over an empty set of incoming messages. So, we can initialize the message sent from function f_a to variable x_i using η_ai(x_i) = min_{X_∂a\x_i} f_a(X_∂a).

Figure 3: Marginals.

Marginals. Min-max marginals, which involve “minimizing” over all variables except some x_i, can be computed by taking the max of all incoming messages at x_i, as in Fig. 3:

m(x_i) = min_{X_N\i} max_a f_a(X_∂a) = max_{b ∈ ∂i} η_bi(x_i)   (4)

The value of x_i that achieves the global solution is given by arg min_{x_i} m(x_i).
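A minimal sketch of these updates with brute-force message tables (the helper name, data structures, and the small chain-graph example in the test are ours, not the paper's implementation):

```python
import itertools

NEG = float("-inf")  # identity of max, used to initialize messages

def minmax_bp(factors, domains, sweeps=5):
    """Min-max propagation (Eqs. 2-4) with brute-force table messages.

    factors: name -> (tuple of variable names, f(assignment dict) -> value)
    domains: variable name -> list of values
    Returns the min-max marginals m(x_i); exact on tree-structured FGs.
    """
    v2f = {(i, a): dict.fromkeys(domains[i], NEG)
           for a, (vs, _) in factors.items() for i in vs}
    f2v = {(a, i): dict.fromkeys(domains[i], NEG)
           for a, (vs, _) in factors.items() for i in vs}
    for _ in range(sweeps):
        # factor-to-variable messages, Eq. (3)
        for a, (vs, f) in factors.items():
            for i in vs:
                others = [j for j in vs if j != i]
                for v in domains[i]:
                    f2v[(a, i)][v] = min(
                        max([f({**dict(zip(others, rest)), i: v})] +
                            [v2f[(j, a)][r] for j, r in zip(others, rest)])
                        for rest in itertools.product(*(domains[j] for j in others)))
        # variable-to-factor messages, Eq. (2)
        for (i, a) in v2f:
            for v in domains[i]:
                v2f[(i, a)][v] = max([NEG] + [f2v[(b, i)][v]
                    for b, (vs, _) in factors.items() if i in vs and b != a])
    # min-max marginals, Eq. (4)
    return {i: {v: max(f2v[(a, i)][v]
                       for a, (vs, _) in factors.items() if i in vs)
                for v in domains[i]} for i in domains}
```

On a tree, a few sweeps reach the fixed point, and min over any marginal recovers the global min-max value.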
4 Efficient Update for High-Order Factors

When passing messages from factors to variables, we are interested in efficiently evaluating Eq. (3). In its original form, this computation is exponential in the number of neighbouring variables |∂a|. Since many interesting problems require high-order factors in their FG formulation, many have investigated efficient min-sum and sum-product message passing through special families of, often sparse, factors (e.g. Tarlow et al., 2010; Potetz and Lee, 2008). For the time being, consider factors over binary variables x_i ∈ {0, 1} ∀i ∈ ∂a, and further assume that efficient minimization of the factor f_a is possible.

Assumption 1. The function f_a : X_∂a → ℜ can be minimized in time O(ω) with any subset B ⊂ ∂a of its variables fixed.

In the following we show how to calculate min-max factor-to-variable messages in O(K(ω + log(K))), where K = |∂a| − 1. In comparison to the limited settings in which high-order factors allow efficient min-sum and sum-product inference, we believe this result to be quite general.¹ The idea is to break the problem in half at each iteration. We show that for one of these halves, we can obtain the min-max value using a single evaluation of f_a. By reducing the size of the original problem in this way, we only need to choose the final min-max message value from a set of candidates that is at most linear in |∂a|.

Procedure. According to Eq. (3), in calculating the factor-to-variable message η_ai(x_i) for a fixed x_i = c_i, we are interested in efficiently solving the following optimization problem

min_{X_∂a\i} max( µ_1(x_1), µ_2(x_2), ..., µ_K(x_K), f(X_∂a\i, x_i = c_i) )   (5)

where, without loss of generality, we assume ∂a \ i = {1, . . . , K}, and for better readability we drop the index a in factors (f_a), messages (µ_ka, η_ai), and elsewhere when it is clear from the context. There are 2^K configurations of X_∂a\i, one of which is the minimizing solution.
We will divide this set in half at each iteration and save the minimum over one of these halves in the min-max candidate list C. The maximization part of the expression is equivalent to max( max(µ_1(x_1), µ_2(x_2), ..., µ_K(x_K)), f(X_∂a, x_i = c_i) ). Let µ_j1(c_j1) be the largest µ value, obtained at some index j_1 for some value c_j1 ∈ {0, 1}. In other words, µ_j1(c_j1) = max(µ_1(0), µ_1(1), ..., µ_K(0), µ_K(1)). For future use, let j_2, . . . , j_K be the indices of the next largest message values, up to the K largest ones, and let c_j2, . . . , c_jK be their corresponding assignments. Note that the same message index (e.g. µ_3(0), µ_3(1)) could appear in this sorted list at different locations. We then partition the set of all assignments to X_∂a\i into two sets of size 2^(K−1), depending on the assignment to x_j1: 1) x_j1 = c_j1 or 2) x_j1 = 1 − c_j1. The minimization of Eq. (5) can likewise be divided into two minimizations, each having x_j1 set to a different value. For x_j1 = c_j1, Eq. (5) simplifies to

η^(j1) = max( µ_j1(c_j1), min_{X_∂a\{i,j1}} f(X_∂a\{i,j1}, x_i = c_i, x_j1 = c_j1) )   (6)

where we need to minimize f subject to fixed x_i, x_j1. We repeat the procedure above at most K times, for j_1, . . . , j_m, . . . , j_K, where at each iteration we obtain a candidate solution η^(j_m) that we add to the candidate set C = {η^(j_1), . . . , η^(j_K)}. The final solution is the smallest value in the candidate solution set, min C.

Early Termination. If j_k = j_k′ for some 1 ≤ k < k′ ≤ K, it means that we have performed the minimization of Eq. (5) for both x_jk = 0 and x_jk = 1. This means that we can terminate the iterations and report the minimum in the current candidate set.

¹ Here we show that the minimization problem on any particular factor can be solved in a fixed amount of time. In many applications, doing this might itself involve running another entire inference algorithm. However, please note that our algorithm is agnostic to such choices for the optimization of individual factors.
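A runnable sketch of this procedure for binary variables (our own helper names; the oracle `f_min` stands in for the O(ω) solver of Assumption 1, and x_i = c_i is assumed baked into f; for exactness the sketch evaluates one extra candidate at early termination, so it makes at most K + 1 oracle calls):

```python
import itertools

def minmax_message(mu, f_min, K):
    """Min-max factor-to-variable message value for K binary neighbours.

    mu[j][v]  : incoming message values, j = 0..K-1, v in {0, 1}.
    f_min(fix): oracle returning min_X f(X) with the variables in the
                dict `fix` held fixed (stand-in for Assumption 1).
    Returns min over X of max(mu_0(x_0), ..., mu_{K-1}(x_{K-1}), f(X)).
    """
    entries = sorted(((mu[j][v], j, v) for j in range(K) for v in (0, 1)),
                     reverse=True)
    fixed, candidates = {}, []
    for val, j, v in entries:
        if j in fixed:
            # Both values of x_j seen: one last candidate for the
            # remaining region, then terminate early.
            candidates.append(max(val, f_min(fixed)))
            break
        # Candidate for the half with x_j = v (Eq. 6), ...
        candidates.append(max(val, f_min({**fixed, j: v})))
        fixed[j] = 1 - v  # ... then recurse into the complementary half.
    return min(candidates)
```

Each sorted entry triggers at most one oracle call, matching the O(K(ω + log(K))) bound up to the sorting step.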
Adding the cost of sorting, O(K log(K)), to the worst-case cost of the minimization of f() in Eq. (6) gives a total cost of O(K(log(K) + ω)).

Arbitrary Discrete Variables. This algorithm is not limited to binary variables. The main difference in dealing with cardinality D > 2 is that we run the procedure for at most K(D − 1) iterations, and for early termination, all values of a variable should appear in the top K(D − 1) incoming message values. For some factors, we can go further and calculate all factor-to-variable messages leaving f_a in time linear in |∂a|. The following section derives such an update rule for a type of factor that we use in the makespan application of Section 5.2.

4.1 Choose-One Constraint

If f_a(X_∂a) implements a constraint such that only a subset A_a ⊂ X_∂a of the possible configurations of X_∂a is allowed, then the message from function f_a to x_i simplifies to

η_ai(x′_i) = min_{X_∂a ∈ A_a | x_i = x′_i} max_{j ∈ ∂a\i} µ_ja(x_j)   (7)

In many applications, this can be further simplified by taking into account properties of the constraints. Here, we describe such a procedure for factors which enforce that exactly one of their binary variables be set to one and all others to zero. Consider the constraint f(x_1, ..., x_K) = δ(Σ_k x_k, 1) for binary variables x_k ∈ {0, 1}, where δ(x, x′) evaluates to −∞ iff x = x′ and to ∞ otherwise.² Using X_\i = (x_1, x_2, ..., x_{i−1}, x_{i+1}, ..., x_K) for X with x_i removed, Eq. (7) becomes

η_i(x_i) = min_{X_\i | Σ_{k=1}^K x_k = 1} max_{k ≠ i} µ_k(x_k)
         = { max_{k ≠ i} µ_k(0)   if x_i = 1
             min_{X_\i ∈ {(1,0,...,0),(0,1,...,0),...,(0,0,...,1)}} max_{k ≠ i} µ_k(x_k)   if x_i = 0 }   (8)

A naive implementation of the above update is O(K²) for each x_i, or O(K³) for sending messages to all neighbouring x_i. However, further simplification is possible. Consider the calculation of max_{k ≠ i} µ_k(x_k) for X_\i = (1, 0, . . . , 0) and X_\i = (0, 1, . . . , 0).
All but the first two terms in these two sets are the same (all zero), so most of the comparisons made when computing max_{k≠i} µ_k(x_k) for the first set can be reused when computing it for the second set. This extends to all K − 1 sets (1, 0, . . . , 0), . . . , (0, 0, . . . , 1), and also extends across the message updates for different x_i’s. After examining the shared terms in the maximizations, we see that all that is needed is

k^(1)_i = arg max_{k ≠ i} µ_k(0),   k^(2)_i = arg max_{k ≠ i, k^(1)_i} µ_k(0),   (9)

the indices of the largest and second largest values of µ_k(0) with i removed from consideration. Note that these can be computed for all neighbouring x_i in time linear in K, by finding the top three values of µ_k(0) and selecting two of them appropriately, depending on whether µ_i(0) is among those three values. Using this notation, the above update simplifies as follows:

η_i(x_i) = { µ_{k^(1)_i}(0)   if x_i = 1
             min( min_{k ≠ i, k^(1)_i} max(µ_{k^(1)_i}(0), µ_k(1)), max(µ_{k^(1)_i}(1), µ_{k^(2)_i}(0)) )   if x_i = 0 }
         = { µ_{k^(1)_i}(0)   if x_i = 1
             min( max(µ_{k^(1)_i}(0), min_{k ≠ i, k^(1)_i} µ_k(1)), max(µ_{k^(1)_i}(1), µ_{k^(2)_i}(0)) )   if x_i = 0 }   (10)

The term min_{k ≠ i, k^(1)_i} µ_k(1) also need not be recomputed for every x_i, since terms will be shared. Define the following:

s_i = arg min_{k ≠ i, k^(1)_i} µ_k(1),   (11)

the index of the smallest value of µ_k(1) with i and k^(1)_i removed from consideration. This can be computed efficiently for all i in time linear in K, by finding the smallest three values of µ_k(1) and selecting one of them appropriately, depending on whether µ_i(1) and/or µ_{k^(1)_i}(1) are among those three values.

² Similar to any other semiring, ±∞, as the identities of min and max, have a special role in defining constraints.
The resulting message update for the K-choose-1 constraint becomes

η_i(x_i) = { µ_{k^(1)_i}(0)   if x_i = 1
             min( max(µ_{k^(1)_i}(0), µ_{s_i}(1)), max(µ_{k^(1)_i}(1), µ_{k^(2)_i}(0)) )   if x_i = 0 }   (12)

This shows that messages to all neighbouring variables x_1, ..., x_K can be obtained in time that is linear in K. This type of constraint also has a tractable form in min-sum and sum-product inference, albeit of a different form (e.g. see Gail et al., 1981; Gupta et al., 2007).

5 Experiments and Applications

In the first part of this section we compare min-max propagation with the only alternative min-max inference method over FGs, which relies on a sum-product reduction. In the second part, we formulate the real-world problem of makespan minimization as a min-max inference problem with high-order factors. In this application, the sum-product reduction is not tractable: to formulate the makespan problem using a FG we need high-order factors that do not allow an efficient (polynomial-time) sum-product message update. However, min-max propagation can be applied using the efficient updates of the previous section.

5.1 Sum-Product Reduction vs. Min-Max Propagation

Like all belief propagation algorithms, min-max propagation is exact when the FG is a tree. However, our first point of interest is how min-max propagation performs on loopy graphs. For this, we compare its performance against the sum-product (or CSP) reduction. The sum-product reduction of (Ravanbakhsh et al., 2014) seeks the min-max value using bisection search over all values in the range of all factors in the FG, i.e. Y = {f_a(X_∂a) ∀a, X_∂a}. In each step of the search, a value y ∈ Y is used to reduce the min-max problem to a CSP. This CSP is satisfiable iff the min-max solution y* = min_X max_a f_a(X_∂a) is at most the current y. The complexity of this search procedure is O(log(|Y|)τ), where τ is the complexity of solving the CSP.
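The shared-term trick above can be sketched as follows (the helper name is ours and K ≥ 3 is assumed; the top-3 / bottom-3 scans use `sorted` for brevity, where a single O(K) pass would be used in practice):

```python
def choose_one_messages(mu):
    """All K outgoing messages of a 1-of-K constraint factor (Eq. 12).

    mu[k][v]: incoming message values for k = 0..K-1, v in {0, 1}.
    Requires K >= 3. Returns a list of (eta_k(0), eta_k(1)) pairs.
    """
    K = len(mu)
    top0 = sorted(range(K), key=lambda k: -mu[k][0])[:3]  # largest mu_k(0)
    bot1 = sorted(range(K), key=lambda k: mu[k][1])[:3]   # smallest mu_k(1)
    eta = []
    for i in range(K):
        k1 = next(k for k in top0 if k != i)            # argmax_{k!=i} mu_k(0)
        k2 = next(k for k in top0 if k not in (i, k1))  # second argmax
        si = next(k for k in bot1 if k not in (i, k1))  # argmin_{k!=i,k1} mu_k(1)
        eta_1 = mu[k1][0]                 # x_i = 1: all others must be 0
        eta_0 = min(max(mu[k1][0], mu[si][1]),          # the "1" is not at k1
                    max(mu[k1][1], mu[k2][0]))          # the "1" is at k1
        eta.append((eta_0, eta_1))
    return eta
```

After the two O(K)-size scans, every outgoing message is assembled from a constant number of precomputed values, so all K messages cost linear time overall, versus O(K³) for the naive update.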
Following that paper, we use Perturbed Belief Propagation (PBP) (Ravanbakhsh and Greiner, 2015) to solve the resulting CSPs.

Experimental Setup. Our setup is based on the following observations.

Observation 1. For any strictly monotonically increasing function g : ℜ → ℜ,

arg min_X max_a f_a(X_∂a) = arg min_X max_a g(f_a(X_∂a))

that is, only the ordering of the factor values affects the min-max assignment. By the same argument, application of a monotonic g() does not inherently change the behaviour of min-max propagation either.

Figure 4: Min-max performance of different methods (min-max propagation with random, max-support, and min-value decimation; the PBP CSP solver with min-max-propagation iteration counts and with 1000 iterations; an upper bound; and brute force) on Erdos-Renyi random graphs, plotting mean min-max solution against connectivity. Top: N = 10, Bottom: N = 100; Left: D = 4, Middle: D = 6, Right: D = 8.

Observation 2. Only the factor(s) which output(s) the max value, i.e. the max factor(s), matter. For all other factors, the variables involved can be set in any way, as long as the factors’ values remain smaller than or equal to that of the max factor. This means that variables that do not appear in the max factor(s), which we call free variables, could potentially assume any value without affecting the min-max value. Free variables can be identified from their uniform min-max marginals. This also means that the min-max assignment is not unique. This phenomenon is unique to min-max inference and does not appear in its min-sum and sum-product counterparts.
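Observation 1 is easy to verify numerically (the brute-force helper and the toy integer-valued tables in the test are our own constructions):

```python
import itertools

def minmax_assignments(factors, N, D):
    """Brute-force min-max value and all minimizing assignments.

    factors: list of (vars tuple, table dict keyed by value tuples).
    Variables take values in {0, ..., D-1}.
    """
    best, arg = float("inf"), []
    for X in itertools.product(range(D), repeat=N):
        val = max(tab[tuple(X[i] for i in vs)] for vs, tab in factors)
        if val < best:
            best, arg = val, [X]
        elif val == best:
            arg.append(X)
    return best, arg
```

Applying any strictly increasing g to every factor table leaves the set of minimizing assignments unchanged and maps the min-max value through g, since both max and min commute with monotone transformations.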
We rely on this observation in designing benchmark random min-max inference problems: i) we use integers as the range of factor values; ii) by selecting all factor values from the same range, we can use the number of factors as a control parameter for the difficulty of the inference problem. For N variables x_1, . . . , x_N, where each x_i ∈ {1, . . . , D}, we draw Erdos-Renyi graphs with edge probability p ∈ (0, 1] and treat each edge as a pairwise factor. Consider the factor f_a(x_i, x_j) = min(π(x_i), π′(x_j)), where π, π′ are permutations of {1, . . . , D}. With D = 2, this definition of f_a reduces to a 2-SAT factor. This setup for random min-max instances therefore generalizes different K-SAT settings, where a min-max solution min_X max_a f_a(X_∂a) = 1 for D = 2 corresponds to a satisfying assignment. The same argument with K > 2 establishes the “NP-hardness” of min-max inference in factor graphs. We test our setup on graphs with N ∈ {10, 100} variables and cardinality D ∈ {4, 6, 8}. For each choice of D and N, we run min-max propagation and the sum-product reduction for various connectivities of the Erdos-Renyi graph. Both methods use random sequential update. For N = 10 we also report the exact min-max solutions. Min-max propagation is run for a maximum of T = 1000 iterations or until convergence, whichever comes first. The number of iterations actually taken by min-max propagation is reported in the appendix. The PBP used in the sum-product reduction requires a fixed T; we report results for T equal to the worst-case min-max convergence iterations (see appendix) and for T = 1000 iterations. Each setting is repeated 10 times for a random graph of a fixed connectivity value p ∈ (0, 1].

Decimation. To obtain a final min-max assignment we need to fix the free variables. For this we use a decimation scheme similar to what is used with min-sum inference, or in finding a satisfying CSP assignment with sum-product.
We consider three different decimation procedures:

Random: Randomly choose a variable and set it to the state with the minimum min-max marginal value.
Min-value: Fix the variable with the minimum min-max marginal value.
Max-support: Choose the variable whose minimum marginal value occurs with the highest frequency.

Results. Fig. 4 compares the performance of the sum-product reduction that relies on PBP with min-max propagation and brute force. For min-max propagation we report the results for three different decimation procedures. Each column uses a different variable cardinality D. While this changes the range of values in the factors, we observe a similar trend in the performance of the different methods. In the top row, we also report the exact min-max value. As expected, increasing the number of factors (connectivity) increases the min-max value. Overall, the sum-product reduction (although asymptotically more expensive) produces slightly better results. Also, the different decimation schemes do not significantly affect the results in these experiments.

5.2 Makespan Minimization

Figure 5: Makespan FG.

The objective in the makespan problem is to schedule a set of given jobs, each with a load, on machines which operate in parallel, such that the total load of the machine with the largest total load (i.e. the makespan) is minimized (Pinedo, 2012). This problem has a range of applications, for example in the energy sector, where the machines represent turbines and the jobs represent electrical power demands. Given N distinct jobs N = {1, . . . , n, . . . , N} and M machines M = {1, . . . , m, . . . , M}, where p_nm represents the load job n places on machine m, we let the binary assignment variable x_nm indicate whether or not job n is assigned to machine m. The task is to find the set of assignments x_nm ∀n ∈ N, ∀m ∈ M which minimizes the total cost function below, while satisfying the associated set of constraints:

min_X max_m ( Σ_{n=1}^N p_nm x_nm )   s.t.
Σ_{m=1}^M x_nm = 1,   x_nm ∈ {0, 1}   ∀n ∈ N, m ∈ M   (13)

Figure 6: Min-max ratio to a lower bound (lower is better) obtained by LPT, with its 4/3-approximation guarantee, versus min-max propagation using different decimation procedures. N is the number of jobs and M is the number of machines. In this setting, all jobs have the same run-time across all machines.

M    N    LPT      Min-Max Prop     Min-Max Prop        Min-Max Prop
                   (Random Dec.)    (Max-Support Dec.)  (Min-Value Dec.)
8    25   1.178    1.183            1.091               1.128
8    26   1.144    1.167            1.079               1.112
8    33   1.135    1.144            1.081               1.093
8    34   1.117    1.132            1.071               1.086
8    41   1.112    1.117            1.055               1.077
8    42   1.094    1.109            1.079               1.074
10   31   1.184    1.168            1.110               1.105
10   32   1.165    1.186            1.109               1.111
10   41   1.138    1.183            1.077               1.088
10   42   1.124    1.126            1.074               1.090
10   51   1.112    1.131            1.077               1.081
10   52   1.102    1.100            1.051               1.076

The makespan minimization problem is NP-hard for M = 2 and strongly NP-hard for M > 2 (Garey and Johnson, 1979). Two well-known approximation algorithms are the 2-approximation greedy algorithm and the 4/3-approximation greedy algorithm. In the former, all machines are initialized as empty. We then select one job at random and assign it to the machine with the least total load given the current job assignments. We repeat this process until no jobs remain. This algorithm is guaranteed to give a schedule with a makespan no more than 2 times larger than that of the optimal schedule (Behera, 2012; Behera and Laha, 2012). The 4/3-approximation algorithm, a.k.a. the Longest Processing Time (LPT) algorithm, operates similarly to the 2-approximation algorithm, with the exception that, at each iteration, we always take the job with the next largest load rather than selecting one of the remaining jobs at random (Graham, 1966).

FG Representation. Fig. 5 shows the FG with binary variables x_nm, where the factors are

f_m(x_1m, . . . , x_Nm) = Σ_{n=1}^N p_nm x_nm   ∀m ;
g_n(x_n1, . . . , x_nM) = { 0 if Σ_{m=1}^M x_nm = 1; ∞ otherwise }   ∀n

where f() computes the total load of a machine and g() enforces the assignment constraint in Eq. (13).
We see that the following min-max problem over this FG minimizes the makespan:

min_X max( max_m f_m(x_1m, ..., x_Nm), max_n g_n(x_n1, ..., x_nM) ).   (14)

Using the procedure of Section 4.1 for passing messages through the g constraints, and the procedure of Section 4 for f, we can efficiently approximate the min-max solution of Eq. (14) by message passing. Note that the factor f() in the sum-product reduction of this FG has a non-trivial form that does not allow an efficient message update.

Figure 7: Min-max ratio (LP relaxation to that) of min-max propagation versus the same for the method of (Vinyals et al., 2013) (higher is better). Modes 0, 1 and 2 correspond to uncorrelated, machine-correlated and machine-task-correlated processing times, respectively.

Mode   N/M   (Vinyals et al., 2013)   Min-Max Prop
0      5     0.93 (0.03)              0.95 (0.01)
0      10    0.94 (0.01)              0.93 (0.01)
0      15    0.94 (0.00)              0.90 (0.01)
1      5     0.90 (0.01)              0.86 (0.07)
1      10    0.90 (0.00)              0.88 (0.00)
1      15    0.87 (0.01)              0.73 (0.03)
2      5     0.81 (0.01)              0.89 (0.01)
2      10    0.81 (0.01)              0.89 (0.01)
2      15    0.78 (0.01)              0.86 (0.01)

Results. In an initial set of experiments, we compare min-max propagation (with different decimation procedures) against LPT on a set of benchmark experiments designed in (Gupta and Ruiz-Torres, 2001) for the identical-machine version of the problem, i.e. a task has the same processing time on all machines. Fig. 6 shows the scenario where min-max propagation performs best against the LPT algorithm. We see that this scenario involves large instances (from the additional results in the appendix, we see that our framework does not perform as well on small instances). From this table, we also see that max-support decimation almost always outperforms the other decimation schemes. We then test min-max propagation with max-support decimation on a more difficult version of the problem: the unrelated-machine model, where each job has a different processing time on each machine.
Specifically, we compare our method against that of (Vinyals et al., 2013), which also uses the distributive law for min-max inference to solve a load balancing problem. However, that paper studies a sparsified version of the unrelated-machines problem where tasks are restricted to a subset of machines (i.e., they have infinite processing time on the remaining machines). This restriction allows their loopy graph to be decomposed into an almost equivalent tree structure, something that cannot be done in the general setting. Nevertheless, we can still compare their results to what we can achieve using min-max propagation with infinite-time constraints. We use the same problem setup with three different ways of generating the processing times (uncorrelated, machine correlated, and machine/task correlated) and compare our answers to IBM's CPLEX solver exactly as the authors do in that paper (where a higher ratio is better). Fig. 7 shows a subset of results. Here again, min-max propagation works best for large instances. Overall, despite the generality of our approach, the results are comparable.

6 Conclusion

This paper demonstrates that FGs are well suited to modeling min-max optimization problems that factorize. To solve such problems we introduced and evaluated min-max propagation, a variation of the well-known belief propagation algorithm. In particular, we introduced an efficient procedure for passing min-max messages through high-order factors that applies to a wide range of functions. This procedure gives min-max propagation capabilities unavailable to min-sum and sum-product message passing, and it could enable its application to a wide range of problems. In this work we demonstrated how to leverage efficient min-max propagation in the presence of high-order factors to approximate the NP-hard makespan problem.
In the future, we plan to investigate the application of min-max propagation to a variety of combinatorial problems, known as bottleneck problems (Edmonds and Fulkerson, 1970), that can be naturally formulated as min-max inference problems over FGs.

References

S. M. Aji and R. J. McEliece. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325–343, 2000.
D. Behera. Complexity on parallel machine scheduling: A review. In S. Sathiyamoorthy, B. E. Caroline, and J. G. Jayanthi, editors, Emerging Trends in Science, Engineering and Technology, Lecture Notes in Mechanical Engineering, pages 373–381. Springer India, 2012.
D. K. Behera and D. Laha. Comparison of heuristics for identical parallel machine scheduling. Advanced Materials Research, 488:1708–1712, 2012.
C. M. Bishop. Pattern recognition and machine learning. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
J. Edmonds and D. R. Fulkerson. Bottleneck extrema. Journal of Combinatorial Theory, 8(3):299–306, 1970.
M. H. Gail, J. H. Lubin, and L. V. Rubinstein. Likelihood calculations for matched case-control studies and survival studies with tied death times. Biometrika, pages 703–707, 1981.
M. R. Garey and D. S. Johnson. Computers and intractability, volume 174. Freeman, San Francisco, 1979.
R. L. Graham. Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45(9):1563–1581, 1966.
J. N. D. Gupta and A. J. Ruiz-Torres. A listfit heuristic for minimizing makespan on identical parallel machines. Production Planning & Control, 12(1):28–36, 2001.
R. Gupta, A. A. Diwan, and S. Sarawagi. Efficient inference with cardinality-based clique potentials. In Proceedings of the 24th International Conference on Machine Learning, pages 329–336. ACM, 2007.
F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
M. Pinedo. Scheduling: theory, algorithms, and systems.
Springer, 2012.
B. Potetz and T. S. Lee. Efficient belief propagation for higher-order cliques using linear constraint nodes. Computer Vision and Image Understanding, 112(1):39–54, 2008.
S. Ravanbakhsh and R. Greiner. Perturbed message passing for constraint satisfaction problems. Journal of Machine Learning Research, 16:1249–1274, 2015.
S. Ravanbakhsh, C. Srinivasa, B. Frey, and R. Greiner. Min-max problems on factor graphs. In Proceedings of the 31st International Conference on Machine Learning, ICML '14, 2014.
D. Tarlow, I. Givoni, and R. Zemel. HOP-MAP: Efficient message passing with high order potentials. Journal of Machine Learning Research - Proceedings Track, 9:812–819, 2010.
M. Vinyals, K. S. Macarthur, A. Farinelli, S. D. Ramchurn, and N. R. Jennings. A message-passing approach to decentralized parallel machine scheduling. The Computer Journal, 2013.
Statistical Cost Sharing

Eric Balkanski, Harvard University, ericbalkanski@g.harvard.edu
Umar Syed, Google NYC, usyed@google.com
Sergei Vassilvitskii, Google NYC, sergeiv@google.com

Abstract

We study the cost sharing problem for cooperative games in situations where the cost function C is not available via oracle queries, but must instead be learned from samples drawn from a distribution, represented as tuples (S, C(S)) for different subsets S of players. We formalize this approach, which we call STATISTICAL COST SHARING, and consider the computation of the core and the Shapley value. Expanding on the work by Balcan et al. [2015], we give precise sample complexity bounds for computing cost shares that satisfy the core property with high probability for any function with a non-empty core. For the Shapley value, which has never been studied in this setting, we show that for submodular cost functions with bounded curvature κ it can be approximated from samples drawn from the uniform distribution to a √(1−κ) factor, and that this bound is tight. We then define statistical analogues of the Shapley axioms, derive from them a notion of statistical Shapley value, and show that it can be approximated arbitrarily well from samples drawn from any distribution and for any function.

1 Introduction

The cost sharing problem asks for an equitable way to split the cost of a service among all of the participants. Formally, there is a cost function C defined over all subsets S ⊆ N of a ground set of elements, or players, and the objective is to fairly divide the cost of the ground set, C(N), among the players. Unlike traditional learning problems, the goal here is not to predict the cost of the service, but rather to learn which ways of dividing the cost among the players are equitable. Cost sharing is central to cooperative game theory, and there is a rich literature developing the key concepts and principles to reason about this topic.
Two popular cost sharing concepts are the core [Gillies, 1959], where no group of players has an incentive to deviate, and the Shapley value [Shapley, 1953], which is the unique vector of cost shares satisfying four natural axioms. While both the core and the Shapley value are easy to define, computing them poses additional challenges. One obstacle is that the computation of the cost shares requires knowledge of costs in myriad different scenarios. For example, computing the exact Shapley value requires one to look at the marginal contribution of a player over all possible subsets of others. Recent work [Liben-Nowell et al., 2012] shows that one can find approximate Shapley values for a restricted subset of cost functions by looking at the costs for polynomially many specifically chosen subsets. In practice, however, another roadblock emerges: one cannot simply query for the cost of an arbitrary subset. Rather, the subsets are passively observed, and the costs of unobserved subsets are simply unknown. We share the opinion of Balcan et al. [2016] that the main difficulty with using cost sharing methods in concrete applications is the information needed to compute them. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Concretely, consider the following cost sharing applications. Attributing Battery Consumption on Mobile Devices. A modern mobile phone or tablet is typically running a number of distinct apps concurrently. In addition to foreground processes, a lot of activity may be happening in the background: email clients may be fetching new mail, GPS may be active for geo-fencing applications, messaging apps are polling for new notifications, and so on. All of these activities consume power; the question is how much of the total battery consumption should be attributed to each app? This problem is non-trivial because the operating system induces cooperation between apps to save battery power. 
For example, there is no need to activate the GPS sensor twice if two different apps request the current location almost simultaneously.

Understanding Black Box Learning. Deep neural networks are prototypical examples of black box learning, and it is almost impossible to tease out the contribution of a particular feature to the final output. Particularly in situations where the features are binary, cooperative game theory gives a formal way to analyze and derive these contributions. While one can evaluate the objective function on any subset of features, deep networks are notorious for performing poorly on certain out-of-sample examples [Goodfellow et al., 2014, Szegedy et al., 2013], which may lead to misleading conclusions when using traditional cost sharing methods.

We model these cost sharing questions as follows. Let N be the set of possible players (apps or features), and for a subset S ⊆ N, let C(S) denote the cost of S. This cost represents the total power consumed over a standard period of time, or the rewards obtained by the learner. We are given ordered pairs (S1, C(S1)), (S2, C(S2)), . . . , (Sm, C(Sm)), where each Si ⊆ N is drawn independently from some distribution D. The STATISTICAL COST SHARING problem asks for reasonable cost sharing strategies in this setting.

1.1 Our results

We build on the approach of Balcan et al. [2015], which studied STATISTICAL COST SHARING in the context of the core, and assume that only partial data about the cost function is observed. The authors showed that cost shares that are likely to respect the core property can be obtained for certain restricted classes of functions. Our main result is an algorithm that generalizes these results to all games where the core is non-empty, and we derive sample complexity bounds showing exactly the number of samples required to compute cost shares (Theorems 1 and 2). While the main approach of Balcan et al.
[2015] relied on first learning the cost function and then computing cost shares, we show how to proceed directly, computing cost shares without explicitly learning a good estimate of the cost function. This high-level idea was independently discovered by Balcan et al. [2016]; our approach here greatly improves the sample complexity bounds, culminating in a result logarithmic in the number of players. We also show that approximately satisfying the core with probability one is impossible in general (Theorem 3). We then focus on the Shapley value, which has never been studied in the STATISTICAL COST SHARING context. We obtain a tight √(1−κ)-multiplicative approximation of the Shapley value for submodular functions with bounded curvature κ over the uniform distribution (Theorems 4 and 11), but show that it cannot be approximated to within any bounded factor in general, even for the restricted class of coverage functions, which are learnable, over the uniform distribution (Theorem 5). We also introduce a new cost sharing method called the data-dependent Shapley value, which is the unique solution (Theorem 6) satisfying four natural axioms resembling the Shapley axioms (Definition 7), and which can be approximated arbitrarily well from samples for any bounded function and any distribution (Theorem 7).

1.2 Related work

There are two avenues of work which we build upon. The first is the notion of cost sharing in cooperative games, first introduced by Von Neumann and Morgenstern [1944]. We consider the Shapley value and the core, two popular solution concepts for cost sharing in cooperative games. The Shapley value [Shapley, 1953] is studied in algorithmic mechanism design [Anshelevich et al., 2008, Balkanski and Singer, 2015, Feigenbaum et al., 2000, Moulin, 1999]. For applications of the Shapley value, see the surveys by Roth [1988] and Winter [2002].
A naive computation of the Shapley value of a cooperative game would take exponential time; recently, methods for efficiently approximating 2 the Shapley value have been suggested [Bachrach et al., 2010, Fatima et al., 2008, Liben-Nowell et al., 2012, Mann, 1960] for some restricted settings. The core, introduced by Gillies [1959], is another well-studied solution concept for cooperative games. Bondareva [1963] and Shapley [1967] characterized when the core is non-empty. The core has been studied in the context of multiple combinatorial games, such as facility location Goemans and Skutella [2004] and maximum flow Deng et al. [1999]. In cases with no solutions in the core or when it is computationally hard to find one, the balance property has been relaxed to hold approximately [Devanur et al., 2005, Immorlica et al., 2008]. In applications where players submit bids, cross-monotone cost sharing, a concept stronger than the core that satisfies the group strategy proofness property, has attracted a lot of attention [Immorlica et al., 2008, Jain and Vazirani, 2002, Moulin and Shenker, 2001, Pál and Tardos, 2003]. We note that these applications are sufficiently different from the ones we are studying in this work. The second is the recent work in econometrics and computational economics that aims to estimate critical concepts directly from a limited data set, and reason about the sample complexity of the computational problems. Specifically, in all of the above papers, the algorithm must be able to query or compute C(S) for an arbitrary set S ✓N. In our work, we are instead given a collection of samples from some distribution; importantly the algorithm does not know C(S) for sets S that were not sampled. This approach was first introduced by Balcan et al. [2015], who showed how to compute an approximate core for some families of games. Their main technique is to first learn the cost function C from samples and then to use the learned function to compute cost shares. 
The authors also showed that there exist games that are not PAC-learnable but that have an approximate core that can be computed. Independently, in recent follow-up work, the authors showed how to extend their approach to compute a probably approximate core for all games with a non-empty core, and gave weak sample complexity bounds [Balcan et al., 2016]. We improve upon their bounds, showing that a logarithmic number of samples suffices when the spread of the cost function is bounded.

2 Preliminaries

A cooperative game is defined by an ordered pair (N, C), where N is the ground set of elements, also called players, and C : 2^N → R≥0 is the cost function mapping each coalition S ⊆ N to its cost, C(S). The ground set of size n = |N| is called the grand coalition and we denote the elements by N = {1, . . . , n} = [n]. We assume that C(∅) = 0, C(S) ≥ 0 for all S ⊆ N, and that max_S C(S) is bounded by a polynomial in n, which are standard assumptions. We will slightly abuse notation and use C(i) instead of C({i}) for i ∈ N when it is clear from the context. We recall three specific classes of functions. Submodular functions exhibit the property of diminishing returns: C_S(i) ≥ C_T(i) for all S ⊆ T ⊆ N and i ∈ N, where C_S(i) is the marginal contribution of element i to set S, i.e., C_S(i) = C(S ∪ {i}) − C(S). Coverage functions are the canonical example of submodular functions. A function is coverage if it can be written as C(S) = |∪_{i∈S} T_i| where T_i ⊆ U for some universe U. Finally, we also consider the simple class of additive functions, such that C(S) = Σ_{i∈S} C(i). A cost allocation is a vector ψ ∈ R^n where ψ_i is the share of element i. We call a cost allocation balanced if Σ_{i∈N} ψ_i = C(N). Given a cooperative game (N, C), the goal in the cost sharing literature is to find "desirable" balanced cost allocations. Most proposals take an axiomatic approach, defining a set of axioms that a cost allocation should satisfy.
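To make the coverage class and the diminishing-returns property concrete, here is a small illustrative sketch (the universe and the sets T_i are made up): it builds a coverage cost function C(S) = |∪_{i∈S} T_i| and checks one instance of C_S(i) ≥ C_T(i) for S ⊆ T.

```python
def coverage(sets_by_player):
    """Build a coverage cost function C(S) = |union of T_i for i in S|."""
    def C(S):
        covered = set()
        for i in S:
            covered |= sets_by_player[i]
        return len(covered)
    return C

# Hypothetical universe U = {a, b, c, d}; T_i are the items each player covers.
T = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
C = coverage(T)

# Diminishing returns: the marginal contribution of player 0 to the
# empty set is at least its contribution to the superset {1}.
marg_empty = C({0}) - C(set())    # = 2 ({a, b} are both new)
marg_super = C({0, 1}) - C({1})   # = 1 (only "a" is new; "b" already covered)
print(marg_empty, marg_super)     # prints 2 1
```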
These lead to the concepts of the Shapley value and the core, which we define next. A useful tool to describe and compute these cost sharing concepts is permutations. We denote by σ a uniformly random permutation of N and by S_{σ<i} the players before i ∈ N in permutation σ.

2.1 The core

The core is a balanced cost allocation where no group of players has an incentive to deviate from the grand coalition: for any subset of players, the sum of their shares does not exceed their collective cost.

Definition 1. A cost allocation ψ is in the core of function C if the following properties are satisfied:
• Balance: Σ_{i∈N} ψ_i = C(N),
• Core property: for all S ⊆ N, Σ_{i∈S} ψ_i ≤ C(S).

The core is a natural cost sharing concept. For example, in the battery blame scenario it translates to the following assurance: no matter what other apps are running concurrently, an app is never blamed for more battery consumption than if it were running alone. Given that app developers are typically business competitors, and that a mobile device's battery is a very scarce resource, such a guarantee can rather neatly avoid a great deal of finger-pointing. Unfortunately, for a given cost function C the core may not exist (we say the core is empty), or there may be multiple (or even infinitely many) cost allocations in the core. For submodular functions C, the core is guaranteed to be non-empty and one allocation in the core can be computed in polynomial time. Specifically, for any permutation σ, the cost allocation such that ψ_i = C(S_{σ<i} ∪ {i}) − C(S_{σ<i}) is in the core.

2.2 The Shapley value

The Shapley value provides an alternative cost sharing method. For a game (N, C) we denote it by φ^C, dropping the superscript when it is clear from the context. While the Shapley value may not satisfy the core property, it satisfies the following four axioms:
• Balance: Σ_{i∈N} φ_i = C(N).
• Symmetry: for all i, j ∈ N, if C(S ∪ {i}) = C(S ∪ {j}) for all S ⊆ N \ {i, j}, then φ_i = φ_j.
• Zero element: for all i ∈ N, if C(S ∪ {i}) = C(S) for all S ⊆ N, then φ_i = 0.
• Additivity: for two games (N, C1) and (N, C2) with the same players but different cost functions C1 and C2, let φ^1 and φ^2 be the respective cost allocations. Consider a new game (N, C1 + C2), and let φ' be the cost allocation for this game. Then for all elements i ∈ N, φ'_i = φ^1_i + φ^2_i.

Each of these axioms is natural: balance ensures that the cost of the grand coalition is distributed among all of the players. Symmetry states that two identical players should have equal shares. Zero element verifies that a player that adds zero cost to any coalition should have zero share. Finally, additivity confirms that costs combine in a linear manner. Surprisingly, the cost allocation satisfying all four axioms is unique. Moreover, the Shapley value φ can be written as the following summation:

$$\varphi_i = \mathbb{E}_{\sigma}\bigl[C(S_{\sigma<i} \cup \{i\}) - C(S_{\sigma<i})\bigr] = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\bigl(C(S \cup \{i\}) - C(S)\bigr).$$

This expression is the expected marginal contribution C(S ∪ {i}) − C(S) of i over a set of players S who arrived before i in a random permutation of N. As the summation is over exponentially many terms, the Shapley value generally cannot be computed exactly in polynomial time. However, several sampling approaches have been suggested to approximate the Shapley value for specific classes of functions [Bachrach et al., 2010, Fatima et al., 2008, Liben-Nowell et al., 2012, Mann, 1960].

2.3 Statistical cost sharing

With the sole exception of Balcan et al. [2015], previous work in cost sharing critically assumes that the algorithm is given oracle access to C, i.e., it can query, or determine, the cost C(S) for any S ⊆ N. In this paper, we aim to (approximately) compute the Shapley value and other cost allocations from samples, without oracle access to C, and with a number of samples that is polynomial in n.

Definition 2. Consider a cooperative game with players N and cost function C.
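The permutation form of the Shapley value above can be computed exactly for tiny games by brute force. The following sketch (illustrative only; the three-player cost table is hypothetical) averages marginal contributions over all n! permutations, which is only feasible for very small n and is exactly what the sampling approaches cited above avoid.

```python
from itertools import permutations

def exact_shapley(players, C):
    """Exact Shapley value: the expected marginal contribution of each
    player over a uniformly random permutation (exponential time)."""
    perms = list(permutations(players))
    phi = {i: 0.0 for i in players}
    for sigma in perms:
        seen = set()
        for i in sigma:
            phi[i] += C(seen | {i}) - C(seen)  # marginal contribution of i
            seen.add(i)
    return {i: phi[i] / len(perms) for i in players}

# A hypothetical additive game: C(S) = sum of singleton costs, in which
# case the Shapley value of each player is exactly its own cost.
cost = {1: 4.0, 2: 2.0, 3: 6.0}
C = lambda S: sum(cost[i] for i in S)
print(exact_shapley([1, 2, 3], C))
```

The balance axiom can be checked directly on the output: the shares sum to C(N).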
In the STATISTICAL COST SHARING problem we are given pairs (S1, C(S1)), (S2, C(S2)), . . . , (Sm, C(Sm)), where each Si is drawn i.i.d. from a distribution D over 2^N. The goal is to find a cost allocation ψ ∈ R^n.

In what follows we will often refer to an individual (S, C(S)) pair as a sample. It is tempting to reduce STATISTICAL COST SHARING to classical cost sharing by simply collecting enough samples to use known algorithms. For example, Liben-Nowell et al. [2012] showed how to approximate the Shapley value with polynomially many queries C(S). However, if the distribution D is not aligned with these specific queries, which is the case even for the uniform distribution, emulating these algorithms in our setting requires exponentially many samples. Balcan et al. [2015] showed how to instead first learn an approximation to C from the given samples and then compute cost shares for the learned function, but their results hold only for a limited number of games and cost functions C. We show that a more powerful approach is to compute cost shares directly from the data, without explicitly learning the cost function first.

3 Approximating the Core from Samples

In this section, we consider the problem of finding cost allocations from samples that satisfy relaxations of the core. A natural approach to this problem is to first learn the underlying model C from the data and to then compute a cost allocation for the learned function. As shown in Balcan et al. [2015], this approach works if C is PAC-learnable, but there exist functions C that are not PAC-learnable and for which a cost allocation that approximately satisfies the core can still be computed. The main result of this section shows that a cost allocation that approximates the core property can be computed from samples for any function with a non-empty core. We first give a sample complexity bound that is linear in the number n of players, a result which was independently discovered by Balcan et al. [2016].
With a more intricate analysis, we then improve this sample complexity to be logarithmic in n, but at the cost of a weaker relaxation. Our algorithm, which runs in polynomial time, directly computes a cost allocation that empirically satisfies the core property, i.e., it satisfies the core property on all of the samples. We argue, by leveraging VC-dimension and Rademacher complexity-based generalization bounds, that the same cost allocation will likely satisfy the core property on newly drawn samples as well. We also propose a stronger notion of the approximate core, and prove that it cannot be computed by any algorithm. This hardness result is information theoretic and is not due to running time limitations. The proofs in this section are deferred to Appendix B. We begin by defining three notions of the approximate core: the probably approximately stable (Balcan et al. [2016]), mostly approximately stable, and probably mostly approximately stable cores.

Definition 3. Given δ, ε > 0, a cost allocation ψ such that Σ_{i∈N} ψ_i = C(N) is in
• the probably approximately stable core if Pr_{S∼D}[Σ_{i∈S} ψ_i ≤ C(S)] ≥ 1 − δ for all D (see Balcan et al. [2015]),
• the mostly approximately stable core over D if (1 − ε) Σ_{i∈S} ψ_i ≤ C(S) for all S ⊆ N,
• the probably mostly approximately stable core if Pr_{S∼D}[(1 − ε) Σ_{i∈S} ψ_i ≤ C(S)] ≥ 1 − δ for all D.

For each of these notions, our goal is to efficiently compute a cost allocation in the approximate core, in the following sense.

Definition 4. A cost allocation ψ is efficiently computable for the class of functions C over distribution D if, for all C ∈ C and any Δ, δ, ε > 0, given C(N) and m = poly(n, 1/Δ, 1/δ, 1/ε) samples (Sj, C(Sj)) with each Sj drawn i.i.d. from distribution D, there exists an algorithm that computes ψ with probability at least 1 − Δ over both the samples and the choices of the algorithm. We refer to the number of samples required to compute approximate cores as the sample complexity of the algorithm.
We first present our result for computing a probably approximately stable core with sample complexity that is linear in the number of players, which was also independently discovered by Balcan et al. [2016].

Theorem 1. The class of functions with a non-empty core has cost shares in the probably approximately stable core that are efficiently computable. The sample complexity is

$$O\!\left(\frac{n + \log(1/\Delta)}{\delta}\right).$$

The full proof of Theorem 1 is in Appendix B, and can be summarized as follows: We define a class of halfspaces which contains the core. Since we assume that C has a non-empty core, there exists a cost allocation in this class of halfspaces that satisfies both the core property on all the samples and the balance property. Given a set of samples, such a cost allocation can be computed with a simple linear program. We then use the VC-dimension of the class of halfspaces to show that the performance on the samples generalizes well to the performance on the distribution D.

We next show that the sample complexity dependence on n can be improved from linear to logarithmic if we relax the goal from computing a cost allocation in the probably approximately stable core to computing one in the probably mostly approximately stable core instead. The sample complexity of our algorithm also depends on the spread of the function C, defined as max_S C(S) / min_{S≠∅} C(S) (we assume min_{S≠∅} C(S) > 0).

Theorem 2. The class of functions with a non-empty core has cost allocations in the probably mostly approximately stable core that are efficiently computable with sample complexity

$$\left(\frac{1-\epsilon}{\epsilon\delta}\right)^{2} \left\lceil 128\,\tau(C)^2 \log(2n) + 8\,\tau(C)^2 \log(2/\Delta) \right\rceil = O\!\left(\left(\frac{\tau(C)}{\epsilon\delta}\right)^{2} \bigl(\log n + \log(1/\Delta)\bigr)\right),$$

where τ(C) = max_S C(S) / min_{S≠∅} C(S) is the spread of C. The full proof of Theorem 2 is in Appendix B. Its main steps are:

1. We find a cost allocation ψ which satisfies the core property on all samples, restricting the search to cost allocations with bounded ℓ1-norm.
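The "simple linear program" behind the Theorem 1 summary can be sketched as follows. This is an illustrative reconstruction (using scipy, with a made-up three-player sample set), not the authors' code: it finds any allocation that is balanced and satisfies the core property on every observed sample.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_core_lp(n, samples, C_N):
    """Find psi with sum(psi) = C(N) and, for every sampled coalition S,
    sum_{i in S} psi_i <= C(S). Returns None if infeasible.
    samples: list of (S, C(S)) pairs, S a set of players in range(n)."""
    # One inequality row per sampled coalition (empirical core property).
    A_ub = np.array([[1.0 if i in S else 0.0 for i in range(n)]
                     for S, _ in samples])
    b_ub = np.array([c for _, c in samples])
    # One equality row: balance.
    A_eq = np.ones((1, n))
    b_eq = np.array([C_N])
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * n)
    return res.x if res.success else None

# Hypothetical samples from a 3-player game with a non-empty core.
samples = [({0, 1}, 7.0), ({1, 2}, 8.0), ({0}, 4.0)]
psi = empirical_core_lp(3, samples, C_N=10.0)
print(psi, psi.sum())
```

The feasibility objective is zero because any point satisfying the constraints will do; the generalization argument in the proof is what turns this empirical guarantee into the probably approximately stable core.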
Such a cost allocation can be found efficiently since the space of such cost allocations is convex.

2. The analysis begins by bounding the ℓ1-norm of any vector in the core (Lemma 3). Combined with the assumption that the core is non-empty, this implies that a cost allocation satisfying the previous conditions exists.

3. Let [x]_+ denote the function x ↦ max(x, 0). Consider the following "loss" function:

$$\left[\frac{\sum_{i \in S} \psi_i}{C(S)} - 1\right]_+.$$

This loss function is convenient since it is equal to 0 if and only if the core property is satisfied for S, and it is 1-Lipschitz, which is used in the next step.

4. Next, we bound the difference between the empirical loss and the expected loss for all ψ with a known result using the Rademacher complexity of linear predictors with low ℓ1-norm over ρ-Lipschitz loss functions (Theorem 10).

5. Finally, given ψ which approximately satisfies the core property in expectation, we show that ψ is in the probably mostly approximately stable core by Markov's inequality (Lemma 4).

Since we obtained a probably mostly approximately stable core, a natural question is whether it is possible to compute cost allocations that are mostly approximately stable over natural distributions. The answer is negative in general: even for the restricted class of monotone submodular functions, which always have a solution in the core, the core cannot be mostly approximated from samples, even over the uniform distribution. The full proof of this impossibility theorem is in Appendix B.

Theorem 3. Cost allocations in the (1/2 + ε)-mostly approximately stable core, i.e., such that for all S,

$$\left(\frac{1}{2} + \epsilon\right) \sum_{i \in S} \psi_i \le C(S),$$

cannot be computed for monotone submodular functions over the uniform distribution, for any constant ε > 0.

4 Approximating the Shapley Value from Samples

We turn our attention to the STATISTICAL COST SHARING problem in the context of the Shapley value.
Since the Shapley value exists and is unique for all functions, a natural relaxation is to simply approximate this value from samples. The distributions we consider in this section are the uniform distribution and, more generally, product distributions, which are the standard distributions studied in the learning literature for combinatorial functions [Balcan and Harvey, 2011, Balcan et al., 2012, Feldman and Kothari, 2014, Feldman and Vondrak, 2014]. It is easy to see that we need some restrictions on the distribution D (for example, if the empty set is drawn with probability one, the Shapley value cannot be approximated). For submodular functions with bounded curvature, we prove approximation bounds when samples are drawn from the uniform or a bounded product distribution, and also show that the bound for the uniform distribution is tight. However, we show that the Shapley value cannot be approximated from samples even for coverage functions (which are a special case of submodular functions) and the uniform distribution. Since coverage functions are learnable from samples, this implies the counter-intuitive observation that learnability does not imply that the Shapley value is approximable from samples. We defer the full proofs to Appendix C.

Definition 5. An algorithm α-approximates, α ∈ (0, 1], the Shapley value of cost functions C over distribution D if, for all C ∈ C and all δ > 0, given poly(n, 1/δ, 1/(1−α)) samples from D, it computes Shapley value estimates φ̃^C such that α·φ_i ≤ φ̃_i ≤ (1/α)·φ_i for all i ∈ N such that φ_i ≥ 1/poly(n),¹ with probability at least 1 − δ over both the samples and the choices made by the algorithm.

We consider submodular functions with bounded curvature, a common assumption in the submodular maximization literature [Iyer and Bilmes, 2013, Iyer et al., 2013, Sviridenko et al., 2015, Vondrák, 2010]. Intuitively, the curvature of a submodular function bounds how much the marginal contribution of an element can decrease.
This property is useful since the Shapley value of an element can be written as a weighted sum of its marginal contributions over all sets.

Definition 6. A monotone submodular function C has curvature κ ∈ [0, 1] if C_{N\{i}}(i) ≥ (1 − κ)C(i) for all i ∈ N. The curvature is bounded if κ < 1.

An immediate consequence of this definition is that C_S(i) ≥ (1 − κ)C_T(i) for all S, T such that i ∉ S ∪ T, by monotonicity and submodularity. The main tool used is estimates ṽ_i of the expected marginal contributions v_i = E_{S∼D | i∉S}[C_S(i)], where ṽ_i = avg(S_i) − avg(S_{−i}) is the difference between the average value of samples containing i and the average value of samples not containing i.

Theorem 4. Monotone submodular functions with bounded curvature κ have Shapley value that is (√(1−κ) − ε)-approximable from samples over the uniform distribution, which is tight, and (1 − κ − ε)-approximable over any bounded product distribution, for any constant ε > 0.

Consider the algorithm which computes φ̃_i = ṽ_i. Note that φ_i = E_σ[C(A_{σ<i} ∪ {i}) − C(A_{σ<i})] ≥ (1 − κ)v_i > ((1 − κ)/(1 + ε))ṽ_i > (1 − κ − ε)ṽ_i, where the first inequality is by curvature and the second by Lemma 5, which shows that the estimates ṽ_i of v_i are arbitrarily good. The other direction follows similarly. The √(1−κ) result is the main technical component of the upper bound. We describe two main steps:

1. The expected marginal contribution E_{S∼U | i∉S, |S|=j}[C_S(i)] of i to a uniformly random set S of size j is decreasing in j, by submodularity.

2. Since a uniformly random set has size concentrated close to n/2, this implies that roughly half of the terms in the summation φ_i = (Σ_{j=0}^{n−1} E_{S∼U_j | i∉S}[C_S(i)])/n are greater than v_i and the other half of the terms are smaller.

For the tight lower bound, we show that there exist two functions that cannot be distinguished from samples w.h.p. and that have an element whose Shapley value differs by an α² factor.
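The estimator ṽ_i = avg(S_i) − avg(S_{−i}) is straightforward to implement. The sketch below is illustrative (the game, an additive one with made-up weights, is chosen because there the Shapley value of each player equals its weight, so the estimator's behavior is easy to check); it is not the authors' code.

```python
import random

def shapley_from_samples(n, samples):
    """Estimate each player's Shapley value as
    tilde_v_i = avg(samples containing i) - avg(samples not containing i),
    from i.i.d. samples (S, C(S))."""
    est = []
    for i in range(n):
        with_i = [c for S, c in samples if i in S]
        without_i = [c for S, c in samples if i not in S]
        if not with_i or not without_i:
            est.append(0.0)  # player never (or always) observed
            continue
        est.append(sum(with_i) / len(with_i)
                   - sum(without_i) / len(without_i))
    return est

# Hypothetical additive game C(S) = sum of weights; over the uniform
# distribution the estimates should approach the weights themselves.
w = [3.0, 1.0, 2.0, 4.0]
rng = random.Random(0)
samples = []
for _ in range(20000):
    S = {i for i in range(4) if rng.random() < 0.5}  # uniform over 2^N
    samples.append((S, sum(w[i] for i in S)))
print(shapley_from_samples(4, samples))
```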
We show that the Shapley value of coverage (and hence submodular) functions is not approximable from samples in general, even though coverage functions are PMAC-learnable [Balcan and Harvey, 2011] from samples over any distribution [Badanidiyuru et al., 2012].

Theorem 5. There exists no constant α > 0 such that coverage functions have Shapley value that is α-approximable from samples over the uniform distribution.

5 Data Dependent Shapley Value

The general impossibility result for computing the Shapley value from samples arises from the fact that the concept was geared towards the query model, where the algorithm can ask for the cost of any set S ⊆ N. In this section, we develop an analogue that is distribution-dependent. We denote it by φ^{C,D}, reflecting its dependence on both C and D. We define four natural distribution-dependent axioms resembling the Shapley value axioms, and then prove that our proposed value is the unique solution satisfying them. This value can be approximated arbitrarily well in the statistical model for all functions. The proofs are deferred to Appendix D. We start by stating the four axioms.

Definition 7. The data-dependent axioms for cost sharing functions are:
• Balance: Σ_{i∈N} φ^D_i = E_{S∼D}[C(S)],
• Symmetry: for all i and j, if Pr_{S∼D}[|S ∩ {i, j}| = 1] = 0 then φ^D_i = φ^D_j,
• Zero element: for all i, if Pr_{S∼D}[i ∈ S] = 0 then φ^D_i = 0,
• Additivity: for all i, and for D1, D2, α, and β such that α + β = 1, φ^{αD1+βD2}_i = α φ^{D1}_i + β φ^{D2}_i, where Pr[S ∼ αD1 + βD2] = α · Pr[S ∼ D1] + β · Pr[S ∼ D2].

¹See Appendix C for the general definition.

The similarity to the original Shapley value axioms is readily apparent. The main distinction is that we expect these to hold with regard to D, which captures the frequency with which different coalitions S occur. Interpreting the axioms one by one, the balance property ensures that the expected cost is always accounted for.
The symmetry axiom states that if two elements always occur together, they should have the same share, since they are indistinguishable. If an element is never observed, it should have zero share. Finally, costs should combine linearly according to the distribution. The data-dependent Shapley value is

φ_i^D := Σ_{S : i∈S} Pr[S ∼ D] · C(S)/|S|.

Informally, for every set S, the cost C(S) is divided equally among the elements of S and weighted by the probability that S occurs according to D. The main appeal of this cost allocation is the following theorem.

Theorem 6. The data-dependent Shapley value is the unique value satisfying the four data-dependent axioms.

The data-dependent Shapley value can be approximated from samples with the following empirical data-dependent Shapley value:

φ̃_i^D = (1/m) Σ_{S_j : i∈S_j} C(S_j)/|S_j|.

These estimates are arbitrarily good with arbitrarily high probability.

Theorem 7. The empirical data-dependent Shapley value approximates the data-dependent Shapley value arbitrarily well, i.e., |φ̃_i^D − φ_i^D| < ε with poly(n, 1/ε, 1/δ) samples and with probability at least 1 − δ, for any δ, ε > 0.

6 Discussion and Future Work

We follow a recent line of work that studies classical algorithmic problems from a statistical perspective, where the input is restricted to a collection of samples. Our results fall into two categories: we give results for approximating the Shapley value and the core, and we propose new cost sharing concepts that are tailored to the statistical framework. We use techniques from multiple fields, encompassing statistical machine learning, combinatorial optimization, and, of course, cost sharing. The cost sharing literature being very rich, the directions for future work are considerable. Obvious avenues include studying other cost sharing methods in this statistical framework, considering other classes of functions on which to approximate known methods, and improving the sample complexity of previous algorithms.
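As a side note, the empirical data-dependent Shapley value of Theorem 7 is a single pass over the samples. A minimal sketch (ours; names are hypothetical):

```python
def empirical_dd_shapley(samples, n):
    """Empirical data-dependent Shapley value: the cost C(S_j) of each sampled
    set is split equally among the elements of S_j, and the shares are
    averaged over the m samples. `samples` is a list of (set, cost) pairs."""
    m = len(samples)
    phi = [0.0] * n
    for s, cost in samples:
        for i in s:
            phi[i] += cost / len(s)
    return [p / m for p in phi]

# An empirical balance property holds by construction: the shares sum to the
# average sampled cost, the empirical analogue of E_{S~D}[C(S)].
samples = [({0, 1}, 4.0), ({1}, 3.0), ({0, 1, 2}, 6.0)]
phi = empirical_dd_shapley(samples, 3)
```

On this toy sample the shares are [4/3, 7/3, 2/3], which indeed sum to the average cost 13/3.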
More conceptually, an exciting modeling question arises when designing "desirable" axioms from data. Traditionally these axioms depended only on the cost function, whereas in this model they can depend on both the cost function and the distribution, providing an interesting interplay.

References

Elliot Anshelevich, Anirban Dasgupta, Jon Kleinberg, Eva Tardos, Tom Wexler, and Tim Roughgarden. The price of stability for network design with fair cost allocation. SIAM Journal on Computing, 38(4):1602–1623, 2008.
Yoram Bachrach, Evangelos Markakis, Ezra Resnick, Ariel D. Procaccia, Jeffrey S. Rosenschein, and Amin Saberi. Approximating power indices: theoretical and empirical analysis. Autonomous Agents and Multi-Agent Systems, 20(2):105–122, 2010.
Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, and Tim Roughgarden. Sketching valuation functions. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1025–1035. Society for Industrial and Applied Mathematics, 2012.
Maria-Florina Balcan and Nicholas J. A. Harvey. Learning submodular functions. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 793–802. ACM, 2011.
Maria-Florina Balcan, Florin Constantin, Satoru Iwata, and Lei Wang. Learning valuation functions. In COLT, volume 23, pages 4–1, 2012.
Maria-Florina Balcan, Ariel D. Procaccia, and Yair Zick. Learning cooperative games. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25–31, 2015, pages 475–481, 2015.
Maria-Florina Balcan, Ariel D. Procaccia, and Yair Zick. Learning cooperative games. arXiv preprint arXiv:1505.00039v2, 2016.
Eric Balkanski and Yaron Singer. Mechanisms for fair attribution. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, pages 529–546. ACM, 2015.
Olga N. Bondareva. Some applications of linear programming methods to the theory of cooperative games. Problemy Kibernetiki, 10:119–139, 1963.
Xiaotie Deng, Toshihide Ibaraki, and Hiroshi Nagamochi. Algorithmic aspects of the core of combinatorial optimization games. Mathematics of Operations Research, 24(3):751–766, 1999.
Nikhil R. Devanur, Milena Mihail, and Vijay V. Vazirani. Strategyproof cost-sharing mechanisms for set cover and facility location games. Decision Support Systems, 39(1):11–22, 2005.
Shaheen S. Fatima, Michael Wooldridge, and Nicholas R. Jennings. A linear approximation method for the Shapley value. Artificial Intelligence, 172(14):1673–1699, 2008.
Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the cost of multicast transmissions (preliminary version). In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, pages 218–227. ACM, 2000.
Vitaly Feldman and Pravesh Kothari. Learning coverage functions and private release of marginals. In COLT, pages 679–702, 2014.
Vitaly Feldman and Jan Vondrák. Optimal bounds on approximation of submodular and XOS functions by juntas. In Information Theory and Applications Workshop (ITA), 2014, pages 1–10. IEEE, 2014.
Donald B. Gillies. Solutions to general non-zero-sum games. Contributions to the Theory of Games, 4(40):47–85, 1959.
Michel X. Goemans and Martin Skutella. Cooperative facility location games. Journal of Algorithms, 50(2):194–214, 2004.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014. URL http://arxiv.org/abs/1412.6572.
Nicole Immorlica, Mohammad Mahdian, and Vahab S. Mirrokni. Limitations of cross-monotonic cost-sharing schemes. ACM Transactions on Algorithms (TALG), 4(2):24, 2008.
Rishabh K. Iyer and Jeff A. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In Advances in Neural Information Processing Systems, pages 2436–2444, 2013.
Rishabh K. Iyer, Stefanie Jegelka, and Jeff A. Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. In Advances in Neural Information Processing Systems, pages 2742–2750, 2013.
Kamal Jain and Vijay V. Vazirani. Equitable cost allocations via primal-dual-type algorithms. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, pages 313–321. ACM, 2002.
David Liben-Nowell, Alexa Sharp, Tom Wexler, and Kevin Woods. Computing Shapley value in supermodular coalitional games. In International Computing and Combinatorics Conference, pages 568–579. Springer, 2012.
Irwin Mann. Values of Large Games, IV: Evaluating the Electoral College by Monte Carlo Techniques. Rand Corporation, 1960.
Hervé Moulin. Incremental cost sharing: characterization by coalition strategy-proofness. Social Choice and Welfare, 16(2):279–320, 1999.
Hervé Moulin and Scott Shenker. Strategyproof sharing of submodular costs: budget balance versus efficiency. Economic Theory, 18(3):511–533, 2001.
Martin Pál and Éva Tardos. Group strategyproof mechanisms via primal-dual algorithms. In Foundations of Computer Science, 2003. Proceedings. 44th Annual IEEE Symposium on, pages 584–593. IEEE, 2003.
Alvin E. Roth. The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge University Press, 1988.
Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
Lloyd S. Shapley. On balanced sets and cores. Naval Research Logistics Quarterly, 14(4):453–460, 1967.
Lloyd S. Shapley. A value for n-person games. 1953.
Maxim Sviridenko, Jan Vondrák, and Justin Ward. Optimal approximation for submodular and supermodular optimization with bounded curvature. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1134–1148. SIAM, 2015.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013. URL http://arxiv.org/abs/1312.6199.
John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.
Jan Vondrák. Submodularity and curvature: the optimal algorithm. RIMS Kokyuroku Bessatsu B, 23:253–266, 2010.
Eyal Winter. The Shapley value. Handbook of Game Theory with Economic Applications, 3:2025–2054, 2002.
| 2017 | 135 |
6,603 | Dilated Recurrent Neural Networks Shiyu Chang1⇤, Yang Zhang1⇤, Wei Han2⇤, Mo Yu1, Xiaoxiao Guo1, Wei Tan1, Xiaodong Cui1, Michael Witbrock1, Mark Hasegawa-Johnson2, Thomas S. Huang2 1IBM Thomas J. Watson Research Center, Yorktown, NY 10598, USA 2University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA {shiyu.chang, yang.zhang2, xiaoxiao.guo}@ibm.com, {yum, wtan, cuix, witbrock}@us.ibm.com, {weihan3, jhasegaw, t-huang1}@illinois.edu Abstract Learning with recurrent neural networks (RNNs) on long sequences is a notoriously difficult task. There are three major challenges: 1) complex dependencies, 2) vanishing and exploding gradients, and 3) efficient parallelization. In this paper, we introduce a simple yet effective RNN connection structure, the DILATEDRNN, which simultaneously tackles all of these challenges. The proposed architecture is characterized by multi-resolution dilated recurrent skip connections, and can be combined flexibly with diverse RNN cells. Moreover, the DILATEDRNN reduces the number of parameters needed and enhances training efficiency significantly, while matching state-of-the-art performance (even with standard RNN cells) in tasks involving very long-term dependencies. To provide a theory-based quantification of the architecture’s advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures. We rigorously prove the advantages of the DILATEDRNN over other recurrent neural architectures. The code for our method is publicly available1. 1 Introduction Recurrent neural networks (RNNs) have been shown to have remarkable performance on many sequential learning problems. 
However, long sequence learning with RNNs remains a challenging problem for the following reasons: first, memorizing extremely long-term dependencies while maintaining mid- and short-term memory is difficult; second, training RNNs using back-propagation through time is impeded by vanishing and exploding gradients; and lastly, both forward- and back-propagation are performed in a sequential manner, which makes the training time-consuming. Many attempts have been made to overcome these difficulties using specialized neural structures, cells, and optimization techniques. Long short-term memory (LSTM) [10] and gated recurrent units (GRU) [6] powerfully model complex data dependencies. Recent attempts have focused on multi-timescale designs, including clockwork RNNs [12], phased LSTM [17], hierarchical multi-scale RNNs [5], etc. The problem of vanishing and exploding gradients is mitigated by LSTM and GRU memory gates; other partial solutions include gradient clipping [18], orthogonal and unitary weight optimization [2, 14, 24], and skip connections across multiple timestamps [8, 30]. For efficient sequential training, WaveNet [22] abandoned RNN structures, proposing instead the dilated causal convolutional neural network (CNN) architecture, which provides significant advantages in working directly with raw audio waveforms. However, the length of dependencies captured by a dilated CNN is limited by its kernel size, whereas an RNN's autoregressive modeling can, in theory, capture potentially infinitely long dependencies with a small number of parameters. Recently, Yu et al. [27] proposed learning-based RNNs with the ability to jump (skim input text) after seeing a few timestamps worth of data; although the authors showed that the modified LSTM with jumping provides up to a six-fold speed increase, the efficiency gain is mainly in the testing phase.

* Denotes equal contribution.
1 https://github.com/code-terminator/DilatedRNN

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: (left) A single-layer RNN with recurrent skip connections. (mid) A single-layer RNN with dilated recurrent skip connections. (right) A computation structure equivalent to the second graph, which reduces the sequence length by four times.

In this paper, we introduce the DILATEDRNN, a neural connection architecture analogous to the dilated CNN [22, 28], but under a recurrent setting. Our approach provides a simple yet useful solution that tries to alleviate all challenges simultaneously. The DILATEDRNN is a multi-layer, cell-independent architecture characterized by multi-resolution dilated recurrent skip connections. The main contributions of this work are as follows. 1) We introduce a new dilated recurrent skip connection as the key building block of the proposed architecture. These alleviate gradient problems and extend the range of temporal dependencies like conventional recurrent skip connections, but in the dilated version require fewer parameters and significantly enhance computational efficiency. 2) We stack multiple dilated recurrent layers with hierarchical dilations to construct a DILATEDRNN, which learns temporal dependencies of different scales at different layers. 3) We present the mean recurrent length as a new neural memory capacity measure that reveals the performance difference between the previously developed recurrent skip connections and the dilated version. We also verify the optimality of the exponentially increasing dilation distribution used in the proposed architecture. It is worth mentioning that the recently proposed Dilated LSTM [23] can be viewed as a special case of our model, which contains only one dilated recurrent layer with fixed dilation.
The main purpose of their model is to reduce the temporal resolution on time-sensitive tasks. Thus, the Dilated LSTM is not a general solution for modeling at multiple temporal resolutions. We empirically validate the DILATEDRNN in multiple RNN settings on a variety of sequential learning tasks, including long-term memorization, pixel-by-pixel classification of handwritten digits (with permutation and noise), character-level language modeling, and speaker identification with raw audio waveforms. The DILATEDRNN improves significantly on the performance of a regular RNN, LSTM, or GRU with far fewer parameters. Many studies [6, 14] have shown that vanilla RNN cells perform poorly in these learning tasks. However, within the proposed structure, even vanilla RNN cells outperform more sophisticated designs and match the state-of-the-art. We believe that the DILATEDRNN provides a simple and generic approach to learning on very long sequences.

2 Dilated Recurrent Neural Networks

The main ingredients of the DILATEDRNN are its dilated recurrent skip connection and its use of exponentially increasing dilation; these are discussed in the following two subsections respectively.

2.1 Dilated Recurrent Skip Connection

Denote c_t^(l) as the cell in layer l at time t. The dilated skip connection can be represented as

c_t^(l) = f(x_t^(l), c_{t−s^(l)}^(l)).   (1)

This is similar to the regular skip connection [8, 30], which can be represented as

c_t^(l) = f(x_t^(l), c_{t−1}^(l), c_{t−s^(l)}^(l)).   (2)

Here s^(l) is referred to as the skip length, or the dilation of layer l; x_t^(l) is the input to layer l at time t; and f(·) denotes any RNN cell and output operations, e.g. a vanilla RNN cell, LSTM, or GRU. Both skip connections allow information to travel along fewer edges. The difference between the dilated and

Figure 2: (left) An example of a three-layer DILATEDRNN with dilations 1, 2, and 4.
(right) An example of a two-layer DILATEDRNN, with dilation 2 in the first layer. In such a case, extra embedding connections are required (red arrows) to compensate for missing data dependencies.

regular skip connection is that the dependency on c_{t−1}^(l) is removed in the dilated skip connection. The left and middle graphs in figure 1 illustrate the differences between the two architectures with dilation or skip length s^(l) = 4, where W′_r is removed in the middle graph. This reduces the number of parameters. More importantly, the computational efficiency of a parallel implementation (e.g., using GPUs) can be greatly improved by parallelizing operations that would be impossible in a regular RNN. The middle and right graphs in figure 1 illustrate the idea with s^(l) = 4 as an example. The input subsequences {x_{4t}^(l)}, {x_{4t+1}^(l)}, {x_{4t+2}^(l)}, and {x_{4t+3}^(l)} are given four different colors. The four cell chains, {c_{4t}^(l)}, {c_{4t+1}^(l)}, {c_{4t+2}^(l)}, and {c_{4t+3}^(l)}, can be computed in parallel by feeding the four subsequences into a regular RNN, as shown on the right of figure 1. The output can then be obtained by interleaving the four output chains. The degree of parallelization is increased by s^(l) times.

2.2 Exponentially Increasing Dilation

To extract complex data dependencies, we stack dilated recurrent layers to construct the DILATEDRNN. Similar to the settings introduced in WaveNet [22], the dilation increases exponentially across layers. Denote s^(l) as the dilation of the l-th layer. Then,

s^(l) = M^(l−1), l = 1, · · · , L.   (3)

The left side of figure 2 depicts an example of a DILATEDRNN with L = 3 and M = 2. On the one hand, stacking multiple dilated recurrent layers increases the model capacity. On the other hand, exponentially increasing dilation brings two benefits. First, it makes different layers focus on different temporal resolutions.
Second, it reduces the average length of paths between nodes at different timestamps, which improves the ability of RNNs to extract long-term dependencies and prevents vanishing and exploding gradients. A formal proof of this statement is given in section 3. To improve overall computational efficiency, a generalization of our standard DILATEDRNN is also proposed. The dilation in the generalized DILATEDRNN does not start at one, but at M^(l0). Formally,

s^(l) = M^(l−1+l0), l = 1, · · · , L and l0 ≥ 0,   (4)

where M^(l0) is called the starting dilation. To compensate for the missing dependencies shorter than M^(l0), a 1-by-M^(l0) convolutional layer is appended as the final layer. The right side of figure 2 illustrates an example with l0 = 1, i.e. dilations start at two. Without the red edges, there would be no edges connecting nodes at odd and even time stamps. As discussed in section 2.1, the computational efficiency can be increased by M^(l0) times by breaking the input sequence into M^(l0) downsampled subsequences, and feeding each into an (L − l0)-layer regular DILATEDRNN with shared weights.

3 The Memory Capacity of DILATEDRNN

In this section, we extend the analysis framework in [30] to establish better measures of memory capacity and parameter efficiency, which are discussed in the following two sections respectively.

3.1 Memory Capacity

To facilitate theoretical analysis, we apply the cyclic graph G_C notation introduced in [30].

Definition 3.1 (Cyclic Graph). The cyclic graph representation of an RNN structure is a directed multi-graph, G_C = (V_C, E_C). Each edge is labeled as e = (u, v, σ) ∈ E_C, where u is the origin node, v is the destination node, and σ is the number of time steps the edge travels. Each node is labeled as v = (i, p) ∈ V_C, where i is the time index of the node modulo m, m is the period of the graph, and p is the node index. G_C must contain at least one directed cycle. Along the edges of any directed cycle, the summation of σ must not be zero.
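Stepping back briefly to section 2.2: the dilation schedules of equations (3) and (4) amount to a single line of code (a sketch; the helper name is ours):

```python
def dilation_schedule(L, M, l0=0):
    """Per-layer dilations s(l) = M**(l - 1 + l0) for l = 1, ..., L.
    l0 = 0 gives the standard schedule of equation (3); l0 > 0 gives the
    generalized schedule of equation (4) with starting dilation M**l0."""
    return [M ** (l - 1 + l0) for l in range(1, L + 1)]
```

For instance, `dilation_schedule(3, 2)` reproduces the three-layer example of figure 2 (dilations 1, 2, 4), and `dilation_schedule(2, 2, l0=1)` the two-layer example whose dilations start at two.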
Define d_i(n) as the length of the shortest path from any input node at time i to any output node at time i + n. In [30], a measure of memory capacity is proposed that essentially only looks at d_i(m), where m is the period of the graph. This is reasonable when the period is small. However, when the period is large, the entire distribution of d_i(n), ∀n ≤ m, makes a difference, not just the value at span m. As a concrete example, suppose there is an RNN architecture of period m = 10,000, implemented using equation (2) with skip length s^(l) = m, so that d_i(n) = n for n = 1, · · · , 9,999 and d_i(m) = 1. This network rapidly memorizes the dependence of the outputs at time i + m = i + 10,000 on the inputs at time i, but shorter dependencies 2 ≤ n ≤ 9,999 are much harder to learn. Motivated by this, we propose the following additional measure of memory capacity.

Definition 3.2 (Mean Recurrent Length). For an RNN with cycle m, the mean recurrent length is

d̄ = (1/m) Σ_{n=1}^{m} max_{i∈V} d_i(n).   (5)

The mean recurrent length studies the average dilation across different time spans within a cycle. An architecture with good memory capacity should generally have a small recurrent length for all time spans; otherwise the network can only selectively memorize information at a few time spans. Also, we take the maximum over i so as to punish networks that have good length only for a few starting times, since such networks can only memorize well information originating from those specific times. The proposed mean recurrent length has an interesting reciprocal relation with the short-term memory (STM) measure proposed in [11], but mean recurrent length places more emphasis on long-term memory capacity, which is more suitable for our intended applications. With this, we are ready to illustrate the memory advantage of the DILATEDRNN. Consider two RNN architectures. One is the proposed DILATEDRNN structure with d layers and M = 2 (equation (1)). The other is a regular d-layer RNN with skip connections (equation (2)).
If the skip connections are of skip s^(l) = 2^(l−1), then the connections in the RNN are a strict superset of those in the DILATEDRNN, and the RNN achieves exactly the same d̄ as the DILATEDRNN, but with twice the number of trainable parameters (see section 3.2). Suppose one were instead to give every layer in the RNN the largest possible skip for any graph with a period of m = 2^(d−1): set s^(l) = 2^(d−1) in every layer, which is the regular skip RNN setting. This apparent advantage turns out to be a disadvantage, because time spans of 2 ≤ n < m suffer from increased path lengths, and therefore

d̄ = (m − 1)/2 + log2 m + 1/m + 1,   (6)

which grows linearly with m. On the other hand, for the proposed DILATEDRNN,

d̄ = ((3m − 1)/2m) log2 m + 1/m + 1,   (7)

where d̄ grows only logarithmically with m, which is much smaller than that of the regular skip RNN. It implies that information from the past on average travels along far fewer edges, and thus undergoes far less attenuation. The derivation is given in appendix A in the supplementary materials.

3.2 Parameter Efficiency

The advantage of the DILATEDRNN lies not only in its memory capacity but also in the number of parameters needed to achieve that memory capacity. To quantify the analysis, the following measure is introduced.

Definition 3.3 (Number of Recurrent Edges per Node). Denote Card{·} as the set cardinality. For an RNN represented as G_C = (V_C, E_C), the number of recurrent edges per node, N_r, is defined as

N_r = Card{e = (u, v, σ) ∈ E_C : σ ≠ 0} / Card{V_C}.   (8)

Ideally, we would want a network that has large recurrent skips while maintaining a small number of recurrent weights. It is easy to show that N_r is 1 for the DILATEDRNN and 2 for RNNs with regular skip connections. The DILATEDRNN has half the recurrent complexity of the RNN with regular skip connections because of the removal of the direct recurrent edge.
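To make the contrast between equations (6) and (7) concrete, the two closed forms can be evaluated directly. The sketch below simply reproduces the formulas above, nothing more:

```python
import math

def dbar_regular_skip(m):
    """Mean recurrent length of the regular skip RNN, equation (6):
    grows linearly with the period m."""
    return (m - 1) / 2 + math.log2(m) + 1 / m + 1

def dbar_dilated(m):
    """Mean recurrent length of the DILATEDRNN, equation (7):
    grows only logarithmically with m."""
    return (3 * m - 1) / (2 * m) * math.log2(m) + 1 / m + 1
```

For m = 1024 the regular skip RNN gives d̄ ≈ 522.5, versus ≈ 16.0 for the DILATEDRNN, and the gap widens linearly as m grows.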
The following theorem states that the DILATEDRNN achieves the best memory capacity among a class of connection structures with N_r = 1, and is thus among the most parameter-efficient RNN architectures.

Theorem 3.1 (Parameter Efficiency of DILATEDRNN). Consider a subset of d-layer RNNs with period m = M^(d−1) that consists purely of dilated skip connections (hence N_r = 1). For the RNNs in this subset, there are d different dilations,

1 = s_1 ≤ s_2 ≤ · · · ≤ s_d = m, with s_i = n_i s_{i−1},   (9)

where n_i is an arbitrary positive integer. Among this subset, the d-layer DILATEDRNN with dilations {M^0, · · · , M^(d−1)} achieves the smallest d̄.

The proof is motivated by [4], and is given in appendix B.

3.3 Comparing with Dilated CNN

Since the DILATEDRNN is motivated by the dilated CNN [22, 28], it is useful to compare their memory capacities. Although the cyclic graph, mean recurrent length, and number of recurrent edges per node are designed for recurrent structures, they happen to be applicable to dilated CNNs as well. What is more, it can easily be shown that, compared to a DILATEDRNN with the same number of layers and the same dilation rate at each layer, a dilated CNN has exactly the same number of recurrent edges per node and a slightly smaller (by log2 m) mean recurrent length. Hence both architectures have the same model complexity, and it appears that a dilated CNN has a slightly better memory capacity. However, mean recurrent length only measures the memory capacity within a cycle. Going beyond a cycle, it has already been shown that the recurrent length grows linearly with the number of cycles [30] for RNN structures, including the DILATEDRNN, whereas for a dilated CNN the receptive field size is always finite (thus the mean recurrent length goes to infinity beyond the receptive field size). For example, with dilation 2^(l−1) in layer l for l = 1, · · · , d, a dilated CNN has a receptive field size of 2^d, which is two cycles.
On the other hand, the memory of a DILATEDRNN can go far beyond two cycles, particularly with sophisticated units like GRU and LSTM. Hence the memory capacity advantage of the DILATEDRNN over a dilated CNN is clear.

4 Experiments

In this section, we evaluate the performance of the DILATEDRNN on four different tasks: long-term memorization, pixel-by-pixel MNIST classification [15], character-level language modeling on the Penn Treebank [16], and speaker identification with raw waveforms on VCTK [26]. We also investigate the effect of dilation on performance and computational efficiency. Unless specified otherwise, all the models are implemented with Tensorflow [1]. We use the default nonlinearities and the RMSProp optimizer [21] with learning rate 0.001 and decay rate 0.9. All weight matrices are initialized from the standard normal distribution. The batch size is set to 128. Furthermore, in all the experiments, we apply the sequence classification setting [25], where the output layer is only added at the end of the sequence. Results are reported for trained models that achieve the best validation loss. Unless stated otherwise, no tricks, such as gradient clipping [18], learning rate annealing, recurrent weight dropout [20], recurrent batch norm [20], layer norm [3], etc., are applied. All the tasks are sequence-level classification tasks, and therefore the "gridding" problem [29] is irrelevant; no "degridded" module is needed. Three RNN cells, Vanilla, LSTM and GRU cells, are combined with the DILATEDRNN, which we refer to as dilated Vanilla, dilated LSTM and dilated GRU, respectively. The common baselines include single-layer RNNs (denoted as Vanilla RNN, LSTM, and GRU), multi-layer RNNs (denoted as stack Vanilla, stack LSTM, and stack GRU), and a Vanilla RNN with regular skip connections (denoted as skip Vanilla). Additional baselines will be specified in the corresponding subsections.
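Before turning to the individual tasks, the core mechanism of section 2.1 (equation (1) and figure 1) can be summarized in a few lines. The sketch below is ours, with an illustrative toy cell rather than the paper's implementation; it checks that evaluating c_t = f(x_t, c_{t−s}) sequentially is equivalent to splitting the input into s interleaved subsequences and running each through an ordinary dilation-1 recurrence, which is exactly what enables the s-fold parallelization:

```python
import numpy as np

def cell(x_t, c_prev):
    """Toy vanilla RNN cell with scalar state (illustrative only)."""
    return np.tanh(x_t + 0.5 * c_prev)

def dilated_states(x, s):
    """Sequential evaluation of the dilated recurrence c_t = f(x_t, c_{t-s});
    the first s steps fall back to a zero initial state."""
    out = []
    for t in range(len(x)):
        out.append(cell(x[t], out[t - s] if t >= s else 0.0))
    return np.array(out)

def chained_states(x, s):
    """Equivalent evaluation: the s interleaved subsequences {x_{s*t+k}} form
    independent dilation-1 chains, so they could be processed in parallel and
    their outputs interleaved back into the original order."""
    out = np.empty(len(x))
    for k in range(s):
        c = 0.0
        for t in range(k, len(x), s):
            c = cell(x[t], c)
            out[t] = c
    return out

x = np.linspace(-1.0, 1.0, 12)
assert np.allclose(dilated_states(x, 4), chained_states(x, 4))
```

Because the s chains never exchange information within a layer, perturbing an input in one chain leaves the states of the other chains untouched, which is why they can be computed independently.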
4.1 Copy memory problem

This task tests the ability of recurrent models to memorize long-term information. We follow a setup similar to [2, 24, 10]. Each input sequence is of length T + 20. The first ten values are randomly generated from integers 0 to 7; the next T − 1 values are all 8; the last 11 values are all 9, where the first 9 signals that the model needs to start outputting the first 10 values of the inputs. Different from the settings in [2, 24], the average cross-entropy loss is only measured at the last 10 timestamps. Therefore, random guessing yields an expected average cross-entropy of ln(8) ≈ 2.079.

Figure 3: Results of the copy memory problem with T = 500 (left) and T = 1000 (right). The DILATEDRNN converges quickly to the perfect solution. Except for RNNs with dilated skip connections, all other methods are unable to improve over random guesses.

The DILATEDRNN uses 9 layers with a hidden state size of 10. The dilation starts at one and increases to 256 at the last hidden layer. The single-layer baselines have 256 hidden units. The multi-layer baselines use the same number of layers and hidden state size as the DILATEDRNN. The skip Vanilla has 9 layers, and the skip length at each layer is 256, which matches the maximum dilation of the DILATEDRNN. The convergence curves for the two settings, T = 500 and T = 1,000, are shown in figure 3. In both settings, the DILATEDRNN with vanilla cells converges to a good optimum after about 1,000 training iterations, whereas the dilated LSTM and GRU converge more slowly. This might be because the LSTM and GRU cells are much more complex than the vanilla unit. Except for the proposed models, all the other models are unable to do better than random guessing, including the skip Vanilla. These results suggest that the proposed structure, as a simple renovation, is very useful for problems requiring very long memory.

4.2 Pixel-by-pixel MNIST

Sequential classification on the MNIST digits [15] is commonly used to test the performance of RNNs.
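Returning to the copy memory task: the input construction described in section 4.1 can be sketched as follows (our code, matching the description above; names are hypothetical):

```python
import random

def copy_memory_example(T, rng=random):
    """One input/target pair for the copy memory task: length T + 20 in total.
    The first 10 symbols are the payload, drawn from {0, ..., 7}; the next
    T - 1 symbols are the blank marker 8; the final 11 symbols are the trigger
    marker 9, whose first occurrence tells the model to start reproducing the
    payload at the last 10 timestamps."""
    payload = [rng.randrange(8) for _ in range(10)]
    x = payload + [8] * (T - 1) + [9] * 11
    return x, payload
```

Random guessing over the 8 payload symbols gives the cross-entropy ln 8 ≈ 2.079 that serves as the baseline in figure 3.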
We first implement two settings. In the first, called the unpermuted setting, we follow the same setup as in [2, 13, 14, 24, 30] by serializing each image into a 784 × 1 sequence. The second, called the permuted setting, rearranges the input sequence with a fixed permutation. Training, validation and testing sets are the default ones in Tensorflow. Hyperparameters and results are reported in table 1. In addition to the baselines already described, we also implement the dilated CNN. However, the receptive field size of a nine-layer dilated CNN is 512, which is insufficient to cover the sequence length of 784. Therefore, we added one more layer to the dilated CNN, which enlarges its receptive field size to 1,024. This also gives the dilated CNN a slight advantage over the DILATEDRNN structures. In the unpermuted setting, the dilated GRU achieves the best evaluation accuracy of 99.2. However, the performance improvements of dilated GRU and LSTM over both the single- and multi-layer baselines are marginal, which might be because the task is too simple. Further, we observe significant performance differences between stack Vanilla and skip Vanilla, which is consistent with the findings in [30] that RNNs can better model long-term dependencies and achieve good results when recurrent skip connections are added. Nevertheless, the dilated Vanilla has yet another significant performance gain over the skip Vanilla, which is consistent with our argument in section 3 that the DILATEDRNN has a much more balanced memory over a wide range of time periods than RNNs with regular skips. The performance of the dilated CNN is dominated by dilated LSTM and GRU, even when the latter have fewer parameters (in the 20 hidden units case) than the former (in the 50 hidden units case). In the permuted setting, almost all performances are lower. However, the DILATEDRNN models maintain very high evaluation accuracies.
In particular, the dilated Vanilla outperforms the previous RNN-based state-of-the-art, Zoneout [13], with a comparable number of parameters. It achieves a test accuracy of 96.1 with only 44k parameters. Note that the previous state-of-the-art utilizes recurrent batch normalization; the version without it has a much lower performance than all the dilated models. We believe the consistently high performance of the DILATEDRNN across different permutations is due to its hierarchical multi-resolution dilations. In addition, the dilated CNN is able to achieve the best performance, which is in accordance with our claim in section 3.3 that a dilated CNN has a slightly shorter mean recurrent length than DILATEDRNN architectures when the sequence length falls within its receptive field size. However, note that this is achieved by adding one additional layer to expand its receptive field size compared to the RNN counterparts. When the useful information lies outside its receptive field, the dilated CNN might fail completely.

Figure 4: Results of the noisy MNIST task with T = 1000 (left) and 2000 (right). RNN models without skip connections fail. The DILATEDRNN significantly outperforms regular recurrent skips and is on par with the dilated CNN.

Table 1: Results for unpermuted and permuted pixel-by-pixel MNIST. Italic numbers indicate results copied from the original paper. The best results are bold.
| Method | # layers | # hidden / layer | # parameters (≈, k) | Max dilations | Unpermuted test accuracy | Permuted test accuracy |
|---|---|---|---|---|---|---|
| Vanilla RNN | 1 / 9 | 256 / 20 | 68 / 7 | 1 | - / 49.1 | 71.6 / 88.5 |
| LSTM [24] | 1 / 9 | 256 / 20 | 270 / 28 | 1 | 98.2 / 98.7 | 91.7 / 89.5 |
| GRU | 1 / 9 | 256 / 20 | 200 / 21 | 1 | 99.1 / 98.8 | 94.1 / 91.3 |
| IRNN [14] | 1 | 100 | 12 | 1 | 97.0 | ≈82.0 |
| Full uRNN [24] | 1 | 512 | 270 | 1 | 97.5 | 94.1 |
| Skipped RNN [30] | 1 / 9 | 95 / 20 | 16 / 11 | 21 / 256 | 98.1 / 85.4 | 94.0 / 91.8 |
| Zoneout [13] | 1 | 100 | 42 | 1 | - | 93.1 / 95.9² |
| Dilated CNN [22] | 10 | 20 / 50 | 7 / 46 | 512 | 98.0 / 98.3 | 95.7 / 96.7 |
| Dilated Vanilla | 9 | 20 / 50 | 7 / 44 | 256 | 97.7 / 98.0 | 95.5 / 96.1 |
| Dilated LSTM | 9 | 20 / 50 | 28 / 173 | 256 | 98.9 / 98.9 | 94.2 / 95.4 |
| Dilated GRU | 9 | 20 / 50 | 21 / 130 | 256 | 99.0 / 99.2 | 94.4 / 94.6 |

In addition to these two settings, we propose a more challenging task called noisy MNIST, where we pad the unpermuted pixel sequences with [0, 1] uniform random noise to length T. Results for the two setups T = 1,000 and T = 2,000 are shown in Figure 4. The dilated recurrent models and the skip RNN have 9 layers and 20 hidden units per layer; the number of skips at each layer of the skip RNN is 256. The dilated CNN has 10 layers for T = 1,000 and 11 layers for T = 2,000, which expands its receptive field size to the entire input sequence; the number of filters per layer is 20. It is worth mentioning that, in the case of T = 2,000, a 10-layer dilated CNN would only produce random guesses, because its output node would see only the last 1,024 input samples, which contain no informative data. All other reported models use the same hyperparameters as shown in the first three rows of Table 1. We find that none of the models without skip connections is able to learn. Although the skip vanilla RNN still learns, its performance drops compared to the first, unpermuted setup. In contrast, the DILATEDRNN and dilated CNN models obtain almost the same performance as before.
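The noisy MNIST construction can be sketched as follows; we append the noise after the 784 informative pixels, which matches the description of padding "to the length of T", though the exact placement in the original code is an assumption on our part:

```python
import numpy as np

def noisy_mnist_sequence(image, T, rng):
    """Pad an unpermuted 784-pixel sequence with U[0, 1) noise to length T."""
    seq = image.reshape(784)
    noise = rng.uniform(0.0, 1.0, size=T - 784)
    return np.concatenate([seq, noise])

rng = np.random.RandomState(0)
image = rng.rand(28, 28)   # stand-in for a real MNIST image
padded = noisy_mnist_sequence(image, T=1000, rng=rng)
```

The informative pixels thus sit far from the end of the sequence, so models must carry information across hundreds of noise steps to classify correctly.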
It is also worth mentioning that in all three experiments, the DILATEDRNN models achieve comparable results with an extremely small number of parameters.

4.3 Language modeling

We further investigate the task of predicting the next character on the Penn Treebank dataset [16]. We follow the data-splitting rule with sequence length 100 that is commonly used in previous studies. This corpus contains 1 million words, which is small and prone to over-fitting, so model regularization methods have been shown to be effective on validation and test performance. Unlike many existing approaches, we apply no regularization other than dropout on the input layer. Instead, we focus on investigating the regularization effect of the dilated structure itself. Results are shown in Table 2. Although Zoneout, LayerNorm HM-LSTM, and HyperNetworks outperform the DILATEDRNN models, they apply batch or layer normalization as regularization. To the best of our knowledge, the dilated GRU with 1.27 BPC achieves the best result among models of similar size without layer normalization.

² With recurrent batch norm [20]. (Footnote to Table 1.)

Table 2: Character-level language modeling on the Penn Treebank dataset.

| Method | # layers | # hidden / layer | # parameters (≈, M) | Max dilations | Evaluation BPC |
|---|---|---|---|---|---|
| LSTM | 1 / 5 | 1k / 256 | 4.25 / 1.9 | 1 | 1.31 / 1.33 |
| GRU | 1 / 5 | 1k / 256 | 3.19 / 1.42 | 1 | 1.32 / 1.33 |
| Recurrent BN-LSTM [7] | 1 | 1k | - | 1 | 1.32 |
| Recurrent dropout LSTM [20] | 1 | 1k | 4.25 | 1 | 1.30 |
| Zoneout [13] | 1 | 1k | 4.25 | 1 | 1.27 |
| LayerNorm HM-LSTM [5] | 3 | 512 | - | 1 | 1.24 |
| HyperNetworks [9] | 1 / 2 | 1k | 4.91 / 14.41 | 1 | 1.26 / 1.22³ |
| Dilated Vanilla | 5 | 256 | 0.6 | 64 | 1.37 |
| Dilated LSTM | 5 | 256 | 1.9 | 64 | 1.31 |
| Dilated GRU | 5 | 256 | 1.42 | 64 | 1.27 |

Table 3: Speaker identification on the VCTK dataset.

| Method | # layers | # hidden / layer | # parameters (≈, k) | Min dilations | Max dilations | Evaluation accuracy |
|---|---|---|---|---|---|---|
| MFCC GRU | 5 / 1 | 20 / 128 | 16 / 68 | 1 | 1 | 0.66 / 0.77 |
| Raw Fused GRU | 1 | 256 | 225 | 32 / 8 | 32 / 8 | 0.45 / 0.65 |
| Dilated GRU | 6 / 8 | 50 | 103 / 133 | 32 / 8 | 1024 | 0.64 / 0.74 |
Also, the dilated models outperform their regular counterparts, vanilla (did not converge; omitted), LSTM, and GRU, without increasing the model complexity.

4.4 Speaker identification from raw waveform

We also perform the speaker identification task using the corpus from VCTK [26]. Learning audio models directly from the raw waveform poses a difficult challenge for recurrent models because of the very long-term dependencies. Recently, the CLDNN family of models [19] managed to match or surpass log mel-frequency features in several speech problems using the raw waveform. However, CLDNNs coarsen the temporal granularity by pooling the first-layer CNN output before feeding it into the subsequent RNN layers, so as to alleviate the memory challenge. Instead, the DILATEDRNN works directly on the raw waveform without pooling, which is considered more difficult. To achieve a feasible training time, we adopt the efficient generalization of the DILATEDRNN proposed in equation (4) with l0 = 3 and l0 = 5. As mentioned before, if the dilations do not start at one, the model is equivalent to multiple shared-weight networks, each working on partial inputs, with the predictions made by fusing the information using a 1-by-M(l0) convolutional layer. Our baseline GRU model follows the same setting with various resolutions (referred to as fused-GRU), with dilation starting at 8. This baseline has 8 shared-weight GRU subnetworks, each working on one of the 8 subsampled sequences, and the same fusion layer is used to obtain the final prediction. Since most other regular baselines failed to converge, we also implement MFCC-based models on the same task for reference. The 13-dimensional log-mel-frequency features are computed with a 25 ms window and a 5 ms shift. The inputs of the MFCC models have length 100 to match the input duration of the waveform-based models.
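The interleaved subsampling behind the fused-GRU baseline (and behind DILATEDRNNs whose dilations start above one) can be sketched as follows; the function name is ours:

```python
import numpy as np

def interleaved_subsequences(x, rate):
    """Split a 1-D signal into `rate` interleaved, subsampled subsequences.

    Subsequence i contains samples i, i + rate, i + 2*rate, ...; each
    shared-weight subnetwork processes one subsequence, and a fusion
    layer combines the subnetworks' outputs."""
    return [x[i::rate] for i in range(rate)]

x = np.arange(16)
subs = interleaved_subsequences(x, rate=8)   # 8 subsequences of length 2
```

Together, the subsequences cover every input sample exactly once, so no information is discarded, unlike pooling, which averages it away.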
The MFCC features have two natural advantages: 1) no information loss from operating on subsequences; 2) a shorter sequence length. Nevertheless, our dilated models operating directly on the waveform still offer competitive performance (Table 3).

4.5 Discussion

In this subsection, we first investigate the relationship between performance and the number of dilations. We compare DILATEDRNN models with different numbers of layers on the noisy MNIST T = 1,000 task. All models use vanilla RNN cells with hidden state size 20, and the number of dilations starts at one. In Figure 5, we observe that both the classification accuracy and the rate of convergence increase as the models become deeper. Recall that the maximum skip is exponential in the number of layers, so a deeper model has a larger maximum skip and mean recurrent length. Second, we consider maintaining a large maximum skip with fewer layers by increasing the dilation at the bottom layer of the DILATEDRNN. First, we construct a nine-layer DILATEDRNN model with vanilla RNN cells; the number of dilations starts at 1, and the hidden state size is 20. This architecture is referred to as "starts at 1" in Figure 6. Then we remove the bottom hidden layers one by one to construct seven new models; the last model has three layers, with the number of dilations starting at 64. Figure 6 shows both the wall time and the evaluation accuracy for 50,000 training iterations on the noisy MNIST dataset. The training time is reduced by roughly 50% for every dropped layer (i.e., for every doubling of the minimum dilation).

³ With layer normalization [3]. (Footnote to Table 2.)

Figure 5: Results for the dilated vanilla RNN with different numbers of layers on the noisy MNIST dataset. Performance and convergence speed increase with the number of layers.

Figure 6: Training time (left) and evaluation performance (right) for dilated vanilla RNNs starting at different numbers of dilations at the bottom layer. The maximum dilation for all models is 256.
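The dilated recurrence varied in these experiments replaces the usual dependence on t − 1 with a dependence on t − s(l), where s(l) is the layer's dilation. A minimal NumPy sketch of one dilated vanilla RNN layer (names and initialization are ours, not the paper's implementation):

```python
import numpy as np

def dilated_vanilla_layer(x, W_in, W_rec, b, dilation):
    """One dilated vanilla RNN layer: the cell at time t receives its own
    hidden state from `dilation` steps earlier instead of from t - 1.

    x: (T, d_in) input sequence; returns (T, d_hidden) hidden states."""
    T = x.shape[0]
    d_hidden = b.shape[0]
    h = np.zeros((T, d_hidden))
    for t in range(T):
        h_prev = h[t - dilation] if t >= dilation else np.zeros(d_hidden)
        h[t] = np.tanh(x[t] @ W_in + h_prev @ W_rec + b)
    return h

rng = np.random.RandomState(0)
T, d_in, d_hid = 32, 1, 20            # 20 hidden units, as in the experiments
x = rng.randn(T, d_in)
W_in = rng.randn(d_in, d_hid) * 0.1
W_rec = rng.randn(d_hid, d_hid) * 0.1
b = np.zeros(d_hid)
h = dilated_vanilla_layer(x, W_in, W_rec, b, dilation=4)
```

Stacking such layers with dilations 1, 2, 4, ..., 2^(L-1), or starting at s(0) > 1 as in Figure 6, yields the architectures compared above.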
Although the test performance decreases when the dilation does not start at one, the effect is marginal for s(0) = 2 and small for 4 ≤ s(0) ≤ 16. Notably, the model with dilation starting at 64 trains within 17 minutes on a single Nvidia P100 GPU while maintaining a 93.5% test accuracy.

5 Conclusion

Our experiments with the DILATEDRNN provide strong evidence that this simple multi-timescale architectural choice can reliably improve the ability of recurrent models to learn long-term dependencies in problems from different domains. We found that the DILATEDRNN trains faster, requires less hyperparameter tuning, and needs fewer parameters to achieve state-of-the-art results. Complementing the experimental results, we have provided a theoretical analysis showing the advantages of the DILATEDRNN and proved its optimality under a meaningful architectural measure of RNNs.

Acknowledgements

The authors would like to thank Tom Le Paine (paine1@illinois.edu) and Ryan Musa (ramusa@us.ibm.com) for their insightful discussions.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pages 1120–1128, 2016.
[3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[4] Eduardo R Caianiello, Gaetano Scarpetta, and Giovanna Simoncelli. A systemic study of monetary systems. International Journal of General Systems, 8(2):81–92, 1982.
[5] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
[6] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio.
Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[7] Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[8] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, 1995.
[9] David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Herbert Jaeger. Short term memory in echo state networks, volume 5. GMD-Forschungszentrum Informationstechnik, 2001.
[12] Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
[13] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
[14] Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[17] Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased LSTM: Accelerating recurrent network training for long or event-based sequences. arXiv preprint arXiv:1610.09513, 2016.
[18] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013.
[19] Tara N Sainath, Ron J Weiss, Andrew Senior, Kevin W Wilson, and Oriol Vinyals. Learning the speech frontend with raw waveform CLDNNs. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[20] Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
[21] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[22] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016.
[23] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.
[24] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pages 4880–4888, 2016.
[25] Zhengzheng Xing, Jian Pei, and Eamonn Keogh. A brief survey on sequence classification. ACM SIGKDD Explorations Newsletter, 12(1):40–48, 2010.
[26] Junichi Yamagishi. English multi-speaker corpus for CSTR voice cloning toolkit. http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html, 2012.
[27] Adams W Yu, Hongrae Lee, and Quoc V Le. Learning to skim text. arXiv preprint arXiv:1704.06877, 2017.
[28] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[29] Fisher Yu, Vladlen Koltun, and Thomas Funkhouser. Dilated residual networks. arXiv preprint arXiv:1705.09914, 2017.
[30] Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan R Salakhutdinov, and Yoshua Bengio.
Architectural complexity measures of recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1822–1830, 2016.
The Expressive Power of Neural Networks: A View from the Width

Zhou Lu 1,3 (1400010739@pku.edu.cn)
Hongming Pu 1 (1400010621@pku.edu.cn)
Feicheng Wang 1,3 (1400010604@pku.edu.cn)
Zhiqiang Hu 2 (huzq@pku.edu.cn)
Liwei Wang 2,3 (wanglw@cis.pku.edu.cn)

1. Department of Mathematics, Peking University
2. Key Laboratory of Machine Perception, MOE, School of EECS, Peking University
3. Center for Data Science, Peking University, Beijing Institute of Big Data Research

Abstract

The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a universal approximation theorem for width-bounded ReLU networks: width-(n + 4) ReLU networks, where n is the input dimension, are universal approximators. Moreover, except for a measure-zero set, all functions cannot be approximated by width-n ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth efficiency of neural networks: there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound. Here we pose the dual question on the width efficiency of ReLU networks: are there wide networks that cannot be realized by narrow networks whose size is not substantially larger? We show that there exist classes of wide networks which cannot be realized by any narrow network whose depth is no more than a polynomial bound. On the other hand, we demonstrate by extensive experiments that narrow networks whose sizes exceed the polynomial bound by a constant factor can approximate wide and shallow networks with high accuracy.
Our results provide more comprehensive evidence that depth may be more effective than width for the expressiveness of ReLU networks.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Deep neural networks have achieved state-of-the-art performance in a wide range of tasks such as speech recognition, computer vision, and natural language processing. Despite their promising results in applications, our theoretical understanding of neural networks remains limited. The expressive power of neural networks, being one of their vital properties, is crucial on the way towards a more thorough comprehension. The expressive power describes neural networks' ability to approximate functions. This line of research dates back at least to the 1980s. The celebrated universal approximation theorem states that depth-2 networks with a suitable activation function can approximate any continuous function on a compact domain to any desired accuracy [3] [1] [9] [6]. However, the size of such a neural network can be exponential in the input dimension, which means that the depth-2 network has a very large width. From a learning perspective, universal approximation is just the first step. One must also consider efficiency, i.e., the size of the neural network needed to achieve the approximation. Having a small size requires an understanding of the roles of depth and width for the expressive power. Recently, a series of works has tried to characterize how depth affects the expressiveness of a neural network. [5] showed the existence of a 3-layer network that cannot be realized by any 2-layer network to more than a constant accuracy if the size is subexponential in the dimension. [2] proved the existence of classes of deep convolutional ReLU networks that cannot be realized by shallow ones whose size is no more than an exponential bound.
For any integer k, [15] explicitly constructed networks with O(k^3) layers and constant width which cannot be realized by any network with O(k) layers whose size is smaller than 2^k. This type of result is referred to as the depth efficiency of neural networks: a reduction in depth results in an exponential sacrifice in width. However, it is worth noting that these are existence results. In fact, as pointed out in [2], proving existence is inevitable: there is always a positive measure of network parameters such that deep nets cannot be realized by shallow ones without a substantially larger size. Thus we should explore more in addition to proving existence. Different from most previous works, which investigate the expressive power in terms of the depth of neural networks, in this paper we study the problem from the view of width. We argue that integrating both views provides a better understanding of the expressive power of neural networks. Firstly, we prove a universal approximation theorem for width-bounded ReLU networks. Letting n denote the input dimension, we show that width-(n + 4) ReLU networks can approximate any Lebesgue-integrable function on n-dimensional space with respect to the L1 distance. On the other hand, except for a measure-zero set, all Lebesgue-integrable functions cannot be approximated by width-n ReLU networks, which demonstrates a phase transition. Our result is a dual version of the classical universal approximation theorem for depth-bounded networks. Next, we explore quantitatively the role of width for the expressive power of neural networks. Analogous to depth efficiency, we raise the following question on width efficiency: are there wide ReLU networks that cannot be realized by any narrow network whose size is not substantially increased? We argue that investigating this question is important for understanding the roles of depth and width for the expressive power of neural networks.
Indeed, if the answer to this question is yes, and the size of the narrow networks must be exponentially larger, then it is appropriate to say that width has an importance equal to depth for neural networks. In this paper, we prove that there exists a family of ReLU networks that cannot be approximated by narrower networks whose depth increase is no more than polynomial. This polynomial lower bound for width is significantly smaller than the exponential lower bound for depth. However, it does not rule out the possibility of an exponential lower bound for width efficiency. On the other hand, insights from the previous analysis suggest studying whether there is a polynomial upper bound, i.e., whether a polynomial increase in depth and size suffices for narrow networks to approximate wide and shallow networks. Theoretically proving a polynomial upper bound seems very difficult, and we formally pose it as an open problem. Nevertheless, we conduct extensive experiments, and the results demonstrate that when the depth of the narrow network exceeds the polynomial lower bound by just a constant factor, it can approximate wide shallow networks to high accuracy. Together, these results provide more comprehensive evidence that depth is more effective for the expressive power of ReLU networks. Our contributions are summarized as follows:

• We prove a universal approximation theorem for width-bounded ReLU networks. We show that any Lebesgue-integrable function f from R^n to R can be approximated by a fully-connected width-(n + 4) ReLU network to arbitrary accuracy with respect to the L1 distance. In addition, except for a negligible set, all functions f from R^n to R cannot be approximated by any ReLU network whose width is no more than n.

• We show a polynomial lower bound for width efficiency. For integer k, there exists a class of width-O(k^2) and depth-2 ReLU networks that cannot be approximated by any width-O(k^1.5) and depth-k network.
On the other hand, experimental results demonstrate that networks with size slightly larger than the lower bound achieve high approximation accuracy.

1.1 Related Work

Research analyzing the expressive power of neural networks dates back decades. In one of the most classic works, Cybenko [3] proved that a fully-connected sigmoid neural network with a single hidden layer can universally approximate any continuous univariate function on a bounded domain with arbitrarily small error. Barron [1], Hornik et al. [9], and Funahashi [6] achieved similar results, and also generalized the sigmoid to a large class of activation functions, showing that universal approximation is essentially implied by the network structure. Delalleau et al. [4] showed that there exists a family of functions which can be represented much more efficiently with deep networks than with shallow ones. With the recent development and success of deep neural networks, there have been many more works discussing their expressive power theoretically. Depth efficiency is among the most typical results. Eldan et al. [5] showed the existence of a 3-layer network that cannot be realized by any 2-layer network to more than a constant accuracy if the size is subexponential in the dimension. Cohen et al. [2] proved the existence of classes of deep convolutional ReLU networks that cannot be realized by shallow ones whose size is no more than an exponential bound. For any integer k, Telgarsky [15] explicitly constructed networks with O(k^3) layers and constant width which cannot be realized by any network with O(k) layers whose size is smaller than 2^k. Other works show deep networks' ability to approximate a wide range of functions.
For example, Liang et al. [12] showed that to universally approximate a function that is $\Theta(\log(1/\epsilon))$-order differentiable with error $\epsilon$, a deep network with $O(\log(1/\epsilon))$ layers and $O(\mathrm{poly}(\log(1/\epsilon)))$ weights suffices, whereas $\Omega(\mathrm{poly}(1/\epsilon))$ weights are required if there are only $o(\log(1/\epsilon))$ layers. Yarotsky [16] showed that $C^n$ functions on $\mathbb{R}^d$ with a bounded domain can be universally approximated with error $\epsilon$ by a ReLU network with $O(\log(1/\epsilon))$ layers and $O((1/\epsilon)^{d/n} \log(1/\epsilon))$ weights. For results based on classic theories, Harvey et al. [7] provided a nearly tight bound for the VC-dimension of neural networks: a network with $W$ weights and $L$ layers has VC-dimension $O(WL \log W)$ and $\Omega(WL \log(W/L))$. There are also works arguing for the importance of width from other aspects; for example, Nguyen et al. [11] show that if a deep architecture is at the same time sufficiently wide at one hidden layer, then it has a well-behaved loss surface, in the sense that almost every critical point with full-rank weight matrices is a global minimum.

The remainder of the paper is organized as follows. Section 2 introduces the background knowledge needed in this article. Section 3 presents our main result, the width-bounded universal approximation theorem, together with two comparison results. Section 4 explores quantitatively the role of width for the expressive power of neural networks. Section 5 concludes. All proofs can be found in the Appendix, with proof sketches given in the main text.

2 Preliminaries

We begin by presenting basic definitions that will be used throughout the paper. A neural network is a directed computation graph, where the nodes are computation units and the edges describe the connection pattern among the nodes.
Each node receives as input a weighted sum of activations flowing through the edges, applies some activation function, and releases the output via the edges to other nodes. Neural networks are often organized in layers, so that nodes receive signals only from the previous layer and release signals only to the next layer. A fully-connected neural network is a layered neural network in which there is a connection between every pair of nodes in adjacent layers. In this paper, we study the fully-connected ReLU network, which is a fully-connected neural network with Rectified Linear Unit (ReLU) activation functions. The ReLU function $\mathrm{ReLU}: \mathbb{R} \to \mathbb{R}$ is formally defined as

$$\mathrm{ReLU}(x) = \max\{x, 0\}. \quad (1)$$

The architecture of a neural network is often specified by its width and depth. The depth $h$ of a network is defined as its number of layers (including the output layer but excluding the input layer), while the width $d_m$ of a network is defined as the maximal number of nodes in a layer. The number of input nodes, i.e. the input dimension, is denoted $n$. In this paper we study the expressive power of neural networks, which describes their ability to approximate functions. We focus on Lebesgue-integrable functions. A Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ is a Lebesgue-measurable function satisfying

$$\int_{\mathbb{R}^n} |f(x)| \, dx < \infty, \quad (2)$$

a class that contains continuous functions as well as functions such as the sgn function. Because we deal with Lebesgue-integrable functions, we adopt the L1 distance as the measure of approximation error, unlike some previous works that consider continuous functions and use the L∞ distance.

3 Width-Bounded ReLU Networks as Universal Approximators

In this section we consider universal approximation with width-bounded ReLU networks. The following theorem is the main result of this section.

Theorem 1 (Universal Approximation Theorem for Width-Bounded ReLU Networks).
For any Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ and any $\epsilon > 0$, there exists a fully-connected ReLU network $A$ with width $d_m \leq n + 4$ such that the function $F_A$ represented by this network satisfies

$$\int_{\mathbb{R}^n} |f(x) - F_A(x)| \, dx < \epsilon. \quad (3)$$

The proof of this theorem is lengthy and is deferred to the supplementary material. Here we give an informal description of the high-level idea. For any Lebesgue-integrable function and any predefined approximation accuracy, we explicitly construct a width-(n + 4) ReLU network that approximates the function to the given accuracy. The network is a concatenation of a series of blocks. Each block satisfies the following properties: 1) it is a depth-(4n + 1), width-(n + 4) ReLU network; 2) it can approximate, to high accuracy, any Lebesgue-integrable function which is uniformly zero outside a cube with side length δ; 3) it can store the output of the previous block, i.e., the approximation of other Lebesgue-integrable functions on different cubes; 4) it can sum its current approximation and the memory of the previous approximations. It is not difficult to see that the construction of the whole network is complete once we build the blocks. We illustrate such a block in Figure 1. In this block, each layer has n + 4 neurons. Each rectangle in Figure 1 represents a neuron, and the symbols in the rectangle describe the output of that neuron as a function of the block. Among the n + 4 neurons, n neurons simply transfer the input coordinates. Of the other 4 neurons, 2 store the approximation produced by previous blocks, and the other 2 help with the approximation on the current cube. The topology of the block is rather simple and very sparse: each neuron connects to at most 2 neurons in the next layer. The proof simply verifies that the construction illustrated in Figure 1 is correct. Because of the space limit, we defer all details to the supplementary material.
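The blocks above hinge on approximating indicator functions of small cubes with few ReLU units. In one dimension, four ReLU units already give a trapezoid that converges to the indicator of [a, b] in L1 as the ramp width δ shrinks. This is an illustrative sketch of that idea, not the paper's exact construction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def soft_indicator(x, a, b, delta):
    """Trapezoid built from four ReLU units; it equals 1 on [a, b], 0 outside
    [a - delta, b + delta], and converges to the indicator of [a, b] in L1
    as delta -> 0."""
    return (relu(x - (a - delta)) - relu(x - a)
            - relu(x - b) + relu(x - (b + delta))) / delta

xs = np.linspace(-2.0, 2.0, 4001)
approx = soft_indicator(xs, a=-0.5, b=0.5, delta=0.01)
exact = ((xs >= -0.5) & (xs <= 0.5)).astype(float)
# Left-point Riemann sum estimate of the L1 error; bounded by ~delta.
l1_err = np.sum(np.abs(approx - exact)) * (xs[1] - xs[0])
```

The total L1 error is at most δ (the area of the two ramp triangles), illustrating how a fixed, small number of units per block suffices once the error is measured in L1 rather than L∞.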
Figure 1: One block to simulate the indicator function on $[a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$. For k from 1 to n, we "chop" two sides in the k-th dimension, and for every k the "chopping" process is completed within a 4-layer sub-network. The result is stored in the (n+3)-th node as $L_n$ in the last layer of $A$. We then use a single layer to record it in the (n+1)-th or (n+2)-th node, and reset the last two nodes to zero. The network is then ready to simulate another (n+1)-dimensional cube.

Theorem 1 can be regarded as a dual version of the classical universal approximation theorem, which proves that depth-bounded networks are universal approximators. If we ignore the size of the network, both depth and width are by themselves sufficient for universal approximation. At the technical level, however, there are a few differences between the two universal approximation theorems. The classical depth-bounded theorem considers continuous functions on a compact domain and uses the L∞ distance; our width-bounded theorem instead deals with Lebesgue-integrable functions on the whole Euclidean space and therefore uses the L1 distance. Theorem 1 implies that there is a phase transition for the expressive power of ReLU networks as the width of the network varies across n, the input dimension. It is not difficult to see that if the width is much smaller than n, then the expressive power of the network must be very weak. Formally, we have the following two results.

Theorem 2. For any Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ such that $\{x : f(x) \neq 0\}$ is a set of positive Lebesgue measure, and any function $F_A$ represented by a fully-connected ReLU network $A$ with width $d_m \leq n$, the following holds:

$$\int_{\mathbb{R}^n} |f(x) - F_A(x)| \, dx = +\infty \quad \text{or} \quad \int_{\mathbb{R}^n} |f(x)| \, dx. \quad (4)$$

Theorem 2 says that even when the width equals n, the approximation ability of the ReLU network is still weak, at least on the Euclidean space $\mathbb{R}^n$.
If we restrict the function to a bounded set, we can still prove the following theorem.

Theorem 3. For any continuous function $f : [-1, 1]^n \to \mathbb{R}$ which is not constant along any direction, there exists a universal $\epsilon^* > 0$ such that for any function $F_A$ represented by a fully-connected ReLU network with width $d_m \leq n - 1$, the L1 distance between $f$ and $F_A$ is at least $\epsilon^*$:

$$\int_{[-1,1]^n} |f(x) - F_A(x)| \, dx \geq \epsilon^*. \quad (5)$$

Theorem 3 thus contrasts directly with Theorem 1, in which the L1 distance can be made arbitrarily small. The main idea of the two theorems is to exploit the disadvantage caused by the insufficiency of dimension. If the first-layer values corresponding to two different input points are the same, the outputs will be the same as well. When the ReLU network's width is not larger than the input dimension, we can find, for "most" points, a ray passing through the point along which the corresponding first-layer values are constant. This resembles a dimension reduction caused by the insufficiency of width. Exploiting this weakness of thin networks, we can prove the two theorems.

4 Width Efficiency vs. Depth Efficiency

Going deeper and deeper has been a trend in recent years, starting from the 8-layer AlexNet [10], the 19-layer VGG [13], the 22-layer GoogLeNet [14], and finally the 152-layer and 1001-layer ResNets [8]. The superiority of larger depth has been extensively shown in applications across many areas. For example, ResNet has largely advanced the state-of-the-art performance in computer-vision-related fields, which is claimed to be solely due to its extremely deep representations. Despite the great practical success, theory on the role of depth is still limited. Theoretical understanding of the strength of depth starts from analyzing depth efficiency, by proving the existence of deep neural networks that cannot be realized by any shallow network unless its size is exponentially larger.
However, we argue that a comprehensive understanding even of depth itself requires studying the dual problem of width efficiency: if we switch the roles of depth and width in the depth efficiency theorems and the resulting statements remain true, then width has the same power as depth for expressiveness, at least in theory. It is worth noting that, a priori, depth efficiency theorems do not imply anything about the validity of width efficiency. In this section, we study the width efficiency of ReLU networks quantitatively.

Theorem 4. Let n be the input dimension. For any integer k ≥ n + 4, there exists F_A : ℝⁿ → ℝ represented by a ReLU neural network A with width d_m = 2k² and depth h = 3, such that for any constant b > 0 there exists ϵ > 0 such that for any function F_B : ℝⁿ → ℝ represented by a ReLU neural network B whose parameters are bounded in [−b, b], with width d_m ≤ k^{3/2} and depth h ≤ k + 2, the following inequality holds:

$$\int_{\mathbb{R}^n} |F_A - F_B|\,dx \ge \epsilon. \tag{6}$$

Theorem 4 states that there are networks for which reducing the width requires an increase in size to compensate, qualitatively similar to the situation for depth. At the quantitative level, however, this theorem is very different from the depth efficiency theorems in [15][5][2]. Depth efficiency enjoys an exponential lower bound, while Theorem 4 gives only a polynomial lower bound for width. Of course, if a corresponding polynomial upper bound could be proven, we could say that depth plays the more important role in efficiency; but even a polynomial lower bound means that depth is not strictly stronger than width, since a narrow network sometimes needs super-linearly more nodes than a wide one. This raises a natural question: can we improve the polynomial lower bound? There are at least two possibilities. 1) Width efficiency has an exponential lower bound. Concretely, there are wide networks that cannot be approximated by any narrow network unless its size exceeds an exponential bound.
2) Width efficiency has a polynomial upper bound: every wide network can be approximated by a narrow network whose size increase is no more than polynomial. The exponential lower bound and the polynomial upper bound have completely different implications. If the exponential lower bound is true, then width and depth have the same strength for expressiveness, at least in theory. If the polynomial upper bound is true, then depth plays a significantly stronger role in the expressive power of ReLU networks. Currently, neither the exponential lower bound nor the polynomial upper bound seems within reach. We pose this as a formal open problem.

4.1 Experiments

We conduct extensive experiments to provide some insight into the upper bound of such an approximation. To this end, we study a series of network architectures of varying width. For each architecture, we randomly sample the parameters, which, together with the architecture, represent the function that we would like narrower networks to approximate. The approximation error is empirically calculated as the mean square error between the target function and the approximator function evaluated on a series of uniformly placed inputs. For simplicity and clarity, we refer to the architectures that represent the target functions once parameters are assigned as target networks, and the corresponding architectures for approximator functions as approximator networks. Specifically, the target networks are fully-connected ReLU networks with input dimension n, output dimension 1, width 2k², and depth 3, for n = 1, 2 and k = 3, 4, 5. For each of these networks, we sample weight parameters from the standard normal distribution and bias parameters uniformly over [−1, 1). The network and the sampled parameters collectively represent a target function, which we approximate with a narrow approximator network of width 3k^{3/2} and depth k + 2 for the corresponding k.
The architectures are designed in accordance with Theorem 4; we aim to investigate whether such a lower bound is actually an upper bound. To empirically calculate the approximation error, 20000 uniformly placed inputs from [−1, 1)ⁿ for n = 1 and 40000 such inputs for n = 2 are evaluated by the target function and the approximator function respectively, and the mean square error is reported. For each target network, we repeat the parameter-sampling process 50 times and report the mean square error in the worst and average case. We adopt the standard supervised learning approach to search the parameter space of the approximator network for the best approximator function. Specifically, half of all the test inputs from [−1, 1)ⁿ, together with the corresponding values of the target function, constitute the training set. The training set is used to train the approximator network with a mini-batch AdaDelta optimizer and learning rate 1.0. The parameters of the approximator network are randomly initialized according to [8]. Training proceeds for 100 epochs for n = 1 and 200 epochs for n = 2, and the best approximator function is recorded. Table 1 lists the results. Figure 2 illustrates the comparison of an example target function and the corresponding approximator function for n = 1 and k = 5. Note that the target function values vary on a scale of roughly 10 over the given domain, so the (absolute) mean square error is indeed a reasonable measure of the approximation error. The approximation error is indeed very small for the target and approximator networks we study. From Figure 2 we can see that the approximator function is so close to the target function that we have to enlarge a local region to better display the difference.
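The evaluation protocol above can be sketched as follows. This is a hedged numpy sketch of the target-function sampling and the mean-square-error grid only; the AdaDelta training of the approximator is omitted, and reading "depth 3" as three hidden layers is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_target(n, k):
    """Sample a fully-connected ReLU target network of width 2k^2 and depth 3:
    weights ~ N(0, 1), biases ~ Uniform[-1, 1), as described in Section 4.1."""
    width = 2 * k * k
    dims = [n] + [width] * 3 + [1]  # three hidden layers (assumed reading of depth 3)
    layers = [(rng.normal(size=(dout, din)), rng.uniform(-1.0, 1.0, size=dout))
              for din, dout in zip(dims[:-1], dims[1:])]

    def f(x):  # x has shape (batch, n)
        h = x.T
        for W, b in layers[:-1]:
            h = np.maximum(W @ h + b[:, None], 0.0)
        W, b = layers[-1]
        return (W @ h + b[:, None]).ravel()

    return f

def mean_square_error(f, g, xs):
    return float(np.mean((f(xs) - g(xs)) ** 2))

# 20000 uniformly placed inputs from [-1, 1) for n = 1.
target = sample_target(n=1, k=3)
xs = np.linspace(-1.0, 1.0, 20000, endpoint=False).reshape(-1, 1)
ys = target(xs)
```

A trained approximator network would be plugged in as `g`; repeating the parameter sampling 50 times yields the worst-case and average-case errors reported in Table 1.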
Since the architectures of both the target networks and the approximator networks are determined according to Theorem 4, where the depth of the approximator networks is on a polynomial scale with respect to that of the target networks, the empirical results indicate that a polynomially larger depth may be sufficient for a narrow network to approximate a wide network.

5 Conclusion

In this paper, we analyze the expressive power of neural networks with a view from the width, in contrast with many previous works that focus on the view from the depth. We establish a Universal Approximation Theorem for Width-Bounded ReLU Networks, a counterpart to the well-known Universal Approximation Theorem, which studies depth-bounded networks. Our results demonstrate a phase transition in expressive power as the width of a ReLU network with given input dimension varies. We also explore the role of width in the expressive power of neural networks: we prove that a wide network cannot be approximated by a narrow network unless the narrow network has polynomially more nodes, which gives a lower bound on the number of nodes needed for approximation.

Table 1: Empirical study results. n denotes the input dimension; k is defined in Theorem 4; the width/depth of both the target network and the approximator network are determined in accordance with Theorem 4. We report the mean square error in the worst and average case over 50 runs of randomly sampled target-network parameters.

n  k  target width  target depth  approx. width  approx. depth  worst-case error  average-case error
1  3      18            3             16             5             0.002248          0.000345
1  4      36            3             24             6             0.003263          0.000892
1  5      50            3             34             7             0.005643          0.001296
2  3      18            3             16             5             0.008729          0.001990
2  4      36            3             24             6             0.018852          0.006251
2  5      50            3             34             7             0.030114          0.007984

Figure 2: Comparison of an example target function and the corresponding approximator function for n = 1 and k = 5. A local region is enlarged to better display the difference.

We pose open problems on whether the
exponential lower bound or the polynomial upper bound holds for width efficiency, which we believe is crucial on the way to a more thorough understanding of the expressive power of neural networks. Our experimental results support the polynomial upper bound and agree with our intuition and the insights from the analysis.

Width and depth are two key components in the design of a neural network architecture. Both are important and should be carefully tuned together for the best performance, since the depth may determine the level of abstraction while the width may influence the loss of information in the forward pass. A comprehensive understanding of the expressive power of neural networks requires looking from both views.

Acknowledgments

This work was partially supported by the National Basic Research Program of China (973 Program, grant no. 2015CB352502), NSFC (61573026), and the Center for Data Science, Beijing Institute of Big Data Research, Peking University. We would like to thank the anonymous reviewers for their valuable comments on our paper.

References

[1] Andrew R. Barron. Approximation and estimation bounds for artificial neural networks. Machine Learning, 14(1):115–133, 1994.
[2] Nadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pages 698–728, 2016.
[3] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303–314, 1989.
[4] Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666–674, 2011.
[5] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on Learning Theory, pages 907–940, 2016.
[6] Ken-Ichi Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(3):183–192, 1989.
[7] Nick Harvey, Chris Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In Conference on Learning Theory (COLT), 2017.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[9] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[11] Quynh Nguyen and Matthias Hein. The loss surface of deep and wide neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2603–2612, Sydney, Australia, 2017. PMLR.
[12] Shiyu Liang and R. Srikant. Why deep neural networks for function approximation? In International Conference on Learning Representations (ICLR), 2017.
[13] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[14] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
[15] Matus Telgarsky. Benefits of depth in neural networks. In Conference on Learning Theory (COLT), pages 1517–1539, 2016.
[16] Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. arXiv preprint arXiv:1610.01145, 2016.
Inverse Reward Design

Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel∗, Stuart Russell, Anca D. Dragan
Department of Electrical Engineering and Computer Science
University of California, Berkeley, Berkeley, CA 94709
{dhm, smilli, pabbeel, russell, anca}@cs.berkeley.edu

Abstract

Autonomous agents optimize the reward function we give them. What they don't know is how hard it is for us to design a reward function that actually captures what we want. When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios. Inevitably, agents encounter new scenarios (e.g., new types of terrain) where optimizing that same reward may lead to undesired behavior. Our insight is that reward functions are merely observations about what the designer actually wants, and that they should be interpreted in the context in which they were designed. We introduce inverse reward design (IRD) as the problem of inferring the true objective based on the designed reward and the training MDP. We introduce approximate methods for solving IRD problems, and use their solution to plan risk-averse behavior in test MDPs. Empirical results suggest that this approach can help alleviate negative side effects of misspecified reward functions and mitigate reward hacking.

1 Introduction

Robots² are becoming more capable of optimizing their reward functions. But along with that comes the burden of making sure we specify these reward functions correctly. Unfortunately, this is a notoriously difficult task. Consider the example from Figure 1. Alice, an AI engineer, wants to build a robot, we'll call it Rob, for mobile navigation. She wants it to reliably navigate to a target location and expects it to primarily encounter grass lawns and dirt pathways.
She trains a perception system to identify each of these terrain types and then uses this to define a reward function that incentivizes moving towards the target quickly, avoiding grass where possible. When Rob is deployed into the world, it encounters a novel terrain type; for dramatic effect, we'll suppose that it is lava. The terrain prediction goes haywire on this out-of-distribution input and generates a meaningless classification which, in turn, produces an arbitrary reward evaluation. As a result, Rob might then drive to its demise. This failure occurs because the reward function Alice specified implicitly through the terrain predictors, which ends up outputting arbitrary values for lava, is different from the one Alice intended, which would actually penalize traversing lava. In the terminology from Amodei et al. (2016), this is a negative side effect of a misspecified reward: a failure mode of reward design where leaving out important aspects leads to poor behavior. Examples date back to King Midas, who wished that everything he touched would turn to gold, leaving out that he didn't mean his food or family. Another failure mode is reward hacking, which happens when, e.g., a vacuum cleaner ejects collected dust so that it can collect even more (Russell & Norvig, 2010), or a racing boat in a game loops in place to collect points instead of actually winning the race (Amodei & Clark, 2016). Short of requiring that the reward designer anticipate and penalize all possible misbehavior in advance, how can we alleviate the impact of such reward misspecification?

∗OpenAI, International Computer Science Institute (ICSI)
²Throughout this paper, we will use robot to refer generically to any artificial agent.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: An illustration of a negative side effect. Alice designs a reward function so that her robot navigates to the pot of gold and prefers dirt paths. She does not consider that her robot might encounter lava in the real world and leaves that out of her reward specification. The robot maximizing this proxy reward function drives through the lava to its demise.

In this work, we formalize the (Bayesian) inverse reward design (IRD) problem as the problem of inferring (a distribution on) the true reward function from the proxy. We show that IRD can help mitigate unintended consequences of misspecified reward functions, such as negative side effects and reward hacking. We leverage a key insight: the designed reward function should merely be an observation about the intended reward, rather than its definition, and should be interpreted in the context in which it was designed. First, a robot should have uncertainty about its reward function, instead of treating it as fixed. This enables it to, e.g., be risk-averse when planning in scenarios where it is not clear what the right answer is, or to ask for help. Being uncertain about the true reward, however, is only half the battle. To be effective, a robot must acquire the right kind of uncertainty, i.e., know what it knows and what it doesn't. We propose that the 'correct' shape of this uncertainty depends on the environment for which the reward was designed. In Alice's case, the situations where she tested Rob's learning behavior did not contain lava. Thus, the lava-avoiding reward would have produced the same behavior as Alice's designed reward function in the (lava-free) environments that Alice considered. A robot that knows the settings it was evaluated in should also know that, even though the designer specified a lava-agnostic reward, they might have actually meant the lava-avoiding reward.
Two reward functions that would produce similar behavior in the training environment should be treated as equally likely, regardless of which one the designer actually specified. We formalize this in a probabilistic model that relates the proxy (designed) reward to the true reward via the following assumption:

Assumption 1. Proxy reward functions are likely to the extent that they lead to high true-utility behavior in the training environment.

Formally, we assume that the observed proxy reward function is an approximate solution to a reward design problem (Singh et al., 2010). Extracting the true reward is the inverse reward design problem.

The idea of using human behavior as observations about the reward function is far from new. Inverse reinforcement learning uses human demonstrations (Ng & Russell, 2000; Ziebart et al., 2008), shared autonomy uses human operator control signals (Javdani et al., 2015), preference-based reward learning uses answers to comparison queries (Jain et al., 2015), and other signals of what the human wants have been used as well (Hadfield-Menell et al., 2017). We observe that, even when the human behavior is to actually write down a reward function, this should still be treated as an observation, demanding its own observation model.

Our paper makes three contributions. First, we define the inverse reward design (IRD) problem as the problem of inferring the true reward function given a proxy reward function, an intended decision problem (e.g., an MDP), and a set of possible reward functions. Second, we propose a solution to IRD and justify how an intuitive algorithm that treats the proxy reward as a set of expert demonstrations can serve as an effective approximation. Third, we show that this inference approach, combined with risk-averse planning, leads to algorithms that are robust to misspecified rewards, alleviating both negative side effects and reward hacking.
We build a system that 'knows-what-it-knows' about reward evaluations and automatically detects and avoids distributional shift in situations with high-dimensional features. Our approach substantially outperforms the baseline of literal reward interpretation.

2 Inverse Reward Design

Definition 1. (Markov Decision Process; Puterman (2009)) A (finite-horizon) Markov decision process (MDP), M, is a tuple M = ⟨S, A, T, r, H⟩. S is a set of states. A is a set of actions. T is a probability distribution over the next state, given the previous state and action; we write this as T(s_{t+1} | s_t, a). r is a reward function that maps states to rewards, r : S → ℝ. H ∈ ℤ⁺ is the finite planning horizon for the agent.

A solution to M is a policy: a mapping from the current timestep and state to a distribution over actions. The optimal policy maximizes the expected sum of rewards. We use ξ to represent trajectories. In this work, we consider reward functions that are linear combinations of feature vectors φ(ξ); thus the reward of a trajectory, given weights w, is r(ξ; w) = w⊤φ(ξ).

The MDP formalism defines optimal behavior given a reward function, but it provides no information about where this reward function comes from (Singh et al., 2010). We refer to an MDP without a reward as a world model. In practice, a system designer needs to select a reward function that encapsulates the intended behavior. This process is reward engineering or reward design:

Definition 2. (Reward Design Problem (Singh et al., 2010)) A reward design problem (RDP) is defined as a tuple P = ⟨r*, M̃, R̃, π(·| r̃, M̃)⟩. r* is the true reward function. M̃ is a world model. R̃ is a set of proxy reward functions. π(·| r̃, M̃) is an agent model that defines a distribution over trajectories given a (proxy) reward function and a world model.

In an RDP, the designer believes that an agent, represented by the policy π(·| r̃, M̃), will be deployed in M̃.
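The linear reward model r(ξ; w) = w⊤φ(ξ) is easy to make concrete. A minimal sketch with feature counts over terrain types; the terrain encoding and weight values are hypothetical illustrations, not the paper's:

```python
import numpy as np

TERRAINS = ["target", "grass", "dirt", "lava"]  # hypothetical feature order

def phi(xi):
    """Feature counts of a trajectory: number of visits to each terrain type."""
    return np.bincount(xi, minlength=len(TERRAINS)).astype(float)

def reward(xi, w):
    """r(xi; w) = w^T phi(xi)."""
    return float(w @ phi(xi))

w_true = np.array([10.0, -2.0, -1.0, -100.0])  # illustrative true weights

xi = [2, 2, 1, 2, 0]     # dirt, dirt, grass, dirt, target
r = reward(xi, w_true)   # 3*(-1) + 1*(-2) + 1*10 = 5
```

Because the reward is linear in φ(ξ), expected reward depends on a trajectory distribution only through its expected feature counts, a fact the inference in Section 4 relies on.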
She must specify a proxy reward function r̃ ∈ R̃ for the agent. Her goal is to specify r̃ so that π(·| r̃, M̃) obtains high reward according to the true reward function r*. We let w̃ denote the weights of the proxy reward function and w* the weights of the true reward function.

In this work, our motivation is that system designers are fallible, so we should not expect them to perfectly solve the reward design problem. Instead, we consider the case where the system designer is approximately optimal at solving a known RDP, which is distinct from the MDP in which the robot currently finds itself. By inverting the reward design process to infer (a distribution over) the true reward function r*, the robot can understand where its reward evaluations have high variance and plan to avoid those states. We refer to this inference problem as the inverse reward design problem:

Definition 3. (Inverse Reward Design) The inverse reward design (IRD) problem is defined by a tuple ⟨R, M̃, R̃, π(·| r̃, M̃), r̃⟩. R is a space of possible reward functions. M̃ is a world model. ⟨−, M̃, R̃, π(·| r̃, M̃)⟩ partially specifies an RDP P with an unobserved reward function r* ∈ R. r̃ ∈ R̃ is the observed proxy reward, an (approximate) solution to P.

In solving an IRD problem, the goal is to recover r*. We explore Bayesian approaches to IRD, so we assume a prior distribution on r* and infer a posterior distribution on r* given r̃: P(r* | r̃, M̃).

3 Related Work

Optimal reward design. Singh et al. (2010) formalize and study the problem of designing optimal rewards. They consider a designer faced with a distribution of environments, a class of reward functions to give to an agent, and a fitness function. They observe that, in the case of bounded agents, it may be optimal to select a proxy reward that is distinct from the fitness function. Sorg et al. (2010) and subsequent work have studied the computational problem of selecting an optimal proxy reward.
In our work, we consider the alternative situation where the system designer is the bounded agent. In this case, the proxy reward function is distinct from the fitness function (the true utility function in our terminology) because system designers can make mistakes. IRD formalizes the problem of determining a true utility function given an observed proxy reward function. This enables us to design agents that are robust to misspecifications of their reward function.

Inverse reinforcement learning. In inverse reinforcement learning (IRL) (Ng & Russell, 2000; Ziebart et al., 2008; Evans et al., 2016; Syed & Schapire, 2007), the agent observes demonstrations of (approximately) optimal behavior and infers the reward function being optimized. IRD is a similar problem, as both approaches infer an unobserved reward function. The difference is in the observation: IRL observes behavior, while IRD directly observes a reward function. Key to IRD is assuming that this observed reward incentivizes behavior that is approximately optimal with respect to the true reward. In Section 4.2, we show how ideas from IRL can be used to approximate IRD. Ultimately, we consider IRD and IRL to be complementary strategies for value alignment (Hadfield-Menell et al., 2016): approaches that allow designers or users to communicate preferences or goals.

Pragmatics. The pragmatic interpretation of language is the interpretation of a phrase or utterance in the context of alternatives (Grice, 1975). For example, the utterance "some of the apples are red" is often interpreted to mean that "not all of the apples are red," although this is not literally implied. This is because, in context, we typically assume that a speaker who meant to say "all the apples are red" would simply say so. Recent models of pragmatic language interpretation use two levels of Bayesian reasoning (Frank et al., 2009; Goodman & Lassiter, 2014).
At the lowest level, there is a literal listener that interprets language according to a shared literal definition of words or utterances. A speaker then selects words in order to convey a particular meaning to the literal listener. To model pragmatic inference, we consider the probable meaning of a given utterance from this speaker. We can think of IRD as a model of pragmatic reward interpretation: the speaker in pragmatic interpretation of language is directly analogous to the reward designer in IRD.

4 Approximating the Inference over True Rewards

We solve IRD problems by formalizing Assumption 1: the idea that proxy reward functions are likely to the extent that they incentivize high-utility behavior in the training MDP. This gives us a probabilistic model for how w̃ is generated from the true w* and the training MDP M̃. We then invert this probability model to compute a distribution P(w = w* | w̃, M̃) over the true utility function.

4.1 Observation Model

Recall that π(ξ | w̃, M̃) is the designer's model of the probability that the robot will select trajectory ξ, given proxy reward w̃. We assume that π(ξ | w̃, M̃) is the maximum entropy trajectory distribution of Ziebart et al. (2008), i.e., the designer models the robot as approximately optimal: π(ξ | w̃, M̃) ∝ exp(w̃⊤φ(ξ)).

An optimal designer chooses w̃ to maximize expected true value, i.e., so that E[w*⊤φ(ξ) | ξ ∼ π(ξ | w̃, M̃)] is high. We model an approximately optimal designer:

$$P(\tilde w \mid w^*, \tilde M) \propto \exp\left(\beta\, \mathbb{E}\left[{w^*}^\top \phi(\xi) \,\middle|\, \xi \sim \pi(\xi \mid \tilde w, \tilde M)\right]\right), \tag{1}$$

with β controlling how close to optimal we assume the person to be. This is now a formal statement of Assumption 1. w* can be pulled out of the expectation, so we let φ̃ = E[φ(ξ) | ξ ∼ π(ξ | w̃, M̃)]. Our goal is to invert (1) and sample from (or otherwise estimate) P(w* | w̃, M̃) ∝ P(w̃ | w*, M̃) P(w*). The primary difficulty this entails is that we need to know the normalized probability P(w̃ | w*, M̃).
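When the trajectory space is small enough to enumerate, the observation model in Equation 1 can be computed exactly. A minimal sketch with toy two-dimensional feature counts (all values hypothetical):

```python
import numpy as np

# phi(xi) for a small finite trajectory set (toy values).
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.5, 0.5]])
beta = 5.0

def feature_expectations(w):
    """phi_tilde = E[phi(xi)] under the maxent policy pi(xi | w) ∝ exp(w^T phi(xi))."""
    logits = Phi @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p @ Phi

def log_likelihood(w_proxy, w_true):
    """Unnormalized log P(w_proxy | w_true, M~) from Equation 1."""
    return beta * float(w_true @ feature_expectations(w_proxy))

w_true = np.array([1.0, -1.0])
aligned = log_likelihood(np.array([2.0, -2.0]), w_true)   # proxy agrees with w_true
opposed = log_likelihood(np.array([-2.0, 2.0]), w_true)   # proxy opposes w_true
```

A proxy that induces high-true-utility behavior gets higher likelihood than one that induces low-true-utility behavior, exactly Assumption 1 in miniature.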
This depends on its normalizing constant, Z̃(w), which integrates over possible proxy rewards:

$$P(w = w^* \mid \tilde w, \tilde M) \propto \frac{\exp(\beta\, w^\top \tilde\phi)}{\tilde Z(w)}\, P(w), \qquad \tilde Z(w) = \int_{\tilde w} \exp(\beta\, w^\top \tilde\phi)\, d\tilde w. \tag{2}$$

4.2 Efficient approximations to the IRD posterior

To compute P(w = w* | w̃, M̃), we must compute Z̃, which is intractable if w̃ lies in an infinite or large finite set. Notice that computing the value of the integrand for Z̃ is highly non-trivial, as it involves solving a planning problem. This is an example of what is referred to as a doubly-intractable likelihood (Murray et al., 2006). We consider two methods to approximate this normalizing constant.

Figure 2: An example from the Lavaland domain. Left: The training MDP where the designer specifies a proxy reward function. This incentivizes movement toward targets (yellow) while preferring dirt (brown) to grass (green), and generates the gray trajectory. Middle: The testing MDP has lava (red). The proxy does not penalize lava, so optimizing it makes the agent go straight through (gray). This is a negative side effect, which the IRD agent avoids (blue): it treats the proxy as an observation in the context of the training MDP, which makes it realize that it cannot trust the (implicit) weight on lava. Right: The testing MDP has cells in which two sensor indicators no longer correlate: they look like grass to one sensor but target to the other. The proxy puts weight on the first, so the literal agent goes to these cells (gray). The IRD agent knows that it can't trust the distinction and goes to the target on which both sensors agree (blue).

Sample to approximate the normalizing constant. This approach, inspired by methods in approximate Bayesian computation (Sunnåker et al., 2013), samples a finite set of weights {wᵢ} to approximate the integral in Equation 2. We found empirically that it helped to include the candidate sample w in the sum. This leads to the normalizing constant

$$\hat Z(w) = \exp(\beta\, w^\top \phi_w) + \sum_{i=0}^{N-1} \exp(\beta\, w^\top \phi_i), \tag{3}$$
where φᵢ and φ_w are the vectors of feature counts realized by optimizing wᵢ and w, respectively.

Bayesian inverse reinforcement learning. During inference, the normalizing constant serves a calibration purpose: it computes how good the behavior produced by all proxy rewards in that MDP would be with respect to the true reward. Reward functions that increase the reward of all trajectories are not preferred in the inference. This creates an invariance to linear shifts of the feature encoding: if we were to change the MDP by shifting the features by some vector φ₀, i.e., φ ← φ + φ₀, the posterior over w would remain the same. We can achieve a similar calibration, and maintain the same property, by directly integrating over the possible trajectories in the MDP:

$$Z(w) = \left(\int_{\xi} \exp(w^\top \phi(\xi))\, d\xi\right)^{\beta}; \qquad \hat P(w \mid \tilde w) \propto \frac{\exp(\beta\, w^\top \tilde\phi)}{Z(w)}. \tag{4}$$

Proposition 1. The posterior distribution that the IRD model induces on w* (i.e., Equation 2) and the posterior distribution induced by IRL (i.e., Equation 4) are invariant to linear translations of the features in the training MDP.

Proof. See supplementary material.

This choice of normalizing constant approximates the posterior of an IRD problem with the posterior from maximum entropy IRL (Ziebart et al., 2008). The result has an intuitive interpretation: the proxy w̃ determines the average feature counts for a hypothetical dataset of expert demonstrations, and β determines the effective size of that dataset. The agent solves M̃ with reward w̃ and computes the corresponding feature expectations φ̃. The agent then pretends it received β demonstrations with feature counts φ̃, and runs IRL. The more the robot believes the human is good at reward design, the more demonstrations it pretends to have gotten from the person. The fact that reducing the proxy to behavior in M̃ approximates IRD is not surprising: the main point of IRD is that the proxy reward is merely a statement about what behavior is good in the training environment.
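Putting the pieces together, the sample-based posterior using Ẑ(w) from Equation 3 can be sketched over a finite hypothesis set. Here the planning step ("optimize w in M̃ and record its feature counts") is replaced by a toy argmax over three fixed trajectories, so all quantities are hypothetical illustrations:

```python
import numpy as np

beta = 5.0
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.5, 0.5]])  # toy trajectory feature counts

def optimized_features(w):
    """Stand-in for planning: pick the trajectory whose features score best under w."""
    return Phi[int(np.argmax(Phi @ w))]

def ird_posterior(candidates, w_proxy):
    """Normalized IRD posterior over a finite set of true-reward hypotheses,
    using the sampled normalizing constant Z_hat(w) of Equation 3 (uniform prior)."""
    phi_tilde = optimized_features(w_proxy)
    sampled = [optimized_features(w) for w in candidates]
    post = []
    for w in candidates:
        z_hat = (np.exp(beta * w @ optimized_features(w))
                 + sum(np.exp(beta * w @ f) for f in sampled))
        post.append(np.exp(beta * w @ phi_tilde) / z_hat)
    post = np.asarray(post)
    return post / post.sum()

candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
p = ird_posterior(candidates, w_proxy=np.array([1.0, 0.0]))
```

Note that the third hypothesis, which induces the same optimal behavior as the proxy in this toy "training MDP," keeps substantial posterior mass even though it was not the reward the designer wrote down: rewards indistinguishable in training remain plausible, which is exactly the IRD effect.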
Figure 3: Our challenge domain with latent rewards. Each terrain type I_s ∈ {grass, dirt, target, unk} induces a different distribution over high-dimensional features: φ_s ∼ N(µ_{I_s}, Σ_{I_s}). The designer never builds an indicator for lava, and yet the agent still needs to avoid it in the test MDPs.

5 Evaluation

5.1 Experimental Testbed

We evaluated our approaches in a model of the scenario from Figure 1 that we call Lavaland. Our system designer, Alice, is programming a mobile robot, Rob. We model this as a gridworld with movement in the four cardinal directions and four terrain types: target, grass, dirt, and lava. The true objective for Rob, w*, encodes that it should get to the target quickly, stay off the grass, and avoid lava. Alice designs a proxy that performs well in a training MDP that does not contain lava. Then, we measure Rob's performance in a test MDP that does contain lava. Our results show that combining IRD and risk-averse planning creates incentives for Rob to avoid unforeseen scenarios.

We experiment with four variations of this environment: two proof-of-concept conditions in which the reward is misspecified but the agent has direct access to feature indicators for the different categories (i.e., conveniently having a feature for lava); and two challenge conditions in which the right features are latent: the reward designer does not build an indicator for lava, but by reasoning in the raw observation space and then using risk-averse planning, the IRD agent still avoids lava.

5.1.1 Proof-of-Concept Domains

These domains contain feature indicators for the four categories: grass, dirt, target, and lava.

Side effects in Lavaland. Alice expects Rob to encounter three types of terrain: grass, dirt, and target, and so she only considers the training MDP from Figure 2 (left). She provides a w̃ to encode a trade-off between path length and time spent on grass.
The training MDP contains no lava, but it is introduced when Rob is deployed. An agent that treats the proxy reward literally might go onto the lava in the test MDP. However, an agent that runs IRD will know that it can't trust the weight on the lava indicator, since all such weights would produce the same behavior in the training MDP (Figure 2, middle).

Reward hacking in Lavaland. Reward hacking refers generally to reward functions that can be gamed or tricked. To model this within Lavaland, we use features that are correlated in the training domain but are uncorrelated in the testing environment. There are six features: three from one sensor and three from another sensor. In the training environment, the features from both sensors are correct indicators of the state's terrain category (grass, dirt, target). At test time, this correlation is broken: lava looks like the target category to the second sensor, but like the grass category to the first sensor. This is akin to how in a racing game (Amodei & Clark, 2016), winning and game points can be correlated at reward design time, but test environments might contain loopholes for maximizing points without winning. We want agents to hedge their bets between winning and points, or, in Lavaland, between the two sensors. An agent that treats the proxy reward function literally might go to these new cells if they are closer. In contrast, an agent that runs IRD will know that a reward function with the same weights put on the first sensor is just as likely as the proxy. Risk-averse planning makes it go to the target for which both sensors agree (Figure 2, right).

Figure 4: The results of our experiment comparing our proposed method to a baseline that directly plans with the proxy reward function. (Panels: negative side effects and reward hacking in the proof-of-concept domains, and raw observations vs. classifier features in the latent-reward domain; y-axis: fraction of trajectories ξ with lava; methods: MaxEnt Z, Sample Z, Proxy.)
By solving an inverse reward design problem, we are able to create generic incentives to avoid unseen or novel states.

5.1.2 Challenge Domain: Latent Rewards, No More Feature Indicators

The previous examples allow us to explore reward hacking and negative side effects in an isolated experiment, but are unrealistic in that they assume the existence of a feature indicator for unknown, unplanned-for terrain. To investigate misspecified objectives in a more realistic setting, we make the terrain type latent and have it induce raw observations: we use a model where the terrain category determines the mean and variance of a multivariate Gaussian distribution over observed features. Figure 3 shows a depiction of this scenario. The designer has in mind a proxy reward on dirt, target, and grass, but forgets that lava might exist.

We consider two realistic ways through which a designer might actually specify the proxy reward function, given that the robot does not have access to the terrain types: 1) directly on the raw observations — collect samples of the training terrain types (dirt, grass, target) and train a (linear) reward predictor; or 2) classifier features — build a classifier to classify terrain as dirt, grass, or target, and define a proxy on its output.

Note that this domain allows for both negative side effects and reward hacking. Negative side effects can occur because the feature distribution for lava is different from the feature distribution for the three safe categories, and the proxy reward is trained only on the three safe categories. Thus, in the testing MDP, the evaluation of the lava cells will be arbitrary, so maximizing the proxy reward will likely lead the agent into lava. Reward hacking occurs when features that are correlated for the safe categories are uncorrelated for the lava category.

5.2 Experiment

Lavaland Parameters.
We defined a distribution on map layouts with a log-likelihood function that prefers maps where neighboring grid cells are the same. We mixed this log-likelihood with a quadratic cost for deviating from a target ratio of grid cells to ensure similar levels of the lava feature in the testing MDPs. Our training MDP is 70% dirt and 30% grass. Our testing MDP is 5% lava, 66.5% dirt, and 28.5% grass. In the proof-of-concept experiments, we selected the proxy reward function uniformly at random. For latent rewards, we picked a proxy reward function that evaluated to +1 for target, +.1 for dirt, and −.2 for grass. To define a proxy on raw observations, we sampled 1000 examples of grass, dirt, and target and did a linear regression. With classifier features, we simply used the target rewards as the weights on the classified features. We used 50 dimensions for our feature vectors. We selected trajectories via risk-averse trajectory optimization. Details of our planning method, and our approach and rationale in selecting it, can be found in the supplementary material.

IVs and DVs. We measured the fraction of runs that encountered a lava cell on the test MDP as our dependent measure. This tells us the proportion of trajectories where the robot gets 'tricked' by the misspecified reward function; if a grid cell has never been seen, then a conservative robot should plan to avoid it. We manipulate two factors: literal-optimizer and Z-approx. literal-optimizer is true if the robot interprets the proxy reward literally and false otherwise. Z-approx varies the approximation technique used to compute the IRD posterior. It varies across the two levels described in Section 4.2: sampling to approximate the normalizing constant (Sample-Z) or using the normalizing constant from maximum entropy IRL (MaxEnt-Z) (Ziebart et al., 2008).

Results. Figure 4 compares the approaches.
On the left, we see that IRD alleviates negative side effects (avoids the lava) and reward hacking (does not go as much onto cells that look deceptively like the target to one of the sensors). This is important, in that the same inference method generalizes across different consequences of misspecified rewards. Figure 2 shows example behaviors.

In the more realistic latent-reward setting, the IRD agent avoids the lava cells despite the designer forgetting to penalize it, and despite not even having an indicator for it: because lava is latent in the observation space, reward functions that would implicitly penalize lava are as likely as the one actually specified, and risk-averse planning therefore avoids it. We also see a distinction between raw observations and classifier features. The former essentially matches the proof-of-concept results (note the different axis scales), while the latter is much more difficult across all methods. The proxy performs worse because each grid cell is classified before being evaluated, so there is a relatively good chance that at least one of the lava cells is misclassified as target. IRD performs worse because the behaviors considered in inference plan in the already-classified terrain: a non-linear transformation of the features. The inference must both determine a good linear reward function to match the behavior and discover the corresponding uncertainty about it. When the proxy is a linear function of raw observations, the first job is considerably easier.

6 Discussion

Summary. In this work, we motivated and introduced the Inverse Reward Design problem as an approach to mitigate the risk from misspecified objectives. We introduced an observation model, identified the challenging inference problem this entails, and gave several simple approximation schemes. Finally, we showed how to use the solution to an inverse reward design problem to avoid side effects and reward hacking in a 2D navigation problem.
We showed that we are able to avoid these issues reliably in simple problems where features are binary indicators of terrain type. Although this result is encouraging, in real problems we won't have convenient access to binary indicators for what matters. Thus, our challenge evaluation domain gave the robot access to only a high-dimensional observation space. The reward designer specified a reward based on this observation space that fails to penalize a rare but catastrophic terrain. IRD inference still enabled the robot to understand that rewards which would implicitly penalize the catastrophic terrain are also likely.

Limitations and future work. IRD gives the robot a posterior distribution over reward functions, but much work remains in understanding how to best leverage this posterior. Risk-averse planning can work sometimes, but it has the limitation that the robot does not just avoid bad things like lava; it also avoids potentially good things, like a giant pot of gold. We anticipate that leveraging the IRD posterior for follow-up queries to the reward designer will be key to addressing misspecified objectives. Another limitation stems from the complexity of the environments and reward functions considered here. The approaches we used in this work rely on explicitly solving a planning problem, and this is a bottleneck during inference. In future work, we plan to explore the use of different agent models that plan approximately or leverage, e.g., meta-learning (Duan et al., 2016) to scale IRD up to complex environments. Another key limitation is the use of linear reward functions. We cannot expect IRD to perform well unless the prior places weight on (a reasonable approximation to) the true reward function. If, e.g., we encoded terrain types as RGB values in Lavaland, there is unlikely to be a reward function in our hypothesis space that represents the true reward well. Finally, this work considers one relatively simple error model for the designer.
This encodes some implicit assumptions about the nature and likelihood of errors (e.g., i.i.d. errors). In future work, we plan to investigate more sophisticated error models that allow for systematic, biased errors from the designer, and to perform human-subject studies to empirically evaluate these models. Overall, we are excited about the implications IRD has not only in the short term, but also about its contribution to the general study of the value alignment problem.

Acknowledgements

This work was supported by the Center for Human Compatible AI and the Open Philanthropy Project, the Future of Life Institute, AFOSR, and NSF Graduate Research Fellowship Grant No. DGE 1106400.

References

Amodei, Dario and Clark, Jack. Faulty Reward Functions in the Wild. https://blog.openai.com/faulty-reward-functions/, 2016.

Amodei, Dario, Olah, Chris, Steinhardt, Jacob, Christiano, Paul, Schulman, John, and Mané, Dan. Concrete Problems in AI Safety. CoRR, abs/1606.06565, 2016. URL http://arxiv.org/abs/1606.06565.

Duan, Yan, Schulman, John, Chen, Xi, Bartlett, Peter L., Sutskever, Ilya, and Abbeel, Pieter. RL2: Fast Reinforcement Learning via Slow Reinforcement Learning. CoRR, abs/1611.02779, 2016. URL http://arxiv.org/abs/1611.02779.

Evans, Owain, Stuhlmüller, Andreas, and Goodman, Noah D. Learning the Preferences of Ignorant, Inconsistent Agents. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 323–329. AAAI Press, 2016.

Frank, Michael C, Goodman, Noah D, Lai, Peter, and Tenenbaum, Joshua B. Informative Communication in Word Production and Word Learning. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pp. 1228–1233. Cognitive Science Society, Austin, TX, 2009.

Goodman, Noah D and Lassiter, Daniel. Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought. Handbook of Contemporary Semantic Theory. Wiley-Blackwell, 2, 2014.

Grice, H. Paul. Logic and Conversation, pp. 43–58. Academic Press, 1975.
Hadfield-Menell, Dylan, Dragan, Anca, Abbeel, Pieter, and Russell, Stuart. Cooperative Inverse Reinforcement Learning. In Proceedings of the Thirtieth Annual Conference on Neural Information Processing Systems, 2016.

Hadfield-Menell, Dylan, Dragan, Anca D., Abbeel, Pieter, and Russell, Stuart J. The Off-Switch Game. In Proceedings of the International Joint Conference on Artificial Intelligence, 2017.

Jain, Ashesh, Sharma, Shikhar, Joachims, Thorsten, and Saxena, Ashutosh. Learning Preferences for Manipulation Tasks from Online Coactive Feedback. The International Journal of Robotics Research, 34(10):1296–1313, 2015.

Javdani, Shervin, Bagnell, J. Andrew, and Srinivasa, Siddhartha S. Shared Autonomy via Hindsight Optimization. In Proceedings of Robotics: Science and Systems XI, 2015. URL http://arxiv.org/abs/1503.07619.

Murray, Iain, Ghahramani, Zoubin, and MacKay, David. MCMC for Doubly-Intractable Distributions. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, 2006.

Ng, Andrew Y and Russell, Stuart J. Algorithms for Inverse Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pp. 663–670, 2000.

Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2009.

Russell, Stuart and Norvig, Peter. Artificial Intelligence: A Modern Approach. Pearson, 2010.

Singh, Satinder, Lewis, Richard L., and Barto, Andrew G. Where do rewards come from? In Proceedings of the International Symposium on AI Inspired Biology - A Symposium at the AISB 2010 Convention, pp. 111–116, 2010. ISBN 1902956923.

Sorg, Jonathan, Lewis, Richard L, and Singh, Satinder P. Reward Design via Online Gradient Ascent. In Proceedings of the Twenty-Third Conference on Neural Information Processing Systems, pp. 2190–2198, 2010.

Sunnåker, Mikael, Busetto, Alberto Giovanni, Numminen, Elina, Corander, Jukka, Foll, Matthieu, and Dessimoz, Christophe.
Approximate Bayesian Computation. PLoS Comput Biol, 9(1):e1002803, 2013.

Syed, Umar and Schapire, Robert E. A Game-Theoretic Approach to Apprenticeship Learning. In Proceedings of the Twentieth Conference on Neural Information Processing Systems, pp. 1449–1456, 2007.

Ziebart, Brian D, Maas, Andrew L, Bagnell, J Andrew, and Dey, Anind K. Maximum Entropy Inverse Reinforcement Learning. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, pp. 1433–1438, 2008.
The power of absolute discounting: all-dimensional distribution estimation

Moein Falahatgar, UCSD, moein@ucsd.edu
Mesrob Ohannessian, TTIC, mesrob@gmail.com
Alon Orlitsky, UCSD, alon@ucsd.edu
Venkatadheeraj Pichapati, UCSD, dheerajpv7@ucsd.edu

Abstract

Categorical models are a natural fit for many problems. When learning the distribution of categories from samples, high dimensionality may dilute the data. Minimax optimality is too pessimistic to remedy this issue. A serendipitously discovered estimator, absolute discounting, corrects empirical frequencies by subtracting a constant from observed categories, which it then redistributes among the unobserved. It outperforms classical estimators empirically, and has been used extensively in natural language modeling. In this paper, we rigorously explain the prowess of this estimator using less pessimistic notions. We show that (1) absolute discounting recovers classical minimax KL-risk rates, (2) it is adaptive to an effective dimension rather than the true dimension, and (3) it is strongly related to the Good–Turing estimator and inherits its competitive properties. We use power-law distributions as the cornerstone of these results. We validate the theory via synthetic data and an application to the Global Terrorism Database.

1 Introduction

Many natural problems involve uncertainties about categorical objects. When modeling language, we reason about words, meanings, and queries. When inferring about mutations, we manipulate genes, SNPs, and phenotypes. It is sometimes possible to embed these discrete objects into continuous spaces, which allows us to use the arsenal of the latest machine learning tools that often (though admittedly not always) need numerically meaningful data. But why not operate in the discrete space directly? One of the main obstacles to this is the dilution of data due to the high-dimensional aspect of the problem, where dimension in this case refers to the number k of categories.
The classical framework of categorical distribution estimation, studied at length by the information theory community, involves a fixed small k [BS04]. Add-constant estimators are sufficient for this purpose. Some of the impetus to understanding the large-k regime came from the neuroscience world [Pan04]. But this extended the pessimistic worst-case perspective of the earlier framework, resulting in guarantees that left a lot to be desired. This is because high dimension often also comes with additional structure. In particular, if a distribution produces only roughly d distinct categories in a sample of size n, then we ought to think of d (and not k) as the effective dimension of the problem. There are also some ubiquitous structures, like power-law distributions. Natural language is a flagship example of this, observed as early as Zipf [Zip35]. Species and genera, rainfall, and terror incidents, to mention just a few, all obey power laws [SLE+03, CSN09, ADW13].

Are there estimators that mold to both dimension and structure? It turns out we don't need to search far. In natural language processing (NLP), it was first discovered that an estimator proposed by Good and Turing worked very well [Goo53]. Only recently did we start understanding why and how [OSZ03, OD12, AJOS13, OS15]. The best explanation thus far is that it implicitly competes with the best estimator in a very small neighborhood of the true distribution. But NLP researchers [NEK94, KN95, CG96] have long realized that another, simpler estimator, absolute discounting, is equally good. Why and how this is the case was never properly determined, save some mention in [OD12] and in [FNT16], where the focus is primarily on form.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper, we first show that absolute discounting, defined in Section 3, recovers pessimistic minimax optimality in both the low- and high-dimensional regimes.
This is an immediate consequence of an upper bound that we provide in Section 5. We then study lower bounds with classes defined by the number of distinct categories d and also by power-law structure in Section 6. This reveals that absolute discounting in fact adapts to the family of these classes. We further unravel the relationship of absolute discounting with the Good–Turing estimator for power-law distributions. Interestingly, this leads to a further refinement of this estimator's performance in terms of competitivity. Lastly, we give some synthetic experiments in Section 8 and then explore forecasting global terror incidents on real data [LDMN16], which showcases very well the "all-dimensional" learning power of absolute discounting. These contributions are summarized in more detail in Section 4. We start out in Section 2 by laying out what we mean by these notions of optimality.

2 Optimal distribution learning

In this section we concretely formulate the optimal distribution learning framework and take the opportunity to point out related work.

Problem setting. Let p = (p_1, p_2, ..., p_k) be a distribution over [k] := {1, 2, ..., k} categories. Let [k]* be the set of finite sequences over [k]. An estimator q is a mapping that assigns to every sequence x^n ∈ [k]* a distribution q(x^n) over [k]. We model p as the underlying distribution over the categories. We have access to data consisting of n samples X^n = X_1, X_2, ..., X_n generated i.i.d. from p. Intuitively, our goal is to find a choice of q that is guaranteed to be as close to p, on average, as any other estimator can be. We first need to quantify how performance is measured.

General notation. Let (µ_j : j = 1, ..., k) denote the empirical counts, i.e., the number of times symbol j appears in X^n, and let D be the number of distinct categories appearing in X^n, i.e., D = Σ_j 1{µ_j > 0}. We denote by d := E[D] its expectation.
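These quantities can be computed directly from a sample. A minimal sketch (the sample is purely illustrative; it also computes the fingerprint Φ_µ and the empirical distribution, defined just below):

```python
from collections import Counter

def sample_profile(sample):
    """Empirical counts mu_j, number of distinct observed categories D,
    fingerprint Phi_mu = #{j : mu_j = mu}, and the empirical distribution
    q_j = mu_j / n restricted to the observed categories."""
    n = len(sample)
    mu = Counter(sample)                 # mu_j for the observed symbols j
    D = len(mu)                          # distinct categories in the sample
    phi = Counter(mu.values())           # Phi_mu for mu >= 1
    empirical = {j: c / n for j, c in mu.items()}
    return mu, D, phi, empirical

mu, D, phi, emp = sample_profile([1, 1, 2, 3, 3, 3, 7])
# Here mu = {1: 2, 2: 1, 3: 3, 7: 1}, D = 4, phi = {2: 1, 1: 2, 3: 1}.
```

Note that D = Σ_{µ>0} Φ_µ holds by construction.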
Let (Φ_µ : µ = 0, ..., n) be the total number of categories appearing exactly µ times, Φ_µ := Σ_j 1{µ_j = µ}. Note that D = Σ_{µ>0} Φ_µ. Also let (S_µ : µ = 0, ..., n) be the total probability within each such group, S_µ := Σ_j p_j 1{µ_j = µ}. Lastly, denote the empirical distribution by q_j^{+0} := µ_j / n.

KL-risk. We adopt the Kullback–Leibler (KL) divergence as a measure of loss between two distributions. When a distribution p is approximated by another q, the KL divergence is given by

KL(p||q) := Σ_{j=1}^{k} p_j log(p_j / q_j).

We can then measure the performance of an estimator q that depends on data in terms of the KL-risk, the expectation of the divergence with respect to the samples. We use the following notation to express the KL-risk of q after observing n samples X^n:

r_n(p, q) := E_{X^n ∼ p^n}[ KL(p || q(X^n)) ].

An estimator that is identical to p regardless of the data is unbeatable, since r_n(p, q) = 0. Therefore it is important to model our ignorance of p and gauge the optimality of an estimator q accordingly. This can be done in various ways. We elaborate on the three most relevant such perspectives: minimax, adaptive, and competitive distribution learning.

Minimax. In the minimax setting, p is only known to belong to some class of distributions P, but we don't know which one. We would like to perform well no matter which distribution it is. To each q corresponds a distribution p ∈ P (assuming the class is finite or closed) on which q has its worst performance:

r_n(P, q) := max_{p∈P} r_n(p, q).

The minimax risk is the least worst-case KL-risk achieved by any estimator q,

r_n(P) := min_q r_n(P, q).

The minimax risk depends only on the class P. It is a lower bound: no estimator can beat it for all p, i.e., it is not possible that r_n(p, q) < r_n(P) for all p ∈ P. An estimator q that satisfies an upper bound of the form r_n(P, q) = (1 + o(1)) r_n(P) is said to be minimax optimal "even to the constant" (an informal but informative expression that we adopt in this paper).
If instead r_n(P, q) = O(1) r_n(P), we say that q is rate optimal. Near-optimality notions are also possible, but we don't dwell on them. As an aside, note that universal compression is minimax optimality using cumulative risk; see [FJO+15] for related work on universal compression for power laws.

Adaptive. The minimax perspective captures our ignorance of p in a pessimistic fashion. This is because r_n(P) may be large, but for a specific p ∈ P we may have a much smaller r_n(p, q). How can we go beyond this pessimism? Observe that when a class is smaller, r_n(P) is smaller, because we would be maximizing over a smaller set. In the extreme case noted earlier, when P contains only a single distribution, we have r_n(P) = 0. The adaptive learning setting finds an intermediate ground where we have a family of distribution classes F = {P_s : s ∈ S} indexed by a (not necessarily countable) index set S. For each s, we have a corresponding r_n(P_s), which is often much smaller than r_n(∪_{s∈S} P_s), and we would like the estimator to achieve the risk bound corresponding to the smaller class. We say that an estimator q is adaptive to the family F if for all s ∈ S:

r_n(p, q) ≤ O_s(1) r_n(P_s) ∀p ∈ P_s  ⟺  r_n(P_s, q) ≤ O_s(1) r_n(P_s).

There is often a price to adaptivity, which is a function of the granularity of F and is paid in the form of varying/large leading constants per class. This framework has been particularly successful in density estimation with smoothness classes [Tsy09] and has recently been used in the discrete setting for universal compression [BGO15].

Competitive. The adaptive perspective can be tightened by demanding that, rather than a multiplicative constant, the KL-risk track the risk up to a vanishingly small additive term:

r_n(p, q) = r_n(P_s) + ϵ_n(P_s, q) ∀p ∈ P_s.

Ideally, we would like the competitive loss ϵ_n(P_s, q) to be negligible compared to the risk of each class r_n(P_s). If ϵ_n(P_s, q) = O_s(1) r_n(P_s) for all s, then we recover adaptivity.
And when ϵ_n(P_s, q) = o_s(1) r_n(P_s) for all s ∈ S, we have minimax optimality even to the constant within each class, which is a much stronger form of adaptivity. We then say that the estimator is competitive with respect to the family F. We may also evaluate the worst-case competitive loss over S. This formulation was recently introduced in [OS15] in the context of distribution learning. That work shows that the celebrated Good–Turing estimator [Goo53], combined with the empirical estimator, has small worst-case competitive loss over the family of classes defined by any given distribution and all its permutations. Most importantly, this loss was shown to stay bounded even as the dimension increases. This provided a rigorous theoretical explanation for the performance of the Good–Turing estimator in high dimensions. A similar framework is also studied for ℓ1-loss in [VV15].

3 Absolute discounting

One of the first things to observe is that the empirical distribution is particularly ill-suited to handle KL-risk. This is most easily seen by the fact that we would have infinite blow-up whenever any µ_j = 0, which happens with positive probability. Instead, one could resort to an add-constant estimator, which for a positive β is of the form

q_j^{+β} := (µ_j + β) / (n + kβ).

The most widely studied class of distributions is the one that includes all of them: the k-dimensional simplex,

∆_k := { (p_1, p_2, ..., p_k) : Σ_i p_i = 1, p_i ≥ 0 ∀i ∈ [k] }.

In the low-dimensional scaling, when n/k → ∞ (the "dimension" here being the support size k), the minimax risk is

r_n(∆_k) = (1 + o(1)) (k − 1) / (2n).

In [BS04], a variant of the add-constant estimator is shown to achieve this risk even to the constant. Furthermore, any add-constant estimator is rate optimal when k is fixed. But in the very high-dimensional setting, when k/n → ∞, [Pan04] showed that the minimax risk behaves as

r_n(∆_k) = (1 + o(1)) log(k/n),

achieved by an add-constant estimator, but with a constant that depends on the ratio of k and n.
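A quick illustrative sketch (the distribution p and the sample below are hypothetical, not from the paper) showing the KL blow-up of the empirical estimator and how an add-constant estimator avoids it:

```python
import math
from collections import Counter

def kl(p, q):
    """KL(p || q) = sum_j p_j log(p_j / q_j); infinite when q_j = 0 < p_j."""
    total = 0.0
    for pj, qj in zip(p, q):
        if pj > 0:
            if qj == 0:
                return math.inf   # empirical estimator blows up on unseen symbols
            total += pj * math.log(pj / qj)
    return total

def empirical(sample, k):
    """Empirical estimator q_j^{+0} = mu_j / n (zero mass on unseen symbols)."""
    n, mu = len(sample), Counter(sample)
    return [mu[j] / n for j in range(k)]

def add_constant(sample, k, beta=0.5):
    """Add-constant estimator q_j^{+beta} = (mu_j + beta) / (n + k*beta)."""
    n, mu = len(sample), Counter(sample)
    return [(mu[j] + beta) / (n + k * beta) for j in range(k)]

p = [0.5, 0.3, 0.15, 0.05]
sample = [0, 0, 1, 0, 2]          # category 3 is never observed
q_add = add_constant(sample, k=4)
```

With β = 1/2 this is the add-half (Krichevsky–Trofimov style) smoothing; any fixed β > 0 already removes the infinite KL-risk of the empirical estimator.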
Despite these classical results on minimax optimal estimators, in practice people often use other estimators that have better empirical performance. This was a long-running mystery in the language modeling community [CG96], where variants of the Good–Turing estimator were shown to perform best [JM85, GS95]. The gap in performance was only understood recently, using the notion of competitivity [OS15]. In essence, the Good–Turing estimator works well in both the low- and high-dimensional regimes, and in between. Another estimator, absolute discounting, unlike add-constant estimators, simply subtracts a positive constant from the empirical counts and redistributes the subtracted amount to unseen categories. For a discount parameter δ ∈ [0, 1), it is defined as:

q_j^{−δ} := (µ_j − δ) / n        if µ_j > 0,
            Dδ / (n(k − D))      if µ_j = 0.     (1)

Starting with the work of [NEK94], absolute discounting soon supplanted the Good–Turing estimator, due to both its simplicity and its comparable performance. Kneser–Ney smoothing [KN95], which uses absolute discounting at its core, was long held as the preferred way to train N-gram models. Even to this day, the state-of-the-art language models are combined systems where one usually interpolates between recurrent neural networks and Kneser–Ney smoothing [JVS+16]. Can this success be explained? Kneser–Ney is for the most part a principled implementation of the notion of back-off, which we only touch upon in the conclusion. The use of absolute discounting is critical, however, as performance deteriorates if we back off with care but use a more naïve add-constant or even Katz-style smoothing [Kat87], which switches from the Good–Turing to the empirical distribution at a fixed frequency point. It is also important to mention the Bayesian approach of [Teh06], called the Hierarchical Pitman–Yor language model, which performs similarly to Kneser–Ney.
The hierarchies in this model reprise the role of back-off, while the two-parameter Poisson–Dirichlet prior proposed by Pitman and Yor [PY97] results in estimators that are very similar to absolute discounting. The latter is not a surprise, because this prior almost surely generates a power-law distribution, which is intimately related to absolute discounting as we study in this paper. Though our theory applies more generally, it can in fact be straightforwardly adapted to give guarantees for estimators built upon this prior.

4 Contributions

We investigate the reason behind the auspicious behavior of the absolute discounting estimator. We achieve this by demonstrating the adaptivity and competitivity of this estimator for many relevant families of distribution classes. In summary:

• We analyze the performance of the absolute discounting estimator by upper bounding the KL-risk for each class in a family of distribution classes defined by the expected number of distinct categories. [Section 5, Theorem 1] This result implies that absolute discounting achieves classical minimax rate-optimality in both the low- and high-dimensional regimes over the whole simplex ∆_k, as outlined in Section 2.

• We provide a generic lower bound on the minimax risk of classes defined by a single distribution and all of its permutations. We then show that if the defining distribution is a truncated (possibly perturbed) power law, then this lower bound matches the upper bound of absolute discounting, up to a constant factor. [Section 6, Corollaries 3 and 4]

• This implies that absolute discounting is adaptive to the family of classes defined by a truncated power-law distribution and its permutations. Also, since a class defined by the expected number of distinct categories necessarily includes a power law, absolute discounting is also adaptive to this family. This is a strict refinement of classical minimax rate-optimality.
• We give an equivalence between the absolute discounting and Good–Turing estimators in the high-dimensional setting, whenever the distribution is a truncated power law. This is a finite-sample guarantee, as compared to the asymptotic version of [OD12]. As a consequence, absolute discounting becomes competitive with respect to the family of classes defined by permutations of power laws, inheriting Good–Turing's behavior [OS15]. [Section 7, Lemma 5 and Theorem 6]

We corroborate the theoretical results with synthetic experiments that reproduce the theoretical minimax risk bounds. We also show that the prowess of absolute discounting on real data is not restricted to language modeling. In particular, we explore a striking application to forecasting global terror incidents and show that, unlike naive estimators, absolute discounting gives accurate predictions simultaneously in all of low-, medium-, and high-activity zones. [Section 8]

5 Upper bound and classical minimax optimality

We now give an upper bound for the risk of the absolute discounting estimator and show that it recovers classical minimax rates in the low- and high-dimensional regimes. Recall that d := E[D] is the expected number of distinct categories in the samples. The upper bound that we derive can be written as a function of only d, k, and n, and is non-decreasing in d. For a given n and k, let P_d be the set of all distributions for which E[D] ≤ d. The upper bound is thus also a worst-case bound over P_d.

Theorem 1 (Upper bound). Consider the absolute discounting estimator q = q^{−δ}, defined in (1). Let p be such that E[D] = d. Given a discount 0 < δ < 1, there exists a constant c, depending on δ and only on δ, such that

r_n(p, q) ≤ (d/n) log( (k − d/2) / (d/2) ) + c d/n    if d ≥ 10 log log k,
            (d/n) log k + c d/n                        if d < 10 log log k.     (2)

The same bound holds for r_n(P_d, q). We defer the proof of the theorem to the supplementary material. Here are the immediate implications.
For the low-dimensional regime, n/k → ∞, and the class ∆_k: once n > k, the largest d can be is k. The risk of absolute discounting is thus bounded by c(1 + o(1)) k/n = O(1) · k/n. This is minimax rate-optimal [BS04]. For the high-dimensional regime, k/n → ∞, and the class ∆_k: the largest d can be when k > n is n. The risk of absolute discounting is thus dominated by the first term, which reduces to (1 + o(1)) log(k/n). This is the optimal risk for the class ∆_k [Pan04], even to the constant. Therefore, on the two extreme ranges of k and n, absolute discounting recovers the best performance, either as rate-optimal or as optimal even to the constant. These results are for the entire k-dimensional simplex ∆_k. Furthermore, for smaller classes, the bound characterizes the worst-case risk of the class by d, the expected number of distinct categories. Is this characterization tight?

6 Lower bounds and adaptivity

In order to lower bound the minimax risk of a given class P, we use a finer granularity than the P_d classes described in Section 5. In particular, let P_p be the permutation class of distributions consisting of a single distribution p and all of its permutations. Note that the multiset of probabilities is the same for all distributions in P_p, and since the expected number of distinct categories depends only on the multiset (d = Σ_j [1 − (1 − p_j)^n]), it follows that P_p ⊂ P_d.¹ To find a good lower bound for P_d, we need a p that is "worst case". We first give the following generic lower bound.

Theorem 2 (Generic lower bound). Let P_p be a permutation class defined by a distribution p and let γ > 1. Then for k > γd, the minimax risk is bounded by:

r_n(P_p) ≥ (1 − 1/γ) Σ_{j=γd}^{k} p_j log( (k − γd) / Σ_{j=γd}^{k} p_j ) + Σ_{j=γd}^{k} p_j log p_j.    (3)

Equation (3) can be used as a starting point for more concrete lower bounds on various distribution classes. We illustrate this for two cases. First, let us choose p to be a truncated power-law distribution with power α: p_j ∝ j^{−α}, for j = 1, ..., k. We always assume α ≥ α_0 > 1.
This leads to the following lower bound.

Corollary 3. Let P be all permutations of a single power-law distribution with power α truncated over k categories. Then there exists a constant c > 0 and a large enough n₀ such that when n > n₀ and $k > \max\{n,\, 1.2^{\frac{1}{\alpha-1}} n^{\frac{1}{\alpha}}\}$,
$$r_n(\mathcal{P}) \ge c\,\frac{d}{n}\log\frac{k - 2d}{2d}.$$

Next, we use a different choice of p for P_p to provide a lower bound whenever d grows linearly with n. This essentially closes the gap of the previous corollary when α approaches 1.

Corollary 4. Let ρ ∈ (1, 1.75) and let P be all permutations of a single uniform distribution over a subset of k′ = n/ρ out of k categories. Then d ∼ (1 − e^{−ρ})n/ρ, and there exists a constant c > 0 and a large enough n₀ such that when n > n₀ and k > n⁵,
$$r_n(\mathcal{P}) \ge c\,\frac{d}{n}\log\frac{k - 1.2d}{d}.$$

We defer the proofs of the theorem and its corollaries to the supplementary material. The upper bound of Theorem 1 and the lower bounds of Corollaries 3 and 4 are within constant factors of each other. The immediate consequence is that absolute discounting is adaptive with respect to the families of classes of the corollaries. Furthermore, over the family of classes P_d where we can write d as $n^{1/\alpha}$ for some α > 1, or d ∝ n, we can select a distribution from the corollaries within each class and use the corresponding lower bound to match the upper bound of Theorem 1 up to a constant factor. Therefore absolute discounting is adaptive to this family of classes. Intuitively, adaptivity to these classes establishes optimality in the intermediate range between the low- and high-dimensional settings, in a distribution-dependent fashion governed by the expected number of distinct categories d, which we may regard as the effective dimension of the problem.

¹ We abuse notation by distinguishing the classes by the letter used, while at the same time using the letters to denote actual quantities. From the context we understand that d is the expected number of distinct categories for p, at the given n.
7 Relationship to Good–Turing and competitivity

We now establish a relationship between the absolute discounting and Good–Turing estimators, and refine the adaptivity results of the previous section into competitivity results. When [OS15] introduced the notion of competitive optimality, they showed that a variation of the Good–Turing estimator is worst-case competitive with respect to the family of distribution classes defined by any given probability distribution and its permutations. In light of the results of Sections 5 and 6, it is natural to ask whether absolute discounting enjoys the same kind of competitive properties. Not only that, but it was observed empirically by [NEK94], and shown theoretically in [OD12], that Good–Turing asymptotically behaves exactly like absolute discounting when the underlying distribution is a (possibly perturbed) power-law. We therefore choose this family of classes for which to prove competitivity.

We first make the aforementioned equivalence concrete by establishing a finite-sample version. We use the following idealized version of the Good–Turing estimator [Goo53]:
$$q^{\mathrm{GT}}_j := \begin{cases} \dfrac{\mu_j + 1}{n}\,\dfrac{\mathbb{E}[\Phi_{\mu_j+1}]}{\mathbb{E}[\Phi_{\mu_j}]} & \text{if } \mu_j > 0, \\[6pt] \dfrac{\mathbb{E}[\Phi_1]}{n(k - D)} & \text{if } \mu_j = 0. \end{cases} \qquad (4)$$

Lemma 5. Let p be a power-law with power α truncated over k categories. Then for $k > \max\{n, n^{\frac{1}{\alpha-1}}\}$, we have the equivalence:
$$q^{\mathrm{GT}}_j = \frac{\mu_j - \frac{1}{\alpha}}{n}\left(1 + O\!\left(n^{-\frac{1}{2}\frac{3}{2\alpha+1}}\right)\right) \sim \frac{\mu_j - \frac{1}{\alpha}}{n} \qquad \forall\, \mu_j \in \left\{1, \cdots, n^{\frac{1}{2\alpha+1}}\right\}.$$

An interesting outcome of the equivalence of Lemma 5 is that it suggests a choice of the discount δ in terms of the power, 1/α. To give a data-driven version of 1/α, we use a robust version of the ratio Φ₁/D proposed in [OD12, BBO17], which is a strongly consistent estimator when k = ∞.

Theorem 6. Let P be all permutations of a truncated power law p with power α. Let q be the absolute discounting estimator with $\delta = \min\left\{\frac{\max\{\Phi_1, 1\}}{D},\, \delta_{\max}\right\}$, for a suitable choice of δ_max. Then for $k > \max\{n, n^{\frac{1}{\alpha-1}}\}$, the competitive loss is
$$\epsilon_n(\mathcal{P}_p, q) = O\!\left(n^{-\frac{2\alpha-1}{2\alpha+1}}\right).$$
The implications are as follows.
For the union of all such classes above a given α, we find that we beat the $n^{-1/3}$ rate of the worst-case competitive loss obtained for the estimator in [OS15]. Theorem 6 and the bounds of Sections 5 and 6 together imply that absolute discounting is not only worst-case competitive, but also class-by-class competitive with respect to the power-law permutation family. In other words, it in fact achieves minimax optimality even to the constant.

One of the advantages of absolute discounting is that it gradually transitions between values that are close to the empirical distribution for abundant categories (since μ then dominates the discount δ) and behavior that imitates the Good–Turing estimator for rare categories (as established by Lemma 5). In contrast, the estimator proposed in [OS15], and its antecedents starting from [Kat87], have to carefully choose a threshold at which they switch abruptly from one estimator to the other.

8 Experiments

We now illustrate the theory with some experimental results. Our purpose is to (1) validate the functional form of the risk as given by our lower and upper bounds, and (2) compare absolute discounting on both synthetic and real data to estimators that have various optimality guarantees. In all synthetic experiments, we use 500 Monte Carlo iterations. We also set the discount value based on the data, $\delta = \min\{\max\{\Phi_1, 1\}/D,\, 0.9\}$, as suggested in Section 7, assuming δ_max = 0.9 is sufficient.
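Since (1) is not reproduced in this excerpt, the following is a hedged sketch of the standard absolute-discounting form together with the data-driven discount just stated; the function names are our own:

```python
import numpy as np
from collections import Counter

def absolute_discounting(counts, delta):
    """Sketch of q_{-delta}: subtract delta from every positive count and
    spread the freed mass delta*D/n uniformly over the unseen categories."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    seen = counts > 0
    D = int(seen.sum())                       # distinct observed categories
    q = np.zeros(k)
    q[seen] = (counts[seen] - delta) / n
    if D < k:
        q[~seen] = delta * D / (n * (k - D))  # freed mass, split uniformly
    return q

def data_driven_discount(counts, delta_max=0.9):
    """delta = min(max(Phi_1, 1)/D, delta_max), the choice stated above."""
    phi = Counter(int(c) for c in counts if c > 0)  # Phi_i: categories seen i times
    D = sum(phi.values())
    return min(max(phi.get(1, 0), 1) / D, delta_max)

counts = [5, 3, 1, 1, 0, 0]   # toy count vector: Phi_1 = 2, D = 4, delta = 0.5
q = absolute_discounting(counts, data_driven_discount(counts))
```

By construction the output is a proper distribution (sums to one), and all unseen categories receive the same probability.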
Figure 1: Risk of absolute discounting in different ranges of k and n for a power-law with α = 2. Panels: (a) k fixed; (b) n fixed, k ≪ n; (c) n fixed, k ≫ n; expected KL loss on the y-axis.

Validation. For our first goal, we consider absolute discounting in isolation. Figure 1(a) shows the decay of the KL-risk with the number of samples n for a power-law distribution. The dependence of the risk on the number of categories k is captured in Figures 1(b) (linear x-axis) and 1(c) (logarithmic x-axis). Note the linear growth when k is small and the logarithmic growth when k is large. For the last plot we give 95% confidence intervals for the simulations, obtained from 100 restarts.

Synthetic data. For our second goal, we start with synthetic data. In Figure 2, we pit absolute discounting against a number of distributions related to power-laws. The estimators used for our comparisons are: the empirical estimator $q_{+0}(x) = \mu_x / n$; the add-beta estimator $q_{+\beta}(x) = \frac{\mu_x + \beta_{\mu_x}}{N}$ and two of its variants:

• Braess and Sauer, q_BS [BS04]: q_{+β} with β₀ = 0.5, β₁ = 1, and β_i = 0.75 for all i ≥ 2;
• Paninski, q_Pan [Pan04]: q_{+β} with $\beta_i = \frac{n}{k}\log\frac{k}{n}$ for all i;

absolute discounting, q_{−δ}, described in (1); the Good–Turing + empirical estimator q_GT of [OS15]; and an oracle-aided estimator where S_μ is known. In Figures 2(a) and 2(b), samples are generated according to a power-law distribution with power α = 2 over k = 1,000 categories. However, the underlying distribution in Figure 2(c) is a piecewise power-law. It consists of three equal-length pieces, with powers 1.3, 2, and 1.5. Paninski's estimator is not shown in Figures 2(b) and 2(c) since it is not well-defined in this range (it is designed for the case k > n only).
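The add-beta family and its two variants can be sketched directly from the definitions above. This is our own toy reading: the Paninski bonus in particular is rendered per our reconstruction of the formula, and N is simply the normalizer that makes the estimate sum to one.

```python
import numpy as np

def add_beta(counts, betas):
    """q(x) = (mu_x + beta_{mu_x}) / N, with betas[i] the bonus for count i
    (the last entry of `betas` covers all larger counts)."""
    counts = np.asarray(counts)
    bonus = np.array([betas[min(int(c), len(betas) - 1)] for c in counts])
    q = counts + bonus
    return q / q.sum()

def braess_sauer(counts):
    # beta_0 = 0.5, beta_1 = 1, beta_i = 0.75 for all i >= 2
    return add_beta(counts, [0.5, 1.0, 0.75])

def paninski(counts, n, k):
    # beta_i = (n/k) log(k/n) for all i (defined for k > n only)
    b = (n / k) * np.log(k / n)
    return add_beta(counts, [b])

q = braess_sauer([3, 1, 0, 0])
```

Both variants return proper distributions, with unseen categories all receiving the same (small) probability.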
Unsurprisingly, absolute discounting dominates these experiments. What is more interesting is that it does not seem to need a pure power-law (similar results hold for other kinds of perturbations, such as mixtures and noise). Good–Turing is a close second.

Figure 2: Comparing estimators for power-law variants with power α = 2 and k = 1,000. Panels: (a) pure power-law; (b) pure power-law, larger n; (c) piece-wise power-law; expected KL loss vs. n for Good–Turing + empirical, Braess–Sauer, Paninski, absolute discounting, and the oracle.

Real data. One of the chief motivations for investigating absolute discounting is natural language modeling. But the power of absolute discounting there has been verified by such extensive empirical studies (see the classical survey of [CG96]) that we chose to use this space for something new. We use the START Global Terrorism Database from the University of Maryland [LDMN16] and explore how well we can forecast the number of terrorist incidents in different cities. The data contains records of more than 50,000 terror incidents between the years 1992 and 2010, in more than 12,000 different cities around the world. First, Figure 3(a) displays the frequency of incidents across the entire dataset versus the activity rank of the city, in log-log scale, showing a striking adherence to a power-law (see [CSN09] for more on this). The forecasting problem that we solve is to estimate the total number of incidents in a subset of the cities over the coming year, using the current year's data from all cities.
In order to emulate the various dimension regimes, we look at three subsets: (1) low-activity cities, with no incidents in the current year and fewer than 20 incidents in the whole data; (2) medium-activity cities, with some incidents in the current year and fewer than 20 incidents in the whole data; and (3) high-activity individual cities, with a large number of overall incidents.

The results for (1) are in Figure 3(b). The frequency estimator trivially estimates zero. Braess–Sauer does something meaningful. But the absolute discounting and Good–Turing estimators, indistinguishable from each other, are remarkably on target, and this without having observed any of the cities! This nicely captures the importance of using structure when dimensionality is so high and data is so scarce. The results for (2) are in Figure 3(c). The frequency estimator markedly overestimates. But now absolute discounting, Good–Turing, and Braess–Sauer perform similarly. This is a lower-dimensional regime than in (1), but still not adequate for simply using frequencies.

This changes in case (3), illustrated in Figure 4. To take advantage of the abundance of data, in this case at each time point we used the previous 2,000 incidents for learning, and predicted the share of each city in the next 2,000 incidents. In fact, incidents are so abundant that we can simply rely on the previous window's count. Note how Braess–Sauer over-penalizes such abundant categories and suffers, whereas absolute discounting and Good–Turing continue to hold their own, mimicking the performance of the empirical counts. This is a very low-dimensional regime. The closeness of the Good–Turing estimator to absolute discounting in all of our experiments validates the equivalence result of Lemma 5.
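The sliding-window protocol for case (3) can be sketched as follows (a toy illustration of ours: tiny windows instead of 2,000 incidents, and the empirical plug-in standing in for the estimators compared; function names are hypothetical):

```python
import numpy as np

def forecast_counts(prev_window_counts, estimator, next_window_size):
    """Fit a distribution over cities on the previous window of incidents,
    then scale its probabilities to the size of the next window."""
    q = estimator(np.asarray(prev_window_counts, dtype=float))
    return next_window_size * q

# Empirical plug-in: each city's share is its share of the previous window.
empirical = lambda c: c / c.sum()
pred = forecast_counts([3, 1, 0], empirical, next_window_size=8)
```

With the empirical plug-in, a city with 3 of the previous 4 incidents is forecast 3/4 of the next window; swapping in an absolute-discounting or Good–Turing estimator changes only the `estimator` argument.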
The robustness in various regimes, and the improvement in performance over such minimax-optimal estimators as Braess–Sauer's and Paninski's, are evidence that absolute discounting truly molds to both the raw dimension and the effective dimension / structure.

Figure 3: (a) power-law behavior of frequency vs. rank in terror incidents (log-log scale); (b) and (c) comparing forecasts of the number of incidents in unobserved and observed cities, respectively, over the years 1992–2007, for Good–Turing + empirical, Braess–Sauer, absolute discounting, the empirical estimator, and the true value.

9 Conclusion

In this paper, we offered a rigorous analysis of the absolute discounting estimator for categorical distributions. We showed that it recovers classical minimax optimality. The true reason for its success, however, is in adapting to distributions much more intimately: by recovering the right dependence on the number of distinct observed categories d, which can be regarded as an effective dimension, and by optimally tracking structure such as power-laws. We also tightened its relationship with the celebrated Good–Turing estimator.
Figure 4: Estimating the number of incidents based on previous data for different cities: (a) Baghdad, (b) Fallujah, (c) Belfast; number of incidents vs. year (1992–2010) for Good–Turing + empirical, Braess–Sauer, absolute discounting, the empirical estimator, and the true value.

Some of our analysis could possibly be tightened, in particular in terms of the range of applicability over n, k, and d. Also, the limiting case of α = 1 (very heavy tails, known as "fast variation" [BBO17]), to which our results do not directly apply, merits investigation. But more importantly, absolute discounting is often used as a module. For example, we already noted how widely it is used in N-gram back-off models [KN95]. Also, recently, it has been successfully applied to smoothing low-rank probability matrices [FOO16]. Perhaps to further understand its power, it is worthwhile to study how it interacts with such larger systems.

Acknowledgements

We thank Vaishakh Ravindrakumar for very helpful suggestions, and NSF for supporting this work through grants CIF-1564355 and CIF-1619448.

References

[ADW13] Armen E. Allahverdyan, Weibing Deng, and Q. A. Wang. Explaining Zipf's law via a mental lexicon. Physical Review E, 88(6), 2013.

[AJOS13] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. Optimal probability estimation with applications to prediction and classification. In COLT, pages 764–796, 2013.

[BBO17] Anna Ben Hamou, Stéphane Boucheron, and Mesrob I. Ohannessian.
Concentration inequalities in the infinite urn scheme for occupancy counts and the missing mass, with applications. Bernoulli, 2017.

[BGO15] Stéphane Boucheron, Elisabeth Gassiat, and Mesrob I. Ohannessian. About adaptive coding on countable alphabets: Max-stable envelope classes. IEEE Transactions on Information Theory, 61(9), 2015.

[BS04] Dietrich Braess and Thomas Sauer. Bernstein polynomials and learning theory. Journal of Approximation Theory, 128(2):187–206, 2004.

[CG96] Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 310–318, 1996.

[CSN09] Aaron Clauset, Cosma Rohilla Shalizi, and Mark E. J. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661–703, 2009.

[FJO+15] Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky, Venkatadheeraj Pichapati, and Ananda Theertha Suresh. Universal compression of power-law distributions. In IEEE International Symposium on Information Theory (ISIT), pages 2001–2005, 2015.

[FNT16] Stefano Favaro, Bernardo Nipoti, and Yee Whye Teh. Rediscovery of Good–Turing estimators via Bayesian nonparametrics. Biometrics, 72(1):136–145, 2016.

[FOO16] Moein Falahatgar, Mesrob I. Ohannessian, and Alon Orlitsky. Near-optimal smoothing of structured conditional probability matrices. In NIPS, pages 4860–4868, 2016.

[Goo53] Irving J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, pages 237–264, 1953.

[GS95] William A. Gale and Geoffrey Sampson. Good–Turing frequency estimation without tears. Journal of Quantitative Linguistics, 2(3):217–237, 1995.

[JM85] Frederick Jelinek and Robert Mercer. Probability distribution estimation from sparse data. IBM Technical Disclosure Bulletin, 28:2591–2594, 1985.
[JVS+16] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

[Kat87] Slava M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer, 1987.

[KN95] Reinhard Kneser and Hermann Ney. Improved backing-off for M-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181–184, Detroit, MI, May 1995.

[LDMN16] Gary LaFree, Laura Dugan, Erin Miller, and the National Consortium for the Study of Terrorism and Responses to Terrorism. Global Terrorism Database, 2016.

[NEK94] Hermann Ney, Ute Essen, and Reinhard Kneser. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1–38, 1994.

[Neu13] Edward Neuman. Inequalities and bounds for the incomplete gamma function. Results in Mathematics, pages 1–6, 2013.

[NP00] Pierpaolo Natalini and Biagio Palumbo. Inequalities for the incomplete gamma function. Mathematical Inequalities & Applications, 3(1):69–77, 2000.

[OD12] Mesrob I. Ohannessian and Munther A. Dahleh. Rare probability estimation under regularly varying heavy tails. In COLT, page 21, 2012.

[OS15] Alon Orlitsky and Ananda Theertha Suresh. Competitive distribution estimation: Why is Good–Turing good. In NIPS, pages 2143–2151, 2015.

[OSZ03] Alon Orlitsky, Narayana P. Santhanam, and Junan Zhang. Always Good–Turing: Asymptotically optimal probability estimation. Science, 302(5644):427–431, 2003.

[Pan04] Liam Paninski. Variational minimax estimation of discrete distributions under KL loss. In NIPS, pages 1033–1040, 2004.

[PY97] Jim Pitman and Marc Yor. The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25(2):855–900, 1997.
[SLE+03] Felisa A. Smith, S. Kathleen Lyons, S. K. Ernest, Kate E. Jones, Dawn M. Kaufman, Tamar Dayan, Pablo A. Marquet, James H. Brown, and John P. Haskell. Body mass of late Quaternary mammals. Ecology, 84(12):3403, 2003.

[Teh06] Yee Whye Teh. A hierarchical Bayesian language model based on Pitman–Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992, 2006.

[Tsy09] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, 2009.

[VV15] Gregory Valiant and Paul Valiant. Instance optimal learning. arXiv preprint arXiv:1504.05321, 2015.

[Zip35] George Kingsley Zipf. The Psycho-Biology of Language. Houghton Mifflin, 1935.
Model-Powered Conditional Independence Test

Rajat Sen¹*, Ananda Theertha Suresh²*, Karthikeyan Shanmugam³*, Alexandros G. Dimakis¹, and Sanjay Shakkottai¹

¹The University of Texas at Austin  ²Google, New York  ³IBM Research, Thomas J. Watson Center

Abstract

We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables. Given i.i.d. samples from the joint distribution f(x, y, z) of continuous random vectors X, Y, and Z, we determine whether X ⊥⊥ Y | Z. We approach this by converting the conditional independence test into a classification problem. This allows us to harness very powerful classifiers, such as gradient-boosted trees and deep neural networks, which can handle complex probability distributions and allow us to perform significantly better than the prior state of the art for high-dimensional CI testing. The main technical challenge in the classification problem is the need for samples from the conditional product distribution f^CI(x, y, z) = f(x|z) f(y|z) f(z) (which equals the joint distribution if and only if X ⊥⊥ Y | Z), when given access only to i.i.d. samples from the true joint distribution f(x, y, z). To tackle this problem we propose a novel nearest-neighbor bootstrap procedure and theoretically show that our generated samples are indeed close to f^CI in terms of total variational distance. We then develop theoretical results regarding the generalization bounds for classification in our problem, which translate into error bounds for CI testing. We provide a novel analysis of Rademacher-type classification bounds in the presence of non-i.i.d., near-independent samples. We empirically validate the performance of our algorithm on simulated and real datasets and show performance gains over previous methods.
1 Introduction

Testing datasets for Conditional Independence (CI) has significant applications in several statistical/learning problems; among others, examples include discovering/testing for edges in Bayesian networks [15, 27, 7, 9], causal inference [23, 14, 29, 5], and feature selection through Markov blankets [16, 31]. Given a triplet of random variables/vectors (X, Y, Z), we say that X is conditionally independent of Y given Z (denoted by X ⊥⊥ Y | Z) if the joint distribution f_{X,Y,Z}(x, y, z) factorizes as f_{X,Y,Z}(x, y, z) = f_{X|Z}(x|z) f_{Y|Z}(y|z) f_Z(z). The problem of Conditional Independence Testing (CI testing) can be defined as follows: given n i.i.d. samples from f_{X,Y,Z}(x, y, z), distinguish between the two hypotheses H₀ : X ⊥⊥ Y | Z and H₁ : X ̸⊥⊥ Y | Z.

In this paper we propose a data-driven Model-Powered CI test. The central idea in a model-driven approach is to convert a statistical testing or estimation problem into a pipeline that utilizes the power of supervised learning models like classifiers and regressors; such pipelines can then leverage recent advances in classification/regression in high-dimensional settings. In this paper, we take such a model-powered approach (illustrated in Fig. 1), which reduces the problem of CI testing to binary classification. Specifically, the key steps of our procedure are as follows:

* Equal contribution

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Illustration of our methodology. A part of the original samples is kept aside in U₁. The rest of the samples are used in our nearest-neighbor bootstrap to generate a dataset U′₂ which is close to f^CI in distribution.
The samples are labeled as shown and a classifier is trained on a training set; the test error is then measured on a held-out test set. If the test error is close to 0.5, then H₀ is not rejected; if the test error is low, then H₀ is rejected.

(i) Suppose we are provided 3n i.i.d. samples from f_{X,Y,Z}(x, y, z). We keep aside n of these original samples in a set U₁ (refer to Fig. 1). The remaining 2n original samples are processed through our first module, the nearest-neighbor bootstrap (Algorithm 1 in our paper), which produces n simulated samples stored in U′₂. In Section 3, we show that these generated samples in U′₂ are in fact close in total variational distance (defined in Section 3) to the conditionally independent distribution f^CI(x, y, z) := f_{X|Z}(x|z) f_{Y|Z}(y|z) f_Z(z). (Note that the equality f^CI(·) = f_{X,Y,Z}(·) holds only under H₀; our method generates samples close to f^CI(x, y, z) under both hypotheses.)

(ii) Subsequently, the original samples kept aside in U₁ are labeled 1, while the new samples simulated by the nearest-neighbor bootstrap (in U′₂) are labeled 0. The labeled samples (U₁ with label 1 and U′₂ with label 0) are aggregated into a dataset D. This set D is then broken into training and test sets D_r and D_e, each containing n samples.

(iii) Given the labeled training dataset (from step (ii)), we train powerful classifiers such as gradient-boosted trees [6] or deep neural networks [17], which attempt to learn the classes of the samples. If the trained classifier has good accuracy over the test set, then intuitively it means that the joint distribution f_{X,Y,Z}(·) is distinguishable from f^CI (note that the generated samples labeled 0 are close in distribution to f^CI). Therefore, we reject H₀. On the other hand, if the classifier has accuracy close to random guessing, then f_{X,Y,Z}(·) is in fact close to f^CI, and we fail to reject H₀.

For independence testing (i.e., whether X ⊥⊥ Y), classifiers were recently used in [19].
Their key observation was that, given i.i.d. samples (X, Y) from f_{X,Y}(x, y), if the Y coordinates are randomly permuted then the resulting samples exactly emulate the distribution f_X(x) f_Y(y). Thus the problem can be converted to a two-sample test between a subset of the original samples and another subset that is permuted; binary classifiers were then harnessed for this two-sample testing (for details see [19]). However, in the case of CI testing we need to emulate samples from f^CI. This is harder because the permutation of the samples needs to be Z-dependent (and Z can be high-dimensional). One of our key technical contributions is proving that our nearest-neighbor bootstrap in step (i) achieves this task.

The advantage of this modular approach is that we can harness the power of classifiers (in step (iii) above), which have good accuracy in high dimensions. Thus, any improvement in the field of binary classification implies an advancement in our CI test. Moreover, there is added flexibility in choosing the best classifier based on domain knowledge about the data-generation process. Finally, our bootstrap is also efficient, owing to fast algorithms for identifying nearest neighbors [24].

1.1 Main Contributions

(i) (Classification-based CI testing) We reduce the problem of CI testing to binary classification, as detailed in steps (i)-(iii) above and in Fig. 1. We simulate samples that are close to f^CI through a novel nearest-neighbor bootstrap (Algorithm 1), given access to i.i.d. samples from the joint distribution. The problem of CI testing then reduces to a two-sample test between the original samples in U₁ and U′₂, which can be effectively done by binary classifiers.

(ii) (Guarantees on bootstrapped samples) As mentioned in steps (i)-(iii), if the samples generated by the bootstrap (in U′₂) are close to f^CI, then the CI testing problem reduces to testing whether the datasets U₁ and U′₂ are distinguishable from each other.
We theoretically justify that this is indeed true. Let φ_{X,Y,Z}(x, y, z) denote the distribution of a sample produced by Algorithm 1 when it is supplied with 2n i.i.d. samples from f_{X,Y,Z}(·). In Theorem 1, we prove that $d_{TV}(\phi, f^{CI}) = O(1/n^{1/d_z})$ under appropriate smoothness assumptions. Here d_z is the dimension of Z and d_{TV} denotes total variational distance (Def. 1).

(iii) (Generalization bounds for classification under near-independence) The samples generated by the nearest-neighbor bootstrap are not i.i.d., but they are close to i.i.d. We quantify this property and go on to show generalization risk bounds for the classifier. Let us denote the class of functions encoded by the classifier by G. Let R̂ denote the probability of error of the optimal classifier ĝ ∈ G trained on the training set (Fig. 1). We prove that, under appropriate assumptions, we have
$$r_0 - O\!\left(\frac{1}{n^{1/d_z}}\right) \;\le\; \hat{R} \;\le\; r_0 + O\!\left(\frac{1}{n^{1/d_z}}\right) + O\!\left(\sqrt{V}\left(n^{-1/3} + \sqrt{\frac{2^{d_z}}{n}}\right)\right)$$
with high probability, up to log factors. Here $r_0 = 0.5\,(1 - d_{TV}(f, f^{CI}))$ and V is the VC dimension [30] of the class G. Thus, when f is equivalent to f^CI (H₀ holds), the error rate of the classifier is close to 0.5; but when H₁ holds, the loss is much lower. We provide a novel analysis of Rademacher complexity bounds [4] under near-independence, which is of independent interest.

(iv) (Empirical evaluation) We perform extensive numerical experiments where our algorithm outperforms the state of the art [32, 28]. We also apply our algorithm to analyzing CI relations in the protein signaling network data from the flow cytometry dataset [26]. In practice, we observe that the performance with respect to the dimension of Z scales much better than expected from our worst-case theoretical analysis. This is because powerful binary classifiers perform well in high dimensions.

1.2 Related Work

In this paper we address the problem of non-parametric CI testing when the underlying random variables are continuous.
The literature on non-parametric CI testing is vast; we review some of the recent work most relevant to our paper. Most recent work on CI testing is kernel-based [28, 32, 10]. Many of these works build on the study in [11], where non-parametric CI relations are characterized using covariance operators for Reproducing Kernel Hilbert Spaces (RKHS). KCIT [32] uses the partial association of regression functions relating X, Y, and Z. RCIT [28] is an approximate version of KCIT that attempts to improve running times when the number of samples is large. KCIPT [10] is perhaps most relevant to our work. In [10], a specific permutation of the samples is used to simulate data from f^CI; an expensive linear program needs to be solved in order to calculate the permutation. In contrast, we use a simple nearest-neighbor bootstrap, and we further provide theoretical guarantees on the closeness of the samples to f^CI in terms of total variational distance. Finally, the two-sample test in [10] is based on a kernel method [3], while we use binary classifiers for the same purpose. There has also been recent work on entropy estimation [13] using nearest-neighbor techniques (used for density estimation); this can subsequently be used for CI testing by estimating the conditional mutual information I(X; Y | Z). Binary classification has recently been used for two-sample testing, in particular for independence testing [19]. Our analysis of the generalization guarantees of classification aims at recovering guarantees similar to [4], but in a non-i.i.d. setting. In this regard (non-i.i.d. generalization guarantees), there has been recent work proving Rademacher complexity bounds for β-mixing stationary processes [21].
This work also falls into the category of machine learning reductions, where the general philosophy is to reduce various machine learning settings, such as multi-class regression [2], ranking [1], reinforcement learning [18], and structured prediction [8], to binary classification.

2 Problem Setting and Algorithms

In this section we describe the algorithmic details of our CI testing procedure. We first formally define our problem. Then we describe our bootstrap algorithm for generating the dataset that mimics samples from f^CI. We give detailed pseudo-code for our CI testing process, which reduces the problem to binary classification. Finally, we suggest further improvements to our algorithm.

Problem setting: The problem setting is that of non-parametric Conditional Independence (CI) testing given i.i.d. samples from the joint distribution of random variables/vectors [32, 10, 28]. We are given 3n i.i.d. samples from a continuous joint distribution f_{X,Y,Z}(x, y, z), where x ∈ ℝ^{d_x}, y ∈ ℝ^{d_y}, and z ∈ ℝ^{d_z}. The goal is to test whether X ⊥⊥ Y | Z, i.e., whether f_{X,Y,Z}(x, y, z) factorizes as
$$f_{X,Y,Z}(x, y, z) = f_{X|Z}(x|z)\, f_{Y|Z}(y|z)\, f_Z(z) =: f^{CI}(x, y, z).$$
This is a hypothesis testing problem with H₀ : X ⊥⊥ Y | Z and H₁ : X ̸⊥⊥ Y | Z.

Note: For notational convenience, we drop the subscripts when the context is evident. For instance, we may use f(x|z) in place of f_{X|Z}(x|z).

Nearest-neighbor bootstrap: Algorithm 1 is a procedure that generates a dataset U′ consisting of n samples, given a dataset U of 2n i.i.d. samples from the distribution f_{X,Y,Z}(x, y, z). The dataset U is broken into two equally sized partitions U₁ and U₂. Then, for each sample in U₁, we find the nearest neighbor in U₂ in terms of the Z coordinates. The Y-coordinates of the sample from U₁ are exchanged with the Y-coordinates of its nearest neighbor (in U₂); the modified sample is added to U′.
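The procedure just described can be sketched in a few lines of NumPy (our own rendering, with brute-force 1-NN search; a k-d tree would be faster for large n, and `nn_bootstrap` is a hypothetical name):

```python
import numpy as np

def nn_bootstrap(X1, Y1, Z1, X2, Y2, Z2):
    """For each sample (x, y, z) in U1, swap in the y-coordinates of the U2
    sample whose z is the 1-nearest neighbor of z (l2 norm). Returns U'."""
    # Pairwise squared distances between the z-coordinates of U1 and U2.
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)        # index of the 1-NN in U2 for each U1 sample
    return X1, Y2[nn], Z1         # modified samples (x, y', z)

# Toy check: when the two partitions share identical (distinct) z's,
# each sample recovers exactly its partner's y.
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))
X1, Y1 = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
X2, Y2 = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
Xb, Yb, Zb = nn_bootstrap(X1, Y1, Z, X2, Y2, Z)
```

Note that only the y-coordinates change: the x and z of each U₁ sample pass through untouched, mirroring steps 4-5 of Algorithm 1 below.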
Algorithm 1 DataGen - Given a data-set U = U1 ∪ U2 of 2n i.i.d. samples from f(x, y, z) (|U1| = |U2| = n), returns a new data-set U′ of n samples.
1: function DATAGEN(U1, U2, 2n)
2:   U′ = ∅
3:   for u = (x, y, z) in U1 do
4:     Let v = (x′, y′, z′) ∈ U2 be the sample such that z′ is the 1-nearest neighbor (1-NN) of z (in ℓ2 norm) among the Z-coordinates of U2
5:     Let u′ = (x, y′, z) and U′ = U′ ∪ {u′}
6:   end for
7: end function

One of our main results is that the samples in U′ generated by Algorithm 1 mimic samples from the distribution f^CI. Suppose u = (x, y, z) ∈ U1 is a sample such that f_Z(z) is not too small. In this case z′ (the 1-NN sample from U2) will not be far from z. Therefore, for a fixed z, under appropriate smoothness assumptions, y′ will be close to an independent sample from f_{Y|Z}(y|z′) ≈ f_{Y|Z}(y|z). On the other hand, if f_Z(z) is small, then z is a rare occurrence and will not contribute adversely.

CI Testing Algorithm: We now introduce our CI testing algorithm, which uses Algorithm 1 along with binary classifiers. The pseudo-code is given in Algorithm 2 (Classifier CI Test - CCIT).

Algorithm 2 CCITv1 - Given a data-set U of 3n i.i.d. samples from f(x, y, z), decides whether X ⊥ Y | Z.
1: function CCIT(U, 3n, τ, G)
2:   Randomly partition U into three disjoint partitions U1, U2 and U3 of size n each.
3:   Let U′2 = DataGen(U2, U3, 2n) (Algorithm 1). Note that |U′2| = n.
4:   Create the labeled data-set D := {(u, ℓ = 1)}_{u ∈ U1} ∪ {(u′, ℓ′ = 0)}_{u′ ∈ U′2}
5:   Divide D into a training set Dr and a test set De. Note that |Dr| = |De| = n.
6:   Let ĝ = argmin_{g ∈ G} L̂(g, Dr) := (1/|Dr|) Σ_{(u,ℓ) ∈ Dr} 1{g(u) ≠ ℓ}. This is empirical risk minimization for training the classifier (finding the best function in the class G).
7:   If L̂(ĝ, De) > 0.5 − τ, conclude X ⊥ Y | Z; otherwise, conclude that X and Y are conditionally dependent given Z.
8: end function

In Algorithm 2, the original samples in U1 and the nearest-neighbor bootstrapped samples in U′2 should be almost indistinguishable if H0 holds.
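Concretely, the nearest-neighbor bootstrap of Algorithm 1 amounts to one nearest-neighbor query per sample. The following is a minimal NumPy sketch (the function name and array layout are ours; brute-force search is used for clarity, and a k-d tree would replace it at scale):

```python
import numpy as np

def nn_bootstrap(x1, z1, y2, z2):
    """Sketch of Algorithm 1 (DataGen).

    (x1, z1): X- and Z-coordinates of the n samples in U1.
    (y2, z2): Y- and Z-coordinates of the n samples in U2.
    For each sample in U1, the Y-coordinates of its 1-nearest neighbor
    in U2 (nearest in Z, l2 norm) are swapped in; X and Z are kept.
    """
    # Brute-force pairwise squared distances between the Z-coordinates.
    d2 = ((z1[:, None, :] - z2[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)      # index of each U1 sample's 1-NN in U2
    return x1, y2[nn], z1       # triples approximately drawn from f^CI
```

The returned triples keep each sample's own (X, Z) pair while the Y-coordinate comes from a nearby value of Z, which is exactly the mechanism the smoothness assumptions below make precise.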
However, if H1 holds, the classifier trained in Line 6 should be able to easily distinguish between the samples corresponding to the two labels. In Line 6, G denotes the space of functions over which risk minimization is performed. We will show (Theorem 1) that the variational distance between the distribution of any one sample in U′2 and f^CI(x, y, z) is very small for large n. However, the samples in U′2 are not exactly i.i.d., only close to i.i.d. Therefore, in practice for finite n, there is a small bias b > 0, i.e., L̂(ĝ, De) ≈ 0.5 − b even when H0 holds. The threshold τ needs to be greater than b in order for Algorithm 2 to function. In the next section, we present an algorithm that corrects for this bias.

Algorithm with Bias Correction: We present an improved, bias-corrected version of our algorithm as Algorithm 3. As mentioned above, in Algorithm 2 the optimal classifier may achieve a loss slightly less than 0.5 for finite n, even when H0 is true. However, the classifier is expected to distinguish between the two data-sets only based on the Y- and Z-coordinates, as the joint distribution of X and Z remains the same under the nearest-neighbor bootstrap. The key idea in Algorithm 3 is to train a classifier ĝ′ using only the Y- and Z-coordinates, in addition to a classifier ĝ trained on all the coordinates. The test loss of ĝ′ is expected to be roughly 0.5 − b, where b is the bias mentioned above, so we can simply subtract this bias: when H0 is true, L̂(ĝ′, D′e) − L̂(ĝ, De) will be close to 0, whereas when H1 holds, L̂(ĝ, De) will be much lower, as ĝ has been trained leveraging the information encoded in all the coordinates.

Algorithm 3 CCITv2 - Given a data-set U of 3n i.i.d. samples, decides whether X ⊥ Y | Z.
1: function CCIT(U, 3n, τ, G)
2:   Perform Steps 1-5 as in Algorithm 2.
3:   Let D′r = {((y, z), ℓ)}_{(u=(x,y,z),ℓ) ∈ Dr} and, similarly, D′e = {((y, z), ℓ)}_{(u=(x,y,z),ℓ) ∈ De}. These are the training and test sets with the X-coordinates removed.
4:   Let ĝ = argmin_{g ∈ G} L̂(g, Dr) := (1/|Dr|) Σ_{(u,ℓ) ∈ Dr} 1{g(u) ≠ ℓ}. Compute the test loss L̂(ĝ, De).
5:   Let ĝ′ = argmin_{g ∈ G} L̂(g, D′r) := (1/|D′r|) Σ_{(u,ℓ) ∈ D′r} 1{g(u) ≠ ℓ}. Compute the test loss L̂(ĝ′, D′e).
6:   If L̂(ĝ, De) < L̂(ĝ′, D′e) − τ, conclude that X and Y are conditionally dependent given Z; otherwise, conclude X ⊥ Y | Z.
7: end function

3 Theoretical Results

In this section we provide our main theoretical results. We first show that the distribution of any one of the samples generated by Algorithm 1 closely resembles that of a sample from f^CI. This result holds for a broad class of distributions f_{X,Y,Z}(x, y, z) satisfying certain smoothness assumptions. However, the samples generated by Algorithm 1 (U′ in the algorithm) are not exactly i.i.d., only close to i.i.d. We quantify this, and go on to show that empirical risk minimization over a class of classifier functions generalizes well using these samples. Before formally stating our results, we provide some useful definitions.

Definition 1. The total variation distance between two continuous probability distributions f(·) and g(·) defined over a domain X is d_TV(f, g) = sup_{p ∈ B} |E_f[p(X)] − E_g[p(X)]|, where B is the set of all measurable functions from X to [0, 1] and E_f[·] denotes expectation under the distribution f.

We first prove that the distribution of any one of the samples generated by Algorithm 1 is close to f^CI in total variation distance. We make the following assumptions on the joint distribution of the original samples, f_{X,Y,Z}(x, y, z):

Smoothness assumption on f(y|z): We assume a smoothness condition on f(y|z) that generalizes boundedness of the maximum eigenvalue of the Fisher information matrix of y with respect to z.

Assumption 1.
For z ∈ R^{d_z} and any a such that ‖a − z‖₂ ≤ ε₁, the generalized curvature matrix I_a(z) is

    I_a(z)_{ij} = ∂²/∂z′_i ∂z′_j [ ∫ log( f(y|z) / f(y|z′) ) f(y|z) dy ] |_{z′=a}
               = E[ −∂² log f(y|z′) / ∂z′_i ∂z′_j |_{z′=a} | Z = z ].    (1)

We require that for all z ∈ R^{d_z} and all a such that ‖a − z‖₂ ≤ ε₁, λ_max(I_a(z)) ≤ β. Analogous assumptions have been made on the Hessian of the density in the context of entropy estimation [12].

Smoothness assumptions on f(z): We also assume some smoothness properties of the probability density function f(z). The conditions in Assumption 2 are a subset of the assumptions made in [13] (Assumption 1, page 5 therein) for entropy estimation.

Definition 2. For any δ > 0, define G(δ) = P(f(Z) ≤ δ), the probability mass of the distribution of Z in the regions where the p.d.f. is at most δ.

Definition 3 (Hessian matrix). Let H_f(z) denote the Hessian matrix of the p.d.f. f(z) with respect to z, i.e., H_f(z)_{ij} = ∂²f(z)/∂z_i ∂z_j, provided f is twice continuously differentiable at z.

Assumption 2. The probability density function f(z) satisfies the following: (1) f(z) is twice continuously differentiable and the Hessian matrix satisfies ‖H_f(z)‖₂ ≤ c_{d_z} almost everywhere, where c_{d_z} depends only on the dimension; (2) ∫ f(z)^{1−1/d} dz ≤ c₃ for all d ≥ 2, where c₃ is a constant.

Theorem 1. Let (X, Y′, Z) denote a sample in U′2 produced by Algorithm 1 by modifying the original sample (X, Y, Z) ∈ U1, when supplied with 2n i.i.d. samples from the original joint distribution f_{X,Y,Z}(x, y, z), and let φ_{X,Y,Z}(x, y, z) be the distribution of (X, Y′, Z). Under Assumptions 1 and 2, for any ε < ε₁ and n large enough, we have

    d_TV(φ, f^CI) ≤ b(n) ≜ (1/2) √( (β/4) [ c₃ · 2^{1/d_z} Γ(1/d_z) / ((n γ_{d_z})^{1/d_z} d_z) + β ε G(2 c_{d_z} ε²) / 4 ] )
                           + exp( −(1/2) n γ_{d_z} c_{d_z} ε^{d_z+2} ) + G(2 c_{d_z} ε²).

Here γ_d is the volume of the unit-radius ℓ2 ball in R^d. Theorem 1 bounds the total variation distance between the distribution of a sample generated by Algorithm 1 and the conditionally independent distribution f^CI.
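Before moving to the generalization analysis, the whole pipeline can be seen in code. Below is a minimal end-to-end sketch of the bias-corrected test (Algorithm 3); the function name and split details are ours, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost classifier used in the paper's experiments:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def ccit_v2(x, y, z, tau=None, seed=0):
    """Sketch of Algorithm 3 (CCITv2). Returns True if X ⊥ Y | Z is
    accepted, False if conditional dependence is declared."""
    rng = np.random.default_rng(seed)
    m = len(x) // 3
    perm = rng.permutation(len(x))
    i1, i2, i3 = perm[:m], perm[m:2 * m], perm[2 * m:3 * m]

    # Nearest-neighbor bootstrap (Algorithm 1): for each sample in U2,
    # swap in the Y-coordinates of its 1-NN (in Z) from U3.
    d2 = ((z[i2][:, None, :] - z[i3][None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)
    real = np.hstack([x[i1], y[i1], z[i1]])      # label 1: original samples
    fake = np.hstack([x[i2], y[i3][nn], z[i2]])  # label 0: approx. f^CI

    feats = np.vstack([real, fake])
    labels = np.r_[np.ones(m), np.zeros(m)]
    train = rng.random(2 * m) < 0.5              # random train/test split

    def test_error(F):
        clf = GradientBoostingClassifier(random_state=seed)
        clf.fit(F[train], labels[train])
        return float(np.mean(clf.predict(F[~train]) != labels[~train]))

    err_full = test_error(feats)                 # classifier on (x, y, z)
    err_yz = test_error(feats[:, x.shape[1]:])   # classifier on (y, z) only
    tau = 1.0 / np.sqrt(m) if tau is None else tau
    # Declare dependence only if using X helps beyond the bootstrap bias.
    return not (err_full < err_yz - tau)
```

The (y, z)-only classifier measures the finite-sample bias b of the bootstrap, so only an improvement beyond τ when X is included counts as evidence of conditional dependence.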
We defer the proof of Theorem 1 to Appendix A. Our goal now is to characterize the misclassification error of the trained classifier in Algorithm 2, under both H0 and H1. Consider the distribution of the samples in the data-set Dr used for classification in Algorithm 2. Let q(x, y, z | ℓ = 1) be the marginal distribution of each sample with label 1, and let q(x, y, z | ℓ = 0) denote the marginal distribution of the label-0 samples. Under our construction,

    q(x, y, z | ℓ = 1) = f_{X,Y,Z}(x, y, z), which equals f^CI(x, y, z) if H0 holds and differs from f^CI(x, y, z) if H1 holds;
    q(x, y, z | ℓ = 0) = φ_{X,Y,Z}(x, y, z),    (2)

where φ_{X,Y,Z}(x, y, z) is as defined in Theorem 1. Note that even though the marginal of each label-0 sample is φ_{X,Y,Z}(x, y, z) (Equation (2)), these samples are not exactly i.i.d., owing to the nearest-neighbor bootstrap. We will show that they are in fact close to i.i.d., and that classification risk minimization therefore generalizes similarly to the i.i.d. results for classification [4]. First, we review standard definitions and results from classification theory [4].

Ideal Classification Setting: We consider an idealized classification scenario for CI testing, and in the process define standard quantities from learning theory. Recall that G is the set of classifiers under consideration. Let q̃ be the ideal version of q, given by q̃(x, y, z | ℓ = 1) = f_{X,Y,Z}(x, y, z), q̃(x, y, z | ℓ = 0) = f^CI_{X,Y,Z}(x, y, z), and q̃(ℓ = 1) = q̃(ℓ = 0) = 0.5; in other words, this is the ideal classification scenario for testing CI. Let L(g(u), ℓ) be the loss function for a classifying function g ∈ G on a sample u ≜ (x, y, z) with true label ℓ. In our algorithms the loss is the 0-1 loss, but our results hold for any bounded loss function with |L(g(u), ℓ)| ≤ |L|. For a distribution q̃ and a classifier g, let R_q̃(g) ≜ E_{u,ℓ∼q̃}[L(g(u), ℓ)] be the expected risk of g. The risk-optimal classifier g*_q̃ under q̃ is given by g*_q̃ ≜ argmin_{g∈G} R_q̃(g).
Similarly, for a set of samples S and a classifier g, let R_S(g) ≜ (1/|S|) Σ_{(u,ℓ)∈S} L(g(u), ℓ) be the empirical risk on S. We define g_S as the classifier that minimizes the empirical loss on the observed set of samples S, i.e., g_S ≜ argmin_{g∈G} R_S(g). If the samples in S are generated independently from q̃, then standard results from learning theory state that, with probability at least 1 − δ,

    R_q̃(g_S) ≤ R_q̃(g*_q̃) + C √(V/n) + √(2 log(1/δ)/n),    (3)

where V is the VC dimension [30] of the classification model, C is a universal constant, and n = |S|.

Guarantees under near-independent samples: Our goal is to prove a result like (3) for the classification problem in Algorithm 2. However, in this case we do not have access to i.i.d. samples, because the samples in U′2 are not independent. We will see that they are close to independent in a suitable sense, which brings us to one of our main results, Theorem 2.

Theorem 2. Assume that the joint distribution f(x, y, z) satisfies the conditions of Theorem 1, and further that f(z) has a bounded Lipschitz constant. Consider the classifier ĝ in Algorithm 2 trained on the set Dr, and let S = Dr, so that by our definitions g_S = ĝ. For ε > 0 we have:

(i) With probability at least 1 − 8δ,

    R_q(g_S) − R_q(g*_q) ≤ γ_n ≜ C |L| [ (√V + √(log(1/δ))) (log(n/δ)/n)^{1/3} + √((4 d_z log(n/δ) + o_n(1/ε))/n) + G(ε) ].

Here V is the VC dimension of the classification function class, G is as defined in Definition 2, C is a universal constant, and |L| is the bound on the absolute value of the loss.

(ii) Suppose the loss is L(g(u), ℓ) = 1{g(u) ≠ ℓ} (so that |L| ≤ 1), and suppose the class of classifying functions is such that R_q(g*_q) ≤ r₀ + η, where r₀ ≜ 0.5 (1 − d_TV(q(x, y, z | 1), q(x, y, z | 0))) is the risk of the Bayes-optimal classifier when q(ℓ = 1) = q(ℓ = 0); this is the best loss that any classifier can achieve for this classification problem [4]. In this setting, with probability at least 1 − 8δ we have

    (1/2)(1 − d_TV(f, f^CI)) − b(n)/2 ≤ R_q(g_S) ≤ (1/2)(1 − d_TV(f, f^CI)) + b(n)/2 + η + γ_n,

where b(n) is as defined in Theorem 1.

We prove Theorem 2 as Theorems 3 and 4 in the appendix. Part (i) shows that generalization bounds hold even when the samples are not exactly i.i.d. Intuitively, consider two sample inputs u_i, u_j ∈ U1 whose Z-coordinates z_i and z_j are far apart; we then expect the resulting samples u′_i and u′_j (in U′2) to be nearly independent. By carefully capturing this notion of spatial near-independence, we prove the generalization bound in Theorem 3. Part (ii) implies that the error of the trained classifier will be close to 0.5 (the lower bound) when f ≈ f^CI (under H0). On the other hand, under H1, if d_TV(f, f^CI) > 1 − γ, the error will be at most 0.5(γ + b(n)) + γ_n, which is small.

4 Empirical Results

In this section we provide empirical results comparing our proposed algorithm with other state-of-the-art algorithms. The algorithms under comparison are: (i) CCIT - Algorithm 3 in our paper, using XGBoost [6] as the classifier; for each data-set we bootstrap the samples and run our algorithm B times, averaging the results over the B bootstrap runs.¹ (ii) KCIT - the kernel CI test from [32]; we use the Matlab code available online. (iii) RCIT - the randomized CI test from [28]; we use the publicly available R package.

¹The Python package for our implementation can be found here (https://github.com/rajatsen91/CCIT).

4.1 Synthetic Experiments

We perform synthetic experiments in the post-nonlinear-noise regime, similar to [32]. In our experiments X and Y have dimension 1, while the dimension of Z scales (motivated by causal settings, and also used in [32, 28]). When H0 holds, X and Y are generated according to the relation G(F(Z) + η), where η is a noise term and G is a non-linear function. In our experiments, the data is generated as follows: (i) when X ⊥ Y | Z, each coordinate of Z is a Gaussian with unit mean and variance, X = cos(aᵀZ + η₁) and Y = cos(bᵀZ + η₂). Here a, b ∈ R^{d_z} with ‖a‖ = ‖b‖ = 1 are fixed while generating a single dataset, and η₁, η₂ are zero-mean Gaussian noise variables, independent of everything else, with Var(η₁) = Var(η₂) = 0.25. (ii) When X and Y are conditionally dependent given Z, everything is identical to (i) except that Y = cos(bᵀZ + cX + η₂) for a randomly chosen constant c ∈ [0, 2]. In Fig. 2a we plot the performance of the algorithms as the dimension of Z scales. For each point in the plot, 300 data-sets were generated with the appropriate dimensions, half according to H0 and half according to H1. Each algorithm is run on these data-sets, and the ROC AUC (area under the receiver operating characteristic curve) score is calculated from the true labels (CI or not CI) and the predicted scores. We observe that the accuracy of CCIT is close to 1 for dimensions up to 70, while the other algorithms do not scale as well. In these experiments the number of bootstraps per data-set for CCIT was set to B = 50. We set the threshold in Algorithm 3 to τ = 1/√n, which is an upper bound on the expected variance of the test statistic when H0 holds.

4.2 Flow-Cytometry Dataset

We use our CI testing algorithm to verify CI relations in the protein-network data from the flow-cytometry dataset [26], which gives expression levels of 11 proteins under various experimental conditions. The ground-truth causal graph is not known with absolute certainty for this data-set, but it has been widely used in the causal structure learning literature. We take three popular learned causal structures recovered by causal discovery algorithms, and verify CI relations treating each of these graphs as the ground truth. The three graphs are: (i) the consensus graph from [26] (Fig. 1(a) in [22]); (ii) the graph reconstructed by Sachs et al. [26] (Fig.
1(b) in [22]); and (iii) the graph reconstructed in [22] (Fig. 1(c) in [22]). For each graph we generate CI relations as follows: for each node X in the graph, identify the set Z consisting of its parents, children, and parents of children in the causal graph. Conditioned on this set Z, X is independent of every other node Y in the graph (apart from those in Z). We use this to create all CI conditions of this type from each of the three graphs, generating over 60 CI relations per graph. In order to evaluate the false positives of our algorithms, we also need relations in which X and Y are conditionally dependent given Z. For this, we observe that if there is an edge between two nodes, they are never CI given any other conditioning set. For each graph we generate 50 such non-CI relations: an edge X ↔ Y is selected at random, and a conditioning set of size 3 is randomly selected from the remaining nodes. In Fig. 2 we display the performance of all three algorithms, considering each of the three graphs as ground truth. The algorithms are given access to observational data for verifying the CI and non-CI relations. Fig. 2b shows the ROC plot of all three algorithms on the data-set generated from graph (ii), and Table 2c gives the ROC AUC scores for the three graphs. Our algorithm outperforms the others in all three cases, even though the dimensionality of Z is fairly low (less than 10 in all cases). Interestingly, the edges (pkc-raf), (pkc-mek) and (pka-p38) appear in all three graphs, yet all three CI testers (CCIT, KCIT and RCIT) are fairly confident that these edges should be absent. These edges may be discrepancies in the ground-truth graphs, which would explain why the ROC AUC scores of the algorithms are lower than expected.

Algo.  Graph (i)  Graph (ii)  Graph (iii)
CCIT   0.6848     0.7778      0.7156
RCIT   0.6448     0.7168      0.6928
KCIT   0.6528     0.7416      0.6610
(c)

Figure 2: In (a) we plot the performance of CCIT, KCIT and RCIT on the post-nonlinear-noise synthetic data. For each point in the plots, 300 data-sets are generated, half according to H0 and half according to H1; the algorithms are run on each, and the ROC AUC score is plotted. In (a) the number of samples is n = 1000, while the dimension of Z varies. In (b) we plot the ROC curves of all three algorithms on the data from graph (ii) of the flow-cytometry dataset. The ROC AUC scores, considering each of the three graphs as ground truth, are given in (c).

5 Conclusion

In this paper we presented a model-powered approach to CI testing by converting it into binary classification, thus empowering CI testing with powerful supervised learning tools like gradient-boosted trees. We provide an efficient nearest-neighbor bootstrap which makes the reduction to classification possible, along with theoretical guarantees on the bootstrapped samples and risk generalization bounds for our classification problem under non-i.i.d., near-independent samples. In conclusion, we believe that model-driven, data-dependent approaches can be extremely useful in general statistical testing and estimation problems, as they enable the use of powerful supervised learning tools.

Acknowledgments

This work is partially supported by NSF grants CNS 1320175 and NSF SaTC 1704778, ARO grants W911NF-17-1-0359 and W911NF-16-1-0377, and the US DoT supported D-STOP Tier 1 University Transportation Center.

References

[1] Maria-Florina Balcan, Nikhil Bansal, Alina Beygelzimer, Don Coppersmith, John Langford, and Gregory Sorkin. Robust reductions from ranking to classification. Learning Theory, pages 604–619, 2007.
[2] Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and Alex Strehl.
Conditional probability tree estimation analysis and algorithms. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 51–58. AUAI Press, 2009.
[3] Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57, 2006.
[4] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[5] Eliot Brenner and David Sontag. SparsityBoost: A new scoring function for learning Bayesian network structure. arXiv preprint arXiv:1309.6820, 2013.
[6] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM, 2016.
[7] Jie Cheng, David Bell, and Weiru Liu. Learning Bayesian networks from data: An efficient approach based on information theory. On World Wide Web at http://www.cs.ualberta.ca/~jcheng/bnpc.htm, 1998.
[8] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75(3):297–325, 2009.
[9] Luis M. De Campos and Juan F. Huete. A new approach for learning belief networks using independence criteria. International Journal of Approximate Reasoning, 24(1):11–37, 2000.
[10] Gary Doran, Krikamol Muandet, Kun Zhang, and Bernhard Schölkopf. A permutation-based kernel conditional independence test. In UAI, pages 132–141, 2014.
[11] Kenji Fukumizu, Francis R. Bach, and Michael I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5(Jan):73–99, 2004.
[12] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Breaking the bandwidth barrier: Geometrical adaptive entropy estimation. In Advances in Neural Information Processing Systems, pages 2460–2468, 2016.
[13] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016.
[14] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8(Mar):613–636, 2007.
[15] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[16] Daphne Koller and Mehran Sahami. Toward optimal feature selection. Technical report, Stanford InfoLab, 1996.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[18] John Langford and Bianca Zadrozny. Reducing T-step reinforcement learning to classification. In Proc. of the Machine Learning Reductions Workshop, 2003.
[19] David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545, 2016.
[20] Colin McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148–188, 1989.
[21] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In Advances in Neural Information Processing Systems, pages 1097–1104, 2009.
[22] Joris Mooij and Tom Heskes. Cyclic causal discovery from continuous equilibrium data. arXiv preprint arXiv:1309.6849, 2013.
[23] Judea Pearl. Causality. Cambridge University Press, 2009.
[24] V. Ramasubramanian and Kuldip K. Paliwal. Fast k-dimensional tree algorithms for nearest neighbor search with application to vector quantization encoding. IEEE Transactions on Signal Processing, 40(3):518–531, 1992.
[25] Bero Roos. On the rate of multivariate Poisson convergence. Journal of Multivariate Analysis, 69(1):120–134, 1999.
[26] Karen Sachs, Omar Perez, Dana Pe'er, Douglas A. Lauffenburger, and Garry P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523–529, 2005.
[27] Peter Spirtes, Clark N. Glymour, and Richard Scheines. Causation, Prediction, and Search. MIT Press, 2000.
[28] Eric V. Strobl, Kun Zhang, and Shyam Visweswaran. Approximate kernel-based conditional independence tests for fast non-parametric causal discovery. arXiv preprint arXiv:1702.03877, 2017.
[29] Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31–78, 2006.
[30] Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity, pages 11–30. Springer, 2015.
[31] Eric P. Xing, Michael I. Jordan, Richard M. Karp, et al. Feature selection for high-dimensional genomic microarray data. In ICML, volume 1, pages 601–608. Citeseer, 2001.
[32] Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Kernel-based conditional independence test and application in causal discovery. arXiv preprint arXiv:1202.3775, 2012.
A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning

Marc Lanctot (lanctot@), Vinicius Zambaldi (vzambaldi@), Audrūnas Gruslys (audrunas@), Angeliki Lazaridou (angeliki@), Karl Tuyls (karltuyls@), Julien Pérolat (perolat@), David Silver (davidsilver@), Thore Graepel (thore@)
DeepMind. All author emails: ...@google.com

Abstract

To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents' policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. We then present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.

1 Introduction

Deep reinforcement learning combines deep learning [59] with reinforcement learning [94, 64] to compute a policy used to drive decision-making [73, 72]. Traditionally, a single agent interacts with its environment repeatedly, iteratively improving its policy by learning from its observations. Inspired by recent success in deep RL, we are now seeing a renewed interest in multiagent reinforcement learning (MARL) [90, 17, 99].
In MARL, several agents interact and learn in an environment simultaneously, either competitively, as in Go [91] and poker [39, 105, 74], cooperatively, as when learning to communicate [23, 93, 36], or in some mix of the two [60, 95, 35]. The simplest form of MARL is independent RL (InRL), where each learner is oblivious to the other agents and simply treats all the interaction as part of its ("localized") environment. Aside from the problem that these local environments are non-stationary and non-Markovian [57], resulting in a loss of convergence guarantees for many algorithms, the policies found can overfit to the other agents' policies and hence fail to generalize well. Relatively little work has been done in the RL community on overfitting to the environment [102, 69], but we argue that this is particularly important in multiagent settings, where one must react dynamically to the observed behavior of others. Classical techniques collect or approximate extra information such as the joint values [62, 19, 29, 56], use adaptive learning rates [12], adjust the frequencies of updates [48, 81], or dynamically respond to the other agents' actions online [63, 50]. However, with the notable exceptions of very recent work [22, 80], these have focused on (repeated) matrix games and/or the fully observable case. There have been several proposals for treating partial observability in the multiagent setting. When the model is fully known and the setting is strictly adversarial with two players, there are policy iteration methods based on regret minimization that scale very well when using domain-specific abstractions [27, 14, 46, 47]; these were a major component of the expert no-limit poker AI Libratus [15], and were recently combined with deep learning to create an expert no-limit poker AI called DeepStack [74].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
There is a significant amount of work dealing with the case of decentralized cooperative problems [76, 79], and with the general setting by extending the notion of belief states and Bayesian updating from POMDPs [28]. These models are quite expressive, and the resulting algorithms are fairly complex. In practice, researchers often resort to approximate forms, by sampling or by exploiting structure, to ensure good performance despite the intractability [41, 2, 68]. In this paper, we introduce a new metric for quantifying the correlation effects of policies learned by independent learners, and demonstrate the severity of the overfitting problem. These coordination problems have been well studied in the fully observable cooperative case [70]; we observe similar problems in a partially observed mixed cooperative/competitive setting, and we show that their severity increases as the environment becomes more partially observed. We propose a new algorithm based on economic reasoning [82], which uses (i) deep reinforcement learning to compute best responses to a distribution over policies, and (ii) empirical game-theoretic analysis to compute new meta-strategy distributions. As is common in the MARL setting, we assume centralized training for decentralized execution: policies are represented as separate neural networks, and there is no sharing of gradients or architectures among agents. The basic form uses a centralized payoff table, which is removed in the distributed, decentralized form that requires less space.

2 Background and Related Work

In this section, we start with the basic building blocks necessary to describe the algorithm, interleaving them with the most relevant previous work for our setting. Several components of the general idea have been (re)discovered many times across different research communities, each with slightly different but similar motivations. One aim here is therefore to unify the algorithms and terminology.
A normal-form game is a tuple (Π, U, n), where n is the number of players, Π = (Π_1, ..., Π_n) is the set of policies (or strategies, one set for each player i ∈ [[n]], where [[n]] = {1, ..., n}), and U : Π → ℝ^n is a payoff table of utilities for each joint policy played by all players. Extensive-form games extend this formalism to the multistep sequential case (e.g., poker). Players try to maximize their own expected utility. Each player does this by choosing a policy from Π_i, or by sampling from a mixture (distribution) over them, σ_i ∈ Δ(Π_i). In this multiagent setting, the quality of σ_i depends on the other players' strategies, and so it cannot be found nor assessed independently. Every finite extensive-form game has an equivalent normal form [53], but since it is exponentially larger, most algorithms have to be adapted to handle the sequential setting directly. There are several algorithms for computing strategies. In zero-sum games (where for all π ∈ Π the players' payoffs sum to zero, Σ_i U_i(π) = 0), one can use, e.g., linear programming, fictitious play [13], replicator dynamics [97], or regret minimization [8]. Some of these techniques have been extended to the extensive (sequential) form [39, 25, 54, 107], with an exponential increase in the size of the state space; however, these extensions have almost exclusively treated the two-player case, with some notable exceptions [54, 26]. Fictitious play also converges in potential games, which include cooperative (identical-payoff) games. The double oracle (DO) algorithm [71] solves a sequence of (two-player, normal-form) subgames induced by subsets Π^t ⊆ Π at time t. The payoff matrix for the subgame G^t includes only those entries corresponding to the strategies in Π^t. At each time step t, an equilibrium σ^{*,t} is obtained for G^t, and to obtain G^{t+1} each player adds a best response π_i^{t+1} ∈ BR(σ^{*,t}_{−i}) from the full space Π_i, so for all i, Π_i^{t+1} = Π_i^t ∪ {π_i^{t+1}}. The algorithm is illustrated in Figure 1.
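The double oracle loop just described can be sketched in a few lines for a two-player zero-sum matrix game. This is our own minimal implementation (all names are ours), using an LP via scipy to solve each restricted game:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Equilibrium mixed strategy for the row player of the zero-sum
    matrix game A (row player maximizes p^T A q). Returns (p, value)."""
    m, n = A.shape
    # Maximize v subject to (A^T p)_j >= v for every column j,
    # p >= 0 and sum(p) = 1.  Variables are (v, p_1, ..., p_m).
    c = np.r_[-1.0, np.zeros(m)]                  # linprog minimizes -v
    A_ub = np.hstack([np.ones((n, 1)), -A.T])     # v - (A^T p)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.r_[0.0, np.ones(m)].reshape(1, -1)  # sum(p) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, 1.0)] * m)
    return res.x[1:], res.x[0]

def double_oracle(A, row_init=0, col_init=0):
    """Double oracle: repeatedly solve the restricted game and add each
    player's best response (over the full strategy space) to the sets."""
    R, C = [row_init], [col_init]
    while True:
        sub = A[np.ix_(R, C)]
        p, _ = solve_zero_sum(sub)        # row mixture over R
        q, _ = solve_zero_sum(-sub.T)     # column mixture over C
        br_row = int(np.argmax(A[:, C] @ q))
        br_col = int(np.argmin(p @ A[R, :]))
        if br_row in R and br_col in C:
            return R, p, C, q             # no new strategy improves
        if br_row not in R:
            R.append(br_row)
        if br_col not in C:
            C.append(br_col)
```

On Rock-Paper-Scissors this grows the restricted sets until all three strategies are present, since the only equilibrium has full support.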
Note that finding an equilibrium in a zero-sum game takes time polynomial in |Π^t|, while it is PPAD-complete for general-sum games [89]. Clearly, DO is guaranteed to converge to an equilibrium in two-player games. But, in the worst case, the entire strategy space may have to be enumerated. For example, this is necessary for Rock-Paper-Scissors, whose only equilibrium has full support (1/3, 1/3, 1/3). However, there is evidence that support sizes shrink for many games as a function of episode length, how much hidden information is revealed, and/or the effect that information has on the payoff [65, 86, 10]. Extensions to extensive-form games have been developed [67, 9, 10], but large state spaces remain problematic due to the curse of dimensionality.

Figure 1: The Double Oracle Algorithm. Figure taken from [10] with the authors' permission.

Empirical game-theoretic analysis (EGTA) is the study of meta-strategies obtained through simulation in complex games [100, 101]. An empirical game, much smaller in size than the full game, is constructed by discovering strategies and meta-reasoning about them to navigate the strategy space. This is necessary when it is prohibitively expensive to explicitly enumerate the game's strategies. Expected utilities for each joint strategy are estimated and recorded in an empirical payoff table, the empirical game is analyzed, and the simulation process continues. EGTA has been employed in trading agent competitions (TAC) and automated bidding auctions. One study used evolutionary dynamics in the space of known expert meta-strategies in Poker [83]. Recently, reinforcement learning has been used to validate strategies found via EGTA [104]. In this work, we aim to discover new strategies through learning. However, instead of computing exact best responses, we compute approximate best responses using reinforcement learning. A few epochs of this were demonstrated in continuous double auctions using tile coding [87].
This work follows up in this line, running more epochs, using modern function approximators (deep networks) and a scalable implementation, and with a focus on finding policies that can generalize across contexts. A key development in recent years is deep learning [59]. While most work in deep learning has focused on supervised learning, impressive results have recently been shown using deep neural networks for reinforcement learning, e.g. [91, 38, 73, 77]. For instance, Mnih et al. [73] train policies for playing Atari video games and 3D navigation [72], given only screenshots. Silver et al. introduced AlphaGo [91, 92], combining deep RL with Monte Carlo tree search and outperforming human experts. Computing approximate responses is more computationally feasible, and fictitious play can handle approximations [42, 61]. It is also more biologically plausible given the natural constraints of bounded rationality. In behavioral game theory [103], the focus is to predict actions taken by humans, and responses are intentionally constrained to increase predictive ability; a recent approach uses a deep learning architecture [34]. The work that most closely resembles ours is level-k thinking [20], where level-k agents respond to level k−1 agents; closer still is the cognitive hierarchy model [18], in which responses are to a distribution over levels {0, 1, . . . , k−1}. However, our goals and motivations are very different: we use the setup as a means to produce more general policies rather than to predict human behavior. Furthermore, we consider the sequential setting rather than normal-form games. Lastly, there have been several studies in the literature on co-evolutionary algorithms, specifically on how learning cycles and overfitting to the current populations can be mitigated [78, 85, 52].

3 Policy-Space Response Oracles

We now present our main conceptual algorithm, policy-space response oracles (PSRO).
The algorithm is a natural generalization of Double Oracle where the meta-game's choices are policies rather than actions. It also generalizes Fictitious Self-Play [39, 40]. Unlike previous work, any meta-solver can be plugged in to compute a new meta-strategy. In practice, parameterized policies (function approximators) are used to generalize across the state space without requiring any domain knowledge. The process is summarized in Algorithm 1. The meta-game is represented as an empirical game, starting with a single policy (uniform random) and growing, each epoch, by adding policies ("oracles") that approximate best responses to the meta-strategy of the other players.

Algorithm 1: Policy-Space Response Oracles
  input: initial policy sets for all players Π
  Compute expected utilities U^Π for each joint π ∈ Π
  Initialize meta-strategies σ_i = UNIFORM(Π_i)
  while epoch e in {1, 2, · · · } do
    for player i ∈ [[n]] do
      for many episodes do
        Sample π_{-i} ∼ σ_{-i}
        Train oracle π'_i over ρ ∼ (π'_i, π_{-i})
      Π_i = Π_i ∪ {π'_i}
    Compute missing entries in U^Π from Π
    Compute a meta-strategy σ from U^Π
  Output current solution strategy σ_i for player i

Algorithm 2: Deep Cognitive Hierarchies
  input: player number i, level k
  while not terminated do
    CHECKLOADMS({j | j ∈ [[n]], j ≠ i}, k)
    CHECKLOADORACLES(j ∈ [[n]], k' ≤ k)
    CHECKSAVEMS(σ_{i,k})
    CHECKSAVEORACLE(π_{i,k})
    Sample π_{-i} ∼ σ_{-i,k}
    Train oracle π_{i,k} over ρ_1 ∼ (π_{i,k}, π_{-i})
    if iteration number mod T_ms = 0 then
      Sample π_i ∼ σ_{i,k}
      Compute u_i(ρ_2), where ρ_2 ∼ (π_i, π_{-i})
      Update stats for π_i and update σ_{i,k}
  Output σ_{i,k} for player i at level k

In (episodic) partially observable multiagent environments, when the other players are fixed the environment becomes Markovian, and computing a best response reduces to solving a form of MDP [30]. Thus, any reinforcement learning algorithm can be used. We use deep neural networks due to their recent success in reinforcement learning.
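A minimal instantiation of the loop in Algorithm 1 on a matrix game can clarify the structure. In this toy sketch (ours, not the paper's implementation), the RL oracle is replaced by an exact best response, and the meta-solver is the fictitious-play instance of PSRO (uniform over all past oracles, counted with multiplicity). On Rock-Paper-Scissors, the restricted policy sets grow until they cover all three actions:

```python
import numpy as np

# Row player's payoff table for Rock-Paper-Scissors (zero-sum: column
# player's payoffs are -A). Identifiers are ours, for illustration only.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)

def psro_fictitious_play(num_epochs=100):
    """PSRO sketch: exact best-response oracle + fictitious-play meta-solver."""
    counts = [np.array([1.0, 0.0, 0.0]),   # oracle counts per player;
              np.array([1.0, 0.0, 0.0])]   # both start with 'rock'
    added = [{0}, {0}]                     # distinct policies discovered so far
    for _ in range(num_epochs):
        new = []
        for i in (0, 1):
            # Meta-strategy sigma_{-i}: uniform over the opponent's past oracles.
            sigma_opp = counts[1 - i] / counts[1 - i].sum()
            # Oracle step: exact best response in the *full* policy space.
            payoff = A @ sigma_opp if i == 0 else sigma_opp @ (-A)
            new.append(int(np.argmax(payoff)))
        for i in (0, 1):
            counts[i][new[i]] += 1.0       # Pi_i <- Pi_i  U {pi'_i}
            added[i].add(new[i])
    return counts, added
```

With exact equilibrium meta-solving instead of uniform averaging, the same loop recovers Double Oracle.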
In each episode, one player is set to oracle (learning) mode to train π'_i, and a fixed policy is sampled from the opponents' meta-strategies (π_{-i} ∼ σ_{-i}). At the end of the epoch, the new oracles are added to their policy sets Π_i, and expected utilities for new policy combinations are computed via simulation and added to the empirical tensor U^Π, which takes time exponential in |Π|. Define Π^T = Π^{T-1} ∪ π' as the policy space including the currently learning oracles, with |σ_i| = |Π_i^T| for all i ∈ [[n]]. Iterated best response is an instance of PSRO with σ_{-i} = (0, 0, · · · , 1, 0). Similarly, independent RL and fictitious play are instances of PSRO with σ_{-i} = (0, 0, · · · , 0, 1) and σ_{-i} = (1/K, 1/K, · · · , 1/K, 0), respectively, where K = |Π_{-i}^{T-1}|. Double Oracle is an instance of PSRO with n = 2 and σ^T set to a Nash equilibrium profile of the meta-game (Π^{T-1}, U^{Π^{T-1}}). An exciting question is what can happen with (non-fixed) meta-solvers outside this known space. Fictitious play is agnostic to the policies it is responding to; hence it can only sharpen the meta-strategy distribution by repeatedly generating the same best responses. On the other hand, responses to equilibrium strategies computed by Double Oracle will (i) overfit to a specific equilibrium in the n-player or general-sum case, and (ii) be unable to generalize to parts of the space not reached by any equilibrium strategy in the zero-sum case. Both of these are undesirable when computing general policies that should work well in any context. We balance these overfitting problems with a compromise: meta-strategies with full support that force (mix in) γ exploration over policy selection.

3.1 Meta-Strategy Solvers

A meta-strategy solver takes as input the empirical game (Π, U^Π) and produces a meta-strategy σ_i for each player i. We try three different solvers: regret matching, Hedge, and projected replicator dynamics.
These specific meta-solvers accumulate values for each policy ("arm") and an aggregate value based on all players' meta-strategies. We refer to u_i(σ) as player i's expected value given all players' meta-strategies and the current empirical payoff tensor U^Π (computed via multiple tensor dot products). Similarly, denote u_i(π_{i,k}, σ_{-i}) as the expected utility if player i plays their k-th policy, k ∈ [[K]] ∪ {0}, and the other players play their meta-strategy σ_{-i}. Our strategies use an exploration parameter γ, leading to a lower bound of γ/(K+1) on the probability of selecting any π_{i,k}. The first two meta-solvers (regret matching and Hedge) are straightforward applications of previous algorithms, so we defer the details to Appendix A.¹ Here, we introduce a new solver we call projected replicator dynamics (PRD). From Appendix A, when using the asymmetric replicator dynamics, e.g. with two players, where U^Π = (A, B), the change in probability for the k-th component (i.e., the policy π_{i,k}) of the meta-strategies (σ_1, σ_2) = (x, y) is:

  dx_k/dt = x_k [ (Ay)_k − xᵀAy ],    dy_k/dt = y_k [ (xᵀB)_k − xᵀBy ].

To simulate the replicator dynamics in practice, discretized updates are computed using a step size of δ. We add a projection operator P(·) to these equations that guarantees exploration:

  x ← P(x + δ dx/dt),    y ← P(y + δ dy/dt),

where P(x) = argmin_{x' ∈ Δ_γ^{K+1}} ||x' − x|| if any x_k < γ/(K+1), and x otherwise, and Δ_γ^{K+1} = {x | x_k ≥ γ/(K+1), Σ_k x_k = 1} is the γ-exploratory simplex of size K+1. This enforces exploratory σ_i(π_{i,k}) ≥ γ/(K+1). The PRD approach can be understood as directing exploration, in contrast to standard replicator dynamics approaches that contain isotropic diffusion or mutation terms (which assume undirected and unbiased evolution); for more details see [98].

¹ Appendices are available in the longer technical report version of the paper; see [55].

3.2 Deep Cognitive Hierarchies
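The PRD update and its projection can be sketched as follows. The paper defines P(·) only as an argmin over the γ-exploratory simplex; here (our choice of method) the projection is implemented by shifting out the floor γ/(K+1) and applying the standard sort-based Euclidean simplex projection:

```python
import numpy as np

def project_exploratory_simplex(x, gamma):
    """Euclidean projection onto {x : x_k >= gamma/len(x), sum_k x_k = 1},
    where len(x) plays the role of K+1. Shift out the floor, project onto a
    scaled simplex with the sort-based algorithm, then shift back."""
    x = np.asarray(x, dtype=float)
    eps = gamma / len(x)
    if x.min() >= eps:
        return x                      # already inside the exploratory simplex
    v = x - eps                       # remove the floor
    mass = 1.0 - eps * len(x)         # remaining probability mass
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (mass - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - mass) / (rho + 1.0)
    return np.maximum(v - theta, 0.0) + eps

def prd_step(x, y, A, B, delta=0.01, gamma=0.05):
    """One discretized projected-replicator-dynamics update for U^Pi = (A, B)."""
    dx = x * (A @ y - x @ A @ y)      # replicator dynamics for player 1
    dy = y * (x @ B - x @ B @ y)      # replicator dynamics for player 2
    return (project_exploratory_simplex(x + delta * dx, gamma),
            project_exploratory_simplex(y + delta * dy, gamma))
```

For example, projecting (0.8, 0.15, 0.05) with γ = 0.3 (floor 0.1) yields (0.775, 0.125, 0.1): the deficient coordinate is raised to the floor and the excess is removed from the others at equal Euclidean cost.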
Figure 2: Overview of DCH. (Diagram omitted: N players, K+1 levels, with level-0 agents uniform random.)

While the generality of PSRO is clear and appealing, the RL step can take a long time to converge to a good response. In complex environments, much of the basic behavior that was learned in one epoch may need to be relearned when starting again from scratch; also, it may be desirable to run many epochs to get oracle policies that can recursively reason through deeper levels of contingencies. To overcome these problems, we introduce a practical parallel form of PSRO. Instead of an unbounded number of epochs, we choose a fixed number of levels in advance. Then, for an n-player game, we start nK processes in parallel (level-0 agents are uniform random): each one trains a single oracle policy π_{i,k} for player i and level k and updates its own meta-strategy σ_{i,k}, saving each to a central disk periodically. Each process also maintains copies of all the other oracle policies π_{j,k'≤k} at the current and lower levels, as well as the meta-strategies at the current level σ_{-i,k}, which are periodically refreshed from a central disk. We circumvent storing U^Π explicitly by updating the meta-strategies online. We call this a Deep Cognitive Hierarchy (DCH), in reference to Camerer, Ho, & Chong's model augmented with deep RL. Example oracle response dynamics are shown in Figure 2, and the pseudo-code in Algorithm 2. Since each process uses slightly out-of-date copies of the other processes' policies and meta-strategies, DCH approximates PSRO. Specifically, it trades away accuracy of the correspondence to PSRO for practical efficiency and, in particular, scalability. Another benefit of DCH is an asymptotic reduction in total space complexity. In PSRO, for K policies and n players, the space required to store the empirical payoff tensor is K^n. Each process in DCH stores nK policies of fixed size, and n meta-strategies (and other tables) of size bounded by k ≤ K.
Therefore, the total space required is O(nK · (nK + nK)) = O(n²K²). This is possible due to the use of decoupled meta-solvers, which compute strategies online without requiring a payoff tensor U^Π, and which we describe now.

3.2.1 Decoupled Meta-Strategy Solvers

In the field of online learning, experts algorithms (the "full information" case) receive information about each arm at every round. In the bandit ("partial information") case, feedback is given only for the arm that was pulled. Decoupled meta-solvers are essentially sample-based adversarial bandits [16] applied to games. Empirical strategies are known to converge to Nash equilibria in certain classes of games (e.g. zero-sum and potential games) due to the folk theorem [8]. We try three: decoupled regret-matching [33], Exp3 (decoupled Hedge) [3], and decoupled PRD. Here again, we use exploratory strategies with γ of the uniform strategy mixed in, which is also necessary to ensure that the estimates are unbiased. For decoupled PRD, we maintain running averages for the overall average value and the value of each arm (policy). Unlike in PSRO, in DCH one sample is obtained at a time and the meta-strategy is updated periodically from online estimates.

4 Experiments

In all of our experiments, oracles use Reactor [31] for learning, which has achieved state-of-the-art results in Atari game-playing. Reactor uses Retrace(λ) [75] for off-policy policy evaluation and β-Leave-One-Out policy gradient for policy updates, and supports recurrent network training, which could be important for matching online experiences to those observed during training. The action spaces for each player are identical, but the algorithms do not require this. Our implementation differs slightly from the conceptual descriptions in Section 3; see App. C for details.

First-Person Gridworld Games.
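To illustrate a decoupled meta-solver in the bandit setting, here is a stand-alone sketch of standard Exp3 [3] (our toy implementation, not the paper's code): the γ-uniform mixture guarantees every arm a selection probability of at least γ/K, and the importance-weighted estimate r/p keeps the value estimates unbiased:

```python
import math
import random

class Exp3:
    """Exp3 over one player's policies ("arms"), updated from sampled
    episode returns only. Rewards are assumed scaled to [0, 1]."""

    def __init__(self, num_arms, gamma=0.1):
        self.gamma = gamma
        self.weights = [1.0] * num_arms

    def probabilities(self):
        total = sum(self.weights)
        k = len(self.weights)
        # Mix gamma of the uniform strategy into the weight-proportional play.
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def sample(self):
        probs = self.probabilities()
        return random.choices(range(len(probs)), weights=probs)[0]

    def update(self, arm, reward):
        p = self.probabilities()[arm]
        estimate = reward / p              # unbiased importance-weighted estimate
        k = len(self.weights)
        self.weights[arm] *= math.exp(self.gamma * estimate / k)
```

Run against a stationary best arm, the meta-strategy concentrates on it while never dropping any arm below the γ/K exploration floor.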
Each agent has a local field of view (making the world partially observable), seeing 17 spaces in front, 10 to either side, and 2 spaces behind. Consequently, observations are encoded as 21×20×3 RGB tensors with values in 0–255. Each agent has a choice of turning left or right, moving forward or backward, stepping left or right, not moving, or casting an endless light beam in its current direction. In addition, the agent has two composed actions of moving forward and turning. Actions are executed simultaneously, and the order of resolution is randomized. Agents start on a random spawn point at the beginning of each episode. If an agent is touched ("tagged") by another agent's light beam twice, the target agent is immediately teleported to a spawn point. In laser tag, the source agent then receives 1 point of reward for the tag. In another variant, gathering, there is no tagging, but agents can collect apples, worth 1 point each, which refresh at a fixed rate. In pathfind, there is no tagging and no apples, and both agents get 1 point of reward when both reach their destinations, ending the episode. In every variant, an episode consists of 1000 steps of simulation. Other details, such as the specific maps, can be found in Appendix D. Leduc Poker is a common benchmark in Poker AI, played with a six-card deck: two suits of three cards (Jack, Queen, King) each. Each player antes 1 chip to play and receives one private card. There are two rounds of betting, with a maximum of two raises each, whose values are 2 and 4 chips respectively. After the first round of betting, a single public card is revealed. The input is represented as in [40], which includes one-hot encodings of the private card, the public card, and the history of actions. Note that we use a more difficult version than in previous work; see Appendix D.1 for details.
4.1 Joint Policy Correlation in Independent Reinforcement Learning

To identify the effect of overfitting in independent reinforcement learners, we introduce joint policy correlation (JPC) matrices. To simplify the presentation, we describe here the special case of symmetric two-player games with non-negative rewards; for a general description, see Appendix B.2. Values are obtained by running D instances of the same experiment, differing only in the seed used to initialize the random number generators. Each experiment d ∈ [[D]] (after many training episodes) produces policies (π_1^d, π_2^d). Each entry of the D × D matrix shows the mean return over T = 100 episodes, (1/T) Σ_{t=1}^{T} (R_1^t + R_2^t), obtained when player 1 uses row policy π_1^{d_i} and player 2 uses column policy π_2^{d_j}. Hence, entries on the diagonal represent returns for policies that learned together (i.e., in the same instance), while off-diagonal entries show returns from policies that trained in separate instances.

Figure 3: Example JPC matrices for InRL on the Laser Tag small2 map (left) and small4 map (right). (Heatmap values omitted.)

From a JPC matrix, we compute an average proportional loss in reward as R− = (D̄ − Ō)/D̄, where D̄ is the mean value of the diagonal and Ō is the mean value of the off-diagonals. E.g., in Figure 3 (left): D̄ = 30.44, Ō = 20.03, R− = 0.342. Even in a simple domain with almost full observability (small2), an independently-learned policy could expect to lose 34.2% of its reward when playing with another independently-learned policy, even though it was trained under identical circumstances! This clearly demonstrates an important problem with independent learners.
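The JPC statistic is straightforward to compute from a matrix of returns; the small helper below (our code, for illustration) makes the definition R− = (D̄ − Ō)/D̄ explicit:

```python
import numpy as np

def jpc_loss(returns):
    """Average proportional loss R- = (D_bar - O_bar)/D_bar from a D x D JPC
    matrix of mean episode returns. Diagonal entries: policies that trained
    together; off-diagonal entries: policies from independent instances."""
    m = np.asarray(returns, dtype=float)
    d = m.shape[0]
    diag_mean = np.trace(m) / d                          # D_bar
    off_mean = (m.sum() - np.trace(m)) / (d * (d - 1))   # O_bar
    return (diag_mean - off_mean) / diag_mean
```

For instance, a matrix whose diagonal averages 10 and whose off-diagonals average 5 gives R− = 0.5, i.e. half the reward is lost when mixing independently trained policies.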
In the other variants (gathering and pathfind), we observe no JPC problem, presumably because coordination is not required and the policies are independent. Results are summarized in Table 1. We have also noticed similar effects when using DQN [73] as the oracle training algorithm; see Appendix B.1 for example videos.

Table 1: Summary of JPC results in first-person gridworld games.

Environment  Map     InRL: D̄ / Ō / R−          DCH(Reactor, 2, 10): D̄ / Ō / R−   JPC Reduction
Laser Tag    small2  30.44 / 20.03 / 0.342     28.20 / 26.63 / 0.055             28.7%
Laser Tag    small3  23.06 /  9.06 / 0.625     27.00 / 23.45 / 0.082             54.3%
Laser Tag    small4  20.15 /  5.71 / 0.717     18.72 / 15.90 / 0.150             56.7%
Gathering    field   147.34 / 146.89 / 0.003   139.70 / 138.74 / 0.007           –
Pathfind     merge   108.73 / 106.32 / 0.022   90.15 / 91.492 / < 0              –

We see that a (level-10) DCH agent reduces the JPC problem significantly. On small2, DCH reduces the expected loss down to 5.5%, which is 28.7 percentage points lower than independent learners. The problem gets worse as the map size grows and the environment becomes more partially observed, up to a severe 71.7% average loss. The reduction achieved by DCH also grows, from 28.7% to 56.7%.

Is the Meta-Strategy Necessary During Execution? The figures above use the fully-mixed strategy σ_{i,10}. We also analyze JPC for only the highest-level policy π_{i,10} in the laser tag levels. The values are larger here: R− = 0.147, 0.27, 0.118 for small2–4, respectively, showing the importance of the meta-strategy. However, these still represent significant reductions in JPC: 19.5%, 36.5%, 59.9%.

How Many Levels? On small4, we also compute values for level 5 and level 3: R− = 0.156 and R− = 0.246, corresponding to reductions in JPC of 56.1% and 44%, respectively. Level 5 reduces JPC by a similar amount as level 10 (56.1% vs. 56.7%), while level 3 does so less (44% vs. 56.1%).

4.2 Learning to Safely Exploit and Indirectly Model Opponents in Leduc Poker

We now show results for Leduc poker, where strong benchmark algorithms exist, such as counterfactual regret (CFR) minimization [107, 11].
We evaluate our policies using two metrics. The first is performance against fixed players: random, CFR's average strategy after 500 iterations ("cfr500"), and a purified version ("cfr500pure") that always chooses the action with highest probability. The second is commonly used in poker AI:

  NashConv(σ) = Σ_{i=1}^{n} [ max_{σ'_i ∈ Σ_i} u_i(σ'_i, σ_{-i}) − u_i(σ) ],

which measures how much each player can gain by unilaterally deviating to their best response, a value that can be interpreted as a distance from a Nash equilibrium (called exploitability in the two-player zero-sum setting). NashConv is easy to compute in small enough games [45]; for CFR's values see Appendix E.1.

Effect of Exploration and Meta-Strategy Overview. We now analyze the effect of the various meta-strategies and exploration parameters. In Figure 4, we measure the mean area-under-the-curve (MAUC) of the last (right-most) 32 values in the NashConv graph, with an exploration rate of γ = 0.4. Figures for the other values of γ are in Appendix E; we found this value of γ works best for minimizing NashConv. We also found that decoupled replicator dynamics works best, followed by decoupled regret-matching and Exp3, and that the higher the level, the lower the resulting NashConv value, with diminishing improvements. For exploitation, we found that γ = 0.1 was best, but the meta-solvers seemed to have little effect (see Figure 10).

Comparison to Neural Fictitious Self-Play. We now compare to Neural Fictitious Self-Play (NFSP) [40], an implementation of fictitious play in sequential games using reinforcement learning. Note that NFSP, PSRO, and DCH are all sample-based learning algorithms that use general function approximation, whereas CFR is a tabular method that requires a full game-tree pass per iteration. NashConv graphs are shown for the {2,3}-player cases in Figure 5, and performance vs. fixed bots in Figure 6.
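For a normal-form game, NashConv follows directly from the definition; the helper below (ours, restricted to the two-player case for illustration) returns the sum of unilateral best-response gains:

```python
import numpy as np

def nash_conv(A, B, sigma1, sigma2):
    """NashConv for a two-player normal-form game with payoff tables (A, B):
    the sum over players of the gain from unilaterally deviating to a best
    response. Zero iff (sigma1, sigma2) is a Nash equilibrium; in the
    two-player zero-sum case this is the standard exploitability."""
    sigma1 = np.asarray(sigma1, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    u1 = sigma1 @ A @ sigma2          # player 1's value under the profile
    u2 = sigma1 @ B @ sigma2          # player 2's value under the profile
    br1 = np.max(A @ sigma2)          # player 1's best unilateral deviation
    br2 = np.max(sigma1 @ B)          # player 2's best unilateral deviation
    return (br1 - u1) + (br2 - u2)
```

On Rock-Paper-Scissors, the uniform profile has NashConv 0 (it is the equilibrium), while both players playing pure "rock" has NashConv 2, since each could gain 1 by deviating to "paper".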
Figure 4: (a) Effect of DCH parameters on NashConv in 2-player Leduc Poker (left: decoupled PRD, middle: decoupled RM, right: Exp3); (b) MAUC of the exploitation graph against cfr500. (Plots omitted.)

Figure 5: Exploitability (NashConv) for NFSP vs. DCH vs. PSRO: (a) 2 players, (b) 3 players. (Plots omitted.)

Figure 6: Evaluation against a fixed set of bots: (a) random bots as reference set, (b) 2-player cfr500 bots as reference set, (c) 3-player cfr500 bots as reference set. Each data point is an average of the four latest values. (Plots omitted.)

We observe that DCH (and PSRO) converge faster than NFSP at the start of training, possibly due to a better meta-strategy than the uniform random one used in fictitious play. The convergence curves eventually plateau: DCH in the two-player case is most affected, possibly due to the asynchronous nature of its updates, and NFSP converges to a lower exploitability in later episodes. We believe that this is due to NFSP's ability to learn a more accurate mixed average strategy at states far down in the tree, which is particularly important in poker, whereas DCH and PSRO mix at the top over full policies. On the other hand, we see that PSRO/DCH are able to achieve higher performance against the fixed players.
Presumably, this is because the policies produced by PSRO/DCH are better able to recognize flaws in the weaker opponents' policies, since the oracles are specifically trained for this, and to dynamically adapt to the exploitative response during the episode. So, NFSP is computing a safe equilibrium, while PSRO/DCH may be trading convergence precision for the ability to adapt to a range of different play observed during training; in this context, that amounts to computing a robust counter-strategy [44, 24].

5 Conclusion and Future Work

In this paper, we quantify a severe problem with independent reinforcement learners, joint policy correlation (JPC), that limits the generality of these approaches. We describe a generalized algorithm for multiagent reinforcement learning that subsumes several previous algorithms. In our experiments, we show that PSRO/DCH produces general policies that significantly reduce JPC in partially observable coordination games, and robust counter-strategies that safely exploit opponents in a common competitive imperfect-information game. The generality offered by PSRO/DCH can be seen as a form of "opponent/teammate regularization", and has also been observed recently in practice [66, 5]. We emphasize the game-theoretic foundations of these techniques, which we hope will inspire further investigation into algorithm development for multiagent reinforcement learning. In future work, we will consider maintaining diversity among oracles via loss penalties based on policy dissimilarity, general response graph topologies, environments such as emergent language games [58] and RTS games [96, 84], and other architectures for the prediction of behavior, such as opponent modeling [37] and imagining future states via auxiliary tasks [43]. We would also like to investigate fast online adaptation [1, 21] and the relationship to computational Theory of Mind [106, 4], as well as generalized (transferable) oracles over similar opponent policies using successor features [6].
Acknowledgments. We would like to thank DeepMind and Google for providing an excellent research environment that made this work possible. We would also like to thank the anonymous reviewers and several people for helpful comments: Johannes Heinrich, Guy Lever, Remi Munos, Joel Z. Leibo, Janusz Marecki, Tom Schaul, Noam Brown, Kevin Waugh, Georg Ostrovski, Sriram Srinivasan, Neil Rabinowitz, and Vicky Holgate.

References

[1] Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments. CoRR, abs/1710.03641, 2017.
[2] Christopher Amato and Frans A. Oliehoek. Scalable planning and learning for multiagent POMDPs. In AAAI-15, pages 1995–2002, January 2015.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pages 322–331, 1995.
[4] C. L. Baker, R. R. Saxe, and J. B. Tenenbaum. Bayesian theory of mind: Modeling joint belief-desire attribution. In Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society, pages 2469–2474, 2011.
[5] Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. CoRR, abs/1710.03748, 2017.
[6] André Barreto, Will Dabney, Rémi Munos, Jonathan Hunt, Tom Schaul, David Silver, and Hado van Hasselt. Transfer in reinforcement learning with successor features and generalised policy improvement. In Proceedings of the Thirty-First Annual Conference on Neural Information Processing Systems (NIPS 2017), 2017. To appear. Preprint available at http://arxiv.org/abs/1606.05312.
[7] Daan Bloembergen, Karl Tuyls, Daniel Hennes, and Michael Kaisers. Evolutionary dynamics of multi-agent learning: A survey. J. Artif. Intell. Res. (JAIR), 53:659–697, 2015.
[8] A. Blum and Y. Mansour. Learning, regret minimization, and equilibria. In Algorithmic Game Theory, chapter 4. Cambridge University Press, 2007.
[9] Branislav Bosansky, Viliam Lisy, Jiri Cermak, Roman Vitek, and Michal Pechoucek. Using double-oracle method and serialized alpha-beta search for pruning in simultaneous moves games. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), 2013.
[10] Branislav Bošanský, Viliam Lisý, Marc Lanctot, Jiří Čermák, and Mark H. M. Winands. Algorithms for computing strategies in two-player simultaneous move games. Artificial Intelligence, 237:1–40, 2016.
[11] Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up Limit Hold'em Poker is solved. Science, 347(6218):145–149, January 2015.
[12] Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136:215–250, 2002.
[13] G. W. Brown. Iterative solutions of games by fictitious play. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation, pages 374–376. John Wiley & Sons, Inc., 1951.
[14] Noam Brown, Sam Ganzfried, and Tuomas Sandholm. Hierarchical abstraction, distributed equilibrium computation, and post-processing, with application to a champion no-limit Texas Hold'em agent. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 7–15. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[15] Noam Brown and Tuomas Sandholm. Safe and nested subgame solving for imperfect-information games. CoRR, abs/1705.02955, 2017.
[16] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[17] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(2):156–172, 2008.
[18] Colin F. Camerer, Teck-Hua Ho, and Juin-Kuan Chong. A cognitive hierarchy model of games. The Quarterly Journal of Economics, 2004.
[19] C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), pages 746–752, 1998.
[20] M. A. Costa-Gomes and V. P. Crawford. Cognition and behavior in two-person guessing games: An experimental study. The American Economic Review, 96(6):1737–1768, 2006.
[21] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
[22] Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H. S. Torr, Pushmeet Kohli, and Shimon Whiteson. Stabilising experience replay for deep multi-agent reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), 2017.
[23] Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In 30th Conference on Neural Information Processing Systems (NIPS 2016), 2016.
[24] Sam Ganzfried and Tuomas Sandholm. Safe opponent exploitation. ACM Transactions on Economics and Computation (TEAC), 3(2):1–28, 2015. Special issue on selected papers from EC-12.
[25] N. Gatti, F. Panozzo, and M. Restelli. Efficient evolutionary dynamics with extensive-form games. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 335–341, 2013.
[26] Richard Gibson. Regret minimization in non-zero-sum games with applications to building champion multiplayer computer poker agents. CoRR, abs/1305.0034, 2013.
[27] A. Gilpin. Algorithms for Abstracting and Solving Imperfect Information Games. PhD thesis, Carnegie Mellon University, 2009.
[28] Gmytrasiewicz and Doshi. A framework for sequential planning in multiagent settings. Journal of Artificial Intelligence Research, 24:49–79, 2005.
[29] Amy Greenwald and Keith Hall. Correlated Q-learning. In Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), pages 242–249, 2003.
[30] Amy Greenwald, Jiacui Li, and Eric Sodomka. Solving for best responses and equilibria in extensive-form games with reinforcement learning methods. In Rohit Parikh on Logic, Language and Society, volume 11 of Outstanding Contributions to Logic, pages 185–226. Springer International Publishing, 2017.
[31] Audrunas Gruslys, Mohammad Gheshlaghi Azar, Marc G. Bellemare, and Remi Munos. The Reactor: A sample-efficient actor-critic architecture. CoRR, abs/1704.04651, 2017.
[32] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[33] Sergiu Hart and Andreu Mas-Colell. A reinforcement procedure leading to correlated equilibrium. In Economics Essays: A Festschrift for Werner Hildenbrand. Springer Berlin Heidelberg, 2001.
[34] Jason S. Hartford, James R. Wright, and Kevin Leyton-Brown. Deep learning for predicting human strategic behavior. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), 2016.
[35] Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. In Proceedings of the International Conference on Learning Representations (ICLR), May 2016.
[36] Matthew John Hausknecht. Cooperation and communication in multiagent deep reinforcement learning. PhD thesis, University of Texas at Austin, Austin, USA, 2016.
[37] He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1804–1813, 2016.
[38] Nicolas Heess, Gregory Wayne, David Silver, Timothy P. Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2944–2952, 2015.
[39] Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), 2015.
[40] Johannes Heinrich and David Silver. Deep reinforcement learning from self-play in imperfect-information games. CoRR, abs/1603.01121, 2016.
[41] Trong Nghia Hoang and Kian Hsiang Low. Interactive POMDP lite: Towards practical planning to predict and exploit intentions for interacting with self-interested agents. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, pages 2298–2305. AAAI Press, 2013.
[42] Josef Hofbauer and William H. Sandholm. On the global convergence of stochastic fictitious play. Econometrica, 70(6):2265–2294, November 2002.
[43] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. CoRR, abs/1611.05397, 2016.
[44] M. Johanson, M. Zinkevich, and M. Bowling. Computing robust counter-strategies. In Advances in Neural Information Processing Systems 20 (NIPS), pages 1128–1135, 2008. A longer version is available as a University of Alberta Technical Report, TR07-15.
[45] Michael Johanson, Michael Bowling, Kevin Waugh, and Martin Zinkevich. Accelerating best response calculation in large extensive games. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pages 258–265, 2011.
[46] Michael Johanson, Neil Burch, Richard Valenzano, and Michael Bowling. Evaluating state-space abstractions in extensive-form games. In Proceedings of the Twelfth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2013.
[47] Michael Bradley Johanson. Robust Strategies and Counter-Strategies: From Superhuman to Optimal Play. PhD thesis, University of Alberta, 2016. http://johanson.ca/publications/theses/2016-johanson-phd-thesis/2016-johanson-phd-thesis.pdf.
[48] Michael Kaisers and Karl Tuyls. Frequency adjusted multi-agent Q-learning. In 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Toronto, Canada, May 10-14, 2010, Volume 1-3, pages 309–316, 2010.
[49] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[50] M. Kleiman-Weiner, M. K. Ho, J. L. Austerweil, M. L. Littman, and J. B. Tenenbaum. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016.
[51] D. Koller, N. Megiddo, and B. von Stengel. Fast algorithms for finding randomized strategies in game trees. In Proceedings of the 26th ACM Symposium on Theory of Computing (STOC '94), pages 750–759, 1994.
[52] Kostas Kouvaris, Jeff Clune, Loizos Kounios, Markus Brede, and Richard A. Watson. How evolution learns to generalise: Using the principles of learning theory to understand the evolution of developmental organisation. PLOS Computational Biology, 13(4):1–20, April 2017.
[53] H. W. Kuhn. Extensive games and the problem of information. Contributions to the Theory of Games, 2:193–216, 1953.
[54] Marc Lanctot. Further developments of extensive-form replicator dynamics using the sequence-form representation.
In Proceedings of the Thirteenth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1257–1264, 2014. [55] Marc Lanctot, Vinicius Zambaldi, Audr¯unas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. CoRR, abs/1711.00832, 2017. [56] M. Lauer and M. Riedmiller. Reinforcement learning for stochastic cooperative multi-agent systems. In Proceedings of the AAMAS ’04, New York, 2004. [57] Guillaume J. Laurent, Laëtitia Matignon, and Nadine Le Fort-Piat. The world of independent learners is not Markovian. International Journal of Knowledge-Based and Intelligent Engineering Systems, 15(1):55–64, March 2011. [58] Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language. In Proceedings of the International Conference on Learning Representations (ICLR), April 2017. 11 [59] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015. [60] Joel Z. Leibo, Vinicius Zambaldia, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the Sixteenth International Conference on Autonomous Agents and Multiagent Systems, 2017. [61] David S. Leslie and Edmund J. Collins. Generalised weakened fictitious play. Games and Economic Behavior, 56(2):285–298, 2006. [62] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In In Proceedings of the Eleventh International Conference on Machine Learning, pages 157–163. Morgan Kaufmann, 1994. [63] Michael L. Littman. Friend-or-foe Q-learning in general-sum games. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 322–328, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc. [64] Michael L. Littman. 
Reinforcement learning improves behaviour from evaluative feedback. Nature, 521:445–451, 2015. [65] J. Long, N. R. Sturtevant, M. Buro, and T. Furtak. Understanding the success of perfect information Monte Carlo sampling in game tree search. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 134–140, 2010. [66] Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. CoRR, abs/1706.02275, 2017. [67] N. Burch M. Zinkevich, M. Bowling. A new algorithm for generating equilibria in massive zero-sum games. In Proceedings of the Twenty-Seventh Conference on Artificial Intelligence (AAAI-07), 2007. [68] Janusz Marecki, Tapana Gupta, Pradeep Varakantham, Milind Tambe, and Makoto Yokoo. Not all agents are equal: Scaling up distributed pomdps for agent networks. In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multi-agent Systems, 2008. [69] Vukosi N. Marivate. Improved Empirical Methods in Reinforcement Learning Evaluation. PhD thesis, Rutgers, New Brunswick, New Jersey, 2015. [70] L. Matignon, G. J. Laurent, and N. Le Fort-Piat. Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems. The Knowledge Engineering Review, 27(01):1– 31, 2012. [71] H.B. McMahan, G. Gordon, and A. Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), 2003. [72] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1928–1937, 2016. [73] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. 
Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. [74] Matej Moravˇcík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 358(6362), October 2017. [75] R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, 2016. [76] Ranjit Nair. Coordinating multiagent teams in uncertain domains using distributed POMDPs. PhD thesis, University of Southern California, Los Angeles, USA, 2004. [77] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2863–2871, 2015. 12 [78] F.A. Oliehoek, E.D. de Jong, and N. Vlassis. The parallel Nash memory for asymmetric games. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2006. [79] Frans A. Oliehoek and Christopher Amato. A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems. Springer, 2016. Authors’ pre-print. [80] Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P. How, and John Vian. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), 2017. [81] Liviu Panait, Karl Tuyls, and Sean Luke. 
Theoretical advantages of lenient learners: An evolutionary game theoretic perspective. Journal of Machine Learning Research, 9:423–457, 2008. [82] David C. Parkes and Michael P. Wellman. Economic reasoning and artificial intelligence. Science, 349(6245):267–272, 2015. [83] Marc Ponsen, Karl Tuyls, Michael Kaisers, and Jan Ramon. An evolutionary game theoretic analysis of poker strategies. Entertainment Computing, 2009. [84] F. Sailer, M. Buro, and M. Lanctot. Adversarial planning through strategy simulation. In IEEE Symposium on Computational Intelligence and Games (CIG), pages 37–45, 2007. [85] Spyridon Samothrakis, Simon Lucas, Thomas Philip Runarsson, and David Robles. Coevolving GamePlaying Agents: Measuring Performance and Intransitivities. IEEE Transactions on Evolutionary Computation, April 2013. [86] Martin Schmid, Matej Moravcik, and Milan Hladik. Bounding the support size in extensive form games with imperfect information. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014. [87] L. Julian Schvartzman and Michael P. Wellman. Stronger CDA strategies through empirical gametheoretic analysis and reinforcement learning. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 249–256, 2009. [88] Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improving convolutional neural networks via concatenated rectified linear units. In Proceedings of the International Conference on Machine Learning (ICML), 2016. [89] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2009. [90] Yoav Shoham, Rob Powers, and Trond Grenager. If multi-agent learning is the answer, what is the question? Artif. Intell., 171(7):365–377, 2007. [91] David Silver, Aja Huang, Chris J. 
Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016. [92] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550:354–359, 2017. [93] S. Sukhbaatar, A. Szlam, and R. Fergus. Learning multiagent communication with backpropagation. In 30th Conference on Neural Information Processing Systems (NIPS 2016), 2016. [94] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [95] Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, and Raul Vicente. Multiagent cooperation and competition with deep reinforcement learning. PLoS ONE, 12(4), 2017. [96] Anderson Tavares, Hector Azpurua, Amanda Santos, and Luiz Chaimowicz. Rock, paper, starcraft: Strategy selection in real-time strategy games. In The Twelfth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-16), 2016. 13 [97] Taylor and Jonker. Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40:145–156, 1978. [98] K. Tuyls and R. Westra. Replicator dynamics in discrete and continuous strategy spaces. In Agents, Simulation and Applications, pages 218–243. Taylor and Francis, 2008. [99] Karl Tuyls and Gerhard Weiss. Multiagent learning: Basics, challenges, and prospects. AI Magazine, 33(3):41–52, 2012. [100] W. E. Walsh, R. Das, G. Tesauro, and J.O. Kephart. 
Analyzing complex strategic interactions in multiagent games. In AAAI-02 Workshop on Game Theoretic and Decision Theoretic Agents, 2002., 2002. [101] Michael P. Wellman. Methods for empirical game-theoretic analysis. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2006. [102] S. Whiteson, B. Tanner, M. E. Taylor, and P. Stone. Protecting against evaluation overfitting in empirical reinforcement learning. In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pages 120–127, 2011. [103] James R. Wright and Kevin Leyton-Brown. Beyond equilibrium: Predicting human behavior in normal form games. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 901–907, 2010. [104] Mason Wright. Using reinforcement learning to validate empirical game-theoretic analysis: A continuous double auction study. CoRR, abs/1604.06710, 2016. [105] Nikolai Yakovenko, Liangliang Cao, Colin Raffel, and James Fan. Poker-CNN: A pattern learning strategy for making draws and bets in poker games using convolutional networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016. [106] Wako Yoshida, Ray J. Dolan, and Karl J. Friston. Game theory of mind. PLOS Computational Biology, 4(12):1–14, 12 2008. [107] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems 20 (NIPS 2007), 2008. 14 | 2017 | 140 |
Spectral Mixture Kernels for Multi-Output Gaussian Processes

Gabriel Parra, Department of Mathematical Engineering, Universidad de Chile, gparra@dim.uchile.cl
Felipe Tobar, Center for Mathematical Modeling, Universidad de Chile, ftobar@dim.uchile.cl

Abstract

Early approaches to multiple-output Gaussian processes (MOGPs) relied on linear combinations of independent, latent, single-output Gaussian processes (GPs). This resulted in cross-covariance functions with limited parametric interpretation, thus conflicting with the ability of single-output GPs to interpret lengthscales, frequencies and magnitudes, to name a few. In contrast, current approaches to MOGPs are able to better interpret the relationship between different channels by directly modelling the cross-covariances as a spectral mixture kernel with a phase shift. We extend this rationale and propose a parametric family of complex-valued cross-spectral densities, and then build on Cramér's Theorem (the multivariate version of Bochner's Theorem) to provide a principled approach to the design of multivariate covariance functions. The so-constructed kernels are able to model delays among channels in addition to phase differences, and are thus more expressive than previous methods, while also providing full parametric interpretation of the relationship across channels. The proposed method is first validated on synthetic data and then compared to existing MOGP methods on two real-world examples.

1 Introduction

The extension of Gaussian processes (GPs [1]) to multiple outputs is referred to as multi-output Gaussian processes (MOGPs). MOGPs model temporal or spatial relationships among infinitely many random variables, as scalar GPs do, but also account for the statistical dependence across different sources of data (or channels). This is crucial in a number of real-world applications such as fault detection, data imputation and denoising.
For any two input points $x, x'$, the covariance function of an $m$-channel MOGP, $k(x, x')$, is a symmetric positive-definite $m \times m$ matrix of scalar covariance functions. The design of this matrix-valued kernel is challenging, since we have to deal with the trade-off between (i) choosing a broad class of $m(m-1)/2$ cross-covariances and $m$ auto-covariances, while at the same time (ii) ensuring positive definiteness of the symmetric matrix containing these $m(m+1)/2$ covariance functions for any pair of inputs $x, x'$. In particular, unlike the widely available families of auto-covariance functions (e.g., [2]), cross-covariances are not bound to be positive definite and therefore can be designed freely; the construction of these functions with interpretable functional form is the main focus of this article.

A classical approach to defining cross-covariances for a MOGP is to linearly combine independent latent GPs; this is the case of the Linear Model of Coregionalization (LMC [3]) and the Convolution Model (CONV [4]). In these cases, the resulting kernel is a function of both the covariance functions of the latent GPs and the parameters of the linear operator considered; this results in symmetric and centred cross-covariances. While these approaches are simple, they lack interpretability of the dependencies learnt and force the auto-covariances to have similar behaviour across different channels. The LMC method has also inspired the Cross-Spectral Mixture (CSM) kernel [5], which uses the Spectral Mixture (SM) kernel of [6] within LMC and models phase differences across channels by manually introducing a shift between the cosine and exponential factors of the SM kernel. Despite exhibiting improved performance with respect to previous approaches, the addition of the shift parameter in CSM poses the following question: can the spectral design of multi-output covariance functions be even more flexible?

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We take a different approach to extend the spectral mixture concept to multiple outputs. Recall that, for stationary scalar-valued GPs, [6] designs the power spectral density (PSD) of the process as a mixture of square-exponential functions and then, supported by Bochner's theorem [7], presents the Spectral Mixture kernel via the inverse Fourier transform of the so-constructed PSD. Along the same lines, our main contribution is to propose an expressive family of complex-valued square-exponential cross-spectral densities, and then build on Cramér's theorem [8, 9], the multivariate extension of Bochner's, to construct the Multi-Output Spectral Mixture kernel (MOSM). The proposed multivariate covariance function accounts for all the properties of the Cross-Spectral Mixture kernel in [5], plus a delay component across channels and variable parameters for the auto-covariances of different channels. Additionally, the proposed MOSM provides a clear interpretation of all its parameters in spectral terms. Our experimental contribution includes an illustrative example using a trivariate synthetic signal and validation against all the aforementioned literature using two real-world datasets.

2 Background

Definition 1. A Gaussian process (GP) over the input set $X$ is a real-valued stochastic process $(f(x))_{x \in X}$ such that for any finite subset of inputs $\{x_i\}_{i=1}^N \subset X$, the random variables $\{f(x_i)\}_{i=1}^N$ are jointly Gaussian.

Without loss of generality we will choose $X = \mathbb{R}^n$. A GP [1] defines a distribution over functions $f(x)$ that is uniquely determined by its mean function $m(x) := \mathbb{E}(f(x))$, typically assumed $m(x) = 0$, and its covariance function (also known as kernel) $k(x, x') := \operatorname{cov}(f(x), f(x'))$, $x, x' \in X$. We now equip the reader with the necessary background to follow our proposal: we first review a spectral-based approach to the design of scalar-valued covariance kernels and then present the definition of a multi-output GP.
2.1 The Spectral Mixture kernel

To bypass the explicit construction of positive-definite functions within the design of stationary covariance kernels, it is possible to design the power spectral density (PSD) instead [6] and then transform it into a covariance function using the inverse Fourier transform. This is motivated by the fact that the strict positivity requirement of the PSD is much easier to achieve than the positive-definiteness requirement of the covariance kernel. The theoretical support of this construction is given by the following theorem.

Theorem 1 (Bochner's theorem). An integrable¹ function $k : \mathbb{R}^n \to \mathbb{C}$ is the covariance function of a weakly-stationary mean-square-continuous stochastic process $f : \mathbb{R}^n \to \mathbb{C}$ if and only if it admits the representation
$$k(\tau) = \int_{\mathbb{R}^n} e^{\iota \omega^\top \tau} S(\omega)\, d\omega \qquad (1)$$
where $S(\omega)$ is a non-negative bounded function on $\mathbb{R}^n$ and $\iota$ denotes the imaginary unit.

For a proof see [9]. The above theorem gives an explicit relationship between the spectral density $S$ and the covariance function $k$ of the stochastic process $f$. In this sense, [6] proposed to model the spectral density $S$ as a weighted mixture of $Q$ square-exponential functions, with weights $w_q$, centres $\mu_q$ and diagonal covariance matrices $\Sigma_q$, that is,
$$S(\omega) = \sum_{q=1}^{Q} w_q \frac{1}{(2\pi)^{n/2}|\Sigma_q|^{1/2}} \exp\left(-\frac{1}{2}(\omega - \mu_q)^\top \Sigma_q^{-1} (\omega - \mu_q)\right). \qquad (2)$$
Relying on Theorem 1, the kernel associated with the spectral density $S(\omega)$ in eq. (2) is given by the spectral mixture kernel defined as follows.

¹A function $g(x)$ is said to be integrable if $\int_{\mathbb{R}^n} |g(x)|\,dx < +\infty$.

Definition 2. A Spectral Mixture (SM) kernel is a positive-definite stationary kernel given by
$$k(\tau) = \sum_{q=1}^{Q} w_q \exp\left(-\frac{1}{2}\tau^\top \Sigma_q \tau\right) \cos(\mu_q^\top \tau) \qquad (3)$$
where $\mu_q \in \mathbb{R}^n$, $\Sigma_q = \operatorname{diag}(\sigma_1^{(q)}, \ldots, \sigma_n^{(q)})$ and $w_q, \sigma_q \in \mathbb{R}_+$.
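As a concrete illustration, the SM kernel of eq. (3) for one-dimensional inputs can be evaluated directly. The following is a minimal sketch (our own code, not the authors'; the function name and parameter values are chosen for illustration):

```python
import numpy as np

def sm_kernel(tau, w, mu, sigma2):
    """Spectral Mixture kernel for 1-D inputs, eq. (3):
    k(tau) = sum_q w_q * exp(-0.5 * sigma2_q * tau^2) * cos(mu_q * tau)."""
    tau = np.asarray(tau, dtype=float)
    k = np.zeros_like(tau)
    for wq, mq, sq in zip(w, mu, sigma2):
        k += wq * np.exp(-0.5 * sq * tau**2) * np.cos(mq * tau)
    return k

# Two spectral components: weights, spectral means (frequencies), variances.
w, mu, sigma2 = [1.0, 0.5], [2.0, 6.0], [0.3, 0.8]
taus = np.linspace(-5, 5, 201)
K = sm_kernel(taus, w, mu, sigma2)
# At tau = 0 the kernel equals the total spectral mass, the sum of the weights.
print(K[100])  # tau = 0 -> 1.5
```

Note that, as a sum of even functions, the resulting kernel is symmetric in $\tau$, consistent with stationarity.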
Due to the universal function approximation property of mixtures of Gaussians (considered here in the frequency domain) and the relationship given by Theorem 1, the SM kernel is able to approximate continuous stationary kernels to an arbitrary precision given enough spectral components, as shown in [10, 11]. This concept points in the direction of sidestepping the kernel-selection problem in GPs, and it will be extended to cater for multivariate GPs in Section 3.

2.2 Multi-Output Gaussian Processes

A multivariate extension of GPs can be constructed by considering an ensemble of scalar-valued stochastic processes where any finite collection of values across all such processes are jointly Gaussian. We formalise this definition as follows.

Definition 3. An $m$-channel multi-output Gaussian process $f(x) := (f_1(x), \ldots, f_m(x))$, $x \in X$, is an $m$-tuple of stochastic processes $f_p : X \to \mathbb{R}$, $\forall p = 1, \ldots, m$, such that for any (finite) subset of inputs $\{x_i\}_{i=1}^N \subset X$, the random variables $\{f_{c(i)}(x_i)\}_{i=1}^N$ are jointly Gaussian for any choice of indices $c(i) \in \{1, \ldots, m\}$.

Recall that the construction of scalar-valued GPs requires choosing a scalar-valued mean function and a scalar-valued covariance function. Conversely, an $m$-channel MOGP is defined by an $\mathbb{R}^m$-valued mean function, whose $i$th element denotes the mean function of the $i$th channel, and an $\mathbb{R}^{m \times m}$-valued covariance function, whose $(i, j)$th element denotes the covariance between the $i$th and $j$th channels. The symmetry and positive-definiteness conditions of the MOGP kernel are defined as follows.

Definition 4. A two-input matrix-valued function $K(x, x') : X \times X \to \mathbb{R}^{m \times m}$ defined element-wise by $[K(x, x')]_{ij} = k_{ij}(x, x')$ is a multivariate kernel (covariance function) if it is: (i) symmetric, i.e., $K(x, x') = K(x', x)^\top$, $\forall x, x' \in X$, and (ii) positive definite, i.e., $\forall N \in \mathbb{N}$, $c \in \mathbb{R}^{N \times m}$, $x \in X^N$ such that $[c]_{pi} = c_{pi}$, $[x]_p = x_p$, we have
$$\sum_{i,j=1}^{m} \sum_{p,q=1}^{N} c_{pi}\, c_{qj}\, k_{ij}(x_p, x_q) \geq 0. \qquad (4)$$
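To see the positive-definiteness condition of Definition 4 in action, the following sketch (our own; it uses an LMC-style separable kernel $k_{ij}(x, x') = B_{ij}\, e^{-(x-x')^2/2}$ with a positive-semidefinite coregionalisation matrix $B$, chosen purely for illustration rather than the kernel proposed in this paper) builds the full $Nm \times Nm$ Gram matrix over a grid and verifies that its eigenvalues are non-negative:

```python
import numpy as np

# Illustrative multivariate kernel (LMC-style, not the paper's MOSM):
# k_ij(x, x') = B[i, j] * exp(-0.5 * (x - x')**2), with B positive semidefinite.
A = np.array([[1.0, 0.5], [0.2, 0.8]])
B = A @ A.T                     # coregionalisation matrix, PSD by construction

x = np.linspace(0.0, 4.0, 5)    # N = 5 input locations
se = np.exp(-0.5 * (x[:, None] - x[None, :])**2)

# Full (N*m) x (N*m) Gram matrix with blocks B[i, j] * SE. Eq. (4) states that
# any quadratic form in this matrix is non-negative, i.e. it is PSD.
K = np.kron(B, se)
eigs = np.linalg.eigvalsh(K)
print(eigs.min() >= -1e-10)  # -> True
```

The Kronecker structure here is a property of the separable LMC construction; the MOSM kernel developed below does not require it.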
Furthermore, we say that a multivariate kernel $K(x, x')$ is stationary if $K(x, x') = K(x - x')$, or equivalently $k_{ij}(x, x') = k_{ij}(x - x')$ $\forall i, j \in \{1, \ldots, m\}$; in this case, we denote $\tau = x - x'$. The design of the MOGP covariance kernel involves jointly choosing functions that model the covariance of each channel (diagonal elements of $K$) and functions that model the cross-covariance between different channels at different input locations (off-diagonal elements of $K$). Choosing these $m(m+1)/2$ covariance functions is challenging when we want to be as expressive as possible and include, for instance, delays, phase shifts, negative correlations, or to enforce specific spectral content, while at the same time maintaining positive definiteness of $K$. The reader is referred to [12, 13] for a comprehensive review of MOGP models.

3 Designing Multi-Output Gaussian Processes in the Fourier Domain

We extend the spectral-mixture approach [6] to multi-output Gaussian processes relying on the multivariate version of Theorem 1, first proved by Cramér and thus referred to as Cramér's Theorem [8, 9]:

Theorem 2 (Cramér's Theorem). A family $\{k_{ij}(\tau)\}_{i,j=1}^m$ of integrable functions are the covariance functions of a weakly-stationary multivariate stochastic process if and only if they (i) admit the representation
$$k_{ij}(\tau) = \int_{\mathbb{R}^n} e^{\iota \omega^\top \tau} S_{ij}(\omega)\, d\omega \qquad \forall i, j \in \{1, \ldots, m\} \qquad (5)$$
where each $S_{ij}$ is an integrable complex-valued function $S_{ij} : \mathbb{R}^n \to \mathbb{C}$ known as the spectral density associated to the covariance function $k_{ij}(\tau)$, and (ii) fulfil the positive-definiteness condition
$$\sum_{i,j=1}^{m} \overline{z_i}\, z_j\, S_{ij}(\omega) \geq 0 \qquad \forall \{z_1, \ldots, z_m\} \subset \mathbb{C},\ \omega \in \mathbb{R}^n \qquad (6)$$
where $\overline{z}$ denotes the complex conjugate of $z \in \mathbb{C}$.

Note that eq. (5) states that each covariance function $k_{ij}$ is the inverse Fourier transform of a spectral density $S_{ij}$; therefore, we will say that these functions are Fourier pairs.
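To make the Fourier-pair relation concrete, a short numerical check (our own sketch, for $n = 1$ and a single SE spectral component) integrates a symmetrised Gaussian spectral density against $e^{\iota\omega\tau}$ and compares the result with the closed-form SM component $e^{-\frac{1}{2}\sigma^2\tau^2}\cos(\mu\tau)$:

```python
import numpy as np

# Symmetrised Gaussian spectral density (n = 1):
# S(w) = 0.5 * [N(w; mu, s2) + N(w; -mu, s2)], whose Fourier pair per eq. (1)
# is the SM component k(tau) = exp(-0.5 * s2 * tau^2) * cos(mu * tau).
mu, s2 = 3.0, 0.5
w = np.linspace(-30.0, 30.0, 20001)
dw = w[1] - w[0]
S = 0.5 * (np.exp(-0.5 * (w - mu)**2 / s2)
           + np.exp(-0.5 * (w + mu)**2 / s2)) / np.sqrt(2 * np.pi * s2)

taus = np.linspace(-3.0, 3.0, 61)
# Quadrature approximation of k(tau) = \int e^{i w tau} S(w) dw.
k_numeric = np.array([(np.exp(1j * w * t) * S).real.sum() * dw for t in taus])
k_closed = np.exp(-0.5 * s2 * taus**2) * np.cos(mu * taus)
print(np.max(np.abs(k_numeric - k_closed)) < 1e-6)  # -> True
```

The same quadrature applies element-wise to the matrix-valued case of eq. (5), with each $S_{ij}$ complex-valued.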
Accordingly, we refer to the set of arguments of the covariance functions, $\tau \in \mathbb{R}^n$, as the time or space domain depending on the application considered, and to the set of arguments of the spectral densities, $\omega \in \mathbb{R}^n$, as the Fourier or spectral domain. Furthermore, a direct consequence of the above theorem is that for any element $\omega$ in the Fourier domain, the matrix defined by $S(\omega) = [S_{ij}(\omega)]_{i,j=1}^m \in \mathbb{C}^{m \times m}$ is Hermitian, i.e., $S_{ij}(\omega) = \overline{S_{ji}(\omega)}$ $\forall i, j, \omega$.

Theorem 2 gives the guidelines to construct covariance functions for MOGPs by designing their corresponding spectral densities instead, i.e., the design is performed in the Fourier rather than the space domain. The simplicity of design in the Fourier domain stems from the positive-definiteness condition of the spectral densities in eq. (6), which is much easier to achieve than that of the covariance functions in eq. (4). This can be understood through an analogy with the univariate model: in the single-output case the positive-definiteness condition of the kernel only requires positivity of the spectral density, whereas in the multi-output case the positive-definiteness condition of the multivariate kernel only requires that the matrix $S(\omega)$ is positive definite for all $\omega \in \mathbb{R}^n$, with no further constraints on each individual function $S_{ij} : \omega \mapsto S_{ij}(\omega)$.

3.1 The Multi-Output Spectral Mixture kernel

We now propose a family of Hermitian positive-definite complex-valued functions $\{S_{ij}(\cdot)\}_{i,j=1}^m$, thus fulfilling the requirements of Theorem 2, eq. (6), to use them as cross-spectral densities within a MOGP. This family of functions is designed with the aim of providing physical parametric interpretation and closed-form covariance functions after applying the inverse Fourier transform.
Recall that complex-valued positive-definite matrices can be decomposed in the form $S(\omega) = R^H(\omega)R(\omega)$, meaning that the $(i, j)$th entry of $S(\omega)$ can be expressed as $S_{ij}(\omega) = R_{:i}^H(\omega)R_{:j}(\omega)$, where $R(\omega) \in \mathbb{C}^{Q \times m}$, $R_{:i}(\omega)$ is the $i$th column of $R(\omega)$, and $(\cdot)^H$ denotes the Hermitian (transpose and conjugate) operator. Note that this factor decomposition fulfils eq. (6) for any choice of $R(\omega) \in \mathbb{C}^{Q \times m}$:
$$\sum_{i,j=1}^{m} \overline{z_i}\, R_{:i}^H(\omega)R_{:j}(\omega)\, z_j = \left\| \sum_{i=1}^{m} z_i R_{:i}(\omega) \right\|^2 = \|R(\omega)z\|^2 \geq 0 \qquad \forall z = [z_1, \ldots, z_m]^\top \in \mathbb{C}^m,\ \omega \in \mathbb{R}^n. \qquad (7)$$
We refer to $Q$ as the rank of the decomposition, since by choosing $Q < m$ the rank of $S(\omega) = R^H(\omega)R(\omega)$ can be at most $Q$. For ease of notation we choose² $Q = 1$, so that the columns of $R(\omega)$ are complex-valued functions $\{R_i\}_{i=1}^m$ and $S(\omega)$ is modelled as a rank-one matrix according to $S_{ij}(\omega) = \overline{R_i(\omega)}\,R_j(\omega)$. Since Fourier transforms and multiplications of square-exponential (SE) functions are also SE, we model $R_i(\omega)$ as a complex-valued SE function so as to ensure a closed-form expression for the corresponding covariance kernel, that is,
$$R_i(\omega) = w_i \exp\left(-\frac{1}{4}(\omega - \mu_i)^\top \Sigma_i^{-1} (\omega - \mu_i)\right) \exp\left(-\iota(\theta_i^\top \omega + \phi_i)\right), \qquad i = 1, \ldots, m \qquad (8)$$
where $w_i, \phi_i \in \mathbb{R}$, $\mu_i, \theta_i \in \mathbb{R}^n$ and $\Sigma_i = \operatorname{diag}([\sigma_{i1}^2, \ldots, \sigma_{in}^2]) \in \mathbb{R}^{n \times n}$. With this choice of the functions $\{R_i\}_{i=1}^m$, the spectral densities $\{S_{ij}\}_{i,j=1}^m$ are given by
$$S_{ij}(\omega) = w_{ij} \exp\left(-\frac{1}{2}(\omega - \mu_{ij})^\top \Sigma_{ij}^{-1} (\omega - \mu_{ij}) + \iota\left(\theta_{ij}^\top \omega + \phi_{ij}\right)\right), \qquad i, j = 1, \ldots, m \qquad (9)$$
meaning that the cross-spectral density between channels $i$ and $j$ is modelled as a complex-valued SE function with the following parameters:

• covariance: $\Sigma_{ij} = 2\Sigma_i(\Sigma_i + \Sigma_j)^{-1}\Sigma_j$
• mean: $\mu_{ij} = (\Sigma_i + \Sigma_j)^{-1}(\Sigma_i\mu_j + \Sigma_j\mu_i)$
• magnitude: $w_{ij} = w_i w_j \exp\left(-\frac{1}{4}(\mu_i - \mu_j)^\top(\Sigma_i + \Sigma_j)^{-1}(\mu_i - \mu_j)\right)$
• delay: $\theta_{ij} = \theta_i - \theta_j$
• phase: $\phi_{ij} = \phi_i - \phi_j$

where the so-constructed magnitudes $w_{ij}$ ensure positive definiteness and, in particular, the auto-spectral densities $S_{ii}$ are real-valued SE functions (since $\theta_{ii} = \phi_{ii} = 0$), as in the standard (scalar-valued) spectral mixture approach [6].

²The extension to arbitrary $Q$ will be presented at the end of this section.
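A quick numerical sanity check (our own sketch, for $n = 1$, $Q = 1$, and two channels with hypothetical parameter values) builds $R_i(\omega)$ from eq. (8), forms $S_{ij}(\omega) = \overline{R_i(\omega)}R_j(\omega)$, and confirms that the resulting $S(\omega)$ is Hermitian and positive semidefinite at every frequency, with the magnitude $w_{ij}$ matching the closed form above:

```python
import numpy as np

def R(w, wi, mui, s2i, thi, phii):
    """Complex square-exponential factor of eq. (8) for scalar inputs (n = 1)."""
    return wi * np.exp(-0.25 * (w - mui)**2 / s2i) * np.exp(-1j * (thi * w + phii))

# Hypothetical parameters for two channels (our own choice, for illustration).
p1 = dict(wi=1.0, mui=2.0, s2i=0.5, thi=0.3, phii=0.1)
p2 = dict(wi=0.7, mui=3.0, s2i=0.8, thi=-0.2, phii=0.4)

w = np.linspace(-10.0, 10.0, 101)
S11 = np.abs(R(w, **p1))**2             # auto-spectra are real and non-negative
S22 = np.abs(R(w, **p2))**2
S12 = np.conj(R(w, **p1)) * R(w, **p2)  # cross-spectrum, complex-valued
S21 = np.conj(R(w, **p2)) * R(w, **p1)

# Hermitian at every frequency: S12(w) is the conjugate of S21(w).
assert np.allclose(S12, np.conj(S21))

# Positive semidefinite at every frequency: eigenvalues of each 2x2 S(w) >= 0.
for k in range(len(w)):
    Sw = np.array([[S11[k], S12[k]], [S21[k], S22[k]]])
    assert np.linalg.eigvalsh(Sw).min() >= -1e-12

# Closed-form magnitude from the bullet list: |S12| at its peak mu12 equals w12.
s1, sB, m1, m2 = p1['s2i'], p2['s2i'], p1['mui'], p2['mui']
mu12 = (s1 * m2 + sB * m1) / (s1 + sB)
w12 = p1['wi'] * p2['wi'] * np.exp(-0.25 * (m1 - m2)**2 / (s1 + sB))
assert np.isclose(abs(np.conj(R(mu12, **p1)) * R(mu12, **p2)), w12)
```

The rank-one structure guarantees positive semidefiniteness by construction, which is exactly the argument of eq. (7).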
The power spectral density in eq. (9) corresponds to a complex-valued kernel and therefore to a complex-valued GP [14, 15]. In order to restrict this generative model to real-valued GPs only, the proposed power spectral density has to be symmetric with respect to $\omega$ [16]; we then make $S_{ij}(\omega)$ symmetric simply by reassigning $S_{ij}(\omega) \mapsto \frac{1}{2}(S_{ij}(\omega) + S_{ij}(-\omega))$, which is equivalent to choosing $R_i(\omega)$ to be a vector of two mirrored complex SE functions. The resulting (symmetric with respect to $\omega$) cross-spectral density between the $i$th and $j$th channels, $S_{ij}(\omega)$, and its corresponding real-valued kernel $k_{ij}(\tau) = \mathcal{F}^{-1}\{S_{ij}(\omega)\}(\tau)$ are the following Fourier pairs:
$$S_{ij}(\omega) = \frac{w_{ij}}{2}\left( e^{-\frac{1}{2}(\omega-\mu_{ij})^\top\Sigma_{ij}^{-1}(\omega-\mu_{ij}) + \iota(\theta_{ij}^\top\omega+\phi_{ij})} + e^{-\frac{1}{2}(\omega+\mu_{ij})^\top\Sigma_{ij}^{-1}(\omega+\mu_{ij}) + \iota(-\theta_{ij}^\top\omega+\phi_{ij})}\right)$$
$$k_{ij}(\tau) = \alpha_{ij} \exp\left(-\frac{1}{2}(\tau + \theta_{ij})^\top\Sigma_{ij}(\tau + \theta_{ij})\right) \cos\left((\tau + \theta_{ij})^\top\mu_{ij} + \phi_{ij}\right) \qquad (10)$$
where the magnitude parameter $\alpha_{ij} = w_{ij}(2\pi)^{n/2}|\Sigma_{ij}|^{1/2}$ absorbs the constant resulting from the inverse Fourier transform. We can again confirm that the auto-covariances ($i = j$) are real-valued and contain square-exponential and cosine factors, as in the scalar SM approach, since $\alpha_{ii} \geq 0$ and $\theta_{ii} = \phi_{ii} = 0$. Conversely, the proposed model for the cross-covariance between different channels ($i \neq j$) allows for (i) both negatively- and positively-correlated signals ($\alpha_{ij} \in \mathbb{R}$), (ii) delayed channels through the delay parameter $\theta_{ij} \neq 0$, and (iii) out-of-phase channels, where the covariance is not symmetric with respect to the delay, for $\phi_{ij} \neq 0$. Fig. 1 shows cross-spectral densities and their corresponding kernels for different choices of the delay and phase parameters.

Figure 1: Power spectral densities and kernels generated by the proposed model in eq. (10) for different parameters. Bottom: cross-spectral densities, real part in blue and imaginary part in green. Top: cross-covariance functions in blue with a reference SE envelope in dashed line. From left to right: zero delay and zero phase; zero delay and non-zero phase; non-zero delay and zero phase; and non-zero delay and non-zero phase.
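As an illustration of the cross-kernel of eq. (10) (our own sketch for $n = 1$, with parameter values chosen arbitrarily), the following shows how a nonzero $\theta_{ij}$ shifts the peak of the cross-covariance away from $\tau = 0$, and a nonzero $\phi_{ij}$ breaks its symmetry:

```python
import numpy as np

def k_cross(tau, alpha, Sigma, mu, theta, phi):
    """Cross-covariance of eq. (10) for scalar inputs:
    k(tau) = alpha * exp(-0.5 * Sigma * (tau+theta)^2) * cos(mu*(tau+theta) + phi)."""
    t = tau + theta
    return alpha * np.exp(-0.5 * Sigma * t**2) * np.cos(mu * t + phi)

tau = np.linspace(-5, 5, 1001)

# Pure delay: with mu = phi = 0 the SE envelope peaks where tau + theta = 0.
k_delay = k_cross(tau, alpha=1.0, Sigma=0.5, mu=0.0, theta=1.5, phi=0.0)
print(tau[np.argmax(k_delay)])  # peak at tau = -theta = -1.5

# Pure phase: a nonzero phi makes the kernel asymmetric, k(tau) != k(-tau).
k_phase = k_cross(tau, alpha=1.0, Sigma=0.5, mu=3.0, theta=0.0, phi=1.0)
print(np.allclose(k_phase, k_phase[::-1]))  # -> False
```

These two effects correspond to the middle panels of Fig. 1, and both vanish on the diagonal ($i = j$), where $\theta_{ii} = \phi_{ii} = 0$.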
Top: cross-covariance functions in blue with reference SE envelope in dashed line. From left to right: zero delay and zero phase; zero delay and non-zero phase; non-zero delay and zero phase; and non-zero delay and non-zero phase.

The kernel in eq. (10) resulted from a rank-one choice for the PSD matrix $S(\omega)$; increasing the rank of the proposed model for $S(\omega)$ is therefore equivalent to considering several kernel components. Choosing an arbitrary number $Q$ of such components yields the expression for the proposed multivariate kernel:

Definition 5. The Multi-Output Spectral Mixture kernel (MOSM) has the form

$$k_{ij}(\tau) = \sum_{q=1}^{Q} \alpha_{ij}^{(q)} \exp\left(-\tfrac{1}{2}\big(\tau+\theta_{ij}^{(q)}\big)^\top \Sigma_{ij}^{(q)} \big(\tau+\theta_{ij}^{(q)}\big)\right) \cos\left(\big(\tau+\theta_{ij}^{(q)}\big)^\top \mu_{ij}^{(q)} + \phi_{ij}^{(q)}\right) \qquad (11)$$

where $\alpha_{ij}^{(q)} = w_{ij}^{(q)}(2\pi)^{n/2}\big|\Sigma_{ij}^{(q)}\big|^{1/2}$ and the superindex $(\cdot)^{(q)}$ denotes the parameter of the $q$th component of the spectral mixture.

This multivariate covariance function has positive-definite spectral-mixture kernels as autocovariances, while the cross-covariances are spectral-mixture functions with different parameters for different output pairs, which can be (i) non-positive-definite, (ii) non-symmetric, and (iii) delayed with respect to one another. Therefore, the MOSM kernel is a multi-output generalisation of the spectral mixture approach [6], where positive definiteness is guaranteed by the factor decomposition of $S(\omega)$ as shown in eq. (7).

3.2 Training the model and computing the predictive posterior

Fitting the model to observed data follows the same rationale as for standard GPs, that is, maximising the log-probability of the data. Recall that the observations in the multi-output case consist of (i) a location $x \in X$, (ii) a channel identifier $i \in \{1,\ldots,m\}$, and (iii) an observed value $y \in \mathbb{R}$; we therefore denote $N$ observations as the set of 3-tuples $D = \{(x_c, i_c, y_c)\}_{c=1}^N$. As all observations are jointly Gaussian, we concatenate them into the three vectors $x = [x_1,\ldots,x_N]^\top \in X^N$, $i = [i_1,\ldots,i_N]^\top \in \{1,\ldots,m\}^N$, and $y = [y_1,\ldots,y_N]^\top \in \mathbb{R}^N$, and express the negative log-likelihood (NLL) as

$$-\log p(y \mid x, \Theta) = \frac{N}{2}\log 2\pi + \frac{1}{2}\log|K_{xi}| + \frac{1}{2}\, y^\top K_{xi}^{-1} y \qquad (12)$$

where $\Theta$ denotes all hyperparameters and $K_{xi}$ is the covariance matrix of all observed samples; that is, the $(r,s)$th element $[K_{xi}]_{rs}$ is the covariance between the process at (location $x_r$, channel $i_r$) and the process at (location $x_s$, channel $i_s$). Recall that, under the proposed MOSM model, this covariance is given by eq. (11), that is, $k_{i_r i_s}(x_r - x_s) + \sigma^2_{i_r,\text{noise}}\,\delta_{i_r i_s}$, where $\sigma^2_{i_r,\text{noise}}$ is a diagonal term that caters for uncorrelated observation noise. The NLL is then minimised with respect to $\Theta = \{w_i^{(q)}, \mu_i^{(q)}, \Sigma_i^{(q)}, \theta_i^{(q)}, \phi_i^{(q)}, \sigma^2_{i,\text{noise}}\}_{i=1,q=1}^{m,Q}$, that is, the original parameters chosen to construct $R(\omega)$ in Section 3.1 plus the noise hyperparameters. Once the hyperparameters are optimised, computing the predictive posterior of the proposed MOSM follows the standard GP procedure, with the joint covariances given by eq. (11).

3.3 Related work

Generalising the scalar spectral mixture kernel to MOGPs can be achieved within the LMC framework, as pointed out in [5] (denoted SM-LMC). As this formulation only considers real-valued cross-spectral densities, the authors of [5] propose a multivariate covariance function that includes a complex component in the cross-spectral densities to cater for phase differences across channels, which they call the Cross-Spectral Mixture kernel (denoted CSM). This multivariate covariance function can be seen as the proposed MOSM model with $\mu_i = \mu_j$, $\Sigma_i = \Sigma_j$, $\theta_i = \theta_j$ for all $i,j \in \{1,\ldots,m\}$ and $\phi_i = \mu_i^\top \psi_i$ for $\psi_i \in \mathbb{R}^n$. As a consequence, the SM-LMC is a particular case of the proposed MOSM model in which the parameters $\mu_i, \Sigma_i, \theta_i$ are restricted to be the same for all channels, and therefore no phase shifts and no delays are allowed, unlike the MOSM example in Fig. 1.
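To make the quantities above concrete, here is a minimal NumPy sketch (the function names and parameter packaging are ours, not from the paper) that evaluates the MOSM kernel of eq. (11) for one output pair and the Gaussian NLL of eq. (12):

```python
import numpy as np

def mosm_kernel(tau, params):
    """k_ij(tau) of eq. (11) for one output pair (i, j).

    `params` is a list of per-component dicts with keys
    w, mu, Sigma, theta, phi (the pair parameters of eq. (9));
    alpha is recovered as w * (2*pi)^(n/2) * |Sigma|^(1/2)."""
    n = len(tau)
    k = 0.0
    for p in params:
        alpha = p['w'] * (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(p['Sigma']))
        s = tau + p['theta']
        k += alpha * np.exp(-0.5 * s @ p['Sigma'] @ s) * np.cos(s @ p['mu'] + p['phi'])
    return k

def gp_nll(y, K):
    """Negative log-likelihood of eq. (12), with K = K_xi including the
    noise term; computed via a Cholesky factorisation for stability."""
    N = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (N * np.log(2 * np.pi) + logdet + y @ alpha)
```

An autocovariance entry ($\theta = \phi = 0$) evaluated this way is an even function of $\tau$, as expected from the scalar SM kernel.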
Additionally, Cramér's theorem has been used in a similar fashion in [17], but only with real-valued Student's t cross-spectral densities, yielding cross-covariances that are either positive-definite or negative-definite.

4 Experiments

We present two sets of experiments. First, we validated the ability of the proposed MOSM model to identify known auto- and cross-covariances from synthetic data. Second, we compared MOSM against the spectral-mixture linear model of coregionalization (SM-LMC, [3, 6, 5]), the Gaussian convolution model (CONV, [4]), and the cross-spectral mixture model (CSM, [5]) in the estimation of missing real-world data in two different settings: climate signals and metal concentrations. All models were implemented in TensorFlow [18] using GPflow [19] in order to make use of automatic differentiation to compute the gradients of the NLL. The performance of all models was measured by the mean absolute error,

$$\text{MAE} = \frac{1}{N}\sum_{i=1}^{N} |y_i - \hat{y}_i| \qquad (13)$$

where $y_i$ denotes the true value and $\hat{y}_i$ the MOGP estimate.

4.1 Synthetic example: Learning derivatives and delayed signals

All models were used to recover the auto- and cross-covariances of a three-output GP with the following components: (i) a reference signal sampled from a GP $f(x) \sim \mathcal{GP}(0, K_{\text{SM}})$ with spectral mixture covariance kernel $K_{\text{SM}}$ and zero mean, (ii) its derivative $f'(x)$, and (iii) a delayed version $f_\delta(x) = f(x-\delta)$. The motivation for this illustrative example is that the covariances and cross-covariances of these processes are known explicitly (see [1, Sec. 9.4]), so we can compare our estimates to the true model.
The derivative was computed numerically (first order, through finite differences) and the training samples were generated as follows: we chose $N_1 = 500$ samples from the reference function in the interval $[-20, 20]$, $N_2 = 400$ samples from the derivative signal in the interval $[-20, 0]$, and $N_3 = 400$ samples from the delayed signal in the interval $[-20, 0]$. All sample locations were chosen uniformly at random in the intervals mentioned, and Gaussian noise was added to yield realistic observations. The experiment then consisted in the reconstruction of the reference signal in the interval $[-20, 20]$, and the imputation of the derivative and delayed signals over the interval $[0, 20]$. Fig. 2 shows the ground truth and the MOSM estimates for all three synthetic signals and the (normalised) covariances, and Table 1 reports the MAE for all models over ten realisations of the experiment. Notice that the proposed model successfully learnt all cross-covariances, $\operatorname{cov}(f(x), f'(x))$ and $\operatorname{cov}(f(x), f(x-\delta))$, as well as the autocovariances, without any prior information about the derivative or delay relationship between the channels. Furthermore, MOSM was the only model that successfully extrapolated the derivative signal and the delayed signal simultaneously; this is due to the fact that the cross-covariances needed in this setting are not linear combinations of univariate kernels, hence models based on latent processes fail in this synthetic example.
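The data-generation procedure just described can be sketched as follows; the SM-kernel hyperparameters, the noise level and the delay below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sm_kernel(t1, t2, w=1.0, ell=2.0, mu=0.25):
    """Single-component spectral mixture kernel [6] (illustrative hyperparameters)."""
    tau = t1[:, None] - t2[None, :]
    return w * np.exp(-0.5 * tau**2 / ell**2) * np.cos(2 * np.pi * mu * tau)

# Sample the reference signal on a dense grid, then derive the other channels.
t = np.linspace(-20, 20, 1001)
K = sm_kernel(t, t) + 1e-6 * np.eye(len(t))   # jitter for numerical stability
f = np.linalg.cholesky(K) @ rng.standard_normal(len(t))
f_prime = np.gradient(f, t)                   # first-order finite differences
delta = 1.0                                   # assumed delay
f_delayed = np.interp(t - delta, t, f)        # f(x - delta)

# Noisy training sets with uniformly random locations, as in the experiment.
noise = 0.1
idx_ref = rng.choice(len(t), size=500, replace=False)      # reference, [-20, 20]
idx_neg = np.where(t <= 0)[0]
idx_der = rng.choice(idx_neg, size=400, replace=False)     # derivative, [-20, 0]
idx_del = rng.choice(idx_neg, size=400, replace=False)     # delayed, [-20, 0]
y_ref = f[idx_ref] + noise * rng.standard_normal(500)
y_der = f_prime[idx_der] + noise * rng.standard_normal(400)
y_del = f_delayed[idx_del] + noise * rng.standard_normal(400)
```

The three training sets `(t[idx], y)` would then be fed to the MOGP models with channel identifiers 1, 2 and 3, respectively.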
Figure 2: MOSM learning of the covariance functions of a synthetic reference signal, its derivative and a delayed version. Left: synthetic signals; middle: autocovariances; right: cross-covariances. The dashed line is the ground truth, the solid colour lines are the MOSM estimates, and the shaded area is the 95% confidence interval. The training points are shown in green.

Table 1: Reconstruction of a synthetic signal, its derivative and delayed version: mean absolute error for all four models, with one-standard-deviation error bars over ten realisations.

Model | Reference | Derivative | Delayed
CONV | 0.211 ± 0.085 | 0.759 ± 0.075 | 0.524 ± 0.097
SM-LMC | 0.166 ± 0.009 | 0.747 ± 0.101 | 0.398 ± 0.042
CSM | 0.148 ± 0.010 | 0.262 ± 0.032 | 0.368 ± 0.089
MOSM | 0.127 ± 0.011 | 0.223 ± 0.015 | 0.146 ± 0.017

4.2 Climate data

The first real-world dataset contained measurements³ from a sensor network of four climate stations in the south of England: Cambermet, Chimet, Sotonmet and Bramblemet. We considered the normalised air-temperature signal from 12 March 2017 to 16 March 2017, in 5-minute intervals (5692 samples), from which we randomly chose N = 1000 samples for training. Following [4], we simulated a sensor failure by removing the second half of the measurements of one sensor while leaving the remaining three sensors operating correctly; we reproduced the same setup for each of the four sensors, thus producing four experiments.
All models considered had five latent signals/spectral components. Fig. 3 shows, for all four models, the estimates of the missing data in the Cambermet-failure case, and Table 2 shows the mean absolute error over the missing-data region for all models and failure cases. Observe that all models were able to capture the behaviour of the signal in the missing range; this is because the considered climate signals are very similar to one another, and it shows that MOSM can also collapse to models that share parameters across pairs of outputs when required.

Figure 3: Imputation of the Cambermet sensor measurements using the remaining sensors. The red points denote the observations, the dashed black line the true signal, and the solid colour lines the predictive means (with 95% confidence intervals). From left to right: MOSM, CONV, SM-LMC and CSM.

Table 2: Imputation of the climate sensor measurements using the remaining sensors: mean absolute error for all four experiments, with one-standard-deviation error bars over ten realisations.

Model | Cambermet | Chimet | Sotonmet | Bramblemet
CONV | 0.098 ± 0.008 | 0.192 ± 0.015 | 0.211 ± 0.038 | 0.163 ± 0.009
SM-LMC | 0.084 ± 0.004 | 0.176 ± 0.003 | 0.273 ± 0.001 | 0.134 ± 0.002
CSM | 0.094 ± 0.003 | 0.129 ± 0.004 | 0.195 ± 0.011 | 0.130 ± 0.004
MOSM | 0.097 ± 0.006 | 0.137 ± 0.007 | 0.162 ± 0.011 | 0.129 ± 0.003

These results do not show a significant difference between the proposed model and the models based on latent processes. To test for statistical significance, we used the Kolmogorov–Smirnov test [20, Ch. 7] with significance level α = 0.05, concluding that for the Sotonmet sensor the MOSM model yields the best results. Conversely, for the Cambermet, Chimet and Bramblemet sensors, MOSM and CSM performed similarly, and we cannot confirm that their difference is statistically significant.
However, given the high correlation of these signals and the similarity between the MOSM model and the CSM model, the close performance of these two models on this dataset is to be expected.

³The data can be obtained from www.cambermet.co.uk and the sites therein.

4.3 Heavy metal concentration

The Jura dataset [3] contains, in addition to other geological data, the concentration of seven heavy metals in a region of 14.5 km² of the Swiss Jura; it is divided into a training set (259 locations) and a validation set (100 locations). We followed [3, 4], where the motivation was to aid the prediction of a variable that is expensive to measure by using abundant measurements of correlated variables that are cheaper to acquire. Specifically, we estimated Cadmium and Copper at the validation locations using measurements of related variables at the training and test locations: Nickel and Zinc for Cadmium; and Lead, Nickel and Zinc for Copper. The MAE, see eq. (13), is shown in Table 3, where the results for the CONV model were obtained from [4]; all models considered five latent signals/spectral components, except for the independent Gaussian process (denoted IGP). Observe that the proposed MOSM model outperforms all other models on the Cadmium data, a result that is statistically significant at level α = 0.05. Conversely, we cannot guarantee a statistically significant difference between the CSM model and MOSM in the Copper case. In both cases, testing for statistical significance against the CONV model was not possible, since those results were obtained from [4]. On the other hand, the higher variability and non-Gaussianity of the Copper data may be the reason why the simplest MOGP model (SM-LMC) achieves the best results there.

Table 3: Mean absolute error for the estimation of Cadmium and Copper concentrations, with one-standard-deviation error bars over ten repetitions of the experiment.
Model | Cadmium | Copper
IGP | 0.56 ± 0.005 | 16.5 ± 0.1
CONV | 0.443 ± 0.006 | 7.45 ± 0.2
SM-LMC | 0.46 ± 0.01 | 7.0 ± 0.1
CSM | 0.47 ± 0.02 | 7.4 ± 0.3
MOSM | 0.43 ± 0.01 | 7.3 ± 0.1

5 Discussion

We have proposed the Multi-Output Spectral Mixture (MOSM) kernel to model rich relationships across multiple outputs within Gaussian process regression models. This was achieved by constructing a positive-definite matrix of complex-valued spectral densities and then transforming it via the inverse Fourier transform, according to Cramér's theorem. The resulting kernel has a clear interpretation from a spectral viewpoint, where each of its parameters can be identified with the frequency, magnitude, phase and delay of a pair of channels. Furthermore, a key feature unique to the proposed kernel is its ability to jointly model delays and phase differences; this is possible due to the complex-valued model adopted for the cross-spectral density, and was validated experimentally on a synthetic example (see Fig. 2). The MOSM kernel was also compared against existing MOGP models on two real-world datasets, where the proposed model performed competitively in terms of the mean absolute error. Further research should point towards a sparse implementation of the proposed MOGP, which can build on [4, 21] to design inducing variables that exploit the spectral content of the processes, as in [22, 23].

Acknowledgements

We thank Cristóbal Silva (Universidad de Chile) for useful recommendations about the GPU implementation, Rasmus Bonnevie from the GPflow team for his assistance with the experimental MOGP module within GPflow, and the anonymous reviewers. This work was financially supported by Conicyt Basal-CMM.

References

[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2006.
[2] D. Duvenaud, "Automatic model construction with Gaussian processes," Ph.D. dissertation, University of Cambridge, 2014.
[3] P.
Goovaerts, Geostatistics for Natural Resources Evaluation. Oxford University Press on Demand, 1997.
[4] M. A. Álvarez and N. D. Lawrence, "Sparse convolved Gaussian processes for multi-output regression," in Advances in Neural Information Processing Systems 21, 2008, pp. 57–64.
[5] K. R. Ulrich, D. E. Carlson, K. Dzirasa, and L. Carin, "GP kernels for cross-spectrum analysis," in Advances in Neural Information Processing Systems 28, 2015, pp. 1999–2007.
[6] A. G. Wilson and R. P. Adams, "Gaussian process kernels for pattern discovery and extrapolation," in Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 1067–1075.
[7] S. Bochner, M. Tenenbaum, and H. Pollard, Lectures on Fourier Integrals, ser. Annals of Mathematics Studies. Princeton University Press, 1959.
[8] H. Cramér, "On the theory of stationary random processes," Annals of Mathematics, pp. 215–230, 1940.
[9] A. Yaglom, Correlation Theory of Stationary and Related Random Functions, vol. 1. Springer, 1987.
[10] F. Tobar, T. D. Bui, and R. E. Turner, "Learning stationary time series using Gaussian processes with nonparametric kernels," in Advances in Neural Information Processing Systems 28. Curran Associates, Inc., 2015, pp. 3501–3509.
[11] F. Tobar and R. E. Turner, "Modelling time series via automatic learning of basis functions," in Proc. of IEEE SAM, 2016, pp. 2209–2213.
[12] M. A. Álvarez, L. Rosasco, and N. D. Lawrence, "Kernels for vector-valued functions: A review," Found. Trends Mach. Learn., vol. 4, no. 3, pp. 195–266, Mar. 2012.
[13] M. G. Genton and W. Kleiber, "Cross-covariance functions for multivariate geostatistics," Statistical Science, Institute of Mathematical Statistics, vol. 30, no. 2, 2015.
[14] F. Tobar and R. E. Turner, "Modelling of complex signals using Gaussian processes," in Proc. of IEEE ICASSP, 2015, pp. 2209–2213.
[15] R. Boloix-Tortosa, F. J. Payán-Somet, and J. J.
Murillo-Fuentes, "Gaussian processes regressors for complex proper signals in digital communications," in Proc. of IEEE SAM, 2014, pp. 137–140.
[16] S. M. Kay, Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ: Prentice Hall, 1988.
[17] T. Gneiting, W. Kleiber, and M. Schlather, "Matérn cross-covariance functions for multivariate random fields," Journal of the American Statistical Association, vol. 105, no. 491, pp. 1167–1177, 2010.
[18] M. Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015. [Online]. Available: http://tensorflow.org/
[19] A. G. d. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman, "GPflow: A Gaussian process library using TensorFlow," 2016.
[20] J. W. Pratt and J. D. Gibbons, Concepts of Nonparametric Theory. Springer Science & Business Media, 2012.
[21] M. A. Álvarez, D. Luengo, M. K. Titsias, and N. D. Lawrence, "Efficient multioutput Gaussian processes through variational inducing kernels," in AISTATS, vol. 9, 2010, pp. 25–32.
[22] J. Hensman, N. Durrande, and A. Solin, "Variational Fourier features for Gaussian processes," arXiv preprint arXiv:1611.06740, 2016.
[23] F. Tobar, T. D. Bui, and R. E. Turner, "Design of covariance functions using inter-domain inducing variables," in NIPS 2015 Time Series Workshop, December 2015.
Affine-Invariant Online Optimization and the Low-rank Experts Problem

Tomer Koren, Google Brain, 1600 Amphitheatre Pkwy, Mountain View, CA 94043, tkoren@google.com
Roi Livni, Princeton University, 35 Olden St., Princeton, NJ 08540, rlivni@cs.princeton.edu

Abstract

We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its intrinsic dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order $\sqrt{rT}$ for the low-rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by Hazan et al. [15].

1 Introduction

In the online convex optimization setting, a learner is faced with a stream of $T$ convex functions over a bounded convex domain $X \subseteq \mathbb{R}^d$. At each round $t$ the learner gets to observe a single convex function $f_t$ and has to choose a point $x_t \in X$. The aim of the learner is to minimize the cumulative $T$-round regret, defined as

$$\sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x).$$

In this very general setting, Online Gradient Descent [25] achieves an optimal regret rate of $\Theta(GD\sqrt{T})$, where $G$ is a bound on the Lipschitz constants of the $f_t$ and $D$ is a bound on the diameter of the domain, both with respect to the Euclidean norm. For simplicity, let us restrict the exposition to linear losses $f_t(x_t) = g_t^\top x_t$, in which case $G$ bounds the maximal Euclidean norm $\|g_t\|$; it is well known that the general convex case can be easily reduced to this case. One often useful way to obtain faster convergence in optimization is to employ preconditioning, namely to apply a linear transformation $P$ to the gradients before using them to make update steps.
In an online optimization setting, had we access to the best preconditioner in hindsight, we could achieve a regret rate of the form $\Theta(\inf_P G_P D_P \sqrt{T})$, where $D_P$ is the diameter of the set $P \cdot X$ and $G_P$ is a bound on the norms of the preconditioned gradients $\|P^{-1} g_t\|$. We shall thus refer to the quantity $G_P D_P$ as the conditioning of the problem when a preconditioner $P$ is used. In many cases, however, it is more natural to directly assume a bound $|g_t^\top x_t| \leq C$ on the magnitude of the losses, rather than assuming the bounds $D$ and $G$. In this case, the condition number need not be bounded, and typical guarantees of gradient-based methods such as Online Gradient Descent do not directly apply. In principle, it is possible to find a preconditioner $P$ such that $G_P D_P = O(C\sqrt{d})$, and if one further assumes that the intrinsic dimensionality of the problem (i.e., the rank of the loss vectors $g_1,\ldots,g_T$) is $r \ll d$, the conditioning of the optimization problem can be improved to $O(C\sqrt{r})$. However, this approach requires access to the transformation $P$, which is typically data-dependent and known only in retrospect.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper we address the following natural question: can one achieve a regret rate of $O(C\sqrt{rT})$ without explicit prior knowledge of a good preconditioner $P$? We answer this question in the affirmative and present a new algorithm that achieves this rate, called Online Lazy Newton. Our algorithm is a variant of the Online Newton Step algorithm due to Hazan et al. [14] that employs a lazy projection step. While Online Newton Step was designed to exploit curvature in the loss functions (specifically, a property called exp-concavity), our adaptation is aimed at general, possibly even linear, online convex optimization, and exploits latent low-dimensional structure.
It turns out that this adaptation of the algorithm is able to achieve $O(C\sqrt{rT})$ regret up to a small logarithmic factor, without any prior knowledge of the optimal preconditioner. A crucial property of our algorithm is its affine invariance: Online Lazy Newton is invariant to any affine transformation of the gradients $g_t$, in the sense that running the algorithm on the gradients $g'_t = P^{-1} g_t$ and applying the inverse transformation $P$ to the produced decisions results in the same decisions that would have been obtained by applying the algorithm directly to the original vectors $g_t$. As our main application, we establish a new regret rate for the low-rank experts problem, introduced by Hazan et al. [15]. The low-rank experts setting is a variant of the classical prediction-with-expert-advice problem, where one further assumes that the experts are linearly dependent and their losses span a low-dimensional space of rank $r$. The challenge in this setting is to achieve a regret rate that is independent of the number of experts and depends only on their rank $r$. In this setting, Hazan et al. [15] proved a lower bound of $\Omega(\sqrt{rT})$ on the regret, but fell short of providing a matching upper bound and only gave an algorithm achieving a suboptimal $O(r\sqrt{T})$ regret bound. Applying the Online Lazy Newton algorithm to this problem, we are able to improve upon the latter bound and achieve an $O(\sqrt{rT\log T})$ regret bound, which is optimal up to a $\sqrt{\log T}$ factor and improves upon the prior bound unless $T$ is exponential in the rank $r$.

1.1 Related work

Adaptive regularization is an important topic in online optimization that has received considerable attention in recent years. The AdaGrad algorithm presented in [9] (a closely related algorithm was analyzed in [22]) dynamically adapts to the geometry of the data. In a sense, AdaGrad learns the best preconditioner from a trace-bounded family of Mahalanobis norms (see Section 2.2 for a detailed discussion and comparison of guarantees).
The MetaGrad algorithm of van Erven and Koolen [23] uses a similar dynamic regularization technique to adapt obliviously to possible curvature in the loss functions. Lower bounds for preconditioning when the domain is unbounded were presented in [7]; these lower bounds are inapplicable, however, once losses are bounded (as assumed in this paper). More generally, going beyond worst-case analysis and exploiting latent structure in the data is a very active line of research within online learning. Work in this direction includes adaptation to stochastic i.i.d. data (e.g., [11, 12, 20, 8]), as well as the exploration of various structural assumptions that can be leveraged for better guarantees [4, 12, 13, 5, 19]. Our Online Lazy Newton algorithm is part of a wide family of algorithms named Follow The Regularized Leader (FTRL). FTRL methods choose at each iteration the minimizer of the past observed losses plus an additional regularization term [16, 12, 21]. Our algorithm is closely related to the Follow The Approximate Leader (FTAL) algorithm presented in [14]. The FTAL algorithm is designed to achieve a logarithmic regret rate for exp-concave problems by exploiting the curvature of such functions. In contrast, our algorithm is aimed at optimizing general convex functions with possibly no curvature; while FTAL performs FTL over a second-order approximation of the functions, Online Lazy Newton instead utilizes a first-order approximation with an additional rank-one quadratic regularizer. Another algorithm closely related to ours is the Second-Order Perceptron of Cesa-Bianchi et al. [3] (which is in turn closely related to the Vovk–Azoury–Warmuth forecaster [24, 1]), a variant of the classical Perceptron algorithm adapted to the case where the data is "skewed", or ill-conditioned in the sense used above. Similarly to our algorithm, the Second-Order Perceptron employs adaptive whitening of the data to address its skewness.
Finally, the SON algorithm, proposed in [18], is an enhanced version of Online Newton Step which utilizes sketching to improve over previous second-order online learning algorithms. Similarly to our paper, the authors propose a version that is completely invariant to linear transformations. Their regret bound (for our setting) depends linearly on the dimension of the ambient space and quadratically on the rank of the loss matrix. In contrast, our regret bounds do not depend on the dimension of the ambient space and are linear in the rank of the loss matrix, two properties that are necessary in order to achieve an optimal regret bound for the low-rank experts problem.

This work is highly inspired and motivated by the low-rank experts problem, to which we give an optimal algorithm. The problem was first introduced in [15], where the authors established a regret rate of $\tilde{O}(r\sqrt{T})$, with $r$ the rank of the experts' losses; this was the first regret bound in this setting that did not depend on the total number of experts. The problem has been further studied by Cohen and Mannor [6], Barman et al. [2] and Foster et al. [10]. Here we establish the first tight upper bound (up to a logarithmic factor) that is independent of the total number of experts $N$.

2 Setup and Main Results

We begin by recalling the standard framework of online convex optimization. At each round $t = 1,\ldots,T$ a learner chooses a decision $x_t$ from a bounded convex subset $X \subseteq \mathbb{R}^d$ of $d$-dimensional space. An adversary then chooses a convex cost function $f_t$, and the learner suffers a loss of $f_t(x_t)$. We measure the performance of the learner in terms of the regret, defined as the difference between the accumulated loss incurred by the learner and the loss of the best fixed decision in hindsight. Namely, the $T$-round regret of the learner is given by

$$\text{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x).$$

One typically assumes that the diameter of the set $X$ is bounded and that the convex functions $f_1,\ldots,f_T$ are all Lipschitz continuous, both with respect to certain norms on $\mathbb{R}^d$ (typically, the norms are taken to be dual to each other). However, a main point of this paper is to refrain from making explicit assumptions on the geometry of the optimization problem and to design algorithms that are, in a sense, oblivious to it.

Notation. Given a positive definite matrix $A \succ 0$, we denote by $\|\cdot\|_A$ the norm induced by $A$, namely $\|x\|_A = \sqrt{x^\top A x}$. The dual norm of $\|\cdot\|_A$ is defined by $\|g\|^*_A = \sup_{\|x\|_A \leq 1} |x^\top g|$ and can be shown to equal $\|g\|_{A^{-1}}$. Finally, for a non-invertible matrix $A$, we denote by $A^\dagger$ its Moore–Penrose pseudo-inverse.

2.1 Main Results

Our main results are affine-invariant regret bounds for the Online Lazy Newton algorithm, which we present below in Section 3. We begin with a bound for linear losses that controls the regret in terms of the intrinsic dimensionality of the problem and a bound on the losses.

Theorem 1. Consider the online convex optimization setting with linear losses $f_t(x) = g_t^\top x$, and assume that $|g_t^\top x| \leq C$ for all $t$ and $x \in X$. If Algorithm 1 is run with $\eta < 1/C$, then for every $H \succ 0$ the regret is bounded as

$$\text{Regret}_T \leq \frac{4r}{\eta}\log\left(1 + \frac{(D_H G_H T)^2}{r}\right) + 3\eta\left(1 + \sum_{t=1}^{T} |g_t^\top x^\star|^2\right), \qquad (1)$$

where $r = \operatorname{rank}\big(\sum_{t=1}^{T} g_t g_t^\top\big) \leq d$ and

$$D_H = \max_{x,y \in X} \|x - y\|_H, \qquad G_H = \max_{1 \leq t \leq T} \|g_t\|^*_H.$$

By a standard reduction, the analogous statement holds for convex losses, as long as we assume that the inner products between gradients and decision vectors are bounded.

Corollary 2. Let $f_1,\ldots,f_T$ be an arbitrary sequence of convex functions over $X$. Suppose Algorithm 1 is run with $1/\eta > \max_t \max_{x \in X} |\nabla_t^\top x|$. Then, for every $H \succ 0$, the regret is bounded as

$$\text{Regret}_T \leq \frac{4r}{\eta}\log\left(1 + \frac{(D_H G_H T)^2}{r}\right) + 3\eta\left(1 + \sum_{t=1}^{T} |\nabla_t^\top x^\star|^2\right), \qquad (2)$$

where $r = \operatorname{rank}\big(\sum_{t=1}^{T} \nabla_t \nabla_t^\top\big) \leq d$ and $D_H = \max_{x,y \in X} \|x - y\|_H$, $G_H = \max_{1 \leq t \leq T} \|\nabla_t\|^*_H$.
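The induced and dual norms from the Notation paragraph can be computed directly; a small sketch (helper names are ours), using the pseudo-inverse to cover singular matrices:

```python
import numpy as np

def norm_A(x, A):
    """||x||_A = sqrt(x^T A x)."""
    return float(np.sqrt(x @ A @ x))

def dual_norm_A(g, A):
    """||g||*_A = ||g||_{A^-1}; the pseudo-inverse handles singular A."""
    return float(np.sqrt(g @ np.linalg.pinv(A) @ g))
```

The generalized Cauchy–Schwarz inequality used later, $|x^\top g| \leq \|x\|_A \|g\|^*_A$, can be checked numerically with these helpers.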
In particular, we can use the theorem to show that as long as $|\nabla f_t(x_t)^\top x_t|$ is bounded by a constant (a significantly weaker requirement than assuming bounds on the diameter of $X$ and on the norms of the gradients), one can find a norm $\|\cdot\|_H$ for which the quantities $D_H$ and $G_H$ are properly bounded. We stress again that, importantly, Algorithm 1 need not know the matrix $H$ in order to achieve the corresponding bound.

Theorem 3. Assume that $\max_{1\leq t\leq T}\max_{x\in X}|\nabla_t^\top x| \leq C$. Let $r = \operatorname{rank}\big(\sum_{t=1}^{T}\nabla_t\nabla_t^\top\big) \leq d$, and run Algorithm 1 with $\eta = \Theta\big(\sqrt{r\log(T)/T}\big)$. The regret of the algorithm is then at most $O\big(C\sqrt{rT\log T}\big)$.

2.2 Discussion

It is worth comparing our result to previously studied adaptive regularization techniques. Perhaps the most popular gradient method that employs adaptive regularization is the AdaGrad algorithm introduced in [9]. AdaGrad enjoys the regret bound depicted in Eq. (3): it is competitive with any fixed regularization matrix $S \succ 0$ such that $\operatorname{Tr}(S) \leq d$:

$$\text{Regret}_T(\text{AdaGrad}) = O\left(\sqrt{d}\, \inf_{S \succ 0,\ \operatorname{Tr}(S)\leq d} \sqrt{\sum_{t=1}^{T} \|x^\star\|_2^2\, \big(\|\nabla_t\|^*_S\big)^2}\right), \qquad (3)$$

$$\text{Regret}_T(\text{OLN}) = \tilde{O}\left(\sqrt{r}\, \inf_{S \succ 0} \sqrt{\sum_{t=1}^{T} \|x^\star\|_S^2\, \big(\|\nabla_t\|^*_S\big)^2}\right). \qquad (4)$$

On the other hand, for every matrix $S \succ 0$, the generalized Cauchy–Schwarz inequality gives $|\nabla_t^\top x^\star| \leq \|\nabla_t\|^*_S \|x^\star\|_S$. Plugging this into Eq. (2), with a proper tuning of $\eta$, gives a bound competitive with any fixed regularization matrix $S \succ 0$, depicted in Eq. (4). Our bound improves on AdaGrad's regret bound in two ways. First, the bound in Eq. (4) scales with the intrinsic dimension of the problem: when the true underlying dimensionality of the problem is smaller than the dimension of the ambient space, Online Lazy Newton enjoys a superior regret bound. Furthermore, as demonstrated in [15], the dependence of AdaGrad's regret on the ambient dimension is not an artifact of the analysis: there are cases where the actual regret grows polynomially with $d$ rather than with the true rank $r \ll d$.
The second case where the Online Lazy Newton bound can be superior to AdaGrad's is when there exists a conditioning matrix that improves not only the norms of the gradients with respect to the Euclidean norm, but also the norm of $x^\star$ with respect to the optimal norm induced by $S$. More generally, whenever $\sum_{t=1}^{T}(\nabla_t^\top x^\star)^2 < \sum_{t=1}^{T}\|\nabla_t\|_S^2\,\|x^\star\|_2^2$, and in particular when $\|x^\star\|_S < \|x^\star\|_2$, Eq. (4) produces a tighter bound than Eq. (3).

3 The Online Lazy Newton Algorithm

We next present the main focus of this paper: the affine-invariant algorithm Online Lazy Newton (OLN), given in Algorithm 1. The algorithm maintains two vectors, $x_t$ and $y_t$. The vector $y_t$ is updated at each iteration using the gradient of $f_t$ at $x_t$, via $y_t = y_{t-1} - \nabla_t$, where $\nabla_t = \nabla f_t(x_t)$. The vector $y_t$ is not guaranteed to lie in $X$; hence the actual prediction of OLN is determined via a projection onto the set $X$, resulting in the vector $x_{t+1} \in X$. However, similarly to ONS, the algorithm first transforms $y_t$ via the (pseudo-)inverse of the matrix $A_t$ formed from the sum of the outer products $\nabla_s\nabla_s^\top$, $s \leq t$, and projections are taken with respect to $A_t$. In this context, we use the notation

$$\Pi_X^A(y) = \arg\min_{x \in X} (x-y)^\top A (x-y)$$

to denote the projection onto a set $X$ with respect to the (semi-)norm $\|\cdot\|_A$ induced by a positive semidefinite matrix $A \succeq 0$.

Algorithm 1 OLN: Online Lazy Newton
  Parameters: initial point $x_1 \in X$, step size $\eta > 0$
  Initialize $y_0 = 0$ and $A_0 = 0$
  for $t = 1, 2, \ldots, T$ do
    Play $x_t$, incur cost $f_t(x_t)$, observe gradient $\nabla_t = \nabla f_t(x_t)$
    Rank-one update: $A_t = A_{t-1} + \eta\nabla_t\nabla_t^\top$
    Online Newton step and projection: $y_t = y_{t-1} - \nabla_t$, $\ x_{t+1} = \Pi_X^{A_t}\big(A_t^\dagger y_t\big)$
  end for

The main motivation for using the matrices $A_t$ as preconditioners is that, as demonstrated in our analysis, the algorithm becomes invariant to linear transformations of the gradient vectors $\nabla_t$.
Indeed, if $P$ is some linear transformation, one can observe that if we run the algorithm on $P\nabla_t$ instead of $\nabla_t$, the solution at step $t$ is transformed from $x_t$ to $P^{-1}x_t$. In turn, the cumulative regret is invariant to such transformations. As seen in Theorem 1, this invariance indeed leads to an algorithm with an improved regret bound when the input representation of the data is poorly conditioned. While the algorithm is very similar to ONS, it differs in several important aspects. First, unlike ONS, our lazy version maintains at each step a vector $y_t$ which is updated without any projections. Projection is then applied only when we need to calculate $x_{t+1}$. In that sense, it can be thought of as a gradient descent method with lazy projections (analogous to dual-averaging methods), while ONS is similar to gradient descent methods with a greedy projection step (reminiscent of mirror-descent type algorithms). The effect of this is a decoupling of past and future conditioning and projections: if the transformation matrix $A_t$ changes between rounds, the lazy approach allows us to condition and project the problem at each iteration independently. Second, ONS uses an initialization of $A_0 = \epsilon I_d$ (while OLN uses $A_0 = 0$) and, as a result, it is not invariant to affine transformations. While this difference might seem negligible since $\epsilon$ is typically chosen to be very small, recall that the matrices $A_t$ are used as preconditioners and their small eigenvalues can be very meaningful, especially when the problem at hand is poorly conditioned.

4 Application: Low Rank Experts

In this section we consider the Low-rank Experts problem and show how the Online Lazy Newton algorithm can be used to obtain a nearly optimal regret in that setting. In the Low-rank Experts problem, a variant of the classic problem of prediction with expert advice, a learner has to choose at each round $t = 1, \ldots, T$ between following the advice of one of $N$ experts.
On round $t$, the learner chooses a distribution over the experts in the form of a probability vector $x_t \in \Delta_N$ (here $\Delta_N$ denotes the $N$-dimensional simplex); thereafter, an adversary chooses a cost vector $g_t \in [-1, 1]^N$ assigning losses to experts, and the player suffers a loss of $x_t^\top g_t \in [-1, 1]$. In contrast with the standard experts setting, here we assume that in hindsight the experts share a common low-rank structure, namely that $\operatorname{rank}(g_1, \ldots, g_T) \le r$ for some $r < N$. It is known that in the stochastic setting (i.e., when the $g_t$ are drawn i.i.d. from some fixed distribution) a follow-the-leader algorithm enjoys a regret bound of $\min\{\sqrt{rT}, \sqrt{T \log N}\}$. In [15] the authors asked whether one can achieve the same regret bound in the online setting. Here we answer this question in the affirmative.

Theorem 4 (Low Rank Experts). Consider the low-rank experts setting, where $\operatorname{rank}(g_1, \ldots, g_T) \le r$. Set $\eta = \sqrt{r \log(T)/T}$, and run Algorithm 1 with $\mathcal{X} = \Delta_N$ and $f_t(x) = g_t^\top x$. Then, the obtained regret satisfies $\mathrm{Regret}_T = O\big(\sqrt{rT \log T}\big)$.

This bound matches the $\Omega(\sqrt{rT})$ lower bound of [15] up to a $\log T$ factor, and improves upon their $O(r\sqrt{T})$ upper bound as long as $T$ is not exponential in $r$. Put differently, if one aims at ensuring an average regret of at most $\epsilon$, the OLN algorithm would need $O\big((r/\epsilon^2)\log(1/\epsilon)\big)$ iterations, as opposed to the $O(r^2/\epsilon^2)$ iterations required by the algorithm of [15]. We also remark that, since the Hedge algorithm can be used to obtain a regret rate of $O(\sqrt{T \log N})$, we can obtain an algorithm with a regret bound of the form $O\big(\min\{\sqrt{rT \log T}, \sqrt{T \log N}\}\big)$ by treating Hedge and OLN as meta-experts and applying Hedge over them.

5 Analysis

For the proofs of our main theorems we rely on the following two technical lemmas.

Lemma 5 ([17], Lemma 5). Let $\Phi_1, \Phi_2 : \mathcal{X} \to \mathbb{R}$ be two convex functions defined over a closed and convex domain $\mathcal{X} \subseteq \mathbb{R}^d$, and let $x_1 \in \arg\min_{x \in \mathcal{X}} \Phi_1(x)$ and $x_2 \in \arg\min_{x \in \mathcal{X}} \Phi_2(x)$. Assume that $\Phi_2$ is $\sigma$-strongly convex with respect to a norm $\|\cdot\|$.
Then, for $\varphi = \Phi_2 - \Phi_1$ we have $\|x_2 - x_1\| \le \frac{1}{\sigma}\|\nabla\varphi(x_1)\|_*$. Furthermore, if $\varphi$ is convex then $0 \le \varphi(x_1) - \varphi(x_2) \le \frac{1}{\sigma}\big(\|\nabla\varphi(x_1)\|_*\big)^2$.

The following lemma is a slight strengthening of a result given in [14].

Lemma 6. Let $g_1, \ldots, g_T \in \mathbb{R}^d$ be a sequence of vectors, and define $G_t = H + \sum_{s=1}^t g_s g_s^\top$ for all $t$, where $H$ is a positive definite matrix such that $\|g_t\|_H^* \le \gamma$ for all $t$. Then
$$\sum_{t=1}^T g_t^\top G_t^{-1} g_t \le r \log\Big(1 + \frac{\gamma^2 T}{r}\Big),$$
where $r$ is the rank of the matrix $\sum_{s=1}^T g_s g_s^\top$.

Proof. Following [14], we first prove that
$$\sum_{t=1}^T g_t^\top G_t^{-1} g_t \le \log\frac{\det G_T}{\det H} = \log\det\big(H^{-1/2} G_T H^{-1/2}\big). \qquad (5)$$
To this end, let $G_0 = H$, so that $G_t = G_{t-1} + g_t g_t^\top$ for all $t \ge 1$. The well-known matrix determinant lemma, which states that $\det(A - uu^\top) = (1 - u^\top A^{-1}u)\det(A)$, gives
$$g_t^\top G_t^{-1} g_t = 1 - \frac{\det(G_t - g_t g_t^\top)}{\det G_t} = 1 - \frac{\det G_{t-1}}{\det G_t}.$$
Using the inequality $1 - x \le \log(1/x)$ and summing over $t = 1, \ldots, T$, we obtain
$$\sum_{t=1}^T g_t^\top G_t^{-1} g_t \le \sum_{t=1}^T \log\frac{\det G_t}{\det G_{t-1}} = \log\frac{\det G_T}{\det H},$$
which yields Eq. (5). Next, observe that $H^{-1/2} G_T H^{-1/2} = I + \sum_{s=1}^T H^{-1/2} g_s g_s^\top H^{-1/2}$ and
$$\operatorname{Tr}\Big(\sum_{s=1}^T H^{-1/2} g_s g_s^\top H^{-1/2}\Big) = \sum_{s=1}^T \operatorname{Tr}\big(g_s^\top H^{-1} g_s\big) = \sum_{s=1}^T \big(\|g_s\|_H^*\big)^2 \le \gamma^2 T.$$
Also, the rank of the matrix $\sum_{s=1}^T H^{-1/2} g_s g_s^\top H^{-1/2} = H^{-1/2}\big(\sum_{s=1}^T g_s g_s^\top\big)H^{-1/2}$ is at most $r$. Hence, all the eigenvalues of the matrix $H^{-1/2} G_T H^{-1/2}$ are equal to 1, except for $r$ of them whose sum is at most $r + \gamma^2 T$. Denote the latter by $\lambda_1, \ldots, \lambda_r$; using the concavity of $\log(\cdot)$ and Jensen's inequality, we conclude that
$$\log\det\big(H^{-1/2} G_T H^{-1/2}\big) = \sum_{i=1}^r \log\lambda_i \le r\log\Big(\frac{1}{r}\sum_{i=1}^r \lambda_i\Big) \le r\log\Big(1 + \frac{\gamma^2 T}{r}\Big),$$
which together with Eq. (5) gives the lemma. □

We can now prove our main results. We begin by proving Theorem 1.

Proof of Theorem 1. For all $t$, let $\tilde f_t(x) = g_t^\top x + \frac{\eta}{2}(g_t^\top x)^2$ and set
$$\widetilde F_t(x) = \sum_{s=1}^t \tilde f_s(x) = -y_t^\top x + \frac{1}{2}x^\top A_t x.$$
Observe that $x_{t+1}$, which is the choice of Algorithm 1 at iteration $t+1$, is the minimizer of $\widetilde F_t$; indeed, since $y_t$ is in the column span of $A_t$, we can write, up to a constant,
$$\widetilde F_t(x) = \frac{1}{2}\big(x - A_t^\dagger y_t\big)^\top A_t \big(x - A_t^\dagger y_t\big) + \text{const}.$$
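As a quick numerical sanity check of Lemma 6, the following sketch draws random gradients from a rank-2 subspace, takes $H = I$ (so $\gamma = 1$ after normalization), and computes both sides of the lemma together with the intermediate log-determinant bound of Eq. (5). All names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, r = 5, 50, 2
basis = rng.standard_normal((d, r))            # gradients live in a rank-2 subspace
gs = [basis @ rng.standard_normal(r) for _ in range(T)]
gs = [g / np.linalg.norm(g) for g in gs]       # normalize so gamma = 1 for H = I

H = np.eye(d)
G = H.copy()
lhs = 0.0
for g in gs:
    G = G + np.outer(g, g)                     # G_t = H + sum_{s<=t} g_s g_s^T
    lhs += g @ np.linalg.solve(G, g)           # accumulates sum_t g_t^T G_t^{-1} g_t

log_det_ratio = np.linalg.slogdet(G)[1] - np.linalg.slogdet(H)[1]   # Eq. (5) bound
gamma = 1.0
rhs = r * np.log(1 + gamma**2 * T / r)         # final bound of the lemma
```

One should observe `lhs <= log_det_ratio <= rhs`, matching the two steps of the proof.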
In other words, Algorithm 1 is equivalent to a follow-the-leader algorithm on the functions $\tilde f_t$. Next, fix some positive definite matrix $H \succ 0$ and let $D_H = \max_{x,y \in \mathcal{X}} \|x - y\|_H$ and $G_H = \max_{1 \le t \le T} \|g_t\|_H^*$. We have
$$\widetilde F_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2 = \frac{1}{2}\|x\|_{A_t}^2 - y_t^\top x + \frac{\eta}{2}\big(\|x\|_H^2 - 2x_{t+1}^\top H x + \|x_{t+1}\|_H^2\big) = \frac{\eta}{2}\|x\|_{G_t}^2 - y_t^\top x - \eta x_{t+1}^\top H x + \frac{\eta}{2}\|x_{t+1}\|_H^2,$$
where $G_t = H + \sum_{s=1}^t g_s g_s^\top$. In turn, this function is $\eta$-strongly convex with respect to the norm $\|\cdot\|_{G_t}$ and is minimized at $x = x_{t+1}$. Then, applying Lemma 5 with $\Phi_1(x) = \widetilde F_{t-1}(x)$ and $\Phi_2(x) = \widetilde F_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2$, so that $\varphi(x) = \tilde f_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2$, we have
$$\tilde f_t(x_t) - \tilde f_t(x_{t+1}) + \frac{\eta}{2}\|x_t - x_{t+1}\|_H^2 \le \frac{1}{\eta}\big(\|g_t + \eta g_t g_t^\top x_t + \eta H(x_t - x_{t+1})\|_{G_t}^*\big)^2$$
$$\le \frac{2}{\eta}(1 + \eta g_t^\top x_t)^2\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_{G_t}^*\big)^2 \qquad \big(\because \|v + u\|^2 \le 2\|v\|^2 + 2\|u\|^2\big)$$
$$\le \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_{G_t}^*\big)^2 \qquad \big(\because \tfrac{1}{\eta} \ge \max_{x \in \mathcal{X}} |g_t^\top x|\big)$$
$$\le \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_H^*\big)^2 \qquad \big(\because H \prec G_t \Rightarrow H^{-1} \succ G_t^{-1}\big)$$
$$= \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\|x_t - x_{t+1}\|_H^2.$$
Subtracting the $\frac{\eta}{2}\|x_t - x_{t+1}\|_H^2$ term and summing over $t$, we obtain
$$\sum_{t=1}^T \big(\tilde f_t(x_t) - \tilde f_t(x_{t+1})\big) \le \frac{8}{\eta}\sum_{t=1}^T g_t^\top G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^T \|x_t - x_{t+1}\|_H^2.$$
By the FTL-BTL Lemma (e.g., [16]), we have $\sum_{t=1}^T \tilde f_t(x_t) - \tilde f_t(x^\star) \le \sum_{t=1}^T \tilde f_t(x_t) - \tilde f_t(x_{t+1})$. Hence, we obtain
$$\sum_{t=1}^T \tilde f_t(x_t) - \tilde f_t(x^\star) \le \frac{8}{\eta}\sum_{t=1}^T g_t^\top G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^T \|x_t - x_{t+1}\|_H^2.$$
Plugging in $\tilde f_t(x) = g_t^\top x + \frac{\eta}{2}(g_t^\top x)^2$ and rearranging, we obtain
$$\sum_{t=1}^T g_t^\top(x_t - x^\star) \le \frac{8}{\eta}\sum_{t=1}^T g_t^\top G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^T \|x_t - x_{t+1}\|_H^2 + \frac{\eta}{2}\sum_{t=1}^T (g_t^\top x^\star)^2$$
$$\le \frac{8}{\eta}\sum_{t=1}^T g_t^\top G_t^{-1} g_t + \frac{\eta}{2}\sum_{t=1}^T (g_t^\top x^\star)^2 + \frac{3\eta}{2}TD_H^2$$
$$\le \frac{8r}{\eta}\log\Big(1 + \frac{G_H^2 T}{r}\Big) + \frac{3\eta}{2}TD_H^2 + \frac{\eta}{2}\sum_{t=1}^T (g_t^\top x^\star)^2.$$
Finally, note that we have obtained the last inequality for every matrix $H \succ 0$. By rescaling, i.e., re-parametrizing $H \to H/(\sqrt{T}D_H)^2$, we obtain a matrix whose diameter satisfies $D_H \to 1/\sqrt{T}$ while $G_H \to \sqrt{T}D_H G_H$. Plugging these into the last inequality yields the result. □

Proof of Theorem 3. To simplify notation, let us assume that $|\nabla_t^\top x^\star| \le 1$.
We get from Corollary 2 that for every $\eta$:
$$\mathrm{Regret}_T \le \frac{2r}{\eta}\log\Big(1 + \frac{D^2 G^2 T^2}{r}\Big) + 3\eta(1 + T).$$
For each $G_H$ and $D_H$ we can set $\eta = \sqrt{(2r/T)\log(1 + G_H^2 D_H^2/r)}$ and obtain the regret bound
$$\mathrm{Regret}_T \le \sqrt{rT\log\Big(1 + \frac{D_H^2 G_H^2 T}{r}\Big)}.$$
Hence, we only need to show that there exists a matrix $H \succ 0$ such that $D_H^2 G_H^2 = O(r)$. Indeed, set $S = \operatorname{span}(\nabla_1, \ldots, \nabla_T)$, and denote by $\mathcal{X}_S$ the projection of $\mathcal{X}$ onto $S$ (i.e., $\mathcal{X}_S = P\mathcal{X}$ where $P$ is the projection onto $S$). Define $\mathcal{B} = \{\nabla \in S : |\nabla^\top x| \le 1,\ \forall x \in \mathcal{X}_S\}$. Note that by definition $\{\nabla_t\}_{t=1}^T \subseteq \mathcal{B}$. Further, $\mathcal{B}$ is a symmetric convex set, hence by an ellipsoid approximation we obtain a positive semidefinite matrix $B \succeq 0$, with positive eigenvalues restricted to $S$, such that
$$\mathcal{B} \subseteq \{\nabla \in S : \nabla^\top B \nabla \le 1\} \subseteq r\mathcal{B}.$$
By duality we have
$$\tfrac{1}{r}\mathcal{X}_S \subseteq \tfrac{1}{r}\mathcal{B}^* \subseteq \{v \in S : v^\top B^\dagger v \le 1\}.$$
Thus, if $P_S$ is the projection onto $S$, we have $x^\top P_S B^\dagger P_S x \le r$ for every $x \in \mathcal{X}$. On the other hand, for every $\nabla_t$ we have $\nabla_t^\top B\nabla_t \le 1$. We can now choose $H = B^\dagger + \epsilon I_d$, where $\epsilon$ is arbitrarily small, and obtain $\nabla_t^\top H^{-1}\nabla_t = \nabla_t^\top(B^\dagger + \epsilon I_d)^{-1}\nabla_t \le 2$ and $x^\top H x = x^\top P_S^\top B^\dagger P_S x + \epsilon\|x\|^2 \le 2r$. □

Acknowledgements

The authors would like to thank Elad Hazan for helpful discussions. RL is supported by the Eric and Wendy Schmidt Fund for Strategic Innovations.

References

[1] K. S. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211–246, 2001.
[2] S. Barman, A. Gopalan, and A. Saha. Online learning for structured loss spaces. arXiv preprint arXiv:1706.04125, 2017.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order perceptron algorithm. SIAM Journal on Computing, 34(3):640–668, 2005.
[4] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[5] C.-K. Chiang, T. Yang, C.-J. Lee, M. Mahdavi, C.-J. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In COLT, 2012.
[6] A. Cohen and S. Mannor.
Online learning with many experts. arXiv preprint arXiv:1702.07870, 2017. [7] A. Cutkosky and K. Boahen. Online learning without prior information. arXiv preprint arXiv:1703.02629, 2017. [8] S. De Rooij, T. Van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, hedge if you must. The Journal of Machine Learning Research, 15(1):1281–1316, 2014. [9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011. [10] D. J. Foster, A. Rakhlin, and K. Sridharan. Zigzag: A new approach to adaptive online learning. arXiv preprint arXiv:1704.04010, 2017. [11] E. Hazan and S. Kale. On stochastic and worst-case models for investing. In Advances in Neural Information Processing Systems, pages 709–717, 2009. [12] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine learning, 80(2-3):165–188, 2010. [13] E. Hazan and S. Kale. Better algorithms for benign bandits. The Journal of Machine Learning Research, 12:1287–1311, 2011. [14] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In International Conference on Computational Learning Theory, pages 499–513. Springer, 2006. [15] E. Hazan, T. Koren, R. Livni, and Y. Mansour. Online learning with low rank experts. In 29th Annual Conference on Learning Theory, pages 1096–1114, 2016. [16] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005. [17] T. Koren and K. Levy. Fast rates for exp-concave empirical risk minimization. In Advances in Neural Information Processing Systems, pages 1477–1485, 2015. [18] H. Luo, A. Agarwal, N. Cesa-Bianchi, and J. Langford. Efficient second order online learning by sketching. In Advances in Neural Information Processing Systems, pages 902–910, 2016. [19] A. Rakhlin and K. Sridharan. 
Online learning with predictable sequences. In Conference on Learning Theory, pages 993–1019, 2013.
[20] A. Rakhlin, O. Shamir, and K. Sridharan. Localization and adaptation in online learning. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pages 516–526, 2013.
[21] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[22] M. Streeter and H. B. McMahan. Less regret via online conditioning. Technical report, 2010.
[23] T. van Erven and W. M. Koolen. MetaGrad: Multiple learning rates in online learning. In Advances in Neural Information Processing Systems, pages 3666–3674, 2016.
[24] V. Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213–248, 2001.
[25] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, 2003.
Pose Guided Person Image Generation

Liqian Ma1 Xu Jia2∗ Qianru Sun3∗ Bernt Schiele3 Tinne Tuytelaars2 Luc Van Gool1,4
1KU-Leuven/PSI, TRACE (Toyota Res in Europe) 2KU-Leuven/PSI, IMEC 3Max Planck Institute for Informatics, Saarland Informatics Campus 4ETH Zurich
{liqian.ma, xu.jia, tinne.tuytelaars, luc.vangool}@esat.kuleuven.be {qsun, schiele}@mpi-inf.mpg.de vangool@vision.ee.ethz.ch

Abstract

This paper proposes the novel Pose Guided Person Generation Network (PG2) that allows synthesizing person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG2 utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage, the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.

1 Introduction

Generating realistic-looking images is of great value for many applications such as face editing, movie making and image retrieval based on synthesized images. Consequently, a wide range of methods have been proposed, including Variational Autoencoders (VAE) [14], Generative Adversarial Networks (GANs) [6] and autoregressive models (e.g., PixelRNN [30]). Recently, GAN models have been particularly popular due to their principled ability to generate sharp images through adversarial training. For example, in [21, 5, 1], GANs are leveraged to generate faces and natural scene images, and several methods are proposed to stabilize the training process and to improve the quality of generation.
From an application perspective, users typically have a particular intention in mind such as changing the background, an object’s category, its color or viewpoint. The key idea of our approach is to guide the generation process explicitly by an appropriate representation of that intention to enable direct control over the generation process. More specifically, we propose to generate an image by conditioning it on both a reference image and a specified pose. With a reference image as condition, the model has sufficient information about the appearance of the desired object in advance. The guidance given by the intended pose is both explicit and flexible. So in principle this approach can manipulate any object to an arbitrary pose. In this work, we focus on transferring a person from a given pose to an intended pose. There are many interesting applications derived from this task. For example, in movie making, we can directly manipulate a character’s human body to a desired pose or, for human pose estimation, we can generate training data for rare but important poses. Transferring a person from one pose to another is a challenging task. A few examples can be seen in Figure 1. It is difficult for a complete end-to-end framework to do this because it has to generate both correct poses and detailed appearance simultaneously. Therefore, we adopt a divide-and-conquer strategy, dividing the problem into two stages which focus on learning global human body structure ∗Equal contribution. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
and appearance details, respectively, similar to [35, 9, 3, 19].

Figure 1: Generated samples on the DeepFashion dataset [16] (a)(c) and the Market-1501 dataset [37] (b). Each example shows the condition image, the target pose, the coarse result, the refined result, and the ground-truth target image; (c) shows generation from a sequence of target poses.

At stage-I, we explore different ways to model pose information. A variant of U-Net is employed to integrate the target pose with the person image. It outputs a coarse generation result that captures the global structure of the human body in the target image. A masked L1 loss is proposed to suppress the influence of background change between the condition image and the target image. However, it generates blurry results due to the use of L1. At stage-II, a variant of the Deep Convolutional GAN (DCGAN) model is used to further refine the initial generation result. The model learns to fill in more appearance details via adversarial training and generates sharper images. Different from the common use of GANs, which directly learns to generate an image from scratch, in this work we train a GAN to generate a difference map between the initial generation result and the target person image. The training converges faster since it is an easier task. Besides, we add a masked L1 loss to regularize the training of the generator such that it does not generate images with many artifacts. Experiments on two datasets, a low-resolution person re-identification dataset and a high-resolution fashion photo dataset, demonstrate the effectiveness of the proposed method. Our contribution is three-fold. i) We propose a novel task of conditioning image generation on a reference image and an intended pose, whose purpose is to manipulate a person in an image to an arbitrary pose.
ii) Several ways are explored to integrate pose information with a person image. A novel mask loss is proposed to encourage the model to focus on transferring the human body appearance instead of background information. iii) To address the challenging task of pose transfer, we divide the problem into two stages, with stage-I focusing on the global structure of the human body and stage-II on filling in appearance details based on the first-stage result.

Figure 2: The overall framework of our Pose Guided Person Generation Network (PG2). It contains two stages. Stage-I (a residual U-Net-like autoencoder G1 with skip connections, taking the concatenated condition image and target pose) focuses on pose integration and generates an initial result that captures the global structure of the human. Stage-II (a fully convolutional generator G2 that predicts a difference map, trained adversarially against a discriminator D on real and fake image pairs) focuses on refining the initial result and generates sharper images.

2 Related works

Recently there have been many works on generative image modeling with deep learning techniques. These works fall into two categories. The first line of works follows an unsupervised setting. One popular method in this setting is the variational autoencoder proposed by Kingma and Welling [14] and Rezende et al. [25], which applies a re-parameterization trick to maximize the lower bound of the data likelihood. Another branch of methods are autoregressive models [28, 30, 29], which compute the product of conditional distributions of pixels in a pixel-by-pixel manner as the joint distribution of pixels in an image. The most popular methods are generative adversarial networks (GANs) [6], which simultaneously learn a generator to generate samples and a discriminator to discriminate generated samples from real ones. Many works show that GANs can generate sharp images because they use the adversarial loss instead of the L1 loss. In this work, we also use the adversarial loss in our framework in order to generate high-frequency details in images.

The second group of works generates images conditioned on either category or attribute labels, texts or images. Yan et al. [32] proposed a Conditional Variational Autoencoder (CVAE) to achieve attribute-conditioned image generation. Mirza and Osindero [18] proposed to condition both generator and discriminator of a GAN on side information to perform category-conditioned image generation. Lassner et al. [15] generated full-body people in clothing by conditioning on fine-grained body part segments. Reed et al. proposed to generate bird images conditioned on text descriptions by adding textual information to both generator and discriminator [24], and further explored the use of additional location, keypoint or segmentation information to generate images [22, 23]. With only these visual cues as condition, and in contrast to our explicit condition on the intended pose, the control exerted over the image generation process is still abstract. Several works further conditioned image generation not only on labels and texts but also on images. Researchers [34, 33, 11, 8] addressed the task of face image generation conditioned on a reference image and a specific face viewpoint. Chen et al. [4] tackled unseen view inference as a tensor completion problem, and used latent factors to impute the pose in unseen views. Zhao et al. [36] explored generating multi-view cloth images from only a single-view input, which is most similar to our task. However, a wide range of poses is consistent with any given viewpoint, making the conditioning less expressive than in our work.
In this work, we make use of pose information in a more explicit and flexible way, that is, using poses in the format of keypoints to model diverse human body appearance. It should be noted that instead of doing expensive pose annotation, we use a state-of-the-art pose estimation approach to obtain the desired human body keypoints.

3 Method

Our task is to simultaneously transfer the appearance of a person from a given pose to a desired pose and keep important appearance details of the identity. As it is challenging to implement this as an end-to-end model, we propose a two-stage approach to address this task, with each stage focusing on one aspect. For the first stage we propose and analyze several model variants, and for the second stage we use a variant of a conditional DCGAN to fill in more appearance details. The overall framework of the proposed Pose Guided Person Generation Network (PG2) is shown in Figure 2.

3.1 Stage-I: Pose integration

At stage-I, we integrate a conditioning person image I_A with a target pose P_B to generate a coarse result Î_B that captures the global structure of the human body in the target image I_B.

Pose embedding. To avoid expensive annotation of poses, we apply a state-of-the-art pose estimator [2] to obtain approximate human body poses. The pose estimator generates the coordinates of 18 keypoints. Using those directly as input to our model would require the model to learn to map each keypoint to a position on the human body. Therefore, we encode the pose P_B as 18 heatmaps. Each heatmap is filled with 1 in a radius of 4 pixels around the corresponding keypoint and 0 elsewhere (see Figure 3, target pose). We concatenate I_A and P_B as input to our model. In this way, we can directly use convolutional layers to integrate the two kinds of information.

Generator G1. As generator at stage-I, we adopt a U-Net-like architecture [20], i.e., a convolutional autoencoder with skip connections, as shown in Figure 2.
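The pose embedding described above is easy to implement; a minimal NumPy sketch follows (the function name and the convention of marking invisible keypoints with negative coordinates are our own assumptions, not from the paper):

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, radius=4):
    """Encode (x, y) keypoints as binary heatmaps, one channel per keypoint:
    1 within `radius` pixels of the keypoint, 0 elsewhere. Keypoints with
    negative coordinates (assumed invisible) give an all-zero channel."""
    k = len(keypoints)
    maps = np.zeros((k, height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]       # pixel coordinate grids
    for i, (x, y) in enumerate(keypoints):
        if x < 0 or y < 0:
            continue
        maps[i] = ((xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2).astype(np.float32)
    return maps
```

The model input would then be the channel-wise concatenation of the condition image and these 18 heatmap channels.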
Specifically, we first use several stacked convolutional layers to integrate I_A and P_B from small local neighborhoods to larger ones so that appearance information can be integrated and transferred to neighboring body parts. Then, a fully connected layer is used such that information between distant body parts can also be exchanged. After that, the decoder is composed of a set of stacked convolutional layers which are symmetric to the encoder to generate an image. The result of the first stage is denoted as Î_B1. In the U-Net, skip connections between encoder and decoder help propagate image information directly from input to output. In addition, we find that using residual blocks as basic components improves the generation performance. In particular, we propose to simplify the original residual block [7] to have only two consecutive conv-relu layers inside a residual block.

Figure 3: Process of computing the pose mask: pose keypoints are estimated, the keypoints are connected into a pose skeleton, and morphological operations produce the pose mask for the target image.

Pose mask loss. To compare the generation Î_B1 with the target image I_B, we adopt the L1 distance as the generation loss of stage-I. However, since we only have a condition image and a target pose as input, it is difficult for the model to generate what the background would look like if the target image has a different background from the condition image. Thus, in order to alleviate the influence of background changes, we add a pose mask M_B to the L1 loss such that the human body is given more weight than the background. The formulation of the pose mask loss is given in Eq. (1), with ⊙ denoting pixel-wise multiplication:

L_G1 = ∥(G1(I_A, P_B) − I_B) ⊙ (1 + M_B)∥_1.    (1)

The pose mask M_B is set to 1 for foreground and 0 for background, and is computed by connecting human body parts and applying a set of morphological operations such that it approximately covers the whole human body in the target image; see the example in Figure 3. The output of G1 is blurry because the L1 loss encourages the result to be an average of all possible cases [10]. However, G1 does capture the global structural information specified by the target pose, as shown in Figure 2, as well as other low-frequency information such as the color of clothes. Details of body appearance, i.e., the high-frequency information, will be refined at the second stage through adversarial training.

3.2 Stage-II: Image refinement

Since the model at the first stage has already synthesized an image which is coarse but close to the target image in pose and basic color, at the second stage we would like the model to focus on generating more details by correcting what is wrong or missing in the initial result. We use a variant of conditional DCGAN [21] as our base model and condition it on the stage-I generation result.

Generator G2. Considering that the initial result and the target image are already structurally similar, we propose that the generator G2 at the second stage aims to generate an appearance difference map that brings the initial result closer to the target image. The difference map is computed using a U-Net similar to the first stage, but with the initial result Î_B1 and the condition image I_A as input instead. The difference is that the fully-connected layer is removed from the U-Net. This helps to preserve more details from the input, because a fully-connected layer compresses a lot of the information contained in the input. The use of difference maps speeds up the convergence of model training, since the model focuses on learning the missing appearance details instead of synthesizing the target image from scratch.
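The pose mask loss of Eq. (1) is a one-liner in practice; a minimal NumPy sketch (summing over pixels of a single image; in a real implementation one would average over a minibatch):

```python
import numpy as np

def pose_mask_l1(generated, target, mask):
    """Eq. (1): L1 distance where body pixels (mask = 1) receive weight 2
    and background pixels (mask = 0) receive weight 1."""
    return float(np.abs((generated - target) * (1.0 + mask)).sum())
```

With this weighting, errors on the human body count twice as much as errors on the background, which is exactly how the loss suppresses the influence of background changes.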
In particular, the training already starts from a reasonable result. The overall architecture of G2 can be seen in Figure 2.

Discriminator D. In traditional GANs, the discriminator distinguishes between real ground-truth images and fake generated images (generated from random noise). However, in our conditional network, G2 takes the condition image I_A instead of random noise as input. Therefore, real images are the ones which are not only natural but also satisfy a specific requirement. Otherwise, G2 would be misled to directly output I_A, which is natural by itself, instead of refining the coarse result of the first stage Î_B1. To address this issue, we pair the G2 output with the condition image so that the discriminator D recognizes the pairs' fakery, i.e., (Î_B2, I_A) vs. (I_B, I_A). This is diagrammed in Figure 2. The pairwise input encourages D to learn the distinction between Î_B2 and I_B instead of only the distinction between synthesized and natural images. Another difference from traditional GANs is that noise is no longer necessary, since the generator is conditioned on an image I_A, which is similar to [17]. Therefore, we have the following loss functions for the discriminator D and the generator G2, respectively:

L_adv^D = L_bce(D(I_A, I_B), 1) + L_bce(D(I_A, G2(I_A, Î_B1)), 0),    (2)
L_adv^G = L_bce(D(I_A, G2(I_A, Î_B1)), 1),    (3)

where L_bce denotes the binary cross-entropy loss. Previous work [10, 17] shows that mixing the adversarial loss with a loss minimizing an Lp distance can regularize the image generation process. Here we use the same masked L1 loss as at the first stage, such that it pays more attention to the appearance of the targeted human body than to the background:

L_G2 = L_adv^G + λ∥(G2(I_A, Î_B1) − I_B) ⊙ (1 + M_B)∥_1,    (4)

where λ is the weight of the L1 loss. It controls how closely the generation resembles the target image at low frequencies.
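The losses in Eqs. (2)–(4) can be sketched as follows (NumPy, with the discriminator's outputs given directly as probabilities; in a real implementation D and G2 would be networks and the L1 term would use the pose mask of Eq. (1)):

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy L_bce on probabilities in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean())

def d_loss(d_real_pair, d_fake_pair):
    # Eq. (2): real pairs (I_A, I_B) labeled 1, fake pairs labeled 0.
    return bce(d_real_pair, 1.0) + bce(d_fake_pair, 0.0)

def g2_loss(d_fake_pair, refined, target, mask, lam):
    # Eqs. (3)-(4): fool D, plus the masked L1 term weighted by lambda.
    adv = bce(d_fake_pair, 1.0)
    return adv + lam * float(np.abs((refined - target) * (1.0 + mask)).sum())
```

The trade-off described in the text is visible here: a small `lam` lets the adversarial term dominate (sharper but artifact-prone), a large `lam` lets the masked L1 term dominate (blurrier but faithful at low frequencies).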
When λ is small, the adversarial loss dominates the training and it is more likely to generate artifacts; when λ is big, the generator with a basic L1 loss dominates the training, making the whole model generate blurry results². In the training process of our DCGAN, we alternately optimize the discriminator D and the generator G2. As shown in the left part of Figure 2, the generator G2 takes the first-stage result and the condition image as input and aims to refine the image to confuse the discriminator. The discriminator learns to classify the pair of condition image and generated image as fake, while classifying the pair including the target image as real.

3.3 Network architecture

We summarize the network architecture of the proposed model PG2. At stage-I, the encoder of G1 consists of N residual blocks and one fully-connected layer, where N depends on the size of the input. Each residual block consists of two convolution layers with stride 1, followed by one sub-sampling convolution layer with stride 2, except for the last block. At stage-II, the encoder of G2 has a fully convolutional architecture including N−2 convolution blocks. Each block consists of two convolution layers with stride 1 and one sub-sampling convolution layer with stride 2. Decoders in both G1 and G2 are symmetric to the corresponding encoders. Besides, there are shortcut connections between decoders and encoders, as can be seen in Figure 2. In G1 and G2, no batch normalization or dropout is applied. All convolution layers use 3×3 filters, and the number of filters increases linearly with each block. We apply rectified linear units (ReLU) to each layer except the fully connected layer and the output convolution layer. For the discriminator, we adopt the same network architecture as DCGAN [21], except for the size of the input convolution layer due to the different image resolutions.
4 Experiments

We evaluate the proposed PG2 network on two person datasets (Market-1501 [37] and DeepFashion [16]), which contain person images with diverse poses. We present quantitative and qualitative results for three main aspects of PG2: different pose embeddings; pose mask loss vs. standard L1 loss; and two-stage model vs. one-stage model. We also compare with the most related work [36].

²The influence of λ on generation quality is analyzed in the supplementary material.

4.1 Datasets

The DeepFashion (In-shop Clothes Retrieval Benchmark) dataset [16] consists of 52,712 in-shop clothes images and 200,000 cross-pose/scale pairs. All images are in high resolution of 256×256. In the train set, we have 146,680 pairs, each of which is composed of two images of the same person but different poses. We randomly select 12,800 pairs from the test set for testing. We also experiment on a more challenging re-identification dataset, Market-1501 [37], containing 32,668 images of 1,501 persons captured from six disjoint surveillance cameras. Persons in this dataset vary in pose, illumination, viewpoint and background, which makes the person generation task more challenging. All images have size 128×64 and are split into train/test sets of 12,936/19,732 images following [37]. In the train set, we have 439,420 pairs, each of which is composed of two images of the same person but different poses. We randomly select 12,800 pairs from the test set for testing.

Implementation details. On both datasets, we use the Adam [13] optimizer with β1 = 0.5 and β2 = 0.999. The initial learning rate is set to 2e-5. On DeepFashion, we set the number of convolution blocks N = 6. Models are trained with a minibatch of size 8 for 30k and 20k iterations at stage-I and stage-II, respectively. On Market-1501, we set the number of convolution blocks N = 5. Models are trained with a minibatch of size 16 for 22k and 14k iterations at stage-I and stage-II, respectively.
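For reference, the implementation details above can be collected into a single configuration sketch (the values are as reported in the text; the dictionary layout itself is our own):

```python
# Training hyperparameters reported for the two datasets.
train_config = {
    "optimizer": "Adam",
    "beta1": 0.5,
    "beta2": 0.999,
    "learning_rate": 2e-5,
    "DeepFashion": {"conv_blocks_N": 6, "batch_size": 8,
                    "iters_stage1": 30000, "iters_stage2": 20000},
    "Market-1501": {"conv_blocks_N": 5, "batch_size": 16,
                    "iters_stage1": 22000, "iters_stage2": 14000},
}
```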
For data augmentation, we do left-right flips for both datasets3.

4.2 Qualitative results

As mentioned above, we investigate three aspects of our proposed PG2 network. Different pose embeddings and losses are compared within stage-I, and then we demonstrate the advantage of our two-stage model over a one-stage model.

Different pose embeddings. To evaluate our proposed pose embedding method, we implement two alternative methods. For the first, coordinate embedding (CE), we pass the keypoint coordinates through two fully-connected layers and concatenate the embedded feature vector with the image embedding vector at the bottleneck fully-connected layer. For the second, heatmap embedding (HME), we feed the 18 keypoint heatmaps to an independent encoder and extract the fully-connected layer feature to concatenate with the image embedding vector at the bottleneck fully-connected layer. Columns 4, 5 and 6 of Figure 4 show qualitative results of the different pose embedding methods when used in stage-I, that is, of G1 with CE (G1-CE-L1), with HME (G1-HME-L1) and our G1 (G1-L1). All three use the standard L1 loss. We can see that G1-L1 is able to synthesize reasonable-looking images that capture the global structure of a person, such as pose and color. However, the other two embedding methods, G1-CE-L1 and G1-HME-L1, are quite blurry and the colors are wrong. Moreover, the results of G1-CE-L1 all have wrong poses. This can be explained by the additional difficulty of mapping the keypoint coordinates to appropriate image locations, which makes training more challenging. Our proposed pose embedding using 18 channels of pose heatmaps is able to guide the generation process effectively, leading to correctly generated poses. Interestingly, G1-L1 can even generate reasonable face details like eyes and mouth, as shown by the DeepFashion samples.

Pose mask loss vs. L1 loss.
Comparing the results of G1 trained with L1 loss (G1-L1) and G1 trained with poseMaskLoss (G1-poseMaskLoss) on the Market-1501 dataset, we find that the pose mask loss indeed improves performance (columns 6 and 7 in Figure 4). By focusing the image generation on the human body, the synthesized image gets sharper and the color looks nicer. We can see that for person ID 164, the person's upper body generated by G1-L1 is noisier in color than the one generated by G1-poseMaskLoss. For person IDs 23 and 346, the method with pose mask loss generates clearer boundaries for the shoulder and head. These comparisons validate that our pose mask loss effectively alleviates the influence of noisy backgrounds and guides the generator to focus on the pose transfer of the human body. The two losses generate similar results for the DeepFashion samples because the background is much simpler.

Two-stage vs. one-stage. In addition, we demonstrate the advantage of our two-stage model over a one-stage model. For this we use G1 as generator but train it in an adversarial way to directly generate a new image given a condition image and a target pose as input. This one-stage model is denoted as G1+D and our full model is denoted as G1+G2+D. From Figure 4, we can see that our full model is able to generate photo-realistic results, which contain more details than those of the one-stage model.

3More details about the parameters of the network architecture are given in the supplementary materials.

For example, for the DeepFashion samples, more details in the face and the clothes are transferred to the generated images. For person ID 245, the shorts in the result of G1+D have a lighter color and a blurrier boundary than in G1+G2+D. For person ID 346, the two-stage model is able to generate both the right color and textures for the clothes, while the one-stage model is only able to generate the right color.
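A minimal NumPy sketch of the pose mask loss compared above, assuming the masked-L1 form L = mean(|G(x) − y| · (1 + M)) with a binary body mask M (an illustration, not the authors' code):

```python
import numpy as np

def pose_mask_l1(generated, target, mask):
    """Masked L1 loss: reconstruction errors inside the body mask count
    double, so the generator focuses on the person rather than the
    background. The exact (1 + M) weighting is our assumption."""
    return np.abs((generated - target) * (1.0 + mask)).mean()

# toy example: uniform error of 1, body mask covering the center 2x2
gen = np.zeros((4, 4))
tgt = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
print(pose_mask_l1(gen, tgt, mask))   # masked pixels contribute twice
```

With an all-zero mask the expression reduces to the plain L1 loss, which is the G1-L1 baseline in the comparison above.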
On the Market-1501 samples, the quality of the images generated by both methods decreases because of the more challenging setting. However, the two-stage model is still able to generate better results than the one-stage method. We can see that for person ID 53, the stripes on the T-shirt are retained by our full model, while the one-stage model can only generate a blue blob as clothes. Besides, we can also clearly see the stool in the woman's hands (person ID 23).

Figure 4: Test results on the DeepFashion (upper 3 rows; images are cropped for the sake of display) and Market-1501 (lower 3 rows) datasets. We test G1 in two aspects: (1) three pose embedding methods, i.e., coordinate embedding (CE), heatmap embedding (HME) and our pose heatmap concatenation in G1-L1, and (2) two losses, i.e., the proposed poseMaskLoss and the standard L1 loss. Columns 7, 8 and 9 show the differences among our stage-I model (G1), the one-stage adversarial model (G1+D) and our two-stage adversarial model (G1+G2+D). Note that all three use poseMaskLoss. The IDs are assigned randomly when splitting the datasets.

4.3 Quantitative results

We also give quantitative results on both datasets. Structural Similarity (SSIM) [31] and the Inception Score (IS) [26] are adopted to measure the quality of the synthesis. Note that in the Market-1501 dataset, condition images and target images may have different backgrounds. Since there is no information in the input about the background of the target image, our method is not able to imagine what the new background looks like. To reduce the influence of background in our evaluation, we propose a variant of SSIM, called mask-SSIM. A pose mask is applied to both the synthesis and the target image before computing SSIM. In this way we only measure the synthesis quality of a person's appearance. Similarly, we employ mask-IS to eliminate the effect of background.

Table 1: Quantitative evaluation. For all measures, higher is better.

                   DeepFashion       Market-1501
Model              SSIM    IS        SSIM    IS      mask-SSIM  mask-IS
G1-CE-L1           0.694   2.395     0.219   2.568   0.771      2.455
G1-HME-L1          0.735   2.427     0.294   3.171   0.802      2.508
G1-L1              0.735   2.427     0.304   3.006   0.809      2.455
G1-poseMaskLoss    0.779   2.668     0.340   3.326   0.817      2.682
G1+D               0.761   3.091     0.283   3.490   0.803      3.310
G1+G2+D            0.762   3.090     0.253   3.460   0.792      3.435

Table 2: User study results from AMT.

           DeepFashion       Market-1501
Model      R2G4     G2R5     R2G      G2R
G1+D       7.8%     9.3%     17.1%    11.1%
G1+G2+D    9.2%     14.9%    11.2%    5.5%

However, it should be noted that image quality does not always correspond to such image similarity metrics. For example, in Figure 4, our full model generates sharper and more photo-realistic results than G1-poseMaskLoss, but the latter has a higher SSIM. This is also observed in super-resolution papers [12, 27]. The advantages are also clearly shown in the numerical scores in Table 1. E.g., the proposed pose embedding (G1-L1) consistently outperforms G1-CE-L1 across all measures and both datasets. G1-HME-L1 obtains similar quantitative numbers, probably due to the similarity of the two embeddings. Changing the loss from L1 to the proposed poseMaskLoss (G1-poseMaskLoss) consistently improves results further across all measures and for both datasets. Adding the discriminator during training, either after the first stage (G1+D) or in our full model (G1+G2+D), leads to comparable numbers, even though we have observed clear differences in the qualitative results as discussed above. This is explained by the fact that blurry images often get good SSIM despite being less convincing and photo-realistic [12, 27].
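The mask-SSIM computation described above can be sketched as follows; for brevity this uses a single global SSIM window rather than the standard sliding-window SSIM of [31], so the numbers are illustrative only:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over the whole image (the paper uses the
    standard local-window SSIM; this global variant is a simplification
    for illustration)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mask_ssim(x, y, mask):
    # mask-SSIM: apply the pose mask to both synthesis and target before
    # computing SSIM, so only the person's appearance is measured
    return global_ssim(x * mask, y * mask)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0          # body region
print(mask_ssim(img, img, mask))
```

Because the mask zeroes out the background, changing background pixels leaves the score untouched, which is exactly the property the metric is designed for.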
4.4 User study

We perform a user study on Amazon Mechanical Turk (AMT) for both datasets. For each one, we show 55 real images and 55 generated images in random order to 30 users. Following [10, 15], each image is shown for 1 second. The first 10 images are used for practice and are thus ignored when computing scores. From the results reported in Table 2, we make the following observations: (1) On DeepFashion, our generated images from G1+D and G1+G2+D manage to confuse users on 9.3% and 14.9% of trials, respectively (see G2R), showing the advantage of G1+G2+D over G1+D; (2) On Market-1501, the average G2R score is lower, because the background is much more cluttered than in DeepFashion; (3) On Market-1501, G1+G2+D gets a lower score than G1+D, because G1+G2+D transfers more background from the condition image, as can be seen in Figure 4, but in the meantime it introduces extra artifacts on the background which lead users to rate images as 'Fake'; (4) With respect to R2G, we notice that Market-1501 gets clearly higher scores (>10%) because human users sometimes get confused when facing low-quality surveillance images.

4R2G means #Real images rated as generated / #Real images.
5G2R means #Generated images rated as real / #Generated images.

Figure 5: Comparison examples with [36].

Figure 6: Our failure cases on DeepFashion.

4.5 Further analysis

Since our task with pose condition is novel, there is no directly comparable work. We only compare with the most related one6 [36], which did multi-view person image synthesis on the DeepFashion dataset. It is noted that [36] used the condition image and an additional word vector of the target view, e.g. "side", as network input. Comparison examples are shown in Figure 5. It is clear that our refined results are much better than those of [36].
Taking the second row as an example, we can generate high-quality whole-body images conditioned on an upper body, while the whole-body synthesis by [36] only has a rough body shape. Additionally, we show two failure cases of our model on DeepFashion in Figure 6. In the top row, only the upper body is generated consistently. The "pieces of legs" are caused by the scarcity of training data for such complicated poses. The bottom row shows an inaccurate gender, which is caused by the male/female imbalance in the training data. Besides, the condition person wears a long-sleeve jacket of similar color to his inner short-sleeve shirt, making the generated clothing look like a mixture of both.

5 Conclusions

In this work, we propose the Pose Guided Person Generation Network (PG2) to address the novel task of synthesizing person images conditioned on a reference image and a target pose. A divide-and-conquer strategy is employed to split the generation process into two stages. Stage-I aims to capture the global structure of a person and generate an initial result. A pose mask loss is further proposed to alleviate the influence of the background on person image synthesis. Stage-II fills in more appearance details via adversarial training to generate sharper images. Extensive experimental results on two person datasets demonstrate that our method is able to generate images that are both photo-realistic and pose-wise correct. In future work, we plan to generate more controllable and diverse person images conditioned on both pose and attributes.

Acknowledgments

We gratefully acknowledge the support of Toyota Motors Europe, the FWO Structure from Semantics project, KU Leuven GOA project CAMETRON, and the German Research Foundation (DFG CRC 1223). We would like to thank Bo Zhao for his helpful discussions.

References

[1] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv, 1701.07875, 2017.
[2] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh.
Realtime multi-person 2D pose estimation using part affinity fields. arXiv, 1611.08050, 2016.
[3] Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, and Jitendra Malik. Human pose estimation with iterative error feedback. In CVPR, pages 4733–4742, 2016.

6Results of VariGAN are provided by the authors.

[4] Chao-Yeh Chen and Kristen Grauman. Inferring unseen views of people. In CVPR, pages 2003–2010, 2014.
[5] Xi Chen, Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, pages 2172–2180, 2016.
[6] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[8] Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving frontal view synthesis. arXiv, 1704.04086, 2017.
[9] Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. Stacked generative adversarial networks. arXiv, 1612.04357, 2016.
[10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[11] Xu Jia, Amir Ghodrati, Marco Pedersoli, and Tinne Tuytelaars. Towards automatic image editing: Learning to see another you. In BMVC, 2016.
[12] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[13] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 1412.6980, 2014.
[14] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes.
arXiv, 1312.6114, 2013.
[15] Christoph Lassner, Gerard Pons-Moll, and Peter V. Gehler. A generative model of people in clothing. arXiv, 1705.04098, 2017.
[16] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. DeepFashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, pages 1096–1104, 2016.
[17] Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
[18] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv, 1411.1784, 2014.
[19] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, pages 483–499, 2016.
[20] Tran Minh Quan, David G. C. Hildebrand, and Won-Ki Jeong. FusionNet: A deep fully residual convolutional neural network for image segmentation in connectomics. arXiv, 1612.05360, 2016.
[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 1511.06434, 2015.
[22] Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In NIPS, 2016.
[23] Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Victor Bapst, Matt Botvinick, and Nando de Freitas. Generating interpretable images with controllable structure. Technical report, 2016.
[24] Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[26] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, pages 2226–2234, 2016.
[27] Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016.
[28] Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. arXiv, 1605.02226, 2016.
[29] Aäron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with PixelCNN decoders. In NIPS, pages 4790–4798, 2016.
[30] Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, pages 1747–1756, 2016.
[31] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600–612, 2004.
[32] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. In ECCV, pages 776–791, 2016.
[33] Jimei Yang, Scott Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
[34] Junho Yim, Heechul Jung, ByungIn Yoo, Changkyu Choi, Du-Sik Park, and Junmo Kim. Rotating your face using multi-task deep neural network. In CVPR, 2015.
[35] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv, 1612.03242, 2016.
[36] Bo Zhao, Xiao Wu, Zhi-Qi Cheng, Hao Liu, and Jiashi Feng. Multi-view image generation from a single-view. arXiv, 1704.04886, 2017.
[37] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In ICCV, pages 1116–1124, 2015.
Successor Features for Transfer in Reinforcement Learning

André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt, Tom Schaul, Hado van Hasselt, David Silver
{andrebarreto,wdabney,munos,jjhunt,schaul,hado,davidsilver}@google.com
DeepMind

Abstract

Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: successor features, a value function representation that decouples the dynamics of the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach on firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.

1 Introduction

Reinforcement learning (RL) provides a framework for the development of situated agents that learn how to behave while interacting with the environment [21]. The basic RL loop is defined in an abstract way so as to capture only the essential aspects of this interaction: an agent receives observations and selects actions to maximize a reward signal. This setup is generic enough to describe tasks of different levels of complexity that may unroll at distinct time scales.
For example, in the task of driving a car, an action can be to turn the wheel, make a right turn, or drive to a given location. Clearly, from the point of view of the designer, it is desirable to describe a task at the highest level of abstraction possible. However, by doing so one may overlook behavioral patterns and inadvertently make the task more difficult than it really is. The task of driving to a location clearly encompasses the subtask of making a right turn, which in turn encompasses the action of turning the wheel. In learning how to drive, an agent should be able to identify and exploit such interdependencies. More generally, the agent should be able to break a task into smaller subtasks and use knowledge accumulated in any subset of those to speed up learning in related tasks. This process of leveraging knowledge acquired in one task to improve performance on other tasks is called transfer [25, 11].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper we look at one specific type of transfer, namely, when subtasks correspond to different reward functions defined in the same environment. This setup is flexible enough to allow transfer to happen at different levels. In particular, by appropriately defining the rewards one can induce different task decompositions. For instance, the type of hierarchical decomposition involved in the driving example above can be induced by changing the frequency at which rewards are delivered: a positive reinforcement can be given after each maneuver that is well executed or only at the final destination. Obviously, one can also decompose a task into subtasks that are independent of each other or whose dependency is strictly temporal (that is, when tasks must be executed in a certain order but no single task is clearly "contained" within another).
The types of task decomposition discussed above potentially allow the agent to tackle more complex problems than would be possible were the tasks modeled as a single monolithic challenge. However, in order to apply this divide-and-conquer strategy to its full extent the agent should have an explicit mechanism to promote transfer between tasks. Ideally, we want a transfer approach to have two important properties. First, the flow of information between tasks should not be dictated by a rigid diagram that reflects the relationship between the tasks themselves, such as hierarchical or temporal dependencies. On the contrary, information should be exchanged across tasks whenever useful. Second, rather than being posed as a separate problem, transfer should be integrated into the RL framework as much as possible, preferably in a way that is almost transparent to the agent. In this paper we propose an approach for transfer that has the two properties above. Our method builds on two conceptual pillars that complement each other. The first is a generalization of Dayan’s [7] successor representation. As the name suggests, in this representation scheme each state is described by a prediction about the future occurrence of all states under a fixed policy. We present a generalization of Dayan’s idea which extends the original scheme to continuous spaces and also facilitates the use of approximation. We call the resulting scheme successor features. As will be shown, successor features lead to a representation of the value function that naturally decouples the dynamics of the environment from the rewards, which makes them particularly suitable for transfer. The second pillar of our framework is a generalization of Bellman’s [3] classic policy improvement theorem that extends the original result from one to multiple decision policies. This novel result shows how knowledge about a set of tasks can be transferred to a new task in a way that is completely integrated within RL. 
It also provides performance guarantees on the new task before any learning has taken place, which opens up the possibility of constructing a library of "skills" that can be reused to solve previously unseen tasks. In addition, we present a theorem that formalizes the notion that an agent should be able to perform well on a task if it has seen a similar task before—something clearly desirable in the context of transfer. Combined, the two results above not only set our approach on firm ground but also outline the mechanics of how to actually implement transfer. We build on this knowledge to propose a concrete method and evaluate it in two environments, one encompassing a sequence of navigation tasks and the other involving the control of a simulated two-joint robotic arm.

2 Background and problem formulation

As usual, we assume that the interaction between agent and environment can be modeled as a Markov decision process (MDP; Puterman [19]). An MDP is defined as a tuple $M \equiv (\mathcal{S}, \mathcal{A}, p, R, \gamma)$. The sets $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively; here we assume that $\mathcal{S}$ and $\mathcal{A}$ are finite whenever such an assumption facilitates the presentation, but most of the ideas readily extend to continuous spaces. For each $s \in \mathcal{S}$ and $a \in \mathcal{A}$ the function $p(\cdot|s, a)$ gives the next-state distribution upon taking action $a$ in state $s$. We will often refer to $p(\cdot|s, a)$ as the dynamics of the MDP. The reward received at transition $s \xrightarrow{a} s'$ is given by the random variable $R(s, a, s')$; usually one is interested in the expected value of this variable, which we will denote by $r(s, a, s')$ or by $r(s, a) = \mathbb{E}_{S' \sim p(\cdot|s,a)}[r(s, a, S')]$. The discount factor $\gamma \in [0, 1)$ gives smaller weights to future rewards. The objective of the agent in RL is to find a policy $\pi$—a mapping from states to actions—that maximizes the expected discounted sum of rewards, also called the return $G_t = \sum_{i=0}^{\infty} \gamma^i R_{t+i+1}$, where $R_t = R(S_t, A_t, S_{t+1})$.
One way to address this problem is to use methods derived from dynamic programming (DP), which heavily rely on the concept of a value function [19]. The action-value function of a policy $\pi$ is defined as

$Q^\pi(s, a) \equiv \mathbb{E}^\pi\left[G_t \mid S_t = s, A_t = a\right]$,   (1)

where $\mathbb{E}^\pi[\cdot]$ denotes the expected value when following policy $\pi$. Once the action-value function of a particular policy $\pi$ is known, we can derive a new policy $\pi'$ which is greedy with respect to $Q^\pi(s, a)$, that is, $\pi'(s) \in \operatorname{argmax}_a Q^\pi(s, a)$. Policy $\pi'$ is guaranteed to be at least as good as (if not better than) policy $\pi$. The computation of $Q^\pi(s, a)$ and $\pi'$, called policy evaluation and policy improvement, defines the basic mechanics of RL algorithms based on DP; under certain conditions their successive application leads to an optimal policy $\pi^*$ that maximizes the expected return from every $s \in \mathcal{S}$ [21].

In this paper we are interested in the problem of transfer, which we define as follows. Let $\mathcal{T}$, $\mathcal{T}'$ be two sets of tasks such that $\mathcal{T}' \subset \mathcal{T}$, and let $t$ be any task. Then there is transfer if, after training on $\mathcal{T}$, the agent always performs as well as or better on task $t$ than if it had been trained only on $\mathcal{T}'$. Note that $\mathcal{T}'$ can be the empty set. In this paper a task will be defined as a specific instantiation of the reward function $R(s, a, s')$ for a given MDP. In Section 4 we will revisit this definition and make it more formal.

3 Successor features

In this section we present the concept that will serve as a cornerstone for the rest of the paper. We start by presenting a simple reward model and then show how it naturally leads to a generalization of Dayan's [7] successor representation (SR). Suppose that the expected one-step reward associated with transition $(s, a, s')$ can be computed as

$r(s, a, s') = \boldsymbol{\phi}(s, a, s')^\top \mathbf{w}$,   (2)

where $\boldsymbol{\phi}(s, a, s') \in \mathbb{R}^d$ are features of $(s, a, s')$ and $\mathbf{w} \in \mathbb{R}^d$ are weights.
Expression (2) is not restrictive because we are not making any assumptions about $\boldsymbol{\phi}(s, a, s')$: if we have $\phi_i(s, a, s') = r(s, a, s')$ for some $i$, for example, we can clearly recover any reward function exactly. To simplify the notation, let $\boldsymbol{\phi}_t = \boldsymbol{\phi}(s_t, a_t, s_{t+1})$. Then, by simply rewriting the definition of the action-value function in (1) we have

$Q^\pi(s, a) = \mathbb{E}^\pi\left[r_{t+1} + \gamma r_{t+2} + \dots \mid S_t = s, A_t = a\right]$
$\quad = \mathbb{E}^\pi\left[\boldsymbol{\phi}_{t+1}^\top \mathbf{w} + \gamma \boldsymbol{\phi}_{t+2}^\top \mathbf{w} + \dots \mid S_t = s, A_t = a\right]$
$\quad = \mathbb{E}^\pi\left[\textstyle\sum_{i=t}^{\infty} \gamma^{i-t} \boldsymbol{\phi}_{i+1} \mid S_t = s, A_t = a\right]^\top \mathbf{w} = \boldsymbol{\psi}^\pi(s, a)^\top \mathbf{w}$.   (3)

The decomposition (3) has appeared before in the literature under different names and interpretations, as discussed in Section 6. Since here we propose to look at (3) as an extension of Dayan's [7] SR, we call $\boldsymbol{\psi}^\pi(s, a)$ the successor features (SFs) of $(s, a)$ under policy $\pi$. The $i$th component of $\boldsymbol{\psi}^\pi(s, a)$ gives the expected discounted sum of $\phi_i$ when following policy $\pi$ starting from $(s, a)$. In the particular case where $\mathcal{S}$ and $\mathcal{A}$ are finite and $\boldsymbol{\phi}$ is a tabular representation of $\mathcal{S} \times \mathcal{A} \times \mathcal{S}$—that is, $\boldsymbol{\phi}(s, a, s')$ is a one-hot vector in $\mathbb{R}^{|\mathcal{S}|^2|\mathcal{A}|}$—$\boldsymbol{\psi}^\pi(s, a)$ is the discounted sum of occurrences, under $\pi$, of each possible transition. This is essentially the concept of SR extended from the space $\mathcal{S}$ to the set $\mathcal{S} \times \mathcal{A} \times \mathcal{S}$ [7].

One of the contributions of this paper is precisely to generalize SR to be used with function approximation, but the exercise of deriving the concept as above provides insights already in the tabular case. To see this, note that in the tabular case the entries of $\mathbf{w} \in \mathbb{R}^{|\mathcal{S}|^2|\mathcal{A}|}$ are the values of the function $r(s, a, s')$, and suppose that $r(s, a, s') \neq 0$ in only a small subset $\mathcal{W} \subset \mathcal{S} \times \mathcal{A} \times \mathcal{S}$. From (2) and (3), it is clear that the cardinality of $\mathcal{W}$, and not of $\mathcal{S} \times \mathcal{A} \times \mathcal{S}$, is what effectively defines the dimension of the representation $\boldsymbol{\psi}^\pi$, since there is no point in having $d > |\mathcal{W}|$. Although this fact is hinted at by Dayan [7], it becomes more apparent when we look at SR as a particular case of SFs. SFs extend SR in two other ways.
First, the concept readily applies to continuous state and action spaces without any modification. Second, by explicitly casting (2) and (3) as inner products involving feature vectors, SFs make it evident how to incorporate function approximation: as will be shown, these vectors can be learned from data.

The representation in (3) requires two components to be learned, $\mathbf{w}$ and $\boldsymbol{\psi}^\pi$. Since the latter is the expected discounted sum of $\boldsymbol{\phi}$ under $\pi$, we must either be given $\boldsymbol{\phi}$ or learn it as well. Note that approximating $r(s, a, s') \approx \boldsymbol{\phi}(s, a, s')^\top \tilde{\mathbf{w}}$ is a supervised learning problem, so we can use well-understood techniques from the field to learn $\tilde{\mathbf{w}}$ (and potentially $\tilde{\boldsymbol{\phi}}$, too) [9]. As for $\boldsymbol{\psi}^\pi$, we note that

$\boldsymbol{\psi}^\pi(s, a) = \boldsymbol{\phi}_{t+1} + \gamma \mathbb{E}^\pi\left[\boldsymbol{\psi}^\pi(S_{t+1}, \pi(S_{t+1})) \mid S_t = s, A_t = a\right]$,   (4)

that is, SFs satisfy a Bellman equation in which the $\phi_i$ play the role of rewards—something also noted by Dayan [7] regarding SR. Therefore, in principle any RL method can be used to compute $\boldsymbol{\psi}^\pi$ [24].

The SFs $\boldsymbol{\psi}^\pi$ summarize the dynamics induced by $\pi$ in a given environment. As shown in (3), this allows for a modular representation of $Q^\pi$ in which the MDP's dynamics are decoupled from its rewards, which are captured by the weights $\mathbf{w}$. One potential benefit of having such a decoupled representation is that only the relevant module must be relearned when either the dynamics or the reward changes, which may serve as an argument in favor of adopting SFs as a general approximation scheme for RL. However, in this paper we focus on a scenario where the decoupled value-function approximation provided by SFs is exploited to its full extent, as we discuss next.

4 Transfer via successor features

We now return to the discussion about transfer in RL. As described, we are interested in the scenario where all components of an MDP are fixed, except for the reward function. One way to formalize this model is through (2): if we suppose that $\boldsymbol{\phi} \in \mathbb{R}^d$ is fixed, any $\mathbf{w} \in \mathbb{R}^d$ gives rise to a new MDP.
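As a tabular sanity check of (3) and (4), the sketch below builds a small random MDP (all quantities made up), computes ψπ by iterating the Bellman equation (4) to its fixed point, and verifies that ψπ(s, a)⊤w coincides with ordinary policy evaluation on the rewards r = φ⊤w:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, d, gamma = 4, 2, 3, 0.9
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)           # transition kernel p(s'|s,a)
phi = rng.random((nS, nA, nS, d))           # features phi(s, a, s')
w = rng.random(d)                           # task weights
pi = rng.integers(0, nA, size=nS)           # a fixed deterministic policy

r = phi @ w                                 # induced rewards r = phi^T w
psi = np.zeros((nS, nA, d))                 # successor features psi^pi
Q = np.zeros((nS, nA))                      # ordinary action-values Q^pi

for _ in range(500):
    next_psi = psi[np.arange(nS), pi]       # psi^pi(s', pi(s')), shape (nS, d)
    next_q = Q[np.arange(nS), pi]           # Q^pi(s', pi(s')),  shape (nS,)
    # SF Bellman backup (4): psi(s,a) = E[phi_{t+1} + gamma psi(S', pi(S'))]
    psi = np.einsum('san,sand->sad', P, phi) + \
        gamma * np.einsum('san,nd->sad', P, next_psi)
    # standard policy evaluation on the induced rewards
    Q = np.einsum('san,san->sa', P, r) + \
        gamma * np.einsum('san,n->sa', P, next_q)

print(np.abs(psi @ w - Q).max())            # agreement between (3) and Q^pi
```

By induction on the iteration count, the two backups stay in exact agreement: applying the linear map ψ ↦ ψ⊤w to the SF backup yields precisely the reward backup, which is the content of (3).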
Based on this observation, we define

$\mathcal{M}^{\boldsymbol{\phi}}(\mathcal{S}, \mathcal{A}, p, \gamma) \equiv \{M(\mathcal{S}, \mathcal{A}, p, r, \gamma) \mid r(s, a, s') = \boldsymbol{\phi}(s, a, s')^\top \mathbf{w}\}$,   (5)

that is, $\mathcal{M}^{\boldsymbol{\phi}}$ is the set of MDPs induced by $\boldsymbol{\phi}$ through all possible instantiations of $\mathbf{w}$. Since what differentiates the MDPs in $\mathcal{M}^{\boldsymbol{\phi}}$ is essentially the agent's goal, we will refer to $M_i \in \mathcal{M}^{\boldsymbol{\phi}}$ as a task. The assumption is that we are interested in solving (a subset of) the tasks in the environment $\mathcal{M}^{\boldsymbol{\phi}}$.

Definition (5) is a natural way of modeling some scenarios of interest. Think, for example, of how the desirability of water or food changes depending on whether an animal is thirsty or hungry. One way to model this type of preference shifting is to suppose that the vector $\mathbf{w}$ appearing in (2) reflects the taste of the agent at any given point in time [17]. Later in the paper we will present experiments that reflect this scenario. For another illustrative example, imagine that the agent's goal is to produce and sell a combination of goods whose production line is relatively stable but whose prices vary considerably over time. In this case updating the price of the products corresponds to picking a new $\mathbf{w}$. A slightly different way of motivating (5) is to suppose that the environment itself is changing, that is, the element $w_i$ indicates not only desirability, but also availability, of feature $\phi_i$.

In the examples above it is desirable for the agent to build on previous experience to improve its performance on a new setup. More concretely, if the agent knows good policies for the set of tasks $\mathcal{M} \equiv \{M_1, M_2, ..., M_n\}$, with $M_i \in \mathcal{M}^{\boldsymbol{\phi}}$, it should be able to leverage this knowledge to improve its behavior on a new task $M_{n+1}$—that is, it should perform better than it would have had it been exposed to only a subset of the original tasks, $\mathcal{M}' \subset \mathcal{M}$.
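To make the reuse concrete: once ψπ is known, evaluating π on any task in Mφ is a single inner product via (3), so switching preferences (say, from "food" to "water") re-prices the same dynamics instantly. The numbers below are illustrative only:

```python
import numpy as np

# psi^pi(s, a) for 2 states, 2 actions, d = 2 features (made-up values)
psi = np.array([[[1.0, 0.0], [0.5, 0.5]],
                [[0.2, 0.8], [0.0, 1.0]]])
w_food = np.array([1.0, 0.0])    # task where feature 0 is rewarding
w_water = np.array([0.0, 1.0])   # task where feature 1 is rewarding

# Q^pi under each task, via (3): no relearning, just an inner product
q_food = psi @ w_food
q_water = psi @ w_water
print(q_food)
print(q_water)
```

The same stored ψπ serves every task in Mφ; only the d-dimensional w changes.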
We can assess the performance of an agent on task Mn+1 based on the value function of the policy followed after wn+1 has become available but before any policy improvement has taken place in Mn+1.¹ More precisely, suppose that an agent has been exposed to each one of the tasks Mi ∈ M′. Based on this experience, and on the new wn+1, the agent computes a policy π′ that will define its initial behavior in Mn+1. Now, if we repeat the experience replacing M′ with M, the resulting policy π should be such that Qπ(s, a) ≥ Qπ′(s, a) for all (s, a) ∈ S × A.

Now that our setup is clear we can start to describe our solution for the transfer problem discussed above. We do so in two stages. First, we present a generalization of DP’s notion of policy improvement whose interest may go beyond the current work. We then show how SFs can be used to implement this generalized form of policy improvement in an efficient and elegant way.

4.1 Generalized policy improvement

One of the fundamental results in RL is Bellman’s [3] policy improvement theorem. In essence, the theorem states that acting greedily with respect to a policy’s value function gives rise to another policy whose performance is no worse than the former’s. This is the driving force behind DP, and most RL algorithms that compute a value function are exploiting Bellman’s result in one way or another. In this section we extend the policy improvement theorem to the scenario where the new policy is to be computed based on the value functions of a set of policies. We show that this extension can be done in a natural way, by acting greedily with respect to the maximum over the value functions available. Our result is summarized in the theorem below.

¹Of course wn+1 can, and will, be learned, as discussed in Section 4.2 and illustrated in Section 5. Here we assume that wn+1 is given to make our performance criterion clear.

Theorem 1.
(Generalized Policy Improvement) Let π1, π2, ..., πn be n decision policies and let Q̃π1, Q̃π2, ..., Q̃πn be approximations of their respective action-value functions such that

|Qπi(s, a) − Q̃πi(s, a)| ≤ ϵ for all s ∈ S, a ∈ A, and i ∈ {1, 2, ..., n}.   (6)

Define

π(s) ∈ argmaxa maxi Q̃πi(s, a).   (7)

Then,

Qπ(s, a) ≥ maxi Qπi(s, a) − 2ϵ/(1 − γ)   (8)

for any s ∈ S and a ∈ A, where Qπ is the action-value function of π.

The proofs of our theoretical results are in the supplementary material. As one can see, our theorem covers the case where the policies’ value functions are not computed exactly, either because function approximation is used or because some exact algorithm has not been run to completion. This error is captured by ϵ in (6), which reappears as a penalty term in the lower bound (8). Such a penalty is inherent to the presence of approximation in RL, and in fact it is identical to the penalty incurred in the single-policy case (see e.g. Bertsekas and Tsitsiklis’s Proposition 6.1 [5]).

In order to contextualize generalized policy improvement (GPI) within the broader scenario of DP, suppose for a moment that ϵ = 0. In this case Theorem 1 states that π will perform no worse than all of the policies π1, π2, ..., πn. This is interesting because in general maxi Qπi—the function used to induce π—is not the value function of any particular policy. It is not difficult to see that π will be strictly better than all previous policies if no single policy dominates all the others, that is, if argmaxi maxa Q̃πi(s, a) ∩ argmaxi maxa Q̃πi(s′, a) = ∅ for some s, s′ ∈ S. If one policy does dominate all others, GPI reduces to the original policy improvement theorem. If we consider the usual DP loop, in which policies of increasing performance are computed in sequence, our result is not of much use because the most recent policy will always dominate all others.
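The GPI rule (7) is a one-liner in practice. The following toy sketch (the arrays and values are invented for illustration) shows how acting greedily over the maximum of several action-value functions picks the best action suggested by any of them:

```python
import numpy as np

def gpi_policy(Q_list, s):
    """Generalized policy improvement, equation (7):
    act greedily w.r.t. the maximum over the available value functions."""
    Q = np.max(np.stack([Q[s] for Q in Q_list]), axis=0)  # max_i Q^{pi_i}(s, .)
    return int(np.argmax(Q))                              # argmax_a

# Two policies, each good in a different state.
Q1 = np.array([[5.0, 0.0], [0.0, 1.0]])  # Q^{pi_1}(s, a)
Q2 = np.array([[1.0, 0.0], [0.0, 4.0]])  # Q^{pi_2}(s, a)

# The GPI policy follows whichever policy promises more in each state,
# even though max_i Q^{pi_i} is not the value function of any single policy.
assert gpi_policy([Q1, Q2], s=0) == 0   # pi_1 dominates in state 0
assert gpi_policy([Q1, Q2], s=1) == 1   # pi_2 dominates in state 1
```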
Another way of putting it is to say that, after Theorem 1 is applied once, adding the resulting π to the set {π1, π2, ..., πn} reduces the next improvement step to standard policy improvement, and thus the policies π1, π2, ..., πn can simply be discarded. There are, however, two situations in which our result may be of interest. One is when we have many policies πi being evaluated in parallel. In this case GPI provides a principled strategy for combining these policies. The other situation in which our result may be useful is when the underlying MDP changes, as we discuss next.

4.2 Generalized policy improvement with successor features

We start this section by extending our notation slightly to make it easier to refer to the quantities involved in transfer learning. Let Mi be a task in Mφ defined by wi ∈ Rd. We will use π∗i to refer to an optimal policy of MDP Mi and use Q_i^{π∗i} to refer to its value function. The value function of π∗i when executed in Mj ∈ Mφ will be denoted by Q_j^{π∗i}.

Suppose now that an agent has computed optimal policies for the tasks M1, M2, ..., Mn ∈ Mφ. Suppose further that when presented with a new task Mn+1 the agent computes {Q_{n+1}^{π∗1}, Q_{n+1}^{π∗2}, ..., Q_{n+1}^{π∗n}}, the evaluation of each π∗i under the new reward function induced by wn+1. In this case, applying the GPI theorem to the newly-computed set of value functions will give rise to a policy that performs at least as well as a policy based on any subset of these value functions, including the empty set. Thus, this strategy satisfies our definition of successful transfer.

There is a caveat, though. Why would one waste time computing the value functions of π∗1, π∗2, ..., π∗n, whose performance in Mn+1 may be mediocre, if the same amount of resources can be allocated to compute a sequence of n policies with increasing performance? This is where SFs come into play. Suppose that we have learned the functions Q_i^{π∗i} using the representation scheme shown in (3).
Now, if the reward changes to rn+1(s, a, s′) = φ(s, a, s′)⊤wn+1, as long as we have wn+1 we can compute the new value function of π∗i by simply setting Q_{n+1}^{π∗i}(s, a) = ψ^{π∗i}(s, a)⊤wn+1. This reduces the computation of all the Q_{n+1}^{π∗i} to the much simpler supervised problem of approximating wn+1.

Once the functions Q_{n+1}^{π∗i} have been computed, we can apply GPI to derive a policy π whose performance on Mn+1 is no worse than the performance of π∗1, π∗2, ..., π∗n on the same task. A question that arises in this case is whether we can provide stronger guarantees on the performance of π by exploiting the structure shared by the tasks in Mφ. The following theorem answers this question in the affirmative.

Theorem 2. Let Mi ∈ Mφ and let Q_i^{π∗j} be the action-value function of an optimal policy of Mj ∈ Mφ when executed in Mi. Given approximations {Q̃_i^{π∗1}, Q̃_i^{π∗2}, ..., Q̃_i^{π∗n}} such that

|Q_i^{π∗j}(s, a) − Q̃_i^{π∗j}(s, a)| ≤ ϵ   (9)

for all s ∈ S, a ∈ A, and j ∈ {1, 2, ..., n}, let π(s) ∈ argmaxa maxj Q̃_i^{π∗j}(s, a). Finally, let φmax = maxs,a ||φ(s, a)||, where || · || is the norm induced by the inner product adopted. Then,

Q_i^{π∗i}(s, a) − Q_i^π(s, a) ≤ (2/(1 − γ)) (φmax minj ||wi − wj|| + ϵ).   (10)

Note that we used Mi rather than Mn+1 in the theorem’s statement to remove any suggestion of order among the tasks. Theorem 2 is a specialization of Theorem 1 for the case where the set of value functions used to compute π are associated with tasks in the form of (5). As such, it provides stronger guarantees: instead of comparing the performance of π with that of the previously-computed policies πj, Theorem 2 quantifies the loss incurred by following π as opposed to one of Mi’s optimal policies. As shown in (10), the loss Q_i^{π∗i}(s, a) − Q_i^π(s, a) is upper-bounded by two terms. The term 2φmax minj ||wi − wj||/(1 − γ) is of more interest here because it reflects the structure of Mφ.
This term is a multiple of the distance between wi, the vector describing the task we are currently interested in, and the closest wj for which we have computed a policy. This formalizes the intuition that the agent should perform well in task wi if it has solved a similar task before. More generally, the term in question relates the concept of distance in Rd with difference in performance in Mφ. Note that this correspondence depends on the specific set of features φ used, which raises the interesting question of how to define φ such that tasks that are close in Rd induce policies that are also similar in some sense. Regardless of how exactly φ is defined, the bound (10) allows for powerful extrapolations. For example, by covering the relevant subspace of Rd with balls of appropriate radii centered at the wj, we can provide performance guarantees for any task w [14]. This corresponds to building a library of options (or “skills”) that can be used to solve any task in a (possibly infinite) set [22]. In Section 5 we illustrate this concept with experiments.

Although Theorem 2 is inexorably related to the characterization of Mφ in (5), it does not depend on the definition of SFs in any way. Here SFs are the mechanism used to efficiently apply the protocol suggested by Theorem 2. When SFs are used, the value-function approximations are given by Q̃_i^{π∗j}(s, a) = ψ̃^{π∗j}(s, a)⊤w̃i. The modules ψ̃^{π∗j} are computed and stored while the agent is learning the tasks Mj; when faced with a new task Mi, the agent computes an approximation of wi, which is a supervised learning problem, and then uses the GPI policy π defined in Theorem 2 to learn ψ̃^{π∗i}. Note that we do not assume that either ψ^{π∗j} or wi is computed exactly: the effects of errors in ψ̃^{π∗j} and w̃i are accounted for by the term ϵ appearing in (9).
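The right-hand side of the bound (10) is cheap to evaluate, which is what makes the covering argument above practical. A toy numeric check (all values invented) confirms that adding a closer task to the library tightens the guarantee:

```python
import numpy as np

def gpi_transfer_bound(w_i, w_js, phi_max, eps, gamma):
    """Right-hand side of (10): the GPI policy's suboptimality on task w_i
    shrinks with the distance to the closest previously solved task."""
    dist = min(np.linalg.norm(w_i - w_j) for w_j in w_js)
    return 2.0 / (1.0 - gamma) * (phi_max * dist + eps)

w_new = np.array([0.9, -0.1])
library = [np.array([1.0, 0.0]), np.array([-1.0, 1.0])]

loose = gpi_transfer_bound(w_new, [library[1]], phi_max=1.0, eps=0.0, gamma=0.9)
tight = gpi_transfer_bound(w_new, library, phi_max=1.0, eps=0.0, gamma=0.9)
assert tight < loose   # seeing a closer task tightens the guarantee
```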
As shown in (10), if ϵ is small and the agent has seen enough tasks, the performance of π on Mi should already be good, which suggests it may also speed up the process of learning ψ̃^{π∗i}. Interestingly, Theorem 2 also provides guidance for some practical algorithmic choices. Since in an actual implementation one wants to limit the number of SFs ψ̃^{π∗j} stored in memory, the corresponding vectors w̃j can be used to decide which ones to keep. For example, one can create a new ψ̃^{π∗i} only when minj ||w̃i − w̃j|| is above a given threshold; alternatively, once the maximum number of SFs has been reached, one can replace ψ̃^{π∗k}, where k = argminj ||w̃i − w̃j|| (here wi is the current task).

5 Experiments

In this section we present our main experimental results. Additional details, along with further results and analysis, can be found in Appendix B of the supplementary material.

The first environment we consider involves navigation tasks defined over a two-dimensional continuous space composed of four rooms (Figure 1). The agent starts in one of the rooms and must reach a goal region located in the farthest room. The environment has objects that can be picked up by the agent by passing over them. Each object belongs to one of three classes determining the associated reward. The objective of the agent is to pick up the “good” objects and navigate to the goal while avoiding “bad” objects. The rewards associated with the object classes change every 20,000 transitions, giving rise to very different tasks (Figure 1). The goal is to maximize the sum of rewards accumulated over a sequence of 250 tasks, with each task’s rewards sampled uniformly from [−1, 1]³.

Figure 1: Environment layout and some examples of optimal trajectories associated with specific tasks. The shapes of the objects represent their classes; ‘S’ is the start state and ‘G’ is the goal.
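The SF-library bookkeeping suggested above (store a new ψ̃ only when minj ||w̃i − w̃j|| exceeds a threshold; once memory is full, replace the nearest module) can be sketched in a few lines. The threshold and capacity values below are invented for illustration:

```python
import numpy as np

def update_library(psi_library, w_library, psi_new, w_new,
                   threshold=0.5, max_size=4):
    """Keep or discard the SFs of a newly learned policy using the
    w-distance heuristic described in the text."""
    if not w_library:
        psi_library.append(psi_new); w_library.append(w_new); return
    dists = [np.linalg.norm(w_new - w) for w in w_library]
    if min(dists) <= threshold:
        return                       # a similar task is already covered
    if len(w_library) < max_size:    # room left: store a new module
        psi_library.append(psi_new); w_library.append(w_new)
    else:                            # memory full: replace the nearest module
        k = int(np.argmin(dists))
        psi_library[k], w_library[k] = psi_new, w_new

psis, ws = [], []
update_library(psis, ws, "psi_A", np.array([0.0, 0.0]))
update_library(psis, ws, "psi_B", np.array([0.1, 0.0]))  # too close: skipped
update_library(psis, ws, "psi_C", np.array([2.0, 0.0]))  # far enough: kept
assert psis == ["psi_A", "psi_C"]
```

By Theorem 2, the threshold directly trades memory for the radius of the balls covering task space, and hence for the worst-case transfer guarantee.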
We defined a straightforward instantiation of our approach in which both w̃ and ψ̃π are computed incrementally in order to minimize losses induced by (2) and (4). Every time the task changes, the current ψ̃πi is stored and a new ψ̃πi+1 is created. We call this method SFQL as a reference to the fact that SFs are learned through an algorithm analogous to Q-learning (QL)—which is also used as a baseline in our comparisons [27]. As a more challenging reference point we report results for a transfer method called probabilistic policy reuse [8]. We adopt a version of the algorithm that builds on QL and reuses all the policies learned. The resulting method, PRQL, is thus directly comparable to SFQL. The details of QL, PRQL, and SFQL, including their pseudo-code, are given in Appendix B.

We compared two versions of SFQL. In the first one, called SFQL-φ, we assume the agent has access to features φ that perfectly predict the rewards, as in (2). The second version of our agent had to learn an approximation φ̃ ∈ Rh directly from data collected by QL in the first 20 tasks. Note that h may not coincide with the true dimension of φ, which in this case is 4; we refer to the different instances of our algorithm as SFQL-h. The process of learning φ̃ followed the multi-task learning protocol proposed by Caruana [6] and Baxter [2], and is described in detail in Appendix B.

The results of our experiments can be seen in Figure 2. As shown, all versions of SFQL significantly outperform the other two methods, with an improvement in average return of more than 100% when compared to PRQL, which itself improves on QL by around 100%. Interestingly, SFQL-h seems to achieve good overall performance faster than SFQL-φ, even though the latter uses features that allow for an exact representation of the rewards.
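The two incremental updates underlying SFQL can be sketched for the tabular case. This is only an illustrative sketch, not the paper's pseudo-code (tabular arrays, step sizes, and the self-loop test below are all invented): one stochastic regression step for w̃, from the loss induced by (2), and one TD step for ψ̃, from the Bellman equation (4).

```python
import numpy as np

def sfql_updates(psi, w, phi_t, s, a, s_next, r, gamma, a_next,
                 alpha_w=0.05, alpha_psi=0.1):
    """Apply both incremental updates in place on tabular arrays."""
    # Supervised step: fit r(s,a,s') ~ phi(s,a,s')^T w
    w += alpha_w * (r - phi_t @ w) * phi_t
    # TD step: move psi(s,a) towards phi_t + gamma * psi(s', a')
    target = phi_t + gamma * psi[s_next, a_next]
    psi[s, a] += alpha_psi * (target - psi[s, a])

# Toy check on a single-state, single-action self-loop with fixed features.
psi = np.zeros((1, 1, 2))
w = np.zeros(2)
phi = np.array([1.0, 0.5])
w_true = np.array([2.0, -1.0])
for _ in range(500):
    sfql_updates(psi, w, phi, 0, 0, 0, r=phi @ w_true, gamma=0.9, a_next=0)

assert abs(phi @ w - phi @ w_true) < 1e-3          # reward regression converged
assert np.allclose(psi[0, 0], phi / (1 - 0.9), atol=0.5)  # SF fixed point
```

On the self-loop the SF fixed point of (4) is φ/(1 − γ), which the TD iteration approaches geometrically.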
One possible explanation is that, unlike their counterparts φi, the features φ̃i are activated over most of the space S × A × S, which results in a dense pseudo-reward signal that facilitates learning.

Figure 2: Average and cumulative return per task in the four-room domain for QL, PRQL, SFQL-φ/SFQL-4, and SFQL-8. SFQL-h receives no reward during the first 20 tasks while learning φ̃. Error bands show one standard error over 30 runs.

The second environment we consider is a set of control tasks defined in the MuJoCo physics engine [26]. Each task consists of moving a two-joint torque-controlled simulated robotic arm to a specific target location; thus, we refer to this environment as “the reacher domain.” We defined 12 tasks, but only allowed the agents to train on 4 of them (Figure 3c). This means that the agent must be able to perform well on tasks that it has never experienced during training. In order to solve this problem, we adopted essentially the same algorithm as above, but we replaced QL with Mnih et al.’s DQN—both as a baseline and as the basic engine underlying the SF agent [15]. The resulting method, which we call SFDQN, is an illustration of how our method can be naturally combined with complex nonlinear approximators such as neural networks.

Figure 3: Normalized return on the reacher domain: (a) performance on the four training tasks (DQN’s results shown faded in the background); (b) average performance on the test tasks; (c) colored and gray circles depicting training and test targets, respectively. A normalized return of ‘1’ corresponds to the average result achieved by DQN after learning each task separately and ‘0’ corresponds to the average performance of a randomly-initialized agent (see Appendix B for details). SFDQN’s results were obtained using the GPI policies πi(s) defined in the text. Shading shows one standard error over 30 runs.
The features φi used by SFDQN are the negations of the distances to the centers of the 12 target regions. As usual in experiments of this type, we give the agents a description of the current task: for DQN the target coordinates are given as inputs, while for SFDQN this information is provided as a one-hot vector wt ∈ R12 [12]. Unlike in the previous experiment, in the current setup each transition was used to train all four ψ̃πi through losses derived from (4). Here πi is the GPI policy for the ith task: πi(s) ∈ argmaxa maxj ψ̃j(s, a)⊤wi.

Results are shown in Figures 3a and 3b. Looking at the training curves, we see that whenever a task is selected for training, SFDQN’s return on that task quickly improves and saturates at near-optimal performance. The interesting point to be noted is that, when learning a given task, SFDQN’s performance also improves on all the other tasks, including the test ones, for which it does not have specialized policies. This illustrates how the combination of SFs and GPI can give rise to flexible agents able to perform well on any task of a set of tasks with shared dynamics—which in turn can be seen as both a form of temporal abstraction and a step towards more general hierarchical RL [22, 1].

6 Related work

Mehta et al.’s [14] approach to transfer learning is probably the closest work to ours in the literature. There are important differences, though. First, Mehta et al. [14] assume that both φ and w are always observable quantities provided by the environment. They also focus on average-reward RL, in which the quality of a decision policy can be characterized by a single scalar. This reduces the process of selecting a policy for a task to one decision made at the outset, which is in clear contrast with GPI.

The literature on transfer learning has other methods that relate to ours [25, 11]. Among the algorithms designed for the scenario considered here, two approaches are particularly relevant because they also reuse old policies.
One is Fernández et al.’s [8] probabilistic policy reuse, adopted in our experiments and described in Appendix B. The other approach, by Bernstein [4], corresponds to using our method but relearning all the ψ̃πi from scratch at each new task.

When we look at SFs strictly as a representation scheme, there are clear similarities with Littman et al.’s [13] predictive state representation (PSR). Unlike SFs, though, a PSR tries to summarize the dynamics of the entire environment rather than those of a single policy π. A scheme that is perhaps closer to SFs is the value-function representation sometimes adopted in inverse RL [18]. SFs are also related to Sutton et al.’s [23] general value functions (GVFs), which extend the notion of value function to also include “pseudo-rewards.” If we see φi as a pseudo-reward, the ith component of the SFs, ψπi(s, a), becomes a particular case of a GVF. Beyond the technical similarities, the connection between SFs and GVFs uncovers some principles underlying both lines of work that, when contrasted, may benefit both. On one hand, Sutton et al.’s [23] and Modayil et al.’s [16] hypothesis that relevant knowledge about the world can be expressed in the form of many predictions naturally translates to SFs: if φ is expressive enough, the agent should be able to represent any relevant reward function. Conversely, SFs not only provide a concrete way of using this knowledge, they also suggest a possible criterion for selecting the pseudo-rewards φi, since ultimately we are only interested in features that help in the approximation φ(s, a, s′)⊤w̃ ≈ r(s, a, s′).

Another generalization of value functions that is related to SFs is Schaul et al.’s [20] universal value function approximators (UVFAs). UVFAs extend the notion of value function to also include as an argument an abstract representation of a “goal,” which makes them particularly suitable for transfer.
The function maxj ψ̃^{π∗j}(s, a)⊤w̃ used in our framework can be seen as a function of s, a, and w̃, the latter being a generic way of representing a goal, and thus in some sense this representation is a UVFA. The connection between SFs and UVFAs raises an interesting point: since under this interpretation w̃ is simply the description of a task, it can in principle be a direct function of the observations, which opens up the possibility of the agent determining w̃ even before seeing any rewards.

As discussed, our approach is also related to temporal abstraction and hierarchical RL: if we look at the ψπ as instances of Sutton et al.’s [22] options, acting greedily with respect to the maximum over their value functions corresponds in some sense to planning at a higher level of temporal abstraction (that is, each ψπ(s, a) is associated with an option that terminates after a single step). This is the view adopted by Yao et al. [28], whose universal option model closely resembles our approach in some aspects (the main difference being that they do not perform GPI).

Finally, there have been previous attempts to combine SR and neural networks. Kulkarni et al. [10] and Zhang et al. [29] propose similar architectures to jointly learn ψ̃π(s, a), φ̃(s, a, s′), and w̃. Although neither work exploits SFs for GPI, both discuss other uses of SFs for transfer. In principle the proposed (or similar) architectures can also be used within our framework.

7 Conclusion

This paper builds on two concepts, both of which are generalizations of previous ideas. The first one is SFs, a generalization of Dayan’s [7] SR that extends the original definition from discrete to continuous spaces and also facilitates the use of function approximation. The second concept is GPI, formalized in Theorem 1. As the name suggests, this result extends Bellman’s [3] classic policy improvement theorem from a single policy to multiple policies.
Although SFs and GPI are of interest on their own, in this paper we focus on their combination to induce transfer. The resulting framework is an elegant extension of DP’s basic setting that provides a solid foundation for transfer in RL. As a complement to the proposed transfer approach, we derived a theoretical result, Theorem 2, that formalizes the intuition that an agent should perform well on a novel task if it has seen a similar task before. We also illustrated with a comprehensive set of experiments how the combination of SFs and GPI promotes transfer in practice. We believe the proposed ideas lay out a general framework for transfer in RL. By specializing the basic components presented, one can build on our results to derive agents able to perform well across a wide variety of tasks, and thus extend the range of environments that can be successfully tackled.

Acknowledgments

The authors would like to thank Joseph Modayil for the invaluable discussions during the development of the ideas described in this paper. We also thank Peter Dayan, Matt Botvinick, Marc Bellemare, and Guy Lever for the excellent comments, and Dan Horgan and Alexander Pritzel for their help with the experiments. Finally, we thank the anonymous reviewers for their comments and suggestions to improve the paper.

References

[1] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
[2] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
[3] Richard E. Bellman. Dynamic Programming. Princeton University Press, 1957.
[4] Daniel S. Bernstein. Reusing old policies to accelerate learning on new MDPs. Technical report, Amherst, MA, USA, 1999.
[5] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[6] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[7] Peter Dayan.
Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
[8] Fernando Fernández, Javier García, and Manuela Veloso. Probabilistic policy reuse for inter-task transfer learning. Robotics and Autonomous Systems, 58(7):866–871, 2010.
[9] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2002.
[10] Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.
[11] Alessandro Lazaric. Transfer in reinforcement learning: A framework and a survey. In Reinforcement Learning: State-of-the-Art, pages 143–173, 2012.
[12] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[13] Michael L. Littman, Richard S. Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems (NIPS), pages 1555–1561, 2001.
[14] Neville Mehta, Sriraam Natarajan, Prasad Tadepalli, and Alan Fern. Transfer in variable-reward hierarchical reinforcement learning. Machine Learning, 73(3), 2008.
[15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[16] Joseph Modayil, Adam White, and Richard S. Sutton. Multi-timescale nexting in a reinforcement learning robot. Adaptive Behavior, 22(2):146–160, 2014.
[17] Sriraam Natarajan and Prasad Tadepalli.
Dynamic preferences in multi-criteria reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 601–608, 2005.
[18] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 663–670, 2000.
[19] Martin L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
[20] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning (ICML), pages 1312–1320, 2015.
[21] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[22] Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[23] Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In International Conference on Autonomous Agents and Multiagent Systems, pages 761–768, 2011.
[24] Csaba Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[25] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(1):1633–1685, 2009.
[26] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5026–5033, 2012.
[27] Christopher Watkins and Peter Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
[28] Hengshuai Yao, Csaba Szepesvári, Richard S. Sutton, Joseph Modayil, and Shalabh Bhatnagar.
Universal option models. In Advances in Neural Information Processing Systems (NIPS), pages 990–998, 2014.
[29] Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, and Wolfram Burgard. Deep reinforcement learning with successor features for navigation across similar environments. CoRR, abs/1612.05533, 2016.
On Quadratic Convergence of DC Proximal Newton Algorithm in Nonconvex Sparse Learning

Xingguo Li1,4  Lin F. Yang2∗  Jason Ge2  Jarvis Haupt1  Tong Zhang3  Tuo Zhao4†
1University of Minnesota  2Princeton University  3Tencent AI Lab  4Georgia Tech

Abstract

We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. Our proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on difference of convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, our proposed algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.

1 Introduction

We consider a high dimensional regression or classification problem: given n independent observations {xi, yi}ⁿᵢ₌₁ ⊂ Rd × R sampled from a joint distribution D(X, Y), we are interested in learning the conditional distribution P(Y | X) from the data. A popular modeling approach is the Generalized Linear Model (GLM) [20], which assumes

P(Y | X; θ∗) ∝ exp((Y X⊤θ∗ − ψ(X⊤θ∗)) / c(σ)),

where c(σ) is a scaling parameter and ψ is the cumulant function. A natural approach to estimate θ∗ is Maximum Likelihood Estimation (MLE) [25], which essentially minimizes the negative log-likelihood of the data given the parameters. However, MLE often performs poorly in parameter estimation in high dimensions due to the curse of dimensionality [6]. To address this issue, machine learning researchers and statisticians follow Occam’s razor principle, and propose sparse modeling approaches [3, 26, 30, 32].
These sparse modeling approaches assume that θ∗ is a sparse vector with only s∗ nonzero entries, where s∗ < n ≪ d. This implies that many variables in X are essentially irrelevant to modeling, which is very natural in many real-world applications such as genomics and medical imaging [7, 21]. Many empirical results have corroborated the success of sparse modeling in high dimensions. Specifically, many sparse modeling approaches obtain a sparse estimator of θ∗ by solving the following regularized optimization problem,

θ̂ = argmin_{θ∈Rd} L(θ) + Rλtgt(θ),   (1)

where L : Rd → R is the convex negative log-likelihood (or pseudo-likelihood) function, Rλtgt : Rd → R is a sparsity-inducing decomposable regularizer, i.e., Rλtgt(θ) = Σᵈⱼ₌₁ rλtgt(θj) with rλtgt : R → R, and λtgt > 0 is the regularization parameter. Many existing sparse modeling approaches can be cast as special examples of (1), such as sparse linear regression [30], sparse logistic regression [32], and sparse Poisson regression [26].

∗The work was done while the author was at Johns Hopkins University.
†The authors acknowledge support from DARPA YFA N66001-14-1-4047, NSF Grant IIS-1447639, and a Doctoral Dissertation Fellowship from the University of Minnesota. Correspondence to: Xingguo Li <lixx1661@umn.edu> and Tuo Zhao <tuo.zhao@isye.gatech.edu>.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Given a convex regularizer, e.g., Rλtgt(θ) = λtgt||θ||1 [30], we can obtain global optima in polynomial time and characterize their statistical properties. However, convex regularizers incur large estimation bias. To address this issue, several nonconvex regularizers have been proposed, including the minimax concave penalty (MCP, [39]), the smoothly clipped absolute deviation (SCAD, [8]), and the capped ℓ1 regularization [40]. The obtained estimators (e.g., hypothetical global optima of (1)) can achieve faster statistical rates of convergence than their convex counterparts [9, 16, 22, 34].
Despite these superior statistical guarantees, nonconvex regularizers raise a greater computational challenge than convex regularizers in high dimensions. Popular iterative algorithms for convex optimization, such as proximal gradient descent [2, 23] and coordinate descent [17, 29], no longer have strong global convergence guarantees for nonconvex optimization. Therefore, establishing statistical properties of the estimators obtained by these algorithms becomes very challenging, which explains why existing theoretical studies on computational and statistical guarantees for nonconvex regularized sparse modeling approaches were so limited until the recent rise of a new area named “statistical optimization.” Specifically, machine learning researchers have started to incorporate certain structures of sparse modeling (e.g., restricted strong convexity, large regularization effect) into the algorithmic design and convergence analysis for optimization. This has motivated a few recent advances: [16] propose proximal gradient algorithms for a family of nonconvex regularized estimators with a linear convergence to an approximate local optimum with suboptimal statistical guarantees; [34, 43] further propose homotopy proximal gradient and coordinate gradient descent algorithms with a linear convergence to a local optimum and optimal statistical guarantees; [9, 41] propose a multistage convex relaxation-based (also known as difference of convex (DC) programming) proximal gradient algorithm, which can guarantee an approximate local optimum with optimal statistical properties. Their computational analysis further shows that within each stage of the convex relaxation, the proximal gradient algorithm achieves a (local) linear convergence to a unique sparse global optimum of the relaxed convex subproblem. The aforementioned approaches only consider first-order algorithms, such as proximal gradient descent and proximal coordinate gradient descent.
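For concreteness, the first-order baseline discussed above can be sketched for the simplest convex instance of (1): ℓ1-regularized least squares solved by proximal gradient descent (ISTA). All problem sizes, step sizes, and data below are invented toy values:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(X, y, lam, step, iters):
    """ISTA for l1-regularized least squares: alternate a gradient step on
    the smooth loss with the prox of the (convex) regularizer."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / len(y)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
theta_true = np.zeros(20)
theta_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
y = X @ theta_true + 0.01 * rng.standard_normal(100)

theta_hat = proximal_gradient(X, y, lam=0.05, step=0.01, iters=2000)
assert np.linalg.norm(theta_hat - theta_true) < 0.5
```

The DC approach studied in the paper repeatedly solves convex subproblems of exactly this form, but with a second-order (proximal Newton) inner solver instead of gradient steps.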
Second-order algorithms with theoretical guarantees are still largely missing for high dimensional nonconvex regularized sparse modeling, but this has not suppressed the enthusiasm for applying heuristic second-order algorithms to real-world problems. Considerable evidence already corroborates their superior computational performance over first-order algorithms (e.g., glmnet [10]). This motivates our attempt to understand second-order algorithms in high dimensions. In this paper, we study a multistage convex relaxation-based proximal Newton algorithm for nonconvex regularized sparse learning. The algorithm is not only highly efficient in practice, but also enjoys strong computational and statistical guarantees in theory. Specifically, by leveraging a refined characterization of local restricted strong convexity and Hessian smoothness, we prove that within each stage of the convex relaxation, our proposed algorithm maintains solution sparsity and achieves (local) quadratic convergence, a significant improvement over the (local) linear convergence of the proximal gradient algorithm in [9] (see later sections for details). This eventually allows us to obtain an approximate local optimum with optimal statistical properties after only a few relaxation stages. Numerical experiments are provided to support our theory. To the best of our knowledge, this is the first second-order approach for high dimensional sparse learning with convex/nonconvex regularizers that has strong statistical and computational guarantees.

Notations: Given a vector $v \in \mathbb{R}^d$, we denote the $p$-norm as $\|v\|_p = (\sum_{j=1}^d |v_j|^p)^{1/p}$ for real $p > 0$, the number of nonzero entries as $\|v\|_0 = \sum_j \mathbb{1}(v_j \ne 0)$, and $v_{\setminus j} = (v_1, \ldots, v_{j-1}, v_{j+1}, \ldots, v_d)^\top \in \mathbb{R}^{d-1}$ as the subvector with the $j$-th entry removed. Given an index set $A \subseteq \{1, \ldots, d\}$, $\bar{A} = \{j \mid j \in \{1, \ldots, d\},\, j \notin A\}$ is the complement of $A$. We use $v_A$ to denote the subvector of $v$ indexed by $A$.
Given a matrix $A \in \mathbb{R}^{d \times d}$, we use $A_{*j}$ ($A_{k*}$) to denote the $j$-th column ($k$-th row) and $\Lambda_{\max}(A)$ ($\Lambda_{\min}(A)$) the largest (smallest) eigenvalue of $A$. We define $\|A\|_F^2 = \sum_j \|A_{*j}\|_2^2$ and $\|A\|_2 = \sqrt{\Lambda_{\max}(A^\top A)}$. We denote $A_{\setminus i \setminus j}$ as the submatrix of $A$ with the $i$-th row and the $j$-th column removed, $A_{\setminus i\, j}$ ($A_{i \setminus j}$) as the $j$-th column ($i$-th row) of $A$ with its $i$-th ($j$-th) entry removed, and $A_{AA}$ as the submatrix of $A$ with both rows and columns indexed by $A$. If $A$ is a PSD matrix, we define $\|v\|_A = \sqrt{v^\top A v}$ as the induced seminorm for a vector $v$. We use the conventional notation $O(\cdot)$, $\Omega(\cdot)$, $\Theta(\cdot)$ to denote limiting behavior ignoring constants, and $O_P(\cdot)$ to denote limiting behavior in probability. $C_1, C_2, \ldots$ denote generic positive constants.

2 DC Proximal Newton Algorithm

Throughout the rest of the paper, we assume: (1) $\mathcal{L}(\theta)$ is nonstrongly convex and twice continuously differentiable, e.g., the negative log-likelihood function of a generalized linear model (GLM); (2) $\mathcal{L}(\theta)$ takes an additive form, i.e., $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell_i(\theta)$, where each $\ell_i(\theta)$ is associated with an observation $(x_i, y_i)$ for $i = 1, \ldots, n$. Taking GLM as an example, we have $\ell_i(\theta) = \psi(x_i^\top\theta) - y_i x_i^\top\theta$, where $\psi$ is the cumulant function. For nonconvex regularization, we use the capped $\ell_1$ regularizer [40], defined as
$$\mathcal{R}_{\lambda_{\mathrm{tgt}}}(\theta) = \sum_{j=1}^d r_{\lambda_{\mathrm{tgt}}}(\theta_j) = \lambda_{\mathrm{tgt}}\sum_{j=1}^d \min\{|\theta_j|, \beta\lambda_{\mathrm{tgt}}\},$$
where $\beta > 0$ is an additional tuning parameter. Our algorithm and theory extend to the SCAD and MCP regularizers in a straightforward manner [8, 39]. As shown in Figure 1, $r_{\lambda_{\mathrm{tgt}}}(\theta_j)$ can be decomposed as the difference of two convex functions [5], i.e.,
$$r_{\lambda}(\theta_j) = \underbrace{\lambda|\theta_j|}_{\text{convex}} - \underbrace{\max\{\lambda|\theta_j| - \beta\lambda^2,\, 0\}}_{\text{convex}}.$$

Figure 1: The capped $\ell_1$ regularizer is the difference of two convex functions. This allows us to relax the nonconvex regularizer based on concave duality.

This motivates us to apply the difference of convex (DC) programming approach to solve the nonconvex problem.
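The DC identity above is easy to verify numerically; a minimal sketch (function names are ours, for illustration):

```python
import numpy as np

def capped_l1(theta, lam, beta):
    """Capped l1 regularizer: sum over j of lam * min(|theta_j|, beta*lam)."""
    return lam * np.minimum(np.abs(theta), beta * lam).sum()

def capped_l1_dc(theta, lam, beta):
    """The same regularizer written as a difference of two convex functions:
    lam*|t|  -  max(lam*|t| - beta*lam^2, 0)."""
    a = np.abs(theta)
    return (lam * a - np.maximum(lam * a - beta * lam**2, 0.0)).sum()
```

Both expressions agree coordinate-wise: below the cap $\beta\lambda$ the second convex term is zero and the penalty is the plain $\ell_1$; above the cap the second term cancels the growth, flattening the penalty at $\beta\lambda^2$.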
We now introduce the DC proximal Newton algorithm, which contains three components: the multistage convex relaxation, a warm initialization, and the proximal Newton algorithm.

(I) The multistage convex relaxation is essentially a sequential optimization framework [40]. At the $(K+1)$-th stage, we have the output solution $\hat\theta^{\{K\}}$ from the previous stage. For notational simplicity, we define a regularization vector $\lambda^{\{K+1\}} = (\lambda_1^{\{K+1\}}, \ldots, \lambda_d^{\{K+1\}})^\top$, where $\lambda_j^{\{K+1\}} = \lambda_{\mathrm{tgt}} \cdot \mathbb{1}(|\hat\theta_j^{\{K\}}| \le \beta\lambda_{\mathrm{tgt}})$ for all $j = 1, \ldots, d$. Let $\odot$ be the Hadamard (entrywise) product. We solve a convex relaxation of (1) at $\theta = \hat\theta^{\{K\}}$ as follows:
$$\bar\theta^{\{K+1\}} = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^d} \mathcal{F}_{\lambda^{\{K+1\}}}(\theta), \quad \text{where } \mathcal{F}_{\lambda^{\{K+1\}}}(\theta) = \mathcal{L}(\theta) + \|\lambda^{\{K+1\}} \odot \theta\|_1 \qquad (2)$$
and $\|\lambda^{\{K+1\}} \odot \theta\|_1 = \sum_{j=1}^d \lambda_j^{\{K+1\}}|\theta_j|$. One can verify that $\|\lambda^{\{K+1\}} \odot \theta\|_1$ is essentially a convex relaxation of $\mathcal{R}_{\lambda_{\mathrm{tgt}}}(\theta)$ at $\theta = \hat\theta^{\{K\}}$ based on the concave duality in DC programming. We emphasize that $\bar\theta^{\{K\}}$ denotes the unique sparse global optimum of (2) (uniqueness is elaborated in later sections), while $\hat\theta^{\{K\}}$ denotes the output solution of (2) when we terminate the iterations at the $K$-th convex relaxation stage. The stopping criterion is explained below.

(II) The warm initialization is the first stage of the DC programming, where we solve the $\ell_1$-regularized counterpart of (1),
$$\bar\theta^{\{1\}} = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^d} \mathcal{L}(\theta) + \lambda_{\mathrm{tgt}}\|\theta\|_1. \qquad (3)$$
This is an intuitive choice for sparse statistical recovery, since the $\ell_1$-regularized estimator gives a good initialization that is sufficiently close to $\theta^*$. Note that (3) is equivalent to (2) with $\lambda_j^{\{1\}} = \lambda_{\mathrm{tgt}}$ for all $j = 1, \ldots, d$, which can be viewed as the convex relaxation of (1) at $\hat\theta^{\{0\}} = 0$ for the first stage.

(III) The proximal Newton algorithm proposed in [12] is then applied to solve the convex subproblem (2) at each stage, including the warm initialization (3).
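The stage-wise reweighting rule in (I) is one line of code; a minimal sketch (illustrative names, not the paper's implementation):

```python
import numpy as np

def reweight(theta_hat, lam_tgt, beta):
    """Stage-(K+1) regularization vector: coordinates with |theta_j| > beta*lam_tgt
    are treated as strong signals and receive zero penalty; all others keep lam_tgt."""
    return lam_tgt * (np.abs(theta_hat) <= beta * lam_tgt)
```

Unpenalizing the large coordinates is exactly what removes the estimation bias of the $\ell_1$ stage on strong signals in later stages.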
For notational simplicity, we omit the stage index $\{K\}$ for all intermediate updates of $\theta$, and only use $(t)$ as the iteration index within the $K$-th stage for all $K \ge 1$. Specifically, at the $K$-th stage, given $\theta^{(t)}$ at the $t$-th iteration of the proximal Newton algorithm, we consider a quadratic approximation of (2) at $\theta^{(t)}$:
$$\mathcal{Q}(\theta; \theta^{(t)}, \lambda^{\{K\}}) = \mathcal{L}(\theta^{(t)}) + (\theta - \theta^{(t)})^\top\nabla\mathcal{L}(\theta^{(t)}) + \tfrac{1}{2}\|\theta - \theta^{(t)}\|^2_{\nabla^2\mathcal{L}(\theta^{(t)})} + \|\lambda^{\{K\}} \odot \theta\|_1, \qquad (4)$$
where $\|\theta - \theta^{(t)}\|^2_{\nabla^2\mathcal{L}(\theta^{(t)})} = (\theta - \theta^{(t)})^\top\nabla^2\mathcal{L}(\theta^{(t)})(\theta - \theta^{(t)})$. We then take $\theta^{(t+\frac{1}{2})} = \mathop{\mathrm{argmin}}_\theta \mathcal{Q}(\theta; \theta^{(t)}, \lambda^{\{K\}})$. Since $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \ell_i(\theta)$ takes an additive form, we can avoid computing the $d \times d$ Hessian matrix in (4) directly. Instead, to reduce memory usage when $d$ is large, we rewrite (4) as a regularized weighted least-squares problem:
$$\mathcal{Q}(\theta; \theta^{(t)}) = \frac{1}{n}\sum_{i=1}^n w_i(z_i - x_i^\top\theta)^2 + \|\lambda^{\{K\}} \odot \theta\|_1 + \text{constant}, \qquad (5)$$
where the $w_i$'s and $z_i$'s are easy-to-compute constants depending on $\theta^{(t)}$, the $\ell_i(\theta^{(t)})$'s, the $x_i$'s, and the $y_i$'s.

Remark 1. Existing literature has shown that (5) can be efficiently solved by coordinate descent algorithms in conjunction with an active-set strategy [43]. See [10] and Appendix B for details.

For the first stage (i.e., the warm initialization), we require an additional backtracking line search procedure to guarantee descent of the objective value [12]. Specifically, let $\Delta\theta^{(t)} = \theta^{(t+\frac{1}{2})} - \theta^{(t)}$. We start from $\eta_t = 1$ and use backtracking line search to find a step size $\eta_t \in (0, 1]$ such that the Armijo condition [1] holds. Specifically, given a constant $\mu \in (0.9, 1)$, we set $\eta_t = \mu^q$ for the smallest integer $q \ge 0$ such that
$$\mathcal{F}_{\lambda^{\{1\}}}(\theta^{(t)} + \eta_t\Delta\theta^{(t)}) \le \mathcal{F}_{\lambda^{\{1\}}}(\theta^{(t)}) + \alpha\eta_t\gamma_t,$$
where $\alpha \in (0, \frac{1}{2})$ is a fixed constant and
$$\gamma_t = \nabla\mathcal{L}(\theta^{(t)})^\top\Delta\theta^{(t)} + \|\lambda^{\{1\}} \odot (\theta^{(t)} + \Delta\theta^{(t)})\|_1 - \|\lambda^{\{1\}} \odot \theta^{(t)}\|_1.$$
We then set $\theta^{(t+1)} = \theta^{(t)} + \eta_t\Delta\theta^{(t)}$.
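For sparse logistic regression, the constants $w_i, z_i$ take the familiar iteratively reweighted least squares (IRLS) form. The paper leaves them unspecified, so the sketch below uses the standard choice (with the quadratic written as $\frac{1}{2n}\sum_i w_i(z_i - x_i^\top\theta)^2$; (5) absorbs such constant factors into $w_i$):

```python
import numpy as np

def irls_weights_logistic(X, y, theta):
    """Standard IRLS quantities for logistic loss: with these w_i, z_i the
    quadratic model (1/2n) * sum_i w_i (z_i - x_i' theta)^2 matches the
    second-order Taylor expansion of the logistic negative log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))   # predicted probabilities
    w = p * (1.0 - p)                      # Hessian weights psi''(x_i' theta)
    z = X @ theta + (y - p) / w            # working responses
    return w, z
```

A quick sanity check is that the gradient of the weighted least-squares surrogate at $\theta^{(t)}$ equals the gradient of the logistic loss there, so the Newton step of (4) and of (5) coincide.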
We terminate the iterations when the following approximate KKT condition holds:
$$\omega_{\lambda^{\{1\}}}(\theta^{(t)}) := \min_{\xi \in \partial\|\theta^{(t)}\|_1}\|\nabla\mathcal{L}(\theta^{(t)}) + \lambda^{\{1\}} \odot \xi\|_\infty \le \varepsilon,$$
where $\varepsilon$ is a predefined precision parameter. We then set the output solution $\hat\theta^{\{1\}} = \theta^{(t)}$. Note that $\hat\theta^{\{1\}}$ is then used as the initial solution for the second stage of convex relaxation (2). The proximal Newton algorithm with backtracking line search is summarized in Algorithm 2 in the Appendix. The backtracking line search is unnecessary at the $K$-th stage for $K \ge 2$; we simply take $\eta_t = 1$ and $\theta^{(t+1)} = \theta^{(t+\frac{1}{2})}$ for all $t \ge 0$ when $K \ge 2$. This yields more efficient proximal Newton updates from the second stage of convex relaxation (2) onward. We summarize our proposed DC proximal Newton algorithm in Algorithm 1 in the Appendix.

3 Computational and Statistical Theories

Before presenting our theoretical results, we introduce some preliminaries, including important definitions and assumptions. We define the largest and smallest $s$-sparse eigenvalues as follows.

Definition 2. For any positive integer $s$, the largest and smallest $s$-sparse eigenvalues of $\nabla^2\mathcal{L}(\theta)$ are
$$\rho_s^+ = \sup_{\|v\|_0 \le s}\frac{v^\top\nabla^2\mathcal{L}(\theta)v}{v^\top v} \quad \text{and} \quad \rho_s^- = \inf_{\|v\|_0 \le s}\frac{v^\top\nabla^2\mathcal{L}(\theta)v}{v^\top v},$$
and we define $\kappa_s = \rho_s^+/\rho_s^-$ as the $s$-sparse condition number.

Sparse eigenvalue (SE) conditions are widely studied in high dimensional sparse modeling problems, and are closely related to restricted strong convexity/smoothness properties and restricted eigenvalue properties [22, 27, 33, 44]. For notational convenience, given a parameter $\theta \in \mathbb{R}^d$ and a real constant $R > 0$, we define a neighborhood of $\theta$ with radius $R$ as $\mathcal{B}(\theta, R) := \{\phi \in \mathbb{R}^d \mid \|\phi - \theta\|_2 \le R\}$. Our first assumption concerns the sparse eigenvalues of the Hessian matrix over a sparse domain.

Assumption 1.
Given $\theta \in \mathcal{B}(\theta^*, R)$ for a generic constant $R$, there exists a generic constant $C_0$ such that $\nabla^2\mathcal{L}(\theta)$ satisfies the SE condition with parameters $0 < \rho^-_{s^*+2\tilde{s}} < \rho^+_{s^*+2\tilde{s}} < +\infty$, where $\tilde{s} \ge C_0\kappa^2_{s^*+2\tilde{s}}\, s^*$ and $\kappa_{s^*+2\tilde{s}} = \rho^+_{s^*+2\tilde{s}}/\rho^-_{s^*+2\tilde{s}}$.

Assumption 1 requires that $\mathcal{L}(\theta)$ has a finite largest and a positive smallest sparse eigenvalue whenever $\theta$ is sufficiently sparse and close to $\theta^*$. Analogous conditions are widely used in high dimensional analysis [13, 14, 34, 35, 43], such as the restricted strong convexity/smoothness of $\mathcal{L}(\theta)$ (RSC/RSS, [6]). Given any $\theta, \theta' \in \mathbb{R}^d$, the RSC/RSS quantity is defined as $\delta\mathcal{L}(\theta', \theta) := \mathcal{L}(\theta') - \mathcal{L}(\theta) - \nabla\mathcal{L}(\theta)^\top(\theta' - \theta)$. For notational simplicity, we define $S = \{j \mid \theta^*_j \ne 0\}$ and $\bar{S} = \{j \mid \theta^*_j = 0\}$. The following proposition connects the SE property to the RSC/RSS property.

Proposition 3. Given $\theta, \theta' \in \mathcal{B}(\theta^*, R)$ with $\|\theta_{\bar{S}}\|_0 \le \tilde{s}$ and $\|\theta'_{\bar{S}}\|_0 \le \tilde{s}$, $\mathcal{L}(\theta)$ satisfies
$$\tfrac{1}{2}\rho^-_{s^*+2\tilde{s}}\|\theta' - \theta\|_2^2 \le \delta\mathcal{L}(\theta', \theta) \le \tfrac{1}{2}\rho^+_{s^*+2\tilde{s}}\|\theta' - \theta\|_2^2.$$

The proof of Proposition 3 is provided in [6], and is therefore omitted. Proposition 3 implies that $\mathcal{L}(\theta)$ is essentially strongly convex, but only over a sparse domain (see Figure 2). The second assumption requires $\nabla^2\mathcal{L}(\theta)$ to be smooth over the sparse domain.

Assumption 2 (Local Restricted Hessian Smoothness). Recall that $\tilde{s}$ is defined in Assumption 1. There exist generic constants $L_{s^*+2\tilde{s}}$ and $R$ such that for any $\theta, \theta' \in \mathcal{B}(\theta^*, R)$ with $\|\theta_{\bar{S}}\|_0 \le \tilde{s}$ and $\|\theta'_{\bar{S}}\|_0 \le \tilde{s}$, we have
$$\sup_{v \in \Omega,\, \|v\|_2 = 1} v^\top\big(\nabla^2\mathcal{L}(\theta') - \nabla^2\mathcal{L}(\theta)\big)v \le L_{s^*+2\tilde{s}}\|\theta - \theta'\|_2,$$
where $\Omega = \{v \mid \mathrm{supp}(v) \subseteq (\mathrm{supp}(\theta) \cup \mathrm{supp}(\theta'))\}$.

Assumption 2 guarantees that $\nabla^2\mathcal{L}(\theta)$ is Lipschitz continuous within a neighborhood of $\theta^*$ over the sparse domain. The local restricted Hessian smoothness parallels the local Hessian smoothness used to analyze the proximal Newton method in low dimensions [12]. In our analysis, we set the radius $R := \frac{\rho^-_{s^*+2\tilde{s}}}{2L_{s^*+2\tilde{s}}}$, so that $2R = \frac{\rho^-_{s^*+2\tilde{s}}}{L_{s^*+2\tilde{s}}}$ is the radius of the region, centered at the unique global minimizer of (2), of quadratic convergence of the proximal Newton algorithm.
This is parallel to the radius in low dimensions [12], except that we restrict the parameters over the sparse domain.

Figure 2: An illustrative two dimensional example of restricted strong convexity. $\mathcal{L}(\theta)$ is not strongly convex, but if we restrict $\theta$ to be sparse (black curve), $\mathcal{L}(\theta)$ behaves like a strongly convex function.

The third assumption requires an appropriate choice of $\lambda_{\mathrm{tgt}}$.

Assumption 3. Given the true model parameter $\theta^*$, there exists a generic constant $C_1$ such that $\lambda_{\mathrm{tgt}} = C_1\sqrt{\frac{\log d}{n}} \ge 4\|\nabla\mathcal{L}(\theta^*)\|_\infty$. Moreover, for large enough $n$, we have $\sqrt{s^*}\lambda_{\mathrm{tgt}} \le C_2 R\,\rho^-_{s^*+2\tilde{s}}$.

Assumption 3 guarantees that the regularization is sufficiently large to eliminate irrelevant coordinates, so that the obtained solution is sufficiently sparse [4, 22]. At the same time, $\lambda_{\mathrm{tgt}}$ cannot be too large, which guarantees that the estimator is close enough to the true model parameter. The above assumptions are deterministic; we will verify them under GLM in the statistical analysis. Our last assumption concerns the predefined precision parameter $\varepsilon$.

Assumption 4. For each stage of solving the convex relaxation subproblem (2) for all $K \ge 1$, there exists a generic constant $C_3$ such that $\varepsilon = \frac{C_3\lambda_{\mathrm{tgt}}}{8\sqrt{n}}$.

Assumption 4 guarantees that the output solution $\hat\theta^{\{K\}}$ at each stage $K \ge 1$ has sufficient precision, which is critical for our convergence analysis of the multistage convex relaxation.

3.1 Computational Theory

We first characterize the convergence of the first stage of our proposed DC proximal Newton algorithm, i.e., the warm initialization solving (3).

Theorem 4 (Warm Initialization, $K = 1$). Suppose that Assumptions 1–4 hold.
After sufficiently many iterations $T < \infty$, the following results hold for all $t \ge T$:
$$\|\theta^{(t)} - \theta^*\|_2 \le R \quad \text{and} \quad \mathcal{F}_{\lambda^{\{1\}}}(\theta^{(t)}) \le \mathcal{F}_{\lambda^{\{1\}}}(\theta^*) + \frac{15\lambda_{\mathrm{tgt}}^2 s^*}{4\rho^-_{s^*+2\tilde{s}}},$$
which further guarantee
$$\eta_t = 1, \quad \|\theta^{(t)}_{\bar{S}}\|_0 \le \tilde{s}, \quad \text{and} \quad \|\theta^{(t+1)} - \bar\theta^{\{1\}}\|_2 \le \frac{L_{s^*+2\tilde{s}}}{2\rho^-_{s^*+2\tilde{s}}}\|\theta^{(t)} - \bar\theta^{\{1\}}\|_2^2,$$
where $\bar\theta^{\{1\}}$ is the unique sparse global minimizer of (3) satisfying $\|\bar\theta^{\{1\}}_{\bar{S}}\|_0 \le \tilde{s}$ and $\omega_{\lambda^{\{1\}}}(\bar\theta^{\{1\}}) = 0$. Moreover, we need at most
$$T + \log\log\left(\frac{3\rho^+_{s^*+2\tilde{s}}}{\varepsilon}\right)$$
iterations to terminate the proximal Newton algorithm for the warm initialization (3), where the output solution $\hat\theta^{\{1\}}$ satisfies $\|\hat\theta^{\{1\}}_{\bar{S}}\|_0 \le \tilde{s}$, $\omega_{\lambda^{\{1\}}}(\hat\theta^{\{1\}}) \le \varepsilon$, and
$$\|\hat\theta^{\{1\}} - \theta^*\|_2 \le \frac{18\lambda_{\mathrm{tgt}}\sqrt{s^*}}{\rho^-_{s^*+2\tilde{s}}}.$$

The proof of Theorem 4 is provided in Appendix C.1. Theorem 4 implies: (I) The objective value is sufficiently small after a finite number $T$ of proximal Newton iterations, which further guarantees sparse solutions and good computational performance in all follow-up iterations. (II) The solution enters the ball $\mathcal{B}(\theta^*, R)$ after $T$ iterations; combined with the sparsity of the solution, this guarantees that the solution enters the region of quadratic convergence, so the backtracking line search terminates immediately with $\eta_t = 1$ for all $t \ge T$. (III) The total number of iterations is at most $O(T + \log\log\frac{1}{\varepsilon})$ to achieve the approximate KKT condition $\omega_{\lambda^{\{1\}}}(\theta^{(t)}) \le \varepsilon$, which serves as the stopping criterion of the warm initialization (3).

Given these properties of the output solution $\hat\theta^{\{1\}}$ obtained from the warm initialization, we can further show that our proposed DC proximal Newton algorithm achieves even better computational performance in all follow-up stages (i.e., $K \ge 2$). This is characterized by the following theorem. For notational simplicity, we omit the iteration index $\{K\}$ for the intermediate updates within each stage of the multistage convex relaxation.

Theorem 5 (Stage $K$, $K \ge 2$). Suppose Assumptions 1–4 hold. Then for all iterations $t = 1, 2, \ldots$
within each stage $K \ge 2$, we have $\|\theta^{(t)}_{\bar{S}}\|_0 \le \tilde{s}$ and $\|\theta^{(t)} - \theta^*\|_2 \le R$, which further guarantee
$$\eta_t = 1, \quad \|\theta^{(t+1)} - \bar\theta^{\{K\}}\|_2 \le \frac{L_{s^*+2\tilde{s}}}{2\rho^-_{s^*+2\tilde{s}}}\|\theta^{(t)} - \bar\theta^{\{K\}}\|_2^2, \quad \text{and} \quad \mathcal{F}_{\lambda^{\{K\}}}(\theta^{(t+1)}) < \mathcal{F}_{\lambda^{\{K\}}}(\theta^{(t)}),$$
where $\bar\theta^{\{K\}}$ is the unique sparse global minimizer of (2) at the $K$-th stage satisfying $\|\bar\theta^{\{K\}}_{\bar{S}}\|_0 \le \tilde{s}$ and $\omega_{\lambda^{\{K\}}}(\bar\theta^{\{K\}}) = 0$. Moreover, we need at most
$$\log\log\left(\frac{3\rho^+_{s^*+2\tilde{s}}}{\varepsilon}\right)$$
iterations to terminate the proximal Newton algorithm at the $K$-th stage of convex relaxation (2), where the output solution $\hat\theta^{\{K\}}$ satisfies $\|\hat\theta^{\{K\}}_{\bar{S}}\|_0 \le \tilde{s}$, $\omega_{\lambda^{\{K\}}}(\hat\theta^{\{K\}}) \le \varepsilon$, and
$$\|\hat\theta^{\{K\}} - \theta^*\|_2 \le C_2\left(\|\nabla\mathcal{L}(\theta^*)_S\|_2 + \lambda_{\mathrm{tgt}}\sqrt{\sum_{j \in S}\mathbb{1}(|\theta^*_j| \le \beta\lambda_{\mathrm{tgt}})^2} + \varepsilon\sqrt{s^*}\right) + C_3 \cdot 0.7^{K-1}\|\hat\theta^{\{1\}} - \theta^*\|_2$$
for some generic constants $C_2$ and $C_3$.

The proof of Theorem 5 is provided in Appendix C.2. A geometric interpretation of the local quadratic convergence of our proposed algorithm is given in Figure 3.

Figure 3: A geometric interpretation of local quadratic convergence: the warm initialization enters the region of quadratic convergence after finitely many iterations, and the follow-up stages remain in that region. The final estimator $\hat\theta^{\{\tilde{K}\}}$ has a smaller estimation error than the estimator $\hat\theta^{\{1\}}$ obtained from the convex warm initialization.

For the second and later stages of convex relaxation (2), i.e., $K \ge 2$, Theorem 5 implies: (I) Within each stage, the algorithm maintains a sparse solution throughout all iterations $t \ge 1$. The sparsity further guarantees that the SE property and the restricted Hessian smoothness hold, which are necessary conditions for the fast convergence of the proximal Newton algorithm. (II) The solution stays in the region $\mathcal{B}(\theta^*, R)$ for all $t \ge 1$.
Combined with the sparsity of the solution, this implies that the solution lies in the region of quadratic convergence. Consequently, we only need the step size $\eta_t = 1$, and the objective value decreases monotonically without the sophisticated backtracking line search procedure. Thus, the proximal Newton algorithm enjoys the same fast convergence as in low dimensional optimization problems [12]. (III) With the quadratic convergence rate, the number of iterations is at most $O(\log\log\frac{1}{\varepsilon})$ to attain the approximate KKT condition $\omega_{\lambda^{\{K\}}}(\theta^{(t)}) \le \varepsilon$, which is the stopping criterion at each stage.

3.2 Statistical Theory

Recall that our computational theory relies on deterministic assumptions (Assumptions 1–3). These assumptions, however, involve data sampled from a statistical distribution. We therefore verify in the following lemma that they hold with high probability under a mild data generating process (i.e., GLM) in high dimensions.

Lemma 6. Suppose that the $x_i$'s are i.i.d. samples from a zero-mean distribution with covariance matrix $\mathrm{Cov}(x_i) = \Sigma$ such that $\infty > c_{\max} \ge \Lambda_{\max}(\Sigma) \ge \Lambda_{\min}(\Sigma) \ge c_{\min} > 0$, and for any $v \in \mathbb{R}^d$, $v^\top x_i$ is sub-Gaussian with variance proxy at most $a\|v\|_2^2$, where $c_{\max}$, $c_{\min}$, and $a$ are generic constants. Moreover, for some constant $M > 0$, at least one of the following two conditions holds: (I) the Hessian of the cumulant function is uniformly bounded, $\|\psi''\|_\infty \le M$; or (II) the covariates are bounded, $\|x_i\|_\infty \le 1$, and $\mathbb{E}\big[\max_{|u| \le 1}[\psi''(x^\top\theta^* + u)]^p\big] \le M$ for some $p > 2$. Then Assumptions 1–3 hold with high probability.

The proof of Lemma 6 is provided in Appendix F. Given that these assumptions hold with high probability, the proximal Newton algorithm attains quadratic convergence within each stage of the convex relaxation with high probability. We now establish the statistical rate of convergence of the obtained estimator in parameter estimation.

Theorem 7.
Suppose the observations are generated from a GLM satisfying the conditions of Lemma 6, $n$ is large enough that $n \ge C_4 s^*\log d$, and $\beta = C_5/c_{\min}$ is the constant defined in Section 2, for generic constants $C_4$ and $C_5$. Then with high probability, the output solution $\hat\theta^{\{K\}}$ satisfies
$$\|\hat\theta^{\{K\}} - \theta^*\|_2 \le C_6\left(\sqrt{\frac{s^*}{n}} + \sqrt{\frac{s'\log d}{n}}\right) + C_7 \cdot 0.7^K\left(\sqrt{\frac{s^*\log d}{n}}\right)$$
for generic constants $C_6$ and $C_7$, where $s' = \sum_{j \in S}\mathbb{1}(|\theta^*_j| \le \beta\lambda_{\mathrm{tgt}})$.

Theorem 7 is a direct consequence of Theorem 5 combined with the analysis in [40]. As can be seen, $s'$ is essentially the number of nonzero $\theta_j$'s with magnitude smaller than $\beta\lambda_{\mathrm{tgt}}$, often regarded as "weak" signals. Theorem 7 essentially implies that by exploiting the multistage convex relaxation framework, our proposed DC proximal Newton algorithm gradually reduces the estimation bias for "strong" signals, and eventually obtains an estimator with better statistical properties than the $\ell_1$-regularized estimator. Specifically, let $\tilde{K}$ be the smallest integer such that after $\tilde{K}$ stages of convex relaxation we have
$$C_7 \cdot 0.7^{\tilde{K}}\left(\sqrt{\frac{s^*\log d}{n}}\right) \le C_6\max\left\{\sqrt{\frac{s^*}{n}},\ \sqrt{\frac{s'\log d}{n}}\right\},$$
which is equivalent to requiring $\tilde{K} = O(\log\log d)$. This implies that the total number of proximal Newton updates is at most $O\big((T + \log\log\frac{1}{\varepsilon}) \cdot (1 + \log\log d)\big)$. In addition, the obtained estimator attains the optimal statistical rate in parameter estimation:
$$\|\hat\theta^{\{\tilde{K}\}} - \theta^*\|_2 \le O_P\left(\sqrt{\frac{s^*}{n}} + \sqrt{\frac{s'\log d}{n}}\right) \quad \text{v.s.} \quad \|\hat\theta^{\{1\}} - \theta^*\|_2 \le O_P\left(\sqrt{\frac{s^*\log d}{n}}\right). \qquad (6)$$
Recall that $\hat\theta^{\{1\}}$ is obtained by the warm initialization (3). As illustrated in Figure 3, the statistical rate in (6) for $\|\hat\theta^{\{\tilde{K}\}} - \theta^*\|_2$, obtained from the multistage convex relaxation of the nonconvex regularized problem (1), is a significant improvement over $\|\hat\theta^{\{1\}} - \theta^*\|_2$ from the convex problem (3). In particular, when $s'$ is small, i.e., most nonzero $\theta_j$'s are strong signals, our result approaches the oracle bound³ $O_P\big(\sqrt{\frac{s^*}{n}}\big)$ [8], as illustrated in Figure 4.
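The bias-reduction mechanism behind Theorem 7 is visible in a toy case. Under an orthogonal design, each coordinate of the stage-1 lasso solution is a soft-threshold, biased downward by $\lambda_{\mathrm{tgt}}$; the stage-2 reweighting removes the penalty on strong signals, and the bias with it. The sketch below is illustrative only (a two-stage toy, not the paper's algorithm):

```python
import numpy as np

def soft(z, lam):
    """Entrywise soft-thresholding, the prox of lam*|.| (lam may be a vector)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Orthogonal design: each coordinate solves 0.5*(z_j - t)^2 + lam_j*|t| separately.
z = np.array([2.0, 1.5, 0.05])     # noiseless observations of theta*
lam_tgt, beta = 0.4, 1.0

theta1 = soft(z, lam_tgt)          # stage 1 (lasso): strong signals biased by lam_tgt
lam2 = lam_tgt * (np.abs(theta1) <= beta * lam_tgt)   # reweight: strong signals unpenalized
theta2 = soft(z, lam2)             # stage 2: bias on strong signals removed
```

Here `theta1` shrinks the two strong coordinates by $0.4$ each, while `theta2` recovers them exactly and still zeroes out the weak coordinate.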
4 Experiments

We compare our DC proximal Newton (DC+PN) algorithm with two competing algorithms for solving the nonconvex regularized sparse logistic regression problem: the accelerated proximal gradient algorithm (APG) implemented in the SPArse Modeling Software (SPAMS, coded in C++ [18]), and the accelerated coordinate descent (ACD) algorithm implemented in the R package gcdnet (coded in Fortran [36]). We further optimize the active-set strategy in gcdnet to boost its computational performance. To integrate these two algorithms with the multistage convex relaxation framework, we revise their source code. To further boost computational efficiency at each stage of the convex relaxation, we apply pathwise optimization [10] for all algorithms in practice. Specifically, at each stage we use a geometrically decreasing sequence of regularization parameters $\{\lambda_{[m]} = \alpha^m\lambda_{[0]}\}_{m=1}^M$, where $\lambda_{[0]}$ is the smallest value such that the corresponding solution is zero, $\alpha \in (0, 1)$ is a shrinkage parameter, and $\lambda_{\mathrm{tgt}} = \lambda_{[M]}$. For each $\lambda_{[m]}$, we apply the corresponding algorithm (DC+PN, DC+APG, or DC+ACD) to solve the nonconvex regularized problem (1). Moreover, we initialize the solution for a new regularization parameter $\lambda_{[m+1]}$ using the output solution obtained with $\lambda_{[m]}$. Such a pathwise optimization scheme has achieved tremendous success in practice [10, 15, 42]; we refer to [43] for a detailed discussion of pathwise optimization.

Figure 4: An illustration of the statistical rates of convergence in parameter estimation (estimation error $\|\hat\theta^{\{\tilde{K}\}} - \theta^*\|_2$ versus the percentage of strong signals $\frac{s^*-s'}{s^*}$). Our obtained estimator's error bound, $O_P\big(\sqrt{\frac{s^*}{n}} + \sqrt{\frac{s'\log d}{n}}\big)$, lies between the oracle bound $O_P\big(\sqrt{\frac{s^*}{n}}\big)$ and the slow (convex) bound $O_P\big(\sqrt{\frac{s^*\log d}{n}}\big)$ in general. As the percentage of strong signals increases, i.e., $s'$ decreases, our result approaches the oracle bound.
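The geometric regularization path can be sketched as follows for logistic loss. The formula for $\lambda_{[0]}$ is the standard gradient-at-zero bound (the $\ell_\infty$ norm of the loss gradient at $\theta = 0$, above which the all-zero solution is optimal); names are ours, for illustration:

```python
import numpy as np

def lambda_path(X, y, lam_tgt, M=20):
    """Geometric sequence lam[0] > lam[1] > ... > lam[M-1] = lam_tgt.
    For logistic loss with y in {0,1}, the gradient at theta = 0 is
    X'(1/2 - y)/n, so lam0 = ||X'(y - 1/2)||_inf / n zeroes the solution."""
    n = len(y)
    lam0 = np.max(np.abs(X.T @ (y - 0.5))) / n
    alpha = (lam_tgt / lam0) ** (1.0 / (M - 1))   # shrinkage parameter in (0, 1)
    return lam0 * alpha ** np.arange(M)
```

Each $\lambda_{[m+1]}$ is then solved starting from the solution at $\lambda_{[m]}$, so every subproblem begins near its optimum.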
Our comparison covers two real datasets, "madelon" ($n = 2000$, $d = 500$, [11]) and "gisette" ($n = 2000$, $d = 5000$, [11]), and three simulated datasets, "sim_1k" ($d = 1000$), "sim_5k" ($d = 5000$), and "sim_10k" ($d = 10000$), each with sample size $n = 1000$. We set $\lambda_{\mathrm{tgt}} = 0.25\sqrt{\log d / n}$ and $\beta = 0.2$ in all settings. We generate each row of the design matrix $X$ independently from a $d$-dimensional normal distribution $N(0, \Sigma)$, where $\Sigma_{jk} = 0.5^{|j-k|}$ for $j, k = 1, \ldots, d$. We generate $y \sim \mathrm{Bernoulli}(1/[1 + \exp(-X\theta^*)])$, where $\theta^*$ has all zero entries except 20 randomly selected entries, which are independently sampled from $\mathrm{Uniform}(0, 1)$. The stopping criteria of all algorithms are tuned so that they attain similar optimization errors. All three algorithms are compared in wall clock time. Our DC+PN algorithm is implemented in C with double precision and called from R through a wrapper. All experiments are performed on a computer with a 2.6GHz Intel Core i7 and 16GB RAM. For each algorithm and dataset, we run the algorithm 10 times and report the average and standard deviation of the wall clock time in Table 1. As can be seen, our DC+PN algorithm significantly outperforms the competing algorithms. We remark that as $d$ increases, the advantage of DC+PN over DC+ACD becomes less pronounced, since the Newton method is more sensitive to ill-conditioned problems; this can be mitigated by using a denser sequence of $\{\lambda_{[m]}\}$ along the solution path. We then illustrate the quadratic convergence of our DC+PN algorithm within each stage of the convex relaxation using the simulated data. Specifically, we plot the gap to the optimal objective value $\mathcal{F}_{\lambda^{\{K\}}}(\bar\theta^{\{K\}})$ of the $K$-th stage versus the wall clock time in Figure 5.

³The oracle bound assumes that we know in advance which variables are relevant. It is not a realistic bound, and is shown only for comparison.
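The simulated designs above can be generated as follows (a sketch; dimensions shrunk from the paper's $n = 1000$, $d \ge 1000$ for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 200, 50, 5   # shrunk; the paper uses n = 1000, d in {1000, 5000, 10000}, s = 20

# AR(1)-style covariance: Sigma_jk = 0.5^{|j-k|}
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# sparse truth: s nonzero entries drawn from Uniform(0, 1)
theta_star = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
theta_star[support] = rng.uniform(0.0, 1.0, size=s)

# logistic responses: y ~ Bernoulli(1 / (1 + exp(-X theta*)))
p = 1.0 / (1.0 + np.exp(-X @ theta_star))
y = (rng.random(n) < p).astype(float)
```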
We see that our proposed DC proximal Newton algorithm achieves quadratic convergence, which is consistent with our theory.

Table 1: Timing comparison for nonconvex regularized sparse logistic regression. Averages and standard deviations (in parentheses) of the wall clock time (in seconds) over 10 random trials are presented.

              madelon          gisette          sim_1k           sim_5k           sim_10k
  DC+PN       1.51(±0.01)s     5.35(±0.11)s     1.07(±0.02)s     4.53(±0.06)s     8.82(±0.04)s
              obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01
  DC+ACD      5.83(±0.03)s     18.92(±2.25)s    9.46(±0.09)s     16.20(±0.24)s    19.1(±0.56)s
              obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01
  DC+APG      1.60(±0.03)s     207(±2.25)s      17.8(±1.23)s     111(±1.28)s      222(±5.79)s
              obj value: 0.52  obj value: 0.01  obj value: 0.01  obj value: 0.01  obj value: 0.01

Figure 5: Timing comparisons in wall clock time for (a) simulated data and (b) gisette data. Our proposed DC proximal Newton algorithm demonstrates quadratic convergence and significantly outperforms the DC proximal gradient algorithm.

5 Discussions

We provide further discussion of the superior performance of our DC proximal Newton algorithm. Existing multistage convex relaxation-based first-order algorithms have two major drawbacks: (I) First-order algorithms incur significant computational overhead in each iteration. For example, for GLMs, computing gradients requires frequently evaluating the cumulant function and its derivatives. This involves extensive non-arithmetic operations such as the log(·) and exp(·) functions, which naturally appear in the cumulant function and its derivatives and are computationally expensive. To the best of our knowledge, even with efficient numerical methods for computing exp(·) [28, 19], the computation still requires at least 10–30 times more CPU cycles than basic arithmetic operations, e.g., multiplications.
Our proposed DC proximal Newton algorithm also cannot avoid evaluating the cumulant function and its derivatives when computing quadratic approximations; the total computation, however, is much lighter, since the convergence is quadratic. (II) First-order algorithms are computationally expensive in the step size selection. Although for certain GLMs, e.g., sparse logistic regression, one can choose the step size $\eta = 4\Lambda_{\max}^{-1}\big(\frac{1}{n}\sum_{i=1}^n x_i x_i^\top\big)$, such a step size often leads to poor empirical performance. In contrast, as our theoretical analysis and experiments suggest, the proposed DC proximal Newton algorithm needs very few line search steps, which saves considerable computational effort.

Some recent works on proximal Newton or inexact proximal Newton methods also establish local quadratic convergence guarantees [37, 38]. However, their conditions are much more stringent than the SE property in terms of the dependence on the problem dimension: their quadratic convergence can only be guaranteed in a much smaller neighborhood. For example, the constant nullspace strong convexity in [37], which plays the role of the smallest sparse eigenvalue $\rho^-_{s^*+2\tilde{s}}$ in our analysis, can be as small as $\frac{1}{d}$, whereas $\rho^-_{s^*+2\tilde{s}}$ can be (almost) independent of $d$ in our setting [6]. Therefore, instead of a constant radius as in our analysis, they can only guarantee quadratic convergence in a region with radius $O(\frac{1}{d})$, which is very small in high dimensions. A similar issue arises in [38], where the region of quadratic convergence is also too small.

References

[1] Larry Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
[2] Amir Beck and Marc Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419–2434, 2009.
[3] Alexandre Belloni, Victor Chernozhukov, and Lie Wang.
Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4):791–806, 2011.
[4] Peter J Bickel, Yaacov Ritov, and Alexandre B Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] Peter Bühlmann and Sara Van De Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media, 2011.
[7] Ani Eloyan, John Muschelli, Mary Beth Nebel, Han Liu, Fang Han, Tuo Zhao, Anita D Barber, Suresh Joel, James J Pekar, Stewart H Mostofsky, et al. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging. Frontiers in Systems Neuroscience, 6, 2012.
[8] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[9] Jianqing Fan, Han Liu, Qiang Sun, and Tong Zhang. TAC for sparse learning: Simultaneous control of algorithmic complexity and statistical error. arXiv preprint arXiv:1507.01037, 2015.
[10] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[11] Isabelle Guyon, Steve Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural Information Processing Systems, pages 545–552, 2005.
[12] Jason D Lee, Yuekai Sun, and Michael A Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420–1443, 2014.
[13] Xingguo Li, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, and Tuo Zhao. A first order free lunch for SQRT-Lasso. arXiv preprint arXiv:1605.07950, 2016.
[14] Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt.
Stochastic variance reduced optimization for nonconvex sparse learning. In International Conference on Machine Learning, pages 917–925, 2016.
[15] Xingguo Li, Tuo Zhao, Tong Zhang, and Han Liu. The picasso package for nonconvex regularized M-estimation in high dimensions in R. Technical report, 2015.
[16] Po-Ling Loh and Martin J Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 2015. To appear.
[17] Zhi-Quan Luo and Paul Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[18] Julien Mairal, Francis Bach, Jean Ponce, et al. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 8(2-3):85–283, 2014.
[19] A Cristiano I Malossi, Yves Ineichen, Costas Bekas, and Alessandro Curioni. Fast exponential computation on SIMD architectures. Proc. of HiPEAC-WAPCO, Amsterdam, NL, 2015.
[20] Peter McCullagh. Generalized linear models. European Journal of Operational Research, 16(3):285–292, 1984.
[21] Benjamin M Neale, Yan Kou, Li Liu, Avi Ma'Ayan, Kaitlin E Samocha, Aniko Sabo, Chiao-Feng Lin, Christine Stevens, Li-San Wang, Vladimir Makarov, et al. Patterns and rates of exonic de novo mutations in autism spectrum disorders. Nature, 485(7397):242–245, 2012.
[22] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[23] Yu Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[24] Yang Ning, Tianqi Zhao, and Han Liu. A likelihood ratio framework for high dimensional semiparametric regression. arXiv preprint arXiv:1412.2295, 2014.
[25] Johann Pfanzagl. Parametric Statistical Theory. Walter de Gruyter, 1994.
[26] Maxim Raginsky, Rebecca M Willett, Zachary T Harmany, and Roummel F Marcia. Compressed sensing performance bounds under poisson noise. IEEE Transactions on Signal Processing, 58(8):3990–4002, 2010. [27] Garvesh Raskutti, Martin J Wainwright, and Bin Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11(8):2241–2259, 2010. [28] Nicol N Schraudolph. A fast, compact approximation of the exponential function. Neural Computation, 11(4):853–862, 1999. [29] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for `1-regularized loss minimization. Journal of Machine Learning Research, 12:1865–1892, 2011. [30] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996. [31] Robert Tibshirani, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, and Ryan J Tibshirani. Strong rules for discarding predictors in Lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):245–266, 2012. [32] Sara A van de Geer. High-dimensional generalized linear models and the Lasso. The Annals of Statistics, 36(2):614–645, 2008. [33] Sara A van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360–1392, 2009. [34] Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014. [35] Lin Xiao and Tong Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062–1091, 2013. [36] Yi Yang and Hui Zou. An efficient algorithm for computing the hhsvm and its generalizations. Journal of Computational and Graphical Statistics, 22(2):396–415, 2013. 
[37] Ian En-Hsu Yen, Cho-Jui Hsieh, Pradeep K Ravikumar, and Inderjit S Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In Advances in Neural Information Processing Systems, pages 1008–1016, 2014. [38] Man-Chung Yue, Zirui Zhou, and Anthony Man-Cho So. Inexact regularized proximal newton method: provable convergence guarantees for non-smooth convex minimization without strong convexity. arXiv preprint arXiv:1605.07522, 2016. [39] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010. [40] Tong Zhang. Analysis of multi-stage convex relaxation for sparse regularization. Journal of Machine Learning Research, 11:1081–1107, 2010. [41] Tong Zhang et al. Multi-stage convex relaxation for feature selection. Bernoulli, 19(5B):2277–2293, 2013. [42] Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, and Larry Wasserman. The huge package for highdimensional undirected graph estimation in R. Journal of Machine Learning Research, 13:1059–1062, 2012. [43] Tuo Zhao, Han Liu, and Tong Zhang. Pathwise coordinate optimization for sparse learning: Algorithm and theory. arXiv preprint arXiv:1412.7477, 2014. [44] Shuheng Zhou. Restricted eigenvalue conditions on subgaussian random matrices. arXiv preprint arXiv:0912.4045, 2009. 11 | 2017 | 145 |
Hypothesis Transfer Learning via Transformation Functions

Simon S. Du (Carnegie Mellon University, ssdu@cs.cmu.edu), Jayanth Koushik (Carnegie Mellon University, jayanthkoushik@cmu.edu), Aarti Singh (Carnegie Mellon University, aartisingh@cmu.edu), Barnabás Póczos (Carnegie Mellon University, bapoczos@cs.cmu.edu)

Abstract

We consider the Hypothesis Transfer Learning (HTL) problem, in which a hypothesis trained on the source domain is incorporated into the learning procedure of the target domain. Existing theoretical analyses either study only specific algorithms or present upper bounds on the generalization error but not on the excess risk. In this paper, we propose a unified, algorithm-dependent framework for HTL through a novel notion of transformation function, which characterizes the relation between the source and the target domains. We conduct a general risk analysis of this framework and, in particular, show for the first time that if two domains are related, HTL enjoys faster convergence rates of the excess risk for Kernel Smoothing and Kernel Ridge Regression than the classical non-transfer learning settings. Experiments on real world data demonstrate the effectiveness of our framework.

1 Introduction

In a classical transfer learning setting, we have a large amount of data from a source domain and a relatively small amount of data from a target domain. These two domains are related but not necessarily identical, and the usual assumption is that the hypothesis learned from the source domain is useful in the learning task of the target domain. In this paper, we focus on the regression problem where the functions we want to estimate in the source and the target domains are different but related. Figure 1a shows a 1D toy example of this setting, where the source function is f^so(x) = sin(4πx) and the target function is f^ta(x) = sin(4πx) + 4πx. Many real world problems can be formulated as transfer learning problems.
For example, in the task of predicting the reaction time of an individual from his/her fMRI images, we have about 30 subjects but each subject has only about 100 data points. To learn the mapping from neural images to the reaction time of a specific subject, we can treat all but this subject as the source domain, and this subject as the target domain. In Section 6, we show how our proposed method helps us learn this mapping more accurately. This paradigm, hypothesis transfer learning (HTL), has been explored empirically with success in many applications [Fei-Fei et al., 2006, Yang et al., 2007, Orabona et al., 2009, Tommasi et al., 2010, Kuzborskij et al., 2013, Wang and Schneider, 2014]. Kuzborskij and Orabona [2013, 2016] pioneered the theoretical analysis of HTL for linear regression, and recently Wang and Schneider [2015] analyzed Kernel Ridge Regression. However, most existing works only provide generalization bounds, i.e., the difference between the true risk and the training error or the leave-one-out error. These analyses are not complete because minimizing the generalization error does not necessarily reduce the true risk. Further, these works often rely on a particular form of transformation from the source domain to the target domain.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1: Experimental results on synthetic data. (a) A toy example of transfer learning: we have many more samples from the source domain than from the target domain. (b) Transfer learning with the Offset Transformation. (c) Transfer learning with the Scale Transformation.]
For example, Wang and Schneider [2015] studied the offset transformation: instead of estimating the target domain function directly, they learn the residual between the target domain function and the source domain function. It is natural to ask what happens if we use other transformation functions, and how the choice affects the risk on the target domain. In this paper, we propose a general framework for HTL. Instead of analyzing a specific form of transfer, we treat it as an input of our learning algorithm. We call this input a transformation function since, intuitively, it captures the relevance between the two domains.¹ This framework unifies many previous works [Wang and Schneider, 2014, Kuzborskij and Orabona, 2013, Wang et al., 2016] and naturally induces a class of new learning procedures. Theoretically, we develop an excess risk analysis for this framework. The performance depends on the stability [Bousquet and Elisseeff, 2002] of the algorithm used as a subroutine: if the algorithm is stable, then the estimation error in the source domain will not much affect the estimation in the target domain. To our knowledge, this connection was first established by Kuzborskij et al. [2013] in the linear regression setting, but here we generalize it to a broader context. In particular, we provide explicit risk bounds for two widely used nonlinear estimators, Kernel Smoothing (KS) and Kernel Ridge Regression (KRR), as subroutines. To the best of our knowledge, these are the first results showing that when two domains are related, transfer learning achieves a faster statistical convergence rate of the excess risk than non-transfer learning for kernel-based methods. Further, we accompany this framework with a theoretical analysis showing that a small amount of data for cross-validation enables us to (1) avoid using HTL when it is not useful and (2) choose the best transformation function from a large pool. The rest of the paper is organized as follows.
In Section 2 we introduce HTL and provide the necessary background on KS and KRR. We formalize our transformation-function-based framework in Section 3. Our main theoretical results are in Section 4; specifically, in Sections 4.1 and 4.2 we provide explicit risk bounds for KS and KRR, respectively. In Section 5 we analyze cross-validation in the HTL setting, and in Section 6 we conduct experiments on real world data. We conclude with a brief discussion of avenues for future work.

2 Preliminaries

2.1 Problem Setup

In this paper, we assume both X ∈ R^d and Y ∈ R lie in compact subsets: ||X||_2 ≤ X̄ and |Y| ≤ Ȳ for some X̄, Ȳ ∈ R_+. Throughout the paper, we use T = {(X_i, Y_i)}_{i=1}^n to denote a set of samples. Let (X^so, Y^so) be a sample from the source domain, and (X^ta, Y^ta) a sample from the target domain. In our setting, there are n_so samples drawn i.i.d. from the source distribution, T^so = {(X^so_i, Y^so_i)}_{i=1}^{n_so}, and n_ta samples drawn i.i.d. from the target distribution, T^ta = {(X^ta_i, Y^ta_i)}_{i=1}^{n_ta}. In addition, we use n_val samples drawn i.i.d. from the target domain for cross-validation. We model the joint relation between X and Y by

Y^so = f^so(X^so) + ε^so and Y^ta = f^ta(X^ta) + ε^ta,

where f^so and f^ta are regression functions and the noise terms are i.i.d. and bounded with E[ε^so] = E[ε^ta] = 0. We use A : T → f̂ to denote an algorithm that takes a set of samples and produces an estimator. Given an estimator f̂, we define the integrated L2 risk as

R(f̂) = E[(f̂(X) − Y)²],

where the expectation is taken over the distribution of (X, Y). Similarly, the empirical L2 risk on a set of samples T is defined as

R̂(f̂) = (1/n) Σ_{i=1}^n (Y_i − f̂(X_i))².

In the HTL setting, we use f̂^so, an estimator from the source domain, to facilitate the learning procedure for f^ta.

¹We formally define the transformation functions in Section 3.
2.2 Kernel Smoothing

We say a function f is in the (λ, α) Hölder class [Wasserman, 2006] if for any x, x′ ∈ R^d, f satisfies |f(x) − f(x′)| ≤ λ||x − x′||_2^α for some α ∈ (0, 1). The kernel smoothing method uses a kernel K that is positive on [0, 1], highest at 0, decreasing on [0, 1], zero outside [0, 1], and satisfies ∫ u²K(u) du < ∞. Given T = {(X_i, Y_i)}_{i=1}^n, the kernel smoothing estimator is

f̂(x) = Σ_{i=1}^n w_i(x) Y_i, where w_i(x) = K(||x − X_i||/h) / Σ_{j=1}^n K(||x − X_j||/h) ∈ [0, 1].

2.3 Kernel Ridge Regression

Another popular non-linear estimator is kernel ridge regression (KRR), which uses the theory of reproducing kernel Hilbert spaces (RKHS) for regression [Vovk, 2013]. Any symmetric positive semidefinite kernel function K : R^d × R^d → R defines an RKHS H. For each x ∈ R^d, the function z ↦ K(z, x) is contained in the Hilbert space H; moreover, the Hilbert space is endowed with an inner product ⟨·, ·⟩_H such that K(·, x) acts as the kernel of the evaluation functional, meaning ⟨f, K(x, ·)⟩_H = f(x) for f ∈ H. In this paper we assume K is bounded: sup_{x∈R^d} K(x, x) = k < ∞. Given the inner product, the H-norm of a function g ∈ H is defined as ||g||_H = √⟨g, g⟩_H, and similarly the L2 norm is ||g||_2 = (∫_{R^d} g(x)² dP_X)^{1/2} for a given P_X. The kernel also induces an integral operator T_K : L2(P_X) → L2(P_X),

T_K[f](x) = ∫_{R^d} K(x, x′) f(x′) dP_X(x′),

with countably many non-zero eigenvalues {µ_i}_{i≥1}. For a given function f, the approximation error is defined as A_f(λ) = inf_{h∈H} ||h − f||²_{L2(P_X)} + λ||h||²_H for λ ≥ 0. Finally, the estimated function evaluated at a point x can be written as

f̂(x) = K(X, x)ᵀ (K(X, X) + nλI)^{−1} Y,

where X ∈ R^{n×d} are the training inputs and Y ∈ R^{n×1} the training labels [Vovk, 2013].

2.4 Related work

Before we present our framework, it is helpful to give a brief overview of the existing literature on theoretical analysis of transfer learning.
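As a concrete illustration, the two estimators above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code: the particular kernels (a triangular kernel for KS, an RBF kernel for KRR) and all function names are our own assumptions.

```python
import numpy as np

def kernel_smooth(X_train, y_train, X_query, h):
    """Kernel smoothing: f_hat(x) = sum_i w_i(x) Y_i with
    w_i(x) = K(||x - X_i|| / h) / sum_j K(||x - X_j|| / h).
    Here K(u) = max(1 - u, 0): positive and decreasing on [0, 1), zero outside."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    K = np.maximum(1.0 - d / h, 0.0)
    W = K / np.maximum(K.sum(axis=1, keepdims=True), 1e-12)  # weights sum to 1
    return W @ y_train

def krr_fit(X_train, y_train, lam, gamma=1.0):
    """Kernel ridge regression with an RBF kernel; returns a predictor
    implementing f_hat(x) = K(X, x)^T (K(X, X) + n*lam*I)^{-1} Y."""
    n = len(y_train)
    rbf = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
    alpha = np.linalg.solve(rbf(X_train, X_train) + n * lam * np.eye(n), y_train)
    return lambda X_query: rbf(X_query, X_train) @ alpha
```

With a reasonable bandwidth h and regularization λ, both estimators recover a smooth function from noisy samples; the rates in Theorems 2 and 3 below prescribe how these tuning parameters should scale with the sample sizes.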
Many previous works focus on settings where only unlabeled data from the target domain are available [Huang et al., 2006, Sugiyama et al., 2008, Yu and Szepesvári, 2012]. In particular, a line of research has been established based on distribution discrepancy, a loss-induced metric between the source and target distributions [Mansour et al., 2009, Ben-David et al., 2007, Blitzer et al., 2008, Cortes and Mohri, 2011, Mohri and Medina, 2012]. For example, Cortes and Mohri [2014] recently gave generalization bounds for kernel-based methods under convex losses in terms of the discrepancy. In many real world applications, such as yield prediction from pictures [Nuske et al., 2014] or prediction of response time from fMRI [Verstynen, 2014], some labeled data from the target domain is also available. Cortes et al. [2015] used such data to improve their discrepancy minimization algorithm. Zhang et al. [2013] focused on modeling target shift (P(Y) changes), conditional shift (P(X|Y) changes), and a combination of both. Recently, Wang and Schneider [2014] proposed a kernel mean embedding method to match the conditional probability in the kernel space and later derived a generalization bound for this problem [Wang and Schneider, 2015]. Kuzborskij and Orabona [2013, 2016] and Kuzborskij et al. [2016] gave excess risk bounds for a target domain estimator in the form of a linear combination of estimators from multiple source domains plus an additional linear function. Ben-David and Urner [2013] showed a similar bound in the same setting with different quantities capturing the relatedness. Wang et al. [2016] showed that if the features of the source and target domains lie in [0, 1]^d, then using an orthonormal basis function estimator, transfer learning achieves a better excess risk guarantee whenever f^ta − f^so can be approximated by the basis functions more easily than f^ta. Their work can be viewed as a special case of our framework with the transformation function G(a, b) = a + b.
3 Transformation Functions

In this section, we first define our class of models and give a meta-algorithm to learn the target regression function. Our models are based on the idea that transfer learning is helpful when one transforms the target domain regression problem into a simpler regression problem using source domain knowledge. Consider the following example.

Example: Offset Transfer. Let f^so(x) = x(1 − x) sin(2.1π/(x + 0.05)) and f^ta(x) = f^so(x) + x. f^so is the so-called Doppler function. It requires a large number of samples to estimate well because of its lack of smoothness [Wasserman, 2006]. For the same reason, f^ta is also difficult to estimate directly. However, if we have enough data from the source domain, we can obtain a fairly good estimate of f^so. Further, notice that the offset function w(x) = f^ta(x) − f^so(x) = x is just a linear function. Thus, instead of directly using T^ta to estimate f^ta, we can use the target domain samples to estimate w(x), obtaining ŵ(x), and our estimator for the target domain is simply f̂^ta(x) = f̂^so(x) + ŵ(x). Figure 1b shows that this technique gives an improved fit for f^ta.

The previous example exploits the fact that the function w(x) = f^ta(x) − f^so(x) is simpler than f^ta. We now generalize this idea. Formally, we define a transformation function as G(a, b) : R² → R, where we assume that given a ∈ R, G(a, ·) is invertible. Here a will be the regression function of the source domain evaluated at some point, and the output of G will be the regression function of the target domain evaluated at the same point. Let G^{−1}_a(·) denote the inverse of G(a, ·), so that G(a, G^{−1}_a(c)) = c. For example, if G(a, b) = a + b, then G^{−1}_a(c) = c − a. A given G together with a pair (f^so, f^ta) induces the function w_G(x) = G^{−1}_{f^so(x)}(f^ta(x)). In the offset transfer example, w_G(x) = x. By this definition, for any x, we have G(f^so(x), w_G(x)) = f^ta(x).
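To make the offset example concrete, the following toy sketch (our own illustration, not the authors' code; a simple boxcar smoother and ordinary least squares stand in for the sub-learners) estimates f^so from plentiful source data and then fits the linear offset from only 20 target points:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_so(x):  # Doppler-like source function: hard to estimate from few samples
    return x * (1 - x) * np.sin(2.1 * np.pi / (x + 0.05))

def f_ta(x):  # target function = source function + the simple linear offset w(x) = x
    return f_so(x) + x

# plentiful source data, scarce target data
Xs = rng.uniform(0, 1, 2000); Ys = f_so(Xs) + 0.05 * rng.normal(size=2000)
Xt = rng.uniform(0, 1, 20);   Yt = f_ta(Xt) + 0.05 * rng.normal(size=20)

def smooth(X_train, y_train, x_query, h=0.02):
    """Simple boxcar kernel smoother (stand-in for any source-domain learner)."""
    W = (np.abs(np.asarray(x_query)[:, None] - X_train[None, :]) < h).astype(float)
    return (W @ y_train) / np.maximum(W.sum(axis=1), 1.0)

fso_hat = lambda x: smooth(Xs, Ys, x)            # step 1: estimate f^so from source data
residuals = Yt - fso_hat(Xt)                     # step 2: residuals approximate w(x) = x
slope, intercept = np.polyfit(Xt, residuals, 1)  # fit the (linear) offset by OLS
fta_hat = lambda x: fso_hat(x) + slope * x + intercept  # step 3: combine via G(a,b) = a + b
```

Because the offset w(x) = f^ta(x) − f^so(x) = x is linear, 20 target points suffice to fit it well, even though estimating f^ta directly would need far more.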
We call w_G the auxiliary function of the transformation function G. In the HTL setting, G is a user-defined transformation that represents the user's prior knowledge of the relation between the source and target domains. We now list some other examples.

Example: Scale Transfer. Consider G(a, b) = ab. This transformation function is useful when f^so and f^ta are related by a smooth scaling. For example, if f^ta = c f^so for some constant c, then w_G(x) = c because f^ta(x) = G(f^so(x), w_G(x)) = f^so(x) w_G(x) = f^so(x) c. See Figure 1c.

Example: Non-Transfer. Consider G(a, b) = b. Notice that f^ta(x) = w_G(x), so f^so is irrelevant. This model is equivalent to traditional regression on the target domain, since data from the source domain does not help.

3.1 A Meta Algorithm

Given the transformation G and data, we provide a general procedure to estimate f^ta. The spirit of the algorithm is to turn learning a complex function f^ta into learning an easier function w_G. First, we use an algorithm A^so that takes T^so to obtain f̂^so. Since we have sufficient data from the source domain, f̂^so should be close to the true regression function f^so. Second, we construct a new data set from the n_ta data points of the target domain:

T^{w_G} = {(X^ta_i, H_G(f̂^so(X^ta_i), Y^ta_i))}_{i=1}^{n_ta},

where H_G : R² → R satisfies

E[H_G(f^so(X^ta_i), Y^ta_i)] = G^{−1}_{f^so(X^ta_i)}(f^ta(X^ta_i)) = w_G(X^ta_i),

with the expectation taken over ε^ta. We can then use this newly constructed data to learn w_G with an algorithm A^{w_G}: ŵ_G = A^{w_G}(T^{w_G}). Finally, we plug the trained f̂^so and ŵ_G into the transformation G to obtain an estimator for f^ta: f̂^ta(X) = G(f̂^so(X), ŵ_G(X)). Pseudocode is shown in Algorithm 1.

Unbiased Estimator H_G(f^so(X^ta), Y^ta): Algorithm 1 requires an unbiased estimator of w_G(X^ta). Note that if G(a, b) is linear in b or ε^ta = 0, we can simply set H_G(f^so(X), Y) = G^{−1}_{f^so(X)}(Y). In other scenarios, G^{−1}_{f^so(X^ta_i)}(Y^ta_i) is biased, i.e., E[G^{−1}_{f^so(X^ta_i)}(Y^ta_i)] ≠ G^{−1}_{f^so(X^ta_i)}(f^ta(X^ta_i)), and we need to design an estimator using the structure of G.

Algorithm 1 Transformation Function based Transfer Learning
Inputs: source domain data T^so = {(X^so_i, Y^so_i)}_{i=1}^{n_so}; target domain data T^ta = {(X^ta_i, Y^ta_i)}_{i=1}^{n_ta}; transformation function G; algorithm A^so to train f^so; algorithm A^{w_G} to train w_G; unbiased estimator H_G for estimating w_G.
Output: regression function f̂^ta for the target domain.
1: Train the source domain regression function f̂^so = A^so(T^so).
2: Construct new data using f̂^so and T^ta: T^{w_G} = {(X^ta_i, W_i)}_{i=1}^{n_ta}, where W_i = H_G(f̂^so(X^ta_i), Y^ta_i).
3: Train the auxiliary function ŵ_G = A^{w_G}(T^{w_G}).
4: Output the estimated regression function for the target domain: f̂^ta(X) = G(f̂^so(X), ŵ_G(X)).

Remark 1: Many transformation functions are equivalent to a transformation function G′(a, b′) that is linear in b′. For example, for G(a, b) = ab², i.e., f^ta(x) = f^so(x) w_G²(x), consider G′(a, b′) = ab′ where b′ in G′ stands for b² in G, i.e., f^ta(x) = f^so(x) w_{G′}(x). Then w_{G′} = w_G², and we only need to estimate w_{G′} well instead of estimating w_G.
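Algorithm 1 is straightforward to implement generically. The sketch below is our own illustration, not the authors' code: `fit_so` and `fit_w` stand for the sub-learners A^so and A^{w_G}, and we use G^{−1} itself as H_G, which is valid when G is linear in b or the noise is zero.

```python
import numpy as np

def transfer_learn(source, target, G, G_inv, fit_so, fit_w):
    """Sketch of Algorithm 1. G(a, b) is the transformation function and
    G_inv(a, c) its inverse in the second argument, used here as the unbiased
    estimator H_G. fit_so and fit_w each map (X, y) to a predictor."""
    Xs, Ys = source
    Xt, Yt = target
    fso_hat = fit_so(Xs, Ys)                  # step 1: train f^so on T^so
    W = G_inv(fso_hat(Xt), Yt)                # step 2: W_i = H_G(f^so_hat(X_i), Y_i)
    w_hat = fit_w(Xt, W)                      # step 3: train the auxiliary function
    return lambda x: G(fso_hat(x), w_hat(x))  # step 4: f^ta_hat = G(f^so_hat, w_hat)

def fit_1nn(X, y):
    """A deliberately simple sub-learner: 1-nearest-neighbor regression in 1D."""
    return lambda q: y[np.argmin(np.abs(np.asarray(q)[:, None] - X[None, :]), axis=1)]
```

For offset transfer one would call `transfer_learn(..., G=lambda a, b: a + b, G_inv=lambda a, c: c - a, ...)`; for scale transfer, `G=lambda a, b: a * b` with `G_inv=lambda a, c: c / a`.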
More generally, if G(a, b) can be factorized as G(a, b) = g1(a) g2(b), i.e., f^ta(x) = g1(f^so(x)) g2(w_G(x)), we only need to estimate g2(w_G(x)), and the convergence rate depends on the structure of g2(w_G(x)).

Remark 2: When G is not linear in b and ε^ta ≠ 0, observe that in Algorithm 1 we treat the Y^ta_i as noisy covariates for estimating the w_G(X_i). This problem is called errors-in-variables or measurement error and has been widely studied in the statistics literature. For details, we refer the reader to the seminal work by Carroll et al. [2006]. There is no universal estimator for the measurement error problem. In Section B, we describe a common technique, regression calibration, for dealing with it.

4 Excess Risk Analyses

In this section, we present theoretical analyses of the proposed class of models and estimators. First, we need to impose some conditions on G. The first assures that if the estimates of f^so and w_G are close to the source regression function and the auxiliary function, then our estimator of f^ta is close to the true target regression function. The second assures that we are estimating a regular function.

Assumption 1 G(a, b) is L-Lipschitz, |G(a, b) − G(a′, b′)| ≤ L||(a, b) − (a′, b′)||_2, and is invertible with respect to b given a, i.e., if G(x, y) = z then G^{−1}_x(z) = y.

Assumption 2 Given G, the induced auxiliary function w_G is bounded: for all x with ||x||_2 ≤ X̄, |w_G(x)| ≤ B for some B > 0.

Offset Transfer and Non-Transfer satisfy these conditions with L = 1 and B = Ȳ. Scale Transfer satisfies them when f^so is bounded away from 0. Lastly, we assume our unbiased estimator is also regular.

Assumption 3 For x with ||x||_2 ≤ X̄ and y with |y| ≤ Ȳ, |H_G(x, y)| ≤ B for some B > 0, and H_G is Lipschitz continuous in its first argument: |H_G(x, y) − H_G(x′, y)| ≤ L|x − x′| for some L > 0.
We begin with a general result which requires only the stability of A^{w_G}.

Theorem 1 Suppose that for any two sets of samples with the same features but different labels, T = {(X^ta_i, W_i)}_{i=1}^{n_ta} and T′ = {(X^ta_i, W′_i)}_{i=1}^{n_ta}, the algorithm A^{w_G} for training w_G satisfies

||A^{w_G}(T) − A^{w_G}(T′)||_∞ ≤ Σ_{i=1}^{n_ta} c_i(X^ta_i) |W_i − W′_i|,   (1)

where c_i depends only on X^ta_i. Then for any x,

|f̂^ta(x) − f^ta(x)|² = O( |f̂^so(x) − f^so(x)|² + |w̃_G(x) − w_G(x)|² + ( Σ_{i=1}^{n_ta} c_i(X^ta_i) |f̂^so(X^ta_i) − f^so(X^ta_i)| )² ),

where w̃_G = A^{w_G}({(X^ta_i, H_G(f^so(X^ta_i), Y^ta_i))}_{i=1}^{n_ta}) is the estimated auxiliary function trained using the true source domain regression function.

Theorem 1 shows how the estimation error of the source domain function propagates into our estimate of the target domain function. Notice that if we happened to know f^so, the error would be bounded by O(|w̃_G(x) − w_G(x)|²), the estimation error of w_G. However, since we use the estimated f^so to construct training samples for w_G, the error might accumulate as n_ta increases. Though the third term in Theorem 1 might grow with n_ta, it also depends on the estimation error of f^so, which is relatively small because of the large amount of source domain data. The stability condition (1) is related to the uniform stability introduced by Bousquet and Elisseeff [2002], who consider how much the output changes when one training instance is removed or replaced by another, whereas our condition compares two different training data sets. The connection between transfer learning and stability has been observed by Kuzborskij and Orabona [2013], Liu et al. [2016], and Zhang [2015] in different settings, but they only showed bounds on the generalization error, not on the excess risk.

4.1 Kernel Smoothing

We first analyze the kernel smoothing method.

Theorem 2 Suppose the support of X^ta is a subset of the support of X^so and the probability densities of P_{X^so} and P_{X^ta} are uniformly bounded away from zero on their supports.
Further assume f^so is (λ_so, α_so)-Hölder and w_G is (λ_wG, α_wG)-Hölder. If we use kernel smoothing estimators for f^so and w_G with bandwidths h_so ≍ n_so^{−1/(2α_so+d)} and h_wG ≍ n_ta^{−1/(2α_wG+d)}, then with probability at least 1 − δ the risk satisfies

E[R(f̂^ta)] − R(f^ta) = O( ( n_so^{−2α_so/(2α_so+d)} + n_ta^{−2α_wG/(2α_wG+d)} ) log(1/δ) ),

where the expectation is taken over T^so and T^ta.

Theorem 2 shows that the risk has two sources: the estimation of f^so and the estimation of w_G. The first term is relatively small in the setting we focus on, since in typical transfer learning scenarios n_so ≫ n_ta. The second term shows the power of transfer learning in transforming a possibly complex target regression function into a simpler auxiliary function. It is well known that learning f^ta using only the target domain incurs a risk of order Ω(n_ta^{−2α_{f^ta}/(2α_{f^ta}+d)}). Thus, if the auxiliary function is smoother than the target regression function, i.e., α_wG > α_{f^ta}, we obtain a better statistical rate.

4.2 Kernel Ridge Regression

Next, we give an upper bound on the excess risk using KRR.

Theorem 3 Suppose P_{X^so} = P_{X^ta}, the eigenvalues of the integral operator T_K satisfy µ_i ≤ a i^{−1/p} for i ≥ 1 with a ≥ 164Ȳ and p ∈ (0, 1), and there exists a constant C ≥ 1 such that for all f ∈ H, ||f||_∞ ≤ C ||f||_H^p · ||f||_{L2(P_X)}^{1−p}. Further assume that A_{f^so}(λ) ≤ cλ^{β_so} and A_{w_G}(λ) ≤ cλ^{β_wG}. If we use KRR to estimate f^so and w_G with regularization parameters λ_so ≍ n_so^{−1/(β_so+p)} and λ_wG ≍ n_ta^{−1/(β_wG+p)}, then with probability at least 1 − δ the excess risk satisfies

E[R(f̂^ta)] − R(f^ta) = O( ( n_ta^{2/(β_wG+p)} log(n_ta) · n_so^{−β_so/(β_so+p)} + n_ta^{−β_wG/(β_wG+p)} ) log(1/δ) ),

where the expectation is taken over T^so and T^ta.

Similar to Theorem 2, Theorem 3 shows that the estimation error comes from two sources. For estimating the auxiliary function w_G, the statistical rate depends on properties of the kernel-induced RKHS and on how far the auxiliary function is from this space.
For ease of presentation, we assume P_{X^so} = P_{X^ta}, so the approximation errors A_{f^so} and A_{f^ta} are defined on the same domain. The error of estimating f^so is amplified by a factor O(λ_wG^{−2} log(n_ta)), which is worse than in the nonparametric kernel smoothing case. We believe this λ_wG^{−2} factor is nearly tight, because Bousquet and Elisseeff [2002] have shown that the uniform algorithmic stability parameter for KRR is O(λ_wG^{−2}). Steinwart et al. [2009] showed that for non-transfer learning, the optimal statistical rate for the excess risk is Ω(n_ta^{−β_ta/(β_ta+p)}), so if β_wG ≥ β_ta and n_so is sufficiently large, then we achieve an improved convergence rate through transfer learning.

Remark: Theorems 2 and 3 are not directly comparable because the assumptions on the function spaces in the two theorems are different. In general, a Hölder space is a Banach space but not a Hilbert space. We refer readers to Theorem 1 in Zhou [2008] for details.

5 Finding the Best Transformation Function

In the previous section we showed that, for a specific transformation function G, if the auxiliary function is smoother than the target regression function then we obtain a smaller excess risk. In practice, we would like to try a class of transformation functions 𝒢, which is possibly uncountable. We can construct a finite subset 𝒢ε ⊂ 𝒢 such that every G in 𝒢 has some element of 𝒢ε close to it. Here we give an example. Consider transformation functions of the form

𝒢 = {G(a, b) = αa + b : |α| ≤ L_α, |a| ≤ L_a}.

We can quantize this set by considering the subset

𝒢ε = {G(a, b) = kεa + b : k = −K, ..., 0, ..., K, |a| ≤ L_a}, where ε = L_α/(2K)

is the quantization unit. The next theorem shows that we only need to search for the transformation function G in 𝒢ε whose corresponding estimator f̂^ta_G has the lowest empirical risk on the validation dataset.
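The selection procedure just described, training one estimator per candidate in the finite cover and keeping the one with the lowest empirical risk on the held-out validation points, can be sketched as follows (our own illustration; the 1-nearest-neighbor sub-learner and all names are assumptions, not the authors' implementation):

```python
import numpy as np

def fit_1nn(X, y):
    """Toy sub-learner: 1-nearest-neighbor regression in 1D."""
    return lambda q: y[np.argmin(np.abs(np.asarray(q)[:, None] - X[None, :]), axis=1)]

def offset_cover(L_alpha, K):
    """Finite cover of {G(a,b) = alpha*a + b : |alpha| <= L_alpha} on the grid
    alpha = k * L_alpha / K, k = -K..K; alpha = 0 is the non-transfer choice."""
    alphas = [k * L_alpha / K for k in range(-K, K + 1)]
    return [(lambda a, b, al=al: al * a + b,   # G
             lambda a, c, al=al: c - al * a)   # G^{-1} in the second argument
            for al in alphas]

def select_transformation(candidates, source, target, val, fit):
    """Return the estimator f^ta_G with the lowest empirical risk on the
    validation set, over all (G, G_inv) candidates."""
    Xs, Ys = source
    Xt, Yt = target
    Xv, Yv = val
    fso_hat = fit(Xs, Ys)
    best, best_risk = None, np.inf
    for G, G_inv in candidates:
        w_hat = fit(Xt, G_inv(fso_hat(Xt), Yt))
        fta_hat = lambda x, G=G, w=w_hat: G(fso_hat(x), w(x))
        risk = np.mean((Yv - fta_hat(Xv)) ** 2)  # empirical L2 risk
        if risk < best_risk:
            best, best_risk = fta_hat, risk
    return best, best_risk
```

Because the grid always contains α = 0, the selected estimator can never do much worse on validation risk than plain non-transfer regression, mirroring the negative-transfer safeguard discussed around Theorem 4.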
Theorem 4 Let 𝒢 be a class of transformation functions and 𝒢ε be its ε-cover in the ||·||_∞ norm. Suppose w_G satisfies the same assumption as in Theorem 1, and that for any G1, G2 ∈ 𝒢, ||w_{G1} − w_{G2}||_∞ ≤ L||G1 − G2||_∞ for some constant L. Denote G* = argmin_{G∈𝒢} R(f̂^ta_G) and Ĝ = argmin_{G∈𝒢ε} R̂(f̂^ta_G). If we choose ε = O( R(f̂^ta_{G*}) / Σ_{i=1}^{n_ta} c_i ) and n_val = Ω(log(|𝒢ε|/δ)), then with probability at least 1 − δ,

E[R(f̂^ta_Ĝ)] − R(f^ta) = O( E[R(f̂^ta_{G*})] − R(f^ta) ),

where the expectation is taken over T^so and T^ta.

Remark 1: This theorem implies that if the non-transfer function G(a, b) = b is in 𝒢, then we will end up choosing a transformation function whose excess risk is of the same order as that of the non-transfer learning algorithm, thus avoiding negative transfer.

Remark 2: Note that the number of validation samples depends only logarithmically on the size of the set of transformation functions. Therefore, we need only a very small amount of data from the target domain for cross-validation.

6 Experiments

In this section we use robotics and neural imaging data to demonstrate the effectiveness of the proposed framework. We conduct experiments on real-world data sets with the following procedures.
Method           | nta=10         | nta=20         | nta=40         | nta=80         | nta=160        | nta=320
Only Target KS   | 0.086 ± 0.022  | 0.076 ± 0.010  | 0.066 ± 0.008  | 0.064 ± 0.007  | 0.065 ± 0.006  | 0.063 ± 0.005
Only Target KRR  | 0.080 ± 0.017  | 0.078 ± 0.022  | 0.063 ± 0.013  | 0.050 ± 0.007  | 0.048 ± 0.006  | 0.040 ± 0.005 *
Only Source KRR  | 0.098 ± 0.017  | 0.098 ± 0.017  | 0.098 ± 0.017  | 0.098 ± 0.017  | 0.098 ± 0.017  | 0.098 ± 0.017
Combined KS      | 0.092 ± 0.011  | 0.084 ± 0.008  | 0.077 ± 0.009  | 0.075 ± 0.006  | 0.074 ± 0.006  | 0.067 ± 0.006
Combined KRR     | 0.087 ± 0.025  | 0.077 ± 0.015  | 0.062 ± 0.009  | 0.061 ± 0.005  | 0.047 ± 0.003  | 0.041 ± 0.004
CDM              | 0.105 ± 0.023  | 0.074 ± 0.020  | 0.064 ± 0.008  | 0.060 ± 0.007  | 0.053 ± 0.009  | 0.056 ± 0.004
Offset KS        | 0.080 ± 0.026  | 0.066 ± 0.023  | 0.052 ± 0.006 *| 0.054 ± 0.006  | 0.050 ± 0.003  | 0.052 ± 0.004
Offset KRR       | 0.146 ± 0.112  | 0.066 ± 0.017  | 0.053 ± 0.007  | 0.048 ± 0.006 *| 0.043 ± 0.004 *| 0.041 ± 0.003
Scale KS         | 0.078 ± 0.022 *| 0.065 ± 0.013 *| 0.056 ± 0.009  | 0.056 ± 0.005  | 0.054 ± 0.008  | 0.055 ± 0.004
Scale KRR        | 0.102 ± 0.033  | 0.095 ± 0.100  | 0.057 ± 0.014  | 0.052 ± 0.010  | 0.044 ± 0.004  | 0.042 ± 0.002

Table 1: 1-standard-deviation intervals for the mean squared errors of various algorithms when transferring from kin-8fm to kin-8nh. Values marked with * are the smallest errors for each n_ta. Only Source KS performs much worse than the other algorithms, so we do not show its results here.

• Directly training on the target data T^ta (Only Target KS, Only Target KRR).
• Training only on the source data T^so (Only Source KS, Only Source KRR).
• Training on the combined source and target data (Combined KS, Combined KRR).
• The CDM algorithm proposed by Wang and Schneider [2014] with KRR (CDM).
• The algorithm described in this paper with G(a, b) = (a + α)b, where α is a hyper-parameter (Scale KS, Scale KRR).
• The algorithm described in this paper with G(a, b) = αa + b, where α is a hyper-parameter (Offset KS, Offset KRR).

For the first experiment, we vary the size of the target domain to study the effect of n_ta relative to n_so.
We use two datasets from the ‘kin’ family in Delve [Rasmussen et al., 1996]: ‘kin-8fm’ and ‘kin-8nh’, both with 8-dimensional inputs. kin-8fm has a fairly linear output and low noise; kin-8nh, on the other hand, has a non-linear output and high noise. We consider the task of transfer learning from kin-8fm to kin-8nh. In this experiment, we set n_so to 320 and vary n_ta in {10, 20, 40, 80, 160, 320}. Hyper-parameters were picked using grid search with 10-fold cross-validation on the target data (or on the source domain data when the target domain data was not used). Table 1 shows the mean squared errors on the target data. To better understand the results, we show a box plot of the mean squared errors from n_ta = 40 onwards in Figure 2(a). The results for n_ta = 10 and n_ta = 20 have high variance, so we do not show them in the plot. We also omit the results of Only Source KRR because of its poor performance. We note that our proposed algorithm outperforms the other methods across nearly all values of n_ta, especially when n_ta is small. Only when there are as many points in the target as in the source does simply training on the target give the best performance. This is to be expected, since the primary purpose of transfer learning is to alleviate the lack of data in the target domain. Though quite comparable, the scale methods performed somewhat worse than the offset methods in this experiment. In general, we would use cross-validation to choose between the two. We now consider another real-world dataset in which the covariates are fMRI images taken while subjects perform a Stroop task [Stroop, 1935]. We use the dataset collected by Verstynen [2014], which contains fMRI data of 28 subjects. A total of 120 trials were presented to each participant; fMRI data was collected throughout the trials and went through a standard post-processing scheme.
The result of this is a feature vector corresponding to each trial that describes the activity of brain regions (voxels), and the goal is to use this to predict the response time. To frame the problem in the transfer learning setting, we consider as source the data of all but one subject; the goal is to predict on the remaining subject. We performed five repetitions for each algorithm by drawing nso = 300 data points randomly from the 3000 points in the source domain. We used nta = 80 points from the target domain for training and cross-validation; evaluation was done on the 35 remaining points in the target domain. Figure 2(b) shows a box plot of the coefficient of determination (R-squared) values for the best performing algorithms. R-squared is defined as 1 − SSres/SStot, where SSres is the sum of squared residuals and SStot is the total sum of squares. Note that R-squared can be negative when predicting on unseen samples – which were not used to fit the model – as in our case. When positive, it indicates the proportion of variance in the dependent variable explained by the model (the higher the better). From the plot, it is clear that Offset KRR and Only Target KRR have the best performance on average, and Offset KRR has smaller variance.
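The R-squared definition above, including the fact that it can go negative on held-out data, is easy to sanity-check numerically (the toy numbers below are ours, not from the experiment):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
good = np.array([1.1, 1.9, 3.2, 3.8])   # close to y
bad = np.array([4.0, 1.0, 4.0, 1.0])    # worse than predicting the mean

print(r_squared(y, good))   # close to 1
print(r_squared(y, bad))    # negative: worse than the mean predictor
```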
Figure 2: Box plots of experimental results on real datasets. Each box extends from the first to the third quartile, and the horizontal lines in the middle are medians. For the robotics data, we report mean squared error (the lower the better), and for the fMRI data, we report R-squared (the higher the better). For ease of presentation, we only show results of algorithms with good performance.

                 Mean      Median    Standard Deviation
Only Target KS   -0.0096    0.0444   0.1041
Only Target KRR   0.1041    0.1186   0.2361
Only Source KS   -0.4932   -0.5366   0.4555
Only Source KRR  -0.8763   -0.9363   0.6265
Combined KS      -0.7540   -0.2023   1.5109
Combined KRR     -0.5868   -0.0691   1.3223
CDM              -3.1183   -3.4510   2.6473
Offset KS         0.1190    0.1081   0.0612
Offset KRR        0.1080    0.1221   0.0682
Scale KS          0.0017   -0.0321   0.0632
Scale KRR         0.0897    0.1107   0.1104

Table 2: Mean, median, and standard deviation of the coefficient of determination (R-squared) of the various algorithms on the fMRI dataset.

Table 2 shows the full table of results for the fMRI task. Using only the source data produces large negative R-squared values, and while Only Target KRR does produce a positive mean R-squared, it comes with high variance. On the other hand, both Offset methods have low variance, showing consistent performance. For this particular case, the Scale methods do not perform as well as the Offset methods; as noted earlier, in general we would use cross-validation to select an appropriate transfer function.

7 Conclusion and Future Work

In this paper, we proposed a general transfer learning framework for the HTL regression problem when some data is available from the target domain. Theoretical analysis shows that it is possible to achieve a better statistical rate using transfer learning than with standard supervised learning. We close with two directions in which our results could be further improved. First, in many real-world applications, a large amount of unlabeled data from the target domain is also available.
Combining our proposed framework with previous work for this scenario [Cortes and Mohri, 2014, Huang et al., 2006] is a promising direction to pursue. Second, we only present upper bounds in this paper. It is an interesting direction to obtain lower bounds for HTL and other transfer learning scenarios.

8 Acknowledgements

S.S.D. and B.P. were supported by NSF grant IIS1563887 and the ARPA-E Terra program. A.S. was supported by AFRL grant FA8750-17-2-0212.

9 References

Shai Ben-David and Ruth Urner. Domain adaptation as learning with auxiliary information. In New Directions in Transfer and Multi-Task Workshop @ NIPS, 2013.
Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems, 19:137, 2007.
John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In Advances in Neural Information Processing Systems, pages 129–136, 2008.
Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.
Raymond J Carroll, David Ruppert, Leonard A Stefanski, and Ciprian M Crainiceanu. Measurement Error in Nonlinear Models: A Modern Perspective. CRC Press, 2006.
Corinna Cortes and Mehryar Mohri. Domain adaptation in regression. In Algorithmic Learning Theory, pages 308–323. Springer, 2011.
Corinna Cortes and Mehryar Mohri. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.
Corinna Cortes, Mehryar Mohri, and Andrés Muñoz Medina. Adaptation algorithm and theory based on generalized discrepancy. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 169–178. ACM, 2015.
Cecil C Craig. On the Tchebychef inequality of Bernstein. The Annals of Mathematical Statistics, 4(2):94–102, 1933.
Li Fei-Fei, Rob Fergus, and Pietro Perona.
One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594–611, 2006.
Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Schölkopf, and Alex J Smola. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pages 601–608, 2006.
Samory Kpotufe and Vikas Garg. Adaptivity to local smoothness and dimension in kernel regression. In Advances in Neural Information Processing Systems, pages 3075–3083, 2013.
Ilja Kuzborskij and Francesco Orabona. Stability and hypothesis transfer learning. In ICML (3), pages 942–950, 2013.
Ilja Kuzborskij and Francesco Orabona. Fast rates by transferring from auxiliary hypotheses. Machine Learning, pages 1–25, 2016.
Ilja Kuzborskij, Francesco Orabona, and Barbara Caputo. From N to N+1: Multiclass transfer incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3358–3365, 2013.
Ilja Kuzborskij, Francesco Orabona, and Barbara Caputo. Scalable greedy algorithms for transfer learning. Computer Vision and Image Understanding, 2016.
Tongliang Liu, Dacheng Tao, Mingli Song, and Stephen Maybank. Algorithm-dependent generalization bounds for multi-task learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.
Mehryar Mohri and Andres Munoz Medina. New analysis and algorithm for learning with drifting distributions. In Algorithmic Learning Theory, pages 124–138. Springer, 2012.
Stephen Nuske, Kamal Gupta, Srinivasa Narasimhan, and Sanjiv Singh. Modeling and calibrating visual yield estimates in vineyards. In Field and Service Robotics, pages 343–356. Springer, 2014.
Francesco Orabona, Claudio Castellini, Barbara Caputo, Angelo Emanuele Fiorilla, and Giulio Sandini.
Model adaptation with least-squares SVM for adaptive hand prosthetics. In Robotics and Automation, 2009. ICRA ’09. IEEE International Conference on, pages 2897–2903. IEEE, 2009.
Carl Edward Rasmussen, Radford M Neal, Geoffrey Hinton, Drew van Camp, Michael Revow, Zoubin Ghahramani, Rafal Kustra, and Rob Tibshirani. Delve data for evaluating learning in valid experiments. URL http://www.cs.toronto.edu/delve, 1996.
Ingo Steinwart, Don R Hush, and Clint Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
J Ridley Stroop. Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6):643, 1935.
Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V Buenau, and Motoaki Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in Neural Information Processing Systems, pages 1433–1440, 2008.
Tatiana Tommasi, Francesco Orabona, and Barbara Caputo. Safety in numbers: Learning categories from few examples with multi model knowledge transfer. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3081–3088. IEEE, 2010.
Timothy D Verstynen. The organization and dynamics of corticostriatal pathways link the medial orbitofrontal cortex to future behavioral responses. Journal of Neurophysiology, 112(10):2457–2469, 2014.
Vladimir Vovk. Kernel ridge regression. In Empirical Inference, pages 105–116. Springer, 2013.
Xuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. In Advances in Neural Information Processing Systems, pages 1898–1906, 2014.
Xuezhi Wang and Jeff Schneider. Generalization bounds for transfer learning under model shift. 2015.
Xuezhi Wang, Junier B Oliva, Jeff Schneider, and Barnabás Póczos. Nonparametric risk and stability analysis for multi-task learning problems. In 25th International Joint Conference on Artificial Intelligence (IJCAI), volume 1, page 2, 2016.
Larry Wasserman. All of Nonparametric Statistics. Springer Science & Business Media, 2006.
Jun Yang, Rong Yan, and Alexander G Hauptmann. Cross-domain video concept detection using adaptive SVMs. In Proceedings of the 15th ACM International Conference on Multimedia, pages 188–197. ACM, 2007.
Yaoliang Yu and Csaba Szepesvári. Analysis of kernel mean matching under covariate shift. arXiv preprint arXiv:1206.4650, 2012.
Kun Zhang, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 819–827, 2013.
Yu Zhang. Multi-task learning and algorithmic stability. In AAAI, volume 2, pages 6–2, 2015.
Ding-Xuan Zhou. Derivative reproducing properties for kernel methods in learning theory. Journal of Computational and Applied Mathematics, 220(1):456–463, 2008.
Finite Sample Analysis of the GTD Policy Evaluation Algorithms in Markov Setting

Yue Wang∗ School of Science, Beijing Jiaotong University 11271012@bjtu.edu.cn
Wei Chen Microsoft Research wche@microsoft.com
Yuting Liu School of Science, Beijing Jiaotong University ytliu@bjtu.edu.cn
Zhi-Ming Ma Academy of Mathematics and Systems Science, Chinese Academy of Sciences mazm@amt.ac.cn
Tie-Yan Liu Microsoft Research Tie-Yan.Liu@microsoft.com

Abstract

In reinforcement learning (RL), one of the key components is policy evaluation, which aims to estimate the value function (i.e., the expected long-term accumulated reward) of a policy. With a good policy evaluation method, RL algorithms can estimate the value function more accurately and find a better policy. When the state space is large or continuous, Gradient-based Temporal Difference (GTD) policy evaluation algorithms with linear function approximation are widely used. Considering that the collection of evaluation data is both time and reward consuming, a clear understanding of the finite sample performance of policy evaluation algorithms is very important for reinforcement learning. Under the assumption that the data are i.i.d. generated, previous work provided a finite sample analysis of the GTD algorithms with constant step size by converting them into convex-concave saddle point problems. However, it is well known that in RL problems the data are generated from Markov processes rather than i.i.d. In this paper, in the realistic Markov setting, we derive finite sample bounds for general convex-concave saddle point problems, and hence for the GTD algorithms. We make the following observations based on our bounds. (1) With a range of step sizes, the GTD algorithms converge. (2) The convergence rate is determined by the step size, with the mixing time of the Markov process as a coefficient: the faster the Markov process mixes, the faster the convergence.
(3) We explain why the experience replay trick is effective: it improves the mixing property of the Markov process. To the best of our knowledge, our analysis is the first to provide finite sample bounds for the GTD algorithms in the Markov setting.

1 Introduction

Reinforcement Learning (RL) (Sutton and Barto [1998]) technologies are very powerful for learning how to interact with environments, and have a variety of important applications, such as robotics, computer games and so on (Kober et al. [2013], Mnih et al. [2015], Silver et al. [2016], Bahdanau et al. [2016]). In an RL problem, an agent observes the current state, takes an action following a policy at the current state, receives a reward from the environment, the environment transitions to the next state in a Markovian way, and these steps repeat. The goal of RL algorithms is to find the optimal policy, which leads to the maximum long-term reward.

∗This work was done when the first author was visiting Microsoft Research Asia.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

The value function of a fixed policy at a state is defined as the expected long-term accumulated reward the agent would receive by following the fixed policy starting from this state. Policy evaluation aims to accurately estimate the value of all states under a given policy, which is a key component in RL (Sutton and Barto [1998], Dann et al. [2014]). A better policy evaluation method will help us better improve the current policy and find the optimal policy. When the state space is large or continuous, it is inefficient to represent the value function over all the states by a look-up table. A common approach is to extract features for states and use a parameterized function over the feature space to approximate the value function. In applications, both linear approximations and non-linear approximations (e.g. neural networks) of the value function are used.
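As a concrete illustration of the value function described above — the expected discounted accumulated reward — the following sketch computes V exactly for a tiny two-state Markov reward process via the Bellman relation V = r + γPV, and checks it against Monte Carlo rollouts. The transition matrix and rewards are made-up numbers, not from the paper.

```python
import numpy as np

# A tiny 2-state Markov reward process under a fixed policy (hypothetical numbers).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # state transition probabilities
r = np.array([1.0, 0.0])     # expected one-step reward in each state
gamma = 0.9

# Bellman equation V = r + gamma * P V  =>  V = (I - gamma*P)^(-1) r.
V_exact = np.linalg.solve(np.eye(2) - gamma * P, r)

# Monte Carlo estimate of V(0): average truncated discounted returns.
rng = np.random.default_rng(0)
returns = []
for _ in range(1000):
    s, G, disc = 0, 0.0, 1.0
    for _ in range(150):               # gamma^150 is negligible
        G += disc * r[s]
        disc *= gamma
        s = rng.choice(2, p=P[s])
    returns.append(G)

print(V_exact[0], np.mean(returns))    # the two estimates should roughly agree
```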
In this paper, we focus on the linear approximation (Sutton et al. [2009a], Sutton et al. [2009b], Liu et al. [2015]). Leveraging the localization technique in Bhatnagar et al. [2009], the results can be generalized to non-linear cases with extra effort; we leave this as future work. In policy evaluation with linear approximation, there has been substantial work on the temporal-difference (TD) method, which uses the Bellman equation to update the value function during the learning process (Sutton [1988], Tsitsiklis et al. [1997]). Recently, Sutton et al. [2009a] and Sutton et al. [2009b] proposed Gradient-based Temporal Difference (GTD) algorithms, which use gradient information on the error from the Bellman equation to update the value function. It has been shown that GTD algorithms achieve the lower bound of storage and computational complexity, making them well suited to handling high-dimensional big data. GTD algorithms are therefore now widely used in policy evaluation problems and in the policy evaluation step of practical RL algorithms (Bhatnagar et al. [2009], Silver et al. [2014]). However, we do not yet have sufficient theory about the finite sample performance of the GTD algorithms. To be specific: will the evaluation process converge as the number of samples increases? If yes, how many samples do we need to reach a target evaluation error? Will the step size in GTD algorithms influence the finite sample error? How can we explain the effectiveness of practical tricks, such as experience replay? Considering that the collection of evaluation data is very likely to be both time and reward consuming, a clear understanding of the finite sample performance of the GTD algorithms is very important to the efficiency of policy evaluation and of the entire RL algorithm. Previous work (Liu et al.
[2015]) converted the objective function of GTD algorithms into a convex-concave saddle point problem and conducted a finite sample analysis for GTD with constant step size under the assumption that the data are i.i.d. generated. However, in RL problems the data are generated by an agent who interacts with the environment step by step, and the state transitions in a Markovian way, as introduced previously. As a result, the data are generated from a Markov process and are not i.i.d. In addition, that work did not study decreasing step sizes, which are also commonly used in many gradient-based algorithms (Sutton et al. [2009a], Sutton et al. [2009b], Yu [2015]). Thus, the results from previous work cannot provide satisfactory answers to the above questions about the finite sample performance of the GTD algorithms. In this paper, we perform a finite sample analysis of the GTD algorithms in the more realistic Markov setting. To achieve this goal, first of all, as in Liu et al. [2015], we consider the stochastic gradient descent algorithms for general convex-concave saddle point problems, which include the GTD algorithms. The optimality of the solution is measured by the primal-dual gap (Liu et al. [2015], Nemirovski et al. [2009]). The finite sample analysis for convex-concave optimization in the Markov setting is challenging. On the one hand, in the Markov setting, the non-i.i.d. sampled gradients are no longer unbiased estimates of the gradients; thus, the proof technique for the convergence of convex-concave problems in the i.i.d. setting cannot be applied. On the other hand, although SGD converges for convex optimization problems with Markovian gradients, it is much more difficult to obtain the same results for the more complex convex-concave optimization problem. To overcome the challenge, we design a novel decomposition of the error function (i.e., Eqn (3.1)).
The intuition of the decomposition and the key techniques are as follows: (1) Although samples are not i.i.d., for large enough τ, the sample at time t + τ is "nearly independent" of the sample at time t, and its distribution is "very close" to the stationary distribution. (2) We split the random variables in the objective related to the E operator and the variables related to the max operator into different terms in order to control them respectively. This is non-trivial, and we construct a sequence of auxiliary random variables to do so. (3) All of the constructions above need to carefully handle measurability issues in the Markov setting. (4) We construct new martingale difference sequences and apply Azuma's inequality to derive the high-probability bound from the in-expectation bound. Using the above techniques, we prove a novel finite sample bound for the convex-concave saddle point problem. Since the GTD algorithms are specific convex-concave saddle point optimization methods, we finally obtain the finite sample bounds for the GTD algorithms in the realistic Markov setting for RL. To the best of our knowledge, our analysis is the first to provide finite sample bounds for the GTD algorithms in the Markov setting. We have the following discussions based on our finite sample bounds.

1. GTD algorithms do converge under a flexible condition on the step size, i.e., $\sum_{t=1}^{T}\alpha_t \to \infty$ and $\frac{\sum_{t=1}^{T}\alpha_t^2}{\sum_{t=1}^{T}\alpha_t} \to 0$ as $T \to \infty$, where $\alpha_t$ is the step size. Most step sizes used in practice satisfy this condition.

2. The convergence rate is $O\Big(\sqrt{(1+\tau(\eta))\frac{\sum_{t=1}^{T}\alpha_t^2}{\sum_{t=1}^{T}\alpha_t}} + \sqrt{\tau(\eta)\log\big(\tfrac{\tau(\eta)}{\delta}\big)}\,\frac{\sqrt{\sum_{t=1}^{T}\alpha_t^2}}{\sum_{t=1}^{T}\alpha_t}\Big)$, where $\tau(\eta)$ is the mixing time of the Markov process and $\eta$ is a constant. Different step sizes lead to different convergence rates.

3. The experience replay trick is effective, since it improves the mixing property of the Markov process.

Finally, we conduct simulation experiments to verify our theoretical findings.
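The step-size condition in item 1 is easy to check numerically: for the common schedules, the ratio $\sum_t \alpha_t^2 / \sum_t \alpha_t$ indeed shrinks as $T$ grows. A quick sketch (the constants are arbitrary):

```python
import numpy as np

T = 10**5
t = np.arange(1, T + 1)
schedules = {
    "1/sqrt(t)": 1 / np.sqrt(t),
    "1/t": 1 / t,
    "1/sqrt(T) const": np.full(T, 1 / np.sqrt(T)),
}
ratio = {}
for name, a in schedules.items():
    ratio[name] = (a ** 2).sum() / a.sum()   # the quantity that must vanish
    print(f"{name:>15}: sum(a_t) = {a.sum():9.1f}   sum(a_t^2)/sum(a_t) = {ratio[name]:.2e}")
```

The printed ratios recover the familiar orders: roughly ln T / (2√T) for the 1/√t schedule, about 1/ln T for 1/t, and exactly 1/√T for the constant schedule.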
All the conclusions from the analysis are consistent with our empirical observations.

2 Preliminaries

In this section, we briefly introduce the GTD algorithms and related work.

2.1 Gradient-based TD algorithms

Consider the reinforcement learning problem with a Markov decision process (MDP) $(S, A, P, R, \gamma)$, where $S$ is the state space, $A$ is the action space, $P = \{P^a_{s,s'}; s, s' \in S, a \in A\}$ is the transition matrix and $P^a_{s,s'}$ is the transition probability from state $s$ to state $s'$ after taking action $a$, $R = \{R(s, a); s \in S, a \in A\}$ is the reward function and $R(s, a)$ is the reward received at state $s$ when taking action $a$, and $0 < \gamma < 1$ is the discount factor. A policy function $\mu : A \times S \to [0, 1]$ gives the probability of taking each action at each state. The value function for policy $\mu$ is defined as
$$V^\mu(s) \triangleq \mathbb{E}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\Big|\, s_0 = s, \mu\Big].$$
In order to perform policy evaluation in a large state space, states are represented by a feature vector $\phi(s) \in \mathbb{R}^d$, and a linear function $\hat{v}(s) = \phi(s)^\top \theta$ is used to approximate the value function. The evaluation error is defined as $\|V(s) - \hat{v}(s)\|_{s\sim\pi}$, which can be decomposed into an approximation error and an estimation error. In this paper, we focus on the estimation error with linear function approximation. As we know, the value function in RL satisfies the following Bellman equation:
$$V^\mu(s) = \mathbb{E}_{\mu,P}\big[R(s_t, a_t) + \gamma V^\mu(s_{t+1}) \mid s_t = s\big] \triangleq T^\mu V^\mu(s),$$
where $T^\mu$ is called the Bellman operator for policy $\mu$. Gradient-based TD (GTD) algorithms (including GTD and GTD2), proposed by Sutton et al. [2009a] and Sutton et al. [2009b], update the approximated value function by minimizing objective functions related to the Bellman equation error, i.e., the norm of the expected TD update (NEU) and the mean-square projected Bellman error (MSPBE), respectively (Maei [2011], Liu et al.
[2015]):
$$\text{GTD:}\quad J_{NEU}(\theta) = \|\Phi^\top K (T^\mu \hat{v} - \hat{v})\|^2 \qquad (2.1)$$
$$\text{GTD2:}\quad J_{MSPBE}(\theta) = \|\hat{v} - P T^\mu \hat{v}\| = \|\Phi^\top K (T^\mu \hat{v} - \hat{v})\|^2_{C^{-1}} \qquad (2.2)$$
where $K$ is a diagonal matrix whose elements are $\pi(s)$, $C = \mathbb{E}_\pi(\phi_i \phi_i^\top)$, and $\pi$ is a distribution over the state space $S$. Actually, the two objective functions in GTD and GTD2 can be unified as
$$J(\theta) = \|b - A\theta\|^2_{M^{-1}}, \qquad (2.3)$$

Algorithm 1 GTD Algorithms
1: for t = 1, . . . , T do
2:   Update parameters:
       $y_{t+1} = P_{X_y}\big(y_t + \alpha_t(\hat{b}_t - \hat{A}_t \theta_t - \hat{M}_t y_t)\big)$
       $x_{t+1} = P_{X_x}\big(x_t + \alpha_t \hat{A}_t^\top y_t\big)$
3: end for
Output: $\tilde{x}_T = \frac{\sum_{t=1}^{T}\alpha_t x_t}{\sum_{t=1}^{T}\alpha_t}$, $\tilde{y}_T = \frac{\sum_{t=1}^{T}\alpha_t y_t}{\sum_{t=1}^{T}\alpha_t}$

where $M = I$ in GTD and $M = C$ in GTD2, $A = \mathbb{E}_\pi[\rho(s,a)\phi(s)(\phi(s) - \gamma\phi(s'))^\top]$, $b = \mathbb{E}_\pi[\rho(s,a)\phi(s)r]$, and $\rho(s,a) = \mu(a|s)/\mu_b(a|s)$ is the importance weighting factor. Since the underlying distribution is unknown, we use the data $D = \{\xi_i = (s_i, a_i, r_i, s'_i)\}_{i=1}^{n}$ to estimate the value function by minimizing the empirical estimation error, i.e.,
$$\hat{J}(\theta) = \frac{1}{T}\sum_{i=1}^{T} \|\hat{b} - \hat{A}\theta\|^2_{\hat{M}^{-1}},$$
where $\hat{A}_i = \rho(s_i,a_i)\phi(s_i)(\phi(s_i) - \gamma\phi(s'_i))^\top$, $\hat{b}_i = \rho(s_i,a_i)\phi(s_i)r_i$, $\hat{C}_i = \phi(s_i)\phi(s_i)^\top$. Liu et al. [2015] derived that the GTD algorithms minimizing (2.3) are equivalent to stochastic gradient algorithms solving the following convex-concave saddle point problem
$$\min_x \max_y L(x, y) = \langle b - Ax, y \rangle - \frac{1}{2}\|y\|^2_M, \qquad (2.4)$$
with $x$ the parameter $\theta$ in the value function and $y$ the auxiliary variable used in the GTD algorithms. Therefore, we consider the general convex-concave stochastic saddle point problem
$$\min_{x\in X_x} \max_{y\in X_y} \{\varphi(x, y) = \mathbb{E}_\xi[\Phi(x, y, \xi)]\}, \qquad (2.5)$$
where $X_x \subset \mathbb{R}^n$ and $X_y \subset \mathbb{R}^m$ are bounded closed convex sets, $\xi \in \Xi$ is a random variable with distribution $\Pi(\xi)$, and the expected function $\varphi(x, y)$ is convex in $x$ and concave in $y$. Denote $z = (x, y) \in X_x \times X_y \triangleq X$, the gradient of $\varphi(z)$ by $g(z)$, and the gradient of $\Phi(z, \xi)$ by $G(z, \xi)$. In the stochastic gradient algorithm, the model is updated as $z_{t+1} = P_X(z_t - \alpha_t G(z_t, \xi_t))$, where $P_X$ is the projection onto $X$ and $\alpha_t$ is the step size. After $T$ iterations, we get the model $\tilde{z}_1^T = \frac{\sum_{t=1}^{T}\alpha_t z_t}{\sum_{t=1}^{T}\alpha_t}$.
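A minimal numpy sketch of the updates in Algorithm 1 on a synthetic problem: we take M = I (the GTD case), draw noisy i.i.d. samples of A and b for simplicity (the Markov sampling analyzed in this paper would replace this), and return the weighted-average iterate. The matrices, noise level, step-size constants, and projection radius are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # invented non-singular A
x_star = rng.standard_normal(d)
b = A @ x_star                                     # so the saddle point has x = x_star
M = np.eye(d)                                      # GTD case: M = I

def project(z, radius=10.0):
    """Euclidean projection onto a ball, standing in for P_X."""
    n = np.linalg.norm(z)
    return z if n <= radius else z * (radius / n)

T = 20000
x, y = np.zeros(d), np.zeros(d)
x_sum, wsum = np.zeros(d), 0.0
for t in range(1, T + 1):
    alpha = 0.5 / np.sqrt(t)                        # decreasing step size
    A_hat = A + 0.1 * rng.standard_normal((d, d))   # noisy sample of A
    b_hat = b + 0.1 * rng.standard_normal(d)        # noisy sample of b
    y = project(y + alpha * (b_hat - A_hat @ x - M @ y))  # ascent step in y
    x = project(x + alpha * (A_hat.T @ y))                # descent step in x
    x_sum += alpha * x; wsum += alpha
x_tilde = x_sum / wsum                              # weighted-average iterate

print(np.linalg.norm(x_tilde - x_star))             # should be small
```

Note the output is the weighted average of the iterates, not the last iterate, matching the averaging in Algorithm 1.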
The error of the model $\tilde{z}_1^T$ is measured by the primal-dual gap error
$$Err_\varphi(\tilde{z}_1^T) = \max_{y\in X_y} \varphi(\tilde{x}_1^T, y) - \min_{x\in X_x} \varphi(x, \tilde{y}_1^T). \qquad (2.6)$$
Liu et al. [2015] proved that the estimation error of the GTD algorithms can be upper bounded by their corresponding primal-dual gap error multiplied by a factor. Therefore, we first derive the finite sample primal-dual gap error bound for the convex-concave saddle point problem, and then extend it to the finite sample estimation error bound for the GTD algorithms. Details of the GTD algorithms used to optimize (2.4) are given in Algorithm 1 (Liu et al. [2015]).

2.2 Related work

The TD algorithms for policy evaluation can be divided into two categories: gradient-based methods and least-squares (LS) based methods (Dann et al. [2014]). Since LS-based algorithms need $O(d^2)$ storage and computational complexity while GTD algorithms are of $O(d)$ complexity in both, gradient-based algorithms are more commonly used when the feature dimension is large. Thus, in this paper, we focus on GTD algorithms. Sutton et al. [2009a] proposed the gradient-based temporal difference (GTD) algorithm for the off-policy policy evaluation problem with linear function approximation. Sutton et al. [2009b] proposed the GTD2 algorithm, which shows faster convergence in practice. Liu et al. [2015] connected the GTD algorithms to a convex-concave saddle point problem and derived a finite sample bound in both the on-policy and off-policy cases for constant step size in the i.i.d. setting. In the realistic Markov setting, although finite sample bounds have been proved for LS-based algorithms such as LSTD(λ) (Lazaric et al. [2012], Tagorti and Scherrer [2015]), to the best of our knowledge, there is no previous finite sample analysis for GTD algorithms.

3 Main Theorems

In this section, we present our main results.
In Theorem 1, we present our finite sample bound for the general convex-concave saddle point problem; in Theorem 2, we provide the finite sample bounds for GTD algorithms in both the on-policy and off-policy cases. Please refer to the supplementary materials for the complete proofs. Our results are derived under the following common assumptions (Nemirovski [2004], Duchi et al. [2012], Liu et al. [2015]). Please note that, in RL, the bounded-data property in Assumption 4 guarantees the Lipschitz and smoothness properties in Assumptions 5-6 (see Proposition 1).

Assumption 1 (Bounded parameter). There exists $D > 0$ such that $\|z - z'\| \le D$ for all $z, z' \in X$.

Assumption 2 (Step size). The step size $\alpha_t$ is non-increasing.

Assumption 3 (Problem solvable). The matrices $A$ and $C$ in Problem (2.4) are non-singular.

Assumption 4 (Bounded data). Features are bounded by $L$, rewards are bounded by $R_{max}$, and importance weights are bounded by $\rho_{max}$.

Assumption 5 (Lipschitz). For $\Pi$-almost every $\xi$, the function $\Phi(x, y, \xi)$ is Lipschitz in both $x$ and $y$, with finite constants $L_{1x}$, $L_{1y}$, respectively. We denote $L_1 \triangleq \sqrt{2}\sqrt{L_{1x}^2 + L_{1y}^2}$.

Assumption 6 (Smooth). For $\Pi$-almost every $\xi$, the partial gradients of $\Phi(x, y, \xi)$ are Lipschitz in both $x$ and $y$, with finite constants $L_{2x}$, $L_{2y}$, respectively. We denote $L_2 \triangleq \sqrt{2}\sqrt{L_{2x}^2 + L_{2y}^2}$.

For a Markov process, the mixing time characterizes how fast the process converges to its stationary distribution. Following the notation of Duchi et al. [2012], we denote the conditional probability distribution $P(\xi_t \in A \mid \mathcal{F}_s)$ by $P^t_{[s]}(A)$ and the corresponding probability density by $p^t_{[s]}$. Similarly, we denote the stationary distribution of the data-generating stochastic process by $\Pi$ and its density by $\pi$.

Definition 1. The mixing time $\tau(P_{[t]}, \eta)$ of the sampling distribution $P$ conditioned on the $\sigma$-field of the initial $t$ samples $\mathcal{F}_t = \sigma(\xi_1, \ldots$
$, \xi_t)$ is defined as
$$\tau(P_{[t]}, \eta) \triangleq \inf\Big\{\Delta \in \mathbb{N} : \int \big|p^{t+\Delta}_{[t]}(\xi) - \pi(\xi)\big| \, d\xi \le \eta\Big\},$$
where $p^{t+\Delta}_{[t]}$ is the conditional probability density at time $t + \Delta$, given $\mathcal{F}_t$.

Assumption 7 (Mixing time). The mixing times of the stochastic process $\{\xi_t\}$ are uniform, i.e., there exists a uniform mixing time $\tau(P, \eta) < \infty$ such that, with probability 1, $\tau(P_{[s]}, \eta) \le \tau(P, \eta)$ for all $\eta > 0$ and $s \in \mathbb{N}$.

Please note that any time-homogeneous Markov chain with finite state space and any uniformly ergodic Markov chain with general state space satisfies the above assumption (Meyn and Tweedie [2012]). For simplicity and without confusion, we denote $\tau(P, \eta)$ by $\tau(\eta)$.

3.1 Finite Sample Bound for the Convex-concave Saddle Point Problem

Theorem 1. Consider the convex-concave problem in Eqn (2.5), and suppose Assumptions 1, 2, 5, 6 hold. Then for the stochastic gradient algorithm optimizing the convex-concave saddle point problem (2.5), for all $\delta > 0$ and all $\eta > 0$ such that $\tau(\eta) \le T/2$, with probability at least $1 - \delta$ we have
$$Err_\varphi(\tilde{z}_1^T) \le \frac{1}{\sum_{t=1}^{T}\alpha_t}\Bigg[A + B\sum_{t=1}^{T}\alpha_t^2 + C\tau(\eta)\sum_{t=1}^{T}\alpha_t^2 + F\eta\sum_{t=1}^{T}\alpha_t + H\tau(\eta) + 8DL_1\Bigg(\sqrt{2\tau(\eta)\log\frac{\tau(\eta)}{\delta}\sum_{t=1}^{T}\alpha_t^2} + \tau(\eta)\alpha_0\Bigg)\Bigg],$$
where $A = D^2$, $B = \frac{5}{2}L_1^2$, $C = 6L_1^2 + 2L_1L_2D$, $F = 2L_1D$, $H = 6L_1D\alpha_0$.

Proof sketch of Theorem 1. By the definition of the error function in (2.6) and the fact that $\varphi(x, y)$ is convex in $x$ and concave in $y$, the expected error can be bounded as
$$Err_\varphi(\tilde{z}_1^T) \le \max_z \frac{1}{\sum_{t=1}^{T}\alpha_t} \sum_{t=1}^{T} \alpha_t \big[(z_t - z)^\top g(z_t)\big].$$
Denote $\delta_t \triangleq g(z_t) - G(z_t, \xi_t)$, $\delta'_t \triangleq g(z_t) - G(z_t, \xi_{t+\tau})$, $\delta''_t \triangleq G(z_t, \xi_{t+\tau}) - G(z_t, \xi_t)$. We construct a sequence $\{v_t\}_{t\ge 1}$, measurable with respect to $\mathcal{F}_{t-1}$, by $v_{t+1} = P_X\big(v_t - \alpha_t(g(z_t) - G(z_t, \xi_t))\big)$. We have the following key decomposition of the right-hand side of the above inequality; the motivation and explanation of this decomposition are given in the supplementary materials.
For all $\tau \ge 0$:
$$\max_z \sum_{t=1}^{T}\alpha_t\big[(z_t - z)^\top g(z_t)\big] = \max_z\Bigg[\sum_{t=1}^{T-\tau}\alpha_t\Big(\underbrace{(z_t - z)^\top G(z_t, \xi_t)}_{(a)} + \underbrace{(z_t - v_t)^\top \delta'_t}_{(b)} + \underbrace{(z_t - v_t)^\top \delta''_t}_{(c)} + \underbrace{(v_t - z)^\top \delta_t}_{(d)}\Big) + \underbrace{\sum_{t=T-\tau+1}^{T}\alpha_t\big[(z_t - z)^\top g(z_t)\big]}_{(e)}\Bigg]. \qquad (3.1)$$
For term (a), we split $G(z_t, \xi_t)$ into three terms using the definition of the $L_2$-norm and the iteration formula for $z_t$, and then bound its summation by $\sum_{t=1}^{T-\tau}\big(\|\alpha_t G(z_t, \xi_t)\|^2 + \|z_t - z\|^2 - \|z_{t+1} - z\|^2\big)$. In this summation the last two terms telescope, leaving only their first and last occurrences. Swapping the max and $\sum$ operators and using the Lipschitz Assumption 5, the first term can be bounded. Term (c) includes the sum of $G(z_t, \xi_{t+\tau}) - G(z_t, \xi_t)$, which might be large in the Markov setting. We reformulate it into the sum of $G(z_{t-\tau}, \xi_t) - G(z_t, \xi_t)$ and use the smoothness Assumption 6 to bound it. Term (d) is similar to term (a), except that $g(z_t) - G(z_t, \xi_t)$ is the gradient used to update $v_t$; we bound it similarly to term (a). Term (e) is a constant that does not change much as $T \to \infty$, and we bound it directly through an upper bound on each of its own terms. Finally, we combine all the upper bounds on each term, use the mixing time Assumption 7 to choose $\tau = \tau(\eta)$, and obtain the error bound in Theorem 1. We decompose term (b) into a martingale part and an expectation part. By constructing a martingale difference sequence and using Azuma's inequality together with Assumption 7, we can bound term (b) and finally obtain the high-probability error bound.

Remark: (1) As $T \to \infty$, the error bound approaches 0 at rate $O\big(\frac{\sum_{t=1}^{T}\alpha_t^2}{\sum_{t=1}^{T}\alpha_t}\big)$. (2) The mixing time $\tau(\eta)$ influences the convergence rate: if the Markov process has better mixing (smaller $\tau(\eta)$), the algorithm converges faster. (3) If the data are i.i.d.
generated (so the mixing time $\tau(\eta) = 0$ for all $\eta$) and the step size is set to the constant $\frac{c}{L_1\sqrt{T}}$, our bound reduces to $Err_\varphi(\tilde{z}_1^T) \le \frac{1}{\sum_{t=1}^{T}\alpha_t}\big[A + B\sum_{t=1}^{T}\alpha_t^2\big] = O\big(\frac{L_1}{\sqrt{T}}\big)$, which is identical to previous work with constant step size in the i.i.d. setting (Liu et al. [2015], Nemirovski et al. [2009]). (4) The high-probability bound is similar to the expectation bound in the following Lemma 1 except for the last term; this is because we consider the deviation of the data around its expectation to derive the high-probability bound.

Lemma 1. Consider the convex-concave problem (2.5). Under the same assumptions as Theorem 1, for all $\eta > 0$ we have
$$\mathbb{E}_D[Err_\varphi(\tilde{z}_1^T)] \le \frac{1}{\sum_{t=1}^{T}\alpha_t}\Bigg[A + B\sum_{t=1}^{T}\alpha_t^2 + C\tau(\eta)\sum_{t=1}^{T}\alpha_t^2 + F\eta\sum_{t=1}^{T}\alpha_t + H\tau(\eta)\Bigg].$$

Proof sketch of Lemma 1. We start from the key decomposition (3.1) and bound each term in expectation. Each term can be bounded as before except for term (b). For term (b), since $(z_t - v_t)$ is not related to the max operator and is measurable with respect to $\mathcal{F}_{t-1}$, we can bound it through the definition of the mixing time and finally obtain the expectation bound.

3.2 Finite Sample Bounds for GTD Algorithms

As a specific convex-concave saddle point problem, the error bounds in Theorem 1 and Lemma 1 also provide error bounds for GTD with the following specifications of the Lipschitz constants.

Proposition 1. Suppose Assumptions 1-4 hold. Then the objective function in the GTD algorithms is Lipschitz and smooth with coefficients
$$L_1 \le \sqrt{2}\big(2D(1+\gamma)\rho_{max}L^2 d + \rho_{max}LR_{max} + \lambda_M\big), \qquad L_2 \le \sqrt{2}\big(2(1+\gamma)\rho_{max}L^2 d + \lambda_M\big),$$
where $\lambda_M$ is the largest singular value of $M$.

Theorem 2. Suppose Assumptions 1-4 hold. Then we have the following finite sample bounds for the error $\|V - \tilde{v}_1^T\|_\pi$ in the GTD algorithms. In the on-policy case, the bound in expectation is
$$O\Bigg(\frac{L\sqrt{L^4 d^3 \lambda_M \pi_{max}}\,(1+\tau(\eta))\,\pi_{max}\,o_1(T)}{\nu_C}\Bigg)$$
and with probability $1 - \delta$ is
$$O\Bigg(\frac{\sqrt{L^4 d^2 \lambda_M \pi_{max}}}{\nu_C}\Bigg(\sqrt{(1+\tau(\eta))L^2 d\, o_1(T)} + \sqrt{\tau(\eta)\log\frac{\tau(\eta)}{\delta}}\, o_2(T)\Bigg)\Bigg).$$
; in the off-policy case, the bound in expectation is

  O( L² d √(2 λ_C λ_M π_max) (1 + τ(η)) o₁(T) / ν(A^⊤M⁻¹A) )

and with probability 1 − δ it is

  O( (√(2 λ_C λ_M π_max) / ν(A^⊤M⁻¹A)) ( √(L⁴ d² (1 + τ(η)) o₁(T)) + √(τ(η) log(τ(η)/δ)) o₂(T) ) ),

where ν_C and ν(A^⊤M⁻¹A) are the smallest eigenvalues of C and A^⊤M⁻¹A respectively, λ_C is the largest singular value of C, o₁(T) = (Σ_{t=1}^{T} α_t²)/(Σ_{t=1}^{T} α_t), and o₂(T) = √(Σ_{t=1}^{T} α_t²)/(Σ_{t=1}^{T} α_t).

We make the following observations about Theorem 2.

The GTD algorithms do converge in the realistic Markov setting. By Theorem 2, the bound in expectation is O( √(1 + τ(η)) o₁(T) ) and with probability 1 − δ it is O( √((1 + τ(η)) o₁(T)) + √(τ(η) log(τ(η)/δ)) o₂(T) ). If the step size makes o₁(T) → 0 and o₂(T) → 0 as T → ∞, the GTD algorithms converge. Additionally, in the high probability bound, if Σ_{t=1}^{T} α_t² > 1 then o₁(T) dominates the order, and if Σ_{t=1}^{T} α_t² < 1 then o₂(T) dominates.

The choice of step size is flexible. Our finite sample bounds for the GTD algorithms converge to 0 if the step size satisfies Σ_{t=1}^{T} α_t → ∞ and (Σ_{t=1}^{T} α_t²)/(Σ_{t=1}^{T} α_t) < ∞ as T → ∞. This condition on the step size is much weaker than the constant step size required in previous work (Liu et al. [2015]), and the commonly used step sizes α_t = O(1/√t), α_t = O(1/t), and α_t = c = O(1/√T) all satisfy it. Specifically, for α_t = O(1/√t) the convergence rate is O(ln(T)/√T); for α_t = O(1/t) it is O(1/ln(T)); and for a constant step size the optimal setting is α_t = O(1/√T), considering the trade-off between o₁(T) and o₂(T), which gives a convergence rate of O(1/√T).

The mixing time matters. If the data are generated from a Markov process with a smaller mixing time, the error bound is smaller, and fewer samples are needed to achieve a fixed estimation error. This finding can explain why the experience replay trick (Lin [1993]) works.
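The step-size condition and the claimed rates can be checked numerically. Below is a small self-contained sketch (our own illustration; the horizon values are arbitrary) that computes o₁(T) and o₂(T) for the three common schedules:

```python
import math

# o1(T) = sum(alpha_t^2) / sum(alpha_t),  o2(T) = sqrt(sum(alpha_t^2)) / sum(alpha_t)
def o1_o2(alphas):
    s1 = sum(alphas)
    s2 = sum(a * a for a in alphas)
    return s2 / s1, math.sqrt(s2) / s1

results = {}
for T in (10_000, 1_000_000):
    schedules = {
        "1/sqrt(t)": [1.0 / math.sqrt(t) for t in range(1, T + 1)],
        "1/t":       [1.0 / t for t in range(1, T + 1)],
        "1/sqrt(T)": [1.0 / math.sqrt(T)] * T,   # constant step size
    }
    for name, alphas in schedules.items():
        o1, o2 = o1_o2(alphas)
        results[(name, T)] = (o1, o2)
        print(f"T={T:>9,}  alpha_t={name:<10}  o1={o1:.6f}  o2={o2:.6f}")

# All three schedules drive o1 and o2 toward 0 as T grows, in line with the
# O(ln T / sqrt(T)), O(1 / ln T), and O(1 / sqrt(T)) rates stated above.
```

For the constant schedule, o₁(T) = 1/√T exactly, which recovers the O(1/√T) rate directly.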
With experience replay, we store the agent's experiences (or data samples) at each step, and randomly sample one from the pool of stored samples to update the policy function. By Theorems 1.19-1.23 of Durrett [2016], it can be proved that for arbitrary η > 0 there exists t₀ such that for all t > t₀, max_i |N_t(i)/t − π(i)| ≤ η. That is, when the number of stored samples exceeds t₀, the mixing time of the new data process with experience replay is 0. Thus, the experience replay trick improves the mixing property of the data process, and hence improves the convergence rate.

Other factors that influence the finite sample bound: (1) As the feature norm L increases, the finite sample bound increases. This is consistent with the empirical finding of Dann et al. [2014] that normalization of the features is crucial for the estimation quality of GTD algorithms. (2) As the feature dimension d increases, the bound increases. Intuitively, more samples are needed for a linear approximation in a higher-dimensional feature space.

4 Experiments

In this section, we report simulation results that validate our theoretical findings. We consider the general convex-concave saddle point problem

  min_x max_y L(x, y) = ⟨b − Ax, y⟩ + (1/2)‖x‖² − (1/2)‖y‖²,   (4.1)

where A is an n × n matrix and b is an n × 1 vector; here we set n = 10. We conduct three experiments, with the step size set to α_t = c = 0.001, α_t = O(1/√t) = 0.015/√t, and α_t = O(1/t) = 0.03/t respectively. In each experiment we sample the data Â, b̂ in three ways: from two Markov chains that have different mixing times but share the same stationary distribution, or i.i.d. from the stationary distribution directly. We sample Â and b̂ from a Markov chain using the Metropolis-Hastings algorithm. Specifically, noticing that the mixing time of a Markov chain is positively correlated with the second largest eigenvalue of its transition probability matrix (Levin et al.
[2009]), we first construct two transition probability matrices with different second largest eigenvalues (both with 1001 states; the second largest eigenvalues are 0.634 and 0.31 respectively), and then use the Metropolis-Hastings algorithm to construct two Markov chains with the same stationary distribution. We run the gradient algorithm for the objective in (4.1) on the simulated data, with and without the experience replay trick. The primal-dual gap error curves are plotted in Figure 1. We make the following observations. (1) The error curves converge in the Markov setting under all three setups of the step size. (2) The error curves for data generated from the process with the smaller mixing time converge faster, and the error curve for i.i.d. data converges fastest. (3) The error curves for different step sizes converge at different rates. (4) With the experience replay trick, the error curves in the Markov settings converge faster than before. All these observations are consistent with our theoretical findings.

Figure 1: Experimental results. Panels: (a) α_t = c, (b) α_t = O(1/√t), (c) α_t = O(1/t), and (d)-(f) the same three step sizes with the experience replay trick.

5 Conclusion

In this paper, we proved finite sample bounds, in high probability and in expectation, for convex-concave saddle point problems in the more realistic Markov setting. We then obtained finite sample bounds for the GTD algorithms in both the on-policy and off-policy cases, since the GTD algorithms are specific convex-concave saddle point problems. Our finite sample bounds provide important theoretical guarantees for the GTD algorithms, as well as insights for improving them, including how to set the step size and the need to improve the mixing property of the data, for example via experience replay. In the future, we will study finite sample bounds for policy evaluation with nonlinear function approximation.
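A minimal version of the Section 4 simulation can be written in a few lines. The sketch below is our own small instance (n = 4, Gaussian i.i.d. noise on A and b, α_t = O(1/√t); these choices are illustrative and not the paper's exact setup). It runs stochastic gradient descent-ascent on objective (4.1) and checks the first-order optimality residuals of the last iterate:

```python
import math
import random

random.seed(0)
n = 4

# our own small instance of (4.1): min_x max_y <b - Ax, y> + 0.5||x||^2 - 0.5||y||^2
A = [[1.0 if i == j else 0.2 for j in range(n)] for i in range(n)]
b = [1.0] * n

def matvec(M, v):          # M v
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def matvec_t(M, v):        # M^T v
    return [sum(M[i][j] * v[i] for i in range(n)) for j in range(n)]

x, y = [0.0] * n, [0.0] * n
for t in range(1, 20001):
    alpha = 0.5 / math.sqrt(t)                       # alpha_t = O(1/sqrt(t))
    Ah = [[A[i][j] + random.gauss(0, 0.1) for j in range(n)] for i in range(n)]
    bh = [bi + random.gauss(0, 0.1) for bi in b]     # noisy i.i.d. samples of (A, b)
    gx = [xi - ti for xi, ti in zip(x, matvec_t(Ah, y))]             # grad_x = x - A^T y
    gy = [bi - ai - yi for bi, ai, yi in zip(bh, matvec(Ah, x), y)]  # grad_y = b - A x - y
    x = [xi - alpha * g for xi, g in zip(x, gx)]     # descent step in x
    y = [yi + alpha * g for yi, g in zip(y, gy)]     # ascent step in y

# at the saddle point: x = A^T y and y = b - A x, so both residuals should be small
rx = math.sqrt(sum((xi - ti) ** 2 for xi, ti in zip(x, matvec_t(A, y))))
ry = math.sqrt(sum((bi - ai - yi) ** 2 for bi, ai, yi in zip(b, matvec(A, x), y)))
print(f"optimality residuals: {rx:.4f} {ry:.4f}")
```

Running the same loop with i.i.d. noise replaced by a slowly mixing Markov source of (Â, b̂) would exhibit the slower convergence described in the experiments.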
Acknowledgment

This work was supported by the Foundation for the Author of National Excellent Doctoral Dissertation of PR China (FANEDD 201312) and the National Center for Mathematics and Interdisciplinary Sciences of CAS.

References

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086, 2016.

Shalabh Bhatnagar, Doina Precup, David Silver, Richard S. Sutton, Hamid R. Maei, and Csaba Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pages 1204–1212, 2009.

Christoph Dann, Gerhard Neumann, and Jan Peters. Policy evaluation with temporal differences: a survey and comparison. Journal of Machine Learning Research, 15(1):809–883, 2014.

John C. Duchi, Alekh Agarwal, Mikael Johansson, and Michael I. Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.

Richard Durrett. Poisson processes. In Essentials of Stochastic Processes, pages 95–124. Springer, 2016.

Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Finite-sample analysis of least-squares policy iteration. Journal of Machine Learning Research, 13(1):3041–3074, 2012.

David Asher Levin, Yuval Peres, and Elizabeth Lee Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2009.

Long-Ji Lin. Reinforcement learning for robots using neural networks. PhD thesis, Carnegie Mellon University, 1993.

Bo Liu, Ji Liu, Mohammad Ghavamzadeh, Sridhar Mahadevan, and Marek Petrik. Finite-sample analysis of proximal gradient TD algorithms. In UAI, pages 504–513. Citeseer, 2015.

Hamid Reza Maei. Gradient temporal-difference learning algorithms. PhD thesis, University of Alberta, 2011.
Sean P. Meyn and Richard L. Tweedie. Markov Chains and Stochastic Stability. Springer Science & Business Media, 2012.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.

Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, pages 387–395, 2014.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.

Richard S. Sutton, Hamid R. Maei, and Csaba Szepesvári. A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pages 1609–1616, 2009a.

Richard S. Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora.
Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, pages 993–1000, 2009b.

Manel Tagorti and Bruno Scherrer. On the rate of convergence and error bounds for LSTD(λ). In Proceedings of the 32nd International Conference on Machine Learning, pages 1521–1529, 2015.

John N. Tsitsiklis, Benjamin Van Roy, et al. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.

H. Yu. On convergence of emphatic temporal-difference learning. In Proceedings of The 28th Conference on Learning Theory, pages 1724–1751, 2015.
Variational Inference via χ Upper Bound Minimization

Adji B. Dieng (Columbia University), Dustin Tran (Columbia University), Rajesh Ranganath (Princeton University), John Paisley (Columbia University), David M. Blei (Columbia University)

Abstract

Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q‖p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p‖q), the χ-divergence from p to q. CHIVI minimizes an upper bound of the model evidence, which we term the χ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance.

1 Introduction

Bayesian analysis provides a foundation for reasoning with probabilistic models. We first set a joint distribution p(x, z) of latent variables z and observed variables x. We then analyze data through the posterior, p(z | x). In most applications, the posterior is difficult to compute because the marginal likelihood p(x) is intractable. We must use approximate posterior inference methods such as Monte Carlo [1] and variational inference [2]. This paper focuses on variational inference.

Variational inference approximates the posterior using optimization.
The idea is to posit a family of approximating distributions and then to find the member of the family that is closest to the posterior. Typically, closeness is defined by the Kullback-Leibler (KL) divergence KL(q‖p), where q(z; λ) is a variational family indexed by parameters λ. This approach, which we call KLVI, also provides the evidence lower bound (ELBO), a convenient lower bound of the model evidence log p(x). KLVI scales well and is suited to applications that use complex models to analyze large data sets [3]. But it has drawbacks. For one, it tends to favor underdispersed approximations relative to the exact posterior [4, 5]. This produces difficulties with light-tailed posteriors when the variational distribution has heavier tails. For example, KLVI for Gaussian process classification typically uses a Gaussian approximation; this leads to unstable optimization and a poor approximation [6].

One alternative to KLVI is expectation propagation (EP), which enjoys good empirical performance on models with light-tailed posteriors [7, 8]. Procedurally, EP reverses the arguments in the KL divergence and performs local minimizations of KL(p‖q); this corresponds to iterative moment matching on partitions of the data. Relative to KLVI, EP produces overdispersed approximations. But EP also has drawbacks. It is not guaranteed to converge [7, Figure 3.6]; it does not provide an easy estimate of the marginal likelihood; and it does not optimize a well-defined global objective [9].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper we develop a new algorithm for approximate posterior inference, χ-divergence variational inference (CHIVI). CHIVI minimizes the χ-divergence from the posterior to the variational family,

  D_χ²(p‖q) = E_{q(z;λ)}[ (p(z | x)/q(z; λ))² − 1 ].   (1)

CHIVI enjoys advantages of both EP and KLVI.
Like EP, it produces overdispersed approximations; like KLVI, it optimizes a well-defined objective and estimates the model evidence. As we mentioned, KLVI optimizes a lower bound on the model evidence. The idea behind CHIVI is to optimize an upper bound, which we call the χ upper bound (CUBO). Minimizing the CUBO is equivalent to minimizing the χ-divergence. In providing an upper bound, CHIVI can be used (in concert with KLVI) to sandwich estimate the model evidence. Sandwich estimates are useful for tasks like model selection [10]. Existing work on sandwich estimation relies on MCMC and only evaluates simulated data [11]. We derive a sandwich theorem (Section 2) that relates CUBO and ELBO. Section 3 demonstrates sandwich estimation on real data. Aside from providing an upper bound, there are two additional benefits to CHIVI. First, it is a black-box inference algorithm [12] in that it does not need model-specific derivations and it is easy to apply to a wide class of models. It minimizes an upper bound in a principled way using unbiased reparameterization gradients [13, 14] of the exponentiated CUBO. Second, it is a viable alternative to EP. The χ-divergence enjoys the same “zero-avoiding” behavior of EP, which seeks to place positive mass everywhere, and so CHIVI is useful when the KL divergence is not a good objective (such as for light-tailed posteriors). Unlike EP, CHIVI is guaranteed to converge; provides an easy estimate of the marginal likelihood; and optimizes a well-defined global objective. Section 3 shows that CHIVI outperforms KLVI and EP for Gaussian process classification. The rest of this paper is organized as follows. Section 2 derives the CUBO, develops CHIVI, and expands on its zero-avoiding property that finds overdispersed posterior approximations. Section 3 applies CHIVI to Bayesian probit regression, Gaussian process classification, and a Cox process model of basketball plays. 
On Bayesian probit regression and Gaussian process classification, it yielded lower classification error than KLVI and EP. When modeling basketball data with a Cox process, it gave more accurate estimates of posterior variance than KLVI.

Related work. The most widely studied variational objective is KL(q‖p). The main alternative is EP [15, 7], which locally minimizes KL(p‖q). Recent work revisits EP from the perspective of distributed computing [16, 17, 18] and also revisits [19], which studies local minimizations with the general family of α-divergences [20, 21]. CHIVI relates to EP and its extensions in that it leads to overdispersed approximations relative to KLVI. However, unlike [19, 20], CHIVI does not rely on tying local factors; it optimizes a well-defined global objective. In this sense, CHIVI relates to the recent work on alternative divergence measures for variational inference [21, 22].

A closely related work is [21]. They perform black-box variational inference using the reverse α-divergence Dα(q‖p), which is a valid divergence when α > 0.¹ Their work shows that minimizing Dα(q‖p) is equivalent to maximizing a lower bound of the model evidence. No positive value of α in Dα(q‖p) leads to the χ-divergence. Even though taking α ≤ 0 leads to the CUBO, it does not correspond to a valid divergence in Dα(q‖p). The algorithm in [21] also cannot minimize the upper bound we study in this paper. In this sense, our work complements [21].

An exciting concurrent work by [23] also studies the χ-divergence. Their work focuses on upper bounding the partition function in undirected graphical models. This is a complementary application: Bayesian inference and undirected models both involve an intractable normalizing constant.

2 χ-Divergence Variational Inference

We present the χ-divergence for variational inference. We describe some of its properties and develop CHIVI, a black box algorithm that minimizes the χ-divergence for a large class of models.
¹It satisfies D(p‖q) ≥ 0 and D(p‖q) = 0 ⟺ p = q almost everywhere.

Variational inference (VI) casts Bayesian inference as optimization [24]. VI posits a family of approximating distributions and finds the closest member to the posterior. In its typical formulation, VI minimizes the Kullback-Leibler divergence from q(z; λ) to p(z | x). Minimizing the KL divergence is equivalent to maximizing the ELBO, a lower bound to the model evidence log p(x).

2.1 The χ-divergence

Maximizing the ELBO imposes properties on the resulting approximation, such as underestimation of the posterior's support [4, 5]. These properties may be undesirable, especially when dealing with light-tailed posteriors such as in Gaussian process classification [6]. We consider the χ-divergence (Equation 1). Minimizing the χ-divergence induces alternative properties on the resulting approximation. (See Appendix 5 for more details on all these properties.) Below we describe a key property which leads to overestimation of the posterior's support.

Zero-avoiding behavior: Optimizing the χ-divergence leads to a variational distribution with zero-avoiding behavior, similar to EP [25]. Namely, the χ-divergence is infinite whenever q(z; λ) = 0 and p(z | x) > 0. Thus when minimizing it, p(z | x) > 0 forces q(z; λ) > 0. This means q avoids having zero mass at locations where p has nonzero mass.

The classical objective KL(q‖p) leads to approximate posteriors with the opposite behavior, called zero-forcing. Namely, KL(q‖p) is infinite when p(z | x) = 0 and q(z; λ) > 0. Therefore the optimal variational distribution q will be 0 when p(z | x) = 0. This zero-forcing behavior leads to degenerate solutions during optimization, and is the source of “pruning” often reported in the literature (e.g., [26, 27]).
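The two behaviors can be seen numerically. The sketch below is our own toy example, not from the paper: it fits a single Gaussian q = N(μ, s²) to a bimodal target p by a crude grid search under each objective.

```python
import math

# Toy illustration: zero-forcing KL(q||p) vs zero-avoiding chi^2(p||q)
# when fitting q = N(mu, s^2) to an equal mixture of N(-3,1) and N(3,1).
XS = [-10 + 0.025 * i for i in range(801)]
DX = 0.025

def gauss(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def p(x):  # bimodal "posterior"
    return 0.5 * gauss(x, -3, 1) + 0.5 * gauss(x, 3, 1)

def kl_qp(mu, s):    # KL(q||p): penalizes q mass where p ~ 0 (zero-forcing)
    return sum(gauss(x, mu, s) * (math.log(gauss(x, mu, s) + 1e-300)
                                  - math.log(p(x) + 1e-300)) for x in XS) * DX

def chi2_pq(mu, s):  # chi^2(p||q) = E_q[(p/q)^2] - 1: penalizes q ~ 0 where p > 0
    return sum(p(x) ** 2 / (gauss(x, mu, s) + 1e-300) for x in XS) * DX - 1

grid = [(float(mu), s) for mu in range(-4, 5)
        for s in (0.6, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)]
best_kl = min(grid, key=lambda t: kl_qp(*t))
best_chi = min(grid, key=lambda t: chi2_pq(*t))
print("KL(q||p)-optimal    (mu, s):", best_kl)   # locks onto a single mode
print("chi^2(p||q)-optimal (mu, s):", best_chi)  # wide q covering both modes
```

The KL-optimal q sits narrowly on one mode (underdispersed), while the χ²-optimal q is centered between the modes with a much larger variance (overdispersed), matching the discussion above.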
For example, if the approximating family q has heavier tails than the target posterior p, the variational distribution must become overconfident enough that the heavier tail does not allocate mass outside the lighter tail's support.²

2.2 CUBO: the χ Upper Bound

We derive a tractable objective for variational inference with the χ²-divergence and also generalize it to the χⁿ-divergence for n > 1. Consider the optimization problem of minimizing Equation 1. We seek a relationship between the χ²-divergence and log p(x). Consider

  E_{q(z;λ)}[ (p(x, z)/q(z; λ))² ] = p(x)² E_{q(z;λ)}[ (p(z | x)/q(z; λ))² ] = p(x)² [1 + D_χ²(p(z | x)‖q(z; λ))].

Taking logarithms on both sides, we find a relationship analogous to how KL(q‖p) relates to the ELBO. Namely, the χ²-divergence satisfies

  (1/2) log(1 + D_χ²(p(z | x)‖q(z; λ))) = −log p(x) + (1/2) log E_{q(z;λ)}[ (p(x, z)/q(z; λ))² ].

By monotonicity of log, and because log p(x) is constant, minimizing the χ²-divergence is equivalent to minimizing

  L_χ²(λ) = (1/2) log E_{q(z;λ)}[ (p(x, z)/q(z; λ))² ].

Furthermore, by nonnegativity of the χ²-divergence, this quantity is an upper bound to the model evidence. We call this objective the χ upper bound (CUBO).

A general upper bound. The derivation extends to upper bound the general χⁿ-divergence,

  CUBO_n = L_χⁿ(λ) = (1/n) log E_{q(z;λ)}[ (p(x, z)/q(z; λ))ⁿ ].   (2)

This produces a family of bounds. When n < 1, CUBO_n is a lower bound, and minimizing it for these values of n does not minimize the χ-divergence (rather, when n < 1, we recover the reverse α-divergence and the VR-bound [21]). When n = 1, the bound is tight: CUBO₁ = log p(x). For n ≥ 1, CUBO_n is an upper bound to the model evidence. In this paper we focus on n = 2. Other values of n are possible depending on the application and dataset. We chose n = 2 because it is the most standard, and is equivalent to finding the optimal proposal in importance sampling. See Appendix 4 for more details.

²Zero-forcing may be preferable in settings such as multimodal posteriors with unimodal approximations: for predictive tasks, it helps to concentrate on one mode rather than spread mass over all of them [5]. In this paper, we focus on applications with light-tailed posteriors and one to relatively few modes.

Sandwiching the model evidence. Equation 2 has practical value. We can minimize the CUBO_n and maximize the ELBO. This produces a sandwich on the model evidence. (See Appendix 8 for a simulated illustration.) The following sandwich theorem states that the gap induced by CUBO_n and ELBO increases with n. This suggests that taking n as close to 1 as possible approximates log p(x) with higher precision. When we further decrease n to 0, CUBO_n becomes a lower bound and tends to the ELBO.

Theorem 1 (Sandwich Theorem): Define CUBO_n as in Equation 2. Then the following holds:
• For all n ≥ 1, ELBO ≤ log p(x) ≤ CUBO_n.
• For all n ≥ 1, CUBO_n is a non-decreasing function of the order n of the χ-divergence.
• lim_{n→0} CUBO_n = ELBO.

See proof in Appendix 1. Theorem 1 can be used to estimate log p(x), which is important for many applications such as the evidence framework [28], where the marginal likelihood is argued to embody an Occam's razor. Model selection based solely on the ELBO is inappropriate because of the possible variation in the tightness of this bound. With an accompanying upper bound, one can perform what we call maximum entropy model selection, in which each model evidence value is chosen to maximize the entropy of the resulting distribution on models. We leave this as future work. Theorem 1 can also help estimate Bayes factors [29]. In general, this technique is important as there is little existing work: for example, Ref. [11] proposes an MCMC approach and evaluates only simulated data. We illustrate sandwich estimation in Section 3 on UCI datasets.
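On a toy conjugate model where log p(x) is available in closed form, the sandwich can be verified directly by Monte Carlo. The sketch below is our own illustration (the model, the deliberately mismatched q, and the sample size are our choices):

```python
import math
import random

random.seed(0)

# Toy conjugate model: z ~ N(0,1), x | z ~ N(z,1), observed x = 1,
# so log p(x) = log N(x; 0, 2) exactly and the sandwich can be checked.
x = 1.0
log_px = -0.5 * math.log(2 * math.pi * 2.0) - x * x / 4.0

mu, sigma = 0.4, 0.8  # deliberately mismatched q(z) = N(mu, sigma^2)
S = 100_000
log_w = []
for _ in range(S):
    z = mu + sigma * random.gauss(0.0, 1.0)
    log_joint = (-0.5 * math.log(2 * math.pi) - 0.5 * z * z          # log p(z)
                 - 0.5 * math.log(2 * math.pi) - 0.5 * (x - z) ** 2) # log p(x|z)
    log_q = (-0.5 * math.log(2 * math.pi * sigma * sigma)
             - (z - mu) ** 2 / (2 * sigma * sigma))
    log_w.append(log_joint - log_q)                                  # log w = log p(x,z) - log q(z)

elbo = sum(log_w) / S                                   # E_q[log w]
m = max(2.0 * lw for lw in log_w)                       # log-sum-exp stabilization
cubo2 = 0.5 * (m + math.log(sum(math.exp(2.0 * lw - m) for lw in log_w) / S))
print(f"ELBO = {elbo:.4f} <= log p(x) = {log_px:.4f} <= CUBO2 = {cubo2:.4f}")
```

Because q is not the exact posterior N(0.5, 0.5), both gaps are strictly positive, and the sandwich brackets log p(x) as in Theorem 1.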
2.3 Optimizing the CUBO

We derived the CUBO_n, a general upper bound to the model evidence that can be used to minimize the χ-divergence. We now develop CHIVI, a black box algorithm that minimizes CUBO_n. The goal in CHIVI is to minimize the CUBO_n with respect to variational parameters,

  CUBO_n(λ) = (1/n) log E_{q(z;λ)}[ (p(x, z)/q(z; λ))ⁿ ].

The expectation in the CUBO_n is usually intractable. Thus we use Monte Carlo to construct an estimate. One approach is to naively perform Monte Carlo on this objective,

  CUBO_n(λ) ≈ (1/n) log (1/S) Σ_{s=1}^{S} [ (p(x, z^(s))/q(z^(s); λ))ⁿ ],

for S samples z^(1), ..., z^(S) ∼ q(z; λ). However, by Jensen's inequality, the log transform of the expectation implies that this is a biased estimate of CUBO_n(λ):

  E_q[ (1/n) log (1/S) Σ_{s=1}^{S} ( (p(x, z^(s))/q(z^(s); λ))ⁿ ) ] ≠ CUBO_n.

In fact this expectation changes during optimization and depends on the sample size S. The objective is not guaranteed to be an upper bound if S is not chosen appropriately from the beginning. This problem does not exist for lower bounds, because a Monte Carlo approximation of a lower bound is still a lower bound; this is why the approach in [21] works for lower bounds but not for upper bounds. Furthermore, gradients of this biased Monte Carlo objective are also biased.

We propose a way to minimize upper bounds which can also be used for lower bounds. The approach keeps the upper bounding property intact. It does so by minimizing a Monte Carlo approximation of the exponentiated upper bound,

  L = exp{n · CUBO_n(λ)}.

Algorithm 1: χ-divergence variational inference (CHIVI)
  Input: Data x, model p(x, z), variational family q(z; λ).
  Output: Variational parameters λ.
  Initialize λ randomly.
  while not converged do
    Draw S samples z^(1), ..., z^(S) from q(z; λ) and a data subsample {x_{i_1}, ..., x_{i_M}}.
    Set ρ_t according to a learning rate schedule.
    Set log w^(s) = log p(z^(s)) + (N/M) Σ_{j=1}^{M} log p(x_{i_j} | z^(s)) − log q(z^(s); λ_t), for s ∈ {1, ..., S}.
    Set w^(s) = exp(log w^(s) − max_s log w^(s)), for s ∈ {1, ..., S}.
    Update λ_{t+1} = λ_t − ((1 − n) · ρ_t / S) Σ_{s=1}^{S} [ (w^(s))ⁿ ∇_λ log q(z^(s); λ_t) ].
  end

By monotonicity of exp, this objective admits the same optima as CUBO_n(λ). Monte Carlo produces an unbiased estimate, and the number of samples only affects the variance of the gradients. We minimize it using reparameterization gradients [13, 14]. These gradients apply to models with differentiable latent variables. Formally, assume we can rewrite the generative process as z = g(λ, ε), where ε ∼ p(ε) and g is some deterministic function. Then

  L̂ = (1/B) Σ_{b=1}^{B} ( p(x, g(λ, ε^(b))) / q(g(λ, ε^(b)); λ) )ⁿ

is an unbiased estimator of L, and its gradient is

  ∇_λ L̂ = (n/B) Σ_{b=1}^{B} ( p(x, g(λ, ε^(b))) / q(g(λ, ε^(b)); λ) )ⁿ ∇_λ log ( p(x, g(λ, ε^(b))) / q(g(λ, ε^(b)); λ) ).   (3)

(See Appendix 7 for a more detailed derivation and also a more general alternative with score function gradients [30].)

Computing Equation 3 requires the full dataset x. We can apply the “average likelihood” technique from EP [18, 31]. Consider data {x_1, . . . , x_N} and a subsample {x_{i_1}, ..., x_{i_M}}. We approximate the full log-likelihood by

  log p(x | z) ≈ (N/M) Σ_{j=1}^{M} log p(x_{i_j} | z).

Using this proxy to the full dataset we derive CHIVI, an algorithm in which each iteration depends on only a mini-batch of data. CHIVI is a black box algorithm for performing approximate inference with the χⁿ-divergence. Algorithm 1 summarizes the procedure. In practice, to avoid underflow, we subtract the maximum of the logarithm of the importance weights, defined as log w = log p(x, z) − log q(z; λ). Stochastic optimization theory still gives us convergence with this approach [32].

3 Empirical Study

We developed CHIVI, a black box variational inference algorithm for minimizing the χ-divergence. We now study CHIVI with several models: probit regression, Gaussian process (GP) classification, and Cox processes. With probit regression, we demonstrate the sandwich estimator on real and synthetic data. CHIVI provides a useful tool to estimate the marginal likelihood.
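Before turning to the empirical study, Algorithm 1 can be sketched end-to-end on a toy model. The following is our own minimal instantiation (one Gaussian latent variable, no data subsampling, n = 2; the step size and sample counts are arbitrary choices), using the self-normalized weights and the (1 − n) update of Algorithm 1:

```python
import math
import random

random.seed(1)

# Toy model: z ~ N(0,1), x | z ~ N(z,1), observed x = 1.
# Exact posterior is N(0.5, 0.5), so CHIVI with q = N(mu, sigma^2)
# should drive (mu, sigma) toward (0.5, sqrt(0.5) ~ 0.707).
x_obs = 1.0
n = 2                      # chi^2-divergence
S = 200                    # samples per iteration
rho = 0.05                 # constant learning rate (a simple schedule choice)

mu, log_sigma = 0.0, 0.0   # variational parameters lambda = (mu, log sigma)
for t in range(3000):
    sigma = math.exp(log_sigma)
    zs = [mu + sigma * random.gauss(0, 1) for _ in range(S)]
    # log w = log p(x,z) - log q(z); additive constants cancel after normalization
    log_w = [(-0.5 * z * z - 0.5 * (x_obs - z) ** 2)
             - (-log_sigma - (z - mu) ** 2 / (2 * sigma * sigma)) for z in zs]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]       # subtract max to avoid underflow
    # score gradients of log q(z; lambda) at each sample
    g_mu = [(z - mu) / sigma ** 2 for z in zs]
    g_ls = [((z - mu) / sigma) ** 2 - 1 for z in zs]
    # Algorithm 1 update: lambda <- lambda - (1-n) * rho/S * sum_s w^n grad log q
    mu -= (1 - n) * rho / S * sum(ws ** n * g for ws, g in zip(w, g_mu))
    log_sigma -= (1 - n) * rho / S * sum(ws ** n * g for ws, g in zip(w, g_ls))

print(f"mu = {mu:.3f} (exact 0.500), sigma = {math.exp(log_sigma):.3f} (exact 0.707)")
```

The update is sign-correct because ∇_λ E_q[wⁿ] = (1 − n) E_q[wⁿ ∇_λ log q]; the per-iteration normalization by max_s log w^(s) only rescales the step by a positive factor.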
We also show that for this model, where the ELBO is applicable, CHIVI works well and yields good test error rates.

Figure 1: Sandwich gap via CHIVI and BBVI on different datasets. The first two plots correspond to sandwich plots (CUBO upper bound and ELBO lower bound against epochs) for the two UCI datasets Ionosphere and Heart respectively. The last plot corresponds to a sandwich for generated data where we know the log marginal likelihood of the data. There the gap is tight after only a few iterations. More sandwich plots can be found in the appendix.

Table 1: Test error for Bayesian probit regression. The lower the better. CHIVI (this paper) yields lower test error rates when compared to BBVI [12] and EP on most datasets.

  Dataset    | BBVI          | EP            | CHIVI
  Pima       | 0.235 ± 0.006 | 0.234 ± 0.006 | 0.222 ± 0.048
  Ionos      | 0.123 ± 0.008 | 0.124 ± 0.008 | 0.116 ± 0.05
  Madelon    | 0.457 ± 0.005 | 0.445 ± 0.005 | 0.453 ± 0.029
  Covertype  | 0.157 ± 0.01  | 0.155 ± 0.018 | 0.154 ± 0.014

Second, we compare CHIVI to Laplace and EP on GP classification, a model class for which KLVI fails (because the typically chosen variational distribution has heavier tails than the posterior).³ In these settings, EP has been the method of choice. CHIVI outperforms both of these methods.

Third, we show that CHIVI does not suffer from the posterior support underestimation problem that results from maximizing the ELBO. For that we analyze Cox processes, a type of spatial point process, to compare the profiles of different NBA basketball players. We find CHIVI yields better posterior uncertainty estimates (using HMC as the ground truth).

3.1 Bayesian Probit Regression

We analyze inference for Bayesian probit regression. First, we illustrate sandwich estimation on UCI datasets.
Figure 1 illustrates the bounds of the log marginal likelihood given by the ELBO and the CUBO. Using both quantities provides a reliable approximation of the model evidence. In addition, these figures show convergence for CHIVI, which EP does not always satisfy. We also compared the predictive performance of CHIVI, EP, and KLVI. We used a minibatch size of 64 and 2000 iterations for each batch. We computed the average classification error rate and its standard deviation using 50 random splits of the data. We split all the datasets with 90% of the data for training and 10% for testing. For the Covertype dataset, we implemented Bayesian probit regression to discriminate class 1 against all other classes. Table 1 shows the average error rate for KLVI, EP, and CHIVI. CHIVI performs better on all but one dataset.

3.2 Gaussian Process Classification

GP classification is an alternative to probit regression. The posterior is analytically intractable because the likelihood is not conjugate to the prior. Moreover, the posterior tends to be skewed. EP has been the method of choice for approximating the posterior [8]. We choose a factorized Gaussian for the variational distribution q and fit its mean and log variance parameters. On UCI benchmark datasets, we compared the predictive performance of CHIVI to EP and Laplace. Table 2 summarizes the results. The error rates for CHIVI correspond to the average of 10 error rates obtained by dividing the data into 10 folds, applying CHIVI to 9 folds to learn the variational parameters, and performing prediction on the remainder. The kernel hyperparameters were chosen

³For KLVI, we use the black box variational inference (BBVI) version [12], specifically via Edward [33].

Table 2: Test error for Gaussian process classification. The lower the better. CHIVI (this paper) yields lower test error rates when compared to Laplace and EP on most datasets.
  Dataset | Laplace | EP          | CHIVI
  Crabs   | 0.02    | 0.02        | 0.03 ± 0.03
  Sonar   | 0.154   | 0.139       | 0.055 ± 0.035
  Ionos   | 0.084   | 0.08 ± 0.04 | 0.069 ± 0.034

Table 3: Average L1 error for posterior uncertainty estimates (ground truth from HMC). We find that CHIVI is similar to or better than BBVI at capturing posterior uncertainties. Demarcus Cousins, who plays center, stands out in particular. His shots are concentrated near the basket, so the posterior is uncertain over a large part of the court (Figure 2).

         | Curry | Demarcus | Lebron | Duncan
  CHIVI  | 0.060 | 0.073    | 0.0825 | 0.0849
  BBVI   | 0.066 | 0.082    | 0.0812 | 0.0871

using grid search. The error rates for the other methods correspond to the best results reported in [8] and [34]. On all the datasets, CHIVI performs as well as or better than EP and Laplace.

3.3 Cox Processes

Finally we study Cox processes. They are Poisson processes with stochastic rate functions. They capture dependence between the frequency of points in different regions of a space. We apply Cox processes to model the spatial locations of shots (made and missed) from the 2015-2016 NBA season [35]. The data are from 308 NBA players who took more than 150,000 shots in total. The nth player's set of M_n shot attempts is x_n = {x_{n,1}, ..., x_{n,M_n}}, and the location of the mth shot by the nth player on the basketball court is x_{n,m} ∈ [−25, 25] × [0, 40]. Let PP(λ) denote a Poisson process with intensity function λ, and let K be the covariance matrix resulting from a kernel applied to every location of the court. The generative process for the nth player's shots is

  K_{i,j} = k(x_i, x_j) = σ² exp(−‖x_i − x_j‖² / (2φ²)),
  f ∼ GP(0, k(·, ·)),  λ = exp(f),  x_{n,k} ∼ PP(λ) for k ∈ {1, ..., M_n}.

The kernel of the Gaussian process encodes the spatial correlation between different areas of the basketball court. The model treats the N players as independent, but the kernel K introduces correlation between the shots attempted by a given player. Our goal is to infer the intensity function λ(·) for each player.
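The generative process above can be sketched directly on a coarse discretization of the court. The following is our own illustrative simulation (the grid size, kernel hyperparameters σ² = 1 and φ = 8, the jitter, and the intensity scaling are arbitrary choices, not values fit to the NBA data):

```python
import math
import random

random.seed(0)

G = 6                                    # G x G grid over the court [-25,25] x [0,40]
cells = [(-25 + 50 * (i + 0.5) / G, 40 * (j + 0.5) / G)
         for i in range(G) for j in range(G)]
m = len(cells)
sigma2, phi = 1.0, 8.0                   # kernel variance and lengthscale (illustrative)

def k(a, b):                             # squared-exponential kernel
    d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return sigma2 * math.exp(-d2 / (2 * phi * phi))

K = [[k(cells[i], cells[j]) + (1e-4 if i == j else 0.0) for j in range(m)]
     for i in range(m)]                  # small diagonal jitter for numerical stability

# Cholesky K = L L^T, then f = L eps is a draw from GP(0, k) on the grid
L = [[0.0] * m for _ in range(m)]
for i in range(m):
    for j in range(i + 1):
        s = sum(L[i][r] * L[j][r] for r in range(j))
        if i == j:
            L[i][j] = math.sqrt(max(K[i][i] - s, 1e-12))  # guard against round-off
        else:
            L[i][j] = (K[i][j] - s) / L[j][j]

eps = [random.gauss(0, 1) for _ in range(m)]
f = [sum(L[i][r] * eps[r] for r in range(i + 1)) for i in range(m)]
lam = [math.exp(fi) for fi in f]         # intensity lambda = exp(f)

def poisson(rate):                       # Knuth's method, fine for small rates
    limit, cnt, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return cnt
        cnt += 1

cell_area = (50.0 / G) * (40.0 / G)
counts = [poisson(l * cell_area * 0.01) for l in lam]  # simulated shot counts per cell
print("simulated shot counts per cell:", counts)
print("total shots:", sum(counts))
```

Sampling the Poisson process cell by cell with rate λ times the cell area is the standard discretization of PP(λ); inference then amounts to recovering f (and hence λ) from such counts.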
We compare the shooting profiles of different players using these inferred intensity surfaces; the results are shown in Figure 2. The shooting profiles of Demarcus Cousins and Stephen Curry are captured by both BBVI and CHIVI. BBVI has lower posterior uncertainty, while CHIVI provides more overdispersed solutions. We plot the profiles for two more players, LeBron James and Tim Duncan, in the appendix. In Table 3, we compare the posterior uncertainty estimates of CHIVI and BBVI to those of HMC, a computationally expensive Markov chain Monte Carlo procedure that we treat as exact, using the average L1 distance from HMC as the error measure. We do this for four players: Stephen Curry, Demarcus Cousins, LeBron James, and Tim Duncan. We find that CHIVI is similar to or better than BBVI, especially on players like Demarcus Cousins who shoot in a limited part of the court.

4 Discussion

We described CHIVI, a black box algorithm that minimizes the χ-divergence by minimizing the CUBO. We motivated CHIVI as a useful alternative to EP. We showed how the approach used in CHIVI enables upper bound minimization, in contrast to existing α-divergence minimization techniques. This enables sandwich estimation using variational inference instead of Markov chain Monte Carlo.
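The sandwich idea can be checked end to end on a toy model where the evidence has a closed form. The sketch below is a hypothetical example (not from the paper): for a conjugate Gaussian model it Monte Carlo estimates the ELBO and the CUBO with n = 2, i.e. CUBO_2 = (1/2) log E_q[(p(x,z)/q(z))^2], and the two bounds bracket log p(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1), observed x = 1.0,
# so the exact evidence is p(x) = N(x; 0, 2).
x = 1.0
exact_log_evidence = -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / 4.0

# A deliberately mismatched variational family q(z) = N(mu, s^2)
mu, s = 0.4, 0.9

def log_joint(z):
    return (-0.5 * np.log(2 * np.pi) - 0.5 * z ** 2
            - 0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2)

def log_q(z):
    return -0.5 * np.log(2 * np.pi * s ** 2) - (z - mu) ** 2 / (2 * s ** 2)

z = mu + s * rng.standard_normal(200_000)
log_w = log_joint(z) - log_q(z)              # log importance weights

elbo = log_w.mean()                          # lower bound: E_q[log w]
n = 2.0                                      # chi^2 case of the CUBO
# CUBO_n = (1/n) log E_q[w^n], with a log-sum-exp for numerical stability
cubo = (np.logaddexp.reduce(n * log_w) - np.log(log_w.size)) / n

print(f"ELBO {elbo:.3f} <= log p(x) {exact_log_evidence:.3f} <= CUBO {cubo:.3f}")
```

Exponentiating the log weights inside the CUBO is exactly the source of the high-variance behavior discussed below; the log-sum-exp keeps this toy estimate stable.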
Figure 2: Basketball players' shooting profiles as inferred by BBVI [12], CHIVI (this paper), and Hamiltonian Monte Carlo (HMC). The top row displays the raw data: made shots (green) and missed shots (red). The second and third rows display the posterior intensities inferred by BBVI, CHIVI, and HMC for Stephen Curry and Demarcus Cousins, respectively. Both BBVI and CHIVI capture the shooting behavior of both players in terms of the posterior mean. The last two rows display the posterior uncertainty inferred by BBVI, CHIVI, and HMC for the two players. Compared to BBVI, CHIVI tends to infer higher posterior uncertainty in areas where data are scarce; this illustrates the variance underestimation problem of KLVI, which CHIVI avoids. More player profiles with posterior mean and uncertainty estimates can be found in the appendix.

We illustrated this by showing how to use CHIVI in concert with KLVI to sandwich-estimate the model evidence.
Finally, we showed that CHIVI is an effective algorithm for Bayesian probit regression, Gaussian process classification, and Cox processes. Performing VI via upper bound minimization, and hence enabling overdispersed posterior approximations, sandwich estimation, and model selection, comes with a cost: exponentiating the original CUBO bound leads to high variance during optimization, even with reparameterization gradients. Developing variance reduction schemes for these types of objectives (expectations of likelihood ratios) is an open research problem; solutions will benefit this paper and related approaches.

Acknowledgments

We thank Alp Kucukelbir, Francisco J. R. Ruiz, Christian A. Naesseth, Scott W. Linderman, Maja Rudolph, and Jaan Altosaar for their insightful comments. This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA PPAML FA8750-14-2-0009, DARPA SIMPLEX N66001-15-C-4032, the Alfred P. Sloan Foundation, and the John Simon Guggenheim Foundation.

References

[1] C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2004.
[2] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
[3] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 2013.
[4] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[6] J. Hensman, M. Zwießele, and N. D. Lawrence. Tilted variational Bayes. JMLR, 2014.
[7] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[8] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. JMLR, 6:1679–1704, 2005.
[9] M. J. Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, University of London, 2003.
[10] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[11] R. B. Grosse, Z. Ghahramani, and R. P. Adams. Sandwiching the marginal likelihood using bidirectional Monte Carlo. arXiv preprint arXiv:1511.02543, 2015.
[12] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, 2014.
[13] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[14] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[15] M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11):2655–2684, 2000.
[16] A. Gelman, A. Vehtari, P. Jylänki, T. Sivula, D. Tran, S. Sahai, P. Blomstedt, J. P. Cunningham, D. Schiminovich, and C. Robert. Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data. arXiv preprint arXiv:1412.4869, 2017.
[17] Y. W. Teh, L. Hasenclever, T. Lienart, S. Vollmer, S. Webb, B. Lakshminarayanan, and C. Blundell. Distributed Bayesian learning with stochastic natural-gradient expectation propagation and the posterior server. arXiv preprint arXiv:1512.09327, 2015.
[18] Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Stochastic expectation propagation. In NIPS, 2015.
[19] T. Minka. Power EP. Technical report, Microsoft Research, 2004.
[20] J. M. Hernández-Lobato, Y. Li, D. Hernández-Lobato, T. Bui, and R. E. Turner. Black-box α-divergence minimization. In ICML, 2016.
[21] Y. Li and R. E. Turner. Variational inference with Rényi divergence. In NIPS, 2016.
[22] R. Ranganath, J. Altosaar, D. Tran, and D. M. Blei. Operator variational inference. In NIPS, 2016.
[23] V. Kuleshov and S. Ermon. Neural variational inference and learning in undirected graphical models. In NIPS, 2017.
[24] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[25] T. Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
[26] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
[27] M. D. Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. In ICML, 2017.
[28] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[29] A. E. Raftery. Bayesian model selection in social research. Sociological Methodology, 25:111–164, 1995.
[30] J. Paisley, D. Blei, and M. Jordan. Variational Bayesian inference with stochastic search. In ICML, 2012.
[31] G. Dehaene and S. Barthelmé. Expectation propagation in the large-data limit. In NIPS, 2015.
[32] P. Sunehag, J. Trumpf, S. V. N. Vishwanathan, and N. N. Schraudolph. Variable metric stochastic approximation theory. In AISTATS, pages 560–566, 2009.
[33] D. Tran, A. Kucukelbir, A. B. Dieng, M. Rudolph, D. Liang, and D. M. Blei. Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787, 2016.
[34] H. Kim and Z. Ghahramani. The EM-EP algorithm for Gaussian process classification. In Proceedings of the Workshop on Probabilistic Graphical Models for Classification (ECML), pages 37–48, 2003.
[35] A. Miller, L. Bornn, R. Adams, and K. Goldsberry. Factorized point process intensities: A spatial analysis of professional basketball. In ICML, 2014.
A Probabilistic Framework for Nonlinearities in Stochastic Neural Networks

Qinliang Su, Xuejun Liao, Lawrence Carin
Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
{qs15, xjliao, lcarin}@duke.edu

Abstract

We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions. By setting the truncation points appropriately, we are able to generate various types of nonlinearities within a unified framework, including sigmoid, tanh and ReLU, the most commonly used nonlinearities in neural networks. The framework readily integrates into existing stochastic neural networks (with hidden units characterized as random variables), allowing one for the first time to learn the nonlinearities alongside the model weights in these networks. Extensive experiments demonstrate the performance improvements brought about by the proposed framework when integrated with the restricted Boltzmann machine (RBM), temporal RBM and the truncated Gaussian graphical model (TGGM).

1 Introduction

A typical neural network is composed of nonlinear units connected by linear weights, and such a network is known to have universal approximation ability under mild conditions on the nonlinearity used at each unit [1, 2]. In previous work, the choice of nonlinearity has commonly been treated as part of network design rather than network learning, and training algorithms for neural networks have been mostly concerned with learning the linear weights. However, it is increasingly understood that the choice of nonlinearity plays an important role in model performance. For example, [3] showed advantages of rectified linear units (ReLU) over sigmoidal units in using the restricted Boltzmann machine (RBM) [4] to pre-train feedforward ReLU networks. It was further shown in [5] that ReLUs outperform sigmoidal units in a generative network with the same undirected, bipartite structure as the RBM.
A number of recent works have reported benefits of learning the nonlinear units along with the inter-unit weights. These methods parameterize the nonlinear activation function of each unit and incorporate the unit-dependent parameters into data-driven training. In particular, [6] considered the adaptive piecewise linear (APL) unit, defined by a mixture of hinge-shaped functions, and [7] used a nonparametric Fourier basis expansion to construct the activation function of each unit. The maxout network [8] employs piecewise linear convex (PLC) units, where each PLC unit is obtained by max-pooling over multiple linear units. The PLC units were extended to Lp units in [9], where the normalized Lp norm of multiple linear units yields the output of an Lp unit. All of these methods learn a deterministic characterization of a unit and lack a stochastic one. This deterministic nature prevents them from being easily applied to stochastic neural networks (in which the hidden units are random variables rather than deterministic functions), such as Boltzmann machines [10], restricted Boltzmann machines [11], and sigmoid belief networks (SBNs) [12].

We propose a probabilistic framework to unify the sigmoid, hyperbolic tangent (tanh) and ReLU nonlinearities most commonly used in neural networks. The proposed framework represents a unit $h$ probabilistically as $p(h|z, \xi)$, where $z$ is the total net contribution that $h$ receives from other units and $\xi$ represents the learnable parameters. Taking the expectation of $h$ yields a deterministic characterization of the unit, $\mathbb{E}(h|z, \xi) \triangleq \int h\, p(h|z, \xi)\, dh$. We show that sigmoid, tanh and ReLU are well approximated by $\mathbb{E}(h|z, \xi)$ under appropriate settings of $\xi$.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
This differs from [13], in which nonlinearities were induced by additive noises of different variances, making model learning much more expensive and the producible nonlinearities less flexible. Additionally, more general nonlinearities may be constructed or learned, corresponding to distinct settings of ξ. A neural unit represented by the proposed framework is named a truncated Gaussian (TruG) unit, because the framework is built upon truncated Gaussian distributions. Because of its inherent stochasticity, the TruG is particularly useful for constructing stochastic neural networks. The TruG generalizes the probabilistic ReLU in [14, 5] to a family of stochastic nonlinearities, with which one can perform two tasks that could not be done previously: (i) one can interchangeably use one nonlinearity in place of another under the same network structure, as long as both are in the TruG family; for example, the ReLU-based stochastic networks in [14, 5] can be extended to new networks based on probabilistic tanh or sigmoid nonlinearities, and the respective algorithms in [14, 5] can be employed to train the associated new models with little modification; (ii) any stochastic network constructed with the TruG can learn the nonlinearity alongside the network weights, by maximizing the likelihood function of ξ given the training data. We can learn the nonlinearity at the unit level, with each TruG unit having its own parameters, or at the model level, with the entire network sharing the same parameters across all TruG units. The different choices entail only minor changes in the update equation for ξ, as will be seen subsequently. We integrate the TruG framework into three existing stochastic networks: the RBM, temporal RBM [15] and feedforward TGGM [14], leading to three new models referred to as TruG-RBM, temporal TruG-RBM and TruG-TGGM, respectively.
These new models are evaluated against the original models in extensive experiments to assess the performance gains brought about by the TruG. To conserve space, all propositions in this paper are proven in the Supplementary Material.

2 TruG: A Probabilistic Framework for Nonlinearities in Neural Networks

For a unit $h$ that receives net contribution $z$ from other units, we propose to relate $h$ to $z$ through the following stochastic nonlinearity,

$$p(h|z, \xi) = \frac{\mathcal{N}(h \mid z, \sigma^2)\, I(\xi_1 \le h \le \xi_2)}{\int_{\xi_1}^{\xi_2} \mathcal{N}(h' \mid z, \sigma^2)\, dh'} \triangleq \mathcal{N}_{[\xi_1, \xi_2]}(h \mid z, \sigma^2), \qquad (1)$$

where $I(\cdot)$ is an indicator function and $\mathcal{N}(\cdot \mid z, \sigma^2)$ is the probability density function (PDF) of a univariate Gaussian distribution with mean $z$ and variance $\sigma^2$; the shorthand $\mathcal{N}_{[\xi_1,\xi_2]}$ indicates that the density $\mathcal{N}$ is truncated and renormalized so that it is nonzero only in the interval $[\xi_1, \xi_2]$; $\xi \triangleq \{\xi_1, \xi_2\}$ contains the truncation points, and $\sigma^2$ is fixed.

The units of a stochastic neural network fall into two categories: visible units and hidden units [4]. The network represents a joint distribution over both hidden and visible units, and the hidden units are integrated out to yield the marginal distribution of the visible units. With a hidden unit expressed as in (1), the expectation of $h$ is given by

$$\mathbb{E}(h|z, \xi) = z + \sigma\, \frac{\phi\!\left(\frac{\xi_1 - z}{\sigma}\right) - \phi\!\left(\frac{\xi_2 - z}{\sigma}\right)}{\Phi\!\left(\frac{\xi_2 - z}{\sigma}\right) - \Phi\!\left(\frac{\xi_1 - z}{\sigma}\right)}, \qquad (2)$$

where $\phi(\cdot)$ and $\Phi(\cdot)$ are, respectively, the PDF and cumulative distribution function (CDF) of the standard normal distribution [16]. As will become clear below, a weighted sum of these expected hidden units constitutes the net contribution received by each visible unit when the hidden units are marginalized out. Therefore $\mathbb{E}(h|z, \xi)$ acts as a nonlinear activation function, mapping the incoming contribution $h$ receives to the outgoing contribution $h$ sends out.
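Equation (2) is the standard mean of a doubly truncated Gaussian, so it can be sanity-checked against an off-the-shelf implementation. The sketch below is illustrative (scipy's truncnorm serves as the reference) and traces the ReLU-, tanh- and sigmoid-shaped curves of Figure 1.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def trug_mean(z, xi1, xi2, sigma2=0.2):
    """E(h | z, xi) for h ~ N_[xi1, xi2](z, sigma2), i.e. eq. (2)."""
    s = np.sqrt(sigma2)
    a, b = (xi1 - z) / s, (xi2 - z) / s
    return z + s * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

# Cross-check eq. (2) against scipy's truncated normal
z, xi1, xi2, s = 0.3, 0.0, 1.0, np.sqrt(0.2)
ref = truncnorm.mean((xi1 - z) / s, (xi2 - z) / s, loc=z, scale=s)
assert abs(trug_mean(z, xi1, xi2) - ref) < 1e-9

zs = np.linspace(-3.0, 3.0, 61)
relu_like = trug_mean(zs, 0.0, np.inf)   # Figure 1(a): ReLU-shaped
tanh_like = trug_mean(zs, -1.0, 1.0)     # Figure 1(b): tanh-shaped
sigm_like = trug_mean(zs, 0.0, 1.0)      # Figure 1(c): sigmoid-shaped
```

Direct evaluation of the φ/Φ ratios degrades far in the tails, which is exactly the numerical issue addressed later by eq. (4); the moderate range of zs here keeps the naive formula safe.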
The incoming contribution received by $h$ may be a random variable or a function of data, such as $z = w^T x + b$; the former case typically arises in unsupervised learning and the latter in supervised learning, with $x$ being the predictors. By setting the truncation points to different values, we are able to realize many different kinds of nonlinearities. We plot in Figure 1 three realizations of $\mathbb{E}(h|z, \xi)$ as a function of $z$, each with a particular setting of $\{\xi_1, \xi_2\}$ and with $\sigma^2 = 0.2$ in all cases; the plots of ReLU, tanh and sigmoid are also shown for comparison.

Figure 1: Illustration of different nonlinearities realized by the TruG with different truncation points. (a) $\xi_1 = 0$ and $\xi_2 = +\infty$; (b) $\xi_1 = -1$ and $\xi_2 = 1$; (c) $\xi_1 = 0$ and $\xi_2 = 1$; (d) $\xi_1 = 0$ and $\xi_2 = 4$.

It is seen from Figure 1 that, by choosing appropriate truncation points, $\mathbb{E}(h|z, \xi)$ is able to approximate ReLU, tanh and sigmoid, the three types of nonlinearities most widely used in neural networks. We can also realize other types of nonlinearities by setting the truncation points to other values, as exemplified in Figure 1(d). The truncation points can be set by hand, selected by cross-validation, or learned in the same way as the inter-unit weights. In this paper, we focus on learning them alongside the weights from training data. The variance of $h$, given by [16],

$$\mathrm{Var}(h|z, \xi) = \sigma^2 + \sigma^2\, \frac{\frac{\xi_1 - z}{\sigma}\,\phi\!\left(\frac{\xi_1 - z}{\sigma}\right) - \frac{\xi_2 - z}{\sigma}\,\phi\!\left(\frac{\xi_2 - z}{\sigma}\right)}{\Phi\!\left(\frac{\xi_2 - z}{\sigma}\right) - \Phi\!\left(\frac{\xi_1 - z}{\sigma}\right)} - \sigma^2 \left( \frac{\phi\!\left(\frac{\xi_1 - z}{\sigma}\right) - \phi\!\left(\frac{\xi_2 - z}{\sigma}\right)}{\Phi\!\left(\frac{\xi_2 - z}{\sigma}\right) - \Phi\!\left(\frac{\xi_1 - z}{\sigma}\right)} \right)^{\!2}, \qquad (3)$$

is employed in learning the truncation points and network weights.
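As discussed next, direct evaluation of (2) and (3) underflows when the standardized arguments are very negative; the remedy approximates φ(z)/Φ(z) by γ(z) = (√(z² + 4) − z)/2. The check below is illustrative: it computes the exact ratio in log space (scipy's log_ndtr) and verifies Proposition 1's 4.8 × 10⁻⁷ error guarantee at a few points with z < −38.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import log_ndtr

def gamma_approx(z):
    """gamma(z) = (sqrt(z^2 + 4) - z) / 2, approximating phi(z) / Phi(z)."""
    return (np.sqrt(z * z + 4.0) - z) / 2.0

for z in (-40.0, -60.0, -100.0):
    # phi(z) / Phi(z) evaluated stably as exp(log phi(z) - log Phi(z));
    # the naive ratio norm.pdf(z) / norm.cdf(z) would be 0/0 at this range
    exact = np.exp(norm.logpdf(z) - log_ndtr(z))
    rel_err = abs(gamma_approx(z) / exact - 1.0)
    assert rel_err < 4.8e-7      # Proposition 1's guarantee for z < -38
```

The log-space route never underflows either, but γ(z) avoids even these special-function calls inside tight training loops.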
Direct evaluation of (2) and (3) is prone to the numerical issue of 0/0, because both $\phi(z)$ and $\Phi(z)$ are so close to 0 when $z < -38$ that they fall below the accuracy a double-precision float can represent. We solve this problem by using the fact that (2) and (3) can be equivalently expressed in terms of $\frac{\phi(z)}{\Phi(z)}$, dividing both numerator and denominator by $\phi(\cdot)$. We make use of the following approximation for the ratio,

$$\frac{\phi(z)}{\Phi(z)} \approx \frac{\sqrt{z^2 + 4} - z}{2} \triangleq \gamma(z), \quad \text{for } z < -38, \qquad (4)$$

the accuracy of which is established in Proposition 1.

Proposition 1. The relative error is bounded by $\gamma(z)\,\big/\,\frac{\phi(z)}{\Phi(z)} - 1 < \frac{2\left(\sqrt{z^2+4} - z\right)}{\sqrt{z^2+8} - 3z} - 1$; moreover, for all $z < -38$, the relative error is guaranteed to be smaller than $4.8 \times 10^{-7}$.

3 RBM with TruG Nonlinearity

We generalize the ReLU-based RBM in [5] by using the TruG nonlinearity. The resulting TruG-RBM is defined by the following joint distribution over visible units $x$ and hidden units $h$,

$$p(x, h) = \frac{1}{Z}\, e^{-E(x,h)}\, I(x \in \{0, 1\}^n,\ \xi_1 \le h \le \xi_2), \qquad (5)$$

where $E(x, h) \triangleq \frac{1}{2} h^T \mathrm{diag}(d)\, h - x^T W h - b^T x - c^T h$ is an energy function and $Z$ is the normalization constant. Proposition 2 shows that (5) is a valid probability distribution.

Proposition 2. The distribution $p(x, h)$ defined in (5) is normalizable.

By (5), the conditional distribution of $x$ given $h$ is still Bernoulli, $p(x|h) = \prod_{i=1}^{n} \sigma([W h + b]_i)$, while the conditional $p(h|x)$ is a truncated normal distribution, i.e.,

$$p(h|x) = \prod_{j=1}^{m} \mathcal{N}_{[\xi_1, \xi_2]}\!\left(h_j \,\Big|\, \tfrac{1}{d_j}[W^T x + c]_j,\ \tfrac{1}{d_j}\right). \qquad (6)$$

By setting $\xi_1$ and $\xi_2$ to different values, we are able to produce different nonlinearities in (6). We train a TruG-RBM by maximizing the log-likelihood function $\ell(\Theta, \xi) \triangleq \sum_{x \in \mathcal{X}} \ln p(x; \Theta, \xi)$, where $\Theta \triangleq \{W, b, c\}$ denotes the network weights, $p(x; \Theta, \xi) \triangleq \int_{\xi_1}^{\xi_2} p(x, h)\, dh$ is the contribution of a single data point $x$, and $\mathcal{X}$ is the training dataset.

3.1 The Gradient w.r.t. Network Weights

The gradient w.r.t.
$\Theta$ is known to be

$$\frac{\partial \ln p(x)}{\partial \Theta} = \mathbb{E}\!\left[\frac{\partial E(x,h)}{\partial \Theta}\right] - \mathbb{E}\!\left[\frac{\partial E(x,h)}{\partial \Theta} \,\Big|\, x\right],$$

where $\mathbb{E}[\cdot]$ and $\mathbb{E}[\cdot \,|\, x]$ denote expectation w.r.t. $p(x, h)$ and $p(h|x)$, respectively. If we estimated this gradient with a standard sampling-based method, the variance of the estimate would be very large. To reduce the variance, we follow the traditional RBM in applying contrastive divergence (CD) to estimate the gradient [4]. Specifically, we approximate the gradient as

$$\frac{\partial \ln p(x)}{\partial \Theta} \approx \mathbb{E}\!\left[\frac{\partial E(x,h)}{\partial \Theta} \,\Big|\, x^{(k)}\right] - \mathbb{E}\!\left[\frac{\partial E(x,h)}{\partial \Theta} \,\Big|\, x\right], \qquad (7)$$

where $x^{(k)}$ is the $k$-th sample of the Gibbs sampler $p(h^{(1)}|x^{(0)}), p(x^{(1)}|h^{(1)}), \dots, p(x^{(k)}|h^{(k)})$, with $x^{(0)}$ being the data $x$. As shown in (6), $p(x|h)$ and $p(h|x)$ are factorized Bernoulli and univariate truncated normal distributions, for which efficient sampling algorithms exist [17, 18]. Furthermore, $\frac{\partial E(x,h)}{\partial w_{ij}} = -x_i h_j$, $\frac{\partial E(x,h)}{\partial b_i} = -x_i$, $\frac{\partial E(x,h)}{\partial c_j} = -h_j$, and $\frac{\partial E(x,h)}{\partial d_j} = \frac{1}{2} h_j^2$. Thus, estimating the gradient with CD only requires $\mathbb{E}[h_j \mid x^{(s)}]$ and $\mathbb{E}[h_j^2 \mid x^{(s)}]$, which can be calculated using (2) and (3). Using the estimated gradient, the weights can be updated with stochastic gradient ascent or its variants.

3.2 The Gradient w.r.t. Truncation Points

The gradients w.r.t. $\xi_1$ and $\xi_2$ for a single data point are

$$\frac{\partial \ln p(x)}{\partial \xi_1} = \sum_{j=1}^{m} \big(p(h_j = \xi_1) - p(h_j = \xi_1 \mid x)\big), \qquad (8)$$

$$\frac{\partial \ln p(x)}{\partial \xi_2} = \sum_{j=1}^{m} \big(p(h_j = \xi_2 \mid x) - p(h_j = \xi_2)\big), \qquad (9)$$

with the derivation provided in the Supplementary Material. The conditional $p(h_j = \xi \mid x) = \mathcal{N}_{[\xi_1,\xi_2]}\!\left(h_j = \xi \,\Big|\, \tfrac{1}{d_j}[W^T x + c]_j,\ \tfrac{1}{d_j}\right)$ can be calculated easily. However, calculating $p(h_j = \xi)$ directly would be computationally prohibitive. Fortunately, by noticing the identity $p(h_j = \xi) = \sum_x p(h_j = \xi \mid x)\, p(x)$, we are able to estimate it efficiently with CD as $p(h_j = \xi) \approx p(h_j = \xi \mid x^{(k)}) = \mathcal{N}_{[\xi_1,\xi_2]}\!\left(h_j = \xi \,\Big|\, \tfrac{[W^T x^{(k)} + c]_j}{d_j},\ \tfrac{1}{d_j}\right)$, where $x^{(k)}$ is the $k$-th sample of the Gibbs sampler described above. Therefore, the gradient w.r.t.
the lower and upper truncation points can be estimated as $\frac{\partial \ln p(x)}{\partial \xi_2} \approx \sum_{j=1}^{m} \big(p(h_j = \xi_2 \mid x) - p(h_j = \xi_2 \mid x^{(k)})\big)$ and $\frac{\partial \ln p(x)}{\partial \xi_1} \approx -\sum_{j=1}^{m} \big(p(h_j = \xi_1 \mid x) - p(h_j = \xi_1 \mid x^{(k)})\big)$. After obtaining the gradients, we can update the truncation points with stochastic gradient ascent methods. It should be emphasized that, in the derivation above, we assume a common truncation point pair $\{\xi_1, \xi_2\}$ shared by all units, for clarity of presentation. The extension to separate truncation points for different units is straightforward: simply replace (8) and (9) with $\frac{\partial \ln p(x)}{\partial \xi_{2j}} = p(h_j = \xi_{2j} \mid x) - p(h_j = \xi_{2j})$ and $\frac{\partial \ln p(x)}{\partial \xi_{1j}} = p(h_j = \xi_{1j}) - p(h_j = \xi_{1j} \mid x)$, where $\xi_{1j}$ and $\xi_{2j}$ are the lower and upper truncation points of the $j$-th unit, respectively. For the models discussed subsequently, one can similarly obtain the gradients w.r.t. unit-dependent truncation points. After training, due to the conditional independence structure between $x$ and $h$ and the existence of efficient sampling algorithms for the truncated normal, samples can be drawn efficiently from the TruG-RBM using the Gibbs sampler discussed below (7).

4 Temporal RBM with TruG Nonlinearity

We integrate the TruG framework into the temporal RBM (TRBM) [19] to learn the probabilistic nonlinearity in sequential-data modeling. The resulting temporal TruG-RBM is defined by

$$p(X, H) = p(x_1, h_1) \prod_{t=2}^{T} p(x_t, h_t \mid x_{t-1}, h_{t-1}), \qquad (10)$$

where $p(x_1, h_1)$ and $p(x_t, h_t \mid x_{t-1}, h_{t-1})$ are both represented by TruG-RBMs; $x_t \in \mathbb{R}^n$ and $h_t \in \mathbb{R}^m$ are the visible and hidden variables at time step $t$, with $X \triangleq [x_1, x_2, \dots, x_T]$ and $H \triangleq [h_1, h_2, \dots, h_T]$. Specifically, $p(x_t, h_t \mid x_{t-1}, h_{t-1}) = \frac{1}{Z_t} e^{-E(x_t, h_t)}\, I(x_t \in \{0,1\}^n,\ \xi_1 \le h_t \le \xi_2)$, where the energy function takes the form

$$E(x_t, h_t) \triangleq \tfrac{1}{2}\big( x_t^T \mathrm{diag}(a)\, x_t + h_t^T \mathrm{diag}(d)\, h_t - 2 x_t^T W_1 h_t - 2 c^T h_t - 2 (W_2 x_{t-1})^T h_t - 2 b^T x_t - 2 (W_3 x_{t-1})^T x_t - 2 (W_4 h_{t-1})^T h_t \big),$$

and $Z_t$ is the normalization constant obtained by summing and integrating $e^{-E(x_t, h_t)}$ over the support of $(x_t, h_t)$.
Similar to the TRBM, directly optimizing the log-likelihood is difficult. We instead optimize the lower bound

$$\mathcal{L} \triangleq \mathbb{E}_{q(H|X)}\big[\ln p(X, H; \Theta, \xi) - \ln q(H|X)\big], \qquad (11)$$

where $q(H|X)$ is an approximating posterior distribution of $H$. The lower bound equals the log-likelihood when $q(H|X)$ is exactly the true posterior $p(H|X)$. We follow [19] in choosing the approximate posterior $q(H|X) = p(h_1|x_1) \cdots p(h_T \mid x_{T-1}, h_{T-1}, x_T)$, with which it can be shown that the gradient of the lower bound w.r.t. the network weights is

$$\frac{\partial \mathcal{L}}{\partial \Theta} = \sum_{t=1}^{T} \mathbb{E}_{p(h_{t-1}|x_{t-2}, h_{t-2}, x_{t-1})}\!\left[ \mathbb{E}_{p(x_t, h_t | x_{t-1}, h_{t-1})}\!\left[\frac{\partial E(x_t, h_t)}{\partial \Theta}\right] - \mathbb{E}_{p(h_t | x_{t-1}, h_{t-1}, x_t)}\!\left[\frac{\partial E(x_t, h_t)}{\partial \Theta}\right] \right].$$

At any time step $t$, the outer expectation (over $h_{t-1}$) is approximated by sampling from $p(h_{t-1}|x_{t-2}, h_{t-2}, x_{t-1})$; given $h_{t-1}$ and $x_{t-1}$, one can represent $p(x_t, h_t|x_{t-1}, h_{t-1})$ as a TruG-RBM, and therefore the two inner expectations can be computed as in Section 3. In particular, the variables in $h_t$ are conditionally independent given $(x_{t-1}, h_{t-1}, x_t)$, i.e., $p(h_t|x_{t-1}, h_{t-1}, x_t) = \prod_{j=1}^{m} p(h_{jt}|x_{t-1}, h_{t-1}, x_t)$, with each factor equal to

$$p(h_{jt}|x_{t-1}, h_{t-1}, x_t) = \mathcal{N}_{[\xi_1, \xi_2]}\!\left(h_{jt} \,\Big|\, \frac{[W_1^T x_t + W_2 x_{t-1} + W_4 h_{t-1} + c]_j}{d_j},\ \frac{1}{d_j}\right). \qquad (12)$$

Similarly, the variables in $x_t$ are conditionally independent given $(x_{t-1}, h_{t-1}, h_t)$. As a result, $\mathbb{E}_{p(h_t|x_{t-1}, h_{t-1}, x_t)}[\cdot]$ can be calculated in closed form using (2) and (3), and $\mathbb{E}_{p(x_t, h_t|x_{t-1}, h_{t-1})}[\cdot]$ can be estimated with the CD algorithm, as in Section 3. The gradient of $\mathcal{L}$ w.r.t. the upper truncation point is

$$\frac{\partial \mathcal{L}}{\partial \xi_2} = \mathbb{E}_{q(H|X)}\!\left[ \sum_{t=1}^{T} \sum_{j=1}^{m} p(h_{jt} = \xi_2 \mid x_{t-1}, h_{t-1}, x_t) - \sum_{t=1}^{T} \sum_{j=1}^{m} p(h_{jt} = \xi_2 \mid x_{t-1}, h_{t-1}) \right],$$

with $\frac{\partial \mathcal{L}}{\partial \xi_1}$ taking a similar form; the expectations are calculated using the same approach as described above for $\frac{\partial \mathcal{L}}{\partial \Theta}$.

5 TGGM with TruG Nonlinearity

We generalize the feedforward TGGM model in [14] by replacing the probabilistic ReLU with the TruG.
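In expectation, the resulting model is just a perceptron whose hidden layer is activated by the truncated-Gaussian mean. The sketch below (toy layer sizes, random weights, and σ² = 1 are all illustrative assumptions) previews the prediction rule that eq. (14) below makes precise.

```python
import numpy as np
from scipy.stats import norm

def trug_mean(z, xi1, xi2, sigma2=1.0):
    """Mean of a [xi1, xi2]-truncated N(z, sigma2) variable (eq. (2))."""
    s = np.sqrt(sigma2)
    a, b = (xi1 - z) / s, (xi2 - z) / s
    return z + s * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

def tggm_predict(x, W0, b0, W1, b1, xi1, xi2):
    """E[y | x] = W1 E(h | W0 x + b0, xi) + b1: a deterministic forward
    pass through the TruG nonlinearity (sigma^2 = 1 assumed)."""
    return W1 @ trug_mean(W0 @ x + b0, xi1, xi2) + b1

rng = np.random.default_rng(0)
W0, b0 = 0.1 * rng.standard_normal((50, 8)), np.zeros(50)   # toy sizes
W1, b1 = 0.1 * rng.standard_normal((1, 50)), np.zeros(1)
x = rng.standard_normal(8)
y_relu = tggm_predict(x, W0, b0, W1, b1, 0.0, np.inf)   # probabilistic ReLU net
y_sigm = tggm_predict(x, W0, b0, W1, b1, 0.0, 1.0)      # sigmoid-like net
```

Setting the truncation points to (0, +∞) recovers a probabilistic-ReLU network, while (0, 1) gives a sigmoid-like one, with no other change to the forward pass.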
The resulting TruG-TGGM is defined by the joint PDF over visible variables $y$ and hidden variables $h$,

$$p(y, h|x) = \mathcal{N}(y \mid W_1 h + b_1,\ \sigma^2 I)\; \mathcal{N}_{[\xi_1, \xi_2]}(h \mid W_0 x + b_0,\ \sigma^2 I), \qquad (13)$$

given the predictor variables $x$. Marginalizing out $h$ gives the expectation of $y$ as

$$\mathbb{E}[y|x] = W_1\, \mathbb{E}(h \mid W_0 x + b_0, \xi) + b_1, \qquad (14)$$

where $\mathbb{E}(h \mid W_0 x + b_0, \xi)$ is given element-wise in (2). The expectation of $y$ is thus related to $x$ through the TruG nonlinearity, and $\mathbb{E}[y|x]$ yields the same output as a three-layer perceptron that uses (2) to activate its hidden units. Hence, the TruG-TGGM defined in (13) can be understood as a stochastic perceptron with the TruG nonlinearity. By choosing different values for the truncation points, we are able to realize different kinds of nonlinearities, including ReLU, sigmoid and tanh.

To train the model by maximum likelihood estimation, we need the gradient of $\ln p(y|x) \triangleq \ln \int p(y, h|x; \Theta)\, dh$, where $\Theta \triangleq \{W_1, W_0, b_1, b_0\}$ represents the model parameters. Rewriting the joint PDF as $p(y, h|x) \propto e^{-E(y,h,x)}\, I(\xi_1 \le h \le \xi_2)$, the gradient is found to be

$$\frac{\partial \ln p(y|x)}{\partial \Theta} = \mathbb{E}\!\left[\frac{\partial E(y,h,x)}{\partial \Theta} \,\Big|\, x\right] - \mathbb{E}\!\left[\frac{\partial E(y,h,x)}{\partial \Theta} \,\Big|\, x, y\right],$$

where $E(y, h, x) \triangleq \frac{\|y - W_1 h - b_1\|^2 + \|h - W_0 x - b_0\|^2}{2\sigma^2}$; $\mathbb{E}[\cdot|x]$ is the expectation w.r.t. $p(y, h|x)$; and $\mathbb{E}[\cdot|x, y]$ is the expectation w.r.t. $p(h|x, y)$. From (13), $p(h|x) = \mathcal{N}_{[\xi_1,\xi_2]}(h \mid W_0 x + b_0, \sigma^2 I)$ factorizes into a product of univariate truncated Gaussian PDFs, so the expectation $\mathbb{E}[h|x]$ can be computed using (2). However, the expectations $\mathbb{E}[h|x, y]$ and $\mathbb{E}[h h^T|x, y]$ involve a multivariate truncated Gaussian PDF and are expensive to calculate directly. Hence, mean-field variational Bayesian analysis is used to compute approximate expectations; the details are similar to those in [14], except that (2) and (3) are used to calculate the expectation and variance of $h$. The gradients of the log-likelihood w.r.t.
the truncation points $\xi_1$ and $\xi_2$ for a single data point are given by $\frac{\partial \ln p(y|x)}{\partial \xi_2} = \sum_{j=1}^{K} \big(p(h_j = \xi_2 \mid y, x) - p(h_j = \xi_2 \mid x)\big)$ and $\frac{\partial \ln p(y|x)}{\partial \xi_1} = -\sum_{j=1}^{K} \big(p(h_j = \xi_1 \mid y, x) - p(h_j = \xi_1 \mid x)\big)$, with the derivation provided in the Supplementary Material. The probability $p(h_j = \xi_1 \mid x)$ can be computed directly, since it involves only a univariate truncated Gaussian distribution. For $p(h_j = \xi_2 \mid y, x)$, we approximate it with the mean-field marginal distributions obtained above. Although the TruG-TGGM involves random variables, testing is easy thanks to the closed-form expression for the expectation of a univariate truncated normal: given a predictor $\hat{x}$, the output is predicted with the conditional expectation $\mathbb{E}[y|\hat{x}]$ in (14).

6 Experimental Results

We evaluate the performance benefit brought about by the TruG framework when integrated into the RBM, temporal RBM and TGGM. In each of the three cases, the evaluation compares the original network to the associated new network with the TruG nonlinearity. For the TruG, we either manually set $\{\xi_1, \xi_2\}$ to particular values or learn them automatically from data; we consider both learning a common $\{\xi_1, \xi_2\}$ shared by all hidden units and learning a separate $\{\xi_1, \xi_2\}$ for each hidden unit.

Table 1: Averaged test log-probability on MNIST and Caltech101 Silhouettes. (⋆) Result reported in [20]; (⋄) result reported in [21] using RMSprop as the optimizer.

Model      Trun. Points   MNIST    Caltech101
TruG-RBM   [0, 1]         -97.3    -127.9
           [0, +∞)        -83.2    -105.2
           [-1, 1]        -124.5   -141.5
           c-Learn        -82.9    -104.6
           s-Learn        -82.5    -104.3
RBM        —              -86.3⋆   -109.0⋄

Results of TruG-RBM. The binarized MNIST and Caltech101 Silhouettes datasets are considered in this experiment. MNIST contains 60,000 training and 10,000 testing images of hand-written digits, while Caltech101 Silhouettes includes 6,364 training and 2,307 testing images of object silhouettes. For both datasets, each image has 28 × 28 pixels [22].
Throughout this experiment, 500 hidden units are used. RMSprop is used to update the parameters, with the decay and mini-batch size set to 0.95 and 100, respectively. The weight parameters are initialized with Gaussian noise of zero mean and 0.01 variance, while the lower and upper truncation points of all units are initialized to 0 and 1, respectively. The learning rate for the weight parameters is fixed to 10^{-4}. Since truncation points influence the whole network in a more fundamental way than the weight parameters, smaller learning rates are often preferred for them; to balance convergence speed and performance, we anneal their learning rate from 10^{-4} to 10^{-6} gradually.

Figure 2: (a) The learned nonlinearities in TruG models with a shared upper truncation point ξ; the distribution of unit-level upper truncation points of the TruG-RBM for (b) MNIST and (c) Caltech101 Silhouettes.

The evaluation is based on the log-probability averaged over test data points, estimated using annealed importance sampling (AIS) [23] with 5 × 10^5 inverse temperatures equally spaced in [0, 1]; the reported test log-probability is averaged over 100 independent AIS runs. To investigate the impact of truncation points, we first set the lower and upper truncation points to three fixed pairs, [0, 1], [0, +∞) and [−1, 1], which correspond to probabilistic approximations of the sigmoid, ReLU and tanh nonlinearities, respectively. From Table 1, we see that the ReLU-type TruG-RBM performs much better than the other two types of TruG-RBM.
We also learn the truncation points from data automatically. The model benefits significantly from nonlinearity learning, and the best performance is achieved when the units learn their own nonlinearities. The learned common nonlinearities (c-Learn) for the different datasets are plotted in Figure 2(a), which shows that the model always tends to choose a nonlinearity in between the sigmoid and ReLU functions. For the case with separate nonlinearities (s-Learn), the distributions of the upper truncation points in the TruG-RBMs for MNIST and Caltech101 Silhouettes are plotted in Figures 2(b) and (c), respectively. Note that, due to the detrimental effect observed for negative truncation points, the lower truncation points are fixed to zero here and only the upper points are learned. To demonstrate the reliability of the AIS estimates, convergence plots of the estimated log-probabilities are provided in the Supplementary Material.

Results of Temporal TruG-RBM. The Bouncing Ball and CMU Motion Capture datasets are considered in the experiments with temporal models. Bouncing Ball consists of synthetic binary videos of 3 bouncing balls in a box, with 4,000 videos for training and 200 for testing; each video has 100 frames of size 30 × 30. CMU Motion Capture is composed of data samples describing the joint angles associated with different motion types. We follow [24] in training a model on 31 sequences and testing on two sequences (one running, one walking). Both the original TRBM and the TruG-TRBM use 400 hidden units for Bouncing Ball and 300 hidden units for CMU Motion Capture. Stochastic gradient descent (SGD) is used to update the parameters, with the momentum set to 0.9. The learning rates are set to 10^{-2} and 10^{-4} for the two datasets, respectively, and the learning rate for the truncation points is annealed gradually, as described above.
Since calculating log-probabilities for these temporal models is computationally prohibitive, prediction error is employed here as the performance evaluation criterion, as is widely done [24, 25] for temporal generative models. The performances averaged over 20 independent runs are reported here. Tables 2 and 3 confirm again that the models benefit remarkably from nonlinearity learning, especially in the case of learning a separate nonlinearity for each hidden unit. It is noticed that, although the ReLU-type TruG-TRBM performs better than the tanh-type TruG-TRBM on Bouncing Ball, the former performs much worse than the latter on CMU Motion Capture. This demonstrates that a fixed nonlinearity cannot perform well on every dataset. However, by learning truncation points automatically, the TruG can adapt the nonlinearity to the data and thus performs best on every dataset (up to the representational limit of the TruG framework). Video samples drawn from the trained models are provided in the Supplementary Material.

Results of TruG-TGGM Ten datasets from the UCI repository are used in this experiment. Following the procedures in [26], datasets are randomly partitioned into training and testing subsets for

Table 2: Test prediction error on Bouncing Ball. (⋆) Taken from [24], in which 2500 hidden units are used.

  Model        Trun. Points   Pred. Err.
  TruG-TRBM    [0, 1]         6.38±0.51
  TruG-TRBM    [0, +∞)        4.16±0.42
  TruG-TRBM    [−1, 1]        6.01±0.52
  TruG-TRBM    c-Learn        3.82±0.41
  TruG-TRBM    s-Learn        3.66±0.46
  TRBM         —              4.90±0.47
  RTRBM⋆       —              4.00±0.35

Table 3: Test prediction error on CMU Motion Capture, in which 'w' and 'r' mean walking and running, respectively. (⋆) Taken from [24].

  Model        Trun. Points   Err. (w)    Err. (r)
  TruG-TRBM    [0, 1]         8.2±0.18    6.1±0.22
  TruG-TRBM    [0, +∞)        21.8±0.31   14.9±0.29
  TruG-TRBM    [−1, 1]        7.3±0.21    5.9±0.22
  TruG-TRBM    c-Learn        6.7±0.29    5.5±0.22
  TruG-TRBM    s-Learn        6.8±0.24    5.4±0.14
  TRBM         —              9.6±0.15    6.8±0.12
  ss-SRTRBM⋆   —              8.1±0.06    5.9±0.05

Table 4: Averaged test RMSEs for multilayer perceptron (MLP) and TruG-TGGMs under different truncation points.
(⋆) Results reported in [26]; BH, CS, EE, K8, NP, CPP, PS, WQR, YH and YPM abbreviate Boston Housing, Concrete Strength, Energy Efficiency, Kin8nm, Naval Propulsion, Cycle Power Plant, Protein Structure, Wine Quality Red, Yacht Hydrodynamic and Year Prediction MSD, respectively.

  Dataset   MLP (ReLU)⋆    TruG-TGGM with Different Trun. Points
                           [0, 1]         [0, +∞)        [−1, 1]        c-Learn        s-Learn
  BH        3.228±0.195    3.564±0.655    3.214±0.555    4.003±0.520    3.401±0.375    3.622±0.538
  CS        5.977±0.093    5.210±0.514    5.106±0.573    4.977±0.482    4.910±0.467    4.743±0.571
  EE        1.098±0.074    1.168±0.130    1.252±0.123    1.069±0.166    0.881±0.079    0.913±0.120
  K8        0.091±0.002    0.094±0.003    0.086±0.003    0.091±0.003    0.073±0.002    0.075±0.002
  NP        0.001±0.000    0.002±0.000    0.002±0.000    0.002±0.000    0.001±0.000    0.001±0.000
  CPP       4.182±0.040    4.023±0.128    4.067±0.129    3.978±0.132    3.952±0.134    3.951±0.130
  PS        4.539±0.029    4.231±0.083    4.387±0.072    4.262±0.079    4.209±0.073    4.206±0.071
  WQR       0.645±0.010    0.662±0.052    0.644±0.048    0.659±0.052    0.645±0.050    0.643±0.048
  YH        1.182±0.165    0.871±0.367    0.821±0.276    0.846±0.310    0.803±0.292    0.793±0.289
  YPM       8.932±N/A      8.961±N/A      8.985±N/A      8.859±N/A      8.893±N/A      8.965±N/A

10 trials, except for the largest one (Year Prediction MSD), for which only one partition is conducted due to computational complexity. Table 4 summarizes the root mean square error (RMSE) averaged over the different trials. Throughout the experiment, 100 hidden units are used for the two datasets Protein Structure and Year Prediction MSD, while 50 units are used for the remaining datasets. RMSprop is used to optimize the parameters, with the RMSprop delay set to 0.9. The learning rate is chosen from the set {10^−3, 2 × 10^−4, 10^−4}, while the mini-batch size is set to 100 for the two largest datasets and 50 for the others. The number of VB cycles used in the inference is set to 10 for all datasets.
The RMSEs of TGGMs with fixed and learned truncation points are reported in Table 4, along with the RMSEs of the (deterministic) multilayer perceptron (MLP) using the ReLU nonlinearity for comparison. Similar to what we observed for the generative models, the supervised models also benefit significantly from nonlinearity learning. The TruG-TGGM with learned truncation points performs best for most datasets, with separate learning performing slightly better than common learning overall. Due to limited space, the learned nonlinearities and their corresponding truncation points are provided in the Supplementary Material.

7 Conclusions

We have presented a probabilistic framework, termed TruG, to unify ReLU, sigmoid and tanh, the most commonly used nonlinearities in neural networks. The TruG is a family of nonlinearities constructed from doubly truncated Gaussian distributions. ReLU, sigmoid and tanh are three important members of the TruG family, and other members can be obtained easily by adjusting the lower and upper truncation points. A major advantage offered by the TruG is that the nonlinearity is learnable from data, alongside the model weights. Due to its stochastic nature, the TruG can be readily integrated into many stochastic neural networks in which hidden units are random variables. Extensive experiments have demonstrated the significant performance gains that the TruG framework can bring when integrated with the RBM, temporal RBM, or TGGM.

Acknowledgements The research reported here was supported by the DOE, NGA, NSF, ONR and by Accenture.

References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

[2] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.

[3] Vinod Nair and Geoffrey E Hinton.
Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.

[4] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.

[5] Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, and Lawrence Carin. Unsupervised learning with truncated Gaussian graphical models. In The Thirty-First National Conference on Artificial Intelligence (AAAI), 2016.

[6] Forest Agostinelli, Matthew D. Hoffman, Peter J. Sadowski, and Pierre Baldi. Learning activation functions to improve deep neural networks. CoRR, 2014.

[7] Carson Eisenach, Han Liu, and Zhaoran Wang. Nonparametrically learning activation functions in deep neural nets. Under review as a conference paper at ICLR, 2017.

[8] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In International Conference on Machine Learning (ICML), 2013.

[9] Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu, and Yoshua Bengio. Learned-norm pooling for deep feedforward and recurrent neural networks. In Machine Learning and Knowledge Discovery in Databases, pages 530–546, 2014.

[10] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.

[11] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

[12] Radford M Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113, 1992.

[13] Brendan J Frey. Continuous sigmoidal belief networks trained using slice sampling. In Advances in Neural Information Processing Systems, pages 452–458, 1997.

[14] Qinliang Su, Xuejun Liao, Changyou Chen, and Lawrence Carin. Nonlinear statistical learning with truncated Gaussian graphical models.
In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), 2016.

[15] Ilya Sutskever, Geoffrey E Hinton, and Graham W. Taylor. The recurrent temporal restricted Boltzmann machine. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1601–1608. Curran Associates, Inc., 2009.

[16] Norman L Johnson, Samuel Kotz, and Narayanaswamy Balakrishnan. Continuous Univariate Distributions, vol. 1–2, 1994.

[17] Nicolas Chopin. Fast simulation of truncated Gaussian distributions. Statistics and Computing, 21(2):275–288, 2011.

[18] Christian P Robert. Simulation of truncated normal variables. Statistics and Computing, 5(2):121–125, 1995.

[19] Ilya Sutskever and Geoffrey E Hinton. Learning multilevel distributed representations for high-dimensional sequences. In AISTATS, volume 2, pages 548–555, 2007.

[20] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879. ACM, 2008.

[21] David E Carlson, Edo Collins, Ya-Ping Hsieh, Lawrence Carin, and Volkan Cevher. Preconditioned spectral descent for deep learning. In Advances in Neural Information Processing Systems, pages 2971–2979, 2015.

[22] Benjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In International Conference on Artificial Intelligence and Statistics, pages 509–516, 2010.

[23] Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.

[24] Roni Mittelman, Benjamin Kuipers, Silvio Savarese, and Honglak Lee. Structured recurrent temporal restricted Boltzmann machines. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1647–1655, 2014.

[25] Zhe Gan, Chunyuan Li, Ricardo Henao, David E Carlson, and Lawrence Carin.
Deep temporal sigmoid belief networks for sequence modeling. In Advances in Neural Information Processing Systems, pages 2467–2475, 2015.

[26] José Miguel Hernández-Lobato and Ryan P Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proceedings of the 32nd International Conference on Machine Learning, 2015.

[27] Siamak Ravanbakhsh, Barnabás Póczos, Jeff Schneider, Dale Schuurmans, and Russell Greiner. Stochastic neural networks with monotonic activation functions. AISTATS, 1050:14, 2016.

[28] Max Welling, Michal Rosen-Zvi, and Geoffrey E Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, pages 1481–1488, 2004.

[29] Qinliang Su and Yik-Chung Wu. On convergence conditions of Gaussian belief propagation. IEEE Transactions on Signal Processing, 63(5):1144–1155, 2015.

[30] Qinliang Su and Yik-Chung Wu. Convergence analysis of the variance in Gaussian belief propagation. IEEE Transactions on Signal Processing, 62(19):5119–5131, 2014.

[31] Brendan J Frey and Geoffrey E Hinton. Variational learning in nonlinear Gaussian belief networks. Neural Computation, 11(1):193–213, 1999.

[32] Qinliang Su and Yik-Chung Wu. Distributed estimation of variance in Gaussian graphical model via belief propagation: Accuracy analysis and improvement. IEEE Transactions on Signal Processing, 63(23):6258–6271, 2015.

[33] Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems 27, pages 963–971. Curran Associates, Inc., 2014.

[34] Soumya Ghosh, Francesco Maria Delle Fave, and Jonathan Yedidia. Assumed density filtering methods for learning Bayesian neural networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1589–1595, 2016.
Learning Multiple Tasks with Multilinear Relationship Networks

Mingsheng Long, Zhangjie Cao, Jianmin Wang, Philip S. Yu
School of Software, Tsinghua University, Beijing 100084, China
{mingsheng,jimwang}@tsinghua.edu.cn caozhangjie14@gmail.com psyu@uic.edu

Abstract

Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. Since deep features eventually transition from general to specific along deep networks, a fundamental problem of multi-task learning is how to exploit the task relatedness underlying parameter tensors and improve feature transferability in the multiple task-specific layers. This paper presents Multilinear Relationship Networks (MRN) that discover the task relationships based on novel tensor normal priors over parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and multilinear relationships of tasks and features, MRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Experiments show that MRN yields state-of-the-art results on three multi-task learning datasets.

1 Introduction

Supervised learning machines trained with limited labeled samples are prone to overfitting, while manual labeling of sufficient training data for new domains is often prohibitive. Thus it is imperative to design versatile algorithms that reduce the labeling burden, typically by leveraging off-the-shelf labeled data from relevant tasks. Multi-task learning is based on the idea that the performance of one task can be improved using related tasks as inductive bias [4]. Knowing the task relationship should enable the transfer of shared knowledge from relevant tasks such that only task-specific features need to be learned.
This fundamental idea of task relatedness has motivated a variety of methods, including multi-task feature learning, which learns a shared feature representation [1, 2, 6, 5, 23], and multi-task relationship learning, which models inherent task relationships [10, 14, 29, 31, 15, 17, 8]. Learning inherent task relatedness is a hard problem, since the training data of different tasks may be sampled from different distributions and fitted by different models. Without prior knowledge on the task relatedness, the distribution shift may pose a major difficulty in transferring knowledge across different tasks. Unfortunately, if cross-task knowledge transfer is impossible, then we will overfit each task due to the limited amount of labeled data. One way to circumvent this dilemma is to use an external data source, e.g. ImageNet, to learn transferable features through which the shift in the inductive biases can be reduced such that different tasks can be correlated more effectively. This idea has motivated some recent deep learning methods for learning multiple tasks [25, 22, 7, 27], which learn a shared representation in the feature layers and multiple independent classifiers in the classifier layer. However, these deep multi-task learning methods do not explicitly model the task relationships. This may result in under-transfer in the classifier layer, as knowledge cannot be transferred across different classifiers. Recent research also reveals that deep features eventually transition from general to specific along the network, and feature transferability drops significantly in higher layers with increasing task dissimilarity [28]; hence sharing all feature layers risks negative-transfer. Therefore, it remains an open problem how to exploit the task relationship across different deep networks while improving the feature transferability in task-specific layers of the deep networks.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
This paper presents the Multilinear Relationship Network (MRN) for multi-task learning, which discovers task relationships based on multiple task-specific layers of deep convolutional neural networks. Since the parameters of deep networks are natively tensors, the tensor normal distribution [21] is explored for multi-task learning: it is imposed as the prior distribution over the network parameters of all task-specific layers to learn fine-grained multilinear relationships of tasks, classes and features. By jointly learning transferable features and multilinear relationships, MRN is able to circumvent the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Experiments show that MRN learns fine-grained relationships and yields state-of-the-art results on standard benchmarks.

2 Related Work

Multi-task learning is a learning paradigm that learns multiple tasks jointly by exploiting shared structures to improve generalization performance [4, 19] and mitigate the manual labeling burden. There are generally two categories of approaches: (1) multi-task feature learning, which learns a shared feature representation such that the distribution shift across different tasks can be reduced [1, 2, 6, 5, 23]; and (2) multi-task relationship learning, which explicitly models the task relationship in the form of task grouping [14, 15, 17] or task covariance [10, 29, 31, 8]. While these methods have achieved improved performance, they may be restricted by their shallow learning paradigm, which cannot embody task relationships while suppressing task-specific variations to yield transferable features. Deep networks learn abstract representations that disentangle and hide explanatory factors of variation behind data [3, 16]. Deep representations manifest invariant factors underlying different populations and are transferable across similar tasks [28].
Thus deep networks have been successfully explored for domain adaptation [11, 18] and multi-task learning [25, 22, 32, 7, 20, 27], where significant performance gains have been witnessed. Most multi-task deep learning methods [22, 32, 7] learn a shared representation in the feature layers and multiple independent classifiers in the classifier layer without inferring the task relationships. However, this may result in under-transfer in the classifier layer, as knowledge cannot be adaptively propagated across different classifiers, while the sharing of all feature layers may still be vulnerable to negative-transfer in the feature layers, as the higher layers of deep networks are tailored to fit task-specific structures and may not be safely transferable [28]. This paper presents a multilinear relationship network based on novel tensor normal priors to learn transferable features and task relationships that mitigate both under-transfer and negative-transfer. Our work contrasts with prior relationship learning [29, 31] and multi-task deep learning [22, 32, 7, 27] methods in two key aspects. (1) Tensor normal prior: our work is the first to explore the tensor normal distribution as a prior over network parameters in different layers to learn multilinear task relationships in deep networks. Since the network parameters of multiple tasks natively stack into high-order tensors, the previous matrix normal distribution [13] cannot be used as a prior over network parameters to learn task relationships. (2) Deep task relationship: we define the tensor normal prior on multiple task-specific layers, whereas previous deep learning methods do not learn the task relationships. To our knowledge, [27] is the first work to tackle multi-task deep learning by tensor factorization, learning a shared feature subspace from multilayer parameter tensors; in contrast, our work learns multilinear task relationships from multilayer parameter tensors.
3 Tensor Normal Distribution

3.1 Probability Density Function

The tensor normal distribution is a natural extension of the multivariate normal distribution and the matrix-variate normal distribution [13] to tensor-variate distributions. The multivariate normal distribution is the order-1 tensor normal distribution, and the matrix-variate normal distribution is the order-2 tensor normal distribution. Before defining the tensor normal distribution, we first introduce the notation and operations for order-K tensors. An order-K tensor is an element of the tensor product of K vector spaces, each of which has its own coordinate system. A vector x ∈ R^{d_1} is an order-1 tensor with dimension d_1. A matrix X ∈ R^{d_1×d_2} is an order-2 tensor with dimensions (d_1, d_2). An order-K tensor X ∈ R^{d_1×···×d_K} with dimensions (d_1, ..., d_K) has elements {x_{i_1...i_K} : i_k = 1, ..., d_k}. The vectorization of X unfolds the tensor into a vector, denoted vec(X). The matricization of X is a generalization of vectorization, reordering the elements of X into a matrix. In this paper, to simplify notation and describe the tensor relationships, we use the mode-k matricization and denote by X_(k) the mode-k matrix of tensor X, where row i of X_(k) contains all elements of X having the k-th index equal to i.

Consider an order-K tensor X ∈ R^{d_1×···×d_K}. Since we can vectorize X to a (∏_{k=1}^K d_k) × 1 vector, a normal distribution on the tensor X can be regarded as a multivariate normal distribution on the vector vec(X) of dimension ∏_{k=1}^K d_k. However, such an ordinary multivariate normal distribution ignores the special structure of X as a d_1 × ··· × d_K tensor, and as a result, the covariance characterizing the correlations across the elements of X is of size (∏_{k=1}^K d_k) × (∏_{k=1}^K d_k), which is often prohibitively large for modeling and estimation. To exploit the structure of X, tensor normal distributions assume that this covariance matrix Σ_{1:K} decomposes into the Kronecker product Σ_{1:K} = Σ_1 ⊗ ··· ⊗ Σ_K, and that the elements of X (in vectorization) follow the normal distribution

    vec(X) ∼ N(vec(M), Σ_1 ⊗ ··· ⊗ Σ_K),    (1)

where ⊗ is the Kronecker product, Σ_k ∈ R^{d_k×d_k} is a positive definite matrix indicating the covariance between the d_k rows of the mode-k matricization X_(k) of dimension d_k × (∏_{k'≠k} d_{k'}), and M is a mean tensor containing the expectation of each element of X. Owing to the Kronecker decomposition of the covariance, the tensor normal distribution of an order-K tensor X, parameterized by the mean tensor M and the covariance matrices Σ_1, ..., Σ_K, has probability density function [21]

    p(x) = (2π)^{−d/2} (∏_{k=1}^K |Σ_k|^{−d/(2 d_k)}) exp(−(1/2)(x − µ)^T Σ_{1:K}^{−1} (x − µ)),    (2)

where |·| is the determinant of a square matrix, x = vec(X), µ = vec(M), Σ_{1:K} = Σ_1 ⊗ ··· ⊗ Σ_K, and d = ∏_{k=1}^K d_k. The tensor normal distribution thus corresponds to a multivariate normal distribution with Kronecker-decomposable covariance structure. That X follows a tensor normal distribution, i.e. vec(X) follows a normal distribution with Kronecker-decomposable covariance, is denoted by

    X ∼ TN_{d_1×···×d_K}(M, Σ_1, ..., Σ_K).    (3)

3.2 Maximum Likelihood Estimation

Consider a set of n samples {X_i}_{i=1}^n where each X_i is an order-3 tensor generated by a tensor normal distribution as in Equation (2). The maximum likelihood estimate (MLE) of the mean tensor M is

    M̂ = (1/n) ∑_{i=1}^n X_i.    (4)

The MLEs of the covariance matrices Σ̂_1, Σ̂_2, Σ̂_3 are computed by iteratively updating

    Σ̂_1 = (1/(n d_2 d_3)) ∑_{i=1}^n (X_i − M)_(1) (Σ̂_3 ⊗ Σ̂_2)^{−1} (X_i − M)_(1)^T,
    Σ̂_2 = (1/(n d_1 d_3)) ∑_{i=1}^n (X_i − M)_(2) (Σ̂_3 ⊗ Σ̂_1)^{−1} (X_i − M)_(2)^T,
    Σ̂_3 = (1/(n d_1 d_2)) ∑_{i=1}^n (X_i − M)_(3) (Σ̂_2 ⊗ Σ̂_1)^{−1} (X_i − M)_(3)^T.    (5)

This flip-flop algorithm [21] involves only simple matrix manipulations and its convergence is guaranteed. The covariance matrices Σ̂_1, ..., Σ̂_3 are not identifiable and the solutions maximizing the density (2) are not unique; only the Kronecker product Σ_1 ⊗ ··· ⊗ Σ_K is identifiable.
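As an illustrative sketch (not the authors' code), the flip-flop updates of Equation (5) can be implemented directly in NumPy. All names here are hypothetical; the sketch assumes zero-mean data and C-order flattening, so the column order of each unfolding matches the Kronecker factors taken in ascending mode order (the paper's matricization convention writes them in the reverse order):

```python
import numpy as np

def unfold(X, k):
    """Mode-k matricization: row i holds all elements whose k-th index is i."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def flip_flop(samples, n_iter=30, eps=1e-6):
    """MLE of (Sigma_1, Sigma_2, Sigma_3) for zero-mean order-3 tensor data.

    samples: array of shape (n, d1, d2, d3). Only the Kronecker product of
    the returned factors is identifiable, not the factors themselves.
    """
    n = samples.shape[0]
    dims = samples.shape[1:]
    S = [np.eye(d) for d in dims]
    for _ in range(n_iter):
        for k in range(3):
            o = [j for j in range(3) if j != k]
            # Kronecker product of the two remaining covariances, in the
            # column order induced by the C-order unfolding used here.
            Kinv = np.linalg.inv(np.kron(S[o[0]], S[o[1]]))
            Xk = np.moveaxis(samples, k + 1, 1).reshape(n, dims[k], -1)
            # acc = sum_i Xk_i @ Kinv @ Xk_i.T, batched over samples
            acc = np.einsum('nij,jl,nkl->ik', Xk, Kinv, Xk)
            S[k] = acc / (n * dims[o[0]] * dims[o[1]]) + eps * np.eye(dims[k])
    return S
```

On synthetic data drawn with a known Kronecker-structured covariance, the product of the three estimates approaches the true covariance as n grows, even though the individual factors are only identified up to scale.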
4 Multilinear Relationship Networks

This work models multiple tasks by jointly learning transferable representations and task relationships. Given T tasks with training data {X_t, Y_t}_{t=1}^T, where X_t = {x_1^t, ..., x_{N_t}^t} and Y_t = {y_1^t, ..., y_{N_t}^t} are the N_t training examples and associated labels of the t-th task, respectively drawn from a D-dimensional feature space and a C-cardinality label space, i.e. each training example x_n^t ∈ R^D and y_n^t ∈ {1, ..., C}. Our goal is to build a deep network for multiple tasks y_n^t = f_t(x_n^t) which learns transferable features and adaptive task relationships to bridge different tasks effectively and robustly.

Figure 1: Multilinear relationship network (MRN) for multi-task learning: (1) convolutional layers conv1–conv5 and fully-connected layer fc6 learn transferable features, so their parameters are shared across tasks; (2) fully-connected layers fc7–fc8 fit task-specific structures, so their parameters are modeled by tensor normal priors for learning multilinear relationships of features, classes and tasks.

4.1 Model

We start with deep convolutional neural networks (CNNs) [16], a family of models that learn transferable features which adapt well to multiple tasks [32, 28, 18, 27]. The main challenge is that in multi-task learning, each task is provided with a limited amount of labeled data, which is insufficient to build reliable classifiers without overfitting. In this sense, it is vital to model the task relationships, through which each pair of tasks can help each other to enable knowledge transfer if they are related, and can remain independent to mitigate negative transfer if they are unrelated. With this idea, we design a Multilinear Relationship Network (MRN) that exploits both feature transferability and task relationships to establish effective and robust multi-task learning.
Figure 1 shows the architecture of the proposed MRN model based on AlexNet [16], while other deep networks are also applicable. We build the proposed MRN model upon AlexNet [16], which is comprised of convolutional layers (conv1–conv5) and fully-connected layers (fc6–fc8). The ℓ-th fc layer learns a nonlinear mapping h_n^{t,ℓ} = a^ℓ(W^{t,ℓ} h_n^{t,ℓ−1} + b^{t,ℓ}) for task t, where h_n^{t,ℓ} is the hidden representation of each point x_n^t, W^{t,ℓ} and b^{t,ℓ} are the weight and bias parameters, and a^ℓ is the activation function, taken as the ReLU a^ℓ(x) = max(0, x) for hidden layers or the softmax units a^ℓ(x)_j = e^{x_j} / ∑_{j'=1}^{|x|} e^{x_{j'}} for the output layer. Denoting by y = f_t(x) the CNN classifier of the t-th task, the empirical error of the CNN on {X_t, Y_t} is

    min_{f_t} ∑_{n=1}^{N_t} J(f_t(x_n^t), y_n^t),    (6)

where J is the cross-entropy loss function and f_t(x_n^t) is the conditional probability that the CNN assigns x_n^t to label y_n^t. We will not describe how to compute the convolutional layers, since these layers can learn transferable features in general [28, 18]; we simply share the network parameters of these layers across different tasks, without explicitly modeling the relationships of features and tasks in these layers. To benefit from pre-training and fine-tuning as in most deep learning work, we copy these layers from a model pre-trained on ImageNet 2012 [28] and fine-tune all conv1–conv5 layers. As revealed by recent literature [28], the deep features in standard CNNs eventually transition from general to specific along the network, and feature transferability decreases while task discrepancy increases, making the features in the higher layers fc7–fc8 unsafely transferable across different tasks. In other words, the fc layers are tailored to their original task at the expense of degraded performance on the target task, which may deteriorate multi-task learning based on deep neural networks.
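The shared-trunk, task-specific-head structure and the per-task softmax classifier of Equation (6) can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions, and a single dense layer stands in for conv1–fc6; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=-1, keepdims=True)

D, H, C, T = 8, 16, 4, 3            # feature dim, hidden dim, classes, tasks
W_shared = rng.standard_normal((H, D)) * 0.1          # stands in for conv1-fc6
W_task = [rng.standard_normal((C, H)) * 0.1 for _ in range(T)]  # per-task heads

def forward(x, t):
    """Shared trunk followed by the task-t specific head (cf. Figure 1)."""
    h = relu(W_shared @ x)
    return softmax(W_task[t] @ h)

def cross_entropy(p, y):
    """J(f_t(x), y) = -log p(y | x) for the ground-truth label y."""
    return -np.log(p[y])

x, y, t = rng.standard_normal(D), 2, 1
p = forward(x, t)
loss = cross_entropy(p, y)
```

Only `W_task[t]` would receive the tensor normal prior described next; `W_shared` is simply tied across tasks.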
Most previous methods generally assume that multiple tasks can be well correlated given the shared representation learned by the feature layers conv1–fc7 of deep networks [25, 22, 32, 27]. However, this assumption is vulnerable if different tasks are not well correlated under deep features, which is common as higher layers are not safely transferable and tasks may be dissimilar. Moreover, existing multi-task learning methods are natively designed for binary classification tasks, which is not a good fit as deep networks mainly adopt multi-class softmax regression. It remains an open problem to explore the task relationships of multi-class classification for multi-task learning. In this work, we jointly learn transferable features and multilinear relationships of features and tasks over multiple task-specific layers L in a Bayesian framework. Based on the transferability of deep networks discussed above, the task-specific layers L are set to {fc7, fc8}. Denote by X = {X_t}_{t=1}^T and Y = {Y_t}_{t=1}^T the complete training data of the T tasks, and by W^{t,ℓ} ∈ R^{D_1^ℓ × D_2^ℓ} the network parameters of the t-th task in the ℓ-th layer, where D_1^ℓ and D_2^ℓ are the numbers of rows and columns of matrix W^{t,ℓ}. In order to capture the task relationships in the network parameters of all T tasks, we construct the ℓ-th layer parameter tensor as W^ℓ = [W^{1,ℓ}; ...; W^{T,ℓ}] ∈ R^{D_1^ℓ × D_2^ℓ × T}. Denote by W = {W^ℓ : ℓ ∈ L} the set of parameter tensors of all the task-specific layers L = {fc7, fc8}. The maximum a posteriori (MAP) estimation of the network parameters W given training data {X, Y} for learning multiple tasks is

    p(W | X, Y) ∝ p(W) · p(Y | X, W) = ∏_{ℓ∈L} p(W^ℓ) · ∏_{t=1}^T ∏_{n=1}^{N_t} p(y_n^t | x_n^t, W^ℓ),    (7)

where we assume that, under the prior p(W), the parameter tensor of each layer W^ℓ is independent of the parameter tensors of the other layers W^{ℓ'≠ℓ}, which is a common assumption made by most feed-forward neural network methods [3]. Finally, we assume that, given network parameters sampled from the prior, all tasks are independent.
These independence assumptions lead to the factorization of the posterior in Equation (7), which makes the final MAP estimation in deep networks easy to solve. The maximum likelihood estimation (MLE) part p(Y | X, W) in Equation (7) is modeled by the deep CNN of Equation (6), which can learn transferable features in the lower layers for multi-task learning. We opt to share the network parameters of all these layers (conv1–fc6). This parameter-sharing strategy is a relaxation of existing deep multi-task learning methods [22, 32, 7], which share all the feature layers except the classifier layer. We do not share the task-specific layers (the last feature layer fc7 and the classifier layer fc8), with the expectation of potentially mitigating negative-transfer [28]. The prior part p(W) in Equation (7) is the key to enabling multi-task deep learning, since this prior should be able to model the multilinear relationship across parameter tensors. This paper, for the first time, defines the prior for the ℓ-th layer parameter tensor by the tensor normal distribution [21] as

    p(W^ℓ) = TN_{D_1^ℓ × D_2^ℓ × T}(O, Σ_1^ℓ, Σ_2^ℓ, Σ_3^ℓ),    (8)

where Σ_1^ℓ ∈ R^{D_1^ℓ × D_1^ℓ}, Σ_2^ℓ ∈ R^{D_2^ℓ × D_2^ℓ}, and Σ_3^ℓ ∈ R^{T × T} are the mode-1, mode-2, and mode-3 covariance matrices, respectively. Specifically, in the tensor normal prior, the row covariance matrix Σ_1^ℓ models the relationships between features (feature covariance), the column covariance matrix Σ_2^ℓ models the relationships between classes (class covariance), and the mode-3 covariance matrix Σ_3^ℓ models the relationships between tasks in the ℓ-th layer network parameters {W^{1,ℓ}, ..., W^{T,ℓ}}. A common strategy used by previous methods is to use identity covariance for the feature covariance [31, 8] and the class covariance [2], which implicitly assumes independent features and classes and cannot capture the dependencies between them.
This work learns all of the feature covariance, class covariance, task covariance and network parameters from data to build robust multilinear task relationships. Integrating the CNN error functional (6) and the tensor normal prior (8) into the MAP estimation (7) and taking the negative logarithm leads to the MAP estimate of the network parameters W as a regularized optimization problem for the Multilinear Relationship Network (MRN), formally written as

    min_{f_t|_{t=1}^T, Σ_k^ℓ|_{k=1}^K} ∑_{t=1}^T ∑_{n=1}^{N_t} J(f_t(x_n^t), y_n^t) + (1/2) ∑_{ℓ∈L} ( vec(W^ℓ)^T (Σ_{1:K}^ℓ)^{−1} vec(W^ℓ) − ∑_{k=1}^K (D^ℓ / D_k^ℓ) ln|Σ_k^ℓ| ),    (9)

where D^ℓ = ∏_{k=1}^K D_k^ℓ and K = 3 is the number of modes in the parameter tensor W^ℓ (K = 4 for convolutional layers: width, height, number of feature maps, and number of tasks); Σ_{1:3}^ℓ = Σ_1^ℓ ⊗ Σ_2^ℓ ⊗ Σ_3^ℓ is the Kronecker product of the feature covariance Σ_1^ℓ, class covariance Σ_2^ℓ, and task covariance Σ_3^ℓ. Moreover, we can assume a shared task relationship across different layers, Σ_3^ℓ = Σ_3, which enhances the connection between the task relationships on the features (fc7) and the classifiers (fc8).

4.2 Algorithm

The optimization problem (9) is jointly non-convex with respect to the parameter tensors W as well as the feature covariance Σ_1^ℓ, class covariance Σ_2^ℓ, and task covariance Σ_3^ℓ. Thus, we alternately optimize one set of variables with the others fixed. We first update W^{t,ℓ}, the parameters of task t in layer ℓ. When training the deep CNN by back-propagation, we only require the gradient of the objective function (denoted by O) w.r.t. W^{t,ℓ} on each data point (x_n^t, y_n^t), which can be computed as

    ∂O(x_n^t, y_n^t)/∂W^{t,ℓ} = ∂J(f_t(x_n^t), y_n^t)/∂W^{t,ℓ} + [(Σ_{1:3}^ℓ)^{−1} vec(W^ℓ)]_{··t},    (10)

where [(Σ_{1:3}^ℓ)^{−1} vec(W^ℓ)]_{··t} is the (:, :, t) slice of the tensor folded from the elements of (Σ_{1:3}^ℓ)^{−1} vec(W^ℓ) corresponding to the parameter matrix W^{t,ℓ}.
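The regularizer gradient in Equation (10) involves (Σ_1 ⊗ Σ_2 ⊗ Σ_3)^{−1} vec(W^ℓ), which never requires forming the full Kronecker matrix: applying each small inverse along its own tensor mode yields the same result. A hypothetical NumPy check of this identity, with small made-up dimensions and C-order vectorization:

```python
import numpy as np

rng = np.random.default_rng(1)
D1, D2, T = 4, 3, 2
W = rng.standard_normal((D1, D2, T))   # stand-in for a layer parameter tensor

def spd(d):
    """Random symmetric positive definite matrix (a toy covariance)."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

S1, S2, S3 = spd(D1), spd(D2), spd(T)

# Naive route: materialize the (D1*D2*T) x (D1*D2*T) Kronecker inverse.
big = np.linalg.inv(np.kron(np.kron(S1, S2), S3))
naive = (big @ W.reshape(-1)).reshape(D1, D2, T)   # C-order vec and fold

# Kronecker trick: apply each small inverse along its own mode.
I1, I2, I3 = (np.linalg.inv(S) for S in (S1, S2, S3))
fast = np.einsum('ia,jb,kc,abc->ijk', I1, I2, I3, W)
```

The two routes agree, but the mode-wise version only ever inverts the three small factors, which is what makes the per-layer gradient of Equation (10) tractable.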
Since training a deep CNN requires a large amount of labeled data, which is prohibitive for many multi-task learning problems, we fine-tune from an AlexNet model pre-trained on ImageNet, as in [28]. In each epoch, after updating W, we update the feature covariance Σ_1^ℓ, class covariance Σ_2^ℓ, and task covariance Σ_3^ℓ by the flip-flop algorithm:

Σ_1^ℓ = (1/(D_2^ℓ T)) (W^ℓ)_{(1)} (Σ_3^ℓ ⊗ Σ_2^ℓ)^{−1} (W^ℓ)_{(1)}^T + εI_{D_1^ℓ},
Σ_2^ℓ = (1/(D_1^ℓ T)) (W^ℓ)_{(2)} (Σ_3^ℓ ⊗ Σ_1^ℓ)^{−1} (W^ℓ)_{(2)}^T + εI_{D_2^ℓ},
Σ_3^ℓ = (1/(D_1^ℓ D_2^ℓ)) (W^ℓ)_{(3)} (Σ_2^ℓ ⊗ Σ_1^ℓ)^{−1} (W^ℓ)_{(3)}^T + εI_T,   (11)

where the last term of each update equation is a small penalty, traded off by ε, for numerical stability. However, the update equations (11) are computationally prohibitive due to the dimension explosion of the Kronecker product; e.g., Σ_2^ℓ ⊗ Σ_1^ℓ has dimension D_1^ℓ D_2^ℓ × D_1^ℓ D_2^ℓ. To speed up computation, we use the following Kronecker product identities: (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1} and (B^T ⊗ A) vec(X) = vec(AXB). Taking the computation of Σ_3^ℓ ∈ R^{T×T} as an example, we have

(Σ_3^ℓ)_{ij} = (1/(D_1^ℓ D_2^ℓ)) (W^ℓ)_{(3),i·} (Σ_2^ℓ ⊗ Σ_1^ℓ)^{−1} (W^ℓ)_{(3),j·}^T + εI_{ij}
            = (1/(D_1^ℓ D_2^ℓ)) (W^ℓ)_{(3),i·} vec((Σ_1^ℓ)^{−1} W_{··j}^ℓ (Σ_2^ℓ)^{−1}) + εI_{ij},   (12)

where (W^ℓ)_{(3),i·} denotes the i-th row of the mode-3 matricization of tensor W^ℓ, and W_{··j}^ℓ denotes the (:, :, j) slice of tensor W^ℓ. Updating Σ_3^ℓ thus has computational complexity O(T^2 D_1^ℓ D_2^ℓ (D_1^ℓ + D_2^ℓ)), and similarly for Σ_1^ℓ and Σ_2^ℓ. The total complexity of updating the covariance matrices Σ_k^ℓ|_{k=1}^3 is O(D_1^ℓ D_2^ℓ T (D_1^ℓ D_2^ℓ + D_1^ℓ T + D_2^ℓ T)), which is still expensive. A key to further speedup is that the covariance matrices Σ_k^ℓ|_{k=1}^3 should be low-rank, since the features and tasks are enforced to be correlated for multi-task learning. The inverses of Σ_k^ℓ|_{k=1}^3 therefore do not exist in general, and we compute generalized inverses via eigendecomposition: we eigendecompose each Σ_k^ℓ and retain all eigenvectors with eigenvalues greater than zero.
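The eigendecomposition-based generalized inverse just described can be sketched in a few lines of NumPy (a sketch under our own naming and threshold choice; the paper only specifies keeping eigenpairs with positive eigenvalues):

```python
import numpy as np

def generalized_inverse(S, tol=1e-8):
    # eigendecompose the (possibly rank-deficient) PSD covariance and keep
    # only the eigenpairs whose eigenvalues exceed tol
    w, V = np.linalg.eigh(S)
    keep = w > tol
    # V diag(1/w) V^T restricted to the retained eigenpairs
    return (V[:, keep] / w[keep]) @ V[:, keep].T

# a rank-2 covariance in a 4-dimensional space, as arises when tasks/features
# are strongly correlated
rng = np.random.default_rng(2)
L = rng.standard_normal((4, 2))
S = L @ L.T
P = generalized_inverse(S)
```

The result satisfies the Moore–Penrose conditions S P S = S and P S P = P, so it can stand in for the non-existent inverse in Equations (10)–(12).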
The rank r of the eigen-reconstructed covariance matrices satisfies r ≤ min(D_1^ℓ, D_2^ℓ, T). Thus, the total computational complexity for Σ_k^ℓ|_{k=1}^3 is reduced to O(r D_1^ℓ D_2^ℓ T (D_1^ℓ + D_2^ℓ + T)). It is straightforward to see that the computational complexity of updating the parameter tensor W is the cost of back-propagation in standard CNNs plus the cost of computing the gradient of the regularization term in Equation (10), which is O(r D_1^ℓ D_2^ℓ T (D_1^ℓ + D_2^ℓ + T)) given the generalized inverses (Σ_k^ℓ)^{−1}|_{k=1}^3.

4.3 Discussion

The proposed Multilinear Relationship Network (MRN) is flexible and can easily be configured for different network architectures and multi-task learning scenarios. For example, replacing the network backbone from AlexNet with VGGnet [24] boils down to configuring the task-specific layers as L = {fc7, fc8}, where fc7 is the last feature layer and fc8 is the classifier layer of VGGnet. The architecture of MRN in Figure 1 can readily cope with homogeneous multi-task learning, where all tasks share the same output space. It can cope with heterogeneous multi-task learning, where different tasks have different output spaces, by setting L = {fc7}, i.e., considering only the feature layers. The multilinear relationship learning in Equation (9) is a general framework that readily subsumes many classical multi-task learning methods as special cases. Many regularized multi-task algorithms fall into two main categories: learning with feature covariances [1, 2, 6, 5] and learning with task relations [10, 14, 29, 31, 15, 17, 8]. Learning with feature covariances can be viewed as a representative formulation of feature-based methods, while learning with task relations corresponds to parameter-based methods [30]. More specifically, previous multi-task feature learning methods [1, 2] can be viewed as a special case of Equation (9) obtained by setting all covariance matrices but the feature covariance to the identity matrix, i.e.
Σ_k = I for k = 2, …, K; and previous multi-task relationship learning methods [31, 8] can be viewed as a special case of Equation (9) obtained by setting all covariance matrices but the task covariance to the identity matrix, i.e., Σ_k = I for k = 1, …, K−1. The proposed MRN is more general from the architecture perspective in dealing with parameter tensors in multiple layers of deep neural networks. It is worth highlighting a concurrent work on multi-task deep learning using tensor decomposition [27], a feature-based method that explicitly learns a low-rank shared parameter subspace. The proposed multilinear relationship across parameter tensors can be viewed as a strong alternative to tensor decomposition, with the advantage of explicitly modeling the positive and negative relations across features and tasks. In defense of [27], tensor decomposition can extract finer-grained feature relations (what to share and how much to share) than the proposed multilinear relationships.

5 Experiments

We compare MRN with state-of-the-art multi-task and deep learning methods to verify the efficacy of learning transferable features and multilinear task relationships. Code and datasets will be released.

5.1 Setup

Office-Caltech [12] This dataset is a standard benchmark for multi-task learning and transfer learning. The Office part consists of 4,652 images in 31 categories collected from three distinct domains (tasks): Amazon (A), which contains images downloaded from amazon.com, and Webcam (W) and DSLR (D), which are images taken by a Web camera and a digital SLR camera under different environmental variations. The dataset is organized by selecting the 10 categories common to the Office dataset and the Caltech-256 (C) dataset [12]; hence it yields four multi-class learning tasks.

Figure 2: Examples of the Office-Home dataset (classes such as Spoon, Sink, Mug, Pen, Knife, Bed, Bike, Kettle, TV, Keyboard, Alarm-Clock, Desk-Lamp, Hammer, Chair, and Fan, drawn from the Art, Clipart, Product, and Real World domains).
Office-Home^1 [26] This dataset is designed for evaluating transfer learning algorithms with deep learning. It consists of images from 4 different domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). For each domain, the dataset contains images of 65 object categories collected in office and home settings.

ImageCLEF-DA^2 This dataset is the benchmark for the ImageCLEF domain adaptation challenge, organized by selecting the 12 categories shared by four public datasets (tasks): Caltech-256 (C), ImageNet ILSVRC 2012 (I), Pascal VOC 2012 (P), and Bing (B).

All three datasets are evaluated using DeCAF7 [9] features for shallow methods and original images for deep methods. We compare MRN with standard and state-of-the-art methods: Single-Task Learning (STL), Multi-Task Feature Learning (MTFL) [2], Multi-Task Relationship Learning (MTRL) [31], Robust Multi-Task Learning (RMTL) [5], and Deep Multi-Task Learning with Tensor Factorization (DMTL-TF) [27]. STL performs per-task classification in separate deep networks without knowledge transfer. MTFL extracts low-rank shared feature representations by learning the feature covariance. RMTL extends MTFL to further capture the task relationships using a low-rank structure and to identify outlier tasks using a group-sparse structure. MTRL captures the task relationships using the task covariance of a matrix normal distribution. DMTL-TF tackles multi-task deep learning by tensor factorization, which learns a shared feature subspace instead of multilinear task relationships in multilayer parameter tensors. To examine the efficacy of jointly learning transferable features and multilinear task relationships, we evaluate two MRN variants: (1) MRN8, MRN using only the network layer fc8 for multilinear relationship learning; and (2) MRNt, MRN using only the task covariance Σ_3 for single-relationship learning. The proposed MRN model can natively deal with multi-class problems using the parameter tensors.
^1 http://hemanthdv.org/OfficeHome-Dataset
^2 http://imageclef.org/2014/adaptation

Table 1: Classification accuracy on Office-Caltech with standard evaluation protocol (AlexNet). Columns are tasks A / W / D / C / Avg under 5%, 10%, and 20% training data.

STL (AlexNet) | 88.9 73.0 80.4 88.7 82.8 | 92.2 80.9 88.2 88.9 87.6 | 91.3 83.3 93.7 94.9 90.8
MTFL [2]      | 90.0 78.9 90.2 86.9 86.5 | 92.4 85.3 89.5 89.2 89.1 | 93.5 89.0 95.2 92.6 92.6
RMTL [6]      | 91.3 82.3 88.8 89.1 87.9 | 92.6 85.2 93.3 87.2 89.6 | 94.3 87.0 96.7 93.4 92.4
MTRL [31]     | 86.4 83.0 95.1 89.1 88.4 | 91.1 87.1 97.0 87.6 90.7 | 90.0 88.8 99.2 94.3 93.1
DMTL-TF [27]  | 91.2 88.3 92.5 85.6 89.4 | 92.2 91.9 97.4 86.8 92.0 | 92.6 97.6 94.5 88.4 93.3
MRN8          | 91.7 96.4 96.9 86.5 92.9 | 92.7 97.1 97.3 86.6 93.4 | 93.2 96.9 99.4 82.8 94.4
MRNt          | 91.1 96.3 97.4 86.1 92.7 | 92.5 97.7 96.6 86.7 93.4 | 91.9 96.6 95.9 90.0 93.6
MRN (full)    | 92.5 97.5 97.9 87.5 93.8 | 93.6 98.6 98.6 87.3 94.5 | 94.4 98.3 99.9 89.1 95.5

Table 2: Classification accuracy on Office-Home with standard evaluation protocol (VGGnet). Columns are tasks A / C / P / R / Avg under 5%, 10%, and 20% training data.

STL (VGGnet)  | 35.8 31.2 67.8 62.5 49.3 | 51.0 40.7 75.0 68.8 58.9 | 56.1 54.6 80.4 71.8 65.7
MTFL [2]      | 40.1 30.4 61.5 59.5 47.9 | 50.3 35.0 66.3 65.0 54.2 | 55.2 38.8 69.1 70.0 58.3
RMTL [6]      | 42.3 32.8 62.3 60.6 49.5 | 49.7 34.6 65.9 64.6 53.7 | 55.2 39.2 69.6 70.5 58.6
MTRL [31]     | 42.7 33.3 62.9 61.3 50.1 | 51.6 36.3 67.7 66.3 55.5 | 55.8 39.9 70.2 71.2 59.3
DMTL-TF [27]  | 49.2 34.5 67.1 62.9 53.4 | 57.2 42.3 73.6 69.9 60.8 | 58.3 56.1 79.3 72.1 66.5
MRN8          | 52.7 34.7 70.1 67.6 56.3 | 59.1 42.7 75.1 72.8 62.4 | 58.4 55.6 80.4 72.4 66.7
MRNt          | 52.0 34.0 69.9 66.8 55.7 | 58.6 42.6 74.9 72.4 62.1 | 57.7 54.8 80.2 71.6 66.1
MRN (full)    | 53.3 36.4 70.5 67.7 57.0 | 59.9 42.7 76.3 73.0 63.0 | 58.5 55.6 80.7 72.8 66.9

However, most shallow multi-task learning methods such as MTFL, RMTL, and MTRL are formulated only for binary-class problems, due to the difficulty of dealing with order-3 parameter tensors for multi-class problems.
We adopt a one-vs-rest strategy to enable them to work on multi-class datasets. We follow the standard evaluation protocol [31, 5] for multi-task learning: we randomly select 5%, 10%, and 20% of the samples from each task as the training set and use the rest as the test set. We compare the average classification accuracy over all tasks based on five random experiments; standard errors are generally less than ±0.5%, which is not significant, and are thus not reported due to space limitations. We conduct model selection for all methods using five-fold cross-validation on the training set. For deep learning methods, we adopt AlexNet [16] and VGGnet [24], fix convolutional layers conv1–conv5, fine-tune fully connected layers fc6–fc7, and train the classifier layer fc8 via back-propagation. As the classifier layer is trained from scratch, we set its learning rate to be 10 times that of the other layers. We use mini-batch stochastic gradient descent (SGD) with momentum 0.9 and a learning-rate decay strategy, and select the learning rate between 10^{−5} and 10^{−2} with a multiplicative stepsize of 10^{1/2}.

5.2 Results

The multi-task classification results on the Office-Caltech, Office-Home, and ImageCLEF-DA datasets based on 5%, 10%, and 20% sampled training data are shown in Tables 1, 2, and 3, respectively. We observe that the proposed MRN model significantly outperforms the comparison methods on most multi-task problems. The substantial accuracy improvement validates that our multilinear relationship network, through multilayer and multilinear relationship learning, is able to learn both transferable features and adaptive task relationships, enabling effective and robust multi-task deep learning. We can make the following observations from the results. (1) Shallow multi-task learning methods MTFL, RMTL, and MTRL outperform the single-task deep learning method STL in most cases, which confirms the efficacy of learning multiple tasks by exploiting shared structures.
Among the shallow multi-task methods, MTRL gives the best accuracies, showing that exploiting task relationships may be more effective than extracting a shared feature subspace for multi-task learning. It is worth noting that, although STL cannot learn from knowledge transfer, it can be fine-tuned on each task to improve performance; thus, when the number of training samples is large enough and the tasks are dissimilar enough (e.g., the Office-Home dataset), STL may outperform shallow multi-task learning methods, as evidenced by the results in Table 2. (2) The deep multi-task learning method DMTL-TF outperforms shallow multi-task learning methods with deep features as input, which confirms the importance of learning deep transferable features to enable knowledge transfer across tasks. However, DMTL-TF only learns the shared feature subspace based on tensor factorization of the network parameters, while the task relationships in multiple network layers are not captured. This may result in negative transfer in the feature layers [28] and under-transfer in the classifier layers. Negative transfer can be witnessed by comparing multi-task methods with single-task methods: if a multi-task learning method yields lower accuracy on some of the tasks, then negative transfer arises.

Table 3: Classification accuracy on ImageCLEF-DA with standard evaluation protocol (AlexNet).
Columns are tasks C / I / P / B / Avg under 5%, 10%, and 20% training data.

STL (AlexNet) | 77.4 60.3 48.0 45.0 57.7 | 78.9 70.5 48.1 41.8 59.8 | 83.3 74.9 49.2 47.1 63.6
MTFL [2]      | 79.9 68.6 43.4 41.5 58.3 | 82.9 71.4 56.7 41.7 63.2 | 83.1 72.2 54.5 52.5 65.6
RMTL [6]      | 81.1 71.3 52.4 40.9 61.4 | 81.5 71.7 55.6 45.3 63.5 | 83.3 73.3 53.7 49.2 64.9
MTRL [31]     | 80.8 68.4 51.9 42.9 61.0 | 83.1 72.7 54.5 45.5 63.9 | 83.7 75.5 57.5 49.4 66.5
DMTL-TF [27]  | 87.9 70.0 58.1 34.1 62.5 | 89.1 82.1 58.7 48.0 69.5 | 91.7 80.0 63.2 54.1 72.2
MRN8          | 87.0 74.4 61.8 47.6 67.7 | 89.1 82.2 64.4 49.3 71.2 | 91.1 84.1 65.7 54.1 73.7
MRNt          | 88.5 73.5 63.3 51.1 69.1 | 88.0 83.1 67.4 54.8 73.3 | 91.1 83.5 65.7 55.7 74.0
MRN (full)    | 89.6 76.9 65.4 49.4 70.3 | 88.1 84.6 68.7 55.6 74.3 | 92.8 83.3 67.4 57.8 75.3

We examine MRN more closely by reporting the results of the two MRN variants, MRN8 and MRNt. Both significantly outperform the comparison methods but generally underperform MRN (full), which verifies our motivation that jointly learning transferable features and multilinear task relationships can bridge multiple tasks more effectively. (1) The disadvantage of MRN8 is that it does not learn the task relationships in the lower layer fc7, which is not safely transferable and may result in negative transfer [28]. (2) The shortcoming of MRNt is that it does not learn the multilinear relationships among features, classes, and tasks; hence the learned relationships may only capture the task covariance without the feature covariance and class covariance, losing some intrinsic relations.

Figure 3: Hinton diagrams of the task relationships learned by (a) MTRL and (b) MRN, and t-SNE embeddings of the deep features of (c) DMTL-TF and (d) MRN on the Office-Caltech tasks (A, W, D, C).
5.3 Visualization Analysis

We show that MRN can learn more reasonable task relationships with deep features than MTRL with shallow features by visualizing the Hinton diagrams of the task covariances learned by MTRL and MRN (Σ_3^{fc8}) in Figures 3(a) and 3(b), respectively. Prior knowledge of task similarity in the Office-Caltech dataset [12] indicates that tasks A, W, and D are more similar to each other, while all three are relatively dissimilar to task C. MRN successfully captures this prior task relationship and enhances the task correlation across dissimilar tasks, which enables stronger transferability for multi-task learning. Furthermore, all tasks are positively correlated (green color) in MRN, implying that all tasks can better reinforce each other. However, some of the tasks (D and C) remain negatively correlated (red color) in MTRL, implying that these tasks are drawn far apart and cannot improve each other. We illustrate feature transferability by visualizing in Figures 3(c) and 3(d) the t-SNE embeddings [18] of the images in the Office-Caltech dataset with DMTL-TF features and MRN features, respectively. Compared with DMTL-TF features, the data points with MRN features are discriminated better across different categories, i.e., each category has small intra-class variance and large inter-class margin; the data points are also aligned better across different tasks, i.e., the embeddings of different tasks overlap well, implying that different tasks reinforce each other effectively. This verifies that, with multilinear relationship learning, MRN can learn more transferable features for multi-task learning.

6 Conclusion

This paper presented the multilinear relationship network (MRN), which integrates deep neural networks with tensor normal priors over the network parameters of all task-specific layers, modeling task relatedness through covariance structures over tasks, classes, and features to enable transfer across related tasks.
An effective learning algorithm was devised to jointly learn transferable features and multilinear relationships. Experiments testify that MRN yields superior results on standard datasets.

Acknowledgments

This work was supported by the National Key R&D Program of China (2016YFB1000701), the National Natural Science Foundation of China (61772299, 61325008, 61502265, 61672313), and the TNList Fund.

References

[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[3] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[5] J. Chen, L. Tang, J. Liu, and J. Ye. A convex formulation for learning a shared predictive structure from multiple tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5):1025–1038, 2013.
[6] J. Chen, J. Zhou, and J. Ye. Integrating low-rank and group-sparse structures for robust multi-task learning. In KDD, 2011.
[7] X. Chu, W. Ouyang, W. Yang, and X. Wang. Multi-task recurrent neural network for immediacy prediction. In ICCV, 2015.
[8] C. Ciliberto, Y. Mroueh, T. Poggio, and L. Rosasco. Convex learning of multiple tasks and their structure. In ICML, 2015.
[9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[10] T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, 2004.
[11] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, 2011.
[12] B. Gong, Y. Shi, F. Sha, and K. Grauman.
Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[13] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall, 2000.
[14] L. Jacob, J.-P. Vert, and F. R. Bach. Clustered multi-task learning: A convex formulation. In NIPS, 2009.
[15] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multi-task feature learning. In ICML, 2011.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[17] A. Kumar and H. Daume III. Learning task grouping and overlap in multi-task learning. In ICML, 2012.
[18] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.
[19] A. Maurer, M. Pontil, and B. Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research, 17(1):2853–2884, 2016.
[20] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016.
[21] M. Ohlson, M. R. Ahmad, and D. Von Rosen. The multilinear normal distribution: Introduction and some basic properties. Journal of Multivariate Analysis, 113:37–47, 2013.
[22] W. Ouyang, X. Chu, and X. Wang. Multisource deep learning for human pose estimation. In CVPR, 2014.
[23] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In ICML, 2013.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[25] N. Srivastava and R. Salakhutdinov. Discriminative transfer learning with tree-based priors. In NIPS, 2013.
[26] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, 2017.
[27] Y. Yang and T. Hospedales. Deep multi-task representation learning: A tensor factorisation approach. In ICLR, 2017.
[28] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson.
How transferable are features in deep neural networks? In NIPS, 2014.
[29] Y. Zhang and J. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In NIPS, 2010.
[30] Y. Zhang and Q. Yang. A survey on multi-task learning. arXiv preprint arXiv:1707.08114, 2017.
[31] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010.
[32] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, 2014.
Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation

Yuhuai Wu* (University of Toronto, Vector Institute, ywu@cs.toronto.edu), Elman Mansimov* (New York University, mansimov@cs.nyu.edu), Shun Liao (University of Toronto, Vector Institute, sliao3@cs.toronto.edu), Roger Grosse (University of Toronto, Vector Institute, rgrosse@cs.toronto.edu), Jimmy Ba (University of Toronto, Vector Institute, jimmy@psi.utoronto.ca)

Abstract

In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust-region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control, as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines.

1 Introduction

Agents using deep reinforcement learning (deep RL) methods have shown tremendous success in learning complex behaviour skills and solving challenging control tasks in high-dimensional raw sensory state-spaces [24, 17, 12]. Deep RL methods make use of deep neural networks to represent control policies. Despite the impressive results, these neural networks are still trained using simple variants of stochastic gradient descent (SGD).
SGD and related first-order methods explore weight space inefficiently. It often takes days for current deep RL methods to master various continuous and discrete control tasks. A distributed approach was previously proposed [17] to reduce training time by executing multiple agents interacting with the environment simultaneously, but this leads to rapidly diminishing returns in sample efficiency as the degree of parallelism increases.

*Equal contribution. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Performance comparisons of ACKTR, A2C, and TRPO on six standard Atari games (BeamRider, Breakout, Pong, Qbert, Seaquest, and SpaceInvaders) trained for 10 million timesteps (1 timestep equals 4 frames). The shaded region denotes the standard deviation over 2 random seeds.

Sample efficiency is a dominant concern in RL; robotic interaction with the real world is typically scarcer than computation time, and even in simulated environments the cost of simulation often dominates that of the algorithm itself. One way to effectively reduce the sample size is to use more advanced optimization techniques for gradient updates. Natural policy gradient [10] uses the technique of natural gradient descent [1] to perform gradient updates. Natural gradient methods
follow the steepest descent direction under the Fisher metric, a metric based not on the choice of coordinates but on the manifold (i.e., the surface). However, the exact computation of the natural gradient is intractable because it requires inverting the Fisher information matrix. Trust-region policy optimization (TRPO) [21] avoids explicitly storing and inverting the Fisher matrix by using Fisher-vector products [20]. However, it typically requires many steps of conjugate gradient to obtain a single parameter update, and accurately estimating the curvature requires a large number of samples in each batch; hence TRPO is impractical for large models and suffers from sample inefficiency. Kronecker-factored approximate curvature (K-FAC) [15, 6] is a scalable approximation to the natural gradient. It has been shown to speed up training of various state-of-the-art large-scale neural networks [2] in supervised learning by using larger mini-batches. Unlike TRPO, each update is comparable in cost to an SGD update, and it keeps a running average of curvature information, allowing it to use small batches. This suggests that applying K-FAC to policy optimization could improve the sample efficiency of current deep RL methods. In this paper, we introduce the actor-critic using Kronecker-factored trust region (ACKTR; pronounced "actor") method, a scalable trust-region optimization algorithm for actor-critic methods. The proposed algorithm uses a Kronecker-factored approximation to the natural policy gradient that allows the covariance matrix of the gradient to be inverted efficiently. To the best of our knowledge, we are also the first to extend the natural policy gradient algorithm to optimize value functions via a Gauss-Newton approximation. In practice, the per-update computation cost of ACKTR is only 10% to 25% higher than that of SGD-based methods.
Empirically, we show that ACKTR substantially improves both the sample efficiency and the final performance of the agent in the Atari environments [4] and on MuJoCo [26] tasks, compared to the state-of-the-art on-policy actor-critic method A2C [17] and the well-known trust region optimizer TRPO [21]. We make our source code available online at https://github.com/openai/baselines.

2 Background

2.1 Reinforcement learning and actor-critic methods

We consider an agent interacting with an infinite-horizon, discounted Markov Decision Process (X, A, γ, P, r). At time t, the agent chooses an action a_t ∈ A according to its policy π_θ(a|s_t) given its current state s_t ∈ X. The environment in turn produces a reward r(s_t, a_t) and transitions to the next state s_{t+1} according to the transition probability P(s_{t+1}|s_t, a_t). The goal of the agent is to maximize the expected γ-discounted cumulative return

J(θ) = E_π[R_t] = E_π[Σ_{i≥0} γ^i r(s_{t+i}, a_{t+i})]

with respect to the policy parameters θ. Policy gradient methods [28, 25] directly parameterize a policy π_θ(a|s_t) and update the parameter θ so as to maximize the objective J(θ). In its general form, the policy gradient is defined as [22]

∇_θ J(θ) = E_π[Σ_{t=0}^∞ Ψ_t ∇_θ log π_θ(a_t|s_t)],

where Ψ_t is often chosen to be the advantage function A^π(s_t, a_t), which provides a relative measure of the value of each action a_t at a given state s_t. There is an active line of research [22] on designing advantage functions that provide both low-variance and low-bias gradient estimates. As this is not the focus of our work, we simply follow the asynchronous advantage actor-critic (A3C) method [17] and define the advantage function as the k-step return with function approximation,

A^π(s_t, a_t) = Σ_{i=0}^{k−1} γ^i r(s_{t+i}, a_{t+i}) + γ^k V^π_φ(s_{t+k}) − V^π_φ(s_t),

where V^π_φ(s_t) is the value network, which provides an estimate of the expected sum of rewards from the given state following policy π, V^π_φ(s_t) = E_π[R_t].
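The k-step advantage above can be computed for an entire rollout with a single backward pass; a minimal NumPy sketch (function name and array layout are our own, not from the paper):

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap, gamma=0.99):
    # A(s_t, a_t) = sum_{i<k} gamma^i r_{t+i} + gamma^k V(s_{t+k}) - V(s_t),
    # evaluated for every t in a length-k rollout by a backward recursion
    R = bootstrap                      # critic's estimate of V(s_{t+k})
    returns = np.empty_like(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R     # bootstrapped return from step t
        returns[t] = R
    return returns - values            # advantages; `returns` are the TD targets

rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.5, 0.5])
adv = k_step_advantages(rewards, values, bootstrap=2.0, gamma=0.5)
```

With γ = 0.5 and bootstrap value 2.0, the targets come out as [1.5, 1.0, 2.0], giving advantages [1.0, 0.5, 1.5]; the same `returns` array would serve as the targets R̂_t for the critic's temporal difference loss below.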
To train the parameters of the value network, we again follow [17] in performing temporal difference updates, minimizing the squared difference between the bootstrapped k-step return R̂_t and the predicted value, (1/2)‖R̂_t − V^π_φ(s_t)‖².

2.2 Natural gradient using Kronecker-factored approximation

To minimize a nonconvex function J(θ), the method of steepest descent calculates the update Δθ that minimizes J(θ + Δθ), subject to the constraint ‖Δθ‖_B < 1, where ‖·‖_B is the norm defined by ‖x‖_B = (x^T B x)^{1/2} and B is a positive semidefinite matrix. The solution to this constrained optimization problem has the form Δθ ∝ −B^{−1} ∇_θ J, where ∇_θ J is the standard gradient. When the norm is Euclidean, i.e., B = I, this becomes the commonly used method of gradient descent. However, the Euclidean norm of the change depends on the parameterization θ. This is not favorable, because the parameterization of the model is an arbitrary choice and should not affect the optimization trajectory. The method of natural gradient constructs the norm using the Fisher information matrix F, a local quadratic approximation to the KL divergence. This norm is independent of the model parameterization θ on the class of probability distributions, providing a more stable and effective update. However, since modern neural networks may contain millions of parameters, computing and storing the exact Fisher matrix and its inverse is impractical, so we have to resort to approximations. A recently proposed technique called Kronecker-factored approximate curvature (K-FAC) [15] uses a Kronecker-factored approximation to the Fisher matrix to perform efficient approximate natural gradient updates. We let p(y|x) denote the output distribution of a neural network, and L = log p(y|x) the log-likelihood. Let W ∈ R^{C_out × C_in} be the weight matrix in the ℓ-th layer, where C_out and C_in are the numbers of output and input neurons of the layer.
Denote the input activation vector to the layer by a ∈ R^{C_in} and the pre-activation vector for the next layer by s = Wa. Note that the weight gradient is given by ∇_W L = (∇_s L) a^⊺. K-FAC utilizes this fact and approximates the block F_ℓ corresponding to layer ℓ as F̂_ℓ:

F_ℓ = E[vec{∇_W L} vec{∇_W L}^⊺] = E[aa^⊺ ⊗ ∇_s L (∇_s L)^⊺] ≈ E[aa^⊺] ⊗ E[∇_s L (∇_s L)^⊺] := A ⊗ S := F̂_ℓ,

where A denotes E[aa^⊺] and S denotes E[∇_s L (∇_s L)^⊺]. This approximation can be interpreted as assuming that the second-order statistics of the activations and of the backpropagated derivatives are uncorrelated. With this approximation, the natural gradient update can be computed efficiently by exploiting the basic identities (P ⊗ Q)^{−1} = P^{−1} ⊗ Q^{−1} and (P ⊗ Q) vec(T) = P T Q^⊺:

vec(ΔW) = F̂_ℓ^{−1} vec{∇_W J} = vec(A^{−1} ∇_W J S^{−1}).

From the above equation we see that the K-FAC approximate natural gradient update only requires computations on matrices comparable in size to W. Grosse and Martens [6] have recently extended the K-FAC algorithm to handle convolutional networks. Ba et al. [2] later developed a distributed version of the method in which most of the overhead is mitigated through asynchronous computation. Distributed K-FAC achieved 2- to 3-fold speed-ups in training large modern classification convolutional networks.

Figure 2: In the Atari game of Atlantis, our agent (ACKTR) quickly learns to obtain rewards of 2 million in 1.3 hours, 600 episodes of games, and 2.5 million timesteps. The same result is achieved by advantage actor critic (A2C) in 10 hours, 6000 episodes, and 25 million timesteps. ACKTR is 10 times more sample efficient than A2C on this game.
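The per-layer K-FAC update can be sketched in a few lines of NumPy (our own naming; with W stored as C_out × C_in and a column-major vec, the update vec(ΔW) = (A ⊗ S)^{−1} vec(∇_W J) comes out as S^{−1} (∇_W J) A^{−1}, the transpose-convention twin of the formula above):

```python
import numpy as np

rng = np.random.default_rng(3)
N, Cin, Cout = 64, 5, 3
acts = rng.standard_normal((N, Cin))      # layer input activations a
grads_s = rng.standard_normal((N, Cout))  # pre-activation gradients grad_s L
grad_W = grads_s.T @ acts / N             # averaged weight gradient, Cout x Cin

def kfac_nat_grad(grad_W, acts, grads_s, damping=1e-1):
    # F ~ A (x) S with A = E[a a^T], S = E[g g^T]; the update needs only the
    # inverses of the small factors, never the full Kronecker product
    N = acts.shape[0]
    A = acts.T @ acts / N + damping * np.eye(acts.shape[1])
    S = grads_s.T @ grads_s / N + damping * np.eye(grads_s.shape[1])
    return np.linalg.solve(S, grad_W) @ np.linalg.inv(A)

# sanity check against the explicitly materialized (damped) factored Fisher
A = acts.T @ acts / N + 1e-1 * np.eye(Cin)
S = grads_s.T @ grads_s / N + 1e-1 * np.eye(Cout)
F = np.kron(A, S)  # matches column-major vec of a Cout x Cin gradient
explicit = np.linalg.solve(F, grad_W.flatten(order="F")).reshape(Cout, Cin, order="F")
```

Only C_in × C_in and C_out × C_out matrices are ever inverted, which is what makes the update comparable in cost to SGD.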
3 Methods

3.1 Natural gradient in actor-critic

Natural gradient was proposed for the policy gradient method more than a decade ago by Kakade [10], but a scalable, sample-efficient, and general-purpose instantiation of the natural policy gradient has remained elusive. In this section, we introduce the first scalable and sample-efficient natural gradient algorithm for actor-critic methods: the actor-critic using Kronecker-factored trust region (ACKTR) method. We use the Kronecker-factored approximation to compute the natural gradient update, and apply the natural gradient update to both the actor and the critic.

To define the Fisher metric for reinforcement learning objectives, one natural choice is to use the policy function, which defines a distribution over actions given the current state, and take the expectation over the trajectory distribution:

$F = \mathbb{E}_{p(\tau)}\big[\nabla_\theta \log \pi(a_t|s_t)\,(\nabla_\theta \log \pi(a_t|s_t))^\top\big],$

where $p(\tau)$ is the distribution of trajectories, given by $p(s_0)\prod_{t=0}^{T}\pi(a_t|s_t)\,p(s_{t+1}|s_t, a_t)$. In practice, one approximates the intractable expectation using trajectories collected during training.

We now describe one way to apply natural gradient to optimize the critic. Learning the critic can be thought of as a least-squares function approximation problem, albeit one with a moving target. In the setting of least-squares function approximation, the second-order algorithm of choice is commonly Gauss-Newton, which approximates the curvature as the Gauss-Newton matrix $G := \mathbb{E}[J^\top J]$, where $J$ is the Jacobian of the mapping from parameters to outputs [18]. The Gauss-Newton matrix is equivalent to the Fisher matrix for a Gaussian observation model [14]; this equivalence allows us to apply K-FAC to the critic as well. Specifically, we define the output of the critic $v$ to be a Gaussian distribution, $p(v|s_t) = \mathcal{N}(v; V(s_t), \sigma^2)$. The Fisher matrix for the critic is defined with respect to this Gaussian output distribution.
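The trajectory expectation defining the Fisher metric is estimated from sampled state-action pairs in practice. The sketch below is our own toy illustration: a linear-softmax policy with i.i.d. sampled states stands in for real trajectories from $p(\tau)$, and the dimensions are made up. It estimates $F = \mathbb{E}[\nabla_\theta \log \pi \, (\nabla_\theta \log \pi)^\top]$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy linear-softmax policy pi(a|s) with parameter matrix theta.
n_actions, state_dim, n_samples = 3, 4, 2000
theta = rng.normal(size=(n_actions, state_dim))
F = np.zeros((n_actions * state_dim,) * 2)

for _ in range(n_samples):
    s = rng.normal(size=state_dim)       # i.i.d. states instead of an MDP
    p = softmax(theta @ s)
    a = rng.choice(n_actions, p=p)       # action sampled from the policy
    grad = -np.outer(p, s)               # d log pi(a|s) / d theta
    grad[a] += s
    g = grad.ravel()
    F += np.outer(g, g) / n_samples      # running Fisher estimate
```

By construction the estimate is symmetric and positive semidefinite, which is what makes it usable as the metric $B$ in the steepest-descent formulation.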
In practice, we can simply set $\sigma$ to 1, which is equivalent to the vanilla Gauss-Newton method.

If the actor and critic are disjoint, one can separately apply K-FAC updates to each using the metrics defined above. However, to avoid instability in training, it is often beneficial to use an architecture where the two networks share lower-layer representations but have distinct output layers [17, 27]. In this case, we can define the joint distribution of the policy and the value by assuming independence of the two output distributions, i.e., $p(a, v|s) = \pi(a|s)\,p(v|s)$, and construct the Fisher metric with respect to $p(a, v|s)$. This is no different from standard K-FAC, except that we need to sample the networks' outputs independently. We can then apply K-FAC to approximate the Fisher matrix $\mathbb{E}_{p(\tau)}[\nabla \log p(a, v|s)\,\nabla \log p(a, v|s)^\top]$ and perform updates simultaneously.

In addition, we use regular damping for regularization. We also follow [2] and perform the asynchronous computation of the second-order statistics and inverses required by the Kronecker approximation to reduce computation time.

3.2 Step-size selection and trust-region optimization

Traditionally, natural gradient is performed with SGD-like updates, $\theta \leftarrow \theta - \eta F^{-1}\nabla_\theta L$. But in the context of deep RL, Schulman et al. [21] observed that such an update rule can result in large updates to the policy, causing the algorithm to prematurely converge to a near-deterministic policy. They advocate instead using a trust region approach, whereby the update is scaled down to modify the policy distribution (in terms of KL divergence) by at most a specified amount. Therefore, we adopt the trust region formulation of K-FAC introduced by [2], choosing the effective step size $\eta$ to be $\min\big(\eta_{max}, \sqrt{2\delta / (\Delta\theta^\top \hat{F} \Delta\theta)}\big)$, where the learning rate $\eta_{max}$ and trust region radius $\delta$ are hyperparameters. If the actor and the critic are disjoint, then we need to tune a different set of $\eta_{max}$ and $\delta$ separately for each.
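The effective step size rule can be sketched directly from the formula. The snippet below is illustrative (the matrix and numbers are made up); it also checks the property that motivates the rule, namely that the scaled step keeps the quadratic KL model $\frac{1}{2}\Delta\theta^\top \hat{F}\Delta\theta$ within the radius $\delta$.

```python
import numpy as np

def trust_region_lr(delta_theta, F_hat, eta_max, delta):
    """Effective step size min(eta_max, sqrt(2*delta / (d^T F d))):
    scale the proposed update so the local quadratic model of the
    KL divergence stays below the trust-region radius delta."""
    quad = float(delta_theta @ F_hat @ delta_theta)
    return min(eta_max, np.sqrt(2.0 * delta / quad))

F_hat = np.diag([4.0, 1.0])       # toy Fisher approximation
d = np.array([1.0, 2.0])          # proposed natural gradient step
eta = trust_region_lr(d, F_hat, eta_max=1.0, delta=0.01)

# Quadratic KL model of the scaled step is capped at delta.
kl_model = 0.5 * (eta * d) @ F_hat @ (eta * d)
```

Here the raw step is far too large, so the rule returns the scaled value rather than `eta_max`.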
The variance parameter of the critic output distribution can be absorbed into the learning rate parameter for vanilla Gauss-Newton. On the other hand, if they share representations, we need to tune one set of $\eta_{max}$ and $\delta$, as well as the weighting of the critic's training loss relative to that of the actor.

4 Related work

Natural gradient [1] was first applied to policy gradient methods by Kakade [10]. Bagnell and Schneider [3] further proved that the metric defined in [10] is a covariant metric induced by the path-distribution manifold. Peters and Schaal [19] then applied natural gradient to the actor-critic algorithm. They proposed performing natural policy gradient for the actor's update and using a least-squares temporal difference (LSTD) method for the critic's update. However, there are great computational challenges when applying natural gradient methods, mainly associated with efficiently storing the Fisher matrix and computing its inverse. For tractability, previous work restricted the method to using the compatible function approximator (a linear function approximator).

To avoid the computational burden, Trust Region Policy Optimization (TRPO) [21] approximately solves the linear system using conjugate gradient with fast Fisher matrix-vector products, similar to the work of Martens [13]. This approach has two main shortcomings. First, it requires repeated computation of Fisher-vector products, preventing it from scaling to the larger architectures typically used in experiments on learning from image observations in Atari and MuJoCo. Second, it requires a large batch of rollouts in order to accurately estimate curvature. K-FAC avoids both issues by using tractable Fisher matrix approximations and by keeping a running average of curvature statistics during training. Although TRPO shows better per-iteration progress than policy gradient methods trained with first-order optimizers such as Adam [11], it is generally less sample efficient.
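The conjugate-gradient approach mentioned above solves $Fx = b$ using only matrix-vector products, never forming or inverting $F$. The sketch below is a generic CG solver, not TRPO's actual implementation; the tiny explicit matrix is only for checking, whereas in practice `mvp` would be an implicit Fisher-vector product through the network.

```python
import numpy as np

def conjugate_gradient(mvp, b, iters=50, tol=1e-10):
    """Solve F x = b for symmetric positive definite F, given only a
    matrix-vector product mvp(v) = F v. This is how TRPO-style methods
    avoid materializing the Fisher matrix."""
    x = np.zeros_like(b)
    r = b.copy()            # residual b - F x
    p = r.copy()            # search direction
    rr = r @ r
    for _ in range(iters):
        Fp = mvp(p)
        alpha = rr / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

F = np.array([[3.0, 1.0], [1.0, 2.0]])   # toy SPD "Fisher"
b = np.array([1.0, 1.0])
x = conjugate_gradient(lambda v: F @ v, b)
```

Each iteration costs one Fisher-vector product, which is the repeated cost that makes this approach hard to scale to large architectures.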
Several methods have been proposed to improve the computational efficiency of TRPO. To avoid repeated computation of Fisher-vector products, Wang et al. [27] solve the constrained optimization problem with a linear approximation of the KL divergence between a running average of the policy network and the current policy network. Instead of the hard constraint imposed by the trust region optimizer, Heess et al. [8] and Schulman et al. [23] add a KL cost to the objective function as a soft constraint. Both papers show some improvement over vanilla policy gradient on continuous and discrete control tasks in terms of sample efficiency. There are other recently introduced actor-critic models that improve sample efficiency by introducing experience replay [27, 7] or auxiliary objectives [9]. These approaches are orthogonal to our work, and could potentially be combined with ACKTR to further enhance sample efficiency.

5 Experiments

We conducted a series of experiments to investigate the following questions: (1) How does ACKTR compare with the state-of-the-art on-policy method and a common second-order optimizer baseline in terms of sample efficiency and computational efficiency? (2) What makes a better norm for optimization of the critic? (3) How does the performance of ACKTR scale with batch size compared to the first-order method?

We evaluated our proposed method, ACKTR, on two standard benchmark platforms. We first evaluated it on the discrete control tasks defined in OpenAI Gym [5], simulated by the Arcade Learning Environment [4], a simulator for Atari 2600 games which is commonly used as a deep reinforcement learning benchmark for discrete control.
We then evaluated it on a variety of continuous control benchmark tasks defined in OpenAI Gym [5], simulated by the MuJoCo [26] physics engine. Our baselines are (a) a synchronous and batched version of the asynchronous advantage actor critic model (A3C) [17], henceforth called A2C (advantage actor critic), and (b) TRPO [21]. ACKTR and the baselines use the same model architecture, except for the TRPO baseline on Atari games, for which we are limited to a smaller architecture because of the computational burden of running a conjugate gradient inner loop. See the appendix for other experiment details.

Table 1: ACKTR and A2C results showing the last 100 average episode rewards attained after 50 million timesteps, and TRPO results after 10 million timesteps. The table also shows the episode N, where N denotes the first episode for which the mean episode reward over the N-th game to the (N + 100)-th game crosses the human performance level [16], averaged over 2 random seeds.

Domain         | Human level | ACKTR rewards / episode | A2C rewards / episode | TRPO (10M) rewards / episode
Beamrider      | 5775.0      | 13581.4 / 3279          | 8148.1 / 8930         | 670.0 / N/A
Breakout       | 31.8        | 735.7 / 4094            | 581.6 / 14464         | 14.7 / N/A
Pong           | 9.3         | 20.9 / 904              | 19.9 / 4768           | -1.2 / N/A
Q-bert         | 13455.0     | 21500.3 / 6422          | 15967.4 / 19168       | 971.8 / N/A
Seaquest       | 20182.0     | 1776.0 / N/A            | 1754.0 / N/A          | 810.4 / N/A
Space Invaders | 1652.0      | 19723.0 / 14696         | 1757.2 / N/A          | 465.1 / N/A

5.1 Discrete control

We first present results on the standard six Atari 2600 games to measure the performance improvement obtained by ACKTR. The results on the six Atari games trained for 10 million timesteps are shown in Figure 1, with comparison to A2C and TRPO². ACKTR significantly outperformed A2C in terms of sample efficiency (i.e., speed of convergence per number of timesteps) in all games. We found that TRPO could only learn two games, Seaquest and Pong, in 10 million timesteps, and performed worse than A2C in terms of sample efficiency.
In Table 1 we present the mean reward over the last 100 episodes of training for 50 million timesteps, as well as the number of episodes required to achieve human performance [16]. Notably, on the games Beamrider, Breakout, Pong, and Q-bert, A2C required respectively 2.7, 3.5, 5.3, and 3.0 times more episodes than ACKTR to achieve human performance. In addition, one of the runs by A2C in Space Invaders failed to match human performance, whereas ACKTR achieved 19723 on average, 12 times better than human performance (1652). On the games Breakout, Q-bert, and Beamrider, ACKTR achieved 26%, 35%, and 67% larger episode rewards than A2C, respectively.

We also evaluated ACKTR on the rest of the Atari games; see the appendix for full results. We compared ACKTR with Q-learning methods, and found that in 36 out of 44 benchmarks, ACKTR is on par with Q-learning methods in terms of sample efficiency, while requiring far less computation time. Remarkably, in the game of Atlantis, ACKTR quickly learned to obtain rewards of 2 million in 1.3 hours (600 episodes), as shown in Figure 2. It took A2C 10 hours (6000 episodes) to reach the same performance level.

5.2 Continuous control

We ran experiments on the standard benchmark of continuous control tasks defined in OpenAI Gym [5], simulated in MuJoCo [26], both from low-dimensional state-space representations and directly from pixels. In contrast to Atari, the continuous control tasks are sometimes more challenging due to high-dimensional action spaces and exploration. The results on eight MuJoCo environments trained for 1 million timesteps are shown in Figure 3. Our model significantly outperformed baselines on six out of eight MuJoCo tasks and performed competitively with A2C on the other two tasks (Walker2d and Swimmer).
We further evaluated ACKTR for 30 million timesteps on eight MuJoCo tasks; in Table 2 we present the mean rewards of the top 10 consecutive episodes in training, as well as the number of episodes needed to reach a certain threshold defined in [7].

²The A2C and TRPO Atari baseline results were provided to us by the OpenAI team, https://github.com/openai/baselines.

Figure 3: Performance comparisons on eight MuJoCo environments (InvertedPendulum, InvertedDoublePendulum, Reacher, Hopper, Swimmer, Walker2d, HalfCheetah, Ant) trained for 1 million timesteps (1 timestep equals 4 frames). The shaded region denotes the standard deviation over 3 random seeds.

Figure 4: Performance comparisons on 3 MuJoCo environments from image observations (Reacher, Walker2d, HalfCheetah) trained for 40 million timesteps (1 timestep equals 4 frames).
As shown in Table 2, ACKTR reaches the specified threshold faster on all tasks, except for Swimmer, where TRPO achieves 4.1 times better sample efficiency. A particularly notable case is Ant, where ACKTR is 16.4 times more sample efficient than TRPO. As for the mean reward score, all three models achieve comparable results, with the exception of TRPO, which in the Walker2d environment achieves a 10% better reward score.

We also attempted to learn continuous control policies directly from pixels, without providing the low-dimensional state space as an input. Learning continuous control policies from pixels is much more challenging than learning from the state space, partially due to the slower rendering time compared to Atari (0.5 seconds in MuJoCo vs. 0.002 seconds in Atari). The state-of-the-art actor-critic method A3C [17] only reported results from pixels on relatively simple tasks, such as Pendulum, Pointmass2D, and Gripper. As shown in Figure 4, our model significantly outperforms A2C in terms of final episode reward after training for 40 million timesteps. More specifically, on Reacher, HalfCheetah, and Walker2d our model achieved 1.6, 2.8, and 1.7 times greater final rewards compared to A2C. Videos of the policies trained from pixels can be found at https://www.youtube.com/watch?v=gtM87w1xGoM. Pretrained model weights are available at https://github.com/emansim/acktr.

5.3 A better norm for critic optimization?

The previous natural policy gradient method applied a natural gradient update only to the actor. In our work, we propose also applying a natural gradient update to the critic. The difference lies in the norm with which we choose to perform steepest descent on the critic, i.e., the norm $\|\cdot\|_B$ defined in Section 2.2. In this section, we applied ACKTR to the actor, and compared using a first-order method (i.e., the Euclidean norm) with using ACKTR (i.e., the norm defined by Gauss-Newton) for critic optimization.
Figures 5(a) and 5(b) show the results on the continuous control task HalfCheetah and the Atari game Breakout. We observe that regardless of which norm we use to optimize the critic, applying ACKTR to the actor brings improvements over the baseline A2C. However, the improvements from using the Gauss-Newton norm for optimizing the critic are more substantial in terms of sample efficiency and episode rewards at the end of training. In addition, the Gauss-Newton norm also helps stabilize training: we observe larger variance in the results over random seeds with the Euclidean norm.

Table 2: ACKTR, A2C, and TRPO results, showing the top 10 average episode rewards attained within 30 million timesteps, averaged over the 3 best-performing random seeds out of 8 random seeds. "Episodes" denotes the smallest N for which the mean episode reward over the N-th to the (N + 10)-th game crosses a certain threshold. The thresholds for all environments except InvertedPendulum (IP) and InvertedDoublePendulum (IDP) were chosen according to Gu et al. [7]; in brackets we show the reward threshold needed to solve the environment according to the OpenAI Gym website [5].

Domain      | Threshold   | ACKTR rewards / episodes | A2C rewards / episodes | TRPO rewards / episodes
Ant         | 3500 (6000) | 4621.6 / 3660            | 4870.5 / 106186        | 5095.0 / 60156
HalfCheetah | 4700 (4800) | 5586.3 / 12980           | 5343.7 / 21152         | 5704.7 / 21033
Hopper      | 2000 (3800) | 3915.9 / 17033           | 3915.3 / 33481         | 3755.0 / 39426
IP          | 950 (950)   | 1000.0 / 6831            | 1000.0 / 10982         | 1000.0 / 29267
IDP         | 9100 (9100) | 9356.0 / 41996           | 9356.1 / 82694         | 9320.0 / 78519
Reacher     | -7 (-3.75)  | -1.5 / 3325              | -1.7 / 20591           | -2.0 / 14940
Swimmer     | 90 (360)    | 138.0 / 6475             | 140.7 / 11516          | 136.4 / 1571
Walker2d    | 3000 (N/A)  | 6198.8 / 15043           | 5874.9 / 26828         | 6874.1 / 27720

Recall that the Fisher matrix for the critic is constructed using the output distribution of the critic, a Gaussian distribution with variance $\sigma^2$. In vanilla Gauss-Newton, $\sigma$ is set to 1.
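The Fisher/Gauss-Newton equivalence for a Gaussian output model can be checked numerically. The sketch below is our own toy illustration for a single state with a linear value function $V_\theta(s) = \theta^\top \phi(s)$; the features and parameters are made up. It verifies that the Monte Carlo Fisher estimate matches $\phi\phi^\top$, the Gauss-Newton matrix, when $\sigma = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Critic output model p(v|s) = N(V_theta(s), sigma^2), V_theta(s) = theta^T phi.
phi = rng.normal(size=3)        # features of one state (made up)
theta = rng.normal(size=3)
sigma = 1.0
mu = theta @ phi

# grad_theta log N(v; mu, sigma^2) = (v - mu) * phi / sigma^2, so the
# Fisher E_v[grad grad^T] = E[(v - mu)^2] / sigma^4 * phi phi^T = phi phi^T / sigma^2.
vs = rng.normal(mu, sigma, size=200_000)
fisher = ((vs - mu) ** 2).mean() / sigma**4 * np.outer(phi, phi)

gauss_newton = np.outer(phi, phi)   # J^T J with Jacobian J = phi^T
```

With $\sigma = 1$ the two matrices coincide, which is why K-FAC on the Gaussian critic reduces to vanilla Gauss-Newton.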
We experimented with estimating $\sigma$ using the variance of the Bellman error, which resembles estimating the variance of the noise in regression analysis. We call this method adaptive Gauss-Newton. However, we find that adaptive Gauss-Newton does not provide any significant improvement over vanilla Gauss-Newton. (See the appendix for detailed comparisons of the choices of $\sigma$.)

5.4 How does ACKTR compare with A2C in wall-clock time?

We compared ACKTR to the baselines A2C and TRPO in terms of wall-clock time. Table 3 shows the average timesteps per second over six Atari games and eight MuJoCo (state-space) environments. The results were obtained with the same experimental setup as the previous experiments. Note that in MuJoCo tasks episodes are processed sequentially, whereas in the Atari environment episodes are processed in parallel; hence more frames are processed in the Atari environments. From the table we see that ACKTR increases computing time by at most 25% per timestep, demonstrating its practicality given its large optimization benefits.

Table 3: Comparison of computational cost: average timesteps per second over six Atari games and eight MuJoCo tasks during training for each algorithm. ACKTR increases computing time by at most 25% over A2C.

(Timesteps/second) | Atari, batch 80 / 160 / 640 | MuJoCo, batch 1000 / 2500 / 25000
ACKTR              | 712 / 753 / 852             | 519 / 551 / 582
A2C                | 1010 / 1038 / 1162          | 624 / 650 / 651
TRPO               | 160 / 161 / 177             | 593 / 619 / 637

5.5 How do ACKTR and A2C perform with different batch sizes?

In a large-scale distributed learning setting, large batch sizes are used in optimization. Therefore, in such a setting, it is preferable to use a method that scales well with batch size. In this section, we compare how ACKTR and the baseline A2C perform with respect to different batch sizes. We experimented with batch sizes of 160 and 640. Figure 5(c) shows rewards as a function of the number of timesteps. We found that ACKTR with a larger batch size performed as well as with a smaller batch size.
However, with a larger batch size, A2C experienced significant degradation in terms of sample efficiency. This corresponds to the observation in Figure 5(d), where we plot the training curve in terms of the number of updates. We see that the benefit of a larger batch size increases substantially with ACKTR compared to A2C. This suggests there is potential for large speed-ups with ACKTR in a distributed setting, where one needs to use large mini-batches; this matches the observation in [2].

Figure 5: (a) and (b) compare optimizing the critic (value network) with a Gauss-Newton norm (ACKTR) against a Euclidean norm (first-order) on HalfCheetah and Breakout. (c) and (d) compare ACKTR and A2C with different batch sizes (160 and 640) on Breakout, in terms of timesteps and number of updates.

6 Conclusion

In this work we proposed a sample-efficient and computationally inexpensive trust-region optimization method for deep reinforcement learning. We used a recently proposed technique called K-FAC to approximate the natural gradient update for actor-critic methods, with trust region optimization for stability. To the best of our knowledge, we are the first to propose optimizing both the actor and the critic using natural gradient updates. We tested our method on Atari games as well as the MuJoCo environments, and we observed 2- to 3-fold improvements in sample efficiency on average compared with a first-order gradient method (A2C) and an iterative second-order method (TRPO).
Because of the scalability of our algorithm, we are also the first to train several non-trivial tasks in continuous control directly from raw pixel observation space. This suggests that extending Kronecker-factored natural gradient approximations to other algorithms in reinforcement learning is a promising research direction.

Acknowledgements

We would like to thank the OpenAI team for their generous support in providing baseline results and Atari environment preprocessing codes. We also want to thank John Schulman for helpful discussions.

References

[1] S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[2] J. Ba, R. Grosse, and J. Martens. Distributed second-order optimization using Kronecker-factored approximations. In ICLR, 2017.
[3] J. A. Bagnell and J. G. Schneider. Covariant policy search. In IJCAI, 2003.
[4] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[6] R. Grosse and J. Martens. A Kronecker-factored approximate Fisher matrix for convolutional layers. In ICML, 2016.
[7] S. Gu, T. Lillicrap, Z. Ghahramani, R. E. Turner, and S. Levine. Q-Prop: Sample-efficient policy gradient with an off-policy critic. In ICLR, 2017.
[8] N. Heess, D. TB, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, S. M. A. Eslami, M. Riedmiller, and D. Silver. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.
[9] M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In ICLR, 2017.
[10] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, 2002.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[12] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016.
[13] J. Martens. Deep learning via Hessian-free optimization. In ICML, 2010.
[14] J. Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
[15] J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, 2015.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[17] V. Mnih, A. Puigdomenech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
[18] J. Nocedal and S. Wright. Numerical Optimization. Springer, 2006.
[19] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180–1190, 2008.
[20] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 2002.
[21] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[22] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. In ICLR, 2016.
[23] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[24] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[25] R. S. Sutton, D. A. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, 2000.
[26] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[27] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. Sample efficient actor-critic with experience replay. In ICLR, 2016.
[28] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
Optimistic posterior sampling for reinforcement learning: worst-case regret bounds

Shipra Agrawal, Columbia University, sa3305@columbia.edu
Randy Jia, Columbia University, rqj2000@columbia.edu

Abstract

We present an algorithm based on posterior sampling (aka Thompson sampling) that achieves near-optimal worst-case regret bounds when the underlying Markov Decision Process (MDP) is communicating with a finite, though unknown, diameter. Our main result is a high-probability regret upper bound of $\tilde{O}(D\sqrt{SAT})$ for any communicating MDP with $S$ states, $A$ actions, and diameter $D$, when $T \ge S^5 A$. Here, regret compares the total reward achieved by the algorithm to the total expected reward of an optimal infinite-horizon undiscounted average-reward policy over time horizon $T$. This result improves over the best previously known upper bound of $\tilde{O}(DS\sqrt{AT})$ achieved by any algorithm in this setting, and matches the dependence on $S$ in the established lower bound of $\Omega(\sqrt{DSAT})$ for this problem. Our techniques involve proving some novel results about the anti-concentration of the Dirichlet distribution, which may be of independent interest.

1 Introduction

Reinforcement Learning (RL) refers to the problem of learning and planning in sequential decision making systems when the underlying system dynamics are unknown, and may need to be learned by trying out different options and observing their outcomes. A typical model for the sequential decision making problem is a Markov Decision Process (MDP), which proceeds in discrete time steps. At each time step, the system is in some state s, and the decision maker may take any available action a to obtain a (possibly stochastic) reward. The system then transitions to the next state according to a fixed state transition distribution. The reward and the next state depend on the current state s and the action a, but are independent of all the previous states and actions.
In the reinforcement learning problem, the underlying state transition distributions and/or reward distributions are unknown, and need to be learned using the observed rewards and state transitions, while aiming to maximize the cumulative reward. This requires the algorithm to manage the trade-off between exploration and exploitation, i.e., exploring different actions in different states in order to learn the model more accurately vs. taking actions that currently seem to be reward maximizing.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

The exploration-exploitation trade-off has been studied extensively in the context of stochastic multi-armed bandit (MAB) problems, which are essentially MDPs with a single state. The performance of MAB algorithms is typically measured through regret, which compares the total reward obtained by the algorithm to the total expected reward of an optimal action. Optimal regret bounds have been established for many variations of MAB (see Bubeck et al. [2012] for a survey), with a large majority of results obtained using the Upper Confidence Bound (UCB) algorithm, or more generally, the optimism in the face of uncertainty principle. Under this principle, the learning algorithm maintains tight over-estimates (or optimistic estimates) of the expected rewards of individual actions, and at any given step picks the action with the highest optimistic estimate. More recently, posterior sampling, aka Thompson Sampling [Thompson, 1933], has emerged as another popular algorithm design principle in MAB, owing its popularity to a simple and extendible algorithmic structure, an
In this approach, the algorithm maintains a Bayesian posterior distribution for the expected reward of every action; then at any given step, it generates an independent sample from each of these posteriors and takes the action with the highest sample value. We consider the reinforcement learning problem with finite states S and finite actions A in a similar regret based framework, where the total reward of the reinforcement learning algorithm is compared to the total expected reward achieved by a single benchmark policy over a time horizon T. In our setting, the benchmark policy is the infinite-horizon undiscounted average reward optimal policy for the underlying MDP, under the assumption that the MDP is communicating with (unknown) finite diameter D. The diameter D is an upper bound on the time it takes to move from any state s to any other state s′ using an appropriate policy, for each pair s, s′. A finite diameter is understood to be necessary for interesting bounds on the regret of any algorithm in this setting [Jaksch et al., 2010]. The UCRL2 algorithm of Jaksch et al. [2010], which is based on the optimism principle, achieved the best previously known upper bound of ˜O(DS √ AT) for this problem. A similar bound was achieved by Bartlett and Tewari [2009], though assuming the knowledge of the diameter D. Jaksch et al. [2010] also established a worst-case lower bound of Ω( √ DSAT) on the regret of any algorithm for this problem. Our main contribution is a posterior sampling based algorithm with a high probability worst-case regret upper bound of ˜O(D √ SAT + DS7/4A3/4T 1/4), which is ˜O(D √ SAT) when T ≥S5A. This improves the previously best known upper bound for this problem by a factor of √ S, and matches the dependence on S in the lower bound, for large enough T. Our algorithm uses an ‘optimistic version’ of the posterior sampling heuristic, while utilizing several ideas from the algorithm design structure in Jaksch et al. 
[2010], such as an epoch-based execution and the extended MDP construction. The algorithm proceeds in epochs: at the beginning of every epoch, it generates $\psi = \tilde{O}(S)$ sampled transition probability vectors from a posterior distribution for every state and action, and solves an extended MDP with $S$ states and $\psi A$ actions formed using these samples. The optimal policy computed for this extended MDP is used throughout the epoch.

The Posterior Sampling for Reinforcement Learning (PSRL) approach has been used previously in Osband et al. [2013], Abbasi-Yadkori and Szepesvari [2014], Osband and Van Roy [2016], but in a Bayesian regret framework. Bayesian regret is defined as the expected regret over a known prior on the transition probability matrix. Osband and Van Roy [2016] demonstrate an $\tilde{O}(H\sqrt{SAT})$ bound on the expected Bayesian regret of PSRL in finite-horizon episodic Markov decision processes, when the episode length is $H$. In this paper, we consider the stronger notion of worst-case regret, aka minimax regret, which requires bounding the maximum regret for any instance of the problem.¹ Further, we consider a non-episodic communicating MDP setting, and produce a comparable bound of $\tilde{O}(D\sqrt{SAT})$ for large $T$, where $D$ is the unknown diameter of the communicating MDP.

In comparison to the single sample from the posterior used by PSRL, our algorithm is slightly inefficient, as it uses multiple ($\tilde{O}(S)$) samples. It is not entirely clear whether the extra samples are only an artifact of the analysis. In an empirical study of a multiple-sample version of posterior sampling for RL, Fonteneau et al. [2013] show that multiple samples can potentially improve the performance of posterior sampling in terms of the probability of taking the optimal decision. Our analysis utilizes some ideas from the Bayesian regret analysis, most importantly the technique of stochastic optimism from Osband et al. [2014] for deriving tighter deviation bounds.
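The posterior sampling principle described in the introduction is easiest to see in its bandit special case. The sketch below is our own illustration (the Beta-Bernoulli model, arm means, and horizon are made up, and are not part of the paper's MDP algorithm): maintain a Beta posterior per arm, sample one value from each posterior, and pull the arm with the highest sample.

```python
import numpy as np

rng = np.random.default_rng(3)

def thompson_bernoulli(true_means, horizon):
    """Posterior (Thompson) sampling for Bernoulli bandits: keep a
    Beta(wins+1, losses+1) posterior per arm, draw one sample from each,
    and pull the arm whose sample is largest."""
    k = len(true_means)
    wins = np.zeros(k)
    losses = np.zeros(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        samples = rng.beta(wins + 1, losses + 1)   # one draw per posterior
        a = int(np.argmax(samples))
        reward = rng.random() < true_means[a]      # Bernoulli reward
        wins[a] += reward
        losses[a] += 1 - reward
        pulls[a] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.5, 0.8], horizon=2000)
```

As the posteriors concentrate, play shifts almost entirely to the best arm, which is what drives the regret guarantees discussed above.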
However, bounding the worst-case regret requires several new technical ideas, in particular for proving 'optimism' of the gain of the sampled MDP. Further discussion is provided in Section 4. We should also compare our result with the very recent result of Azar et al. [2017], which provides an optimistic version of the value iteration algorithm with a minimax (i.e., worst-case) regret bound of Õ(√(HSAT)) when T ≥ H^3 S^3 A. (Worst-case regret is a strictly stronger notion of regret when the reward distribution is known and only the transition probability distribution is unknown, as we assume here for the most part. In the case of an unknown reward distribution, extending our worst-case regret bounds would require an assumption of bounded rewards, whereas the Bayesian regret bounds in the above-mentioned literature allow more general (known) priors on the reward distributions with possibly unbounded support. Bayesian regret bounds in those more general settings are incomparable to the worst-case regret bounds presented here.) However, the setting considered in Azar et al. [2017] is that of an episodic MDP, where the learning agent interacts with the system in episodes of fixed and known length H. The initial state of each episode can be arbitrary, but importantly, the sequence of these initial states is shared by the algorithm and any benchmark policy. In contrast, in the non-episodic setting considered in this paper, the state trajectory of the benchmark policy over T time steps can be completely different from the algorithm's trajectory. To the best of our understanding, the shared sequence of initial states and the fixed known length H of episodes seem to form crucial components of the analysis in Azar et al. [2017], making it difficult to extend their analysis to the non-episodic communicating MDP setting considered in this paper.
Among other related work, Burnetas and Katehakis [1997] and Tewari and Bartlett [2008] present optimistic linear programming approaches that achieve logarithmic regret bounds with problem-dependent constants. Strong PAC bounds have been provided in Kearns and Singh [1999], Brafman and Tennenholtz [2002], Kakade et al. [2003], Asmuth et al. [2009], and Dann and Brunskill [2015]. There, the aim is to bound the performance of the policy learned at the end of the learning horizon, and not the performance during learning as quantified here by regret. Notably, the BOSS algorithm proposed in Asmuth et al. [2009] is similar to the algorithm proposed here in the sense that the former also takes multiple samples from the posterior to form an extended (referred to as merged) MDP. Strehl and Littman [2005, 2008] provide an optimistic algorithm for bounding regret in a discounted reward setting, but the definition of regret is slightly different in that it measures the difference between the rewards of an optimal policy and the rewards of the learning algorithm along the trajectory taken by the learning algorithm.

2 Preliminaries and Problem Definition

2.1 Markov Decision Process (MDP)

We consider a Markov Decision Process M defined by the tuple {S, A, P, r, s_1}, where S is a finite state space of size S, A is a finite action space of size A, P : S × A → Δ_S is the transition model, r : S × A → [0, 1] is the reward function, and s_1 is the starting state. When an action a ∈ A is taken in a state s ∈ S, a reward r_{s,a} is generated and the system transitions to the next state s' ∈ S with probability P_{s,a}(s'), where Σ_{s'∈S} P_{s,a}(s') = 1. We consider 'communicating' MDPs with finite 'diameter' (see Bartlett and Tewari [2009] for an in-depth discussion). Below we define communicating MDPs, and recall some useful known results for such MDPs.

Definition 1 (Policy). A deterministic policy π : S → A is a mapping from the state space to the action space.

Definition 2 (Diameter D(M)).
The diameter D(M) of an MDP M is defined as the minimum time required to go from one state to another in the MDP using some deterministic policy:

D(M) = max_{s≠s', s,s'∈S} min_{π:S→A} T^π_{s→s'},

where T^π_{s→s'} is the expected number of steps it takes to reach state s' when starting from state s and using policy π.

Definition 3 (Communicating MDP). An MDP M is communicating if and only if it has a finite diameter. That is, for any two states s ≠ s', there exists a policy π such that the expected number of steps to reach s' from s, T^π_{s→s'}, is at most D, for some finite D ≥ 0.

Definition 4 (Gain of a policy). The gain of a policy π, from starting state s_1 = s, is defined as the infinite-horizon undiscounted average reward

λ^π(s) = E[ lim_{T→∞} (1/T) Σ_{t=1}^T r_{s_t, π(s_t)} | s_1 = s ],

where s_t is the state reached at time t.

Lemma 2.1 (Optimal gain for communicating MDPs). For a communicating MDP M with diameter D:
(a) (Puterman [2014], Theorem 8.1.2 and Theorem 8.3.2) The optimal (maximum) gain λ* is state independent and is achieved by a deterministic stationary policy π*, i.e., there exists a deterministic policy π* such that

λ* := max_{s'∈S} max_π λ^π(s') = λ^{π*}(s), ∀s ∈ S.

Here, π* is referred to as an optimal policy for MDP M.
(b) (Tewari and Bartlett [2008], Theorem 4) The optimal gain λ* satisfies the following equations:

λ* = min_{h∈R^S} max_{s,a} ( r_{s,a} + P_{s,a}^T h − h_s ) = max_a ( r_{s,a} + P_{s,a}^T h* − h*_s ), ∀s,  (1)

where h*, referred to as the bias vector of MDP M, satisfies max_s h*_s − min_s h*_s ≤ D.

Given the above definitions and results, we can now define the reinforcement learning problem studied in this paper.

2.2 The reinforcement learning problem

The reinforcement learning problem proceeds in rounds t = 1, . . . , T. The learning agent starts from a state s_1 at round t = 1.
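The optimality equations in Lemma 2.1(b) can be solved numerically, for instance by relative value iteration. The sketch below is our own illustration (not part of the paper's algorithm): it returns an approximation of the optimal gain λ* and a bias vector h normalized so that h[0] = 0, for an MDP given as arrays P of shape (S, A, S) and r of shape (S, A).

```python
import numpy as np

def optimal_gain(P, r, n_iters=2000, tol=1e-10):
    """Approximate the optimal gain lambda* and a bias vector h of a
    communicating MDP via relative value iteration (a standard solver for
    the average-reward optimality equations; naming is ours)."""
    S, A, _ = P.shape
    h = np.zeros(S)
    gain = 0.0
    for _ in range(n_iters):
        # Bellman backup: v(s) = max_a r_{s,a} + P_{s,a}^T h
        v = (r + P @ h).max(axis=1)   # shape (S,)
        gain = v[0] - h[0]            # gain estimate relative to state 0
        h_new = v - v[0]              # renormalize so h stays bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return gain, h
```

At a fixed point, gain and h satisfy λ* = max_a ( r_{s,a} + P_{s,a}^T h − h_s ) for every s, matching equation (1).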
In the beginning of every round t, the agent takes an action a_t ∈ A and observes the reward r_{s_t,a_t} as well as the next state s_{t+1} ∼ P_{s_t,a_t}, where r and P are the reward function and the transition model, respectively, of a communicating MDP M with diameter D. The learning agent knows the state space S, the action space A, and the rewards r_{s,a} for all s ∈ S, a ∈ A of the underlying MDP, but not the transition model P or the diameter D. (The assumption of known and deterministic rewards has been made here only for simplicity of exposition, since the unknown transition model is the main source of difficulty in this problem. Our algorithm and results can be extended to bounded stochastic rewards with unknown distributions using standard Thompson sampling for the multi-armed bandit, e.g., using the techniques in Agrawal and Goyal [2013b].) The agent can use the past observations to learn the underlying MDP model and decide future actions. The goal is to maximize the total reward Σ_{t=1}^T r_{s_t,a_t}, or equivalently, minimize the total regret over a time horizon T, defined as

R(T, M) := Tλ* − Σ_{t=1}^T r_{s_t,a_t},  (2)

where λ* is the optimal gain of MDP M. We present an algorithm for the learning agent with a near-optimal upper bound on the regret R(T, M) for any communicating MDP M with diameter D, thus bounding the worst-case regret over this class of MDPs.

3 Algorithm Description

Our algorithm combines the ideas of posterior sampling (aka Thompson sampling) with the extended MDP construction used in Jaksch et al. [2010]. Below we describe the main components of our algorithm.

Some notation: N^t_{s,a} denotes the total number of times the algorithm visited state s and played action a before time t, and N^t_{s,a}(i) denotes the number of time steps among these N^t_{s,a} steps where the next state was i, i.e., a transition from state s to i was observed. We index the states from 1 to S, so that Σ_{i=1}^S N^t_{s,a}(i) = N^t_{s,a} for any t.
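These visit counts can be maintained incrementally as transitions are observed. Below is a minimal bookkeeping sketch (class and method names are ours), including the epoch-ending doubling test described next; we adopt the convention that a pair unvisited before the current epoch triggers the test on its first visit.

```python
from collections import defaultdict

class VisitCounts:
    """Incrementally maintained counts N^t_{s,a} and N^t_{s,a}(i), together
    with the doubling condition that ends an epoch (naming is ours)."""
    def __init__(self):
        self.n = defaultdict(int)        # N_{s,a}
        self.n_next = defaultdict(int)   # N_{s,a}(i)
        self.n_epoch_start = {}          # N^{tau_k}_{s,a}, frozen per epoch

    def start_epoch(self):
        # Freeze the counts at the start of epoch k.
        self.n_epoch_start = dict(self.n)

    def record(self, s, a, s_next):
        """Record one observed transition; return True if the epoch ends,
        i.e., N^t_{s,a} >= 2 * N^{tau_k}_{s,a} for the visited pair (with
        an unvisited pair ending the epoch on its first visit)."""
        self.n[(s, a)] += 1
        self.n_next[(s, a, s_next)] += 1
        return self.n[(s, a)] >= 2 * self.n_epoch_start.get((s, a), 0)
```

Since each pair's count at most doubles within an epoch, this scheduling yields at most SA log(T) epochs over T rounds, as noted below.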
We use the symbol 1 to denote the vector of all 1s, and 1_i to denote the vector with 1 at the i-th coordinate and 0 elsewhere.

Doubling epochs: Our algorithm uses the epoch-based execution framework of Jaksch et al. [2010]. An epoch is a group of consecutive rounds. The rounds t = 1, . . . , T are broken into consecutive epochs as follows: the k-th epoch begins at the round τ_k immediately after the end of the (k−1)-th epoch, and ends at the first round τ such that N^τ_{s,a} ≥ 2N^{τ_k}_{s,a} for some state-action pair s, a. The algorithm computes a new policy π̃_k at the beginning of every epoch k, and uses that policy through all the rounds in that epoch. It is easy to observe that, irrespective of how the policy π̃_k is computed, the number of epochs in T rounds is bounded by SA log(T).

Posterior sampling: We use posterior sampling to compute the policy π̃_k at the beginning of every epoch. The Dirichlet distribution is a convenient choice for maintaining posteriors on the transition probability vectors P_{s,a} for every s ∈ S, a ∈ A, as it satisfies the following useful property: given a prior Dirichlet(α_1, . . . , α_S) on P_{s,a}, after observing a transition from state s to i (with underlying probability P_{s,a}(i)), the posterior distribution is Dirichlet(α_1, . . . , α_i + 1, . . . , α_S). By this property, for any s ∈ S, a ∈ A, starting from the prior Dirichlet(1) on P_{s,a}, the posterior at time t is Dirichlet({N^t_{s,a}(i) + 1}_{i=1,...,S}). Our algorithm uses a modified, optimistic version of this approach. At the beginning of every epoch k, for every s ∈ S, a ∈ A such that N_{s,a} ≥ η, it generates multiple samples for P_{s,a} from a 'boosted' posterior. Specifically, it generates ψ = O(S log(SA/ρ)) independent sample probability vectors Q^{1,k}_{s,a}, . . . , Q^{ψ,k}_{s,a} as Q^{j,k}_{s,a} ∼ Dirichlet(M^{τ_k}_{s,a}), where M^t_{s,a} denotes the vector [M^t_{s,a}(i)]_{i=1,...,S}, with

M^t_{s,a}(i) := (1/κ)(N^t_{s,a}(i) + ω), for i = 1, . . . , S.
(3)

Here, κ = O(log(T/ρ)), ω = O(log(T/ρ)), η = √(TS/A) + 12ωS^2, and ρ ∈ (0, 1) is a parameter of the algorithm. In the regret analysis, we derive sufficiently large constants that can be used in the definitions of ψ, κ, ω to guarantee the bounds. However, no attempt has been made to optimize those constants, and it is likely that much smaller constants suffice. For every remaining s, a, i.e., those with small N_{s,a} (N_{s,a} < η), the algorithm uses the simple optimistic sampling described in Algorithm 1. This special sampling for s, a with small N_{s,a} has been introduced to handle a technical difficulty in analyzing the anti-concentration of Dirichlet posteriors when the parameters are very small. We suspect that with an improved analysis, this may not be required.

Extended MDP: The policy π̃_k to be used in epoch k is computed as the optimal policy of an extended MDP M̃_k defined by the sampled transition probability vectors, using the construction of Jaksch et al. [2010]. Given sampled vectors Q^{j,k}_{s,a}, j = 1, . . . , ψ, for every state-action pair s, a, we define the extended MDP M̃_k by extending the original action space as follows: for every state s and every action a ∈ A, create ψ actions, denoting by a_j the action corresponding to action a and sample j; then, in MDP M̃_k, on taking action a_j in state s, the reward is r_{s,a} but the transition to the next state follows the transition probability vector Q^{j,k}_{s,a}. Note that the algorithm uses the optimal policy π̃_k of the extended MDP M̃_k to take actions in the action space A, which is technically different from the action space of MDP M̃_k, where the policy π̃_k is defined. We slightly abuse notation and say that the algorithm takes action a_t = π̃_k(s_t) to mean that the algorithm takes action a_t = a ∈ A when π̃_k(s_t) = a_j for some j. Our algorithm is summarized as Algorithm 1.

4 Regret Bounds

We prove the following bound on the regret of Algorithm 1 for the reinforcement learning problem.

Theorem 1.
For any communicating MDP M with S states, A actions, and diameter D, with probability 1 − ρ, the regret of Algorithm 1 in time T ≥ C D A log^2(T/ρ) is bounded as

R(T, M) ≤ Õ( D√(SAT) + DS^{7/4}A^{3/4}T^{1/4} + DS^{5/2}A ),

where C is an absolute constant. For T ≥ S^5 A, this implies a regret bound of R(T, M) ≤ Õ(D√(SAT)). Here Õ hides logarithmic factors in S, A, T, ρ and absolute constants.

The rest of this section is devoted to proving the above theorem. Here, we provide a sketch of the proof and discuss some of the key lemmas; all missing details are provided in the supplementary material.

Algorithm 1: A posterior sampling based algorithm for the reinforcement learning problem
Inputs: state space S, action space A, starting state s_1, reward function r, time horizon T, parameters ρ ∈ (0, 1], ψ = O(S log(SA/ρ)), ω = O(log(T/ρ)), κ = O(log(T/ρ)), η = √(TS/A) + 12ωS^2.
Initialize: τ_1 := 1, M^{τ_1}_{s,a} = ω1.
for all epochs k = 1, 2, . . . do
  Sample transition probability vectors: for each s, a, generate ψ independent sample probability vectors Q^{j,k}_{s,a}, j = 1, . . . , ψ, as follows:
  • (Posterior sampling) For s, a such that N^{τ_k}_{s,a} ≥ η, use samples from the Dirichlet distribution: Q^{j,k}_{s,a} ∼ Dirichlet(M^{τ_k}_{s,a}).
  • (Simple optimistic sampling) For the remaining s, a, with N^{τ_k}_{s,a} < η, use the following simple optimistic sampling: let P^−_{s,a} = P̂_{s,a} − Δ, where P̂_{s,a}(i) = N^{τ_k}_{s,a}(i) / N^{τ_k}_{s,a} and Δ_i = min( √(3 P̂_{s,a}(i) log(4S) / N^{τ_k}_{s,a}) + 3 log(4S) / N^{τ_k}_{s,a}, P̂_{s,a}(i) ); let z be a random vector picked uniformly at random from {1_1, . . . , 1_S}; set Q^{j,k}_{s,a} = P^−_{s,a} + (1 − Σ_{i=1}^S P^−_{s,a}(i)) z.
  Compute policy π̃_k: the optimal gain policy for the extended MDP M̃_k constructed using the sample set {Q^{j,k}_{s,a}, j = 1, . . . , ψ, s ∈ S, a ∈ A}.
  Execute policy π̃_k:
  for all time steps t = τ_k, τ_k + 1, . . . , until break epoch do
    Play action a_t = π̃_k(s_t).
    Observe the transition to the next state s_{t+1}.
    Set N^{t+1}_{s,a}(i), M^{t+1}_{s,a}(i) for all a ∈ A, s, i ∈ S as defined (refer to Equation (3)).
    If N^{t+1}_{s_t,a_t} ≥ 2N^{τ_k}_{s_t,a_t}, then set τ_{k+1} = t + 1 and break epoch.
  end for
end for

4.1 Proof of Theorem 1

As defined in Section 2, the regret R(T, M) is given by R(T, M) = Tλ* − Σ_{t=1}^T r_{s_t,a_t}, where λ* is the optimal gain of MDP M, a_t is the action taken and s_t is the state reached by the algorithm at time t. Algorithm 1 proceeds in epochs k = 1, 2, . . . , K, where K ≤ SA log(T). To bound its regret in time T, we first analyze the regret in each epoch k, namely R_k := (τ_{k+1} − τ_k)λ* − Σ_{t=τ_k}^{τ_{k+1}−1} r_{s_t,a_t}, and bound R_k by roughly

D Σ_{s,a} (N^{τ_{k+1}}_{s,a} − N^{τ_k}_{s,a}) / √(N^{τ_k}_{s,a}),

where, by definition, for every s, a, (N^{τ_{k+1}}_{s,a} − N^{τ_k}_{s,a}) is the number of times this state-action pair is visited in epoch k. The proof of this bound has two main components:

(a) Optimism: The policy π̃_k used by the algorithm in epoch k is computed as an optimal gain policy of the extended MDP M̃_k. The first part of the proof is to show that with high probability, the extended MDP M̃_k is (i) a communicating MDP with diameter at most 2D, and (ii) optimistic, i.e., has optimal gain at least (close to) λ*. Part (i) is stated as Lemma 4.1, with a proof provided in the supplementary material. Now, let λ̃_k be the optimal gain of the extended MDP M̃_k. In Lemma 4.2, which forms one of the main novel technical components of our proof, we show that with probability 1 − ρ, λ̃_k ≥ λ* − Õ(D√(SA/T)). We first show that the above holds if for every s, a there exists a sample transition probability vector whose projection on a fixed unknown vector (h*) is optimistic. Then, in Lemma 4.3, we prove this optimism by deriving a fundamental new result on the anti-concentration of any fixed projection of a Dirichlet random vector (Proposition A.1 in the supplementary material). Substituting this upper bound on λ*, we have the following bound on R_k with probability 1 − ρ:

R_k ≤ Σ_{t=τ_k}^{τ_{k+1}−1} ( λ̃_k − r_{s_t,a_t} + Õ(D√(SA/T)) ).
(4)

(b) Deviation bounds: Optimism guarantees that with high probability, the optimal gain λ̃_k of MDP M̃_k is at least (close to) λ*. And, by definition of π̃_k, λ̃_k is the gain of the chosen policy π̃_k in MDP M̃_k. However, the algorithm executes this policy on the true MDP M. The only difference between the two is the transition model: on taking an action a_j := π̃_k(s) in state s in MDP M̃_k, the next state follows the sampled distribution

P̃_{s,a} := Q^{j,k}_{s,a},  (5)

whereas on taking the corresponding action a in MDP M, the next state follows the distribution P_{s,a}. The next step is to bound the difference between λ̃_k and the average reward obtained by the algorithm by bounding the deviation (P̃_{s,a} − P_{s,a}). This line of argument bears similarities to the analysis of UCRL2 in Jaksch et al. [2010], but with tighter deviation bounds that we are able to guarantee due to the use of posterior sampling instead of the deterministic optimistic bias used in UCRL2. Now, since a_t = π̃_k(s_t), using the relation between the gain λ̃_k, the bias vector h̃, and the reward vector of the optimal policy π̃_k of the communicating MDP M̃_k (refer to Lemma 2.1),

Σ_{t=τ_k}^{τ_{k+1}−1} ( λ̃_k − r_{s_t,a_t} ) = Σ_{t=τ_k}^{τ_{k+1}−1} ( P̃_{s_t,a_t} − 1_{s_t} )^T h̃ = Σ_{t=τ_k}^{τ_{k+1}−1} ( P̃_{s_t,a_t} − P_{s_t,a_t} + P_{s_t,a_t} − 1_{s_t} )^T h̃,  (6)

where with high probability the bias vector h̃ ∈ R^S of MDP M̃_k satisfies max_s h̃_s − min_s h̃_s ≤ D(M̃_k) ≤ 2D (refer to Lemma 4.1). Next, we bound the deviation (P̃_{s,a} − P_{s,a})^T h̃ for all s, a, to bound the first term above. Note that h̃ is random and can be arbitrarily correlated with P̃; therefore, we need to bound max_{h∈[0,2D]^S} (P̃_{s,a} − P_{s,a})^T h. (For the above term, w.l.o.g. we can assume h̃ ∈ [0, 2D]^S.) For s, a such that N^{τ_k}_{s,a} > η, P̃_{s,a} = Q^{j,k}_{s,a} is a sample from the Dirichlet posterior. In Lemma 4.4, we show that with high probability,

max_{h∈[0,2D]^S} ( P̃_{s,a} − P_{s,a} )^T h ≤ Õ( D/√(N^{τ_k}_{s,a}) + DS/N^{τ_k}_{s,a} ).
(7)

This bound is an improvement by a √S factor over the corresponding deviation bound obtainable for the optimistic estimates of P_{s,a} in UCRL2. The derivation of this bound utilizes and extends the stochastic optimism technique from Osband et al. [2014]. For s, a with N^{τ_k}_{s,a} ≤ η, P̃_{s,a} = Q^{j,k}_{s,a} is a sample from the simple optimistic sampling, where we can only show the following weaker bound; but since this sampling is used only while N^{τ_k}_{s,a} is small, the total contribution of this deviation will be small:

max_{h∈[0,2D]^S} ( P̃_{s,a} − P_{s,a} )^T h ≤ Õ( D√(S/N^{τ_k}_{s,a}) + DS/N^{τ_k}_{s,a} ).  (8)

Finally, to bound the second term in (6), we observe that E[ 1_{s_{t+1}}^T h̃ | π̃_k, h̃, s_t ] = P_{s_t,a_t}^T h̃ and use the Azuma-Hoeffding inequality to obtain, with probability 1 − ρ/(SA),

Σ_{t=τ_k}^{τ_{k+1}−1} ( P_{s_t,a_t} − 1_{s_t} )^T h̃ ≤ O( D√((τ_{k+1} − τ_k) log(SA/ρ)) ).  (9)

Combining the above observations (equations (4), (6), (7), (8), (9)), we obtain the following bound on R_k within logarithmic factors:

D(τ_{k+1} − τ_k)√(SA/T) + D Σ_{s,a} [ (N^{τ_{k+1}}_{s,a} − N^{τ_k}_{s,a}) / √(N^{τ_k}_{s,a}) ] ( 1(N^{τ_{k+1}}_{s,a} > η) + √S · 1(N^{τ_{k+1}}_{s,a} ≤ η) ) + D√(τ_{k+1} − τ_k).  (10)

We can finish the proof by observing that (by definition of an epoch) the number of visits of any state-action pair can at most double in an epoch, N^{τ_{k+1}}_{s,a} − N^{τ_k}_{s,a} ≤ N^{τ_k}_{s,a}; therefore, substituting this observation in (10), we can bound (within logarithmic factors) the total regret R(T) = Σ_{k=1}^K R_k as:

Σ_{k=1}^K ( D(τ_{k+1} − τ_k)√(SA/T) + D Σ_{s,a: N^{τ_k}_{s,a}>η} √(N^{τ_k}_{s,a}) + D Σ_{s,a: N^{τ_k}_{s,a}<η} √(S N^{τ_k}_{s,a}) + D√(τ_{k+1} − τ_k) )
  ≤ D√(SAT) + D log(K) ( Σ_{s,a} √(N^{τ_K}_{s,a}) ) + D log(K) ( SA√(Sη) ) + D√(KT),

where we used N^{τ_{k+1}}_{s,a} ≤ 2N^{τ_k}_{s,a} and Σ_k (τ_{k+1} − τ_k) = T. Now, we use that K ≤ SA log(T), and SA√(Sη) = O( S^{7/4}A^{3/4}T^{1/4} + S^{5/2}A log(T/ρ) ) (using η = √(TS/A) + 12ωS^2). Also, since Σ_{s,a} N^{τ_K}_{s,a} ≤ T, by a simple worst-case analysis, Σ_{s,a} √(N^{τ_K}_{s,a}) ≤ √(SAT), and we obtain

R(T, M) ≤ Õ( D√(SAT) + DS^{7/4}A^{3/4}T^{1/4} + DS^{5/2}A ).

4.2 Main lemmas

The following lemmas form the main technical components of our proof.
All the missing proofs are provided in the supplementary material.

Lemma 4.1. Assume T ≥ C D A log^2(T/ρ) for a large enough constant C. Then, with probability 1 − ρ, for every epoch k, the diameter of MDP M̃_k is bounded by 2D.

Lemma 4.2. With probability 1 − ρ, for every epoch k, the optimal gain λ̃_k of the extended MDP M̃_k satisfies

λ̃_k ≥ λ* − O( D log^2(T/ρ) √(SA/T) ),

where λ* is the optimal gain of MDP M and D is its diameter.

Proof. Let h* be the bias vector of an optimal policy π* of MDP M (refer to Lemma 2.1 in the preliminaries section). Since h* is a fixed (though unknown) vector with |h*_i − h*_j| ≤ D, we can apply Lemma 4.3 to obtain that with probability 1 − ρ, for all s, a, there exists a sample vector Q^{j,k}_{s,a} for some j ∈ {1, . . . , ψ} such that

(Q^{j,k}_{s,a})^T h* ≥ P_{s,a}^T h* − δ, where δ = O( D log^2(T/ρ) √(SA/T) ).

Now, consider the policy π for MDP M̃_k which, for any s, takes action a_j with a = π*(s) and j being a sample satisfying the above inequality. Let Q_π be the transition matrix of this policy, whose rows are formed by the vectors Q^{j,k}_{s,π*(s)}, and let P_{π*} be the transition matrix whose rows are formed by the vectors P_{s,π*(s)}. The above implies Q_π h* ≥ P_{π*} h* − δ1. We use this inequality along with the known relations between the gain and the bias of an optimal policy in communicating MDPs to obtain that the gain λ̃(π) of policy π in MDP M̃_k satisfies λ̃(π) ≥ λ* − δ (details provided in the supplementary material), which proves the lemma statement since, by optimality, λ̃_k ≥ λ̃(π).

Lemma 4.3 (Optimistic sampling). Fix any vector h ∈ R^S such that |h_i − h_{i'}| ≤ D for any i, i', and any epoch k. Then, for every s, a, with probability 1 − ρ/(SA) there exists at least one j such that

(Q^{j,k}_{s,a})^T h ≥ P_{s,a}^T h − O( D log^2(T/ρ) √(SA/T) ).

Lemma 4.4 (Deviation bound). With probability 1 − ρ, for all epochs k, all samples j, and all s, a:

max_{h∈[0,2D]^S} ( Q^{j,k}_{s,a} − P_{s,a} )^T h ≤ O( D√(log(SAT/ρ)/N^{τ_k}_{s,a}) + DS log(SAT/ρ)/N^{τ_k}_{s,a} ), for N^{τ_k}_{s,a} > η, and

max_{h∈[0,2D]^S} ( Q^{j,k}_{s,a} − P_{s,a} )^T h ≤ O( D√(S log(SAT/ρ)/N^{τ_k}_{s,a}) + DS log(S)/N^{τ_k}_{s,a} )
, for N^{τ_k}_{s,a} ≤ η.

5 Conclusions

We presented an algorithm inspired by posterior sampling that achieves near-optimal worst-case regret bounds for the reinforcement learning problem with communicating MDPs in a non-episodic, undiscounted average reward setting. Our algorithm may be viewed as a more efficient randomized version of the UCRL2 algorithm of Jaksch et al. [2010], with randomization via posterior sampling forming the key to the √S-factor improvement in the regret bound provided by our algorithm. Our analysis demonstrates that posterior sampling provides the right amount of uncertainty in the samples, so that an optimistic policy can be obtained without excessive over-estimation. While our work surmounts some important technical difficulties in obtaining worst-case regret bounds for posterior sampling based algorithms for communicating MDPs, the provided bound is tight in its dependence on S and A only for large T (specifically, for T ≥ S^5 A). Other related results on tight worst-case regret bounds have a similar requirement of large T (Azar et al. [2017] produce an Õ(√(HSAT)) bound when T ≥ H^3 S^3 A). Obtaining a cleaner worst-case regret bound that does not require such a condition remains an open question. Other important directions of future work include reducing the number of posterior samples required in every epoch from Õ(S) to constant or logarithmic in S, and extensions to contextual and continuous-state MDPs.

References

Yasin Abbasi-Yadkori and Csaba Szepesvari. Bayesian optimal control of smoothly parameterized systems: The lazy posterior sampling algorithm. arXiv preprint arXiv:1406.3926, 2014.

Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, volume 55. Courier Corporation, 1964.

Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), 2012.
Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013a.

Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In AISTATS, pages 99–107, 2013b.

John Asmuth, Lihong Li, Michael L. Littman, Ali Nouri, and David Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 19–26. AUAI Press, 2009.

Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. arXiv preprint arXiv:1703.05449, 2017.

Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009.

Ronen I. Brafman and Moshe Tennenholtz. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213–231, 2002.

Sébastien Bubeck and Che-Yu Liu. Prior-free and prior-dependent regret bounds for Thompson sampling. In Advances in Neural Information Processing Systems, pages 638–646, 2013.

Sébastien Bubeck, Nicolò Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012.

Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997.

Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257, 2011.

Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pages 2818–2826, 2015.
Raphaël Fonteneau, Nathan Korda, and Rémi Munos. An optimistic posterior sampling strategy for Bayesian reinforcement learning. In NIPS 2013 Workshop on Bayesian Optimization (BayesOpt 2013), 2013.

Charles Miller Grinstead and James Laurie Snell. Introduction to Probability. American Mathematical Society, 2012.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563–1600, 2010.

Sham Machandranath Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, London, England, 2003.

Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An optimal finite time analysis. In International Conference on Algorithmic Learning Theory (ALT), 2012.

Michael J. Kearns and Satinder P. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems, pages 996–1002, 1999.

Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 681–690. ACM, 2008.

Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? arXiv preprint arXiv:1607.00215, 2016.

Ian Osband, Dan Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pages 3003–3011, 2013.

Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635, 2014.

Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.

Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling. Mathematics of Operations Research, 39(4):1221–1243, 2014.

Daniel Russo and Benjamin Van Roy. An information-theoretic analysis of Thompson sampling.
Journal of Machine Learning Research (to appear), 2015.

Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. IEEE Transactions on Information Theory, 58(12):7086–7093, 2012.

I. G. Shevtsova. An improvement of convergence rate estimates in the Lyapunov theorem. Doklady Mathematics, 82(3):862–864, 2010.

Alexander L. Strehl and Michael L. Littman. A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd International Conference on Machine Learning, pages 856–863. ACM, 2005.

Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.

Ambuj Tewari and Peter L. Bartlett. Optimistic linear programming gives logarithmic regret for irreducible MDPs. In Advances in Neural Information Processing Systems, pages 1505–1512, 2008.

William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Efficient Second-Order Online Kernel Learning with Adaptive Embedding

Daniele Calandriello, Alessandro Lazaric, Michal Valko
SequeL team, INRIA Lille - Nord Europe, France
{daniele.calandriello, alessandro.lazaric, michal.valko}@inria.fr

Abstract

Online kernel learning (OKL) is a flexible framework for prediction problems, since the large approximation space provided by reproducing kernel Hilbert spaces often contains an accurate function for the problem. Nonetheless, optimizing over this space is computationally expensive. Not only do first-order methods accumulate O(√T) more loss than the optimal function, but the curse of kernelization results in an O(t) per-step complexity. Second-order methods get closer to the optimum much faster, suffering only O(log T) regret, but second-order updates are even more expensive, with their O(t^2) per-step cost. Existing approximate OKL methods reduce this complexity either by limiting the support vectors (SV) used by the predictor, or by avoiding the kernelization process altogether using embedding. Nonetheless, as long as the size of the approximation space or the number of SVs does not grow over time, an adversarial environment can always exploit the approximation process. In this paper, we propose PROS-N-KONS, a method that combines Nyström sketching, to project the input points into a small and accurate embedded space, with efficient second-order updates performed in this space. The embedded space is continuously updated to guarantee that the embedding remains accurate. We show that the per-step cost only grows with the effective dimension of the problem and not with T. Moreover, the second-order updates allow us to achieve logarithmic regret. We empirically compare our algorithm on recent large-scale benchmarks and show that it performs favorably.
1 Introduction

Online learning (OL) represents a family of efficient and scalable learning algorithms for building a predictive model incrementally from a sequence of T data points. A popular online learning approach [26] is to learn a linear predictor using gradient descent (GD) in the input space R^d. Since we can explicitly store and update the d weights of the linear predictor, the total runtime of this algorithm is O(Td), allowing it to scale to large problems. Unfortunately, it is sometimes the case that no good predictor can be constructed from only a linear combination of the input features. For this reason, online kernel learning (OKL) [10] first maps the points into a high-dimensional reproducing kernel Hilbert space (RKHS) using a non-linear feature map ϕ, and then runs GD on the projected points, which is often referred to as functional GD (FGD) [10]. With the kernel approach, each gradient step does not update a fixed set of weights, but instead introduces the feature-mapped point into the predictor as a support vector (SV). The resulting kernel-based predictor is flexible and data adaptive, but the number of parameters, and therefore the per-step space and time cost, now scales with O(t), the number of SVs included after t steps of GD. This curse of kernelization results in an O(T^2) total runtime, and prevents standard OKL methods from scaling to large problems. Given an RKHS H containing functions with very small prediction loss, the objective of an OL algorithm is to approach over time the performance of the best predictor in H and thus minimize the regret, that is, the difference in cumulative loss between the OL algorithm and the best predictor in hindsight.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
First-order GD achieves O(√T) regret for any arbitrary sequence of convex losses [10].
However, if we know that the losses are strongly convex, setting a more aggressive step-size in first-order GD achieves a smaller O(log T) regret [25]. Unfortunately, most common losses, such as the squared loss, are not strongly convex when evaluated at a single point x_t. Nonetheless, they possess a certain directional curvature [8] that can be exploited by second-order GD methods, such as the kernelized online Newton step (KONS) [2] and kernel recursive least squares (KRLS) [24], to achieve the O(log T) regret without strong convexity along all directions. The drawback of second-order methods is that they have to store and invert the t × t covariance matrix between all SVs included in the predictor. This requires O(t^2) space and time per step, dwarfing the O(t) cost of first-order methods and resulting in an even more infeasible O(T^3) runtime.

Contributions: In this paper, we introduce PROS-N-KONS, a new OKL method that (1) achieves logarithmic regret for losses with directional curvature using second-order updates, and (2) avoids the curse of kernelization, taking only a fixed per-step time and space cost. To achieve this, we start from KONS, a low-regret exact second-order OKL method proposed in [2], but replace the exact feature map ϕ with an approximation ϕ̃ constructed using a Nyström dictionary approximation. For a dictionary of size j, this non-linearly embeds the points in R^j, where we can efficiently perform exact second-order updates in constant O(j^2) per-step time, and achieve the desired O(log T) regret. Combined with online dictionary learning (KORS [2]) and an adaptive restart strategy, we show that we never get stuck performing GD in an embedded space that is too distant from the true H, but at the same time the size of the embedding j never grows larger than the effective dimension of the problem. While previous methods [13, 11] used fixed embeddings, we adaptively construct a small dictionary that scales only with the effective dimension of the data.
We then construct an accurate approximation of the covariance matrix, using carefully designed projections to avoid the variance due to dictionary changes.

Related work Although first-order OKL methods cannot achieve logarithmic regret, many approximation methods have been proposed to make them scale to large datasets. Approximate methods usually take one of two approaches: either they perform approximate gradient updates in the true RKHS (budgeted perceptron [4], projectron [15], forgetron [6]), preventing SVs from entering the predictor, or exact gradient updates in an approximate RKHS (Nyström [13], random-feature expansion [11]), where the points are embedded in a finite-dimensional space and the curse of kernelization does not apply. Overall, the goal is to never exceed a budget of SVs in order to maintain a fixed per-step update cost. Among budgeted methods, weight degradation [17] can be performed in many different ways, such as removal [6] or more expensive projection [15] and merging. Nonetheless, as long as the size of the budget is fixed, the adversary can exploit this to increase the regret of the algorithm, and oblivious inclusion strategies such as uniform sampling [9] fail. Another approach is to replace the exact feature map ϕ with an approximate feature map ϕ̃, which allows us to explicitly represent the mapped points and run linear OL on this embedding [13, 21]. When the embedding is oblivious to the data, the method is known as random-feature expansion, while a common data-dependent embedding is the Nyström method [19]. Again, if the embedding is fixed or limited in size, the adversary can exploit it. In addition, analyzing a change of embedding during gradient descent is an open problem, since the underlying RKHS changes with it. The only approximate second-order method known to achieve logarithmic regret is SKETCHED-KONS [2].
Both SKETCHED-KONS and PROS-N-KONS are based on the exact second-order OL method ONS [8] and its kernelized version KONS [2]. However, SKETCHED-KONS only applies budgeting techniques to the Hessian of the second-order updates and not to the predictor itself, resulting in an O(t) per-step evaluation cost. Moreover, the Hessian sketching is performed only through SV removal, resulting in high instability. In this paper, we solve these two issues with PROS-N-KONS by directly approximating KONS using a Nyström functional approximation. This results in updates that are closer to SV projection than removal, and that budget both the representation of the Hessian and the predictor.

2 Background

Notation We borrow the notation from [14] and [2]. We use upper-case bold letters A for matrices, lower-case bold letters a for vectors, and lower-case letters a for scalars. We denote by [A]_{ij} and [a]_i the (i, j) element of a matrix and the i-th element of a vector, respectively. We denote by I_T ∈ R^{T×T} the identity matrix of dimension T and by Diag(a) ∈ R^{T×T} the diagonal matrix with the vector a ∈ R^T on the diagonal. We use e_{T,i} ∈ R^T to denote the indicator vector of dimension T for element i. When the dimension of I and e_i is clear from the context, we omit the T, and we also indicate the identity operator by I. We use A ⪰ B to indicate that A − B is a positive semi-definite (PSD) matrix. Finally, the set of integers between 1 and T is denoted by [T] := {1, . . . , T}.

Kernels Given an input space X and a kernel function K(·, ·) : X × X → R, we denote by H the reproducing kernel Hilbert space (RKHS) induced by K, and by ϕ(·) : X → H the associated feature map. Using the feature map, the kernel function can be represented as K(x, x') = ⟨ϕ(x), ϕ(x')⟩_H, but with a slight abuse of notation we use the simplified notation K(x, x') = ϕ(x)^T ϕ(x') in the following. Any function f ∈ H can be represented by a (potentially infinite) set of weights w such that f_w(x) = ϕ(x)^T w.
Given a set of t points D_t = {x_s}_{s=1}^t, we denote by Φ_t ∈ R^{∞×t} the feature matrix with ϕ_s as its s-th column.

Online kernel learning (OKL) We consider online kernel learning, where an adversary chooses an arbitrary sequence of points {x_t}_{t=1}^T and convex differentiable losses {ℓ_t}_{t=1}^T. The learning protocol is the following. At each round t ∈ [T]: (1) the adversary reveals the new point x_t, (2) the learner chooses a function f_{w_t} and predicts f_{w_t}(x_t) = ϕ(x_t)^T w_t, (3) the adversary reveals the loss ℓ_t, and (4) the learner suffers ℓ_t(ϕ(x_t)^T w_t) and observes the associated gradient g_t. We are interested in bounding the cumulative regret between the learner and a fixed function w, defined as

R_T(w) = Σ_{t=1}^T ℓ_t(ϕ_t^T w_t) − ℓ_t(ϕ_t^T w).

Since H is potentially a very large space, we need to restrict the class of comparators w. As in [14], we consider all functions that guarantee bounded predictions, i.e., S = {w : ∀t ∈ [T], |ϕ_t^T w| ≤ C}. We make the following assumptions on the losses.

Assumption 1 (Scalar Lipschitz). The loss functions ℓ_t satisfy |ℓ'_t(z)| ≤ L whenever |z| ≤ C.

Assumption 2 (Curvature). There exists σ_t ≥ σ > 0 such that for all u, w ∈ S and for all t ∈ [T],

ℓ_t(ϕ_t^T w) := l_t(w) ≥ l_t(u) + ∇l_t(u)^T (w − u) + (σ_t / 2) (∇l_t(u)^T (w − u))^2.

This assumption is weaker than strong convexity, as it only requires the losses to be strongly convex in the direction of the gradient. It is satisfied by the squared loss, the squared hinge loss, and, in general, all exp-concave losses [8]. Under this weaker requirement, second-order learning methods [8, 2] obtain O(log T) regret at the cost of a higher computational complexity w.r.t. first-order methods.

Nyström approximation A common approach to alleviate the computational cost is to replace the high-dimensional feature map ϕ with a finite-dimensional approximate feature map ϕ̃. Let I = {x_i}_{i=1}^j be a dictionary of j points from the dataset, and let Φ_I be the associated feature matrix with ϕ(x_i) as columns.
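To make the protocol concrete, here is a minimal sketch (not the paper's algorithm) of the OKL interaction loop with a linear kernel, so that H = R^d, using a squared loss and a plain first-order GD learner. The synthetic data, step-size, and ridge comparator are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stream: the adversary reveals (x_t, loss_t) one round at a time.
T, d = 200, 3
X = rng.normal(size=(T, d))
y = X @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=T)

w = np.zeros(d)                     # learner's weights (linear kernel: H = R^d)
eta = 0.1
losses, comp_losses = [], []
# Fixed comparator: (near-)best function in hindsight, via ridge regression.
w_comp = np.linalg.solve(X.T @ X + 1e-6 * np.eye(d), X.T @ y)

for t in range(T):
    pred = X[t] @ w                  # (2) the learner predicts phi(x_t)^T w_t
    loss = (pred - y[t]) ** 2        # (3)-(4) the loss is revealed and suffered
    g = 2 * (pred - y[t]) * X[t]     # the observed gradient g_t
    losses.append(loss)
    comp_losses.append((X[t] @ w_comp - y[t]) ** 2)
    w -= eta / np.sqrt(t + 1) * g    # first-order GD step (O(sqrt(T)) regret)

regret = sum(losses) - sum(comp_losses)
print(f"cumulative regret vs fixed comparator: {regret:.3f}")
```

The regret is the cumulative-loss gap against the single best fixed predictor, which is exactly the quantity R_T(w) defined above.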
We define the embedding ϕ̃(x) := Σ^{-1} U^T Φ_I^T ϕ(x) ∈ R^j, where Φ_I = V Σ U^T is the singular value decomposition of the feature matrix. While in general Φ_I is infinite-dimensional and cannot be directly decomposed, we exploit the fact that U Σ V^T V Σ U^T = Φ_I^T Φ_I = K_I = U Λ U^T and that K_I is a (finite-dimensional) PSD matrix. Therefore it is sufficient to compute the eigenvectors U and eigenvalues Λ of K_I and take the square root Λ^{1/2} = Σ. Note that with this definition we are effectively replacing the kernel K and the RKHS H with approximations K_I and H_I, such that

K_I(x, x') = ϕ̃(x)^T ϕ̃(x') = ϕ(x)^T Φ_I U Σ^{-1} Σ^{-1} U^T Φ_I^T ϕ(x') = ϕ(x)^T P_I ϕ(x'),

where P_I = Φ_I (Φ_I^T Φ_I)^{-1} Φ_I^T is the projection matrix on the column span of Φ_I. Since ϕ̃ returns vectors in R^j, this transformation effectively reduces the computational complexity of kernel operations from t down to the size j of the dictionary. The accuracy of ϕ̃ is directly related to the accuracy of the projection P_I in approximating the projection P_t = Φ_t (Φ_t^T Φ_t)^{-1} Φ_t^T, so that for all s, s' ∈ [t], ϕ̃(x_s)^T ϕ̃(x_{s'}) is close to ϕ(x_s)^T P_t ϕ(x_{s'}) = ϕ(x_s)^T ϕ(x_{s'}).

Ridge leverage scores All that is left is to find an efficient algorithm to choose a good dictionary I that minimizes the error P_I − P_t. Among dictionary-selection methods, we focus on those that sample points proportionally to their ridge leverage scores (RLSs) [1], because they provide strong reconstruction guarantees. We now define the RLS and the associated effective dimension.

Definition 1. Given a kernel function K, a set of points D_t = {x_s}_{s=1}^t and a parameter γ > 0, the γ-ridge leverage score (RLS) of point i is defined as

τ_{t,i} = e_{t,i}^T K_t (K_t + γ I_t)^{-1} e_{t,i} = ϕ_i^T (Φ_t Φ_t^T + γ I)^{-1} ϕ_i,   (1)

and the effective dimension of D_t as the sum of the RLSs over all points of D_t,

d^t_eff(γ) = Σ_{i=1}^t τ_{t,i} = Tr(K_t (K_t + γ I_t)^{-1}).   (2)

The RLS of a point measures how orthogonal ϕ_i is w.r.t. the other points in Φ_t, and therefore how important it is to include it in I to obtain an accurate projection P_I.
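The Nyström embedding above only requires kernel evaluations, never the infinite-dimensional ϕ itself. The sketch below (with an illustrative Gaussian kernel and an arbitrary dictionary choice of ours) computes ϕ̃ via the eigendecomposition of K_I and checks that ϕ̃(x)^T ϕ̃(x') = ϕ(x)^T P_I ϕ(x'), which is exact for points inside the dictionary.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(A, B, sigma=1.0):
    # K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

X = rng.normal(size=(50, 4))          # dataset D_t
I = X[:8]                             # dictionary of j = 8 points (illustrative)

# phi_tilde(x) = Sigma^{-1} U^T Phi_I^T phi(x), computed via the
# eigendecomposition K_I = U Lambda U^T with Sigma = Lambda^{1/2}.
K_I = gauss_kernel(I, I)
lam, U = np.linalg.eigh(K_I)
lam = np.maximum(lam, 1e-12)          # guard against tiny negative eigenvalues
Sigma_inv = np.diag(1.0 / np.sqrt(lam))

def phi_tilde(x):
    k = gauss_kernel(I, x[None, :])[:, 0]   # Phi_I^T phi(x) via kernel calls
    return Sigma_inv @ U.T @ k

# Sanity check: for dictionary points, P_I phi(x) = phi(x), so the
# approximate inner product matches the exact kernel value.
x0, x1 = I[0], I[3]
approx = phi_tilde(x0) @ phi_tilde(x1)
exact = gauss_kernel(x0[None], x1[None])[0, 0]
print(abs(approx - exact))
```

For points outside the dictionary the two quantities differ by exactly the projection error discussed above, which is what the ridge-leverage-score sampling below controls.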
The effective dimension captures the capacity of the RKHS H over the support vectors in D_t. Let {λ_i}_i be the eigenvalues of K_t; since d^t_eff(γ) = Σ_{i=1}^t λ_i/(λ_i + γ), the effective dimension can be seen as the soft rank of K_t, where only eigenvalues above γ are counted. To estimate the RLSs and construct an accurate I, we leverage KORS [2] (see Alg. 1 in App. A), which extends the online row sampling of Cohen et al. [5] to kernels. Starting from an empty dictionary, at each round KORS receives a new point x_t, temporarily adds it to the current dictionary I_t, and estimates its associated RLS τ̃_t. Then it draws a Bernoulli random variable with probability proportional to τ̃_t. If the outcome is one, the point is deemed relevant and added to the dictionary; otherwise it is discarded and never added again. Note that since points are only evaluated once, and never dropped, the size of the dictionary grows over time and the RKHS H_{I_t} is included in the RKHS H_{I_{t+1}}, guaranteeing stability in the evolution of the RKHS, unlike alternative methods (e.g., [3]) that construct smaller but frequently changing dictionaries. We restate the quality of the learned dictionaries and the complexity of the algorithm, which we use as a building block.

Proposition 1 ([2, Thm. 2]). Given parameters 0 < ε ≤ 1, γ > 0, and 0 < δ < 1, if β ≥ 3 log(T/δ)/ε^2, then the dictionary learned by KORS is such that w.p. 1 − δ: (1) for all rounds t ∈ [T], we have 0 ⪯ Φ_t^T (P_t − P_{I_t}) Φ_t ⪯ (ε/(1 − ε)) γ I, and (2) the maximum size J of the dictionary is bounded by ((1 + ε)/(1 − ε)) 3β d^T_eff(γ) log(2T/δ). The algorithm runs in O(d^T_eff(α)^2 log^4(T)) space and Õ(d^T_eff(α)^3) time per iteration.

3 The PROS-N-KONS algorithm

We first use a toy OKL example from [2] to illustrate the main challenges for FGD in achieving both computational efficiency and optimal regret guarantees. We then propose a different approach, which naturally leads to the definition of PROS-N-KONS.
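A batch sketch of the leverage-score machinery (not KORS itself, which estimates the scores online): we compute the exact γ-RLSs of Definition 1, their sum d^t_eff(γ), and a Bernoulli dictionary selection with an oversampling factor β, all of which are illustrative choices here.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_kernel(A, B, sigma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

X = rng.normal(size=(100, 5))
gamma = 0.1

# Exact gamma-ridge leverage scores (Eq. 1) and effective dimension (Eq. 2).
K = gauss_kernel(X, X)
M = K @ np.linalg.inv(K + gamma * np.eye(len(X)))
tau = np.diag(M)                     # tau_{t,i}, each in [0, 1]
d_eff = tau.sum()                    # = Tr(K (K + gamma I)^{-1})

# Bernoulli selection: keep point i with probability min(1, beta * tau_i).
beta = 3.0
dictionary = [i for i in range(len(X)) if rng.random() < min(1.0, beta * tau[i])]
print(f"d_eff = {d_eff:.2f}, dictionary size = {len(dictionary)}")
```

Points that are nearly spanned by the rest of the data get a small τ and are rarely kept, which is why the dictionary size concentrates around β·d_eff rather than t.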
Consider the case of binary classification with the squared loss, where the point presented by the adversary in the sequence is always the same point x_exp, but each round with an opposite {1, −1} label. Note that the difficulty in this problem arises from the adversarial nature of the labels and is not due to the dataset itself. The cumulative loss of the comparator w becomes (ϕ(x_exp)^T w − 1)^2 + (ϕ(x_exp)^T w + 1)^2 + . . . for T steps. Our goal is to achieve O(log T) regret w.r.t. the best solution in hindsight, which is easily achieved by always predicting 0. Intuitively, an algorithm will do well when the gradient-step magnitude shrinks as 1/t. Note that these losses are not strongly convex, so exact first-order FGD only achieves O(√T) regret and does not meet our goal. Exact second-order methods (e.g., KONS) achieve the O(log T) regret, but also store T copies of the SV and have an O(T^4) runtime. If we try to improve the runtime using approximate updates and a fixed budget of SVs, we lose the O(log T) regime, since skipping the insertion of an SV also slows down the reduction of the step-size, both for first-order and second-order methods. If instead we try to compensate for the scarcity of SV additions due to the budget with larger updates to the step-size, the adversary can exploit such an unstable algorithm; indeed, [2] show that avoiding this instability forces the algorithm to introduce SVs with a constant probability. Finally, note that this example can be easily generalized to defeat any algorithm that stores a fixed budget of SVs, by replacing the single x_exp with a set of repeating vectors that exceeds the budget. This also defeats oblivious embedding techniques, such as random-feature expansion with a fixed number of random features or a fixed dictionary, and simple strategies that update the SV dictionary by insertion and removal.
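The role of the shrinking step-size in this toy example can be simulated directly. The sketch below (illustrative: K(x_exp, x_exp) = 1 and hand-picked step-sizes of ours) contrasts a fixed-step first-order update, which oscillates and suffers linear regret, with a 1/t step, which stays logarithmically close to the always-predict-0 comparator.

```python
import numpy as np

T = 1000
k_xx = 1.0                    # K(x_exp, x_exp): single repeated point, scalar problem
labels = np.array([1 if t % 2 == 0 else -1 for t in range(T)])

# Best fixed predictor in hindsight: w = 0, with cumulative loss T.
loss_zero = np.sum((0.0 - labels) ** 2)

# Fixed-step first-order GD keeps bouncing between the alternating labels.
w, loss_gd = 0.0, 0.0
eta = 0.4
for y in labels:
    pred = w * k_xx
    loss_gd += (pred - y) ** 2
    w -= eta * 2 * (pred - y) * k_xx

# Step-size shrinking as 1/t (the behaviour second-order updates produce).
w, loss_dec = 0.0, 0.0
for t, y in enumerate(labels, start=1):
    pred = w * k_xx
    loss_dec += (pred - y) ** 2
    w -= (1.0 / (2 * t)) * 2 * (pred - y) * k_xx

print(loss_gd - loss_zero, loss_dec - loss_zero)
```

With the fixed step the iterate settles into an oscillation of constant amplitude, so each round adds a constant regret; with the 1/t step the oscillation amplitude decays and the per-round excess loss sums to O(log T).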
If we relax the fixed-budget requirement, selection algorithms such as KORS can find an appropriate budget size for the SV dictionary. Indeed, this single-sample problem is intrinsically simple: its effective dimension d^T_eff(α) ≃ 1 is small, and its induced RKHS H = span{ϕ(x_exp)} is one-dimensional. Therefore, following an adaptive embedding approach, we can reduce it to a one-dimensional parametric problem and solve it efficiently in this space using exact ONS updates. Alternatively, we can see this approach as constructing an approximate feature map ϕ̃ that after one step exactly coincides with the exact feature map ϕ, but allows us to run exact KONS updates efficiently, replacing K with K̃. Building on this intuition, we propose PROS-N-KONS, a new second-order FGD method that continuously searches for the best embedding space H_{I_t} and, at the same time, exploits the small embedding space H_{I_t} to efficiently perform exact second-order updates.

We start from an empty dictionary I_0 and a null predictor w_0 = 0. At each round, PROS-N-KONS (Algorithm 1) receives a new point x_t and invokes KORS to decide whether it should be included in the current dictionary or not. Let t_j with j ∈ [J] be the random step at which KORS introduces x_{t_j} into the dictionary. We analyze PROS-N-KONS as an epoch-based algorithm using these milestones t_j. Note that the length h_j = t_{j+1} − t_j of each epoch and the total number of epochs J are random, and are decided in a data-adaptive way by KORS based on the difficulty of the problem. During epoch j, we have a fixed dictionary I_j that induces a feature matrix Φ_{I_j} containing the samples ϕ_i ∈ I_j, and an embedding ϕ̃(x) := Σ_j^{-1} U_j^T Φ_j^T ϕ(x) : X → R^j based on the singular values Σ_j and singular vectors U_j of Φ_j, with its associated approximate kernel function K̃ and induced RKHS H_j. At each round t_j < t < t_{j+1}, we perform an exact KONS update using the approximate map ϕ̃. This can be computed explicitly, since ϕ̃_t is in R^j and can easily be stored in memory.
The update rules are

Ã_t = Ã_{t−1} + (σ_t / 2) g̃_t g̃_t^T,
ṽ_t = ω̃_{t−1} − Ã_{t−1}^{-1} g̃_{t−1},
ω̃_t = Π^{Ã_{t−1}}_{S_t}(ṽ_t) = ṽ_t − ( h(ϕ̃_t^T ṽ_t) / (ϕ̃_t^T Ã_{t−1}^{-1} ϕ̃_t) ) Ã_{t−1}^{-1} ϕ̃_t,

where the oblique projection Π^{Ã_{t−1}}_{S_t} is computed using the closed-form solution from [14] and h(z) = sign(z) max{|z| − C, 0}.

Input: feasibility parameter C, step-sizes η_t, regularizer α
1: Initialize j = 0, ω̃_0 = 0, g̃_0 = 0, P̃_0 = 0, Ã_0 = αI
2: Start a KORS instance with an empty dictionary I_0
3: for t = 1, . . . , T do
4:   Receive x_t and feed it to KORS; receive z_t (whether the point was added to the dictionary)
5:   if z_{t−1} = 1 then {Dictionary changed, reset.}
6:     j = j + 1
7:     Build K_j from I_j and decompose it as U_j Σ_j Σ_j^T U_j^T
8:     Set Ã_{t−1} = αI ∈ R^{j×j}
9:     Set ω̃_t = 0 ∈ R^j
10:  else {Execute a gradient-descent step.}
11:    Compute the map ϕ_t and the approximate map ϕ̃_t = Σ_j^{-1} U_j^T Φ_j^T ϕ_t ∈ R^j
12:    Compute ṽ_t = ω̃_{t−1} − Ã_{t−1}^{-1} g̃_{t−1}
13:    Compute ω̃_t = ṽ_t − ( h(ϕ̃_t^T ṽ_t) / (ϕ̃_t^T Ã_{t−1}^{-1} ϕ̃_t) ) Ã_{t−1}^{-1} ϕ̃_t, where h(z) = sign(z) max{|z| − C, 0}
14:  end if
15:  Predict ỹ_t = ϕ̃_t^T ω̃_t
16:  Observe g̃_t = ∇_{ω̃_t} ℓ_t(ϕ̃_t^T ω̃_t) = ℓ'_t(ỹ_t) ϕ̃_t
17:  Update Ã_t = Ã_{t−1} + (σ_t / 2) g̃_t g̃_t^T
18: end for

Figure 1: PROS-N-KONS

When t = t_j and a new epoch begins, we perform a reset step before taking the first gradient step in the new embedded space. We update the feature map ϕ̃, but we reset Ã_{t_j} and ω̃_{t_j} to zero. While this may seem a poor choice, since information learned over time is lost, it leaves the dictionary intact. As long as (a) the dictionary, and therefore the embedded space where we perform our GD, keeps improving, and (b) we do not needlessly reset too often, we can count on the fast second-order updates to quickly catch up with the best function in the current H_j. The motivating reason to reset the descent procedure when we switch subspace is to guarantee that our starting point in the descent cannot be influenced by the adversary, which allows us to bound the regret of the overall process (Sect. 4).
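A compact sketch of the per-round second-order update in the embedded space R^j. This is not the exact implementation: it keeps Ã_t^{-1} explicitly via a Sherman-Morrison rank-one update, uses an illustrative squared loss, and all names and constants are ours.

```python
import numpy as np

def h(z, C):
    # Soft threshold used by the oblique projection onto S = {w : |phi^T w| <= C}
    return np.sign(z) * max(abs(z) - C, 0.0)

def kons_step(A_inv, omega, g_prev, phi, C, sigma_t, grad_fn):
    """One second-order step in R^j (sketch of the update rules above).

    A_inv   : inverse of A_{t-1}, maintained via rank-one updates
    omega   : current iterate omega_{t-1}
    g_prev  : previous gradient g_{t-1}
    phi     : embedded point phi_tilde_t
    grad_fn : maps prediction -> scalar loss derivative l'_t(y)
    """
    v = omega - A_inv @ g_prev                                # gradient step
    denom = phi @ A_inv @ phi
    omega_new = v - (h(phi @ v, C) / denom) * (A_inv @ phi)   # oblique projection
    y_pred = phi @ omega_new                                  # prediction
    g = grad_fn(y_pred) * phi                                 # observed gradient
    # A_t = A_{t-1} + (sigma_t / 2) g g^T, inverted via Sherman-Morrison
    c = sigma_t / 2
    Ag = A_inv @ g
    A_inv_new = A_inv - np.outer(Ag, Ag) * (c / (1 + c * (g @ Ag)))
    return A_inv_new, omega_new, g, y_pred

# Tiny usage example: realizable squared loss l_t(y) = (y - y_t)^2 in R^2.
j, alpha, C, sigma = 2, 1.0, 1.0, 0.5
A_inv = np.eye(j) / alpha
omega, g = np.zeros(j), np.zeros(j)
rng = np.random.default_rng(3)
for _ in range(50):
    phi = rng.normal(size=j)
    target = phi @ np.array([0.3, -0.2])
    A_inv, omega, g, y_pred = kons_step(
        A_inv, omega, g, phi, C, sigma, lambda y: 2 * (y - target))
print(omega)
```

Note that the projection guarantees |ϕ̃_t^T ω̃_t| ≤ C by construction, which is what keeps the iterates inside the comparator class S.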
Computational complexity PROS-N-KONS's computational complexity is dominated by the inversion of Ã_t, required to compute the projection and the gradient update, and by the query to KORS, which internally also inverts a j × j matrix. Therefore, a naïve implementation requires O(j^3) per-step time and O(j^2) space to store Ã_t. Taking advantage of the fact that KORS only adds SVs to the dictionary and never removes them, and that, similarly, the matrix Ã_t is constructed using rank-one updates, a careful implementation reduces the per-step cost to O(j^2). Overall, the total runtime of PROS-N-KONS is then O(TJ^2), which, using the bound on J provided by Prop. 1 and neglecting logarithmic terms, reduces to Õ(T d^T_eff(γ)^2). Compared to other exact second-order FGD methods, such as KONS or KRLS, PROS-N-KONS dramatically improves the time and space complexity from polynomial to linear. Unlike other approximate second-order methods, PROS-N-KONS does not add a new SV at each step. This removes the T^2 term from the O(T^2 + T d^T_eff(γ)^3) time complexity of SKETCHED-KONS [2]. Moreover, when min_t τ_{t,t} is small, SKETCHED-KONS needs to compensate by adding SVs to the dictionary with a constant probability, resulting in a larger runtime, while PROS-N-KONS has no dependency on the value of the RLS. Even compared to first-order methods, which incur a larger regret, PROS-N-KONS performs favorably, improving on the O(T^2) runtime of exact first-order FGD. Compared to other approximate methods, the variant using rank-one updates matches the O(J^2) per-step cost of the more accurate first-order methods, such as the budgeted perceptron [4], projectron [15], and Nyström GD [13], while improving on their regret. PROS-N-KONS also closely matches faster but less accurate O(J) methods such as the forgetron [6] and budgeted GD [23].

4 Regret guarantees

In this section, we study the regret performance of PROS-N-KONS.

Theorem 1 (proof in App. B).
For any sequence of losses ℓ_t satisfying Asm. 2 with Lipschitz constant L, let σ = min_t σ_t. If η_t ≥ σ for all t, α ≤ √T, γ ≤ α, and predictions are bounded by C, then the regret of PROS-N-KONS over T steps is bounded w.p. 1 − δ as

R_T(w) ≤ J [ α‖w‖^2 + (4/σ) d^T_eff(α/(σL^2)) log(2σL^2 T/α) ] + (L^2/α) ( Tγε / (4(1 − ε)) + 1 ) + 2JC,   (3)

where J ≤ 3β d^T_eff(γ) log(2T) is the number of epochs. If γ = α/T, the previous bound reduces to

R_T(w) = O( α‖w‖^2 d^T_eff(α/T) log(T) + d^T_eff(α/T) d^T_eff(α) log^2(T) ).   (4)

Remark (bound) The bound in Eq. 3 is composed of three terms. At each epoch of PROS-N-KONS, an instance of KONS is run on the embedded feature space H_j obtained using the dictionary I_j constructed up to the previous epoch. As a result, we directly use the bound on the regret of KONS (Thm. 1 in [2]) for each of the J epochs, leading to the first term of the regret. Since a new epoch is started whenever a new SV is added to the dictionary, the number of epochs J is at most the size of the dictionary returned by KORS up to step T, which w.h.p. is Õ(d^T_eff(γ)), making the first term scale as Õ(d^T_eff(γ) d^T_eff(α)) overall. Nonetheless, the comparator used in the per-epoch regret of KONS is constrained to the RKHS H_j induced by the embedding used in epoch j. The second term accounts for the difference in performance between the best solutions in the RKHS of epoch j and in the original RKHS H. While this error is directly controlled by KORS through the RLS regularization γ and the parameter ε (hence the factor γε/(1 − ε) from Property (1) of Prop. 1), its impact on the regret is amplified by the length of each epoch, thus leading to an overall linear term that needs to be regularized. Finally, the last term summarizes the regret suffered every time a new epoch is started and the default prediction ŷ = 0 is returned. Since the values y_t and ŷ_t are constrained in S, this results in a regret of 2JC.
Remark (regret comparison) Tuning the RLS regularization as γ = α/T leads to the bound in Eq. 4. While the bound displays an explicit logarithmic dependency on T, this comes at the cost of increasing the effective dimension, which now depends on the regularization α/T. While in general this could compromise the overall regret, if the sequence of points ϕ_1, . . . , ϕ_T induces a kernel matrix with a rapidly decaying spectrum, the resulting regret is still competitive. For instance, if the eigenvalues of K_T decrease as λ_t = a t^{-q} with constants a > 0 and q > 1, then d^T_eff(α/T) ≤ a q T^{1/q}/(q − 1). This shows that for any q > 2 we obtain a regret of o(√T log^2 T), so PROS-N-KONS still improves over first-order methods. Furthermore, if the kernel has low rank or the eigenvalues decrease exponentially, the final regret is poly-logarithmic, thus preserving the full advantage of the second-order approach. Notice that this scenario is always verified when H = R^d, and is also verified when the adversary draws samples from a stationary distribution and, e.g., a Gaussian kernel is used [22] (see also [16, 18]). This result is particularly remarkable when compared to SKETCHED-KONS, whose regret scales as O((α‖w‖^2 + d^T_eff(α)) (log T)/η), where η is the fraction of samples that is forced into the dictionary (when η = 1, we recover the bound for KONS). Even when the effective dimension is small (e.g., exponentially decaying eigenvalues), SKETCHED-KONS requires setting η to T^{-p} for a constant p > 0 to get a subquadratic space complexity, at the cost of increasing the regret to O(T^p log T). On the other hand, PROS-N-KONS achieves a poly-logarithmic regret with linear time complexity up to poly-log factors (i.e., T d^T_eff(γ)^2), thus greatly improving both the learning and computational performance w.r.t. SKETCHED-KONS. Finally, notice that while γ = α/T is the best choice agnostic to the kernel, better bounds can be obtained by optimizing Eq. 3 for γ depending on d^T_eff(γ).
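The polynomial-decay claim is easy to sanity-check numerically. The sketch below (with arbitrary illustrative constants a, q, α of ours) computes d^T_eff(α/T) = Σ_i λ_i/(λ_i + α/T) for λ_i = a i^{-q} and compares it against the stated bound a q T^{1/q}/(q − 1).

```python
import numpy as np

# Numerical check: for lambda_i = a * i^(-q), the effective dimension at
# regularization alpha/T stays below a*q*T^(1/q)/(q-1) and grows sublinearly.
a, q, alpha = 2.0, 3.0, 1.0
results = []
for T in [100, 1000, 10000]:
    lam = a * np.arange(1, T + 1, dtype=float) ** (-q)
    gamma = alpha / T
    d_eff = np.sum(lam / (lam + gamma))
    bound = a * q * T ** (1.0 / q) / (q - 1)
    results.append((T, d_eff, bound))
    print(T, round(d_eff, 2), round(bound, 2))
```

Growing T by a factor of 100 here grows d_eff only by roughly 100^{1/q}, which is the sublinear behaviour that keeps the final regret o(√T) for q > 2.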
For instance, let γ = α/T^s; then the optimal value of s for a q-polynomially decaying spectrum is s = q/(1 + q), leading to a regret bound of Õ(T^{q/(1+q)}), which is always o(√T) for any q > 1. (In the polynomial-decay discussion above, we ignore the term d^T_eff(α), which is constant w.r.t. T for any constant α.)

Remark (comparison in the Euclidean case) In the special case H = R^d, we can make a comparison with existing approximate methods for OL. In particular, the closest algorithm is SKETCHED-ONS by Luo et al. [14]. Unlike PROS-N-KONS, and similarly to SKETCHED-KONS, they take the approach of directly approximating A_t in the exact H = R^d, using frequent directions [7] to construct a rank-k approximation of A_t for a fixed k. The resulting algorithm achieves a regret that is bounded by k log T + k Σ_{i=k+1}^T σ_i^2, where the sum Σ_{i=k+1}^T σ_i^2 is equal to the sum of the d − k smallest eigenvalues of the final (exact) matrix A_T. This quantity can vary from 0, when the data lies in a subspace of rank r ≤ k, to T(d − k)/d, when the samples lie orthogonally and in equal number along all d directions available in R^d. Computationally, the algorithm requires O(Tdk) time and O(dk) space. Conversely, PROS-N-KONS automatically adapts its time and space complexity to the effective dimension d^T_eff(α/T) of the problem, which is smaller than the rank for any α. As a consequence, it requires only Õ(Tr^2) time and Õ(r^2) space, achieving an O(r^2 log T) regret independently of the spectrum of the covariance matrix. All of these complexities are smaller than those of SKETCHED-ONS in the regime r < k, which is the only one where SKETCHED-ONS can guarantee a sublinear regret, and where the regrets of the two algorithms are close.
Overall, while SKETCHED-ONS implicitly relies on the assumption r < k, it continues to operate in a d-dimensional space and suffers a large regret if r > k; PROS-N-KONS instead adaptively converts the d-dimensional problem into a simpler one of the appropriate rank, fully reaping the computational and regret benefits. The bound in Thm. 1 can be refined in the specific case of the squared loss as follows.

Theorem 2. For any sequence of squared losses ℓ_t = (y_t − ŷ_t)^2, with L = 4C and σ = 1/(8C^2), if η_t ≥ σ for all t, α ≤ √T and γ ≤ α, the regret of PROS-N-KONS over T steps is bounded w.p. 1 − δ as

R_T(w) ≤ Σ_{j=1}^J [ (4/σ) d^j_eff(α/(σL^2)) log( (2σL^2/α) Tr(K_j) ) + ε' L*_j ] + J L (C + L/α) + ε' α ‖w‖^2_2,   (5)

where ε' = α (α − γε/(1 − ε))^{-1} − 1 and L*_j = min_{w∈H} Σ_{t=t_j}^{t_{j+1}−1} (ϕ_t^T w − y_t)^2 + α‖w‖^2_2 is the best regularized cumulative loss in H within epoch j. Let L*_T be the best regularized cumulative loss over all T steps; then L*_j ≤ L*_T. Furthermore, we have d^j_eff ≤ d^T_eff, and thus the regret in Eq. 5 can be (loosely) bounded as

R_T(w) = O( J d^T_eff(α) log(T) + J ε' L*_T + ε' α ‖w‖^2_2 ).

The major difference w.r.t. the general bound in Eq. 3 is that we directly relate the regret of PROS-N-KONS to the performance of the best predictor in H in hindsight, which replaces the linear term γT/α. As a result, we can set γ = α (for which ε' = ε/(1 − 2ε)) and avoid increasing the effective dimension of the problem. Furthermore, since L*_T is the regularized loss of the optimal batch solution, we expect it to be small whenever H is well designed for the prediction task at hand. For instance, if L*_T scales as O(log T) for a given regularization α (e.g., in the realizable case L*_T is just α‖w‖), then the regret of PROS-N-KONS is directly comparable with that of KONS, up to a multiplicative factor depending on the number of epochs J, and with a much smaller time and space complexity that adapts to the effective dimension of the problem (see Prop. 1).
5 Experiments

We empirically validate PROS-N-KONS on several regression and binary classification problems, showing that it is competitive with state-of-the-art methods. We focus on verifying (1) the advantage of second-order vs. first-order updates, (2) the effectiveness of data-adaptive embeddings w.r.t. oblivious ones, and (3) the effective dimension of real datasets. Note that our guarantees hold for more challenging (possibly adversarial) settings than what we test empirically.

Algorithms Besides PROS-N-KONS, we introduce two heuristic variants. CON-KONS follows the same update rules as PROS-N-KONS during the descent steps, but at reset steps it does not reset the solution; instead it computes w̃_{t−1} = Φ_{j−1} U_{j−1} Σ_{j−1}^{-1} ω̃_{t−1} starting from ω̃_{t−1} and sets ω̃_t = Σ_j^{-1} U_j^T Φ_j^T w̃_{t−1}. A similar update rule is used to map Ã_{t−1} into the new embedded space without resetting it. B-KONS is a budgeted version of PROS-N-KONS that stops updating the dictionary once a maximum budget J_max is reached, and then continues learning in the last space for the rest of the run. Finally, we also include the best BATCH solution in the final space H_J returned by KORS as a best-in-hindsight comparator. We also compare to two state-of-the-art embedding-based first-order methods from [13]. NOGD selects the first J points, uses them to construct an embedding, and then performs exact GD in the embedded space. FOGD uses random-feature expansion to construct an embedding, and then runs first-order GD in the embedded space. While oblivious embedding methods are cheaper than the data-adaptive Nyström method, they are usually less accurate. Finally, DUAL-SGD also performs a random-feature expansion embedding, but in the dual space. Given the number #SV of SVs stored in the predictor and the input dimension d of the dataset's samples, the time complexity of all first-order methods is O(T d #SV), while that of PROS-N-KONS and its variants is O(T (d + #SV) #SV). When #SV ∼ d (as in our case) the two complexities coincide. The space complexities are also close, with the O(#SV^2) of PROS-N-KONS not much larger than the first-order methods' O(#SV). We do not run SKETCHED-KONS because its T^2 runtime is prohibitive.

| Algorithm | parkinson (n = 5,875, d = 20): avg. squared loss | #SV | time | cpusmall (n = 8,192, d = 12): avg. squared loss | #SV | time |
|---|---|---|---|---|---|---|
| FOGD | 0.04909 ± 0.00020 | 30 | — | 0.02577 ± 0.00050 | 30 | — |
| NOGD | 0.04896 ± 0.00068 | 30 | — | 0.02559 ± 0.00024 | 30 | — |
| PROS-N-KONS | 0.05798 ± 0.00136 | 18 | 5.16 | 0.02494 ± 0.00141 | 20 | 7.28 |
| CON-KONS | 0.05696 ± 0.00129 | 18 | 5.21 | 0.02269 ± 0.00164 | 20 | 7.40 |
| B-KONS | 0.05795 ± 0.00172 | 18 | 5.35 | 0.02496 ± 0.00177 | 20 | 7.37 |
| BATCH | 0.04535 ± 0.00002 | — | — | 0.01090 ± 0.00082 | — | — |

| Algorithm | cadata (n = 20,640, d = 8): avg. squared loss | #SV | time | casp (n = 45,730, d = 9): avg. squared loss | #SV | time |
|---|---|---|---|---|---|---|
| FOGD | 0.04097 ± 0.00015 | 30 | — | 0.08021 ± 0.00031 | 30 | — |
| NOGD | 0.03983 ± 0.00018 | 30 | — | 0.07844 ± 0.00008 | 30 | — |
| PROS-N-KONS | 0.03095 ± 0.00110 | 20 | 18.59 | 0.06773 ± 0.00105 | 21 | 40.73 |
| CON-KONS | 0.02850 ± 0.00174 | 19 | 18.45 | 0.06832 ± 0.00315 | 20 | 40.91 |
| B-KONS | 0.03095 ± 0.00118 | 19 | 18.65 | 0.06775 ± 0.00067 | 21 | 41.13 |
| BATCH | 0.02202 ± 0.00002 | — | — | 0.06100 ± 0.00003 | — | — |

| Algorithm | slice (n = 53,500, d = 385): avg. squared loss | #SV | time | year (n = 463,715, d = 90): avg. squared loss | #SV | time |
|---|---|---|---|---|---|---|
| FOGD | 0.00726 ± 0.00019 | 30 | — | 0.01427 ± 0.00004 | 30 | — |
| NOGD | 0.02636 ± 0.00460 | 30 | — | 0.01427 ± 0.00004 | 30 | — |
| DUAL-SGD | — | — | — | 0.01440 ± 0.00000 | 100 | — |
| PROS-N-KONS | did not complete | — | — | 0.01450 ± 0.00014 | 149 | 884.82 |
| CON-KONS | did not complete | — | — | 0.01444 ± 0.00017 | 147 | 889.42 |
| B-KONS | 0.00913 ± 0.00045 | 100 | 60 | 0.01302 ± 0.00006 | 100 | 505.36 |
| BATCH | 0.00212 ± 0.00001 | — | — | 0.01147 ± 0.00001 | — | — |

Table 1: Regression datasets

Experimental setup We replicate the experimental setting in [13], with 9 datasets for regression and 3 datasets for binary classification. We use the same preprocessing as Lu et al.
[13]: each feature of the points x_t is rescaled to fit in [0, 1]; for regression, the target variable y_t is rescaled to [0, 1], while for binary classification the labels are {−1, 1}. We also do not tune the Gaussian kernel bandwidth, but take the value σ = 8 used by [13]. For all datasets, we set β = 1 and ε = 0.5 for all PROS-N-KONS variants, and J_max = 100 for B-KONS. For each algorithm and dataset, we report the average and standard deviation of the losses. The scores for the competitor baselines are reported as provided in the original papers [13, 12]. We only report scores for NOGD, FOGD, and DUAL-SGD, since they have been shown to outperform other baselines such as the budgeted perceptron [4], projectron [15], forgetron [6], and budgeted GD [23]. For the PROS-N-KONS variants we also report the runtime in seconds, but do not compare with the runtimes reported by [13, 12], as that would imply comparing different implementations. Note that since the complexities O(T d #SV) and O(T (d + #SV) #SV) are close, we do not expect large differences. All experiments are run on a single machine with 2 Xeon E5-2630 CPUs, for a total of 10 cores, and are averaged over 15 runs.

Effective dimension and runtime We use the size of the dictionary returned by KORS as a proxy for the effective dimension of the datasets. As expected, larger datasets and datasets with a larger input dimension have a larger effective dimension. Furthermore, d^T_eff(γ) increases (sublinearly) when we reduce γ from 1 to 0.01 on the ijcnn1 dataset.
More importantly, d^T_eff(γ) remains empirically small even for datasets with hundreds of thousands of samples, such as year, ijcnn1 and cod-rna. On the other hand, on the slice dataset the effective dimension is too large for PROS-N-KONS to complete, and we only provide results for B-KONS. Overall, the proposed algorithm can process hundreds of thousands of points in a matter of minutes, showing that it can practically scale to large datasets.

α = 1, γ = 1

| Algorithm | ijcnn1 (n = 141,691, d = 22): error rate | #SV | time | cod-rna (n = 271,617, d = 8): error rate | #SV | time |
|---|---|---|---|---|---|---|
| FOGD | 9.06 ± 0.05 | 400 | — | 10.30 ± 0.10 | 400 | — |
| NOGD | 9.55 ± 0.01 | 100 | — | 13.80 ± 2.10 | 100 | — |
| DUAL-SGD | 8.35 ± 0.20 | 100 | — | 4.83 ± 0.21 | 100 | — |
| PROS-N-KONS | 9.70 ± 0.01 | 100 | 211.91 | 13.95 ± 1.19 | 38 | 270.81 |
| CON-KONS | 9.64 ± 0.01 | 101 | 215.71 | 18.99 ± 9.47 | 38 | 271.85 |
| B-KONS | 9.70 ± 0.01 | 98 | 206.53 | 13.99 ± 1.16 | 38 | 274.94 |
| BATCH | 8.33 ± 0.03 | — | — | 3.781 ± 0.01 | — | — |

α = 0.01, γ = 0.01

| Algorithm | ijcnn1 (n = 141,691, d = 22): error rate | #SV | time | cod-rna (n = 271,617, d = 8): error rate | #SV | time |
|---|---|---|---|---|---|---|
| FOGD | 9.06 ± 0.05 | 400 | — | 10.30 ± 0.10 | 400 | — |
| NOGD | 9.55 ± 0.01 | 100 | — | 13.80 ± 2.10 | 100 | — |
| DUAL-SGD | 8.35 ± 0.20 | 100 | — | 4.83 ± 0.21 | 100 | — |
| PROS-N-KONS | 10.73 ± 0.12 | 436 | 1003.82 | 4.91 ± 0.04 | 111 | 459.28 |
| CON-KONS | 6.23 ± 0.18 | 432 | 987.33 | 5.81 ± 1.96 | 111 | 458.90 |
| B-KONS | 4.85 ± 0.08 | 100 | 147.22 | 4.57 ± 0.05 | 100 | 333.57 |
| BATCH | 5.61 ± 0.01 | — | — | 3.61 ± 0.01 | — | — |

Table 2: Binary classification datasets

Regression All algorithms are trained and evaluated using the squared loss. Notice that whenever the budget J_max is not exceeded, B-KONS and PROS-N-KONS are the same algorithm and obtain the same results. On the regression datasets (Tab. 1) we set α = 1 and γ = 1, which satisfies the requirements of Thm. 2. Note that we did not tune α and γ for optimal performance, as that would require multiple runs and violate the online setting.
On smaller datasets such as parkinson and cpusmall, where frequent restarts greatly interfere with the gradient descent and even a small non-adaptive embedding can capture the geometry of the data, PROS-N-KONS is outperformed by simpler first-order methods. As soon as T reaches the order of tens of thousands (cadata, casp), second-order updates and data adaptivity become relevant, and PROS-N-KONS outperforms its competitors, both in the number of SVs and in the average loss. In this intermediate regime, CON-KONS outperforms PROS-N-KONS and B-KONS since it is less affected by restarts. Finally, when the number of samples rises to hundreds of thousands, the intrinsic effective dimension of the dataset starts playing a larger role. On slice, where the effective dimension is too large for PROS-N-KONS to run, B-KONS still outperforms NOGD with a comparable budget of SVs, showing the advantage of second-order updates.

Binary classification. All algorithms are trained using the hinge loss and evaluated using the average online error rate. Results are reported in Tab. 2. While for regression an arbitrary value of γ = α = 1 is sufficient to obtain good results, it fails for binary classification. Decreasing the two parameters to 0.01 resulted in a 3-fold increase in the number of SVs included and in the runtime, but almost a 2-fold decrease in error rate, placing PROS-N-KONS and B-KONS on par with or ahead of the competitors without the need for any further parameter tuning.

6 Conclusions

We presented PROS-N-KONS, a novel algorithm for sketched second-order OKL that achieves O(d_eff^T log T) regret for losses with directional curvature. Our sketching is data-adaptive and, when the effective dimension of the dataset is constant, it achieves a constant per-step cost, unlike SKETCHED-KONS [2], which was previously proposed for the same setting.
We empirically showed that PROS-N-KONS is practical, performing on par with or better than state-of-the-art methods on standard benchmarks using small dictionaries on realistic data.

Acknowledgements The research presented was supported by the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council, the Inria and Universität Potsdam associated-team north-European project Allocate, and French National Research Agency projects ExTra-Learn (n.ANR-14-CE24-0010-01) and BoB (n.ANR-16-CE23-0003).

References
[1] Ahmed El Alaoui and Michael W. Mahoney. Fast randomized kernel methods with statistical guarantees. In Neural Information Processing Systems, 2015.
[2] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Second-order kernel online convex optimization with adaptive sketching. In International Conference on Machine Learning, 2017.
[3] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Distributed sequential sampling for kernel matrix approximation. In AISTATS, 2017.
[4] Giovanni Cavallanti, Nicolò Cesa-Bianchi, and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. Machine Learning, 69(2-3):143–167, 2007.
[5] Michael B. Cohen, Cameron Musco, and Jakub Pachocki. Online row sampling. In International Workshop on Approximation, Randomization, and Combinatorial Optimization (APPROX), 2016.
[6] Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. The forgetron: A kernel-based perceptron on a budget. SIAM Journal on Computing, 37(5):1342–1372, 2008.
[7] Mina Ghashami, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762–1792, 2016.
[8] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In Conference on Learning Theory. Springer, 2006.
[9] Wenwu He and James T. Kwok. Simple randomized algorithms for online learning with kernels.
Neural Networks, 60:17–24, 2014.
[10] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), 2004.
[11] Quoc Le, Tamás Sarlós, and Alex J. Smola. Fastfood - Approximating kernel expansions in log-linear time. In International Conference on Machine Learning, 2013.
[12] Trung Le, Tu Nguyen, Vu Nguyen, and Dinh Phung. Dual space gradient descent for online learning. In Neural Information Processing Systems, 2016.
[13] Jing Lu, Steven C. H. Hoi, Jialei Wang, Peilin Zhao, and Zhi-Yong Liu. Large scale online kernel learning. Journal of Machine Learning Research, 17(47):1–43, 2016.
[14] Haipeng Luo, Alekh Agarwal, Nicolò Cesa-Bianchi, and John Langford. Efficient second-order online learning via sketching. In Neural Information Processing Systems, 2016.
[15] Francesco Orabona, Joseph Keshet, and Barbara Caputo. The projectron: A bounded kernel-based perceptron. In International Conference on Machine Learning, 2008.
[16] Yi Sun, Jürgen Schmidhuber, and Faustino J. Gomez. On the size of the online kernel sparsification dictionary. In International Conference on Machine Learning, 2012.
[17] Zhuang Wang, Koby Crammer, and Slobodan Vucetic. Breaking the curse of kernelization: Budgeted stochastic gradient descent for large-scale SVM training. Journal of Machine Learning Research, 13(Oct):3103–3131, 2012.
[18] Andrew J. Wathen and Shengxin Zhu. On spectral distribution of kernel matrices related to radial basis functions. Numerical Algorithms, 70(4):709–726, 2015.
[19] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Neural Information Processing Systems, 2001.
[20] Yi Xu, Haiqin Yang, Lijun Zhang, and Tianbao Yang. Efficient non-oblivious randomized reduction for risk minimization with improved excess risk guarantee. In AAAI Conference on Artificial Intelligence, 2017.
[21] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou.
Nyström method vs random Fourier features: A theoretical and empirical comparison. In Neural Information Processing Systems, 2012.
[22] Y. Yang, M. Pilanci, and M. J. Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. Annals of Statistics, 2017.
[23] Peilin Zhao, Jialei Wang, Pengcheng Wu, Rong Jin, and Steven C. H. Hoi. Fast bounded online gradient descent algorithms for scalable kernel-based online learning. In International Conference on Machine Learning, 2012.
[24] Fedor Zhdanov and Yuri Kalnishkan. An identity for kernel ridge regression. In Algorithmic Learning Theory, 2010.
[25] Changbo Zhu and Huan Xu. Online gradient descent in function space. arXiv:1512.02394, 2015.
[26] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In International Conference on Machine Learning, 2003.
Solving Most Systems of Random Quadratic Equations

Gang Wang⋆,∗ Georgios B. Giannakis∗ Yousef Saad† Jie Chen⋆
⋆Key Lab of Intell. Contr. and Decision of Complex Syst., Beijing Inst. of Technology
∗Digital Tech. Center & Dept. of Electrical and Computer Eng., Univ. of Minnesota
†Department of Computer Science and Engineering, Univ. of Minnesota
{gangwang, georgios, saad}@umn.edu; chenjie@bit.edu.cn

Abstract

This paper deals with finding an n-dimensional solution x to a system of quadratic equations y_i = |⟨a_i, x⟩|², 1 ≤ i ≤ m, which in general is known to be NP-hard. We put forth a novel procedure that starts with a weighted maximal correlation initialization obtainable with a few power iterations, followed by successive refinements based on iteratively reweighted gradient-type iterations. The novel techniques distinguish themselves from prior works by the inclusion of a fresh (re)weighting regularization. For certain random measurement models, the proposed procedure returns the true solution x with high probability in time proportional to reading the data {(a_i; y_i)}_{1≤i≤m}, provided that the number m of equations is some constant c > 0 times the number n of unknowns, that is, m ≥ cn. Empirically, the upshots of this contribution are: i) perfect signal recovery in the high-dimensional regime given only an information-theoretic limit number of equations; and ii) (near-)optimal statistical accuracy in the presence of additive noise. Extensive numerical tests using both synthetic data and real images corroborate its improved signal recovery performance and computational efficiency relative to state-of-the-art approaches.
1 Introduction

One is often faced with solving quadratic equations of the form

    y_i = |⟨a_i, x⟩|²,  or equivalently,  ψ_i = |⟨a_i, x⟩|,  1 ≤ i ≤ m    (1)

where x ∈ R^n/C^n (hereafter, the symbol "A/B" denotes either A or B) is the wanted unknown n × 1 vector, while the given observations ψ_i and feature vectors a_i ∈ R^n/C^n are collectively stacked in the data vector ψ := [ψ_i]_{1≤i≤m} and the m × n sensing matrix A := [a_i]_{1≤i≤m}, respectively. Put differently, given information about the (squared) modulus of the inner products of the signal vector x with several known design vectors a_i, can one reconstruct exactly (up to a global phase factor) x, or alternatively, the missing phase of ⟨a_i, x⟩? In fact, much effort has been devoted to determining the number of such equations necessary and/or sufficient for the uniqueness of the solution x; see e.g., [1, 8]. It has been proved that m ≥ 2n − 1 (m ≥ 4n − 4) generic^1 (which includes the case of random vectors) real (complex) vectors a_i are sufficient for uniquely determining an n-dimensional real (complex) vector x [1, Theorem 2.8], [8], while in the real case m = 2n − 1 is shown also necessary [1]. In this sense, the number m = 2n − 1 of equations as in (1) can be regarded as the information-theoretic limit for such a quadratic system to be uniquely solvable.

^1 It is out of the scope of the present paper to explain the meaning of generic vectors, whereas interested readers are referred to [1].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In diverse physical sciences and engineering fields, it is impossible or very difficult to record phase measurements. The problem of recovering the signal or phase from magnitude measurements only, also commonly known as phase retrieval, emerges naturally [10, 11]. Relevant application domains include, e.g., X-ray crystallography, astronomy, microscopy, ptychography, and coherent diffraction imaging [21].
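The measurement model (1) under a real Gaussian design is straightforward to simulate. The sketch below (illustrative helper names, numpy assumed) generates a random instance and also implements the real-case distance to the solution set up to the global sign ambiguity:

```python
import numpy as np

def simulate_instance(n, m, seed=0):
    """Draw x ~ N(0, I_n), rows a_i ~ N(0, I_n), and amplitudes psi_i = |<a_i, x>| as in (1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    A = rng.standard_normal((m, n))   # m x n sensing matrix with rows a_i
    psi = np.abs(A @ x)               # amplitude data; intensities are y = psi**2
    return x, A, psi

def dist(z, x):
    """Real-case distance to the solution set {x, -x}: minimize over the global sign."""
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))
```

Both x and −x produce identical amplitude data, which is why `dist` takes the minimum over the two signs.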
In such setups, optical measurement and detection systems record solely the photon flux, which is proportional to the (squared) magnitude of the field, but not the phase. Problem (1) in its squared form, on the other hand, can be readily recast as an instance of nonconvex quadratically constrained quadratic programming, which subsumes as special cases several well-known combinatorial optimization problems involving Boolean variables, e.g., the NP-complete stone problem [2, Sec. 3.4.1]. A related task of this kind is that of estimating a mixture of linear regressions, where the latent membership indicators can be converted into the missing phases [29]. Although of simple form and practical relevance across different fields, solving systems of nonlinear equations is arguably the most difficult problem in all of numerical computation [19, Page 355].

Notation: Lower- (upper-) case boldface letters denote vectors (matrices), e.g., a ∈ R^n (A ∈ R^{m×n}). Calligraphic letters are reserved for sets. The floor operation ⌊c⌋ gives the largest integer no greater than the given real quantity c > 0, the cardinality |S| counts the number of elements in set S, and ∥x∥ denotes the Euclidean norm of x. Since for any phase φ ∈ R, the vectors x ∈ C^n and e^{jφ}x are indistinguishable given {ψ_i} in (1), let dist(z, x) := min_{φ∈[0,2π)} ∥z − x e^{jφ}∥ be the Euclidean distance of any estimate z ∈ C^n to the solution set {e^{jφ}x}_{0≤φ<2π} of (1); in particular, φ = 0/π in the real case.
1.1 Prior contributions

Following the least-squares (LS) criterion (which coincides with the maximum likelihood (ML) one assuming additive white Gaussian noise), the problem of solving quadratic equations can be naturally recast as an empirical loss minimization

    minimize_{z ∈ R^n/C^n}  L(z) := (1/m) Σ_{i=1}^m ℓ(z; ψ_i/y_i)    (2)

where one can choose to work with the amplitude-based loss ℓ(z; ψ_i) := (ψ_i − |⟨a_i, z⟩|)²/2 [28, 30], the intensity-based one ℓ(z; y_i) := (y_i − |⟨a_i, z⟩|²)²/2 [3], or its related Poisson likelihood ℓ(z; y_i) := y_i log(|⟨a_i, z⟩|²) − |⟨a_i, z⟩|² [7]. Either way, the objective functional L(z) is nonconvex; hence, it is generally NP-hard and computationally intractable to compute the ML or LS estimate.

Minimizing the squared-modulus-based LS loss in (2), several numerical polynomial-time algorithms have been devised via convex programming for certain choices of design vectors a_i [4, 25]. Such convex paradigms first rely on the matrix-lifting technique to express all squared-modulus terms as linear ones in a new rank-1 matrix variable, followed by solving a convex semi-definite program (SDP) after dropping the rank constraint. It has been established that perfect recovery and (near-)optimal statistical accuracy are achieved in noiseless and noisy settings, respectively, with an optimal-order number of measurements [4]. In terms of computational efficiency, however, such lifting-based convex approaches entail storing and solving for an n × n semi-definite matrix under m general SDP constraints, whose worst-case computational complexity scales as n^{4.5} log(1/ϵ) for m ≈ n [25], which is not scalable. Another recent line of convex relaxation [12], [13] reformulated the problem of phase retrieval as one of sparse signal recovery, and solved a linear program in the natural parameter vector domain.
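The two LS losses entering (2) are easy to write down explicitly; a small numpy sketch for the real-valued case (function names illustrative):

```python
import numpy as np

def amplitude_loss(z, A, psi):
    """Empirical amplitude-based LS loss: (1/m) * sum_i (psi_i - |a_i^T z|)^2 / 2."""
    return np.mean((psi - np.abs(A @ z)) ** 2) / 2.0

def intensity_loss(z, A, y):
    """Empirical intensity-based LS loss: (1/m) * sum_i (y_i - (a_i^T z)^2)^2 / 2."""
    return np.mean((y - (A @ z) ** 2) ** 2) / 2.0
```

Both losses vanish at z = ±x, reflecting the global sign (phase) ambiguity discussed above.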
Although exact signal recovery can be established assuming an accurate enough anchor vector, its empirical performance is in general not competitive with state-of-the-art phase retrieval approaches.

Recent proposals advocate suitably initialized iterative procedures for coping with certain nonconvex formulations directly; see e.g., the algorithms abbreviated as AltMinPhase, (R/P)WF, (M)TWF, (S)TAF [16, 3, 7, 26, 28, 27, 30, 22, 6, 24], as well as a prox-linear algorithm [9]. These nonconvex approaches operate directly upon vector optimization variables, thus leading to significant computational advantages over their convex counterparts. With random features, they can be interpreted as performing stochastic optimization over acquired examples {(a_i; ψ_i/y_i)}_{1≤i≤m} to approximately minimize the population risk functional L(z) := E_{(a_i, ψ_i/y_i)}[ℓ(z; ψ_i/y_i)]. It is well documented that minimizing nonconvex functionals is generally intractable due to the existence of multiple critical points [17]. Assuming Gaussian sensing vectors, however, such nonconvex paradigms can provably locate the global optimum, and several of them also achieve optimal (statistical) guarantees. Specifically, starting with a judiciously designed initial guess, successive improvement is effected by means of a sequence of (truncated) (generalized) gradient-type iterations

    z^{t+1} := z^t − (µ_t/m) Σ_{i∈T^{t+1}} ∇ℓ(z^t; ψ_i/y_i),  t = 0, 1, …    (3)

where z^t denotes the estimate returned by the algorithm at the t-th iteration, µ_t > 0 is a learning rate that can be pre-selected or found via e.g., the backtracking line search strategy, and ∇ℓ(z^t; ψ_i/y_i) represents the (generalized) gradient of the modulus- or squared-modulus-based LS loss evaluated at z^t. Here, T^{t+1} denotes some time-varying index set signifying the per-iteration gradient truncation.
Although they achieve optimal statistical guarantees in both noiseless and noisy settings, state-of-the-art (convex and nonconvex) approaches studied under Gaussian designs empirically require, for stable recovery, a number of equations (several) times larger than the information-theoretic limit [7, 3, 30]. As a matter of fact, when there are sufficiently many measurements (on the order of n up to some polylog factors), the squared-modulus-based LS functional admits benign geometric structure in the sense that [23]: i) all local minimizers are also global; and ii) there always exists a negative directional curvature at every saddle point. In a nutshell, the grand challenge of tackling systems of random quadratic equations remains to develop algorithms capable of achieving perfect recovery and statistical accuracy when the number of measurements approaches the information limit.

1.2 This work

Building upon but going beyond the scope of the aforementioned nonconvex paradigms, the present paper puts forward a novel iterative linear-time scheme (namely, time proportional to that required by the processor to scan all the data {(a_i; ψ_i)}_{1≤i≤m}) that we term reweighted amplitude flow, and henceforth abbreviate as RAF. Our methodology is capable of solving noiseless random quadratic equations exactly, and of yielding an estimate of (near-)optimal statistical accuracy from noisy modulus observations. Exactness and accuracy hold with high probability and without extra assumptions on the unknown signal vector x, provided that the ratio m/n of the number of equations to that of the unknowns exceeds a certain constant. Empirically, our approach is shown able to ensure exact recovery of high-dimensional unstructured signals given a minimal number of equations, where m/n in the real case can be as small as 2.
The new twist here is to leverage judiciously designed yet conceptually simple (re)weighting regularization techniques to enhance existing initializations and gradient refinements. An informal depiction of our RAF methodology is given in two stages as follows, with rigorous details deferred to Section 3:

S1) Weighted maximal correlation initialization: Obtain an initializer z⁰ maximally correlated with a carefully selected subset S ⊊ M := {1, 2, …, m} of feature vectors a_i, whose contributions toward constructing z⁰ are judiciously weighted by suitable parameters {w_i⁰ > 0}_{i∈S}.

S2) Iteratively reweighted "gradient-like" iterations: Loop over 0 ≤ t ≤ T:

    z^{t+1} = z^t − (µ_t/m) Σ_{i=1}^m w_i^t ∇ℓ(z^t; ψ_i)    (4)

for some time-varying weighting parameters {w_i^t ≥ 0}, each possibly relying on the current iterate z^t and the datum (a_i; ψ_i).

Two attributes of the novel approach are worth highlighting. First, albeit a variant of the spectral initialization devised in [28], the initialization here [cf. S1)] is distinct in the sense that a different importance is attached to each selected datum (a_i; ψ_i). Likewise, the gradient flow [cf. S2)] judiciously weighs the search direction suggested by each datum (a_i; ψ_i). In this manner, more robust initializations and more stable overall search directions can be constructed, even based solely on a rather limited number of data samples. Moreover, with particular choices of the weights w_i^t (e.g., taking 0/1 values), the developed methodology subsumes as special cases the recently proposed algorithms RWF [30] and TAF [28].

2 Algorithm: Reweighted Amplitude Flow

This section explains the intuition and basic principles behind each stage of the advocated RAF algorithm in detail. For analytical concreteness, we focus on the real Gaussian model with x ∈ R^n and independent sensing vectors a_i ∈ R^n ∼ N(0, I) for all 1 ≤ i ≤ m.
Nonetheless, the presented approach can be directly applied when the complex Gaussian and the coded diffraction pattern (CDP) models are considered.

2.1 Weighted maximal correlation initialization

A key enabler of the success of general nonconvex iterative heuristics in finding the global optimum is to seed them with an excellent starting point [14]. Indeed, several smart initialization strategies have been advocated for iterative phase retrieval algorithms; see e.g., the spectral initialization [16], [3] as well as its truncated variants [7], [28], [9], [30], [15]. One promising approach is the one pursued in [28], which is also shown robust to outliers in [9]. To hopefully approach the information-theoretic limit, however, its performance may need further enhancement. Intuitively, it is increasingly challenging to improve the initialization (over the state-of-the-art) as the number of acquired data samples approaches the information-theoretic limit. In this context, we develop a more flexible initialization scheme based on the correlation property (as opposed to the orthogonality in [28]), whose added benefit is the inclusion of a flexible weighting regularization technique to better balance the useful information exploited in the selected data. Similar to related approaches of the same kind, our strategy entails estimating both the norm ∥x∥ and the direction x/∥x∥ of x. Leveraging the strong law of large numbers and the rotational invariance of the Gaussian a_i vectors (the latter suffices to assume x = ∥x∥e_1, with e_1 being the first canonical vector in R^n), it is clear that

    (1/m) Σ_{i=1}^m ψ_i² = (1/m) Σ_{i=1}^m ⟨a_i, ∥x∥e_1⟩² = ((1/m) Σ_{i=1}^m a_{i,1}²) ∥x∥² ≈ ∥x∥²    (5)

whereby ∥x∥ can be estimated by √(Σ_{i=1}^m ψ_i²/m). This estimate proves very accurate even with a limited number of data samples, because Σ_{i=1}^m a_{i,1}²/m is unbiased and tightly concentrated. The challenge thus lies in accurately estimating the direction of x, or seeking a unit vector maximally aligned with x.
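The norm estimate in (5) is a one-liner; the sketch below checks it on simulated real Gaussian data (illustrative code, numpy assumed):

```python
import numpy as np

def norm_estimate(psi):
    """Estimate ||x|| by sqrt((1/m) * sum_i psi_i^2), following (5)."""
    return np.sqrt(np.mean(psi ** 2))
```

The squared estimate is unbiased for ∥x∥² under the real Gaussian model, and it concentrates at the usual 1/√m rate, which is why a modest m already yields an accurate norm estimate.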
Toward this end, let us first present a variant of the initialization in [28]. Note that the larger the modulus ψ_i of the inner product between a_i and x is, the more correlated the known design vector a_i is deemed with the unknown solution x, hence the more useful directional information about x it bears. Inspired by this fact and having available data {(a_i; ψ_i)}_{1≤i≤m}, one can sort all (absolute) correlation coefficients {ψ_i}_{1≤i≤m} in an ascending order, yielding the ordered coefficients 0 < ψ_{[m]} ≤ ··· ≤ ψ_{[2]} ≤ ψ_{[1]}. Sorting m records takes time proportional to O(m log m).² Let S ⫋ M denote the set of selected feature vectors a_i to be used for computing the initialization, which is to be designed next. Fix a priori the cardinality |S| to some integer on the order of m, say, |S| := ⌊3m/13⌋. It is then natural to define S so as to collect the a_i vectors corresponding to the largest |S| correlation coefficients {ψ_{[i]}}_{1≤i≤|S|}, each of which can be thought of as pointing to (roughly) the direction of x. Approximating the direction of x therefore boils down to finding a vector that maximizes its correlation with the subset S of selected directional vectors a_i. Succinctly, the wanted approximation vector can be efficiently found as the solution of

    maximize_{∥z∥=1}  (1/|S|) Σ_{i∈S} ⟨a_i, z⟩²  =  z^∗ ((1/|S|) Σ_{i∈S} a_i a_i^∗) z    (6)

where the superscript ∗ represents the transpose or the conjugate transpose, as will be clear from the context. Upon scaling the unit-norm solution of (6) by the norm estimate √(Σ_{i=1}^m ψ_i²/m) obtained in (5), so as to match the magnitude of x, we arrive at what we will henceforth refer to as the maximal correlation initialization. As long as |S| is chosen on the order of m, the maximal correlation method outperforms the spectral ones in [3, 16, 7], and has performance comparable to the orthogonality-promoting method [28]. Its performance around the information limit, however, is still not the best that we can hope for.
Recall from (6) that all selected directional vectors {a_i}_{i∈S} are treated equally in terms of their contributions to constructing the initialization. Nevertheless, according to our starting principle, the ordering information carried by the selected a_i vectors is not exploited by the initialization scheme in (6) and [28]. In other words, if for i, j ∈ S the correlation coefficient ψ_i is larger than ψ_j, then a_i is deemed more correlated (with x) than a_j is, hence bearing more useful information about the direction of x. It is thus prudent to weigh more heavily the selected a_i vectors associated with larger ψ_i values. Given the ordering information ψ_{[|S|]} ≤ ··· ≤ ψ_{[2]} ≤ ψ_{[1]} available from the sorting procedure, a natural way to achieve this goal is to weight each a_i vector with a simple monotonically increasing function of ψ_i, say e.g., taking the weights w_i⁰ := ψ_i^γ, ∀i ∈ S, with the exponent parameter γ ≥ 0 chosen to maintain the wanted ordering w⁰_{[|S|]} ≤ ··· ≤ w⁰_{[2]} ≤ w⁰_{[1]}. In a nutshell, a more flexible initialization strategy, which we refer to as weighted maximal correlation, can be summarized as follows:

    z̃⁰ := arg max_{∥z∥=1}  z^∗ ((1/|S|) Σ_{i∈S} ψ_i^γ a_i a_i^∗) z.    (7)

For any given ϵ > 0, the power method or the Lanczos algorithm can be called upon to find an ϵ-accurate solution to (7) in time proportional to O(n|S|) [20], assuming a positive eigengap between the largest and second-largest eigenvalues of the matrix (1/|S|) Σ_{i∈S} ψ_i^γ a_i a_i^∗, which is often true when the {a_i} are sampled from a continuous distribution. The proposed initialization is obtained upon scaling z̃⁰ from (7) by the norm estimate in (5), to yield z⁰ := √(Σ_{i=1}^m ψ_i²/m) z̃⁰. By default, we take γ := 1/2 in all reported numerical implementations, yielding w_i⁰ := √(|⟨a_i, x⟩|) for all i ∈ S.

²f(m) = O(g(m)) means that there exists a constant C > 0 such that |f(m)| ≤ C|g(m)|.
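A minimal power-method sketch of the weighted maximal correlation initialization (7), with |S| = ⌊3m/13⌋ and γ = 1/2 as above (illustrative code, not the authors' implementation; a fixed number of power iterations stands in for a proper eigensolver):

```python
import numpy as np

def weighted_max_correlation_init(A, psi, gamma=0.5, power_iters=100, seed=0):
    """Approximate the principal eigenvector of (1/|S|) sum_{i in S} psi_i^gamma a_i a_i^T,
    with S the indices of the largest floor(3m/13) amplitudes, then scale by the
    norm estimate sqrt(mean(psi^2)) from (5)."""
    m, n = A.shape
    S = np.argsort(psi)[-(3 * m // 13):]          # indices of the |S| largest psi_i
    w0 = psi[S] ** gamma                          # weights w_i^0 = psi_i^gamma
    z = np.random.default_rng(seed).standard_normal(n)
    z /= np.linalg.norm(z)
    for _ in range(power_iters):                  # power iteration: z <- Y z / ||Y z||
        z = A[S].T @ (w0 * (A[S] @ z)) / len(S)
        z /= np.linalg.norm(z)
    return np.sqrt(np.mean(psi ** 2)) * z
```

Note the matrix in (7) is never formed explicitly: each power iteration only needs the products A[S] @ z and A[S].T @ (·), keeping the per-iteration cost at O(n|S|).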
Regarding the initialization procedure in (7), we next highlight two features, whereas technical details and theoretical performance guarantees are provided in Section 3:

F1) The weights {w_i⁰} in the maximal correlation scheme enable leveraging the useful information that each feature vector a_i may bear regarding the direction of x.

F2) Taking w_i⁰ := ψ_i^γ for all i ∈ S and 0 otherwise, problem (7) can be equivalently rewritten as

    z̃⁰ := arg max_{∥z∥=1}  z^∗ ((1/m) Σ_{i=1}^m w_i⁰ a_i a_i^∗) z    (8)

which subsumes previous initialization schemes under particular selections of the weights {w_i⁰}. For instance, the spectral initialization in [16, 3] is recovered by choosing S := M and w_i⁰ := ψ_i² for all 1 ≤ i ≤ m.

Figure 1: Relative initialization error for i.i.d. a_i ∼ N(0, I_{1,000}), 1 ≤ i ≤ 1,999 (x-axis: signal dimension n with m = 2n − 1; curves: reweighted maximal correlation, spectral initialization, truncated spectral in TWF, orthogonality promoting, truncated spectral in RWF).

For comparison, define the relative error := dist(z, x)/∥x∥. Throughout the paper, all simulated results were averaged over 100 Monte Carlo (MC) realizations, and each simulated scheme was implemented with its pertinent default parameters. Figure 1 evaluates the performance of the developed initialization relative to several state-of-the-art strategies, with the number of data fixed at the information limit benchmarking the minimal number of samples required. It is clear that our initialization is: i) consistently better than the state-of-the-art; and ii) stable as n grows, in contrast to the instability encountered by the spectral ones [16, 3, 7, 30]. It is worth stressing that the more than 5% empirical advantage (relative to the best alternative) at the challenging information-theoretic benchmark is nontrivial, and is one of the main RAF upshots. This advantage becomes increasingly pronounced as the ratio m/n grows.
2.2 Iteratively reweighted gradient flow

For independent data obeying the real Gaussian model, the direction that TAF moves along in stage S2) presented earlier is given by the following (generalized) gradient [28]:

    (1/m) Σ_{i∈T} ∇ℓ(z; ψ_i) = (1/m) Σ_{i∈T} (a_i^∗z − ψ_i · a_i^∗z/|a_i^∗z|) a_i    (9)

where the dependence on the iterate count t is neglected for notational brevity, and the convention a_i^∗z/|a_i^∗z| := 0 is adopted when a_i^∗z = 0. Unfortunately, the (negative) gradient of the average in (9) generally may not point towards the true solution x unless the current iterate z is already very close to x. Therefore, moving along such a descent direction may not drag z closer to x. To see this, consider an initial guess z⁰ that is already in a basin of attraction of x (i.e., a region within which there is only a unique stationary point). Certainly, there are summands (a_i^∗z − ψ_i · a_i^∗z/|a_i^∗z|) a_i in (9) that could give rise to "bad/misleading" gradient directions, due to the erroneously estimated signs a_i^∗z/|a_i^∗z| ≠ a_i^∗x/|a_i^∗x| [28], or (a_i^∗z)(a_i^∗x) < 0 [30]. Those gradients as a whole may drag z away from x, and hence out of the basin of attraction. Such an effect becomes increasingly severe as m approaches the information-theoretic limit of 2n − 1, thus rendering past approaches less effective in this case. Although this issue is somewhat remedied by TAF with a truncation procedure, its efficacy is limited due to misses of bad gradients and mis-rejections of meaningful ones around the information limit. To address this challenge, the reweighted amplitude flow adopts suitable gradient directions from all data samples {(a_i; ψ_i)}_{1≤i≤m} in a (timely) adaptive fashion, namely by introducing appropriate weights for all gradients, to yield the update

    z^{t+1} = z^t − µ_t ∇ℓ_rw(z^t),  t = 0, 1, …    (10)

The reweighted gradient ∇ℓ_rw(z^t) evaluated at the current point z^t is given by

    ∇ℓ_rw(z) := (1/m) Σ_{i=1}^m w_i ∇ℓ(z; ψ_i)    (11)

for suitable weights {w_i}_{1≤i≤m} to be designed next. To that end, we observe that the truncation criterion [28]

    T := {1 ≤ i ≤ m : |a_i^∗z|/|a_i^∗x| ≥ α}    (12)

with some given parameter α > 0 suggests including only gradient components associated with |a_i^∗z| of relatively large size. This is because gradients with sizable |a_i^∗z|/|a_i^∗x| offer reliable and meaningful directions pointing to the truth x with large probability [28]. As such, the ratio |a_i^∗z|/|a_i^∗x| can be viewed as a confidence score on the reliability or meaningfulness of the corresponding gradient ∇ℓ(z; ψ_i). Recognizing that confidence can vary, it is natural to distinguish the contributions that different gradients make to the overall search direction. An easy way is to attach large weights to the reliable gradients, and small weights to the spurious ones. Assume without loss of generality that 0 ≤ w_i ≤ 1 for all 1 ≤ i ≤ m; otherwise, lump the normalization factor achieving this into the learning rate µ_t. Building upon this observation and leveraging the gradient reliability confidence score |a_i^∗z|/|a_i^∗x|, the weight per gradient ∇ℓ(z; ψ_i) in RAF is designed to be

    w_i := 1 / (1 + β_i/(|a_i^∗z|/|a_i^∗x|)),  1 ≤ i ≤ m    (13)

in which {β_i > 0}_{1≤i≤m} are some pre-selected parameters. Regarding the proposed weighting criterion in (13), three remarks are in order, followed by the RAF algorithm summarized in Algorithm 1.

R1) The weights {w_i^t}_{1≤i≤m} are adapted over time to z^t. One can also interpret the reweighted gradient flow z^{t+1} in (10) as performing a single gradient step to minimize the smooth reweighted loss (1/m) Σ_{i=1}^m w_i^t ℓ(z; ψ_i) with starting point z^t; see also [4] for related ideas successfully exploited in the iteratively reweighted least-squares approach to compressive sampling.

R2) Note that the larger |a_i^∗z|/|a_i^∗x| is, the larger w_i will be.
More importance will be attached to reliable gradients than to spurious ones. Gradients from almost all data points are are judiciously accounted for, which is in sharp contrast to [28], where withdrawn gradients do not contribute the information they carry. R3) At the points {z} where a∗ i z = 0 for certain i ∈M, the corresponding weight will be wi = 0. That is, the losses ℓ(z; ψi) in (2) that are nonsmooth at points z will be eliminated, to prevent their contribution to the reweighted gradient update in (10). Hence, the convergence analysis of RAF can be considerably simplified because it does not have to cope with the nonsmoothness of the objective function in (2). 6 2.3 Algorithmic parameters To optimize the empirical performance and facilitate numerical implementations, choice of pertinent algorithmic parameters of RAF is independently discussed here. It is obvious that the RAF algorithm entails four parameters. Our theory and all experiments are based on: i) |S|/m ≤0.25; ii) 0 ≤βi ≤10 for all 1 ≤i ≤m; and, iii) 0 ≤γ ≤1. For convenience, a constant step size µt ≡µ > 0 is suggested, but other step size rules such as backtracking line search with the reweighted objective work as well. As will be formalized in Section 3, RAF converges if the constant µ is not too large, with the upper bound depending in part on the selection of {βi}1≤i≤m. In the numerical tests presented in Sections 2 and 4, we take |S| := ⌊3m/13⌋, βi ≡β := 10, γ := 0.5, and µ := 2 (14) while larger step sizes µ > 0 can be afforded for larger m/n values. Algorithm 1 Reweighted Amplitude Flow 1: Input: Data {(ai; ψi}1≤i≤m; maximum number of iterations T; step size µt = 2/6 and weighting parameter βi = 10/5 for real/complex Gaussian model; |S| = ⌊3m/13⌋, and γ = 0.5. 2: Construct S to include indices associated with the |S| largest entries among {ψi}1≤i≤m. 
3: Initialize $z^0 := \sqrt{\sum_{i=1}^{m} \psi_i^2 / m}\; \tilde{z}^0$, with $\tilde{z}^0$ being the unit principal eigenvector of

$$Y := \frac{1}{m} \sum_{i=1}^{m} w_i^0 a_i a_i^* \qquad (15)$$

where $w_i^0 := \psi_i^\gamma$ if $i \in S \subseteq \mathcal{M}$, and $w_i^0 := 0$ otherwise, for all $1 \le i \le m$.
4: Loop: for $t = 0$ to $T-1$,

$$z^{t+1} = z^t - \frac{\mu_t}{m} \sum_{i=1}^{m} w_i^t \left( a_i^* z^t - \psi_i \frac{a_i^* z^t}{|a_i^* z^t|} \right) a_i \qquad (16)$$

where $w_i^t := \dfrac{|a_i^* z^t|/\psi_i}{|a_i^* z^t|/\psi_i + \beta_i}$ for all $1 \le i \le m$.
5: Output: $z^T$.

3 Main results

Our main result, summarized in Theorem 1, establishes exact recovery under the real Gaussian model; the proof is provided in the supplementary material. Our RAF approach, however, generalizes readily to the complex Gaussian and CDP models.

Theorem 1 (Exact recovery) Consider $m$ noiseless measurements $\psi = |Ax|$ for an arbitrary $x \in \mathbb{R}^n$. If the data size $m \ge c_0 |S| \ge c_1 n$ and the step size $\mu \le \mu_0$, then with probability at least $1 - c_3 e^{-c_2 m}$, the reweighted amplitude flow estimates $z^t$ in Algorithm 1 obey

$$\mathrm{dist}(z^t, x) \le \tfrac{1}{10}(1-\nu)^t \|x\|, \quad t = 0, 1, \ldots \qquad (17)$$

where $c_0, c_1, c_2, c_3 > 0$, $0 < \nu < 1$, and $\mu_0 > 0$ are certain numerical constants depending on the choice of the algorithmic parameters $|S|$, $\beta$, $\gamma$, and $\mu$.

According to Theorem 1, a few interesting properties of RAF are worth highlighting. First, RAF recovers the true solution exactly with high probability whenever the ratio $m/n$ of the number of equations to unknowns exceeds some numerical constant. Expressed differently, RAF achieves the information-theoretically optimal order of sample complexity, consistent with the state of the art including TWF [7], TAF [28], and RWF [30]. Notice that (17) also holds at $t = 0$, namely $\mathrm{dist}(z^0, x) \le \|x\|/10$, thereby providing a performance guarantee for the proposed initialization scheme (cf. Step 3 of Algorithm 1). Moreover, starting from this initial estimate, RAF converges linearly to the true solution $x$: to reach any $\epsilon$-relative solution accuracy (i.e., $\mathrm{dist}(z^T, x) \le \epsilon \|x\|$), it suffices to run at most $T = O(\log 1/\epsilon)$ RAF iterations (cf. Step 4).
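To make the refinement stage concrete, the following is a minimal NumPy sketch of one RAF iteration, update (16), under the real Gaussian model, where $(a_i^* z)/|a_i^* z|$ reduces to a sign. Function and variable names are ours, the loss gradient is the amplitude-based LS generalized gradient, and $\psi$ is assumed strictly positive (which holds almost surely for noiseless Gaussian measurements).

```python
import numpy as np

def raf_step(z, A, psi, beta=10.0, mu=2.0):
    """One reweighted-amplitude-flow iteration (update (16), real case).

    A: (m, n) sensing matrix with rows a_i^T; psi: (m,) measured amplitudes.
    Weights w_i = (|a_i^T z|/psi_i) / (|a_i^T z|/psi_i + beta), cf. (13);
    note w_i = 0 wherever a_i^T z = 0, in line with remark R3.
    """
    m = len(psi)
    Az = A @ z
    ratio = np.abs(Az) / psi               # confidence score |a_i^T z| / psi_i
    w = ratio / (ratio + beta)             # reweighting (13)
    # reweighted generalized gradient of the amplitude LS loss
    grad = A.T @ (w * (Az - psi * np.sign(Az))) / m
    return z - mu * grad
```

At an exact solution the residual $a_i^* z - \psi_i\,\mathrm{sign}(a_i^* z)$ vanishes for every $i$, so the iterate is a fixed point, matching the exact-recovery guarantee of Theorem 1.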
The iteration bound $T = O(\log 1/\epsilon)$, in conjunction with the per-iteration complexity $O(mn)$, confirms that RAF solves a quadratic system exactly in time $O(mn \log 1/\epsilon)$, which is proportional to $O(mn)$, the time required to read the entire data $\{(a_i, \psi_i)\}_{1\le i\le m}$. Given that the initialization stage can be performed in time $O(n|S|)$ with $|S| < m$, the overall linear-time complexity of RAF is order-optimal.

4 Simulated tests

Figure 2: Function value $L(z^T)$ attained by RAF (plotted as $-\log_{10} L(z^T)$) for 100 MC realizations when $m = 2n - 1$.

Our theoretical findings about RAF have been corroborated with comprehensive numerical tests, a sample of which is discussed next. The performance of RAF is evaluated relative to the state-of-the-art (T)WF, RWF, and TAF in terms of the empirical success rate over 100 MC trials, where a success is declared for a trial if the returned estimate incurs error

$$\frac{\big\| \psi - |A z^T| \big\|}{\|x\|} \le 10^{-5}$$

where the modulus operator $|\cdot|$ is understood element-wise. The real Gaussian model and the physically realizable CDPs were simulated in this section. For fairness, all schemes were implemented with their suggested parameter values. The true signal vector was generated as $x \sim \mathcal{N}(0, I)$, and the sensing vectors i.i.d. $a_i \sim \mathcal{N}(0, I)$. Each scheme obtained its initial guess from 200 power iterations, followed by a series of $T = 2{,}000$ (truncated/reweighted) gradient iterations. All experiments were performed in MATLAB on an Intel CPU @ 3.4 GHz (32 GB RAM) computer. For reproducibility, the MATLAB code of the RAF algorithm is publicly available at https://gangwg.github.io/RAF/.

To demonstrate the power of RAF in the high-dimensional regime, the function value $L(z)$ in (2) evaluated at the returned estimate $z^T$ for 100 independent trials is plotted (on a negative logarithmic scale) in Fig. 2, where $m = 2n - 1 = 9{,}999$. Evidently, RAF succeeded in all trials even at this challenging information limit. To the best of our knowledge, RAF is the first algorithm that empirically recovers any solution exactly from a minimal number of random quadratic equations.

The left panel of Fig. 3 further compares the empirical success rate of five schemes under the real Gaussian model with $n = 1{,}000$ and $m/n$ varying by 0.1 from 1 to 5. Evidently, the developed RAF achieves perfect recovery as soon as $m$ is about $2n$, where its competing alternatives do not work well. To demonstrate the stability and robustness of RAF in the presence of additive noise, the right panel of Fig. 3 depicts the normalized mean-square error

$$\mathrm{NMSE} := \frac{\mathrm{dist}^2(z^T, x)}{\|x\|^2}$$

as a function of the signal-to-noise ratio (SNR) for $m/n \in \{3, 4, 5\}$. The noise model $\psi_i = |\langle a_i, x\rangle| + \eta_i$, $1 \le i \le m$, with $\eta := [\eta_i]_{1\le i\le m} \sim \mathcal{N}(0, \sigma^2 I_m)$ was employed, where $\sigma^2$ was set such that certain $\mathrm{SNR} := 10\log_{10}(\|Ax\|^2 / m\sigma^2)$ values on the x-axis were achieved.
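The success criterion and the NMSE above can be sketched as follows; $\mathrm{dist}(\cdot,\cdot)$ must account for the unrecoverable global sign in the real model. Function names are ours.

```python
import numpy as np

def dist(z, x):
    """Distance up to global sign (real model): min(||z - x||, ||z + x||)."""
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))

def nmse(z, x):
    """Normalized mean-square error dist^2(z, x) / ||x||^2."""
    return dist(z, x) ** 2 / np.linalg.norm(x) ** 2

def is_success(zT, A, psi, x, tol=1e-5):
    """Success criterion: || psi - |A z^T| || / ||x|| <= 1e-5,
    with the modulus applied element-wise."""
    return np.linalg.norm(psi - np.abs(A @ zT)) / np.linalg.norm(x) <= tol
```

Note that both $x$ and $-x$ produce the same amplitudes $|Ax|$, so both pass the criterion and both have zero NMSE, which is why the distance is defined modulo the global sign.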
To examine the efficacy and scalability of RAF under real-world conditions, the last experiment entails the Galaxy image³, represented by a three-way array $X \in \mathbb{R}^{1080\times 1920\times 3}$, whose first two coordinates encode the pixel locations and the third the RGB color bands. Consider the physically realizable CDP model with random masks [3]. Letting $x \in \mathbb{R}^n$ ($n \approx 2\times 10^6$) be the vectorization of a given band of $X$, the CDP model with $K$ masks is

$$\psi^{(k)} = |F D^{(k)} x|, \quad 1 \le k \le K,$$

where $F \in \mathbb{C}^{n\times n}$ is a DFT matrix, and the diagonal matrices $D^{(k)}$ have their diagonal entries sampled uniformly at random from $\{1, -1, j, -j\}$ with $j := \sqrt{-1}$. Each $D^{(k)}$ represents a random mask placed after the object to modulate the illumination patterns [5].

³Downloaded from http://pics-about-space.com/milky-way-galaxy.

Figure 3: Real Gaussian model with $x \in \mathbb{R}^{1,000}$: empirical success rate vs. $m/n$ for RAF, TAF, TWF, RWF, and WF (left); and NMSE vs. SNR (dB) for $m = 3n, 4n, 5n$ (right).

Implementing $K = 4$ masks, each algorithm performs, independently for each band, 100 power iterations for an initial guess, which is refined by 100 gradient iterations. The images recovered by TAF (left) and RAF (right) are displayed in Fig. 4, with relative errors 1.0347 and 1.0715 × 10⁻³, respectively. WF and TWF returned images with corresponding relative errors 1.6870 and 1.4211, far from the ground truth.

Figure 4: Recovered Galaxy images after 100 gradient iterations of TAF (left); and of RAF (right).

5 Conclusion

This paper developed a linear-time algorithm, called RAF, for solving systems of random quadratic equations. Our procedure consists of two stages: a weighted maximal correlation initializer attainable with a few power or Lanczos iterations, and a sequence of scalable reweighted gradient refinements for a nonconvex nonsmooth LS loss function.
It was demonstrated that RAF achieves the optimal sample and computational complexity. Judicious numerical tests showcase its superior performance over state-of-the-art alternatives. Empirically, RAF solves a set of random quadratic equations with high probability as soon as a unique solution exists. Promising extensions include studying robust and/or sparse phase retrieval and matrix recovery via (stochastic) reweighted amplitude flow counterparts, and in particular exploiting the power of (re)weighting regularization techniques to enable more general nonconvex optimization, such as training deep neural networks [18].

Acknowledgments

G. Wang and G. B. Giannakis were partially supported by NSF grants 1500713 and 1514056. Y. Saad was partially supported by NSF grant 1505970. J. Chen was partially supported by the National Natural Science Foundation of China grants U1509215 and 61621063, and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1208).

References

[1] R. Balan, P. Casazza, and D. Edidin, "On signal reconstruction without phase," Appl. Comput. Harmon. Anal., vol. 20, no. 3, pp. 345–356, May 2006.
[2] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, 2001, vol. 2.
[3] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985–2007, Apr. 2015.
[4] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Comm. Pure Appl. Math., vol. 66, no. 8, pp. 1241–1274, Nov. 2013.
[5] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277–299, Sept. 2015.
[6] J. Chen, L. Wang, X. Zhang, and Q. Gu, "Robust Wirtinger flow for phase retrieval with arbitrary corruption," arXiv:1704.06256, 2017.
[7] Y. Chen and E. J. Candès, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," in Adv. Neural Inf. Process. Syst., Montréal, Canada, 2015, pp. 739–747.
[8] A. Conca, D. Edidin, M. Hering, and C. Vinzant, "An algebraic characterization of injectivity in phase retrieval," Appl. Comput. Harmon. Anal., vol. 38, no. 2, pp. 346–356, Mar. 2015.
[9] J. C. Duchi and F. Ruan, "Solving (most) of a set of quadratic equalities: Composite optimization for robust phase retrieval," arXiv:1705.02356, 2017.
[10] J. R. Fienup, "Phase retrieval algorithms: A comparison," Appl. Opt., vol. 21, no. 15, pp. 2758–2769, Aug. 1982.
[11] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, pp. 237–246, Nov. 1972.
[12] T. Goldstein and C. Studer, "PhaseMax: Convex phase retrieval via basis pursuit," arXiv:1610.07531, 2016.
[13] P. Hand and V. Voroninski, "An elementary proof of convex phase retrieval in the natural parameter space via the linear program PhaseMax," arXiv:1611.03935, 2016.
[14] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Trans. Inf. Theory, vol. 56, no. 6, pp. 2980–2998, Jun. 2010.
[15] Y. M. Lu and G. Li, "Phase transitions of spectral initialization for high-dimensional nonconvex estimation," arXiv:1702.06435, 2017.
[16] P. Netrapalli, P. Jain, and S. Sanghavi, "Phase retrieval using alternating minimization," in Adv. Neural Inf. Process. Syst., Stateline, NV, 2013, pp. 2796–2804.
[17] P. M. Pardalos and S. A. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard," J. Global Optim., vol. 1, no. 1, pp. 15–22, 1991.
[18] G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton, "Regularizing neural networks by penalizing confident output distributions," arXiv:1701.06548, 2017.
[19] J. R. Rice, Numerical Methods, Software, and Analysis. Academic Press, 1992.
[20] Y. Saad, Numerical Methods for Large Eigenvalue Problems: Revised Edition. SIAM, 2011.
[21] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: A contemporary overview," IEEE Signal Process. Mag., vol. 32, no. 3, pp. 87–109, May 2015.
[22] M. Soltanolkotabi, "Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization," arXiv:1702.06175, 2017.
[23] J. Sun, Q. Qu, and J. Wright, "A geometric analysis of phase retrieval," Found. Comput. Math., 2017 (to appear); see also arXiv:1602.06664, 2016.
[24] I. Waldspurger, "Phase retrieval with random Gaussian sensing vectors by alternating projections," arXiv:1609.03088, 2016.
[25] I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, MaxCut and complex semidefinite programming," Math. Program., vol. 149, no. 1, pp. 47–81, 2015.
[26] G. Wang and G. B. Giannakis, "Solving random systems of quadratic equations via truncated generalized gradient flow," in Adv. Neural Inf. Process. Syst., Barcelona, Spain, 2016, pp. 568–576.
[27] G. Wang, G. B. Giannakis, and J. Chen, "Scalable solvers of random quadratic equations via stochastic truncated amplitude flow," IEEE Trans. Signal Process., vol. 65, no. 8, pp. 1961–1974, Apr. 2017.
[28] G. Wang, G. B. Giannakis, and Y. C. Eldar, "Solving systems of random quadratic equations via truncated amplitude flow," IEEE Trans. Inf. Theory, 2017 (to appear); see also arXiv:1605.08285, 2016.
[29] X. Yi, C. Caramanis, and S. Sanghavi, "Alternating minimization for mixed linear regression," in Proc. Intl. Conf. on Mach. Learn., Beijing, China, 2014, pp. 613–621.
[30] H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, "Reshaped Wirtinger flow and incremental algorithm for solving quadratic system of equations," J. Mach. Learn. Res., 2017 (to appear); see also arXiv:1605.07719, 2016.
Online Reinforcement Learning in Stochastic Games

Chen-Yu Wei, Institute of Information Science, Academia Sinica, Taiwan, bahh723@iis.sinica.edu.tw
Yi-Te Hong, Institute of Information Science, Academia Sinica, Taiwan, ted0504@iis.sinica.edu.tw
Chi-Jen Lu, Institute of Information Science, Academia Sinica, Taiwan, cjlu@iis.sinica.edu.tw

Abstract

We study online reinforcement learning in average-reward stochastic games (SGs). An SG models a two-player zero-sum game in a Markov environment, where state transitions and one-step payoffs are determined simultaneously by a learner and an adversary. We propose the UCSG algorithm, which achieves a sublinear regret compared to the game value when competing with an arbitrary opponent. This result improves previous ones under the same setting. The regret bound depends on the diameter, an intrinsic value related to the mixing property of SGs. If we let the opponent play an optimistic best response to the learner, UCSG finds an ε-maximin stationary policy with a sample complexity of Õ(poly(1/ε)), where ε is the gap to the best policy.

1 Introduction

Many real-world scenarios (e.g., markets, computer networks, board games) can be cast as multi-agent systems. The framework of Multi-Agent Reinforcement Learning (MARL) aims at learning to act in such systems. While in traditional reinforcement learning (RL) problems, Markov decision processes (MDPs) are widely used to model a single agent's interaction with the environment, stochastic games (SGs, [32]), as an extension of MDPs, are able to describe multiple agents' simultaneous interaction with the environment. In this view, SGs are well suited to model MARL problems [24]. In this paper, two-player zero-sum SGs are considered. These games proceed like MDPs, with the exception that in each state both players select their own actions simultaneously¹, which jointly determine the transition probabilities and their rewards.
The zero-sum property requires the two players' payoffs to sum to zero. Thus, while one player (Player 1) wants to maximize his/her total reward, the other (Player 2) would like to minimize that amount. Similar to the case of MDPs, the reward can be discounted or undiscounted, and the game can be episodic or non-episodic.

In the literature, SGs are typically learned under two different settings, which we will call the online and offline settings. In the offline setting, the learner controls both players in a centralized manner, and the goal is to find the equilibrium of the game [33, 21, 30]. This is also known as finding the worst-case optimality for each player (a.k.a. the maximin or minimax policy). In this case, we care about the sample complexity, i.e., how many samples are required to estimate the worst-case optimality to within some error threshold. In the online setting, the learner controls only one of the players, and plays against an arbitrary opponent [24, 4, 5, 8, 31]. In this case, we care about the learner's regret, i.e., the difference between some benchmark measure and the learner's total reward earned in the learning process. This benchmark can be defined as the total reward when both players play optimal policies [5], or when Player 1 plays the best stationary response to Player 2 [4]. Some of the above online-setting algorithms can find the equilibrium simply through self-play.

Most previous results on offline sample complexity consider discounted SGs. Their bounds depend heavily on the chosen discount factor [33, 21, 30, 31]. However, as noted in [5, 19], the discounted setting might not be suitable for SGs that require long-term planning, because only finitely many steps are relevant in the reward function it defines.

¹Turn-based SGs, like Go, are special cases: in each state, one player's action set contains only a null action.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
This paper, to the best of our knowledge, is the first to give an offline sample complexity bound of order Õ(poly(1/ε)) in the average-reward (undiscounted and non-episodic) setting, where ε is the error parameter. A major difference between our algorithm and previous ones is that the two players play asymmetric roles in our algorithm: by focusing on finding only one player's worst-case optimal policy at a time, the sampling can be rather efficient. This resembles but strictly extends the method of [13] for finding the maximin action in a two-stage game.

In the online setting, we are only aware of the R-MAX algorithm [5], which deals with average-reward SGs and provides a regret bound. Considering a similar scenario and adopting the same regret definition, we significantly improve their bounds (see Appendix A for details). Another difference between our algorithm and theirs is that ours is able to output the currently best stationary policy at any stage of the learning process, while theirs only produces a Tε-step fixed-horizon policy for some input parameter Tε. The former could be more natural, since the worst-case optimal policy is itself a stationary policy.

The techniques used in this paper are most closely related to optimism-based RL for MDPs [2, 19, 9] (see Appendix A). The optimism principle, built on concentration inequalities, automatically strikes a balance between exploitation and exploration, eliminating the need to manually adjust the learning rate or the exploration ratio. However, when importing analysis from MDPs to SGs, we face the challenge caused by the opponent's uncontrollability and non-stationarity. This prevents the learner from freely exploring the state space, and renders useless previous analyses that rely on perturbation analysis of stationary distributions [2].
In this paper, we develop a novel way to replace the opponent's non-stationary policy with a stationary one in the analysis (introduced in Section 5.1), which facilitates the use of techniques based on perturbation analysis. We hope that this technique can benefit future analyses concerning non-stationary agents in MARL.

One related topic is the robust MDP problem [29, 17, 23], an MDP in which some state-action pairs have adversarial rewards and transitions. It is often assumed in robust MDPs that the adversarial choices of the environment are not directly observable by the player, whereas in our SG setting the actions of Player 2 can be observed. However, there are still difficulties in SGs that are not addressed by previous work on robust MDPs. Here we compare our work to [23], a recent work on learning robust MDPs. In their setting, there are adversarial and stochastic state-action pairs, and their proposed OLRM2 algorithm tries to distinguish them. In the fully adversarial scenario, which is the counterpart to our setting, the worst-case transitions and rewards are all revealed to the learner, and what the learner needs to do is perform maximin planning. In our case, however, the worst-case transitions and rewards are still to be learned, and the opponent's arbitrary actions may hinder the learner from learning this information. We would say that the contribution of [23] is orthogonal to ours.

Other related lines of research are MDPs with adversarially changing reward functions [11, 27, 28, 10] and with adversarially changing transition probabilities [35, 1]. The assumptions in these works differ from ours in several respects, so their results are not comparable to ours. However, they indeed provide other viewpoints on learning in stochastic games.

2 Preliminaries

Game Models and Policies. An SG is a 4-tuple $M = (\mathcal{S}, \mathcal{A}, r, p)$.
$\mathcal{S}$ denotes the state space and $\mathcal{A} = \mathcal{A}^1 \times \mathcal{A}^2$ the players' joint action space. We denote $S = |\mathcal{S}|$ and $A = |\mathcal{A}|$. The game starts from an initial state $s_1$. Suppose at time $t$ the players are at state $s_t$. After the players play the joint action $(a^1_t, a^2_t)$, Player 1 receives the reward $r_t = r(s_t, a^1_t, a^2_t) \in [0, 1]$ from Player 2, and both players visit state $s_{t+1}$ following the transition probability $p(\cdot|s_t, a^1_t, a^2_t)$. For simplicity, we consider deterministic rewards as in [3]; the extension to the stochastic case is straightforward. We shorten notation by $a := (a^1, a^2)$ or $a_t := (a^1_t, a^2_t)$, and use abbreviations such as $r(s_t, a_t)$ and $p(\cdot|s_t, a_t)$. Without loss of generality, players are assumed to determine their actions based on the history. A policy $\pi$ at time $t$ maps the history up to time $t$, $H_t = (s_1, a_1, r_1, \ldots, s_t) \in \mathcal{H}_t$, to a probability distribution over actions. Such policies are called history-dependent policies, whose class is denoted by $\Pi^{\mathrm{HR}}$. On the other hand, a stationary policy, whose class is denoted by $\Pi^{\mathrm{SR}}$, selects actions as a function of the current state. For either class, a joint policy $(\pi^1, \pi^2)$ is often written as $\pi$.

Average Return and the Game Value. Let the players play joint policy $\pi$. Define the $T$-step total reward as $R_T(M, \pi, s) := \sum_{t=1}^{T} r(s_t, a_t)$, where $s_1 = s$, and the average reward as $\rho(M, \pi, s) := \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\left[R_T(M, \pi, s)\right]$, whenever the limit exists. In fact, the game value exists² [26]:

$$\rho^*(M, s) := \sup_{\pi^1} \inf_{\pi^2} \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\left[ R_T(M, \pi^1, \pi^2, s) \right].$$

If $\rho(M, \pi, s)$ or $\rho^*(M, s)$ does not depend on the initial state $s$, we simply write $\rho(M, \pi)$ or $\rho^*(M)$.

The Bias Vector. For a stationary policy $\pi$, the bias vector $h(M, \pi, \cdot)$ is defined, for each coordinate $s$, as

$$h(M, \pi, s) := \mathbb{E}\left[ \sum_{t=1}^{\infty} \big( r(s_t, a_t) - \rho(M, \pi, s) \big) \,\Big|\, s_1 = s,\ a_t \sim \pi(\cdot|s_t) \right]. \qquad (1)$$

The bias vector satisfies the Bellman equation: for all $s \in \mathcal{S}$,

$$\rho(M, \pi, s) + h(M, \pi, s) = r(s, \pi) + \sum_{s'} p(s'|s, \pi)\, h(M, \pi, s'),$$

where $r(s, \pi) := \mathbb{E}_{a\sim\pi(\cdot|s)}[r(s, a)]$ and $p(s'|s, \pi) := \mathbb{E}_{a\sim\pi(\cdot|s)}[p(s'|s, a)]$.
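For a fixed stationary (joint) policy, the Bellman equation above is a linear system in the gain $\rho$ and the bias $h$, and can be solved directly once the induced transition matrix and reward vector are formed. A minimal NumPy sketch, with naming of our own choosing and the additive constant fixed by pinning $h$ at the first state:

```python
import numpy as np

def gain_and_bias(P, r):
    """Solve the average-reward Bellman equation rho*e + h = r + P h
    for an irreducible chain with transition matrix P (S x S) and reward
    vector r (S,), pinning h[0] = 0 to fix the additive constant of h."""
    S = len(r)
    # unknown vector u = [rho, h_1, ..., h_{S-1}] (h_0 = 0 is dropped)
    M = np.zeros((S, S))
    M[:, 0] = 1.0                              # coefficient of rho in each equation
    M[:, 1:] = np.eye(S)[:, 1:] - P[:, 1:]     # (I - P) acting on the free h entries
    u = np.linalg.solve(M, r)
    rho, h = u[0], np.concatenate(([0.0], u[1:]))
    return rho, h
```

Only differences $h(s) - h(s')$ are meaningful, which is why one entry may be pinned arbitrarily; the span $\mathrm{sp}(h)$ discussed next is invariant to this choice.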
The vector $h(M, \pi, \cdot)$ describes the relative advantage among states under model $M$ and (joint) policy $\pi$. The advantage (or disadvantage) of state $s$ relative to state $s'$ under policy $\pi$ is the difference between the accumulated rewards when starting from $s$ versus $s'$, which, by (1), converges asymptotically to $h(M, \pi, s) - h(M, \pi, s')$. For ease of notation, the span of a vector $v$ is defined as $\mathrm{sp}(v) := \max_i v_i - \min_i v_i$. If a model, together with some policy, induces a large $\mathrm{sp}(h)$, then the model is difficult to learn, because visiting a bad state costs a lot in the learning process. As shown in [3] for the MDP case, the regret has an inevitable dependency on $\mathrm{sp}(h(M, \pi^*, \cdot))$, where $\pi^*$ is the optimal policy. On the other hand, $\mathrm{sp}(h(M, \pi, \cdot))$ is closely related to the mean first passage time of the Markov chain induced by $M$ and $\pi$. In fact,

$$\mathrm{sp}(h(M, \pi, \cdot)) \le T^\pi(M) := \max_{s, s'} T^\pi_{s\to s'}(M),$$

where $T^\pi_{s\to s'}(M)$ denotes the expected time to reach state $s'$ starting from $s$ when the model is $M$ and the player(s) follow the (joint) policy $\pi$. This fact is intuitive; the proof is given in Remark M.1.

Notations. To save space, we often write equations in vector or matrix form. We use vector inequalities: for $u, v \in \mathbb{R}^n$, $u \le v \Leftrightarrow u_i \le v_i$ for all $i = 1, \ldots, n$. For a general matrix game with matrix $G$ of size $n \times m$, the value of the game is

$$\operatorname{val} G := \max_{p \in \Delta_n} \min_{q \in \Delta_m} p^\top G q = \min_{q \in \Delta_m} \max_{p \in \Delta_n} p^\top G q,$$

where $\Delta_k$ is the probability simplex of dimension $k$. In SGs, given an estimated value function $u(s')$ for all $s'$, we often need to solve the matrix game equation

$$v(s) = \max_{\pi^1(\cdot|s)} \min_{\pi^2(\cdot|s)} \mathbb{E}_{a^1\sim\pi^1(\cdot|s),\, a^2\sim\pi^2(\cdot|s)} \Big[ r(s, a^1, a^2) + \sum_{s'} p(s'|s, a^1, a^2)\, u(s') \Big],$$

abbreviated in vector form as $v = \operatorname{val}\{r + Pu\}$. We also use $\operatorname{solve}_1 G$ and $\operatorname{solve}_2 G$ to denote the optimal solutions $p$ and $q$. In addition, the indicator function is denoted by $\mathbb{1}\{\cdot\}$.
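The value $\operatorname{val} G$ and the maximin strategy $\operatorname{solve}_1 G$ can be computed with the standard linear-programming formulation of a zero-sum matrix game (maximize $v$ subject to $p^\top G \ge v \cdot e$, $p \in \Delta_n$). A sketch using `scipy.optimize.linprog`, with function names of our own choosing:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(G):
    """Return (val G, maximin strategy p) for the zero-sum game
    val G = max_{p in simplex} min_{q in simplex} p^T G q.
    LP variables: [v, p_1, ..., p_n]; maximize v s.t. (G^T p)_j >= v."""
    n, m = G.shape
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # linprog minimizes, so minimize -v
    A_ub = np.hstack([np.ones((m, 1)), -G.T])     # v - (G^T p)_j <= 0 per column j
    b_ub = np.zeros(m)
    A_eq = np.array([[0.0] + [1.0] * n])          # sum_i p_i = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, 1.0)] * n    # v free, p in [0, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0], res.x[1:]
```

By LP duality, solving the same program on $-G^\top$ yields Player 2's minimax strategy $\operatorname{solve}_2 G$, and the two optimal values coincide, which is the minimax equality displayed above.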
²Unlike in one-player MDPs, the sup and inf in the definition of $\rho^*(M, s)$ are not necessarily attainable. Moreover, players may not have stationary optimal policies.

3 Problem Settings and Results Overview

We assume that the game proceeds for $T$ steps. In order to obtain meaningful regret bounds (i.e., sublinear in $T$), we must make some assumptions on the SG model itself. Our two alternative assumptions are:

Assumption 1. $\displaystyle \max_{s, s'} \max_{\pi^1\in\Pi^{\mathrm{SR}}} \max_{\pi^2\in\Pi^{\mathrm{SR}}} T^{\pi^1,\pi^2}_{s\to s'}(M) \le D.$

Assumption 2. $\displaystyle \max_{s, s'} \max_{\pi^2\in\Pi^{\mathrm{SR}}} \min_{\pi^1\in\Pi^{\mathrm{SR}}} T^{\pi^1,\pi^2}_{s\to s'}(M) \le D.$

The reason we make these assumptions is as follows. Consider an SG model where the opponent (Player 2) has some way to lock the learner (Player 1) into some bad state. The best strategy for the learner might be to avoid, if possible, ever entering that state. However, in the early stage of the learning process the learner will not know this, and he/she has a certain probability of visiting that state and getting locked in, which causes linear regret. Therefore, we assume the following: whatever policy the opponent executes, the learner always has some way to reach any state within some bounded time. This is essentially our Assumption 2. Assumption 1 is the stronger one; it actually implies that under any policies executed by the players (not necessarily stationary, see Remark M.2), every state is visited within an average of $D$ steps. We find that under this assumption, the asymptotic regret can be improved. This assumption is also similar in spirit to the conditions required for the convergence of Q-learning-type algorithms, which require every state to be visited infinitely often; see [18] for example. These assumptions define notions of diameter specific to the SG model. It is known that under Assumption 1 or Assumption 2, both players have optimal stationary policies, and the game value is independent of the initial state. Thus we can simply write $\rho^*(M, s)$ as $\rho^*(M)$.
For a proof of these facts, please refer to Theorem E.1 in the appendix.

3.1 Two Settings and Results Overview

We focus on training Player 1 and discuss two settings. In the online setting, Player 1 competes with an arbitrary Player 2. The regret is defined as

$$\mathrm{Reg}^{(\mathrm{on})}_T = \sum_{t=1}^{T} \big( \rho^*(M) - r(s_t, a_t) \big).$$

In the offline setting, we control both Player 1's and Player 2's actions, and seek Player 1's maximin policy. The sample complexity is defined as

$$L_\epsilon = \sum_{t=1}^{T} \mathbb{1}\Big\{ \rho^*(M) - \min_{\pi^2} \rho(M, \pi^1_t, \pi^2) > \epsilon \Big\},$$

where $\pi^1_t$ is the stationary policy being executed by Player 1 at time $t$. This definition is similar to those in [20, 19] for one-player MDPs. By the definition of $L_\epsilon$, if we have an upper bound on $L_\epsilon$ and run the algorithm for $T > L_\epsilon$ steps, there is some $t$ such that $\pi^1_t$ is $\epsilon$-optimal. We explain how to pick this $t$ in Section 7 and Appendix L.

It turns out that almost the same algorithm handles both settings. Since learning in the online setting is more challenging, from now on we mainly focus on the online setting, deferring discussion of the offline setting to the end of the paper. Our results can be summarized by the following two theorems.

Theorem 3.1. Under Assumption 1, UCSG achieves $\mathrm{Reg}^{(\mathrm{on})}_T = \tilde{O}(D^3 S^5 A + DS\sqrt{AT})$ w.h.p.³

Theorem 3.2. Under Assumption 2, UCSG achieves $\mathrm{Reg}^{(\mathrm{on})}_T = \tilde{O}\big(\sqrt[3]{D S^2 A T^2}\big)$ w.h.p.

4 Upper Confidence Stochastic Game Algorithm (UCSG)

³We write "with high probability, $g = \tilde{O}(f)$" or "w.h.p., $g = \tilde{O}(f)$" to indicate "with probability $\ge 1-\delta$, $g = f_1 O(f) + f_2$", where $f_1, f_2$ are polynomials of $\log D, \log S, \log A, \log T, \log(1/\delta)$.

Algorithm 1 UCSG

Input: $\mathcal{S}$, $\mathcal{A} = \mathcal{A}^1 \times \mathcal{A}^2$, $T$.
Initialization: $t = 1$.
for phase $k = 1, 2, \ldots$ do
  $t_k = t$.
  1. Initialize phase $k$: $v_k(s,a) = 0$, $n_k(s,a) = \max\big\{1, \sum_{\tau=1}^{t_k-1} \mathbb{1}_{(s_\tau, a_\tau)=(s,a)}\big\}$, $n_k(s,a,s') = \sum_{\tau=1}^{t_k-1} \mathbb{1}_{(s_\tau, a_\tau, s_{\tau+1})=(s,a,s')}$, and $\hat{p}_k(s'|s,a) = \frac{n_k(s,a,s')}{n_k(s,a)}$ for all $s, a, s'$.
  2.
Update the confidence set: $\mathcal{M}_k = \{\tilde{M} : \forall s, a,\ \tilde{p}(\cdot|s,a) \in P_k(s,a)\}$, where $P_k(s,a) := \mathrm{CONF}_1(\hat{p}_k(\cdot|s,a), n_k(s,a)) \cap \mathrm{CONF}_2(\hat{p}_k(\cdot|s,a), n_k(s,a))$.
  3. Optimistic planning: $(M^1_k, \pi^1_k) = \text{MAXIMIN-EVI}(\mathcal{M}_k, \gamma_k)$, where $\gamma_k := 1/\sqrt{t_k}$.
  4. Execute policies:
  repeat
    Draw $a^1_t \sim \pi^1_k(\cdot|s_t)$; observe the reward $r_t$ and the next state $s_{t+1}$.
    Set $v_k(s_t, a_t) = v_k(s_t, a_t) + 1$ and $t = t + 1$.
  until $\exists (s,a)$ such that $v_k(s,a) = n_k(s,a)$
end for

Definitions of the confidence regions:

$$\mathrm{CONF}_1(\hat{p}, n) := \Big\{ \tilde{p} \in [0,1]^S : \|\tilde{p} - \hat{p}\|_1 \le \sqrt{\tfrac{2S\ln(1/\delta_1)}{n}} \Big\}, \qquad \delta_1 = \frac{\delta}{2S^2 A \log_2 T}.$$

$$\mathrm{CONF}_2(\hat{p}, n) := \Big\{ \tilde{p} \in [0,1]^S : \forall i,\ \sqrt{\tilde{p}_i(1-\tilde{p}_i)} - \sqrt{\hat{p}_i(1-\hat{p}_i)} \le \sqrt{\tfrac{2\ln(6/\delta_1)}{n-1}},\ \ |\tilde{p}_i - \hat{p}_i| \le \min\Big\{ \sqrt{\tfrac{\ln(6/\delta_1)}{2n}},\ \sqrt{\tfrac{2\hat{p}_i(1-\hat{p}_i)}{n}\ln\tfrac{6}{\delta_1}} + \tfrac{7}{3(n-1)}\ln\tfrac{6}{\delta_1} \Big\} \Big\}.$$

The Upper Confidence Stochastic Game algorithm (UCSG, Algorithm 1) extends UCRL2 [19], using the optimism principle to balance exploitation and exploration. It proceeds in phases (indexed by $k$), and changes the learner's policy $\pi^1_k$ only at the beginning of each phase. The length of each phase is not fixed a priori, but depends on the statistics of past observations. At the beginning of each phase $k$, the algorithm estimates the transition probabilities by the empirical frequencies $\hat{p}_k(\cdot|s,a)$ observed in previous phases (Step 1). From these, it creates a confidence region $P_k(s,a)$ for each transition probability. The transition probabilities lying in the confidence regions constitute a set $\mathcal{M}_k$ of plausible stochastic game models, to which the true model $M$ belongs with high probability (Step 2). Then, Player 1 optimistically picks one model $M^1_k$ from $\mathcal{M}_k$, and finds the optimal (stationary) policy $\pi^1_k$ under this model (Step 3). Finally, Player 1 executes the policy $\pi^1_k$ until the number of occurrences of some $(s,a)$-pair is doubled within the phase (Step 4). The count $v_k(s,a)$ records the number of times the $(s,a)$-pair is observed in phase $k$; it is reset to zero at the beginning of every phase.
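As an illustration of Steps 1-2, the empirical frequencies and the $\ell_1$ radius of $\mathrm{CONF}_1$ can be sketched as follows. Names are ours, and the Bernstein-style per-coordinate conditions of $\mathrm{CONF}_2$ are omitted for brevity.

```python
import numpy as np

def l1_radius(n, S, delta1):
    """CONF1 radius: admissible p_tilde satisfy
    ||p_tilde - p_hat||_1 <= sqrt(2 S ln(1/delta1) / n)."""
    return np.sqrt(2.0 * S * np.log(1.0 / delta1) / n)

def empirical_model(counts):
    """Empirical transition frequencies p_hat(s'|s,a) from visit counts
    n_k(s,a,s'), with n_k(s,a) floored at 1 as in Step 1 of Algorithm 1.
    counts: array of shape (S, A, S) over (s, a, s') triples."""
    n_sa = np.maximum(counts.sum(axis=-1), 1)
    return counts / n_sa[..., None], n_sa
```

The radius shrinks as $O(1/\sqrt{n})$, so the confidence set $\mathcal{M}_k$ contracts around the true model as visit counts grow, which is what drives the regret analysis.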
In Step 3, picking an optimistic model and policy means picking $M^1_k \in \mathcal{M}_k$ and $\pi^1_k \in \Pi^{\mathrm{SR}}$ such that for all $s$,

$$\min_{\pi^2} \rho(M^1_k, \pi^1_k, \pi^2, s) \ge \max_{\tilde{M}\in\mathcal{M}_k} \rho^*(\tilde{M}, s) - \gamma_k, \qquad (2)$$

where $\gamma_k$ denotes the error parameter for MAXIMIN-EVI. The left-hand side of (2) is well-defined because Player 2 has a stationary optimal policy in the MDP induced by $M^1_k$ and $\pi^1_k$. Roughly speaking, (2) says that $\min_{\pi^2} \rho(M^1_k, \pi^1_k, \pi^2, s)$ should approximate $\max_{\tilde{M}\in\mathcal{M}_k, \pi^1} \min_{\pi^2} \rho(\tilde{M}, \pi^1, \pi^2, s)$ to within $\gamma_k$. That is, $(M^1_k, \pi^1_k)$ is picked optimistically in $\mathcal{M}_k \times \Pi^{\mathrm{SR}}$, accounting for the most adversarial opponent.

4.1 Extended SG and MAXIMIN-EVI

The calculation of $M^1_k$ and $\pi^1_k$ involves the technique of Extended Value Iteration (EVI), which also appears in [19] in a one-player version. Consider the following SG, named $M^+$. The state space $\mathcal{S}$ and Player 2's action space $\mathcal{A}^2$ remain the same as in $M$. Let $\mathcal{A}^{1+}$, $p^+(\cdot|\cdot,\cdot,\cdot)$, and $r^+(\cdot,\cdot,\cdot)$ be Player 1's action set, the transition kernel, and the reward function of $M^+$, such that for any $a^1 \in \mathcal{A}^1$, $a^2 \in \mathcal{A}^2$, and admissible transition probability $\tilde{p}(\cdot|s, a^1, a^2) \in P_k(s, a^1, a^2)$, there is an action $a^{1+} \in \mathcal{A}^{1+}$ with $p^+(\cdot|s, a^{1+}, a^2) = \tilde{p}(\cdot|s, a^1, a^2)$ and $r^+(s, a^{1+}, a^2) = r(s, a^1, a^2)$. In other words, Player 1 selecting an action in $\mathcal{A}^{1+}$ is equivalent to selecting an action in $\mathcal{A}^1$ and simultaneously selecting an admissible transition probability in the confidence region $P_k(\cdot,\cdot)$. Suppose $M \in \mathcal{M}_k$; then the extended SG $M^+$ satisfies Assumption 2 because the true model $M$ is embedded in $M^+$. By Theorem E.1 in Appendix E, $M^+$ has a constant game value $\rho^*(M^+)$ independent of the initial state, and satisfies a Bellman equation of the form $\operatorname{val}\{r + Pf\} = \rho \cdot e + f$ for some bounded function $f(\cdot)$, where $e$ stands for the all-ones vector. Under these conditions, we can use value iteration with the Schweitzer transform (a.k.a. aperiodic transform) [34] to solve for the optimal policy in the extended SG $M^+$. We call this procedure MAXIMIN-EVI.
For the details of MAXIMIN-EVI, please refer to Appendix F. We summarize its guarantee in the following lemma.

Lemma 4.1. Suppose the true model M ∈ Mk. Then the estimated model M1 k and the stationary policy π1 k output by MAXIMIN-EVI in Step 3 satisfy ∀s, min π2 ρ(M1 k, π1 k, π2, s) ≥ max π1 min π2 ρ(M, π1, π2, s) − γk.

Before diving into the analysis under the two assumptions, we first establish the following fact.

Lemma 4.2. With high probability, the true model M ∈ Mk for all phases k.

It is proved in Appendix D. With Lemma 4.2, we can safely assume M ∈ Mk in most of our analysis.

5 Analysis under Assumption 1

In this section, we import analysis techniques from one-player MDPs [2, 19, 22, 9]. We also develop some techniques for dealing with non-stationary opponents. We model Player 2's behavior in the most general way, i.e., we assume that it uses a history-dependent randomized policy. Let Ht = (s1, a1, r1, ..., st−1, at−1, rt−1, st) ∈ Ht be the history up to st; then we assume π2 t to be a mapping from Ht to a distribution over A2. We will simply write π2 t(·) and hide its dependency on Ht in the subscript t. A similar definition applies to π1 t(·). With a slight abuse of notation, we denote by k(t) the phase in which step t lies; thus our algorithm uses the policy π1 t(·) = π1 k(t)(·|st). The notations π1 t and π1 k are used interchangeably. Let Tk := tk+1 − tk be the length of phase k. We decompose the regret in phase k as follows:

Λk := Tk ρ∗(M) − Σ_{t=tk}^{tk+1−1} r(st, at) = Σ_{n=1}^{4} Λ(n) k, (3)

in which we define
Λ(1) k = Tk (ρ∗(M) − min π2 ρ(M1 k, π1 k, π2, stk)),
Λ(2) k = Tk (min π2 ρ(M1 k, π1 k, π2, stk) − ρ(M1 k, π1 k, ¯π2 k, stk)),
Λ(3) k = Tk (ρ(M1 k, π1 k, ¯π2 k, stk) − ρ(M, π1 k, ¯π2 k)),
Λ(4) k = Tk ρ(M, π1 k, ¯π2 k) − Σ_{t=tk}^{tk+1−1} r(st, at),

where ¯π2 k is a stationary policy of Player 2 that will be defined later. Since the actions of Player 2 are arbitrary, ¯π2 k is imaginary and exists only in the analysis.
Note that under Assumption 1, any stationary policy pair on M induces an irreducible Markov chain, so we do not need to specify the initial state for ρ(M, π1 k, ¯π2 k) in (3). Among the four terms, Λ(2) k is clearly non-positive, and Λ(1) k, by optimism, can be bounded using Lemma 4.1. It remains to bound Λ(3) k and Λ(4) k.

5.1 Bounding Σ_k Λ(3) k and Σ_k Λ(4) k

The introduction of ¯π2 k. Λ(3) k and Λ(4) k involve the artificial policy ¯π2 k, a stationary policy that replaces Player 2's non-stationary policy in the analysis. This replacement costs some constant regret but enables the use of perturbation analysis in bounding the regret. The selection of ¯π2 k is based on the principle that the behavior (e.g., the total number of visits to some (s, a)) of the Markov chain induced by M, π1 k, ¯π2 k should be close to the empirical statistics. Intuitively, ¯π2 k can be defined as

¯π2 k(a2|s) := ( Σ_{t=tk}^{tk+1−1} 1{st = s} π2 t(a2) ) / ( Σ_{t=tk}^{tk+1−1} 1{st = s} ). (4)

Note two things, however. First, since the actual trajectory is needed to define this policy, it can only be defined after phase k has ended. Second, ¯π2 k can be undefined because the denominator of (4) can be zero. However, this does not happen in too many steps; indeed, we have

Lemma 5.1. Σ_k Tk 1{¯π2 k not well-defined} ≤ Õ(DS²A) with high probability.

Before describing how we bound the regret with the help of ¯π2 k and the perturbation analysis, we establish the following lemma.

Lemma 5.2. We say the transition probability at time step t is ε-accurate if |p1 k(s′|st, πt) − p(s′|st, πt)| ≤ ε for all s′, where p1 k denotes the transition kernel of M1 k. Let Bt(ε) = 1 if the transition probability at time t is ε-accurate, and Bt(ε) = 0 otherwise. Then for any state s, with high probability, Σ_{t=1}^{T} 1{st = s} 1{Bt(ε) = 0} ≤ Õ(A/ε²).

We are now able to sketch the logic behind our proofs.
Let us assume that ¯π2 k models π2 k quite well, i.e., the expected frequency of every state-action pair induced by M, π1 k, ¯π2 k is close to the empirical frequency induced by M, π1 k, π2 k. Then Λ(4) k is clearly close to zero in expectation. The term Λ(3) k now becomes the difference in average reward between two Markov reward processes with slightly different transition probabilities. This term has a single-player counterpart in [19]. Using a similar analysis, we can show that the dominant term of Λ(3) k is proportional to sp(h(M1 k, π1 k, ¯π2 k, ·)). In the single-player case, [19] can directly claim that sp(h(M1 k, π1 k, ·)) ≤ D (see their Remark 8), but unfortunately this is not the case in the two-player version (see footnote 4). To continue, we resort to the perturbation analysis of the mean first passage times (developed in Appendix C). Lemma 5.2 shows that M1 k is not far from M for too many steps. Theorem C.9 in Appendix C then shows that if M1 k is close enough to M, T π1 k,¯π2 k(M1 k) can be bounded by 2 T π1 k,¯π2 k(M). As Remark M.1 implies that sp(h(M1 k, π1 k, ¯π2 k, ·)) ≤ T π1 k,¯π2 k(M1 k) and Assumption 1 guarantees that T π1 k,¯π2 k(M) ≤ D, we have sp(h(M1 k, π1 k, ¯π2 k, ·)) ≤ T π1 k,¯π2 k(M1 k) ≤ 2 T π1 k,¯π2 k(M) ≤ 2D.

The above approach leads to Lemma 5.3, a key step in our analysis. We first introduce some notation. Under Assumption 1, any pair of stationary policies induces an irreducible Markov chain, which has a unique stationary distribution. If the policy pair π = (π1, π2) is executed, we denote its stationary distribution by µ(M, π1, π2, ·) = µ(M, π, ·). Besides, denote vk(s) := Σ_{t=tk}^{tk+1−1} 1{st = s}. We say a phase k is benign if the following hold: the true model M lies in Mk, ¯π2 k is well-defined, sp(h(M1 k, π1 k, ¯π2 k, ·)) ≤ 2D, and µ(M, π1 k, ¯π2 k, s) ≤ 2vk(s)/Tk for all s. We can show the following.

Lemma 5.3. Σ_k Tk 1{phase k is not benign} ≤ Õ(D³S⁵A) with high probability.
Finally, for benign phases, we have the following two lemmas.

Lemma 5.4. Σ_k Λ(4) k 1{¯π2 k is well-defined} ≤ Õ(D√(ST) + DSA) with high probability.

Footnote 4: The argument in [19] is simple: suppose that h(M1 k, π1 k, s) − h(M1 k, π1 k, s′) > D; by the communicating assumption, there is a path from s′ to s with expected time no more than D. Thus a policy that first goes from s′ to s within D steps and then executes π1 k would outperform π1 k at s′, a contradiction. In two-player SGs, a similar argument shows that sp(h(M1 k, π1 k, π2∗ k, ·)) ≤ D, where π2∗ k is the best response to π1 k under M1 k. However, since Player 2 is uncontrollable, his/her policy π2 k (or ¯π2 k) can be quite different from π2∗ k, and thus sp(h(M1 k, π1 k, ¯π2 k, ·)) ≤ D does not necessarily hold.

Lemma 5.5. Σ_k Λ(3) k 1{phase k is benign} ≤ Õ(DS√(AT) + DS²A) with high probability.

Proof of Theorem 3.1. The regret proof starts from the decomposition (3). Λ(1) k is bounded with the help of Lemma 4.1: Σ_k Λ(1) k ≤ Σ_k Tk/√tk = O(√T). Σ_k Λ(2) k ≤ 0 by definition. Then, with Lemmas 5.1, 5.3, 5.4, and 5.5, we can bound Σ_k Λ(3) k and Σ_k Λ(4) k by Õ(D³S⁵A + DS√(AT)).

6 Analysis under Assumption 2

In Section 5, the main ingredient of the regret analysis was bounding the span of the bias vector, sp(h(M1 k, π1 k, ¯π2 k, ·)). The same approach does not work here because, under the weaker Assumption 2, we do not have a bound on the mean first passage time under arbitrary policy pairs. Hence we adopt the approach of approximating the average-reward SG by a sequence of finite-horizon SGs. On a high level: first, with the help of Assumption 2, we approximate T times the game value of the original average-reward SG (i.e., the total reward in hindsight) by the sum of the values of H-step episodic SGs; second, we resort to the results of [9] to bound the sample complexity of the H-step SGs and translate it into regret.

Approximation by repeated episodic SGs.
For the approximation, the quantity H does not appear in UCSG but only in the analysis. The horizon T is divided into episodes of length H. Index the episodes by i = 1, ..., T/H, and denote episode i's first time step by τi. We say i ∈ ph(k) if all H steps of episode i lie in phase k. Define the H-step expected reward under a joint policy π with initial state s as VH(M, π, s) := E[ Σ_{t=1}^{H} rt | at ∼ π, s1 = s ]. Now we decompose the regret in phase k as

∆k := Tk ρ∗ − Σ_{t=tk}^{tk+1−1} r(st, at) ≤ Σ_{n=1}^{6} ∆(n) k, (5)

where
∆(1) k = Σ_{i∈ph(k)} H (ρ∗ − min π2 ρ(M1 k, π1 k, π2, sτi)),
∆(2) k = Σ_{i∈ph(k)} (H min π2 ρ(M1 k, π1 k, π2, sτi) − min π2 VH(M1 k, π1 k, π2, sτi)),
∆(3) k = Σ_{i∈ph(k)} (min π2 VH(M1 k, π1 k, π2, sτi) − VH(M1 k, π1 k, π2 i, sτi)),
∆(4) k = Σ_{i∈ph(k)} (VH(M1 k, π1 k, π2 i, sτi) − VH(M, π1 k, π2 i, sτi)),
∆(5) k = Σ_{i∈ph(k)} (VH(M, π1 k, π2 i, sτi) − Σ_{t=τi}^{τi+1−1} r(st, at)),
∆(6) k = 2H.

Here, π2 i denotes Player 2's policy in episode i, which may be non-stationary. ∆(6) k accounts for the (at most) two incomplete episodes in phase k. ∆(1) k is related to the tolerance level we set for the MAXIMIN-EVI algorithm: ∆(1) k ≤ Tk γk = Tk/√tk. ∆(2) k is the error caused by approximating an infinite-horizon SG by a repeated episodic H-step SG (with possibly different initial states). ∆(3) k is clearly non-positive. It remains to bound ∆(2) k, ∆(4) k, and ∆(5) k.

Lemma 6.1. By the Azuma-Hoeffding inequality, Σ_k ∆(5) k ≤ Õ(√(HT)) with high probability.

Lemma 6.2. Under Assumption 2, Σ_k ∆(2) k ≤ TD/H + Σ_k Tk γk.

From sample complexity to regret bound. As the main contributor to the regret, ∆(4) k corresponds to the inaccuracy of the transition probability estimates. Here we largely reuse the results of [9], who consider one-player episodic MDPs with a fixed initial state distribution. Their main lemma states that the number of episodes in phases such that |VH(M1 k, πk, s0) − VH(M, πk, s0)| > ε will not exceed Õ(H²S²A/ε²), where s0 is their initial state in each episode.
In other words, Σ_k (Tk/H) 1{|VH(M1 k, πk, s0) − VH(M, πk, s0)| > ε} = Õ(H²S²A/ε²). Note that their proof allows πk to be an arbitrarily selected non-stationary policy for phase k. We can directly utilize their analysis, which we summarize as Theorem K.1 in the appendix. While their algorithm has an input ε, this input can be removed without affecting the bounds; this means that the PAC bound holds for arbitrarily selected ε. With the help of Theorem K.1, we have

Lemma 6.3. Σ_k ∆(4) k ≤ Õ(S√(HAT) + HS²A) with high probability.

Proof of Theorem 3.2. With the decomposition (5) and Lemmas 6.1, 6.2, and 6.3, the regret is bounded by Õ(TD/H + S√(HAT) + S²AH) = Õ((DS²AT²)^{1/3}) by selecting H = max{D, (D²T/(S²A))^{1/3}}.

7 Sample Complexity of Offline Training

In Section 3.1, we defined Lε to be the sample complexity of Player 1's maximin policy. In the offline version of UCSG, in each phase k we let both players select their own optimistic policies. After Player 1 has optimistically selected π1 k, Player 2 optimistically selects his policy π2 k based on the known π1 k. Specifically, the model-policy pair M2 k, π2 k is obtained by another extended value iteration on the extended MDP under fixed π1 k, where Player 2's action set is extended. Setting the stopping threshold also to γk, we have

ρ(M2 k, π1 k, π2 k, s) ≤ min ˜M∈Mk min π2 ρ( ˜M, π1 k, π2, s) + γk (6)

when value iteration halts. With this selection rule, we obtain the following theorems.

Theorem 7.1. Under Assumption 1, UCSG achieves Lε = Õ(D³S⁵A + D²S²A/ε²) w.h.p.

Theorem 7.2. Let Assumption 2 hold, and further assume that max_{s,s′} max_{π1∈ΠSR} min_{π2∈ΠSR} T π1,π2 s→s′(M) ≤ D. Then UCSG achieves Lε = Õ(DS²A/ε³) w.h.p.

If we run the offline version of UCSG for T > Lε steps, the algorithm can output a single stationary policy for Player 1 that is ε-optimal.
We show how to output this policy in the proofs of Theorems 7.1 and 7.2.

8 Open Problems

In this work, we obtained regret bounds of Õ(D³S⁵A + DS√(AT)) and Õ((DS²AT²)^{1/3}) under different mixing assumptions. A natural open problem is to improve these bounds, in both the asymptotic and the constant terms. A lower bound can be inherited from the one-player MDP setting: Ω(√(DSAT)) [19]. Another open problem is whether the SG can still be learned if we further weaken the assumption to max_{s,s′} min_{π1} min_{π2} T π1,π2 s→s′ ≤ D. We have argued that with this assumption alone we cannot, in general, obtain sublinear regret in the online setting. However, it is still possible to obtain polynomial-time offline sample complexity if the two players cooperate to explore the state-action space.

Acknowledgments

We would like to thank all the anonymous reviewers who devoted their time to reviewing this work and gave us valuable feedback. We would like to give special thanks to the reviewer who reviewed this work's previous version at ICML; your detailed check of our proofs greatly improved the quality of this paper.

References

[1] Yasin Abbasi, Peter L Bartlett, Varun Kanade, Yevgeny Seldin, and Csaba Szepesvári. Online learning in Markov decision processes with adversarially chosen transition probability distributions. In Advances in Neural Information Processing Systems, 2013.
[2] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems, 2007.
[3] Peter L Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.
[4] Michael Bowling and Manuela Veloso. Rational and convergent learning in stochastic games. In International Joint Conference on Artificial Intelligence, 2001.
[5] Ronen I Brafman and Moshe Tennenholtz.
R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 2002.
[6] Sébastien Bubeck and Aleksandrs Slivkins. The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, 2012.
[7] Grace E Cho and Carl D Meyer. Markov chain sensitivity measured by mean first passage times. Linear Algebra and Its Applications, 2000.
[8] Vincent Conitzer and Tuomas Sandholm. AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Machine Learning, 2007.
[9] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, 2015.
[10] Travis Dick, Andras Gyorgy, and Csaba Szepesvari. Online learning in Markov decision processes with changing cost sequences. In Proceedings of the International Conference on Machine Learning, 2014.
[11] Eyal Even-Dar, Sham M Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 2009.
[12] Awi Federgruen. On n-person stochastic games with denumerable state space. Advances in Applied Probability, 1978.
[13] Aurélien Garivier, Emilie Kaufmann, and Wouter M Koolen. Maximin action identification: A new bandit framework for games. In Conference on Learning Theory, pages 1028–1050, 2016.
[14] Arie Hordijk. Dynamic programming and Markov potential theory. MC Tracts, 1974.
[15] Jeffrey J Hunter. Generalized inverses and their application to applied probability problems. Linear Algebra and Its Applications, 1982.
[16] Jeffrey J Hunter. Stationary distributions and mean first passage times of perturbed Markov chains. Linear Algebra and Its Applications, 2005.
[17] Garud N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.
[18] Tommi Jaakkola, Michael I Jordan, and Satinder P Singh.
On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 1994.
[19] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 2010.
[20] Sham Machandranath Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, England, 2003.
[21] Michail G Lagoudakis and Ronald Parr. Value function approximation in zero-sum Markov games. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 2002.
[22] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory. Springer, 2012.
[23] Shiau Hong Lim, Huan Xu, and Shie Mannor. Reinforcement learning in robust Markov decision processes. Mathematics of Operations Research, 41(4):1325–1353, 2016.
[24] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the International Conference on Machine Learning, 1994.
[25] A Maurer and M Pontil. Empirical Bernstein bounds and sample variance penalization. In Conference on Learning Theory, 2009.
[26] J-F Mertens and Abraham Neyman. Stochastic games. International Journal of Game Theory, 1981.
[27] Gergely Neu, Andras Antos, András György, and Csaba Szepesvári. Online Markov decision processes under bandit feedback. In Advances in Neural Information Processing Systems, 2010.
[28] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS, 2012.
[29] Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.
[30] Julien Perolat, Bruno Scherrer, Bilal Piot, and Olivier Pietquin. Approximate dynamic programming for two-player zero-sum Markov games.
In Proceedings of the International Conference on Machine Learning, 2015.
[31] HL Prasad, Prashanth LA, and Shalabh Bhatnagar. Two-timescale algorithms for learning Nash equilibria in general-sum stochastic games. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[32] Lloyd S Shapley. Stochastic games. Proceedings of the National Academy of Sciences, 1953.
[33] Csaba Szepesvári and Michael L Littman. Generalized Markov decision processes: Dynamic-programming and reinforcement-learning algorithms. In Proceedings of the International Conference on Machine Learning, 1996.
[34] J Van der Wal. Successive approximations for average reward Markov games. International Journal of Game Theory, 1980.
[35] Jia Yuan Yu and Shie Mannor. Arbitrarily modulated Markov decision processes. In Proceedings of the Conference on Decision and Control. IEEE, 2009.
Independence clustering (without a matrix) Daniil Ryabko INRIA Lille, 40 avenue de Halley, Villeneuve d'Ascq, France daniil@ryabko.net

Abstract

The independence clustering problem is considered in the following formulation: given a set S of random variables, it is required to find the finest partitioning {U1, . . . , Uk} of S into clusters such that the clusters U1, . . . , Uk are mutually independent. Since mutual independence is the target, pairwise similarity measurements are of no use, and thus traditional clustering algorithms are inapplicable. The distribution of the random variables in S is, in general, unknown, but a sample is available. Thus, the problem is cast in terms of time series. Two forms of sampling are considered: i.i.d. and stationary time series, with the main emphasis being on the latter, more general, case. A consistent, computationally tractable algorithm for each of the settings is proposed, and a number of fascinating open directions for further research are outlined.

1 Introduction

Many applications face the situation where a set S = {x1, . . . , xN} of samples has to be divided into clusters in such a way that inside each cluster the samples are dependent, but the clusters between themselves are as independent as possible. Here each xi may itself be a sample or a time series xi = Xi 1, . . . , Xi n. For example, in financial applications, xi can be a series of recordings of the prices of a stock i over time. The goal is to find the segments of the market such that different segments evolve independently, but within each segment the prices are mutually informative [15, 17]. In biological applications, each xi may be a DNA sequence, or may represent gene expression data [28, 20], or, in other applications, an fMRI record [4, 13].
The staple approach to this problem in applications is to construct a matrix of (pairwise) correlations between the elements, and to use traditional clustering methods, e.g., linkage-based methods or k-means and its variants, with this matrix [15, 17, 16]. If mutual information is used, it is used as a (pairwise) proximity measure between individual inputs, e.g., [14]. We remark that pairwise independence is but a surrogate for (mutual) independence, and, in addition, correlation is a surrogate for pairwise independence. There is, however, no need to resort to surrogates unless forced to do so by statistical or computational hardness results. We therefore propose to reformulate the problem from first principles, and then show that it is indeed solvable both statistically and computationally, but calls for completely different algorithms. The formulation proposed is as follows. Given a set S = (x1, . . . , xN) of random variables, it is required to find the finest partitioning {U1, . . . , Uk} of S into clusters such that the clusters U1, . . . , Uk are mutually independent. To our knowledge, this problem in its full generality has not been addressed before. A similar informal formulation appears in the work [1], which is devoted to optimizing a generalization of the ICA objective. However, the actual problem considered there only concerns the case of tree-structured dependence, which allows for a solution based on pairwise measurements of mutual information.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Note that in the fully general case pairwise measurements are useless, as are, furthermore, bottom-up (e.g., linkage-based) approaches. Thus, in particular, a proximity matrix cannot be used for the analysis. Indeed, it is easy to construct examples in which any pair, or any small group of elements, is independent, but becomes dependent when the same group is considered jointly with more elements.
For instance, consider a group of Bernoulli-1/2-distributed random variables x1, . . . , xN+1, where x1, . . . , xN are i.i.d. and xN+1 = (Σ_{i=1}^{N} xi) mod 2. Any N out of these N + 1 random variables are i.i.d., but taken together the N + 1 are dependent. Now add two more groups like this, say y1, . . . , yN+1 and z1, . . . , zN+1, with the exact same distribution and with the groups of x, y and z mutually independent. Naturally, these are the three clusters we would want to recover. However, if we try to cluster the union of the three, then any algorithm based on pairwise correlations will return an essentially arbitrary result. What is more, if we try to find clusters that are pairwise independent, then, for example, the clustering {(xi, yi, zi)}i=1..N+1 of the input set into N + 1 clusters appears correct, but, in fact, the resulting clusters are dependent. Of course, real-world data does not come in the form of summed-up Bernoulli variables, but this simple example shows that considering independence of small subsets may be very misleading. The problem considered is split into two parts, addressed separately: the computational part and the statistical part. This is done by first considering the problem assuming that the joint distribution of all the random variables is known and is accessible via an oracle; thus, the problem becomes computational. A simple, computationally efficient algorithm is proposed for this case. We then proceed to the time-series formulations: the distribution of (x1, . . . , xN) is unknown, but a sample (X1 1, . . . , XN 1), . . . , (X1 n, . . . , XN n) is provided, so that xi can be identified with the time series Xi 1, . . . , Xi n. The sample may be either independent and identically distributed (i.i.d.) or, in a more general formulation, stationary. As one might expect, relying on existing statistical machinery, the case of known distributions can be directly extended to the case of i.i.d. samples.
Thus, we show that it is possible to replace the oracle access with statistical tests and estimators, and then use the same algorithm as in the case of known distributions. The general case of stationary samples turns out to be much more difficult, in particular because of a number of strong impossibility results. In fact, it is already challenging to determine what is possible and what is not from the statistical point of view. In this case, it is not possible to replicate the oracle access to the distribution, but only a weaker version of it that we call a fickle oracle. We find that, in this case, a consistent algorithm is only possible when k is known. An algorithm that has this property is constructed. This algorithm is computationally feasible when the number of clusters k is small, as its complexity is O(N^{2k}). Besides, a measure of information divergence is proposed for time-series distributions that may be of independent interest, since it can be estimated consistently without any assumptions at all on the distributions or their densities (the latter may not exist). The main results of this work are theoretical. The goal is to determine, as a first step, what is possible and what is not, from both the statistical and the computational points of view. The main emphasis is placed on highly dependent time series, as warranted by the applications cited above, leaving experimental investigations for future work. The contribution can be summarized as follows:
• a consistent, computationally feasible algorithm for known distributions and an unknown number of clusters, with an extension to the case of unknown distributions and i.i.d.
samples;
• an algorithm that is consistent under stationary ergodic sampling with arbitrary, unknown distributions, but with a known number k of clusters;
• an impossibility result for clustering stationary ergodic samples with k unknown;
• an information divergence measure for stationary ergodic time-series distributions, along with an estimator that is consistent without any extra assumptions.
In addition, an array of open problems and exciting directions for future work is proposed.

Related work. Apart from the work on independence clustering mentioned above, it is worth pointing out the relation to some other problems. First, the proposed problem formulation can be viewed as a Bayesian-network learning problem: given an unknown network, it is required to split it into independent clusters. In general, learning a Bayesian network is NP-hard [5], even for rather restricted classes of networks (e.g., [18]). The problem considered here is much less general, which is why it admits a polynomial-time solution. A related clustering problem, proposed in [23] (see also [12]), is clustering time series with respect to distribution. There, it is required to put two time-series samples x1, x2 into the same cluster if and only if their distribution is the same. Similarly to the independence clustering introduced here, this problem admits a consistent algorithm if the samples are i.i.d. (or mixing) and the number of distributions (clusters) is unknown, and in the case of stationary ergodic samples if and only if k is known.

2 Set-up and preliminaries

A set S := {x1, . . . , xN} is given, where we either assume that the joint distribution of the xi is known, or else that the distribution is unknown but a sample (X1 1, . . . , X1 n), . . . , (XN 1, . . . , XN n) is given. In the latter case, we identify each xi with the sequence (sample) Xi 1, . . . , Xi n, or Xi 1..n for short, of length n.
The lengths of the samples are the same only for the sake of notational convenience; it is easy to generalize all the algorithms to the case of different sample lengths ni, but the asymptotics would then be with respect to n := min_{i=1..N} ni. It is assumed that the Xi j ∈ X := R are real-valued, but extensions to more general cases are straightforward. For random variables A, B, C we write (A ⊥ B)|C to say that A is conditionally independent of B given C, and A ⊥ B ⊥ C to say that A, B and C are mutually independent. The (unique up to a permutation) partitioning U := {U1, . . . , Uk} of the set S is called the ground-truth clustering if U1, . . . , Uk are mutually independent (U1 ⊥ · · · ⊥ Uk) and no refinement of U has this property. A clustering algorithm is consistent if it outputs the ground-truth clustering, and it is asymptotically consistent if, with probability 1, it outputs the ground-truth clustering from some n on. For a discrete A-valued random variable X, its Shannon entropy is defined as H(X) := −Σ_{a∈A} P(X = a) log P(X = a), with the convention 0 log 0 = 0. For a distribution with a density f, the (differential) entropy is defined as H(X) := −∫ f(x) log f(x) dx. For two random variables X, Y, their mutual information is defined as I(X, Y) := H(X) + H(Y) − H(X, Y). For discrete random variables, as well as for continuous ones with a density, X ⊥ Y if and only if I(X, Y) = 0; see, e.g., [6]. Likewise, I(X1, . . . , Xm) is defined as Σ_{i=1}^{m} H(Xi) − H(X1, . . . , Xm). For the sake of convenience, in the next two sections we make the assumption stated below; however, we will show (Sections 5 and 6) that this assumption can be dispensed with as well.

Assumption 1. All distributions in question have densities bounded away from zero on their support.

3 Known distributions

As with any statistical problem, it is instructive to start with the case where the (joint) distribution of all the random variables in question is known.
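For discrete distributions, the identity I(X, Y) = H(X) + H(Y) − H(X, Y) above translates directly into code. A minimal Python sketch over an explicit joint law, with entropies in bits (the function names are ours):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X, Y) = H(X) + H(Y) - H(X, Y) for a joint law {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)
```

For the product law of two fair coins the result is 0, while for two perfectly correlated fair coins it is 1 bit, matching the characterization X ⊥ Y iff I(X, Y) = 0.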
Finding out what can be done, and how, in this case helps us set the goals for the (more realistic) case of unknown distributions. Thus, in this section, x1, . . . , xN are not time series but random variables whose joint distribution is known to the statistician. The access to this distribution is via an oracle; specifically, our oracle answers the following questions about mutual information (where, for convenience, we take the mutual information with the empty set to be 0):

Oracle TEST. Given sets of random variables A, B, C, D ⊂ {x1, . . . , xN}, answer whether I(A, B) > I(C, D).

Remark 1 (Conditional independence oracle). Equivalently, one can consider an oracle that answers conditional independence queries of the form (A ⊥ B)|C. The definition above is chosen for the sake of continuity with the next section, and it also makes the algorithm below more intuitive. However, in order to test conditional independence statistically, one does not have to use mutual information; any other divergence measure can be used instead.

The proposed algorithm (see the pseudocode listing below) works as follows. It attempts to split the input set recursively into two independent clusters, until this is no longer possible. To split a set in two, it starts by putting one element x from the input set S into a candidate cluster C := {x}, and measures its mutual information I(C, R) with the rest of the set, R := S \ C. If I(C, R) is already 0, then we have split the set into two independent clusters and can stop. Otherwise, the algorithm takes the elements out of R one by one, without replacement, and each time checks whether I(C, R) has decreased. Once such an element is found, it is moved from R to C and the process is restarted from the beginning with C thus updated.
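The splitting procedure just described can be sketched in Python, with the oracle replaced by exact mutual-information computations over an enumerated discrete joint distribution (a toy stand-in for Oracle TEST; the function names, the numerical tolerance EPS, and the discrete setting are ours):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def project(joint, coords):
    """Marginal of a joint law over tuples onto a set of coordinate indices."""
    m = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in sorted(coords))
        m[key] = m.get(key, 0.0) + p
    return m

def mi(joint, A, B):
    """Exact I(A; B) between two disjoint sets of coordinates."""
    if not A or not B:
        return 0.0
    return (entropy(project(joint, A)) + entropy(project(joint, B))
            - entropy(project(joint, A | B)))

EPS = 1e-9  # numerical stand-in for the oracle's exact comparisons

def split(joint, S):
    """The Split procedure: grow C from one element until I(C; S\\C) = 0."""
    S = set(S)
    C = {min(S)}
    while mi(joint, C, S - C) > EPS:
        R = set(S) - C
        for x in sorted(R):
            if mi(joint, C, R) - mi(joint, C, R - {x}) > EPS:
                C.add(x)      # removing x decreased I(C, R): x belongs with C
                break
            R.discard(x)      # move x to M, i.e. drop it from R and continue
    return C, S - C

def clin(joint, S):
    """Recursively split until no further independent split exists."""
    C1, C2 = split(joint, set(S))
    return clin(joint, C1) + clin(joint, C2) if C2 else [C1]
```

On the toy law where x1 = x0 and x3 = x2 with the two pairs independent, `clin` recovers the clusters {0, 1} and {2, 3}; on the parity example x2 = x0 XOR x1 (where all pairs are independent), it correctly keeps all three variables in a single cluster.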
Note that if we have started with I(C, R) > 0, then, taking elements out of R without replacement, we should eventually find one that decreases I(C, R), since I(C, ∅) = 0 and I(C, R) cannot increase in the process.

Theorem 1. The algorithm CLIN outputs the correct clustering using at most 2kN² oracle calls.

Proof. We shall first show that the splitting procedure indeed splits the input set into two independent sets if and only if two such sets exist. First, note that if I(C, S \ C) = 0, then C ⊥ R and the function terminates. In the opposite case, when I(C, S \ C) > 0, removing an element from R := S \ C can only decrease I(C, R) (indeed, h(C|R) ≤ h(C|R \ {x}) by the information processing inequality). When all elements have been removed, I(C, R) = I(C, ∅) = 0, so there must be an element whose removal decreases I(C, R). When such an element x is found, it is moved to C; note that, in this case, x is indeed not independent of C. However, it is possible that removing an element x from R does not reduce I(C, R), and yet x is not independent of C. This is why the while loop is needed: the whole process has to be repeated until no more elements can be moved to C. By the end of each for loop, we have either found at least one element to move to C, or we have ensured that C ⊥ S \ C, and the loop terminates. Since there are only finitely many elements in S \ C, the while loop eventually terminates. Moreover, each of the two loops (while and for) terminates in at most N iterations. Finally, notice that if (C1, C2) ⊥ C3 and C1 ⊥ C2, then also C1 ⊥ C2 ⊥ C3, which means that by applying the Split function recursively we find the correct clustering. The bound on the number of oracle calls follows by direct calculation.

4 I.I.D. sampling
(C1, C2) := Split(S)
if C2 ≠ ∅ then
    Output: CLIN(C1), CLIN(C2)
else
    Output: C1
end if

Function Split(Set S of samples)
    Initialize: C := {x_1}, R := S \ C, M := {}
    while TEST( I(C; R) > 0 ) do
        for each x ∈ R do
            if TEST( I(C; R) > I(C; R \ {x}) ) then
                move x from R to C
                break the for loop
            else
                move x from R to M
            end if
        end for
        M := {}, R := S \ C
    end while
    Return (C, R)
END function

In this section we assume that the distribution of (x_1, …, x_N) is not known, but an i.i.d. sample (X^1_1, …, X^N_1), …, (X^1_n, …, X^N_n) is provided. We identify x_i with the (i.i.d.) time series X^i_{1..n}. Formally, N X-valued processes are just a single X^N-valued process. The latter can be seen as a matrix (X^i_j), i = 1..N, j = 1..∞, where each row i is the sample x_i = X^i_{1..n} and each column j is what is observed at time j: X^1_j, …, X^N_j.

The case of i.i.d. samples is not much different from the case of a known distribution. What we need is to replace the oracle test with (nonparametric) statistical tests. First, a test for independence is needed to replace the oracle call TEST(I(C, R) > 0) in the while loop. Such tests are indeed available; see, for example, [8]. Second, we need an estimator of mutual information I(X, Y) or, which is sufficient, of the entropies, but with a rate of convergence. If the rate of convergence is known to be asymptotically bounded by, say, t(n), then, in order to construct an asymptotically consistent test, we can take any t′(n) → 0 such that t(n) = o(t′(n)) and decide the inequality as follows: if Î(C; R \ {x}) < Î(C; R) − t′(n), then say that I(C; R \ {x}) < I(C; R). The required rates of convergence, which are of order √n under Assumption 1, can be found in [3]. Given the result of the previous section, it is clear that if the oracle is replaced by the tests described, then CLIN is a.s. consistent. Thus, we have demonstrated the following.

Theorem 2. Under Assumption 1, there is an asymptotically consistent algorithm for independence clustering with i.i.d.
sampling.

Remark 2 (Necessity of the assumption). The independence test of [8] does not need Assumption 1, as it is distribution-free. Since mutual information is defined in terms of densities, if we want to get rid of Assumption 1 completely, we need to use some other measure of dependence for the test. One such measure is defined in the next section for the general case of process distributions. However, the rates of convergence of its empirical estimates under i.i.d. sampling remain to be studied.

Remark 3 (Estimators vs. tests). As noted in Remark 1, the tests we are using are, in fact, tests for (conditional) independence: testing I(C; R) > I(C; R \ {x}) is testing for (C ⊥ {x}) | R \ {x}. Conditional independence can be tested directly, without estimating I (see, for example, [27]), potentially allowing for tighter performance guarantees under more general conditions.

5 Stationary sampling

We now enter the hard mode. The general case of stationary sampling presents numerous obstacles, often in the form of theoretical impossibility results: there are (provably) no rates of convergence and no independence test, and zero mutual information rate does not guarantee independence. Besides, some simple-looking questions regarding the existence of consistent tests, which have simple answers in the i.i.d. case, remain open in the stationary ergodic case. Despite all this, a computationally feasible, asymptotically consistent independence clustering algorithm can be obtained, although only for the case of a known number of clusters. This parallels the situation of clustering according to the distribution [23, 12]. In this section we assume that the distribution of (x_1, …, x_N) is not known, but a jointly stationary ergodic sample (X^1_1, …, X^N_1), …, (X^1_n, …, X^N_n) is provided. Thus, x_i is a stationary ergodic time series X^i_{1..n}. This is also where we drop Assumption 1; in particular, densities do not have to exist.
This new, relaxed set of assumptions can be interpreted as using a weaker oracle, as explained in Remark 5 below. We start with preliminaries about stationary processes, followed by impossibility results, and conclude with an algorithm for the case of known k.

5.1 Preliminaries: stationary ergodic processes

Stationarity, ergodicity, information rate. (Time-series) distributions, or processes, are measures on the space (X^∞, F_{X^∞}), where F_{X^∞} is the Borel sigma-algebra of X^∞. Recall that N X-valued processes are just a single X^N-valued process, so the distributions are on the space ((X^N)^∞, F_{(X^N)^∞}); this will often be left implicit. For a sequence x ∈ X^n and a measurable set B, denote by ν(x, B) the frequency with which the sequence x falls in the set B. A process ρ is stationary if ρ(X_{1..|B|} = B) = ρ(X_{t..t+|B|−1} = B) for any measurable B ∈ X^* and t ∈ N. We further abbreviate ρ(B) := ρ(X_{1..|B|} = B). A stationary process ρ is called (stationary) ergodic if the frequency of occurrence of each measurable B ∈ X^* in a sequence X_1, X_2, … generated by ρ tends to its a priori (or limiting) probability a.s.: ρ(lim_{n→∞} ν(X_{1..n}, B) = ρ(B)) = 1. By virtue of the ergodic theorem, this definition can be shown to be equivalent to the more standard definition of stationary ergodic processes given in terms of shift-invariant sets [26]. Denote by S and E the sets of all stationary and all stationary ergodic processes, respectively.

The ergodic decomposition theorem for stationary processes (see, e.g., [7]) states that any stationary process can be expressed as a mixture of stationary ergodic processes. That is, a stationary process ρ can be envisaged as first selecting a stationary ergodic distribution according to some measure W_ρ over the set of all such distributions, and then using this ergodic distribution to generate the sequence. More formally, for any ρ ∈ S there is a measure W_ρ on (S, F_S) such that W_ρ(E) = 1 and ρ(B) = ∫ µ(B) dW_ρ(µ) for any B ∈ F_{X^∞}.
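The frequencies ν(x, B), on which everything in this section rests, are straightforward to compute empirically. A minimal sketch (ours; the function name is an assumption) for the special case of a finite alphabet and a cylinder set B given by a word:

```python
def frequency(x, B):
    """Empirical frequency nu(x, B): the fraction of (overlapping) positions
    at which the word B occurs in the sequence x."""
    x, B = tuple(x), tuple(B)
    n, m = len(x), len(B)
    if n < m or m == 0:
        return 0.0
    return sum(1 for t in range(n - m + 1) if x[t:t + m] == B) / (n - m + 1)
```

For an ergodic ρ, `frequency(x, B)` converges to ρ(B) almost surely as the sample grows, although, as the impossibility results below show, no rate of convergence can be guaranteed.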
For a stationary time series x, its m-order entropy h_m(x) is defined as E_{X_{1..m−1}} h(X_m | X_{1..m−1}) (so the usual Shannon entropy is the entropy of order 0). By stationarity, the limit lim_{m→∞} h_m exists and equals lim_{m→∞} (1/m) h(X_{1..m}) (see, for example, [6] for more details). This limit is called the entropy rate and is denoted h_∞. For l stationary processes x_i = (X^i_1, …, X^i_n, …), i = 1..l, the m-order mutual information is defined as I_m(x_1, …, x_l) := Σ_{i=1}^l h_m(x_i) − h_m(x_1, …, x_l), and the mutual information rate is defined as the limit

I_∞(x_1, …, x_l) := lim_{m→∞} I_m(x_1, …, x_l). (1)

Discretisations and a metric. For each m, l ∈ N, let B_{m,l} be a partitioning of X^m into 2^l sets such that jointly they generate F_m of X^m, i.e. σ(∪_{l∈N} B_{m,l}) = F_m. The distributional distance between a pair of process distributions ρ_1, ρ_2 is defined as follows [7]:

d(ρ_1, ρ_2) = Σ_{m,l=1}^∞ w_m w_l Σ_{B ∈ B_{m,l}} |ρ_1(B) − ρ_2(B)|, (2)

where we set w_j := 1/(j(j+1)), but any summable sequence of positive weights may be used. As shown in [22], empirical estimates of this distance are asymptotically consistent for arbitrary stationary ergodic processes. These estimates are used in [23, 12] to construct time-series clustering algorithms for clustering with respect to distribution. Here we will only use this distance in the impossibility results. Building on these ideas, Györfi [9] suggested using a similar construction for studying independence, namely d(ρ_1, ρ_2) = Σ_{m,l=1}^∞ w_m w_l Σ_{A,B ∈ B_{m,l}} |ρ_1(A) ρ_2(B) − ρ(A × B)|, where ρ_1 and ρ_2 are the two marginals of a process ρ on pairs, and noted that its empirical estimates are asymptotically consistent. The distance we will use is similar, but is based on mutual information.

5.2 Impossibility results

First of all, while the definition of ergodic processes guarantees convergence of frequencies to the corresponding probabilities, this convergence can be arbitrarily slow [26]: there are no meaningful bounds on |ν(X_{1..n}, 0) − ρ(X_1 = 0)| in terms of n for ergodic ρ.
This means that we cannot use tests for (conditional) independence of the kind employed in the i.i.d. case (Section 4). Thus, the first question to pose is whether it is possible to test independence, that is, to say whether x_1 ⊥ x_2 based on stationary ergodic samples X^1_{1..n}, X^2_{1..n}. Here we show that the answer is, in a certain sense, negative, but some important questions remain open.

An (independence) test ϕ is a function that takes two samples X^1_{1..n}, X^2_{1..n} and a parameter α ∈ (0, 1), called the confidence level, and outputs a binary answer: independent or not. A test ϕ is α-level consistent if, for every stationary ergodic distribution ρ over a pair of samples (X^1_{1..n}, X^2_{1..n}) and for every confidence level α, we have ρ(ϕ_α(X^1_{1..n}, X^2_{1..n}) = 1) < α if the marginal distributions of the samples are independent, and ϕ_α(X^1_{1..n}, X^2_{1..n}) converges to 1 as n → ∞ with ρ-probability 1 otherwise. The next proposition can be established using the criterion of [25]. Recall that, for ρ ∈ S, the measure W_ρ over E is its ergodic decomposition. The criterion states that there is an α-level consistent test for H_0 against E \ H_0 if and only if W_ρ(H_0) = 1 for every ρ ∈ cl H_0.

Proposition 1. There is no α-level consistent independence test for jointly stationary ergodic samples.

Proof. The example is based on the so-called translation process, constructed as follows. Fix some irrational α ∈ (0, 1) and select r_0 ∈ [0, 1] uniformly at random. For each i = 1, 2, … let r_i = (r_{i−1} + α) mod 1 (the previous element is shifted by α to the right, with the interval [0, 1] considered as a circle). The samples X_i are obtained from r_i by thresholding at 1/2, i.e. X_i := I{r_i > 0.5} (the r_i can be considered hidden states). This process is stationary and ergodic; besides, it has zero entropy rate [26], and this is not the last of its peculiarities. Now take two independent copies of this process to obtain a pair (x_1, x_2) = ((X^1_1, X^2_1), …, (X^1_n, X^2_n), …).
The resulting process on pairs, which we denote ρ, is stationary, but it is not ergodic. To see the latter, observe that the difference between the corresponding hidden states remains constant. In fact, each pair of initial states (r^1_0, r^2_0) corresponds to an ergodic component of our process on pairs. By the same argument, these ergodic components are not independent. Thus, we have taken two independent copies of a stationary ergodic process and obtained a stationary process which is not ergodic and whose ergodic components are pairs of processes that are not independent! To apply the criterion cited above, it remains to show that the process ρ we constructed can be obtained as a limit of stationary ergodic processes on pairs. To see this, consider, for each ε, a process ρ_ε whose construction is identical to ρ except that instead of shifting the hidden states by α we shift them by α + u^ε_i, where the u^ε_i are i.i.d. uniformly random on [−ε, ε]. It is easy to see that lim_{ε→0} ρ_ε = ρ in distributional distance, and all ρ_ε are stationary ergodic. Thus, if H_0 is the set of all stationary ergodic distributions on pairs, we have found a distribution ρ ∈ cl H_0 such that W_ρ(H_0) = 0.

Thus, there is no consistent test that could provide a given level of confidence under H_0, even if only asymptotic consistency is required under H_1. However, a still weaker notion of consistency might suffice to construct asymptotically consistent clustering algorithms. Namely, we could ask for a test whose answer converges to either 0 or 1 according to whether the distributions generating the samples are independent or not. Unfortunately, it is not known whether a test consistent in this weaker sense exists. I conjecture that it does not. The conjecture is based not only on the result above, but also on the result of [24], which shows that there is no such test for the related problem of homogeneity testing, that is, for testing whether two given samples have the same or different distributions.
This negative result holds even if the distributions are independent and binary-valued, the difference is restricted to P(X_0 = 0), and, finally, for a smaller family of processes (B-processes). Thus, for now, no independence test is available that would be consistent under ergodic sampling. Therefore, we cannot distinguish even between the cases of 1 and 2 clusters, and in the following it is assumed that the number of clusters k is given.

The last problem we have to address is mutual information for processes. The analogue of mutual information for stationary processes is the mutual information rate (1). Unfortunately, zero mutual information rate does not imply independence. This is manifest on processes with zero entropy rate, for example those of the example in the proof of Proposition 1. What happens is that if two processes are dependent then at least one of the m-order mutual informations I_m is non-zero, but the limit may still be zero. Since we do not know in advance which I_m to take, we will have to consider all of them, as explained in the next subsection.

5.3 Clustering with the number of clusters known

The quantity introduced below, which we call sum-information, will serve as an analogue of mutual information in the i.i.d. case, allowing us to get around the problem that the mutual information rate may be 0 for a pair of dependent stationary ergodic processes. Defined in the same vein as the distributional distance (2), this new quantity is a weighted sum of the mutual informations of all orders; in addition, all the individual mutual informations are computed for quantized versions of the random variables in question, with decreasing cell size of quantization, keeping all the mutual information resulting from the different quantizations. The latter allows us not to require the existence of densities. Weighting is needed in order to obtain consistent empirical estimates of the theoretical quantity under study.
First, for a process x = (X_1, …, X_n, …) and for each m, l ∈ N, define the l-th quantized version [X_{1..m}]_l of X_{1..m} as the index of the cell of B_{m,l} to which X_{1..m} belongs. Recall that each of the partitions B_{m,l} consists of 2^l cells, and that w_l := 1/(l(l+1)).

Definition 1 (sum-information). For stationary x_1, …, x_N define the sum-information

sI(x_1, …, x_N) := Σ_{m=1}^∞ (w_m/m) Σ_{l=1}^∞ (w_l/l) ( Σ_{i=1}^N h([X^i_{1..m}]_l) − h([X^1_{1..m}]_l, …, [X^N_{1..m}]_l) ). (3)

The next lemma follows from the fact that ∪_{l∈N} B_{m,l} generates F_m and ∪_{m∈N} F_m generates F_∞.

Lemma 1. sI(x_1, …, x_N) = 0 if and only if x_1, …, x_N are mutually independent.

The empirical estimates ĥ_n([X^i_{1..m}]_l) of the entropies are defined by replacing the unknown probabilities with frequencies; the estimate ŝI_n(x_1, …, x_N) of sI is obtained by replacing h in (3) with ĥ.

Remark 4 (Computing ŝI_n). The expression (3) might appear to hint at a computational disaster, as it involves two infinite sums and, in addition, the number of elements in the inner sum grows exponentially in l. However, it is easy to see that, when we replace the probabilities with frequencies, all but a finite number of summands are either zero or can be collapsed (because they are constant). Moreover, the sums can be further truncated so that the total computation becomes quasilinear in n. This can be done exactly the same way as for the distributional distance, as described in [12, Section 5].

The following lemma can be proven analogously to the corresponding statement about consistency of empirical estimates of the distributional distance, given in [22, Lemma 1].

Lemma 2. Let the distribution ρ of x_1, …, x_N be jointly stationary ergodic. Then ŝI_n(x_1, …, x_N) → sI(x_1, …, x_N) ρ-a.s.

This lemma alone is enough to establish the existence of a consistent clustering algorithm. To see this, first consider the following problem, which is the "independence" version of the classical statistical three-sample problem.
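Before turning to that problem, the empirical estimator can be made concrete. The following is a simplified, truncated version of the empirical sum-information of Definition 1 and Remark 4 (our own illustration, not the paper's implementation: the infinite sums over m and l are truncated at small bounds, and a per-coordinate dyadic quantization of [0, 1) stands in for the partitions B_{m,l}):

```python
from collections import Counter
from math import log2

def plug_in_entropy(symbols):
    """Plug-in Shannon entropy (in bits) of a list of discrete symbols,
    with probabilities replaced by empirical frequencies."""
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

def quantize(seq, l):
    """l-bit dyadic quantization of a sequence of values in [0, 1)."""
    cells = 2 ** l
    return tuple(min(int(v * cells), cells - 1) for v in seq)

def blocks(seq, m):
    """All (overlapping) length-m blocks of a sequence."""
    return [tuple(seq[t:t + m]) for t in range(len(seq) - m + 1)]

def w(j):
    """The summable weights w_j = 1/(j(j+1)) used throughout the text."""
    return 1.0 / (j * (j + 1))

def sum_information(samples, m_max=3, l_max=3):
    """Truncated empirical sum-information of N samples with values in [0, 1).
    Simplifications relative to Definition 1: the sums over m and l are
    truncated, and a per-coordinate dyadic quantization replaces B_{m,l}."""
    total = 0.0
    for m in range(1, m_max + 1):
        for l in range(1, l_max + 1):
            qs = [blocks(quantize(x, l), m) for x in samples]
            marginal = sum(plug_in_entropy(b) for b in qs)
            joint = plug_in_entropy(list(zip(*qs)))
            total += (w(m) / m) * (w(l) / l) * (marginal - joint)
    return total
```

Plug-in entropies are computed from the frequencies of quantized length-m blocks; for dependent samples the low-order terms already contribute a visibly non-zero value, while for independent samples only a small estimation bias remains.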
The 3-sample-independence problem. Three samples x_1, x_2, x_3 are given, and it is known that either (x_1, x_2) ⊥ x_3 or x_1 ⊥ (x_2, x_3), but not both. It is required to find out which one is the case.

Proposition 2. There exists an algorithm for solving the 3-sample-independence problem that is asymptotically consistent under ergodic sampling.

Indeed, it is enough to consider an algorithm that compares ŝI_n((x_1, x_2), x_3) and ŝI_n(x_1, (x_2, x_3)) and answers according to whichever is smaller. The independence clustering problem which we are after is a generalisation of the 3-sample-independence problem to N samples. We can also have a consistent algorithm for the clustering problem, simply comparing all possible clusterings U_1, …, U_k of the N given samples and selecting whichever minimizes ŝI_n(U_1, …, U_k). Such an algorithm is of course not practical, since the number of computations it makes must be exponential in N and k. We will show that the number of candidate clusterings can be reduced dramatically, making the problem amenable to computation.

Figure 2: CLINk: cluster given k and an estimator of mutual sum-information
Consider all the clusterings obtained by applying the function Split recursively to each of the sets in each of the candidate partitions, starting with the input set S, until k clusters are obtained. Output the clustering U that minimizes ŝI(U).

Function Split(Set S of samples)
    Initialize: C := {x_1}, R := S \ C, P := {}
    while R ≠ ∅ do
        Initialize: M := {}, d := 0; x_max := index of any x in R
        Add (C, R) to P
        for each x ∈ R do
            r := ŝI(C, R)
            move x from R to M
            r′ := ŝI(C, R); d′ := r − r′
            if d′ > d then
                d := d′, x_max := index of x
            end if
        end for
        Move x_{x_max} from M to C; R := S \ C
    end while
    Return (list of candidate splits P)
END function

The proposed algorithm CLINk (Figure 2) works similarly to CLIN, but with some important differences. As before, the main procedure attempts to split the given set of samples into two clusters.
This splitting procedure starts with a single element x_1 and estimates its sum-information ŝI(x_1, R) with the rest of the elements, R. It then takes the elements out of R one by one without replacement, each time measuring how this changes ŝI(x_1, R). As before, once (and if) we find an element that is not independent of x_1, this change will be positive. However, unlike in the i.i.d. case, here we cannot test whether the change is 0. Yet we can say that if, among the tested elements, there is one that gives a non-zero change in sI, then one of such elements will be the one that gives the maximal change in ŝI (provided, of course, that we have enough data for the estimates ŝI to be close enough to the theoretical values sI). Thus, we keep each split that arises from such a maximal-change element, resulting in O(N^2) candidate splits for the case of 2 clusters. For k clusters, we have to consider all the combinations of the splits, resulting in O(N^{2k−2}) candidate clusterings, and then select the one that minimizes ŝI.

Theorem 3. CLINk is asymptotically consistent under ergodic sampling. The algorithm makes at most N^{2k−2} calls to the estimator of mutual sum-information.

Proof. The consistency of ŝI (Lemma 2) implies that, for every ε > 0, from some n on w.p. 1, all the estimates of sI the algorithm uses will be within ε of the corresponding sI values. Since sI(U_1, …, U_k) = 0 if and only if U_1, …, U_k is the correct clustering (Lemma 1), it is enough to show that, assuming all the ŝI estimates are close enough to the sI values, the clustering that minimizes ŝI(U_1, …, U_k) is among those the algorithm searches through, that is, among the clusterings obtained by applying the function Split recursively to each of the sets in each of the candidate partitions, starting with the input set S, until k clusters are obtained.
To see the latter, note that on each iteration of the while loop we either already have a correct candidate split in P, that is, a split (U_1, U_2) such that sI(U_1, U_2) = 0, or we find (executing the for loop) an element x′ that is not independent of C to add to the set C. Indeed, if at least one such element x′ exists, then among all such elements there is one that maximizes the difference d′. Since the set C is initialized as a singleton, a correct split is eventually found if it exists. Applying the same procedure exhaustively to each of the elements of each of the candidate splits, producing all the combinations of k candidate clusterings, under the assumption that all the estimates ŝI are sufficiently close to the corresponding values, we are guaranteed to have the clustering that minimizes sI(U_1, …, U_k) among the output.

Remark 5 (Fickle oracle). Another way to look at the difference between the stationary and the i.i.d. cases is to consider the following "fickle" version of the oracle test of Section 3. Consider the oracle that, as before, given sets of random variables A, B, C, D ⊂ {x_1, …, x_N}, answers whether sI(A, B) > sI(C, D). However, the answer is only guaranteed to be correct in the case sI(A, B) ≠ sI(C, D). If sI(A, B) = sI(C, D) then the answer is arbitrary (and can be considered adversarial). One can see that Lemma 2 guarantees the existence of an oracle that has the requisite fickle correctness property asymptotically, that is, w.p. 1 from some n on. It is also easy to see that CLINk (Figure 2) can be rewritten in terms of calls to such an oracle.

6 Generalizations, future work

A general formulation of the independence clustering problem has been presented, and an attempt has been made to trace out broadly the limits of what is and is not possible in this formulation. In doing so, clear-cut formulations have been favoured over utmost generality and, on the other end of the spectrum, over precise performance guarantees.
Thus, many interesting questions have been left out; some of these are outlined in this section.

Beyond time series. For the case when the distribution of the random variables x_i is unknown, we have assumed that a sample X^i_{1..n} is available for each i = 1..N; thus, each x_i is represented by a time series. A time series is but one form the data may come in. Other forms include functional data, multi-dimensional or continuous-time processes, and graphs. Generalizations to some of these models, such as, for example, space-time stationary processes, are relatively straightforward, while others require more care. Some generalizations to infinite stationary graphs may be possible along the lines of [21]. In any case, the generalization problem is statistical (rather than algorithmic). If the number of clusters is unknown, we need to be able to emulate the oracle test of Section 3 with statistical tests. As explained in Section 4, it is sufficient to have a test for conditional independence, or an estimator of entropy along with guarantees on its convergence rate. If these are not available, as in the case of stationary ergodic samples, we can still have a consistent algorithm for known k, as long as we have an asymptotically consistent estimator of mutual information (without rates) or, more generally, as long as we can emulate the fickle oracle (Remark 5).

Beyond independence. The problem formulation considered rests on the assumption that there exists a partition U_1, …, U_k of the input set S such that U_1, …, U_k are jointly independent, that is, such that I(U_1, …, U_k) = 0. In reality, perhaps, nothing is really independent, and so some relaxations are in order. It is easy to introduce some thresholding in the algorithms (replacing 0 in each test by some threshold α) and derive basic consistency guarantees for the resulting algorithms. The general problem formulation is to find a finest clustering such that I(U_1, . . .
, U_k) ≤ ε for a given ε (note that, unlike in the independence case of ε = 0, such a clustering may not be unique). If one wants to get rid of ε, a tree of clusterings may be considered for all ε ≥ 0, which is a common way to treat unknown parameters in the clustering literature (e.g., [2]). Another generalization can be obtained by considering the problem from the graphical-model point of view. The random variables x_i are vertices of a graph, and edges represent dependencies; in this representation, clusters are connected components of the graph. A generalization is then to clusters that are the smallest components connected (to each other) by at most l edges, where l is a parameter. Yet another generalization would be to the decomposable distributions of [10].

Performance guarantees. Non-asymptotic results (finite-sample performance guarantees) can be obtained under additional assumptions, using the corresponding results on (conditional) independence tests and on estimators of divergence between distributions. Here it is worth noting that we are not restricted to using the mutual information I: any measure of divergence can be used, for example, Rényi divergence; a variety of relevant estimators and corresponding bounds, obtained under such assumptions as Hölder continuity, can be found in [19, 11]. From any such bounds, at least some performance guarantees for CLIN can be obtained simply by using the union bound over all the invocations of the tests.

Complexity. The algorithmic aspects of the problem have only been touched upon in this work. Thus, it remains to find out what the computational complexity of the studied problem is. So far, we have presented only some upper bounds, by constructing algorithms and bounding their complexity (kN^2 for CLIN and N^{2k} for CLINk). Lower bounds (and better upper bounds) are left for future work.
A subtlety worth noting is that, for the case of known distributions, the complexity may be affected by the choice of the oracle: some calculations may be "pushed" inside the oracle. In this regard, it may be better to consider an oracle for testing conditional independence, rather than one comparing mutual informations, as explained in Remarks 1 and 3. The complexity of the stationary-sampling version of the problem can be studied using the fickle oracle of Remark 5. The consistency of the algorithm should then be established for every assignment of those answers of the oracle that are arbitrary (adversarial).

References

[1] Francis R. Bach and Michael I. Jordan. Beyond independent components: trees and clusters. Journal of Machine Learning Research, 4(Dec):1205–1233, 2003.
[2] Maria-Florina Balcan, Yingyu Liang, and Pramod Gupta. Robust hierarchical clustering. Journal of Machine Learning Research, 15(1):3831–3871, 2014.
[3] Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. Van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6(1):17–39, 1997.
[4] Simon Benjaminsson, Peter Fransson, and Anders Lansner. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI. Frontiers in Systems Neuroscience, 4:34, 2010.
[5] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from Data, pages 121–130. Springer, 1996.
[6] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, NY, USA, 2006.
[7] Robert M. Gray. Probability, Random Processes, and Ergodic Properties. Springer Verlag, 1988.
[8] Arthur Gretton and László Györfi. Consistent nonparametric tests of independence. Journal of Machine Learning Research, 11(Apr):1391–1423, 2010.
[9] László Györfi. Private communication, 2011.
[10] Radim Jiroušek.
Solution of the marginal problem and decomposable distributions. Kybernetika, 27(5):403–412, 1991.
[11] Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabás Póczos, Larry Wasserman, and James M. Robins. Influence functions for machine learning: Nonparametric estimators for entropies, divergences and mutual informations. arXiv preprint arXiv:1411.4342, 2014.
[12] Azadeh Khaleghi, Daniil Ryabko, Jérémie Mary, and Philippe Preux. Consistent algorithms for clustering time series. Journal of Machine Learning Research, 17:1–32, 2016.
[13] Artemy Kolchinsky, Martijn P. van den Heuvel, Alessandra Griffa, Patric Hagmann, Luis M. Rocha, Olaf Sporns, and Joaquín Goñi. Multi-scale integration and predictability in resting state brain activity. Frontiers in Neuroinformatics, 8, 2014.
[14] Alexander Kraskov, Harald Stögbauer, Ralph G. Andrzejak, and Peter Grassberger. Hierarchical clustering using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.
[15] Rosario N. Mantegna. Hierarchical structure in financial markets. The European Physical Journal B - Condensed Matter and Complex Systems, 11(1):193–197, 1999.
[16] Guillaume Marrelec, Arnaud Messé, and Pierre Bellec. A Bayesian alternative to mutual information for the hierarchical clustering of dependent random variables. PLoS ONE, 10(9):e0137278, 2015.
[17] Gautier Marti, Sébastien Andler, Frank Nielsen, and Philippe Donnat. Clustering financial time series: How long is enough? In IJCAI'16, 2016.
[18] Christopher Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research (JAIR), 15:383–389, 2001.
[19] Dávid Pál, Barnabás Póczos, and Csaba Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Advances in Neural Information Processing Systems, pages 1849–1857, 2010.
[20] Ido Priness, Oded Maimon, and Irad Ben-Gal. Evaluation of gene-expression clustering via mutual information distance measure. BMC Bioinformatics, 8(1):111, 2007.
[21] D. Ryabko.
Hypotheses testing on infinite random graphs. In Proceedings of the 28th International Conference on Algorithmic Learning Theory (ALT'17), volume 76 of PMLR, pages 400–411, Kyoto, Japan, 2017.
[22] D. Ryabko and B. Ryabko. Nonparametric statistical inference for ergodic processes. IEEE Transactions on Information Theory, 56(3):1430–1435, 2010.
[23] Daniil Ryabko. Clustering processes. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pages 919–926, Haifa, Israel, 2010.
[24] Daniil Ryabko. Discrimination between B-processes is impossible. Journal of Theoretical Probability, 23(2):565–575, 2010.
[25] Daniil Ryabko. Testing composite hypotheses about discrete ergodic processes. Test, 21(2):317–329, 2012.
[26] P. Shields. The interactions between ergodic theory and information theory. IEEE Transactions on Information Theory, 44(6):2079–2093, 1998.
[27] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test and application in causal discovery. In Proceedings of the 27th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[28] Xiaobo Zhou, Xiaodong Wang, Edward R. Dougherty, Daniel Russ, and Edward Suh. Gene clustering based on clusterwide mutual information. Journal of Computational Biology, 11(1):147–161, 2004. | 2017 | 155 |
6,625 | Effective Parallelisation for Machine Learning

Michael Kamp, University of Bonn and Fraunhofer IAIS, kamp@cs.uni-bonn.de
Mario Boley, Max Planck Institute for Informatics and Saarland University, mboley@mpi-inf.mpg.de
Olana Missura, Google Inc., olanam@google.com
Thomas Gärtner, University of Nottingham, thomas.gaertner@nottingham.ac.uk

Abstract

We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question on the efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation is in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.

1 Introduction

This paper contributes a novel and provably effective parallelisation scheme for a broad class of learning algorithms. The significance of this result is to allow the confident application of machine learning algorithms with growing amounts of data. In critical application scenarios, i.e., when errors have almost prohibitively high cost, this confidence is essential [27, 36]. To this end, we consider the parallelisation of an algorithm to be effective if it achieves the same confidence and error bounds as the sequential execution of that algorithm in much shorter time.
Indeed, our parallelisation scheme can reduce the runtime of learning algorithms from polynomial to polylogarithmic. For that, it consumes more data and is executed on a quasi-polynomial number of processing units. To formally describe and analyse our parallelisation scheme, we consider the regularised risk minimisation setting. For a fixed but unknown joint probability distribution D over an input space X and an output space Y, a dataset D ⊆ X × Y of size N ∈ N drawn iid from D, a convex hypothesis space F of functions f : X → Y, a loss function ℓ : F × X × Y → R that is convex in F, and a convex regularisation term Ω : F → R, regularised risk minimisation algorithms solve

L(D) = argmin_{f∈F} Σ_{(x,y)∈D} ℓ(f, x, y) + Ω(f) .   (1)

The aim of this approach is to obtain a hypothesis f ∈ F with small regret

Q(f) = E[ℓ(f, x, y)] − min_{f′∈F} E[ℓ(f′, x, y)] .   (2)

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Regularised risk minimisation algorithms are typically designed to be consistent and efficient. They are consistent if there is a function N₀ : R₊ × R₊ → R₊ such that for all ε > 0, ∆ ∈ (0, 1], N ∈ N with N ≥ N₀(ε, ∆), and training data D ∼ D^N, the probability of generating an ε-bad hypothesis is no greater than ∆, i.e.,

P(Q(L(D)) > ε) ≤ ∆ .   (3)

They are efficient if the sample complexity N₀(ε, ∆) is polynomial in 1/ε and log 1/∆, and the runtime complexity T_L is polynomial in the sample complexity. This paper considers the parallelisation of such consistent and efficient learning algorithms, e.g., support vector machines, regularised least squares regression, and logistic regression. We additionally assume that data is abundant and that F can be parametrised in a fixed, finite-dimensional Euclidean space R^d such that the convexity of the regularised risk minimisation problem (Equation 1) is preserved. In other cases, (non-linear) low-dimensional embeddings [2, 28] can preprocess the data to facilitate parallel learning with our scheme.
With slight abuse of notation, we identify the hypothesis space with its parametrisation. The main theoretical contribution of this paper is to show that algorithms satisfying the above conditions can be parallelised effectively. We consider a parallelisation to be effective if the (ε, ∆)-guarantees (Equation 3) are achieved in time polylogarithmic in N₀(ε, ∆). The cost for achieving this reduction in runtime comes in the form of an increased data size and number of processing units used. For the parallelisation scheme presented in this paper, we are able to bound this cost by a quasi-polynomial in 1/ε and log 1/∆. The main practical contribution of this paper is an effective parallelisation scheme that treats the underlying learning algorithm as a black box, i.e., it can be parallelised without further mathematical derivations and without writing dedicated code. Similar to averaging-based parallelisations [32, 45, 46], we apply the underlying learning algorithm in parallel to random subsets of the data. Each resulting hypothesis is assigned to a leaf of an aggregation tree which is then traversed bottom-up. Each inner node computes a new hypothesis that is a Radon point [30] of its children's hypotheses. In contrast to aggregation by averaging, the Radon point increases the confidence in the aggregate doubly exponentially with the height of the aggregation tree. We describe our parallelisation scheme, the Radon machine, in detail in Section 2. Comparing the Radon machine to the underlying learning algorithm applied to a dataset of the size necessary to achieve the same confidence, we are able to show a reduction in runtime from polynomial to polylogarithmic in Section 3. The empirical evaluation of the Radon machine in Section 4 confirms its potential in practical settings. Given the same amount of data as the underlying learning algorithm, the Radon machine achieves a substantial reduction of computation time in realistic applications.
Using 150 processors, the Radon machine is between 80 and around 700 times faster than the underlying learning algorithm on a single processing unit. Compared with parallel learning algorithms from Spark's MLlib, it achieves hypotheses of similar quality while requiring only 15–85% of their runtime.

Parallel computing [18] and its limitations [13] have been studied for a long time in theoretical computer science [7]. Parallelising polynomial-time algorithms ranges from being 'embarrassingly' [26] easy to being believed to be impossible. For the class of decision problems that are the hardest in P, i.e., for P-complete problems, it is believed that there is no efficient parallel algorithm in the sense of Nick's Class (NC [9]): efficient parallel algorithms in this sense are those that can be executed in polylogarithmic time on a polynomial number of processing units. Our paper thus contributes to understanding the extent to which efficient parallelisation of polynomial-time learning algorithms is possible. This connection and other approaches to parallel learning are discussed in Section 5.

Algorithm 1 Radon Machine
Input: learning algorithm L, dataset D ⊆ X × Y, Radon number r ∈ N, and parameter h ∈ N
Output: hypothesis f ∈ F
1: divide D into r^h iid subsets D_i of roughly equal size
2: run L in parallel to obtain f_i = L(D_i)
3: S ← {f_1, ..., f_{r^h}}
4: for i = h−1, ..., 1 do
5:   partition S into iid subsets S_1, ..., S_{r^i} of size r each
6:   calculate Radon points r(S_1), ..., r(S_{r^i}) in parallel   # see Definition 1 and Appendix C.1
7:   S ← {r(S_1), ..., r(S_{r^i})}
8: end for
9: return r(S)

2 From Radon Points to Radon Machines

The Radon machine, described in Algorithm 1, first executes the underlying (base) learning algorithm on random subsets of the data to quickly obtain weak hypotheses and then iteratively aggregates them into stronger ones. Both the generation of weak hypotheses and the aggregation can be executed in parallel.
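Assuming the base learner outputs a parameter vector in R^d and a routine for computing Radon points (line 6; see Appendix C.1) is available, the control flow of Algorithm 1 can be sketched as follows. The function names are illustrative only, and the parallel steps are written sequentially for clarity:

```python
import numpy as np

def radon_machine(learn, radon_point, data, r, h):
    """Sketch of Algorithm 1. `learn` maps a data subset to a hypothesis
    in R^d, `radon_point` maps r hypotheses to a Radon point, r is the
    Radon number of the hypothesis space (d + 2 for R^d), and h is the
    height of the aggregation tree."""
    # lines 1-3: learn r^h weak hypotheses on iid subsets of the data
    subsets = np.array_split(np.asarray(data), r ** h)
    S = [learn(D_i) for D_i in subsets]          # parallelisable
    # lines 4-8: iteratively replace groups of r hypotheses by their Radon point
    for _ in range(h):
        S = [radon_point(S[j:j + r]) for j in range(0, len(S), r)]
    return S[0]                                  # line 9
```

For testing the control flow alone, any aggregator with the same signature (e.g., a coordinate-wise mean) can be substituted for the Radon point computation.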
To aggregate hypotheses, we follow along the lines of the iterated Radon point algorithm, which was originally devised to approximate the centre point, i.e., a point of largest Tukey depth [38], of a finite set of points [8]. The Radon point [30] of a set of points is defined as follows:

Definition 1. A Radon partition of a set S ⊂ F is a pair A, B ⊂ S such that A ∩ B = ∅ but ⟨A⟩ ∩ ⟨B⟩ ≠ ∅, where ⟨·⟩ denotes the convex hull. The Radon number of a space F is the smallest r ∈ N such that for all S ⊂ F with |S| ≥ r there is a Radon partition; or ∞ if no S ⊂ F with a Radon partition exists. A Radon point of a set S with Radon partition A, B is any r ∈ ⟨A⟩ ∩ ⟨B⟩.

We now present the Radon machine (Algorithm 1), which is able to effectively parallelise consistent and efficient learning algorithms. Input to this parallelisation scheme is a learning algorithm L on a hypothesis space F, a dataset D ⊆ X × Y, the Radon number r ∈ N of the hypothesis space F, and a parameter h ∈ N. It divides the dataset into r^h subsets D_1, ..., D_{r^h} (line 1) and runs the algorithm L on each subset in parallel (line 2). Then, the set of hypotheses (line 3) is iteratively aggregated to form better sets of hypotheses (lines 4–8). For that, the set is partitioned into subsets of size r (line 5) and the Radon point of each subset is calculated in parallel (line 6). The final step of each iteration is to replace the set of hypotheses by the set of Radon points (line 7). The scheme requires a hypothesis space with a valid notion of convexity and a finite Radon number. While other notions of convexity are possible [16, 33], in this paper we restrict our consideration to Euclidean spaces with the usual notion of convexity. Radon's theorem [30] states that the Euclidean space R^d has Radon number r = d + 2. Radon points can then be obtained by solving a system of linear equations of size r × r (to be fully self-contained, we state the system of linear equations explicitly in Appendix C.1).
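In R^d this construction is direct: for r = d + 2 points x_1, ..., x_r one finds nontrivial coefficients λ with Σ_i λ_i x_i = 0 and Σ_i λ_i = 0, and the signs of the λ_i induce a Radon partition. The sketch below (not the paper's Appendix C.1 code) solves this homogeneous system via an SVD rather than the r × r linear system stated in the paper:

```python
import numpy as np

def radon_point(points):
    """Radon point of r = d + 2 points in R^d. Finds coefficients lam
    with sum_i lam_i * x_i = 0 and sum_i lam_i = 0, splits the points
    by the sign of lam, and returns the convex combination of the
    positive part, which lies in the convex hull of both parts."""
    P = np.asarray(points, dtype=float)    # shape (d + 2, d)
    r, d = P.shape
    assert r == d + 2, "need exactly d + 2 points"
    # (d + 1) x (d + 2) homogeneous system: d coordinate constraints
    # plus the constraint that the coefficients sum to zero
    A = np.vstack([P.T, np.ones(r)])
    lam = np.linalg.svd(A)[2][-1]          # basis vector of the null space
    pos = lam > 0                          # nonempty, since sum(lam) = 0
    return (lam[pos] @ P[pos]) / lam[pos].sum()
```

For instance, for the four corners of the unit square in R^2 the construction yields the centre (0.5, 0.5), the intersection of the convex hulls of the two diagonal pairs.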
The next proposition gives a guarantee on the quality of Radon points:

Proposition 2. Given a probability measure P over a hypothesis space F with finite Radon number r, let F denote a random variable with distribution P. Furthermore, let r be the random variable obtained by computing the Radon point of r random points drawn according to P^r. Then it holds for the expected regret Q and all ε ∈ R that

P(Q(r) > ε) ≤ (r · P(Q(F) > ε))² .

The proof of Proposition 2 is provided in Section 7. Note that this proof also shows the robustness of the Radon point compared to the average: if only one of r points is ε-bad, the Radon point is still ε-good, while the average may or may not be; indeed, in a linear space with any set of ε-good hypotheses and any ε′ ≥ ε, we can always find a single ε′-bad hypothesis such that the average of all these hypotheses is ε′-bad. A direct consequence of Proposition 2 is a bound on the probability that the output of the Radon machine with parameter h is bad:

Theorem 3. Given a probability measure P over a hypothesis space F with finite Radon number r, let F denote a random variable with distribution P. Denote by r₁ the random variable obtained by computing the Radon point of r random points drawn iid according to P and by P₁ its distribution. For any h ∈ N, let r_h denote the Radon point of r random points drawn iid from P_{h−1} and by P_h its distribution. Then for any convex function Q : F → R and all ε ∈ R it holds that

P(Q(r_h) > ε) ≤ (r · P(Q(F) > ε))^(2^h) .

The proof of Theorem 3 is also provided in Section 7. For the Radon machine with parameter h, Theorem 3 shows that the probability of obtaining an ε-bad hypothesis is reduced doubly exponentially: with a bound δ on this probability for the base learning algorithm, the bound ∆ on this probability for the Radon machine is

∆ = (rδ)^(2^h) .   (4)

In the next section we compare the Radon machine to its base learning algorithm applied to a dataset of the size necessary to achieve the same ε and ∆.
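Equation 4 can be read off numerically; the values of r and δ below are purely illustrative (the requirement δ ≤ 1/(2r) from Section 3 is assumed):

```python
def radon_bound(r, delta, h):
    """Doubly exponential confidence amplification of Equation 4:
    Delta = (r * delta)^(2^h), valid under delta <= 1/(2r)."""
    assert delta <= 1 / (2 * r)
    return (r * delta) ** (2 ** h)

# e.g. r = 20 (hypotheses in R^18) and delta = 0.025, so r * delta = 0.5:
# h = 1 -> 0.25,  h = 2 -> 0.0625,  h = 3 -> 0.5**8, i.e. about 0.0039
```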
3 Sample and Runtime Complexity

In this section we first derive the sample and runtime complexity of the Radon machine R from the sample and runtime complexity of the base learning algorithm L. We then relate the runtime complexity of the Radon machine to an application of the base learning algorithm which achieves the same (ε, ∆)-guarantee. For that, we consider consistent and efficient base learning algorithms with a sample complexity of the form N₀^L(ε, δ) = (α_ε + β_ε ld 1/δ)^k, for some¹ α_ε, β_ε ∈ R and k ∈ N (where ld denotes the binary logarithm). From now on, we also assume that δ ≤ 1/(2r) for the base learning algorithm. The Radon machine creates r^h base hypotheses and, with ∆ as in Equation 4, has sample complexity

N₀^R(ε, ∆) = r^h · N₀^L(ε, δ) = r^h · (α_ε + β_ε ld 1/δ)^k .   (5)

Theorem 3 then implies that the Radon machine with base learning algorithm L is consistent: with N ≥ N₀^R(ε, ∆) samples it achieves an (ε, ∆)-guarantee. To achieve the same guarantee as the Radon machine, the application of the base learning algorithm L itself (sequentially) would require M ≥ N₀^L(ε, ∆) samples, where

N₀^L(ε, ∆) = N₀^L(ε, (rδ)^(2^h)) = (α_ε + 2^h · β_ε ld 1/(rδ))^k .   (6)

For base learning algorithms L with runtime T_L(n) polynomial in the data size n ∈ N, i.e., T_L(n) ∈ O(n^κ) with κ ∈ N, we now determine the runtime T_{R,h}(N) of the Radon machine with h iterations and c = r^h processing units on N ∈ N samples. In this case all base learning algorithms can be executed in parallel. In practical applications fewer physical processors can be used to simulate r^h processing units; we discuss this case in Section 5. The runtime of the Radon machine can be decomposed into the runtime of the base learning algorithm and the runtime for the aggregation. The base learning algorithm requires n ≥ N₀^L(ε, δ) samples and can be executed on r^h processors in parallel in time T_L(n). The Radon point in each of the h iterations can then be calculated in parallel in time r³ (see Appendix C.1).
Thus, the runtime of the Radon machine with N = r^h · n samples is

T_{R,h}(N) = T_L(n) + h · r³ .   (7)

In contrast, the runtime of the base learning algorithm for achieving the same guarantee is T_L(M) with M ≥ N₀^L(ε, ∆). Ignoring logarithmic and constant terms, N₀^L(ε, ∆) behaves as 2^h · N₀^L(ε, δ). To obtain polylogarithmic runtime of R compared to T_L(M), we choose the parameter h ≈ ld M − ld ld M such that n ≈ M/2^h = ld M. Thus, the runtime of the Radon machine is in O(ld^κ M + r³ ld M). This result is formally summarised in Theorem 4.

Theorem 4. The Radon machine with a consistent and efficient regularised risk minimisation algorithm on a hypothesis space with finite Radon number has polylogarithmic runtime on quasi-polynomially many processing units if the Radon number can be upper bounded by a function polylogarithmic in the sample complexity of the efficient regularised risk minimisation algorithm.

The theorem is proven in Appendix A.1 and relates to Nick's Class [1]: a decision problem can be solved efficiently in parallel in the sense of Nick's Class if it can be decided by an algorithm in polylogarithmic time on polynomially many processors (assuming, e.g., the PRAM model). For the class of decision problems that are the hardest in P, i.e., for P-complete problems, it is believed that there is no efficient parallel algorithm for solving them in this sense. Theorem 4 provides a step towards finding efficient parallelisations of regularised risk minimisers and towards answering the open question: is consistent regularised risk minimisation possible in polylogarithmic time on polynomially many processors? A similar question, for the case of learning half-spaces, has been called a fundamental open problem by Long and Servedio [21], who gave an algorithm which runs on polynomially many processors in time that depends polylogarithmically on the sample size but is inversely proportional to a parameter of the learning problem.
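The effect of the choice h ≈ ld M − ld ld M can be made concrete under the simplifying assumption T_L(n) = n^κ with all constants ignored; the numbers below are purely illustrative:

```python
import math

def radon_runtime(M, r, kappa):
    """Runtime of the Radon machine (Equation 7) under T_L(n) = n**kappa,
    with h chosen as ld M - ld ld M so each base run sees n ~ ld M samples."""
    h = max(1, round(math.log2(M) - math.log2(math.log2(M))))
    n = M / 2 ** h                     # per-processor sample size
    return n ** kappa + h * r ** 3     # learning phase + aggregation phase

M, r, kappa = 2 ** 20, 20, 2
print(radon_runtime(M, r, kappa))     # ~1.3e5: polylogarithmic in M
print(float(M) ** kappa)              # ~1.1e12: the sequential runtime T_L(M)
```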
While Nick's Class as a notion of efficiency has been criticised [17], it is the only notion of efficiency that forms a proper complexity class in the sense of Blum [4]. To overcome the weakness of using only this notion, Kruskal et al. [17] suggested to also consider the inefficiency of simulating the parallel algorithm on a single processing unit. We discuss the inefficiency and the speed-up in Appendix A.2.

¹ We derive α_ε, β_ε for hypothesis spaces with finite VC [41] and Rademacher [3] complexity in Appendix C.2.

4 Empirical Evaluation

This empirical study compares the Radon machine to state-of-the-art parallel machine learning algorithms from the Spark machine learning library [25], as well as to the natural baseline of averaging hypotheses instead of calculating their Radon point (averaging-at-the-end, Avg). We use base learning algorithms from WEKA [44] and scikit-learn [29]. We compare the Radon machine to the base learning algorithms on moderately sized datasets, due to scalability limitations of the base learners, and reserve larger datasets for the comparison with parallel learners. The experiments are executed on a Spark cluster (5 worker nodes, 25 processors per node)². All results are obtained using 10-fold cross-validation. We apply the Radon machine with parameter h = 1 and with the maximal parameter h such that each instance of the base learning algorithm is executed on a subset of size at least 100 (denoted h = max). Averaging-at-the-end executes the base learning algorithm on the same number of subsets r^h as the Radon machine with that parameter and is denoted in the figures by stating the parameter h as for the Radon machine. All other parameters of the learning algorithms are optimised on an independent split of the datasets. See Appendix B for additional details.

What is the speed-up of our scheme in practice?
In Figure 1(a), we compare the Radon machine to its base learners on moderately sized datasets (details on the datasets are provided in Appendix B).

² The source code implementation in Spark can be found in the bitbucket repository https://bitbucket.org/Michael_Kamp/radonmachine.

Figure 1: (a) Runtime (log-scale) and AUC of base learners and their parallelisation using the Radon machine (PRM) for 6 datasets with N ∈ [488 565, 5 000 000], d ∈ [3, 18]. Each point represents the average runtime (upper part) and AUC (lower part) over 10 folds of a learner (or its parallelisation) on one dataset. (b) Runtime and AUC of the Radon machine compared to the averaging-at-the-end baseline (Avg) on 5 datasets with N ∈ [5 000 000, 32 000 000], d ∈ [18, 2 331]. (c) Runtime and AUC of several Spark machine learning library algorithms and the Radon machine using base learners that are comparable to the Spark algorithms, on the same datasets as in Figure 1(b).

Figure 2: Speed-up (log-scale) of the Radon machine over its base learners per dataset, from the same experiment as in Figure 1(a).
Figure 3: Dependence of the runtime on the dataset size of the Radon machine compared to its base learners.

Figure 4: Representation of the results in Figures 1(b) and 1(c) in terms of the trade-off between runtime and AUC for the Radon machine (PRM) and averaging-at-the-end (Avg), both with parameter h = max, and parallel machine learning algorithms in Spark. The dashed lines connect the Radon machine to averaging-at-the-end with the same base learning algorithm and a comparable Spark machine learning algorithm.

There, the Radon machine is between 80 and around 700 times faster than the base learner using 150 processors. The speed-up is detailed in Figure 2. On the SUSY dataset (with 5 000 000 instances and 18 features), the Radon machine on 150 processors with h = 3 is 721 times faster than its base learning algorithms. At the same time, their predictive performances, measured by the area under the ROC curve (AUC) on an independent test dataset, are comparable.

How does the scheme compare to averaging-at-the-end? In Figure 1(b) we compare the runtime and AUC of the parallelisation scheme against the averaging-at-the-end baseline (Avg). In terms of AUC, the Radon machine outperforms the averaging-at-the-end baseline on all datasets by at least 10%. The runtimes can hardly be distinguished in that figure; a small difference can, however, be noted in Figure 4, which is discussed in more detail in the next paragraph. Since averaging is less computationally expensive than calculating the Radon point, the runtimes of the averaging-at-the-end baselines are slightly lower than those of the Radon machine.
However, compared to the computational cost of executing the base learner, this advantage becomes negligible.

How does our scheme compare to state-of-the-art Spark machine learning algorithms? We compare the Radon machine to various Spark machine learning algorithms on 5 large datasets. The results in Figure 1(c) indicate that the proposed parallelisation scheme with h = max has a substantially smaller runtime than the Spark algorithms on all datasets. On the SUSY and HIGGS datasets, the Radon machine is one order of magnitude faster than the Spark implementations; here the comparatively small number of features allows for a high level of parallelism. On the CASP9 dataset, the Radon machine is 15% faster than the fastest Spark algorithm. The performance in terms of AUC of the Radon machine is similar to that of the Spark algorithms. In particular, when using WekaLogReg with h = max, the Radon machine outperforms the Spark algorithms in terms of AUC and runtime on the datasets SUSY, wikidata, and CASP9. Details are given in Appendix B. A summarising comparison of the parallel approaches in terms of their trade-off between runtime and predictive performance is depicted in Figure 4. Here, results are shown for the Radon machine and averaging-at-the-end with parameter h = max and for the two Spark algorithms most similar to the base learning algorithms. Note that it is unclear what caused the consistently weak performance of all algorithms on wikidata. Nonetheless, the results show that on all datasets the Radon machine has predictive performance comparable to the Spark algorithms and substantially higher predictive performance than averaging-at-the-end. At the same time, the Radon machine has a runtime comparable to averaging-at-the-end on all datasets, and both are substantially faster than the Spark algorithms.

How does the runtime depend on the dataset size in a real-world system?
The runtime of the Radon machine can be divided into its learning phase and its aggregation phase. While the learning phase fully benefits from parallelisation, this comes at the cost of additional runtime for the aggregation phase. The time for aggregating the hypotheses does not depend on the number of instances in the dataset; for a fixed parameter h, it depends only on the dimension of the hypothesis space and that parameter. In Figure 3 we compare the runtimes of all base learning algorithms against the dataset size with those of the Radon machine. The results indicate that, while the runtime of the base learning algorithms depends on the dataset size with an average exponent of 1.57, the runtime of the Radon machine depends on the dataset size with an exponent of only 1.17.

How generally applicable is the scheme? As an indication of its general applicability in practice, we also consider regression and multi-class classification. For regression, we apply the scheme to the scikit-learn implementation of regularised least squares regression [29]. On the dataset YearPredictionMSD, regularised least squares regression achieves an RMSE of 12.57, whereas the Radon machine achieves an RMSE of 13.64. At the same time, the Radon machine is 197 times faster. We also compare the Radon machine on a multi-class prediction problem using conditional maximum entropy models. For multi-class classification, we use the implementation described in Mcdonald et al. [23], who propose to use averaging-at-the-end for distributed training. We compare the Radon machine to averaging-at-the-end with conditional maximum entropy models on two large multi-class datasets (drift and spoken-arabic-digit). On average, our scheme performs better with only a slightly longer runtime. The minimal difference in runtime can be explained, similar to the results in Figure 1(b), by the smaller complexity of calculating the average instead of the Radon point.
5 Discussion and Future Work

In the experiments we considered datasets where the number of dimensions is much smaller than the number of instances. What about high-dimensional models? The basic version of the parallelisation scheme presented in this paper cannot directly be applied to cases in which the size of the dataset is not at least a multiple of the Radon number of the hypothesis space. For various types of data, such as text, this might cause concerns. However, random projections [15] or low-rank approximations [2, 28] can alleviate this problem and are already frequently employed in machine learning. An alternative might be to combine our parallelisation scheme with block coordinate descent [37]. In this case, the scheme can be applied iteratively to subsets of the features.

In the experiments we considered only linear models. What about non-linear models? Learning non-linear models causes similar problems to learning high-dimensional ones. In non-parametric methods like kernel methods, for instance, the dimensionality of the optimisation problem is equal to the number of instances, prohibiting the direct application of our parallelisation scheme. However, similar low-rank approximation techniques as described above can be applied with non-linear kernels [11]. Furthermore, methods for speeding up the learning process for non-linear models explicitly approximate an embedding in which a linear model can then be learned [31]. Using explicitly constructed feature spaces, Radon machines can directly be applied to non-linear models.

We have theoretically analysed our parallelisation scheme for the case that there are enough processing units available to find each weak hypothesis on a separate processing unit. What if there are not r^h, but only c < r^h processing units? The parallelisation scheme can quite naturally be "de-parallelised" and partially executed in sequence. For the runtime this implies an additional factor of max{1, r^h/c}.
Thus, the Radon machine can be applied with any number of processing units. The scheme improves ∆ doubly exponentially in its parameter h, but for that it requires the weak hypotheses to already achieve δ ≤ 1/(2r). Is the scheme only applicable in high-confidence domains? Many application scenarios require high-confidence error bounds, e.g., in the medical domain [27] or in intrusion detection [36]. In practice, our scheme achieves similar predictive quality much faster than its base learner.

Besides runtime, communication plays an essential role in parallel learning. What is the communication complexity of the scheme? As for all aggregation-at-the-end strategies, the overall amount of communication is low compared to periodically communicating schemes. For the parallel aggregation of hypotheses, the scheme requires O(r^{h+1}) messages (which can be sent in parallel), each containing a single hypothesis of size O(r). Our scheme is ideally suited for inherently distributed data and might even mitigate privacy concerns.

In many applications, data is available in the form of potentially infinite data streams. Can the scheme be applied to distributed data streams? For each data stream, a hypothesis could be maintained using an online learning algorithm and periodically aggregated using the Radon machine, similar to the federated learning approach proposed by McMahan et al. [24].

In this paper, we investigated the parallelisation of machine learning algorithms. Is the Radon machine more generally applicable? The parallelisation scheme could be applied to more general randomised convex optimisation algorithms with unknown and random target functions. We will investigate its applicability for learning in non-Euclidean, abstract convexity spaces.

6 Conclusion and Related Work

In this paper we provided a step towards answering an open problem: is parallel machine learning possible in polylogarithmic time using only a polynomial number of processors?
This question has been posed for half-spaces by Long and Servedio [21] and called "a fundamental open problem about the abilities and limitations of efficient parallel learning algorithms". It relates machine learning to Nick's Class of parallelisable decision problems and its variants [13]. Early theoretical treatments of parallel learning with respect to NC considered probably approximately correct (PAC) [5, 39] concept learning. Vitter and Lin [42] introduced the notion of NC-learnable for concept classes for which there is an algorithm that outputs a probably approximately correct hypothesis in polylogarithmic time using a polynomial number of processors. In this setting, they proved positive and negative learnability results for a number of concept classes that were previously known to be PAC-learnable in polynomial time. More recently, the special case of learning half-spaces in parallel was considered by Long and Servedio [21], who gave an algorithm for this case that runs on polynomially many processors in time that depends polylogarithmically on the size of the instances but is inversely proportional to a parameter of the learning problem. Our paper complements these theoretical treatments of parallel machine learning and provides a provably effective parallelisation scheme for a broad class of regularised risk minimisation algorithms.

Some parallelisation schemes also train learning algorithms on small chunks of data and average the resulting hypotheses. While this approach has advantages [12, 32], current error bounds do not allow a derivation of polylogarithmic runtime [20, 35, 45], and it has been doubted whether it has any benefit over learning on a single chunk [34]. Another popular class of parallel learning algorithms is based on stochastic gradient descent, targeting expected risk minimisation directly [34, and references therein]. The best known algorithm in this class so far [34] is the distributed mini-batch algorithm [10].
This algorithm still runs for a number of rounds inversely proportional to the desired optimisation error, hence not in polylogarithmic time. A more traditional approach is to minimise the empirical risk, i.e., an empirical sample-based approximation of the expected risk, using any deterministic or randomised optimisation algorithm. This approach relies on generalisation guarantees relating the expected and the empirical risk, as well as on a guarantee on the optimisation error introduced by the optimisation algorithm. The approach is readily parallelisable by employing available parallel optimisation algorithms [e.g., 6]. It is worth noting that these algorithms solve a harder than necessary optimisation problem and often come with prohibitively high communication cost in distributed settings [34]. Recent results improve over these [22] but cannot achieve polylogarithmic time, as the number of iterations depends linearly on the number of processors.

Apart from its theoretical advantages, the Radon machine also has several practical benefits. In particular, it is a black-box parallelisation scheme in the sense that it is applicable to a wide range of machine learning algorithms and does not depend on the implementation of these algorithms. It speeds up learning while achieving a hypothesis quality similar to that of the base learner. Our empirical evaluation indicates that in practice the Radon machine achieves either a substantial speed-up or a higher predictive performance than other parallel machine learning algorithms.

7 Proof of Proposition 2 and Theorem 3

In order to prove Proposition 2 and consecutively Theorem 3, we first investigate some properties of Radon points and convex functions. We prove these properties for the more general case of quasi-convex functions. Since every convex function is also quasi-convex, the results hold for convex functions as well. A quasi-convex function is defined as follows.

Definition 5.
A function Q : F → R is called quasi-convex if all its sublevel sets are convex, i.e., ∀θ ∈ R : {f ∈ F | Q(f) < θ} is convex.

First we give a different characterisation of quasi-convex functions.

Proposition 6. A function Q : F → R is quasi-convex if and only if for all S ⊆ F and all s′ ∈ ⟨S⟩ there exists an s ∈ S with Q(s) ≥ Q(s′).

Proof. (⇒) Suppose this direction does not hold. Then there is a quasi-convex function Q, a set S ⊆ F, and an s′ ∈ ⟨S⟩ such that for all s ∈ S it holds that Q(s) < Q(s′) (therefore s′ ∉ S). Let C = {c ∈ F | Q(c) < Q(s′)}. As S ⊆ C = ⟨C⟩ we also have that ⟨S⟩ ⊆ ⟨C⟩, which contradicts ⟨S⟩ ∋ s′ ∉ C. (⇐) Suppose this direction does not hold. Then there exists an ε such that S = {s ∈ F | Q(s) < ε} is not convex and therefore there is an s′ ∈ ⟨S⟩ \ S. By assumption there exists an s ∈ S with Q(s) ≥ Q(s′). Hence Q(s′) < ε and we have a contradiction, since this would imply s′ ∈ S.

The next proposition concerns the value of any convex function at a Radon point.

Proposition 7. For every set S with Radon point r and every quasi-convex function Q it holds that |{s ∈ S | Q(s) ≥ Q(r)}| ≥ 2.

Proof. We show a slightly stronger result: take any family of pairwise disjoint sets A_i with ∩_i ⟨A_i⟩ ≠ ∅ and r ∈ ∩_i ⟨A_i⟩. From Proposition 6 directly follows the existence of an a_i ∈ A_i such that Q(a_i) ≥ Q(r). The desired result then follows from a_i ≠ a_j for i ≠ j.

We are now ready to prove Proposition 2 and Theorem 3 (which we re-state here for convenience).

Theorem 3. Given a probability measure P over a hypothesis space F with finite Radon number r, let F denote a random variable with distribution P. Denote by r₁ the random variable obtained by computing the Radon point of r random points drawn iid according to P and by P₁ its distribution. For any h ∈ N, let r_h denote the Radon point of r random points drawn iid from P_{h−1} and by P_h its distribution. Then for any convex function Q : F → R and all ε ∈ R it holds that

P(Q(r_h) > ε) ≤ (r · P(Q(F) > ε))^(2^h) .

Proof of Proposition 2 and Theorem 3.
By Proposition 7, for any Radon point $r$ of a set $S$ there must be two points $a, b \in S$ with $Q(a), Q(b) \geq Q(r)$. Hence, the probability that $Q(r) > \varepsilon$ is at most the probability that some pair $a, b \in S$ has $Q(a), Q(b) > \varepsilon$. Proposition 2 follows by an application of the union bound over all pairs from $S$. Repeated application of the proposition proves Theorem 3.

Acknowledgements

Part of this work was conducted while Mario Boley, Olana Missura, and Thomas Gärtner were at the University of Bonn and partially funded by the German Science Foundation (DFG, under ref. GA 1615/1-1 and GA 1615/2-1). The authors would like to thank Dino Oglic, Graham Hutton, Roderick MacKenzie, and Stefan Wrobel for valuable discussions and comments.

References

[1] Sanjeev Arora and Boaz Barak. Computational complexity: A modern approach. Cambridge University Press, 2009.
[2] Maria Florina Balcan, Yingyu Liang, Le Song, David Woodruff, and Bo Xie. Communication efficient distributed kernel principal component analysis. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 725–734, 2016.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2003.
[4] Manuel Blum. A machine-independent theory of the complexity of recursive functions. Journal of the ACM (JACM), 14(2):322–336, 1967.
[5] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM), 36(4):929–965, 1989.
[6] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
[7] Ashok K. Chandra and Larry J. Stockmeyer. Alternation.
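To get a feel for the doubly exponential amplification in Theorem 3: once $r\,P(Q(F) > \varepsilon) < 1$, each aggregation level squares the bound. A quick numeric sketch (the values $r = 4$ and $p = 0.1$ are arbitrary illustrations, not taken from the paper):

```python
def radon_machine_bound(r, p, h):
    """Theorem 3 upper bound on P(Q(r_h) > eps): (r * p) ** (2 ** h)."""
    return (r * p) ** (2 ** h)

# With Radon number r = 4 and base failure probability p = 0.1,
# levels h = 0, 1, 2, 3 give 0.4, 0.4^2, 0.4^4, 0.4^8:
bounds = [radon_machine_bound(4, 0.1, h) for h in range(4)]
```

Already at $h = 3$ the bound is below $10^{-3}$, even though a single base learner fails with probability $0.1$.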
In 17th Annual Symposium on Foundations of Computer Science, pages 98–108, 1976.
[8] Kenneth L. Clarkson, David Eppstein, Gary L. Miller, Carl Sturtivant, and Shang-Hua Teng. Approximating center points with iterative Radon points. International Journal of Computational Geometry & Applications, 6(3):357–377, 1996.
[9] Stephen A. Cook. Deterministic CFL's are accepted simultaneously in polynomial time and log squared space. In Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing, pages 338–345, 1979.
[10] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1):165–202, 2012.
[11] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, 2002.
[12] Yoav Freund, Yishay Mansour, and Robert E. Schapire. Why averaging classifiers can protect against overfitting. In Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, 2001.
[13] Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo. Limits to parallel computation: P-completeness theory. Oxford University Press, Inc., 1995.
[14] Steve Hanneke. The optimal sample complexity of PAC learning. Journal of Machine Learning Research, 17(38):1–15, 2016.
[15] William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[16] David Kay and Eugene W. Womble. Axiomatic convexity theory and relationships between the Carathéodory, Helly, and Radon numbers. Pacific Journal of Mathematics, 38(2):471–485, 1971.
[17] Clyde P. Kruskal, Larry Rudolph, and Marc Snir. A complexity theory of efficient parallel algorithms. Theoretical Computer Science, 71(1):95–132, 1990.
[18] Vipin Kumar, Ananth Grama, Anshul Gupta, and George Karypis. Introduction to parallel computing: design and analysis of algorithms.
Benjamin-Cummings Publishing Co., Inc., 1994.
[19] Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[20] Shao-Bo Lin, Xin Guo, and Ding-Xuan Zhou. Distributed learning with regularized least squares. Journal of Machine Learning Research, 18(92):1–31, 2017. URL http://jmlr.org/papers/v18/15-586.html.
[21] Philip M. Long and Rocco A. Servedio. Algorithms and hardness results for parallel large margin learning. Journal of Machine Learning Research, 14:3105–3128, 2013.
[22] Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, and Martin Takáč. Distributed optimization with arbitrary local solvers. Optimization Methods and Software, 32(4):813–848, 2017.
[23] Ryan McDonald, Mehryar Mohri, Nathan Silberman, Dan Walker, and Gideon S. Mann. Efficient large-scale distributed training of conditional maximum entropy models. In Advances in Neural Information Processing Systems, pages 1231–1239, 2009.
[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282, 2017.
[25] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, and Ameet Talwalkar. MLlib: Machine learning in Apache Spark. Journal of Machine Learning Research, 17(34):1–7, 2016.
[26] Cleve Moler. Matrix computation on distributed memory multiprocessors. Hypercube Multiprocessors, 86(181-195):31, 1986.
[27] Ilia Nouretdinov, Sergi G. Costafreda, Alexander Gammerman, Alexey Chervonenkis, Vladimir Vovk, Vladimir Vapnik, and Cynthia H.Y. Fu. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.
Neuroimage, 56(2):809–813, 2011.
[28] Dino Oglic and Thomas Gärtner. Nyström method with kernel k-means++ samples as landmarks. In Proceedings of the 34th International Conference on Machine Learning, pages 2652–2660, 2017.
[29] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[30] Johann Radon. Mengen konvexer Körper, die einen gemeinsamen Punkt enthalten. Mathematische Annalen, 83(1):113–115, 1921.
[31] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[32] Jonathan D. Rosenblatt and Boaz Nadler. On the optimality of averaging in distributed statistical learning. Information and Inference, 5(4):379–404, 2016.
[33] Alexander M. Rubinov. Abstract convexity and global optimization, volume 44. Springer Science & Business Media, 2013.
[34] Ohad Shamir and Nathan Srebro. Distributed stochastic optimization and learning. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, pages 850–857, 2014.
[35] Ohad Shamir, Nati Srebro, and Tong Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning, pages 1000–1008, 2014.
[36] Robin Sommer and Vern Paxson. Outside the closed world: On using machine learning for network intrusion detection. In Symposium on Security and Privacy, pages 305–316, 2010.
[37] Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Optimization for machine learning. MIT Press, 2012.
[38] John W. Tukey. Mathematics and the picturing of data.
In Proceedings of the International Congress of Mathematicians, volume 2, pages 523–531, 1975.
[39] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[40] Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: Networked science in machine learning. SIGKDD Explorations, 15(2):49–60, 2013.
[41] Vladimir N. Vapnik and Alexey Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280, 1971.
[42] Jeffrey S. Vitter and Jyh-Han Lin. Learning in parallel. Information and Computation, 96(2):179–202, 1992.
[43] Ulrike von Luxburg and Bernhard Schölkopf. Statistical learning theory: models, concepts, and results. In Inductive Logic, volume 10 of Handbook of the History of Logic, pages 651–706. Elsevier, 2011.
[44] Ian H. Witten, Eibe Frank, Mark A. Hall, and Christopher J. Pal. Data Mining: Practical machine learning tools and techniques. Elsevier, 2017.
[45] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14(1):3321–3363, 2013.
[46] Martin Zinkevich, Markus Weimer, Alexander J. Smola, and Lihong Li. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595–2603, 2010.
Deep Mean-Shift Priors for Image Restoration

Siavash A. Bigdeli, University of Bern, bigdeli@inf.unibe.ch
Meiguang Jin, University of Bern, jin@inf.unibe.ch
Paolo Favaro, University of Bern, favaro@inf.unibe.ch
Matthias Zwicker, University of Bern, and University of Maryland, College Park, zwicker@cs.umd.edu

Abstract

In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing.

1 Introduction

Image restoration tasks, such as deblurring and denoising, are ill-posed problems whose solution requires effective image priors. In the last decades, several natural image priors have been proposed, including total variation [29], gradient sparsity priors [12], models based on image patches [5], and Gaussian mixtures of local filters [25], just to name a few of the most successful ideas. See Figure 1 for a visual comparison of some popular priors. More recently, deep learning techniques have been used to construct generic image priors. Here, we propose an image prior that is directly based on an estimate of the natural image probability distribution. Although this seems like the most intuitive and straightforward idea to formulate a prior, only few previous techniques have taken this route [20]. Instead, most priors are built on intuition or statistics of natural images (e.g., sparse gradients).
Most previous deep learning priors are derived in the context of specific algorithms to solve the restoration problem, but it is not clear how these priors relate to the probability distribution of natural images. In contrast, our prior directly represents the natural image distribution smoothed with a Gaussian kernel, an approximation similar to using a Gaussian kernel density estimate. Note that we cannot hope to use the true image probability distribution itself as our prior, since we only have a finite set of samples from this distribution. We show a visual comparison in Figure 1, where our prior is able to capture the structure of the underlying image, but others tend to simplify the texture to straight lines and sharp edges. We formulate image restoration as a Bayes estimator, and define a utility function that includes the smoothed natural image distribution. We approximate the estimator with a bound, and show that the gradient of the bound includes the gradient of the logarithm of our prior, that is, the Gaussian-smoothed density. In addition, the gradient of the logarithm of the smoothed density is proportional to the mean-shift vector [8], and it has recently been shown that denoising autoencoders (DAEs) learn such a mean-shift vector field for a given set of data samples [1, 4]. Hence we call our prior a deep mean-shift prior, and our framework is an example of Bayesian inference using deep learning.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Visualization of image priors using the method by Shaham et al. [32] (panels: Input, Our prior, BM3D [9], EPLL [41], FoE [28], SF [31]): Our deep mean-shift prior learns complex structures with different curvatures. Other priors prefer simpler structures like lines with small curvature or sharp corners.
We demonstrate image restoration using our prior for noise-blind deblurring, super-resolution, and image demosaicing, where we solve Bayes estimation using a gradient descent approach. We achieve performance that is competitive with the state of the art for these applications. In summary, the main contributions of this paper are:

• A formulation of image restoration as a Bayes estimator that leverages the Gaussian-smoothed density of natural images as its prior. In addition, the formulation allows us to solve noise-blind restoration problems.
• An implementation of the prior, which we call deep mean-shift prior, that builds on denoising autoencoders (DAEs). We rely on the observation that DAEs learn a mean-shift vector field, which is proportional to the gradient of the logarithm of the prior.
• Image restoration techniques based on gradient-descent risk minimization with competitive results for noise-blind image deblurring, super-resolution, and demosaicing.

2 Related Work

Image Priors. A comprehensive review of previous image priors is outside the scope of this paper. Instead, we refer to the overview by Shaham et al. [32], where they propose a visualization technique to compare priors. Our approach is most related to techniques that leverage CNNs to learn image priors. These techniques build on the observation by Venkatakrishnan et al. [33] that many algorithms that solve image restoration via MAP estimation only need the proximal operator of the regularization term, which can be interpreted as a MAP denoiser [22]. Venkatakrishnan et al. [33] build on the ADMM algorithm and propose to replace the proximal operator of the regularizer with a denoiser such as BM3D [9] or NLM [5]. Unsurprisingly, this inspired several researchers to learn the proximal operator using CNNs [6, 40, 35, 22]. Meinhardt et al.
[22] consider various proximal algorithms including the proximal gradient method, ADMM, and the primal-dual hybrid gradient method, where in each case the proximal operator for the regularizer can be replaced by a neural network. They show that no single method produces systematically better results than the others. In the proximal techniques the relation between the proximal operator of the regularizer and the natural image probability distribution remains unclear. In contrast, we explicitly use the Gaussian-smoothed natural image distribution as a prior, and we show that we can learn the gradient of its logarithm using a denoising autoencoder. Romano et al. [27] designed a prior model that is also implemented by a denoiser, but that does not build on a proximal formulation such as ADMM. Interestingly, the gradient of their regularization term boils down to the residual of the denoiser, that is, the difference between its input and output, which is the same as in our approach. However, their framework does not establish the connection between the prior and the natural image probability distribution, as we do. Finally, Bigdeli and Zwicker [4] formulate an energy function, where they use a denoising autoencoder (DAE) network for the prior, as in our approach, but they do not address the case of noise-blind restoration. (The source code of the proposed method is available at https://github.com/siavashbigdeli/DMSP.)

Noise- and Kernel-Blind Deconvolution. Kernel-blind deconvolution has seen the most effort recently, while we support the fully (noise- and kernel-) blind setting. Noise-blind deblurring is usually performed by first estimating the noise level and then restoring with the estimated noise. Jin et al. [14] proposed a Bayes risk formulation that can perform deblurring by adaptively changing the regularization without the need of the noise variance estimate. Zhang et al.
[37, 38] explored a spatially-adaptive sparse prior and scale-space formulation to handle noise- or kernel-blind deconvolution. These methods, however, are tailored specifically to image deconvolution. Also, they only handle the noise- or kernel-blind case, but not the fully blind one.

3 Bayesian Formulation

We assume a standard model for image degradation,
$$y = k \ast \xi + n, \quad n \sim \mathcal{N}(0, \sigma_n^2), \quad (1)$$
where $\xi$ is the unknown image, $k$ is the blur kernel, $n$ is zero-mean Gaussian noise with variance $\sigma_n^2$, and $y$ is the observed degraded image. We restore an estimate $x$ of the unknown image by defining and maximizing an objective consisting of a data term and an image likelihood,
$$\operatorname*{argmax}_x \; \Phi(x) = \mathrm{data}(x) + \mathrm{prior}(x). \quad (2)$$
Our core contribution is to construct a prior that corresponds to the logarithm of the Gaussian-smoothed probability distribution of natural images. We will optimize the objective using gradient descent, and leverage the fact that we can learn the gradient of the prior using a denoising autoencoder (DAE). We next describe how we define our objective by formulating a Bayes estimator in Section 3.1, then explain how we leverage DAEs to obtain the gradient of our prior in Section 3.2, describe our gradient descent approach in Section 3.3, and finally our image restoration applications in Section 4.

3.1 Defining the Objective via a Bayes Estimator

A typical approach to solve the restoration problem is via a maximum a posteriori (MAP) estimate, where one considers the posterior distribution of the restored image $p(x|y) \propto p(y|x)p(x)$, derives an objective consisting of a sum of data and prior terms by taking the logarithm of the posterior, and maximizes it (minimizes the negative log-posterior, respectively).
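The degradation model in Equation (1) is straightforward to simulate. A minimal 1-D sketch (the circular FFT convolution and the function name are our own simplifications; the paper works with 2-D images):

```python
import numpy as np

def degrade(x, k, sigma_n, rng):
    """y = k * x + n with n ~ N(0, sigma_n^2) (Eq. 1), using circular
    convolution in 1-D for illustration."""
    blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n=x.size)))
    return blurred + rng.normal(0.0, sigma_n, size=x.size)
```

With a delta kernel and `sigma_n = 0` the observation equals the latent image, which is a handy sanity check before adding blur and noise.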
Instead, we will compute a Bayes estimator $x$ for the restoration problem by maximizing the posterior expectation of a utility function,
$$E_{\tilde{x}}[G(\tilde{x}, x)] = \int G(\tilde{x}, x)\, p(y|\tilde{x})\, p(\tilde{x})\, d\tilde{x}, \quad (3)$$
where $G$ denotes the utility function (e.g., a Gaussian), which encourages its two arguments to be similar. This is a generalization of MAP, where the utility is a Dirac impulse. Ideally, we would like to use the true data distribution as the prior $p(\tilde{x})$. But we only have data samples, hence we cannot learn this exactly. Therefore, we introduce a smoothed data distribution
$$p'(x) = E_\eta[p(x+\eta)] = \int g_\sigma(\eta)\, p(x+\eta)\, d\eta, \quad (4)$$
where $\eta$ has a Gaussian distribution with zero mean and variance $\sigma^2$, which is represented by the smoothing kernel $g_\sigma$. The key idea here is that it is possible to estimate the smoothed distribution $p'(x)$ or its gradient from sample data. In particular, we will need the gradient of its logarithm, which we will learn using denoising autoencoders (DAEs). We now define our utility function as
$$G(\tilde{x}, x) = g_\sigma(\tilde{x} - x)\, \frac{p'(x)}{p(\tilde{x})}. \quad (5)$$
This penalizes the estimate $x$ if the latent parameter $\tilde{x}$ is far from it. In addition, the term $p'(x)/p(\tilde{x})$ penalizes the estimate if its smoothed density is lower than the true density of the latent parameter. Unlike the utility in Jin et al. [14], this approach will allow us to express the prior directly using the smoothed distribution $p'$. By inserting our utility function into the posterior expected utility in Equation (3) we obtain
$$E_{\tilde{x}}[G(\tilde{x}, x)] = \int g_\sigma(\epsilon)\, p(y|x+\epsilon) \int g_\sigma(\eta)\, p(x+\eta)\, d\eta\, d\epsilon, \quad (6)$$
where the true density $p(\tilde{x})$ canceled out, as desired, and we introduced the substitution $\epsilon = \tilde{x} - x$. We finally formulate our objective by taking the logarithm of the expected utility in Equation (6), and introducing a lower bound that will allow us to split Equation (6) into a data term and an image likelihood.
By exploiting the concavity of the log function, we apply Jensen's inequality and get our objective $\Phi(x)$ as
$$\log E_{\tilde{x}}[G(\tilde{x}, x)] = \log \int g_\sigma(\epsilon)\, p(y|x+\epsilon) \int g_\sigma(\eta)\, p(x+\eta)\, d\eta\, d\epsilon$$
$$\geq \int g_\sigma(\epsilon) \log\left[\, p(y|x+\epsilon) \int g_\sigma(\eta)\, p(x+\eta)\, d\eta \,\right] d\epsilon$$
$$= \underbrace{\int g_\sigma(\epsilon) \log p(y|x+\epsilon)\, d\epsilon}_{\text{data term } \mathrm{data}(x)} + \underbrace{\log \int g_\sigma(\eta)\, p(x+\eta)\, d\eta}_{\text{image likelihood } \mathrm{prior}(x)} = \Phi(x). \quad (7)$$

Image Likelihood. We denote the image likelihood as
$$\mathrm{prior}(x) = \log \int g_\sigma(\eta)\, p(x+\eta)\, d\eta. \quad (8)$$
The key observation here is that our prior expresses the image likelihood as the logarithm of the Gaussian-smoothed true natural image distribution $p(x)$, which is similar to a kernel density estimate.

Data Term. Given that the degradation noise is Gaussian, we see that [14]
$$\mathrm{data}(x) = \int g_\sigma(\epsilon) \log p(y|x+\epsilon)\, d\epsilon = -\frac{|y - k \ast x|^2}{2\sigma_n^2} - \frac{M\sigma^2}{2\sigma_n^2}|k|^2 - N \log \sigma_n + \mathrm{const}, \quad (9)$$
where $M$ and $N$ denote the number of pixels in $x$ and $y$ respectively. This will allow us to address noise-blind problems as we will describe in detail in Section 4.

3.2 Gradient of the Prior via Denoising Autoencoders (DAE)

A key insight of our approach is that we can effectively learn the gradients of our prior in Equation (8) using denoising autoencoders (DAEs). A DAE $r_\sigma$ is trained to minimize [34]
$$L_{\mathrm{DAE}} = E_{\eta,x}\left[\, |x - r_\sigma(x+\eta)|^2 \,\right], \quad (10)$$
where the expectation is over all images $x$ and Gaussian noise $\eta$ with variance $\sigma^2$, and $r_\sigma$ indicates that the DAE was trained with noise variance $\sigma^2$. Note that this is the same loss as in non-parametric least squares estimators [23, 26, 20]. Similar to Alain and Bengio [1], we parametrize this estimator using neural networks for fast evaluation. They show that the output $r_\sigma(x)$ of the optimal DAE (by assuming unlimited capacity) is related to the true data distribution $p(x)$ as
$$r_\sigma(x) = x - \frac{E_\eta[p(x-\eta)\,\eta]}{E_\eta[p(x-\eta)]} = x - \frac{\int g_\sigma(\eta)\, p(x-\eta)\, \eta\, d\eta}{\int g_\sigma(\eta)\, p(x-\eta)\, d\eta}, \quad (11)$$
where the noise has a Gaussian distribution $g_\sigma$ with standard deviation $\sigma$. This is simply a continuous formulation of mean-shift, and $g_\sigma$ corresponds to the smoothing kernel in our prior, Equation (8).
To obtain the relation between the DAE and the desired gradient of our prior, we first rewrite the numerator in Equation (11) using the Gaussian derivative definition to remove $\eta$, that is,
$$\int g_\sigma(\eta)\, p(x-\eta)\, \eta\, d\eta = -\sigma^2 \int \nabla g_\sigma(\eta)\, p(x-\eta)\, d\eta = -\sigma^2 \nabla \int g_\sigma(\eta)\, p(x-\eta)\, d\eta, \quad (12)$$
where we used the Leibniz rule to interchange the $\nabla$ operator with the integral. Plugging this back into Equation (11), we have
$$r_\sigma(x) = x + \sigma^2\, \frac{\nabla \int g_\sigma(\eta)\, p(x-\eta)\, d\eta}{\int g_\sigma(\eta)\, p(x-\eta)\, d\eta} = x + \sigma^2 \nabla \log \int g_\sigma(\eta)\, p(x-\eta)\, d\eta. \quad (13)$$
One can now see that the DAE error, that is, the difference $r_\sigma(x) - x$ between the output of the DAE and its input, is the gradient of the image likelihood in Equation (8). Hence, a main result of our approach is that we can write the gradient of our prior using the DAE error,
$$\nabla \mathrm{prior}(x) = \nabla \log \int g_\sigma(\eta)\, p(x+\eta)\, d\eta = \frac{1}{\sigma^2}\left( r_\sigma(x) - x \right). \quad (14)$$

Table 1: Gradient descent steps for non-blind (NB), noise-blind (NA), and kernel-blind (KE) image deblurring. Kernel-blind deblurring involves the steps for (NA) and (KE) to update image and kernel.

NB:  1. $u_t = \frac{1}{\sigma_n^2} K^T(Kx_{t-1} - y) - \nabla\mathrm{prior}_L^s(x_{t-1})$;  2. $\bar{u} = \mu\bar{u} - \alpha u_t$;  3. $x_t = x_{t-1} + \bar{u}$
NA:  1. $u_t = \lambda_t K^T(Kx_{t-1} - y) - \nabla\mathrm{prior}_L^s(x_{t-1})$;  2. $\bar{u} = \mu\bar{u} - \alpha u_t$;  3. $x_t = x_{t-1} + \bar{u}$
KE:  4. $v_t = \lambda_t X^T(K_{t-1}x_{t-1} - y) + M\sigma^2 k_{t-1}$;  5. $\bar{v} = \mu_k\bar{v} - \alpha_k v_t$;  6. $k_t = k_{t-1} + \bar{v}$

3.3 Stochastic Gradient Descent

We consider the optimization as minimization of the negative of our objective $\Phi(x)$ and refer to it as gradient descent. Similar to Bigdeli and Zwicker [4], we observed that the trained DAE is overfitted to noisy images. Because of the large gap in dimensionality between the embedding space and the natural image manifold, the vast majority of training inputs (noisy images) for the DAE lie at a distance very close to $\sigma$ from the natural image manifold. Hence, the DAE cannot effectively learn mean-shift vectors for locations that are closer than $\sigma$ to the natural image manifold. In other words, our DAE does not produce meaningful results for input images that do not exhibit noise close to the DAE training $\sigma$.
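Equation (13) can be verified in closed form for toy 1-D Gaussian data $x \sim \mathcal{N}(0, \tau^2)$: the $\sigma$-smoothed density is $\mathcal{N}(0, \tau^2 + \sigma^2)$, so $x + \sigma^2 \nabla \log p'(x) = x\,\tau^2/(\tau^2 + \sigma^2)$, which is exactly the posterior-mean (i.e., optimal) denoiser. A sketch with our own toy variances:

```python
tau2, sigma2 = 4.0, 1.0   # toy data variance and DAE noise variance

def grad_log_smoothed(x):
    # Smoothed density N(0, tau2 + sigma2) has log-density gradient
    # -x / (tau2 + sigma2).
    return -x / (tau2 + sigma2)

def ideal_dae(x):
    # Optimal (posterior-mean) denoiser for Gaussian data: linear shrinkage.
    return x * tau2 / (tau2 + sigma2)

x = 2.5
lhs = ideal_dae(x)                           # r_sigma(x)
rhs = x + sigma2 * grad_log_smoothed(x)      # Eq. (13)
```

Both sides evaluate to the same shrunken estimate, which is the DAE-error identity of Equation (14) in miniature.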
To address this issue, we reformulate our prior to perform stochastic gradient descent steps that include noise sampling. We rewrite our prior from Equation (8) as
$$\mathrm{prior}(x) = \log \int g_\sigma(\eta)\, p(x+\eta)\, d\eta \quad (15)$$
$$= \log \int g_{\sigma_2}(\eta_2) \int g_{\sigma_1}(\eta_1)\, p(x+\eta_1+\eta_2)\, d\eta_1\, d\eta_2 \quad (16)$$
$$\geq \int g_{\sigma_2}(\eta_2) \log\left[\, \int g_{\sigma_1}(\eta_1)\, p(x+\eta_1+\eta_2)\, d\eta_1 \,\right] d\eta_2 = \mathrm{prior}_L(x), \quad (17)$$
where $\sigma_1^2 + \sigma_2^2 = \sigma^2$; we used the fact that two Gaussian convolutions are equivalent to a single convolution with a Gaussian whose variance is the sum of the two, and we applied Jensen's inequality again. This leads to a new lower bound for the prior, which we call $\mathrm{prior}_L(x)$. Note that the bound proposed by Jin et al. [14] corresponds to the special case where $\sigma_1 = 0$ and $\sigma_2 = \sigma$. We address our DAE overfitting issue by using the new lower bound $\mathrm{prior}_L(x)$ with $\sigma_1 = \sigma_2 = \sigma/\sqrt{2}$. Its gradient is
$$\nabla \mathrm{prior}_L(x) = \frac{2}{\sigma^2} \int g_{\sigma/\sqrt{2}}(\eta_2) \left( r_{\sigma/\sqrt{2}}(x+\eta_2) - (x+\eta_2) \right) d\eta_2. \quad (18)$$
In practice, computing the integral over $\eta_2$ is not possible at runtime. Instead, we approximate the integral with a single noise sample, which leads to the stochastic evaluation of the gradient as
$$\nabla \mathrm{prior}_L^s(x) = \frac{2}{\sigma^2} \left( r_{\sigma/\sqrt{2}}(x+\eta_2) - x \right), \quad (19)$$
where $\eta_2 \sim \mathcal{N}(0, \sigma_2^2)$. This addresses the overfitting issue, since it means we add noise each time before we evaluate the DAE. Given the stochastically sampled gradient of the prior, we apply a gradient descent approach with momentum that consists of the following steps:
1. $u_t = -\nabla\mathrm{data}(x_{t-1}) - \nabla\mathrm{prior}_L^s(x_{t-1})$
2. $\bar{u} = \mu\bar{u} - \alpha u_t$
3. $x_t = x_{t-1} + \bar{u}$ (20)
where $u_t$ is the update step for $x$ at iteration $t$, $\bar{u}$ is the running step, and $\mu$ and $\alpha$ are the momentum and step size.

4 Image Restoration using the Deep Mean-Shift Prior

We next describe the detailed gradient descent steps, including the derivatives of the data term, for different image restoration tasks. We provide a summary in Table 1. For brevity, we omit the role of downsampling (required for super-resolution) and masking.
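The three steps in Equation (20) amount to heavy-ball gradient ascent on $\Phi$. A sketch with hypothetical callables for the two gradient terms (the DAE evaluation is abstracted away; function and argument names are ours):

```python
import numpy as np

def restore(grad_data, grad_prior_s, x0, alpha=0.1, mu=0.9, iters=300):
    """Momentum updates from Eq. (20):
    u_t = -grad data - grad prior ; ubar = mu*ubar - alpha*u_t ; x += ubar."""
    x = np.array(x0, dtype=float)
    ubar = np.zeros_like(x)
    for _ in range(iters):
        u = -grad_data(x) - grad_prior_s(x)
        ubar = mu * ubar - alpha * u
        x = x + ubar
    return x

# Toy check: with data(x) = -(x - 3)^2 / 2 (gradient -(x - 3)) and no
# prior, the iterate should climb to the maximizer x = 3.
x_hat = restore(lambda x: -(x - 3.0), lambda x: 0.0, 0.0)
```

With the paper's defaults `alpha = 0.1` and `mu = 0.9`, the effective step length is roughly $\alpha/(1-\mu)$, which is why the momentum term matters for the reported iteration counts.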
Table 2: Average PSNR (dB) for non-blind deconvolution on two datasets (*trained for σn = 2.55).

                     Levin [19]                   Berkeley [2]
Method       σn:  2.55  5.10  7.65  10.2      2.55  5.10  7.65  10.2
FD [18]           30.03 28.40 27.32 26.52     24.44 23.24 22.64 22.07
EPLL [41]         32.03 29.79 28.31 27.20     25.38 23.53 22.54 21.91
RTF-6 [30]*       32.36 26.34 21.43 17.33     25.70 23.45 19.83 16.94
CSF [31]          29.85 28.13 27.28 26.70     24.73 23.61 22.88 22.44
DAEP [4]          32.64 30.07 28.30 27.15     25.42 23.67 22.78 22.21
IRCNN [40]        30.86 29.85 28.83 28.05     25.60 24.24 23.42 22.91
EPLL [41] + NE    31.86 29.77 28.28 27.16     25.36 23.53 22.55 21.90
EPLL [41] + NA    32.16 30.25 28.96 27.85     25.57 23.90 22.91 22.27
TV-L2 + NA        31.05 29.14 28.03 27.16     24.61 23.65 22.90 22.34
GradNet 7S [14]   31.43 28.88 27.55 26.96     25.57 24.23 23.46 22.94
Ours              29.68 29.45 28.95 28.29     25.69 24.45 23.60 22.99
Ours + NA         32.57 30.21 29.00 28.23     26.00 24.47 23.61 22.97

Non-Blind Deblurring (NB). The gradient descent steps for non-blind deblurring with a known kernel and degradation noise variance are given in Table 1, top row (NB). Here $K$ denotes the Toeplitz matrix of the blur kernel $k$.

Noise-Adaptive Deblurring (NA). When the degradation noise variance $\sigma_n^2$ is unknown, we can solve Equation (9) for the optimal $\sigma_n^2$ (since it is independent of the prior), which gives
$$\sigma_n^2 = \frac{1}{N}\left( |y - k \ast x|^2 + M\sigma^2 |k|^2 \right). \quad (21)$$
By plugging this back into the equation, we get the following data term
$$\mathrm{data}(x) = -\frac{N}{2} \log\left( |y - k \ast x|^2 + M\sigma^2 |k|^2 \right), \quad (22)$$
which is independent of the degradation noise variance $\sigma_n^2$. We show the gradient descent steps in Table 1, second row (NA), where $\lambda_t = N\left( |y - Kx_{t-1}|^2 + M\sigma^2 |k|^2 \right)^{-1}$ adaptively scales the data term with respect to the prior.

Noise- and Kernel-Blind Deblurring (NA+KE). Gradient descent in noise-blind optimization includes an intuitive regularization for the kernel. We can use the objective in Equation (22) to jointly optimize for the unknown image and the unknown kernel.
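The closed-form noise level in Equation (21) is a one-liner; a minimal sketch with made-up inputs (the function and argument names are ours):

```python
import numpy as np

def noise_variance(y, kx, k, M, sigma2):
    """Eq. (21): sigma_n^2 = (|y - k*x|^2 + M sigma^2 |k|^2) / N,
    where kx is the blurred estimate k * x and N = y.size."""
    return (np.sum((y - kx) ** 2) + M * sigma2 * np.sum(k ** 2)) / y.size
```

Substituting this value back into Equation (9) eliminates $\sigma_n$ from the objective, which is precisely what makes the NA steps noise-adaptive.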
The gradient descent steps to update the image remain as in Table 1, second row (NA), and we take additional steps to update the kernel estimate, as in Table 1, third row (KE). Additionally, we project the kernel by applying $k_t = \max(k_t, 0)$ and $k_t = k_t / |k_t|_1$ after each step.

5 Experiments and Results

Our DAE uses the neural network architecture by Zhang et al. [39]. We generated training samples by adding Gaussian noise to images from ImageNet [10]. We experimented with different noise levels and found $\sigma_1 = 11$ to perform well for all our deblurring and super-resolution experiments. Unless mentioned, for image restoration we always take 300 iterations with step length $\alpha = 0.1$ and momentum $\mu = 0.9$. The runtime of our method is linear in the number of pixels, and our implementation takes about 0.2 seconds per iteration for one megapixel on an Nvidia Titan X (Pascal).

5.1 Image Deblurring: Non-Blind and Noise-Blind

In this section we evaluate our method for image deblurring using two datasets. Table 2 reports the average PSNR for 32 images from the Levin et al. [19] and 50 images from the Berkeley [2] segmentation dataset, where 10 images are randomly selected and blurred with 5 kernels as in Jin et al. [14]. We highlight the best performing PSNR in bold and underline the second best value. The upper half of the table includes non-blind methods for deblurring. EPLL [41] + NE uses a noise estimation step followed by non-blind deblurring. Noise-blind experiments are denoted by NA for noise adaptivity.

Figure 2: Visual comparison of our deconvolution results (panels: Ground Truth, EPLL [41], DAEP [4], GradNet 7S [14], Ours, Ours + NA).

Figure 3: Performance of our method for fully (noise- and kernel-) blind deblurring on Levin's set (left panels: Ground Truth, Blurred with 1% noise, Ours (blind); right plot: percentage of images below each SSD error ratio for Sun et al., Wipf and Zhang, Levin et al., Babacan et al., Log-TV PD, Log-TV MM, and Ours).
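The kernel projection applied after each KE step (clip to non-negative values, then renormalize to unit $\ell_1$ mass) can be sketched as follows; the all-zero guard is our own defensive addition:

```python
import numpy as np

def project_kernel(k):
    """k_t = max(k_t, 0) followed by k_t = k_t / |k_t|_1."""
    k = np.maximum(k, 0.0)
    s = k.sum()
    return k / s if s > 0 else k   # guard against an all-zero kernel
```

This keeps the iterate a valid blur kernel (non-negative, unit mass) throughout the joint image/kernel optimization.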
We include our results for non-blind (Ours) and noise-blind (Ours + NA). Our noise-adaptive approach consistently performs well in all experiments and on average we achieve better results than the state of the art. Figure 2 provides a visual comparison of our results. Our prior is able to produce sharp textures while also preserving the natural image structure.

5.2 Image Deblurring: Noise- and Kernel-Blind

We performed fully blind deconvolution with our method using Levin et al.'s [19] dataset. In this test, we performed 1000 gradient descent iterations. We used momentum $\mu = 0.7$ and step size $\alpha = 0.3$ for the unknown image, and momentum $\mu_k = 0.995$ and step size $\alpha_k = 0.005$ for the unknown kernel. Figure 3 shows visual results of fully blind deblurring and a performance comparison to the state of the art (last column). We compare the SSD error ratio and the number of images in the dataset that achieve error ratios less than a threshold. Results for other methods are as reported by Perrone and Favaro [24]. Our method can reconstruct all the blurry images in the dataset with error ratios less than 3.5. Note that our optimization performs end-to-end estimation of the final results and we do not use the common two-stage blind deconvolution (kernel estimation, followed by non-blind deconvolution). Additionally, our method uses a noise-adaptive scheme where we do not assume knowledge of the input noise level.

5.3 Super-resolution

To demonstrate the generality of our prior, we perform an additional test with single image super-resolution. We evaluate our method on the two common datasets Set5 [3] and Set14 [36] for different upsampling scales. Since these tests do not include degradation noise ($\sigma_n = 0$), we perform our optimization with a rough weight for the prior and decrease it gradually to zero. We compare our method in Table 3. The upper half of the table represents methods that are specifically trained for super-resolution.
SRCNN [11] and TNRD [7] have separate models trained for the ×2, ×3, and ×4 scales, and we used the ×4 model to produce the ×5 results. VDSR [16] and DnCNN-3 [39] have a single model trained for the ×2, ×3, and ×4 scales, which we also used to produce the ×5 results. The lower half of the table lists general priors that are not designed specifically for super-resolution. Our method performs on par with the state of the art over all the upsampling scales.

                      Set5 [3]                      Set14 [36]
Method           ×2     ×3     ×4     ×5       ×2     ×3     ×4     ×5
Bicubic        31.80  28.67  26.73  25.32    28.53  25.92  24.44  23.46
SRCNN [11]     34.50  30.84  28.60  26.12    30.52  27.48  25.76  24.05
TNRD [7]       34.62  31.08  28.83  26.88    30.53  27.60  25.92  24.61
VDSR [16]      34.50  31.39  29.19  25.91    30.72  27.81  26.16  24.01
DnCNN-3 [39]   35.20  31.58  29.30  26.30    30.99  27.93  26.25  24.26
DAEP [4]       35.23  31.44  29.01  27.19    31.07  27.93  26.13  24.88
IRCNN [40]     35.07  31.26  29.01  27.13    30.79  27.68  25.96  24.73
Ours           35.16  31.38  29.16  27.38    30.99  27.90  26.22  25.01

Table 3: Average PSNR (dB) for super-resolution on two datasets.

Method:  Matlab [21]  RTF [15]  Gharbi et al. [13]  Gharbi et al. [13] f.t.  SEM [17]  Ours
PSNR:    33.9         37.8      38.4                38.6                     38.8      38.7

Table 4: Average PSNR (dB) in linear RGB space for demosaicing on the Panasonic dataset [15].

5.4 Demosaicing

We finally performed a demosaicing experiment on the dataset introduced by Khashabi et al. [15]. This dataset is constructed by taking RAW images from a Panasonic camera and downsampling them to construct the ground truth data. Due to this downsampling, for this evaluation we train a DAE with noise standard deviation σ1 = 3. The test dataset consists of 100 noisy images captured by a Panasonic camera using a Bayer color filter array (RGGB). We initialize our method with Matlab's demosaic function [21]. For an even better initialization, we perform our initial optimization with a large degradation noise estimate (σn = 2.5) and then continue the optimization with a lower estimate (σn = 1).
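For reference, the PSNR values reported in Tables 2–4 follow the standard definition 10·log10(peak²/MSE); a minimal sketch of that definition (not code from the paper):

```python
import numpy as np

def psnr(estimate, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((estimate.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
est = ref + 16.0          # constant error of 16 gray levels, so MSE = 256
assert abs(psnr(est, ref) - 10 * np.log10(255.0 ** 2 / 256.0)) < 1e-9
```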
We summarize the quantitative results in Table 4. Our method is again on par with the state of the art. Additionally, our prior is not trained for a specific color filter array and is therefore not limited to a specific sub-pixel order. Figure 4 shows a qualitative comparison, where our method produces much smoother results than the previous state of the art.

Figure 4: Visual comparison for demosaicing noisy images from the Panasonic dataset [15] (panels: Ground Truth, RTF [15], Gharbi et al. [13], SEM [17], Ours).

6 Conclusions

We proposed a Bayesian deep learning framework for image restoration with a generic image prior that directly represents the Gaussian-smoothed natural image probability distribution. We showed that we can compute the gradient of our prior efficiently using a trained denoising autoencoder (DAE). Our formulation allows us to learn a single prior and use it for many image restoration tasks, such as noise-blind deblurring, super-resolution, and image demosaicing. Our results indicate that we achieve performance that is competitive with the state of the art for these applications. In the future, we would like to explore generalizing from Gaussian smoothing of the underlying distribution to other types of kernels. We are also considering multi-scale optimization, where one would reduce the Bayes utility support gradually to get a tighter bound with respect to the maximum a posteriori solution. Finally, our approach is not limited to image restoration and could be exploited to address other inverse problems.

Acknowledgments. MJ and PF acknowledge support from the Swiss National Science Foundation (SNSF) on project 200021-153324.

References

[1] Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15:3743–3773, 2014. [2] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5):898–916, 2011. [3] Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie-Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In British Machine Vision Conference, BMVC 2012, Surrey, UK, September 3-7, 2012, pages 1–10, 2012. [4] Siavash Arjomand Bigdeli and Matthias Zwicker. Image restoration using autoencoding priors. arXiv preprint arXiv:1703.09964, 2017. [5] Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In Computer Vision and Pattern Recognition (CVPR), 2005 IEEE Conference on, volume 2, pages 60–65. IEEE, 2005. [6] JH Chang, Chun-Liang Li, Barnabas Poczos, BVK Kumar, and Aswin C Sankaranarayanan. One network to solve them all—solving linear inverse problems using deep projection models. arXiv preprint arXiv:1703.09912, 2017. [7] Yunjin Chen and Thomas Pock. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1256– 1272, 2017. [8] Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002. [9] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising with block-matching and 3d filtering. In Electronic Imaging 2006, pages 606414–606414. International Society for Optics and Photonics, 2006. [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on, pages 248–255. IEEE, 2009. [11] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016. 
[12] Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T Roweis, and William T Freeman. Removing camera shake from a single photograph. In ACM Transactions on Graphics (TOG), volume 25, pages 787–794. ACM, 2006. [13] Michaël Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand. Deep joint demosaicking and denoising. ACM Transactions on Graphics (TOG), 35(6):191, 2016. [14] M. Jin, S. Roth, and P. Favaro. Noise-blind image deblurring. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017. [15] Daniel Khashabi, Sebastian Nowozin, Jeremy Jancsary, and Andrew W Fitzgibbon. Joint demosaicing and denoising via learned nonparametric random fields. IEEE Transactions on Image Processing, 23(12):4968– 4981, 2014. [16] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pages 1646–1654. IEEE, 2016. [17] Teresa Klatzer, Kerstin Hammernik, Patrick Knobelreiter, and Thomas Pock. Learning joint demosaicing and denoising based on sequential energy minimization. In Computational Photography (ICCP), 2016 IEEE International Conference on, pages 1–11. IEEE, 2016. [18] Dilip Krishnan and Rob Fergus. Fast image deconvolution using hyper-laplacian priors. In Advances in Neural Information Processing Systems, pages 1033–1041, 2009. 9 [19] Anat Levin, Rob Fergus, Frédo Durand, and William T Freeman. Image and depth from a conventional camera with a coded aperture. ACM Transactions on Graphics (TOG), 26(3):70, 2007. [20] Anat Levin and Boaz Nadler. Natural image denoising: Optimality and inherent bounds. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2833–2840. IEEE, 2011. [21] Henrique S Malvar, Li-wei He, and Ross Cutler. High-quality linear interpolation for demosaicing of bayerpatterned color images. In Acoustics, Speech, and Signal Processing, 2004. Proceedings.(ICASSP’04). 
IEEE International Conference on, volume 3, pages iii–485. IEEE, 2004. [22] Tim Meinhardt, Michael Möller, Caner Hazirbas, and Daniel Cremers. Learning proximal operators: Using denoising networks for regularizing inverse imaging problems. arXiv preprint arXiv:1704.03488, 2017. [23] KOICHI Miyasawa. An empirical bayes estimator of the mean of a normal population. Bull. Inst. Internat. Statist, 38(181-188):1–2, 1961. [24] Daniele Perrone and Paolo Favaro. A logarithmic image prior for blind deconvolution. International Journal of Computer Vision, 117(2):159–172, 2016. [25] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11):1338–1351, Nov 2003. [26] M Raphan and E P Simoncelli. Least squares estimation without priors or supervision. Neural Computation, 23(2):374–420, Feb 2011. Published online, Nov 2010. [27] Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by denoising (red). arXiv preprint arXiv:1611.02862, 2016. [28] Stefan Roth and Michael J Black. Fields of experts: A framework for learning image priors. In Computer Vision and Pattern Recognition (CVPR), 2005 IEEE Conference on, volume 2, pages 860–867. IEEE, 2005. [29] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259 – 268, 1992. [30] Uwe Schmidt, Jeremy Jancsary, Sebastian Nowozin, Stefan Roth, and Carsten Rother. Cascades of regression tree fields for image restoration. IEEE transactions on pattern analysis and machine intelligence, 38(4):677–689, 2016. [31] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2774–2781. IEEE, 2014. [32] Tamar Rott Shaham and Tomer Michaeli. Visualizing image priors. 
In European Conference on Computer Vision, pages 136–153. Springer, 2016. [33] Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In GlobalSIP, pages 945–948. IEEE, 2013. [34] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008. [35] Lei Xiao, Felix Heide, Wolfgang Heidrich, Bernhard Schölkopf, and Michael Hirsch. Discriminative transfer learning for general image restoration. arXiv preprint arXiv:1703.09245, 2017. [36] Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In International Conference on Curves and Surfaces, pages 711–730. Springer, 2010. [37] Haichao Zhang and David Wipf. Non-uniform camera shake removal using a spatially-adaptive sparse penalty. In Advances in Neural Information Processing Systems, pages 1556–1564, 2013. [38] Haichao Zhang and Jianchao Yang. Scale adaptive blind deblurring. In Advances in Neural Information Processing Systems, pages 3005–3013, 2014. [39] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. arXiv preprint arXiv:1608.03981, 2016. [40] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. arXiv preprint arXiv:1704.03264, 2017. [41] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 479–486. IEEE, 2011.
On Structured Prediction Theory with Calibrated Convex Surrogate Losses Anton Osokin INRIA/ENS∗, Paris, France HSE†, Moscow, Russia Francis Bach INRIA/ENS∗, Paris, France Simon Lacoste-Julien MILA and DIRO Université de Montréal, Canada Abstract We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called “calibration function” relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes in the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for structured prediction. 1 Introduction Structured prediction is a subfield of machine learning aiming at making multiple interrelated predictions simultaneously. The desired outputs (labels) are typically organized in some structured object such as a sequence, a graph, an image, etc. Tasks of this type appear in many practical domains such as computer vision [34], natural language processing [42] and bioinformatics [19]. The structured prediction setup has at least two typical properties differentiating it from the classical binary classification problems extensively studied in learning theory: 1. Exponential number of classes: this brings both additional computational and statistical challenges. By exponential, we mean exponentially large in the size of the natural dimension of output, e.g., the number of all possible sequences is exponential w.r.t. the sequence length. 2. Cost-sensitive learning: in typical applications, prediction mistakes are not all equally costly.
The prediction error is usually measured with a highly-structured task-specific loss function, e.g., Hamming distance between sequences of multi-label variables or mean average precision for ranking. Despite many algorithmic advances to tackle structured prediction problems [4, 35], there have been relatively few papers devoted to its theoretical understanding. Notable recent exceptions that made significant progress include Cortes et al. [13] and London et al. [28] (see references therein) which proposed data-dependent generalization error bounds in terms of popular empirical convex surrogate losses such as the structured hinge loss [44, 45, 47]. A question not addressed by these works is whether their algorithms are consistent: does minimizing their convex bounds with infinite data lead to the minimization of the task loss as well? Alternatively, the structured probit and ramp losses are consistent [31, 30], but non-convex and thus it is hard to obtain computational guarantees for them. In this paper, we aim at getting the property of consistency for surrogate losses that can be efficiently minimized with guarantees, and thus we consider convex surrogate losses. The consistency of convex surrogates is well understood in the case of binary classification [50, 5, 43] and there is significant progress in the case of multi-class 0-1 loss [49, 46] and general multi-class loss functions [3, 39, 48]. A large body of work specifically focuses on the related tasks of ranking [18, 9, 40] and ordinal regression [37].

∗DI École normale supérieure, CNRS, PSL Research University. †National Research University Higher School of Economics.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Contributions. In this paper, we study consistent convex surrogate losses specifically in the context of an exponential number of classes. We argue that even while being consistent, a convex surrogate might not allow efficient learning.
As a concrete example, Ciliberto et al. [10] recently proposed a consistent approach to structured prediction, but the constant in their generalization error bound can be exponentially large as we explain in Section 5. There are two possible sources of difficulties from the optimization perspective: to reach adequate accuracy on the task loss, one might need to optimize a surrogate loss to exponentially small accuracy; or to reach adequate accuracy on the surrogate loss, one might need an exponential number of algorithm steps because of exponentially large constants in the convergence rate. We propose a theoretical framework that jointly tackles these two aspects and allows to judge the feasibility of efficient learning. In particular, we construct a calibration function [43], i.e., a function setting the relationship between accuracy on the surrogate and task losses, and normalize it by the means of convergence rate of an optimization algorithm. Aiming for the simplest possible application of our framework, we propose a family of convex surrogates that are consistent for any given task loss and can be optimized using stochastic gradient descent. For a special case of our family (quadratic surrogate), we provide a complete analysis including general lower and upper bounds on the calibration function for any task loss, with exact values for the 0-1, block 0-1 and Hamming losses. We observe that to have a tractable learning algorithm, one needs both a structured loss (not the 0-1 loss) and appropriate constraints on the predictor, e.g., in the form of linear constraints for the score vector functions. Our framework also indicates that in some cases it might be beneficial to use non-consistent surrogates. In particular, a non-consistent surrogate might allow optimization only up to specific accuracy, but exponentially faster than a consistent one. We introduce the structured prediction setting suitable for studying consistency in Sections 2 and 3. 
We analyze the calibration function for the quadratic surrogate loss in Section 4. We review related work in Section 5 and conclude in Section 6.

2 Structured prediction setup

In structured prediction, the goal is to predict a structured output y ∈ Y (such as a sequence, a graph, an image) given an input x ∈ X. The quality of prediction is measured by a task-dependent loss function L(ŷ, y | x) ≥ 0 specifying the cost for predicting ŷ when the correct output is y. In this paper, we consider the case when the number of possible predictions and the number of possible labels are both finite. For simplicity,¹ we also assume that the sets of possible predictions and correct outputs always coincide and do not depend on x. We refer to this set as the set of labels Y, denote its cardinality by k, and map its elements to 1, . . . , k. In this setting, assuming that the loss function depends only on ŷ and y, but not on x directly, the loss is defined by a loss matrix L ∈ R^{k×k}. We assume that all the elements of the matrix L are non-negative and will use L_max to denote the maximal element. Compared to multi-class classification, k is typically exponentially large in the size of the natural dimension of y, e.g., when Y contains all possible sequences of symbols from a finite alphabet. Following standard practices in structured prediction [12, 44], we define the prediction model by a score function f : X → R^k specifying a score f_y(x) for each possible output y ∈ Y. The final prediction is done by selecting a label with the maximal value of the score,

pred(f(x)) := argmax_{ŷ∈Y} f_ŷ(x),    (1)

with some fixed strategy to resolve ties. To simplify the analysis, we assume that among the labels with maximal scores, the predictor always picks the one with the smallest index. The goal of prediction-based machine learning consists in finding a predictor that works well on the unseen test set, i.e., data points coming from the same distribution D as the one generating the training data.
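The loss-matrix view and the prediction rule (1), including the smallest-index tie-breaking, are easy to instantiate; a minimal sketch (`np.argmax` already returns the first maximizer, which matches the tie-breaking assumed above):

```python
import numpy as np

def pred(f):
    """Prediction rule (1): argmax over scores; np.argmax returns the
    smallest index among ties, matching the assumed tie-breaking."""
    return int(np.argmax(f))

# 0-1 loss matrix for k labels: L01[yhat, y] = [yhat != y].
k = 4
L01 = 1.0 - np.eye(k)

f = np.array([0.3, 0.7, 0.7, 0.1])   # tie between labels 1 and 2
assert pred(f) == 1                   # smallest index wins
assert L01[pred(f), 2] == 1.0 and L01[pred(f), 1] == 0.0
```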
One way to formalize this is to minimize the generalization error, often referred to as the actual (or population) risk based on the loss L,

R_L(f) := E_{(x,y)∼D} L(pred(f(x)), y).    (2)

Minimizing the actual risk (2) is usually hard. The standard approach is to minimize a surrogate risk, which is a different objective easier to optimize, e.g., convex. We define a surrogate loss as a function Φ : R^k × Y → R depending on a score vector f = f(x) ∈ R^k and a target label y ∈ Y as input arguments. We denote the y-th component of f with f_y. The surrogate risk (the Φ-risk) is defined as

R_Φ(f) := E_{(x,y)∼D} Φ(f(x), y),    (3)

where the expectation is taken w.r.t. the data-generating distribution D. To make the minimization of (3) well-defined, we always assume that the surrogate loss Φ is bounded from below and continuous. Examples of common surrogate losses include the structured hinge loss [44, 47], Φ_SSVM(f, y) := max_{ŷ∈Y} (f_ŷ + L(ŷ, y)) − f_y, the log loss (maximum likelihood learning) used, e.g., in conditional random fields [25], Φ_log(f, y) := log(Σ_{ŷ∈Y} exp f_ŷ) − f_y, and their hybrids [38, 21, 22, 41]. In terms of task losses, we consider the unstructured 0-1 loss, L_01(ŷ, y) := [ŷ ≠ y],² and the two following structured losses: the block 0-1 loss with b equal blocks of labels, L_{01,b}(ŷ, y) := [ŷ and y are not in the same block]; and the (normalized) Hamming loss between tuples of T binary variables y_t, L_{Ham,T}(ŷ, y) := (1/T) Σ_{t=1}^{T} [ŷ_t ≠ y_t]. To illustrate some aspects of our analysis, we also look at the mixed loss L_{01,b,η}: a convex combination of the 0-1 and block 0-1 losses, defined as L_{01,b,η} := η L_01 + (1 − η) L_{01,b} for some η ∈ [0, 1].

3 Consistency for structured prediction

3.1 Calibration function

We now formalize the connection between the actual risk R_L and the surrogate Φ-risk R_Φ via the so-called calibration function; see Definition 1 below [5, 49, 43, 18, 3].

¹Our analysis is generalizable to rectangular losses, e.g., ranking losses studied by Ramaswamy et al. [40].
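The two surrogate examples above can be written compactly for a generic loss matrix L; a small illustration with hypothetical function names (the log-sum-exp is stabilized for numerical safety):

```python
import numpy as np

def phi_ssvm(f, y, L):
    """Structured hinge loss: max_yhat (f_yhat + L(yhat, y)) - f_y."""
    return np.max(f + L[:, y]) - f[y]

def phi_log(f, y):
    """Log loss (CRF negative log-likelihood): log sum_yhat exp(f_yhat) - f_y."""
    m = f.max()                                  # stabilized log-sum-exp
    return m + np.log(np.sum(np.exp(f - m))) - f[y]

k = 3
L01 = 1.0 - np.eye(k)                            # 0-1 loss matrix
f = np.array([1.5, 2.0, -1.0])
assert phi_ssvm(f, 1, L01) == 0.5                # label 0 violates the margin by 0.5
assert phi_log(f, 1) > 0.0
```

Both quantities are convex in f, which is what makes them usable surrogates; the hinge loss is zero exactly when the correct label wins by the required loss-scaled margin.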
As is standard for this kind of analysis, the setup is non-parametric, i.e., it does not take into account the dependency of the scores on the input variables x. For now, we assume that the family of score functions F_F consists of all vector-valued Borel measurable functions f : X → F, where F ⊆ R^k is a subspace of allowed score vectors, which will play an important role in our analysis. This setting is equivalent to a pointwise analysis, i.e., looking at each input x independently. We bring the dependency on the input back into the analysis in Section 3.3, where we assume a specific family of score functions. Let D_X represent the marginal distribution of D on x and IP(· | x) denote its conditional distribution given x. We can now rewrite the risk R_L and the Φ-risk R_Φ as

R_L(f) = E_{x∼D_X} ℓ(f(x), IP(· | x)),    R_Φ(f) = E_{x∼D_X} φ(f(x), IP(· | x)),

where the conditional risk ℓ and the conditional Φ-risk φ depend on a vector of scores f and a conditional distribution q on the set of output labels as

ℓ(f, q) := Σ_{c=1}^{k} q_c L(pred(f), c),    φ(f, q) := Σ_{c=1}^{k} q_c Φ(f, c).

The calibration function H_{Φ,L,F} between the surrogate loss Φ and the task loss L relates the excess surrogate risk to the actual excess risk via the excess risk bound

H_{Φ,L,F}(δℓ(f, q)) ≤ δφ(f, q),    ∀f ∈ F, ∀q ∈ ∆_k,    (4)

where δφ(f, q) = φ(f, q) − inf_{f̂∈F} φ(f̂, q) and δℓ(f, q) = ℓ(f, q) − inf_{f̂∈F} ℓ(f̂, q) are the excess risks and ∆_k denotes the probability simplex on k elements. In other words, to find a vector f that yields an excess risk smaller than ε, we need to optimize the Φ-risk up to H_{Φ,L,F}(ε) accuracy (in the worst case). We make this statement precise in Theorem 2 below, and now proceed to the formal definition of the calibration function. Definition 1 (Calibration function).
For a task loss L, a surrogate loss Φ, and a set of feasible scores F, the calibration function H_{Φ,L,F}(ε) (defined for ε ≥ 0) equals the infimum excess of the conditional surrogate risk when the excess of the conditional actual risk is at least ε:

H_{Φ,L,F}(ε) := inf_{f∈F, q∈∆_k} δφ(f, q)    (5)
               s.t. δℓ(f, q) ≥ ε.    (6)

We set H_{Φ,L,F}(ε) to +∞ when the feasible set is empty. By construction, H_{Φ,L,F} is non-decreasing on [0, +∞), H_{Φ,L,F}(ε) ≥ 0, the inequality (4) holds, and H_{Φ,L,F}(0) = 0. Note that H_{Φ,L,F} can be non-convex and even non-continuous (see examples in Figure 1). Also, note that large values of H_{Φ,L,F}(ε) are better.

²Here we use the Iverson bracket notation, i.e., [A] := 1 if a logical expression A is true, and zero otherwise.

Figure 1: Calibration functions for the quadratic surrogate Φ_quad (12) defined in Section 4 and two different task losses; each panel plots H(ε) against ε. (a) – the calibration functions for the Hamming loss L_{Ham,T} when used without constraints on the scores, F = R^k (in red), and with the tight constraints implying consistency, F = span(L_{Ham,T}) (in blue). The red curve can grow exponentially slower than the blue one. (b) – the calibration functions for the mixed loss L_{01,b,η} with η = 0.4 (see Section 2 for the definition) when used without constraints on the scores (red) and with tight constraints for the block 0-1 loss (blue). The blue curve represents level-0.2 consistency. The calibration function equals zero for ε ≤ η/2, but grows exponentially faster than the red curve representing a consistent approach and thus could be better for small η. More details on the calibration functions in this figure are given in Section 4.

3.2 Notion of consistency

We use the calibration function H_{Φ,L,F} to set a connection between optimizing the surrogate and task losses by Theorem 2, which is similar to Theorem 3 of Zhang [49].
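Definition 1 can be probed numerically in small cases. The sketch below (not from the paper; the function name and grid ranges are illustrative) brute-forces (5)–(6) on a grid for the quadratic surrogate Φ_quad of Section 4, with the binary 0-1 loss (k = 2) and unconstrained scores F = R²; in this toy case the infimum works out to ε²/8, which the grid search reproduces up to discretization error:

```python
import numpy as np

def calibration_quad01_k2(eps, n=401):
    """Brute-force estimate of the calibration function (5)-(6) for the
    quadratic surrogate Phi_quad and the 0-1 loss with k = 2, F = R^2.
    delta_phi(f, q) = ||f + L q||^2 / (2k); delta_l(f, q) = max(q) - q[pred(f)]."""
    t = np.linspace(-1.5, 0.5, n)
    f0, f1 = np.meshgrid(t, t, indexing="ij")
    pred = (f1 > f0).astype(int)          # argmax with smallest-index tie-break
    best = np.inf
    for q0 in np.linspace(0.5, 1.0, n):   # wlog q0 >= q1, by label symmetry
        q1 = 1.0 - q0
        dl = q0 - np.where(pred == 0, q0, q1)            # excess task risk
        dphi = ((f0 + q1) ** 2 + (f1 + q0) ** 2) / 4.0   # excess surrogate risk
        feasible = dl >= eps
        if feasible.any():
            best = min(best, float(dphi[feasible].min()))
    return best

est = calibration_quad01_k2(0.4)
assert abs(est - 0.4 ** 2 / 8.0) < 1e-3
```

The quadratic growth already hints at the message of the bound (7): to guarantee excess task risk ε one must drive the surrogate excess below H(ε), which here shrinks like ε².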
Theorem 2 (Calibration connection). Let HΦ,L,F be the calibration function between the surrogate loss Φ and the task loss L with feasible set of scores F ⊆Rk. Let ˇHΦ,L,F be a convex non-decreasing lower bound of the calibration function. Assume that Φ is continuous and bounded from below. Then, for any ε > 0 with finite ˇHΦ,L,F(ε) and any f ∈FF, we have RΦ(f) < R∗ Φ,F + ˇHΦ,L,F(ε) ⇒ RL(f) < R∗ L,F + ε, (7) where R∗ Φ,F := inff∈FF RΦ(f) and R∗ L,F := inff∈FF RL(f). Proof. We take the expectation of (4) w.r.t. x, where the second argument of ℓis set to the conditional distribution IP(· | x). Then, we apply Jensen’s inequality (since ˇHΦ,L,F is convex) to get ˇHΦ,L,F(RL(f) −R∗ L,F) ≤RΦ(f) −R∗ Φ,F < ˇHΦ,L,F(ε), (8) which implies (7) by monotonicity of ˇHΦ,L,F. A suitable convex non-decreasing lower bound ˇHΦ,L,F(ε) required by Theorem 2 always exists, e.g., the zero constant. However, in this case Theorem 2 is not informative, because the l.h.s. of (7) is never true. Zhang [49, Proposition 25] claims that ˇHΦ,L,F defined as the lower convex envelope of the calibration function HΦ,L,F satisfies ˇHΦ,L,F(ε) > 0, ∀ε > 0, if HΦ,L,F(ε) > 0, ∀ε > 0, and, e.g., the set of labels is finite. This statement implies that an informative ˇHΦ,L,F always exists and allows to characterize consistency through properties of the calibration function HΦ,L,F. We now define a notion of level-η consistency, which is more general than consistency. Definition 3 (level-η consistency). A surrogate loss Φ is consistent up to level η ≥0 w.r.t. a task loss L and a set of scores F if and only if the calibration function satisfies HΦ,L,F(ε) > 0 for all ε > η and there exists ˆε > η such that HΦ,L,F(ˆε) is finite. Looking solely at (standard level-0) consistency vs. inconsistency might be too coarse to capture practical properties related to optimization accuracy (see, e.g., [29]). 
For example, if HΦ,L,F(ε) = 0 only for very small values of ε, then the method can still optimize the actual risk up to a certain level which might be good enough in practice, especially if it means that it can be optimized faster. Examples of calibration functions for consistent and inconsistent surrogate losses are shown in Figure 1. Other notions of consistency. Definition 3 with η = 0 and F = Rk results in the standard setting often appearing in the literature. In particular, in this case Theorem 2 implies Fisher consistency as 4 formulated, e.g., by Pedregosa et al. [37] for general losses and Lin [27] for binary classification. This setting is also closely related to many definitions of consistency used in the literature. For example, for a bounded from below and continuous surrogate, it is equivalent to infinite-sample consistency [49], classification calibration [46], edge-consistency [18], (L, Rk)-calibration [39], prediction calibration [48]. See [49, Appendix A] for the detailed discussion. Role of F. Let the approximation error for the restricted set of scores F be defined as R∗ L,F −R∗ L := inff∈FF RL(f) −inff RL(f). For any conditional distribution q, the score vector f := −Lq will yield an optimal prediction. Thus the condition span(L) ⊆F is sufficient for F to have zero approximation error for any distribution D, and for our 0-consistency condition to imply the standard Fisher consistency with respect to L. In the following, we will see that a restricted F can both play a role for computational efficiency as well as statistical efficiency (thus losses with smaller span(L) might be easier to work with). 3.3 Connection to optimization accuracy and statistical efficiency The scale of a calibration function is not intrinsically well-defined: we could multiply the surrogate function by a scalar and it would multiply the calibration function by the same scalar, without changing the optimization problem. 
Intuitively, we would like the surrogate loss to be of order 1. If with this scale the calibration function is exponentially small (has a 1/k factor), then we have strong evidence that the stochastic optimization will be difficult (and thus learning will be slow). To formalize this intuition, we add to the picture the complexity of optimizing the surrogate loss with a stochastic approximation algorithm. By using a scale-invariant convergence rate, we provide a natural normalization of the calibration function. The following two observations are central to the theoretical insights provided in our work:

1. Scale. For a properly scaled surrogate loss, the scale of the calibration function is a good indication of whether a stochastic approximation algorithm will take a large number of iterations (in the worst case) to obtain guarantees of small excess of the actual risk (and vice-versa, a large coefficient indicates a small number of iterations). The actual verification requires computing the normalization quantities given in Theorem 6 below.

2. Statistics. The bound on the number of iterations directly relates to the number of training examples that would be needed to learn, if we see each iteration of the stochastic approximation algorithm as using one training example to optimize the expected surrogate.

To analyze the statistical convergence of surrogate risk optimization, we have to specify the set of score functions that we work with. We assume that the structure on the input x ∈ X is defined by a positive definite kernel K : X × X → R. We denote the corresponding reproducing kernel Hilbert space (RKHS) by H and its explicit feature map by ψ(x) ∈ H. By the reproducing property, we have ⟨f, ψ(x)⟩_H = f(x) for all x ∈ X, f ∈ H, where ⟨·, ·⟩_H is the inner product in the RKHS. We define the subspace of allowed scores F ⊆ R^k via the span of the columns of a matrix F ∈ R^{k×r}. The matrix F explicitly defines the structure of the score function.
With this notation, we will assume that the score function is of the form f(x) = FWψ(x), where W : H → R^r is a linear operator to be learned (a matrix if H is of finite dimension) that represents a collection of r elements in H, transforming ψ(x) to a vector in R^r by applying the RKHS inner product r times.³ Note that for structured losses, we usually have r ≪ k. The set of all score functions is thus obtained by varying W in this definition and is denoted by F_{F,H}. As a concrete example of a score family F_{F,H} for structured prediction, consider the standard sequence model with unary and pairwise potentials. In this case, the dimension r equals Ts + (T − 1)s², where T is the sequence length and s is the number of labels of each variable. The columns of the matrix F consist of 2T − 1 groups (one for each unary and pairwise potential). Each row of F has exactly one entry equal to one in each column group (with zeros elsewhere). In this setting, we use the online projected averaged stochastic subgradient descent ASGD⁴ (stochastic w.r.t. the data (x^(n), y^(n)) ∼ D) to minimize the surrogate risk directly [6]. The n-th update consists in

W^(n) := P_D( W^(n−1) − γ^(n) F^T ∇Φ ψ(x^(n))^T ),    (9)

³Note that if rank(F) = r, our setup is equivalent to assuming a joint kernel [47] in the product form K_joint((x, c), (x′, c′)) := K(x, x′) F(c, :) F(c′, :)^T, where F(c, :) is the row c of the matrix F.
⁴See, e.g., [36] for the formal setup of kernel ASGD.
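In finite dimensions (explicit feature map ψ(x) ∈ R^d), update (9) reduces to a projected stochastic gradient step on the matrix W followed by iterate averaging. Below is a sketch for the quadratic surrogate of Section 4, with illustrative names and toy hyperparameters; it is not the paper's implementation:

```python
import numpy as np

def project_ball(W, D):
    """P_D: projection onto the Frobenius-norm ball of radius D."""
    n = np.linalg.norm(W)
    return W if n <= D else W * (D / n)

def asgd(samples, F, L, D=10.0, gamma=0.1):
    """Projected averaged stochastic subgradient descent, update (9), for the
    quadratic surrogate Phi_quad(f, y) = ||f + L(:, y)||^2 / (2k)."""
    k, r = F.shape
    d = samples[0][0].shape[0]
    W = np.zeros((r, d))
    W_avg = np.zeros((r, d))
    for n, (psi_x, y) in enumerate(samples, start=1):
        f = F @ W @ psi_x                    # scores f(x) = F W psi(x)
        grad_f = (f + L[:, y]) / k           # gradient of Phi_quad w.r.t. f
        W = project_ball(W - gamma * np.outer(F.T @ grad_f, psi_x), D)
        W_avg += (W - W_avg) / n             # running average of the iterates
    return W_avg

# Toy check: k = 2, a constant feature, label always 0; the averaged
# scores should favor label 0.
L = 1.0 - np.eye(2)
W_bar = asgd([(np.array([1.0]), 0)] * 500, np.eye(2), L)
scores = np.eye(2) @ W_bar @ np.array([1.0])
assert int(np.argmax(scores)) == 0
```

The chain rule gives the W-gradient as the outer product F^T(∇_f Φ)ψ(x)^T, exactly the term appearing in (9); averaging the iterates is what the convergence rate in Theorem 5 below is stated for.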
The convergence properties of ASGD in an RKHS are analogous to those of finite-dimensional ASGD because they rely on dimension-free quantities. To use a simple convergence analysis, we follow Ciliberto et al. [10] and make the following simplifying assumption:

Assumption 4 (Well-specified optimization w.r.t. the function class F_{F,H}). The distribution D is such that R*_{Φ,F} := inf_{f ∈ F_{F,H}} R_Φ(f) has some global minimum f* that also belongs to F_{F,H}.

Assumption 4 simply means that each row of the W* defining f* belongs to the RKHS H, implying a finite norm ∥W*∥_HS. Assumption 4 can be relaxed if the kernel K is universal, but then the convergence analysis becomes much more complicated [36].

Theorem 5 (Convergence rate). Under Assumption 4, and assuming that (i) the functions Φ(f, y) are bounded from below and convex w.r.t. f ∈ R^k for all y ∈ Y; (ii) the expected square of the norm of the stochastic gradient is bounded, E_{(x,y)∼D} ∥F^T ∇Φ ψ(x)^T∥²_HS ≤ M²; and (iii) ∥W*∥_HS ≤ D, then running the ASGD algorithm (9) with the constant step size γ := 2D / (M √N) for N steps admits the following expected suboptimality for the averaged iterate f̄^(N):

    E[R_Φ(f̄^(N))] − R*_{Φ,F} ≤ 2DM / √N,  where  f̄^(N)(x) := (1/N) Σ_{n=1}^N F W^(n) ψ(x).    (10)

Theorem 5 is a straightforward extension of classical results [33, 36]. By combining the convergence rate of Theorem 5 with Theorem 2, which connects the surrogate and actual risks, we get Theorem 6, which explicitly gives the number of iterations required to achieve ε accuracy on the expected population risk (see App. A for the proof). Note that since ASGD is applied in an online fashion, Theorem 6 also serves as a sample complexity bound, i.e., it says how many samples are needed to achieve ε target accuracy (compared to the best prediction rule if F has zero approximation error).

Theorem 6 (Learning complexity). Under the assumptions of Theorem 5, for any ε > 0, the random (w.r.t.
the observed training set) output f̄^(N) ∈ F_{F,H} of the ASGD algorithm after

    N > N* := 4D²M² / Ȟ²_{Φ,L,F}(ε)    (11)

iterations has its expected excess risk bounded by ε, i.e., E[R_L(f̄^(N))] < R*_{L,F} + ε.

4 Calibration function analysis for quadratic surrogate

A major challenge in applying Theorem 6 is the computation of the calibration function H_{Φ,L,F}. In App. C, we present a generalization to arbitrary multi-class losses of a surrogate loss class from Zhang [49, Section 4.4.2] that is consistent for any task loss L. Here, we consider the simplest example of this family, called the quadratic surrogate Φ_quad, which has the advantage that we can bound, or even compute exactly, its calibration function. We define the quadratic surrogate as

    Φ_quad(f, y) := (1/2k) ∥f + L(:, y)∥²₂ = (1/2k) Σ_{c=1}^k ( f_c² + 2 f_c L(c, y) + L(c, y)² ).    (12)

One simple sufficient condition for the surrogate (12) to be consistent, and also to have zero approximation error, is that F fully contains span(L). To make the dependence on the score subspace explicit, we parameterize it with a matrix F ∈ R^{k×r}, with the number of columns r typically being much smaller than the number of labels k. With this notation, we have F = span(F) = {Fθ | θ ∈ R^r}, and the dimensionality of F equals the rank of F, which is at most r.^6

Footnote 5: The Hilbert–Schmidt norm of a linear operator A is defined as ∥A∥_HS = √(tr A‡A), where A‡ is the adjoint operator. In the case of finite dimension, the Hilbert–Schmidt norm coincides with the Frobenius matrix norm.

Footnote 6: Evaluating Φ_quad requires computing F^T F and F^T L(:, y), for which direct computation is intractable when k is exponential, but which can be done in closed form for the structured losses we consider (the Hamming and block 0-1 losses). More generally, these operations require suitable inference algorithms. See also App. F.

For the quadratic surrogate (12), the excess of the expected surrogate takes a simple form:

    δφ_quad(Fθ, q) = (1/2k) ∥Fθ + Lq∥²₂.
(13)

Equation (13) holds under the assumption that the subspace F contains the column space of the loss matrix, span(L), which also means that the set F contains the optimal prediction for any q (see Lemma 9 in App. B for the proof). Importantly, the function δφ_quad(Fθ, q) is jointly convex in the conditional probability q and the parameters θ, which simplifies its analysis.

Lower bound on the calibration function. We now present our main technical result: a lower bound on the calibration function for the surrogate loss Φ_quad (12). Given the scaling intuition of Section 3.3, this lower bound characterizes how easy it is to learn with this surrogate. The proof of Theorem 7 is given in App. D.1.

Theorem 7 (Lower bound on H_{Φ_quad}). For any task loss L, its quadratic surrogate Φ_quad, and a score subspace F containing the column space of L, the calibration function can be lower bounded:

    H_{Φ_quad,L,F}(ε) ≥ ε² / (2k max_{i≠j} ∥P_F Δ_ij∥²₂) ≥ ε² / 4k,    (14)

where P_F is the orthogonal projection onto the subspace F and Δ_ij = e_i − e_j ∈ R^k, with e_c being the c-th vector of the standard basis in R^k.

Lower bound for specific losses. We now discuss the meaning of the bound (14) for some specific losses (the detailed derivations are given in App. D.3). For the 0-1, block 0-1 and Hamming losses (L_01, L_01,b and L_Ham,T, respectively) with the smallest possible score subspaces F, the bound (14) gives ε²/4k, ε²/4b and ε²/8T, respectively. All these bounds are tight (see App. E). However, if F = R^k, the bound (14) is not tight for the block 0-1 and mixed losses (see also App. E). In particular, the bound (14) cannot detect level-η consistency for η > 0 (see Def. 3), and it does not change when the loss changes while the score subspace stays the same.

Upper bound on the calibration function. Theorem 8 below gives an upper bound on the calibration function holding for unconstrained scores, i.e., F = R^k (see the proof in App. D.2).
This result shows that, without appropriate constraints on the scores, efficient learning is not guaranteed (in the worst case) because of the 1/k scaling of the calibration function.

Theorem 8 (Upper bound on H_{Φ_quad}). If a loss matrix L with L_max > 0 defines a pseudometric^7 on the labels and there are no constraints on the scores, i.e., F = R^k, then the calibration function for the quadratic surrogate Φ_quad can be upper bounded:

    H_{Φ_quad,L,R^k}(ε) ≤ ε² / 2k,    0 ≤ ε ≤ L_max.

From our lower bound in Theorem 7 (which guarantees consistency), the natural constraint on the scores is F = span(L), with the dimension of this space giving an indication of the intrinsic "difficulty" of a loss. Computations for the lower bounds in some specific cases (see App. D.3 for details) show that the 0-1 loss is "hard" while the block 0-1 loss and the Hamming loss are "easy". Note that in all these cases the lower bound (14) is tight; see the discussion below.

Exact calibration functions. The bounds proven in Theorems 7 and 8 imply that, in the case of no constraints on the scores, F = R^k, for the 0-1, block 0-1 and Hamming losses, we have

    ε²/4k ≤ H_{Φ_quad,L,R^k}(ε) ≤ ε²/2k,    (15)

where L is the matrix defining the loss. For completeness, in App. E, we compute the exact calibration functions for the 0-1 and block 0-1 losses. Notably, the calibration function for the 0-1 loss equals the lower bound, illustrating the worst-case scenario. To get some intuition, an example of a conditional distribution q that attains the (worst-case) value of the calibration function (for several losses) is q_i = 1/2 + ε/2, q_j = 1/2 − ε/2 and q_c = 0 for c ∉ {i, j}; see the proof of Proposition 12 in App. E.1.

In what follows, we provide the calibration functions in cases with constraints on the scores. For the block 0-1 loss with b equal blocks, and under the constraint that the scores within each block are equal, the calibration function equals (see Proposition 14 of App. E.2)

    H_{Φ_quad,L_01,b,F_01,b}(ε) = ε² / 4b,    0 ≤ ε ≤ 1.
(16)

Footnote 7: A pseudometric is a function d(a, b) satisfying the following axioms: d(x, y) ≥ 0; d(x, x) = 0 (but possibly d(x, y) = 0 for some x ≠ y); d(x, y) = d(y, x); d(x, z) ≤ d(x, y) + d(y, z).

For the Hamming loss defined over T binary variables, and under constraints implying separable scores, the calibration function equals (see Proposition 15 in App. E.3)

    H_{Φ_quad,L_Ham,T,F_Ham,T}(ε) = ε² / 8T,    0 ≤ ε ≤ 1.    (17)

The calibration functions (16) and (17) depend on the quantities representing the actual complexity of the loss (the number of blocks b and the sequence length T) and can be exponentially larger than the upper bound for the unconstrained case. In the case of the mixed 0-1 and block 0-1 loss, if the scores f are constrained to be equal inside the blocks, i.e., to belong to the subspace F_01,b = span(L_01,b) ⊊ R^k, then the calibration function equals 0 for ε ≤ η/2, implying inconsistency (note also that the approximation error can be as large as η for F_01,b). However, for ε > η/2, the calibration function is of order (1/b)(ε − η/2)². See Figure 1b for an illustration of this calibration function and Proposition 17 of App. E.4 for the exact formulation and proof. Note that while the surrogate is inconsistent in this constrained case, its calibration function can be exponentially larger than the one for the unconstrained case when ε is big enough and the blocks are exponentially large (see Proposition 16 of App. E.4).

Computation of the SGD constants. Applying the learning complexity result of Theorem 6 requires computing the quantity DM, where D bounds the norm of the optimal solution and M bounds the expected square of the norm of the stochastic gradient. In App. F, we provide a way to bound this quantity for our quadratic surrogate (12) under the simplifying assumption that each conditional q_c(x) (seen as a function of x) belongs to the RKHS H (which implies Assumption 4).
In particular, we get

    DM = L²_max ξ(κ(F) √r R Q_max),    ξ(z) = z² + z,    (18)

where κ(F) is the condition number of the matrix F and R is an upper bound on the RKHS norm of the object feature maps, ∥ψ(x)∥_H ≤ R. We define Q_max as an upper bound on Σ_{c=1}^k ∥q_c∥_H (which can be seen as a generalization of the inequality Σ_{c=1}^k q_c ≤ 1 for probabilities). The constants R and Q_max depend on the data, the constant L_max depends on the loss, and r and κ(F) depend on the choice of the matrix F.

We compute the constant DM for the specific losses considered in App. F.1. For the 0-1, block 0-1 and Hamming losses, we have DM = O(k), DM = O(b) and DM = O(log₂³ k), respectively. These computations indicate that the quadratic surrogate allows efficient learning for the structured block 0-1 and Hamming losses, but that the convergence could be slow in the worst case for the 0-1 loss.

5 Related works

Consistency for multi-class problems. Building on significant progress for the case of binary classification (see, e.g., [5]), there has been a lot of interest in the multi-class case. Zhang [49] and Tewari & Bartlett [46] analyze the consistency of many existing surrogates for the 0-1 loss. Gao & Zhou [20] focus on multi-label classification. Narasimhan et al. [32] provide a consistent algorithm for an arbitrary multi-class loss defined by a function of the confusion matrix. Recently, Ramaswamy & Agarwal [39] introduced the notion of convex calibration dimension, the minimal dimensionality of the score vector that is required for consistency. In particular, they showed that for the Hamming loss on T binary variables, this dimension is at most T. In our analysis, we use scores of rank (T + 1), see (35) in App. D.3, yielding a similar result.

The task of ranking has attracted a lot of attention, and [18, 8, 9, 40] analyze different families of surrogate and task losses, proving their (in-)consistency. In this line of work, Ramaswamy et al.
[40] propose a quadratic surrogate for an arbitrary low-rank loss, which is related to our quadratic surrogate (12). They also prove that several important ranking losses, namely precision@q, expected rank utility, mean average precision and pairwise disagreement, have low rank. We conjecture that our approach is compatible with these losses and leave the precise connections as future work.

Structured SVM (SSVM) and friends. SSVM [44, 45, 47] is one of the most used convex surrogates for tasks with structured outputs; thus, its consistency has been a question of great interest. It is known that the Crammer-Singer multi-class SVM [15], which SSVM is built on, is not consistent for the 0-1 loss unless there is a majority class with probability at least 1/2 [49, 31]. However, it is consistent for the "abstain" and ordinal losses in the case of 3 classes [39]. The structured ramp loss and probit surrogates are closely related to SSVM and are consistent [31, 16, 30, 23], but not convex. Recently, Doğan et al. [17] categorized different versions of the multi-class SVM and analyzed them from the points of view of Fisher and universal consistency. In particular, they highlight the differences between Fisher and universal consistency and give examples of surrogates that are Fisher consistent but not universally consistent, and vice versa. They also note that the Crammer-Singer SVM is neither Fisher nor universally consistent, even with a careful choice of regularizer.

Quadratic surrogates for structured prediction. Ciliberto et al. [10] and Brouard et al. [7] consider minimizing Σ_{i=1}^n ∥g(x_i) − ψ_o(y_i)∥²_H, aiming to match an RKHS embedding of the inputs, g : X → H, to the feature maps of the outputs, ψ_o : Y → H. In their frameworks, the task loss is not considered at the learning stage, but only at the prediction stage. Our quadratic surrogate (12) depends on the loss directly.
The empirical risk defined by both their objective and ours can be minimized analytically with the help of the kernel trick and, moreover, the resulting predictors are identical. However, performing such a computation can be intractable for large datasets, and the generalization properties have to be taken care of, e.g., by means of regularization. In the large-scale scenario, it is more natural to apply stochastic optimization (e.g., kernel ASGD), which directly minimizes the population risk and has a better dependency on the dataset size. When combined with stochastic optimization, the two approaches lead to different behavior: in our framework, we need to estimate r = rank(L) scalar functions, whereas the alternative needs to estimate k functions (if, e.g., ψ_o(y) = e_y ∈ R^k), which results in significant differences for low-rank losses such as the block 0-1 and Hamming losses.

Calibration functions. Bartlett et al. [5] and Steinwart [43] provide calibration functions for most existing surrogates for binary classification. All these functions differ in shape, but are roughly similar in terms of constants. Pedregosa et al. [37] generalize these results to the case of ordinal regression. However, their calibration functions have at best a 1/k factor if the surrogate is normalized w.r.t. the number of classes. The task of ranking has also been of significant interest; however, most of the literature [e.g., 11, 14, 24, 1] only focuses on calibration functions (in the form of regret bounds) for bipartite ranking, which is more akin to cost-sensitive binary classification. Ávila Pires et al. [3] generalize the theoretical framework developed by Steinwart [43] and present results for the multi-class SVM of Lee et al. [26] (where the score vectors are constrained to sum to zero), which can be built for any task loss of interest. Their surrogate Φ is of the form Σ_{c∈Y} L(c, y) a(f_c), where Σ_{c∈Y} f_c = 0 and a(f) is some convex function with all subgradients at zero being positive.
The recent work by Ávila Pires & Szepesvári [2] refines these results, but specifically for the case of the 0-1 loss. In this line of work, the surrogate is typically not normalized by k, and if it is normalized, the constant 1/k appears in the calibration functions.

Finally, Ciliberto et al. [10] provide the calibration function for their quadratic surrogate. They assume that the loss can be represented as L(ŷ, y) = ⟨V ψ_o(ŷ), ψ_o(y)⟩_{H_Y}, ŷ, y ∈ Y (this assumption can always be satisfied in the case of a finite number of labels by taking V as the loss matrix L and ψ_o(y) := e_y ∈ R^k, where e_y is the y-th vector of the standard basis in R^k). In their Theorem 2, they provide an excess risk bound leading to a lower bound on the corresponding calibration function, H_{Φ,L,R^k}(ε) ≥ ε²/c²_Δ, where the constant c_Δ = ∥V∥₂ max_{y∈Y} ∥ψ_o(y)∥ simply equals the spectral norm of the loss matrix for the finite-dimensional construction above. However, the spectral norm of the loss matrix is exponentially large even for highly structured losses such as the block 0-1 and Hamming losses: ∥L_01,b∥₂ = k − k/b and ∥L_Ham,T∥₂ = k/2. This conclusion puts the objective of Ciliberto et al. [10] in line with ours when no constraints are put on the scores.

6 Conclusion

In this paper, we studied the consistency of convex surrogate losses specifically in the context of structured prediction. We analyzed calibration functions and proposed an optimization-based normalization aiming to connect consistency with the existence of efficient learning algorithms. Finally, we instantiated all components of our framework for several losses by computing the calibration functions and the constants coming from the normalization. By carefully monitoring exponential constants, we highlighted the difference between tractable and intractable task losses. These are first steps toward advancing our theoretical understanding of consistent structured prediction.
Further steps include analyzing more losses, such as the low-rank ranking losses studied by Ramaswamy et al. [40]; moreover, instead of considering constraints on the scores, one could put constraints on the set of distributions and investigate the effect on the calibration function.

Acknowledgements

We would like to thank Pascal Germain for useful discussions. This work was partly supported by the ERC grant Activia (no. 307574), the NSERC Discovery Grant RGPIN-2017-06936 and the MSR-INRIA Joint Center.

References

[1] Agarwal, Shivani. Surrogate regret bounds for bipartite ranking via strongly proper losses. Journal of Machine Learning Research (JMLR), 15(1):1653–1674, 2014.
[2] Ávila Pires, Bernardo and Szepesvári, Csaba. Multiclass classification calibration functions. arXiv, 1609.06385v1, 2016.
[3] Ávila Pires, Bernardo, Ghavamzadeh, Mohammad, and Szepesvári, Csaba. Cost-sensitive multiclass classification risk bounds. In ICML, 2013.
[4] Bakir, Gökhan, Hofmann, Thomas, Schölkopf, Bernhard, Smola, Alexander J., Taskar, Ben, and Vishwanathan, S.V.N. Predicting Structured Data. MIT Press, 2007.
[5] Bartlett, Peter L., Jordan, Michael I., and McAuliffe, Jon D. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[6] Bousquet, Olivier and Bottou, Léon. The tradeoffs of large scale learning. In NIPS, 2008.
[7] Brouard, Céline, Szafranski, Marie, and d'Alché-Buc, Florence. Input output kernel regression: Supervised and semi-supervised structured output prediction with operator-valued kernels. Journal of Machine Learning Research (JMLR), 17(176):1–48, 2016.
[8] Buffoni, David, Gallinari, Patrick, Usunier, Nicolas, and Calauzènes, Clément. Learning scoring functions with order-preserving losses and standardized supervision. In ICML, 2011.
[9] Calauzènes, Clément, Usunier, Nicolas, and Gallinari, Patrick. On the (non-)existence of convex, calibrated surrogate losses for ranking. In NIPS, 2012.
[10] Ciliberto, Carlo, Rudi, Alessandro, and Rosasco, Lorenzo. A consistent regularization approach for structured prediction. In NIPS, 2016.
[11] Clémençon, Stéphan, Lugosi, Gábor, and Vayatis, Nicolas. Ranking and empirical minimization of U-statistics. The Annals of Statistics, pp. 844–874, 2008.
[12] Collins, Michael. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, 2002.
[13] Cortes, Corinna, Kuznetsov, Vitaly, Mohri, Mehryar, and Yang, Scott. Structured prediction theory based on factor graph complexity. In NIPS, 2016.
[14] Cossock, David and Zhang, Tong. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140–5154, 2008.
[15] Crammer, Koby and Singer, Yoram. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research (JMLR), 2:265–292, 2001.
[16] Do, Chuong B., Le, Quoc, Teo, Choon Hui, Chapelle, Olivier, and Smola, Alex. Tighter bounds for structured estimation. In NIPS, 2009.
[17] Doğan, Ürün, Glasmachers, Tobias, and Igel, Christian. A unified view on multi-class support vector classification. Journal of Machine Learning Research (JMLR), 17(45):1–32, 2016.
[18] Duchi, John C., Mackey, Lester W., and Jordan, Michael I. On the consistency of ranking algorithms. In ICML, 2010.
[19] Durbin, Richard, Eddy, Sean, Krogh, Anders, and Mitchison, Graeme. Biological sequence analysis: probabilistic models of proteins and nucleic acids. Cambridge University Press, 1998.
[20] Gao, Wei and Zhou, Zhi-Hua. On the consistency of multi-label learning. In COLT, 2011.
[21] Gimpel, Kevin and Smith, Noah A. Softmax-margin CRFs: Training log-linear models with cost functions. In NAACL, 2010.
[22] Hazan, Tamir and Urtasun, Raquel. A primal-dual message-passing algorithm for approximated large scale structured prediction. In NIPS, 2010.
[23] Keshet, Joseph.
Optimizing the measure of performance in structured prediction. In Advanced Structured Prediction. MIT Press, 2014.
[24] Kotlowski, Wojciech, Dembczynski, Krzysztof, and Huellermeier, Eyke. Bipartite ranking through minimization of univariate loss. In ICML, 2011.
[25] Lafferty, John, McCallum, Andrew, and Pereira, Fernando. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[26] Lee, Yoonkyung, Lin, Yi, and Wahba, Grace. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67–81, 2004.
[27] Lin, Yi. A note on margin-based loss functions in classification. Statistics & Probability Letters, 68(1):73–82, 2004.
[28] London, Ben, Huang, Bert, and Getoor, Lise. Stability and generalization in structured prediction. Journal of Machine Learning Research (JMLR), 17(222):1–52, 2016.
[29] Long, Phil and Servedio, Rocco. Consistency versus realizable H-consistency for multiclass classification. In ICML, 2013.
[30] McAllester, D. A. and Keshet, J. Generalization bounds and consistency for latent structural probit and ramp loss. In NIPS, 2011.
[31] McAllester, David. Generalization bounds and consistency for structured labeling. In Predicting Structured Data. MIT Press, 2007.
[32] Narasimhan, Harikrishna, Ramaswamy, Harish G., Saha, Aadirupa, and Agarwal, Shivani. Consistent multiclass algorithms for complex performance measures. In ICML, 2015.
[33] Nemirovski, A., Juditsky, A., Lan, G., and Shapiro, A. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[34] Nowozin, Sebastian and Lampert, Christoph H. Structured learning and prediction in computer vision. Foundations and Trends in Computer Graphics and Vision, 6(3–4):185–365, 2011.
[35] Nowozin, Sebastian, Gehler, Peter V., Jancsary, Jeremy, and Lampert, Christoph H.
Advanced Structured Prediction. MIT Press, 2014.
[36] Orabona, Francesco. Simultaneous model selection and optimization through parameter-free stochastic learning. In NIPS, 2014.
[37] Pedregosa, Fabian, Bach, Francis, and Gramfort, Alexandre. On the consistency of ordinal regression methods. Journal of Machine Learning Research (JMLR), 18(55):1–35, 2017.
[38] Pletscher, Patrick, Ong, Cheng Soon, and Buhmann, Joachim M. Entropy and margin maximization for structured output learning. In ECML PKDD, 2010.
[39] Ramaswamy, Harish G. and Agarwal, Shivani. Convex calibration dimension for multiclass loss matrices. Journal of Machine Learning Research (JMLR), 17(14):1–45, 2016.
[40] Ramaswamy, Harish G., Agarwal, Shivani, and Tewari, Ambuj. Convex calibrated surrogates for low-rank loss matrices with applications to subset ranking losses. In NIPS, 2013.
[41] Shi, Qinfeng, Reid, Mark, Caetano, Tiberio, van den Hengel, Anton, and Wang, Zhenhua. A hybrid loss for multiclass and structured prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(1):2–12, 2015.
[42] Smith, Noah A. Linguistic structure prediction. Synthesis Lectures on Human Language Technologies, 4(2):1–274, 2011.
[43] Steinwart, Ingo. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225–287, 2007.
[44] Taskar, Ben, Guestrin, Carlos, and Koller, Daphne. Max-margin Markov networks. In NIPS, 2003.
[45] Taskar, Ben, Chatalbashev, Vassil, Koller, Daphne, and Guestrin, Carlos. Learning structured prediction models: a large margin approach. In ICML, 2005.
[46] Tewari, Ambuj and Bartlett, Peter L. On the consistency of multiclass classification methods. Journal of Machine Learning Research (JMLR), 8:1007–1025, 2007.
[47] Tsochantaridis, I., Joachims, T., Hofmann, T., and Altun, Y. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research (JMLR), 6:1453–1484, 2005.
[48] Williamson, Robert C., Vernet, Elodie, and Reid, Mark D. Composite multiclass losses. Journal of Machine Learning Research (JMLR), 17(223):1–52, 2016.
[49] Zhang, Tong. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research (JMLR), 5:1225–1251, 2004.
[50] Zhang, Tong. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–134, 2004.
Invariance and Stability of Deep Convolutional Representations

Alberto Bietti (Inria*), alberto.bietti@inria.fr
Julien Mairal (Inria*), julien.mairal@inria.fr

Abstract

In this paper, we study deep signal representations that are near-invariant to groups of transformations and stable to the action of diffeomorphisms without losing signal information. This is achieved by generalizing the multilayer kernel introduced in the context of convolutional kernel networks and by studying the geometry of the corresponding reproducing kernel Hilbert space. We show that the signal representation is stable, and that models from this functional space, such as a large class of convolutional neural networks, may enjoy the same stability.

1 Introduction

The results achieved by deep neural networks on prediction tasks have been impressive in domains where data is structured and available in large amounts. In particular, convolutional neural networks (CNNs) [14] have been shown to model well the local appearance of natural images at multiple scales, while also representing images with some invariance through pooling operations. Yet, the exact nature of this invariance and the characteristics of the functional spaces in which convolutional neural networks live are poorly understood; overall, these models are sometimes only seen as clever engineering black boxes that have been designed with a lot of insight collected since they were introduced. Understanding the geometry of these functional spaces is nevertheless a fundamental question. In addition to potentially bringing new intuition about the success of deep networks, it may for instance help solve the issue of regularization, by providing ways to control the variations of prediction functions in a principled manner.
Small deformations of natural signals often preserve their main characteristics, such as the class label in a classification task (e.g., the same digit written with different handwriting styles may correspond to the same image up to small deformations), and provide a much richer class of transformations than translations. Representations that are stable to small deformations allow more robust models that may exploit these invariances, which may lead to improved sample complexity. The scattering transform [5, 17] is a recent attempt to characterize convolutional multilayer architectures based on wavelets. The theory provides an elegant characterization of the invariance and stability properties of signals represented via the scattering operator, through a notion of Lipschitz stability to the action of diffeomorphisms. Nevertheless, these networks do not involve "learning" in the classical sense, since the filters of the networks are pre-defined, and the resulting architecture differs significantly from the most commonly used ones.

In this work, we study these theoretical properties for more standard convolutional architectures from the point of view of positive definite kernels [27]. Specifically, we consider a functional space derived from a kernel for multi-dimensional signals, which admits a multilayer and convolutional structure that generalizes the construction of convolutional kernel networks (CKNs) [15, 16]. We show that this functional space contains a large class of CNNs with smooth homogeneous activation functions, in addition to CKNs [15], allowing us to obtain theoretical results for both classes of models.

*Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

The main motivation for introducing a kernel framework is to study separately data representation and predictive models.
On the one hand, we study the translation-invariance properties of the kernel representation and its stability to the action of diffeomorphisms, obtaining guarantees similar to those of the scattering transform [17] while preserving signal information. When the kernel is appropriately designed, we also show how to obtain signal representations that are near-invariant to the action of any group of transformations. On the other hand, we show that these stability results can be transferred to predictive models by controlling their norm in the functional space. In particular, the RKHS norm controls both stability and generalization, so that stability may lead to improved sample complexity.

Related work. Our work relies on image representations introduced in the context of convolutional kernel networks [15, 16], which yield a sequence of spatial maps similar to those of traditional CNNs, but where each point on the maps is possibly infinite-dimensional and lives in a reproducing kernel Hilbert space (RKHS). The extension to signals with d spatial dimensions is straightforward. Since computing the corresponding Gram matrix as in classical kernel machines is computationally impractical, CKNs provide an approximation scheme consisting of learning finite-dimensional subspaces of each layer's RKHS, onto which the data is projected; see [15]. The resulting architecture of CKNs resembles traditional CNNs, with a subspace-learning interpretation and different unsupervised learning principles.

Another major source of inspiration is the study of group invariance and stability to the action of diffeomorphisms of scattering networks [17], which introduced the main formalism and several proof techniques from harmonic analysis that were key to our results. Our main effort was to extend them to more general CNN architectures and to the kernel framework.
Invariance to groups of transformations was also studied for more classical convolutional neural networks from methodological and empirical points of view [6, 9], for shallow learned representations [1], and for kernel methods [13, 19, 22]. Note also that other techniques combining deep neural networks and kernels have been introduced. Early multilayer kernel machines appear for instance in [7, 26]. Shallow kernels for images modeling local regions were also proposed in [25], and a multilayer construction was proposed in [4]. More recently, different kernel-based models were introduced in [2, 10, 18] to gain theoretical insight into classical multilayer neural networks, while kernels are used to define convex models for two-layer neural networks in [36]. Finally, we note that Lipschitz stability of deep models to additive perturbations was found to be important for robustness to adversarial examples [8]. Our results show that convolutional kernel networks already enjoy such a property.

Notation and basic mathematical tools. A positive definite kernel K that operates on a set X implicitly defines a reproducing kernel Hilbert space H of functions from X to R, along with a mapping ϕ : X → H. A predictive model associates to every point z in X a label in R; it consists of a linear function f in H such that f(z) = ⟨f, ϕ(z)⟩_H, where ϕ(z) is the data representation. Given two points z, z′ in X, the Cauchy-Schwarz inequality allows us to control the variation of the predictive model f according to the geometry induced by the Hilbert norm ∥·∥_H:

    |f(z) − f(z′)| ≤ ∥f∥_H ∥ϕ(z) − ϕ(z′)∥_H.    (1)

This property implies that two points z and z′ that are close to each other according to the RKHS norm should lead to similar predictions when the model f has reasonably small norm in H. Then, we consider notation from signal processing similar to [17].
We call a signal x a function in L2(Ω, H), where Ω is a subset of R^d representing spatial coordinates and H is a Hilbert space, when

$\|x\|_{L^2}^2 := \int_\Omega \|x(u)\|_{\mathcal H}^2 \, du < \infty,$

where du is the Lebesgue measure on R^d. Given a linear operator T : L2(Ω, H) → L2(Ω, H′), the operator norm is defined as $\|T\|_{L^2(\Omega,\mathcal H) \to L^2(\Omega,\mathcal H')} := \sup_{\|x\|_{L^2(\Omega,\mathcal H)} \le 1} \|Tx\|_{L^2(\Omega,\mathcal H')}$. For the sake of clarity, we drop norm subscripts from now on, using the notation ∥·∥ for Hilbert space norms, L2 norms, and L2 → L2 operator norms, while |·| denotes the Euclidean norm on R^d. Some useful mathematical tools are also presented in Appendix A.

2 Construction of the Multilayer Convolutional Kernel

We now present the multilayer convolutional kernel, which operates on signals with d spatial dimensions. The construction follows closely that of convolutional kernel networks [15] but generalizes it to input signals defined on the continuous domain Ω = R^d (which does not prevent signals from having compact support), as done by Mallat [17] for analyzing the properties of the scattering transform; the issue of discretization, where Ω is a discrete grid, is addressed in Section 2.1.

[Figure 1: Construction of the k-th signal representation from the (k−1)-th one, via patch extraction (Pk xk–1(v) ∈ Pk), kernel mapping (Mk Pk xk–1(v) = ϕk(Pk xk–1(v)) ∈ Hk), and linear pooling (xk(w) = Ak Mk Pk xk–1(w) ∈ Hk). Note that while Ω is depicted as a box in R² here, our construction is supported on Ω = R^d. Similarly, a patch is represented as a square box for simplicity, but it may potentially have any shape.]

In what follows, an input signal is denoted by x0 and lives in L2(Ω, H0), where H0 is typically R^{p0} (e.g., with p0 = 3, x0(u) may represent the RGB pixel value at location u). Then, we build a sequence of RKHSs H1, H2, . . ., and transform x0 into a sequence of "feature maps" supported on Ω, respectively denoted by x1 in L2(Ω, H1), x2 in L2(Ω, H2), etc.
As depicted in Figure 1, a new map xk is built from the previous one xk–1 by successively applying three operators that perform patch extraction (Pk), kernel mapping (Mk) into a new RKHS Hk, and linear pooling (Ak), respectively. When going up in the hierarchy, the points xk(u) carry information from larger signal neighborhoods centered at u in Ω, with more invariance, as we will formally show.

Patch extraction operator. Given the layer xk–1, we consider a patch shape Sk, defined as a compact centered subset of R^d (e.g., a box [−1, 1] × [−1, 1] for images), and we define the Hilbert space Pk := L2(Sk, Hk–1) equipped with the norm $\|z\|^2 = \int_{S_k} \|z(u)\|^2 \, d\nu_k(u)$ for every z in Pk, where dνk is the normalized uniform measure on Sk. More precisely, we now define the linear patch extraction operator Pk : L2(Ω, Hk–1) → L2(Ω, Pk) such that for all u in Ω,

$P_k x_{k-1}(u) = \big(v \mapsto x_{k-1}(u + v)\big)_{v \in S_k} \ \in \mathcal P_k.$

Note that by equipping Pk with a normalized measure, the operator Pk preserves the norm: by Fubini's theorem, we indeed have ∥Pk xk–1∥ = ∥xk–1∥, and hence Pk xk–1 is in L2(Ω, Pk).

Kernel mapping operator. In a second stage, we map each patch of xk–1 to an RKHS Hk with a kernel mapping ϕk : Pk → Hk associated to a positive definite kernel Kk. It is then possible to define the non-linear pointwise operator Mk such that Mk Pk xk–1(u) := ϕk(Pk xk–1(u)) ∈ Hk. As in [15], we use homogeneous dot-product kernels of the form

$K_k(z, z') = \|z\|\,\|z'\|\,\kappa_k\!\left(\frac{\langle z, z'\rangle}{\|z\|\,\|z'\|}\right) \quad\text{with}\quad \kappa_k(1) = 1, \qquad (2)$

which ensures that ∥Mk Pk xk–1(u)∥ = ∥Pk xk–1(u)∥ and that Mk Pk xk–1 is in L2(Ω, Hk). Concrete examples of kernels satisfying (2), along with some other properties, are presented in Appendix B.

Pooling operator. The last step to build the layer xk is to pool neighboring values to achieve some local shift-invariance. As in [15], we apply a linear convolution operator Ak with a Gaussian kernel at scale σk, $h_{\sigma_k}(u) := \sigma_k^{-d} h(u/\sigma_k)$, where $h(u) = (2\pi)^{-d/2} \exp(-|u|^2/2)$. Then,

$x_k(u) = A_k M_k P_k x_{k-1}(u) = \int_{\mathbb{R}^d} h_{\sigma_k}(u - v)\, M_k P_k x_{k-1}(v)\, dv \ \in \mathcal{H}_k.$
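A minimal sketch (ours, not from the paper) of the homogeneous dot-product kernel (2), using κ(t) = e^{t−1} as one admissible choice of κk with κk(1) = 1 and non-negative Taylor coefficients; the check confirms the norm-preservation property ∥ϕk(z)∥² = Kk(z, z) = ∥z∥² stated after (2).

```python
import numpy as np

def homogeneous_kernel(z, zp, kappa=lambda t: np.exp(t - 1.0)):
    # K(z, z') = ||z|| ||z'|| kappa(<z, z'> / (||z|| ||z'||)), with kappa(1) = 1.
    nz, nzp = np.linalg.norm(z), np.linalg.norm(zp)
    if nz == 0.0 or nzp == 0.0:
        return 0.0                      # convention at the origin
    return nz * nzp * kappa(np.dot(z, zp) / (nz * nzp))

z = np.array([3.0, 4.0])
# kappa(1) = 1 ensures ||phi(z)||^2 = K(z, z) = ||z||^2 (norm preservation)
assert np.isclose(homogeneous_kernel(z, z), np.dot(z, z))
```

Any κ with κ(1) = 1 and a valid positive definite structure could be substituted here; the exponential choice is only one of the admissible kernels.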
Applying Schur's test to the integral operator Ak (see Appendix A), we obtain that ∥Ak∥ ≤ 1. Thus, ∥xk∥ ≤ ∥Mk Pk xk–1∥ and xk ∈ L2(Ω, Hk). Note that a similar pooling operator is used in the scattering representation [5, 17], though in a different way which does not affect subsequent layers.

Multilayer construction. Finally, we obtain a multilayer representation by composing the previous operators multiple times. In order to increase invariance with each layer, the size of the patch Sk and the pooling scale σk typically grow exponentially with k, with σk and sup_{c∈Sk} |c| of the same order. With n layers, the final representation is given by the feature map

$\Phi_n(x_0) := x_n = A_n M_n P_n A_{n-1} M_{n-1} P_{n-1} \cdots A_1 M_1 P_1 x_0 \ \in L^2(\Omega, \mathcal{H}_n). \qquad (3)$

Then, we can define a kernel Kn on two signals x0 and x′0 by Kn(x0, x′0) := ⟨Φn(x0), Φn(x′0)⟩, whose RKHS H_{Kn} contains all functions of the form f(x0) = ⟨w, Φn(x0)⟩ with w ∈ L2(Ω, Hn). The following lemma shows that this representation preserves all information about the signal at each layer, and that each feature map xk can be sampled on a discrete set with no loss of information. This suggests a natural approach to discretization, which we discuss next. For space reasons, all proofs of this paper are relegated to Appendix C.

Lemma 1 (Signal preservation). Assume that Hk contains the linear functions ⟨w, ·⟩ with w in Pk (this is true for all kernels Kk described in Appendix B); then the signal xk–1 can be recovered from a sampling of xk = Ak Mk Pk xk–1 at discrete locations as soon as the union of patches centered at these points covers all of Ω. It follows that xk can be reconstructed from such a sampling.

2.1 From Theory to Practice: Discretization and Signal Preservation

The previous construction defines a kernel representation for general signals in L2(Ω, H0), which is an abstract object defined for theoretical purposes, as is often done in signal processing [17].
In practice, signals are discrete, and it is thus important to discuss the problem of discretization, as done in [15]. For clarity, we limit the presentation to one-dimensional signals (Ω = R^d with d = 1), but the arguments can easily be extended to higher dimensions d when using box-shaped patches. Notation from the previous section is preserved, but we add a bar on top of all discrete analogues of the continuous objects; e.g., ¯xk is a discrete feature map in ℓ2(Z, ¯Hk) for some RKHS ¯Hk.

Input signals x0 and ¯x0. Discrete signals acquired by a physical device are often seen as local integrators of signals defined on a continuous domain (e.g., sensors from digital cameras integrate the pointwise distribution of photons that hit a sensor in a spatial window). Let us then consider a signal x0 in L2(Ω, H0) and a sampling interval s0. Defining ¯x0 in ℓ2(Z, H0) such that ¯x0[n] = x0(n s0) for all n in Z, it is thus natural to assume that x0 = A0 x, where A0 is a pooling operator (local integrator) applied to an original signal x. The role of A0 is to prevent aliasing and reduce high frequencies; typically, the scale σ0 of A0 should be of the same magnitude as s0, which we choose to be s0 = 1 in the following, without loss of generality. This natural assumption will be kept later in the analysis.

Multilayer construction. We now want to build discrete feature maps ¯xk in ℓ2(Z, ¯Hk) at each layer k, involving subsampling with a factor sk with respect to ¯xk–1. We define the discrete analogues of the operators Pk (patch extraction), Mk (kernel mapping), and Ak (pooling) as follows: for n ∈ Z,

$\bar P_k \bar x_{k-1}[n] := e_k^{-1/2}\,\big(\bar x_{k-1}[n], \bar x_{k-1}[n+1], \dots, \bar x_{k-1}[n + e_k - 1]\big) \ \in \bar{\mathcal P}_k := \bar{\mathcal H}_{k-1}^{e_k}$

$\bar M_k \bar P_k \bar x_{k-1}[n] := \bar\varphi_k\big(\bar P_k \bar x_{k-1}[n]\big) \ \in \bar{\mathcal H}_k$

$\bar x_k[n] = \bar A_k \bar M_k \bar P_k \bar x_{k-1}[n] := s_k^{1/2} \sum_{m \in \mathbb Z} \bar h_k[n s_k - m]\, \bar M_k \bar P_k \bar x_{k-1}[m] = \big(\bar h_k * \bar M_k \bar P_k \bar x_{k-1}\big)[n s_k] \ \in \bar{\mathcal H}_k,$

where (i) ¯Pk extracts a patch of size ek starting at position n in ¯xk–1 (defining a patch centered at n is also possible), which lives in the Hilbert space ¯Pk defined as the direct sum of ek copies of ¯Hk–1; (ii) ¯Mk is a kernel mapping identical to the continuous case, which preserves the norm, like Mk; and (iii) ¯Ak performs a convolution with a Gaussian filter followed by subsampling with factor sk. The next lemma shows that under mild assumptions, this construction preserves signal information.

Lemma 2 (Signal recovery with subsampling). Assume that ¯Hk contains the linear functions ⟨w, ·⟩ for all w ∈ ¯Pk and that ek ≥ sk. Then, ¯xk–1 can be recovered from ¯xk.

We note that this result relies on recovery by deconvolution of the pooling convolution with filter ¯hk, which is stable when its scale parameter, typically of order sk to prevent aliasing, is small enough. This suggests using small values for ek and sk, as in typical recent convolutional architectures [30].

Links between the parameters of the discrete and continuous models. Due to subsampling, the patch sizes in the continuous and discrete models are related by a multiplicative factor. Specifically, a patch of size ek in the discretization corresponds to a patch Sk of diameter ek · sk−1 sk−2 · · · s1 in the continuous case. The same holds true for the scale parameter of the Gaussian pooling.

2.2 From Theory to Practice: Kernel Approximation and Convolutional Kernel Networks

Besides discretization, two modifications are required to use the image representation we have described in practice. The first one consists of using feature maps with finite spatial support, which introduces border effects that we do not study here, but which are negligible when dealing with large realistic images.
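The discrete operators of Section 2.1 can be sketched numerically (our sketch, with assumed circular boundary conditions so that the norm identities are exact, and with the infinite-dimensional kernel mapping M̄k omitted since it preserves norms pointwise): the e^{−1/2} scaling makes patch extraction norm-preserving, and pooling convolves with a Gaussian filter before subsampling by s.

```python
import numpy as np

def patch_extract(x, e):
    # \bar P_k: patch of size e at each position (circular boundary),
    # scaled by e^{-1/2} so that the operator preserves the l2 norm.
    return np.stack([np.roll(x, -j) for j in range(e)], axis=1) / np.sqrt(e)

def gaussian_pool(z, sigma, s):
    # \bar A_k: convolution with a Gaussian filter \bar h_k, then subsampling
    # by a factor s, with the s^{1/2} factor of the discrete construction.
    r = 3 * int(np.ceil(sigma))
    t = np.arange(-r, r + 1)
    h = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    h /= h.sum()
    out = np.stack([np.convolve(z[:, c], h, mode="same")
                    for c in range(z.shape[1])], axis=1)
    return np.sqrt(s) * out[::s]

x = np.random.default_rng(1).normal(size=32)
P = patch_extract(x, e=3)               # shape (32, 3): one patch per position
assert np.isclose(np.linalg.norm(P), np.linalg.norm(x))   # norm preservation
xk = gaussian_pool(P, sigma=2.0, s=2)   # shape (16, 3): pooled and subsampled
```

Lemma 2's condition e ≥ s corresponds here to overlapping patches surviving the subsampling, which is what makes recovery by deconvolution possible.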
The second one requires a finite-dimensional approximation of the kernel maps, leading to the convolutional kernel network model of [15]. Typically, each RKHS mapping is approximated by performing a projection onto a subspace of finite dimension, a classical approach to make kernel methods work at large scale [12, 31, 34]. One advantage of this approach is its compatibility with the RKHSs (meaning that the approximations live in the respective RKHSs), so that the stability results we present next are preserved thanks to the non-expansiveness of the projection. It is then possible to derive theoretical results for the CKN model, which appears as a natural implementation of the kernel constructed previously; yet, we will also show in Section 5 that the results apply more broadly to CNNs that are contained in the functional space associated to the kernel.

3 Stability to Deformations and Translation Invariance

In this section, we study the translation invariance and the stability of the kernel representation described in Section 2 for continuous signals under the action of diffeomorphisms. We use a characterization of stability similar to the one introduced by Mallat [17]: for a C¹-diffeomorphism τ : Ω → Ω, let Lτ denote the linear operator defined by Lτ x(u) = x(u − τ(u)); the representation Φ(·) is stable under the action of diffeomorphisms if there exist two constants C1 and C2 such that

$\|\Phi(L_\tau x) - \Phi(x)\| \le \big(C_1 \|\nabla\tau\|_\infty + C_2 \|\tau\|_\infty\big)\,\|x\|, \qquad (4)$

where ∇τ is the Jacobian of τ, ∥∇τ∥∞ := sup_{u∈Ω} ∥∇τ(u)∥, and ∥τ∥∞ := sup_{u∈Ω} |τ(u)|. As in [17], our results will assume the regularity condition ∥∇τ∥∞ < 1/2. In order to have a translation-invariant representation, we want C2 to be small (a translation is a diffeomorphism with ∇τ = 0), and indeed we will show that C2 is proportional to 1/σn, where σn is the scale of the last pooling layer, which typically increases exponentially with the number of layers n.
Note that unlike the scattering transform [17], we do not have a representation that preserves the norm, i.e., such that ∥Φ(x)∥ = ∥x∥. While the patch extraction Pk and kernel mapping Mk operators do preserve the norm, the pooling operators Ak may remove (or significantly reduce) frequencies of the signal that are larger than 1/σk. Yet, natural signals such as natural images often have high energy in the low-frequency domain (the power spectrum of natural images is often considered to decay polynomially as 1/f², where f is the signal frequency [33]). For such classes of signals, a large fraction of the signal energy will be preserved by the pooling operator. In particular, with some additional assumptions on the kernels Kk, it is possible to show [3] that ∥Φ(x)∥ ≥ ∥An · · · A0 x∥. Additionally, when using a Gaussian kernel mapping ϕn+1 on top of the last feature map as a prediction layer instead of a linear layer, the final representation Φf(x) := ϕn+1(Φn(A0 x)) preserves stability and always has unit norm (see the extended version of the paper [3] for details). This suggests that norm preservation may be a less relevant concern in our kernel setting.

3.1 Stability Results

In order to study the stability of the representation (3), we assume that the input signal x0 may be written as x0 = A0 x, where A0 is an initial pooling operator at scale σ0, which allows us to control the high frequencies of the signal in the first layer. As discussed in Section 2.1, this assumption is natural and compatible with any physical acquisition device. Note that σ0 can be taken arbitrarily small, making the operator A0 arbitrarily close to the identity, so that this assumption does not limit the generality of our results.
Moreover, we make the following assumptions for each layer k:

(A1) Norm preservation: ∥ϕk(x)∥ = ∥x∥ for all x in Pk;
(A2) Non-expansiveness: ∥ϕk(x) − ϕk(x′)∥ ≤ ∥x − x′∥ for all x, x′ in Pk;
(A3) Patch sizes: there exists κ > 0 such that at any layer k we have sup_{c∈Sk} |c| ≤ κ σk−1.

Note that assumptions (A1-2) imply that the operators Mk preserve the norm and are non-expansive. Appendix B exposes a large class of homogeneous kernels that satisfy assumptions (A1-2).

General bound for stability. The following result gives an upper bound on the quantity of interest, ∥Φ(Lτx) − Φ(x)∥, in terms of the norms of various linear operators which control how τ affects each layer. The commutator of linear operators A and B is denoted by [A, B] = AB − BA.

Proposition 3. Let Φ(x) = Φn(A0 x), where Φn is defined in (3), for x in L2(Ω, H0). Then,

$\|\Phi(L_\tau x) - \Phi(x)\| \le \left( \sum_{k=1}^{n} \|[P_k A_{k-1}, L_\tau]\| + \|[A_n, L_\tau]\| + \|L_\tau A_n - A_n\| \right) \|x\|. \qquad (5)$

In the case of a translation Lτ x(u) = Lc x(u) = x(u − c), it is easy to see that pooling and patch extraction operators commute with Lc (this is also known as covariance or equivariance to translations), so that we are left with the term ∥Lc An − An∥, which should control translation invariance. For general diffeomorphisms τ, we no longer have exact covariance, but we show below that the commutators are stable to τ, in the sense that ∥[Pk Ak−1, Lτ]∥ is controlled by ∥∇τ∥∞, while ∥Lτ An − An∥ is controlled by ∥τ∥∞ and decays with the pooling size σn.

Bound on ∥[Pk Ak−1, Lτ]∥. We begin by noting that Pk z can be identified with (Lc z)_{c∈Sk} isometrically for all z in L2(Ω, Hk–1), since $\|P_k z\|^2 = \int_{S_k} \|L_c z\|^2 \, d\nu_k(c)$ by Fubini's theorem. Then,

$\|P_k A_{k-1} L_\tau z - L_\tau P_k A_{k-1} z\|^2 = \int_{S_k} \|L_c A_{k-1} L_\tau z - L_\tau L_c A_{k-1} z\|^2 \, d\nu_k(c) \le \sup_{c \in S_k} \|L_c A_{k-1} L_\tau z - L_\tau L_c A_{k-1} z\|^2,$

so that ∥[Pk Ak−1, Lτ]∥ ≤ sup_{c∈Sk} ∥[Lc Ak−1, Lτ]∥. The following result lets us bound ∥[Lc Ak−1, Lτ]∥ when |c| ≤ κ σk−1, which is satisfied under assumption (A3).

Lemma 4. Let Aσ be the pooling operator with kernel hσ(u) = σ^{−d} h(u/σ).
If ∥∇τ∥∞ ≤ 1/2, there exists a constant C1 such that for any σ and any |c| ≤ κσ, we have ∥[Lc Aσ, Lτ]∥ ≤ C1 ∥∇τ∥∞, where C1 depends only on h and κ.

A similar result is obtained in Mallat [17, Lemma E.1] for commutators of the form [Aσ, Lτ]; we extend it to handle integral operators Lc Aσ with a shifted kernel. The proof (given in Appendix C.4) relies on the fact that [Lc Aσ, Lτ] is an integral operator, whose norm can then be bounded via Schur's test. Note that κ can be made larger, at the cost of an increase of the constant C1 of the order κ^{d+1}.

Bound on ∥Lτ An − An∥. We bound the operator norm ∥Lτ An − An∥ in terms of ∥τ∥∞ using the following result due to Mallat [17, Lemma 2.11], with σ = σn:

Lemma 5. If ∥∇τ∥∞ ≤ 1/2, we have

$\|L_\tau A_\sigma - A_\sigma\| \le \frac{C_2}{\sigma} \|\tau\|_\infty, \qquad (6)$

with C2 = 2^d · ∥∇h∥1.

Combining Proposition 3 with Lemmas 4 and 5, we immediately obtain the following result.

Theorem 6. Let Φ(x) be a representation given by Φ(x) = Φn(A0 x) and assume (A1-3). If ∥∇τ∥∞ ≤ 1/2, we have

$\|\Phi(L_\tau x) - \Phi(x)\| \le \left( C_1 (1 + n) \|\nabla\tau\|_\infty + \frac{C_2}{\sigma_n} \|\tau\|_\infty \right) \|x\|. \qquad (7)$

This result matches the desired notion of stability in Eq. (4), with a translation-invariance factor that decays with σn. The dependence on a notion of depth (the number of layers n here) also appears in [17], with a factor equal to the maximal length of scattering paths, and with the same condition ∥∇τ∥∞ ≤ 1/2. However, while the norm of the scattering representation is preserved as the length of these paths goes to infinity, the norm of Φ(x) can decrease with depth due to pooling layers, though this concern may be alleviated by using an additional non-linear prediction layer, as discussed previously (see also [3]).

3.2 Stability with Kernel Approximations

As in the analysis of the scattering transform [17], we have characterized the stability and shift-invariance of the data representation for continuous signals, in order to give some intuition about the properties of the corresponding discrete representation, which we have described in Section 2.1.
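The two mechanisms behind Theorem 6 can be observed numerically on a discrete circular signal (our sketch, with an FFT-based circular Gaussian pooling standing in for Aσ): pooling commutes exactly with translations (equivariance, as used in Proposition 3), and the residual ∥Lc Aσ x − Aσ x∥ shrinks as the pooling scale σ grows, reflecting the C2/σ factor of Lemma 5.

```python
import numpy as np

def gauss_pool(x, sigma):
    # Circular Gaussian pooling: a discrete stand-in for the operator A_sigma.
    n = len(x)
    d = np.minimum(np.arange(n), n - np.arange(n)).astype(float)
    h = np.exp(-d ** 2 / (2 * sigma ** 2))
    h /= h.sum()
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

rng = np.random.default_rng(0)
x = rng.normal(size=256)
c = 3  # translation L_c x(u) = x(u - c)

# (i) Equivariance: pooling commutes with translations, L_c A x = A L_c x.
assert np.allclose(np.roll(gauss_pool(x, 4.0), c), gauss_pool(np.roll(x, c), 4.0))

# (ii) Invariance: || L_c A_sigma x - A_sigma x || decreases as sigma grows.
errs = [np.linalg.norm(np.roll(gauss_pool(x, s), c) - gauss_pool(x, s))
        for s in (4.0, 8.0, 16.0)]
assert errs[0] > errs[1] > errs[2]
```

The equivariance check is exact because circular convolution commutes with circular shifts; the decay check is a numerical illustration of the 1/σ trend, not a proof.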
Another approximation performed in the CKN model of [15] consists of adding projection steps onto finite-dimensional subspaces of the RKHSs at each layer, as discussed in Section 2.2. Interestingly, the stability properties we have obtained previously are compatible with these steps. We may indeed redefine the operator Mk as the pointwise operation such that Mk z(u) = Πk ϕk(z(u)) for any map z in L2(Ω, Pk), instead of Mk z(u) = ϕk(z(u)), where Πk : Hk → Fk is a projection operator onto a linear subspace. Then, Mk no longer necessarily preserves the norm, but ∥Mk z∥ ≤ ∥z∥, with a loss of information corresponding to the quality of approximation of the kernel Kk on the points z(u). On the other hand, the non-expansiveness of Mk is satisfied thanks to the non-expansiveness of the projection. Additionally, the CKN construction provides a finite-dimensional representation at each layer, which preserves the norm structure of the original Hilbert spaces isometrically. In summary, it is possible to show that the conclusions of Theorem 6 remain valid for this tractable CKN representation, but we lose signal information in the process. The stability of the predictions can then be controlled through the norm of the last (linear) layer, which is typically used as a regularizer [15].

4 Global Invariance to Group Actions

In Section 3, we have seen how the kernel representation of Section 2 creates invariance to translations by commuting with the action of translations at intermediate layers, and how the last pooling layer on the translation group governs the final level of invariance. It is often useful to encode invariances to different groups of transformations, such as rotations or reflections (see, e.g., [9, 17, 22, 29]). Here, we show how this can be achieved by defining adapted patch extraction and pooling operators that commute with the action of a transformation group G (this is known as group covariance or equivariance).
We assume that G is locally compact, so that we can define a left-invariant Haar measure µ, that is, a measure on G that satisfies µ(gS) = µ(S) for any Borel set S ⊂ G and any g in G. We assume the initial signal x(u) is defined on G, and we define subsequent feature maps on the same domain. The action of an element g ∈ G is denoted by Lg, where Lg x(u) = x(g^{−1} u). Then, we are interested in defining a layer, that is, a succession of patch extraction, kernel mapping, and pooling operators, that commutes with Lg, in order to achieve equivariance to the group G.

Patch extraction. We define patch extraction as follows: P x(u) = (x(uv))_{v∈S} for all u ∈ G, where S ⊂ G is a patch centered at the identity. P commutes with Lg since

P Lg x(u) = (Lg x(uv))_{v∈S} = (x(g^{−1} uv))_{v∈S} = P x(g^{−1} u) = Lg P x(u).

Kernel mapping. The pointwise operator M is defined as in Section 2, and thus commutes with Lg.

Pooling. The pooling operator on the group G is defined in a similar fashion to [22] by

$A x(u) = \int_G x(uv)\, h(v)\, d\mu(v) = \int_G x(v)\, h(u^{-1} v)\, d\mu(v),$

where h is a pooling filter typically localized around the identity element. It is easy to see from the first expression of A x(u) that A Lg x(u) = Lg A x(u), making the pooling operator G-equivariant.

In our analysis of stability in Section 3, we saw that inner pooling layers are useful to guarantee stability to local deformations, while global invariance is achieved mainly through the last pooling layer. In some cases, one only needs stability to a subgroup of G while achieving global invariance to the whole group; e.g., in the roto-translation group [21], one might want invariance to a global rotation but stability to local translations. Then, one can perform pooling just on the subgroup to be stabilized (e.g., translations) in intermediate layers, while pooling on the entire group at the last layer to achieve global group invariance.
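The G-equivariance of group pooling can be checked exactly on a toy finite group (our sketch: G is the cyclic group Z_n with uv = (u + v) mod n, a normalized Haar measure replaced by counting measure, and a filter h localized around the identity):

```python
import numpy as np

n = 12                                  # cyclic group Z_n as a toy group G
rng = np.random.default_rng(3)
x = rng.normal(size=n)                  # signal x defined on G
# pooling filter h localized around the identity element v = 0
h = np.exp(-np.minimum(np.arange(n), n - np.arange(n)) / 2.0)

def L(g, xx):
    # group action: L_g x(u) = x(g^{-1} u) = x(u - g) on Z_n
    return np.roll(xx, g)

def pool(xx):
    # A x(u) = sum_v x(u v) h(v), with u v = (u + v) mod n on Z_n
    return np.array([sum(xx[(u + v) % n] * h[v] for v in range(n))
                     for u in range(n)])

g = 4
assert np.allclose(pool(L(g, x)), L(g, pool(x)))   # equivariance: A L_g = L_g A
```

The same computation with uv read as group multiplication works for any finite group; the cyclic case is chosen only because indexing is simple.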
5 Link with Convolutional Neural Networks

In this section, we study the connection between the kernel representation defined in Section 2 and CNNs. Specifically, we show that the RKHS H_{Kn} obtained from our kernel construction contains a set of CNNs on continuous domains with certain types of smooth homogeneous activations. An important consequence is that the stability results of the previous sections apply to this class of CNNs.

CNN maps construction. We now define a CNN function fσ that takes as input an image x0 in L2(Ω, R^{p0}) with p0 channels and builds a sequence of feature maps, represented at layer k as a function zk in L2(Ω, R^{pk}) with pk channels; it performs linear convolutions with a set of filters (w^i_k)_{i=1,...,pk}, followed by a pointwise activation function σ to obtain intermediate feature maps ˜zk, then applies a linear pooling filter and repeats the same operations at each layer. Here, each w^i_k is in L2(Sk, R^{pk–1}), with channels denoted by w^{ij}_k ∈ L2(Sk, R). Formally, the intermediate map ˜zk in L2(Ω, R^{pk}) is obtained for k ≥ 1 by

$\tilde z_k^i(u) = n_k(u)\,\sigma\!\big(\langle w_k^i, P_k z_{k-1}(u)\rangle / n_k(u)\big), \qquad (8)$

where ˜zk(u) = (˜z¹k(u), . . . , ˜z^{pk}_k(u)) in R^{pk}, and Pk is the patch extraction operator, which operates here on finite-dimensional maps. The activation involves a pointwise non-linearity σ along with a quantity nk(u) that is independent of the filters and that will be made explicit in the sequel. Finally, the map zk is obtained by applying a pooling operator as in Section 2, with zk = Ak ˜zk, and z0 = x0.

Homogeneous activations. The choice of non-linearity σ relies on Lemma B.2 of the appendix, which shows that for many choices of smooth functions σ, the RKHS Hk defined in Section 2 contains the functions z ↦ ∥z∥ σ(⟨g, z⟩/∥z∥) for all g in Pk.
While this homogenization involving the quantities ∥z∥ is not standard in classical CNNs, we note that (i) the most successful activation function, namely the rectified linear unit, is homogeneous, that is, relu(⟨g, z⟩) = ∥z∥ relu(⟨g, z⟩/∥z∥); and (ii) while relu is non-smooth and thus not in our RKHSs, there exists a smoothed variant that satisfies the conditions of Lemma B.2 for useful kernels. As noticed in [35, 36], this is for instance the case for the inverse polynomial kernel described in Appendix B. In Figure 2, we plot and compare these different variants of relu. We may now define the quantities nk(u) := ∥Pk xk−1(u)∥ in (8), which are due to the homogenization and are independent of the filters w^i_k.

Classification layer. The final CNN prediction function fσ is given by inner products with the feature maps of the last layer: fσ(x0) = ⟨wn+1, zn⟩, with parameters wn+1 in L2(Ω, R^{pn}). The next result shows that for appropriate σ, the function fσ is in H_{Kn}. The construction of this function in the RKHS and the proof are given in Appendix D. We note that a similar construction for fully connected networks with constraints on weights and inputs was given in [35].

Proposition 7 (CNNs and RKHSs). Assume the activation σ satisfies Cσ(a) < ∞ for all a ≥ 0, where Cσ is defined for a given kernel in Lemma B.2. Then the CNN function fσ defined above is in the RKHS H_{Kn}, with norm

$\|f_\sigma\|^2 \le p_n \sum_{i=1}^{p_n} \|w_{n+1}^i\|_2^2 \, B_{n,i},$

where Bn,i is defined recursively by $B_{1,i} = C_\sigma^2(\|w_1^i\|_2^2)$ and $B_{k,i} = C_\sigma^2\!\Big(p_{k-1} \sum_{j=1}^{p_{k-1}} \|w_k^{ij}\|_2^2 \, B_{k-1,j}\Big)$.
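Point (i) above, the positive homogeneity of relu, is exactly what makes the homogenized activation (8) collapse to the standard CNN activation when σ = relu. A one-line numerical check (our sketch, with random illustrative filter and patch vectors):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

rng = np.random.default_rng(4)
w, patch = rng.normal(size=5), rng.normal(size=5)   # filter w_k^i, patch P_k z_{k-1}(u)
n_u = np.linalg.norm(patch)                         # n_k(u) = ||P_k z_{k-1}(u)||
homogenized = n_u * relu(np.dot(w, patch) / n_u)    # activation as in Eq. (8)
# positive homogeneity: relu(<w, z>) = ||z|| relu(<w, z> / ||z||)
assert np.isclose(homogenized, relu(np.dot(w, patch)))
```

For a smoothed relu this identity no longer holds exactly, which is why (8) keeps the explicit n_k(u) factor in the general case.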
The results of this section imply that our study of the geometry of the kernel representations, and in particular the stability and invariance properties of Section 3, apply to the generic CNNs defined above, thanks to the Lipschitz smoothness relation (1). The smoothness is then controlled by the RKHS norm of these functions, which sheds light on the links between generalization and stability. In particular, functions with low RKHS norm (a.k.a. "large margin") are known to generalize better to unseen data (see, e.g., the notion of margin bounds for SVMs [27, 28]). This implies, for instance, that generalization is harder if the task requires classifying two slightly deformed images with different labels, since this requires a function with large RKHS norm according to our stability analysis. In contrast, if a stable function (i.e., with small RKHS norm) is sufficient to do well on a training set, learning becomes "easier" and few samples may be enough for good generalization.

[Figure 2: Comparison of one-dimensional functions obtained with relu and smoothed relu (sReLU) activations. (Left) The non-homogeneous setting of [35, 36], f : x ↦ σ(x). (Right) Our homogeneous setting, f : x ↦ |x| σ(wx/|x|), for different values of the parameter w. Note that for w ≥ 0.5, sReLU and ReLU are indistinguishable.]

Acknowledgements

This work was supported by a grant from ANR (MACARON project under grant number ANR-14-CE23-0003-01), by the ERC grant number 714381 (SOLARIS project), and by the MSR-Inria joint center.

References

[1] F. Anselmi, L. Rosasco, and T. Poggio. On invariance and selectivity in representation learning. Information and Inference, 5(2):134–158, 2016.

[2] F. Anselmi, L. Rosasco, C. Tan, and T. Poggio. Deep convolutional networks are hierarchical kernel machines.
preprint arXiv:1508.01084, 2015.

[3] A. Bietti and J. Mairal. Group invariance and stability to deformations of deep convolutional representations. preprint arXiv:1706.03078, 2017.

[4] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.

[5] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 35(8):1872–1886, 2013.

[6] J. Bruna, A. Szlam, and Y. LeCun. Learning stable group invariant representations with convolutional networks. preprint arXiv:1301.3537, 2013.

[7] Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems (NIPS), 2009.

[8] M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving robustness to adversarial examples. In International Conference on Machine Learning (ICML), 2017.

[9] T. Cohen and M. Welling. Group equivariant convolutional networks. In International Conference on Machine Learning (ICML), 2016.

[10] A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances in Neural Information Processing Systems (NIPS), 2016.

[11] J. Diestel and J. J. Uhl. Vector Measures. American Mathematical Society, 1977.

[12] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research (JMLR), 2:243–264, 2001.

[13] B. Haasdonk and H. Burkhardt. Invariant kernel functions for pattern analysis and machine learning. Machine Learning, 68(1):35–61, 2007.

[14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.

[15] J. Mairal.
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks. In Advances in Neural Information Processing Systems (NIPS), 2016.

[16] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In Advances in Neural Information Processing Systems (NIPS), 2014.

[17] S. Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012.

[18] G. Montavon, M. L. Braun, and K.-R. Müller. Kernel analysis of deep networks. Journal of Machine Learning Research (JMLR), 12:2563–2581, 2011.

[19] Y. Mroueh, S. Voinea, and T. A. Poggio. Learning with group invariant features: A kernel perspective. In Advances in Neural Information Processing Systems (NIPS), 2015.

[20] K. Muandet, K. Fukumizu, B. Sriperumbudur, B. Schölkopf, et al. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends in Machine Learning, 10(1-2):1–141, 2017.

[21] E. Oyallon and S. Mallat. Deep roto-translation scattering for object classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

[22] A. Raj, A. Kumar, Y. Mroueh, T. Fletcher, and B. Schoelkopf. Local group invariant representations via orbit embeddings. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.

[23] S. Saitoh. Integral Transforms, Reproducing Kernels and their Applications, volume 369. CRC Press, 1997.

[24] I. J. Schoenberg. Positive definite functions on spheres. Duke Mathematical Journal, 9(1):96–108, 1942.

[25] B. Schölkopf. Support Vector Learning. PhD thesis, Technische Universität Berlin, 1997.

[26] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.

[27] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. 2001.

[28] S. Shalev-Shwartz and S. Ben-David.
Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.

[29] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2014.

[31] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the International Conference on Machine Learning (ICML), 2000.

[32] E. M. Stein. Harmonic Analysis: Real-variable Methods, Orthogonality, and Oscillatory Integrals. Princeton University Press, 1993.

[33] A. Torralba and A. Oliva. Statistics of natural image categories. Network: Computation in Neural Systems, 14(3):391–412, 2003.

[34] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems (NIPS), 2001.

[35] Y. Zhang, J. D. Lee, and M. I. Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In International Conference on Machine Learning (ICML), 2016.

[36] Y. Zhang, P. Liang, and M. J. Wainwright. Convexified convolutional neural networks. In International Conference on Machine Learning (ICML), 2017.
Query Complexity of Clustering with Side Information

Arya Mazumdar and Barna Saha
College of Information and Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
{arya,barna}@cs.umass.edu

Abstract

Suppose we are given a set of n elements to be clustered into k (unknown) clusters, and an oracle/expert labeler that can interactively answer pairwise queries of the form, "do two elements u and v belong to the same cluster?". The goal is to recover the optimum clustering by asking the minimum number of queries. In this paper, we provide a rigorous theoretical study of this basic problem of the query complexity of interactive clustering, and give strong information-theoretic lower bounds as well as nearly matching upper bounds. Most clustering problems come with a similarity matrix, which is used by an automated process to cluster similar points together. However, obtaining an ideal similarity function is extremely challenging due to ambiguity in data representation, poor data quality, etc., and this is one of the primary reasons that makes clustering hard. To improve the accuracy of clustering, a fruitful approach in recent years has been to ask a domain expert or the crowd to obtain labeled data interactively. Many heuristics have been proposed, and all of these use a similarity function to come up with a querying strategy. Even so, a systematic theoretical study has been lacking. Our main contribution in this paper is to show the dramatic power of side information, a.k.a. the similarity matrix, in reducing the query complexity of clustering. A similarity matrix represents noisy pairwise relationships, such as one computed by some function on attributes of the elements. A natural noisy model is one where similarity values are drawn independently from some arbitrary probability distribution f+ when the underlying pair of elements belong to the same cluster, and from some f− otherwise.
We show that given such a similarity matrix, the query complexity reduces drastically, from Θ(nk) (no similarity matrix) to O(k² log n / H²(f+∥f−)), where H² denotes the squared Hellinger divergence. Moreover, this is information-theoretically optimal within an O(log n) factor. Our algorithms are all efficient and parameter free, i.e., they work without any knowledge of k, f+ and f−, and depend only logarithmically on n. Our lower bounds could be of independent interest, and provide a general framework for proving lower bounds for classification problems in the interactive setting. Along the way, our work also reveals intriguing connections to popular community detection models such as the stochastic block model, and opens up many avenues for interesting future research.

1 Introduction

Clustering is one of the most fundamental and popular methods for data classification. In this paper we provide a rigorous theoretical study of clustering with the help of an oracle, a model that has seen a recent surge of popular heuristic algorithms.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Suppose we are given a set of n points that need to be clustered into k clusters, where k is unknown to us. Suppose there is an oracle that either knows the true underlying clustering or can compute the best clustering under some optimization constraints. We are allowed to query the oracle whether any two points belong to the same cluster or not. How many such queries are needed, at minimum, to perform the clustering exactly? The motivation for this problem lies at the heart of modern machine learning applications, where the goal is to facilitate more accurate learning from less data by interactively asking for labeled data, e.g., active learning and crowdsourcing. Specifically, automated clustering algorithms that rely on just a similarity matrix often return inaccurate results.
In contrast, obtaining even a few labeled data points adaptively can significantly improve accuracy. Coupled with this observation, clustering with an oracle has generated tremendous interest in the last few years, with an increasing number of heuristics developed for this purpose [22, 40, 13, 42, 43, 18, 39, 12, 21, 29]. The number of queries is a natural measure of "efficiency" here, as it directly relates to the amount of labeled data or the cost of using crowd workers; however, theoretical guarantees on query complexity are lacking in the literature. On the theoretical side, query complexity, or decision tree complexity, is a classical model of computation that has been extensively studied for different problems [16, 4, 8]. For the clustering problem, one can easily obtain an upper bound of O(nk) on the query complexity, and it is achievable even when k is unknown [40, 13]: to cluster an element at any stage of the algorithm, ask one query per existing cluster with this element (this is sufficient due to transitivity), and start a new cluster if all queries are negative. It turns out that Ω(nk) is also a lower bound, even for randomized algorithms (see, e.g., [13]). In contrast, the heuristics developed in practice often ask significantly fewer queries than nk. What could be a possible reason for this deviation between theory and practice? Before delving into this question, let us look at a motivating application that drives this work.

A Motivating Application: Entity Resolution. Entity resolution (ER, also known as record linkage) is a fundamental problem in data mining and has been studied since 1969 [17]. The goal of ER is to identify and link/group different manifestations of the same real-world object, e.g., different ways of addressing (names, email addresses, Facebook accounts) the same person, Web pages with different descriptions of the same business, different photos of the same object, etc. (see the excellent survey by Getoor and Machanavajjhala [20]).
However, the lack of an ideal similarity function to compare objects makes ER an extremely challenging task. For example, DBLP, the popular computer science bibliography dataset, is riddled with ER errors [30]. It is common for DBLP to merge publication records of different persons if they share similar attributes (e.g., the same name), or to split the publication record of a single person due to slight differences in representation (e.g., Marcus Weldon vs. Marcus K. Weldon). In recent years, a popular trend for improving ER accuracy has been to incorporate human wisdom. The works of [42, 43, 40] (and many subsequent works) use a computer-generated similarity matrix to come up with a collection of pair-wise questions that are asked interactively to a crowd. The goal is to minimize the number of queries to the crowd while maximizing accuracy. This is analogous to our interactive clustering framework. Intriguingly, as shown by extensive experiments on various real datasets, these heuristics use far fewer queries than nk [42, 43, 40], despite the Ω(nk) theoretical lower bound. On close scrutiny, we find that all of these heuristics use some computer-generated similarity matrix to guide the selection of queries. Could these similarity matrices, aka side information, be the reason behind this deviation and the significant reduction in query complexity? Let us call this clustering using side information, where the clustering algorithm has access to a similarity matrix. Such a matrix can be generated directly from the raw data (e.g., by applying Jaccard similarity on the attributes), or by using a crude classifier trained on a very small set of labelled samples. Let us assume the following generative model for the side information: a noisy weighted upper-triangular similarity matrix W = {wi,j}, 1 ≤ i < j ≤ n, where wi,j is drawn from a probability distribution f+ if i and j belong to the same cluster, and from f− otherwise.
However, the algorithm designer is given only the similarity matrix, without any information on f+ and f−. In this work, one of our major contributions is to show the separation in query complexity of clustering with and without such side information. Indeed, the recent works of [18, 33] analyze the popular heuristic algorithms of [40, 43], with probability distributions obtained from real datasets, and show that these heuristics are significantly suboptimal even for very simple distributions. To the best of our knowledge, before this work there existed no algorithm that works for arbitrary unknown distributions f+ and f− with near-optimal performance. We develop a generic framework for proving information-theoretic lower bounds for interactive clustering using side information, and design efficient algorithms for arbitrary f+ and f− that nearly match the lower bound. Moreover, our algorithms are parameter free, that is, they work without any knowledge of f+, f− or k.

Connection to popular community detection models. The model of side information considered in this paper is a direct and significant generalization of the planted partition model, also known as the stochastic block model (SBM) [28, 15, 14, 2, 1, 25, 24, 11, 36]. The stochastic block model is an extremely well-studied model of random graphs used for modeling communities in the real world, and is a special case of the similarity matrices we consider. In SBM, two vertices within the same community share an edge with probability p, and two vertices in different communities share an edge with probability q; that is, f+ is Bernoulli(p) and f− is Bernoulli(q). It is often assumed that k, the number of communities, is a constant (e.g., k = 2 is known as the planted bisection model and is studied extensively [1, 36, 15]) or a slowly growing function of n (e.g., k = o(log n)). The points are assigned to clusters according to a probability distribution indicating the relative sizes of the clusters.
In contrast, in our model not only can f+ and f− be arbitrary probability mass functions (pmfs), but we also make no assumption on k or the cluster size distribution, and allow any partitioning of the set of elements (i.e., an adversarial setting). Moreover, f+ and f− are unknown. For SBM, parameter-free algorithms have been obtained only relatively recently, for a constant number of linear-sized clusters [3, 24]. There is extensive literature characterizing the threshold phenomenon in SBM in terms of p and q for exact and approximate recovery of clusters when the relative cluster sizes are known and nearly balanced (e.g., see [2] and the references therein). For k = 2 and equal-sized clusters, sharp thresholds are derived in [1, 36] for a specific sparse region of p and q.¹ In a more general setting, the vertices in the ith and jth communities are connected with probability qij, and threshold results for the sparse region have been derived in [2]; our model allows this as a special case when we have pmfs fi,j denoting the distributions of the corresponding random variables. If an oracle gives us some of the pairwise binary relations between elements (whether they belong to the same cluster or not), the threshold of SBM must also change. But by what amount? This connection to SBM could be of independent interest for studying the query complexity of interactive clustering with side information, and our work opens up many possibilities for future directions. Developing lower bounds in the interactive setting appears to be significantly challenging, as algorithms may choose to obtain any deterministic information adaptively by querying, and standard lower-bounding techniques based on Fano-type inequalities [9, 31] do not apply.
One of our major contributions in this paper is to provide a general framework for proving information-theoretic lower bounds for interactive clustering algorithms, which hold even for randomized algorithms, and even with full knowledge of f+, f− and k. In contrast, our algorithms are computationally efficient and parameter free (they work without knowing f+, f− and k). The technique that we introduce for our upper bounds could be useful for designing further parameter-free algorithms, which are extremely important in practice.

Other Related works. The interactive framework for clustering has been studied before in a model where the oracle is given the entire clustering and can answer whether a cluster needs to be split or two clusters must be merged [7, 6]. Here we confine our attention to pair-wise queries, as in all the practical applications that motivate this work [42, 43, 22, 40]. In most cases, an expert human or a crowd serves as the oracle. Due to the scale of the data, it is often not possible for such an oracle to answer queries on a large number of input data. Only recently, some heuristic algorithms with k-wise queries for small values of k (but k > 2) have been proposed in [39], and a non-interactive algorithm that selects random triangle queries has been analyzed in [41]. Also recently, the stochastic block model with active label-queries has been studied in [19]. Perhaps conceptually closest to us is a recent work by [5], which considers pair-wise queries for clustering. However, their setting is very different. They consider the specific NP-hard k-means objective with a distance matrix, which must be a metric and must satisfy a deterministic separation property. Their lower bounds are computational, not information theoretic; moreover, their algorithm must know the parameters. There remains a significant gap between their lower and upper bounds (∼log k vs. k²), and it would be interesting to see whether our techniques can be applied to improve this.
Here we have assumed that the oracle always returns the correct answer. To deal with the possibility that a crowdsourced oracle may give wrong answers, there are simple majority voting mechanisms or more complicated techniques [39, 12, 21, 29, 10, 41] to handle such errors. Our main objective is to study the power of side information, and we do not consider the more complex scenarios of handling erroneous oracle answers. The related problem of clustering with noisy queries is studied by us in a companion work [34]. Most of the results of the two papers are available online in a more extensive version [32].

¹Most recent works consider the region of interest as p = a log n / n and q = b log n / n for some a > b > 0.

Contributions. Formally, the problem we study in this paper can be described as follows.

Problem 1 (Query-Cluster with an Oracle). Consider a set of elements V ≡ [n] with k latent clusters Vi, i = 1, . . . , k, where k is unknown. There is an oracle O : V × V → {±1} that, when queried with a pair of elements u, v ∈ V × V, returns +1 iff u and v belong to the same cluster, and −1 iff u and v belong to different clusters. The queries Q ⊆ V × V can be made adaptively. Consider the side information W = {wu,v : 1 ≤ u < v ≤ n}, where the (u, v)th entry of W, wu,v, is a random variable drawn from a discrete probability distribution f+ if u, v belong to the same cluster, and from a discrete² probability distribution f−³ if u, v belong to different clusters. The parameters k, f+ and f− are unknown. Given V and W, find Q ⊆ V × V such that |Q| is minimum, and from the oracle answers and W it is possible to recover Vi, i = 1, 2, ..., k.

Without side information, as noted earlier, it is easy to see an algorithm with query complexity O(nk) for Query-Cluster. When no side information is available, it is also not difficult to obtain a lower bound of Ω(nk) on the query complexity.
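A minimal sketch of this transitivity-based O(nk) baseline, with a hypothetical dictionary-backed oracle standing in for the expert labeler (the ground-truth map `truth` is purely illustrative):

```python
def query_cluster_baseline(elements, oracle):
    """Place each element by querying one representative of every
    existing cluster; transitivity makes one query per cluster enough.
    A new cluster is opened when all answers are -1. Worst case: O(nk)
    queries, matching the no-side-information bound discussed above."""
    clusters = []  # each cluster is a list of elements
    for v in elements:
        for c in clusters:
            if oracle(v, c[0]) == +1:  # one query against this cluster
                c.append(v)
                break
        else:  # every existing cluster answered -1
            clusters.append([v])
    return clusters

# Hypothetical ground-truth oracle, used only for illustration.
truth = {0: 'a', 1: 'b', 2: 'a', 3: 'c', 4: 'b'}
oracle = lambda u, v: +1 if truth[u] == truth[v] else -1
```

The same one-query-per-existing-cluster loop reappears as the initialization phase of the algorithm in Section 3.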
Our main contributions are to develop strong information-theoretic lower bounds as well as nearly matching upper bounds when side information is available, and to characterize the effect of side information on query complexity precisely.

Upper Bound (Algorithms). We show that with side information W, a drastic reduction in the query complexity of clustering is possible, even with unknown parameters f+, f−, and k. We propose a Monte Carlo randomized algorithm that reduces the number of queries from O(nk) to O(k² log n / H²(f+∥f−)), where H(f∥g) is the Hellinger divergence between the probability distributions f and g, and that recovers the clusters accurately with high probability (with success probability 1 − 1/n) without knowing f+, f− or k (see Theorem 1). Depending on the value of k, this could be highly sublinear in n. Note that the squared Hellinger divergence between two pmfs f and g is defined to be

H²(f∥g) = (1/2) Σi (√f(i) − √g(i))².

We also develop a Las Vegas algorithm, that is, one which recovers the clusters with probability 1 (and not just with high probability), with query complexity O(n log n + k² log n / H²(f+∥f−)). Since f+ and f− can be arbitrary, not knowing the distributions poses a major challenge, and we believe our recipe could be fruitful for designing further parameter-free algorithms. We note that all our algorithms are computationally efficient; in fact, the time required is bounded by the size of the side information matrix, i.e., O(n²).

Theorem 1. Let the number of clusters k be unknown, and let f+ and f− be unknown discrete distributions with fixed cardinality of support. There exists an efficient (polynomial-time) Monte Carlo algorithm for Query-Cluster that has query complexity O(min{nk, k² log n / H²(f+∥f−)}) and recovers all the clusters accurately with probability 1 − o(1/n). Moreover, there exists an efficient Las Vegas algorithm that with probability 1 − o(1/n) has query complexity O(n log n + min{nk, k² log n / H²(f+∥f−)}).

Lower Bound.
Our main lower bound result is information theoretic, and can be summarized in the following theorem. Note especially that for the lower bound we may assume knowledge of k, f+, f−, in contrast to the upper bounds, which makes the result stronger. In addition, f+ and f− can be discrete or continuous distributions. Note that when H²(f+∥f−) is close to 1, e.g., when the side information is perfect, no queries are required. However, that is not the case in practice, and we are interested in the regime where f+ and f− are "close", that is, H²(f+∥f−) is small.

Theorem 2. Assume H²(f+∥f−) ≤ 1/18. Any (possibly randomized) algorithm with the knowledge of f+, f−, and the number of clusters k, that does not perform Ω(min{nk, k²/H²(f+∥f−)}) expected number of queries, will be unable to return the correct clustering with probability at least 1/6 − O(1/√k). And to recover the clusters with probability 1, the number of queries must be Ω(n + min{nk, k²/H²(f+∥f−)}).

²Our lower bound holds for continuous distributions as well.
³For simplicity of expression, we treat the sample space to be of constant size. However, all our results extend to any finite sample space, scaling linearly with its size.

The lower bound therefore matches the query complexity upper bound within a logarithmic factor. Note that when no querying is allowed, this turns out to be exactly the setting of the stochastic block model, though with much more general distributions. We have analyzed this case in Appendix C. To see how the probability of error must scale, we have used a generalized version of Fano's inequality (e.g., [23]). However, when the number of queries is greater than zero, and moreover when queries can be adaptive, any such standard technique fails. Hence, significant effort has to be put forth to construct a setting where information-theoretic minimax bounds can be applied.
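Both the upper and the lower bound are governed by the squared Hellinger divergence. As a concrete sketch, it can be computed directly from two pmfs given on a common support; the Bernoulli helper mirrors the SBM special case mentioned earlier:

```python
from math import sqrt

def squared_hellinger(f, g):
    """H^2(f||g) = (1/2) * sum_i (sqrt(f(i)) - sqrt(g(i)))^2,
    for pmfs f and g given as aligned sequences of probabilities.
    Ranges from 0 (identical pmfs) to 1 (disjoint supports)."""
    return 0.5 * sum((sqrt(p) - sqrt(q)) ** 2 for p, q in zip(f, g))

def h2_bernoulli(p, q):
    """The SBM special case: f+ = Bernoulli(p), f- = Bernoulli(q)."""
    return squared_hellinger([1 - p, p], [1 - q, q])
```

By Theorem 1, the smaller this quantity, the more queries are needed: the O(k² log n / H²(f+∥f−)) bound blows up as the two distributions approach each other.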
This lower bound could be of independent interest, and provides a general framework for deriving lower bounds for fundamental problems of classification, hypothesis testing, distribution testing, etc., in the interactive learning setting. It may also lead to new lower-bound techniques in the related multi-round communication complexity model, where information again gets revealed adaptively.

Organization. The proof of the lower bound is provided in Section 2. The Monte Carlo algorithm is given in Section 3. The detailed proof of the Monte Carlo algorithm, as well as the Las Vegas algorithm and its proof, are given in Appendix A and Appendix B respectively in the supplementary material, due to space constraints.

2 Lower Bound (Proof of Theorem 2)

In this section, we develop our information-theoretic lower bounds. We prove a more general result from which Theorem 2 follows.

Lemma 1. Consider the case when we have k equally sized clusters of size a each (that is, the total number of elements is n = ka). Suppose we are allowed to make at most Q adaptive queries to the oracle. The probability of error for any algorithm for Query-Cluster is then at least

1 − (2/k)(1 + √(4Q/(ak)))² − 4Q/(ak(k−1)) − 2√a · H(f+∥f−).

The main high-level technique for proving Lemma 1 is the following. Suppose a node is to be assigned to a cluster. This situation is akin to a k-hypothesis testing problem, and we want a lower bound on its probability of error. The side information and the query answers constitute a random vector whose distributions (among the k possible ones) must be far apart for us to successfully identify the clustering. The main challenge comes from the interactive nature of the algorithm, since it reveals deterministic information, and lies in characterizing the set of elements that are not queried much by the algorithm.

Proof of Lemma 1. Since the total number of queries is Q, the average number of queries per element is at most 2Q/(ak).
Therefore there exist at least ak/2 elements that are queried at most T < 4Q/(ak) times. Let x be one such element. We consider just the problem of assigning x to a cluster (all other elements having been correctly assigned already), and show that any algorithm makes a wrong assignment with positive probability.

Step 1: Setting up the hypotheses. Note that the side information matrix W = (wi,j) is provided, where the wi,j are independent random variables. Now consider the scenario where we use an algorithm ALG to assign x to one of the k clusters Vu, u = 1, . . . , k. Given x, ALG takes as input the random variables wi,x for i ∈ ⊔t Vt, makes some queries involving x, and outputs a cluster index, which is an assignment for x. Based on the observations wi,x, the task of ALG is thus a multi-hypothesis test among k hypotheses. Let Hu, u = 1, . . . , k, denote the k different hypotheses Hu : x ∈ Vu, and let Pu, u = 1, . . . , k, denote the joint probability distribution of the random matrix W when x ∈ Vu. In short, for any event A, Pu(A) = Pr(A | Hu). Going forward, the subscript of probabilities or expectations will denote the appropriate conditional distribution.

Step 2: Finding "weak" clusters. There must exist t ∈ {1, . . . , k} such that

Σ_{v=1}^{k} Pt{a query made by ALG involves cluster Vv} ≤ Et{number of queries made by ALG} ≤ T.

We now find a subset of clusters that are "weak," i.e., not queried enough if Ht were true. Consider the set J′ ≡ {v ∈ {1, . . . , k} : Pt{a query made by ALG involves cluster Vv} < 2T/(k(1−β))}, where β ≡ 1/(1 + √(4Q/(ak))). We must have (k − |J′|) · 2T/(k(1−β)) ≤ T, which implies |J′| ≥ (1+β)k/2. Now, to output a cluster without using the side information, ALG has to either make a query to the actual cluster the element is from, or query at least k − 1 times. In any other case, ALG must use the side information (in addition to using queries) to output a cluster.
Let Eu denote the event that ALG outputs cluster Vu by using the side information. Let J′′ ≡ {u ∈ {1, . . . , k} : Pt(Eu) ≤ 2/(βk)}. Since Σ_{u=1}^{k} Pt(Eu) ≤ 1, we must have (k − |J′′|) · 2/(βk) < 1, i.e., |J′′| > k − βk/2 = (2−β)k/2. Hence |J′ ∩ J′′| > (1+β)k/2 + (2−β)k/2 − k = k/2. This means {Vu : u ∈ J′ ∩ J′′} contains more than ak/2 elements. Since there are ak/2 elements that are queried at most T times, these two sets must have a nonzero intersection. Hence, we can assume that x ∈ Vℓ for some ℓ ∈ J′ ∩ J′′, i.e., let Hℓ be the true hypothesis. Now we characterize the error events of the algorithm ALG in the assignment of x.

Step 3: Characterizing error events for x. We consider the following two events:

E1 = {a query made by ALG involves cluster Vℓ}; E2 = {k − 1 or more queries were made by ALG}.

Note that if the algorithm ALG can correctly assign x to a cluster without using the side information, then either E1 or E2 must happen. Recall that Eℓ denotes the event that ALG outputs cluster Vℓ using the side information. Now consider the event E ≡ Eℓ ∪ E1 ∪ E2. The probability of correct assignment is at most Pℓ(E). We now bound this probability of correct recovery from above.

Step 4: Bounding the probability of correct recovery via the Hellinger distance. We have

Pℓ(E) ≤ Pt(E) + |Pℓ(E) − Pt(E)| ≤ Pt(E) + ∥Pℓ − Pt∥TV ≤ Pt(E) + √2 · H(Pℓ∥Pt),

where ∥P − Q∥TV ≡ sup_A |P(A) − Q(A)| denotes the total variation distance between two probability distributions P and Q, and in the last step we have used the relationship between the total variation distance and the Hellinger divergence (see, for example, [38, Eq. (3)]). Now, recall that Pℓ and Pt are the joint distributions of the independent random variables wi,x, i ∈ ∪u Vu. We use the fact that the squared Hellinger divergence between product distributions of independent random variables is at most the sum of the squared Hellinger divergences between the individual distributions. We also note that the divergence between identical random variables is 0.
We obtain √2 · H(Pℓ∥Pt) ≤ √(2 · 2a · H²(f+∥f−)) = 2√a · H(f+∥f−). This is true because the only indices at which wi,x differs in law under Pt and under Pℓ are those with i ∈ Vt ∪ Vℓ, i.e., at most 2a of them. As a result we have Pℓ(E) ≤ Pt(E) + 2√a · H(f+∥f−). Now, using Markov's inequality, Pt(E2) ≤ T/(k−1) ≤ 4Q/(ak(k−1)). Therefore,

Pt(E) ≤ Pt(Eℓ) + Pt(E1) + Pt(E2) ≤ 2/(βk) + 8Q/(ak²(1−β)) + 4Q/(ak(k−1)).

Substituting the value of β (note that 1 − β = β√(4Q/(ak)), so the first two terms combine into (2/k)(1 + √(4Q/(ak)))²), we get

Pℓ(E) ≤ (2/k)(1 + √(4Q/(ak)))² + 4Q/(ak(k−1)) + 2√a · H(f+∥f−),

which proves the lemma.

Proof of Theorem 2. Consider two cases. In the first case, suppose nk < k²/(9H²(f+∥f−)). Consider the situation of Lemma 1 with a = n/k. The probability of error of any algorithm must be at least 1 − (2/k)(1 + √(4Q/(ak)))² − 4Q/(ak(k−1)) − 2/3 ≥ 1/6 − O(1/√k), if the number of queries Q ≤ nk/72. In the second case, suppose nk ≥ k²/(9H²(f+∥f−)). Take a = ⌊1/(9H²(f+∥f−))⌋. Then a ≥ 2, since H²(f+∥f−) ≤ 1/18. We have nk ≥ k²a. Consider the situation when we are already given a complete cluster Vk with n − (k−1)a elements, the remaining k − 1 clusters each have 1 element, and the remaining (a−1)(k−1) elements are evenly distributed (but yet to be assigned) to the k − 1 clusters. We are now exactly in the situation of Lemma 1, with k − 1 playing the role of k. If Q < ak²/72, the probability of error is at least 1 − ok(1) − 1/6 − 2/3 = 1/6 − O(1/√k). Therefore Q must be Ω(k²/H²(f+∥f−)). Note that in this proof we have not tried to optimize the constants. If we want to recover the clusters with probability 1, then Ω(n) is a trivial lower bound. Hence, coupled with the above, we get a lower bound of Ω(n + min{nk, k²/H²(f+∥f−)}) in that case.

3 Algorithms

We propose two algorithms (one Monte Carlo and one Las Vegas), both of which are completely parameter free, that is, they work without any knowledge of k, f+ and f−, and which meet the respective lower bounds within an O(log n) factor.
Here we present the Monte Carlo algorithm, which drastically reduces the number of queries from O(nk) (no side information) to O(k² log n / H²(f+∥f−)) and recovers the clusters exactly with probability at least 1 − on(1). Its detailed proof, as well as the Las Vegas algorithm, are presented in Appendix A and Appendix B respectively in the supplementary material.

Our algorithm uses a subroutine called Membership that takes as input an element v ∈ V and a subset of elements C ⊆ V \ {v}. Assume that f+, f− are discrete distributions over a fixed set of q points a1, a2, . . . , aq; that is, wi,j takes values in the set {a1, a2, . . . , aq}. Define the empirical "inter" distribution pv,C, for i = 1, . . . , q, by

pv,C(i) = |{u ∈ C : wu,v = ai}| / |C|.

Also compute the "intra" distribution pC, for i = 1, . . . , q, by

pC(i) = |{(u, v) ∈ C × C : u ≠ v, wu,v = ai}| / (|C|(|C| − 1)).

Then we use Membership(v, C) = −H²(pv,C∥pC) as the affinity of vertex v to C, where H(pv,C∥pC) denotes the Hellinger divergence between the two distributions. Note that since the membership is always negative, a higher membership implies that the "inter" and "intra" distributions are closer in Hellinger distance.

Designing a parameter-free Monte Carlo algorithm is highly challenging here, as the number of queries may depend only logarithmically on n. Intuitively, if an element v has its highest membership in some cluster C, then v should be queried with C first. Also, an estimate from side information is reliable only when the cluster already has enough members. Unfortunately, we know neither whether the current cluster size is reliable, nor are we allowed to make even one query per element. To overcome this bottleneck, we propose an iterative-update algorithm, which we believe will find more uses in developing parameter-free algorithms. We start by querying a few points so that there is at least one cluster with Θ(log n) points.
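The Membership subroutine defined above can be sketched directly; the pair-indexed dictionary `w` holding similarity values is an illustrative stand-in for the side information matrix:

```python
from math import sqrt

def membership(v, C, w, support):
    """Membership(v, C) = -H^2(p_{v,C} || p_C): the negative squared
    Hellinger divergence between the empirical 'inter' distribution of
    similarity values between v and members of C, and the empirical
    'intra' distribution of values within C. Higher means closer."""
    m = len(C)
    p_inter = [sum(w[frozenset((u, v))] == a for u in C) / m
               for a in support]
    p_intra = [sum(w[frozenset((u, x))] == a for u in C for x in C if u != x)
               / (m * (m - 1)) for a in support]
    return -0.5 * sum((sqrt(p) - sqrt(q)) ** 2
                      for p, q in zip(p_inter, p_intra))
```

An element whose similarity values to C look like the values inside C gets membership near 0; an element drawn from a different cluster drifts toward −1.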
Based on these queried memberships, we learn two empirical distributions: p¹+ from intra-cluster similarity values, and p¹− from inter-cluster similarity values. Given an element v that has not been clustered yet, and a cluster C with the highest number of current members, we would like to consider the submatrix of side information pertaining to v and all u ∈ C, and determine whether that side information was generated from f+ or f−. We know that if the statistical distance between f+ and f− is small, then we need more members in C to carry out this test successfully. Since we do not know f+ and f−, we compute the squared Hellinger divergence between p¹+ and p¹−, and use it to compute a threshold τ1 on the size of C. If C crosses this size threshold, we just use the side information to determine whether v should belong to C. Otherwise, we query further until there is one cluster of size τ1, and re-estimate the empirical distributions p²+ and p²−. We then recompute a threshold τ2, and stop if the cluster under consideration crosses this new threshold. If not, we continue. Interestingly, we can show that when the process converges, we have a very good estimate of H(f+∥f−), and moreover that it converges fast.

Algorithm. Phase 1. Initialization. We initialize the algorithm by selecting any element v and creating a singleton cluster {v}. We then keep selecting new elements randomly and uniformly from those not yet clustered, and query the oracle with each by choosing exactly one element from each of the clusters formed so far. If the oracle returns +1 to any of these queries, we include the element in the corresponding cluster; otherwise we create a new singleton cluster with it. We continue this process until one cluster has grown to a size of ⌈C log n⌉, where C is a constant.

Phase 2. Iterative Update. Let C1, C2, . . . , Clx be the set of clusters formed after the xth iteration, for some lx ≤ k, where we consider Phase 1 as the 0th iteration.
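The intra- and inter-cluster empirical estimates used at each iteration can be sketched as follows (a hypothetical helper; `w` maps unordered element pairs to discrete similarity values):

```python
from collections import Counter

def empirical_distributions(clusters, w, support):
    """Estimate p_plus from similarity values inside already-formed
    clusters, and p_minus from values across distinct clusters,
    as empirical pmfs over the discrete support of values."""
    intra, inter = Counter(), Counter()
    for i, ci in enumerate(clusters):
        for u in ci:
            for x in ci:
                if u < x:  # each intra-cluster pair once
                    intra[w[frozenset((u, x))]] += 1
        for cj in clusters[i + 1:]:
            for u in ci:
                for x in cj:  # each inter-cluster pair once
                    inter[w[frozenset((u, x))]] += 1
    n_in, n_out = sum(intra.values()), sum(inter.values())
    p_plus = [intra[a] / n_in for a in support]
    p_minus = [inter[a] / n_out for a in support]
    return p_plus, p_minus
```

The squared Hellinger divergence between the two returned pmfs is then what drives the size threshold recomputed at every iteration.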
We estimate, for each value ai,

p+,x(ai) = |{(u, v) ∈ Cj × Cj : u ≠ v, wu,v = ai, j ∈ [1, lx]}| / Σ_{j=1}^{lx} |Cj|(|Cj| − 1);

p−,x(ai) = |{(u, v) : u ∈ Cj, v ∈ Cj′, j < j′, j, j′ ∈ [1, lx], wu,v = ai}| / Σ_{j<j′} |Cj||Cj′|.

Define M^E_x = C log n / H²(p+,x∥p−,x). If no cluster of size at least M^E_x has formed so far, we select a new element yet to be clustered and query it exactly once with the existing clusters (that is, by selecting one arbitrary point from every cluster and querying the oracle with it and the new element), and include it in an existing cluster or create a new cluster with it based on the query answers. We then set x = x + 1 and move to the next iteration, obtaining updated estimates of p+,x, p−,x, M^E_x and lx. Otherwise, if there is a cluster of size at least M^E_x, we stop and move to the next phase.

Phase 3. Processing the grown clusters. Once Phase 2 has converged, let p+, p−, H(p+∥p−), M^E and l be the final estimates. Every cluster C of size |C| ≥ M^E is called grown, and for each of them we do the following.

(3A.) For every unclustered element v, if Membership(v, C) ≥ −(4H(p+∥p−)/C − 2H(p+∥p−)²/(C√log n)), then we include v in C without querying.

(3B.) We create a new list Waiting(C), initially empty. If −(4H(p+∥p−)/C − 2H(p+∥p−)²/(C√log n)) > Membership(v, C) ≥ −(4H(p+∥p−)/C + 2H(p+∥p−)²/(C√log n)), then we include v in Waiting(C). For every element in Waiting(C), we query the oracle with it by choosing exactly one element from each of the clusters formed so far, starting with C. If the oracle answers "yes" to any of these queries, we include the element in that cluster; otherwise we create a new singleton cluster with it. We continue until Waiting(C) is exhausted. We then call C completely grown, remove it from further consideration, and move to the next grown cluster. If there is no other grown cluster, we move back to Phase 2.

Analysis. The main steps of the analysis are as follows (for the full analysis see Appendix A).

1.
First, Lemma 3 shows that with high probability H(p+∥p−) ∈ [H(f+∥f−) ± 4H(p+∥p−)²/(B√log n)] for a suitable constant B that depends on C. Using this, we can show that the process converges whenever a cluster has grown to a size of 4C log n / H²(f+∥f−). The proof relies on an adaptation of Sanov's theorem (see Lemma 2) from information theory. We measure the distance between distributions via the Hellinger distance, as opposed to the KL divergence (which would have been the natural choice because of its presence in the rate function of Sanov's theorem), because the Hellinger distance is a metric, which proves crucial in our analysis.
2. Lemma 5 and Corollary 1 show that, with high probability, every element included in C in Phase (3A) truly belongs to C, and elements not in Waiting(C) cannot be in C. Once Phase 2 has converged, if the condition of (3A) is satisfied, the element must belong to C. There is a small gray region, the confidence interval of (3B): for an element falling there we cannot be sure either way, but an element satisfying neither (3A) nor (3B) cannot be part of C.
3. Lemma 6 shows that the size of Waiting(C) is constant, via an anti-concentration property. This, coupled with the fact that the process converges when a cluster reaches size 4C log n / H²(f+∥f−), gives the desired query complexity bound in Lemma 7.

4 Experimental Results

In this section, we report experimental results on a popular bibliographic dataset, cora [35], consisting of 1879 nodes, 191 clusters and 1699612 edges, of which 62891 are intra-cluster edges. We remove any singleton node from the dataset; the final number of vertices that we classify is 1812, in 124 clusters. We use the similarity function computation of [18] to compute f+ and f−. The two distributions are shown in Figure 1 on the left. The squared Hellinger divergence between the two distributions is 0.6.
In order to observe the dependency of the algorithm's performance on the learnt distributions, we perturb the exact distributions to obtain two approximate distributions, shown in Figure 1 (middle), with Hellinger square divergence 0.4587. We consider three strategies. Suppose the cluster in which a node v must be included has already been initialized and exists in the current solution, and suppose the algorithm decides to use queries to find the membership of v. Then in the best strategy, only one query is needed to identify the cluster to which v belongs. In the worst strategy, the algorithm finds the correct cluster only after querying all the existing clusters whose current membership is not enough to take a decision using side information. In the greedy strategy, the algorithm queries the clusters in non-decreasing order of the Hellinger square divergence between f_+ (or its approximate version) and the distribution estimated from the side information between v and each existing cluster. Note that, in practice, we follow the greedy strategy. Figure 2 shows the performance of each strategy. We plot the number of queries vs the F1 score, the harmonic mean of precision and recall. We observe that the performance of the greedy strategy is very close to that of the best. With just 1136 queries, greedy achieves 80% precision and close to 90% recall; the best strategy would need 962 queries to achieve that performance. The performance of our algorithm on the exact and approximate distributions is also very close, which indicates that it is enough to learn a distribution that is close to the exact one.

Figure 1: (left) Exact distributions of similarity values, (middle) approximate distributions of similarity values, (right) Number of Queries vs F1 Score for both distributions.

For example, using the approximate distributions,
to achieve similar precision and recall, the greedy strategy uses just 1148 queries, that is, 12 queries more than when the distributions are known exactly.

Figure 2: Number of Queries vs F1 Score using three strategies: best, greedy, worst.

Discussion. This is the first rigorous theoretical study of interactive clustering with side information, and it unveils many interesting directions for future study of both theoretical and practical significance (see Appendix D for more details). Allowing arbitrary f_+, f_− is a generalization of the SBM. It also raises an important question about how the SBM recovery threshold changes with queries. For the sparse region of the SBM, where f_+ is Bernoulli(a′ log n / n) and f_− is Bernoulli(b′ log n / n), a′ > b′, Lemma 1 is not yet tight. However, it shows the following trend. Let us set a = n/k in Lemma 1 with the above f_+, f_−. We conjecture, ignoring the lower-order terms and a √(log n) factor, that with Q queries the sharp recovery threshold of the sparse SBM changes from (√a′ − √b′) ≥ √k to (√a′ − √b′) ≥ √(k(1 − Q/(nk))). Proving this bound remains an exciting open question. We propose two computationally efficient algorithms that match the query complexity lower bound within a log n factor and are completely parameter free. In particular, our iterative-update method for designing the Monte Carlo algorithm provides a general recipe for developing parameter-free algorithms, which is of great practical importance. The convergence result is established by extending Sanov's theorem from large deviation theory, which gives bounds only in terms of the KL divergence. Due to the generality of the distributions, the only tool we could use is Sanov's theorem. However, the Hellinger distance turns out to be the right measure for both the lower and upper bounds.
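The greedy querying strategy used in the experiments ranks existing clusters by exactly this Hellinger divergence before spending any query. A toy sketch (helper names and the similarity values below are illustrative, not the paper's implementation):

```python
import math
from collections import Counter

def hellinger_sq(p, q):
    # Squared Hellinger distance between discrete distributions (dicts).
    support = set(p) | set(q)
    return 0.5 * sum(
        (math.sqrt(p.get(a, 0.0)) - math.sqrt(q.get(a, 0.0))) ** 2
        for a in support)

def greedy_query_order(v_sims, clusters, f_plus):
    # Query clusters in non-decreasing order of the Hellinger square
    # divergence between f+ and the empirical distribution of similarity
    # values between the new node v and each cluster's members.
    def divergence(cluster):
        counts = Counter(v_sims[u] for u in cluster)
        total = sum(counts.values())
        return hellinger_sq(f_plus, {a: c / total for a, c in counts.items()})
    return sorted(range(len(clusters)), key=lambda i: divergence(clusters[i]))

# v looks similar to cluster 0 and dissimilar to cluster 1.
f_plus = {1: 0.8, 0: 0.2}
v_sims = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0, 5: 1}
order = greedy_query_order(v_sims, [[0, 1, 2], [3, 4, 5]], f_plus)
```

Here cluster 0 is queried first, since its empirical similarity distribution with v is closest to f_+.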
If f_+ and f_− are common distributions like Gaussian, Bernoulli, etc., then other concentration results stronger than Sanov's theorem may be applied to improve the constants and a logarithmic factor, and to show the trade-off between queries and thresholds as in the sparse SBM. While some of our results apply to general f_{i,j}s, a full picture with arbitrary f_{i,j}s and closing the log n gap between the lower and upper bounds remain an important future direction.

Acknowledgement. This work is supported in part by NSF awards CCF 1642658, CCF 1642550, CCF 1464310, CCF 1652303, a Yahoo ACE Award and a Google Faculty Research Award. We are particularly thankful to an anonymous reviewer whose comments led to notable improvement of the presentation of the paper.

References

[1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE Trans. Information Theory, 62(1):471–487, 2016.
[2] E. Abbe and C. Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 670–688, 2015.
[3] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. In Advances in Neural Information Processing Systems, pages 676–684, 2015.
[4] M. Ajtai, J. Komlos, W. L. Steiger, and E. Szemerédi. Deterministic selection in O(log log n) parallel time. In Proceedings of the eighteenth annual ACM symposium on Theory of computing, pages 188–195. ACM, 1986.
[5] H. Ashtiani, S. Kushagra, and S. Ben-David. Clustering with same-cluster queries. NIPS, 2016.
[6] P. Awasthi, M.-F. Balcan, and K. Voevodski. Local algorithms for interactive clustering. In ICML, pages 550–558, 2014.
[7] M.-F. Balcan and A. Blum. Clustering with interactive feedback. In International Conference on Algorithmic Learning Theory, pages 316–328. Springer, 2008.
[8] B. Bollobás and G.
Brightwell. Parallel selection with high probability. SIAM Journal on Discrete Mathematics, 3(1):21–31, 1990.
[9] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. In COLT, pages 35–1, 2012.
[10] Y. Chen, G. Kamath, C. Suh, and D. Tse. Community recovery in graphs with locality. In Proceedings of The 33rd International Conference on Machine Learning, pages 689–698, 2016.
[11] P. Chin, A. Rao, and V. Vu. Stochastic block model and community detection in the sparse graphs: A spectral algorithm with optimal rate of recovery. arXiv preprint arXiv:1501.05021, 2015.
[12] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In WWW, pages 285–294, 2013.
[13] S. B. Davidson, S. Khanna, T. Milo, and S. Roy. Top-k and clustering with noisy comparisons. ACM Trans. Database Syst., 39(4):35:1–35:39, 2014.
[14] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E, 84(6):066106, 2011.
[15] M. E. Dyer and A. M. Frieze. The solution of some random NP-hard problems in polynomial expected time. Journal of Algorithms, 10(4):451–489, 1989.
[16] U. Feige, P. Raghavan, D. Peleg, and E. Upfal. Computing with noisy information. SIAM Journal on Computing, 23(5):1001–1018, 1994.
[17] I. P. Fellegi and A. B. Sunter. A theory for record linkage. Journal of the American Statistical Association, 64(328):1183–1210, 1969.
[18] D. Firmani, B. Saha, and D. Srivastava. Online entity resolution using an oracle. PVLDB, 9(5):384–395, 2016.
[19] A. Gadde, E. E. Gad, S. Avestimehr, and A. Ortega. Active learning for community detection in stochastic block models. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 1889–1893. IEEE, 2016.
[20] L. Getoor and A. Machanavajjhala. Entity resolution: theory, practice & open challenges.
PVLDB, 5(12):2018–2019, 2012.
[21] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In EC, pages 167–176, 2011.
[22] C. Gokhale, S. Das, A. Doan, J. F. Naughton, N. Rampalli, J. Shavlik, and X. Zhu. Corleone: Hands-off crowdsourcing for entity matching. In SIGMOD Conference, pages 601–612, 2014.
[23] A. Guntuboyina. Lower bounds for the minimax risk using f-divergences, and applications. IEEE Transactions on Information Theory, 57(4):2386–2399, 2011.
[24] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming. IEEE Transactions on Information Theory, 62(5):2788–2797, 2016.
[25] B. E. Hajek, Y. Wu, and J. Xu. Computational lower bounds for community detection on random graphs. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 899–928, 2015.
[26] T. S. Han and S. Verdu. Generalizing the Fano inequality. IEEE Transactions on Information Theory, 40(4):1247–1251, 1994.
[27] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[28] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[29] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In NIPS, pages 1953–1961, 2011.
[30] H. Köpcke, A. Thor, and E. Rahm. Evaluation of entity resolution approaches on real-world match problems. Proceedings of the VLDB Endowment, 3(1-2):484–493, 2010.
[31] S. H. Lim, Y. Chen, and H. Xu. Clustering from labels and time-varying graphs. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1188–1196. Curran Associates, Inc., 2014.
[32] A. Mazumdar and B. Saha. Clustering via crowdsourcing. arXiv preprint arXiv:1604.01839, 2016.
[33] A. Mazumdar and B. Saha. A theoretical analysis of first heuristics of crowdsourced entity resolution. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017.
[34] A. Mazumdar and B. Saha. Clustering with noisy queries. In Advances in Neural Information Processing Systems (NIPS) 31, 2017.
[35] A. McCallum, 2004. http://www.cs.umass.edu/~mcallum/data/cora-refs.tar.gz.
[36] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages 69–75. ACM, 2015.
[37] Y. Polyanskiy and S. Verdú. Arimoto channel coding converse and Rényi divergence. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 1327–1333. IEEE, 2010.
[38] I. Sason and S. Verdú. f-divergence inequalities. IEEE Transactions on Information Theory, 62(11):5973–6006, 2016.
[39] V. Verroios and H. Garcia-Molina. Entity resolution with crowd errors. In 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015, pages 219–230, 2015.
[40] N. Vesdapunt, K. Bellare, and N. Dalvi. Crowdsourcing algorithms for entity resolution. PVLDB, 7(12):1071–1082, 2014.
[41] R. K. Vinayak and B. Hassibi. Crowdsourced clustering: Querying edges vs triangles. In Advances in Neural Information Processing Systems, pages 1316–1324, 2016.
[42] J. Wang, T. Kraska, M. J. Franklin, and J. Feng. CrowdER: Crowdsourcing entity resolution. PVLDB, 5(11):1483–1494, 2012.
[43] J. Wang, G. Li, T. Kraska, M. J. Franklin, and J. Feng. Leveraging transitive relations for crowdsourced joins. In SIGMOD Conference, pages 229–240, 2013.
Variational Memory Addressing in Generative Models

Jörg Bornschein Andriy Mnih Daniel Zoran Danilo J. Rezende
{bornschein, amnih, danielzoran, danilor}@google.com
DeepMind, London, UK

Abstract

Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well-suited for generative models as it naturally encourages multimodality which is a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory and the continuous part of the model can concentrate on modeling remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory.

1 Introduction

Recent years have seen rapid developments in generative modelling. Much of the progress was driven by the use of powerful neural networks to parameterize conditional distributions composed to define the generative process (e.g., VAEs [1, 2], GANs [3]).
In the Variational Autoencoder (VAE) framework, for example, we typically define a generative model p(z), p_θ(x|z) and an approximate inference model q_φ(z|x). All conditional distributions are parameterized by multilayered perceptrons (MLPs) which, in the simplest case, output the mean and the diagonal variance of a Normal distribution given the conditioning variables. We then optimize a variational lower bound to learn the generative model for x. Considering recent progress, we now have the theory and the tools to train powerful, potentially non-factorial parametric conditional distributions p(x|y) that generalize well with respect to x (normalizing flows [4], inverse autoregressive flows [5], etc.). Another line of work which has been gaining popularity recently is memory-augmented neural networks [6, 7, 8]. In this family of models the network is augmented with a memory buffer which allows read and write operations and is persistent in time. Such models usually handle input and output to the memory buffer using differentiable "soft" write/read operations to allow back-propagating gradients during training. Here we propose a memory-augmented generative model that uses a discrete latent variable a acting as an address into the memory buffer M. This stochastic perspective allows us to introduce a variational approximation over the addressing variable which takes advantage of target information

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Left: Sketch of typical SOTA generative latent variable model with memory. Red edges indicate approximate inference distributions q(·|·). The KL(q∥p) cost to identify a specific memory entry might be substantial, even though the cost of accessing a memory entry should be on the order of log |M|.
Middle & Right: We combine a top-level categorical distribution p(a) and a conditional variational autoencoder with a Gaussian p(z|m).

when retrieving contents from memory during training. We compute the sampling distribution over the addresses based on a learned similarity measure between the memory contents at each address and the target. The memory contents m_a at the selected address a serve as a context for a continuous latent variable z, which together with m_a is used to generate the target observation. We therefore interpret memory as a non-parametric conditional mixture distribution. It is non-parametric in the sense that we can change the content and the size of the memory from one evaluation of the model to another without having to relearn the model parameters. And since the retrieved content m_a depends on the stochastic variable a, which is part of the generative model, we can directly use it downstream to generate the observation x. These two properties set our model apart from other work on VAEs with mixture priors [9, 10] aimed at unconditional density modelling. Another distinguishing feature of our approach is that we perform sampling-based variational inference on the mixing variable instead of integrating it out as is done in prior work, which is essential for scaling to a large number of memory addresses. Most existing memory-augmented generative models use soft attention with the weights dependent on the continuous latent variable to access the memory. This does not provide a clean separation between inferring the address to access in memory and the latent factors of variation that account for the variability of the observation relative to the memory contents (see Figure 1). Alternatively, when the attention weights depend deterministically on the encoder, the retrieved memory content cannot be directly used in the decoder.
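The model sketched on the right of Figure 1 can be sampled ancestrally: draw an address a, read m_a, draw z conditioned on m_a, then generate x. A toy one-dimensional stand-in is sketched below; the Gaussians and the deterministic `decode` function replace the neural parameterizations used in the paper.

```python
import random

def sample_x(memory, p_a, decode):
    # Ancestral sampling: a ~ p(a), read m_a, z ~ p(z|m_a), x ~ p(x|m_a, z).
    a = random.choices(range(len(memory)), weights=p_a)[0]
    m_a = memory[a]
    z = random.gauss(m_a, 1.0)      # toy Gaussian p(z | m_a)
    x = decode(z, m_a)              # stand-in for the decoder p(x | m_a, z)
    return a, x

memory = [-5.0, 0.0, 5.0]           # three stored "templates"
a, x = sample_x(memory, p_a=[0.2, 0.3, 0.5],
                decode=lambda z, m: 0.5 * (z + m))
```

The sampled x stays close to the retrieved template m_a, with z accounting for the remaining variability.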
Our contributions in this paper are threefold: a) we interpret memory-read operations as a conditional mixture distribution and use amortized variational inference for training; b) we demonstrate that we can combine discrete memory addressing variables with continuous latent variables to build powerful models for generative few-shot learning that scale gracefully with the number of items in memory; and c) we demonstrate that the KL divergence over the discrete variable a serves as a useful measure to monitor memory usage during inference and training.

2 Model and Training

We will now describe the proposed model along with the variational inference procedure we use to train it. The generative model has the form

p(x|M) = Σ_a p(a|M) ∫ p(z|m_a) p(x|z, m_a) dz    (1)

where x is the observation we wish to model, a is the addressing categorical latent variable, z the continuous latent vector, M the memory buffer and m_a the memory contents at the a-th address. The generative process proceeds by first sampling an address a from the categorical distribution p(a|M), retrieving the contents m_a from the memory buffer M, and then sampling the observation x from a conditional variational autoencoder with m_a as the conditioning context (Figure 1, B). The intuition here is that if the memory buffer contains a set of templates, a trained model of this type should be able to produce observations by distorting a template retrieved from a randomly sampled memory location, using the conditional variational autoencoder to account for the remaining variability. We can write the variational lower bound for the model in (1):

log p(x|M) ≥ E_{a,z ∼ q(·|M,x)} [log p(x, z, a|M) − log q(a, z|M, x)]    (2)

where q(a, z|M, x) = q(a|M, x) q(z|m_a, x).    (3)

In the rest of the paper, we omit the dependence on M for brevity. We will now describe the components of the model and the variational posterior (3) in detail. The first component of the model is the memory buffer M.
We here do not implement an explicit write operation but consider two possible sources for the memory content. Learned memory: In generative experiments aimed at better understanding the model's behaviour, we treat M as model parameters; that is, we initialize M randomly and update its values using the gradient of the objective. Few-shot learning: In the generative few-shot learning experiments, before processing each minibatch, we sample |M| entries from the training data and store them in their raw (pixel) form in M. We ensure that the training minibatch {x_1, ..., x_|B|} contains disjoint samples from the same character classes, so that the model can use M to find suitable templates for each target x. The second component is the addressing variable a ∈ {1, ..., |M|}, which selects a memory entry m_a from the memory buffer M. The variational posterior distribution q(a|x) is parameterized as a softmax over a similarity measure between x and each of the memory entries m_a:

q_φ(a|x) ∝ exp S^q_φ(m_a, x),    (4)

where S^q_φ(x, y) is a learned similarity function described in more detail below. Given a sample a from the posterior q_φ(a|x), retrieving m_a from M is a purely deterministic operation. Sampling from q(a|x) is easy, as it amounts to computing its value for each slot in memory and sampling from the resulting categorical distribution. Given a, we can compute the probability of drawing that address under the prior p(a). We here use a learned prior p(a) that shares some parameters with q(a|x). Similarity functions: To obtain an efficient implementation for minibatch training, we use the same memory content M for all training examples in a minibatch and choose a specific form for the similarity function. We parameterize S^q(m, x) with two MLPs: h_φ, which embeds the memory content into the matching space, and h^q_φ, which does the same to the query x.
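Evaluating q(a|x) and sampling from it can be sketched as a softmax over similarity scores. In the toy sketch below, plain vectors stand in for the learned embeddings e_a = h_φ(m_a) and e_q = h^q_φ(x), and an inner product normalized by the memory embedding's norm stands in for S^q.

```python
import math

def similarity(e_a, e_q):
    # Inner product of the embeddings, normalized by the norm of the
    # memory-content embedding.
    dot = sum(a * q for a, q in zip(e_a, e_q))
    return dot / math.sqrt(sum(a * a for a in e_a))

def q_a_given_x(memory_embs, e_q):
    # q(a|x): softmax over the similarity of the query to every memory slot.
    scores = [similarity(e_a, e_q) for e_a in memory_embs]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

q = q_a_given_x([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 0.0])
```

The resulting categorical distribution puts the most mass on the memory slot whose embedding best matches the query.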
The similarity is then computed as the inner product of the embeddings, normalized by the norm of the memory content embedding:

S^q(m_a, x) = ⟨e_a, e_q⟩ / ∥e_a∥_2    (5)

where e_a = h_φ(m_a), e_q = h^q_φ(x).    (6)

This form allows us to compute the similarities between the embeddings of a minibatch of |B| observations and |M| memory entries at a computational cost of O(|M||B||e|), where |e| is the dimensionality of the embedding. We also experimented with several alternative similarity functions, such as the plain inner product ⟨e_a, e_q⟩ and the cosine similarity ⟨e_a, e_q⟩/(∥e_a∥ · ∥e_q∥), and found that they did not outperform the above similarity function. For the unconditional prior p(a), we learn a query point e_p ∈ R^|e| to use in the similarity function (5) in place of e_q. We share h_φ between p(a) and q(a|x). Using a trainable p(a) allows the model to learn that some memory entries are more useful for generating new targets than others. Control experiments showed that there is only a very small degradation in performance when we assume a flat prior p(a) = 1/|M|.

2.1 Gradients and Training

For the continuous variable z we use the methods developed in the context of variational autoencoders [1]. We use a conditional Gaussian prior p(z|m_a) and an approximate conditional posterior q(z|x, m_a). However, since we have a discrete latent variable a in the model, we cannot simply backpropagate gradients through it. Here we show how to use VIMCO [11] to estimate the gradients for this model. With VIMCO, we essentially optimize the multi-sample variational bound [12, 13, 11]:

log p(x) ≥ E_{a^(k) ∼ q(a|x), z^(k) ∼ q(z|m_a, x)} [ log (1/K) Σ_{k=1}^K p(x, m_{a^(k)}, z^(k)) / q(a^(k), z^(k)|x) ] = L    (7)

Multiple samples from the posterior enable VIMCO to estimate low-variance gradients for those parameters φ of the model which influence the non-differentiable discrete variable a.
The corresponding gradient estimates are:

∇_θ L ≈ Σ_{a^(k), z^(k) ∼ q(·|x)} ω^(k) ( ∇_θ log p_θ(x, a^(k), z^(k)) − ∇_θ log q_θ(z|a, x) )    (8)

∇_φ L ≈ Σ_{a^(k), z^(k) ∼ q(·|x)} ω^(k)_φ ∇_φ log q_φ(a^(k)|x)

with ω^(k) = ω̃^(k) / Σ_k ω̃^(k), ω̃^(k) = p(x, a^(k), z^(k)) / q(a^(k), z^(k)|x), and

ω^(k)_φ = log( (1/K) Σ_{k′} ω̃^(k′) ) − log( (1/(K−1)) Σ_{k′≠k} ω̃^(k′) ) − ω^(k).

For z-related gradients this is equivalent to IWAE [13]. Alternative gradient estimators for discrete latent variable models (e.g. NVIL [14], RWS [12] or Gumbel-max relaxation-based approaches [15, 16]) might work here too, but we have not investigated their effectiveness. Notice how the gradients ∇ log p(x|z, a) provide updates for the memory contents m_a (if necessary), while the gradients ∇ log p(a) and ∇ log q(a|x) provide updates for the embedding MLPs. The former update the mixture components while the latter update their relative weights. The log-likelihood bound (2) suggests that we can decompose the overall loss into three terms: the expected reconstruction error E_{a,z ∼ q} [log p(x|a, z)] and the two KL terms which measure the information flow from the approximate posterior to the generative model for our latent variables: KL(q(a|x) ∥ p(a)) and E_{a ∼ q} [KL(q(z|a, x) ∥ p(z|a))].

3 Related work

Attention and external memory are two closely related techniques that have recently become important building blocks for neural models. Attention has been widely used for supervised learning tasks such as translation, image classification and image captioning. External memory can be seen as an input or an internal state, and attention mechanisms can be used either for selective reading or for incremental updating. While most work involving memory and attention has been done in the context of supervised learning, here we are interested in using them effectively in the generative setting. In [17] the authors use soft attention with learned memory contents to augment models to have more parameters in the generative model.
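The per-sample VIMCO quantities defined above can be computed stably in log space. A minimal sketch (the function name is ours; the input is the list of K log importance weights log ω̃^(k) = log p(x, a^(k), z^(k)) − log q(a^(k), z^(k)|x)):

```python
import math

def vimco_weights(log_w):
    # Returns the normalized weights omega^(k) and the VIMCO learning
    # signals omega_phi^(k), computed stably via a shared max-shift.
    K = len(log_w)
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]        # shifted weights
    total = sum(w)
    omega = [wk / total for wk in w]              # omega^(k)
    log_avg_all = m + math.log(total / K)         # log (1/K) sum_k' w~^(k')
    omega_phi = []
    for k in range(K):
        # log of the leave-one-out average (1/(K-1)) sum_{k' != k} w~^(k')
        log_avg_rest = m + math.log((total - w[k]) / (K - 1))
        omega_phi.append(log_avg_all - log_avg_rest - omega[k])
    return omega, omega_phi

omega, omega_phi = vimco_weights([0.0, 0.0, 0.0, 0.0])
```

With equal log weights, each ω^(k) is 1/K and every learning signal collapses to −1/K, so no sample is preferred over the others.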
External memory as a way of implementing one-shot generalization was introduced in [18]. This was achieved by treating the exemplars conditioned on as memory entries, accessed through a soft attention mechanism at each step of an incremental generative process similar to the one in DRAW [19]. Generative Matching Networks [20] are a similar architecture which uses a single-step VAE generative process instead of an iterative DRAW-like one. In both cases, soft attention is used to access the exemplar memory, with the address weights computed based on a learned similarity function between an observation at the address and a function of the latent state of the generative model. In contrast to this kind of deterministic soft addressing, we use hard attention, which stochastically picks a single memory entry and thus might be more appropriate in the few-shot setting. As the memory location is stochastic in our model, we perform variational inference over it, which has not been done for memory addressing in a generative model before. A similar approach has, however, been used for training stochastic attention for image captioning [21]. In the context of memory, hard attention has been used in RLNTM, a version of the Neural Turing Machine modified to use stochastic hard addressing [22]. However, RLNTM has been trained using REINFORCE rather than variational inference.

Figure 2: A: Typical learning curve when training a model to recall MNIST digits (M ∼ training data (each step); x ∼ M; |M| = 256): In the beginning, the continuous latent variables model most of the variability of the data; after ≈100k update steps the stochastic memory component takes over, and both the NLL bound and the KL(q(a|x)∥p(a)) estimate approach log(256), the NLL of an optimal probabilistic lookup table. B: Randomly selected samples from the MNIST model with learned memory: samples within the same row use a common m_a.

A number of architectures for VAEs augmented with mixture priors have
been proposed, but they do not use the mixture component indicator variable to index memory, integrating the variable out instead [9, 10], which prevents them from scaling to a large number of mixing components. An alternative approach to generative few-shot learning, proposed in [23], uses a hierarchical VAE to model a large number of small related datasets jointly. The statistical structure common to observations in the same dataset is modelled by a continuous latent vector shared among all such observations. Unlike our model, this model is not memory-based and does not use any form of attention. Generative models with memory have also been proposed for sequence modelling in [24], using differentiable soft addressing. Our approach to stochastic addressing is sufficiently general to be applicable in this setting as well, and it would be interesting to see how it performs as a plug-in replacement for soft addressing.

4 Experiments

We optimize the parameters with Adam [25] and report experiments with the best results from learning rates in {1e-4, 3e-4}. We use minibatches of size 32 and K=4 samples from the approximate posterior q(·|x) to compute the gradients, the KL estimates, and the log-likelihood bounds. We keep the architectures deliberately simple and do not use autoregressive connections or IAF [5] in our models, as we are primarily interested in the quantitative and qualitative behaviour of the memory component.

4.1 MNIST with fully connected MLPs

We first perform a series of experiments on the binarized MNIST dataset [26]. We use two-layer en- and decoders with 256 and 128 hidden units with ReLU nonlinearities and a 32-dimensional Gaussian latent variable z. Train to recall: To investigate the model's capability to use its memory to its full extent, we consider the case where it is trained to maximize the likelihood for random data points x which are present in M.
During inference, an optimal model would pick the template m_a that is equivalent to x with probability q(a|x) = 1. The corresponding prior probability would be p(a) ≈ 1/|M|. Because there are no further variations that need to be modeled by z, its posterior q(z|x, m) can match the prior p(z|m), yielding a KL cost of zero. The model's expected log-likelihood would be −log |M|, equal to the log-likelihood of an optimal probabilistic lookup table. Figure 2A illustrates that our model converges to the optimal solution. We observed that the time to convergence depends on the size of the memory, and with |M| > 512 the model sometimes fails to find the optimal solution. It is noteworthy that the trained model from Figure 2A can handle much larger memory sizes at test time, e.g. achieving NLL ≈ log(2048) given 2048 test-set images in memory. This indicates that the matching MLPs for q(a|x) are sufficiently discriminative.

Figure 3: Approximate inference with q(a|x): Histogram and corresponding top-5 entries m_a for two randomly selected targets. M contains 10 examples from 8 unseen test-set character classes.

Figure 4: A: Generative one-shot sampling: Leftmost column is the test-set example provided in M; the remaining columns show randomly selected samples from p(x|M). The model was trained with 4 examples from 8 classes each per gradient step. B: Breakdown of the KL cost for different models trained with a varying number of examples per class in memory. KL(q(a|x)∥p(a)) increases from 2.0 to 4.5 nats as KL(q(z|m_a, x)∥p(z|m_a)) decreases from 28.2 to 21.8 nats. As the number of examples per class increases, the model shifts the responsibility for modeling the data from the continuous variable z to the discrete a. The overall test-set NLL for the different models improves from 75.1 to 69.1 nats.

Learned memory: We train models with |M| ∈ {64, 128, 256, 512, 1024} randomly initialized mixture components (m_a ∈ R^256).
After training, all models converged to an average KL(q(a|x)∥p(a)) ≈ 2.5 ± 0.3 nats over both the training and the test set, suggesting that the model identified between e^2.2 ≈ 9 and e^2.8 ≈ 16 clusters in the data that are represented by a. The entropy of p(a) is significantly higher, indicating that multiple m_a are used to represent the same data clusters. A manual inspection of the q(a|x) histograms confirms this interpretation. Although our model overfits slightly more to the training set, we generally do not observe a big difference between our model and the corresponding baseline VAE (a VAE with the same architecture, but without the top-level mixture distribution) in terms of the final NLL. This is probably not surprising, because MNIST provides many training examples describing a relatively simple data manifold. Figure 2B shows samples from the model.

4.2 Omniglot with convolutional MLPs

To apply the model to a more challenging dataset and to use it for generative few-shot learning, we train it on various versions of the Omniglot [27] dataset. For these experiments we use convolutional en- and decoders: the approximate posterior q(z|m, x) takes the concatenation of x and m as input and predicts the mean and variance for the 64-dimensional z. It consists of 6 convolutional layers with 3×3 kernels and 48 or 64 feature maps each. Every second layer uses a stride of 2 to get an overall downsampling of 8×8. The convolutional pyramid is followed by a fully connected MLP with 1 hidden layer and 2|z| output units. The architecture of p(x|m, z) uses the same downscaling pyramid to map m to a |z|-dimensional vector, which is concatenated with z and upscaled with transposed convolutions to the full image size again. We use skip connections from the downscaling layers of m to the corresponding upscaling layers to preserve a high-bandwidth path from m to x.
To reduce overfitting, given the relatively small size of the Omniglot dataset, we tie the parameters of the convolutional downscaling layers in q(z|m, x) and p(x|m, z). The embedding MLPs for p(a) and q(a|x) use the same convolutional architecture and map images x and memory content m_a into a 128-dimensional matching space for the similarity calculations. We left their parameters untied because we did not observe any improvement or degradation of performance when tying them.

Figure 5: Robustness to increasing memory size at test time: A: Varying the number of confounding memory entries: At test time we vary the number of classes in M. For an optimal model of disjoint data from C classes we expect L = average L per class + log C (dashed lines). The model was trained with 4 examples from 8 character classes in memory per gradient step. We also show our best soft-attention baseline model, which was trained with 16 examples from two classes per gradient step. B: Memory contains examples from all 144 test-set character classes and we vary the number of examples per class. At C=0 we show the LL of our best unconditioned baseline VAE. The models were trained with 8 character classes and {1, 4, 8} examples per class in memory.

With learned memory: We run experiments on the 28×28 pixel version of Omniglot which was introduced in [13]. The dataset contains 24,345 unlabeled examples in the training set and 8,070 examples in the test set from 1623 different character classes. The goal of this experiment is to show that our architecture can learn to use the top-level memory to model highly multi-modal input data. We run experiments with up to 2048 randomly initialized mixture components and observe that the model makes substantial use of them: The average KL(q(a|x)||p(a)) typically approaches log |M|, while KL(q(z|·)||p(z|·)) and the overall training-set NLL are significantly lower compared to the corresponding baseline VAE.
However, big models without regularization tend to overfit heavily (e.g. training-set NLL < 80 nats; test-set NLL > 150 nats when using |M|=2048). By constraining the model size (|M|=256, convolutions with 32 feature maps) and adding 3e-4 L2 weight decay to all parameters with the exception of M, we obtain a model with a test-set NLL of 103.6 nats (evaluated with K=5000 samples from the posterior), which is about the same as a two-layer IWAE and slightly worse than the best RBMs (103.4 and ≈100 respectively, [13]).

Few-shot learning: The 28×28 pixel version [13] of Omniglot does not contain any alphabet or character-class labels. For few-shot learning we therefore start from the original dataset [27] and scale the 104×104 pixel examples with 4×4 max-pooling to 26×26 pixels. Here we use the 45/5 split introduced in [18] because we are mostly interested in the quantitative behaviour of the memory component, and not so much in finding optimal regularization hyperparameters to maximize performance on small datasets. For each gradient step, we sample 8 random character classes from random alphabets. From each character class we sample 4 examples and use them as targets x to form a minibatch of size 32. Depending on the experiment, we select a certain number of the remaining examples from the same character classes to populate M. We chose 8 character classes and 4 examples per class for computational convenience (to obtain reasonable minibatch and memory sizes). In control experiments with 32 character classes per minibatch we obtain almost indistinguishable learning dynamics and results. To establish meaningful baselines, we train additional models with identical encoder and decoder architectures: 1) A simple, unconditioned VAE. 2) A memory-augmented generative model with soft-attention.
Because the soft-attention weights have to depend solely on the variables in the generative model and may not take input directly from the encoder, we have to use z as the top-level latent variable: p(z), p(x|z, m(z)) and q(z|x). The overall structure of this model resembles the structure of prior work on memory-augmented generative models (see Section 3 and Figure 1A), and is very similar to the one used in [20], for example. For the unconditioned baseline VAE we obtain an NLL of 90.8, while our memory-augmented model reaches up to 68.8 nats. Figure 5 shows the scaling properties of our model when varying the number of conditioning examples at test time. We observe only minimal degradation compared to a theoretically optimal model when we increase the number of concurrent character classes in memory up to 144, indicating that memory readout works reliably with |M| ≥ 2500 items in memory. The soft-attention baseline model reaches up to 73.4 nats when M contains 16 examples from 1 or 2 character classes, but degrades rapidly with an increasing number of confounding classes (see Figure 5A).

Table 1: Our model compared to Generative Matching Networks [20]: GMNs have an extra stage that computes joint statistics over the memory context, which gives the model a clear advantage when multiple conditioning examples per class are available. But with an increasing number of classes C it quickly degrades. LL bounds were evaluated with K=1000 posterior samples.

Model                               C_test:    1      2      3      4      5     10     19
Generative Matching Nets 1                   83.3   78.9   75.7   72.9   70.1   59.9   45.8
Generative Matching Nets 2                   86.4   84.9   82.4   81.0   78.8   71.4   61.2
Generative Matching Nets 4                   88.3   87.3   86.7   85.4   84.0   80.2   73.7
Variational Memory Addressing 1              86.5   83.0   79.6   79.0   76.5   76.2   73.9
Variational Memory Addressing 2              87.2   83.3   80.9   79.3   79.1   77.0   75.0
Variational Memory Addressing 4              87.5   83.3   81.2   80.7   79.5   78.6   76.7
Variational Memory Addressing 16             89.6   85.1   81.5   81.9   81.3   79.8   77.0
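The dashed optimal-model lines in Figure 5A follow an additive decomposition, L = (average L per class) + log C; a small sketch (the 65-nat per-class value below is purely illustrative, not a reported number):

```python
import math

def optimal_disjoint_nll(per_class_nll: float, num_classes: int) -> float:
    """Expected NLL of an optimal model on disjoint data from C classes:
    the per-class modeling cost plus log C nats to identify the class."""
    return per_class_nll + math.log(num_classes)

# Hypothetical per-class NLL of 65 nats:
for c in (2, 8, 144):
    print(c, round(optimal_disjoint_nll(65.0, c), 2))
# 2 -> 65.69, 8 -> 67.08, 144 -> 69.97
```

The cost of scaling up the number of classes is only logarithmic, which is why the degradation in Figure 5A stays small for a well-behaved addressing mechanism.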
Figure 3 shows histograms and samples from q(a|x), visually confirming that our model performs reliable approximate inference over the memory locations. We also train a model on the Omniglot dataset used in [20]. This split provides a relatively small training set. We reduce the number of feature channels and hidden layers in our MLPs and add 3e-4 L2 weight decay to all parameters to reduce overfitting. The model in [20] has a clear advantage when many examples from very few character classes are in memory, because it was specifically designed to extract joint statistics from memory before applying the soft-attention readout. But like our own soft-attention baseline, it quickly degrades as the number of concurrent classes in memory is increased to 4 (Table 1).

Few-shot classification: Although this is not the main aim of this paper, we can use the trained model to perform discriminative few-shot classification: We can estimate p(c|x) ≈ ∑_{m_a has label c} E_{z∼q(z|a,x)}[p(x, z, m_a)/p(x)], or use the feed-forward approximation p(c|x) ≈ ∑_{m_a has label c} q(a|x). Without any further retraining or fine-tuning we obtain classification accuracies of 91%, 97%, 77% and 90% for 5-way 1-shot, 5-way 5-shot, 20-way 1-shot and 20-way 5-shot respectively with q(a|x).

5 Conclusions

In our experiments we generally observe that the proposed model is very well behaved: we never used temperature annealing for the categorical softmax or other tricks to encourage the model to use memory. The interplay between p(a) and q(a|x) maintains exploration (high entropy) during the early phase of training; the entropy decreases naturally as the sampled m_a become more informative.
The KL divergences for the continuous and discrete latent variables show intuitively interpretable results for all our experiments: On the densely sampled MNIST dataset only a few distinctive mixture components are identified, while on the more disjoint and sparsely sampled Omniglot dataset the model chooses to use many more memory entries and relies less on the continuous latent variables. By interpreting memory addressing as a stochastic operation, we gain the ability to apply a variational approximation which helps the model to perform precise memory lookups during inference and training. Compared to soft-attention approaches, we lose the ability to naively backprop through read operations and have to use approximations like VIMCO. However, our experiments strongly suggest that this can be a worthwhile trade-off. Our experiments also show that the proposed variational approximation is robust to increasing memory sizes: A model trained with 32 items in memory performed nearly optimally with more than 2500 items in memory at test time. Beginning with |M| ≥ 48, our hard-attention implementation becomes noticeably faster in terms of wall-clock time per parameter update than the corresponding soft-attention baseline, even though we use K=4 posterior samples during training while the soft-attention baseline requires only a single one.

Acknowledgments

We thank our colleagues at DeepMind and especially Oriol Vinyals and Sergey Bartunov for insightful discussions.

References

[1] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[2] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pages 1278–1286, 2014.
[3] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.
In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[4] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[5] Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[6] Sreerupa Das, C. Lee Giles, and Guo-Zheng Sun. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, pages 791–795. Morgan Kaufmann Publishers, 1992.
[7] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
[9] Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with Gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648, 2016.
[10] Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate inference for deep latent Gaussian mixtures. In NIPS Workshop on Bayesian Deep Learning, 2016.
[11] Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
[12] Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751, 2014.
[13] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders.
arXiv preprint arXiv:1509.00519, 2015.
[14] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[15] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. stat, 1050:1, 2017.
[17] Chongxuan Li, Jun Zhu, and Bo Zhang. Learning to generate with memory. In International Conference on Machine Learning, pages 1177–1186, 2016.
[18] Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
[19] DRAW: A recurrent neural network for image generation, 2015.
[20] Sergey Bartunov and Dmitry P. Vetrov. Fast adaptation in generative models with generative matching networks. arXiv preprint arXiv:1612.02192, 2016.
[21] Jimmy Ba, Ruslan R. Salakhutdinov, Roger B. Grosse, and Brendan J. Frey. Learning wake-sleep recurrent attention models. In Advances in Neural Information Processing Systems, pages 2593–2601, 2015.
[22] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 362, 2015.
[23] Harrison Edwards and Amos Storkey. Towards a neural statistician. 2017.
[24] Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir Mohamed, Danilo J. Rezende, David Amos, and Timothy Lillicrap. Generative temporal models with memory. arXiv preprint arXiv:1702.04649, 2017.
[25] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] Hugo Larochelle. Binarized MNIST dataset, http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/binarized_mnist_[train|valid|test].amat, 2011.
[27] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum.
Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
Shallow Updates for Deep Reinforcement Learning

Nir Levine∗ Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Haifa 3200003, Israel. levin.nir1@gmail.com
Tom Zahavy∗ Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Haifa 3200003, Israel. tomzahavy@campus.technion.ac.il
Daniel J. Mankowitz Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Haifa 3200003, Israel. danielm@tx.technion.ac.il
Aviv Tamar Dept. of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720. avivt@berkeley.edu
Shie Mannor Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Haifa 3200003, Israel. shie@ee.technion.ac.il

∗ Joint first authors. Ordered alphabetically by first name.

Abstract

Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyperparameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach – the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We tested LS-DQN on five Atari games and demonstrated significant improvement over vanilla DQN and Double-DQN. We also investigated the reasons for the superior performance of our method.
Interestingly, we found that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.

1 Introduction

Reinforcement learning (RL) is a field of research that uses dynamic programming (DP; Bertsekas 2008), among other approaches, to solve sequential decision making problems. The main challenge in applying DP to real-world problems is an exponential growth of computational requirements as the problem size increases, known as the curse of dimensionality (Bertsekas, 2008).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

RL tackles the curse of dimensionality by approximating terms in the DP calculation, such as the value function or policy. Popular function approximators for this task include deep neural networks, henceforth termed deep RL (DRL), and linear architectures, henceforth termed shallow RL (SRL). SRL methods have enjoyed wide popularity over the years (see, e.g., Tsitsiklis et al. 1997; Bertsekas 2008 for extensive reviews). In particular, batch algorithms based on a least squares (LS) approach, such as Least Squares Temporal Difference (LSTD, Lagoudakis & Parr 2003) and Fitted-Q Iteration (FQI, Ernst et al. 2005), are known to be stable and data efficient. However, the success of these algorithms crucially depends on the quality of the feature representation. Ideally, the representation encodes rich, expressive features that can accurately represent the value function. However, in practice, finding such good features is difficult and often hampers the use of linear function approximation methods. In DRL, on the other hand, the features are learned together with the value function in a deep architecture. Recent advancements in DRL using convolutional neural networks demonstrated learning of expressive features (Zahavy et al., 2016; Wang et al., 2016) and state-of-the-art performance in challenging tasks such as video games (Mnih et al.
2015; Tessler et al. 2017; Mnih et al. 2016), and Go (Silver et al., 2016). To date, the most impressive DRL results (e.g., the works of Mnih et al. 2015; Mnih et al. 2016) were obtained using online RL algorithms, based on a stochastic gradient descent (SGD) procedure. On the one hand, SRL is stable and data efficient. On the other hand, DRL learns powerful representations. This motivates us to ask: can we combine DRL with SRL to leverage the benefits of both? In this work, we develop a hybrid approach that combines batch SRL algorithms with online DRL. Our main insight is that the last layer in a deep architecture can be seen as a linear representation, with the preceding layers encoding features. Therefore, the last layer can be learned using standard SRL algorithms. Following this insight, we propose a method that repeatedly re-trains the last hidden layer of a DRL network with a batch SRL algorithm, using data collected throughout the DRL run. We focus on value-based DRL algorithms (e.g., the popular DQN of Mnih et al. 2015) and on SRL based on LS methods¹, and propose the Least Squares DQN algorithm (LS-DQN). Key to our approach is a novel regularization term for the least squares method that uses the DRL solution as a prior in a Bayesian least squares formulation. Our experiments demonstrate that this hybrid approach significantly improves performance on the Atari benchmark for several combinations of DRL and SRL methods. To support our results, we performed an in-depth analysis to tease out the factors that make our hybrid approach outperform DRL. Interestingly, we found that the improved performance is mainly due to the large batch size of SRL methods compared to the small batch size that is typical for DRL.

2 Background

In this section we describe our RL framework and several shallow and deep RL algorithms that will be used throughout the paper.
RL Framework: We consider a standard RL formulation (Sutton & Barto, 1998) based on a Markov Decision Process (MDP). An MDP is a tuple ⟨S, A, R, P, γ⟩, where S is a finite set of states, A is a finite set of actions, and γ ∈ [0, 1] is the discount factor. A transition probability function P : S × A → ΔS maps states and actions to a probability distribution over next states. Finally, R : S × A → [R_min, R_max] denotes the reward. The goal in RL is to learn a policy π : S → ΔA that solves the MDP by maximizing the expected discounted return E[∑_{t=0}^∞ γ^t r_t | π]. Value-based RL methods make use of the action-value function Q^π(s, a) = E[∑_{t=0}^∞ γ^t r_t | s_t = s, a_t = a, π], which represents the expected discounted return of executing action a ∈ A from state s ∈ S and following the policy π thereafter. The optimal action-value function Q*(s, a) obeys a fundamental recursion known as the Bellman equation: Q*(s, a) = E[r_t + γ max_{a′} Q*(s_{t+1}, a′) | s_t = s, a_t = a].

¹Our approach can be generalized to other DRL/SRL variants.

2.1 SRL algorithms

Least Squares Temporal Difference Q-Learning (LSTD-Q): LSTD (Barto & Crites, 1996) and LSTD-Q (Lagoudakis & Parr, 2003) are batch SRL algorithms. LSTD-Q learns a control policy π from a batch of samples by estimating a linear approximation Q̂^π = Φw^π of the action-value function Q^π ∈ R^{|S||A|}, where w^π ∈ R^k is a set of weights and Φ ∈ R^{|S||A|×k} is a feature matrix. Each row of Φ represents a feature vector for a state-action pair ⟨s, a⟩. The weights w^π are learned by enforcing Q̂^π to satisfy a fixed-point equation w.r.t. the projected Bellman operator, resulting in a system of linear equations Aw^π = b, where A = Φ^T(Φ − γPΠ_πΦ) and b = Φ^T R. Here, R ∈ R^{|S||A|} is the reward vector, P ∈ R^{|S||A|×|S|} is the transition matrix and Π_π ∈ R^{|S|×|S||A|} is a matrix describing the policy.
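In practice, the linear system Aw^π = b is estimated from samples and solved with a few lines of linear algebra; a minimal numpy sketch (the random features, next-state features, and rewards below are only illustrative placeholders for real state-action features):

```python
import numpy as np

def lstdq_weights(phi_sa, phi_next, rewards, gamma):
    """Empirical LSTD-Q estimate: build the sample-based system
      A~ = (1/N) * sum_i phi_i^T (phi_i - gamma * phi'_i)
      b~ = (1/N) * sum_i phi_i^T r_i
    and solve it with the pseudo-inverse, w = A~^+ b~.
    phi_sa:   (N, k) features of (s_i, a_i)
    phi_next: (N, k) features of (s_{i+1}, pi(s_{i+1}))"""
    n = phi_sa.shape[0]
    A = phi_sa.T @ (phi_sa - gamma * phi_next) / n
    b = phi_sa.T @ rewards / n
    return np.linalg.pinv(A) @ b

# Illustrative random batch (k=4 features, N=256 transitions):
rng = np.random.default_rng(0)
phi = rng.normal(size=(256, 4))
phi_next = rng.normal(size=(256, 4))
r = rng.normal(size=256)
w = lstdq_weights(phi, phi_next, r, gamma=0.99)
print(w.shape)  # (4,)
```

Because LSTD-Q is off-policy, the same batch can be re-solved for any policy π that defines π(s_{i+1}) on the sampled next states.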
Given a set of N_SRL samples D = {s_i, a_i, r_i, s_{i+1}}_{i=1}^{N_SRL}, we can approximate A and b with the following empirical averages:

Ã = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)^T (φ(s_i, a_i) − γφ(s_{i+1}, π(s_{i+1}))) ,   b̃ = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)^T r_i .   (1)

The weights w^π can be calculated using a least squares minimization, w̃^π = argmin_w ‖Ãw − b̃‖²₂, or by calculating the pseudo-inverse, w̃^π = Ã†b̃. LSTD-Q is an off-policy algorithm: the same set of samples D can be used to train any policy π so long as π(s_{i+1}) is defined for every s_{i+1} in the set.

Fitted Q Iteration (FQI): The FQI algorithm (Ernst et al., 2005) is a batch SRL algorithm that computes iterative approximations of the Q-function using regression. At iteration N of the algorithm, the set D defined above and the approximation from the previous iteration Q_{N−1} are used to generate supervised learning targets: y_i = r_i + γ max_{a′} Q_{N−1}(s_{i+1}, a′), ∀i ∈ {1, …, N_SRL}. These targets are then used by a supervised learning (regression) method to compute the next function in the sequence Q_N, by minimizing the MSE loss Q_N = argmin_Q ∑_{i=1}^{N_SRL} (Q(s_i, a_i) − (r_i + γ max_{a′} Q_{N−1}(s_{i+1}, a′)))². For a linear function approximation Q_n(s, a) = φ(s, a)^T w_n, LS can be used to give the FQI solution w_n = argmin_w ‖Ãw − b̃‖²₂, where Ã, b̃ are given by:

Ã = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)^T φ(s_i, a_i) ,   b̃ = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)^T y_i .   (2)

The FQI algorithm can also be used with non-linear function approximations such as trees (Ernst et al., 2005) and neural networks (Riedmiller, 2005). The DQN algorithm (Mnih et al., 2015) can be viewed as an online form of FQI.

2.2 DRL algorithms

Deep Q-Network (DQN): The DQN algorithm (Mnih et al., 2015) learns the Q-function by minimizing the mean squared error of the Bellman equation, defined as E_{s_t,a_t,r_t,s_{t+1}} ‖Q_θ(s_t, a_t) − y_t‖²₂, where y_t = r_t + γ max_{a′} Q_{θ_target}(s_{t+1}, a′). The DQN maintains two separate networks, namely the current network with weights θ and the target network with weights θ_target.
Fixing the target network makes the DQN algorithm equivalent to FQI (see the FQI MSE loss defined above), where the regression algorithm is chosen to be SGD (RMSProp, Hinton et al. 2012). The DQN is an off-policy learning algorithm. Therefore, the tuples ⟨s_t, a_t, r_t, s_{t+1}⟩ that are used to optimize the network weights are first collected from the agent's experience and stored in an Experience Replay (ER) buffer (Lin, 1993), providing improved stability and performance.

Double DQN (DDQN): DDQN (Van Hasselt et al., 2016) is a modification of the DQN algorithm that addresses overly optimistic estimates of the value function. This is achieved by performing action selection with the current network θ and evaluating the action with the target network θ_target, yielding the DDQN target update: y_t = r_t if s_{t+1} is terminal, otherwise y_t = r_t + γ Q_{θ_target}(s_{t+1}, argmax_a Q_θ(s_{t+1}, a)).

3 The LS-DQN Algorithm

We now present a hybrid approach for DRL with SRL updates². Our algorithm, the LS-DQN algorithm, periodically switches between training a DRL network and re-training its last hidden layer using an SRL method.³ We assume that the DRL algorithm uses a deep network for representing the Q-function⁴, where the last layer is linear and fully connected. Such networks have been used extensively in deep RL recently (e.g., Mnih et al. 2015; Van Hasselt et al. 2016; Mnih et al. 2016). In such a representation, the last layer, which approximates the Q-function, can be seen as a linear combination of features (the output of the penultimate layer), and we propose to learn more accurate weights for it using SRL. Explicitly, the LS-DQN algorithm begins by training the weights of a DRL network, w_k, using a value-based DRL algorithm for N_DRL steps (Line 2). LS-DQN then updates the last hidden layer weights, w_k^last, by executing LS-UPDATE: retraining the weights using an SRL algorithm with N_SRL samples (Line 3). The LS-UPDATE consists of the following steps.
First, data trajectories D for the batch update are gathered using the current network weights w_k (Line 7). In practice, the current experience replay can be used and no additional samples need to be collected. The algorithm next generates new features Φ(s, a) from the data trajectories using the current DRL network with weights w_k. This step guarantees that we do not use samples with inconsistent features, as the ER contains features from 'old' network weights. Computationally, this step requires running a forward pass of the deep network for every sample in D, and can be performed quickly using parallelization. Once the new features are generated, LS-DQN uses an SRL algorithm to re-calculate the weights of the last hidden layer, w_k^last (Line 9). While the LS-DQN algorithm is conceptually straightforward, we found that naively running it with off-the-shelf SRL algorithms such as FQI or LSTD resulted in instability and a degradation of the DRL performance. The reason is that the 'slow' SGD computation in DRL essentially retains information from older training epochs, while the batch SRL method 'forgets' all data but the most recent batch. In the following, we propose a novel regularization method for addressing this issue.

Algorithm 1 LS-DQN Algorithm
Require: w_0
1: for k = 1 … SRLiters do
2:     w_k ← trainDRLNetwork(w_{k−1})    ▷ Train the DRL network for N_DRL steps
3:     w_k^last ← LS-UPDATE(w_k)         ▷ Update the last layer weights with the SRL solution
4: end for
5:
6: function LS-UPDATE(w)
7:     D ← gatherData(w)
8:     Φ(s, a) ← generateFeatures(D, w)
9:     w^last ← SRL-Algorithm(D, Φ(s, a))
10:    return w^last
11: end function

Regularization: Our goal is to improve the performance of a value-based DRL agent using a batch SRL algorithm. Batch SRL algorithms, however, do not leverage the knowledge that the agent has gained before the most recent batch⁵. We observed that this issue prevents the use of off-the-shelf implementations of SRL methods in our hybrid LS-DQN algorithm.
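The control flow of Algorithm 1 can be expressed compactly; a hedged sketch in which the four callables (DRL training, data gathering, feature generation, and the batch SRL solve) are placeholders for the components described above, and the weights are represented as a simple dict:

```python
def ls_dqn(w0, srl_iters, train_drl, gather_data, generate_features, srl_solve):
    """Skeleton of Algorithm 1 (LS-DQN): alternate N_DRL steps of DRL
    training with a batch SRL re-fit of the last hidden layer."""
    w = dict(w0)
    for _ in range(srl_iters):
        w = train_drl(w)                  # Line 2: train the DRL network
        data = gather_data(w)             # Line 7: in practice, the current ER
        phi = generate_features(data, w)  # Line 8: fresh forward passes
        w["last"] = srl_solve(data, phi)  # Line 9: batch SRL solution
    return w

# Sanity check with trivial stub components (3 outer iterations):
w = ls_dqn({"deep": 0, "last": 0}, 3,
           train_drl=lambda w: {**w, "deep": w["deep"] + 1},
           gather_data=lambda w: [0.0],
           generate_features=lambda data, w: data,
           srl_solve=lambda data, phi: len(phi))
print(w)  # {'deep': 3, 'last': 1}
```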
²Code is available online at https://github.com/Shallow-Updates-for-Deep-RL
³We refer the reader to Appendix B for a diagram of the algorithm.
⁴The features in the last DQN layer are not action-dependent. We generate action-dependent features Φ(s, a) by zero-padding to a one-hot state-action feature vector. See Appendix E for more details.
⁵While conceptually the data batch can include all the data seen so far, due to computational limitations this is not a practical solution in the domains we consider.

To enjoy the benefits of both worlds, that is, a batch algorithm that can use the accumulated knowledge gained by the DRL network, we introduce a novel Bayesian regularization method for LSTD-Q and FQI that uses the last hidden layer weights of the DRL network, w_k^last, as a Bayesian prior for the SRL algorithm⁶.

SRL Bayesian Prior Formulation: We are interested in learning the weights of the last hidden layer (w^last) using a least squares SRL algorithm. We pursue a Bayesian approach, where the prior weights distribution at iteration k of LS-DQN is given by w_prior ∼ N(w_k^last, λ^{−2}), and we recall that w_k^last are the last hidden layer weights of the DRL network at iteration SRLiter = k. The Bayesian solution for the regression problem in the FQI algorithm is given by (Box & Tiao, 2011)

w^last = (Ã + λI)^{−1}(b̃ + λ w_k^last) ,

where Ã and b̃ are given in Equation 2. A similar regularization can be added to LSTD-Q based on a regularized fixed-point equation (Kolter & Ng, 2009). Full details are in Appendix A.

4 Experiments

In this section, we present experiments showcasing the improved performance attained by our LS-DQN algorithm compared to state-of-the-art DRL methods. Our experiments are divided into three sections. In Section 4.1, we start by investigating the behavior of SRL algorithms in high-dimensional environments.
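The Bayesian-regularized solution above is a one-line change to the plain least squares solve; a minimal numpy sketch (the matrices below are illustrative, not taken from a real network):

```python
import numpy as np

def bayesian_ls_update(A, b, w_prior, lam):
    """Bayesian-regularized batch solution:
        w_last = (A + lam*I)^{-1} (b + lam * w_prior),
    where w_prior plays the role of the DRL network's current
    last-layer weights. lam -> 0 recovers the plain LS solution;
    a large lam keeps the solution close to the prior weights."""
    k = A.shape[0]
    return np.linalg.solve(A + lam * np.eye(k), b + lam * np.asarray(w_prior))

# With a very large prior coefficient the update stays near the prior weights:
A = 0.01 * np.eye(3)
b = np.ones(3)
w_drl = np.array([5.0, -2.0, 0.5])
print(np.allclose(bayesian_ls_update(A, b, w_drl, lam=1e6), w_drl, atol=1e-3))  # True
```

The regularizer also conditions the system: even when Ã is nearly singular (as happens with sparse ReLU features, see Section 4.1), Ã + λI is invertible for λ > 0.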
We then show results for LS-DQN on five Atari domains in Section 4.2, and compare the resulting performance to regular DQN and DDQN agents. Finally, in Section 4.3, we present an ablative analysis of the LS-DQN algorithm, which clarifies the reasons behind our algorithm's success.

4.1 SRL Algorithms with High Dimensional Observations

In the first set of experiments, we explore how least squares SRL algorithms perform in domains with high-dimensional observations. This is an important step before applying an SRL method within the LS-DQN algorithm. In particular, we focused on answering the following questions: (1) What regularization method to use? (2) How to generate data for the LS algorithm? (3) How many policy improvement iterations to perform? To answer these questions, we performed the following procedure: We trained DQN agents on two games from the Arcade Learning Environment (ALE, Bellemare et al.), namely Breakout and Qbert, using the vanilla DQN implementation (Mnih et al., 2015). For each DQN run, we (1) periodically⁷ save the current DQN network weights and ER; (2) use an SRL algorithm (LSTD-Q or FQI) to re-learn the weights of the last layer; and (3) evaluate the resulting DQN network by temporarily replacing the DQN weights with the SRL solution weights. After the evaluation, we restore the original DQN weights and continue training. Each evaluation entails 20 roll-outs⁸ with an ϵ-greedy policy (similar to Mnih et al., ϵ = 0.05). This periodic evaluation setup allowed us to effectively experiment with the SRL algorithms and obtain clear comparisons with DQN, without waiting for full DQN runs to complete.

(1) Regularization: Experiments with standard SRL methods without any regularization yielded poor results. We found the main reason to be that the matrices used in the SRL solutions (Equations 1 and 2) are ill-conditioned, resulting in instability. One possible explanation stems from the sparseness of the features.
The DQN uses ReLU activations (Jarrett et al., 2009), which cause the network to learn sparse feature representations. For example, once the DQN completed training on Breakout, 96% of the features were zero. Once we added a regularization term, we found that the performance of the SRL algorithms improved. We experimented with the ℓ2 and Bayesian Prior (BP) regularizers (λ ∈ [0, 10²]). While the ℓ2 regularizer showed competitive performance in Breakout, we found that the BP performed better across domains (Figure 1 shows, for the best-chosen regularizers, the average score of each configuration across epochs, following the evaluation procedure explained above). Moreover, the BP regularizer was not sensitive to the scale of the regularization coefficient: regularizers in the range (10^{−1}, 10^1) performed well across all domains. A table of average scores for different coefficients can be found in Appendix C.1. Note that we do not expect much improvement here, as we restore the original DQN weights after each evaluation.

(2) Data Gathering: We experimented with two mechanisms for generating data: (1) generating new data from the current policy, and (2) using the ER. We found that the data generation mechanism had a significant impact on the performance of the algorithms. When the data is generated only from the current DQN policy (without the ER), the SRL solution resulted in poor performance compared to a solution using the ER (as was observed by Mnih et al. 2015). We believe that the main reason the ER works well is that it contains data sampled from multiple (past) policies, and therefore exhibits more exploration of the state space.

⁶The reader is referred to Ghavamzadeh et al. (2015) for an overview on using Bayesian methods in RL.
⁷Every three million DQN steps, referred to as one epoch (out of a total of 50 million steps).
⁸Each roll-out starts from a new (random) game and follows a policy until the agent loses all of its lives.
(3) Policy Improvement: LSTD-Q and FQI are off-policy algorithms and can be applied iteratively on the same dataset (e.g., LSPI, Lagoudakis & Parr 2003). However, in practice, we found that performing multiple iterations did not improve the results. A possible explanation is that as the policy improves, it reaches new areas of the state space that are not represented well in the current ER, and therefore are not approximated well by the SRL solution and the current DRL network.

Figure 1: Periodic evaluation for DQN (green), LS-DQN_LSTD-Q with Bayesian prior regularization (red; solid λ = 10, dashed λ = 1), and ℓ2 regularization (blue; solid λ = 0.001, dashed λ = 0.0001).

4.2 Atari Experiments

We next ran the full LS-DQN algorithm (Alg. 1) on five Atari domains: Asterix, Space Invaders, Breakout, Q-Bert and Bowling. We ran LS-DQN using both DQN and DDQN as the DRL algorithm, and both LSTD-Q and FQI as the SRL algorithm. We chose to run an LS-update every N_DRL = 500k steps, for a total of 50M steps (SRL_iters = 100). We used the current ER buffer as the ‘generated’ data in the LS-UPDATE function (line 7 in Alg. 1, N_SRL = 1M), and a regularization coefficient λ = 1 for the Bayesian prior solution (for both FQI and LSTD-Q). We emphasize that we did not use any additional samples beyond those already obtained by the DRL algorithm. Figure 2 presents the learning curves of the DQN network, LS-DQN with LSTD-Q, and LS-DQN with FQI (referred to as DQN, LS-DQN_LSTD-Q, and LS-DQN_FQI, respectively) on three domains: Asterix, Space Invaders and Breakout. Note that we use the same evaluation process as described in Mnih et al. (2015). We were also interested in a test that measures differences between learning curves, and not only their maximal scores. Hence we chose to perform a Wilcoxon signed-rank test on the average scores of the three DQN variants. This non-parametric statistical test measures whether related samples differ in their means (Wilcoxon, 1945).
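The Wilcoxon signed-rank comparison can be sketched in a few lines. This is a from-scratch normal-approximation version for illustration (in practice `scipy.stats.wilcoxon` computes the same test), applied to hypothetical paired per-epoch scores:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.

    a, b : paired samples, e.g. per-epoch average scores of two agents.
    Returns (W_plus, p_value); assumes at least one nonzero difference.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]   # drop zero differences
    n = len(diffs)
    # rank the absolute differences, averaging ranks over tie groups
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                         # average rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

For p-values as extreme as those reported (below 1e-15), the exact distribution rather than the normal approximation would be preferable; the sketch only illustrates the mechanics of the test.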
We found that the learning curves for both LS-DQN_LSTD-Q and LS-DQN_FQI were statistically significantly better than those of DQN, with p-values smaller than 1e-15 for all three domains. Table 1 presents the maximum average scores along the learning curves of the five domains, when the SRL algorithms were incorporated into both DQN agents and DDQN agents (with analogous notation, e.g., LS-DDQN_FQI)[9].

Figure 2: Learning curves of DQN (green), LS-DQN_LSTD-Q (red), and LS-DQN_FQI (blue).

Our algorithm, LS-DQN, attained better performance than the vanilla DQN agents, as seen by the higher scores in Table 1 and Figure 2. We observe an interesting phenomenon for the game Asterix: in Figure 2, the DQN’s score “crashes” to zero (as was observed by Van Hasselt et al. 2016). LS-DQN_LSTD-Q did not manage to resolve this issue, even though it achieved a significantly higher score than that of the DQN. LS-DQN_FQI, however, maintained steady performance and did not “crash” to zero. We found that, in general, incorporating FQI as an SRL algorithm into the DRL agents resulted in improved performance.

Table 1: Maximal average scores across five different Atari domains for each of the DQN variants.

Algorithm        Breakout   Space Invaders   Asterix    Qbert      Bowling
DQN[9]           401.20     1975.50          6011.67    10595.83   42.40
LS-DQN_LSTD-Q    420.00     3207.44          13704.23   10767.47   61.21
LS-DQN_FQI       438.55     3360.81          13636.81   12981.42   75.38
DDQN[9]          375.00     3154.60          15150.00   14875.00   70.50
LS-DDQN_FQI      397.94     4400.83          16270.45   12727.94   80.75

[9] Scores for DQN and DDQN were taken from Van Hasselt et al. (2016).

4.3 Ablative Analysis

In the previous section, we saw that the LS-DQN algorithm improves performance compared to the DQN agents across a number of domains. The goal of this section is to understand the reasons behind LS-DQN’s improved performance by conducting an ablative analysis of our algorithm.
For this analysis, we used a DQN agent that was trained on the game of Breakout, in the same manner as described in Section 4.1. We focus on analyzing the LS-DQN_FQI algorithm, which has the same optimization objective as DQN (cf. Section 2), and postulate the following conjectures for its improved performance:

(i) The SRL algorithms use a Bayesian regularization term, which is not included in the DQN objective.
(ii) The SRL algorithms have fewer hyper-parameters to tune and produce an explicit solution, compared to SGD-based DRL solutions.
(iii) Large-batch methods perform better than small-batch methods when combining DRL with SRL.
(iv) SRL algorithms focus on training the last layer and are easier to optimize.

The Experiments: We started by analyzing the learning method of the last layer (i.e., the ‘shallow’ part of the learning process). We did this by optimizing the last layer, at each LS-UPDATE epoch, using (1) FQI with a Bayesian prior and an LS solution, and (2) an ADAM (Kingma & Ba, 2014) optimizer, with and without an additional Bayesian prior regularization term in the loss function. We compared these approaches for mini-batch sizes of 32, 512, and 4096 data points, and used λ = 1 for all experiments. Relating to conjecture (ii), note that the FQI algorithm has only one hyper-parameter to tune and produces an explicit solution using the whole dataset simultaneously. ADAM, on the other hand, has more hyper-parameters to tune and operates on mini-batches.

The Experimental Setup: The experiments were done in a periodic fashion similar to Section 4.1, i.e., testing behavior at different epochs over a vanilla DQN run. For both ADAM and FQI, we first collected 80k data samples from the ER at each epoch. For ADAM, we performed 20 iterations over the data, where each iteration consisted of randomly permuting the data, dividing it into mini-batches, and optimizing with ADAM over the mini-batches[10].
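The contrast in conjecture (ii), between an explicit full-batch solution and a mini-batch ADAM optimizer, can be illustrated on a toy one-dimensional regression; the objective, data, and hyper-parameters below are illustrative assumptions, not the paper's actual settings:

```python
import random

def closed_form(data, w0, lam):
    """Explicit full-batch solution of  min_w  sum_i (w*x_i - y_i)^2 + lam*(w - w0)^2."""
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    return (sxy + lam * w0) / (sxx + lam)

def adam_fit(data, w0, lam, batch=32, iters=20, lr=0.001):
    """Mini-batch ADAM on the same objective (beta1/beta2/eps are the usual defaults)."""
    w, m, v, t = w0, 0.0, 0.0, 0
    b1, b2, eps = 0.9, 0.999, 1e-8
    rng = random.Random(0)
    for _ in range(iters):
        rng.shuffle(data)                          # one pass = shuffle, then mini-batches
        for i in range(0, len(data), batch):
            mb = data[i:i + batch]
            g = sum(2 * (w * x - y) * x for x, y in mb) / len(mb)
            g += 2 * lam * (w - w0) / len(data)    # prior term, spread over the data
            t += 1
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            w -= lr * (m / (1 - b1 ** t)) / ((v / (1 - b2 ** t)) ** 0.5 + eps)
    return w
```

On a quadratic objective the explicit solution is exact in a single solve, whereas ADAM needs many passes plus extra hyper-parameters (learning rate, batch size, betas); larger batches also make the stochastic gradient closer to the full-batch one, loosely echoing conjecture (iii).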
We then simulated the agent and reported average scores across 20 trajectories.

The Results: Figure 3 depicts the difference between the average scores of (1) and (2) and the DQN baseline scores. We see that larger mini-batches result in improved performance. Moreover, the LS solution (FQI) outperforms the ADAM solutions for mini-batch sizes of 32 and 512 on most epochs, and even slightly outperforms the best of them (mini-batch size of 4096 with a Bayesian prior). In addition, a solution with a prior performs better than a solution without a prior.

Summary: Our ablative analysis experiments strongly support conjectures (iii) and (iv) above as explanations for LS-DQN’s improved performance. That is, large-batch methods perform better than small-batch methods when combining DRL with SRL; and SRL algorithms that focus on training only the last layer are easier to optimize, as we see that optimizing the last layer improved the score across epochs.

Figure 3: Differences of the average scores, for different learning methods, compared to vanilla DQN. Positive values represent improvement over vanilla DQN. For example, for a mini-batch size of 32 (left figure), in epoch 3, FQI (blue) outperformed vanilla DQN by 21, while ADAM with a prior (red) and ADAM without a prior (green) underperformed by 46 and 96, respectively. Note that (1) as the mini-batch size increases, the improvement of ADAM over DQN approaches the improvement of FQI over DQN, and (2) optimizing the last layer improves performance.

We finish this section with an interesting observation. While the LS solution improves the performance of the DRL agents, we found that the LS solution weights are very close to the baseline DQN weights (see Appendix D for the full results). Moreover, the distance was inversely proportional to the performance of the solution: the FQI solution that performed best was the closest (in ℓ2 norm) to the DQN solution, and vice versa.
There were orders-of-magnitude differences between the norms of solutions that performed well and those that did not. Similar results, i.e., that large-batch methods find solutions close to the baseline, have been reported in (Keskar et al., 2016). We further compare our results with the findings of Keskar et al. in the section to follow.

[10] The selected hyper-parameters used for these experiments can be found in Appendix D, along with results for one iteration of ADAM.

5 Related work

We now review recent works that are related to this paper.

Regularization: The general idea of applying regularization for feature selection, and to avoid overfitting, is a common theme in machine learning. However, applying it to RL algorithms is challenging due to the fact that these algorithms are based on finding a fixed point rather than optimizing a loss function (Kolter & Ng, 2009). Value-based DRL approaches do not use regularization layers (e.g., pooling, dropout and batch normalization), which are popular in other deep learning methods. The DQN, for example, has a relatively shallow architecture (three convolutional layers followed by two fully connected layers) without any regularization layers. Recently, regularization was introduced in problems that combine value-based RL with other learning objectives. For example, Hester et al. (2017) combine RL with supervised learning from expert demonstration, and introduce regularization to avoid over-fitting the expert data; and Kirkpatrick et al. (2017) introduce regularization to avoid catastrophic forgetting in transfer learning. SRL methods, on the other hand, perform well with regularization (Kolter & Ng, 2009) and have been shown to converge (Farahmand et al., 2009).

Batch size: Our results suggest that a large-batch LS solution for the last layer of a value-based DRL network can significantly improve its performance.
This result is somewhat surprising, as it has been observed by practitioners that using larger batches in deep learning degrades the quality of the model, as measured by its ability to generalize (Keskar et al., 2016). However, our method differs from the experiments performed by Keskar et al. (2016) and therefore does not contradict them, for the following reasons: (1) The LS-DQN algorithm uses the large-batch solution only for the last layer; the lower layers of the network are not affected by the large-batch solution and therefore do not converge to a sharp minimum. (2) The experiments of Keskar et al. (2016) were performed on classification tasks, whereas our algorithm minimizes an MSE loss. (3) Keskar et al. showed that large-batch solutions work well when piggy-backing on (warm-starting from) a small-batch solution; similarly, our algorithm mixes small- and large-batch solutions as it switches between them periodically. Moreover, it was recently observed that flat minima in practical deep learning model classes can be turned into sharp minima via re-parameterization without changing the generalization gap, so this phenomenon requires further investigation (Dinh et al., 2017). In addition, Hoffer et al. (2017) showed that large-batch training can generalize as well as small-batch training by adapting the number of iterations. Thus, we strongly believe that our findings on combining large and small batches in DRL are in agreement with recent results of other deep learning research groups.

Deep and Shallow RL: Using the last hidden layer of a DNN as a feature extractor and learning the last layer with a different algorithm has been addressed before in the literature, e.g., in the context of transfer learning (Donahue et al., 2013).
In RL, there have been competitive attempts to use SRL with unsupervised features to play Atari (Liang et al., 2016; Blundell et al., 2016), and to learn features automatically followed by a linear control rule (Song et al., 2016), but to the best of our knowledge, this is the first attempt that successfully combines DRL with SRL algorithms.

6 Conclusion

In this work we presented LS-DQN, a hybrid approach that combines least-squares RL updates with online deep RL. LS-DQN obtains the best of both worlds: the rich representations of deep RL networks, and the stability and data efficiency of least-squares methods. Experiments with two deep RL methods and two least-squares methods revealed that the hybrid approach consistently improves over vanilla deep RL in the Atari domain. Our ablative analysis indicates that the success of the LS-DQN algorithm is due to the large-batch updates made possible by using least squares.

This work focused on value-based RL. However, our hybrid linear/deep approach can be extended to other RL methods, such as actor-critic (Mnih et al., 2016). More broadly, decades of research on linear RL methods have produced methods with strong guarantees, such as approximate linear programming (Desai et al., 2012) and modified policy iteration (Scherrer et al., 2015). Our approach shows that, with the correct modifications, such as our Bayesian regularization term, linear methods can be combined with deep RL. This opens the door to future combinations of well-understood linear RL with deep representation learning.

Acknowledgement

This research was supported by the European Community’s Seventh Framework Program (FP7/2007-2013) under grant agreement 306638 (SUPREL). A. Tamar is supported in part by Siemens and the Viterbi Scholarship, Technion.

References

Barto, AG and Crites, RH. Improving elevator performance using reinforcement learning. Advances in Neural Information Processing Systems, 8:1017–1023, 1996.
Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
Bertsekas, Dimitri P. Approximate dynamic programming. 2008.
Blundell, Charles, Uria, Benigno, Pritzel, Alexander, Li, Yazhe, Ruderman, Avraham, Leibo, Joel Z, Rae, Jack, Wierstra, Daan, and Hassabis, Demis. Model-free episodic control. stat, 1050:14, 2016.
Box, George EP and Tiao, George C. Bayesian inference in statistical analysis. John Wiley & Sons, 2011.
Desai, Vijay V, Farias, Vivek F, and Moallemi, Ciamac C. Approximate dynamic programming via a smoothed linear program. Operations Research, 60(3):655–674, 2012.
Dinh, Laurent, Pascanu, Razvan, Bengio, Samy, and Bengio, Yoshua. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 647–655, 2013.
Ernst, Damien, Geurts, Pierre, and Wehenkel, Louis. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503–556, 2005.
Farahmand, Amir M, Ghavamzadeh, Mohammad, Mannor, Shie, and Szepesvári, Csaba. Regularized policy iteration. In Advances in Neural Information Processing Systems, pp. 441–448, 2009.
Ghavamzadeh, Mohammad, Mannor, Shie, Pineau, Joelle, Tamar, Aviv, et al. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359–483, 2015.
Hester, Todd, Vecerik, Matej, Pietquin, Olivier, Lanctot, Marc, Schaul, Tom, Piot, Bilal, Sendonaris, Andrew, Dulac-Arnold, Gabriel, Osband, Ian, Agapiou, John, et al. Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732, 2017.
Hinton, Geoffrey, Srivastava, Nitish, and Swersky, Kevin. Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent. 2012.
Hoffer, Elad, Hubara, Itay, and Soudry, Daniel. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
Jarrett, Kevin, Kavukcuoglu, Koray, LeCun, Yann, et al. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146–2153. IEEE, 2009.
Keskar, Nitish Shirish, Mudigere, Dheevatsa, Nocedal, Jorge, Smelyanskiy, Mikhail, and Tang, Ping Tak Peter. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017.
Kolter, J Zico and Ng, Andrew Y. Regularization and feature selection in least-squares temporal difference learning. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Lagoudakis, Michail G and Parr, Ronald. Least-squares policy iteration. Journal of Machine Learning Research, 4(Dec):1107–1149, 2003.
Liang, Yitao, Machado, Marlos C, Talvitie, Erik, and Bowling, Michael. State of the art control of Atari games using shallow reinforcement learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 2016.
Lin, Long-Ji. Reinforcement learning for robots using neural networks. 1993.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
Riedmiller, Martin. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
Scherrer, Bruno, Ghavamzadeh, Mohammad, Gabillon, Victor, Lesner, Boris, and Geist, Matthieu. Approximate modified policy iteration and its application to the game of Tetris. Journal of Machine Learning Research, 16:1629–1676, 2015.
Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Song, Zhao, Parr, Ronald E, Liao, Xuejun, and Carin, Lawrence. Linear feature encoding for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4224–4232, 2016.
Sutton, Richard and Barto, Andrew. Reinforcement Learning: An Introduction. MIT Press, 1998.
Tessler, Chen, Givony, Shahar, Zahavy, Tom, Mankowitz, Daniel J, and Mannor, Shie. A deep hierarchical approach to lifelong learning in Minecraft. Proceedings of the National Conference on Artificial Intelligence (AAAI), 2017.
Tsitsiklis, John N, Van Roy, Benjamin, et al. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. Proceedings of the National Conference on Artificial Intelligence (AAAI), 2016.
Wang, Ziyu, Schaul, Tom, Hessel, Matteo, van Hasselt, Hado, Lanctot, Marc, and de Freitas, Nando. Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1995–2003, 2016.
Wilcoxon, Frank. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83, 1945.
Zahavy, Tom, Ben-Zrihem, Nir, and Mannor, Shie. Graying the black box: Understanding DQNs. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1899–1908, 2016.
Learning with Bandit Feedback in Potential Games

Johanne Cohen
LRI-CNRS, Université Paris-Sud, Université Paris-Saclay, France
johanne.cohen@lri.fr

Amélie Héliou
LIX, Ecole Polytechnique, CNRS, AMIBio, Inria, Université Paris-Saclay
amelie.heliou@polytechnique.edu

Panayotis Mertikopoulos
Univ. Grenoble Alpes, CNRS, Inria, LIG, F-38000, Grenoble, France
panayotis.mertikopoulos@imag.fr

Abstract

This paper examines the equilibrium convergence properties of no-regret learning with exponential weights in potential games. To establish convergence with minimal information requirements on the players’ side, we focus on two frameworks: the semi-bandit case (where players have access to a noisy estimate of their payoff vectors, including strategies they did not play), and the bandit case (where players are only able to observe their in-game, realized payoffs). In the semi-bandit case, we show that the induced sequence of play converges almost surely to a Nash equilibrium at a quasi-exponential rate. In the bandit case, the same result holds for ε-approximations of Nash equilibria if we introduce an exploration factor ε > 0 that guarantees that action choice probabilities never fall below ε. In particular, if the algorithm is run with a suitably decreasing exploration factor, the sequence of play converges to a bona fide Nash equilibrium with probability 1.

1 Introduction

Given the manifest complexity of computing Nash equilibria, a central question that arises is whether such outcomes could result from a dynamic process in which players act on empirical information on their strategies’ performance over time.
This question becomes particularly important when the players’ view of the game is obstructed by situational uncertainty and the “fog of war”: for instance, when deciding which route to take to work each morning, a commuter is typically unaware of how many other commuters there are at any given moment, what their possible strategies are, how to best respond to their choices, etc. In fact, in situations of this kind, players may not even know that they are involved in a game; as such, it does not seem reasonable to assume full rationality, common knowledge of rationality, flawless execution, etc., to justify the Nash equilibrium prediction. A compelling alternative to this “rationalistic” viewpoint is provided by the framework of online learning, where players are treated as oblivious entities facing a repeated decision process with a priori unknown rules and outcomes. In this context, when the players have no Bayesian prior on their environment, the most widely used performance criterion is that of regret minimization, a worst-case guarantee that was first introduced by Hannan [1], and which has given rise to a vigorous literature at the interface of optimization, statistics and theoretical computer science; for a survey, see [2, 3]. By this token, our starting point in this paper is the following question:

If all players of a repeated game follow a no-regret algorithm, does the induced sequence of play converge to Nash equilibrium?

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

For concreteness, we focus on the exponential weights (EW) scheme [4–7], one of the most popular and widely studied algorithms for no-regret learning. In a nutshell, the main idea of the method is that the optimizing agent tallies the cumulative payoffs of each action and then employs a pure strategy with probability proportional to the exponential of these cumulative “scores”.
Under this scheme, players are guaranteed a universal, min-max O(T^{1/2}) regret bound (with T denoting the horizon of play), and their empirical frequency of play is known to converge to the game’s set of coarse correlated equilibria (CCE) [8]. In this way, no-regret learning would seem to provide a positive partial answer to our original question: coarse correlated equilibria are indeed learnable if all players follow an exponential weights learning scheme. On the flip side, however, the set of coarse correlated equilibria may contain highly non-rationalizable strategies, so the end prediction of empirical convergence to such equilibria is fairly lax. For instance, in a recent paper, Viossat and Zapechelnyuk constructed a 4 × 4 variant of Rock-Paper-Scissors with a coarse correlated equilibrium that assigns positive weight only to strictly dominated strategies [9]. Even more recently, [10] showed that the mean dynamics of the exponential weights method (and, more generally, of any method “following the regularized leader”) may cycle in perpetuity in zero-sum games, precluding any possibility of convergence to equilibrium in this case. Thus, in view of these negative results, a more calibrated answer to the above question is “not always”: especially when the issue at hand is convergence to a Nash equilibrium (as opposed to coarser notions), “no regret” is a rather loose guarantee.

Paper outline and summary of results. To address the above limitations, we focus on two issues:

a) Convergence to Nash equilibrium (as opposed to correlated equilibria, coarse or otherwise).
b) The convergence of the actual sequence of play (as opposed to empirical frequencies).

The reason for focusing on the actual sequence of play is that time-averages provide a fairly weak convergence mode: a priori, a player could oscillate between non-equilibrium strategies with suboptimal payoffs, but time-averages might still converge to equilibrium.
On the other hand, convergence of the actual sequence of play both implies empirical convergence and also guarantees that players will be playing a Nash equilibrium in the long run, so it is a much stronger notion. To establish convergence, we focus throughout on the class of potential games [11], which has found widespread applications in theoretical computer science [12], transportation networks [13], wireless communications [14], biology [15], and many other fields. We then focus on two different feedback models: in the semi-bandit framework (Section 3), players are assumed to have some (possibly imperfect) estimate of their payoff vectors at each stage, including strategies that they did not play; in the full bandit framework (Section 4), this assumption is relaxed and players are only assumed to observe their realized, in-game payoff at each stage.

Starting with the semi-bandit case, our main result is that, under fairly mild conditions on the errors affecting the players’ observations (zero-mean martingale noise with tame second-moment tails), learning with exponential weights converges to a Nash equilibrium of the game with probability 1 (or to an ε-equilibrium if the algorithm is implemented with a uniform exploration factor ε > 0).[1] We also show that this convergence occurs at a quasi-exponential rate, i.e., much faster than the algorithm’s O(√T) regret minimization rate would suggest. These conclusions also apply to the bandit framework when the algorithm is run with a positive exploration factor ε > 0. Thus, by choosing a sufficiently small exploration factor, the end state of the EW algorithm in potential games with bandit feedback is arbitrarily close to a Nash equilibrium. On the other hand, extending the stochastic approximation and martingale limit arguments that underlie the bandit analysis to the ε = 0 case is not straightforward.
However, by letting the exploration factor go to zero at a suitable rate (similar to the temperature parameter in simulated annealing schemes), we are able to recover convergence to the game’s exact Nash set (and not an approximation thereof). We find this property particularly appealing for practical applications because it shows that equilibrium can be achieved in a wide class of games with minimal information requirements.

[1] Having an exploration factor ε > 0 simply means here that action selection probabilities never fall below ε.

Related work. No-regret learning has given rise to a vast corpus of literature in theoretical computer science and machine learning, and several well-known families of algorithms have been proposed for this purpose. The most popular of these methods is based on exponential/multiplicative weight update rules, and several variants of this general scheme have been studied under different names in the literature (Hedge, EXP3, etc.) [4–7]. When applied to games, the time-average of the resulting trajectory of play converges to equilibrium in two-player zero-sum games [6, 16, 17], and the players’ social welfare approaches an approximate optimum [18]. In a similar vein, focusing on the so-called “Hedge” variant of the multiplicative weights (MW) algorithm, Kleinberg et al. [19] proved that the dynamics’ long-term limit in load balancing games is exponentially better than the worst correlated equilibrium. The convergence rate to approximate efficiency and to coarse correlated equilibria was further improved by Syrgkanis et al. [20] for a wide class of N-player normal-form games using a natural class of regularized learning algorithms. This result was then extended to a class of games known as smooth games [21], which have good properties in terms of the game’s price of anarchy [22].
In the context of potential games, learning algorithms and dynamics have received significant attention, and considerable efforts have been devoted to studying the long-term properties of the players’ actual sequence of play. To that end, Kleinberg et al. [23] showed that, after a polynomially small transient stage, players end up playing a pure equilibrium for a fraction of time that is arbitrarily close to 1, with probability also arbitrarily close to 1. Mehta et al. [24] obtained a stronger result for (generic) 2-player coordination games, showing that the multiplicative weights algorithm (a linearized variant of the EW algorithm) converges to a pure Nash equilibrium for all but a measure-zero set of initial conditions. More recently, Palaiopanos et al. [25] showed that the MW update rule converges to equilibrium in potential games; however, if the EW algorithm is run with a constant step-size that is not small enough, the induced sequence of play may exhibit chaotic behavior, even in simple 2 × 2 games. On the other hand, if the same algorithm is run with a decreasing step-size, Krichene et al. [26] showed that play converges to Nash equilibrium in all nonatomic potential games with a convex potential (and hence in all nonatomic congestion games).

In the above works, players are assumed to have full (though possibly imperfect) knowledge of their payoff vectors, including actions that were not chosen. Going beyond this semi-bandit framework, Coucheney et al. [27] showed that a “penalty-regulated” variant of the EW algorithm converges to ε-logit equilibria (and hence ε-approximate Nash equilibria) in congestion games with bandit feedback. As in [26], the results of Coucheney et al. [27] employ the powerful ordinary differential equation (ODE) method of Benaïm [28], which leverages the convergence of an underlying continuous-time dynamical system to obtain convergence of the algorithm at hand.
We also employ this method to compare the actual sequence of play to the replicator dynamics of evolutionary game theory [29]; however, fine-tuning the bias-variance trade-off that arises when estimating the payoff of actions that were not employed is a crucial difficulty in this case. Overcoming this hurdle is necessary when seeking convergence to actual Nash equilibria (as opposed to ε-approximations thereof), so a key contribution of our paper is an extension of Benaïm’s theory to account for estimators with (possibly) unbounded variance.

2 The setup

2.1 Game-theoretic preliminaries

An N-player game in normal form consists of a (finite) set of players N = {1, . . . , N}, each with a finite set of actions (or pure strategies) A_i. The preferences of the i-th player for one action over another are determined by an associated payoff function u_i : A ≡ ∏_i A_i → ℝ that maps the profile (α_i; α_{-i}) of all players’ actions to the player’s reward u_i(α_i; α_{-i}).[2] Putting all this together, a game will be denoted by the tuple Γ ≡ Γ(N, A, u). Players can also mix their strategies by playing probability distributions x_i = (x_{iα_i})_{α_i∈A_i} ∈ Δ(A_i) over their action sets A_i. The resulting probability vector x_i is called a mixed strategy, and we write X_i = Δ(A_i) for the mixed strategy space of player i. Aggregating over players, we also write X = ∏_i X_i for the game’s strategy space, i.e., the space of all mixed strategy profiles x = (x_i)_{i∈N}.

[2] In the above, (α_i; α_{-i}) is shorthand for (α_1, . . . , α_i, . . . , α_N), used here to highlight the action of player i against that of all other players.

In this context (and in a slight abuse of notation), the expected payoff of the i-th player in the profile x = (x_1, . . . , x_N) is

u_i(x) = ∑_{α_1∈A_1} ⋯ ∑_{α_N∈A_N} u_i(α_1, . . . , α_N) x_{1α_1} ⋯ x_{Nα_N}.
(2.1)

To keep track of the payoff of each pure strategy, we also write v_{iα_i}(x) = u_i(α_i; x_{−i}) for the payoff of strategy α_i ∈ A_i under the profile x ∈ X and

v_i(x) = (v_{iα_i}(x))_{α_i∈A_i} (2.2)

for the resulting payoff vector of player i. We thus have

u_i(x) = ⟨v_i(x), x_i⟩ = Σ_{α_i∈A_i} x_{iα_i} v_{iα_i}(x), (2.3)

where ⟨v, x⟩ ≡ v^⊤x denotes the ordinary pairing between v and x. The most widely used solution concept in game theory is that of a Nash equilibrium (NE), i.e. a state x∗ ∈ X such that

u_i(x∗_i; x∗_{−i}) ≥ u_i(x_i; x∗_{−i}) for every deviation x_i ∈ X_i of player i and all i ∈ N. (NE)

Equivalently, writing supp(x_i) = {α_i ∈ A_i : x_{iα_i} > 0} for the support of x_i ∈ X_i, we have the characterization

v_{iα_i}(x∗) ≥ v_{iβ_i}(x∗) for all α_i ∈ supp(x∗_i) and all β_i ∈ A_i, i ∈ N. (2.4)

A Nash equilibrium x∗ ∈ X is further said to be pure if supp(x∗_i) = {α̂_i} for some α̂_i ∈ A_i and all i ∈ N. In generic games (that is, games where small changes to any payoff do not introduce new Nash equilibria or destroy existing ones), every pure Nash equilibrium is also strict in the sense that (2.4) holds as a strict inequality for all α_i ≠ α̂_i. In our analysis, it will be important to consider the following relaxations of the notion of a Nash equilibrium. First, weakening the inequality (NE) leads to the notion of a δ-equilibrium, defined here as any mixed strategy profile x∗ ∈ X such that

u_i(x∗_i; x∗_{−i}) + δ ≥ u_i(x_i; x∗_{−i}) for every deviation x_i ∈ X_i and all i ∈ N. (NEδ)

Second, we say that x∗ is a restricted equilibrium (RE) of Γ if

v_{iα_i}(x∗) ≥ v_{iβ_i}(x∗) for all α_i ∈ supp(x∗_i) and all β_i ∈ A′_i, i ∈ N, (RE)

where A′_i is some restricted subset of A_i containing supp(x∗_i). In words, restricted equilibria are Nash equilibria of Γ restricted to subgames where only a subset of the players’ pure strategies are available at any given moment. Clearly, Nash equilibria are restricted equilibria but the converse does not hold: for instance, every pure strategy profile is a restricted equilibrium, but not necessarily a Nash equilibrium.
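To make these definitions concrete, the following NumPy sketch (our own illustration, not part of the paper; the helper names and payoff tensors are ours) computes the payoff vectors (2.2) and expected payoffs (2.3), and checks the equilibrium characterization (2.4) by looking for profitable pure deviations:

```python
import numpy as np

def payoff_vector(u_i, x, i):
    """v_i(x) of eq. (2.2): expected payoff of each pure strategy of
    player i against the mixed strategies of everyone else."""
    v = u_i                      # payoff tensor: one axis per player
    axis = 0
    for j, x_j in enumerate(x):
        if j == i:
            axis += 1            # leave player i's own axis free
        else:
            v = np.tensordot(v, x_j, axes=([axis], [0]))
    return v

def expected_payoff(u_i, x, i):
    """u_i(x) = <v_i(x), x_i> of eq. (2.3)."""
    return payoff_vector(u_i, x, i) @ x[i]

def is_nash(payoff_tensors, x, tol=1e-9):
    """Characterization (2.4): no pure strategy does better than x_i."""
    return all(
        payoff_vector(u_i, x, i).max() <= payoff_vector(u_i, x, i) @ x[i] + tol
        for i, u_i in enumerate(payoff_tensors)
    )
```

For the 2 × 2 coordination game with common payoff matrix [[1, 0], [0, 2]], the two pure diagonal profiles pass this check (as does the fully mixed profile with x_i = (2/3, 1/3)), while miscoordinated pure profiles do not.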
Throughout this paper, we will focus almost exclusively on the class of potential games, which have been studied extensively in the context of congestion, traffic networks, oligopolies, etc. Following Monderer and Shapley [11], Γ is a potential game if it admits a potential function f : ∏_i A_i → R such that

u_i(x_i; x_{−i}) − u_i(x′_i; x_{−i}) = f(x_i; x_{−i}) − f(x′_i; x_{−i}), (2.5)

for all x_i, x′_i ∈ X_i, x_{−i} ∈ X_{−i} ≡ ∏_{j≠i} X_j, and all i ∈ N. A simple differentiation of (2.1) then yields

v_i(x) = ∇_{x_i} u_i(x) = ∇_{x_i} f(x) for all i ∈ N. (2.6)

Obviously, every local maximizer of f is a Nash equilibrium, so potential games always admit Nash equilibria in pure strategies (which are also strict if the game is generic).

2.2 The exponential weights algorithm

Our basic learning framework is as follows: at each stage n = 1, 2, . . . , all players i ∈ N select an action α_i(n) ∈ A_i based on their mixed strategies; subsequently, they receive some feedback on their chosen actions, they update their mixed strategies, and the process repeats. A popular (and very widely studied) class of algorithms for no-regret learning in this setting is the exponential weights (EW) scheme introduced by Vovk [4] and studied further by Auer et al. [5], Freund and Schapire [6], Arora et al. [7], and many others. Somewhat informally, the main idea is that each player tallies the cumulative payoffs of each of their actions, and then employs a pure strategy α_i ∈ A_i with probability roughly proportional to these cumulative payoff “scores”. Focusing on the so-called “ε-HEDGE” variant of the EW algorithm [6], this process can be described in pseudocode form as follows:

Algorithm 1 ε-HEDGE with generic feedback
Require: step-size sequence γ_n > 0, exploration factor ε ∈ [0, 1], initial scores Y_i ∈ R^{A_i}.
1: for n = 1, 2, . . .
do
2:   for every player i ∈ N do
3:     set mixed strategy: X_i ← ε unif_i + (1 − ε) Λ_i(Y_i);
4:     choose action α_i ∼ X_i;
5:     acquire estimate v̂_i of realized payoff vector v_i(α_i; α_{−i});
6:     update scores: Y_i ← Y_i + γ_n v̂_i;
7:   end for
8: end for

Mathematically, Algorithm 1 represents the recursion

X_i(n) = ε unif_i + (1 − ε) Λ_i(Y_i(n)),
Y_i(n + 1) = Y_i(n) + γ_{n+1} v̂_i(n + 1), (ε-Hedge)

where

unif_i = (1/|A_i|)(1, . . . , 1) (2.7)

stands for the uniform distribution over A_i and Λ_i : R^{A_i} → X_i denotes the logit choice map

Λ_i(y_i) = (exp(y_{iα_i}))_{α_i∈A_i} / Σ_{β_i∈A_i} exp(y_{iβ_i}), (2.8)

which assigns exponentially higher probability to pure strategies with higher scores. Thus, action selection probabilities under (ε-Hedge) are a convex combination of uniform exploration (with total weight ε) and exponential weights (with total weight 1 − ε).³ As a result, for ε ≈ 1, action selection is essentially uniform; at the other extreme, when ε = 0, we obtain the original Hedge algorithm of Freund and Schapire [6] with feedback sequence v̂(n) and no explicit exploration. The no-regret properties of (ε-Hedge) have been extensively studied in the literature as a function of the algorithm’s step-size sequence γ_n, exploration factor ε, and the statistical properties of the payoff estimates v̂(n) – for a survey, we refer the reader to [2, 3]. In our convergence analysis, we examine the role of each of these factors in detail, focusing in particular on the distinction between “semi-bandit feedback” (when it is possible to estimate the payoff of pure strategies that were not played) and “bandit feedback” (when players only observe the payoff of their chosen action).

3 Learning with semi-bandit feedback

3.1 The model

We begin with the semi-bandit framework, i.e. the case where each player has access to a possibly imperfect estimate of their entire payoff vector at stage n.
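As a quick illustration of this setting (our own sketch, not from the paper; the game and parameter choices are ours), the script below runs Algorithm 1 with ε = 0 and exact semi-bandit feedback — each player observes the payoff of every one of their own actions against the opponent’s realized action — on a 2 × 2 coordination game, which is a potential game with f = u_1 = u_2. Play locks onto one of the game’s two strict equilibria:

```python
import numpy as np

rng = np.random.default_rng(0)

# Common payoff matrix: (0,0) and (1,1) are the strict Nash equilibria.
U = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def strategy(y, eps=0.0):
    """X_i = eps*unif_i + (1-eps)*Lambda_i(y), with an overflow-safe logit."""
    z = np.exp(y - y.max())
    return eps / len(y) + (1.0 - eps) * z / z.sum()

y1, y2 = np.zeros(2), np.zeros(2)
for n in range(1, 8001):
    gamma = n ** -0.6                 # step-size gamma_n = n^(-beta)
    a1 = rng.choice(2, p=strategy(y1))
    a2 = rng.choice(2, p=strategy(y2))
    y1 += gamma * U[:, a2]            # payoffs of BOTH own actions vs a2
    y2 += gamma * U[a1, :]            # (exact semi-bandit feedback)

x1, x2 = strategy(y1), strategy(y2)
```

After a short transient, both mixed strategies concentrate on the same pure action, in line with the convergence results of Section 3.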
More precisely, we assume here that the feedback sequence v̂_i(n) to Algorithm 1 is of the general form

v̂_i(n) = v_i(α_i(n); α_{−i}(n)) + ξ_i(n), (3.1)

where (ξ_i(n))_{i∈N} is a martingale noise process representing the players’ estimation error and satisfying the following statistical hypotheses:

1. Zero-mean: E[ξ_i(n) | F_{n−1}] = 0 for all n = 1, 2, . . . (a.s.). (H1)
2. Tame tails: P(∥ξ_i(n)∥_∞ ≥ z | F_{n−1}) ≤ A/z^q for some q > 2, A > 0, and all n = 1, 2, . . . (a.s.). (H2)

³Of course, the exploration factor ε could also be player-dependent. For simplicity, we state all our results here with the same ε for all players.

In the above, the expectation E[·] is taken with respect to some underlying filtered probability space (Ω, F, (F_n)_{n∈N}, P) which serves as a stochastic basis for the process (α(n), v̂(n), Y(n), X(n))_{n≥1}.⁴ In words, Hypothesis (H1) simply means that the players’ feedback sequence v̂(n) is conditionally unbiased with respect to the history of play, i.e.

E[v̂_i(n) | F_{n−1}] = v_i(X(n − 1)) for all n = 1, 2, . . . (a.s.). (3.2a)

Hypothesis (H2) further implies that the variance of the estimator v̂ is conditionally bounded, i.e.

Var[v̂(n) | F_{n−1}] ≤ σ² for all n = 1, 2, . . . (a.s.). (3.2b)

By Chebyshev’s inequality, an estimator with finite variance enjoys the tail bound P(∥ξ_i(n)∥_∞ ≥ z | F_{n−1}) = O(1/z²). At the expense of working with slightly more conservative step-size policies (see below), much of our analysis goes through with this weaker requirement for the tails of ξ. However, the extra control provided by the O(1/z^q) tail bound simplifies the presentation considerably, so we do not consider this relaxation here. In any event, Hypothesis (H2) is satisfied by a broad range of error noise distributions (including all compactly supported, sub-Gaussian and sub-exponential distributions), so the loss in generality is small compared to the gain in clarity and concision.
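For instance (an illustration of ours, not the paper’s), adding bounded zero-mean noise to the exact payoff vector produces feedback satisfying both hypotheses — bounded noise is in particular sub-Gaussian, so (H2) holds for every q > 2 — and the conditional unbiasedness (3.2a) can be checked by simple averaging:

```python
import numpy as np

rng = np.random.default_rng(1)
U = np.array([[1.0, 0.0],
              [0.0, 2.0]])           # player 1's payoff matrix (toy game)

def semi_bandit_feedback(a2):
    """v_hat_1 = v_1(.; a2) + xi with xi zero-mean and compactly
    supported, so hypotheses (H1) and (H2) both hold."""
    xi = rng.uniform(-0.5, 0.5, size=2)
    return U[:, a2] + xi

# (3.2a): averaging the noisy feedback recovers the true payoff vector
mean_feedback = np.mean([semi_bandit_feedback(1) for _ in range(20000)], axis=0)
```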
3.2 Convergence analysis

With all this at hand, our main result for the convergence of (ε-Hedge) with semi-bandit feedback of the form (3.1) is as follows:

Theorem 1. Let Γ be a generic potential game and suppose that Algorithm 1 is run with i) semi-bandit feedback satisfying (H1) and (H2); ii) a nonnegative exploration factor ε ≥ 0; and iii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (1/q, 1]. Then:
1. X(n) converges (a.s.) to a δ-equilibrium of Γ with δ ≡ δ(ε) → 0 as ε → 0.
2. If lim_{n→∞} X(n) is an ε-pure state of the form x∗_i = ε unif_i + (1 − ε) e_{α̂_i} for some α̂ ∈ A, then α̂ is a.s. a strict equilibrium of Γ and convergence occurs at a quasi-exponential rate:

X_{iα̂_i}(n) ≥ 1 − ε − b e^{−c Σ_{k=1}^{n} γ_k} for some positive b, c > 0. (3.3)

Corollary 2. If Algorithm 1 is run with assumptions as above and no exploration (ε = 0), X(n) converges to a Nash equilibrium with probability 1. Moreover, if the limit of X(n) is pure and β < 1, we have

X_{iα̂_i}(n) ≥ 1 − b e^{−c n^{1−β}} for some positive b, c > 0. (3.4)

Sketch of the proof. The proof of Theorem 1 is fairly convoluted, so we relegate the details to the paper’s technical appendix and only present here a short sketch thereof. Our main tool is the so-called ordinary differential equation (ODE) method, a powerful stochastic approximation scheme due to Benaïm and Hirsch [28, 30]. The key observation is that the mixed strategy sequence X(n) generated by Algorithm 1 can be viewed as a “Robbins–Monro approximation” (an asymptotic pseudotrajectory, to be precise) of the ε-perturbed exponential learning dynamics

ẏ_i = v_i(x), x_i = ε unif_i + (1 − ε) Λ_i(y_i). (XLε)

By differentiating, it follows that x_i(t) evolves according to the ε-perturbed replicator dynamics

ẋ_{iα} = (x_{iα} − |A_i|^{−1} ε) [ v_{iα}(x) − (1 − ε)^{−1} Σ_{β∈A_i} (x_{iβ} − |A_i|^{−1} ε) v_{iβ}(x) ], (RDε)

⁴Notation-wise, this means that the players’ actions at stage n are drawn based on their mixed strategies at stage n − 1.
This slight discrepancy with the pseudocode representation of Algorithm 1 is only done to simplify notation later on.

which, for ε = 0, boil down to the ordinary replicator dynamics of Taylor and Jonker [29]:

ẋ_{iα} = x_{iα} [v_{iα}(x) − ⟨v_i(x), x_i⟩]. (RD)

A key property of the replicator dynamics that readily extends to the ε-perturbed variant (RDε) is that the game’s potential f is a strict Lyapunov function – i.e. f(x(t)) is increasing under (RDε) unless x(t) is stationary. By a standard result of Benaïm [28], this implies that the discrete-time process X(n) converges (a.s.) to a connected set of rest points of (RDε), which are themselves approximate restricted equilibria of Γ. Of course, since every ε-pure point of the form (ε unif_i + (1 − ε) e_{α_i})_{i∈N} is also stationary under (RDε), the above does not imply that the limit of X(n) is an approximate equilibrium of Γ. To rule out non-equilibrium outcomes, we first note that the set of rest points of (RDε) is finite (by genericity), so X(n) must converge to a point. Then, the final step of our convergence proof is provided by a martingale recurrence argument which shows that when X(n) converges to a point, this limit must be an approximate equilibrium of Γ. Finally, the rate of convergence (3.3) is obtained by comparing the payoff of a player’s equilibrium strategy to that of the player’s other strategies, and then “inverting” the logit choice map to translate this into an exponential decay rate for ∥X(n) − x∗∥.

We close this section with two remarks on Theorem 1. First, we note that there is an inverse relationship between the tail exponent q in (H2) and the decay rate β of the algorithm’s step-size sequence γ_n ∝ n^{−β}. Specifically, higher values of q imply that the noise in the players’ observations is smaller (on average and with high probability), so players can be more aggressive in their choice of step-size.
This is reflected in the lower bound 1/q for β and the fact that the players’ rate of convergence to Nash equilibrium increases for smaller β; in particular, (3.3) shows that Algorithm 1 enjoys a convergence bound which is just shy of O(exp(−n^{1−1/q})). Thus, if the noise process ξ is sub-Gaussian/sub-exponential (so q can be taken arbitrarily large), a near-constant step-size sequence (small β) yields an almost linear convergence rate. Second, if the noise process ξ is “isotropic” in the sense of Benaïm [28, Thm. 9.1], the instability of non-pure Nash equilibria under the replicator dynamics can be used to show that the limit of X(n) is pure with probability 1.⁵ When this is the case, the quasi-exponential convergence rate (3.3) becomes universal in that it holds with probability 1 (as opposed to conditioning on lim_{n→∞} X(n) being pure). We find this property particularly appealing for practical applications because it shows that equilibrium is reached exponentially faster than the O(1/√n) worst-case regret bound of (ε-Hedge) would suggest.

4 Payoff-based learning: the bandit case

We now turn to the bandit framework, a minimal-information setting where, at each stage of the process, players only observe their realized payoffs

û_i(n) = u_i(α_i(n); α_{−i}(n)). (4.1)

In this case, players have no clue about the payoffs of strategies that were not chosen, so they must construct an estimator for their payoff vector, including its missing components. A standard way to do this is via the bandit estimator

v̂_{iα_i}(n) = [1(α_i(n) = α_i) / P(α_i(n) = α_i | F_{n−1})] · û_i(n) = û_i(n)/X_{iα_i}(n − 1) if α_i = α_i(n), and 0 otherwise.
(4.2)

Indeed, a straightforward calculation shows that

E[v̂_{iα_i}(n) | F_{n−1}] = Σ_{α_{−i}∈A_{−i}} X_{−i,α_{−i}}(n − 1) Σ_{β_i∈A_i} X_{iβ_i}(n − 1) · [1(α_i = β_i)/X_{iα_i}(n − 1)] · u_i(β_i; α_{−i}) = u_i(α_i; X_{−i}(n − 1)) = v_{iα_i}(X(n − 1)), (4.3)

⁵Specifically, we refer here to the so-called “folk theorem” of evolutionary game theory which states that x∗ is asymptotically stable under (RD) if and only if it is a strict Nash equilibrium of Γ [15]. The extension of this result to the ε-replicator system (RDε) is immediate.

so the estimator (4.2) is unbiased in the sense of (H1)/(3.2a). On the other hand, a similar calculation shows that the variance of v̂_{iα_i}(n) grows as O(1/X_{iα_i}(n − 1)), implying that (H2)/(3.2b) may fail to hold if the players’ action selection probabilities become arbitrarily small. Importantly, this can never happen if (ε-Hedge) is run with a strictly positive exploration factor ε > 0. In that case, we can show that the bandit estimator (4.2) satisfies both (H1) and (H2), leading to the following result:

Theorem 3. Let Γ be a generic potential game and suppose that Algorithm 1 is run with i) the bandit estimator (4.2); ii) a strictly positive exploration factor ε > 0; and iii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (0, 1]. Then:
1. X(n) converges (a.s.) to a δ-equilibrium of Γ with δ ≡ δ(ε) → 0 as ε → 0.
2. If lim_{n→∞} X(n) is an ε-pure state of the form x∗_i = ε unif_i + (1 − ε) e_{α̂_i} for some α̂ ∈ A, then α̂ is a.s. a strict equilibrium of Γ and convergence occurs at a quasi-exponential rate:

X_{iα̂_i}(n) ≥ 1 − ε − b e^{−c Σ_{k=1}^{n} γ_k} for some positive b, c > 0. (4.4)

Proof. Under Algorithm 1, the estimator (4.2) gives

∥v̂_i(n)∥ = |û_i(n)| / X_{iα_i(n)}(n − 1) ≤ |u_i(α_i(n); α_{−i}(n))| / ε ≤ u_max / ε, (4.5)

where u_max = max_{i∈N} max_{α_1∈A_1} · · · max_{α_N∈A_N} |u_i(α_1, . . . , α_N)| denotes the absolute maximum payoff in Γ. This implies that (H2) holds true for all q > 2, so our claim follows from Theorem 1.
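Both properties — the unbiasedness (4.3) and the O(1/X_{iα_i}) variance growth — are easy to verify numerically. The sketch below (ours, not the paper’s; the opponent is held fixed so that the payoff vector v_1 is a constant) draws the importance-weighted estimator (4.2) repeatedly:

```python
import numpy as np

rng = np.random.default_rng(2)

x1 = np.array([0.6, 0.3, 0.1])   # player 1's current mixed strategy
v1 = np.array([1.0, 2.0, 0.5])   # true payoff vector (opponent fixed)

def bandit_estimate():
    """One draw of the estimator (4.2): play a ~ x1, observe only the
    realized payoff, divide by the choice probability, zero elsewhere."""
    a = rng.choice(3, p=x1)
    v_hat = np.zeros(3)
    v_hat[a] = v1[a] / x1[a]
    return v_hat

draws = np.array([bandit_estimate() for _ in range(100000)])
```

The empirical mean matches v1 (unbiasedness), while the rarely played third action has a far noisier estimate: the variance of coordinate a is v1[a]²/x1[a] − v1[a]², which blows up as x1[a] → 0.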
Theorem 3 shows that the limit of Algorithm 1 is closer to the Nash set of the game if the exploration factor ε is taken as small as possible. On the other hand, the crucial limitation of this result is that it does not apply to the case ε = 0, which corresponds to the game’s bona fide Nash equilibria. As we discussed above, the reason for this is that the variance of v̂(n) may grow without bound if action choice probabilities become arbitrarily small, in which case the main components of our proof break down. With this “bias-variance” trade-off in mind, we introduce below a modified version of Algorithm 1 with an “annealing” schedule for the method’s exploration factor:

Algorithm 2 Exponential weights with annealing
Require: step-size sequence γ_n > 0, vanishing exploration factor ε_n > 0, initial scores Y_i ∈ R^{A_i}
1: for n = 1, 2, . . . do
2:   for every player i ∈ N do
3:     set mixed strategy: X_i ← ε_n unif_i + (1 − ε_n) Λ_i(Y_i);
4:     choose action α_i ∼ X_i and receive payoff û_i ← u_i(α_i; α_{−i});
5:     set v̂_{iα_i} ← û_i/X_{iα_i} and v̂_{iβ_i} ← 0 for β_i ≠ α_i;
6:     update scores: Y_i ← Y_i + γ_n v̂_i;
7:   end for
8: end for

Of course, the convergence of Algorithm 2 depends heavily on the rate at which ε_n decays to 0 relative to the algorithm’s step-size sequence γ_n. This can be seen clearly in our next result:

Theorem 4. Let Γ be a generic potential game and suppose that Algorithm 2 is run with i) the bandit estimator (4.2); ii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (1/2, 1]; and iii) a decreasing exploration factor ε_n ↓ 0 such that

lim_{n→∞} γ_n/ε_n² = 0,  Σ_{n=1}^∞ γ_n²/ε_n < ∞,  and  lim_{n→∞} (ε_n − ε_{n+1})/γ_n = 0. (4.6)

Then, X(n) converges (a.s.) to a Nash equilibrium of Γ.

The main challenge in proving Theorem 4 is that, unless the “innovation term” U_i(n) = v̂_i(n) − v_i(X(n − 1)) has bounded variance, Benaïm’s general theory does not imply that X(n) forms an asymptotic pseudotrajectory of the underlying mean dynamics – here, the unperturbed replicator system (RD).
Nevertheless, under the summability condition (4.6), it is possible to show that this is the case by using a martingale limit argument based on Burkholder’s inequality. Furthermore, under the stated conditions, it is also possible to show that, if X(n) converges, its limit is necessarily a Nash equilibrium of Γ. Our proof then follows in roughly the same way as in the case of Theorem 1; for the details, we refer the reader to the appendix. We close this section by noting that the summability condition (4.6) imposes a lower bound on the step-size exponent β that is different from the lower bound in Theorem 3. In particular, if β = 1/2, (4.6) cannot hold for any vanishing sequence of exploration factors ε_n ↓ 0. Given that the innovation term U_i is bounded, we conjecture that this sufficient condition is not tight and can be relaxed further. We intend to address this issue in future work.

5 Conclusion and perspectives

The results of the previous sections show that no-regret learning via exponential weights enjoys appealing convergence properties in generic potential games. Specifically, in the semi-bandit case, the sequence of play converges to a Nash equilibrium with probability 1, and convergence to pure equilibria occurs at a quasi-exponential rate. In the bandit case, the same holds true for O(ε)-equilibria if the algorithm is run with a positive mixing factor ε > 0; and if the algorithm is run with a decreasing mixing schedule, the sequence of play converges to an actual Nash equilibrium (again, with probability 1). In future work, we intend to examine the algorithm’s convergence properties in other classes of games (such as smooth games), extend our analysis to the general “follow the regularized leader” (FTRL) class of policies (of which EW is a special case), and examine the impact of asynchronicities and delays in the players’ feedback/update cycles.

Acknowledgments

Johanne Cohen was partially supported by the grant CNRS PEPS MASTODONS project ADOC 2017.
Amélie Héliou and Panayotis Mertikopoulos gratefully acknowledge financial support from the Huawei Innovation Research Program ULTRON and the ANR JCJC project ORACLESS (grant no. ANR–16–CE33–0004–01).

References

[1] James Hannan. Approximation to Bayes risk in repeated play. In Melvin Dresher, Albert William Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, Volume III, volume 39 of Annals of Mathematics Studies, pages 97–139. Princeton University Press, Princeton, NJ, 1957.
[2] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[3] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[4] Volodimir G. Vovk. Aggregating strategies. In COLT ’90: Proceedings of the 3rd Workshop on Computational Learning Theory, pages 371–383, 1990.
[5] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, 1995.
[6] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
[7] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[8] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, September 2000.
[9] Yannick Viossat and Andriy Zapechelnyuk. No-regret dynamics and fictitious play. Journal of Economic Theory, 148(2):825–842, March 2013.
[10] Panayotis Mertikopoulos, Christos H. Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning.
In SODA ’18: Proceedings of the 29th annual ACM-SIAM symposium on discrete algorithms, to appear.
[11] Dov Monderer and Lloyd S. Shapley. Potential games. Games and Economic Behavior, 14(1):124–143, 1996.
[12] Noam Nisan, Tim Roughgarden, Eva Tardos, and V. V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007.
[13] William H. Sandholm. Population Games and Evolutionary Dynamics. Economic learning and social evolution. MIT Press, Cambridge, MA, 2010.
[14] Samson Lasaulce and Hamidou Tembine. Game Theory and Learning for Wireless Networks: Fundamentals and Applications. Academic Press, Elsevier, 2010.
[15] Josef Hofbauer and Karl Sigmund. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(4):479–519, July 2003.
[16] Dean Foster and Rakesh V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1):40–55, October 1997.
[17] Avrim Blum and Yishay Mansour. Learning, regret minimization, and equilibria. In Noam Nisan, Tim Roughgarden, Eva Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory, chapter 4. Cambridge University Press, 2007.
[18] Avrim Blum, Mohammad Taghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In STOC ’08: Proceedings of the 40th annual ACM symposium on the Theory of Computing, pages 373–382. ACM, 2008.
[19] Robert Kleinberg, Georgios Piliouras, and Éva Tardos. Load balancing without regret in the bulletin board model. Distributed Computing, 24(1):21–29, 2011.
[20] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Advances in Neural Information Processing Systems, pages 2989–2997, 2015.
[21] Tim Roughgarden. Intrinsic robustness of the price of anarchy. Journal of the ACM (JACM), 62(5):32, 2015.
[22] Dylan J. Foster, Thodoris Lykouris, Karthik Sridharan, and Eva Tardos. Learning in games: Robustness of fast convergence.
In Advances in Neural Information Processing Systems, pages 4727–4735, 2016.
[23] Robert Kleinberg, Georgios Piliouras, and Eva Tardos. Multiplicative updates outperform generic no-regret learning in congestion games. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 533–542. ACM, 2009.
[24] Ruta Mehta, Ioannis Panageas, and Georgios Piliouras. Natural selection as an inhibitor of genetic diversity: Multiplicative weights updates algorithm and a conjecture of haploid genetics. In ITCS ’15: Proceedings of the 6th Conference on Innovations in Theoretical Computer Science, 2015.
[25] Gerasimos Palaiopanos, Ioannis Panageas, and Georgios Piliouras. Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos. In NIPS ’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[26] Walid Krichene, Benjamin Drighès, and Alexandre M. Bayen. Online learning of Nash equilibria in congestion games. SIAM Journal on Control and Optimization, 53(2):1056–1081, 2015.
[27] Pierre Coucheney, Bruno Gaujal, and Panayotis Mertikopoulos. Penalty-regulated dynamics and robust learning procedures in games. Mathematics of Operations Research, 40(3):611–633, August 2015.
[28] Michel Benaïm. Dynamics of stochastic approximation algorithms. Séminaire de probabilités de Strasbourg, 33, 1999.
[29] Peter D. Taylor and Leo B. Jonker. Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1-2):145–156, 1978.
[30] Michel Benaïm and Morris W. Hirsch. Asymptotic pseudotrajectories and chain recurrent flows, with applications. Journal of Dynamics and Differential Equations, 8(1):141–176, 1996.
A Greedy Approach for Budgeted Maximum Inner Product Search

Hsiang-Fu Yu* Amazon Inc. rofuyu@cs.utexas.edu
Cho-Jui Hsieh University of California, Davis chohsieh@ucdavis.edu
Qi Lei The University of Texas at Austin leiqi@ices.utexas.edu
Inderjit S. Dhillon The University of Texas at Austin inderjit@cs.utexas.edu

Abstract

Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of low-rank matrix factorization models and deep learning models. Recently, there has been substantial research on how to perform MIPS in sub-linear time, but most of the existing work does not have the flexibility to control the trade-off between search efficiency and search quality. In this paper, we study the important problem of MIPS with a computational budget. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to state-of-the-art approaches. As a specific example, on a candidate set containing half a million vectors of dimension 200, Greedy-MIPS runs 200x faster than the naive approach while yielding search results with the top-5 precision greater than 75%.

1 Introduction

In this paper, we study the computational issue in the prediction phase for many embedding based models such as matrix factorization and deep learning models in recommender systems, which can be mathematically formulated as a Maximum Inner Product Search (MIPS) problem. Specifically, given a large collection of n candidate vectors H = {h_j ∈ R^k : j = 1, . . . , n} and a query vector w ∈ R^k, MIPS aims to identify a subset of candidates that have the largest inner product values with w. We also denote H = [h_1, . . . , h_j, . . . , h_n]^⊤ as the candidate matrix.
A naive linear search procedure to solve MIPS for a given query w requires O(nk) operations to compute n inner products and O(n log n) operations to obtain the sorted ordering of the n candidates. Recently, MIPS has drawn a lot of attention in the machine learning community due to its wide applicability, such as the prediction phase of embedding based recommender systems [6, 7, 10]. In such an embedding based recommender system, each user i is associated with a vector w_i of dimension k, while each item j is associated with a vector h_j of dimension k. The interaction (such as preference) between a user and an item is modeled by w_i^⊤ h_j. It is clear that identifying top-ranked items in such a system for a user is exactly a MIPS problem. Because both the number of users (the number of queries) and the number of items (size of the vector pool in MIPS) can easily grow to millions, a naive linear search is extremely expensive; for example, to compute the preference for all m users over n items with latent embeddings of dimension k in a recommender system requires at least O(mnk) operations. When both m and n are large, the prediction procedure is extremely time consuming; it is even slower than the training procedure used to obtain the m + n embeddings, which costs only O(|Ω|k) operations per iteration, where |Ω| is the number of observations and is much smaller than mn. Taking the yahoo-music dataset as an example, m = 1M, n = 0.6M, |Ω| = 250M, and mn = 600B ≫ 250M = |Ω|. As a result, the development of efficient algorithms for MIPS is needed in large-scale recommender systems.

*Work done while at the University of Texas at Austin.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
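For concreteness, the naive baseline is a few lines of NumPy (our own sketch, not the paper’s code): n inner products followed by a full sort, i.e. exactly the O(nk + n log n) per-query cost that fast MIPS methods try to beat:

```python
import numpy as np

def naive_mips(H, w, top=5):
    """Exact MIPS by brute force: O(nk) for the inner products plus
    O(n log n) for the full descending sort."""
    scores = H @ w               # all n inner products
    order = np.argsort(-scores)  # full sort, descending
    return order[:top], scores[order[:top]]

rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 8))
w = rng.standard_normal(8)
top_idx, top_vals = naive_mips(H, w)
```

(If only the top B entries are needed, np.argpartition reduces the sorting term to O(n + B log B), but the O(nk) inner-product cost remains.)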
In addition, MIPS can be found in many other machine learning applications, such as the prediction for a multi-class or multi-label classifier [16, 17], an object detector, a structured SVM predictor, or as a black-box routine to improve the efficiency of learning and inference algorithms [11]. The prediction phase of a neural network could also benefit from a faster MIPS algorithm: the last layer of a NN is often a dense fully-connected layer, so finding the label with the maximum score becomes a MIPS problem with dense vectors [6]. There is a recent line of research on accelerating MIPS for large n, such as [2, 3, 9, 12–14]. However, most of them do not have the flexibility to control the trade-off between search efficiency and search quality in the prediction phase. In this paper, we consider the budgeted MIPS problem, which is a generalized version of the standard MIPS with a computation budget: how to generate a set of top-ranked candidates under a given budget on the number of inner products one can perform. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which handles budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to existing approaches.

Our Contributions:
• We develop Greedy-MIPS, which is a novel algorithm without any nearest neighbor search reduction that is essential in many state-of-the-art approaches [2, 12, 14].
• We establish a sublinear time theoretical guarantee for Greedy-MIPS under certain assumptions.
• Greedy-MIPS is orders of magnitude faster than many state-of-the-art MIPS approaches at obtaining a desired search performance.
As a specific example, on the yahoo-music data set with n = 624,961 and k = 200, Greedy-MIPS runs 200x faster than the naive approach and yields search results with a top-5 precision of more than 75%, while the search performance of other state-of-the-art approaches under a similar speedup drops to less than 3% precision.
• Greedy-MIPS supports MIPS with a budget, which brings the ability to control the trade-off between computation efficiency and search quality in the prediction phase.

2 Existing Approaches for Fast MIPS

Because of its wide applicability, several algorithms have been proposed for efficient MIPS. Most existing approaches reduce the MIPS problem to a nearest neighbor search problem (NNS), where the goal is to identify the nearest candidates of the given query, and apply an existing efficient NNS algorithm to solve the reduced problem. [2] is the first MIPS work which adopts such a MIPS-to-NNS reduction. Variants of the MIPS-to-NNS reduction are also proposed in [14, 15]. Experimental results in [2] show the superiority of the NNS reduction over the traditional branch-and-bound search approaches for MIPS [9, 13]. After the reduction, there are many choices to solve the transformed NNS problem, such as the locality sensitive hashing scheme (LSH-MIPS) considered in [12, 14, 15], PCA-tree based approaches (PCA-MIPS) in [2], or K-Means approaches in [1]. Fast MIPS approaches with sampling schemes have become popular recently. Various sampling schemes have been proposed to handle the MIPS problem with different constraints. The idea of the sampling-based MIPS approach was first proposed in [5] as an approach to perform approximate matrix-matrix multiplications. Its applicability to MIPS problems was studied very recently [3]. The idea behind a sampling-based approach, called Sample-MIPS, is to design an efficient sampling procedure such that the j-th candidate is selected with probability p(j) ∝ h_j^⊤ w.
In particular, Sample-MIPS is an efficient scheme to sample (j, t) ∈ [n] × [k] with probability p(j, t) ∝ h_jt · w_t. Each time a pair (j, t) is sampled, we increase the count for the j-th item by one. By the end of the sampling process, the spectrum of the counts forms an estimate of the n inner product values. Due to the nature of the sampling approach, it can only handle the situation where all the candidate vectors and query vectors are nonnegative. Diamond-MSIPS, a diamond sampling scheme proposed in [3], is an extension of Sample-MIPS to handle the maximum squared inner product search problem (MSIPS), where the goal is to identify candidate vectors with the largest values of (h_j^⊤ w)². However, the solutions to MSIPS can be very different from the solutions to MIPS in general. For example, if all the inner product values are negative, the ordering for MSIPS is exactly the reverse of the ordering induced by MIPS. Here we can see that the applicability of both Sample-MIPS and Diamond-MSIPS to MIPS is very limited.

3 Budgeted MIPS

The core idea behind fast approximate MIPS approaches is to trade search quality for shorter query latency: the shorter the search latency, the lower the search quality. In most existing fast MIPS approaches, the trade-off depends on approach-specific parameters such as the depth of the PCA tree in PCA-MIPS or the number of hash functions in LSH-MIPS. Such parameters are usually required to construct approach-specific data structures before any query is given, which means that the trade-off is somewhat fixed for all the queries. Thus, the computation cost for a given query is fixed. However, in many real-world scenarios, each query might have a different computational budget, which raises the question: can we design a MIPS approach supporting the dynamic adjustment of the trade-off in the query phase?
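To make the sampling idea of Section 2 concrete before formalizing budgets, here is an illustrative implementation of the Sample-MIPS scheme (our own sketch, not the authors’ code; the toy data are ours). It draws pairs (j, t) with p(j, t) ∝ h_jt · w_t by first sampling the coordinate t from its marginal and then sampling j conditionally on t, so the resulting counts concentrate on candidates with large inner products:

```python
import numpy as np

def sample_mips(H, w, budget, rng):
    """Draw `budget` pairs (j, t) with p(j, t) proportional to H[j, t]*w[t]
    (H and w entrywise nonnegative); the count spectrum over j estimates
    H @ w up to a common normalization."""
    n, k = H.shape
    t_mass = H.sum(axis=0) * w          # marginal of t: w_t * sum_j h_jt
    p_t = t_mass / t_mass.sum()
    p_j_given_t = H / H.sum(axis=0)     # p(j | t) = h_jt / sum_j h_jt
    counts = np.zeros(n)
    for t in rng.choice(k, size=budget, p=p_t):
        j = rng.choice(n, p=p_j_given_t[:, t])
        counts[j] += 1
    return counts

rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, size=(50, 10))
H[7] += 2.0                              # one clearly dominant candidate
w = rng.uniform(0.0, 1.0, size=10)
counts = sample_mips(H, w, budget=5000, rng=rng)
```

With the dominant candidate above, the count spectrum reliably ranks item 7 first; in expectation, counts[j]/budget equals h_j^⊤w divided by the total mass Σ_j h_j^⊤w.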
3.1 Essential Components for Fast MIPS

Before any query request:
• Query-Independent Data Structure Construction: a pre-processing procedure is performed on the entire candidate set to construct an approach-specific data structure D that stores information about H: the LSH hash tables, space partition trees (e.g., KD-tree or PCA-tree), or cluster centroids.

For each query request:
• Query-Dependent Pre-processing: in some approaches, query-dependent pre-processing is needed. For example, a vector augmentation is required in all MIPS-to-NNS approaches. In addition, [2] also requires another normalization. T_P denotes the time complexity of this stage.
• Candidate Screening: in this stage, based on the pre-constructed data structure D, an efficient procedure is performed to filter candidates such that only a subset of candidates C(w) ⊂ H is selected. In the naive linear approach, no screening is performed, so C(w) simply contains all n candidates. For a tree-based structure, C(w) contains all the candidates stored in the leaf node reached by the query vector. In a sampling-based MIPS approach, an efficient sampling scheme is designed to generate promising candidates to form C(w). T_S denotes the computational cost of the screening stage.
• Candidate Ranking: an exact ranking is performed on the selected candidates in C(w) obtained from the screening stage. This involves the computation of |C(w)| inner products and a sort over these |C(w)| values, so the overall time complexity is T_R = O(|C(w)|k + |C(w)| log |C(w)|).

The per-query computational cost is therefore
T_Q = T_P + T_S + T_R. (1)
It is clear that the candidate screening stage is the key component of a fast MIPS approach. In terms of search quality, performance depends on whether the screening procedure can identify promising candidates. Regarding query latency, efficiency depends on the size of C(w) and how fast C(w) can be generated.
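The candidate ranking stage is the same in all of these approaches; a minimal sketch (names are ours) makes the T_R = O(|C|k + |C| log |C|) cost concrete:

```python
def rank_candidates(H, w, cand, top_k):
    """Candidate ranking stage: exact inner products over the screened
    subset C(w), followed by a sort -- O(|C|k + |C| log |C|) overall."""
    scored = [(sum(h * wt for h, wt in zip(H[j], w)), j) for j in cand]
    scored.sort(reverse=True)
    return [j for _, j in scored[:top_k]]

H = [[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]]
w = [1.0, 1.0]
# With no screening (the naive approach), C(w) is all n candidates.
top2 = rank_candidates(H, w, cand=[0, 1, 2], top_k=2)
```

A fast MIPS method shrinks `cand` from all n items down to a small screened subset, which is where the different approaches diverge.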
The major difference among MIPS approaches is the choice of the data structure D and the screening procedure.

3.2 Budgeted MIPS: Problem Definition

Budgeted MIPS is an extension of the standard approximate MIPS problem with a computational budget: how to generate top-ranked candidates under a given budget on the number of inner products one can perform. Note that the cost of candidate ranking (T_R) is inevitable in the per-query cost (1). A viable approach for budgeted MIPS must include a screening procedure that satisfies the following requirements:
• the flexibility to control the size of C(w) in the candidate screening stage such that |C(w)| ≤ B, where B is a given budget, and
• an efficient screening procedure obtaining C(w) in O(Bk) time, so that T_Q = O(Bk + B log B).

As mentioned earlier, most recently proposed MIPS-to-NNS algorithms apply search space partition data structures or techniques (e.g., LSH, KD-tree, or PCA-tree) designed for NNS to index the candidates H in the query-independent pre-processing stage. As the construction of D is query independent, both the search performance and the computational cost are essentially fixed once the construction is done. For example, the performance of PCA-MIPS depends on the depth of the PCA tree. Given a query vector w, there is no control over the size of C(w) in the candidate generation phase. LSH-based approaches have a similar issue. While there might be ad hoc treatments to adjust C(w), it is not clear how to generalize PCA-MIPS and LSH-MIPS in a principled way to handle a computational budget: how to reduce the size of C(w) under a limited budget, and how to improve performance when a larger budget is given. Unlike other NNS-based algorithms, the design of Sample-MIPS naturally enables it to support budgeted MIPS for a nonnegative candidate matrix H and a nonnegative query w.
The more samples drawn, the lower the variance of the estimated frequency spectrum. Clearly, Sample-MIPS has the flexibility to control the size of C(w), and thus is a viable approach for the budgeted MIPS problem. However, Sample-MIPS only works in the situation with nonnegative H and w. Diamond-MSIPS has the same issue.

4 Greedy-MIPS

We carefully study the structure of MIPS and develop a simple but novel algorithm called Greedy-MIPS, which handles budgeted MIPS by design. Unlike the recent MIPS-to-NNS approaches, Greedy-MIPS does not rely on any reduction to an NNS problem. Moreover, Greedy-MIPS is a viable approach for the budgeted MIPS problem without the nonnegativity limitation inherent in the sampling approaches. The key component of a fast MIPS approach is the algorithm used in the candidate screening phase. In budgeted MIPS, for any given budget B and query w, an ideal procedure for the candidate screening phase would cost O(Bk) time to generate a C(w) containing the B items with the largest inner product values over the n candidates in H. The requirement of O(Bk) time complexity implies that the procedure is independent of n = |H|, the number of candidates in H. One might wonder whether such an ideal procedure exists. In fact, designing such an ideal procedure, which must generate the largest B items in O(Bk) time, is even more challenging than the original budgeted MIPS problem.

Definition 1. The rank of an item x among a set of items X = {x_1, . . . , x_|X|} is defined as
rank(x | X) := Σ_{j=1}^{|X|} I[x_j ≥ x], (2)
where I[·] is the indicator function. A ranking induced by X is a function π(·) : X → {1, . . . , |X|} such that π(x_j) = rank(x_j | X) ∀ x_j ∈ X.

One way to store a ranking π(·) induced by X is a sorted index array s[r] of size |X| such that π(x_{s[1]}) ≤ π(x_{s[2]}) ≤ · · · ≤ π(x_{s[|X|]}). That is, s[r] stores the index of the item x with π(x) = r.
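Definition 1 and the sorted index array can be written out directly; the sketch below (0-based indices, names ours) mirrors Eq. (2):

```python
def rank(x, X):
    # Definition 1: rank(x | X) counts the items in X that are >= x.
    return sum(1 for xj in X if xj >= x)

def sorted_index_array(X):
    """Return s with s[r] holding the index of the item of rank r + 1
    (0-based r), i.e. the items of X listed from largest to smallest."""
    return sorted(range(len(X)), key=lambda j: -X[j])

X = [0.3, 0.9, 0.1]
s = sorted_index_array(X)        # indices of 0.9, 0.3, 0.1 in that order
ranks = [rank(x, X) for x in X]  # rank of each item of X
```

With distinct values, reading X through s visits the items in decreasing order, which is exactly how the paper stores rankings.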
To design an efficient candidate screening procedure, we study the operations required for MIPS. In the simple linear MIPS approach, nk multiplications are required to obtain the n inner product values {h_1^⊤w, . . . , h_n^⊤w}. We define an implicit matrix Z ∈ R^{n×k} as Z = H diag(w), where diag(w) ∈ R^{k×k} is the matrix with w on its diagonal. The (j, t) entry of Z corresponds to the multiplication z_jt = h_jt w_t, and z_j = diag(w) h_j denotes the j-th row of Z. In Figure 1, we use Z^⊤ to illustrate the implicit matrix. Note that Z is query dependent, i.e., the values of Z depend on the query vector w, and the n inner product values can be obtained by column-wise summation of Z^⊤. In particular, for each j we have h_j^⊤w = Σ_{t=1}^{k} z_jt, j = 1, . . . , n. Thus, the ranking induced by the n inner product values can be characterized by the marginal ranking π(j|w) defined on the implicit matrix Z as follows:
π(j|w) := rank( Σ_{t=1}^{k} z_jt | { Σ_{t=1}^{k} z_1t, · · · , Σ_{t=1}^{k} z_nt } ) = rank( h_j^⊤w | {h_1^⊤w, . . . , h_n^⊤w} ). (3)
As mentioned earlier, it is hard to design an ideal candidate screening procedure generating C(w) based on the marginal ranking. Because the main goal of the candidate screening phase is to quickly identify candidates that are likely to be top-ranked, it suffices to have an efficient procedure generating C(w) by an approximate ranking. Here we propose a greedy heuristic ranking:
π̄(j|w) := rank( max_{t=1}^{k} z_jt | { max_{t=1}^{k} z_1t, · · · , max_{t=1}^{k} z_nt } ), (4)
which is obtained by replacing the summations in (3) by max operators. The intuition behind this heuristic is that the largest element of z_j multiplied by k is an upper bound on h_j^⊤w:
h_j^⊤w = Σ_{t=1}^{k} z_jt ≤ k max{z_jt : t = 1, . . . , k}. (5)
Thus π̄(j|w), which is induced by this upper bound on h_j^⊤w, can be a reasonable approximation to the marginal ranking π(j|w).
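The sum-versus-max contrast between (3) and (4) can be sketched directly (function names are ours); each ranking is represented by the ordering of candidate indices it induces:

```python
def marginal_order(H, w):
    # Order induced by pi(j|w): candidates sorted by the true inner
    # product h_j^T w = sum_t z_jt.
    scores = [sum(h * wt for h, wt in zip(hj, w)) for hj in H]
    return sorted(range(len(H)), key=lambda j: -scores[j])

def greedy_order(H, w):
    # Order induced by the heuristic pi_bar(j|w) of Eq. (4): the sum
    # over t is replaced by max_t z_jt with z_jt = h_jt * w_t.
    best = [max(h * wt for h, wt in zip(hj, w)) for hj in H]
    return sorted(range(len(H)), key=lambda j: -best[j])

H = [[3.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
w = [1.0, 1.0]
```

On this toy instance both orderings agree on the top item (index 0), even though the heuristic reorders the remaining candidates.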
Figure 1: nk multiplications in a naive linear MIPS approach, shown on Z^⊤ = diag(w)H^⊤ with z_jt = h_jt w_t. π(j, t|w): joint ranking. π_t(j|w): conditional ranking. π(j|w): marginal ranking.

Next we design an efficient procedure to generate C(w) according to the ranking π̄(j|w) defined in (4). First, based on the relative orderings of {z_jt}, we consider the joint ranking and the conditional ranking defined as follows:
• Joint ranking: π(j, t|w) is the exact ranking over the nk entries of Z: π(j, t|w) := rank(z_jt | {z_11, . . . , z_nk}).
• Conditional ranking: π_t(j|w) is the exact ranking over the n entries of the t-th row of Z^⊤: π_t(j|w) := rank(z_jt | {z_1t, . . . , z_nt}).
See Figure 1 for an illustration of both rankings. Like the marginal ranking, both the joint and conditional rankings are query dependent. Observe that in (4), for each j, only the single maximum entry of z_j, max_{t=1}^{k} z_jt, is considered to obtain the ranking π̄(j|w). To generate C(w) based on π̄(j|w), we can iterate over the (j, t) entries of Z in a greedy sequence such that (j_1, t_1) is visited before (j_2, t_2) if z_{j_1 t_1} > z_{j_2 t_2}, which is exactly the sequence corresponding to the joint ranking π(j, t|w). Each time an entry (j, t) is visited, we include the index j in C(w) if j ∉ C(w). In Theorem 1, we show that the order in which newly observed indices j enter C(w) is exactly the order induced by the ranking π̄(j|w) defined in (4).

Theorem 1. For all j_1 and j_2 such that π̄(j_1|w) < π̄(j_2|w), j_1 will be included in C(w) before j_2 if we iterate over (j, t) pairs following the sequence induced by the joint ranking π(j, t|w). A proof can be found in Section D.1.
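Theorem 1 can be illustrated with a deliberately naive traversal that materializes the joint ranking by sorting all nk entries (names ours); Greedy-MIPS itself avoids this full sort, as described next:

```python
def screen_by_joint_ranking(H, w, budget):
    """Visit the (j, t) entries of Z in decreasing z_jt order (the
    joint ranking) and collect each newly observed index j, as in
    Theorem 1. Naive O(nk log nk) version for clarity only."""
    n, k = len(H), len(w)
    entries = sorted(((H[j][t] * w[t], j) for j in range(n) for t in range(k)),
                     reverse=True)
    C = []
    for _, j in entries:
        if j not in C:
            C.append(j)
            if len(C) == budget:
                break
    return C

H = [[3.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
w = [1.0, 1.0]
C = screen_by_joint_ranking(H, w, budget=2)
```

By Theorem 1, the collection order [0, 2, ...] is exactly the ordering induced by the heuristic ranking π̄(j|w) on this instance.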
At first glance, generating (j, t) in the sequence of the joint ranking π(j, t|w) appears to require access to all nk entries of Z and cost O(nk) time. In fact, based on Property 1 of the conditional rankings, we can design an efficient variant of the k-way merge algorithm [8] to generate the (j, t) pairs in the desired sequence iteratively.

Property 1. Given a fixed candidate matrix H, for any possible w with w_t ≠ 0, the conditional ranking π_t(j|w) is either π_{t+}(j) or π_{t−}(j), where π_{t+}(j) = rank(h_jt | {h_1t, . . . , h_nt}) and π_{t−}(j) = rank(−h_jt | {−h_1t, . . . , −h_nt}). In particular, π_t(j|w) = π_{t+}(j) if w_t > 0, and π_t(j|w) = π_{t−}(j) if w_t < 0.

Property 1 enables us to characterize the query-dependent conditional ranking π_t(j|w) by two query-independent rankings π_{t+}(j) and π_{t−}(j). Thus, for each t, we can construct and store a sorted index array s_t[r], r = 1, . . . , n, such that
π_{t+}(s_t[1]) ≤ π_{t+}(s_t[2]) ≤ · · · ≤ π_{t+}(s_t[n]), (6)
π_{t−}(s_t[1]) ≥ π_{t−}(s_t[2]) ≥ · · · ≥ π_{t−}(s_t[n]). (7)
Thus, in the query-independent data structure construction phase of Greedy-MIPS, we compute and store the k query-independent rankings π_{t+}(·) as k sorted index arrays of length n: s_t[r], r = 1, . . . , n, t = 1, . . . , k. The entire construction costs O(kn log n) time and O(kn) space.

Next we describe the details of the proposed Greedy-MIPS algorithm for a given query w and budget B. Greedy-MIPS utilizes the idea of the k-way merge algorithm to visit the (j, t) entries of Z according to the joint ranking π(j, t|w). Designed to merge k sorted sublists into a single sorted list, the k-way merge algorithm uses 1) k pointers, one for each sorted sublist, and 2) a binary tree structure (either a heap or a selection tree) containing the elements pointed to by these k pointers to obtain the next element to be appended to the sorted list [8].

4.1 Query-dependent Pre-processing

We divide the nk entries (j, t) into k groups. The t-th group contains n entries: {(j, t) : j = 1, . . . , n}.
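The query-independent construction is just a per-dimension argsort; a minimal sketch (0-based, names ours):

```python
def build_sorted_arrays(H):
    """Query-independent construction: for each dimension t, an index
    array listing j by decreasing h_jt (the ranking pi_t+ of Property 1).
    pi_t- is recovered by reading the same array backwards, so only one
    array per dimension is stored. O(kn log n) time, O(kn) space."""
    n, k = len(H), len(H[0])
    return [sorted(range(n), key=lambda j, t=t: -H[j][t]) for t in range(k)]

H = [[1.0, 5.0], [3.0, 2.0], [2.0, 9.0]]
s = build_sorted_arrays(H)
```

Since these arrays depend only on H, they are built once and shared by every subsequent query.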
Here we need an iterator playing a role similar to a pointer, which can iterate over indices j ∈ {1, . . . , n} in the sorted sequence induced by the conditional ranking π_t(·|w). Utilizing Property 1, the t-th pre-computed sorted array s_t[r], r = 1, . . . , n can be used to construct such an iterator, called CondIter, which supports current() to access the currently pointed index j and getNext() to advance the iterator.

Algorithm 1 CondIter: an iterator over j ∈ {1, . . . , n} based on the conditional ranking π_t(j|w). This code assumes that the k sorted index arrays s_t[r], r = 1, . . . , n, t = 1, . . . , k are available.
class CondIter:
  def constructor(dim_idx, query_val):
    t, w, ptr ← dim_idx, query_val, 1
  def current():
    return s_t[ptr] if w > 0, else s_t[n − ptr + 1]
  def hasNext(): return (ptr < n)
  def getNext(): ptr ← ptr + 1 and return current()

Algorithm 2 Query-dependent pre-processing procedure in Greedy-MIPS.
• Input: query w ∈ R^k
• For t = 1, . . . , k:
  - iters[t] ← CondIter(t, w_t)
  - z ← h_jt w_t, where j = iters[t].current()
  - Q.push((z, t))
• Output:
  - iters[t], t ≤ k: iterators for π_t(·|w).
  - Q: a max-heap of {(z, t) | z = max_{j=1}^{n} z_jt, ∀ t ≤ k}.

In Algorithm 1, we describe pseudocode for CondIter, which utilizes facts (6) and (7) so that both construction and index access cost O(1) space and O(1) time. For each t, we use iters[t] to denote the CondIter for the t-th conditional ranking π_t(j|w). As the binary tree structure in Greedy-MIPS, we consider a max-heap Q of (z, t) pairs, where z ∈ R is the key used to maintain the heap property of Q, and t ∈ {1, . . . , k} is the index of an entry group. Each (z, t) ∈ Q denotes the (j, t) entry of Z with j = iters[t].current() and z = z_jt = h_jt w_t. Note that there are at most k elements in the max-heap at any time.
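A direct Python rendering of Algorithm 1 (0-based pointers, names ours) shows how one stored array serves both signs of w_t:

```python
class CondIter:
    """Iterate over j in the order of the conditional ranking
    pi_t(.|w), given the pre-sorted array s_t (indices by decreasing
    h_jt). For w_t > 0 the array is read forwards; for w_t < 0,
    backwards (Property 1)."""
    def __init__(self, sorted_idx, query_val):
        self.s, self.w, self.ptr = sorted_idx, query_val, 0
    def current(self):
        if self.w > 0:
            return self.s[self.ptr]
        return self.s[len(self.s) - 1 - self.ptr]
    def has_next(self):
        return self.ptr < len(self.s) - 1
    def get_next(self):
        self.ptr += 1
        return self.current()

s_t = [1, 2, 0]            # indices j by decreasing h_jt
fwd = CondIter(s_t, 0.7)   # w_t > 0: largest z_jt first
bwd = CondIter(s_t, -0.7)  # w_t < 0: the order reverses
```

Both construction and each access are O(1), matching the complexity claims in the text.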
Thus, we can implement Q as a binary heap such that 1) Q.top() returns the maximum pair (z, t) in O(1) time; 2) Q.pop() deletes the maximum pair of Q in O(log k) time; and 3) Q.push((z, t)) inserts a new pair in O(log k) time. Note that Greedy-MIPS can also be implemented using a selection tree over the k entries pointed to by the k iterators; see Section B in the supplementary material for more details. In the query-dependent pre-processing phase, we construct iters[t], t = 1, . . . , k, one for each conditional ranking π_t(j|w), and a max-heap Q initialized to contain {(z, t) | z = max_{j=1}^{n} z_jt, t ≤ k}. A detailed procedure is described in Algorithm 2, which costs O(k log k) time and O(k) space.

4.2 Candidate Screening

The core idea of Greedy-MIPS is to iteratively traverse the (j, t) entries of Z in a greedy sequence and collect newly observed indices j into C(w) until |C(w)| = B. In particular, if r = π(j, t|w), then the (j, t) entry is visited at the r-th iteration. Similar to the k-way merge algorithm, we describe a detailed procedure in Algorithm 3, which utilizes the CondIter of Algorithm 1 to perform the screening. Recall the two requirements of a viable candidate screening procedure for budgeted MIPS: 1) the flexibility to control the size |C(w)| ≤ B; and 2) an efficient procedure running in O(Bk) time. First, it is clear that Algorithm 3 has the flexibility to control the size of C(w) by the exit condition of the outer while-loop. Next, to analyze the overall time complexity of Algorithm 3, we need to know the number of z_jt entries the algorithm iterates over before |C(w)| = B. Theorem 2 gives an upper bound on this number of iterations.

Theorem 2. There are at least B distinct indices j among the first Bk entries (j, t) in terms of the joint ranking π(j, t|w) for any w; that is,
|{j | ∃t such that π(j, t|w) ≤ Bk}| ≥ B. (8)
A detailed proof can be found in Section D of the supplementary material.
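Theorem 2 is easy to check empirically: each index j contributes at most k entries, so Bk entries must cover at least B distinct indices. A small sketch (names ours) on random data:

```python
import random

def distinct_in_first_Bk(H, w, B):
    """Count the distinct indices j among the first B*k entries of the
    joint ranking pi(j, t | w); Theorem 2 says this is always >= B."""
    n, k = len(H), len(w)
    entries = sorted(((H[j][t] * w[t], j) for j in range(n) for t in range(k)),
                     reverse=True)
    return len({j for _, j in entries[:B * k]})

rng = random.Random(0)
n, k, B = 50, 4, 10
H = [[rng.random() for _ in range(k)] for _ in range(n)]
w = [rng.random() for _ in range(k)]
ok = distinct_in_first_Bk(H, w, B) >= B
```

The bound is tight only in the adversarial case where the top candidates dominate every dimension; on random data the count is typically well above B.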
Note that there are O(log k)-time operations within both the outer and inner while loops, namely Q.push((z, t)) and Q.pop(). As the goal of the screening procedure is to identify indices j only, we can skip the Q.push((z_jt, t)) for an entry (j, t) whose index j has already been included in C(w). As a result, we can guarantee that Q.pop() is executed at most B + k − 1 times when |C(w)| = B. The extra k − 1 pops occur in the situation where iters[1].current() = · · · = iters[k].current() at the beginning of the screening procedure.

Algorithm 3 An improved candidate screening procedure in Greedy-MIPS. The time complexity is O(Bk).
• Input:
  - H, w, and the computational budget B
  - Q and iters[t]: output of Algorithm 2
  - C(w): an empty list
  - visited[j] = 0, ∀ j ≤ n: a zero-initialized array
• While |C(w)| < B:
  - (z, t) ← Q.pop() · · · O(log k)
  - j ← iters[t].current()
  - If visited[j] = 0: append j to C(w) and set visited[j] ← 1
  - While iters[t].hasNext():
    * j ← iters[t].getNext()
    * If visited[j] = 0: z ← h_jt w_t, Q.push((z, t)) · · · O(log k), and break
• visited[j] ← 0, ∀ j ∈ C(w) · · · O(B)
• Output: C(w) = {j | π̄(j|w) ≤ B}

To check whether an index j is in the current C(w) in O(1) time, we use an auxiliary zero-initialized array of length n, visited[j], j = 1, . . . , n, to denote whether an index j has been included in C(w). As C(w) contains at most B indices, only B elements of this auxiliary array are modified during the screening procedure. Furthermore, the auxiliary array can be reset to zero in O(B) time at the end of Algorithm 3, so it can be reused for a different query vector w. Notice that Algorithm 3 still iterates over up to Bk entries of Z, but at most B + k − 1 entries are pushed into or popped from the max-heap Q. Thus, the overall time complexity of Algorithm 3 is O(Bk + (B + k) log k) = O(Bk), which makes Greedy-MIPS a viable budgeted MIPS approach.
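The whole screening loop can be sketched in Python with a heap standing in for Q (names ours). For brevity the conditional orderings are sorted per query here; the paper instead derives them in O(1) per step from the precomputed arrays s_t, which is what makes the real procedure O(Bk):

```python
import heapq

def greedy_mips_screen(H, w, budget):
    """Sketch of the Greedy-MIPS candidate screening (Algorithm 3):
    a k-way merge over the k conditional orderings via a max-heap,
    collecting newly observed indices j until |C(w)| = budget."""
    n, k = len(H), len(w)
    # order[t]: indices j by decreasing z_jt = h_jt * w_t
    order = [sorted(range(n), key=lambda j, t=t: -H[j][t] * w[t])
             for t in range(k)]
    ptr = [0] * k
    heap = [(-H[order[t][0]][t] * w[t], t) for t in range(k)]  # max-heap
    heapq.heapify(heap)
    C, visited = [], [False] * n
    while heap and len(C) < budget:
        _, t = heapq.heappop(heap)                 # O(log k)
        j = order[t][ptr[t]]
        if not visited[j]:
            visited[j] = True
            C.append(j)
        # Advance iterator t to its next *unvisited* index, then push.
        # Skipping pushes for visited indices is the improvement that
        # bounds the number of pops by B + k - 1.
        while ptr[t] + 1 < n:
            ptr[t] += 1
            nj = order[t][ptr[t]]
            if not visited[nj]:
                heapq.heappush(heap, (-H[nj][t] * w[t], t))  # O(log k)
                break
    return C

H = [[3.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
w = [1.0, 1.0]
C = greedy_mips_screen(H, w, budget=3)
```

Python's `heapq` is a min-heap, so keys are negated to simulate the max-heap Q; the returned list matches the heuristic ordering π̄(j|w).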
4.3 Connection to Sampling Approaches

Sample-MIPS, as mentioned earlier, is essentially a sampling-with-replacement scheme that draws entries of Z such that (j, t) is sampled with probability proportional to z_jt. Thus, Sample-MIPS can be thought of as traversing the (j, t) entries in a stratified random sequence determined by the distribution of the values {z_jt}, while the core idea of Greedy-MIPS is to iterate over the (j, t) entries of Z in the greedy sequence induced by the ordering of {z_jt}. Next, we discuss how Greedy-MIPS differs from Sample-MIPS and Diamond-MSIPS. Sample-MIPS can only be applied when both H and w are nonnegative, because of the nature of the sampling scheme. In contrast, Greedy-MIPS works on any MIPS problem, as only the ordering of {z_jt} matters. Diamond-MSIPS is designed not for h_j^⊤w but for the MSIPS problem of identifying candidates with the largest (h_j^⊤w)^2 or |h_j^⊤w| values. In fact, for nonnegative MIPS problems, diamond sampling is equivalent to Sample-MIPS. Moreover, for MSIPS problems with negative entries, when the number of samples is set to the budget B,² Diamond-MSIPS is equivalent to applying Sample-MIPS to sample (j, t) entries with probability p(j, t) ∝ |z_jt|. Thus, the applicability of the existing sampling-based approaches remains limited for general MIPS problems.

4.4 Theoretical Guarantee

Greedy-MIPS is an algorithm based on the greedy heuristic ranking (4). Similar to the analysis of Quicksort, we study the average complexity of Greedy-MIPS by assuming a distribution over the input dataset. For simplicity, our analysis is performed on a stochastic implicit matrix Z instead of w. Each entry of Z is assumed to follow a uniform distribution uniform(a, b). We establish Theorem 3 to show that the number of entries (j, t) iterated over by Greedy-MIPS before it includes the index of the largest candidate is sublinear in n = |H| with high probability when n is large enough. Theorem 3.
Assume that all entries z_jt are drawn from a uniform distribution uniform(a, b). Let j* be the index of the largest candidate (i.e., π(j*|Z) = 1). With high probability, we have π̄(j*|Z) ≤ O(k log(n) n^{1/k}). A detailed proof can be found in the supplementary material. Notice that theoretical guarantees for approximate MIPS are challenging to obtain even for randomized algorithms. For example, the analysis of Diamond-MSIPS in [3] requires nonnegativity assumptions and only applies to MSIPS (maximum squared inner product search) problems instead of MIPS problems.

5 Experimental Results

In this section, we perform extensive empirical comparisons between Greedy-MIPS and other state-of-the-art fast MIPS approaches on both real-world and synthetic datasets. We use netflix and yahoo-music as our real-world recommender system datasets, with 17,770 and 624,961 items, respectively. In particular, we obtain the user embeddings w_i ∈ R^k and item embeddings h_j ∈ R^k by standard low-rank matrix factorization [4] with k ∈ {50, 200}. We also generate synthetic datasets with various n ∈ {2^17, 2^18, 2^19, 2^20} and k ∈ {2^2, 2^5, 2^7, 2^10}. For each synthetic dataset, both the candidate vectors h_j and the query vectors w are drawn from the normal distribution.

²This setting is used in the experiments in [3].

Figure 2: MIPS comparison on netflix and yahoo-music.
Figure 3: MIPS comparison on synthetic datasets with n ∈ {2^17, 2^18, 2^19, 2^20} and k = 128; each entry is drawn from a normal distribution.
Figure 4: MIPS comparison on synthetic datasets with n = 2^18 and k ∈ {2^2, 2^5, 2^7, 2^10}; each entry is drawn from a normal distribution.

5.1 Experimental Settings

To allow fair comparisons, all the compared approaches are implemented in C++.
• Greedy-MIPS: our proposed approach in Section 4.
• PCA-MIPS: the approach proposed in [2]. We vary the depth of the PCA tree to control the trade-off.
• LSH-MIPS: the approach proposed in [12, 14]. We use the nearest neighbor transform function proposed in [2, 12] and the random projection scheme as the LSH function, as suggested in [12]. We also implement the standard amplification procedure with an OR-construction of b hyper LSH hash functions, each of which is an AND-construction of a random projections. We vary the values (a, b) to control the trade-off.
• Diamond-MSIPS: the sampling scheme proposed in [3] for maximum squared inner product search. As it shows better performance than LSH-MIPS on MIPS problems in [3], we also include Diamond-MSIPS in our comparison.
• Naive-MIPS: the baseline approach, which applies a linear search to identify the exact top-K candidates.

Evaluation Criteria. For each dataset, the actual top-20 items for each query are regarded as the ground truth. We report the average performance over 2,000 randomly selected query vectors. To evaluate search quality, we use the precision of the top-P prediction (prec@P), obtained by selecting the top-P items from the C(w) returned by the candidate screening procedure. Results with P = 5 are shown in the paper; more results with various P are in the supplementary material. To evaluate search efficiency, we report the relative speedup over the Naive-MIPS approach:
speedup = (prediction time required by Naive-MIPS) / (prediction time of the compared approach).

Remarks on Budgeted MIPS versus Non-Budgeted MIPS. As mentioned in Section 3, PCA-MIPS and LSH-MIPS cannot handle MIPS with a budget. Both the search computation cost and the search quality are fixed when the corresponding data structure is constructed. As a result, to explore the trade-off between search efficiency and search quality for these two approaches, we can only try various values of their parameters (such as the depth for the PCA tree and the amplification parameters (a, b) for LSH).
For each combination of parameters, we need to re-run the entire query-independent pre-processing procedure to construct a new data structure.

Remarks on data structure construction. Note that the construction time complexity for Greedy-MIPS is O(kn log n), which is on par with O(kn) for Diamond-MSIPS, and faster than O(knab) for LSH-MIPS and O(k²n) for PCA-MIPS. As an example, the construction for Greedy-MIPS takes only around 10 seconds on yahoo-music with n = 624,961 and k = 200.

5.2 Experimental Results

Results on Real-World Data Sets. Comparison results for netflix and yahoo-music are shown in Figure 2, which presents the results with k = 50 and k = 200. It is clearly observed that, for a fixed speedup, Greedy-MIPS yields predictions with much higher search quality. In particular, on the yahoo-music dataset with k = 200, Greedy-MIPS runs 200x faster than Naive-MIPS and yields search results with prec@5 = 70%, while none of PCA-MIPS, LSH-MIPS, and Diamond-MSIPS achieve prec@5 > 10% at a similar 200x speedup.

Results on Synthetic Data Sets. We also perform comparisons on synthetic datasets. The comparison with various n ∈ {2^17, 2^18, 2^19, 2^20} is shown in Figure 3, and the comparison with various k ∈ {2^2, 2^5, 2^7, 2^10} in Figure 4. We observe that the performance gap of Greedy-MIPS over the other approaches persists as n increases, while the gap shrinks as k increases. However, Greedy-MIPS still outperforms the other approaches significantly.

6 Conclusions and Future Work

In this paper, we develop a novel Greedy-MIPS algorithm, which has the flexibility to handle budgeted MIPS and yields surprisingly superior performance compared to state-of-the-art approaches. The current implementation focuses on MIPS with dense vectors; in the future we plan to extend our algorithm to high-dimensional sparse vectors.
We also establish a theoretical guarantee for Greedy-MIPS based on the assumption that the data are generated from a random distribution. How to relax this assumption, or how to design a nondeterministic pre-processing step for Greedy-MIPS to satisfy the assumption, are interesting future directions of this work.

Acknowledgements
This research was supported by NSF grants CCF-1320746, IIS-1546452 and CCF-1564000. CJH was supported by NSF grant RI-1719097.

References
[1] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search, 2016. arXiv preprint arXiv:1507.05910.
[2] Yoram Bachrach, Yehuda Finkelstein, Ran Gilad-Bachrach, Liran Katzir, Noam Koenigstein, Nir Nice, and Ulrich Paquet. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 257–264, 2014.
[3] Grey Ballard, Seshadhri Comandur, Tamara Kolda, and Ali Pinar. Diamond sampling for approximate maximum all-pairs dot-product (MAD) search. In Proceedings of the IEEE International Conference on Data Mining, 2015.
[4] Wei-Sheng Chin, Yong Zhuang, Yu-Chin Juan, and Chih-Jen Lin. A learning-rate schedule for stochastic gradient methods to matrix factorization. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2015.
[5] Edith Cohen and David D. Lewis. Approximating matrix multiplication for pattern recognition tasks. In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 682–691, 1997.
[6] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, 2016.
[7] Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. The Yahoo! Music dataset and KDD-Cup'11.
In JMLR Workshop and Conference Proceedings: Proceedings of KDD Cup 2011 Competition, volume 18, pages 3–18, 2012.
[8] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 2nd edition, 1998.
[9] Noam Koenigstein, Parikshit Ram, and Yuval Shavitt. Efficient retrieval of recommendations in a matrix factorization framework. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, pages 535–544, 2012.
[10] Yehuda Koren, Robert M. Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer, 42:30–37, 2009.
[11] Stephen Mussmann and Stefano Ermon. Learning and inference via maximum inner product search. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[12] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In Proceedings of the International Conference on Machine Learning, pages 1926–1934, 2015.
[13] Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 931–939, 2012.
[14] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pages 2321–2329, 2014.
[15] Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence (UAI), pages 812–821, 2015.
[16] Jason Weston, Samy Bengio, and Nicolas Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21–35, October 2010.
[17] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit S. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of the International Conference on Machine Learning, pages 593–601, 2014. | 2017 | 163 |
6,634 | Riemannian approach to batch normalization Minhyung Cho Jaehyung Lee Applied Research Korea, Gracenote Inc. mhyung.cho@gmail.com jaehyung.lee@kaist.ac.kr Abstract Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets. 1 Introduction Batch Normalization (BN) [1] has become an essential component for breaking performance records in image recognition tasks [2, 3]. It speeds up training deep neural networks by normalizing the distribution of the input to each neuron in the network by the mean and standard deviation of the input computed over a mini-batch of training data, potentially reducing internal covariate shift [1], the change in the distributions of internal nodes of a deep network during the training. The authors of BN demonstrated that applying BN to a layer makes its forward pass invariant to linear scaling of its weight parameters [1]. They argued that this property prevents model explosion with higher learning rates by making the gradient propagation invariant to linear scaling. Moreover, the gradient becomes inversely proportional to the scale factor of each weight parameter. 
While this property could stabilize the parameter growth by reducing the gradients for larger weights, it can also have an adverse effect on optimization: there are infinitely many networks with the same forward pass but different scaling, and they may converge to different local optima owing to different gradients. In practice, networks may become sensitive to the parameters of regularization methods such as weight decay. This ambiguity in the optimization process can be removed by interpreting the space of weight vectors as a Riemannian manifold on which all the scaled versions of a weight vector correspond to a single point. A properly selected metric tensor makes it possible to perform gradient descent on this manifold [4, 5], following the gradient direction while staying on the manifold. This approach fundamentally removes the aforementioned ambiguity while keeping the invariance property intact, thus ensuring stable weight updates. In this paper, we first focus on selecting a proper manifold, along with the corresponding Riemannian metric, for the scale-invariant weight vectors used in BN (and potentially in other normalization techniques [6, 7, 8]). Mapping scale-invariant weight vectors to two well-known matrix manifolds yields the same metric tensor, leading to a natural choice of manifold and metric. Then, we derive the operators needed to perform gradient descent on this manifold, which can be understood as a constrained optimization on the unit sphere. Next, we present two optimization algorithms corresponding to Stochastic Gradient Descent (SGD) with momentum and Adam [9]. An intuitive gradient clipping method is also proposed utilizing the geometry of this space.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Finally,
we illustrate the application of these algorithms to networks with BN layers, together with an effective regularization method based on variational inference on the manifold. Experiments show that the resulting algorithm consistently outperforms the original BN algorithm on various types of network architectures and datasets.

2 Background

2.1 Batch normalization

We briefly revisit the BN transform and its properties. While it can be applied to any single activation in the network, in practice it is usually inserted right before the nonlinearity, taking the pre-activation z = w⊤x as its input. In this case, the BN transform is written as

BN(z) = (z − E[z]) / √Var[z] = w⊤(x − E[x]) / √(w⊤Rxx w) = u⊤(x − E[x]) / √(u⊤Rxx u)   (1)

where w is a weight vector, x is a vector of activations in the previous layer, u = w/|w|, and Rxx is the covariance matrix of x. Note that BN(w⊤x) = BN(u⊤x). It was shown in [1] that

∂BN(w⊤x)/∂x = ∂BN(u⊤x)/∂x   and   ∂BN(z)/∂w = (1/|w|) ∂BN(z)/∂u   (2)

illustrating the properties discussed in Sec. 1.

2.2 Optimization on Riemannian manifolds

Recent studies have shown that various constrained optimization problems in Euclidean space can be expressed as unconstrained optimization problems on submanifolds embedded in Euclidean space [5]. For applications to neural networks, we are interested in Stiefel and Grassmann manifolds [4, 10]. We briefly review them here. The Stiefel manifold V(p, n) is the set of p ordered orthonormal vectors in Rn (p ≤ n). A point on the manifold is represented by an n-by-p orthonormal matrix Y, where Y⊤Y = Ip. The Grassmann manifold G(p, n) is the set of p-dimensional subspaces of Rn (p ≤ n). It follows that span(A), where A ∈ Rn×p, is understood to be a point on the Grassmann manifold G(p, n) (note that two matrices A and B are equivalent if and only if span(A) = span(B)). A point on this manifold can be specified by an arbitrary n-by-p matrix, but for computational efficiency, an orthonormal matrix is commonly chosen to represent a point.
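As a quick sanity check of Eq. (1), the invariance BN(w⊤x) = BN(u⊤x) is easy to verify numerically. The following sketch (an illustrative numpy snippet of our own, not the paper's released code) normalizes by batch statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def bn(z, eps=1e-12):
    # Normalize a batch of pre-activations by its mean and standard deviation.
    return (z - z.mean()) / np.sqrt(z.var() + eps)

x = rng.normal(size=(256, 8))  # a minibatch of activations from the previous layer
w = rng.normal(size=8)         # weight vector feeding the BN transform
u = w / np.linalg.norm(w)      # its normalized version

# BN(w^T x) = BN(u^T x): the forward pass ignores the scale of w.
assert np.allclose(bn(x @ w), bn(x @ u))
# Any positive rescaling of w leaves the output unchanged.
assert np.allclose(bn(x @ (3.7 * w)), bn(x @ w))
```

As noted later in Sec. 3, scaling by a negative number flips the sign of the output rather than leaving it unchanged.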
Note that the representation is not unique [5]. To perform gradient descent on these manifolds, it is essential to equip them with a Riemannian metric tensor and to derive geometric concepts such as geodesics, the exponential map, and parallel translation. Given a tangent vector v ∈ TxM on a Riemannian manifold M with tangent space TxM at a point x, let γv(t) denote the unique geodesic on M with initial velocity v. The exponential map is defined as exp_x(v) = γv(1), which maps v to the point that is reached in unit time along the geodesic starting at x. The parallel translation of a tangent vector on a Riemannian manifold can be obtained by transporting the vector along the geodesic by an infinitesimally small amount and removing the vertical component of the tangent space [11]. In this way, the transported vector stays in the tangent space of the manifold at the new point. Using the concepts above, a gradient descent algorithm for an abstract Riemannian manifold is given in Algorithm 1 for reference. It reduces to the familiar gradient descent algorithm when M = Rn, since exp_{y_{t−1}}(−η · h) is given as y_{t−1} − η · ∇f(y_{t−1}) in Rn.

Algorithm 1 Gradient descent of a function f on an abstract Riemannian manifold M
Require: stepsize η
Initialize y0 ∈ M
for t = 1, · · · , T
    h ← grad f(y_{t−1}) ∈ T_{y_{t−1}}M, where grad f(y) is the gradient of f at y ∈ M
    y_t ← exp_{y_{t−1}}(−η · h)

3 Geometry of scale-invariant vectors

As discussed in Sec. 2.1, inserting the BN transform makes the weight vectors w, used to calculate the pre-activation w⊤x, invariant to linear scaling. Assuming that there are no additional constraints on the weight vectors, we can focus on manifolds on which all scaled versions of a vector collapse to a single point. A natural choice is the Grassmann manifold, since the space of scaled versions of a vector is essentially a one-dimensional subspace of Rn.
On the other hand, the Stiefel manifold can represent the same space if we set p = 1, in which case V(1, n) reduces to the unit sphere. We can map each weight vector w to its normalized version w/|w| on V(1, n). We show that popular choices of metrics on those manifolds lead to the same geometry.

Tangent vectors to the Stiefel manifold V(p, n) at Z are all the n-by-p matrices ∆ such that Z⊤∆ + ∆⊤Z = 0 [4]. The canonical metric on the Stiefel manifold is derived from the geometry of quotient spaces of the orthogonal group [4] and is given by

g_s(∆1, ∆2) = tr(∆1⊤(I − ZZ⊤/2)∆2)   (3)

where ∆1, ∆2 are tangent vectors to V(p, n) at Z. If p = 1, the condition Z⊤∆ + ∆⊤Z = 0 reduces to Z⊤∆ = 0, leading to g_s(∆1, ∆2) = tr(∆1⊤∆2).

Now, let an n-by-p matrix Y be a representation of a point on the Grassmann manifold G(p, n). Tangent vectors to the manifold at span(Y) with the representation Y are all the n-by-p matrices ∆ such that Y⊤∆ = 0. Since Y is not a unique representation, the tangent vector ∆ changes with the choice of Y. For example, given a representation Y1 and its tangent vector ∆1, if a different representation is selected by right multiplication, i.e., Y2 = Y1R, then the tangent vector must be transformed in the same way, that is, ∆2 = ∆1R. The canonical metric, which is invariant under the action of the orthogonal group and under scaling [10], is given by

g_g(∆1, ∆2) = tr((Y⊤Y)⁻¹ ∆1⊤∆2)   (4)

where Y⊤∆1 = 0 and Y⊤∆2 = 0. For G(1, n) with a representation y, the metric is given by g_g(∆1, ∆2) = ∆1⊤∆2 / y⊤y. The metric is invariant to the scaling of y, as shown below:

∆1⊤∆2 / y⊤y = (k∆1)⊤(k∆2) / (ky)⊤(ky).   (5)

Without loss of generality, we can choose a representation with y⊤y = 1 to obtain g_g(∆1, ∆2) = tr(∆1⊤∆2), which coincides with the canonical metric for V(1, n). Hereafter, we focus on the geometry of G(1, n) with the metric and representation chosen above, derived from the general formulas in [4, 10].
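The scale invariance of Eq. (5) can be checked numerically: rescaling the representation y by a factor k (and its tangent vectors with it) leaves the metric unchanged. A small illustrative numpy check (variable and function names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
y = rng.normal(size=n)  # a (not necessarily unit-norm) representation of a point

def project_tangent(v, y):
    # Tangent vectors at span(y) are exactly the vectors with y^T v = 0.
    return v - (y @ v) / (y @ y) * y

d1 = project_tangent(rng.normal(size=n), y)
d2 = project_tangent(rng.normal(size=n), y)

def g_grassmann(d1, d2, y):
    # Canonical metric on G(1, n): Delta_1^T Delta_2 / y^T y.
    return (d1 @ d2) / (y @ y)

k = 4.2  # arbitrary scale factor
# Rescaling the representation (and its tangent vectors accordingly)
# leaves the metric unchanged, as stated in Eq. (5).
assert np.isclose(g_grassmann(d1, d2, y), g_grassmann(k * d1, k * d2, k * y))
```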
Gradient of a function. The gradient of a function f(y) defined on G(1, n) is given by

grad f = g − (y⊤g) y   (6)

where g_i = ∂f/∂y_i.

Exponential map. Let h be a tangent vector to G(1, n) at y. The exponential map on G(1, n) emanating from y with initial velocity h is given by

exp_y(h) = y cos|h| + (h/|h|) sin|h|.   (7)

It can easily be shown that exp_y(h) = exp_y((1 + 2π/|h|)h).

Parallel translation. Let ∆ and h be tangent vectors to G(1, n) at y. The parallel translation of ∆ along the geodesic with initial velocity h in unit time is given by

pt_y(∆; h) = ∆ − (u(1 − cos|h|) + y sin|h|) u⊤∆,   (8)

where u = h/|h|. Note that |∆| = |pt_y(∆; h)|. If ∆ = h, it can be further simplified as

pt_y(h) = h cos|h| − y|h| sin|h|.   (9)

Note that BN(z) is not invariant to scaling by negative numbers; that is, BN(−z) = −BN(z). To be precise, there is a one-to-one mapping between the sets of weights on which BN(z) is invariant and points on V(1, n), but not on G(1, n). However, the proposed method interprets each weight vector as a point on the manifold only when the weight update is performed. As long as the weight vector stays in the domain where V(1, n) and G(1, n) have the same invariance property, the weight update remains equivalent. We prefer G(1, n) since its operators easily extend to G(p, n), opening up further applications.

Figure 1: An illustration of the operators on the Grassmann manifold G(1, 2). A 2-by-1 matrix y is an orthonormal representation on G(1, 2). (a) A gradient calculated in Euclidean coordinates is projected onto the tangent space T_y G(1, 2). (b) y1 = exp_y(h). (c) h1 = pt_y(h), |h| = |h1|.

4 Optimization algorithms on G(1, n)

In this section, we derive optimization algorithms on the Grassmann manifold G(1, n). The algorithms given below are iterative algorithms to solve the unconstrained optimization

min_{y∈G(1,n)} f(y).
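The three operators above translate directly into code. The following numpy sketch (our own illustrative implementation of Eqs. (6)-(8), not the authors' released code) also verifies the stated properties: the exponential map stays on the unit sphere, and parallel translation preserves the norm and keeps the vector tangent at the new point.

```python
import numpy as np

def grad_g(y, g):
    # Eq. (6): project the Euclidean gradient g onto the tangent space at y.
    return g - (y @ g) * y

def exp_map(y, h):
    # Eq. (7): exponential map on G(1, n) from y with initial velocity h.
    nh = np.linalg.norm(h)
    return y if nh == 0 else y * np.cos(nh) + (h / nh) * np.sin(nh)

def parallel_translate(y, delta, h):
    # Eq. (8): translate the tangent vector delta along the geodesic with
    # initial velocity h in unit time.
    nh = np.linalg.norm(h)
    if nh == 0:
        return delta
    u = h / nh
    return delta - (u * (1 - np.cos(nh)) + y * np.sin(nh)) * (u @ delta)

rng = np.random.default_rng(2)
n = 6
y = rng.normal(size=n); y /= np.linalg.norm(y)
h = grad_g(y, rng.normal(size=n))

y1 = exp_map(y, h)
h1 = parallel_translate(y, h, h)  # the delta = h case reduces to Eq. (9)

assert np.isclose(np.linalg.norm(y1), 1.0)                # stays on the unit sphere
assert np.isclose(y1 @ h1, 0.0)                           # tangent at the new point
assert np.isclose(np.linalg.norm(h1), np.linalg.norm(h))  # norm is preserved
```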
(10)

4.1 Stochastic gradient descent with momentum

The application of Algorithm 1 to the Grassmann manifold G(1, n) is straightforward. We extend this algorithm with momentum to speed up training [12]. Algorithm 2 presents the pseudo-code of SGD with momentum on G(1, n). This algorithm differs from conventional SGD in three ways. First, it projects the gradient onto the tangent space at the point y, as shown in Fig. 1(a). Second, it moves the position by the exponential map, as in Fig. 1(b). Third, it moves the momentum by the parallel translation on the Grassmann manifold, as in Fig. 1(c). Note that if the weight is initialized with a unit vector, it remains a unit vector after the update.

Algorithm 2 has an advantage over conventional SGD in that the amount of movement is intuitive, i.e., it can be measured by the angle between the original point and the new point. Since a point returns to itself after moving by 2π (radians), it is natural to restrict the maximum movement induced by a gradient to 2π. For first-order methods like gradient descent, it is beneficial to restrict the maximum movement even further, so that it stays in the range where the linear approximation is valid. Let h be the gradient calculated at t = 0. The amount of the first step induced by h is δ0 = η · |h|, and its contributions to later steps are recursively calculated as δt = γ · δ_{t−1}. The overall contribution of h is Σ_{t=0}^{∞} δt = η · |h| / (1 − γ). In practice, we found it beneficial to restrict this amount to less than 0.2 (rad) ≈ 11.46° by clipping the norm of h at ν. For example, with initial learning rate η = 0.2, setting γ = 0.9 and ν = 0.1 guarantees this condition.
Algorithm 2 Stochastic gradient descent with momentum on G(1, n)
Require: learning rate η, momentum coefficient γ, norm threshold ν
Initialize y0 ∈ Rn×1 with a random unit vector
Initialize τ0 ∈ Rn×1 with a zero vector
for t = 1, · · · , T
    g ← ∂f(y_{t−1})/∂y                  (run a backward pass to obtain g)
    h ← g − (y_{t−1}⊤ g) y_{t−1}         (project g onto the tangent space at y_{t−1})
    ĥ ← norm_clip(h, ν)†                (clip the norm of the gradient at ν)
    d ← γ τ_{t−1} − η ĥ                  (update the delta with momentum)
    y_t ← exp_{y_{t−1}}(d)               (move to the new position by the exponential map in Eq. (7))
    τ_t ← pt_{y_{t−1}}(d)                (move the momentum by the parallel translation in Eq. (9))
Note that h, ĥ, d ⊥ y_{t−1} and τ_t ⊥ y_t, where h, ĥ, d, y_{t−1}, y_t ∈ Rn×1
†norm_clip(h, ν) = ν · h/|h| if |h| > ν, else h

4.2 Adam

Adam [9] is a recently developed first-order optimization algorithm based on adaptive estimates of lower-order moments that has been successfully applied to training deep neural networks. In this section, we derive Adam on the Grassmann manifold G(1, n). Adam computes an individual adaptive learning rate for each parameter. In contrast, we assign one adaptive learning rate to each weight vector that corresponds to a point on the manifold. In this way, the direction of the gradient is not corrupted, and the size of the step is adaptively controlled. The pseudo-code of Adam on G(1, n) is presented in Algorithm 3. It was shown in [9] that the effective step size of Adam (|d| in Algorithm 3) has two upper bounds. The first occurs in the most severe case of sparsity, where the previous momentum terms are negligible, and the upper bound is η(1 − β1)/√(1 − β2). The second occurs when the gradient remains stationary across time steps, and the upper bound is η. For the common choice of hyperparameters β1 = 0.9 and β2 = 0.99, the two upper bounds coincide. In our experiments, η was chosen to be 0.05, so the upper bound was |d| ≤ 0.05 (rad).
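Putting the operators together, one update of Algorithm 2 can be sketched as follows (an illustrative numpy implementation with our own function names; the operator definitions repeat Eqs. (7) and (9) so the sketch is self-contained):

```python
import numpy as np

def exp_map(y, h):
    # Eq. (7): exponential map on G(1, n) from y with initial velocity h.
    nh = np.linalg.norm(h)
    return y if nh == 0 else y * np.cos(nh) + (h / nh) * np.sin(nh)

def pt_self(y, h):
    # Eq. (9): parallel translation of h along its own geodesic in unit time.
    nh = np.linalg.norm(h)
    return h * np.cos(nh) - y * nh * np.sin(nh)

def norm_clip(h, nu):
    n = np.linalg.norm(h)
    return nu * h / n if n > nu else h

def sgd_g_step(y, tau, g, eta=0.2, gamma=0.9, nu=0.1):
    # One SGD-with-momentum update on G(1, n), following Algorithm 2.
    h = norm_clip(g - (y @ g) * y, nu)  # project onto the tangent space, then clip
    d = gamma * tau - eta * h           # delta with momentum
    return exp_map(y, d), pt_self(y, d)

rng = np.random.default_rng(3)
n = 8
y = rng.normal(size=n); y /= np.linalg.norm(y)
tau = np.zeros(n)
for _ in range(5):
    y, tau = sgd_g_step(y, tau, rng.normal(size=n))

assert np.isclose(np.linalg.norm(y), 1.0)  # the weight remains a unit vector
assert np.isclose(y @ tau, 0.0)            # the momentum remains tangent at y
```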
Algorithm 3 Adam on G(1, n)
Require: learning rate η, momentum coefficients β1, β2, norm threshold ν, scalar ϵ = 10⁻⁸
Initialize y0 ∈ Rn×1 with a random unit vector
Initialize τ0 ∈ Rn×1 with a zero vector
Initialize a scalar v0 = 0
for t = 1, · · · , T
    η_t ← η √(1 − β2^t) / (1 − β1^t)      (calculate the bias-correction factor)
    g ← ∂f(y_{t−1})/∂y                   (run a backward pass to obtain g)
    h ← g − (y_{t−1}⊤ g) y_{t−1}          (project g onto the tangent space at y_{t−1})
    ĥ ← norm_clip(h, ν)                  (clip the norm of the gradient at ν)
    m_t ← β1 · τ_{t−1} + (1 − β1) · ĥ
    v_t ← β2 · v_{t−1} + (1 − β2) · ĥ⊤ĥ   (v_t is a scalar)
    d ← −η_t · m_t / (√v_t + ϵ)          (calculate the delta)
    y_t ← exp_{y_{t−1}}(d)                (move to the new point by the exponential map in Eq. (7))
    τ_t ← pt_{y_{t−1}}(m_t; d)            (move the momentum by the parallel translation in Eq. (8))
Note that h, ĥ, m_t, d ⊥ y_{t−1} and τ_t ⊥ y_t, where h, ĥ, m_t, d, τ_t, y_{t−1}, y_t ∈ Rn×1

5 Batch normalization on the product manifold of G(1, ·)

In Sec. 3, we showed that a weight vector used to compute a pre-activation that serves as an input to the BN transform can be naturally interpreted as a point on G(1, n). In deep networks with multiple layers and multiple units per layer, there can be many weight vectors to which the BN transform is applied. In this case, the training of the neural network becomes an optimization problem with respect to a set of points on Grassmann manifolds and the remaining set of parameters. It is formalized as

min_{X∈M} L(X)   where   M = G(1, n1) × · · · × G(1, nm) × Rl   (11)

where n1, . . . , nm are the dimensions of the weight vectors, m is the number of weight vectors on G(1, ·) that will be optimized using Algorithm 2 or 3, and l is the number of remaining parameters, which include biases, learnable scaling and offset parameters in BN layers, and other weight matrices. Algorithm 4 presents the whole process of training deep neural networks. The forward pass and backward pass remain unchanged; the only change is updating the weights by Algorithm 2 or Algorithm 3.
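Analogously, one update of Algorithm 3 might look like the sketch below (again an illustrative numpy implementation of our own; note that v_t is a single scalar per weight vector, so only the step size, not the direction, is adapted):

```python
import numpy as np

def exp_map(y, h):
    # Eq. (7): exponential map on G(1, n).
    nh = np.linalg.norm(h)
    return y if nh == 0 else y * np.cos(nh) + (h / nh) * np.sin(nh)

def parallel_translate(y, delta, h):
    # Eq. (8): translate the tangent vector delta along the geodesic with velocity h.
    nh = np.linalg.norm(h)
    if nh == 0:
        return delta
    u = h / nh
    return delta - (u * (1 - np.cos(nh)) + y * np.sin(nh)) * (u @ delta)

def norm_clip(h, nu):
    n = np.linalg.norm(h)
    return nu * h / n if n > nu else h

def adam_g_step(y, tau, v, t, g, eta=0.05, b1=0.9, b2=0.99, nu=0.1, eps=1e-8):
    # One Adam update on G(1, n), following Algorithm 3 (t starts at 1).
    eta_t = eta * np.sqrt(1 - b2**t) / (1 - b1**t)  # bias-correction factor
    h = norm_clip(g - (y @ g) * y, nu)              # project, then clip
    m = b1 * tau + (1 - b1) * h
    v = b2 * v + (1 - b2) * (h @ h)                 # one scalar per weight vector
    d = -eta_t * m / (np.sqrt(v) + eps)
    return exp_map(y, d), parallel_translate(y, m, d), v

rng = np.random.default_rng(4)
n = 8
y = rng.normal(size=n); y /= np.linalg.norm(y)
tau, v = np.zeros(n), 0.0
for t in range(1, 6):
    y, tau, v = adam_g_step(y, tau, v, t, rng.normal(size=n))

assert np.isclose(np.linalg.norm(y), 1.0)  # the weight remains a unit vector
assert np.isclose(y @ tau, 0.0)            # the momentum remains tangent at y
```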
Note that we apply the proposed algorithm only when the input layer to BN is under-complete, that is, when the number of output units is smaller than the number of input units, because the regularization algorithm derived in Sec. 5.1 is only valid in this case. There should be ways to extend the regularization to over-complete layers. However, we do not elaborate on this topic since 1) the ratio of over-complete layers is very low (under 0.07% for wide resnets and under 5.5% for VGG networks) and 2) we believe that over-complete layers are suboptimal in neural networks and should be avoided by proper selection of network architectures.

Algorithm 4 Batch normalization on product manifolds of G(1, ·)
Define the neural network model with BN layers
m ← 0
for W = {weight matrices in the network such that W⊤x is an input to a BN layer}
    Let W be an n × p matrix
    if n > p
        for i = 1, · · · , p
            m ← m + 1
            Assign the column vector w_i in W to y_m ∈ G(1, n)
Assign the remaining parameters to v ∈ Rl
min L(y1, · · · , ym, v)† w.r.t. y_i ∈ G(1, n_i) for i = 1, · · · , m and v ∈ Rl:
for t = 1, · · · , T
    Run a forward pass to calculate L
    Run a backward pass to obtain ∂L/∂y_i for i = 1, · · · , m and ∂L/∂v
    Update each point y_i by Algorithm 2 or Algorithm 3
    Update v by a conventional optimization algorithm (such as SGD)
†For the orthogonality regularization in Sec. 5.1, L is replaced with L + Σ_W L_O(α, W)

5.1 Regularization using variational inference

In conventional neural networks, L2 regularization is normally adopted to regularize the network. However, it does not work on Grassmann manifolds, because the gradient vector of the L2 regularization term is perpendicular to the tangent space of the Grassmann manifold. In [13], L2 regularization was derived from a Gaussian prior and a delta posterior in the framework of variational inference. We extend this approach to Grassmann manifolds in order to derive a proper regularization method in this space.
Consider the complexity loss, which accounts for the cost of describing the network weights. It is given by the Kullback-Leibler divergence between the posterior distribution Q(w|β) and the prior distribution P(w|α) [13]:

L_C(α, β) = D_KL(Q(w|β) ∥ P(w|α)).   (12)

Factor analysis (FA) [14] establishes a link between the Grassmann manifold and the space of probability distributions [15]. The factor analyzer is given by

p(x) = N(u, C),   C = ZZ⊤ + σ²I   (13)

where Z is a full-rank n-by-p matrix (n > p) and N denotes a normal distribution. Under the conditions u = 0 and σ → 0, the samples from the analyzer lie in the linear subspace span(Z). In this way, a linear subspace can be regarded as an FA distribution. Suppose that p weight vectors y1, · · · , yp in Rn (n > p) belong to the same layer; they are regarded as p points on G(1, n). Let y_i be a representation of a point such that y_i⊤y_i = 1. With the choice of a delta posterior and β = [y1, · · · , yp], the corresponding FA distribution is q(x|Y) = N(0, YY⊤ + σ²I), where Y = [y1, · · · , yp], with the subspace condition σ → 0. The FA distribution for the prior is set to p(x|α) = N(0, αI), which depends on the hyperparameter α. Substituting the FA distributions of the prior and posterior into Eq. (12) gives the complexity loss

L_C(α, Y) = D_KL(q(x|Y) ∥ p(x|α)).   (14)

Eq. (14) is minimized when the column vectors of Y are orthogonal to each other (refer to Appendix A for details). That is, minimizing L_C(α, Y) maximally scatters the points away from each other on G(1, n). However, it is difficult to estimate its gradient. Alternatively, we minimize

L_O(α, Y) = (α/2) ∥Y⊤Y − I∥²_F   (15)

where ∥·∥_F is the Frobenius norm. It has the same minimum as the original complexity loss, and the negative of its gradient is a descent direction for the original loss (refer to Appendix B).
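Since L_O in Eq. (15) is an ordinary Euclidean penalty, its gradient has the closed form ∇_Y L_O = 2αY(Y⊤Y − I). An illustrative numpy sketch (our own helper, not from the paper's code) that also checks the minimum at orthonormal Y:

```python
import numpy as np

def ortho_penalty(Y, alpha=0.1):
    # L_O(alpha, Y) = (alpha/2) * ||Y^T Y - I||_F^2 (Eq. (15)) and its
    # Euclidean gradient with respect to Y, which is 2 * alpha * Y (Y^T Y - I).
    R = Y.T @ Y - np.eye(Y.shape[1])
    loss = 0.5 * alpha * np.sum(R * R)
    grad = 2.0 * alpha * (Y @ R)
    return loss, grad

rng = np.random.default_rng(5)
n, p = 10, 4

# Orthonormal columns attain the minimum: the penalty and its gradient vanish.
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))
loss_q, grad_q = ortho_penalty(Q)
assert np.isclose(loss_q, 0.0)
assert np.allclose(grad_q, 0.0)

# Generic unit-norm columns are penalized, pushing the points apart on G(1, n).
Y = rng.normal(size=(n, p))
Y /= np.linalg.norm(Y, axis=0)
loss_y, _ = ortho_penalty(Y)
assert loss_y > 0
```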
6 Experiments

We evaluated the proposed learning algorithms on image classification tasks using three benchmark datasets: CIFAR-10 [16], CIFAR-100 [16], and SVHN (Street View House Numbers) [17]. We used the VGG network [18] and the wide residual network [2, 19, 20] in our experiments. The VGG network is a widely used baseline for image classification tasks, while the wide residual network [2] has shown state-of-the-art performance on the benchmark datasets. We followed the experimental setups described in [2] so that the performance of the algorithms can be compared directly. Source code is publicly available at https://github.com/MinhyungCho/riemannian-batch-normalization.

CIFAR-10 is a database of 60,000 color images in 10 classes, consisting of 50,000 training images and 10,000 test images. CIFAR-100 is similar to CIFAR-10, except that it has 100 classes and contains fewer images per class. For preprocessing, we normalized the data using the mean and variance calculated from the training set. During training, the images were randomly flipped horizontally, padded by four pixels on each side with reflection, and a 32×32 crop was randomly sampled. SVHN [17] is a digit classification benchmark dataset that contains 73,257 images in the training set, 26,032 images in the test set, and 531,131 images in the extra set. We merged the extra set and the training set in our experiments, following [2]. The only preprocessing was to divide the intensities by 255.

Detailed architectures of the various VGG networks are described in [18]. We used 512 neurons in the fully connected layers rather than 4096, and a BN layer was placed before every ReLU activation layer. The learnable scaling parameter in the BN layer was set to one, because this does not reduce the expressive power of the ReLU layer [21]. For SVHN experiments using VGG networks, dropout was applied after the pooling layer with dropout rate 0.4.
For wide residual networks, we adopted exactly the same model architectures as in [2], including the BN and dropout layers. In all cases, biases were removed except in the final layer. For the baseline, the networks were trained by SGD with Nesterov momentum [22]. The weight decay was set to 0.0005, the momentum to 0.9, and the minibatch size to 128. For the CIFAR experiments, the initial learning rate was set to 0.1 and multiplied by 0.2 at 60, 120, and 160 epochs, for a total of 200 training epochs. For SVHN, the initial learning rate was set to 0.01 and multiplied by 0.1 at 60 and 120 epochs, for a total of 160 training epochs.

For the proposed methods, we used different learning rates for the weights in Euclidean space and on Grassmann manifolds, denoted ηe and ηg, respectively. The selected initial learning rates were ηe = 0.01, ηg = 0.2 for Algorithm 2 and ηe = 0.01, ηg = 0.05 for Algorithm 3. The same initial learning rates were used for all CIFAR experiments. For SVHN, they were scaled by 1/10, following the same ratio as the baseline [2]. The training algorithm for the Euclidean parameters was identical to the one used in the baseline, with one exception: we did not apply weight decay to the scaling and offset parameters of BN, whereas the baseline did, as in [2]. To clarify, applying weight decay to the scaling and offset parameters of BN was essential for reproducing the performance of the baseline. The learning rate schedule was also identical to the baseline, both for ηe and ηg. The threshold ν for clipping the gradient was set to 0.1. The regularization strength α in Eq. (15) was set to 0.1, which gradually achieved near-zero L_O during the course of training, as shown in Fig. 2.

Figure 2: Changes in L_O in Eq. (15) during training for various α values (y-axis on the left). The red dotted line denotes the learning rate (ηg, y-axis on the right). VGG-11 was trained by SGD-G on CIFAR-10.
6.1 Results

Tables 1 and 2 compare the performance of the baseline SGD and the two proposed algorithms described in Secs. 4 and 5 on the CIFAR-10, CIFAR-100, and SVHN datasets. All reported numbers are the median of five independent runs. In most cases, the networks trained using the proposed algorithms

Figure 3: Training curves of the baseline and proposed optimization methods. (a) WRN-28-10 on CIFAR-10. (b) WRN-28-10 on CIFAR-100. (c) WRN-22-8 on SVHN.

outperformed the baseline across various datasets and network configurations, especially for the models with more parameters. The best performance was 3.72% (SGD and SGD-G) on CIFAR-10 and 17.85% (Adam-G) on CIFAR-100, both with WRN-40-10, and 1.55% (Adam-G) on SVHN with WRN-22-8. Training curves of the baseline and proposed methods are presented in Figure 3. The training curves for SGD suffer from instability or experience a plateau after each learning rate drop, compared to the proposed methods. We believe that this comes from the inverse proportionality of the gradient to the norm of the BN weight parameters (as in Eq. (2)). During training, this norm is affected by weight decay, and hence so is the magnitude of the gradient; this is effectively equivalent to perturbing the learning rate through weight decay. The authors of the wide residual network also observed that applying weight decay caused this phenomenon, but weight decay was indispensable for achieving the reported performance [2]. The proposed methods resolve this issue in a principled way. Table 3 summarizes the performance of recently published algorithms on the same datasets; we report the best performance of five independent runs in that table.

Table 1: Classification error rate of various networks on CIFAR-10 and CIFAR-100 (median of five runs). VGG-l denotes a VGG network with l layers. WRN-d-k denotes a wide residual network with d convolutional layers and a widening factor k.
SGD-G and Adam-G denote Algorithm 2 and Algorithm 3, respectively. The results in parentheses are those reported in [2].

                         CIFAR-10                       CIFAR-100
Model        SGD          SGD-G   Adam-G    SGD            SGD-G   Adam-G
VGG-11       7.43         7.14    7.59      29.25          28.02   28.05
VGG-13       5.88         5.87    6.05      26.17          25.29   24.89
VGG-16       6.32         5.88    5.98      26.84          25.64   25.29
VGG-19       6.49         5.92    6.02      27.62          25.79   25.59
WRN-52-1     6.23 (6.28)  6.56    6.58      27.44 (29.78)  28.13   28.16
WRN-16-4     4.96 (5.24)  5.35    5.28      23.41 (23.91)  24.51   24.24
WRN-28-10    3.89 (3.89)  3.85    3.78      18.66 (18.85)  18.19   18.30
WRN-40-10†   3.72 (3.8)   3.72    3.80      18.39 (18.3)   18.04   17.85
†This model was trained on two GPUs. The gradients were summed over two minibatches of size 64, and BN statistics were calculated from each minibatch.

7 Conclusion and discussion

We presented new optimization algorithms for scale-invariant vectors by representing them on G(1, n) and following the intrinsic geometry. Specifically, we derived SGD with momentum and Adam algorithms on G(1, n). An efficient regularization algorithm in this space has also been proposed. Applying them in the context of BN showed consistent performance improvements over the baseline BN algorithm with SGD on the CIFAR-10, CIFAR-100, and SVHN datasets.

Table 2: Classification error rate of various networks on SVHN (median of five runs).

Model        SGD          SGD-G   Adam-G
VGG-11       2.11         2.10    2.14
VGG-13       1.78         1.74    1.72
VGG-16       1.85         1.76    1.76
VGG-19       1.94         1.81    1.77
WRN-52-1     1.68 (1.70)  1.72    1.67
WRN-16-4     1.64 (1.64)  1.67    1.61
WRN-16-8     1.60 (1.54)  1.69    1.68
WRN-22-8     1.64         1.63    1.55

Table 3: Performance comparison with previously published results.
Method                               CIFAR-10  CIFAR-100  SVHN
NormProp [7]                         7.47      29.24      1.88
ELU [23]                             6.55      24.28
Scalable Bayesian optimization [24]  6.37      27.4
Generalizing pooling [25]            6.05                 1.69
Stochastic depth [26]                4.91      24.98      1.75
ResNet-1001 [20]                     4.62      22.71
Wide residual network [2]            3.8       18.3       1.54
Proposed (best of five runs)         3.49¹     17.59²     1.49³
¹WRN-40-10 + SGD-G  ²WRN-40-10 + Adam-G  ³WRN-22-8 + Adam-G

Our work interprets each scale-invariant piece of the weight matrix as a separate manifold, whereas natural-gradient-based algorithms [27, 28, 29] interpret the whole parameter space as a manifold and constrain the shape of the cost function (i.e., to the KL divergence) to obtain a cost-efficient metric. There are approaches similar to ours, such as Path-SGD [30] and the one based on symmetry-invariant updates [31], but a direct comparison remains to be done. The proposed algorithms are computationally as efficient as their non-manifold counterparts, since they do not affect the forward and backward propagation steps, where the majority of the computation takes place. The weight update step is 2.5-3.5 times more expensive, but still O(n). We did not explore the full range of possibilities offered by the proposed algorithms. For example, techniques similar to BN, such as weight normalization [6] and normalization propagation [7], have scale-invariant weight vectors and can benefit from the proposed algorithms in the same way. Layer normalization [8] is invariant to weight matrix rescaling, and simple vectorization of the weight matrix enables the application of the proposed algorithms.

References

[1] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
[2] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
[3] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna.
Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[4] Alan Edelman, Tomás A Arias, and Steven T Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
[5] P-A Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
[6] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems 29, pages 901–909, 2016.
[7] Devansh Arpit, Yingbo Zhou, Bhargava Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1168–1176, 2016.
[8] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[9] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] P-A Absil, Robert Mahony, and Rodolphe Sepulchre. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematicae, 80(2):199–220, 2004.
[11] M.P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice-Hall, 1976.
[12] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[13] Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24, pages 2348–2356, 2011.
[14] Zoubin Ghahramani, Geoffrey E Hinton, et al. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.
[15] Jihun Hamm and Daniel D Lee.
Extended Grassmann kernels for subspace-based learning. In Advances in Neural Information Processing Systems 21, pages 601–608, 2009.
[16] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[17] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
[21] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. arXiv preprint arXiv:1702.03275, 2017.
[22] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, pages 1139–1147, 2013.
[23] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[24] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Prabhat, and Ryan Adams. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pages 2171–2180, 2015.
[25] Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu.
Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In International Conference on Artificial Intelligence and Statistics, 2016.
[26] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, pages 646–661. Springer, 2016.
[27] Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[28] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In International Conference on Learning Representations, 2014.
[29] James Martens and Roger B Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of The 32nd International Conference on Machine Learning, pages 2408–2417, 2015.
[30] Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2422–2430, 2015.
[31] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015. | 2017 | 164 |
6,635 | Adaptive Clustering through Semidefinite Programming Martin Royer Laboratoire de Mathématiques d'Orsay, Univ. Paris-Sud, CNRS, Université Paris-Saclay, 91405 Orsay, France martin.royer@math.u-psud.fr

Abstract

We analyze the clustering problem through a flexible probabilistic model that aims to identify an optimal partition of the sample X1, ..., Xn. We perform exact clustering with high probability using a convex semidefinite estimator that can be interpreted as a corrected, relaxed version of K-means. The estimator is analyzed in a non-asymptotic framework and shown to be optimal or near-optimal in recovering the partition. Furthermore, its performance is shown to be adaptive to the problem's effective dimension, as well as to K, the unknown number of groups in this partition. We illustrate the method's performance in comparison to other classical clustering algorithms with numerical experiments on simulated high-dimensional data.

1 Introduction

Clustering, a form of unsupervised learning, is the classical problem of assembling n observations X1, ..., Xn from a p-dimensional space into K groups. Applied fields are in need of robust clustering techniques, such as computational biology with genome classification, data mining, or image segmentation in computer vision. But the clustering problem has proven notoriously hard when the embedding dimension is large compared to the number of observations (see for instance the recent discussions in [2, 21]). A famous early approach to clustering is to solve for the geometrical estimator K-means [19, 13, 14]. The intuition behind its objective is that groups are to be determined so as to minimize the total intra-group variance. It can be interpreted as an attempt to "best" represent the observations by K points, a form of vector quantization. Although the method shows great performance when observations are homoscedastic, K-means is an NP-hard, ad hoc method.
Probabilistic approaches to clustering are usually based on maximum likelihood paired with a variant of the EM algorithm for model estimation; see for instance the works of Fraley & Raftery [11] and Dasgupta & Schulman [9]. These methods are widespread and popular, but they tend to be very sensitive to initialization and model misspecification. Several recent developments establish a link between clustering and semidefinite programming. Peng & Wei [17] show that the K-means objective can be relaxed into a convex, semidefinite program, leading Mixon et al. [16] to use this relaxation under a subgaussian mixture model to estimate the cluster centers. Yan and Sarkar [24] use a similar semidefinite program in the context of network clustering when the nodes carry covariates. Chrétien et al. [8] use a slightly different form of semidefinite program to recover the adjacency matrix of the cluster graph with high probability. Lastly, in the different context of variable clustering, Bunea et al. [6] present a semidefinite program with a correction step that produces non-asymptotic exact recovery results. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In this work, we build upon the work and context of [6], transposing and adapting their ideas to point clustering: we introduce a semidefinite estimator for point clustering inspired by the findings of [17], with a correction component originally presented in [6]. We show that it is a very strong contender for clustering recovery in terms of speed, adaptivity and robustness to model perturbations. To do so, we introduce a flexible probabilistic model inducing an optimal partition of the data that we aim to recover.
Using the same proof structure in a different context, we establish elements of stochastic control (see for instance Lemma A.1 on the concentration of random subgaussian Gram matrices in the supplementary material) to derive conditions for exact clustering recovery with high probability and show optimal performance – including in high dimensions, improving on [16] – as well as adaptivity to the effective dimension of the problem. We also show that our results continue to hold without knowledge of the number of structures, given one single positive tuning parameter. Lastly we provide evidence of our method’s efficiency, and further insight, from simulated data. Notation. Throughout this work we use the convention 0/0 := 0 and [n] = {1, ..., n}. We take a_n ≲ b_n to mean that a_n is smaller than b_n up to an absolute constant factor. Let S^{d−1} denote the unit sphere in R^d. For q ∈ N* ∪ {+∞} and ν ∈ R^d, |ν|_q is the l_q-norm, and for M ∈ R^{d×d′}, |M|_q, |M|_F and |M|_op are respectively the entry-wise l_q-norm, the Frobenius norm associated with the scalar product ⟨., .⟩, and the operator norm. |D|_V is the variation semi-norm of a diagonal matrix D, the difference between its maximum and minimum elements. Let A ≽ B mean that A − B is symmetric, positive semidefinite. 2 Probabilistic modeling of point clustering Consider X1, ..., Xn and let ν_a = E[X_a]. The variable X_a can be decomposed into X_a = ν_a + E_a, a = 1, ..., n, (1) with E_a stochastic centered variables in R^p. Definition 1. For K > 1, µ = (µ1, ..., µK) ∈ (R^p)^K, δ ⩾ 0 and G = {G1, ..., GK} a partition of [n], we say X1, ..., Xn are (G, µ, δ)-clustered if ∀k ∈ [K], ∀a ∈ G_k, |ν_a − µ_k|_2 ⩽ δ. We then call ∆(µ) := min_{k<l} |µ_k − µ_l|_2 (2) the separation between the cluster means, and ρ(G, µ, δ) := ∆(µ)/δ (3) the discriminating capacity of (G, µ, δ). In this work we assume that X1, ..., Xn are (G, µ, δ)-clustered.
Notice that this definition does not impose any constraint on the data: for any given G, there exists a choice of means µ and a radius δ large enough so that X1, ..., Xn are (G, µ, δ)-clustered. But we are interested in partitions with greater discriminating capacity, i.e. that make more sense in terms of group separation. Indeed, remark that if ρ(G, µ, δ) < 2, the population clusters {ν_a}_{a∈G1}, ..., {ν_a}_{a∈GK} are not linearly separable, whereas a high ρ(G, µ, δ) implies that they are well-separated from each other. Furthermore, we have the following result. Proposition 1. Let (G*_K, µ*, δ*) ∈ arg max ρ(G, µ, δ) over (G, µ, δ) such that X1, ..., Xn are (G, µ, δ)-clustered and |G| = K. If ρ(G*_K, µ*, δ*) > 4 then G*_K is the unique maximizer of ρ(G, µ, δ). So G*_K is the partition maximizing the discriminating capacity over partitions of size K. Therefore in this work, we will assume that there is a K > 1 such that X1, ..., Xn is (G, µ, δ)-clustered with |G| = K and ρ(G, µ, δ) > 4. By Proposition 1, G is then identifiable. It is the partition we aim to recover. We also assume that X1, ..., Xn are independent observations with subgaussian behavior. Instead of the classical isotropic definition of a subgaussian random vector (see for example [20]), we use a more flexible definition that can account for anisotropy. Definition 2. Let Y be a random vector in R^d; Y has a subgaussian distribution if there exists Σ ∈ R^{d×d} such that ∀x ∈ R^d, E[exp(xᵀ(Y − E[Y]))] ⩽ exp(xᵀΣx/2). (4) We then call Σ a variance-bounding matrix of the random vector Y, and write in shorthand Y ∼ subg(Σ). Note that Y ∼ subg(Σ) implies Cov(Y) ≼ Σ in the semidefinite sense of the inequality. To sum up our modeling assumptions in this work: Hypothesis 1. Let X1, ..., Xn be independent, subgaussian, and (G, µ, δ)-clustered with ρ(G, µ, δ) > 4.
Remark that the model of Hypothesis 1 can be connected to another popular probabilistic model: if we further ask that X1, ..., Xn are identically distributed within a group (and hence δ = 0), the model becomes a realization of a mixture model. 3 Exact partition recovery with high probability Let G = {G1, ..., GK} and let m := min_{k∈[K]} |G_k| denote the minimum cluster size. G can be represented by its characteristic matrix B* ∈ R^{n×n} defined as ∀(k, l) ∈ [K]², ∀(a, b) ∈ G_k × G_l, B*_{ab} := 1/|G_k| if k = l, and 0 otherwise. In what follows, we will demonstrate the recovery of G through recovering its characteristic matrix B*. We introduce the sets of square matrices C^{0,1}_K := {B ∈ R^{n×n}_+ : Bᵀ = B, tr(B) = K, B1_n = 1_n, B² = B} (5) C_K := {B ∈ R^{n×n}_+ : Bᵀ = B, tr(B) = K, B1_n = 1_n, B ≽ 0} (6) C := ∪_{K∈N} C_K. (7) We have C^{0,1}_K ⊂ C_K ⊂ C, and C_K is convex. Notice that B* ∈ C^{0,1}_K. A result of Peng and Wei [17] shows that the K-means estimator ¯B can be expressed as ¯B = arg max_{B∈C^{0,1}_K} ⟨bΛ, B⟩ (8) for bΛ := (⟨X_a, X_b⟩)_{(a,b)∈[n]²} ∈ R^{n×n}, the observed Gram matrix. Therefore a natural relaxation is to consider the following estimator: bB := arg max_{B∈C_K} ⟨bΛ, B⟩. (9) Notice that E[bΛ] = Λ + Γ for Λ := (⟨ν_a, ν_b⟩)_{(a,b)∈[n]²} ∈ R^{n×n} and Γ := (E[⟨E_a, E_b⟩])_{(a,b)∈[n]²} = diag(tr(Var(E_a)))_{1⩽a⩽n} ∈ R^{n×n}. The following two results demonstrate that Λ is the signal structure that leads the optimizations of (8) and (9) to recover B*, whereas Γ is a bias term that can hurt the recovery process. Proposition 2. There exists an absolute constant c0 > 1 such that if ρ²(G, µ, δ) > c0(6 + √n/m) and m∆²(µ) > 8|Γ|_V, then we have arg max_{B∈C^{0,1}_K} ⟨Λ + Γ, B⟩ = B* = arg max_{B∈C_K} ⟨Λ + Γ, B⟩. (10) This proposition shows that the bB estimator, as well as the K-means estimator, would recover the partition G on the population Gram matrix if the variation semi-norm of Γ were sufficiently small compared to the cluster separation.
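To make the characteristic matrix concrete, here is a small NumPy sketch (the function name is ours) that builds B* from a label vector; the assertions at the end check the defining properties of C^{0,1}_K: trace equal to K, row sums equal to one, idempotency, and positive semidefiniteness.

```python
import numpy as np

def characteristic_matrix(labels):
    """Characteristic matrix of a partition: B*_ab = 1/|G_k| if a and b
    belong to the same group G_k, and 0 otherwise."""
    labels = np.asarray(labels)
    B = np.zeros((len(labels), len(labels)))
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        B[np.ix_(idx, idx)] = 1.0 / len(idx)  # constant block on each group
    return B

# Two groups: G_1 = {0, 1} and G_2 = {2, 3, 4}, so K = 2
B = characteristic_matrix([0, 0, 1, 1, 1])
assert np.isclose(np.trace(B), 2)               # tr(B*) = K
assert np.allclose(B @ np.ones(5), np.ones(5))  # B* 1_n = 1_n
assert np.allclose(B @ B, B)                    # B*^2 = B* (projection)
assert np.linalg.eigvalsh(B).min() >= -1e-9     # B* is PSD
```

Since B* is an orthogonal projection onto the span of the group indicator vectors, it belongs to both C^{0,1}_K and its convex relaxation C_K.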
Notice that to recover the partition on the population version, we require the discriminating capacity to grow as fast as (1 + √n/m)^{1/2} instead of simply 1 from Hypothesis 1. The following proposition demonstrates that if the condition on the variation semi-norm of Γ is not met, G may not even be recovered on the population version. Proposition 3. There exist G, µ, δ and Γ such that ρ²(G, µ, δ) = +∞ but m∆²(µ) < 2|Γ|_V, and B* ∉ arg max_{B∈C^{0,1}_K} ⟨Λ + Γ, B⟩ and B* ∉ arg max_{B∈C_K} ⟨Λ + Γ, B⟩. (11) So Proposition 3 shows that even if the population clusters are perfectly discriminated, there is a configuration of the noise variances that makes it impossible to recover the right clustering by K-means. This shows that K-means may fail when the homoscedasticity assumption is violated, and that it is important to correct for Γ = diag(tr(Var(E_a)))_{1⩽a⩽n}. Suppose we produce such an estimator bΓcorr. Then subtracting bΓcorr from bΛ can be interpreted as a correcting term, i.e. a way to de-bias bΛ as an estimator of Λ. Hence the previous results demonstrate the interest of studying the following semidefinite estimator of the projection matrix B*: let bBcorr := arg max_{B∈C_K} ⟨bΛ − bΓcorr, B⟩. (12) In order to demonstrate the recovery of B* by this estimator, we introduce different quantitative measures of the "spread" of our stochastic variables, which affect the quality of the recovery. By Hypothesis 1 there exist Σ1, ..., Σn such that ∀a ∈ [n], X_a ∼ subg(Σ_a). Let σ² := max_{a∈[n]} |Σ_a|_op, V² := max_{a∈[n]} |Σ_a|_F, γ² := max_{a∈[n]} tr(Σ_a). (13) We now produce bΓcorr. Since there is no relation between the variances of the points in our model, there is very little hope of estimating Var(E_a). As for our quantity of interest tr(Var(E_a)), a form of volume, a rough estimation is challenging but possible. The estimator from [6] can be adapted to our context.
For (a, b) ∈ [n]², let V(a, b) := max_{(c,d)∈([n]\{a,b})²} |⟨X_a − X_b, (X_c − X_d)/|X_c − X_d|_2⟩|, bb1 := arg min_{b∈[n]\{a}} V(a, b) and bb2 := arg min_{b∈[n]\{a,bb1}} V(a, b). Then for a ∈ [n], let bΓcorr := diag(⟨X_a − X_{bb1}, X_a − X_{bb2}⟩)_{a∈[n]}. (14) Proposition 4. Assume that m > 2. For absolute constants c6, c7 > 0, with probability larger than 1 − c6/n we have |bΓcorr − Γ|_∞ ⩽ c7(σ² log n + (δ + σ√log n)γ + δ²). (15) So apart from the radius δ terms, which come from generous model assumptions, a proxy for Γ is produced at a σ² log n rate that we could not expect to improve on. Nevertheless, this control on Γ is key to attaining the optimal rates below. It is general and completely independent of the structure of G, as there is no relation between G and Γ. We are now ready to introduce this paper’s main result: a condition on the separation between the cluster means sufficient for ensuring recovery of B* with high probability. Theorem 1. Assume that m > 2. For absolute constants c1, c2 > 0, if m∆²(µ) ⩾ c2(σ²(n + m log n) + V²√(n + m log n) + γ(σ√log n + δ) + δ²(√n + m)), (16) then with probability larger than 1 − c1/n we have bBcorr = B*, and therefore bGcorr = G. We call the right-hand side of (16) the separating rate. Notice that two kinds of requirements come from the separating rate: requirements on the radius δ, and requirements on σ², V², γ that depend on the distributions of the observations. It appears that δ + σ√log n can be interpreted as a geometrical width of our problem. If we ask that δ is of the same order as σ√log n, a maximal Gaussian deviation for n variables, then all conditions on δ in (16) can be removed. Thus, for convenience of the following discussion, we will now assume δ ≲ σ√log n. How optimal is the result of Theorem 1? Notice that our result is adapted to anisotropy in the noise, but to discuss optimality it is easier to look at the isotropic scenario: V² = √p σ² and γ² = pσ². Then ∆²(µ)/σ² represents a signal-to-noise ratio.
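The correction estimator (14) can be transcribed directly, if inefficiently; the sketch below is our own unoptimized NumPy rendering (nested loops over all point pairs, matching the roughly O(n⁴p) cost mentioned for bΓcorr later in the experiments), not the paper's actual implementation.

```python
import numpy as np

def gamma_corr(X):
    """Sketch of (14): for each point a, pick the two 'neighbors' b1, b2
    minimizing V(a, b), then estimate tr(Var(E_a)) by <X_a-X_b1, X_a-X_b2>."""
    n = X.shape[0]
    gamma = np.zeros(n)
    for a in range(n):
        V = np.full(n, np.inf)
        for b in range(n):
            if b == a:
                continue
            others = [i for i in range(n) if i not in (a, b)]
            v = 0.0
            for c in others:           # V(a,b) = max over pairs (c,d), c != d,
                for d in others:       # of |<X_a - X_b, (X_c - X_d)/|X_c - X_d|>|
                    if c == d:
                        continue
                    u = X[c] - X[d]
                    v = max(v, abs(np.dot(X[a] - X[b], u / np.linalg.norm(u))))
            V[b] = v
        b1 = int(np.argmin(V))
        V[b1] = np.inf
        b2 = int(np.argmin(V))
        gamma[a] = np.dot(X[a] - X[b1], X[a] - X[b2])
    return np.diag(gamma)

# Tiny demo on random data: n = 6 points in R^3
rng = np.random.default_rng(0)
G_hat = gamma_corr(rng.standard_normal((6, 3)))
```

The output is the diagonal matrix subtracted from the Gram matrix in (12); only its diagonal carries information.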
For simplicity, let us also assume that all groups have equal size, that is |G1| = ... = |GK| = m so that n = mK, and the sufficient condition (16) becomes ∆²(µ)/σ² ≳ K + log n + √((K + log n)pK/n). (17) Optimality. To discuss optimality, we distinguish between low- and high-dimensional setups. In the low-dimensional setup n ∨ m log n ≳ p, we obtain the following condition: ∆²(µ)/σ² ≳ K + log n. (18) Discriminating with high probability between n observations from two Gaussians in dimension 1 would require a separating rate of at least σ² log n. This implies that when K ≲ log n, our result is minimax. Otherwise, to our knowledge the best clustering result on approximating mixture centers is from [16], under the condition ∆²(µ)/σ² ≳ K². Furthermore, the K ≳ log n regime is known in the stochastic-block-model community as a hard regime where a gap is surmised to exist between the minimal information-theoretic rate and the minimal achievable computational rate (see for example [7]). In the high-dimensional setup n ∨ m log n ≲ p, condition (17) becomes: ∆²(µ)/σ² ≳ √((K + log n)pK/n). (19) There are few information-theoretic bounds for high-dimensional clustering. Recently, Banks, Moore, Vershynin, Verzelen and Xu (2017) [3] proved a lower bound for Gaussian mixture clustering detection, namely they require a separation of order √(K(log K)p/n). When K ≲ log n, our condition only differs in that it replaces log K by log n, a price to pay for going from detecting the clusters to exactly recovering them. Otherwise, when K grows faster than log n, there might exist a gap between the minimal possible rate and the achievable one, as discussed previously. Adaptation to effective dimension. We can analyse condition (16) further by introducing an effective dimension r*, measuring the largest volume repartition of our variance-bounding matrices Σ1, ..., Σn. We will show that our estimator adapts to this effective dimension.
Let r* := γ²/σ² = max_{a∈[n]} tr(Σ_a) / max_{a∈[n]} |Σ_a|_op. (20) r* can also be interpreted as a form of global effective rank of the matrices Σ_a. Indeed, define Re(Σ) := tr(Σ)/|Σ|_op; then we have r* ⩽ max_{a∈[n]} Re(Σ_a) ⩽ max_{a∈[n]} rank(Σ_a) ⩽ p. Now using V² ⩽ √r* σ² and γ = √r* σ, condition (16) can be written as ∆²(µ)/σ² ≳ K + log n + √((K + log n)r*K/n). (21) Comparing this equation to (17), notice that r* takes the place of p, indeed playing the role of an effective dimension for the problem. This shows that our estimator adapts to this effective dimension, without the use of any dimension-reduction step. In consequence, equation (21) distinguishes between an actual high-dimensional setup, n ∨ m log n ≲ r*, and a "low"-dimensional setup, r* ≲ n ∨ m log n, under which, regardless of the actual value of p, our estimator recovers under the near-minimax condition (18). This clarifies the effect of the correcting term bΓcorr in the theorem above when n + m log n ≲ r*. The uncorrected version of the semidefinite program (9) has a leading separating rate of γ²/m = σ²r*/m; with the bΓcorr correction, on the other hand, (21) has a leading separating factor smaller than σ²√((K + log n)r*/m) = σ²√(n + m log n) × √r*/m. This proves that in a high-dimensional setup, our correction improves the separating rate by at least a factor of √((n + m log n)/r*). 4 Adaptation to the unknown number of groups K It is rarely the case that K is known, but we can proceed without it. We produce an estimator adaptive to the number of groups K: let bκ ∈ R_+; we now study the following adaptive estimator: eBcorr := arg max_{B∈C} ⟨bΛ − bΓcorr, B⟩ − bκ tr(B). (22) Theorem 2. Suppose that m > 2 and (16) is satisfied.
For absolute constants c3, c4, c5 > 0, suppose that the following condition on bκ is satisfied: c4(V²√n + σ²n + γ(σ√log n + δ) + δ²√n) < c5 bκ < m∆²(µ), (23) then eBcorr = B* with probability larger than 1 − c3/n. Notice that condition (23) essentially requires bκ to sit between some components of the right-hand side of (16) and m∆²(µ). So under (23), the results from the previous section apply to the adaptive estimator eBcorr as well, and this shows that it is not necessary to know K in order to perform well in recovering G. Finding an optimized, data-driven parameter bκ through some form of cross-validation is outside the scope of this paper. 5 Numerical experiments We illustrate our method on simulated Gaussian data in two challenging, high-dimensional experimental setups for comparing clustering estimators. Our sample of n = 100 points is drawn from K = 5 identically-sized, perfectly discriminated, non-isovolumic Gaussian clusters, that is, ∀k ∈ [K], ∀a ∈ G_k, E_a ∼ N(0, Σ_k), with |G1| = ... = |GK| = 20. The distributions are chosen to be isotropic, and the ratio between the lowest and the highest standard deviation is 1 to 10. We draw points of an R^p space in two different scenarios. In (S1), for a given space dimension p = 500 and a fixed isotropic noise level, we report the algorithms’ performance as the signal-to-noise ratio ∆²(µ)/σ² is increased from 1 to 15. In (S2) we impose a fixed signal-to-noise ratio and observe the algorithms’ decay in performance as the space dimension p is increased from 10² to 10⁵ (logarithmic scale). All reported points of the simulated space represent a hundred simulations, and indicate a median value with asymmetric standard deviations in the form of error bars. Solving for the estimator bBcorr is a hard problem as n grows. For this task we implemented an ADMM solver following the work of Boyd et al. [4], with multiple stopping criteria including a fixed maximum of T = 1000 iterations.
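A generator in the spirit of scenario (S1) takes only a few lines. The sketch below is our own illustration, not the paper's exact protocol: the function name, the seed, and the choice of placing the means on orthogonal axes (so that all pairwise separations equal ∆(µ) exactly) are assumptions.

```python
import numpy as np

def simulate_clusters(n=100, K=5, p=500, snr=6.0, seed=0):
    """Draw n points in R^p from K equal-size isotropic Gaussian clusters
    whose standard deviations range from 1 to 10 (non-isovolumic noise),
    with means separated so that Delta^2(mu) / sigma^2 = snr."""
    rng = np.random.default_rng(seed)
    sigmas = np.linspace(1.0, 10.0, K)      # heteroscedastic noise levels
    delta = np.sqrt(snr) * sigmas.max()     # required separation Delta(mu)
    means = np.zeros((K, p))
    for k in range(K):                      # orthogonal means: any two are
        means[k, k] = delta / np.sqrt(2)    # exactly at distance delta
    labels = np.repeat(np.arange(K), n // K)
    X = means[labels] + sigmas[labels][:, None] * rng.standard_normal((n, p))
    return X, labels

X, labels = simulate_clusters()
```

Feeding such draws to the compared estimators and averaging adjusted mutual information over seeds reproduces the shape of experiment (S1).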
The complexity of the optimization is then roughly O(Tn³). For reference, we compare the recovery capacity of bGcorr, labeled ’pecok’ in Figure 1, with other classical clustering algorithms. We chose three standard clustering procedures: Lloyd’s K-means algorithm [13] with a thousand K-means++ initializations of [1] (although in scenario (S2) the algorithm is too slow to converge as p grows, so we do not report it), Ward’s method for hierarchical clustering [23], and the low-rank clustering algorithm applied to the Gram matrix, a spectral method appearing in McSherry [15]. Lastly, we include the CORD algorithm from Bunea et al. [5]. We measure the performance of the estimators by computing the adjusted mutual information (see for instance [22]) between the truth and its estimate. In the two experiments, the results of bGcorr are markedly better than those of the other methods. Scenario (S1) shows it can achieve exact recovery with a smaller signal-to-noise ratio than its competitors, whereas scenario (S2) shows its performance starts to decay much later than the other methods’ as the space dimension is increased exponentially. Table 1 summarizes the simulations in a different light: for the parameter values on each line, we count the number of experiments (out of a hundred) that had an adjusted mutual information score of 0.9 or higher. This accounts for exact recoveries, or approximate recoveries that reasonably reflected the underlying truth. In this table it is also evident that bGcorr performs uniformly better, be it for exact or approximate recovery: it manages to recover the underlying truth much sooner in terms of signal-to-noise ratio, and for a given signal-to-noise ratio it represents the truth better as the embedding dimension increases. Lastly, Table 1 provides the median computing time in seconds for each method over the entire experiment. bGcorr comes with substantial computation times because bΓcorr is very costly to compute.
Our method is computationally intensive, but of polynomial order. Solving semidefinite programs is a rapidly developing field of operations research, and even though we used the classical ADMM method of [4], which proved effective, this instance of the program could certainly be implemented more efficiently by a domain expert. All of the compared methods have a very hard time reaching high sample sizes n in the high-dimensional context. The PYTHON3 implementation of the method used is found in open access here: martinroyer/pecok [18]

[Figure 1 plots the adjusted mutual information adj_mi(G, bG) against the SNR ∆²(µ)/σ² for scenario (S1), and against the dimension p from 10² to 10⁵ for scenario (S2), for the methods kmeans++, pecok, lowrank-spectral, hierarchical and cord.] Figure 1: Performance comparison for clustering estimators and bGcorr, labeled ’pecok4’ in reference to [6]. The adjusted mutual information equals 1 when the clusterings are identical, 0 when they are independent.

                      hierarchical  kmeans++  lowrank-spectral  pecok4          cord
(S1)  90% SNR=4.75    0             0         0                 51              0
      90% SNR=6       0             0         0                 100             0
      90% SNR=7.25    18            0         12                100             26
      90% SNR=8.5     100           0         100               100             76
      med time (s)    0.01          2.76      0.23              1.84 (+18.92)¹  0.76
(S2)  90% dim=10²     100           /         100               100             94
      90% dim=10³     0             /         0                 100             31
      90% dim=5·10³   0             /         0                 100             0
      90% dim=10⁴     0             /         0                 49              0
      med time (s)    0.14          ∞         0.19              1.94 (+68.12)¹  0.68

Table 1: Approximate recovery results for experiments (S1) and (S2): number of experiments, out of a hundred, that had a score of 90% or higher, and computing times over the experiments. ¹The median time in parentheses is the time to compute bΓcorr, as opposed to the main time for performing the SDP. Indeed bΓcorr is very time-consuming; its cost is roughly O(n⁴p). It must be noted that much faster alternatives, such as the one presented in [6], perform equally well (there is no significant difference in performance) for the recovery of G, but this is outside the scope of this paper.
6 Conclusion In this paper we analyzed a new semidefinite-programming algorithm for point clustering within the context of a flexible probabilistic model, and exhibited the key quantities that guarantee non-asymptotic exact recovery. The algorithm includes an essential bias-removing correction that significantly improves the recovery rate in the high-dimensional setup. We showed the estimator to be near-minimax and adapted to an effective dimension of the problem. We also demonstrated that our estimator can be optimally adapted to a data-driven choice of K, with a single tuning parameter. Lastly, we illustrated on high-dimensional experiments that our approach is empirically stronger than other classical clustering methods. The bΓcorr correction step of the algorithm can be interpreted as an independent denoising step for the Gram matrix, and we recommend using such a procedure wherever the probabilistic framework we developed seems appropriate. In practice, it is generally more realistic to look at approximate clustering results, but in this work we chose the point of view of exact clustering to investigate theoretical properties of our estimator. Our experimental results provide evidence that this choice is not restrictive, i.e. that our findings translate very well to approximate recovery. We expect our results to hold with similar rates for approximate clustering, up to some logarithmic terms. One could think of adapting works on community detection by Guédon and Vershynin [12] based on Grothendieck’s inequality, or work by Fei and Chen [10] from the stochastic-block-model community on similar semidefinite programs. In fact, referring to a detection bound by Banks, Moore, Vershynin, Verzelen and Xu (2017) [3], our only margin for improvement on the separation rate is to turn the logarithmic factor √log n into √log K when the number of clusters K is of order O(log n) – otherwise the problem is rather open.
As for the robustness of this procedure, a few aspects are to be considered: the algorithm we studied solves a convexified objective, so its performance is empirically more stable than that of a non-convex objective, especially in the high-dimensional context. In this work we also benefit from a permissive probabilistic framework that allows multiple deviations from the classical Gaussian cluster model, at no price in terms of the performance of our estimator. Points from a same cluster are allowed to have significantly different means or fluctuations, and the results for exact recovery with high probability are unchanged, near-minimax and adaptive. Likewise, on simulated data the estimator proves the most efficient in exact as well as approximate recovery. Acknowledgements This work is supported by a public grant overseen by the French National Research Agency (ANR) as part of the “Investissement d’Avenir" program, through the “IDI 2015" project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02. It is also supported by the CNRS PICS funding HighClust. We thank Christophe Giraud for a shrewd, unwavering thesis direction. References [1] D. Arthur and S. Vassilvitskii. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’07, pages 1027–1035, Philadelphia, PA, USA, 2007. Society for Industrial and Applied Mathematics. [2] M. Azizyan, A. Singh, and L. Wasserman. Minimax theory for high-dimensional Gaussian mixtures with sparse mean separation. In Proceedings of the 26th International Conference on Neural Information Processing Systems, NIPS’13, pages 2139–2147, USA, 2013. Curran Associates Inc. [3] J. Banks, C. Moore, N. Verzelen, R. Vershynin, and J. Xu. Information-theoretic bounds and phase transitions in clustering, sparse PCA, and submatrix localization. arXiv e-prints arXiv:1607.05222, July 2016. [4] S. Boyd, N. Parikh, E. Chu, B.
Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, January 2011. [5] F. Bunea, C. Giraud, and X. Luo. Minimax optimal variable clustering in G-models via CORD. arXiv preprint arXiv:1508.01939, 2015. [6] F. Bunea, C. Giraud, M. Royer, and N. Verzelen. PECOK: a convex optimization approach to variable clustering. arXiv e-prints arXiv:1606.05100, June 2016. [7] Y. Chen and J. Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. Journal of Machine Learning Research, 17(27):1–57, 2016. [8] S. Chrétien, C. Dombry, and A. Faivre. A semi-definite programming approach to low dimensional embedding for unsupervised clustering. CoRR, abs/1606.09190, 2016. [9] S. Dasgupta and L. Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. J. Mach. Learn. Res., 8:203–226, May 2007. [10] Y. Fei and Y. Chen. Exponential error rates of SDP for block models: Beyond Grothendieck’s inequality. arXiv e-prints arXiv:1705.08391, May 2017. [11] C. Fraley and A. E. Raftery. Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97(458):611–631, 2002. [12] O. Guédon and R. Vershynin. Community detection in sparse networks via Grothendieck’s inequality. CoRR, abs/1411.4686, 2014. [13] S. Lloyd. Least squares quantization in PCM. IEEE Trans. Inf. Theor., 28(2):129–137, September 1982. [14] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, pages 281–297, Berkeley, Calif., 1967. University of California Press. [15] F. McSherry. Spectral partitioning of random graphs.
In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, FOCS ’01, pages 529–, Washington, DC, USA, 2001. IEEE Computer Society. [16] D. G. Mixon, S. Villar, and R. Ward. Clustering subgaussian mixtures with k-means. In 2016 IEEE Information Theory Workshop (ITW), pages 211–215, Sept 2016. [17] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM J. on Optimization, 18(1):186–205, February 2007. [18] Martin Royer. ADMM implementation of PECOK. https://github.com/martinroyer/pecok, October 2017. [19] H. Steinhaus. Sur la division des corps matériels en parties. Bull. Acad. Polon. Sci, 1:801–804, 1956. [20] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Chapter 5 of: Compressed Sensing, Theory and Applications. Cambridge University Press, 2012. [21] N. Verzelen and E. Arias-Castro. Detection and feature selection in sparse mixture models. arXiv e-prints arXiv:1405.1478, May 2014. [22] Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. J. Mach. Learn. Res., 11:2837–2854, December 2010. [23] J. H. Ward. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236–244, 1963. [24] B. Yan and P. Sarkar. Convex relaxation for community detection with covariates. arXiv e-prints arXiv:1607.02675, July 2016. | 2017 | 165 |
6,636 | #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning Haoran Tang1∗, Rein Houthooft34∗, Davis Foote2, Adam Stooke2, Xi Chen2†, Yan Duan2†, John Schulman4, Filip De Turck3, Pieter Abbeel 2† 1 UC Berkeley, Department of Mathematics 2 UC Berkeley, Department of Electrical Engineering and Computer Sciences 3 Ghent University – imec, Department of Information Technology 4 OpenAI Abstract Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows their occurrences to be counted with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
1 Introduction Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning through trial and error to maximize rewards. It is impossible for the agent to act near-optimally until it has sufficiently explored the environment and identified all of the opportunities for high reward, in all scenarios. A core challenge in RL is how to balance exploration—actively seeking out novel states and actions that might yield high rewards and lead to long-term gains; and exploitation—maximizing short-term rewards using the agent’s current knowledge. While there are exploration techniques for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high-dimensional state spaces; therefore, developing more general and robust exploration techniques is an active area of research. ∗These authors contributed equally. Correspondence to: Haoran Tang <hrtang@math.berkeley.edu>, Rein Houthooft <rein.houthooft@openai.com> †Work done at OpenAI 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Most of the recent state-of-the-art RL results have been obtained using simple exploration strategies such as uniform sampling [21] and i.i.d./correlated Gaussian noise [19, 30]. Although these heuristics are sufficient in tasks with well-shaped rewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse rewards [25]. Recently developed exploration strategies for deep RL have led to significantly improved performance on environments with sparse rewards. Bootstrapped DQN [24] led to faster learning in a range of Atari 2600 games by training an ensemble of Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance on Montezuma’s Revenge, an extremely challenging Atari 2600 game [4].
Variational Information Maximizing Exploration (VIME, [13]) encourages the agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that works across different domains. Some of the classic, theoretically justified exploration methods are based on counting state-action visitations and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of [18] chooses at time t the action a_t that maximizes r̂(a_t) + √(2 log t / n(a_t)), where r̂(a_t) is the estimated reward and n(a_t) is the number of times action a_t was previously chosen. In the MDP setting, some algorithms have a similar structure; for example, Model Based Interval Estimation–Exploration Bonus (MBIE-EB) of [34] counts state-action pairs with a table n(s, a) and adds a bonus reward of the form β/√(n(s, a)) to encourage exploring less-visited pairs. [16] show that this inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces. This paper presents a simple approach to exploration, which extends classic count-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalizing across states and distinguishing between states. We select problems from rllab [8] and Atari 2600 [3] featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naïve exploration strategies. The main strength of the presented approach is that it is fast, flexible and complementary to most existing RL algorithms.
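The UCB rule described above fits in a few lines of pure Python; the sketch below (function name and dictionary-backed counts are our own choices) returns any untried arm first, then the arm maximizing the estimate-plus-bonus score.

```python
import math

def ucb_action(est_reward, counts, t):
    """Return argmax_a  r_hat(a) + sqrt(2 log t / n(a)); untried arms first."""
    for a, n_a in counts.items():
        if n_a == 0:
            return a  # an untried arm has an effectively infinite bonus
    return max(counts,
               key=lambda a: est_reward[a] + math.sqrt(2 * math.log(t) / counts[a]))

# A rarely tried arm with a slightly lower estimate still wins via its bonus:
choice = ucb_action({0: 0.5, 1: 0.4}, {0: 10, 1: 1}, t=11)
```

The same estimate-plus-bonus shape carries over to MBIE-EB, with the state-action table n(s, a) in place of the per-arm counts.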
In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2), demonstrates its effectiveness on challenging deep RL benchmark problems, and analyzes key components of well-designed hash functions (Section 4).

2 Methodology

2.1 Notation

This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ0, γ, T), in which S is the state space, A the action space, P a transition probability distribution, r : S × A → R a reward function, ρ0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted reward E_{π,P}[ ∑_{t=0}^{T} γ^t r(s_t, a_t) ] over a policy π, which outputs a distribution over actions given a state.

2.2 Count-Based Exploration via Static Hashing

Our approach discretizes the state space with a hash function φ : S → Z. An exploration bonus r⁺ : S → R is added to the reward function, defined as

    r⁺(s) = β / √(n(φ(s))),    (1)

where β ∈ R≥0 is the bonus coefficient. Initially the counts n(·) are set to zero for the whole range of φ. For every state s_t encountered at time step t, n(φ(s_t)) is increased by one. The agent is trained with rewards (r + r⁺), while performance is evaluated as the sum of rewards without bonuses.

Algorithm 1: Count-based exploration through static hashing, using SimHash
1. Define state preprocessor g : S → R^D
2. (In case of SimHash) Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3. Initialize a hash table with values n(·) ≡ 0
4. for each iteration j do
5.   Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6.   Compute hash codes through any LSH method, e.g., for SimHash, φ(s_m) = sgn(A g(s_m))
7.   Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
8.   Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

Note that our approach is a departure from count-based exploration methods such as MBIE-EB, since we use a state count n(s) rather than a state-action count n(s, a). State-action counts n(s, a) are investigated in the Supplementary Material, but no significant performance gains over state counting could be observed. A possible reason is that the policy itself is sufficiently random to try most actions at a novel state.

Clearly, the performance of this method strongly depends on the choice of hash function φ. One important choice is the granularity of the discretization: we would like "distant" states to be counted separately while "similar" states are merged. If desired, prior knowledge can be incorporated into the choice of φ, for instance when a set of salient state features is known to be relevant. A short discussion on this matter is given in the Supplementary Material.

Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics [2]. A computationally efficient type of LSH is SimHash [6], which measures similarity by angular distance. SimHash retrieves a binary code of state s ∈ S as

    φ(s) = sgn(A g(s)) ∈ {−1, 1}^k,    (2)

where g : S → R^D is an optional preprocessing function and A is a k × D matrix with i.i.d. entries drawn from a standard Gaussian distribution N(0, 1).
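As a rough illustration of Algorithm 1, the sketch below computes SimHash codes and the count-based bonus with a plain hash table. Class and parameter names are hypothetical; in a real setup the bonus would be added to the environment reward inside an RL training loop.

```python
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Count-based exploration bonus via SimHash (sketch of Algorithm 1).

    phi(s) = sgn(A g(s)) with A ~ N(0, 1)^{k x D}; visit counts are kept
    in a hash table keyed by the binary code. `preprocess` plays the role
    of the optional function g.
    """

    def __init__(self, dim, k=32, beta=0.01, preprocess=None, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, dim))
        self.beta = beta
        self.g = preprocess if preprocess is not None else (lambda s: s)
        self.counts = defaultdict(int)

    def _code(self, state):
        bits = np.sign(self.A @ self.g(np.asarray(state, dtype=float)))
        return tuple(bits.astype(int))   # hashable key for the table

    def update(self, state):
        self.counts[self._code(state)] += 1

    def bonus(self, state):
        # r+(s) = beta / sqrt(n(phi(s))); for an unseen code we fall back
        # to the maximal bonus beta (our convention, not from the paper).
        n = self.counts[self._code(state)]
        return self.beta / np.sqrt(n) if n > 0 else self.beta
```

Larger `k` gives a finer discretization (fewer collisions), matching the granularity discussion above; the table also implicitly defines the density model n(φ(s))/N mentioned in the related-work section.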
The value of k controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states.

2.3 Count-Based Exploration via Learned Hashing

When the MDP states have a complex structure, as is the case with image observations, measuring their similarity directly in pixel space fails to provide the semantic similarity measure one would desire. Previous work in computer vision [7, 20, 36] introduces manually designed feature representations of images that are suitable for semantic tasks including detection and classification. More recent methods learn complex features directly from data by training convolutional neural networks [12, 17, 31]. Given these results, it may be difficult for a method such as SimHash to cluster states appropriately using only raw pixels.

Therefore, rather than using SimHash directly, we propose to use an autoencoder (AE) to learn meaningful hash codes in one of its hidden layers, as a more advanced LSH method. This AE takes states s as input and contains one special dense layer comprised of D sigmoid units. By rounding the sigmoid activations b(s) of this layer to the closest binary number ⌊b(s)⌉ ∈ {0, 1}^D, any state s can be binarized. This is illustrated in Figure 1 for a convolutional AE.

A problem with this architecture is that dissimilar inputs s_i, s_j can map to identical hash codes ⌊b(s_i)⌉ = ⌊b(s_j)⌉ while the AE still reconstructs them perfectly. For example, if b(s_i) and b(s_j) have values 0.6 and 0.7 at a particular dimension, the difference can be exploited by the deconvolutional layers to reconstruct s_i and s_j perfectly, even though that dimension rounds to the same binary value. One could imagine replacing the bottleneck layer b(s) with the hash codes ⌊b(s)⌉, but then gradients cannot be back-propagated through the rounding function. A solution proposed by Gregor et al. [10] and Salakhutdinov & Hinton [28] is to inject uniform noise U(−a, a) into the sigmoid activations.

[Figure 1: The autoencoder (AE) architecture for ALE; the solid block represents the dense sigmoidal binary code layer, after which noise U(−a, a) is injected.]

Algorithm 2: Count-based exploration using learned hash codes
1. Define state preprocessor g : S → {0, 1}^D as the binary code resulting from the autoencoder (AE)
2. Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3. Initialize a hash table with values n(·) ≡ 0
4. for each iteration j do
5.   Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6.   Add the state samples {s_m}_{m=0}^{M} to a FIFO replay pool R
7.   if j mod j_update = 0 then
8.     Update the AE loss function in Eq. (3) using samples drawn from the replay pool {s_n}_{n=1}^{N} ∼ R, for example using stochastic gradient descent
9.   Compute g(s_m) = ⌊b(s_m)⌉, the D-dim rounded hash code for s_m learned by the AE
10.  Project g(s_m) to a lower dimension k via SimHash as φ(s_m) = sgn(A g(s_m))
11.  Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
12.  Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

By choosing uniform noise with a > 1/4, the AE is only capable of (always) reconstructing distinct state inputs s_i ≠ s_j if it has learned to spread the sigmoid outputs sufficiently far apart, |b(s_i) − b(s_j)| > ϵ, in order to counteract the injected noise. As such, the loss function over a set of collected states {s_i}_{i=1}^{N} is defined as

    L({s_n}_{n=1}^{N}) = −(1/N) ∑_{n=1}^{N} [ log p(s_n) − (λ/K) ∑_{i=1}^{D} min{(1 − b_i(s_n))², b_i(s_n)²} ],    (3)

with p(s_n) the AE output.
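The binarization pressure term of Eq. (3) and the noise injection are straightforward to compute in isolation. The numpy sketch below is illustrative only: the function names, taking the normalizer K to be the code dimension D, and the noise width a = 0.3 (> 1/4) are our own choices, not from the paper.

```python
import numpy as np

def binarization_penalty(b, lam=10.0):
    """Second term of Eq. (3): (lam / K) * sum_i min{(1 - b_i)^2, b_i^2}.

    b: sigmoid activations of the code layer, shape (D,). The penalty is
    zero exactly when every activation is already 0 or 1, and largest when
    activations hover around 0.5. Here K = D = b.size (our assumption).
    """
    b = np.asarray(b, dtype=float)
    return (lam / b.size) * np.sum(np.minimum((1.0 - b) ** 2, b ** 2))

def noisy_code(b, a=0.3, rng=None):
    """Inject uniform noise U(-a, a) into the code layer during training,
    forcing the AE to spread activations apart to reconstruct distinct
    inputs despite the noise."""
    rng = rng or np.random.default_rng()
    return np.asarray(b, dtype=float) + rng.uniform(-a, a, size=np.shape(b))
```

In a full implementation this penalty would be added to the negative log-likelihood reconstruction term and minimized jointly by the AE optimizer.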
This objective function consists of a negative log-likelihood term and a term that pressures the binary code layer to take on binary values, scaled by λ ∈ R≥0. The reasoning behind this latter term is that, for particular states, a certain sigmoid unit may never be used, so its value might fluctuate around 1/2, causing the corresponding bit in the binary code ⌊b(s)⌉ to flip over the agent's lifetime. Adding this second loss term ensures that an unused bit takes on an arbitrary binary value.

For Atari 2600 image inputs, since the pixel intensities are discrete values in the range [0, 255], we make use of a pixel-wise softmax output layer [37] that shares weights between all pixels. The architectural details are described in the Supplementary Material and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code ⌊b(s)⌉, which can be done through random projection to a lower-dimensional space via SimHash, as in Eq. (2).

On the one hand, it is important that the mapping from state to code remains relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2, line 8). A solution is to downsample the binary code to a very low dimension, or to slow down the training process. On the other hand, the code has to remain relatively unique for states that are distinct yet close together on the image manifold. This is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units: states already well represented by the AE tend to saturate the sigmoid activations, causing the resulting loss gradients to be close to zero and making the code less prone to change.

3 Related Work

Classic count-based methods such as MBIE [33], MBIE-EB, and that of [16] solve an approximate Bellman equation as an inner loop before the agent takes an action [34].
As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with the environment, with value-based [21] or policy gradient-based [22, 30] methods, at limited speed. In addition, while our proposed method is intended to work with contemporary deep RL algorithms, it differs from classical count-based methods in that it relies on visiting unseen states first before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gap between our method and classic theory is an important direction for future research.

A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty [5] but is not restricted to using counting to implement "optimism"; examples include R-Max [5], UCRL [14], and E3 [15]. These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings. Bayesian RL methods [9, 11, 16, 35], which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state spaces have been proposed by [27] and [25].

Another type of exploration is curiosity-based exploration. These methods try to capture the agent's surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to [29] and [26] for an extensive review of curiosity and intrinsic rewards.

Several exploration strategies for deep RL have recently been proposed to handle high-dimensional state spaces. [13] propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics and used as an exploration bonus. [32] propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by [24], using bootstrapped Q-functions.
The most closely related exploration strategy is that of [4], in which an exploration bonus is added that is inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods perform approximate counting to obtain the necessary generalization over unseen states. The difference is that for pseudo-counts a density model has to be designed and learned to achieve good generalization, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection is that our method also implies a density model ρ(s) = n(φ(s))/N over all visited states, where N is the total number of states visited. Another method similar to hashing is proposed by [1], which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems.

4 Experiments

Experiments were designed to investigate and answer the following research questions:

1. Can count-based exploration through hashing improve performance significantly across different domains? How does the proposed method compare to the current state of the art in exploration for deep RL?
2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used?

To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games.
Trust Region Policy Optimization (TRPO, [30]) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, can conveniently ensure stable improvement in the policy performance, and is relatively insensitive to hyperparameter changes. The hyperparameter settings are reported in the Supplementary Material.

4.1 Continuous Control

The rllab benchmark [8] consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in [13]; a description can be found in the Supplementary Material. These tasks are all highly difficult to solve with naïve exploration strategies, such as adding Gaussian noise to the actions.

[Figure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely MountainCar, CartPoleSwingup, SwimmerGather, and HalfCheetah; taken from [8].]

[Figure 3: Mean average return of different algorithms on rllab tasks with sparse rewards: (a) MountainCar, (b) CartPoleSwingup, (c) SwimmerGather, (d) HalfCheetah. The solid line represents the mean average return, while the shaded area represents one standard deviation, over 5 seeds for the baseline and SimHash (the baseline curves happen to overlap with the axis).]

Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME [13] on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Using count-based exploration with hashing, the agent is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather.
4.2 Arcade Learning Environment

The Arcade Learning Environment (ALE, [3]), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide variety of games. In order to demonstrate the effectiveness of the proposed exploration strategy, six games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite, Gravitar, Montezuma's Revenge, Solaris, and Venture. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.1 M steps (the TRPO batch size, corresponding to 0.4 M frames). Policies and value functions are neural networks with identical architectures to [22]. Although the policy and baseline take into account the previous four frames, the counting algorithm only looks at the latest frame.

Table 1: Atari 2600: average total reward after training for 50 M time steps. Boldface numbers indicate best results. Italic numbers are the best among our methods. (Bold/italic formatting is not reproduced in this plain-text table.)

                       Freeway  Frostbite  Gravitar  Montezuma  Solaris  Venture
  TRPO (baseline)         16.5       2869       486          0     2758      121
  TRPO-pixel-SimHash      31.6       4683       468          0     2897      263
  TRPO-BASS-SimHash       28.4       3150       604        238     1201      616
  TRPO-AE-SimHash         33.5       5214       482         75     4467      445
  Double-DQN              33.3       1683       412          0     3068     98.0
  Dueling network          0.0       4672       588          0     2251      497
  Gorila                  11.7        605      1054          4      N/A     1245
  DQN Pop-Art             33.4       3469       483          0     4544     1172
  A3C+                    27.3        507       246        142     2175        0
  pseudo-count            29.2       1450         –       3439        –      369

BASS. To compare with the autoencoder-based learned hash code, we propose using Basic Abstraction of the ScreenShots (BASS, also called Basic; see [3]) as a static preprocessing function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2) most objects are large and monochrome, and 3) winning depends mostly on knowing object locations and motions.
We designed an adapted version of BASS³ that divides the RGB screen into square cells, computes the average intensity of each color channel inside a cell, and assigns the resulting values to bins that uniformly partition the intensity range [0, 255]. Mathematically, let C be the cell size (width and height), B the number of bins, (i, j) the cell location, (x, y) the pixel location, and z the channel; then

    feature(i, j, z) = ⌊ (B / (255 C²)) ∑_{(x,y) ∈ cell(i,j)} I(x, y, z) ⌋.    (4)

Afterwards, the resulting integer-valued feature tensor is converted to an integer hash code (φ(s_t) in line 6 of Algorithm 1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computational overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well.

We compare our results to double DQN [39], dueling network [40], A3C+ [4], double DQN with pseudo-counts [4], Gorila [23], and DQN Pop-Art [38] on the "null op" metric.⁴ We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma's Revenge and Venture, where it captures object locations better than other methods.⁵ TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris. As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma's Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function.
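Eq. (4) amounts to per-cell channel averaging followed by uniform binning. The numpy sketch below is a minimal illustration; the function name and the default cell size C and bin count B are our own choices.

```python
import numpy as np

def bass_features(img, cell=4, bins=8):
    """Adapted BASS (Eq. 4): average each color channel over C x C cells,
    then bin the averages uniformly over [0, 255].

    img: uint8 array of shape (H, W, channels), H and W divisible by `cell`.
    Returns an integer tensor of shape (H/cell, W/cell, channels) with
    values in {0, ..., bins-1}.
    """
    h, w, z = img.shape
    # Block the image into cells: (H/C, C, W/C, C, Z), then average each cell.
    cells = img.astype(float).reshape(h // cell, cell, w // cell, cell, z)
    means = cells.mean(axis=(1, 3))
    # feature = floor(B * mean / 255); clip so a mean of exactly 255 lands
    # in the top bin instead of an out-of-range index.
    feat = np.floor(bins * means / 255.0).astype(int)
    return np.clip(feat, 0, bins - 1)
```

The resulting integer tensor can then be flattened and hashed into the count table, playing the role of φ(s_t) in line 6 of Algorithm 1.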
In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results. A reason why our proposed method does not achieve state-of-the-art performance on all games is that TRPO does not reuse off-policy experience, in contrast to DQN-based algorithms [4, 23, 38], and is hence less efficient in harnessing extremely sparse rewards. This explanation is corroborated by the experiments done in [4], in which A3C+ (an on-policy algorithm) scores much lower than DQN (an off-policy algorithm), while using the exact same exploration bonus.

³The original BASS exploits the fact that at most 128 colors can appear on the screen. Our adapted version does not make this assumption.
⁴The agent takes no action for a random number (within 30) of frames at the beginning of each episode.
⁵We provide videos of example game play and visualizations of the difference between Pixel-SimHash and BASS-SimHash at https://www.youtube.com/playlist?list=PLAd-UMX6FkBQdLNWtY8nH1-pzYJA_1T55

[Figure 4: Atari 2600 games: (a) Freeway, (b) Frostbite, (c) Gravitar, (d) Montezuma's Revenge, (e) Solaris, (f) Venture. The solid line is the mean average undiscounted return per iteration, while the shaded areas represent one standard deviation, over 5 seeds for the baseline, TRPO-pixel-SimHash, and TRPO-BASS-SimHash, and over 3 seeds for TRPO-AE-SimHash.]
5 Conclusions

This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration.

Acknowledgments

We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft was supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).

References
[1] Abel, David, Agarwal, Alekh, Diaz, Fernando, Krishnamurthy, Akshay, and Schapire, Robert E. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.
[2] Andoni, Alexandr and Indyk, Piotr. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 459–468, 2006.
[3] Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[4] Bellemare, Marc G, Srinivasan, Sriram, Ostrovski, Georg, Schaul, Tom, Saxton, David, and Munos, Remi. Unifying count-based exploration and intrinsic motivation.
In Advances in Neural Information Processing Systems 29 (NIPS), pp. 1471–1479, 2016.
[5] Brafman, Ronen I and Tennenholtz, Moshe. R-max: A general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
[6] Charikar, Moses S. Similarity estimation techniques from rounding algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC), pp. 380–388, 2002.
[7] Dalal, Navneet and Triggs, Bill. Histograms of oriented gradients for human detection. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886–893, 2005.
[8] Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John, and Abbeel, Pieter. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1329–1338, 2016.
[9] Ghavamzadeh, Mohammad, Mannor, Shie, Pineau, Joelle, and Tamar, Aviv. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359–483, 2015.
[10] Gregor, Karol, Besse, Frederic, Jimenez Rezende, Danilo, Danihelka, Ivo, and Wierstra, Daan. Towards conceptual compression. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 3549–3557, 2016.
[11] Guez, Arthur, Heess, Nicolas, Silver, David, and Dayan, Peter. Bayes-adaptive simulation-based search with value function approximation. In Advances in Neural Information Processing Systems (NIPS), pp. 451–459, 2014.
[12] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[13] Houthooft, Rein, Chen, Xi, Duan, Yan, Schulman, John, De Turck, Filip, and Abbeel, Pieter. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 1109–1117, 2016.
[14] Jaksch, Thomas, Ortner, Ronald, and Auer, Peter.
Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[15] Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
[16] Kolter, J Zico and Ng, Andrew Y. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 513–520, 2009.
[17] Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS), pp. 1097–1105, 2012.
[18] Lai, Tze Leung and Robbins, Herbert. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[19] Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[20] Lowe, David G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV), pp. 1150–1157, 1999.
[21] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[23] Nair, Arun, Srinivasan, Praveen, Blackwell, Sam, Alcicek, Cagdas, Fearon, Rory, De Maria, Alessandro, Panneershelvam, Vedavyas, Suleyman, Mustafa, Beattie, Charles, Petersen, Stig, et al. Massively parallel methods for deep reinforcement learning.
arXiv preprint arXiv:1507.04296, 2015.
[24] Osband, Ian, Blundell, Charles, Pritzel, Alexander, and Van Roy, Benjamin. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 4026–4034, 2016.
[25] Osband, Ian, Van Roy, Benjamin, and Wen, Zheng. Generalization and exploration via randomized value functions. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 2377–2386, 2016.
[26] Oudeyer, Pierre-Yves and Kaplan, Frederic. What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1:6, 2007.
[27] Pazis, Jason and Parr, Ronald. PAC optimal exploration in continuous space Markov decision processes. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), 2013.
[28] Salakhutdinov, Ruslan and Hinton, Geoffrey. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
[29] Schmidhuber, Jürgen. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
[30] Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[31] Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[32] Stadie, Bradly C, Levine, Sergey, and Abbeel, Pieter. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[33] Strehl, Alexander L and Littman, Michael L. A theoretical analysis of model-based interval estimation. In Proceedings of the 21st International Conference on Machine Learning (ICML), pp. 856–863, 2005.
[34] Strehl, Alexander L and Littman, Michael L. An analysis of model-based interval estimation for Markov decision processes.
Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[35] Sun, Yi, Gomez, Faustino, and Schmidhuber, Jürgen. Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In Proceedings of the 4th International Conference on Artificial General Intelligence (AGI), pp. 41–51, 2011.
[36] Tola, Engin, Lepetit, Vincent, and Fua, Pascal. DAISY: An efficient dense descriptor applied to wide-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5):815–830, 2010.
[37] van den Oord, Aaron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1747–1756, 2016.
[38] van Hasselt, Hado, Guez, Arthur, Hessel, Matteo, and Silver, David. Learning values across many orders of magnitude. arXiv preprint arXiv:1602.07714, 2016.
[39] van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016.
[40] Wang, Ziyu, de Freitas, Nando, and Lanctot, Marc. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1995–2003, 2016. | 2017 | 166 |
6,637 | Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition Naoya Takeishi§, Yoshinobu Kawahara†,‡, Takehisa Yairi§ §Department of Aeronautics and Astronautics, The University of Tokyo †The Institute of Scientific and Industrial Research, Osaka University ‡RIKEN Center for Advanced Intelligence Project {takeishi,yairi}@ailab.t.u-tokyo.ac.jp, ykawahara@sanken.osaka-u.ac.jp Abstract Spectral decomposition of the Koopman operator is attracting attention as a tool for the analysis of nonlinear dynamical systems. Dynamic mode decomposition is a popular numerical algorithm for Koopman spectral analysis; however, we often need to prepare nonlinear observables manually according to the underlying dynamics, which is not always possible since we may not have any a priori knowledge about them. In this paper, we propose a fully data-driven method for Koopman spectral analysis based on the principle of learning Koopman invariant subspaces from observed data. To this end, we propose minimization of the residual sum of squares of linear least-squares regression to estimate a set of functions that transforms data into a form in which the linear regression fits well. We introduce an implementation with neural networks and evaluate performance empirically using nonlinear dynamical systems and applications. 1 Introduction A variety of time-series data are generated from nonlinear dynamical systems, in which a state evolves according to a nonlinear map or differential equation. In summarization, regression, or classification of such time-series data, precise analysis of the underlying dynamical systems provides valuable information to generate appropriate features and to select an appropriate computation method. In applied mathematics and physics, the analysis of nonlinear dynamical systems has received significant interest because a wide range of complex phenomena, such as fluid flows and neural signals, can be described in terms of nonlinear dynamics. 
A classical but popular view of dynamical systems is based on state space models, wherein the behavior of the trajectories of a vector in state space is discussed (see, e.g., [1]). Time-series modeling based on a state space is also common in machine learning. However, when the dynamics are highly nonlinear, analysis based on state space models becomes challenging compared to the case of linear dynamics. Recently, there has been growing interest in operator-theoretic approaches for the analysis of dynamical systems. Operator-theoretic approaches are based on the Perron–Frobenius operator [2] or its adjoint, i.e., the Koopman operator (composition operator) [3], [4]. The Koopman operator defines the evolution of observation functions (observables) in a function space rather than state vectors in a state space. Based on the Koopman operator, the analysis of nonlinear dynamical systems can be lifted to a linear (but infinite-dimensional) regime. Consequently, we can consider modal decomposition, with which the global characteristics of nonlinear dynamics can be inspected [4], [5]. Such modal decomposition has been used intensively for scientific purposes to understand complex phenomena (e.g., [6]–[9]) and also for engineering tasks, such as signal processing and machine learning. In fact, modal decomposition based on the Koopman operator has been utilized in various engineering tasks, including robotic control [10], image processing [11], and nonlinear system identification [12]. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. One of the most popular algorithms for modal decomposition based on the Koopman operator is dynamic mode decomposition (DMD) [6], [7], [13]. An important premise of DMD is that the target dataset is generated from a set of observables that spans a function space invariant to the Koopman operator (referred to as a Koopman invariant subspace).
However, when only the original state vectors are available as the dataset, we must prepare appropriate observables manually according to the underlying nonlinear dynamics. Several methods have been proposed to utilize such observables, including the use of basis functions [14] and reproducing kernels [15]. Note that these methods work well only if appropriate basis functions or kernels are prepared; it is not always possible to prepare such functions if we have no a priori knowledge about the underlying dynamics. In this paper, we propose a fully data-driven method for modal decomposition via the Koopman operator based on the principle of learning Koopman invariant subspaces (LKIS) from scratch using observed data. To this end, we estimate a set of parametric functions by minimizing the residual sum of squares (RSS) of linear least-squares regression, so that the estimated set of functions transforms the original data into a form in which the linear regression fits well. In addition to the principle of LKIS, an implementation using neural networks is described. Moreover, we report the empirical performance of DMD based on the LKIS framework on several nonlinear dynamical systems and applications, which demonstrates the feasibility of LKIS-based DMD as a fully data-driven method for modal decomposition via the Koopman operator.

2 Background

2.1 Koopman spectral analysis

We focus on a (possibly nonlinear) discrete-time autonomous dynamical system

$$x_{t+1} = f(x_t), \quad x \in \mathcal{M}, \quad t \in \mathbb{T} = \{0\} \cup \mathbb{N}, \tag{1}$$

where $\mathcal{M}$ denotes the state space and $(\mathcal{M}, \Sigma, \mu)$ represents the associated probability space. In dynamical system (1), the Koopman operator $\mathcal{K}$ [4], [5] is defined as an infinite-dimensional linear operator that acts on observables $g: \mathcal{M} \to \mathbb{R}$ (or $\mathbb{C}$), i.e.,

$$\mathcal{K} g(x) = g(f(x)), \tag{2}$$

with which the analysis of nonlinear dynamics (1) can be lifted to a linear (but infinite-dimensional) regime. Since $\mathcal{K}$ is linear, let us consider a set of eigenfunctions $\{\varphi_1, \varphi_2, \dots\}$ of $\mathcal{K}$ with eigenvalues $\{\lambda_1, \lambda_2, \dots\}$, i.e., $\mathcal{K}\varphi_i = \lambda_i \varphi_i$ for $i \in \mathbb{N}$, where $\varphi: \mathcal{M} \to \mathbb{C}$ and $\lambda \in \mathbb{C}$. Further, suppose that $g$ can be expressed as a linear combination of this infinite number of eigenfunctions, i.e., $g(x) = \sum_{i=1}^{\infty} \varphi_i(x) c_i$ with a set of coefficients $\{c_1, c_2, \dots\}$. By repeatedly applying $\mathcal{K}$ to both sides of this equation, we obtain the following modal decomposition:

$$g(x_t) = \sum_{i=1}^{\infty} \lambda_i^t \varphi_i(x_0) c_i. \tag{3}$$

Here, the value of $g$ is decomposed into a sum of Koopman modes $w_i = \varphi_i(x_0) c_i$, each of which evolves over time with its frequency and decay rate respectively given by $\angle\lambda_i$ and $|\lambda_i|$, since $\lambda_i$ is a complex value. The Koopman modes and their eigenvalues can be investigated to understand the dominant characteristics of complex phenomena that follow nonlinear dynamics. The above discussion can also be applied straightforwardly to continuous-time dynamical systems [4], [5]. Modal decomposition based on $\mathcal{K}$, often referred to as Koopman spectral analysis, has been receiving attention in nonlinear physics and applied mathematics. In addition, it is a useful tool for engineering tasks including machine learning and pattern recognition; the spectra (eigenvalues) of $\mathcal{K}$ can be used as features of dynamical systems, the eigenfunctions are a useful representation of time-series for various tasks, such as regression and visualization, and $\mathcal{K}$ itself can be used for prediction and optimal control. Several methods have been proposed to compute modal decomposition based on $\mathcal{K}$, such as generalized Laplace analysis [5], [16], the Ulam–Galerkin method [17], and DMD [6], [7], [13]. DMD, which is reviewed in more detail in the next subsection, has received significant attention and has been utilized in various data analysis scenarios (e.g., [6]–[9]). Note that the Koopman operator and modal decomposition based on it can be extended to random dynamical systems actuated by process noise [4], [14], [18]. In addition, Proctor et al. [19], [20] discussed Koopman analysis of systems with control signals.
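As a sanity check of the modal decomposition (3): for linear dynamics the expansion is finite and can be verified numerically, since the eigenfunctions are simply inner products with left-eigenvectors. The following toy sketch is my own illustration, not code from the paper:

```python
import numpy as np

# Linear dynamics x_{t+1} = K x_t: the Koopman decomposition (3) is
# finite, with eigenfunctions phi_i(x) = z_i^H x (z_i: left-eigenvectors).
K = np.array([[0.9, 0.2],
              [0.0, 0.6]])
lam, V = np.linalg.eig(K)     # eigenvalues and right-eigenvectors (columns)
Zh = np.linalg.inv(V)         # rows are z_i^H, normalized so z_i^H v_j = delta_ij

v = np.array([1.0, 1.0])      # observable g(x) = v^T x
x0 = np.array([2.0, -1.0])

# Koopman modes for g: coefficients c_i = v^T v_i in g = sum_i phi_i c_i.
c = V.T @ v
phi0 = Zh @ x0                # phi_i(x0) = z_i^H x0

t = 5
lhs = v @ np.linalg.matrix_power(K, t) @ x0   # g(x_t) computed directly
rhs = np.sum(lam ** t * phi0 * c)             # Eq. (3), truncated at n = 2
print(lhs, np.real(rhs))      # the two values agree
```

The same identity underlies the infinite expansion in Eq. (3); for nonlinear $f$ the sum is merely no longer finite.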
In this paper, we primarily target autonomous deterministic dynamics (e.g., Eq. (1)) for the sake of presentation clarity.

2.2 Dynamic mode decomposition and Koopman invariant subspace

Let us review DMD, an algorithm for Koopman spectral analysis (further details are in the supplementary material). Consider a set of observables $\{g_1, \dots, g_n\}$ and let $g = [g_1 \cdots g_n]^{\mathsf{T}}$ be a vector-valued observable. In addition, define two matrices $Y_0, Y_1 \in \mathbb{R}^{n \times m}$ generated by $x_0$, $f$, and $g$, i.e.,

$$Y_0 = [g(x_0) \; \cdots \; g(x_{m-1})] \quad \text{and} \quad Y_1 = [g(f(x_0)) \; \cdots \; g(f(x_{m-1}))], \tag{4}$$

where $m + 1$ is the number of snapshots in the dataset. The core functionality of DMD algorithms is computing the eigendecomposition of the matrix $A = Y_1 Y_0^{\dagger}$ [13], [21], where $Y_0^{\dagger}$ is the Moore–Penrose pseudoinverse of $Y_0$. The eigenvectors of $A$ are referred to as dynamic modes, and they coincide with the Koopman modes if the corresponding eigenfunctions of $\mathcal{K}$ are in $\mathrm{span}\{g_1, \dots, g_n\}$ [21]. Alternatively (but nearly equivalently), the condition under which DMD works as a numerical realization of Koopman spectral analysis can be described as follows. Rather than calculating the infinite-dimensional $\mathcal{K}$ directly, we can consider the restriction of $\mathcal{K}$ to a finite-dimensional subspace. Assume the observables are elements of $L^2(\mathcal{M}, \mu)$. A Koopman invariant subspace is defined as $\mathcal{G} \subset L^2(\mathcal{M}, \mu)$ such that $\forall g \in \mathcal{G},\; \mathcal{K}g \in \mathcal{G}$. If $\mathcal{G}$ is spanned by a finite number of functions, then the restriction of $\mathcal{K}$ to $\mathcal{G}$ becomes a finite-dimensional linear operator. In the sequel, we assume the existence of such a $\mathcal{G}$. If $\{g_1, \dots, g_n\}$ spans $\mathcal{G}$, then DMD's matrix $A = Y_1 Y_0^{\dagger}$ coincides asymptotically with $K \in \mathbb{R}^{n \times n}$, wherein $K$ is the realization of $\mathcal{K}$ with regard to the frame (or basis) $\{g_1, \dots, g_n\}$.
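The core DMD computation described above, forming the data matrices of Eq. (4) and eigendecomposing $A = Y_1 Y_0^{\dagger}$, is a few lines of linear algebra. A minimal sketch (my own, not the authors' code); for a linear system observed through identity observables, the identity functions span a Koopman invariant subspace, so DMD recovers the system's eigenvalues exactly:

```python
import numpy as np

def dmd(Y0, Y1):
    """Minimal DMD: eigendecomposition of A = Y1 @ pinv(Y0).

    Y0, Y1: (n, m) snapshot matrices as in Eq. (4).
    Returns the eigenvalues and dynamic modes (eigenvectors of A)."""
    A = Y1 @ np.linalg.pinv(Y0)
    eigvals, modes = np.linalg.eig(A)
    return eigvals, modes

# Linear system x_{t+1} = K x_t with g = identity observables.
K = np.array([[0.9, 0.1],
              [0.0, 0.5]])
X = [np.array([1.0, 1.0])]
for _ in range(20):
    X.append(K @ X[-1])
X = np.array(X).T               # columns are snapshots
Y0, Y1 = X[:, :-1], X[:, 1:]
eigvals, _ = dmd(Y0, Y1)
print(np.sort(eigvals.real))    # recovers the eigenvalues 0.5 and 0.9 of K
```

For nonlinear $f$, the same computation is meaningful only when the chosen observables span an invariant subspace, which is exactly the premise the LKIS framework aims to satisfy by learning.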
For modal decomposition (3), the (vector-valued) Koopman modes are given by $w$, and the values of the eigenfunctions are obtained by $\varphi = z^{\mathsf{H}} g$, where $w$ and $z$ are the right- and left-eigenvectors of $K$ normalized such that $w_i^{\mathsf{H}} z_j = \delta_{i,j}$ [14], [21], and $z^{\mathsf{H}}$ denotes the conjugate transpose of $z$. Here, an important problem arises in the practice of DMD: we often have no access to a $g$ that spans a Koopman invariant subspace $\mathcal{G}$. In this case, for nonlinear dynamics, we must manually prepare adequate observables. Several researchers have addressed this issue; Williams et al. [14] leveraged a dictionary of predefined basis functions to transform the original data, and Kawahara [15] defined Koopman spectral analysis in a reproducing kernel Hilbert space. Brunton et al. [22] proposed the use of observables selected in a data-driven manner [23] from a function dictionary. Note that, for these methods, we must select an appropriate function dictionary or kernel function according to the target dynamics. However, if we have no a priori knowledge about the dynamics, which is often the case, such existing methods cannot necessarily be applied successfully.

3 Learning Koopman invariant subspaces

3.1 Minimizing the residual sum of squares of linear least-squares regression

In this paper, we propose a method to learn a set of observables $\{g_1, \dots, g_n\}$ that spans a Koopman invariant subspace $\mathcal{G}$, given a sequence of measurements as the dataset. In the following, we summarize desirable properties of such observables, upon which the proposed method is constructed.

Theorem 1. Consider a set of square-integrable observables $\{g_1, \dots, g_n\}$, and define a vector-valued observable $g = [g_1 \cdots g_n]^{\mathsf{T}}$. In addition, define a linear operator $G$ whose matrix form is given as
$$G = \left( \int_{\mathcal{M}} (g \circ f)\, g^{\mathsf{H}}\, d\mu \right) \left( \int_{\mathcal{M}} g\, g^{\mathsf{H}}\, d\mu \right)^{\dagger}.$$
Then, $\forall x \in \mathcal{M},\; g(f(x)) = G g(x)$ if and only if $\{g_1, \dots, g_n\}$ spans a Koopman invariant subspace.

Proof. If $\forall x \in \mathcal{M},\; g(f(x)) = G g(x)$, then for any $\hat{g} = \sum_{i=1}^{n} a_i g_i \in \mathrm{span}\{g_1, \dots, g_n\}$,
$$\mathcal{K}\hat{g} = \sum_{i=1}^{n} a_i g_i(f(x)) = \sum_{j=1}^{n} \left( \sum_{i=1}^{n} a_i G_{i,j} \right) g_j(x) \in \mathrm{span}\{g_1, \dots, g_n\},$$
where $G_{i,j}$ denotes the $(i, j)$-element of $G$; thus, $\mathrm{span}\{g_1, \dots, g_n\}$ is a Koopman invariant subspace. On the other hand, if $\{g_1, \dots, g_n\}$ spans a Koopman invariant subspace, there exists a linear operator $K$ such that $\forall x \in \mathcal{M},\; g(f(x)) = K g(x)$; thus, $\int_{\mathcal{M}} (g \circ f)\, g^{\mathsf{H}}\, d\mu = \int_{\mathcal{M}} K g\, g^{\mathsf{H}}\, d\mu$. Therefore, an instance of the matrix form of $K$ is obtained in the form of $G$.

According to Theorem 1, we should obtain a $g$ that makes $g \circ f - G g$ zero. However, such a problem cannot be solved with finite data because $g$ is a function. Thus, we give the corresponding empirical risk minimization problem based on the assumption of ergodicity of $f$ and the convergence property of the empirical matrix, as follows.

Assumption 1. For dynamical system (1), the time-average and space-average of a function $g: \mathcal{M} \to \mathbb{R}$ (or $\mathbb{C}$) coincide as $m \to \infty$ for almost all $x_0 \in \mathcal{M}$, i.e.,
$$\lim_{m \to \infty} \frac{1}{m} \sum_{j=0}^{m-1} g(x_j) = \int_{\mathcal{M}} g(x)\, d\mu(x), \quad \text{for almost all } x_0 \in \mathcal{M}.$$

Theorem 2. Define $Y_0$ and $Y_1$ by Eq. (4) and suppose that Assumption 1 holds. If all modes are sufficiently excited in the data (i.e., $\mathrm{rank}(Y_0) = n$), then the matrix $A = Y_1 Y_0^{\dagger}$ almost surely converges to the matrix form of the linear operator $G$ as $m \to \infty$.

Proof. From Assumption 1, $\frac{1}{m} Y_1 Y_0^{\mathsf{H}}$ and $\frac{1}{m} Y_0 Y_0^{\mathsf{H}}$ respectively converge to $\int_{\mathcal{M}} (g \circ f)\, g^{\mathsf{H}}\, d\mu$ and $\int_{\mathcal{M}} g\, g^{\mathsf{H}}\, d\mu$ for almost all $x_0 \in \mathcal{M}$. In addition, since the rank of $Y_0 Y_0^{\mathsf{H}}$ is always $n$, $\left(\frac{1}{m} Y_0 Y_0^{\mathsf{H}}\right)^{\dagger}$ converges to $\left(\int_{\mathcal{M}} g\, g^{\mathsf{H}}\, d\mu\right)^{\dagger}$ as $m \to \infty$ [24]. Consequently, as $m \to \infty$, $A = \left(\frac{1}{m} Y_1 Y_0^{\mathsf{H}}\right) \left(\frac{1}{m} Y_0 Y_0^{\mathsf{H}}\right)^{\dagger}$ almost surely converges to $G$, the matrix form of the linear operator $G$.

Since $A = Y_1 Y_0^{\dagger}$ is the minimum-norm solution of the linear least-squares regression from the columns of $Y_0$ to those of $Y_1$, we constitute the learning problem as estimating a set of functions that transforms the original data into a form in which the linear least-squares regression fits well.
In particular, we minimize the RSS, which measures the discrepancy between the data and the estimated regression model (i.e., linear least-squares in this case). We define the RSS loss as follows:

$$\mathcal{L}_{\mathrm{RSS}}(g; (x_0, \dots, x_m)) = \left\| Y_1 - (Y_1 Y_0^{\dagger}) Y_0 \right\|_F^2, \tag{5}$$

which becomes zero when $g$ spans a Koopman invariant subspace. If we implement a smooth parametric model of $g$, the local minima of $\mathcal{L}_{\mathrm{RSS}}$ can be found using gradient descent. We adopt a $g$ that achieves a local minimum of $\mathcal{L}_{\mathrm{RSS}}$ as a set of observables that (approximately) spans a Koopman invariant subspace.

3.2 Linear delay embedder for state space reconstruction

In the previous subsection, we presented an important part of the principle of LKIS, i.e., minimization of the RSS of linear least-squares regression. Note that, to define the RSS loss (5), we need access to a sequence of the original states, i.e., $(x_0, \dots, x_m) \in \mathcal{M}^{m+1}$, as a dataset. In practice, however, we cannot necessarily observe full states $x$ due to limited memory and sensor capabilities. In this case, only transformed (and possibly degenerated) measurements are available, which we denote $y = \psi(x)$ with a measurement function $\psi: \mathcal{M} \to \mathbb{R}^r$. To define the RSS loss (5) given only degenerated measurements, we must reconstruct the original states $x$ from the actual observations $y$. Here, we utilize delay-coordinate embedding, which has been widely used for state space reconstruction in the analysis of nonlinear dynamics. Consider a univariate time-series $(\dots, y_{t-1}, y_t, y_{t+1}, \dots)$, which is a sequence of degenerated measurements $y_t = \psi(x_t)$. According to the well-known Takens' theorem [25], [26], a faithful representation of $x_t$ that preserves the structure of the state space can be obtained by $\tilde{x}_t = [y_t \; y_{t-\tau} \; \cdots \; y_{t-(d-1)\tau}]^{\mathsf{T}}$ with some lag parameter $\tau$ and embedding dimension $d$, if $d$ is greater than $2\dim(x)$. For a multivariate time-series, embedding with non-uniform lags provides better reconstruction [27]. For example, when we have a two-dimensional time-series $y_t = [y_{1,t} \; y_{2,t}]^{\mathsf{T}}$, an embedding with non-uniform lags takes the form $\tilde{x}_t = [y_{1,t} \; y_{1,t-\tau_{11}} \; \cdots \; y_{1,t-\tau_{1 d_1}} \; y_{2,t} \; y_{2,t-\tau_{21}} \; \cdots \; y_{2,t-\tau_{2 d_2}}]^{\mathsf{T}}$ with corresponding values of $\tau$ and $d$.
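The RSS loss of Eq. (5) is cheap to evaluate with a pseudoinverse. The sketch below (illustrative names, not the paper's code) shows that it vanishes on observables that span an invariant subspace and is strictly positive otherwise:

```python
import numpy as np

def rss_loss(G):
    """RSS loss of Eq. (5) for observables evaluated along a trajectory.

    G: (n, m+1) array whose column j is g(x_j).
    Returns ||Y1 - (Y1 Y0^+) Y0||_F^2, the residual of the best linear
    least-squares fit mapping each snapshot to its successor."""
    Y0, Y1 = G[:, :-1], G[:, 1:]
    A = Y1 @ np.linalg.pinv(Y0)
    R = Y1 - A @ Y0
    return float(np.sum(R ** 2))

# Trajectory of the linear map x_{t+1} = 0.8 x_t.
x = np.array([1.0])
traj = [x.copy()]
for _ in range(30):
    x = 0.8 * x
    traj.append(x.copy())
traj = np.array(traj).T          # shape (1, 31)

# g = [x] spans an invariant subspace: the loss is (numerically) zero.
print(rss_loss(traj))
# g = [sin(x)] does not: no single linear map fits, so the loss is positive.
print(rss_loss(np.sin(traj)))
```

In the LKIS framework this quantity is minimized over the parameters of $g$ (and of the embedder $\phi$) by gradient descent rather than evaluated for a fixed $g$ as here.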
Several methods have been proposed for the selection of $\tau$ and $d$ [27]–[29]; however, appropriate values may depend on the given application (attractor inspection, prediction, etc.). In this paper, we propose to surrogate the parameter selection of the delay-coordinate embedding by learning a linear delay embedder from data. Formally, we learn an embedder $\phi$ such that

$$\tilde{x}_t = \phi(y_t^{(k)}) = W_\phi \left[ y_t^{\mathsf{T}} \; y_{t-1}^{\mathsf{T}} \; \cdots \; y_{t-k+1}^{\mathsf{T}} \right]^{\mathsf{T}}, \quad W_\phi \in \mathbb{R}^{p \times kr}, \tag{6}$$

where $p = \dim(\tilde{x})$, $r = \dim(y)$, and $k$ is a hyperparameter for the maximum lag. We estimate the weight $W_\phi$ as well as the parameters of $g$ by minimizing the RSS loss (5), which is now defined using $\tilde{x}$ instead of $x$. Learning $\phi$ from data yields an embedding that is suitable for learning a Koopman invariant subspace. Moreover, we can impose L1 regularization on the weight $W_\phi$ to make it highly interpretable, if necessary according to the given application.

[Figure 1: An instance of the LKIS framework, in which g and h are implemented by MLPs.]

3.3 Reconstruction of original measurements

Simple minimization of $\mathcal{L}_{\mathrm{RSS}}$ may yield a trivial $g$, such as constant values. We should impose some constraints to prevent such trivial solutions. In the proposed framework, modal decomposition is first obtained in terms of the learned observables $g$; thus, the values of $g$ must be back-projected to the space of the original measurements $y$ to obtain a physically meaningful representation of the dynamic modes. Therefore, we modify the loss function by employing an additional term such that the original measurements $y$ can be reconstructed from the values of $g$ by a reconstructor $h$, i.e., $y \approx h(g(\tilde{x}))$. This term is given as follows:

$$\mathcal{L}_{\mathrm{rec}}(h, g; (\tilde{x}_0, \dots, \tilde{x}_m)) = \sum_{j=0}^{m} \left\| y_j - h(g(\tilde{x}_j)) \right\|^2, \tag{7}$$

and, if $h$ is a smooth parametric model, this term can also be reduced using gradient descent. Finally, the objective function to be minimized becomes

$$\mathcal{L}(\phi, g, h; (y_0, \dots, y_m)) = \mathcal{L}_{\mathrm{RSS}}(g, \phi; (\tilde{x}_{k-1}, \dots, \tilde{x}_m)) + \alpha \mathcal{L}_{\mathrm{rec}}(h, g; (\tilde{x}_{k-1}, \dots, \tilde{x}_m)), \tag{8}$$

where $\alpha$ is a parameter that controls the balance between $\mathcal{L}_{\mathrm{RSS}}$ and $\mathcal{L}_{\mathrm{rec}}$.

3.4 Implementation using neural networks

In Sections 3.1–3.3, we introduced the main concepts of the LKIS framework, i.e., RSS loss minimization, learning the linear delay embedder, and reconstruction of the original measurements. Here, we describe an implementation of the LKIS framework using neural networks. Figure 1 shows a schematic diagram of the implementation. We model $g$ and $h$ using multi-layer perceptrons (MLPs) with a parametric ReLU activation function [30]. Here, the size of the hidden layer of each MLP is set to the arithmetic mean of the sizes of its input and output layers. Thus, the remaining tunable hyperparameters are $k$ (maximum delay of $\phi$), $p$ (dimensionality of $\tilde{x}$), and $n$ (dimensionality of $g$). To obtain a $g$ with dimensionality much greater than that of the original measurements, we found it useful to set $k > 1$ even when full-state measurements (e.g., $y = x$) were available. After estimating the parameters of $\phi$, $g$, and $h$, DMD can be performed normally by using the values of the learned $g$, defining the data matrices in Eq. (4), and computing the eigendecomposition of $A = Y_1 Y_0^{\dagger}$; the dynamic modes are obtained by $w$, and the values of the eigenfunctions are obtained by $\varphi = z^{\mathsf{H}} g$, where $w$ and $z$ are the right- and left-eigenvectors of $A$. See Section 2.2 for details. In the numerical experiments described in Sections 5 and 6, we performed optimization using first-order gradient descent. To stabilize optimization, batch normalization [31] was applied to the inputs of the hidden layers. Note that, since the RSS loss function (5) is not decomposable with regard to data points, convergence of stochastic gradient descent (SGD) cannot be shown straightforwardly.
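The linear delay embedder of Eq. (6) is a learned linear map applied to stacked lagged measurements. The stacking step can be sketched as follows, with a fixed random $W_\phi$ standing in for the learned weight (an illustration only, not the paper's implementation):

```python
import numpy as np

def delay_stack(Y, k):
    """Stack k lagged copies of a multivariate series.

    Y: (r, T) measurements; returns a (k*r, T-k+1) matrix whose column
    for time t is [y_t; y_{t-1}; ...; y_{t-k+1}], the input to phi in Eq. (6)."""
    r, T = Y.shape
    cols = [np.concatenate([Y[:, t - i] for i in range(k)])
            for t in range(k - 1, T)]
    return np.array(cols).T

# Illustrative (not learned) embedder weight W_phi in R^{p x kr}.
rng = np.random.default_rng(0)
Y = rng.standard_normal((2, 10))        # r = 2 measurements, T = 10 steps
k, p = 3, 4
W_phi = rng.standard_normal((p, k * 2))
X_tilde = W_phi @ delay_stack(Y, k)     # reconstructed states, shape (4, 8)
print(X_tilde.shape)
```

In the actual framework, $W_\phi$ is optimized jointly with $g$ and $h$ under the combined loss of Eq. (8) rather than fixed as here.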
However, we empirically found that the non-decomposable RSS loss was often reduced successfully, even with mini-batch SGD. As an example, the full-batch RSS loss (denoted $\mathcal{L}^{\star}_{\mathrm{RSS}}$) under the updates of mini-batch SGD is plotted in the rightmost panel of Figure 4. Here, $\mathcal{L}^{\star}_{\mathrm{RSS}}$ decreases rapidly and remains small. For SGD on non-decomposable losses, Kar et al. [32] provided guarantees for some cases; however, examining the behavior of more general non-decomposable losses under mini-batch updates remains an open problem.

4 Related work

The proposed framework is motivated by the operator-theoretic view of nonlinear dynamical systems. In contrast, learning a generative (state-space) model for nonlinear dynamical systems directly has been actively studied in the machine learning and optimal control communities, of which we mention a few examples.

[Figure 2: (left) Data generated from system (9) and (right) the estimated Koopman eigenvalues. While linear Hankel DMD produces an inconsistent eigenvalue, LKIS-DMD successfully identifies $\lambda$, $\mu$, $\lambda^2$, and $\lambda^0\mu^0 = 1$.]

[Figure 3: (left) Data generated from system (9) with white Gaussian observation noise and (right) the estimated Koopman eigenvalues. LKIS-DMD successfully identifies the eigenvalues even with the observation noise.]

A classical but popular method for learning nonlinear dynamical systems is using an expectation-maximization algorithm with Bayesian filtering/smoothing (see, e.g., [33]). Recently, using approximate Bayesian inference with the variational autoencoder (VAE) technique [34] to learn generative dynamical models has been actively researched. Chung et al.
[35] proposed a recurrent neural network with random latent variables, Gao et al. [36] utilized VAE-based inference for neural population models, and Johnson et al. [37] and Krishnan et al. [38] developed inference methods for structured models based on inference with a VAE. In addition, Karl et al. [39] proposed a method to obtain a more consistent estimation of nonlinear state space models. Moreover, Watter et al. [40] proposed a similar approach in the context of optimal control. Since generative models are intrinsically aware of process and observation noises, incorporating methodologies developed in such studies into the operator-theoretic perspective is an important open challenge for explicitly dealing with uncertainty. We would also like to mention some studies closely related to our method. After the first submission of this manuscript (in May 2017), several similar approaches to learning data transforms for Koopman analysis have been proposed [41]–[45]. The relationships and relative advantages of these methods should be elaborated in the future.

5 Numerical examples

In this section, we provide numerical examples of DMD based on the LKIS framework (LKIS-DMD) implemented using neural networks. We conducted experiments on three typical nonlinear dynamical systems: a fixed-point attractor, a limit-cycle attractor, and a system with multiple basins of attraction. We show the results of comparisons with other recent DMD algorithms, i.e., Hankel DMD [46], [47], extended DMD [14], and DMD with reproducing kernels [15]. The detailed setups of the experiments discussed in this section and the next section are described in the supplementary material.

Fixed-point attractor Consider a two-dimensional nonlinear map on $x_t = [x_{1,t} \; x_{2,t}]^{\mathsf{T}}$:

$$x_{1,t+1} = \lambda x_{1,t}, \quad x_{2,t+1} = \mu x_{2,t} + (\lambda^2 - \mu) x_{1,t}^2, \tag{9}$$

which has a stable equilibrium at the origin if $\lambda, \mu < 1$. The Koopman eigenvalues of system (9) include $\lambda$ and $\mu$, and the corresponding eigenfunctions are $\varphi_\lambda(x) = x_1$ and $\varphi_\mu(x) = x_2 - x_1^2$, respectively.
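Given these eigenfunctions, the observables $\{x_1, x_2, x_1^2\}$ span a Koopman invariant subspace of system (9), so DMD on those three observables recovers the eigenvalues $\lambda$, $\mu$, and $\lambda^2$ exactly. A quick numerical check (a sketch of my own, not the authors' experiment code):

```python
import numpy as np

lam, mu = 0.9, 0.5

def step(x):
    """One step of the fixed-point system, Eq. (9)."""
    x1, x2 = x
    return np.array([lam * x1, mu * x2 + (lam**2 - mu) * x1**2])

def lift(x):
    """Observables g = [x1, x2, x1^2] spanning an invariant subspace."""
    return np.array([x[0], x[1], x[0] ** 2])

# x0 = [1, 2] excites all three modes (note x0 = [1, 1] would give
# phi_mu(x0) = x2 - x1^2 = 0, leaving the mu mode unexcited).
x = np.array([1.0, 2.0])
snaps = [lift(x)]
for _ in range(20):
    x = step(x)
    snaps.append(lift(x))
G = np.array(snaps).T
Y0, Y1 = G[:, :-1], G[:, 1:]
eigvals = np.linalg.eigvals(Y1 @ np.linalg.pinv(Y0))
print(np.sort(eigvals.real))    # recovers 0.5, 0.81, 0.9
```

This replicates the role of the hand-crafted dictionary in the extended-DMD baseline; LKIS instead learns such observables without knowing the form of Eq. (9).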
$\lambda^i \mu^j$ is also an eigenvalue, with corresponding eigenfunction $\varphi_\lambda^i \varphi_\mu^j$. A minimal Koopman invariant subspace of system (9) is $\mathrm{span}\{x_1, x_2, x_1^2\}$, and the eigenvalues of the Koopman operator restricted to this subspace include $\lambda$, $\mu$, and $\lambda^2$. We generated a dataset using system (9) with $\lambda = 0.9$ and $\mu = 0.5$ and applied LKIS-DMD ($n = 4$), linear Hankel DMD [46], [47] (delay 2), and DMD with basis expansion by $\{x_1, x_2, x_1^2\}$, which corresponds to extended DMD [14] with a correct and minimal observable dictionary. The estimated Koopman eigenvalues are shown in Figure 2, wherein LKIS-DMD successfully identifies the eigenvalues of the target invariant subspace. In Figure 3, we show eigenvalues estimated using data contaminated with white Gaussian observation noise ($\sigma = 0.1$). The eigenvalues estimated by LKIS-DMD coincide with the true values even with the observation noise, whereas the results of DMD with basis expansion (i.e., extended DMD) are directly affected by the observation noise.

Limit-cycle attractor We generated data from the limit cycle of the FitzHugh–Nagumo equation

$$\dot{x}_1 = -x_1^3/3 + x_1 - x_2 + I, \quad \dot{x}_2 = c(x_1 - b x_2 + a), \tag{10}$$

where $a = 0.7$, $b = 0.8$, $c = 0.08$, and $I = 0.8$. Since trajectories in a limit cycle are periodic, the (discrete-time) Koopman eigenvalues should lie near the unit circle. Figure 4 shows the eigenvalues estimated by LKIS-DMD ($n = 16$), linear Hankel DMD [46], [47] (delay 8), and DMDs with reproducing kernels [15] (polynomial kernel of degree 4 and RBF kernel of width 1). The eigenvalues produced by LKIS-DMD agree well with those produced by the kernel DMDs, whereas linear Hankel DMD produces eigenvalues that would correspond to rapidly decaying modes.

[Figure 4: The left four panels show the estimated Koopman eigenvalues on the limit cycle of the FitzHugh–Nagumo equation by LKIS-DMD, linear Hankel DMD, and kernel DMDs with polynomial and RBF kernels. The hyperparameters of each DMD are set to produce 16 eigenvalues. The rightmost plot shows the full-batch (size 2,000) loss under mini-batch (size 200) SGD updates along iterations. The non-decomposable part $\mathcal{L}^{\star}_{\mathrm{RSS}}$ decreases rapidly and remains small, even with SGD.]

[Figure 5: (left) The continuous-time Koopman eigenvalues estimated by LKIS-DMD on the Duffing equation. (center) The true basins of attraction of the Duffing equation, wherein points in the blue region evolve toward (1, 0) and points in the red region evolve toward (−1, 0). Note that the stable manifold of the saddle point is not drawn precisely. (right) The values of the Koopman eigenfunction with a nearly zero eigenvalue computed by LKIS-DMD, whose level sets should correspond to the basins of attraction. There is rough agreement between the true boundary of the basins of attraction and the numerically computed boundary. The right two plots are best viewed in color.]

Multiple basins of attraction Consider the unforced Duffing equation

$$\ddot{x} = -\delta \dot{x} - x(\beta + \alpha x^2), \quad \mathbf{x} = [x \; \dot{x}]^{\mathsf{T}}, \tag{11}$$

where $\alpha = 1$, $\beta = -1$, and $\delta = 0.5$. States $\mathbf{x}$ following (11) evolve toward $[1 \; 0]^{\mathsf{T}}$ or $[-1 \; 0]^{\mathsf{T}}$ depending on which basin of attraction the initial value belongs to, unless the initial state is on the stable manifold of the saddle. Generally, a Koopman eigenfunction whose continuous-time eigenvalue is zero takes a constant value in each basin of attraction [14]; thus, the contour plot of such an eigenfunction shows the boundary of the basins of attraction. We generated 1,000 episodes of time-series starting at different initial values uniformly sampled from $[-2, 2]^2$.
The left plot in Figure 5 shows the continuous-time Koopman eigenvalues estimated by LKIS-DMD ($n = 100$), all of which correspond to decaying modes (i.e., negative real parts) and agree with the property of the data. The center plot in Figure 5 shows the true basins of attraction of (11), and the right plot shows the estimated values of the eigenfunction corresponding to the eigenvalue of the smallest magnitude. The surface of the estimated eigenfunction agrees qualitatively with the true boundary of the basins of attraction, which indicates that LKIS-DMD successfully identifies the Koopman eigenfunction.

6 Applications

The numerical experiments in the previous section demonstrated the feasibility of the proposed method as a fully data-driven method for Koopman spectral analysis. Here, we introduce practical applications of LKIS-DMD.

Chaotic time-series prediction Prediction of a chaotic time-series has received significant interest in nonlinear physics. We would like to perform prediction of a chaotic time-series using DMD, since DMD can be naturally utilized for prediction as follows. Since $g(x_t)$ is decomposed as $\sum_{i=1}^{n} \varphi_i(x_t) c_i$ and $\varphi$ is obtained by $\varphi_i(x_t) = z_i^{\mathsf{H}} g(x_t)$, where $z_i$ is a left-eigenvector of $K$, the next step of $g$ can be described in terms of the current step, i.e., $g(x_{t+1}) = \sum_{i=1}^{n} \lambda_i (z_i^{\mathsf{H}} g(x_t)) c_i$. In addition, in the case of LKIS-DMD, the values of $g$ must be back-projected to $y$ using the learned $h$. We generated two types of univariate time-series by extracting the $\{x\}$ series of the Lorenz attractor [48] and the Rössler attractor [49]. We simulated 25,000 steps for each attractor and used the first 10,000 steps for training, the next 5,000 steps for validation, and the last 10,000 steps for testing prediction accuracy. We examined the prediction accuracy of LKIS-DMD, a simple LSTM network, and linear Hankel DMD [46], [47], all of whose hyperparameters were tuned using the validation set. The prediction accuracy of every method and an example of the predicted series on the test set by LKIS-DMD are shown in Figure 6. As can be seen, the proposed LKIS-DMD achieves the smallest root-mean-square (RMS) errors in the 30-step prediction.

[Figure 6: The left plot shows RMS errors from 1- to 30-step predictions, and the right plot shows a part of the 30-step prediction obtained by LKIS-DMD on (upper) the Lorenz-x series and (lower) the Rössler-x series.]

[Figure 7: The top plot shows the raw time-series obtained by a far-infrared laser [50]. The other plots show the results of unstable phenomena detection, wherein the peaks should correspond to the occurrences of unstable phenomena.]

Unstable phenomena detection One of the most popular applications of DMD is the investigation of the global characteristics of dynamics by inspecting the spatial distribution of the dynamic modes. In addition to the spatial distribution, we can investigate the temporal profiles of mode activations by examining the values of the corresponding eigenfunctions. For example, assume there is an eigenfunction $\varphi_{\lambda \ll 1}$ that corresponds to a discrete-time eigenvalue $\lambda$ whose magnitude is considerably smaller than one. Such a small eigenvalue indicates a rapidly decaying (i.e., unstable) mode; thus, we can detect occurrences of unstable phenomena by observing the values of $\varphi_{\lambda \ll 1}$. We applied LKIS-DMD ($n = 10$) to a time-series generated by a far-infrared laser, obtained from the Santa Fe Time Series Competition Data [50]. We investigated the values of the eigenfunction $\varphi_{\lambda \ll 1}$ corresponding to the eigenvalue of the smallest magnitude.
The original time-series and the values of $\varphi_{\lambda \ll 1}$ obtained by LKIS-DMD are shown in Figure 7. As can be seen, the activations of $\varphi_{\lambda \ll 1}$ coincide with sudden decays of the pulsation amplitudes. For comparison, we applied novelty/change-point detection techniques using a one-class support vector machine (OC-SVM) [51] and direct density-ratio estimation by relative unconstrained least-squares importance fitting (RuLSIF) [52]. We computed the AUC, defining the sudden decays of the amplitudes as the points to be detected, which was 0.924, 0.799, and 0.803 for LKIS, OC-SVM, and RuLSIF, respectively.

7 Conclusion

In this paper, we have proposed a framework for learning Koopman invariant subspaces, which is a fully data-driven numerical algorithm for Koopman spectral analysis. In contrast to existing approaches, the proposed method learns (approximately) a Koopman invariant subspace entirely from the available data based on the minimization of the RSS loss. We have shown empirical results for several typical nonlinear dynamics and application examples. We have also introduced an implementation using multi-layer perceptrons; one possible drawback of such an implementation is the local optima of the objective function, which makes it difficult to assess the adequacy of the obtained results. Rather than using neural networks, the observables to be learned could be modeled by a sparse combination of basis functions as in [23], while still utilizing optimization based on the RSS loss. Another possible future research direction could be incorporating approximate Bayesian inference methods, such as the VAE [34]. The proposed framework is based on a discriminative viewpoint, but inference methodologies for generative models could be used to modify the proposed framework to explicitly consider uncertainty in the data.

8 Acknowledgments

This work was supported by JSPS KAKENHI Grant Nos. JP15J09172, JP26280086, JP16H01548, and JP26289320.

References

[1] M. W. Hirsch, S. Smale, and R. L.
Devaney, Differential equations, dynamical systems, and an introduction to chaos, 3rd ed. Academic Press, 2013. [2] A. Lasota and M. C. Mackey, Chaos, fractals, and noise: Stochastic aspects of dynamics, 2nd ed. Springer, 1994. [3] B. O. Koopman, “Hamiltonian systems and transformation in Hilbert space,” Proceedings of the National Academy of Sciences of the United States of America, vol. 17, no. 5, pp. 315–318, 1931. [4] I. Mezić, “Spectral properties of dynamical systems, model reduction and decompositions,” Nonlinear Dynamics, vol. 41, no. 1-3, pp. 309–325, 2005. [5] M. Budišić, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos, vol. 22, p. 047510, 2012. [6] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson, “Spectral analysis of nonlinear flows,” Journal of Fluid Mechanics, vol. 641, pp. 115–127, 2009. [7] P. J. Schmid, “Dynamic mode decomposition of numerical and experimental data,” Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010. [8] J. L. Proctor and P. A. Eckhoff, “Discovering dynamic patterns from infectious disease data using dynamic mode decomposition,” International Health, vol. 7, no. 2, pp. 139–145, 2015. [9] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, “Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition,” Journal of Neuroscience Methods, vol. 258, pp. 1–15, 2016. [10] E. Berger, M. Sastuba, D. Vogt, B. Jung, and H. B. Amor, “Estimation of perturbations in robotic behavior using dynamic mode decomposition,” Advanced Robotics, vol. 29, no. 5, pp. 331–343, 2015. [11] J. N. Kutz, X. Fu, and S. L. Brunton, “Multiresolution dynamic mode decomposition,” SIAM Journal on Applied Dynamical Systems, vol. 15, no. 2, pp. 713–735, 2016. [12] A. Mauroy and J. Goncalves, “Linear identification of nonlinear systems: A lifting technique based on the Koopman operator,” in Proceedings of the 2016 IEEE 55th Conference on Decision and Control, 2016, pp. 6500–6505.
[13] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, Dynamic mode decomposition: Data-driven modeling of complex systems. SIAM, 2016. [14] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, “A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition,” Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307–1346, 2015. [15] Y. Kawahara, “Dynamic mode decomposition with reproducing kernels for Koopman spectral analysis,” in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 911–919. [16] I. Mezić, “Analysis of fluid flows via spectral properties of the Koopman operator,” Annual Review of Fluid Mechanics, vol. 45, pp. 357–378, 2013. [17] G. Froyland, G. A. Gottwald, and A. Hammerlindl, “A computational method to extract macroscopic variables and their dynamics in multiscale systems,” SIAM Journal on Applied Dynamical Systems, vol. 13, no. 4, pp. 1816–1846, 2014. [18] N. Takeishi, Y. Kawahara, and T. Yairi, “Subspace dynamic mode decomposition for stochastic Koopman analysis,” Physical Review E, vol. 96, no. 3, p. 033310, 2017. [19] J. L. Proctor, S. L. Brunton, and J. N. Kutz, “Dynamic mode decomposition with control,” SIAM Journal on Applied Dynamical Systems, vol. 15, no. 1, pp. 142–161, 2016. [20] ——, “Generalizing Koopman theory to allow for inputs and control,” arXiv:1602.07647, 2016. [21] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, “On dynamic mode decomposition: Theory and applications,” Journal of Computational Dynamics, vol. 1, no. 2, pp. 391–421, 2014. [22] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, “Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control,” PLoS ONE, vol. 11, no. 2, e0150171, 2016. [23] S. L. Brunton, J. L. Proctor, and J. N.
Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proceedings of the National Academy of Sciences of the United States of America, vol. 113, no. 15, pp. 3932–3937, 2016. [24] V. Rakoˇcevi´c, “On continuity of the Moore–Penrose and Drazin inverses,” Matematiˇcki Vesnik, vol. 49, no. 3-4, pp. 163–172, 1997. [25] F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980, ser. Lecture Notes in Mathematics, vol. 898, 1981, pp. 366–381. [26] T. Sauer, J. A. Yorke, and M. Casdagli, “Embedology,” Journal of Statistical Physics, vol. 65, no. 3-4, pp. 579–616, 1991. [27] S. P. Garcia and J. S. Almeida, “Multivariate phase space reconstruction by nearest neighbor embedding with different time delays,” Physical Review E, vol. 72, no. 2, 027205, p. 027 205, 2005. [28] Y. Hirata, H. Suzuki, and K. Aihara, “Reconstructing state spaces from multivariate data using variable delays,” Physical Review E, vol. 74, no. 2, 026202, p. 026 202, 2006. [29] I. Vlachos and D. Kugiumtzis, “Nonuniform state-space reconstruction and coupling detection,” Physical Review E, vol. 82, no. 1, 016207, p. 016 207, 2010. [30] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the 2015 IEEE International Conference on Computer Vision, 2015, pp. 1026–1034. [31] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 37, 2015, pp. 448–456. [32] P. Kar, H. Narasimhan, and P. Jain, “Online and stochastic gradient methods for nondecomposable loss functions,” in Advances in Neural Information Processing Systems, vol. 27, 2014, pp. 694–702. [33] Z. Ghahramani and S. T. 
Roweis, “Learning nonlinear dynamical systems using an EM algorithm,” in Advances in Neural Information Processing Systems, vol. 11, 1999, pp. 431– 437. [34] D. P. Kingma and M. Welling, “Stochastic gradient VB and the variational auto-encoder,” in Proceedings of the 2nd International Conference on Learning Representations, 2014. [35] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio, “A recurrent latent variable model for sequential data,” in Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 2980–2988. [36] Y. Gao, E. W. Archer, L. Paninski, and J. P. Cunningham, “Linear dynamical neural population models through nonlinear embeddings,” in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 163–171. [37] M. Johnson, D. K. Duvenaud, A. Wiltschko, R. P. Adams, and S. R. Datta, “Composing graphical models with neural networks for structured representations and fast inference,” in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 2946–2954. [38] R. G. Krishnan, U. Shalit, and D. Sontag, “Structured inference networks for nonlinear state space models,” in Proceedings of the 31st AAAI Conference on Artificial Intelligence, 2017, pp. 2101–2109. [39] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt, “Deep variational Bayes filters: Unsupervised learning of state space models from raw data,” in Proceedings of the 5th International Conference on Learning Representations, 2017. [40] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller, “Embed to control: A locally linear latent dynamics model for control from raw images,” in Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 2746–2754. [41] Q. Li, F. Dietrich, E. M. Bollt, and I. G. Kevrekidis, “Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator,” Chaos, vol. 27, p. 103 111, 2017. [42] E. Yeung, S. Kundu, and N. 
Hodas, “Learning deep neural network representations for Koopman operators of nonlinear dynamical systems,” arXiv:1708.06850, 2017. 10 [43] A. Mardt, L. Pasquali, H. Wu, and F. Noé, “VAMPnets: Deep learning of molecular kinetics,” arXiv:1710.06012, 2017. [44] S. E. Otto and C. W. Rowley, “Linearly-recurrent autoencoder networks for learning dynamics,” arXiv:1712.01378, 2017. [45] B. Lusch, J. N. Kutz, and S. L. Brunton, “Deep learning for universal linear embeddings of nonlinear dynamics,” arXiv:1712.09707, 2017. [46] H. Arbabi and I. Mezi´c, “Ergodic theory, dynamic mode decomposition and computation of spectral properties of the Koopman operator,” SIAM Journal on Applied Dynamical Systems, vol. 16, no. 4, 2096–2126, 2017. [47] Y. Susuki and I. Mezi´c, “A Prony approximation of Koopman mode decomposition,” in Proceedings of the 2015 IEEE 54th Conference on Decision and Control, 2015, pp. 7022– 7027. [48] E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963. [49] O. E. Rössler, “An equation for continuous chaos,” Physical Letters, vol. 57A, no. 5, pp. 397– 398, 1976. [50] A. S. Weigend and N. A. Gershenfeld, Eds., Time series prediction: Forecasting the future and understanding the past, ser. Santa Fe Institute Series. Westview Press, 1993. [51] S. Canu and A. Smola, “Kernel methods and the exponential family,” Neurocomputing, vol. 69, no. 7-9, pp. 714–720, 2006. [52] S. Liu, M. Yamada, N. Collier, and M. Sugiyama, “Change-point detection in time-series data by relative density-ratio estimation,” Neural Networks, vol. 43, pp. 72–83, 2013. 11 | 2017 | 167 |
Online Prediction with Selfish Experts Tim Roughgarden Department of Computer Science Stanford University Stanford, CA 94305 tim@cs.stanford.edu Okke Schrijvers Department of Computer Science Stanford University Stanford, CA 94305 okkes@cs.stanford.edu Abstract We consider the problem of binary prediction with expert advice in settings where experts have agency and seek to maximize their credibility. This paper makes three main contributions. First, it defines a model to reason formally about settings with selfish experts, and demonstrates that "incentive compatible" (IC) algorithms are closely related to the design of proper scoring rules. Second, we design IC algorithms with good performance guarantees for the absolute loss function. Third, we give a formal separation between the power of online prediction with selfish versus honest experts by proving lower bounds for both IC and non-IC algorithms. In particular, with selfish experts and the absolute loss function, there is no (randomized) algorithm for online prediction, IC or otherwise, with asymptotically vanishing regret. 1 Introduction In the months leading up to elections and referendums, a plethora of pollsters try to figure out how the electorate is going to vote. Different pollsters use different methodologies, reach different people, and may have sources of random errors, so generally the polls don't fully agree with each other. Aggregators such as Nate Silver's FiveThirtyEight, and The Upshot by the New York Times consolidate these different reports into a single prediction, and hopefully reduce random errors.1 FiveThirtyEight in particular has a solid track record for their predictions, and as they are transparent about their methodology we use them as a motivating example.
To a first-order approximation, they operate as follows: first they take the predictions of all the different pollsters, then they assign a weight to each of the pollsters based on past performance (and other factors), and finally they use the weighted average of the pollsters to run simulations and make their own prediction.2 But could the presence of an institution that rates pollsters inadvertently create perverse incentives for pollsters? The FiveThirtyEight pollster ratings are publicly available.3 They can be interpreted as a reputation, and a low rating can negatively impact future revenue opportunities for a pollster. Moreover, it has been demonstrated in practice that experts do not always report their true beliefs about future events. For example, in weather forecasting there is a known "wet bias," where consumer-facing weather forecasters deliberately overestimate low chances of rain (e.g. a 5% chance of rain is reported as a 25% chance of rain) because people don't like to be surprised by rain [Bickel and Kim, 2008]. 1https://fivethirtyeight.com/, https://www.nytimes.com/section/upshot. 2This is of course a simplification. FiveThirtyEight also uses features like the change in a poll over time, the state of the economy, and correlations between states. See https://fivethirtyeight.com/features/how-fivethirtyeight-calculates-pollster-ratings/ for details. Our goal in this paper is not to accurately model all of the fine details of FiveThirtyEight (which are anyways changing all the time). Rather, it is to formulate a general model of prediction with experts that clearly illustrates why incentives matter. 3https://projects.fivethirtyeight.com/pollster-ratings/ 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
These examples motivate the development of models of aggregating predictions that endow the data sources with agency.4 While there are multiple models in which we can investigate this issue, a natural candidate is the problem of prediction with expert advice. By focusing on a standard model, we abstract away from the fine details of FiveThirtyEight (which are anyways changing all the time), which allows us to formulate a general model of prediction with experts that clearly illustrates why incentives matter. In the classical model [Littlestone and Warmuth, 1994, Freund and Schapire, 1997], at each time step, several experts make predictions about an unknown event. An online prediction algorithm aggregates experts' opinions and makes its own prediction at each time step. After this prediction, the event at this time step is realized and the algorithm incurs a loss as a function of its prediction and the realization. To compare its performance against individual experts, for each expert the algorithm calculates what its loss would have been had it always followed the expert's prediction. While the problems introduced in this paper are relevant for general online prediction, to focus on the most interesting issues we concentrate on the case of binary events, and real-valued predictions in [0, 1]. For different applications, different notions of loss are appropriate, so we parameterize the model by a loss function ℓ. Thus our formal model is: at each time step t = 1, 2, ..., T:

1. Each expert i makes a prediction p_i^{(t)} ∈ [0, 1], representing advocacy for event "1."
2. The online algorithm commits to a probability q^{(t)} ∈ [0, 1] as a prediction for event "1."
3. The outcome r^{(t)} ∈ {0, 1} is realized.
4. The algorithm incurs expected loss ℓ(q^{(t)}, r^{(t)}), and each expert i is assigned loss ℓ(p_i^{(t)}, r^{(t)}).
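The four steps above can be sketched as one round of a simple loop. A minimal sketch, assuming the absolute loss ℓ(p, r) = |p − r| and a placeholder aggregator that predicts the weighted average of the reports (the function names and the averaging rule here are illustrative, not the paper's algorithms):

```python
# One round of online binary prediction with expert advice, assuming
# absolute loss l(p, r) = |p - r|. Names and aggregator are illustrative.

def absolute_loss(p, r):
    return abs(p - r)

def run_round(predictions, weights, realization, loss=absolute_loss):
    """One time step: aggregate reports, observe the outcome, assign losses."""
    total = sum(weights)
    # Step 2: the algorithm commits to a probability q in [0, 1]
    # (here: the weight-averaged report; concrete rules appear later).
    q = sum(w * p for w, p in zip(weights, predictions)) / total
    # Steps 3-4: the outcome is realized; algorithm and experts incur loss.
    alg_loss = loss(q, realization)
    expert_losses = [loss(p, realization) for p in predictions]
    return q, alg_loss, expert_losses

q, alg_loss, expert_losses = run_round([0.9, 0.6, 0.2], [1.0, 1.0, 1.0], 1)
```

Repeating this round for t = 1, ..., T and accumulating the losses gives exactly the quantities compared in the regret definitions below.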
The standard goal in this problem is to design an online prediction algorithm that is guaranteed to have expected loss not much larger than that incurred by the best expert in hindsight. The classical solutions maintain a weight for each expert and make a prediction according to which outcome has more expert weight behind it. An expert's weight can be interpreted as a measure of its credibility in light of its past performance. The (deterministic) Weighted Majority (WM) algorithm always chooses the outcome with more expert weight. The Randomized Weighted Majority (RWM) algorithm randomizes between the two outcomes with probability proportional to their total expert weights. The most common method of updating experts' weights is via multiplication by 1 − η·ℓ(p_i^{(t)}, r^{(t)}) after each time step t, where η is the learning rate. We call this the "standard" or "classical" version of the WM and RWM algorithms. The classical model instills no agency in the experts. To account for this, in this paper we replace Step 1 of the classical model by:

1a. Each expert i formulates a belief b_i^{(t)} ∈ [0, 1].
1b. Each expert i reports a prediction p_i^{(t)} ∈ [0, 1] to the algorithm.

Each expert now has two types of loss at each time step: the reported loss ℓ(p_i^{(t)}, r^{(t)}) with respect to the reported prediction and the true loss ℓ(b_i^{(t)}, r^{(t)}) with respect to her true beliefs.5 When experts care about the weight that they are assigned, and with it their reputation and influence in the algorithm, different loss functions can lead to different expert behaviors. For example, for the quadratic loss function, in the standard WM and RWM algorithms, experts have no reason to misreport their beliefs (see Proposition 8). This is not the case for other loss functions, such as the absolute loss function.6 The standard algorithm with the absolute loss function incentivizes extremal reporting, i.e., an expert reports 1 whenever b_i^{(t)} ≥ 1/2 and 0 otherwise.
This follows from a simple derivation, or alternatively from results in the property elicitation literature.7 This shows that for the absolute loss function the standard WM algorithm is not "incentive-compatible" in a sense that we formalize in Section 2. There are similar examples for the other commonly studied weight update rules and for the RWM algorithm. We might care about truthful reporting for its own sake, but additionally the worry is that non-truthful reports will impede our ability to get good regret guarantees (with respect to experts' true losses). We study several fundamental questions about online prediction with selfish experts:

1. What is the design space of "incentive-compatible" online prediction algorithms, where every expert is incentivized to report her true beliefs?

4More generally, one can investigate how the presence of machine learning algorithms affects data-generating processes, either during learning or deployment. We discuss some of this work in the related work section. 5When we speak of the best expert in hindsight, we are always referring to the true losses. Guarantees with respect to reported losses follow from standard results [Littlestone and Warmuth, 1994, Freund and Schapire, 1997, Cesa-Bianchi et al., 2007], but are not immediately meaningful. 6The loss function is often tied to the particular application. For example, in the current FiveThirtyEight pollster rankings, the performance of a pollster is primarily measured according to an absolute loss function and also whether the candidate with the highest polling numbers ended up winning (see https://github.com/fivethirtyeight/data/tree/master/pollster-ratings). However, in 2008 FiveThirtyEight used the notion of "pollster introduced error" or PIE, which is the square root of a difference of squares, as the most important feature in calculating the weights, see https://fivethirtyeight.com/features/pollster-ratings-v31/.

2.
Given a loss function like absolute loss, are there incentive-compatible algorithms with good regret guarantees?

3. Is online prediction with selfish experts strictly harder than in the classical model with honest experts?

Our Results. The first contribution of this paper is the development of a model for reasoning formally about the design and analysis of weight-based online prediction algorithms when experts are selfish (Section 2), and the definition of an "incentive-compatible" (IC) such algorithm. Intuitively, an IC algorithm is such that each expert wants to report its true belief at each time step. We demonstrate that the design of IC online prediction algorithms is closely related to the design of strictly proper scoring rules. Using this, we show that for the quadratic loss function, the standard WM and RWM algorithms are IC, whereas these algorithms are not generally IC for other loss functions. Our second contribution is the design of IC prediction algorithms for the absolute loss function with non-trivial performance guarantees. For example, our best result for deterministic algorithms is: the WM algorithm, with experts' weights evolving according to the spherical proper scoring rule (see Section 3), is IC and has loss at most 2 + √2 times the loss of the best expert in hindsight (in the limit as T → ∞). A variant of the RWM algorithm with the Brier scoring rule is IC and has expected loss at most 2.62 times that of the best expert in hindsight (also in the limit; see Section 5). Our third and most technical contribution is a formal separation between online prediction with selfish experts and the traditional setting with honest experts. Recall that with honest experts, the classical (deterministic) WM algorithm has loss at most twice that of the best expert in hindsight (as T → ∞) [Littlestone and Warmuth, 1994].
We prove in Section 4 that the worst-case loss of every (deterministic) IC algorithm, and of every (deterministic) non-IC algorithm satisfying mild technical conditions, is bounded away from twice that of the best expert in hindsight (even as T → ∞). A consequence of our lower bound is that, with selfish experts, there is no natural (randomized) algorithm for online prediction, IC or otherwise, with asymptotically vanishing regret. Finally, in Section 6 we show simulations indicating that different IC methods show similar regret behavior, and that their regret is substantially better than that of the non-IC standard algorithms, suggesting that the worst-case characterization we prove holds more generally. Related Work. We believe that our model of online prediction over time with selfish experts is novel. We next survey the multiple other ways in which online learning and incentive issues have been blended, and the other efforts to model incentive issues in machine learning. There is a large literature on prediction and decision markets (e.g. Chen and Pennock [2010], Horn et al. [2014]), which also aim to aggregate information over time from multiple parties and make use of proper scoring rules to do it. However, prediction markets provide incentives through payments, rather than influence, and lack the feedback mechanism to select among experts. While there are strong mathematical connections between cost function-based prediction markets and regularization-based online learning algorithms in the standard (non-IC) model [Abernethy et al., 2013], there do not appear to be any interesting implications for online prediction with selfish experts. There is also an emerging literature on "incentivizing exploration" in partial feedback models such as the bandit model (e.g. Frazier et al. [2014], Mansour et al. [2016]). Here, the incentive issues concern the learning algorithm itself, rather than the experts (or "arms") that it makes use of.
7The absolute loss function is known to elicit the median [Bonin, 1976][Thomson, 1979], and since we have binary realizations, the median is either 0 or 1.

The question of how an expert should report beliefs has been studied before in the literature on strictly proper scoring rules [Brier, 1950, McCarthy, 1956, Savage, 1971, Gneiting and Raftery, 2007], but this literature typically considers the evaluation of a single prediction, rather than low-regret learning. Bayarri and DeGroot [1989] look at correlated settings where strictly proper scoring rules don't suffice, though they also do not consider how an aggregator can achieve low regret. Finally, there are many works that fall under the broader umbrella of incentives in machine learning. Roughly, work in this area can be divided into two genres: incentives during the learning stage, e.g. [Cai et al., 2015, Shah and Zhou, 2015, Liu and Chen, 2016, Dekel et al., 2010], or incentives during the deployment stage, e.g. Brückner and Scheffer [2011], Hardt et al. [2016]. Finally, Babaioff et al. [2010] consider the problem of no-regret learning with selfish experts in an ad auction setting, where the incentives come from the allocations and payments of the auction, rather than from weights as in our case.

2 Preliminaries and Model

Standard Model. At each time step t ∈ {1, ..., T} we want to predict a binary realization r^{(t)} ∈ {0, 1}. To help in the prediction, we have access to n experts that for each time step report a prediction p_i^{(t)} ∈ [0, 1] about the realization. The realizations are determined by an oblivious adversary, and the predictions of the experts may or may not be accurate. The goal is to use the predictions of the experts in such a way that the algorithm performs nearly as well as the best expert in hindsight. Most of the algorithms proposed for this problem fall into the following framework.

Definition 1 (Weight-update Online Prediction Algorithm).
A weight-update online prediction algorithm maintains a weight w_i^{(t)} for each expert and makes its prediction q^{(t)} based on Σ_{i=1}^n w_i^{(t)} p_i^{(t)} and Σ_{i=1}^n w_i^{(t)} (1 − p_i^{(t)}). After the algorithm makes its prediction, the realization r^{(t)} is revealed, and the algorithm updates the weights of experts using the rule

w_i^{(t+1)} = f(p_i^{(t)}, r^{(t)}) · w_i^{(t)},   (1)

where f : [0, 1] × {0, 1} → R_+ is a positive function on its domain.

The standard WM algorithm has f(p_i^{(t)}, r^{(t)}) = 1 − η·ℓ(p_i^{(t)}, r^{(t)}), where η ∈ (0, 1/2) is the learning rate, and predicts q^{(t)} = 1 if and only if Σ_i w_i^{(t)} p_i^{(t)} ≥ Σ_i w_i^{(t)} (1 − p_i^{(t)}). Let the total loss of the algorithm be M^{(T)} = Σ_{t=1}^T ℓ(q^{(t)}, r^{(t)}) and let the total loss of expert i be m_i^{(T)} = Σ_{t=1}^T ℓ(p_i^{(t)}, r^{(t)}). The WM algorithm has the property that M^{(T)} ≤ 2(1 + η)·m_i^{(T)} + 2 ln n / η for each expert i, and RWM, where the algorithm picks 1 with probability proportional to Σ_i w_i^{(t)} p_i^{(t)}, satisfies M^{(T)} ≤ (1 + η)·m_i^{(T)} + ln n / η for each expert i [Littlestone and Warmuth, 1994][Freund and Schapire, 1997]. The notion of "no α-regret" [Kakade et al., 2009] captures the idea that the per time-step loss of an algorithm is α times that of the best expert in hindsight, plus a term that goes to 0 as T grows:

Definition 2 (α-regret). An algorithm is said to have no α-regret if M^{(T)} ≤ α·min_i m_i^{(T)} + o(T).

By taking η = O(1/√T), WM is a no 2-regret algorithm, and RWM is a no 1-regret algorithm.

Selfish Model. We consider a model in which experts have agency about the prediction they report, and care about the weight that they are assigned. In the selfish model, at time t the expert formulates a private belief b_i^{(t)} about the realization, but she is free to report any prediction p_i^{(t)} to the algorithm. Let Bern(p) be a Bernoulli random variable with parameter p. For any non-negative weight-update function f,

max_p E_{b_i^{(t)}}[w_i^{(t+1)}] = max_p E_{r ∼ Bern(b_i^{(t)})}[f(p, r) · w_i^{(t)}] = w_i^{(t)} · max_p E_{r ∼ Bern(b_i^{(t)})}[f(p, r)].
So expert i will report whichever p_i^{(t)} maximizes the expectation of the weight-update function. Performance of an algorithm with respect to the reported loss of experts follows from the standard analysis [Littlestone and Warmuth, 1994]. However, the true loss may be worse (in Section 3 we show this for the standard update rule; Section 4 shows it more generally). Unless explicitly stated otherwise, in the remainder of this paper m_i^{(T)} = Σ_{t=1}^T ℓ(b_i^{(t)}, r^{(t)}) refers to the true loss of expert i. For now this motivates restricting the weight-update rule f to functions where reporting p_i^{(t)} = b_i^{(t)} maximizes the expected weight of experts. We call these weight-update rules Incentive Compatible (IC).

Definition 3 (Incentive Compatibility). A weight-update function f is incentive compatible (IC) if reporting the true belief b_i^{(t)} is always a best response for every expert at every time step. It is strictly IC when p_i^{(t)} = b_i^{(t)} is the only best response.

By a "best response," we mean an expected utility-maximizing report, where the expectation is with respect to the expert's beliefs.

Collusion. The definition of IC does not rule out the possibility that experts can collude to jointly misreport to improve their weights. We therefore also consider a stronger notion of incentive compatibility for groups with transferable utility.8

Definition 4 (IC for Groups with Transferable Utility). A weight-update function f is IC for groups with transferable utility (TU-GIC) if for every subset S of players, the total expected weight of the group Σ_{i∈S} E_{b_i^{(t)}}[w_i^{(t+1)}] is maximized by each reporting their private belief b_i^{(t)}.

Proper Scoring Rules. Incentivizing truthful reporting of beliefs has been studied extensively, and the set of functions that do this is called the set of proper scoring rules. Since we focus on predicting a binary event, we restrict our attention to this class of functions.

Definition 5 (Binary Proper Scoring Rule, [Schervish, 1989]).
A function f : [0, 1] × {0, 1} → R ∪ {±∞} is a binary proper scoring rule if it is finite except possibly on its boundary and, for every p ∈ [0, 1], p ∈ argmax_{q∈[0,1]} p · f(q, 1) + (1 − p) · f(q, 0). A function f is a strictly proper scoring rule if p is the only value that maximizes the expectation.

The first and perhaps most well-known proper scoring rule is the Brier scoring rule.

Example 6 (Brier Scoring Rule, [Brier, 1950]). The Brier score is Br(p, r) = 2p_r − (p² + (1 − p)²), where p_r = p·r + (1 − p)(1 − r) is the report for the event that materialized.

We will use the Brier scoring rule in Section 5 to construct an incentive-compatible randomized algorithm with good guarantees. The following proposition follows directly from Definitions 3 and 5.

Proposition 7. A weight-update rule f is (strictly) IC if and only if f is a (strictly) proper scoring rule.

Surprisingly, this result remains true even when experts can collude. While the realizations are obviously correlated, linearity of expectation causes the sum to be maximized exactly when each expert maximizes their own expected weight.

Proposition 8. A weight-update rule f is (strictly) incentive compatible for groups with transferable utility if and only if f is a (strictly) proper scoring rule.

Thus, for online prediction with selfish experts, we get TU-GIC "for free." It is quite uncommon for problems in non-cooperative game theory to admit good TU-GIC solutions. For example, results for auctions (either for revenue or welfare) break down once bidders collude; see e.g. [Goldberg and Hartline, 2005]. In the remainder of the paper we will simply use IC to refer to both IC and TU-GIC, as strictly proper scoring rules yield algorithms that satisfy both definitions. Thus, for IC algorithms we are restricted to considering (bounded) proper scoring rules as weight-update rules. Conversely, any bounded scoring rule can be used, possibly after an affine transformation (which preserves properness).
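The contrast between a proper scoring rule and the standard update can be seen numerically: under the Brier rule an expert's expected weight update is maximized by reporting her true belief, while the standard absolute-loss update 1 − η|p − r| is maximized at an endpoint. A small illustration (grid search over candidate reports; not part of the paper's proofs):

```python
# Expected weight update E_{r ~ Bern(b)} f(p, r) for two update rules:
# the Brier rule (IC) vs. the standard absolute-loss rule (not IC).
ETA = 0.1
BELIEF = 0.7
GRID = [i / 1000 for i in range(1001)]  # candidate reports p in [0, 1]

def brier(p, r):
    p_r = p * r + (1 - p) * (1 - r)     # report for the realized outcome
    return 2 * p_r - (p * p + (1 - p) * (1 - p))

def standard(p, r):
    return 1 - ETA * abs(p - r)

def best_report(f, b):
    # argmax over the grid of the expected update under belief b
    return max(GRID, key=lambda p: b * f(p, 1) + (1 - b) * f(p, 0))

best_brier = best_report(brier, BELIEF)        # truthful report
best_standard = best_report(standard, BELIEF)  # extremal report, since BELIEF >= 1/2
```

The expected Brier update is strictly concave with its maximum at p = b, while the expected standard update is linear in p, so its maximizer is always 0 or 1.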
Are there any proper scoring rules that give an online prediction algorithm with a good performance guarantee? The standard algorithm for quadratic losses yields a weight-update function that is equivalent to the Brier strictly proper scoring rule, and thus is IC. The standard algorithm with absolute losses is not IC, so in the remainder of this paper we discuss this setting in more detail.

8Note that TU-GIC is a strictly stronger concept than IC and than group IC with nontransferable utility (NTU-GIC) [Moulin, 1999][Jain and Mahdian, 2007].

3 Deterministic Algorithms for Selfish Experts

This section studies whether there are good online prediction algorithms with selfish experts for the absolute loss function. We restrict our attention here to deterministic algorithms; Section 5 gives a randomized algorithm with good guarantees. Proposition 7 tells us that for selfish experts to have a strict incentive to report truthfully, the weight-update rule must be a strictly proper scoring rule. This section gives a deterministic algorithm based on the spherical strictly proper scoring rule that has no (2 + √2)-regret (Theorem 10). Additionally, we consider whether the non-truthful reports from experts under the standard (non-IC) WM algorithm are harmful. We show that this is the case by proving that it is not a no c-regret algorithm for any constant c < 4 (Proposition 11). This shows that, when experts are selfish, the IC online prediction algorithm with the spherical rule outperforms the standard WM algorithm (in the worst case).

Online Prediction using a Spherical Rule. We next give an algorithm that uses a strictly proper scoring rule based on the spherical scoring rule.9 Consider the following weight-update rule:

f_sp(p_i^{(t)}, r^{(t)}) = 1 − η · ( 1 − (1 − |p_i^{(t)} − r^{(t)}|) / √( (p_i^{(t)})² + (1 − p_i^{(t)})² ) ).   (2)

The following proposition establishes that this is in fact a strictly proper scoring rule.
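As a quick numerical sanity check (no substitute for the proof in the appendix), we can verify on a grid that truthful reporting maximizes the expected update under f_sp, and that the update stays strictly positive, so weights remain positive. The grid and η = 0.25 are illustrative choices:

```python
import math

def f_sp(p, r, eta=0.25):
    # Spherical weight-update rule from Eq. (2); eta in (0, 1/2) assumed.
    norm = math.sqrt(p * p + (1 - p) * (1 - p))
    return 1 - eta * (1 - (1 - abs(p - r)) / norm)

grid = [i / 1000 for i in range(1, 1000)]  # candidate reports in (0, 1)

def best_report(b):
    # argmax over the grid of E_{r ~ Bern(b)} f_sp(p, r) under belief b
    return max(grid, key=lambda p: b * f_sp(p, 1) + (1 - b) * f_sp(p, 0))

checks = {b: best_report(b) for b in (0.1, 0.5, 0.7, 0.95)}
min_update = min(min(f_sp(p, 0), f_sp(p, 1)) for p in grid)
```

For each belief tested, the expected update is maximized exactly at the truthful report, consistent with Proposition 9.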
Due to space constraints, all proofs appear in Appendix A of the supplementary material.

Proposition 9. The spherical weight-update rule in (2) is a strictly proper scoring rule.

In addition to incentivizing truthful reporting, the WM algorithm with the update rule f_sp does not do much worse than the best expert in hindsight.

Theorem 10. WM with weight-update rule (2) for η = O(1/√T) < 1/2 has no (2 + √2)-regret.

True Loss of the Non-IC Standard Rule. It is instructive to compare the guarantee in Theorem 10 with the performance of the standard (non-IC) WM algorithm. WM with the standard weight-update function f(p_i^{(t)}, r^{(t)}) = 1 − η|p_i^{(t)} − r^{(t)}| for η ∈ (0, 1/2) has no 2-regret with respect to the reported loss of experts. However, this algorithm incentivizes extremal reports (for details see Appendix B in the supplementary material), and in the worst case, this algorithm's loss can be as bad as 4 times the true loss of the best expert in hindsight. Theorem 10 shows that a suitable IC algorithm obtains a superior worst-case guarantee.

Proposition 11. The standard WM algorithm with weight-update rule f(p_i^{(t)}, r^{(t)}) = 1 − η|p_i^{(t)} − r^{(t)}| results in a total worst-case loss no better than M^{(T)} ≥ 4 · min_i m_i^{(T)} − o(1).

4 The Cost of Selfish Experts

We now address the third fundamental question: whether or not online prediction with selfish experts is strictly harder than with honest experts. As there exists a deterministic algorithm for honest experts with no 2-regret, showing a separation between honest and selfish experts boils down to proving that there exists a constant δ > 0 such that the best possible no α-regret algorithm has α = 2 + δ. In this section we show that such a δ exists, and that it is independent of the learning rate. Hence the lower bound also holds for algorithms that, like the classical prediction algorithms, use a time-varying learning rate.
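To make the setting of Theorem 10 concrete, here is a minimal end-to-end sketch of deterministic WM with the spherical update, with truthful reports (the rule is IC). The report streams, horizon, and η = 0.25 are illustrative choices, not constructions from the paper:

```python
import math

def f_sp(p, r, eta=0.25):
    # Spherical weight-update rule (Eq. 2).
    norm = math.sqrt(p * p + (1 - p) * (1 - p))
    return 1 - eta * (1 - (1 - abs(p - r)) / norm)

def wm_spherical(reports, outcomes, eta=0.25):
    """Deterministic WM with the IC spherical update.

    reports[t][i] is expert i's (truthful) prediction at time t.
    Returns the algorithm's total absolute loss and final weights.
    """
    n = len(reports[0])
    w = [1.0] * n
    alg_loss = 0.0
    for preds, r in zip(reports, outcomes):
        mass_one = sum(wi * p for wi, p in zip(w, preds))
        mass_zero = sum(wi * (1 - p) for wi, p in zip(w, preds))
        q = 1 if mass_one >= mass_zero else 0  # deterministic tie-break toward 1
        alg_loss += abs(q - r)
        w = [wi * f_sp(p, r, eta) for wi, p in zip(w, preds)]
    return alg_loss, w

# Expert 0 is consistently accurate (0.9), expert 1 consistently wrong (0.2):
T = 20
loss, weights = wm_spherical([[0.9, 0.2]] * T, [1] * T)
```

On this stream the accurate expert keeps nearly all of its weight while the inaccurate expert's weight decays, so the algorithm predicts 1 every round and incurs zero loss.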
Due to space considerations, this section only states the main results; for details and proofs refer to the supplementary material, where Appendix D gives the results for IC algorithms and Appendix E gives the results for non-IC algorithms. We extend these results to randomized algorithms in Section 5, where we rule out the existence of a (possibly randomized) no-regret algorithm for selfish experts.

9In Appendix G in the supplementary materials we give an intuition for why this rule yields better results than other natural candidates, such as the Brier scoring rule.

IC Algorithms. To prove the lower bound, we have to be specific about which set of algorithms we consider. To cover algorithms that have a decreasing learning parameter, we first show that any positive proper scoring rule can be interpreted as having a learning parameter η.

Proposition 12. Let f be any strictly proper scoring rule. We can write f as f(p, r) = a + b·f′(p, r) with a ∈ R, b ∈ R_+, and f′ a strictly proper scoring rule with min(f′(0, 1), f′(1, 0)) = 0 and max(f′(0, 0), f′(1, 1)) = 1.

We call f′ : [0, 1] × {0, 1} → [0, 1] a normalized scoring rule. Using normalized scoring rules, we can define a family of scoring rules with different learning rates η. Define F as the following family of proper scoring rules generated by a normalized strictly proper scoring rule f:

F = {f′(p, r) = a(1 + η(f(p, r) − 1)) : a > 0 and η ∈ (0, 1)}.

By Proposition 12 the union of families generated by normalized strictly proper scoring rules covers all strictly proper scoring rules. Using this we can now formulate the class of deterministic algorithms that are incentive compatible.

Definition 13 (Deterministic IC Algorithms). Let A_d be the set of deterministic algorithms that update weights by w_i^{(t+1)} = a(1 + η(f(p_i^{(t)}, r^{(t)}) − 1))·w_i^{(t)}, for a normalized strictly proper scoring rule f and η ∈ (0, 1/2), with η possibly decreasing over time.
For q = Σᵢ wᵢ^(t) pᵢ^(t) / Σᵢ wᵢ^(t), A picks q^(t) = 0 if q < 1/2, q^(t) = 1 if q > 1/2, and uses any deterministic tie-breaking rule for q = 1/2. Using this definition we can now state our main lower bound result for IC algorithms:

Theorem 14. For the absolute loss function, there does not exist a deterministic and incentive-compatible algorithm A ∈ A_d with no 2-regret.

Of particular interest are symmetric scoring rules, which occur often in practice, and which have a relevant parameter that drives the lower bound results:

Definition 15 (Scoring Rule Gap). The scoring rule gap γ of family F with generator f is γ = f(1/2) − (f(0) + f(1))/2 = f(1/2) − 1/2.

By definition, the scoring rule gap of a strictly proper scoring rule is strictly positive, and it drives the lower bound for symmetric rules:

Lemma 16. Let F be a family of scoring rules generated by a symmetric strictly proper scoring rule f, and let γ be the scoring rule gap of F. In the worst case, MW with any scoring rule f′ from F with η ∈ (0, 1/2) can do no better than M^(T) ≥ (2 + 1/⌈γ⁻¹⌉) · m_i^(T).

As a consequence of Lemma 16, we can calculate lower bounds for specific strictly proper scoring rules. For example, the spherical rule used in Section 3 is a symmetric strictly proper scoring rule with gap parameter γ = √2/2 − 1/2, and hence 1/⌈γ⁻¹⌉ = 1/5.

Non-IC Algorithms. What about non-incentive-compatible algorithms? Could it be that, even with experts reporting strategically instead of honestly, there is a deterministic algorithm with loss at most twice that of the best expert in hindsight (or a randomized algorithm with vanishing regret), to match the classical results for honest experts? Under mild technical conditions, the answer is no. The following definition captures how players are incentivized to report differently from their beliefs.

Definition 17 (Rationality Function). For a weight update function f, let ρ_f : [0, 1] →
[0, 1] be the function from beliefs to predictions, such that reporting ρ_f(b) is rational for an expert with belief b.

Under mild technical conditions on the rationality function, we show our main lower bound for (potentially non-IC) algorithms.

Theorem 18. For a weight update function f with continuous or non-strictly increasing rationality function ρ_f, there is no deterministic no 2-regret algorithm.

Note that Theorem 18 covers the standard algorithm, as well as other common update rules such as the Hedge update rule f_Hedge(p_i^(t), r^(t)) = e^{−η|p_i^(t) − r^(t)|} [Freund and Schapire, 1997], and all IC methods, since they have the identity rationality function (though the bounds in Theorem 14 are stronger).

5 Randomized Algorithms: Upper and Lower Bounds

Impossibility of Vanishing Regret. We now consider randomized online learning algorithms, which can typically achieve better worst-case guarantees than deterministic algorithms. For example, with honest experts, there are randomized algorithms with no 1-regret. Unfortunately, the lower bounds in Section 4 imply that no such result is possible for randomized algorithms (more details in Appendix F).

Corollary 19. Any incentive-compatible randomized weight-update algorithm, or non-IC randomized algorithm with continuous or non-strictly increasing rationality function, cannot be no 1-regret.

An IC Randomized Algorithm. While we cannot hope to achieve a no-regret algorithm for online prediction with selfish experts, we can do better than the deterministic algorithm from Section 3. Consider the following class of randomized algorithms:

Definition 20 (θ-randomized weighted majority). Let A_r be the class of algorithms that maintain expert weights as in Definition 1. Let b^(t) = Σᵢ wᵢ^(t) pᵢ^(t) / Σⱼ wⱼ^(t) be the weighted prediction. For a parameter θ ∈ [0, 1/2] the algorithm chooses 1 with probability

p^(t) = 0 if b^(t) ≤ θ;  p^(t) = b^(t) if θ < b^(t) ≤ 1 − θ;  p^(t) = 1 otherwise.

We call algorithms in A_r θ-RWM algorithms.
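The rounding rule of Definition 20 is easy to state in code (an illustrative sketch; the function name is ours):

```python
def theta_rwm_prob(weights, reports, theta):
    """Probability that a theta-RWM algorithm (Definition 20) predicts 1:
    round the weighted prediction b to 0 below theta, to 1 above 1-theta,
    and follow b itself in between."""
    b = sum(w * p for w, p in zip(weights, reports)) / sum(weights)
    if b <= theta:
        return 0.0
    if b <= 1.0 - theta:
        return b
    return 1.0
```

With θ = 0 this reduces to the regular randomized weighted majority, which simply predicts 1 with probability b.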
We use the Brier rule f_Br(p_i^(t), r^(t)) = 1 − η(((p_i^(t))² + (1 − p_i^(t))² + 1)/2 − (1 − s_i^(t))) with s_i^(t) = |p_i^(t) − r^(t)|.

Theorem 21. Let A ∈ A_r be a θ-RWM algorithm with the Brier weight-update rule f_Br, θ = 0.382 and η = O(1/√T) ∈ (0, 1/2). A has no 2.62-regret.

6 Simulations

The theoretical results presented so far indicate that when faced with selfish experts, one should use an IC weight-update rule, and rules with a smaller scoring rule gap are better. Two objections to these conclusions are: first, the presented results are worst-case, and may not represent behavior on a typical input. It is of particular interest to see whether, on non-worst-case inputs, the non-IC standard weight-update rule does better or worse than the IC methods proposed in this paper. Second, there is a gap between our upper and lower bounds for IC rules, so it is interesting to see what numerical regret is obtained.

Results. In our first simulation, experts are represented by a simple two-state hidden Markov model (HMM) with a "good" state and a "bad" state. Realization r^(t) is given by a fair coin. For r^(t) = 0 (otherwise beliefs are reversed), in the good state expert i believes b_i^(t) ∼ min{Exp(1)/5, 1}, and in the bad state b_i^(t) ∼ U[1/2, 1]. The probability of exiting a state is 1/10 for both states. This data-generating process models that experts who have information about the event are more accurate than experts who lack it. Figure 1a shows the regret as a function of time for the standard (non-IC) algorithm, and for IC scoring rules including one from the Beta family [Buja et al., 2005] with α = β = 1/2. For the IC methods, experts report p_i^(t) = b_i^(t); for the standard algorithm, p_i^(t) = 1 if b_i^(t) ≥ 1/2 and p_i^(t) = 0 otherwise. The y-axis is the ratio of the total loss of each of the algorithms to the performance of the best expert at that time. The plot is for 10 experts, T = 10,000, η = 10⁻², and the randomized¹⁰ versions of the algorithms, averaged over 30 runs.
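For concreteness, here is the Brier weight-update rule as reconstructed above (a sketch; the parenthesization is our reading of the garbled source, checked against the normalization of Proposition 12: a correct confident report keeps the weight factor at 1, while a maximally wrong one yields 1 − η):

```python
def f_brier(p, r, eta):
    """Brier weight-update rule for the theta-RWM algorithm:
    f_Br = 1 - eta*((p^2 + (1-p)^2 + 1)/2 - (1 - s)), with s = |p - r|."""
    s = abs(p - r)
    return 1.0 - eta * ((p**2 + (1 - p)**2 + 1) / 2 - (1 - s))
```

For example, with η = 0.1 a truthful confident correct report gives factor 1.0, a confident wrong report gives 0.9, and the uninformative report p = 1/2 gives 0.975.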
Varying the model parameters and using the deterministic versions show similar results. Each of the IC methods does significantly better than the standard weight-update algorithm, and even at T = 200,000 (not shown in the graph), the IC methods have a regret factor of about 1.003, whereas the standard algorithm still has 1.14. This gives credence to the notion that failing to account for incentive issues is problematic beyond the worst-case bounds presented earlier. Moreover, while there is a worst-case lower bound that rules out no-regret, for natural synthetic data the loss of all the IC algorithms approaches that of the best expert in hindsight, while the standard algorithm fails to do this. This seems to indicate that eliciting the truthful beliefs of the experts is more important than the exact weight-update rule.

¹⁰Here we use the regular RWM algorithm, so in the notation of Section 5, we have θ = 0.

Figure 1: Regret for different data-generating processes. (a) The HMM data-generating process. (b) The greedy lower bound instance.

Table 1: Comparison of lower bound results with simulation. The simulation is run for T = 10,000, η = 10⁻⁴ and we report the average of 30 runs. For the lower bounds, the first number is the lower bound from Lemma 16, i.e. 2 + 1/⌈γ⁻¹⌉; the second number (in parentheses) is 2 + γ.

            Beta .1        Beta .5        Beta .7        Beta .9        Brier   Spherical
Greedy LB   2.3708         2.2983         2.2758         2.2584         2.2507  2.2071
LB Sim      2.4414         2.3186         2.2847         2.2599         2.2502  2.2070
Lem 16 LB   2.33 (2.441)   2.25 (2.318)   2.25 (2.285)   2.25 (2.260)   2.25    2.2 (2.207)

Comparison of LB Instances. We consider both the lower bound instance described in the proof of Lemma 16, and a greedy version that punishes the algorithm every time w_0^(t) is "sufficiently" large.¹¹ Figure 1b shows the regret for different algorithms on the greedy lower bound instance. Table 1 shows that it very closely traces 2 + γ, as do the numerical results for the lower bound from Lemma 16.
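The "Lem 16 LB" entries for the Brier and spherical rules in Table 1 can be reproduced directly from the scoring-rule gap of Definition 15 (an illustrative sketch; the gaps of the Beta-family rules are not derived in this excerpt, so only these two columns are checked):

```python
import math

def gap(f_norm):
    """Scoring-rule gap (Definition 15) of a normalized symmetric rule:
    gamma = f(1/2) - 1/2."""
    return f_norm(0.5, 1) - 0.5

def lemma16_bound(gamma):
    """Worst-case loss multiplier from Lemma 16: 2 + 1/ceil(1/gamma)."""
    return 2 + 1 / math.ceil(1 / gamma)

# normalized spherical and Brier-style quadratic rules (binary outcome)
spherical = lambda p, r: (p if r == 1 else 1 - p) / math.sqrt(p**2 + (1 - p)**2)
brier = lambda p, r: (2 * (p if r == 1 else 1 - p) - p**2 - (1 - p)**2 + 1) / 2
```

This yields γ = √2/2 − 1/2 and bound 2.2 for the spherical rule, and γ = 1/4 and bound 2.25 for the Brier rule, matching the table.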
In fact, for the analysis we needed to use ⌈γ⁻¹⌉ when determining the first phase of the instance. When we instead use γ numerically, the regret seems to trace 2 + γ quite closely, rather than the weaker proven lower bound of 2 + 1/⌈γ⁻¹⌉. Table 1 shows that the analysis of Lemma 16 is essentially tight (up to the rounding of γ). Closing the gap between the lower and upper bound requires finding a different lower bound instance, or a better analysis for the upper bound.

7 Open Problems

There are a number of interesting questions that this work raises. First of all, our utility model effectively causes experts to optimize their weight independently of other experts. Bayarri and DeGroot [1989] discuss different objective functions for experts, including optimizing relative weight among experts under different informational assumptions. These would impose different constraints as to which algorithms would lead to truthful reporting, and it would be interesting to see if no-regret learning is possible in this setting. It also remains an open problem to close the gap between the best known upper and lower bounds that we presented in this paper. The simulations showed that the analysis for the lower bound instances is almost tight, so this requires a novel upper bound and/or a different lower bound instance. Finally, strictly proper scoring rules are also well-defined beyond binary outcomes. It would be interesting to see what bounds can be proved for predictions over more than two outcomes.

¹¹When w_0^(t) is sufficiently large, we make e_0 (and thus the algorithm) wrong twice: b_0^(t) = 0, b_1^(t) = 1, b_2^(t) = 1/2, r^(t) = 1, and b_0^(t+1) = 0, b_1^(t+1) = 1/2, b_2^(t+1) = 1, r^(t+1) = 1. "Sufficiently" here means that the weight of e_0 is high enough for the algorithm to follow its advice during both steps.

References

Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning.
ACM Transactions on Economics and Computation, 1(2):12, 2013.

Moshe Babaioff, Robert D. Kleinberg, and Aleksandrs Slivkins. Truthful mechanisms with implicit payment computation. In Proceedings of the 11th ACM Conference on Electronic Commerce, EC '10, pages 43–52, New York, NY, USA, 2010. ACM.

M. J. Bayarri and M. H. DeGroot. Optimal reporting of predictions. Journal of the American Statistical Association, 84(405):214–222, 1989.

J. Eric Bickel and Seong Dae Kim. Verification of the weather channel probability of precipitation forecasts. Monthly Weather Review, 136(12):4867–4881, 2008.

John P. Bonin. On the design of managerial incentive structures in a decentralized planning environment. The American Economic Review, 66(4):682–687, 1976.

Craig Boutilier. Eliciting forecasts from self-interested experts: scoring rules for decision makers. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, pages 737–744. International Foundation for Autonomous Agents and Multiagent Systems, 2012.

Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.

Michael Brückner and Tobias Scheffer. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 547–555. ACM, 2011.

Andreas Buja, Werner Stuetzle, and Yi Shen. Loss functions for binary class probability estimation and classification: Structure and applications. 2005.

Yang Cai, Constantinos Daskalakis, and Christos H. Papadimitriou. Optimum statistical estimation with strategic data sources. In COLT, pages 280–296, 2015.

Nicolo Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
Yiling Chen and David M. Pennock. Designing markets for prediction. AI Magazine, 31(4):42–52, 2010.

Ofer Dekel, Felix Fischer, and Ariel D. Procaccia. Incentive compatible regression learning. Journal of Computer and System Sciences, 76(8):759–777, 2010.

Peter Frazier, David Kempe, Jon Kleinberg, and Robert Kleinberg. Incentivizing exploration. In Proceedings of the 15th ACM Conference on Economics and Computation, pages 5–22. ACM, 2014.

Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.

Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.

Andrew V. Goldberg and Jason D. Hartline. Collusion-resistant mechanisms for single-parameter agents. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 620–629. Society for Industrial and Applied Mathematics, 2005.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111–122. ACM, 2016.

Christian Franz Horn, Bjoern Sven Ivens, Michael Ohneberg, and Alexander Brem. Prediction markets – a literature review 2014. The Journal of Prediction Markets, 8(2):89–126, 2014.

Kamal Jain and Mohammad Mahdian. Cost sharing. Algorithmic Game Theory, pages 385–410, 2007.

Victor Richmond R. Jose, Robert F. Nau, and Robert L. Winkler. Scoring rules, generalized entropy, and utility maximization. Operations Research, 56(5):1146–1157, 2008.

Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. SIAM Journal on Computing, 39(3):1088–1106, 2009.

Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
Yang Liu and Yiling Chen. A bandit framework for strategic regression. In Advances in Neural Information Processing Systems, pages 1813–1821, 2016.

Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, and Zhiwei Steven Wu. Bayesian exploration: Incentivizing exploration in Bayesian games. arXiv preprint arXiv:1602.07570, 2016.

John McCarthy. Measures of the value of information. Proceedings of the National Academy of Sciences of the United States of America, 42(9):654, 1956.

Edgar C. Merkle and Mark Steyvers. Choosing a strictly proper scoring rule. Decision Analysis, 10(4):292–304, 2013.

Nolan Miller, Paul Resnick, and Richard Zeckhauser. Eliciting informative feedback: The peer-prediction method. Management Science, 51(9):1359–1373, 2005.

Hervé Moulin. Incremental cost sharing: Characterization by coalition strategy-proofness. Social Choice and Welfare, 16(2):279–320, 1999.

Tim Roughgarden and Eva Tardos. Introduction to the inefficiency of equilibria. Algorithmic Game Theory, 17:443–459, 2007.

Leonard J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783–801, 1971.

Mark J. Schervish. A general method for comparing probability assessors. The Annals of Statistics, pages 1856–1879, 1989.

Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in Neural Information Processing Systems, pages 1–9, 2015.

William Thomson. Eliciting production possibilities from a well-informed manager. Journal of Economic Theory, 20(3):360–380, 1979.
Streaming Robust Submodular Maximization: A Partitioned Thresholding Approach

Slobodan Mitrović∗ EPFL    Ilija Bogunović† EPFL    Ashkan Norouzi-Fard‡ EPFL    Jakub Tarnawski§ EPFL    Volkan Cevher¶ EPFL

Abstract

We study the classical problem of maximizing a monotone submodular function subject to a cardinality constraint k, with two additional twists: (i) elements arrive in a streaming fashion, and (ii) m items from the algorithm's memory are removed after the stream is finished. We develop a robust submodular algorithm STAR-T. It is based on a novel partitioning structure and an exponentially decreasing thresholding rule. STAR-T makes one pass over the data and retains a short but robust summary. We show that after the removal of any m elements from the obtained summary, a simple greedy algorithm STAR-T-GREEDY that runs on the remaining elements achieves a constant-factor approximation guarantee. In two different data summarization tasks, we demonstrate that it matches or outperforms existing greedy and streaming methods, even if they are allowed the benefit of knowing the removed subset in advance.

1 Introduction

A central challenge in many large-scale machine learning tasks is data summarization – the extraction of a small representative subset out of a large dataset. Applications include image and document summarization [1, 2], influence maximization [3], facility location [4], exemplar-based clustering [5], recommender systems [6], and many more. Data summarization can often be formulated as the problem of maximizing a submodular set function subject to a cardinality constraint. On small datasets, a popular algorithm is the simple greedy method [7], which produces solutions provably close to optimal. Unfortunately, it requires repeated access to all elements, which makes it infeasible for large-scale scenarios, where the entire dataset does not fit in the main memory.
In this setting, streaming algorithms prove to be useful, as they make only a small number of passes over the data and use sublinear space. In many settings, the extracted representative set is also required to be robust. That is, the objective value should degrade as little as possible when some elements of the set are removed. Such removals may arise for any number of reasons, such as failures of nodes in a network, or user preferences which the model failed to account for; they could even be adversarial in nature.

∗e-mail: slobodan.mitrovic@epfl.ch  †e-mail: ilija.bogunovic@epfl.ch  ‡e-mail: ashkan.norouzifard@epfl.ch  §e-mail: jakub.tarnawski@epfl.ch  ¶e-mail: volkan.cevher@epfl.ch

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

A robustness requirement is especially challenging for large datasets, where it is prohibitively expensive to reoptimize over the entire data collection in order to find replacements for the removed elements. In some applications, where data is produced so rapidly that most of it is not being stored, such a search for replacements may not be possible at all. These requirements lead to the following two-stage setting. In the first stage, we wish to solve the robust streaming submodular maximization problem – one of finding a small representative subset of elements that is robust against any possible removal of up to m elements. In the second, query stage, after an arbitrary removal of m elements from the summary obtained in the first stage, the goal is to return a representative subset, of size at most k, using only the precomputed summary rather than the entire dataset.
For example: (i) in the dominating set problem (also studied under influence maximization), we want to efficiently (in a single pass) compute a compressed but robust set of influential users in a social network (whom we will present with free copies of a new product); (ii) in personalized movie recommendation, we want to efficiently precompute a robust set of user-preferred movies. Once we discard those users who will not spread the word about our product, we should find a new set of influential users in the precomputed robust summary. Similarly, if some movies turn out not to be interesting for the user, we should still be able to provide good recommendations by only looking into our robust movie summary.

Contributions. In this paper, we propose a two-stage procedure for robust submodular maximization. For the first stage, we design a streaming algorithm which makes one pass over the data and finds a summary that is robust against removal of up to m elements, while containing at most O((m log k + k) log² k) elements. In the second (query) stage, given any set of size m that has been removed from the obtained summary, we use a simple greedy algorithm that runs on the remaining elements and produces a solution of size at most k (without needing to access the entire dataset). We prove that this solution satisfies a constant-factor approximation guarantee. Achieving this result requires novelty in the algorithm design as well as the analysis. Our streaming algorithm uses a structure where the constructed summary is arranged into partitions consisting of buckets whose sizes increase exponentially with the partition index. Moreover, buckets in different partitions are associated with greedy thresholds, which decrease exponentially with the partition index. Our analysis exploits and combines the properties of the described robust structure and decreasing greedy thresholding rule.
In addition to algorithmic and theoretical contributions, we also demonstrate in several practical scenarios that our procedure matches (and in some cases outperforms) the SIEVE-STREAMING algorithm [8] (see Section 5) – even though we allow the latter to know in advance which elements will be removed from the dataset.

2 Problem Statement

We consider a potentially large universe of elements V of size n equipped with a normalized monotone submodular set function f : 2^V → ℝ≥0 defined on V. We say that f is monotone if for any two sets X ⊆ Y ⊆ V we have f(X) ≤ f(Y). The set function f is said to be submodular if for any two sets X ⊆ Y ⊆ V and any element e ∈ V \ Y it holds that

f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y).

We use f(Y | X) to denote the marginal gain in the function value due to adding the elements of set Y to set X, i.e. f(Y | X) := f(X ∪ Y) − f(X). We say that f is normalized if f(∅) = 0. The problem of maximizing a monotone submodular function subject to a cardinality constraint, i.e.,

max_{Z ⊆ V, |Z| ≤ k} f(Z),   (1)

has been studied extensively. It is well known that a simple greedy algorithm (henceforth referred to as GREEDY) [7], which starts from an empty set and then iteratively adds the element with highest marginal gain, provides a (1 − e⁻¹)-approximation. However, it requires repeated access to all elements of the dataset, which precludes it from use in large-scale machine learning applications.

We say that a set S is robust for a parameter m if, for any set E ⊆ V such that |E| ≤ m, there is a subset Z ⊆ S \ E of size at most k such that f(Z) ≥ c·f(OPT(k, V \ E)), where c > 0 is an approximation ratio. We use OPT(k, V \ E) to denote the optimal subset of size k of V \ E (i.e., after the removal of elements in E):

OPT(k, V \ E) ∈ argmax_{Z ⊆ V\E, |Z| ≤ k} f(Z).

In this work, we are interested in solving a robust version of Problem (1) in the setting that consists of the following two stages: (i) streaming and (ii) query stage.
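The GREEDY baseline of Problem (1) is short enough to sketch, here with a toy coverage function (which is monotone and submodular) standing in for the oracle f; the instance and names are ours, for illustration only:

```python
def greedy(f, V, k):
    """Classic greedy for max_{|Z| <= k} f(Z): repeatedly add the element
    with the largest marginal gain f(Z ∪ {e}) - f(Z)."""
    Z = set()
    for _ in range(k):
        best = max((e for e in V if e not in Z),
                   key=lambda e: f(Z | {e}) - f(Z), default=None)
        if best is None or f(Z | {best}) - f(Z) <= 0:
            break
        Z.add(best)
    return Z

# coverage function: f(Z) = size of the union of the sets indexed by Z
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}, 3: {7}}
cover = lambda Z: len(set().union(*(sets[i] for i in Z))) if Z else 0
```

On this instance with k = 2, greedy first picks index 2 (gain 4) and then index 0 (gain 3), covering 7 items.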
In the streaming stage, elements from the ground set V arrive in a streaming fashion in an arbitrary order. Our goal is to design a one-pass streaming algorithm that has oracle access to f and retains a small set S of elements in memory. In addition, we want S to be a robust summary, i.e., S should both contain elements that maximize the objective value, and be robust against the removal of a prespecified number of elements m. In the query stage, after any set E of size at most m is removed from V, the goal is to return a set Z ⊆ S \ E of size at most k such that f(Z) is maximized.

Related work. A robust, non-streaming version of Problem (1) was first introduced in [9]. In that setting, the algorithm must output a set Z of size k which maximizes the smallest objective value guaranteed to be obtained after a set of size m is removed, that is,

max_{Z ⊆ V, |Z| ≤ k} min_{E ⊆ Z, |E| ≤ m} f(Z \ E).

The work [10] provides the first constant (0.387) factor approximation result for this problem, valid for m = o(√k). Their solution consists of buckets of size O(m² log k) that are constructed greedily, one after another. Recently, in [11], a centralized algorithm PRO has been proposed that achieves the same approximation result and allows for a greater robustness m = o(k). PRO constructs a set that is arranged into partitions consisting of buckets whose sizes increase exponentially with the partition index. In this work, we use a similar structure for the robust set but, instead of filling the buckets greedily one after another, we place an element in the first bucket for which the gain of adding the element is above the corresponding threshold. Moreover, we introduce a novel analysis that allows us to be robust to any number of removals m as long as we are allowed to use O(m log² k) memory. Recently, submodular streaming algorithms (e.g. [5], [12] and [13]) have become a prominent option for scaling submodular optimization to large-scale machine learning applications.
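The inner minimization of this robust objective can be evaluated by brute force on small instances (an illustrative sketch only; it enumerates all removals of up to m elements, so it is exponential in m and usable just for sanity checks):

```python
from itertools import combinations

def robust_value(f, Z, m):
    """Inner minimization of the robust objective
    max_{|Z| <= k} min_{E ⊆ Z, |E| <= m} f(Z \\ E), by brute force."""
    Z = set(Z)
    return min(f(Z - set(E))
               for r in range(m + 1)
               for E in combinations(Z, r))

# toy coverage function: f(Z) = size of the union of the chosen sets
cover_sets = {0: {1, 2}, 1: {2, 3}, 2: {5}}
cover = lambda Z: len(set().union(*(cover_sets[i] for i in Z))) if Z else 0
```

For Z = {0, 1, 2} here, the unconstrained value is 4, but one adversarial removal drops it to 3, whichever element is taken.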
A popular submodular streaming algorithm, SIEVE-STREAMING [8], solves Problem (1) by performing one pass over the data, and achieves a (0.5 − ϵ)-approximation while storing at most O(k log k / ϵ) elements. Our algorithm extends the algorithmic ideas of SIEVE-STREAMING, such as greedy thresholding, to the robust setting. In particular, we introduce a new exponentially decreasing thresholding scheme that, together with an innovative analysis, allows us to obtain a constant-factor approximation for the robust streaming problem. Recently, robust versions of submodular maximization have been considered in the problems of influence maximization (e.g., [3], [14]) and budget allocation ([15]). Increased interest in interactive machine learning methods has also led to the development of interactive and adaptive submodular optimization (see e.g. [16], [17]). Our procedure also contains an interactive component, as we can compute the robust summary only once and then provide different sub-summaries that correspond to multiple different removals (see Section 5.2). Independently and concurrently with our work, [18] gave a streaming algorithm for robust submodular maximization under the cardinality constraint. Their approach provides a 1/2 − ε approximation guarantee. However, their algorithm uses O(mk log k/ε) memory. While the memory requirement of their method increases linearly with k, in the case of our algorithm this dependence is logarithmic.

Figure 1: Illustration of the set S returned by STAR-T. It consists of ⌈log k⌉ + 1 partitions such that each partition i contains w⌈k/2^i⌉ buckets of size 2^i (up to rounding). Moreover, each partition i has its corresponding threshold τ/2^i.

3 A Robust Two-stage Procedure

Our approach consists of the streaming Algorithm 1, which we call Streaming Robust submodular algorithm with Partitioned Thresholding (STAR-T).
This algorithm is used in the streaming stage, while Algorithm 2, which we call STAR-T-GREEDY, is used in the query stage. As input, STAR-T requires a non-negative monotone submodular function f, cardinality constraint k, robustness parameter m and thresholding parameter τ. The parameter τ is an α-approximation to f(OPT(k, V \ E)), for some α ∈ (0, 1] to be specified later. Hence, it depends on f(OPT(k, V \ E)), which is not known a priori. For the sake of clarity, we present the algorithm as if f(OPT(k, V \ E)) were known, and in Section 4.1 we show how f(OPT(k, V \ E)) can be approximated.

The algorithm makes one pass over the data and outputs a set of elements S that is later used in the query stage by STAR-T-GREEDY. The set S (see Figure 1 for an illustration) is divided into ⌈log k⌉ + 1 partitions, where every partition i ∈ {0, ..., ⌈log k⌉} consists of w⌈k/2^i⌉ buckets B_{i,j}, j ∈ {1, ..., w⌈k/2^i⌉}. Here, w ∈ ℕ₊ is a memory parameter that depends on m; we use w ≥ ⌈4⌈log k⌉m/k⌉ in our asymptotic theory, while our numerical results show that w = 1 works well in practice. Every bucket B_{i,j} stores at most min{k, 2^i} elements. If |B_{i,j}| = min{2^i, k}, then we say that B_{i,j} is full. Every partition has a corresponding threshold that decreases exponentially with the partition index i as τ/2^i. For example, the buckets in the first partition will only store elements that have marginal value at least τ. Every element e ∈ V arriving on the stream is assigned to the first non-full bucket B_{i,j} for which the marginal value f(e | B_{i,j}) is at least τ/2^i. If there is no such bucket, the element is not stored. Hence, the buckets are disjoint sets that in the end (after one pass over the data) can have fewer elements than specified by their corresponding cardinality constraints, and some of them might even be empty. The set S returned by STAR-T is the union of all the buckets.
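The bucket layout and placement rule just described can be sketched in a few lines of Python (an illustrative sketch only, not the authors' code: a toy coverage function stands in for the submodular oracle f, base-2 logarithms are assumed, and the per-element threshold τ/min{2^i, k} follows the pseudocode of Algorithm 1):

```python
import math

def star_t(stream, f, k, tau, w=1):
    """One-pass STAR-T sketch: partition i has w*ceil(k/2^i) buckets of
    capacity min(2^i, k); each arriving element goes into the first
    non-full bucket whose marginal-gain threshold tau/capacity it clears."""
    L = math.ceil(math.log2(k))
    buckets = [[set() for _ in range(w * math.ceil(k / 2**i))]
               for i in range(L + 1)]
    for e in stream:
        placed = False
        for i, partition in enumerate(buckets):
            cap = min(2**i, k)
            for B in partition:
                if len(B) < cap and f(B | {e}) - f(B) >= tau / cap:
                    B.add(e)
                    placed = True
                    break
            if placed:
                break
    return set().union(*(B for partition in buckets for B in partition))

def star_t_greedy(f, S, E, k):
    """Query stage: plain greedy on the surviving summary S \\ E."""
    Z, rest = set(), set(S) - set(E)
    for _ in range(k):
        best = max(rest - Z, key=lambda e: f(Z | {e}) - f(Z), default=None)
        if best is None or f(Z | {best}) - f(Z) <= 0:
            break
        Z.add(best)
    return Z
```

On a toy coverage instance with k = 2, the deeper, lower-threshold buckets retain fallback elements that keep the query-stage value high even after the single best element is removed.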
In the second stage, STAR-T-GREEDY receives as input the set S constructed in the streaming stage, a set E ⊂ S that we think of as removed elements, and the cardinality constraint k. The algorithm then returns a set Z, of size at most k, that is obtained by running the simple greedy algorithm GREEDY on the set S \ E. Note that STAR-T-GREEDY can be invoked for different sets E.

4 Theoretical Bounds

In this section we discuss our main theoretical results. We initially assume that the value f(OPT(k, V \ E)) is known; later, in Section 4.1, we remove this assumption. More detailed versions of our proofs are given in the supplementary material. We begin by stating the main result.

Algorithm 1 STreAming Robust - Thresholding submodular algorithm (STAR-T)
Input: Set V, k, τ, w ∈ ℕ₊
1: B_{i,j} ← ∅ for all 0 ≤ i ≤ ⌈log k⌉ and 1 ≤ j ≤ w⌈k/2^i⌉
2: for each element e in the stream do
3:   for i ← 0 to ⌈log k⌉ do                  ▷ loop over partitions
4:     for j ← 1 to w⌈k/2^i⌉ do               ▷ loop over buckets
5:       if |B_{i,j}| < min{2^i, k} and f(e | B_{i,j}) ≥ τ/min{2^i, k} then
6:         B_{i,j} ← B_{i,j} ∪ {e}
7:         break: proceed to the next element in the stream
8: S ← ⋃_{i,j} B_{i,j}
9: return S

Algorithm 2 STAR-T-GREEDY
Input: Set S, query set E and k
1: Z ← GREEDY(k, S \ E)
2: return Z

Theorem 4.1 Let f be a normalized monotone submodular function defined over the ground set V. Given a cardinality constraint k and parameter m, for a setting of parameters

w ≥ ⌈4⌈log k⌉m/k⌉ and τ = f(OPT(k, V \ E)) / (2 + ((1 − e⁻¹)/(1 − e^{−1/3}))(1 − 1/⌈log k⌉)),

STAR-T performs a single pass over the data set and constructs a set S of at most O((k + m log k) log k) elements. For such a set S and any set E ⊆ V such that |E| ≤ m, STAR-T-GREEDY yields a set Z ⊆ S \ E of size at most k with

f(Z) ≥ c · f(OPT(k, V \ E)), for c = 0.149(1 − 1/⌈log k⌉).

Therefore, as k → ∞, the value of c approaches 0.149.

Proof sketch. We first consider the case when there is a partition i⋆ in S such that at least half of its buckets are full.
We show that there is at least one full bucket B_{i⋆,j} such that f(B_{i⋆,j} \ E) is only a constant factor smaller than f(OPT(k, V \ E)), as long as the threshold τ is set close to f(OPT(k, V \ E)). We make this statement precise in the following lemma:

Lemma 4.2 If there exists a partition in S such that at least half of its buckets are full, then for the set Z produced by STAR-T-GREEDY we have

f(Z) ≥ (1 − e⁻¹)(1 − 4m/(wk)) τ.   (2)

To prove this lemma, we first observe that from the properties of GREEDY it follows that f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e⁻¹) f(B_{i⋆,j} \ E). Now it remains to show that f(B_{i⋆,j} \ E) is close to τ. We observe that for any full bucket B_{i⋆,j}, we have |B_{i⋆,j}| = min{2^{i⋆}, k}, so its objective value f(B_{i⋆,j}) is at least τ (every element added to this bucket increases its objective value by at least τ/min{2^{i⋆}, k}). On average, |B_{i⋆,j} ∩ E| is relatively small, and hence we can show that there exists some full bucket B_{i⋆,j} such that f(B_{i⋆,j} \ E) is close to f(B_{i⋆,j}).

Next, we consider the other case, i.e., when for every partition, more than half of its buckets are not full after the execution of STAR-T. For every partition i, we let B_i denote a bucket that is not fully populated and for which |B_i ∩ E| is minimized over all the buckets of that partition. Then, we look at such a bucket in the last partition: B_{⌈log k⌉}. We provide two lemmas that depend on f(B_{⌈log k⌉}). If τ is set to be small compared to f(OPT(k, V \ E)):

• Lemma 4.3 shows that if f(B_{⌈log k⌉}) is close to f(OPT(k, V \ E)), then our solution is within a constant factor of f(OPT(k, V \ E));
• Lemma 4.4 shows that if f(B_{⌈log k⌉}) is small compared to f(OPT(k, V \ E)), then our solution is again within a constant factor of f(OPT(k, V \ E)).
Lemma 4.3 If there does not exist a partition of S such that at least half of its buckets are full, then for the set Z produced by STAR-T-GREEDY we have

  f(Z) ≥ (1 − e^{−1/3}) ( f(B_{⌈log k⌉}) − (4m/(wk)) τ ),

where B_{⌈log k⌉} is a not-fully-populated bucket in the last partition that minimizes |B_{⌈log k⌉} ∩ E|, and |E| ≤ m.

Using standard properties of submodular functions and of the GREEDY algorithm, we can show that f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e^{−1/3}) ( f(B_{⌈log k⌉}) − (4m/(wk)) τ ). The complete proof of this result can be found in Lemma B.2 in the supplementary material.

Lemma 4.4 If there does not exist a partition of S such that at least half of its buckets are full, then for the set Z produced by STAR-T-GREEDY,

  f(Z) ≥ (1 − e^{−1}) ( f(OPT(k, V \ E)) − f(B_{⌈log k⌉}) − τ ),

where B_{⌈log k⌉} is any not-fully-populated bucket in the last partition.

To prove this lemma, we look at two sets X and Y, where Y contains all the elements of OPT(k, V \ E) that are placed in buckets preceding bucket B_{⌈log k⌉} in S, and X := OPT(k, V \ E) \ Y. By monotonicity and submodularity of f, we bound f(Y) by

  f(Y) ≥ f(OPT(k, V \ E)) − f(X) ≥ f(OPT(k, V \ E)) − f(B_{⌈log k⌉}) − Σ_{e ∈ X} f(e | B_{⌈log k⌉}).

To bound the sum on the right-hand side we use the fact that f(e | B_{⌈log k⌉}) < τ/k for every e ∈ X, which holds because B_{⌈log k⌉} is a bucket in the last partition and is not fully populated. We conclude the proof by showing that f(Z) = f(GREEDY(k, S \ E)) ≥ (1 − e^{−1}) f(Y).

Equipped with the above results, we proceed to prove our main result.

Proof of Theorem 4.1. First, we prove the bound on the size of S:

  |S| = Σ_{i=0}^{⌈log k⌉} w⌈k/2^i⌉ min{2^i, k} ≤ Σ_{i=0}^{⌈log k⌉} w (k/2^i + 1) 2^i ≤ (log k + 5) wk.  (3)

By setting w ≥ ⌈4⌈log k⌉m / k⌉ we obtain |S| = O((k + m log k) log k).

Next, we show the approximation guarantee. We define γ := 4m/(wk), α₁ := 1 − e^{−1/3}, and α₂ := 1 − e^{−1}. Lemmas 4.3 and 4.4 provide two bounds on f(Z), one increasing and one decreasing in f(B_{⌈log k⌉}).
By balancing the two bounds, we derive

  f(Z) ≥ ( α₁α₂ / (α₁ + α₂) ) ( f(OPT(k, V \ E)) − (1 + γ)τ ),  (4)

with equality at f(B_{⌈log k⌉}) = ( α₂ f(OPT(k, V \ E)) − (α₂ − γα₁)τ ) / (α₂ + α₁). Next, since γ ≥ 0, the bound in Eq. (4) is decreasing in τ, while for γ < 1 the bound on f(Z) given by Lemma 4.2 is increasing in τ. Hence, by balancing the two inequalities, we obtain our final bound:

  f(Z) ≥ f(OPT(k, V \ E)) / ( 2/(α₂(1 − γ)) + 1/α₁ ).  (5)

For w ≥ ⌈4⌈log k⌉m / k⌉ we have γ ≤ 1/⌈log k⌉, and hence, by substituting α₁ and α₂ into Eq. (5), we prove our main result:

  f(Z) ≥ [ (1 − e^{−1/3})(1 − e^{−1}) (1 − 1/⌈log k⌉) / ( 2(1 − e^{−1/3}) + (1 − e^{−1}) ) ] f(OPT(k, V \ E))
       ≥ 0.149 (1 − 1/⌈log k⌉) f(OPT(k, V \ E)).  □

4.1 Algorithm without access to f(OPT(k, V \ E))

Algorithm STAR-T requires as input a parameter τ that is a function of the unknown value f(OPT(k, V \ E)). To deal with this shortcoming, we show how to extend the idea of [8] of maintaining multiple parallel instances of our algorithm in order to approximate f(OPT(k, V \ E)). For a given constant ϵ > 0, this approach increases the space by a factor of log_{1+ϵ} k and loses a factor of (1 + ϵ) compared to the guarantee obtained in Theorem 4.1. More precisely, we prove the following theorem.

Theorem 4.5 For any given constant ϵ > 0 there exists a parallel variant of STAR-T that makes one pass over the stream and outputs a collection of sets S of total size O( (k + m log k) log k · log_{1+ϵ} k ) with the following property: there exists a set S ∈ S such that applying STAR-T-GREEDY to S yields a set Z ⊆ S \ E of size at most k with

  f(Z) ≥ (0.149 / (1 + ϵ)) (1 − 1/⌈log k⌉) f(OPT(k, V \ E)).

The proof of this theorem, along with a description of the corresponding algorithm, is provided in Appendix E.

5 Experiments

In this section, we numerically validate the claims outlined in the previous section. Namely, we test the robustness of our algorithm and compare its performance against the SIEVE-STREAMING algorithm, which is allowed to know in advance which elements will be removed.
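As a reference point for the experiments, the two-stage procedure of Algorithms 1 and 2 can be sketched in Python. This is a minimal illustration with our own naming; it assumes the objective f is available as an oracle set function, uses base-2 logarithms, and ignores the parallel guessing of τ from Section 4.1:

```python
import math

def star_t(stream, f, k, tau, w):
    """One pass of STAR-T (Algorithm 1): partitioned buckets of exponentially
    increasing capacity min(2^i, k), filled by a marginal-gain threshold."""
    L = max(1, math.ceil(math.log2(k)))
    buckets = [[set() for _ in range(w * math.ceil(k / 2 ** i))]
               for i in range(L + 1)]
    for e in stream:
        placed = False
        for i in range(L + 1):              # loop over partitions
            cap = min(2 ** i, k)
            for B in buckets[i]:            # loop over buckets
                # threshold on the marginal gain f(e | B) = f(B ∪ {e}) − f(B)
                if len(B) < cap and f(B | {e}) - f(B) >= tau / cap:
                    B.add(e)
                    placed = True
                    break
            if placed:
                break
    return set().union(*(B for part in buckets for B in part))

def greedy(k, ground, f):
    """Classic greedy for monotone submodular maximization under |Z| <= k."""
    Z = set()
    for _ in range(min(k, len(ground))):
        e = max(ground - Z, key=lambda e: f(Z | {e}) - f(Z), default=None)
        if e is None:
            break
        Z.add(e)
    return Z

def star_t_greedy(S, E, k, f):
    """Second stage (Algorithm 2): greedy on the summary minus the removals."""
    return greedy(k, S - E, f)

# sanity check of the constant in Theorem 4.1 (the k -> infinity limit of Eq. (5)):
a1, a2 = 1 - math.exp(-1 / 3), 1 - math.exp(-1)
print(round(1 / (2 / a2 + 1 / a1), 3))  # -> 0.149
```

For instance, with the modular objective f = len, every streamed element clears the threshold τ = 1, so the summary retains the whole stream and the second stage returns any k elements outside E.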
We demonstrate improved or matching performance in two different data summarization applications: (i) the dominating set problem, and (ii) personalized movie recommendation. We illustrate how a single robust summary can be used to regenerate recommendations corresponding to multiple different removals.

5.1 Dominating Set

In the dominating set problem we are given a graph G = (V, M), where V represents the set of nodes and M the set of edges. The objective function is f(Z) = |N(Z) ∪ Z|, where N(Z) denotes the neighborhood of Z (all nodes adjacent to any node of Z). This objective function is monotone and submodular. We consider two datasets: (i) ego-Twitter [19], consisting of 973 social circles from Twitter, which form a directed graph with 81,306 nodes and 1,768,149 edges; (ii) the Amazon product co-purchasing network [20], a directed graph with 317,914 nodes and 1,745,870 edges.

Given the dominating set objective function, we run STAR-T to obtain the robust summary S. We then compare the performance of STAR-T-GREEDY, which runs on S, against the performance of SIEVE-STREAMING, which we allow to know in advance which elements will be removed. We also compare against a method that chooses the same number of elements as STAR-T, but does so uniformly at random from the set of all elements that will not be removed (V \ E); we refer to it as RANDOM. Finally, we also report the performance of STAR-T-SIEVE, a variant of our algorithm that uses the same robust summary S but, instead of running GREEDY in the second stage, runs SIEVE-STREAMING on S \ E.
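For concreteness, the dominating set objective used here is a simple coverage function; the sketch below is our own minimal implementation (the graph is represented as an adjacency dictionary, a choice of ours, not the paper's):

```python
def dominating_set_value(Z, adj):
    """f(Z) = |N(Z) ∪ Z| for a graph given as an adjacency dict
    (node -> set of neighbors). Monotone and submodular: a coverage function."""
    covered = set(Z)
    for v in Z:
        covered |= adj.get(v, set())
    return len(covered)

# toy graph: 0 -> {1, 2}, 2 -> {3}
adj = {0: {1, 2}, 2: {3}}
print(dominating_set_value({0}, adj))     # -> 3 (node 0 covers itself, 1 and 2)
print(dominating_set_value({0, 2}, adj))  # -> 4 (adding node 2 also covers 3)
```

Any of the algorithms above (STAR-T-GREEDY, SIEVE-STREAMING, RANDOM) can be run with this f plugged in as the objective oracle.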
[Figure 2: Numerical comparison of STAR-T-GREEDY, STAR-T-SIEVE and SIEVE-STREAMING (plus RANDOM or GREEDY), plotting average objective value against the cardinality k. Panels: (a) Amazon communities, |E| = k; (b) Amazon communities, |E| = 2k; (c) ego-Twitter, |E| = k; (d) ego-Twitter, |E| = 2k; (e) Movies, already-seen; (f) Movies, by genre.]

Figures 2(a,c) show the objective value after the random removal of k elements from the set S, for different values of k. Note that E is sampled as a subset of the summary of our algorithm, which hurts the performance of our algorithm more than that of the baselines. The reported numbers are averaged over 100 iterations. STAR-T-GREEDY, STAR-T-SIEVE and SIEVE-STREAMING perform comparably (STAR-T-GREEDY slightly outperforms the other two), while RANDOM is significantly worse. In Figures 2(b,d) we plot the objective value for different values of k after the removal of 2k elements from the set S, chosen greedily (i.e., by iteratively removing the element that reduces the objective value the most). Again, STAR-T-GREEDY, STAR-T-SIEVE and SIEVE-STREAMING perform comparably, but this time SIEVE-STREAMING slightly outperforms the other two for some values of k. We observe that even when we remove more than k elements from S, the performance of our algorithm remains comparable to that of SIEVE-STREAMING (which knows in advance which elements will be removed). We provide additional results in the supplementary material.

5.2 Interactive Personalized Movie Recommendation

The next application we consider is personalized movie recommendation.
We use the MovieLens 1M database [21], which contains 1,000,209 ratings of 3,900 movies by 6,040 users. Based on these ratings, we obtain feature vectors for each movie and each user via standard low-rank matrix completion techniques [22]; we choose the number of features to be 30. For a user u, we use the following monotone submodular function to recommend a set of movies Z:

  f_u(Z) = (1 − α) · Σ_{z ∈ Z} ⟨v_u, v_z⟩ + α · Σ_{m ∈ M} max_{z ∈ Z} ⟨v_m, v_z⟩.

The first term aggregates the predicted scores of the chosen movies z ∈ Z for the user u (here v_u and v_z are non-normalized feature vectors of user u and movie z, respectively). The second term is a facility-location objective that measures how well the set Z covers the set of all movies M [4]. Finally, α is a user-dependent parameter that specifies the importance of global movie coverage versus high scores of individual movies.

Here, the robust setting arises naturally because we do not have complete information about the user: when shown a collection of top movies, it will likely turn out that the user has watched (but not rated) many of them, rendering these recommendations moot. In such an interactive setting, the user may also require (or exclude) movies of a specific genre, or movies similar to some favorite movie.

We compare the performance of our algorithms STAR-T-GREEDY and STAR-T-SIEVE in such scenarios against two baselines: GREEDY and SIEVE-STREAMING (both run on the set V \ E, i.e., knowing the removed elements in advance). Note that in this case we are able to afford running GREEDY, which may be infeasible when working with larger datasets. Below we discuss two concrete practical scenarios featured in our experiments.

Movies by genre. After we have built our summary S, the user decides to watch a drama today, so we retrieve only movies of this genre from S. This corresponds to removing 59% of the universe V.
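The recommendation objective f_u above can be written in a few lines of numpy; this is our own sketch (function and variable names are ours), with monotonicity of the first term implicitly assuming non-negative predicted scores:

```python
import numpy as np

def movie_objective(Z, v_user, V_movies, alpha):
    """f_u(Z) = (1-alpha) * sum_{z in Z} <v_u, v_z>
              + alpha * sum_{m in M} max_{z in Z} <v_m, v_z>.
    Z: iterable of movie indices; V_movies: (|M|, d) movie feature matrix."""
    Z = list(Z)
    if not Z:
        return 0.0
    VZ = V_movies[Z]                                       # (|Z|, d)
    relevance = float((VZ @ v_user).sum())                 # predicted user scores
    coverage = float((V_movies @ VZ.T).max(axis=1).sum())  # facility location
    return (1 - alpha) * relevance + alpha * coverage

V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy movie features
u = np.array([1.0, 0.0])                            # toy user vector
print(movie_objective([0], u, V, 0.5))              # -> 1.5
```

The second (coverage) term grows whenever a new movie becomes the best representative of some other movie, which is what pushes the recommended set toward diversity.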
In Figure 2(f) we report the quality of our output compared to the baselines (for user ID 445 and α = 0.95) for different values of k. The performance of STAR-T-GREEDY is within a few percent of the performance of GREEDY (which we may regard as a tractable optimum), and the two sieve-based methods, STAR-T-SIEVE and SIEVE-STREAMING, display similar objective values.

Already-seen movies. We randomly sample a set E of movies already watched by the user (500 out of all 3,900 movies). To obtain a realistic subset, each movie is sampled with probability proportional to its popularity (number of ratings). Figure 2(e) shows the performance of our algorithm when faced with the removal of E (user ID 445, α = 0.9) for a range of settings of k. Again, our algorithm almost matches the objective values of GREEDY (which is aware of E in advance).

Recall that we are able to use the same precomputed summary S for different removed sets E. This summary was built with parameter w = 1, which theoretically allows for up to k removals. However, despite having |E| ≫ k in the above scenarios, our performance remains robust; this indicates that our method is more resilient in practice than the proven bound alone would guarantee.

6 Conclusion

We have presented a new robust submodular streaming algorithm, STAR-T, based on a novel partitioning structure and an exponentially decreasing thresholding rule. It makes one pass over the data and retains a set of size O((k + m log k) log k). We have further shown that after the removal of any m elements, a simple greedy algorithm run on the obtained set achieves a constant-factor approximation guarantee for robust submodular function maximization. In addition, we have presented two numerical studies in which our method compares favorably against the SIEVE-STREAMING algorithm, which knows in advance which elements will be removed.

Acknowledgment.
IB and VC’s work was supported in part by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 725594), in part by the Swiss National Science Foundation (SNF), project 407540_167319/1, in part by the NCCR MARVEL, funded by the Swiss National Science Foundation, in part by the Hasler Foundation Switzerland under grant agreement number 16066, and in part by the Office of Naval Research (ONR) under grant agreement number N00014-16-R-BA01. JT’s work was supported by ERC Starting Grant 335288-OptApprox.

References

[1] S. Tschiatschek, R. K. Iyer, H. Wei, and J. A. Bilmes, “Learning mixtures of submodular functions for image collection summarization,” in Advances in Neural Information Processing Systems, 2014, pp. 1413–1421.
[2] H. Lin and J. Bilmes, “A class of submodular functions for document summarization,” in Assoc. for Comp. Ling.: Human Language Technologies, Volume 1, 2011.
[3] D. Kempe, J. Kleinberg, and É. Tardos, “Maximizing the spread of influence through a social network,” in Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), 2003.
[4] E. Lindgren, S. Wu, and A. G. Dimakis, “Leveraging sparsity for efficient submodular data summarization,” in Advances in Neural Information Processing Systems, 2016, pp. 3414–3422.
[5] A. Krause and R. G. Gomes, “Budgeted nonparametric learning from data streams,” in ICML, 2010, pp. 391–398.
[6] K. El-Arini and C. Guestrin, “Beyond keyword search: Discovering relevant scientific literature,” in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2011, pp. 439–447.
[7] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions—I,” Mathematical Programming, vol. 14, no. 1, pp. 265–294, 1978.
[8] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A.
Krause, “Streaming submodular maximization: Massive data summarization on the fly,” in Proceedings of the 20th ACM SIGKDD. ACM, 2014, pp. 671–680.
[9] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta, “Robust submodular observation selection,” Journal of Machine Learning Research, vol. 9, pp. 2761–2801, 2008.
[10] J. B. Orlin, A. S. Schulz, and R. Udwani, “Robust monotone submodular function maximization,” in Int. Conf. on Integer Programming and Combinatorial Opt. (IPCO). Springer, 2016.
[11] I. Bogunovic, S. Mitrović, J. Scarlett, and V. Cevher, “Robust submodular maximization: A non-uniform partitioning approach,” in Int. Conf. Mach. Learn. (ICML), 2017.
[12] R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani, “Fast greedy algorithms in MapReduce and streaming,” ACM Transactions on Parallel Computing, vol. 2, no. 3, p. 14, 2015.
[13] A. Norouzi-Fard, A. Bazzi, I. Bogunovic, M. El Halabi, Y.-P. Hsieh, and V. Cevher, “An efficient streaming algorithm for the submodular cover problem,” in Adv. Neur. Inf. Proc. Sys. (NIPS), 2016.
[14] W. Chen, T. Lin, Z. Tan, M. Zhao, and X. Zhou, “Robust influence maximization,” in Proceedings of the ACM SIGKDD, 2016, p. 795.
[15] M. Staib and S. Jegelka, “Robust budget allocation via continuous submodular functions,” in Int. Conf. Mach. Learn. (ICML), 2017.
[16] D. Golovin and A. Krause, “Adaptive submodularity: Theory and applications in active learning and stochastic optimization,” Journal of Artificial Intelligence Research, vol. 42, 2011.
[17] A. Guillory and J. Bilmes, “Interactive submodular set cover,” arXiv preprint arXiv:1002.3345, 2010.
[18] B. Mirzasoleiman, A. Karbasi, and A. Krause, “Deletion-robust submodular maximization: Data summarization with ‘the right to be forgotten’,” in International Conference on Machine Learning, 2017, pp. 2449–2458.
[19] J. McAuley and J. Leskovec, “Discovering social circles in ego networks,” ACM Trans. Knowl. Discov. Data, 2014.
[20] J. Yang and J.
Leskovec, “Defining and evaluating network communities based on ground-truth,” Knowledge and Information Systems, vol. 42, no. 1, pp. 181–213, 2015.
[21] F. M. Harper and J. A. Konstan, “The MovieLens datasets: History and context,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, no. 4, p. 19, 2016.
[22] O. Troyanskaya, M. Cantor, G. Sherlock, P. Brown, T. Hastie, R. Tibshirani, D. Botstein, and R. B. Altman, “Missing value estimation methods for DNA microarrays,” Bioinformatics, vol. 17, no. 6, pp. 520–525, 2001.
Non-parametric Structured Output Networks

Andreas M. Lehrmann
Disney Research
Pittsburgh, PA 15213
andreas.lehrmann@disneyresearch.com

Leonid Sigal
Disney Research
Pittsburgh, PA 15213
lsigal@disneyresearch.com

Abstract

Deep neural networks (DNNs) and probabilistic graphical models (PGMs) are the two main tools for statistical modeling. While DNNs provide the ability to model rich and complex relationships between input and output variables, PGMs provide the ability to encode dependencies among the output variables themselves. End-to-end training methods for models with structured graphical dependencies on top of neural predictions have recently emerged as a principled way of combining these two paradigms. While these models have proven to be powerful in discriminative settings with discrete outputs, extensions to structured continuous spaces, as well as performing efficient inference in these spaces, are lacking. We propose non-parametric structured output networks (NSON), a modular approach that cleanly separates a non-parametric, structured posterior representation from a discriminative inference scheme but allows joint end-to-end training of both components. Our experiments evaluate the ability of NSONs to capture structured posterior densities (modeling) and to compute complex statistics of those densities (inference). We compare our model to output spaces of varying expressiveness and popular variational and sampling-based inference algorithms.

1 Introduction

In recent years, deep neural networks have led to tremendous progress in domains such as image classification [1, 2] and segmentation [3], object detection [4, 5] and natural language processing [6, 7]. These achievements can be attributed to their hierarchical feature representation, the development of effective regularization techniques [8, 9] and the availability of large amounts of training data [10, 11].
While a lot of effort has been spent on identifying optimal network structures and training schemes to enable these advances, the expressiveness of the output space has not evolved at the same rate. Indeed, it is striking that most neural architectures model categorical posterior distributions that do not incorporate any structural assumptions about the underlying task; they are discrete and global (Figure 1a). However, many tasks are naturally formulated as structured problems or would benefit from continuous representations due to their high cardinality. In those cases, it is desirable to learn an expressive posterior density reflecting the dependencies in the underlying task. As a simple example, consider a stripe of n noisy pixels in a natural image. If we want to learn a neural network that encodes the posterior distribution p_θ(y | x) of the clean output y given the noisy input x, we must ensure that p_θ is expressive enough to represent potentially complex noise distributions and structured enough to avoid modeling spurious dependencies between the variables.

Probabilistic graphical models [12], such as Bayesian networks or Markov random fields, have a long history in machine learning and provide principled frameworks for such structured data. It is therefore natural to use their factored representations as a means of enforcing structure in a deep neural network. While initial results along this line of research have been promising [13, 14], they focus exclusively on the discrete case and/or mean-field inference. Instead, we propose a deep neural network that encodes a non-parametric posterior density that factorizes over a graph (Figure 1b). We perform recurrent inference inspired by message-passing in this structured output space and show how to learn all components end-to-end.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 diagram: (a) a standard CNN (convolution, ReLU, pooling, FC, dropout) maps an input x to a categorical posterior p_θ(Y = y) with θ = {p_i}_{i=1}^{|Y|}; (b) a deep neural network predicts joint parameters θ̃ = {w̃, μ̃, B̃} of a non-parametric graphical model over nodes Y₁, …, Y_n, followed by a conditioning operation τ(·, y) and a recurrent inference network.]

Figure 1: Overview: Non-parametric Structured Output Networks. (a) Traditional Neural Network: discrete, global, parametric. Traditional neural networks use a series of convolution and inner-product modules to predict a discrete posterior without graphical structure (e.g., VGG [15]). [grey ≙ optional] (b) Non-parametric Structured Output Network: continuous, structured, non-parametric. Non-parametric structured output networks use a deep neural network to predict a non-parametric graphical model p_{θ(x)}(y) (NGM) that factorizes over a graph. A recurrent inference network (RIN) computes statistics t[p_{θ(x)}(y)] from this structured output density. At training time, we propagate stochastic gradients from both NGM and RIN back to the inputs.

1.1 Related Work

Our framework builds upon elements from neural networks, structured models, non-parametric statistics, and approximate inference. We first present prior work on structured neural networks and then discuss the relevant literature on approximate non-parametric inference.

1.1.1 Structured Neural Networks

Structured neural networks combine the expressive representations of deep neural networks with the structured dependencies of probabilistic graphical models. Early attempts to combine both frameworks used high-level features from neural networks (e.g., fc7) to obtain fixed unary potentials for a graphical model [18].
More recently, statistical models and their associated inference tasks have been reinterpreted as (layers in) neural networks, which has allowed true end-to-end training and blurred the line between the two paradigms: [13, 14] express the classic mean-field update equations as a series of layers in a recurrent neural network (RNN). Structure inference machines [17] use an RNN to simulate message-passing in a graphical model with soft edges for activity recognition. A full backward pass through loopy BP was proposed in [19]. The structural-RNN [16] models all node and edge potentials in a spatio-temporal factor graph as RNNs that are shared among groups of nodes/edges with similar semantics. Table 1 summarizes some important properties of these methods. Notably, all output spaces except for the non-probabilistic work [16] are discrete.

Table 1: Output Space Properties Across Models. [MF: mean-field; MP: message passing; D: direct; ‘−’: not applicable]

  Output Space           | VGG [15] | MRF-RNN [14] | Structural-RNN [16] | Structure Inf. Machines [17] | Deep Structured Models [13] | NSON (ours)
  Continuous             |    ✗     |      ✗       |          ✓          |              ✗               |              ✗              |      ✓
  − Non-parametric       |    −     |      −       |          ✗          |              −               |              −              |      ✓
  Structured             |    ✗     |      ✓       |          ✓          |              ✓               |              ✓              |      ✓
  − End-to-end Training  |    −     |      ✓       |          ✓          |              ✗               |              ✓              |      ✓
  Prob. Inference        |    D     |      MF      |          ✗          |              MP              |              MF             |      MP
  Posterior Sampling     |    ✓     |      ✗       |          ✗          |              ✓               |              ✗              |      ✓

1.1.2 Inference in Structured Neural Networks

In contrast to a discrete and global posterior, which allows inference of common statistics (e.g., its mode) in linear time, expressive output spaces, as in Figure 1b, require message-passing schemes [20] to propagate and aggregate information. Local potentials outside of the exponential family, such as non-parametric distributions, lead to intractable message updates, so one needs to resort to approximate inference methods, which include the following two popular groups:

Variational Inference.
Variational methods, such as mean-field and its structured variants [12], approximate an intractable target distribution with a tractable variational distribution by maximizing the evidence lower bound (ELBO). Stochastic extensions allow the use of this technique even on large datasets [21]. If the model is not in the conjugate-exponential family [22], as is the case for non-parametric graphical models, black box methods must be used to approximate an intractable expectation in the ELBO [23]. For fully-connected graphs with Gaussian pairwise potentials, the dense-CRF model [24] proposes an efficient way to perform the variational updates using the permutohedral lattice [25]. For general edge potentials, [26] proposes a density estimation technique that allows the use of non-parametric edge potentials. Sampling-based Inference. This group of methods employs (sets of) samples to approximate intractable operations when computing message updates. Early works use iterative refinements of approximate clique potentials in junction trees [27]. Non-parametric belief propagation (NBP) [28, 29] represents each message as a kernel density estimate and uses Gibbs sampling for propagation. Particle belief propagation [30] represents each message as a set of samples drawn from an approximation to the receiving node’s marginal, effectively circumventing the kernel smoothing required in NBP. Diverse particle selection [31] keeps a diverse set of hypothesized solutions at each node that pass through an iterative augmentation-update-selection scheme that preserves message values. Finally, a mean shift density approximation has been used as an alternative to sampling in [32]. 1.2 Contributions Our NSON model is inspired by the structured neural architectures (Section 1.1.1). However, in contrast to those approaches, we model structured dependencies on top of expressive non-parametric densities. 
In doing so, we build an inference network that computes statistics of these non-parametric output densities, thereby replacing the need for more conventional inference (Section 1.1.2). In particular, we make the following contributions: (1) We propose non-parametric structured output networks, a novel approach combining the predictive power of deep neural networks with the structured representation and multimodal flexibility of non-parametric graphical models; (2) we show how to train the resulting output density together with recurrent inference modules in an end-to-end way; (3) we compare non-parametric structured output networks to a variety of alternative output densities and demonstrate superior performance of the inference module in comparison to variational and sampling-based approaches.

2 Non-parametric Structured Output Networks

Traditional neural networks (Figure 1a; [15]) encode a discrete posterior distribution by predicting an input-conditioned parameter vector θ̃(x) of a categorical distribution, i.e., Y | X = x ∼ p_{θ̃(x)}. Non-parametric structured output networks (Figure 1b) do the same, except that θ̃(x) parameterizes a continuous graphical model with non-parametric potentials. The model consists of three components: a deep neural network (DNN), a non-parametric graphical model (NGM), and a recurrent inference network (RIN). While the DNN and the NGM together encode a structured posterior (the model), the RIN computes complex statistics in this output space (the inference). At a high level, the DNN, conditioned on an input x, predicts the parameters θ̃ = {θ̃_{i,pa(i)}} (e.g., kernel weights, centers, and bandwidths) of local non-parametric distributions over each node and its parents according to the NGM’s graph structure (Figure 1b).
Using a function τ, these local joint distributions are then transformed into conditional distributions parameterized by θ = {θ_{i|pa(i)}} (e.g., through a closed-form conditioning operation) and assembled into a structured joint density p_{θ(x)}(y) with the conditional (in)dependencies prescribed by the graphical model. The parameters of the DNN are optimized with respect to a maximum-likelihood loss L_M. Simultaneously, a recurrent inference network (detailed in Figure 2), which takes θ̃ as input, is trained to compute statistics of the structured distribution (e.g., marginals) using a separate inference loss L_I. The following two paragraphs discuss these elements in more detail.

Model (DNN+NGM). The DNN is parameterized by a weight vector λ_M and encodes a function from a generic input space X to a Cartesian parameter space Θⁿ,

  x ↦ θ̃(x) = ( θ̃_{i,pa(i)}(x) )_{i=1}^n,  (1)

each of whose components models a joint kernel density (Y_i, pa(Y_i)) ∼ p_{θ̃_{i,pa(i)}(x)} and thus, implicitly, the local conditional distribution Y_i | pa(Y_i) ∼ p_{θ_{i|pa(i)}(x)} of a non-parametric graphical model

  p_{θ(x)}(y) = ∏_{i=1}^n p_{θ_{i|pa(i)}(x)}(y_i | pa(y_i))  (2)

over a structured output space Y with directed acyclic graph G = (Y, E). Here, pa(·) denotes the set of parent nodes w.r.t. G, which we fix in advance based on prior knowledge or structure learning [12].
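The conditioning operation τ described above can be made concrete for Gaussian kernels. The sketch below is our own minimal illustration, restricted to diagonal bandwidths (variances): each kernel of the joint KDE over (Y, Y′) is reweighted by its density at the observed parent value Y′ = y′, while its Y-part center and bandwidth are kept unchanged:

```python
import numpy as np

def condition_kde(w_joint, mu_joint, var_joint, y_prime, d_y):
    """Condition a Gaussian-kernel joint KDE over (Y, Y') on Y' = y'.
    mu_joint, var_joint: (N, d_y + d_yp) kernel centers and diagonal variances;
    the first d_y columns belong to Y, the rest to the parents Y'."""
    mu_y, mu_yp = mu_joint[:, :d_y], mu_joint[:, d_y:]
    var_y, var_yp = var_joint[:, :d_y], var_joint[:, d_y:]
    # log-density of each kernel's Y'-marginal at the conditioning value y'
    log_dens = -0.5 * (((y_prime - mu_yp) ** 2) / var_yp
                       + np.log(2 * np.pi * var_yp)).sum(axis=1)
    w = w_joint * np.exp(log_dens - log_dens.max())  # stable reweighting
    return w / w.sum(), mu_y, var_y                  # conditional mixture

# two kernels; the second one sits far from the conditioning value y' = 0
w, mu_y, var_y = condition_kde(
    np.array([0.5, 0.5]),
    np.array([[0.0, 0.0], [5.0, 10.0]]),  # columns: (y-center, y'-center)
    np.ones((2, 2)),
    np.array([0.0]), d_y=1)
print(w[0] > 0.99)  # -> True: the far kernel is suppressed
```

The returned triple (w, mu_y, var_y) is again a valid mixture on the simplex, which is what allows the same kernel machinery to be reused node by node along the graph.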
The conditional density of a node Y = Y_i with parents Y′ = pa(Y_i) and parameters θ = θ_{i|pa(i)}(x) is thus given by¹

  p_θ(y | y′) = Σ_{j=1}^N w^{(j)} · |B^{(j)}|^{−1} κ( B^{(−j)} (y − μ^{(j)}) ),  (3)

where the differentiable kernel κ(u) = ∏_i q(u_i) is defined in terms of a symmetric, zero-mean density q with positive variance, and the conditional parameters θ = (w, μ, B) ∈ Θ correspond to the full set of kernel weights, kernel centers, and kernel bandwidth matrices, respectively.²

The functional relationship between θ and its joint counterpart θ̃ = θ̃_{i,pa(i)}(x) is mediated by a kernel-dependent conditioning operation τ(θ̃) = τ(w̃, μ̃, B̃) = θ, which can be computed in closed form for a wide range of kernels, including Gaussian, cosine, logistic, and other kernels with sigmoidal CDF. In particular, for the block decompositions B̃^{(j)} = [ B̃_y^{(j)}, 0 ; 0, B̃_{y′}^{(j)} ] and μ̃^{(j)} = [ μ̃_y^{(j)} ; μ̃_{y′}^{(j)} ], we obtain, for 1 ≤ j ≤ N,

  τ(θ̃) = θ:  w^{(j)} ∝ w̃^{(j)} · |B̃_{y′}^{(j)}|^{−1} κ( B̃_{y′}^{(−j)} (y′ − μ̃_{y′}^{(j)}) ),  μ^{(j)} = μ̃_y^{(j)},  B^{(j)} = B̃_y^{(j)}.  (4)

See Appendix A.1 for a detailed derivation. We refer to the structured posterior density in Eq. (2) with the non-parametric local potentials in Eq. (3) as a non-parametric structured output network.

Given an output training set D_Y = {y^{(i)} ∈ Y}_{i=1}^{N′}, traditional kernel density estimation [33] can be viewed as an extreme special case of this architecture in which the discriminative, trainable DNN is replaced with a generative, closed-form estimator and n := 1 (no structure), N := N′ (#kernels = #training points), w^{(i)} := (N′)^{−1} (uniform weights), B^{(i)} := B^{(0)} (shared covariance), and μ^{(i)} := y^{(i)} (fixed centers). When learning λ_M from data, we can easily enforce some or all of these restrictions in our model (see Section 5), but Section 3 provides all necessary derivations for the more general case shown above.

Inference (RIN).
In contrast to traditional classification networks with a discrete label posterior, non-parametric structured output networks encode a complex density with rich statistics. We employ a recurrent inference network with parameters λ_I to compute such statistics t from the predicted parameters θ̃(x) ∈ Θⁿ,

  θ̃(x) ↦ t[p_{θ(x)}].  (5)

Similar to conditional graphical models, the underlying assumption is that the input-conditioned density p_{θ(x)} contains all information about the semantic entities of interest and that we can infer whichever statistic we are interested in from it. A popular example is a summary statistic,

  t[p_{θ(x)}](y_i) = op_{y\y_i} p_{θ(x)}(y) d(y\y_i),  (6)

which is known as sum-product BP (op = ∫; computing marginals) and max-product BP (op = max; computing max-marginals). Note, however, that we can attach recurrent inference networks corresponding to arbitrary tasks to this meta representation. Section 4 discusses the necessary details.

¹ We write B^{(−j)} := (B^{(j)})^{−1} and B^{−⊤} := (B^{−1})^⊤ to avoid double superscripts.
² Note that θ represents the parameters of a specific node; different nodes may have different parameters.

3 Learning Structured Densities using Non-Parametric Back-Propagation

The previous section introduced the model and inference components of a non-parametric structured output network. We now describe how to learn the model (DNN+NGM) from a supervised training set (x^{(i)}, y^{(i)}) ∼ p_D.

3.1 Likelihood Loss

We write θ(x; λ_M) = τ(θ̃(x; λ_M)) to refer explicitly to the weights λ_M of the deep neural network predicting the non-parametric graphical model (Eq. (1)). Since the parameters of p_{θ(x)} are deterministic predictions from the input x, the only free and learnable parameters are the components of λ_M.
We train the DNN via empirical risk minimization with a negative log-likelihood loss L_M,

λ*_M = argmin_{λ_M} E_{(x,y)∼p̂_D}[L_M(θ(x; λ_M), y)] = argmax_{λ_M} E_{(x,y)∼p̂_D}[log p_{θ(x;λ_M)}(y)],   (7)

where p̂_D refers to the empirical distribution and the expectation in Eq. (7) is taken over the factorization in Eq. (2) and the local distributions in Eq. (3). Note the similarities and differences between a non-parametric structured output network and a non-parametric graphical model with unary potentials from a neural network: both model classes describe a structured posterior. However, while the unaries in the latter perform a reweighting of the potentials, a non-parametric structured output network predicts those potentials directly and allows joint optimization of its DNN and NGM components by back-propagating the structured loss first through the nodes of the graphical model and then through the layers of the neural network all the way back to the input.

3.2 Topological Non-parametric Gradients

We optimize Eq. (7) via stochastic gradient descent of the loss L_M w.r.t. the deep neural network weights λ_M using Adam [34]. Importantly, the gradients ∇_{λ_M} L_M(θ(x; λ_M), y) decompose into a factor from the deep neural network and a factor from the non-parametric graphical model,

∇_{λ_M} L_M(θ(x; λ_M), y) = ∂ log p_{θ(x;λ_M)}(y) / ∂θ̃(x; λ_M) · ∂θ̃(x; λ_M) / ∂λ_M,   (8)

where the partial derivatives of the second factor can be obtained via standard back-propagation and the first factor decomposes according to the graphical model's graph structure G,

∂ log p_{θ(x;λ_M)}(y) / ∂θ̃(x; λ_M) = Σ_{i=1}^{n} ∂ log p_{θ_{i|pa(i)}(x;λ_M)}(y_i | pa(y_i)) / ∂θ̃(x; λ_M).   (9)

The gradient of a local model w.r.t. the joint parameters θ̃(x; λ_M) is given by two factors accounting for the gradient w.r.t.
the conditional parameters and the Jacobian of the conditioning operation,

∂ log p_{θ_{i|pa(i)}(x;λ_M)}(y_i | pa(y_i)) / ∂θ̃(x; λ_M) = ∂ log p_{θ_{i|pa(i)}(x;λ_M)}(y_i | pa(y_i)) / ∂θ(x; λ_M) · ∂θ(x; λ_M) / ∂θ̃(x; λ_M).   (10)

Note that the Jacobian takes a block-diagonal form, because θ = θ_{i|pa(i)}(x; λ_M) is independent of θ̃ = θ̃_{j,pa(j)}(x; λ_M) for i ≠ j. Each block constitutes the backward pass through a node Y_i's conditioning operation,

∂θ/∂θ̃ = ∂(w, μ, B)/∂(w̃, μ̃, B̃) = [ ∂w/∂w̃  ∂w/∂μ̃  ∂w/∂B̃ ;  0  ∂μ/∂μ̃  0 ;  0  0  ∂B/∂B̃ ],   (11)

where the individual entries are given by the derivatives of Eq. (4), e.g.,

∂w/∂w̃ = (−w ⊗ w + diag(w)) · diag(w̃)^{−1}.   (12)

Similar equations exist for the derivatives of the weights w.r.t. the kernel locations and kernel bandwidth matrices; the remaining cases are simple projections. In practice, we may be able to group the potentials p_{θ_{i|pa(i)}} according to their semantic meaning, in which case we can train one potential per group instead of one potential per node by sharing the corresponding parameters in Eq. (9). All topological operations can be implemented as separate layers in a deep neural network and the corresponding gradients can be obtained using automatic differentiation.

3.3 Distributional Non-parametric Gradients

We have shown how the gradient of the loss factorizes over the graph of the output space. Next, we will provide the gradients of those local factors log p_θ(y | y') (Eq. (3)) w.r.t. the local parameters θ = θ_{i|pa(i)}. To reduce notational clutter, we introduce the shorthand ŷ^{(k)} := B^{(−k)}(y − μ^{(k)}) to refer to the normalized input and provide only final results; detailed derivations for all gradients and worked-out examples for specific kernels can be found in Appendix A.2.

Kernel Weights.

∇_w log p_θ(y | y') = η / (w^⊤η),  η := (|B^{(−k)}| κ(ŷ^{(k)}))_{k=1}^{N}.   (13)

Note that w is required to lie on the standard (N−1)-simplex Δ^{(N−1)}.
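One way to enforce this simplex constraint is the Euclidean projection of [35]. A short, self-contained sketch (function name and vector layout are ours, not the paper's):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex [35]:
    pi(v)_i = max(0, v_i + u), with u the unique shift such that the
    positive entries sum to 1."""
    s = np.sort(v)[::-1]                 # sorted descending
    css = np.cumsum(s)
    # largest index rho with s_rho + (1 - css_rho)/rho > 0 (1-based rho)
    rho = np.nonzero(s + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    u = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + u, 0.0)
```

For example, projecting (0.5, 0.1) shifts both entries up by 0.2 to obtain (0.7, 0.3), while negative coordinates are clipped to zero.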
Different normalizations are possible, including a softmax or a projection onto the simplex, i.e., π_{Δ^{(N−1)}}(w)^{(i)} = max(0, w^{(i)} + u), where u is the unique translation such that the positive points sum to 1 [35].

Kernel Centers.

∇_μ log p_θ(y | y') = (w ⊙ β) / (w^⊤η),  β := (−B^{(−k)⊤} / |B^{(k)}| · ∂κ(ŷ^{(k)})/∂ŷ^{(k)})_{k=1}^{N}.   (14)

The kernel centers do not underlie any spatial restrictions, but proper initialization is important. Typically, we use the centers of a k-means clustering with k := N to initialize the kernel centers.

Kernel Bandwidth Matrices.

∇_B log p_θ(y | y') = (w ⊙ γ) / (w^⊤η),  γ := (−B^{(−k)⊤} / |B^{(k)}| · (κ(ŷ^{(k)}) + ∂κ(ŷ^{(k)})/∂ŷ^{(k)} · ŷ^{(k)⊤}))_{k=1}^{N}.   (15)

While computation of the gradient w.r.t. B is a universal approach, specific kernels may allow alternative gradients: in a Gaussian kernel, for instance, the Gramian of the bandwidth matrix acts as a covariance matrix. We can thus optimize B^{(k)}B^{(k)⊤} in the interior of the cone of positive-semidefinite matrices by computing the gradients w.r.t. the Cholesky factor of the inverse covariance matrix.

4 Inferring Complex Statistics using Neural Belief Propagation

The previous sections introduced non-parametric structured output networks and showed how their components, DNN and NGM, can be learned from data. Since the resulting posterior density p_{θ(x)}(y) (Eq. (2)) factorizes over a graph, we can, in theory, use local messages to propagate beliefs about statistics t[p_{θ(x)}(y)] along its edges (BP; [20]). However, special care must be taken to handle intractable operations caused by non-parametric local potentials and to allow an end-to-end integration. For ease of exposition, we assume that we can represent the local conditional distributions as a set of pairwise potentials {φ(y_i, y_j)}, effectively converting our directed model to a normalized MRF. This is not limiting, as we can always convert a factor graph representation of Eq. (2) into an equivalent pairwise MRF [36].
In this setting, a BP message μ_{i→j}(y_j) from Y_i to Y_j takes the form

μ_{i→j}(y_j) = op_{y_i} φ(y_i, y_j) · μ_{·→i}(y_i),   (16)

where the operator op_y computes a summary statistic, such as integration or maximization, and μ_{·→i}(y_i) is the product of all incoming messages at Y_i. In the case of a graphical model with non-parametric local distributions (Eq. (3)), this computation is not feasible for two reasons: (1) the pre-messages μ_{·→i}(y_i) are products of sums, which means that the number of kernels grows exponentially in the number of incoming messages; (2) the functional op_y does not usually have an analytic form.

[Figure 2: Inferring Complex Statistics. Expressive output spaces require explicit inference procedures to obtain posterior statistics. We use an inference network inspired by message-passing schemes in non-parametric graphical models. (a) Recurrent Inference Network: an RNN iteratively computes outgoing messages from incoming messages and the local potential. (b) Partially Unrolled Inference Network: illustration of the computation of μ̂_{2→4} in the graph shown in Figure 1b.]

Inspired by recent results in imitation learning [37] and inference machines for classification [17, 38], we take an alternate route and use an RNN to model the exchange of information between non-parametric nodes. In particular, we introduce an RNN node μ̂_{i→j} for each message and connect them in time according to Eq. (16), i.e., each node has incoming connections from its local potential θ̃_{ij}, predicted by the DNN, and the nodes {μ̂_{k→i} : k ∈ ne_G(i)∖j}, which correspond to the incoming messages.
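Concretely, a single such node's update can be sketched as one fully-connected layer with a ReLU over the stacked inputs. This is an illustrative NumPy sketch with made-up shapes and weights, not the trained network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def neural_message(theta_ij, incoming, W, bias):
    """One approximate message: ReLU(FC(Stack(theta~_ij, {mu_hat_k->i}))).

    theta_ij : (d_t,) flattened local potential parameters
    incoming : list of K incoming message vectors, each (d_m,)
    W, bias  : FC weights for this (i -> j) edge,
               shapes (d_m, d_t + K*d_m) and (d_m,)
    """
    h = np.concatenate([theta_ij] + list(incoming))  # the "Stacking" step
    return relu(W @ h + bias)
```

Running the update for a fixed number of iterations, with one such node per directed edge, mirrors a synchronous round of (loopy) message passing.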
The message computation itself is approximated through an FC+ReLU layer with weights λ_I^{i→j}. An approximate message μ̂_{i→j} from Y_i to Y_j can thus be written as

μ̂_{i→j} = ReLU(FC_{λ_I^{i→j}}(Stacking(θ̃_{ij}, {μ̂_{k→i} : k ∈ ne_G(i)∖j}))),   (17)

where ne_G(·) returns the neighbors of a node in G. The final beliefs b̂_i = μ̂_{·→i} · μ̂_{i→j} can be implemented analogously. Similar to (loopy) belief updates in traditional message passing, we run the RNN for a fixed number of iterations, at each step passing all neural messages. Furthermore, using the techniques discussed in Section 3.3, we can ensure that the messages are valid non-parametric distributions. All layers in this recurrent inference network are differentiable, so that we can propagate a decomposable inference loss L_I = Σ_{i=1}^{n} L_I^{(i)} end-to-end back to the inputs. In practice, we find that generic loss functions work well (see Section 5) and that canonical loss functions can often be obtained directly from the statistic. The DNN weights λ_M are thus updated so as to both predict the right posterior density and, together with the RIN weights λ_I, perform correct inference in it (Figure 2).

5 Experiments

We validate non-parametric structured output networks at both the model (DNN+NGM) and the inference level (RIN). Model validation consists of a comparison to baselines along two binary axes, structuredness and non-parametricity. Inference validation compares our RIN unit to the two predominant groups of approaches for inference in structured non-parametric densities, i.e., sampling-based and variational inference (Section 1.1.2).

5.1 Dataset

We test our approach on simple natural pixel statistics from Microsoft COCO [11] by sampling stripes y = (y_i)_{i=1}^{n} ∈ [0, 255]^n of n = 10 pixels. Each pixel y_i is corrupted by a linear noise model, leading to the observable output x_i = β · y_i + ε, with ε ∼ N(255 · δ_{β,−1}, σ²) and β ∼ Ber(·), where the target space of the Bernoulli trial is {−1, +1}.
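This corruption process can be sketched directly, using the settings reported in the experiments (σ² = 100, Bernoulli parameter 0.5). Note that the excerpt does not pin down whether β is drawn per pixel or per stripe; the sketch below draws it once per stripe, and all names are ours:

```python
import numpy as np

def corrupt(y, sigma2=100.0, p=0.5, rng=None):
    """x_i = beta * y_i + eps with eps ~ N(255 * delta_{beta,-1}, sigma2)
    and beta ~ Ber(p) over {-1, +1}; beta is drawn once per stripe here."""
    rng = rng if rng is not None else np.random.default_rng()
    beta = 1 if rng.random() < p else -1
    mean = 255.0 if beta == -1 else 0.0       # delta_{beta,-1} offset
    eps = rng.normal(mean, np.sqrt(sigma2), size=np.shape(y))
    return beta * np.asarray(y, dtype=float) + eps

stripe = np.linspace(0.0, 255.0, 10)          # one stripe of n = 10 pixels
x = corrupt(stripe, rng=np.random.default_rng(0))
```

The 255-offset for β = −1 keeps the corrupted values roughly within the original pixel range, which makes the sign of β ambiguous to the observer and the posterior over y multimodal.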
For our experiments, we set σ² = 100 and the Bernoulli parameter to 0.5. Using this noise process, we generate training and test sets of sizes 100,000 and 1,000, respectively.

5.2 Model Validation

The distributional gradients (Eq. (9)) comprise three types of parameters: kernel locations, kernel weights, and kernel bandwidth matrices. Default values for the latter two exist in the form of uniform weights and plug-in bandwidth estimates [33], respectively, so we can turn optimization of those parameter groups on/off as desired.³

Table 2: Quantitative Evaluation. (a) We report the expected log-likelihood of the test set under the predicted posterior p_{θ(x)}(y), showing the need for a structured and non-parametric approach to model rich posteriors. (b) Inference using our RIN architecture is much faster than sampling-based or variational inference while still leading to accurate marginals.

(a) Model Validation (expected test log-likelihood; parameter group estimation −W/+W crossed with −B/+B)

  Model                       Non-param.  Structured | −W,−B   −W,+B   +W,−B   +W,+B
  Gaussian                    no          no         | −1.13 (ML estimation)
  Kernel Density              yes         no         | +6.66 (plug-in bandwidth estimation)
  Neural Network + Gaussian   no          no         | −0.90   +2.54   −0.88   +2.90
  GGM [39]                    no          yes        | −0.85   +1.55   −0.93   +1.53
  Mixture Density [40]        yes         no         | +9.22   +6.87   +11.18  +11.51
  NGM-100 (ours)              yes         yes        | +15.26  +15.30  +16.00  +16.46

(b) Inference Validation

  Inference        Particles  Performance (marg. log-lik.)  Runtime (sec)
  BB-VI [23]       400        +2.30                         660.65
                   800        +3.03                         1198.08
  P-BP [30]        50         +2.91                         0.49
                   100        +6.13                         2.11
                   200        +7.01                         6.43
                   400        +8.85                         21.13
  RIN-100 (ours)   −          +16.62                        0.04

[(N/G)GM: Non-parametric/Gaussian Graphical Model; RIN-x: Recurrent Inference Network with x kernels; P-BP: Particle Belief Propagation; BB-VI: Black Box Variational Inference]

In addition to those variations, non-parametric structured output networks with a Gaussian kernel κ = N(· | 0, I) comprise a number of popular baselines as special cases, including neural networks predicting a Gaussian posterior (n = 1, N = 1), mixture density networks (n = 1, N > 1; [40]), and Gaussian graphical models (n > 1, N = 1; [39]). For the sake of completeness, we also report the performance of two basic posteriors without a preceding neural network, namely a pure Gaussian and traditional kernel density estimation (KDE). We compare our approach to those baselines in terms of the expected log-likelihood on the test set, which is a relative measure for the KL-divergence to the true posterior.

Setup and Results. For the two basic models, we learn a joint density p(y, x) by maximum likelihood (Gaussian) and plug-in bandwidth estimation (KDE) and condition on the inputs x to infer the labels y. We train the other 4 models for 40 epochs using a Gaussian kernel and a diagonal bandwidth matrix for the non-parametric models. The DNN consists of 2 fully-connected layers with 256 units, and the kernel weights are constrained to lie on a simplex with a softmax layer. The NGM uses a chain-structured graph that connects each pixel to its immediate neighbors. Table 2a shows our results. Ablation study: unsurprisingly, a purely Gaussian posterior cannot represent the true posterior appropriately. A multimodal kernel density works better than a neural network with a parametric posterior but cannot compete with the two non-parametric models attached to the neural network. Among the methods with a neural network, optimization of kernel locations only (first column) generally performs worst.
However, the −W,+B setting (second column) sometimes gets trapped in local minima, especially in the case of global mixture densities. If we decide to estimate a second parameter group, weights (+W) should therefore be preferred over bandwidths (+B). Best results are obtained when estimation is turned on for all three parameter groups. Baselines: the two non-parametric methods consistently perform better than the parametric approaches, confirming our claim that non-parametric densities are a powerful alternative to a parametric posterior. Furthermore, a comparison of the last two rows shows a substantial improvement due to our factored representation, demonstrating the importance of incorporating structure into high-dimensional, continuous estimation problems.

Learned Graph Structures. While the output variables in our experiments with one-dimensional pixel stripes have a canonical dependence structure, the optimal connectivity of the NGM in tasks with complex or no spatial semantics might be less obvious. As an example, we consider the case of two-dimensional image patches of size 10 × 10, which we extract and corrupt following the same protocol and noise process as above. Instead of specifying the graph by hand, we use a mutual information criterion [41] to learn the optimal arborescence from the training labels. With estimation of all parameter groups turned on (+W,+B), we obtain results that are fully in line with those above: the expected test log-likelihood of NSONs (+153.03) is again superior to a global mixture density (+76.34), which in turn outperforms the two parametric approaches (GGM: +18.60; Gaussian: −19.03). A full ablation study as well as a visualization of the inferred graph structure are shown in Appendix A.3.

³Since plug-in estimators depend on the kernel locations, the gradient w.r.t. the kernel locations needs to take these dependencies into account by backpropagating through the estimator and computing the total derivative.
5.3 Inference Validation

Section 4 motivated the use of a recurrent inference network (RIN) to infer rich statistics from structured, non-parametric densities. We compare this choice to the other two groups of approaches, i.e., variational and sampling-based inference (Section 1.1.2), in a marginal inference task. To this end, we pick one popular member from each group as baselines for our RIN architecture.

Particle Belief Propagation (P-BP; [30]). Sum-product particle belief propagation approximates a BP message (Eq. (16); op := ∫) with a set of particles {y_j^{(s)}}_{s=1}^{S} per node Y_j by computing

μ̂_{i→j}(y_j^{(k)}) = Σ_{s=1}^{S} φ(y_i^{(s)}, y_j^{(k)}) · μ̂_{·→i}(y_i^{(s)}) / (S · ρ(y_i^{(s)})),   (18)

where the particles are sampled from a proposal distribution ρ that approximates the true marginal by running MCMC on the beliefs μ̂_{·→i}(y_i) · μ̂_{i→j}(y_i). Similar versions exist for other operators [42].

Black Box Variational Inference (BB-VI; [23]). Black box variational inference maximizes the ELBO L_VI[q_λ] with respect to a variational distribution q_λ by approximating its gradient through a set of samples {y^{(s)}}_{s=1}^{S} ∼ q_λ and performing stochastic gradient ascent,

∇_λ L_VI[q_λ] = ∇_λ E_{q_λ(y)}[log(p_θ(y)/q_λ(y))] ≈ S^{−1} Σ_{s=1}^{S} ∇_λ log q_λ(y^{(s)}) · log(p_θ(y^{(s)})/q_λ(y^{(s)})).   (19)

A statistic t (Eq. (5)) can then be estimated from the tractable variational distribution q_λ(y) instead of the complex target distribution p_θ(y). We use an isotropic Gaussian kernel κ = N(· | 0, I) together with the traditional factorization q_λ(y) = Π_{i=1}^{n} q_{λ_i}(y_i), in which case variational sampling is straightforward and the (now unconditional) gradients are given directly by Section 3.3.

5.3.1 Setup and Results

We train our RIN architecture with a negative log-likelihood loss attached to each belief node, L_I^{(i)} = −log p_{θ_i}(y_i), and compare its performance to the results obtained from P-BP and BB-VI by calculating the sum of marginal log-likelihoods.
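The P-BP estimate in Eq. (18) is an importance-weighted sum over particles; it can be sketched in a few lines (the callables and names are ours, chosen to mirror the symbols in the equation):

```python
def pbp_message(phi, particles_i, pre_message_i, rho_i, y_j):
    """Particle-BP estimate of mu_hat_{i->j}(y_j) as in Eq. (18):
    sum_s phi(y_i^(s), y_j) * mu_hat_{.->i}(y_i^(s)) / (S * rho(y_i^(s))).

    phi          : pairwise potential phi(y_i, y_j)
    particles_i  : S particle values y_i^(s) drawn from the proposal rho
    pre_message_i: product of incoming messages at Y_i, mu_hat_{.->i}
    rho_i        : proposal density
    """
    S = len(particles_i)
    return sum(phi(y_s, y_j) * pre_message_i(y_s) / (S * rho_i(y_s))
               for y_s in particles_i)
```

As a sanity check: with a constant potential and a pre-message equal to the proposal density, each importance ratio is 1 and the estimate reduces to 1, matching the fact that the estimator is unbiased for the integral it replaces.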
For the baselines, we consider different numbers of particles, which affects both performance and speed. Additionally, for BB-VI we track the performance across 1024 optimization steps and report the best results. Table 2b summarizes our findings. Among the baselines, P-BP performs better than BB-VI once a required particle threshold is exceeded. We believe this is a manifestation of the special requirements associated with inference in non-parametric densities: while BB-VI needs to fit a high number of parameters, which poses the risk of getting trapped in local minima, P-BP relies solely on the evaluation of potentials. However, both methods are outperformed by a significant margin by our RIN, which we attribute to its end-to-end training in accordance with DNN+NGM and its ability to propagate and update full distributions instead of their mere values at a discrete set of points. In addition to pure performance, a key advantage of RIN inference over more traditional inference methods is its speed: our RIN approach is over 50× faster than P-BP with 100 particles and orders of magnitude faster than BB-VI. This is significant, even when taking dependencies on hardware and implementation into account, and allows the use of expressive non-parametric posteriors in time-critical applications.

6 Conclusion

We proposed non-parametric structured output networks, a highly expressive framework consisting of a deep neural network predicting a non-parametric graphical model and a recurrent inference network computing statistics in this structured output space. We showed how all three components can be learned end-to-end by backpropagating non-parametric gradients through directed graphs and neural messages. Our experiments showed that non-parametric structured output networks are necessary for both effective learning of multimodal posteriors and efficient inference of complex statistics in them.
We believe that NSONs are suitable for a variety of other structured tasks and can be used to obtain accurate approximations to many intractable statistics of non-parametric densities beyond (max-)marginals.

References

[1] Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet Classification with Deep Convolutional Neural Networks. NIPS (2012)
[2] He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. CVPR (2016)
[3] Shelhamer, E., Long, J., Darrell, T.: Fully Convolutional Networks for Semantic Segmentation. PAMI (2016)
[4] Girshick, R.: Fast R-CNN. ICCV (2015)
[5] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv:1506.01497 [cs.CV] (2015)
[6] Collobert, R., Weston, J.: A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. ICML (2008)
[7] Bahdanau, D., Cho, K., Bengio, Y.: Neural Machine Translation by Jointly Learning to Align and Translate. ICLR (2015)
[8] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR (2014)
[9] Ioffe, S., Szegedy, C.: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. ICML (2015)
[10] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. CVPR (2009)
[11] Lin, T.Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C.L., Dollár, P.: Microsoft COCO: Common Objects in Context. arXiv:1405.0312 [cs.CV] (2014)
[12] Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press (2009)
[13] Schwing, A., Urtasun, R.: Fully Connected Deep Structured Networks.
arXiv:1503.02351 [cs.CV] (2015)
[14] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.: Conditional Random Fields as Recurrent Neural Networks. ICCV (2015)
[15] Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR (2015)
[16] Jain, A., Zamir, A.R., Savarese, S., Saxena, A.: Structural-RNN: Deep Learning on Spatio-Temporal Graphs. CVPR (2016)
[17] Deng, Z., Vahdat, A., Hu, H., Mori, G.: Structure Inference Machines: Recurrent Neural Networks for Analyzing Relations in Group Activity Recognition. CVPR (2015)
[18] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. ICLR (2015)
[19] Chen, L.C., Schwing, A., Yuille, A., Urtasun, R.: Learning Deep Structured Models. ICML (2015)
[20] Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann (1988)
[21] Hoffman, M.D., Blei, D.M., Wang, C., Paisley, J.: Stochastic Variational Inference. JMLR (2013)
[22] Ghahramani, Z., Beal, M.: Propagation Algorithms for Variational Bayesian Learning. NIPS (2001)
[23] Ranganath, R., Gerrish, S., Blei, D.M.: Black Box Variational Inference. JMLR W&CP (2014)
[24] Krähenbühl, P., Koltun, V.: Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. NIPS (2012)
[25] Adams, A., Baek, J., Davis, M.A.: Fast High-Dimensional Filtering Using the Permutohedral Lattice. Computer Graphics Forum (2010)
[26] Campbell, N., Subr, K., Kautz, J.: Fully-Connected CRFs with Non-Parametric Pairwise Potentials. CVPR (2013)
[27] Koller, D., Lerner, U., Angelov, D.: A General Algorithm for Approximate Inference and its Application to Hybrid Bayes Nets. UAI (1999)
[28] Isard, M.: Pampas: Real-Valued Graphical Models for Computer Vision. CVPR (2003)
[29] Sudderth, E., Ihler, A., Freeman, W., Willsky, A.: Non-parametric Belief Propagation.
CVPR (2003)
[30] Ihler, A., McAllester, D.: Particle Belief Propagation. AISTATS (2009)
[31] Pacheco, J., Zuffi, S., Black, M.J., Sudderth, E.: Preserving Modes and Messages via Diverse Particle Selection. ICML (2014)
[32] Park, M., Liu, Y., Collins, R.T.: Efficient Mean Shift Belief Propagation for Vision Tracking. CVPR (2008)
[33] Scott, D.: Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley (1992)
[34] Kingma, D., Ba, J.: Adam: A Method for Stochastic Optimization. ICLR (2015)
[35] Wang, W., Carreira-Perpiñán, M.Á.: Projection onto the Probability Simplex: An Efficient Algorithm with a Simple Proof, and an Application. arXiv:1309.1541 [cs.LG] (2013)
[36] Yedidia, J.S., Freeman, W.T., Weiss, Y.: Understanding Belief Propagation and its Generalizations. Technical report, Mitsubishi Electric Research Laboratories (2001)
[37] Sun, W., Venkatraman, A., Gordon, G.J., Boots, B., Bagnell, J.A.: Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction. arXiv:1703.01030 [cs.LG] (2017)
[38] Ross, S., Munoz, D., Hebert, M., Bagnell, J.A.: Learning Message-Passing Inference Machines for Structured Prediction. CVPR (2011)
[39] Weiss, Y., Freeman, W.T.: Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology. Neural Computation (2001)
[40] Bishop, C.M.: Mixture Density Networks. Technical report, Aston University (1994)
[41] Lehrmann, A., Gehler, P., Nowozin, S.: A Non-Parametric Bayesian Network Prior of Human Pose. ICCV (2013)
[42] Kothapa, R., Pacheco, J., Sudderth, E.B.: Max-Product Particle Belief Propagation. Technical report, Brown University (2011)
Neural Program Meta-Induction

Jacob Devlin∗ Google jacobdevlin@google.com
Rudy Bunel∗ University of Oxford rudy@robots.ox.ac.uk
Rishabh Singh Microsoft Research risin@microsoft.com
Matthew Hausknecht Microsoft Research mahauskn@microsoft.com
Pushmeet Kohli∗ DeepMind pushmeet@google.com

Abstract

Most recently proposed methods for Neural Program Induction work under the assumption of having a large set of input/output (I/O) examples for learning any underlying input-output mapping. This paper aims to address the problem of data and computation efficiency of program induction by leveraging information from related tasks. Specifically, we propose two approaches for cross-task knowledge transfer to improve program induction in limited-data scenarios. In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning. In our second approach, meta program induction, a k-shot learning approach is used to make a model generalize to new tasks without additional training. To test the efficacy of our methods, we constructed a new benchmark of programs written in the Karel programming language [17]. Using an extensive experimental evaluation on the Karel benchmark, we demonstrate that our proposals dramatically outperform the baseline induction method that does not use knowledge transfer. We also analyze the relative performance of the two approaches and study the conditions in which they perform best. In particular, meta induction outperforms all existing approaches under extreme data sparsity (when a very small number of examples are available), i.e., fewer than ten. As the number of available I/O examples increases (i.e., a thousand or more), portfolio-adapted program induction becomes the best approach. For intermediate data sizes, we demonstrate that the combined method of adapted meta program induction has the strongest performance.
1 Introduction

Neural program induction has been a very active area of research in the last few years, but this past work has made a highly variable set of assumptions about the amount of training data and the types of training signals that are available. One common scenario is example-driven algorithm induction, where the goal is to learn a model which can perform a specific task (i.e., an underlying program or algorithm), such as sorting a list of integers [7, 11, 12, 21]. Typically, the goal of these works is to compare a newly proposed network architecture to a baseline model, and the system is trained on input/output examples (I/O examples) as a standard supervised learning task. For example, for integer sorting, the I/O examples would consist of pairs of unsorted and sorted integer lists, and the model would be trained to minimize the cross-entropy loss of the output sequence. In this way, the induction model is similar to a standard sequence generation task such as machine translation or image captioning. In these works, the authors typically assume that a near-infinite amount of I/O examples corresponding to a particular task is available.

∗Work performed at Microsoft Research.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Other works have made different assumptions about data: Li et al. [14] train models from scratch using 32 to 256 I/O examples. Lake et al. [13] learn to induce complex concepts from several hundred examples. Devlin et al. [5], Duan et al. [6], and Santoro et al. [19] are able to perform induction using as few as one I/O example, but these works assume that a large set of background tasks from the same task family is available for training. Neelakantan et al. [16] and Andreas et al. [1] also develop models which can perform induction on new tasks that were not seen at training time, but are conditioned on a natural language representation rather than I/O examples.
These varying assumptions about data are all reasonable in differing scenarios. For example, in a scenario where a reference implementation of the program is available, it is reasonable to expect that an unlimited amount of I/O examples can be generated, but it may be unreasonable to assume that any similar program will also be available. However, we can also consider a scenario like FlashFill [9], where the goal is to learn a regular-expression-based string transformation program from user-provided examples, such as "John Smith → Smith, J.". Here, it is only reasonable to assume that a handful of I/O examples are available for a particular task, but that many examples are available for other tasks in the same family (e.g., "Frank Miller → Frank M"). In this work, we compare several different techniques for neural program induction, with a particular focus on how the relative accuracy of these techniques differs as a function of the available training data. In other words, if technique A is better than technique B when only five I/O examples are available, does this mean A will also be better than B when 50 I/O examples are available? What about 1000? 100,000? How does this performance change if data for many related tasks is available? To answer these questions, we evaluate four general techniques for cross-task knowledge sharing:

• Plain Program Induction (PLAIN) - Supervised learning is used to train a model which can perform induction on a single task, i.e., read in an input example for the task and predict the corresponding output. No cross-task knowledge sharing is performed.

• Portfolio-Adapted Program Induction (PLAIN+ADAPT) - Simple transfer learning is used to adapt a model which has been trained on a related task to a new task.
• Meta Program Induction (META) - A k-shot learning-style model is used to represent an exponentially large family of tasks, where the training I/O examples corresponding to a task are directly conditioned on as input to the network. This model can generalize to new tasks without any additional training.

• Adapted Meta Program Induction (META+ADAPT) - The META model is adapted to a specific new task using round-robin hold-one-out training on the task's I/O examples.

We evaluate these techniques on a synthetic domain described in Section 2, using a simple but strong network architecture. All models are fully example-driven, so the underlying program representation is only used to generate I/O examples, and is not used when training or evaluating the model.

2 Karel Domain

In order to ground the ideas presented here, we describe our models in relation to a particular synthetic domain called "Karel". Karel is an educational programming language developed at Stanford University in the 1980s [17]. In this language, a virtual agent named Karel the Robot moves around a 2D grid world placing markers and avoiding obstacles. The domain-specific language (DSL) for Karel is moderately complex, as it allows if/then/else blocks, for loops, and while loops, but does not allow variable assignments. Compared to current program induction benchmarks, Karel introduces the new challenge of learning programs with complex control flow, where state-of-the-art program synthesis techniques involving constraint-solving and enumeration do not scale because of the prohibitively large search space. Karel is also an interesting domain as it is used for example-driven programming in an introductory Stanford programming course.² In this course, students are provided with several I/O grids corresponding to some underlying Karel program that they have never seen before, and must write a single program which can be run on all inputs to generate the corresponding outputs.
This differs from typical programming assignments, since the program specification is given in the form of I/O examples rather than natural language. An example is given in Figure 1. Note that inducing Karel programs is not a toy reinforcement learning task. Since the example I/O grids are of varying dimensions, the learning task is not to induce a single trace that only works on grids of a fixed size, but rather to induce a program that can perform the desired action on "arbitrary-size grids", thereby forcing it to use the loop structure appropriately.

²The programs are written manually by students; the course is not used to teach program induction or synthesis.

[Figure 1: Karel Domain. On the left, a sample task from the Karel domain with two training I/O examples (I₁, O₁), (I₂, O₂) and one test I/O example (Î, Ô). The computer is Karel, the circles represent markers, and the brick wall represents obstacles. On the right, the language spec for Karel.]

In this work, we only explore the induction variant of Karel, where instead of attempting to synthesize the program, we attempt to directly generate the output grid Ô from a corresponding input grid Î. Although the underlying program is used to generate the training data, it is not used by the model in any way, so in principle it does not have to explicitly exist. For example, a more complex real-world analogue would be a system where a user controls a drone to provide examples of a task such as "Fly around the boundary of the forest, and if you see a deer, take a picture of it, then return home." Such a task might be difficult to represent using a program, but could be possible with a sufficiently powerful and well-trained induction model, especially if cross-task knowledge sharing is used.
3 Plain Program Induction

In this work, plain program induction (denoted as PLAIN) refers to the supervised training of a parametric model using a set of input/output examples (I1, O1), ..., (IN, ON), such that the model can take some new Î as input and emit the corresponding Ô. In this scenario, all I/O examples in training and test correspond to the same task (i.e., the same underlying program or algorithm), such as sorting a list of integers. Examples of past work in plain program induction using neural networks include [7, 11, 12, 8, 4, 20, 2].

For the Karel domain, we use a simple architecture shown on the left side of Figure 2. The input feature map is a 16-dimensional vector per grid cell, with an n-hot encoding representing the contents of the cell, i.e., (AgentFacingNorth, AgentFacingEast, ..., OneMarker, TwoMarkers, ..., Obstacle). Additionally, instead of predicting the output grid directly, we use an LSTM to predict the delta between the input grid and the output grid as a series of tokens. For example, AgentRow=+1 AgentCol=+2 HeroDir=south MarkerRow=0 MarkerCol=0 MarkerCount=+2 would indicate that the hero has moved 1 row north and 2 columns east, is facing south, and has added two markers at its starting position. This sequence can be deterministically applied to the input to create the output grid. Specific details about the model architecture and training are given in Section 8.

Figure 2: Network Architecture: Diagrams for the general network architectures used for the Karel domain. Specifics of the model are provided in Section 8.

4 Portfolio-Adapted Program Induction

Most past work in neural program induction assumes that a very large amount of training data is available to train a model for a particular task, and ignores data sparsity issues entirely. However, in a practical scenario such as the FlashFill domain described in Section 1 or the real-world Karel analogue described in Section 2, I/O examples for a new task must be provided by the user.
In this case, it may be unrealistic to expect more than a handful of I/O examples corresponding to a new task. Of course, it is typically infeasible to train a deep neural network from scratch with only a handful of training examples. Instead, we consider a scenario where data is available for a number of background tasks from the same task family. In the Karel domain, the task family is simply any task from the Karel DSL, but in principle the task family can be a more abstract concept, such as "the set of string transformations that a user might perform on columns in a spreadsheet."

One way of taking advantage of such background tasks is with straightforward transfer learning, which we refer to as portfolio-adapted program induction (denoted as PLAIN+ADAPT). Here, we have a portfolio of models, each trained on a single background I/O task. To train an induction model for a new task, we select the "best" background model and use it as an initialization point for training our new model. This is analogous to the type of transfer learning used in standard classification tasks like image recognition or machine translation [10, 15]. The criterion by which we select this background model is to score the training I/O examples for the new task with each model in the portfolio, and select the one with the highest log-likelihood.

5 Meta Program Induction

Although we expect that PLAIN+ADAPT will allow us to learn an induction model with fewer I/O examples than training from scratch, it is still subject to the normal pitfalls of SGD-based training. In particular, it is typically very difficult to train powerful DNNs using very few I/O examples (e.g., < 100) without encountering significant overfitting. An alternative method is to train a single network which represents an entire (exponentially large) family of tasks, where the latent representation of a particular task is formed by conditioning on the training I/O examples for that task.
We refer to this type of model as meta induction (denoted as META) because instead of using SGD to learn a latent representation of a particular task based on I/O examples, we are using SGD to learn how to learn a latent task representation based on I/O examples. More specifically, our meta induction architecture takes as input a set of demonstration examples (I1, O1), ..., (Ik, Ok) and an additional eval input Î, and emits the corresponding output Ô. A diagram is shown in Figure 2. The number of demonstration examples k is typically small, e.g., 1 to 5. At training time, we are given a large number of tasks with k + 1 examples each. During training, one example is chosen at random to represent the eval example, and the others are used to represent the demonstration examples. At test time, we are given k I/O examples which correspond to a new task that was not seen in training, along with one or more eval inputs Î. We are then able to generate the corresponding Ô for the new task without performing any SGD. The META model could also be described as a k-shot learning system, closely related to Duan et al. [6] and Santoro et al. [19].

In a scenario where a moderate number of I/O examples are available at test time, e.g., 10 to 100, performing meta induction is non-trivial. It is not computationally feasible to train a model which is directly conditioned on k = 100 examples, and using a larger value of k at test time than at training time creates an undesirable mismatch. So, if the model is trained using k examples but n examples are available at test time (n > k), the approach we take is to randomly sample a number of k-sized subsets and ensemble the softmax log probabilities for each output token. There are (n choose k) total subsets available, but we found little improvement from using more than 2n/k. We set k = 5 in all experiments, and present results using different values of n in Section 8.
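The subset-ensembling scheme above can be sketched as follows. Here `meta_log_probs` is a hypothetical stand-in for running the META network conditioned on one k-sized subset; averaging the per-token softmax log probabilities follows the description in the text.

```python
import random

# Sketch of test-time ensembling when n > k I/O examples are available:
# sample roughly 2n/k random k-sized subsets, condition the META model on
# each, and average the per-token softmax log probabilities.
# `meta_log_probs(subset, eval_input)` is an assumed interface, not the
# paper's code; it returns one log probability per output token.

def ensemble_log_probs(meta_log_probs, examples, eval_input, k=5, rng=random):
    n = len(examples)
    num_subsets = max(1, 2 * n // k)          # ~2n/k subsets, per the text
    total = None
    for _ in range(num_subsets):
        subset = rng.sample(examples, k)      # random k-sized subset
        lp = meta_log_probs(subset, eval_input)
        total = lp if total is None else [a + b for a, b in zip(total, lp)]
    return [t / num_subsets for t in total]
```

With k = 5 and n = 100 this evaluates 40 subsets rather than all (100 choose 5), which the text reports is sufficient.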
6 Adapted Meta Program Induction

The previous approach of using n > k I/O examples at test time seems reasonable, but is certainly not optimal. An alternative approach is to combine the best aspects of META and PLAIN+ADAPT, and adapt the meta model to a particular new task using SGD. To do this, we can repeatedly sample k + 1 I/O examples from the n total examples provided, and fine-tune the META model for the new task in the exact manner that it was trained originally. For decoding, we still perform the same algorithm as the META model, but the weights have been adapted for the particular task being decoded.

In order to mitigate overfitting, we found that it is useful to perform "data-mixture regularization," where the I/O examples for the new task are mixed with random training data corresponding to other tasks. In all experiments here, we sample 10% of the I/O examples in a minibatch from the new task and 90% from random training tasks. Underfitting could potentially occur in this scenario, but note that the meta network is already trained to represent an exponential number of tasks, so devoting 10% of the data to a single task is quite significant. Results with data-mixture adaptation are shown in Figure 3, which demonstrates that this acts as a strong regularizer and moderately improves held-out loss.

Figure 3: Data-Mixture Regularization

7 Comparison with Existing Work on Neural Program Induction

There has been a large amount of past work in neural program induction, and many of these works have made different assumptions about the conditions of the induction scenario. Here, our goal is to compare the four techniques presented here to each other and to past work across several attributes:

• Example-Driven Induction - ✓: The system is trained using I/O examples as specification. ✗: The system uses some other specification, such as natural language.

• No Explicit Program Representation - ✓: The system can be trained without any explicit program or program trace.
✗: The system requires a program or program trace.

• Task-Specific Learning - ✓: The model is trained to maximize performance on a particular task. ✗: The model is trained for a family of tasks.

• Cross-Task Knowledge Sharing - ✓: The system uses information from multiple tasks when training a model for a new task. ✗: The system uses information from only a single task for each model.

The comparison is presented in Table 1. The PLAIN technique is closely related to example-driven induction models such as Neural Turing Machines [7] or Neural RAM [12], which typically have not focused on cross-task knowledge transfer. The META model is closely related to the k-shot imitation learning approaches [6, 5, 19], but those papers did not explore task-specific adaptation.

8 Experimental Results

In this section we evaluate the four techniques PLAIN, PLAIN+ADAPT, META, and META+ADAPT on the Karel domain. The primary goal is to compare performance relative to the number of training I/O examples available for the test task.

Table 1: Comparison with Existing Work: comparison of existing work across the attributes Example-Driven Induction, No Explicit Program or Trace, Task-Specific Learning, and Cross-Task Knowledge Sharing. The systems compared, by group:
• Novel Architectures Applied to Program Induction: NTM [7], Stack RNN [11], NRAM [12], Neural Transducers [8], Learn Algo [21], Others [4, 20, 2, 13]
• Trace-Augmented Induction: NPI [18], Recursive NPI [3], NPL [14]
• Non Example-Driven Induction (e.g., Natural Language-Driven Induction): Inducing Latent Programs [16], Neural Module Networks [1]
• k-shot Imitation Learning: 1-Shot Imitation Learning [6], RobustFill [5], Meta-Learning [19]
• Techniques Explored in This Work: Plain Program Induction, Portfolio-Adapted Program Induction (Weak), Meta Program Induction (Strong), Adapted Meta Program Induction (Strong)
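As a concrete illustration of the data-mixture regularization from Section 6, the minibatch sampling might look like the following sketch. The batch size and helper structure are illustrative assumptions, not the paper's implementation; only the 10%/90% mixing ratio comes from the text.

```python
import random

# Sketch of "data-mixture regularization": each adaptation minibatch draws
# 10% of its I/O examples from the new task and 90% from random background
# training tasks (the paper's reported proportions).

def mixed_minibatch(new_task_examples, background_tasks, batch_size=32,
                    new_task_frac=0.1, rng=random):
    n_new = max(1, int(batch_size * new_task_frac))
    batch = [rng.choice(new_task_examples) for _ in range(n_new)]
    for _ in range(batch_size - n_new):
        task = rng.choice(background_tasks)   # pick a random background task...
        batch.append(rng.choice(task))        # ...then one of its I/O examples
    rng.shuffle(batch)
    return batch
```

Fine-tuning the META model on such batches preserves its multi-task representation while still adapting it toward the new task.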
For the primary experiments reported here, the overall network architecture is sketched in Figure 2, with details as follows. The input encoder is a 3-layer CNN with an FC+relu layer on top. The output decoder is a 1-layer LSTM. For the META model, the task encoder uses a 1-layer CNN to encode the input and output of a single example, which are concatenated on the feature-map dimension and fed through a 6-layer CNN with an FC+relu layer on top. Multiple I/O examples were combined with max-pooling on the final vector. All convolutional layers use a 3 × 3 kernel with a 64-dimensional feature map. The fully-connected and LSTM layers are 1024-dimensional. Different model sizes are explored later in this section. The dropout, learning rate, and batch size were optimized with grid search for each value of n using a separate set of validation tasks. Training was performed using SGD with momentum and gradient clipping, using an in-house toolkit.

All training, validation, and test programs were generated by treating the Karel DSL as a probabilistic context-free grammar and performing top-down expansion with uniform probability at each node. The input grids were generated by creating a grid of a random size and inserting the agent, markers, and obstacles at random. The output grid was generated by executing the program on the input grid; if the agent ran into an obstacle or did not move, the example was thrown out and a new input grid was generated. We limit the nesting depth of control flow to at most 4 (i.e., at most 4 nested if/while blocks can occur in a valid program). We sample I/O grids of size n × m, where n and m are integers sampled uniformly from the range 2 to 20. We sample programs of up to 20 statements. Every program and I/O grid in the training/validation/test set is unique.
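The data-generation loop described above (sample a random grid, execute the program, reject crashing or no-op examples) can be sketched as follows. `execute_program` is a hypothetical stand-in for the Karel interpreter, and the cell encoding is illustrative.

```python
import random

# Simplified sketch of the I/O example generation described above. Grid sides
# are sampled uniformly from 2 to 20, per the text. `execute_program` is an
# assumed interface returning (output_grid, hit_obstacle).

def sample_io_example(program, execute_program, rng=random, max_tries=100):
    for _ in range(max_tries):
        rows, cols = rng.randint(2, 20), rng.randint(2, 20)
        # Illustrative cell encoding: empty, marker, obstacle.
        grid = [[rng.choice([".", "M", "#"]) for _ in range(cols)]
                for _ in range(rows)]
        out, hit_obstacle = execute_program(program, grid)
        # Reject examples where the agent crashed or nothing changed.
        if not hit_obstacle and out != grid:
            return grid, out
    raise RuntimeError("could not generate a valid I/O example")
```

The rejection step matters: examples where the output equals the input carry no signal about the program's behavior.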
Results are presented in Figure 4, evaluated on 25 test tasks with 100 eval examples each.3 The x-axis represents the number of training/demonstration I/O examples available for the test task, denoted as n. The PLAIN system was trained only on these n examples directly. The PLAIN+ADAPT system was also trained on these n examples, but was initialized using a portfolio of m models that had been trained on d examples each. Three different values of m and d are shown in the figure. The META model in this figure was trained on 1,000,000 tasks with 6 I/O examples each, but smaller amounts of META training are shown in Figure 5. A point-by-point analysis is given below:

3Note that each task and eval example is evaluated independently, so the size of the test set does not affect the accuracy.

Figure 4: Induction Results: Comparison of the four induction techniques on the Karel scenario. The accuracy denotes the total percentage of examples for which the 1-best output grid was exactly equal to the reference.

• PLAIN vs. PLAIN+ADAPT: PLAIN+ADAPT significantly outperforms PLAIN unless n is very large (10k+), in which case both systems perform equally well. This result makes sense, since we expect that much of the representation learning (e.g., how to encode an I/O grid with a CNN) will be independent of the exact task.

• PLAIN+ADAPT Model Portfolio Size: Here, we compare the three model portfolio settings shown for PLAIN+ADAPT. The number of available models (m = 1 vs. m = 25) has only a small effect on accuracy, and this effect is only present for small values of n (e.g., n < 100), when the absolute performance is poor in any case. This implies that the majority of cross-task knowledge sharing is independent of the exact details of a task. On the other hand, the number of examples used to train each model in the portfolio (d = 1000 vs. d = 100000) has a much larger effect, especially for moderate values of n, e.g., 50 to 100.
This makes sense, as we would not expect a significant benefit from adaptation unless (a) d ≫ n, and (b) n is large enough to train a robust model.

• META vs. META+ADAPT: META+ADAPT does not improve over META for small values of n, which is in line with the common observation that SGD-based training is difficult using a small number of samples. However, for large values of n, the accuracy of META+ADAPT increases significantly while the META model remains flat.

• PLAIN+ADAPT vs. META+ADAPT: Perhaps the most interesting result in the entire chart is the fact that the accuracy crosses over, and PLAIN+ADAPT outperforms META+ADAPT by a significant margin for large values of n (i.e., 1000+). Intuitively, this makes sense, since the meta induction model was trained to represent an exponential family of tasks moderately well, rather than to represent a single task with extreme precision. Because the network architecture of the META model is a superset of the PLAIN model, these results imply that for large values of n, the model is becoming stuck in a poor local optimum.4 To validate this hypothesis, we performed adaptation on the meta network after randomly re-initializing all of the weights, and found that in this case the performance of META+ADAPT matches that of PLAIN+ADAPT for large values of n. This confirms that the pre-trained meta network is actually a worse starting point than training from scratch when a large number of training I/O examples are available.

4Since the DNN is over-parameterized relative to the number of training examples, the system is able to overfit the training examples in all cases. Therefore, "poor local optimum" refers to the model's ability to generalize to the test examples.

Figure 5: Ablation results for Karel Induction.

Learning Curves: The left side of Figure 5 presents average held-out loss for the various techniques using 50 and 1000 training I/O examples. Epoch 0 on the META+ADAPT curve corresponds to the META loss.
We can see that the PLAIN+ADAPT loss starts out very high, but the model is able to adapt to the new task quickly. The META+ADAPT loss starts out very strong, but only improves by a small amount with adaptation. For 1000 I/O examples, PLAIN+ADAPT is able to overtake the META+ADAPT model by a small amount, supporting what was observed in Figure 4.

Varying the Model Size: Here, we present results on three architectures: Large = 64-dim feature map, 1024-dim FC/RNN (used in the primary results); Medium = 32-dim feature map, 256-dim FC/RNN; Small = 16-dim feature map, 64-dim FC/RNN. All models use the structure described earlier in this section. We can see from the center of Figure 5 that model size has a much larger impact on the META model than on PLAIN, which is intuitive – representing an entire family of tasks from a given domain requires significantly more parameters than representing a single task. We can also see that the larger models outperform the smaller models for any value of n, which is likely because the dropout ratio was selected for each model size and value of n to mitigate overfitting.

Varying the Amount of META Training: The META model presented in Figure 4 represents a very optimistic scenario, as it is trained on 1,000,000 background tasks with 6 I/O examples each. On the right side of Figure 5, we present META results using 100,000 and 10,000 training tasks. We see a significant loss in accuracy, which demonstrates that it is quite challenging to train a META model that can generalize to new tasks.

9 Conclusions

In this work, we have contrasted two techniques for using cross-task knowledge sharing to improve neural program induction, referred to as adapted program induction and meta program induction. Both of these techniques can be used to improve accuracy on a new task by using models that were trained on related tasks from the same family. However, adapted induction uses a transfer learning style approach, while meta induction uses a k-shot learning style approach.
We applied these techniques to a challenging induction domain based on the Karel programming language, and found that each technique, including unadapted induction, performs best under certain conditions. Specifically, the preferred technique depends on the number of I/O examples (n) that are available for the new task we want to learn, as well as the amount of background data available. These conclusions can be summarized by the following table:

Technique | Background Data Required | When to Use
PLAIN | None | n is very large (10,000+)
PLAIN+ADAPT | Few related tasks (1+) with a large number of I/O examples (1,000+) | n is fairly large (1,000 to 10,000)
META | Many related tasks (100k+) with a small number of I/O examples (5+) | n is small (1 to 20)
META+ADAPT | Same as META | n is moderate (20 to 100)

Although we have only applied these techniques to a single domain, we believe that these conclusions are highly intuitive and should generalize across domains. In future work, we plan to explore more principled methods for adapted meta induction, in order to improve upon results in the very limited-example scenario.

References

[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In CVPR, pages 39–48, 2016.
[2] Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory. CoRR, abs/1602.03218, 2016.
[3] Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. In ICLR, 2017.
[4] Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. Associative long short-term memory. In ICML, 2016.
[5] Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. RobustFill: Neural program learning under noisy I/O. CoRR, abs/1703.07469, 2017.
[6] Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning.
CoRR, abs/1703.07326, 2017.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[8] Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In NIPS, 2015.
[9] Sumit Gulwani, William R. Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Communications of the ACM, 2012.
[10] Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes ImageNet good for transfer learning? CoRR, abs/1608.08614, 2016. URL http://arxiv.org/abs/1608.08614.
[11] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, pages 190–198, 2015.
[12] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. In ICLR, 2016.
[13] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[14] Chengtao Li, Daniel Tarlow, Alexander L. Gaunt, Marc Brockschmidt, and Nate Kushman. Neural program lattices. In ICLR, 2017.
[15] Minh-Thang Luong and Christopher D. Manning. Stanford neural machine translation systems for spoken language domains. 2015.
[16] Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2016.
[17] Richard E. Pattis. Karel the Robot: A Gentle Introduction to the Art of Programming. John Wiley & Sons, Inc., 1981.
[18] Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2016.
[19] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, pages 1842–1850, 2016.
[20] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.
[21] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus.
Learning simple algorithms from examples. CoRR, abs/1511.07275, 2015. URL http://arxiv.org/abs/1511.07275.
The Scaling Limit of High-Dimensional Online Independent Component Analysis

Chuang Wang and Yue M. Lu
John A. Paulson School of Engineering and Applied Sciences, Harvard University
33 Oxford Street, Cambridge, MA 02138, USA
{chuangwang,yuelu}@seas.harvard.edu

Abstract

We analyze the dynamics of an online algorithm for independent component analysis in the high-dimensional scaling limit. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE, which involves two spatial variables and one time variable, can be efficiently obtained. These solutions provide detailed information about the performance of the ICA algorithm, as many practical performance metrics are functionals of the joint empirical measures. Numerical simulations show that our asymptotic analysis is accurate even for moderate dimensions. In addition to providing a tool for understanding the performance of the algorithm, our PDE analysis also provides useful insight. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms for achieving optimal trade-offs between computational and statistical efficiency may prove an interesting line of future research.

1 Introduction

Online learning methods based on stochastic gradient descent are widely used in many learning and signal processing problems.
Examples include the classical least mean squares (LMS) algorithm [1] in adaptive filtering, principal component analysis [2, 3], independent component analysis (ICA) [4], and the training of shallow or deep artificial neural networks [5–7]. Analyzing the convergence rate of stochastic gradient descent has already been the subject of a vast literature (see, e.g., [8–11]). Unlike existing work that analyzes the behavior of these algorithms in finite dimensions, we present in this paper a framework for studying the exact dynamics of stochastic gradient algorithms in the high-dimensional limit, using online ICA as a concrete example. Instead of minimizing a generic function as considered in the optimization literature, the stochastic algorithm we analyze here is solving an estimation problem. The extra assumptions on the ground truth (e.g., the feature vector) and the generative models for the observations allow us to obtain the exact asymptotic dynamics of the algorithms. As the main result of this work, we show that, as the ambient dimension n → ∞ and with proper time-scaling, the time-varying joint empirical measure of the true underlying independent component ξ and its estimate x converges weakly to the unique solution of a nonlinear partial differential equation (PDE) [see (6)]. Since many performance metrics, such as the correlation between ξ and x and the support recovery rate, are functionals of the joint empirical measure, knowledge about the asymptotics of the latter allows us to easily compute the asymptotic limits of various performance metrics of the algorithm. This work is an extension of a recent analysis on the dynamics of online sparse PCA [12] to more general settings. The idea of studying the scaling limits of online learning algorithms first appeared in a series of works, mostly from the statistical physics community [3, 5, 13–16], in the 1990s.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Similar to our setting, those early papers studied the dynamics of various online learning algorithms in high dimensions. In particular, they showed that the mean-squared error (MSE) of the estimation, together with a few other "order parameters", can be characterized as the solution of a deterministic system of coupled ordinary differential equations (ODEs) in the large-system limit. One limitation of such ODE-level analysis is that it cannot provide information about the distributions of the estimates. The latter are often needed when one wants to understand more general performance metrics beyond the MSE. Another limitation is that the ODE analysis cannot handle cases where the algorithms have non-quadratic regularization terms (e.g., the incorporation of ℓ1 norms to promote sparsity). In this paper, we show that both limitations can be eliminated by using our PDE-level analysis, which tracks the asymptotic evolution of the probability distributions of the estimates given by the algorithm.

In a recent paper [10], the dynamics of an ICA algorithm were studied via a diffusion approximation. As an important distinction, the analysis in [10] keeps the ambient dimension n fixed and studies the scaling limit of the algorithm as the step size tends to 0. The resulting PDEs involve O(n) spatial variables. In contrast, our analysis studies the limit as the dimension n → ∞, with a constant step size. The resulting PDEs involve only 2 spatial variables. This low-dimensional characterization makes our limiting results more practical to use, especially when the dimension is large.

The basic idea underlying our analysis can trace its roots to the early work of McKean [17, 18], who studied the statistical mechanics of Markovian-type mean-field interacting particles. The mathematical foundation of this line of research was further established in the 1980s (see, e.g., [19, 20]). This theoretical tool has been used in the analysis of high-dimensional MCMC algorithms [21].
In our work, we study algorithms through the lens of high-dimensional stochastic processes. Interestingly, the analysis does not explicitly depend on whether the underlying optimization problem is convex or nonconvex. This feature makes the presented analysis techniques a potentially very useful tool in understanding the effectiveness of using low-complexity iterative algorithms for solving high-dimensional nonconvex estimation problems, a line of research that has recently attracted much attention (see, e.g., [22–25]).

The rest of the paper is organized as follows. We first describe in Section 2 the observation model and the online ICA algorithm studied in this work. The main convergence results are given in Section 3, where we show that the time-varying joint empirical measure of the target independent component and its estimates given by the algorithm can be characterized, in the high-dimensional limit, by the solution of a deterministic PDE. Due to space constraints, we only provide in the appendix a formal derivation leading to the PDE, and leave the rigorous proof of the convergence to a follow-up paper. Finally, in Section 4 we present some insight obtained from our asymptotic analysis. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent.

Notations and Conventions: Throughout this paper, we use boldfaced lowercase letters, such as ξ and x_k, to represent n-dimensional vectors. The subscript k in x_k denotes the discrete-time iteration step. The ith components of the vectors ξ and x_k are written as ξ_i and x_{k,i}, respectively.

2 Data model and online ICA

We consider a generative model where a stream of sample vectors y_k ∈ R^n, k = 1, 2, ..., is generated according to

$$y_k = \frac{1}{\sqrt{n}}\, \xi\, c_k + a_k, \qquad (1)$$

where ξ ∈ R^n is a unique feature vector we want to recover.
(For simplicity, we consider the case of recovering a single feature vector, but our analysis technique can be generalized to study cases involving a finite number of feature vectors.) Here c_k ∈ R is an i.i.d. random variable drawn from an unknown non-Gaussian distribution P_c with zero mean and unit variance, and $a_k \sim \mathcal{N}(0, I - \frac{1}{n}\xi\xi^T)$ models the background noise. We use the normalization ∥ξ∥² = n so that in the large-n limit, all elements ξ_i of the vector are O(1) quantities. The observation model (1) is equivalent to the standard sphered data model $y_k = A \begin{bmatrix} c_k \\ s_k \end{bmatrix}$, where A ∈ R^{n×n} is an orthonormal matrix whose first column is ξ/√n, and s_k is an i.i.d. (n−1)-dimensional standard Gaussian random vector.

To establish the large-n limit, we shall assume that the empirical measure of ξ, defined by $\mu(\xi) = \frac{1}{n}\sum_{i=1}^n \delta(\xi - \xi_i)$, converges weakly to a deterministic measure µ*(ξ) with finite moments. Note that this assumption can be satisfied in a stochastic setting, where each element of ξ is an i.i.d. random variable drawn from µ*(ξ), or in a deterministic setting [e.g., ξ_i = √2 (i mod 2), in which case µ*(ξ) = ½ δ(ξ) + ½ δ(ξ − √2)].

We use an online learning algorithm to extract the non-Gaussian component ξ from the data stream {y_k}_{k≥1}. Let x_k be the estimate of ξ at step k. Starting from an initial estimate x_0, the algorithm updates x_k by

$$\tilde{x}_k = x_k + \frac{\tau_k}{\sqrt{n}}\, f\Big(\frac{1}{\sqrt{n}} y_k^T x_k\Big)\, y_k - \frac{\tau_k}{n}\, \varphi(x_k), \qquad x_{k+1} = \frac{\sqrt{n}}{\|\tilde{x}_k\|}\, \tilde{x}_k, \qquad (2)$$

where f(·) is a given twice-differentiable function and φ(·) is an element-wise nonlinear mapping introduced to enforce prior information about ξ, e.g., sparsity. The scaling factor 1/√n in the above equations ensures that each component x_{k,i} of the estimate is of size O(1) in the large-n limit. The above online learning scheme can be viewed as a projected stochastic gradient algorithm for solving the optimization problem

$$\min_{\|x\|^2 = n} \; -\frac{1}{K} \sum_{k=1}^{K} F\Big(\frac{1}{\sqrt{n}} y_k^T x\Big) + \frac{1}{n} \sum_{i=1}^{n} \Phi(x_i), \qquad (3)$$

where

$$F(x) = \int f(x)\, dx \quad \text{and} \quad \Phi(x) = \int \varphi(x)\, dx \qquad (4)$$
In (2), we update xk using an instantaneous noisy estimation 1 √nf( 1 √nyT k xk)yk, in place of the true gradient 1 K√n PK k=1 f( 1 √nyT k xk)yk, once a new sample yk is received. In practice, one can use f(x) = ±x3 or f(x) = ± tanh(x) to extract symmetric non-Gaussian signals (for which E c3 k = 0 and E c4 k ̸= 3) and use f(x) = ±x2 to extract asymmetric non-Gaussian signals. The algorithm in (2) with f(x) = x3 can also be regarded as implementing a low-rank tensor decomposition related to the empirical kurtosis tensor of yk [10, 11]. For the nonlinear mapping φ(x), the choice of φ(x) = βx for some β > 0 corresponds to using an L2 norm in the regularization term Φ(x). If the feature vector is known to be sparse, we can set φ(x) = β sgn(x), which is equivalent to adding an L1-regularization term. 3 Main convergence result We provide an exact characterization of the dynamics of the online learning algorithm (2) when the ambient dimension n goes to infinity. First, we define the joint empirical measure of the feature vector ξ and its estimate xk as µn t (ξ, x) = 1 n n X i=1 δ(ξ −ξi, x −xk,i) (5) with t defined by k = ⌊tn⌋. Here we rescale (i.e., “accelerate”) the time by a factor of n. The joint empirical measure defined above carries a lot of information about the performance of the algorithm. For example, as both ξ and xk have the same norm √n by definition, the normalized correlation between ξ and xk defined by Qn t = 1 nξT xk 3 can be computed as Qn t = Eµn t [ξx], i.e., the expectation of ξx taken with respect to the empirical measure. More generally, any separable performance metric Hn t = 1 n Pn i=1 h(ξi, xk,i) with some function h(·, ·) can be expressed as an expectation with respect to the empirical measure µn t , i.e., Hn t = Eµn t h(ξ, x). Directly computing Qn t via the expectation Eµn t [ξx] is challenging, as µn t is a random probability measure. 
We bypass this difficulty by investigating the limiting behavior of the joint empirical measure μ_t^n defined in (5). Our main contribution is to show that, as n → ∞, the sequence of random probability measures {μ_t^n}_n converges weakly to a deterministic measure μ_t. The limiting value of Q_t^n can then be computed from the limiting measure μ_t via the identity lim_{n→∞} Q_t^n = E_{μ_t}[ξx]. Let P_t(ξ, x) be the density function of the limiting measure μ_t(ξ, x) at time t. We show that it is characterized as the unique solution of the following nonlinear PDE:

∂/∂t P_t(ξ, x) = −∂/∂x [Γ(x, ξ, Q_t, R_t) P_t(ξ, x)] + ½ Λ(Q_t) ∂²/∂x² P_t(ξ, x),   (6)

with

Q_t = ∫∫_{R²} ξ x P_t(ξ, x) dx dξ,   (7)
R_t = ∫∫_{R²} x φ(x) P_t(ξ, x) dx dξ,   (8)

where the two functions Λ(Q) and Γ(x, ξ, Q, R) are defined as

Λ(Q) = τ² ⟨f²(cQ + e√(1 − Q²))⟩,   (9)
Γ(x, ξ, Q, R) = x [Q G(Q) + τR − ½ Λ(Q)] − ξ G(Q) − τ φ(x),   (10)

with

G(Q) = −τ ⟨f(cQ + e√(1 − Q²)) c⟩ + τQ ⟨f′(cQ + e√(1 − Q²))⟩.   (11)

In the above equations, e and c denote two independent random variables, with e ∼ N(0, 1) and c ∼ P_c, the non-Gaussian distribution of c_k introduced in (2); the notation ⟨·⟩ denotes the expectation over e and c; and f(·) and φ(·) are the two functions used in the online learning algorithm (2). When φ(x) = 0 (and therefore R_t = 0), we can derive a simple ODE for Q_t from (6) and (7):

d/dt Q_t = (Q_t² − 1) G(Q_t) − ½ Q_t Λ(Q_t).

Example 1. As a concrete example, we consider the case when c_k is drawn from a symmetric non-Gaussian distribution. Due to symmetry, E c_k³ = 0. Write E c_k⁴ = m₄ and E c_k⁶ = m₆. We use f(x) = x³ in (2) to detect the feature vector ξ. Substituting this specific f(x) into (9) and (11), we obtain

G(Q) = τ Q³ (m₄ − 3),   (12)
Λ(Q) = τ² [15 + 15 Q⁴ (1 − Q²)(m₄ − 3) + Q⁶ (m₆ − 15)],   (13)

and Γ(x, ξ, Q, R) can be computed by substituting (12) and (13) into (10). Moreover, for the case φ(x) = 0, we derive a simple ODE for q_t = Q_t² as

dq_t/dt = −2 τ_t q_t² (1 − q_t)(m₄ − 3) − τ_t² q_t [15 q_t² (1 − q_t)(m₄ − 3) + q_t³ (m₆ − 15) + 15].
(14)

Numerical verifications of the ODE results are shown in Figure 1(a). In our experiment, the ambient dimension is set to n = 5000, and we plot the averaged results as well as error bars (corresponding to one standard deviation) over 10 independent trials. Two different initial values of q₀ = Q₀² are used. In both cases, the asymptotic theoretical predictions match the numerical results very well.

The ODE in (14) can be solved analytically. Next, we briefly discuss its stability. The right-hand side of (14) is plotted in Figure 1(b) as a function of q_t.

Figure 1: (a) Comparison between the analytical prediction given by the ODE in (14) and numerical simulations of the online ICA algorithm. We consider two different initial values for the algorithm. The top curve, which starts from a better initial guess, converges to an informative estimate, whereas the bottom one, with a worse initial guess, converges to a non-informative solution. (b) The stability of the ODE in (14). We plot g(q) = (1/τ) dq/dt for τ = 0.02, 0.04, 0.06, 0.08, from top to bottom.

It is clear that the ODE (14) always admits the solution q_t = 0, which corresponds to a trivial, non-informative solution. Moreover, this trivial solution is always a stable fixed point. When the step size τ > τ_c for some constant τ_c, q_t = 0 is also the unique stable fixed point. When τ < τ_c, however, two additional fixed points of the ODE emerge: a stable fixed point, denoted by q*_s, and an unstable fixed point, denoted by q*_u, with q*_u < q*_s. Thus, in order to reach an informative solution, one must initialize the algorithm with Q₀² > q*_u. This insight agrees with a previous stability analysis in [26], where the authors investigated the dynamics near q_t = 0 via a small-q_t expansion.

Example 2. In this experiment, we verify the accuracy of the asymptotic predictions given by the PDE (6).
The settings are similar to those in Example 1. In addition, we assume that the feature vector ξ is sparse, consisting of ρn nonzero elements, each equal to 1/√ρ. Figure 2 shows the asymptotic conditional density P_t(x | ξ) for ξ = 0 and ξ = 1/√ρ at two different times. These theoretical predictions are obtained by solving the PDE (6) numerically. Also shown in the figure are the empirical conditional densities associated with one realization of the ICA algorithm. Again, we observe excellent agreement between the theoretical predictions and the numerical results. To demonstrate the usefulness of the PDE analysis in providing detailed information about the performance of the algorithm, we show in Figure 3 the performance of sparse support recovery using a simple hard-thresholding scheme on the estimates provided by the algorithm. By changing the threshold value, one can trade off the true positive rate against the false positive rate. As we can see from the figure, this precise trade-off can be accurately predicted by our PDE analysis.

4 Insights given by the PDE analysis

In this section, we present some insights that can be gained from our high-dimensional analysis. To simplify the PDE in (6), we can assume that the two functions Q_t and R_t in (7) and (8) are given to us in an oracle way. Under this assumption, the PDE (6) describes the limiting empirical measure of the following stochastic process:

z_{k+1,i} = z_{k,i} + (1/n) Γ(z_{k,i}, ξ_i, Q_{k/n}, R_{k/n}) + √(Λ(Q_{k/n})/n) w_{k,i},   i = 1, 2, …, n,   (15)

where w_{k,i} is a sequence of independent standard Gaussian random variables. Unlike the original online learning update (2), where different coordinates of x_k are coupled, the above process is uncoupled: each component z_{k,i}, for i = 1, 2, …, n, evolves independently when conditioned on Q_t and R_t.
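One quick way to exercise this uncoupling is to integrate the ODE for Q_t (Example 1, equation (14)) and, in parallel, run an Euler-Maruyama discretization of the uncoupled process (15) with Q_t supplied as the oracle. The sketch below (our own; discretization choices are assumptions) does this for φ = 0 using (10), (12), and (13) exactly as stated, with Rademacher-type moments m₄ = m₆ = 1 and the two-point measure μ* = ½δ(0) + ½δ(√2) from Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, m4, m6 = 0.02, 1.0, 1.0              # Rademacher-type c_k (Example 1 setting)

G   = lambda Q: tau * Q**3 * (m4 - 3)                                    # (12)
Lam = lambda Q: tau**2 * (15 + 15 * Q**4 * (1 - Q**2) * (m4 - 3)
                          + Q**6 * (m6 - 15))                            # (13)
Gam = lambda z, xi, Q: z * (Q * G(Q) - 0.5 * Lam(Q)) - xi * G(Q)         # (10), phi = 0

N, dt, T = 20_000, 0.05, 50.0
xi = np.where(rng.random(N) < 0.5, 0.0, np.sqrt(2.0))  # mu* = (1/2)d(0) + (1/2)d(sqrt 2)
Q = 0.8                                                # oracle Q_t, integrated below
z = Q * xi + np.sqrt(1 - Q**2) * rng.standard_normal(N)  # initialize with E[xi z] = Q_0

for _ in range(int(T / dt)):
    # Euler-Maruyama step of the uncoupled process (15); one unit of t is n steps of k
    z += dt * Gam(z, xi, Q) + np.sqrt(Lam(Q) * dt) * rng.standard_normal(N)
    # advance the oracle by the ODE dQ/dt = (Q^2 - 1) G(Q) - Q Lam(Q) / 2
    Q += dt * ((Q**2 - 1) * G(Q) - 0.5 * Q * Lam(Q))

# The particle average E[xi z] tracks Q_t; an initialization below the unstable
# fixed point instead decays to the non-informative solution.
Q_bad = 0.05
for _ in range(int(T / dt)):
    Q_bad += dt * ((Q_bad**2 - 1) * G(Q_bad) - 0.5 * Q_bad * Lam(Q_bad))

print(round(Q, 3), round(float(np.mean(xi * z)), 3), round(Q_bad, 3))
```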
The continuous-time limit of (15) is described by a stochastic differential equation (SDE)

dZ_t = Γ(Z_t, ξ, Q_t, R_t) dt + √(Λ(Q_t)) dB_t,

where B_t is the standard Brownian motion.

Figure 2: (a) A demonstration of the accuracy of our PDE analysis. See the discussion in Example 2 for details. (b) Effective 1-D cost functions.

Figure 3: Trade-offs between the true positive and false positive rates in sparse support recovery. In our experiment, n = 10⁴ and the sparsity level is set to ρ = 0.3. The theoretical results obtained by our PDE analysis accurately predict the actual performance at any run-time of the algorithm.

We next take a closer look at equation (15). Given a scalar ξ, Q_t, and R_t, we can define a time-varying 1-D regularized quadratic optimization problem min_{x∈R} E_t(x, ξ) with the effective potential

E_t(x, ξ) = ½ d_t (x − b_t ξ)² + τ Φ(x),   (16)

where d_t = Q_t G(Q_t) − ½ Λ(Q_t) + τ R_t, b_t = G(Q_t)/d_t, and Φ(x) is the regularization term defined in (4). The stochastic process (15) can then be viewed as a stochastic gradient descent for solving this 1-D problem with a step size equal to 1/n. One can verify that the exact gradient of (16) is −Γ(x, ξ, Q_t, R_t). The third term √(Λ(Q_{k/n})/n) w_{k,i} in (15) adds stochastic noise to the true gradient. Interestingly, although the original optimization problem (3) is non-convex, its 1-D effective optimization problem is always convex for convex regularizers Φ(x) (e.g., Φ(x) = β|x|). This provides an intuitive explanation for the practical success of online ICA. To visualize this 1-D effective optimization problem, we plot in Figure 2(b) the effective potential E_t(x, ξ) at t = 0 and t = 100.
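For the L1 case Φ(x) = β|x|, the effective 1-D problem (16) has a closed-form minimizer given by soft thresholding (assuming d_t > 0, which is what convexity requires; the parameter values below are arbitrary illustrative choices of ours):

```python
import numpy as np

def potential(x, xi, d, b, tau, beta):
    """Effective 1-D potential (16) with the L1 regularizer Phi(x) = beta * |x|."""
    return 0.5 * d * (x - b * xi) ** 2 + tau * beta * np.abs(x)

def minimizer(xi, d, b, tau, beta):
    """Closed-form minimizer of (16): soft thresholding of b*xi at level tau*beta/d."""
    target = b * xi
    return np.sign(target) * max(abs(target) - tau * beta / d, 0.0)

d, b, tau, beta = 1.0, 0.9, 0.5, 1.0      # arbitrary illustrative values
grid = np.linspace(-4.0, 4.0, 400_001)
for xi in (0.0, 1.0, 2.0):
    brute = grid[np.argmin(potential(grid, xi, d, b, tau, beta))]
    closed = minimizer(xi, d, b, tau, beta)
    assert abs(closed - brute) < 1e-4      # closed form agrees with grid search
    print(xi, round(closed, 4))
```

The minimizer is pulled from b_tξ toward the origin by τβ/d_t, which is exactly the L1-induced bias discussed next in the text.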
From Figure 2, we can see that the L1 norm always introduces a bias in the estimation for all nonzero ξ_i, as the minimum point of the effective 1-D cost function is always shifted towards the origin. We hope that the insights gained from the 1-D effective optimization problem can guide the design of better regularization functions Φ(x) that achieve smaller estimation errors without sacrificing convergence speed. This may prove an interesting line of future work.

This uncoupling phenomenon is a typical consequence of mean-field dynamics, e.g., the Sherrington-Kirkpatrick model [27] in statistical physics. Similar phenomena are observed or proved for other high-dimensional algorithms, especially those related to approximate message passing (AMP) [28–30]. However, for those algorithms, which use batch updating rules with the Onsager reaction term, the limiting densities of the iterates are Gaussian. Thus the evolution of such densities can be characterized by tracking a few scalar parameters in discrete time. In our case, the limiting densities are typically non-Gaussian and cannot be parametrized by finitely many scalars; thus the PDE limit (6) is required.

Appendix A: Formal derivation of the PDE

In this appendix, we present a formal derivation of the PDE (6). We first note that (x_k, ξ_k)_k with ξ_k = ξ forms an exchangeable Markov chain on R^{2n}, driven by the random variable c_k ∼ P_c and the Gaussian random vector a_k. The drift coefficient Γ(x, ξ, Q, R) and the diffusion coefficient Λ(Q) in the PDE (6) are determined, respectively, by the conditional mean and variance of the increment x_{k+1,i} − x_{k,i}, conditioned upon the previous state vector x_k and ξ_k. Let the increment of the gradient-descent step in the learning rule (2) be

∆̃_{k,i} = x̃_{k,i} − x_{k,i} = (τ_k/√n) f((1/√n) y_k^T x_k) y_{k,i} − (τ_k/n) φ(x_{k,i}),   (17)

where x̃_{k,i} is the i-th component of the output x̃_k. Let E_k denote the conditional expectation with respect to c_k and a_k given x_k and ξ_k.
We first compute E_k[∆̃_{k,i}] and E_k[∆̃²_{k,i}]. From (1) and (17) we have

E_k[∆̃_{k,i}] = (τ_k/√n) E_k[ f(Q_k^n c_k + ẽ_{k,i} + (1/√n) a_{k,i} x_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) ] − (τ_k/n) φ(x_{k,i}),

where Q_k^n = (1/n) ξ^T x_k and ẽ_{k,i} = (1/√n)(a_k^T x_k − a_{k,i} x_{k,i}). We use the Taylor expansion of f around Q_k^n c_k + ẽ_{k,i} up to the first order and get

E_k[ f(Q_k^n c_k + ẽ_{k,i} + (1/√n) a_{k,i} x_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) ]
  = E_k[ f(Q_k^n c_k + ẽ_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) ]
  + (1/√n) x_{k,i} E_k[ f′(Q_k^n c_k + ẽ_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) a_{k,i} ] + δ_{k,i},

where δ_{k,i} collects all higher-order terms. As n → ∞, the random variable Q_k^n converges to a deterministic quantity Q_k. Moreover, ẽ_{k,i} and a_{k,i} are both zero-mean Gaussian with the covariance matrix

[ 1 − Q_k² + O(1/n)    −(1/√n) ξ_i Q_k ]
[ −(1/√n) ξ_i Q_k      1 + O(1/n)     ].

We thus have

E_k[ f′(Q_k^n c_k + ẽ_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) a_{k,i} ] = ⟨f′(Q_k c + √(1 − Q_k²) e)⟩ + o(1)

and

E_k[ f(Q_k^n c_k + ẽ_{k,i}) ((1/√n) ξ_i c_k + a_{k,i}) ]
  = ⟨ f(Q_k c + √(1 − Q_k²) e − (ξ_i/√n) Q_k a) ((1/√n) ξ_i c + a) ⟩
  = (1/√n) ξ_i [ ⟨c f(Q_k c + √(1 − Q_k²) e)⟩ − Q_k ⟨f′(Q_k c + √(1 − Q_k²) e)⟩ ] + o(1/√n),

where in the last line we use the Taylor expansion again to expand f around Q_k c + √(1 − Q_k²) e, and the bracket ⟨·⟩ denotes the average over two independent random variables c ∼ P_c and e ∼ N(0, 1). Thus, we have

E_k[∆̃_{k,i}] = (1/n) [ −ξ_i G(Q_k) + τ_k x_{k,i} ⟨f′(Q_k c + √(1 − Q_k²) e)⟩ − τ_k φ(x_{k,i}) ] + o(1/n),

where the function G(Q) is defined in (11). For the (conditional) variance, we have

E_k[∆̃²_{k,i}] = (τ_k²/n) E_k[ f²(Q_k^n c_k + ẽ_{k,i}) ] + o(1/n) = (τ_k²/n) ⟨f²(Q_k c + √(1 − Q_k²) e)⟩ + o(1/n).

Next, we deal with the normalization step. Again, we use the Taylor expansion, this time for the term ‖(1/√n) x̃_k‖^{−1} = ‖(1/√n)(x_k + ∆̃_k)‖^{−1}, up to the first order, which yields

x_{k+1} = x_k − (1/n) x_k ( x_k^T ∆̃_k + ½ ∆̃_k^T ∆̃_k ) + ∆̃_k + δ_k,

where δ_k collects all higher-order terms. Noting that (1/n) x_k^T ∆̃_k ≈ (1/n) Σ_{i=1}^n x_{k,i} E_k[∆̃_{k,i}], (1/n) ∆̃_k^T ∆̃_k ≈ (1/n) Σ_{i=1}^n E_k[∆̃²_{k,i}], and (1/n) x_k^T φ(x_k) = R_k^n → R_k, we have

E_k[x_{k+1,i} − x_{k,i}] = (1/n) Γ(x_{k,i}, ξ_i, Q_k, R_k) + o(1/n).

Finally, the normalization step does not change the variance term, and thus

E_k[(x_{k+1,i} − x_{k,i})²] = E_k[∆̃²_{k,i}] + o(1/n) = (1/n) Λ(Q_k) + o(1/n).

The above computation of E_k[x_{k+1,i} − x_{k,i}] and E_k[(x_{k+1,i} − x_{k,i})²] connects the dynamics (2) to (15). In fact, both (2) and (15) have the same limiting empirical measure, described by (6). A rigorous proof of our asymptotic result is built on the weak convergence approach for measure-valued processes. Details will be presented in an upcoming paper. Here we only provide a sketch of the general proof strategy. First, we prove the tightness of the measure-valued stochastic process (μ_t^n)_{0≤t≤T} on D([0, T], M(R²)), where D denotes the space of càdlàg processes taking values in the space of probability measures. This implies that any sequence of the measure-valued processes {(μ_t^n)_{0≤t≤T}}_n (indexed by n) must have a weakly converging subsequence. Second, we prove that any converging (sub)sequence must converge weakly to a solution of the weak form of the PDE (6). Third, we prove the uniqueness of the solution of the weak form of the PDE (6) by constructing a contraction mapping. Combining these three statements, we conclude that the whole sequence must converge to this unique solution.

Acknowledgments

This work is supported by the US Army Research Office under contract W911NF-16-1-0265 and by the US National Science Foundation under grants CCF-1319140 and CCF-1718698.

References

[1] Simon Haykin and Bernard Widrow. Least-mean-square adaptive filters, volume 31. John Wiley & Sons, 2003.

[2] Erkki Oja and Juha Karhunen.
On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. J. Math. Anal. Appl., 106(1):69–84, 1985.

[3] Michael Biehl and E. Schlösser. The dynamics of on-line principal component analysis. J. Phys. A: Math. Gen., 31(5):97–103, 1998.

[4] Aapo Hyvärinen and Erkki Oja. One-unit learning rules for independent component analysis. In Adv. Neural Inf. Process. Syst., pages 480–486, 1997.

[5] M. Biehl. An exactly solvable model of unsupervised learning. Europhys. Lett., 25(5):391–396, 1994.

[6] Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In International Conference on Machine Learning, pages 71–79, 2013.

[7] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

[8] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, and Michael I. Jordan. How to escape saddle points efficiently. arXiv:1703.00887, 2017.

[9] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Adv. Neural Inf. Process. Syst., 2013.

[10] Chris Junchi Li, Zhaoran Wang, and Han Liu. Online ICA: Understanding global dynamics of nonconvex optimization via diffusion processes. In Adv. Neural Inf. Process. Syst., pages 4961–4969, 2016.

[11] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: Online stochastic gradient for tensor decomposition. In JMLR Workshop and Conference Proceedings, volume 40, pages 1–46, 2015.

[12] Chuang Wang and Yue M. Lu. Online learning for sparse PCA in high dimensions: Exact dynamics and phase transitions. In 2016 IEEE Information Theory Workshop (ITW), pages 186–190, 2016.

[13] David Saad and Sara A. Solla. Exact solution for on-line learning in multilayer neural networks. Phys. Rev. Lett., 74(21):4337–4340, 1995.

[14] David Saad and Magnus Rattray. Globally optimal parameters for online learning in multilayer neural networks. Phys. Rev. Lett., 79(13):2578, 1997.
[15] Magnus Rattray and Gleb Basalyga. Scaling laws and local minima in Hebbian ICA. In Adv. Neural Inf. Process. Syst., pages 495–502, 2002.

[16] G. Basalyga and M. Rattray. Statistical dynamics of on-line independent component analysis. J. Mach. Learn. Res., 4(7-8):1393–1410, 2004.

[17] Henry P. McKean. Propagation of chaos for a class of non-linear parabolic equations. In Stochastic Differential Equations (Lecture Series in Differential Equations, Session 7, Catholic Univ., 1967), pages 41–57, 1967.

[18] Henry P. McKean. A class of Markov processes associated with nonlinear parabolic equations. Proc. Natl. Acad. Sci., 56(6):1907–1911, 1966.

[19] Sylvie Méléard and Sylvie Roelly-Coppoletta. A propagation of chaos result for a system of particles with moderate interaction. Stoch. Process. Appl., 26:317–332, 1987.

[20] Alain-Sol Sznitman. Topics in propagation of chaos. In Paul-Louis Hennequin, editor, École d'Été de Probabilités de Saint-Flour XIX – 1989, pages 165–251. Springer Berlin Heidelberg, 1991.

[21] G. O. Roberts, A. Gelman, and W. R. Gilks. Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab., 7(1):110–120, 1997.

[22] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Adv. Neural Inf. Process. Syst., pages 2796–2804, 2013.

[23] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory, 61(4):1985–2007, 2015.

[24] Huishuai Zhang, Yuejie Chi, and Yingbin Liang. Provable non-convex phase retrieval with outliers: Median truncated Wirtinger flow. arXiv:1603.03805, 2016.

[25] Xiaodong Li, Shuyang Ling, Thomas Strohmer, and Ke Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv:1606.04933, 2016.

[26] Magnus Rattray. Stochastic trapping in a solvable model of on-line independent component analysis. Neural Comput., 14(2):17, 2002.

[27] L. F. Cugliandolo and J. Kurchan.
On the out-of-equilibrium relaxation of the Sherrington-Kirkpatrick model. J. Phys. A: Math. Gen., 27(17):5749–5772, 1994.

[28] Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, and Lenka Zdeborová. Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula. In Adv. Neural Inf. Process. Syst., pages 424–432, 2016.

[29] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. Inf. Theory, 57(2):764–785, 2011.

[30] David Donoho and Andrea Montanari. High dimensional robust M-estimation: Asymptotic variance via approximate message passing. Probab. Theory Relat. Fields, 166(3-4):935–969, 2016.
Practical Locally Private Heavy Hitters

Raef Bassily∗  Kobbi Nissim†  Uri Stemmer‡  Abhradeep Thakurta§

Abstract

We present new practical locally differentially private heavy hitters algorithms achieving optimal or near-optimal worst-case error: TreeHist and Bitstogram. In both algorithms, server running time is Õ(n) and user running time is Õ(1), improving on the prior state-of-the-art result of Bassily and Smith [STOC 2015], which requires Õ(n^{5/2}) server time and Õ(n^{3/2}) user time. With the typically large number of participants in local algorithms (n in the millions), this reduction in time complexity, particularly on the user side, is crucial for the use of such algorithms in practice. We implemented Algorithm TreeHist to verify our theoretical analysis and compared its performance with that of Google's RAPPOR code.

1 Introduction

We revisit the problem of computing heavy hitters with local differential privacy. Such computations have already been implemented to provide organizations with valuable information about their user base while providing users with the strong guarantee that their privacy would be preserved even if the organization is subpoenaed for the entire information seen during an execution. Two prominent examples are Google's use of RAPPOR in the Chrome browser [10] and Apple's use of differential privacy in iOS 10 [16]. These tools are used for learning new words typed by users and identifying frequently used emojis and frequently accessed websites.

Differential privacy in the local model. Differential privacy [9] provides a framework for rigorously analyzing privacy risk and hence can help organizations mitigate users' privacy concerns, as it ensures that what is learned about any individual user would be (almost) the same whether the user's information is used as input to an analysis or not. Differentially private algorithms work in two main modalities: the curator model and the local model.
The curator model assumes a trusted centralized curator that collects all the personal information and then analyzes it. The local model, on the other hand, does not involve a central repository. Instead, each piece of personal information is randomized by its provider to protect privacy even if all information provided to the analysis is revealed.

Holding a central repository of personal information can become a liability to organizations in the face of security breaches, employee misconduct, subpoenas, etc. This makes the local model attractive for implementation. Indeed, in the last few years Google and Apple have deployed local differentially private analyses [10, 16].

Challenges of the local model. A disadvantage of the local model is that it requires introducing noise at a significantly higher level than what is required in the curator model. Furthermore, some tasks which are possible in the curator model are impossible in the local model [9, 14, 7]. To see the effect of noise, consider estimating the number of HIV positives in a given population of n participants. In the curator model, it suffices to add Laplace noise of magnitude O(1/ϵ) [9], i.e., independent of n. In contrast, a lower bound of Ω(√n/ϵ) is known for the local model [7]. A higher noise level implies that the number of participants n needs to be large (maybe in the millions for a reasonable choice of ϵ). An important consequence is that practical local algorithms must exhibit low time, space, and communication complexity, especially at the user side.

∗Department of Computer Science & Engineering, The Ohio State University. bassily.1@osu.edu
†Department of Computer Science, Georgetown University. kobbi.nissim@georgetown.edu
‡Center for Research on Computation and Society (CRCS), Harvard University. u@uri.co.il
§Department of Computer Science, University of California Santa Cruz. aguhatha@ucsc.edu

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
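The curator-versus-local noise gap can be seen numerically. The sketch below (our own toy example, not from the paper) estimates a count of n private bits once with a single Laplace(1/ϵ) perturbation (curator model) and once by aggregating ϵ-LDP randomized responses (local model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 100_000, 1.0
data = rng.random(n) < 0.1            # private bits; true count is about 0.1 * n
true_count = data.sum()

# Curator model: one Laplace(1/eps) noise draw on the exact count.
curator_est = true_count + rng.laplace(scale=1.0 / eps)

# Local model: each user keeps their bit with probability e^eps/(e^eps + 1)
# (randomized response); the server debiases the noisy sum.
p_keep = np.exp(eps) / (np.exp(eps) + 1)
flipped = np.where(rng.random(n) < p_keep, data, ~data)
local_est = (flipped.sum() - n * (1 - p_keep)) / (2 * p_keep - 1)

print(abs(curator_est - true_count), abs(local_est - true_count))
```

The curator estimate is off by O(1/ϵ) while the debiased local estimate fluctuates on the order of √n/ϵ, matching the Ω(√n/ϵ) lower bound cited above.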
This is the problem addressed in our work.

Heavy hitters and histograms in the local model. Assume each of n users holds an element x_i taken from a domain of size d. A histogram of this data lists (an estimate of) the multiplicity of each domain element in the data. When d is large, a succinct representation of the histogram is desired, either in the form of a frequency oracle, allowing one to approximate the multiplicity of any domain element, or of heavy hitters, listing the multiplicities of the most frequent domain elements and implicitly treating the multiplicities of all other domain elements as zero. The problem of computing histograms with differential privacy has attracted significant attention, both in the curator model [9, 5, 6] and in the local model [13, 10, 4]. Of relevance is also the work in [15]. We briefly report on the state-of-the-art heavy hitters algorithms of Bassily and Smith [4] and Thakurta et al. [16], which are most relevant for the current work. Bassily and Smith provide matching lower and upper bounds of Θ(√(n log d)/ϵ) on the worst-case error of local heavy hitters algorithms. Their local algorithm exhibits optimal communication but a rather high time complexity: server running time is Õ(n^{5/2}) and, crucially, user running time is Õ(n^{3/2}), complexity that severely hampers the practicality of this algorithm. The construction by Thakurta et al. is a heuristic with no bounds on server running time and accuracy.¹ User computation time is Õ(1), a significant improvement over [4]. See Table 1.

Our contributions. The focus of this work is the design of locally private heavy hitters algorithms with near-optimal error, keeping time, space, and communication complexity minimal. We provide two new constructions of heavy hitters algorithms, TreeHist and Bitstogram, that apply different techniques and achieve similar performance.
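To make the error metric concrete, here is a tiny worked example (ours; the formal definitions appear in Section 2) of a k-item succinct histogram and its worst-case (ℓ∞) error, where items off the list are implicitly estimated as 0:

```python
from collections import Counter

items = [3, 1, 3, 2, 3, 1, 5, 3]          # users' items v_i from V = [d]
f = Counter(items)                         # f(v) = number of users holding v

k = 2
heavy = dict(f.most_common(k))             # keep the k most frequent items
est = lambda v: heavy.get(v, 0)            # \hat f(v) = 0 for items off the list

d = 6
linf_error = max(abs(est(v) - f[v]) for v in range(1, d + 1))
print(dict(f), heavy, linf_error)
```

Here the two items dropped from the list each have true frequency 1, so the ℓ∞ error of this succinct histogram is 1.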
We implemented Algorithm TreeHist and provide measurements in comparison with RAPPOR [10] (the only currently available implementation for local histograms). Our measurements are performed in a setting that is favorable to RAPPOR (i.e., a small input domain), yet they indicate that Algorithm TreeHist performs better than RAPPOR in terms of noise level.

Table 1 details various performance parameters of algorithms TreeHist and Bitstogram, and the reader can check that these are similar up to small factors, which we ignore in the following discussion. Comparing with [4], we improve time complexity both at the server (reduced from Õ(n^{5/2}) to Õ(n)) and at the user (reduced from Õ(n^{3/2}) to O(max(log n, log d)²)). Comparing with [16], we get provable bounds on the server running time and worst-case error. Note that Algorithm Bitstogram achieves optimal worst-case error, whereas Algorithm TreeHist is almost optimal, off by a factor of √(log n).

Performance metric        | TreeHist (this work) | Bitstogram (this work) | Bassily and Smith [4]²
--------------------------|----------------------|------------------------|----------------------
Server time               | Õ(n)                 | Õ(n)                   | Õ(n^{5/2})
User time                 | Õ(1)                 | Õ(1)                   | Õ(n^{3/2})
Server processing memory  | Õ(√n)                | Õ(√n)                  | O(n²)
User memory               | Õ(1)                 | Õ(1)                   | Õ(n^{3/2})
Communication/user        | O(1)                 | O(1)                   | O(1)
Public randomness/user³   | Õ(1)                 | Õ(1)                   | Õ(n^{3/2})
Worst-case error          | O(√(n log(n) log(d)))| O(√(n log(d)))         | O(√(n log(d)))

Table 1: Achievable performance of our protocols, and comparison to the prior state-of-the-art by Bassily and Smith [4]. For simplicity, the Õ notation hides logarithmic factors in n and d. Dependencies on the failure probability β and the privacy parameter ϵ are omitted.

¹The underlying construction in [16] is of a frequency oracle.
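For intuition on the error column of Table 1, the following sketch implements a basic one-hot randomized-response frequency oracle (a crude RAPPOR-like baseline of our own, not one of the paper's algorithms) over a small domain and measures its ℓ∞ error, which scales like √n/ϵ rather than 1/ϵ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 50_000, 16, 1.0

items = rng.integers(0, d, size=n)                 # each user's item in [d]
true_freq = np.bincount(items, minlength=d)

# One-hot encode, then flip every bit independently, keeping it with probability
# e^{eps/2}/(e^{eps/2}+1); two inputs differ in two bits, so this is eps-LDP.
p = np.exp(eps / 2) / (np.exp(eps / 2) + 1)
onehot = np.zeros((n, d), dtype=bool)
onehot[np.arange(n), items] = True
reports = np.where(rng.random((n, d)) < p, onehot, ~onehot)

# Server-side debiasing of the per-item sums.
est = (reports.sum(axis=0) - n * (1 - p)) / (2 * p - 1)
linf = np.max(np.abs(est - true_freq))
print(int(linf))
```

For these parameters the ℓ∞ error lands in the hundreds, i.e., on the order of √n, while a curator-model Laplace mechanism would be off by only O(1/ϵ).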
Algorithm TreeHist identifies heavy-hitters and estimates their frequencies by scanning the levels of a binary prefix tree whose leaves correspond to dictionary items. The recovery of the heavy hitters is in a bit-by-bit manner. As the algorithm progresses down the tree it prunes all the nodes that cannot be prefixes of heavy hitters, hence leaving ˜O(√n) nodes in every depth. This is done by making queries to a frequency oracle. Once the algorithm reaches the final level of the tree it identifies the list of heavy hitters. It then invokes the frequency oracle once more on those particular items to obtain more accurate estimates for their frequencies. Algorithm Bitstogram hashes the input domain into a domain of size roughly √n. The observation behind this algorithm is that if a heavy hitter x does not collide with other heavy hitters then (h(x), xi) would have a significantly higher count than (h(x), ¬xi) where xi is the ith bit of x. This allows recovering all bits of x in parallel given an appropriate frequency oracle. We remark that even though we describe our protocols as operating in phases (e.g., scanning the levels of a binary tree), these phases are done in parallel, and our constructions are non-interactive. All users participate simultaneously, each sending a single message to the server. We also remark that while our focus is on algorithms achieving the optimal (i.e., smallest possible) error, our algorithms are also applicable when the server is interested in a larger error, in which case the server can choose a random subsample of the users to participate in the computation. This will reduce the server runtime and memory usage, and also reduce the privacy cost in the sense that the unsampled users get perfect privacy (so the server might use their data in another analysis). 2 Preliminaries 2.1 Definitions and Notation Dictionary and users items: Let V = [d]. We consider a set of n users, where each user i ∈[n] has some item vi ∈V. 
Sometimes we will also use v_i to refer to the binary representation of v_i when it is clear from the context.

Frequencies: For each item v ∈ V, we define the frequency f(v) of that item as the number of users holding it, namely, f(v) ≜ Σ_{i∈[n]} 1(v_i = v), where 1(E) is the indicator function of the event E.

A frequency oracle is a data structure, together with an algorithm, that for any given v ∈ V allows computing an estimate f̂(v) of the frequency f(v).

A succinct histogram is a data structure that provides a (short) list of items v̂_1, …, v̂_k, called the heavy hitters, together with estimates of their frequencies (f̂(v̂_j) : j ∈ [k]). The frequencies of the items not in the list are implicitly estimated as f̂(v) = 0. We measure the error of a succinct histogram by the ℓ∞ distance between the estimated and true frequencies, max_{v∈[d]} |f̂(v) − f(v)|. We will also consider the maximum error in the estimated frequencies restricted to the items on the list, that is, max_{j∈[k]} |f̂(v̂_j) − f(v̂_j)|. If a succinct histogram aims to provide ℓ∞ error η, the list does not need to contain more than O(1/η) items (since items with estimated frequencies below η may be omitted from the list, at the price of at most doubling the error).

²The user's run-time and memory in [4] can be improved to O(n) if one assumes random access to the public randomness, which we do not assume in this work.
³Our protocols can be implemented without public randomness while attaining essentially the same performance.

2.2 Local Differential Privacy

In the local model, an algorithm A : V^n → Z accesses the database v = (v_1, …, v_n) ∈ V^n only via an oracle that, given any index i ∈ [n], runs a local randomized algorithm (local randomizer) R : V → Z̃ on input v_i and returns the output R(v_i) to A.

Definition 2.1 (Local differential privacy [9, 11]).
An algorithm satisfies ϵ-local differential privacy (LDP) if it accesses the database v = (v_1, …, v_n) ∈ V^n only via invocations of a local randomizer R, and if, for all i ∈ [n], letting R^(1), …, R^(k) denote the algorithm's invocations of R on the data sample v_i, the algorithm A(·) ≜ (R^(1)(·), R^(2)(·), …, R^(k)(·)) is ϵ-differentially private. That is, for any pair of data samples v, v′ ∈ V and any S ⊆ Range(A),

Pr[A(v) ∈ S] ≤ e^ϵ Pr[A(v′) ∈ S].

3 The TreeHist Protocol

In this section, we give a brief overview of our construction, which is based on a compressed, noisy version of the count sketch. To maintain clarity of the main ideas, we give here a high-level description; we refer to the full version of this work [3] for a detailed description of the full construction. We first introduce some objects and public parameters that will be used in the construction.

Prefixes: For a binary string v, we will use v[1 : ℓ] to denote the ℓ-bit prefix of v. Let V_ℓ = {v ∈ {0, 1}^ℓ} for ℓ ∈ [log d]. Note that the elements of these sets can be arranged in a binary prefix tree of depth log d, where the nodes at level ℓ of the tree represent all binary strings of length ℓ. The items of the dictionary V form the bottommost level of that tree.

Hashes: Let t, m be positive integers to be specified later. We will consider a set of t pairs of hash functions {(h_1, g_1), …, (h_t, g_t)}, where for each i ∈ [t], h_i : V → [m] and g_i : V → {−1, +1} are independently and uniformly chosen pairwise-independent hash functions.

Basis matrix: Let W ∈ {−1, +1}^{m×m} be √m · H_m, where H_m is the Hadamard transform matrix of size m. It is important to note that we do not need to store this matrix: the value of any entry of W can be computed in O(log m) bit operations given the (row, column) index of that entry.
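The entrywise formula alluded to here can be made explicit. For the Sylvester ordering of the Hadamard matrix (one standard convention, assumed here), the (r, c) entry of the ±1 matrix is (−1)^{popcount(r AND c)}; a quick check of our popcount-based implementation against the explicit recursive construction:

```python
import numpy as np

def hadamard_entry(r, c):
    """Entry (r, c) of the +/-1 Hadamard matrix of size 2^k (Sylvester ordering):
    (-1)^{popcount(r AND c)}, i.e., O(log m) bit operations per entry."""
    return 1 - 2 * (bin(r & c).count("1") & 1)

# Cross-check against the explicit Sylvester construction H_{2m} = [[H, H], [H, -H]].
H = np.array([[1]])
while H.shape[0] < 16:
    H = np.block([[H, H], [H, -H]])

m = 16
assert all(hadamard_entry(r, c) == H[r, c] for r in range(m) for c in range(m))
print("all", m * m, "entries match")
```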
Global parameters: The total number of users n, the size m of the Hadamard matrix, the number of hash pairs t, the privacy parameter ϵ, the confidence parameter β, and the hash functions (h_1, g_1), …, (h_t, g_t) are assumed to be public information. We set t = O(log(n/β)) and m = O(√(n/log(n/β))).

Public randomness: In addition to the t hash pairs {(h_1, g_1), …, (h_t, g_t)}, we assume that the server creates a random partition Π : [n] → [log d] × [t] that assigns to each user i ∈ [n] a random pair (ℓ_i, j_i) ← [log d] × [t], and another random function Q : [n] → [m] that assigns⁴ to each user i a uniformly random index r_i ← [m]. We assume that the random indices ℓ_i, j_i, r_i are shared between the server and each user.

⁴We could have grouped Π and Q into one random function mapping [n] to [log d] × [t] × [m]; however, we prefer to split them for clarity of exposition, as each source of randomness will be used for a different role.

First, we describe the two main modules of our protocol.

3.1 A local randomizer: LocalRnd

For each i ∈ [n], user i runs her own independent copy of a local randomizer, denoted LocalRnd, to generate her private report. LocalRnd of user i starts by acquiring the index triple (ℓ_i, j_i, r_i) ← [log d] × [t] × [m] from the public randomness. For each user, LocalRnd is invoked twice in the full protocol: once during the first phase of the protocol (the pruning phase), where the high-frequency items (heavy hitters) are identified, and a second time during the final phase (the estimation phase), to enable the protocol to get better estimates for the frequencies of the heavy hitters.

In the first invocation, LocalRnd of user i performs its computation on the ℓ_i-bit prefix of the item v_i of user i, while in the second invocation it performs the computation on the entire string v_i. Apart from this, LocalRnd follows similar steps in both invocations.
It first selects the hash pair (h_{j_i}, g_{j_i}) and computes c_i = h_{j_i}(v_i[1:ℓ̃]) (where ℓ̃ = ℓ_i in the first invocation and ℓ̃ = log d in the second invocation, and v_i[1:ℓ̃] is the ℓ̃-bit prefix of v_i). It then computes a bit x_i = g_{j_i}(v_i[1:ℓ̃]) · W_{r_i,c_i} (where W_{r,c} denotes the (r, c) entry of the basis matrix W). Finally, to guarantee ϵ-local differential privacy, it generates a randomized response y_i based on x_i (i.e., y_i = x_i with probability e^{ϵ/2}/(1 + e^{ϵ/2}) and y_i = −x_i with probability 1/(1 + e^{ϵ/2})), which is sent to the server.

Our local randomizer can be thought of as a transformed, compressed (via sampling), and randomized version of the count sketch [8]. In particular, LocalRnd starts off with steps similar to the standard count sketch algorithm, but then deviates from it: it applies the Hadamard transform to the user's signal and samples one bit from the result. By doing so, we achieve significant savings in space and communication without sacrificing accuracy.

3.2 A frequency oracle: FreqOracle

Suppose we want to allow the server to estimate the frequencies of some given subset V̂ ⊆ {0,1}^ℓ, for some given ℓ ∈ [log d], based on the users' noisy reports. We give a protocol, denoted FreqOracle, for accomplishing this task. For each queried item v̂ ∈ V̂ and each hash index j ∈ [t], FreqOracle computes c = h_j(v̂), then collects the noisy reports of the set of users I_{ℓ,j} containing every user i whose pair of prefix and hash indices (ℓ_i, j_i) matches (ℓ, j). Next, it estimates the inverse Hadamard transform of the compressed, noisy signal of each user in I_{ℓ,j}. In particular, for each i ∈ I_{ℓ,j}, it computes y_i W_{r_i,c}, which can be described as a multiplication between y_i e_{r_i} (where e_{r_i} is the indicator vector with 1 at the r_i-th position) and the scaled Hadamard matrix W, followed by selecting the c-th entry of the resulting vector. This brings us back to the standard count sketch representation.
It then sums all the results and multiplies the outcome by g_j(v̂) to obtain an estimate f̂_j(v̂) for the frequency of v̂. As in the count sketch algorithm, this is done for every j ∈ [t], after which FreqOracle obtains a high-confidence estimate by computing the median of the t frequency estimates.

3.3 The protocol: TreeHist

The protocol is easiest to describe via operations over the nodes of the prefix tree V of depth log d (described earlier). The protocol runs through two main phases: the pruning (or scanning) phase and the final estimation phase.

In the pruning phase, the protocol scans the levels of the prefix tree starting from the top level (which contains just 0 and 1) down to the bottom level (which contains all items of the dictionary). For a given node at level ℓ ∈ [log d], using FreqOracle as a subroutine, the protocol obtains an estimate for the frequency of the corresponding ℓ-bit prefix. For any ℓ ∈ [log(d) − 1], before the protocol moves to level ℓ+1 of the tree, it prunes all the nodes in level ℓ that cannot be prefixes of actual heavy hitters (high-frequency items in the dictionary). Then, as it moves to level ℓ+1, the protocol considers only the children of the surviving nodes in level ℓ. The construction guarantees that, with high probability, the number of surviving nodes in each level cannot exceed O(√(n / (log(d) log(n)))). Hence, the total number of nodes queried by the protocol (i.e., submitted to FreqOracle) is at most O(√(n log(d) / log(n))).

In the second and final phase, after reaching the final level of the tree, the protocol will already have identified a list of candidate heavy hitters; however, their estimated frequencies may not be as accurate as desired, due to the large variance caused by the random partitioning of users across all the levels of the tree.
Hence, it invokes the frequency oracle once more on those particular items, and this time the sampling variance is reduced, as the set of users is partitioned only across the t hash pairs (rather than across log(d) × t bins as in the pruning phase). By doing this, the server obtains more accurate estimates for the frequencies of the identified heavy hitters. The privacy and accuracy guarantees are stated below; the full details are given in the full version [3].

3.4 Privacy and Utility Guarantees

Theorem 3.1. Protocol TreeHist is ϵ-local differentially private.

Theorem 3.2. There is a number η = O(√(n log(n/β) log(d)))/ϵ such that, with probability at least 1 − β, the output list of the TreeHist protocol satisfies the following properties:
1. it contains all items v ∈ V whose true frequencies are above 3η;
2. it does not contain any item v ∈ V whose true frequency is below η;
3. every frequency estimate in the output list is accurate up to an error ≤ O(√(n log(n/β)))/ϵ.

4 Locally Private Heavy-hitters – bit by bit

We now present a simplified description of our second protocol, which captures most of the ideas. We refer the reader to the full version of this work for the complete details.

First Step: Frequency Oracle. Recall that a frequency oracle is a protocol that, after communicating with the users, outputs a data structure capable of approximating the frequency of every domain element v ∈ V. So, if we were to allow the server a runtime linear in the domain size |V| = d, then a frequency oracle would suffice for computing histograms. As we are interested in protocols with significantly lower runtime, we will only use a frequency oracle as a subroutine, and query it only for (roughly) √n elements.

Let Z ∈ {±1}^{d×n} be a matrix chosen uniformly at random, and assume that Z is publicly known.⁵ That is, for every domain element v ∈ V and every user j ∈ [n], we have a random bit Z[v, j] ∈ {±1}.
As Z is publicly known, every user j can identify her corresponding bit Z[v_j, j], where v_j ∈ V is the input of user j. Now consider a protocol in which users send randomized responses of their corresponding bits: user j sends y_j = Z[v_j, j] with probability 1/2 + ϵ/2 and y_j = −Z[v_j, j] with probability 1/2 − ϵ/2. We can then estimate the frequency of every domain element v ∈ V as

a(v) = (1/ϵ) · Σ_{j∈[n]} y_j · Z[v, j].

To see that a(v) is accurate, observe that a(v) is the sum of n independent random variables (one for every user). For the users j holding the input v being estimated (that is, v_j = v), we have (1/ϵ) E[y_j · Z[v, j]] = 1. For the other users, y_j and Z[v, j] are independent, and hence E[y_j · Z[v, j]] = E[y_j] · E[Z[v, j]] = 0. That is, a(v) can be expressed as the sum of n independent random variables: f(v) variables with expectation 1 and (n − f(v)) variables with expectation 0. The fact that a(v) is an accurate estimate of f(v) now follows from the Hoeffding bound.

Lemma 4.1 (Algorithm Hashtogram). Let ϵ ≤ 1. Algorithm Hashtogram satisfies ϵ-LDP. Furthermore, with probability at least 1 − β, algorithm Hashtogram answers every query v ∈ V with a(v) satisfying |a(v) − f(v)| ≤ O((1/ϵ) · √(n log(nd/β))).

Second Step: Identifying Heavy-Hitters. Let us assume that we have a frequency oracle protocol with worst-case error τ. We now want to use our frequency oracle to construct a protocol that operates in two steps: first, it identifies a small set of potential "heavy hitters", i.e., domain elements that appear in the database at least 2τ times; afterwards, it uses the frequency oracle to estimate the frequencies of those potential heavy elements.⁶ Let h : V → [T] be a (publicly known) random hash function mapping domain elements into [T], where T will be set later.⁷ We will now use h to identify the heavy hitters.

⁵As we describe in the full version of this work, Z has a short description, as it need not be uniform.
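The first-step estimator a(v) is easy to simulate end to end. The sketch below is an illustration of the estimator's unbiasedness, not a hardened LDP implementation (with ϵ = 1 the response is noiseless, which is convenient for checking the arithmetic):

```python
import random

def simulate_hashtogram_step(inputs, d, eps):
    """Simulate the first-step frequency estimator (a sketch).

    inputs: list of user items, each an int in range(d).
    Returns the estimate a(v) for every v in range(d), using the
    public random matrix Z in {-1,+1}^{d x n} and a per-user
    randomized response with truth probability 1/2 + eps/2."""
    n = len(inputs)
    Z = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(d)]
    p = 0.5 + eps / 2
    y = [Z[v][j] if random.random() < p else -Z[v][j]
         for j, v in enumerate(inputs)]
    # a(v) = (1/eps) * sum_j y_j * Z[v, j]
    return [sum(y[j] * Z[v][j] for j in range(n)) / eps for v in range(d)]
```

Each term for a user holding v contributes 1 in expectation after the 1/ϵ rescaling; all other terms contribute 0, so a(v) concentrates around f(v).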
⁶Even though we describe the protocol as having two steps, the necessary communication for these steps can be done in parallel; hence, our protocol has only one round of communication.
⁷As with the matrix Z, the hash function h can have a short description length.

[Figure 1: Estimated frequency vs. privacy (ϵ) on the NLTK Brown corpus; curves show the true and private estimated frequencies for items of rank 1, 10, and 100.]

[Figure 2: Frequency estimates on the Demo3 experiment from RAPPOR; comparison between the true frequencies, our count-sketch-based estimates, and RAPPOR's estimates.]

To that end, let v∗ ∈ V denote such a heavy hitter, appearing at least 2τ times in the database S, and denote t∗ = h(v∗). Assuming that T is big enough, w.h.p. v∗ is the only input element (from S) that is mapped (by h) into the hash value t∗. Assuming that this is indeed the case, we will now identify v∗ bit by bit.

For ℓ ∈ [log d], denote S_ℓ = (h(v_j), v_j[ℓ])_{j∈[n]}, where v_j[ℓ] is bit ℓ of v_j. That is, S_ℓ is a database over the domain [T] × {0,1}, where the row corresponding to user j is (h(v_j), v_j[ℓ]). Observe that every user can compute her own row locally. As v∗ is a heavy hitter, for every ℓ ∈ [log d] we have that (t∗, v∗[ℓ]) appears in S_ℓ at least 2τ times. On the other hand, as we assumed that v∗ is the only input element mapped into t∗, the pair (t∗, 1 − v∗[ℓ]) does not appear in S_ℓ at all. Recall that our frequency oracle has error at most τ; hence, we can use it to accurately determine the bits of v∗. To make things more concrete, consider the protocol that, for every hash value t ∈ [T], every coordinate ℓ ∈ [log d], and every bit b ∈ {0, 1}, obtains an estimate (using the frequency oracle) of the multiplicity of (t, b) in S_ℓ (so there are log d invocations of the frequency oracle, and a total of 2T log d estimations).
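The bit-by-bit recovery just described can be sketched with an exact (non-private) counter standing in for the frequency oracle. Items are modeled as ints, bit ℓ is taken LSB-first, and the helper name is our own; this only illustrates the reconstruction logic, not the private protocol:

```python
from collections import Counter

def identify_candidates(inputs, T, log_d, h):
    """For every hash value t in [T], recover a candidate item bit by
    bit: at each position ell, pick the bit b for which (t, b) is the
    more frequent pair in S_ell = {(h(v_j), bit ell of v_j)}."""
    # Exact multiplicities of (hash value, bit) pairs, one table per
    # bit position; the real protocol gets these from the LDP oracle.
    counts = [Counter((h(v), (v >> ell) & 1) for v in inputs)
              for ell in range(log_d)]
    candidates = []
    for t in range(T):
        value = 0
        for ell in range(log_d):
            if counts[ell][(t, 1)] > counts[ell][(t, 0)]:
                value |= 1 << ell
        candidates.append(value)
    return candidates
```

A heavy hitter with no hash collision is recovered exactly, since at its hash value every bit comparison is decided by a margin of at least 2τ versus 0.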
Now, for every t ∈ [T], define v̂(t) such that bit ℓ of v̂(t) is the bit b for which (t, b) is more frequent than (t, 1−b) in S_ℓ. By the above discussion, we will have v̂(t∗) = v∗. That is, the protocol identifies a set of T domain elements containing all of the heavy hitters. The frequencies of the identified heavy hitters can then be estimated using the frequency oracle.

Remark 4.1. As should be clear from the above discussion, it suffices to take T ≳ n², as this ensures that there are no collisions among different input elements. As we only care about collisions between heavy hitters (elements appearing in S at least √n times), it would suffice to take T ≳ n to ensure that w.h.p. there are no collisions between heavy hitters. In fact, we could even take T ≳ √n, which ensures that a heavy hitter v∗ has no collisions with constant probability, and then amplify our confidence using repetitions.

Lemma 4.2 (Algorithm Bitstogram). Let ϵ ≤ 1. Algorithm Bitstogram satisfies ϵ-LDP. Furthermore, the algorithm returns a list L of length Õ(√n) satisfying:
1. With probability 1 − β, for every (v, a) ∈ L we have |a − f(v)| ≤ O((1/ϵ) √(n log(n/β))).
2. With probability 1 − β, every v ∈ V such that f(v) ≥ O((1/ϵ) √(n log(d/β) log(1/β))) is in L.

5 Empirical Evaluation

We now discuss implementation details of the algorithms described in Section 3.⁸ The main objective of this section is to emphasize the empirical efficacy of our algorithms. [16] recently claimed space optimality for a similar problem, but a formal analysis (or empirical evidence) was not provided.

5.1 Evaluation of the Private Frequency Oracle

The objective of this experiment is to test the efficacy of our algorithm in estimating the frequencies of a known dictionary of user items under local differential privacy. We estimate the estimation error while varying the privacy parameter ϵ. (See Section 2.1 for a refresher on the notation.)
We ran the experiment (Figure 1) on a data set drawn uniformly at random from the NLTK Brown corpus [1]. The data set we created has n = 10 million samples drawn i.i.d. from the corpus with replacement (which corresponds to 25,991 unique words), and the system parameters are chosen as follows: number of data samples (n): 10 million; range of the hash function (m): √n; number of hash functions (t): 285. For the hash functions, we used the prefix bits of SHA-256. The estimated frequency is scaled by the number of samples to normalize the result, and each experiment is averaged over ten runs. In this plot, the rank corresponds to the rank of a domain element in the distribution of true frequencies in the data set.

Observations: (i) The plots corroborate the fact that the frequency oracle is indeed unbiased: the average frequency estimate (over ten runs) for each percentile is within one standard deviation of the corresponding true estimate. (ii) The error in the estimates goes down significantly as the privacy parameter ϵ is increased.

Comparison with RAPPOR [10]. Here we compare our implementation with the only publicly available code for locally private frequency estimation. We took a snapshot of the RAPPOR code base (https://github.com/google/rappor) on May 9th, 2017. To perform a fair comparison, we tested our algorithm against one of the demo experiments available for RAPPOR (Demo3, using the demo.sh script) with the same privacy parameter ϵ = ln(3), the same number of data samples n = 1 million, and the same data set as generated by the demo.sh script. In Figure 2 we observe that for higher frequencies both RAPPOR and our algorithm perform similarly, with ours being slightly better. However, in lower-frequency regimes, the RAPPOR estimates are zero most of the time, while our estimates are closer to the true values. We do not claim our algorithm to be universally better than RAPPOR on all data sets.
Rather, through our experiments we want to motivate the need for a more thorough empirical comparison of the two algorithms.

5.2 Private Heavy-hitters

In this section, we take on the harder task of identifying the heavy hitters, rather than merely estimating the frequencies of domain elements. We run our experiments on the NLTK data set described earlier, with the same default system parameters as in Section 5.1 along with n = 10 million and ϵ = 2, except that now we assume the domain is unknown. As part of our algorithm design, we assume that every element in the domain is over the English alphabet [a–z] and has length exactly six letters. Words longer than six letters were truncated, and words shorter than six letters were padded with ⊥ at the end. We set 15·√n as the threshold for being a heavy hitter. As with most natural language data sets, the NLTK Brown data follows a power-law distribution with a very long tail. (See the full version of this work for a visualization of the distribution.)

In Table 2 we state the corresponding precision and recall values and the false positive rate. The total number of positive examples is 22 (out of 25,991 unique words), and the total number of negative examples is roughly 3 × 10⁸. The total number of false positives is FP = 60, and of false negatives FN = 3. This corresponds to a vanishing FP rate, considering that the total number of negative examples is roughly 3 × 10⁸. In practice, if there are false positives, they can easily be pruned using domain expertise. For example, if we are trying to identify new words that users are typing in English [2], then, using domain expertise of English, a set of false positives can easily be ruled out by inspecting the list of heavy hitters output by the algorithm. On the other hand, this cannot be done for false negatives; hence, it is important to have a high recall value.
The fact that we have three false negatives is because the frequencies of those words are very close to the threshold of 15√n. While there are other algorithms for finding heavy hitters [4, 13], they either do not provide any theoretical guarantee on the utility [10, 12, 16] or lack a scalable and efficient implementation.

⁸The experiments are performed without the Hadamard compression during data transmission.

Table 2: Private heavy hitters with threshold 15√n. Here σ denotes the standard deviation; TPR and FPR denote the true positive rate and false positive rate, respectively.

Data set | Unique words | Precision | Recall (TPR) | FPR
NLTK Brown corpus | 25,991 | 0.24 (σ = 0.04) | 0.86 (σ = 0.05) | 2 × 10⁻⁷

References

[1] NLTK Brown corpus. www.nltk.org.
[2] Apple tries to peek at user habits without violating privacy. The Wall Street Journal, 2016.
[3] Raef Bassily, Kobbi Nissim, Uri Stemmer, and Abhradeep Thakurta. Practical locally private heavy hitters. CoRR, abs/1707.04982, 2017.
[4] Raef Bassily and Adam Smith. Local, private, efficient protocols for succinct histograms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 127–135. ACM, 2015.
[5] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. Theory of Computing, 12(1):1–61, 2016.
[6] Mark Bun, Kobbi Nissim, Uri Stemmer, and Salil P. Vadhan. Differentially private release and learning of threshold functions. In IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS 2015), pages 634–649. IEEE Computer Society, 2015.
[7] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Optimal lower bound for differentially private multi-party aggregation. In Algorithms – ESA 2012, 20th Annual European Symposium, Ljubljana, Slovenia, September 10-12, 2012.
Proceedings, volume 7501 of Lecture Notes in Computer Science, pages 277–288. Springer, 2012.
[8] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In ICALP, 2002.
[9] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[10] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In CCS, 2014.
[11] Alexandre Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant. Limiting privacy breaches in privacy preserving data mining. In PODS, pages 211–222. ACM, 2003.
[12] Giulia Fanti, Vasyl Pihur, and Úlfar Erlingsson. Building a RAPPOR with the unknown: Privacy-preserving learning of associations and data dictionaries. arXiv preprint arXiv:1503.01214, 2015.
[13] Justin Hsu, Sanjeev Khanna, and Aaron Roth. Distributed private heavy hitters. In International Colloquium on Automata, Languages, and Programming, pages 461–472. Springer, 2012.
[14] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011.
[15] Nina Mishra and Mark Sandler. Privacy via pseudorandom sketches. In Proceedings of the Twenty-Fifth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 143–152. ACM, 2006.
[16] A. G. Thakurta, A. H. Vyrros, U. S. Vaishampayan, G. Kapoor, J. Freudiger, V. R. Sridhar, and D. Davidson. Learning new words. US Patent 9594741, 2017.
Mixture-Rank Matrix Approximation for Collaborative Filtering

Dongsheng Li¹, Chao Chen¹, Wei Liu²∗, Tun Lu³,⁴, Ning Gu³,⁴, Stephen M. Chu¹
¹IBM Research – China  ²Tencent AI Lab, China  ³School of Computer Science, Fudan University, China  ⁴Shanghai Key Laboratory of Data Science, Fudan University, China
{ldsli, cshchen, schu}@cn.ibm.com, wliu@ee.columbia.edu, {lutun, ninggu}@fudan.edu.cn

Abstract

Low-rank matrix approximation (LRMA) methods have achieved excellent accuracy among today's collaborative filtering (CF) methods. In existing LRMA methods, the rank of user/item feature matrices is typically fixed, i.e., the same rank is adopted to describe all users/items. However, our studies show that submatrices with different ranks can coexist in the same user-item rating matrix, so that approximations with fixed ranks cannot perfectly describe the internal structures of the rating matrix, leading to inferior recommendation accuracy. In this paper, a mixture-rank matrix approximation (MRMA) method is proposed, in which user-item ratings are characterized by a mixture of LRMA models with different ranks. Meanwhile, a learning algorithm capitalizing on iterated conditional modes is proposed to tackle the non-convex optimization problem pertaining to MRMA. Experimental studies on MovieLens and Netflix datasets demonstrate that MRMA can outperform six state-of-the-art LRMA-based CF methods in terms of recommendation accuracy.

1 Introduction

Low-rank matrix approximation (LRMA) is one of the most popular approaches in today's collaborative filtering (CF) methods due to its high accuracy [11, 12, 13, 17]. Given a targeted user-item rating matrix R ∈ R^{m×n}, the general goal of LRMA is to find two rank-k matrices U ∈ R^{m×k} and V ∈ R^{n×k} such that R ≈ R̂ = UVᵀ. After obtaining the user and item feature matrices, the recommendation score of the i-th user on the j-th item is given by the dot product of their corresponding feature vectors, i.e., U_i V_jᵀ.
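In code, the LRMA scoring rule is just a dot product. A minimal sketch (our own illustration; the matrices are plain nested lists rather than a learned factorization):

```python
def predict_score(U, V, i, j):
    """LRMA recommendation score of user i on item j: U_i . V_j,
    where U (m x k) and V (n x k) are the feature matrices stored
    as lists of k-dimensional rows."""
    return sum(u * v for u, v in zip(U[i], V[j]))

def approx_matrix(U, V):
    """The full rank-k approximation R_hat = U V^T as nested lists."""
    return [[predict_score(U, V, i, j) for j in range(len(V))]
            for i in range(len(U))]
```

In practice U and V come from an optimizer (e.g., PMF-style gradient descent); the scoring rule itself is the same regardless of how the factors are learned.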
In existing LRMA methods [12, 13, 17], the rank k is considered fixed, i.e., the same rank is adopted to describe all users and items. However, in many real-world user-item rating matrices, e.g., MovieLens and Netflix, users/items have significantly varying numbers of ratings, so that submatrices with different ranks can coexist. For instance, a submatrix containing users and items with few ratings should be of a low rank, e.g., 10 or 20, while a submatrix containing users and items with many ratings may be of a relatively higher rank, e.g., 50 or 100. Adopting a fixed rank for all users and items cannot perfectly model the internal structures of the rating matrix, which leads to imperfect approximations as well as degraded recommendation accuracy.

In this paper, we propose a mixture-rank matrix approximation (MRMA) method, in which user-item ratings are represented by a mixture of LRMA models with different ranks. For each user/item, a probability distribution with a Laplacian prior is exploited to describe its relationship with the different LRMA models, while a joint distribution over user-item pairs is employed to describe the relationship between the user-item ratings and the different LRMA models. To cope with the non-convex optimization problem associated with MRMA, a learning algorithm capitalizing on iterated conditional modes (ICM) [1] is proposed, which can obtain a local maximum of the joint probability by iteratively maximizing the probability of each variable conditioned on the rest. Finally, we evaluate the proposed MRMA method on MovieLens and Netflix datasets. The experimental results show that MRMA achieves better accuracy than state-of-the-art LRMA-based CF methods, further boosting the performance of recommender systems leveraging matrix approximation.

∗This work was conducted while the author was with IBM.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Related Work

Low-rank matrix approximation methods have been leveraged by much recent work to achieve accurate collaborative filtering, e.g., PMF [17], BPMF [16], APG [19], GSMF [20], SMA [13], etc. These methods first train one user feature matrix and one item feature matrix, and then use these feature matrices for all users and items without any adaptation. Moreover, all of these methods adopt fixed rank values for the targeted user-item rating matrices. Therefore, as analyzed in this paper, submatrices with different ranks can coexist in the rating matrices, and adopting a single fixed rank cannot achieve optimal matrix approximation.

Besides stand-alone matrix approximation methods, ensemble methods, e.g., DFC [15], LLORMA [12], WEMAREC [5], etc., and mixture models, e.g., MPMA [4], etc., have been proposed to improve recommendation accuracy and/or scalability by weighting different base models across different users/items. However, these methods do not consider using different ranks to derive different base models. It would be desirable to borrow the idea of mixture-rank matrix approximation (MRMA) to generate more accurate base models within these methods and further enhance their accuracy.

In many matrix approximation-based collaborative filtering methods, auxiliary information, e.g., implicit feedback [9], social information [14], contextual information [10], etc., is introduced to improve the recommendation quality of pure matrix approximation methods. The idea of MRMA is orthogonal to these methods and can thus be employed by them to further improve their recommendation accuracy.

In general low-rank matrix approximation methods, it is non-trivial to directly determine the maximum rank of a targeted matrix [2, 3]. Candès et al. [3] proved that a non-convex rank minimization problem can be equivalently transformed into a convex nuclear norm minimization problem.
Based on this finding, we can easily determine the range of ranks for MRMA and choose different K values (the maximum rank in MRMA) for different datasets.

3 Problem Formulation

In this paper, upper-case letters such as R, U, V denote matrices, and k denotes the rank used for matrix approximation. For a targeted user-item rating matrix R ∈ R^{m×n}, m denotes the number of users, n denotes the number of items, and R_{i,j} denotes the rating of the i-th user on the j-th item. R̂ denotes the low-rank approximation of R. The general goal of rank-k matrix approximation is to determine user and item feature matrices U ∈ R^{m×k} and V ∈ R^{n×k} such that R ≈ R̂ = UVᵀ. The rank k is considered low, because k ≪ min{m, n} can achieve good performance in many CF applications.

In real-world rating matrices, e.g., MovieLens and Netflix, users/items have varying numbers of ratings, so a lower rank that best describes users/items with fewer ratings will easily underfit the users/items with more ratings, and similarly a higher rank will easily overfit the users/items with fewer ratings. A case study was conducted on the MovieLens 1M dataset (with about 1 million ratings from 6,000 users on 4,000 movies), which confirms that internal submatrices with different ranks indeed coexist in the rating matrix. Here, we run the probabilistic matrix factorization (PMF) method [17] with k = 5 and k = 50, and then compare the root mean square errors (RMSEs) for the users/items with fewer than 10 ratings and with more than 50 ratings. As shown in Table 1, when the rank is 5, the users/items with fewer than 10 ratings achieve lower RMSEs than when the rank is 50. This indicates that the PMF model overfits the users/items with fewer than 10 ratings when k = 50. Similarly, we can conclude that the PMF model underfits the users/items with more than 50 ratings when k = 5.
Moreover, PMF with k = 50 achieves a lower RMSE (higher accuracy) than PMF with k = 5, but the improvement comes at the cost of accuracy for the users and items with a small number of ratings, e.g., fewer than 10. This study shows that PMF with fixed rank values cannot perfectly model the internal mixture-rank structure of the rating matrix. To this end, it is desirable to model users and items with different ranks.

Table 1: The root mean square errors (RMSEs) of PMF [17] for users/items with different numbers of ratings when rank k = 5 and k = 50.

 | rank = 5 | rank = 50
#user ratings < 10 | 0.9058 | 0.9165
#user ratings > 50 | 0.8416 | 0.8352
#item ratings < 10 | 0.9338 | 0.9598
#item ratings > 50 | 0.8520 | 0.8418
All | 0.8614 | 0.8583

4 Mixture-Rank Matrix Approximation (MRMA)

[Figure 1: The graphical model for the proposed mixture-rank matrix approximation (MRMA) method, with observed ratings R_{i,j}, per-rank features U_i^k, V_j^k, weights α_i^k, β_j^k, and hyperparameters σ², σ_U², σ_V², µ_α, b_α, µ_β, b_β, for i ∈ {1, ..., m}, j ∈ {1, ..., n}, k ∈ {1, ..., K}.]

Following the idea of PMF, we exploit a probabilistic model with Gaussian noise to model the ratings [17]. As shown in Figure 1, the conditional distribution over the observed ratings for the mixture-rank model is defined as follows:

Pr(R | U, V, α, β, σ²) = Π_{i=1}^{m} Π_{j=1}^{n} [ Σ_{k=1}^{K} α_i^k β_j^k N(R_{i,j} | U_i^k (V_j^k)ᵀ, σ²) ]^{1_{i,j}},   (1)

where N(x | µ, σ²) denotes the probability density function of a Gaussian distribution with mean µ and variance σ². K is the maximum rank among all internal structures of the user-item rating matrix. α^k and β^k are the weight vectors of the rank-k matrix approximation model for all users and items, respectively; thus, α_i^k and β_j^k denote the weights of the rank-k model for the i-th user and the j-th item. U^k and V^k are the feature matrices of the rank-k matrix approximation model for all users and items, respectively; likewise, U_i^k and V_j^k denote the feature vectors of the rank-k model for the i-th user and the j-th item.
1_{i,j} is an indicator function, which is 1 if R_{i,j} is observed and 0 otherwise. By placing a zero-mean isotropic Gaussian prior [6, 17] on the user and item feature vectors, we have

Pr(U^k | σ_U²) = Π_{i=1}^{m} N(U_i^k | 0, σ_U² I),  Pr(V^k | σ_V²) = Π_{j=1}^{n} N(V_j^k | 0, σ_V² I).   (2)

For α^k and β^k, we choose a Laplacian prior, because the models with the most suitable ranks for each user/item should receive large weights, i.e., α^k and β^k should be sparse. By placing the Laplacian prior on the user and item weight vectors, we have

Pr(α^k | µ_α, b_α) = Π_{i=1}^{m} L(α_i^k | µ_α, b_α),  Pr(β^k | µ_β, b_β) = Π_{j=1}^{n} L(β_j^k | µ_β, b_β),   (3)

where µ_α and b_α are the location and scale parameters of the Laplacian distribution for α, and accordingly µ_β and b_β are the location and scale parameters for β. The log of the posterior distribution over the user and item features and weights is given as follows:

l = ln Pr(U, V, α, β | R, σ², σ_U², σ_V², µ_α, b_α, µ_β, b_β)
  ∝ ln Pr(R | U, V, α, β, σ²) Pr(U | σ_U²) Pr(V | σ_V²) Pr(α | µ_α, b_α) Pr(β | µ_β, b_β)
  = Σ_{i=1}^{m} Σ_{j=1}^{n} 1_{i,j} ln Σ_{k=1}^{K} α_i^k β_j^k N(R_{i,j} | U_i^k (V_j^k)ᵀ, σ²)
    − (1/2σ_U²) Σ_{k=1}^{K} Σ_{i=1}^{m} (U_i^k)² − (1/2σ_V²) Σ_{k=1}^{K} Σ_{j=1}^{n} (V_j^k)²
    − (1/2) Km ln σ_U² − (1/2) Kn ln σ_V²
    − (1/b_α) Σ_{k=1}^{K} Σ_{i=1}^{m} |α_i^k − µ_α| − (1/b_β) Σ_{k=1}^{K} Σ_{j=1}^{n} |β_j^k − µ_β|
    − (1/2) Km ln b_α² − (1/2) Kn ln b_β² + C,   (4)

where C is a constant that does not depend on any parameters. Since the above optimization problem is difficult to solve directly, we obtain a lower bound via Jensen's inequality and then optimize the following lower bound:

l′ = − (1/2σ²) Σ_{i=1}^{m} Σ_{j=1}^{n} 1_{i,j} Σ_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)ᵀ)²
     − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{n} 1_{i,j} ln σ²
     − (1/2σ_U²) Σ_{k=1}^{K} Σ_{i=1}^{m} (U_i^k)² − (1/2σ_V²) Σ_{k=1}^{K} Σ_{j=1}^{n} (V_j^k)²
     − (1/2) Km ln σ_U² − (1/2) Kn ln σ_V²
     − (1/b_α) Σ_{k=1}^{K} Σ_{i=1}^{m} |α_i^k − µ_α| − (1/b_β) Σ_{k=1}^{K} Σ_{j=1}^{n} |β_j^k − µ_β|
     − (1/2) Km ln b_α² − (1/2) Kn ln b_β² + C.   (5)
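The data-fit term of the lower bound l′ (the first weighted squared-error sum) can be written out directly. A sketch under hypothetical container conventions of our own choosing: R_obs maps observed (i, j) pairs to ratings, while U[k][i] and V[k][j] hold the rank-k feature vectors and alpha[k][i], beta[k][j] the mixture weights:

```python
def data_fit_term(R_obs, U, V, alpha, beta, sigma2):
    """First term of the Jensen lower bound l': minus the weighted
    squared error over observed ratings, scaled by 1/(2*sigma2)."""
    total = 0.0
    for (i, j), r in R_obs.items():
        for k in range(len(U)):
            # per-rank prediction U_i^k . V_j^k
            pred = sum(u * v for u, v in zip(U[k][i], V[k][j]))
            total += alpha[k][i] * beta[k][j] * (r - pred) ** 2
    return -total / (2 * sigma2)
```

Maximizing l′ pushes each observed rating toward the rank-k predictions that carry large weight α_i^k β_j^k, while the remaining terms of l′ act as regularizers on the features and weights.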
If we keep the hyperparameters of the prior distributions fixed, maximizing l′ resembles the popular least-squares error minimization with ℓ2 regularization on U and V and ℓ1 regularization on α and β. However, keeping the hyperparameters fixed may easily lead to overfitting, because MRMA models have many parameters.

5 Learning MRMA Models

The optimization problem defined in Equation 5 is very likely to overfit if we cannot precisely estimate the hyperparameters, which automatically control the generalization capacity of the MRMA model. For instance, σ_U and σ_V control the regularization of U and V. Therefore, it is more desirable to estimate the parameters and hyperparameters simultaneously during model training. One possible way is to estimate each variable by its maximum a posteriori (MAP) value conditioned on the remaining variables, and then iterate until convergence, which is known as iterated conditional modes (ICM) [1]. The ICM procedure for maximizing Equation 5 is as follows.

Initialization: Choose initial values for all variables and parameters.

ICM Step: The values of U, V, α, and β can be updated by solving the following minimization problems, each conditioned on the other variables and hyperparameters.

For all k ∈ {1, ..., K} and i ∈ {1, ..., m}:

U_i^k ← argmin_{U_i^k} (1/2σ²) Σ_{j=1}^{n} 1_{i,j} Σ_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)ᵀ)² + (1/2σ_U²) Σ_{k=1}^{K} (U_i^k)²,

α_i^k ← argmin_{α_i^k} (1/2σ²) Σ_{j=1}^{n} 1_{i,j} Σ_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)ᵀ)² + (1/b_α) Σ_{k=1}^{K} |α_i^k − µ_α|.

For all k ∈ {1, ..., K} and j ∈ {1, ..., n}:

V_j^k ← argmin_{V_j^k} (1/2σ²) Σ_{i=1}^{m} 1_{i,j} Σ_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)ᵀ)² + (1/2σ_V²) Σ_{k=1}^{K} (V_j^k)²,

β_j^k ← argmin_{β_j^k} (1/2σ²) Σ_{i=1}^{m} 1_{i,j} Σ_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)ᵀ)² + (1/b_β) Σ_{k=1}^{K} |β_j^k − µ_β|.

The hyperparameters can then be learned as their maximum likelihood estimates by setting their partial derivatives of l′ to 0.
$$\sigma^2 \leftarrow \sum_{i=1}^m \sum_{j=1}^n 1_{i,j} \sum_{k=1}^K \alpha_i^k \beta_j^k \big(R_{i,j} - U_i^k (V_j^k)^T\big)^2 \Big/ \sum_{i=1}^m \sum_{j=1}^n 1_{i,j},$$

$$\sigma_U^2 \leftarrow \sum_{k=1}^K \sum_{i=1}^m (U_i^k)^2 / Km, \qquad \mu_\alpha \leftarrow \sum_{k=1}^K \sum_{i=1}^m \alpha_i^k / Km, \qquad b_\alpha \leftarrow \sum_{k=1}^K \sum_{i=1}^m |\alpha_i^k - \mu_\alpha| / Km,$$

$$\sigma_V^2 \leftarrow \sum_{k=1}^K \sum_{j=1}^n (V_j^k)^2 / Kn, \qquad \mu_\beta \leftarrow \sum_{k=1}^K \sum_{j=1}^n \beta_j^k / Kn, \qquad b_\beta \leftarrow \sum_{k=1}^K \sum_{j=1}^n |\beta_j^k - \mu_\beta| / Kn.$$

Repeat: until convergence or until the maximum number of iterations is reached.

Note that ICM is sensitive to initial values. Our empirical studies show that initializing $U^k$ and $V^k$ by solving the classic PMF method achieves good performance. For $\alpha$ and $\beta$, a natural initial value is $1/\sqrt{K}$, where $K$ denotes the number of sub-models in the mixture model. To improve generalization performance and enable online learning [7], we can update $U, V, \alpha, \beta$ using stochastic gradient descent; the $\ell_1$ norms in the $\alpha$ and $\beta$ updates can be approximated by the smoothed $\ell_1$ method [18]. To handle massive datasets, we can learn the parameters of the proposed MRMA model with the alternating least squares (ALS) method, which is amenable to parallelization.

6 Experiments

This section presents the experimental results of the proposed MRMA method on three well-known datasets: 1) the MovieLens 1M dataset (∼1 million ratings from 6,040 users on 3,706 movies); 2) the MovieLens 10M dataset (∼10 million ratings from 69,878 users on 10,677 movies); 3) the Netflix Prize dataset (∼100 million ratings from 480,189 users on 17,770 movies). For all accuracy comparisons, we randomly split each dataset into a training set and a test set at a 9:1 ratio. All results are averaged over 5 different splits. The root mean square error (RMSE) is adopted to measure the rating prediction accuracy of different algorithms, computed as

$$D(\hat{R}) = \sqrt{\sum_i \sum_j 1_{i,j} (R_{i,j} - \hat{R}_{i,j})^2 \Big/ \sum_i \sum_j 1_{i,j}},$$

where $1_{i,j}$ indicates that entry $(i, j)$ appears in the test set.
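The masked RMSE above translates directly into code. The snippet below is an illustrative sketch with toy matrices (not the paper's data), using a boolean mask for $1_{i,j}$:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(1, 5, size=(6, 8))                 # true ratings
R_hat = R + rng.normal(scale=0.1, size=R.shape)    # predicted ratings
mask = rng.random(R.shape) < 0.5                   # 1_{i,j}: entries in the test set

def rmse(R, R_hat, mask):
    # D(R_hat) = sqrt( sum 1_{ij} (R_ij - R_hat_ij)^2 / sum 1_{ij} )
    se = mask * (R - R_hat) ** 2
    return np.sqrt(se.sum() / mask.sum())

err = rmse(R, R_hat, mask)
assert rmse(R, R, mask) == 0.0   # perfect predictions give zero RMSE
assert err > 0
```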
The normalized discounted cumulative gain (NDCG) is adopted to measure the item ranking accuracy of different algorithms, computed as NDCG@N = DCG@N / IDCG@N, where

$$\mathrm{DCG@}N = \sum_{i=1}^N \frac{2^{rel_i} - 1}{\log_2 (i + 1)}$$

and IDCG is the DCG value under a perfect ranking. In ICM-based learning, we adopt $\epsilon = 0.00001$ as the convergence threshold and $T = 300$ as the maximum number of iterations. For efficiency, we only choose a subset of ranks in MRMA, e.g., {10, 20, 30, ..., 300} rather than {1, 2, 3, ..., 300}. The parameters of all compared algorithms are adopted from their original papers, since all of them are evaluated on the same datasets. We compare the recommendation accuracy of MRMA with six matrix approximation-based collaborative filtering algorithms: 1) BPMF [16], which extends the PMF method from a Bayesian view and estimates model parameters using a Markov chain Monte Carlo scheme; 2) GSMF [20], which learns user/item features with group sparsity regularization in matrix approximation; 3) LLORMA [12], which ensembles the approximations from different submatrices using kernel smoothing; 4) WEMAREC [5], which ensembles different biased matrix approximation models to achieve higher accuracy; 5) MPMA [4], which combines local and global matrix approximations using a mixture model; 6) SMA [13], which yields a stable matrix approximation that can achieve good generalization performance.

Figure 2: Root mean square error comparison between MRMA and PMF with different ranks.

Figure 3: The accuracy and efficiency tradeoff of MRMA.

6.1 Mixture-Rank Matrix Approximation vs. Fixed-Rank Matrix Approximation

Given a fixed rank $k$, the corresponding rank-$k$ model in MRMA is identical to probabilistic matrix factorization (PMF) [17].
In this experiment, we compare the recommendation accuracy of MRMA with ranks in {10, 20, 50, 100, 150, 200, 250, 300} against PMF with fixed ranks on the MovieLens 1M dataset. For PMF, we choose 0.01 as the learning rate, 0.01 as the user feature regularization coefficient, and 0.001 as the item feature regularization coefficient. The convergence condition is the same as for MRMA. As shown in Figure 2, when the rank increases from 10 to 300, PMF achieves RMSEs between 0.86 and 0.88, whereas the RMSE of MRMA is about 0.84 when mixing all these ranks from 10 to 300. Meanwhile, the accuracy of PMF is not stable for $k \le 100$. For instance, PMF with $k = 10$ achieves better accuracy than $k = 20$ but worse accuracy than $k = 50$. This is because a fixed-rank matrix approximation cannot suit all users and items, so many users and items either underfit or overfit at any fixed rank below 100. When $k > 100$, only overfitting occurs, and PMF achieves consistently better accuracy as $k$ increases because the regularization terms help improve generalization capacity. Nevertheless, PMF at every rank achieves lower accuracy than MRMA, because in MRMA individual users/items can assign higher weights to the sub-models with the most suitable ranks, alleviating underfitting and overfitting.

6.2 Sensitivity of Rank in MRMA

In MRMA, the chosen set of ranks decides the performance of the final model. However, it is neither efficient nor necessary to include all ranks in {1, 2, ..., K}. For instance, a rank-$k$ approximation will be very similar to the rank-$(k-1)$ and rank-$(k+1)$ approximations, i.e., they may have overlapping structures. Therefore, a subset of ranks is sufficient. Figure 3 shows 5 different settings of rank combinations, in which set 1 = {10, 20, 30, ..., 300}, set 2 = {20, 40, ..., 300}, set 3 = {30, 60, ..., 300}, set 4 = {50, 100, ..., 300}, and set 5 = {100, 200, 300}.
As shown in this figure, RMSE decreases when more ranks are adopted in MRMA, which is intuitive because more ranks help users/items choose the most appropriate components. However, the computation time also increases with the number of ranks. If a tradeoff between accuracy and efficiency is required, then set 2 or set 3 is desirable: they achieve only slightly worse accuracy at significantly lower computation overhead. With set 5 = {100, 200, 300}, MRMA contains only three sub-models with different ranks, yet it still significantly outperforms PMF with ranks ranging from 10 to 300 in recommendation accuracy (as shown in Figure 2). This further confirms that MRMA can indeed discover the internal mixture-rank structure of the user-item rating matrix and thus achieve better recommendation accuracy through better approximation.

Table 2: RMSE comparison between MRMA and six state-of-the-art matrix approximation-based collaborative filtering algorithms on the MovieLens (10M) and Netflix datasets. Note that MRMA statistically significantly outperforms the other algorithms at the 95% confidence level.

Method       | MovieLens (10M)  | Netflix
BPMF [16]    | 0.8197 ± 0.0004  | 0.8421 ± 0.0002
GSMF [20]    | 0.8012 ± 0.0011  | 0.8420 ± 0.0006
LLORMA [12]  | 0.7855 ± 0.0002  | 0.8275 ± 0.0004
WEMAREC [5]  | 0.7775 ± 0.0007  | 0.8143 ± 0.0001
MPMA [4]     | 0.7712 ± 0.0002  | 0.8139 ± 0.0003
SMA [13]     | 0.7682 ± 0.0003  | 0.8036 ± 0.0004
MRMA         | 0.7634 ± 0.0009  | 0.7973 ± 0.0002

Table 3: NDCG comparison between MRMA and six state-of-the-art matrix approximation-based collaborative filtering algorithms on the MovieLens (1M) and MovieLens (10M) datasets. Note that MRMA statistically significantly outperforms the other algorithms at the 95% confidence level.
Dataset        | Method   | NDCG@1          | NDCG@5          | NDCG@10         | NDCG@20
MovieLens 1M   | BPMF     | 0.6870 ± 0.0024 | 0.6981 ± 0.0029 | 0.7525 ± 0.0009 | 0.8754 ± 0.0008
               | GSMF     | 0.6909 ± 0.0048 | 0.7031 ± 0.0023 | 0.7555 ± 0.0017 | 0.8769 ± 0.0011
               | LLORMA   | 0.7025 ± 0.0027 | 0.7101 ± 0.0005 | 0.7626 ± 0.0023 | 0.8811 ± 0.0010
               | WEMAREC  | 0.7048 ± 0.0015 | 0.7089 ± 0.0016 | 0.7617 ± 0.0041 | 0.8796 ± 0.0005
               | MPMA     | 0.7020 ± 0.0005 | 0.7114 ± 0.0018 | 0.7606 ± 0.0006 | 0.8805 ± 0.0007
               | SMA      | 0.7042 ± 0.0033 | 0.7109 ± 0.0011 | 0.7607 ± 0.0008 | 0.8801 ± 0.0004
               | MRMA     | 0.7153 ± 0.0027 | 0.7182 ± 0.0005 | 0.7672 ± 0.0013 | 0.8837 ± 0.0004
MovieLens 10M  | BPMF     | 0.6563 ± 0.0005 | 0.6845 ± 0.0003 | 0.7467 ± 0.0007 | 0.8691 ± 0.0002
               | GSMF     | 0.6708 ± 0.0012 | 0.6995 ± 0.0008 | 0.7566 ± 0.0017 | 0.8748 ± 0.0004
               | LLORMA   | 0.6829 ± 0.0014 | 0.7066 ± 0.0005 | 0.7632 ± 0.0004 | 0.8782 ± 0.0012
               | WEMAREC  | 0.7013 ± 0.0003 | 0.7176 ± 0.0006 | 0.7703 ± 0.0002 | 0.8824 ± 0.0006
               | MPMA     | 0.6908 ± 0.0006 | 0.7133 ± 0.0002 | 0.7680 ± 0.0001 | 0.8808 ± 0.0004
               | SMA      | 0.7002 ± 0.0006 | 0.7134 ± 0.0004 | 0.7679 ± 0.0003 | 0.8809 ± 0.0002
               | MRMA     | 0.7048 ± 0.0006 | 0.7219 ± 0.0001 | 0.7743 ± 0.0001 | 0.8846 ± 0.0001

6.3 Accuracy Comparison

6.3.1 Rating Prediction Comparison

Table 2 compares the rating prediction accuracy of MRMA and six matrix approximation-based collaborative filtering algorithms on the MovieLens (10M) and Netflix datasets. Note that among the compared algorithms, BPMF, GSMF, MPMA and SMA are stand-alone algorithms, while LLORMA and WEMAREC are ensemble algorithms. In this experiment, we adopt {10, 20, 50, 100, 150, 200, 250, 300} as the set of ranks for efficiency reasons, which means that the accuracy of MRMA is not necessarily optimal. Even so, as shown in Table 2, MRMA statistically significantly outperforms all the other algorithms at the 95% confidence level. The reason is that MRMA can choose different rank values for different users/items, achieving not only a globally better approximation but also better approximation for individual users and items.
This further confirms that mixture-rank structure indeed exists in the user-item rating matrices of recommender systems. Thus, it is desirable to adopt mixture-rank rather than fixed-rank matrix approximations for recommendation tasks.

6.3.2 Item Ranking Comparison

Table 3 compares the NDCGs of MRMA with the other six state-of-the-art matrix approximation-based collaborative filtering algorithms on the MovieLens (1M) and MovieLens (10M) datasets. Note that for each dataset, we keep 20 ratings per user in the test set and remove users with fewer than 5 ratings in the training set. As the results show, MRMA also achieves higher item ranking accuracy than the compared algorithms, thanks to its capability of better capturing the internal mixture-rank structures of the user-item rating matrices. This experiment demonstrates that MRMA can not only provide accurate rating prediction but also achieve accurate item ranking for each user.

6.4 Interpretation of MRMA

Table 4: Top 10 movies with the largest β values for the sub-models with rank k = 20 and k = 200 in MRMA. Here, #ratings stands for the average number of ratings in the training set for the corresponding movies.
rank = 20 (avg. #ratings = 2.4)      | rank = 200 (avg. #ratings = 1781.4)
movie name               | β      | movie name            | β
Smashing Time            | 0.6114 | American Beauty       | 0.9219
Gate of Heavenly Peace   | 0.6101 | Groundhog Day         | 0.9146
Man of the Century       | 0.6079 | Fargo                 | 0.8779
Mamma Roma               | 0.6071 | Face/Off              | 0.8693
Dry Cleaning             | 0.6071 | 2001: A Space Odyssey | 0.8608
Dear Jesse               | 0.6063 | Shakespeare in Love   | 0.8553
Skipped Parts            | 0.6057 | Saving Private Ryan   | 0.8480
The Hour of the Pig      | 0.6055 | The Fugitive          | 0.8404
Inheritors               | 0.6042 | Braveheart            | 0.8247
Dangerous Game           | 0.6034 | Fight Club            | 0.8153

To better understand how users/items weigh the different sub-models in the mixture model of MRMA, Table 4 presents the top 10 movies with the largest β values for the sub-models with rank 20 and rank 200, together with their β values and the average number of ratings these movies receive in the training set. Intuitively, movies with many ratings (e.g., over 1,000) should weigh more complex models higher, and movies with few ratings (e.g., under 10) should weigh simpler models higher in MRMA. As shown in Table 4, the top 10 movies with the largest β values for the rank-20 sub-model have only 2.4 ratings on average in the training set. On the contrary, the top 10 movies with the largest β values for the rank-200 sub-model have 1781.4 ratings on average; these movies are very popular, and most of them are Oscar winners. This confirms our earlier claim that MRMA indeed weighs more complex models (e.g., rank 200) higher for movies with many ratings to prevent underfitting, and weighs less complex models (e.g., rank 20) higher for movies with few ratings to prevent overfitting. A similar phenomenon is observed for users with different α values; we omit those results due to space limits.
7 Conclusion and Future Work

This paper proposes the mixture-rank matrix approximation (MRMA) method, which describes user-item ratings using a mixture of low-rank matrix approximation models with different ranks to achieve better approximation and thus better recommendation accuracy. An ICM-based learning algorithm is proposed to handle the non-convex optimization problem pertaining to MRMA. Experimental results on the MovieLens and Netflix datasets demonstrate that MRMA achieves better accuracy than six state-of-the-art matrix approximation-based collaborative filtering methods, further pushing the frontier of recommender systems. A possible extension of this work is to incorporate other inference methods into learning the MRMA model, e.g., variational inference [8], because ICM may be trapped in local maxima and thus cannot reach the global maximum without properly chosen initial values.

Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61332008 and NSAF under Grant No. U1630115.

References

[1] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B (Methodological), pages 259–302, 1986.
[2] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
[3] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[4] C. Chen, D. Li, Q. Lv, J. Yan, S. M. Chu, and L. Shang. MPMA: Mixture probabilistic matrix approximation for collaborative filtering. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI '16), pages 1382–1388, 2016.
[5] C. Chen, D. Li, Y. Zhao, Q. Lv, and L. Shang. WEMAREC: Accurate and scalable recommendation through weighted and ensemble matrix approximation.
In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '15), pages 303–312, 2015.
[6] D. Dueck and B. Frey. Probabilistic sparse matrix factorization. University of Toronto Technical Report PSI-2004-23, 2004.
[7] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv:1509.01240, 2015.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[9] Y. Koren. Factorization meets the neighborhood: A multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pages 426–434. ACM, 2008.
[10] Y. Koren. Collaborative filtering with temporal dynamics. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '09), pages 447–456. ACM, 2009.
[11] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
[12] J. Lee, S. Kim, G. Lebanon, and Y. Singer. Local low-rank matrix approximation. In Proceedings of the 30th International Conference on Machine Learning (ICML '13), pages 82–90, 2013.
[13] D. Li, C. Chen, Q. Lv, J. Yan, L. Shang, and S. Chu. Low-rank matrix approximation with stability. In Proceedings of the 33rd International Conference on Machine Learning (ICML '16), pages 295–303, 2016.
[14] H. Ma, H. Yang, M. R. Lyu, and I. King. SoRec: Social recommendation using probabilistic matrix factorization. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (CIKM '08), pages 931–940. ACM, 2008.
[15] L. W. Mackey, M. I. Jordan, and A. Talwalkar. Divide-and-conquer matrix factorization. In Advances in Neural Information Processing Systems (NIPS '11), pages 1134–1142, 2011.
[16] R. Salakhutdinov and A. Mnih.
Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning (ICML '08), pages 880–887. ACM, 2008.
[17] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems (NIPS '08), pages 1257–1264, 2008.
[18] M. Schmidt, G. Fung, and R. Rosales. Fast optimization methods for L1 regularization: A comparative study and two new approaches. In European Conference on Machine Learning (ECML '07), pages 286–297. Springer, 2007.
[19] K.-C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization, 6(15):615–640, 2010.
[20] T. Yuan, J. Cheng, X. Zhang, S. Qiu, and H. Lu. Recommendation by mining multiple user behaviors with group sparsity. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI '14), pages 222–228, 2014.
Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods

Veeranjaneyulu Sadhanala, Carnegie Mellon University, Pittsburgh, PA 15213, vsadhana@cs.cmu.edu
Yu-Xiang Wang, Carnegie Mellon University/Amazon AI, Pittsburgh, PA 15213/Palo Alto, CA 94303, yuxiangw@amazon.com
James Sharpnack, University of California, Davis, Davis, CA 95616, jsharpna@ucdavis.edu
Ryan J. Tibshirani, Carnegie Mellon University, Pittsburgh, PA 15213, ryantibs@stat.cmu.edu

Abstract

We consider the problem of estimating the values of a function over $n$ nodes of a $d$-dimensional grid graph (having equal side lengths $n^{1/d}$) from noisy observations. The function is assumed to be smooth, but is allowed to exhibit different amounts of smoothness at different regions in the grid. Such heterogeneity eludes classical measures of smoothness from nonparametric statistics, such as Hölder smoothness. Meanwhile, total variation (TV) smoothness classes allow for heterogeneity, but are restrictive in another sense: only constant functions count as perfectly smooth (achieve zero TV). To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes. We relate these two new classes to Hölder classes, and derive lower bounds on their minimax errors. We also analyze two naturally associated trend filtering methods; when $d = 2$, each is seen to be rate optimal over the appropriate class.

1 Introduction

In this work, we focus on estimation of a mean parameter defined over the nodes of a $d$-dimensional grid graph $G = (V, E)$, with equal side lengths $N = n^{1/d}$. Let us enumerate $V = \{1, \ldots, n\}$ and $E = \{e_1, \ldots, e_m\}$, and consider data $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$ observed over $V$, distributed as

$$y_i \sim N(\theta_{0,i}, \sigma^2), \quad \text{independently, for } i = 1, \ldots, n, \quad (1)$$

where $\theta_0 = (\theta_{0,1}, \ldots, \theta_{0,n}) \in \mathbb{R}^n$ is the mean parameter to be estimated, and $\sigma^2 > 0$ is the common noise variance.
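As a quick illustration of the observation model (1), the sketch below generates data over a small grid (with an arbitrary mean parameter and noise level of our own choosing, not tied to any experiment in the paper) and checks that the residuals behave as the model prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 2, 20
n = N ** d                     # n nodes on a d-dimensional grid with side length N = n^{1/d}
sigma = 0.5
theta0 = np.linspace(0.0, 1.0, n)         # an arbitrary smooth mean parameter
y = theta0 + sigma * rng.normal(size=n)   # y_i ~ N(theta0_i, sigma^2), independently

resid = y - theta0
# empirical moments roughly match the model (loose tolerances for n = 400)
assert y.shape == (n,)
assert abs(resid.mean()) < 0.1
assert abs(resid.std() - sigma) < 0.1
```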
We will assume that $\theta_0$ displays some kind of regularity or smoothness over $G$, and are specifically interested in notions of regularity built around the total variation (TV) operator

$$\|D\theta\|_1 = \sum_{(i,j) \in E} |\theta_i - \theta_j|, \quad (2)$$

defined with respect to $G$, where $D \in \mathbb{R}^{m \times n}$ is the edge incidence matrix of $G$, which has $\ell$th row $D_\ell = (0, \ldots, -1, \ldots, 1, \ldots, 0)$, with $-1$ in location $i$ and $1$ in location $j$, provided that the $\ell$th edge is $e_\ell = (i, j)$ with $i < j$. There is an extensive literature on estimators based on TV regularization, both in Euclidean spaces and over graphs. Higher-order TV regularization, which, loosely speaking, considers the TV of derivatives of the parameter, is much less understood, especially over graphs. In this paper, we develop statistical theory for higher-order TV smoothness classes, and we analyze associated trend filtering methods, which are seen to achieve the minimax optimal estimation error rate over such classes. This can be viewed as an extension of the work in [22] for the zeroth-order TV case, where by "zeroth-order" we refer to the usual TV operator as defined in (2).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Motivation. TV denoising over grid graphs, specifically 1d and 2d grid graphs, is a well-studied problem in signal processing, statistics, and machine learning, some key references being [20, 5, 26]. Given data $y \in \mathbb{R}^n$ as per the setup described above, the TV denoising or fused lasso estimator over the grid $G$ is defined as

$$\hat{\theta} = \operatorname*{arg\,min}_{\theta \in \mathbb{R}^n} \; \frac{1}{2} \|y - \theta\|_2^2 + \lambda \|D\theta\|_1, \quad (3)$$

where $\lambda \geq 0$ is a tuning parameter. The TV denoising estimator generalizes seamlessly to arbitrary graphs. The problem of denoising over grids, the setting we focus on, is of particular relevance to a number of important applications, e.g., in time series analysis, and image and video processing.
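To make the operator $D$ concrete, here is a small sketch (our own construction, for the 2d case; not code from the paper) that builds the edge incidence matrix of an $N \times N$ grid and evaluates the TV in (2):

```python
import numpy as np

def grid_incidence(N):
    """Edge incidence matrix D of the N x N grid graph (2d case), as in (2)."""
    n = N * N
    idx = lambda i, j: i * N + j
    rows = []
    for i in range(N):
        for j in range(N):
            for di, dj in ((0, 1), (1, 0)):      # right and down neighbors
                if i + di < N and j + dj < N:
                    row = np.zeros(n)
                    row[idx(i, j)] = -1.0
                    row[idx(i + di, j + dj)] = 1.0
                    rows.append(row)
    return np.array(rows)

N = 4
D = grid_incidence(N)
assert D.shape == (2 * N * (N - 1), N * N)       # a 2d grid has 2N(N-1) edges
assert np.allclose(D @ np.ones(N * N), 0)        # constant signals have zero TV
theta = np.arange(N * N, dtype=float)
tv = np.sum(np.abs(D @ theta))                   # ||D theta||_1, the TV in (2)
assert tv > 0
```

With $D$ in hand, the TV denoising objective in (3) is just `0.5 * np.sum((y - theta)**2) + lam * np.sum(np.abs(D @ theta))`, which any convex solver can minimize.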
A strength of the nonlinear TV denoising estimator in (3) (where by "nonlinear" we mean that $\hat{\theta}$ is nonlinear as a function of $y$) is that it can adapt to heterogeneity in the local level of smoothness of the underlying signal $\theta_0$. Moreover, it adapts to such heterogeneity to an extent beyond what linear estimators are capable of capturing. This principle is widely evident in practice and has been championed by many authors in the signal processing literature. It is also backed by statistical theory, i.e., [8, 16, 27] in the 1d setting, and most recently [22] in the general $d$-dimensional setting. Note that the TV denoising estimator $\hat{\theta}$ in (3) takes a piecewise constant structure by design, i.e., at many adjacent pairs $(i, j) \in E$ we will have $\hat{\theta}_i = \hat{\theta}_j$, and this is generally more common for larger $\lambda$. For some problems, this structure may not be ideal, and we might instead seek a piecewise smooth estimator that can still cope with local changes in the underlying level of smoothness, but offers a richer structure (beyond a simple constant structure) for the base trend. In the 1d setting, this is accomplished by trend filtering methods, which move from piecewise constant to piecewise polynomial structure, via TV regularization of discrete derivatives of the parameter [24, 13, 27]. An extension of trend filtering to general graphs was developed in [31]. In what follows, we study the statistical properties of this graph trend filtering method over grids, and we propose and analyze a more specialized trend filtering estimator for grids, based on the idea that something like a Euclidean coordinate system is available at any (interior) node. See Figure 1 for a motivating illustration.

Related work. The literature on TV denoising is enormous and we cannot give a comprehensive review, but only some brief highlights.
Important methodological and computational contributions are found in [20, 5, 26, 4, 10, 6, 28, 15, 7, 12, 1, 25], and notable theoretical contributions in [16, 19, 9, 23, 11, 22, 17]. The literature on higher-order TV-based methods is sparser and more concentrated on the 1d setting. Trend filtering methods in 1d were pioneered in [24, 13] and analyzed statistically in [27], where they were also shown to be asymptotically equivalent to the locally adaptive regression splines of [16]. An extension of trend filtering to additive models was given in [21]. A generalization of trend filtering that operates over an arbitrary graph structure was given in [31]. Trend filtering is not the only avenue for higher-order TV regularization: the signal processing community has also studied higher-order variants of TV, see, e.g., [18, 3]. The construction of the discrete versions of these higher-order TV operators is somewhat similar to that in [31], as well as to our Kronecker trend filtering proposal; however, the focus of that work is quite different.

Summary of contributions. An overview of our contributions is given below.

• We propose a new method for trend filtering over grid graphs that we call Kronecker trend filtering (KTF), and compare its properties to the more general graph trend filtering (GTF) proposal of [31].
• For 2d grids, we derive estimation error rates for GTF and KTF, each rate being a function of the regularizer evaluated at the mean $\theta_0$.
• For $d$-dimensional grids, we derive minimax lower bounds for estimation over two higher-order TV classes defined using the operators from GTF and KTF. When $d = 2$, these lower bounds match the upper bounds in rate (apart from log factors) derived for GTF and KTF, ensuring that each method is minimax rate optimal (modulo log factors) for its own notion of regularity.
Also, the KTF class contains a Hölder class of an appropriate order, and KTF is seen to be rate optimal (modulo log factors) for this more homogeneous class as well.

[Figure 1 appears here: six panels showing the underlying signal and data, Laplacian smoothing at large and small λ, TV denoising, graph trend filtering, and Kronecker trend filtering.]

Figure 1: Top left: an underlying signal θ0 and associated data y (shown as black points). Top middle and top right: Laplacian smoothing fit to y, at large and small tuning parameter values, respectively. Bottom left, middle, and right: TV denoising (3), graph trend filtering (5), and Kronecker trend filtering (5) fit to y, respectively (the latter two are of order k = 2, with penalty operators as described in Section 2).
In order to capture the larger of the two peaks, Laplacian smoothing must significantly undersmooth throughout; with more regularization, it oversmooths and misses the peak. TV denoising is able to adapt to heterogeneity in the smoothness of the underlying signal, but exhibits "staircasing" artifacts, as it is restricted to fitting piecewise constant functions. Graph and Kronecker trend filtering overcome this, while maintaining local adaptivity.

Notation. For deterministic sequences $a_n, b_n$ we write $a_n = O(b_n)$ to denote that $a_n/b_n$ is upper bounded for large enough $n$, and $a_n \asymp b_n$ to denote that both $a_n = O(b_n)$ and $a_n^{-1} = O(b_n^{-1})$. For random sequences $A_n, B_n$, we write $A_n = O_P(B_n)$ to denote that $A_n/B_n$ is bounded in probability. Given a $d$-dimensional grid $G = (V, E)$, where $V = \{1, \ldots, n\}$, as before, we will sometimes index a parameter $\theta \in \mathbb{R}^n$ defined over the nodes in the following convenient way. Letting $N = n^{1/d}$ and

$$Z_d = \{(i_1/N, \ldots, i_d/N) : i_1, \ldots, i_d \in \{1, \ldots, N\}\} \subseteq [0, 1]^d,$$

we will index the components of $\theta$ by their lattice positions, denoted $\theta(x)$, $x \in Z_d$. Further, for each $j = 1, \ldots, d$, we define the discrete derivative of $\theta$ in the $j$th coordinate direction at a location $x$ by

$$(D_{x_j}\theta)(x) = \begin{cases} \theta(x + e_j/N) - \theta(x) & \text{if } x, \, x + e_j/N \in Z_d, \\ 0 & \text{else.} \end{cases} \quad (4)$$

Naturally, we denote by $D_{x_j}\theta \in \mathbb{R}^n$ the vector with components $(D_{x_j}\theta)(x)$, $x \in Z_d$. Higher-order discrete derivatives are defined by repeated application of the above definition. We use the abbreviations $(D_{x_j^2}\theta)(x) = (D_{x_j}(D_{x_j}\theta))(x)$, for $j = 1, \ldots, d$, and $(D_{x_j, x_\ell}\theta)(x) = (D_{x_j}(D_{x_\ell}\theta))(x)$, for $j, \ell = 1, \ldots, d$, and so on. Given an estimator $\hat{\theta}$ of the mean parameter $\theta_0$ in (1), and $K \subseteq \mathbb{R}^n$, two quantities of interest are

$$\mathrm{MSE}(\hat{\theta}, \theta_0) = \frac{1}{n}\|\hat{\theta} - \theta_0\|_2^2 \quad \text{and} \quad R(K) = \inf_{\hat{\theta}} \sup_{\theta_0 \in K} \mathbb{E}\,\mathrm{MSE}(\hat{\theta}, \theta_0).$$

The first quantity is called the mean squared error (MSE) of $\hat{\theta}$; we will also call $\mathbb{E}[\mathrm{MSE}(\hat{\theta}, \theta_0)]$ the risk of $\hat{\theta}$. The second quantity is called the minimax risk over $K$ (the infimum being taken over all estimators $\hat{\theta}$).
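Before moving on, the discrete derivative definition (4) can be checked numerically. The sketch below (our own code, for $d = 2$) implements a single application of $D_{x_j}$, zero-padded at the boundary, and verifies that a linear signal has constant first derivative and vanishing second derivative in the interior:

```python
import numpy as np

def discrete_derivative(theta, axis):
    """(D_{x_j} theta)(x) = theta(x + e_j/N) - theta(x), zero past the boundary, as in (4)."""
    d = np.zeros_like(theta)
    fwd = np.diff(theta, axis=axis)
    slc = [slice(None)] * theta.ndim
    slc[axis] = slice(0, theta.shape[axis] - 1)
    d[tuple(slc)] = fwd
    return d

N = 5
x = np.arange(N) / N
theta = np.add.outer(2 * x, 3 * x)     # linear signal theta(x) = 2 x1 + 3 x2 on the grid
d1 = discrete_derivative(theta, axis=0)
# the first derivative of a linear signal is constant (away from the boundary)
assert np.allclose(d1[:-1, :], 2 / N)
# the second derivative vanishes in the interior
d2 = discrete_derivative(d1, axis=0)
assert np.allclose(d2[:-2, :], 0)
```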
2 Trend filtering methods

Review: graph trend filtering. To review the family of estimators developed in [31], we start by introducing a general-form estimator called the generalized lasso signal approximator [28],

$$\hat{\theta} = \operatorname*{arg\,min}_{\theta \in \mathbb{R}^n} \; \frac{1}{2}\|y - \theta\|_2^2 + \lambda \|\Delta\theta\|_1, \quad (5)$$

for a matrix $\Delta \in \mathbb{R}^{r \times n}$, referred to as the penalty operator. For an integer $k \geq 0$, the authors of [31] defined the graph trend filtering (GTF) estimator of order $k$ by (5), with the penalty operator

$$\Delta^{(k+1)} = \begin{cases} D L^{k/2} & \text{for } k \text{ even}, \\ L^{(k+1)/2} & \text{for } k \text{ odd}. \end{cases} \quad (6)$$

Here, as before, we use $D$ for the edge incidence matrix of $G$. We also use $L = D^T D$ for the graph Laplacian matrix of $G$. The intuition behind this definition is that $\Delta^{(k+1)}\theta$ gives something roughly like the $(k+1)$st order discrete derivatives of $\theta$ over the graph $G$. Note that the GTF estimator reduces to TV denoising in (3) when $k = 0$. Also, like TV denoising, GTF applies to arbitrary graph structures; see [31] for more details and for the study of GTF over general graphs. Our interest is of course its behavior over grids, and we now use the notation introduced in (4) to shed more light on the GTF penalty operator in (6) over a $d$-dimensional grid. For any signal $\theta \in \mathbb{R}^n$, we can write $\|\Delta^{(k+1)}\theta\|_1 = \sum_{x \in Z_d} d_x$, where at all points $x \in Z_d$ (except for those close to the boundary),

$$d_x = \begin{cases} \displaystyle\sum_{j_1=1}^d \bigg| \sum_{j_2, \ldots, j_{q+1}=1}^d \big(D_{x_{j_1}, x_{j_2}^2, \ldots, x_{j_{q+1}}^2} \theta\big)(x) \bigg| & \text{for } k \text{ even, where } q = k/2, \\[2ex] \displaystyle\bigg| \sum_{j_1, \ldots, j_q=1}^d \big(D_{x_{j_1}^2, x_{j_2}^2, \ldots, x_{j_q}^2} \theta\big)(x) \bigg| & \text{for } k \text{ odd, where } q = (k+1)/2. \end{cases} \quad (7)$$

(In both cases the total derivative order is $k+1$: for $k$ even, one first derivative composed with $q = k/2$ second derivatives; for $k$ odd, $q = (k+1)/2$ second derivatives.) Written in this form, it appears that the GTF operator $\Delta^{(k+1)}$ aggregates derivatives in somewhat of an unnatural way. But we must remember that for a general graph structure, only first derivatives and divergences have obvious discrete analogs, given by application of $D$ and $L$, respectively. Hence GTF, which was originally designed for general graphs, relies on combinations of $D$ and $L$ to produce something like higher-order discrete derivatives.
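The GTF penalty operator in (6) is easy to build once $D$ is in hand. The sketch below (our own code, using a 1d chain graph for simplicity, though the definition applies to any graph) constructs $\Delta^{(k+1)}$ from $D$ and $L = D^T D$ and checks that every GTF operator annihilates constant signals:

```python
import numpy as np

def incidence_1d_chain(n):
    # Edge incidence matrix D for a 1d chain graph: rows (..., -1, 1, ...)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def gtf_operator(D, k):
    """GTF penalty operator (6): D L^(k/2) for even k, L^((k+1)/2) for odd k."""
    L = D.T @ D
    if k % 2 == 0:
        return D @ np.linalg.matrix_power(L, k // 2)
    return np.linalg.matrix_power(L, (k + 1) // 2)

D = incidence_1d_chain(8)
assert np.allclose(gtf_operator(D, 0), D)       # k = 0 recovers TV denoising
for k in range(4):
    Delta = gtf_operator(D, k)
    # constants lie in the null space of every GTF operator
    assert np.allclose(Delta @ np.ones(8), 0)
```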
This explains the form of the aggregated derivatives in (6), which is entirely based on divergences.

Kronecker trend filtering. There is a natural alternative to the GTF penalty operator that takes advantage of the Euclidean-like structure available at the (interior) nodes of a grid graph. At a point $x \in Z_d$ (not close to the boundary), consider using
\[
d_x = \sum_{j=1}^{d} \big| \big(D_{x_j^{k+1}}\theta\big)(x) \big| \tag{8}
\]
as a basic building block for penalizing derivatives, rather than (7). This gives rise to a method we call Kronecker trend filtering (KTF), which for an integer order $k \geq 0$ is defined by (5), but now with the choice of penalty operator
\[
\tilde\Delta^{(k+1)} = \begin{bmatrix} D^{(k+1)}_{1d} \otimes I \otimes \cdots \otimes I \\ I \otimes D^{(k+1)}_{1d} \otimes \cdots \otimes I \\ \vdots \\ I \otimes I \otimes \cdots \otimes D^{(k+1)}_{1d} \end{bmatrix}. \tag{9}
\]
Here, $D^{(k+1)}_{1d} \in \mathbb{R}^{(N-k-1) \times N}$ is the 1d discrete derivative operator of order $k+1$ (e.g., as used in univariate trend filtering, see [27]), $I \in \mathbb{R}^{N \times N}$ is the identity matrix, and $A \otimes B$ is the Kronecker product of matrices $A, B$. Each group of rows in (9) features a total of $d-1$ Kronecker products. KTF reduces to TV denoising in (3) when $k = 0$, and thus also to GTF with $k = 0$. But for $k \geq 1$, GTF and KTF are different estimators. A look at the action of their penalty operators, as displayed in (7), (8), reveals some of their differences. For example, we see that GTF considers mixed derivatives of total order $k+1$, but KTF only considers directional derivatives of order $k+1$ that are parallel to the coordinate axes. Also, GTF penalizes aggregate derivatives (i.e., sums of derivatives), whereas KTF penalizes individual ones. More subtle differences between GTF and KTF have to do with the structure of their estimates, as we discuss next. Another subtle difference lies in how the GTF and KTF operators (6), (9) relate to more classical notions of smoothness, particularly Hölder smoothness. This is covered in Section 4.

Structure of estimates. It is straightforward to see that the GTF operator (6) has a 1-dimensional null space, spanned by $\mathbb{1} = (1, \ldots, 1) \in \mathbb{R}^n$.
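The penalty operator in (9) can likewise be assembled directly from Kronecker products. The following sketch (our own illustrative code; dense matrices, small $N$ only) builds $\tilde\Delta^{(k+1)}$ for a $d$-dimensional grid:

```python
import numpy as np

def diff_op(N, order):
    # 1d discrete derivative operator D^{(order)}_{1d}, shape (N - order, N)
    D = np.eye(N)
    for _ in range(order):
        D = np.diff(D, axis=0)
    return D

def ktf_operator(N, k, d=2):
    # eq. (9): one block of rows per coordinate direction j, with D^{(k+1)}_{1d}
    # in position j and identities elsewhere in the Kronecker product
    Dk = diff_op(N, k + 1)
    I = np.eye(N)
    blocks = []
    for j in range(d):
        mats = [I] * d
        mats[j] = Dk
        B = mats[0]
        for M in mats[1:]:
            B = np.kron(B, M)
        blocks.append(B)
    return np.vstack(blocks)
```

With `k=0` this recovers the grid TV denoising penalty, and with `k=1` it annihilates signals that are affine along each coordinate, consistent with Lemma 1 below.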
This means that GTF lets constant signals pass through unpenalized, but nothing else; or, in other words, it preserves the projection of $y$ onto the space of constant signals, $\bar y \mathbb{1}$, but nothing else. The KTF operator, meanwhile, has a much richer null space.

Lemma 1. The null space of the KTF operator (9) has dimension $(k+1)^d$, and it is spanned by a polynomial basis made up of elements
\[
p(x) = x_1^{a_1} x_2^{a_2} \cdots x_d^{a_d}, \quad x \in Z_d, \quad \text{where } a_1, \ldots, a_d \in \{0, \ldots, k\}.
\]
The proof is elementary and (as with all proofs in this paper) is given in the supplement. The lemma shows that KTF preserves the projection of $y$ onto the space of polynomials of max degree $k$, i.e., lets much more than just constant signals pass through unpenalized. Beyond the differences in these base trends (represented by their null spaces), GTF and KTF admit estimates with similar but generally different structures. KTF has the advantage that this structure is more transparent: its estimates are piecewise polynomial functions of max degree $k$, with generally fewer pieces for larger $\lambda$. This is demonstrated by a functional representation for KTF, given next.

Lemma 2. Let $h_i : [0,1] \to \mathbb{R}$, $i = 1, \ldots, N$ be the (univariate) falling factorial functions [27, 30] of order $k$, defined over the knots $1/N, 2/N, \ldots, N/N$:
\[
h_i(t) = \prod_{\ell=1}^{i-1} (t - t_\ell), \quad t \in [0,1], \quad i = 1, \ldots, k+1,
\]
\[
h_{i+k+1}(t) = \prod_{\ell=1}^{k} \Big(t - \frac{i+\ell}{N}\Big) \cdot 1\Big\{t > \frac{i+k}{N}\Big\}, \quad t \in [0,1], \quad i = 1, \ldots, N-k-1. \tag{10}
\]
(For $k = 0$, our convention is for the empty product to equal 1.) Let $\mathcal{H}_d$ be the space spanned by all $d$-wise tensor products of falling factorial functions, i.e., $\mathcal{H}_d$ contains $f : [0,1]^d \to \mathbb{R}$ of the form
\[
f(x) = \sum_{i_1,\ldots,i_d=1}^{N} \alpha_{i_1,\ldots,i_d}\, h_{i_1}(x_1)\, h_{i_2}(x_2) \cdots h_{i_d}(x_d), \quad x \in [0,1]^d,
\]
for coefficients $\alpha \in \mathbb{R}^n$ (whose components we index by $\alpha_{i_1,\ldots,i_d}$, for $i_1, \ldots, i_d = 1, \ldots, N$).
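The univariate falling factorial functions in (10) can be evaluated at the knots in a few lines. Below is an illustrative sketch (our own code, not from the paper); note that for $k = 0$ the basis consists of a constant plus step functions, consistent with the piecewise constant fits of TV denoising:

```python
import numpy as np

def falling_factorial_basis(N, k):
    """Evaluate the falling factorial functions h_1, ..., h_N of (10)
    at the knots t_i = i/N, returning an N x N matrix H[i-1, j-1] = h_j(t_i)."""
    t = np.arange(1, N + 1) / N
    H = np.zeros((N, N))
    # pure polynomial part: h_i(t) = prod_{l=1}^{i-1} (t - t_l), i = 1, ..., k+1
    for i in range(1, k + 2):
        H[:, i - 1] = 1.0 if i == 1 else np.prod([t - t[l] for l in range(i - 1)], axis=0)
    # truncated part: h_{i+k+1}(t) = prod_{l=1}^{k} (t - (i+l)/N) * 1{t > (i+k)/N}
    for i in range(1, N - k):
        col = np.ones(N)
        for l in range(1, k + 1):
            col = col * (t - (i + l) / N)
        H[:, i + k] = col * (t > (i + k) / N)
    return H
```

Tensor products of the columns of `H` then span (the grid evaluations of) the space $\mathcal{H}_d$ in Lemma 2.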
Then the KTF estimator defined in (5), (9) is equivalent to the functional optimization problem
\[
\hat f = \operatorname*{argmin}_{f \in \mathcal{H}_d}\; \frac{1}{2} \sum_{x \in Z_d} \big(y(x) - f(x)\big)^2 + \lambda \sum_{j=1}^{d} \sum_{x_{-j} \in Z_{d-1}} \mathrm{TV}\Big(\frac{\partial^k f(\cdot, x_{-j})}{\partial x_j^k}\Big), \tag{11}
\]
where $f(\cdot, x_{-j})$ denotes $f$ as a function of the $j$th dimension with all other dimensions fixed at $x_{-j}$, $\partial^k/\partial x_j^k(\cdot)$ denotes the $k$th partial weak derivative operator with respect to $x_j$, for $j = 1, \ldots, d$, and $\mathrm{TV}(\cdot)$ denotes the total variation operator. The discrete (5), (9) and functional (11) representations are equivalent in that $\hat f$ and $\hat\theta$ match at all grid locations $x \in Z_d$.

Aside from shedding light on the structure of KTF solutions, the functional optimization problem in (11) is of practical importance: the function $\hat f$ is defined over all of $[0,1]^d$ (as opposed to $\hat\theta$, which is of course only defined on the grid $Z_d$), and thus we may use it to interpolate the KTF estimate to non-grid locations. It is not clear to us that a functional representation as in (11) (or even a sensible interpolation strategy) is available for GTF on $d$-dimensional grids.

3 Upper bounds on estimation error

In this section, we assume that $d = 2$, and derive upper bounds on the estimation error of GTF and KTF for 2d grids. Upper bounds for generalized lasso estimators were studied in [31], and we will leverage one of their key results, which is based on what these authors call incoherence of the left singular vectors of the penalty operator $\Delta$. A slightly refined version of this result is stated below.

Theorem 1 (Theorem 6 in [31]). Suppose that $\Delta \in \mathbb{R}^{r \times n}$ has rank $q$, and denote by $\xi_1 \leq \ldots \leq \xi_q$ its nonzero singular values. Also let $u_1, \ldots, u_q$ be the corresponding left singular vectors. Assume that these vectors, except for the first $i_0$, are incoherent, meaning that for a constant $\mu \geq 1$,
\[
\|u_i\|_\infty \leq \mu/\sqrt{n}, \quad i = i_0+1, \ldots, q.
\]
Then for $\lambda \asymp \mu \sqrt{(\log r / n) \sum_{i=i_0+1}^{q} \xi_i^{-2}}$, the generalized lasso estimator $\hat\theta$ in (5) satisfies
\[
\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\Bigg( \frac{\mathrm{nullity}(\Delta)}{n} + \frac{i_0}{n} + \frac{\mu}{n} \sqrt{\frac{\log r}{n} \sum_{i=i_0+1}^{q} \frac{1}{\xi_i^2}} \cdot \|\Delta\theta_0\|_1 \Bigg).
\]
For GTF and KTF, we will apply this result, balancing an appropriate choice of $i_0$ against the partial sum of reciprocal squared singular values $\sum_{i=i_0+1}^{q} \xi_i^{-2}$. The main challenge, as we will see, is in establishing incoherence of the singular vectors.

Error bounds for graph trend filtering. The authors in [31] have already used Theorem 1 (their Theorem 6) to derive error rates for GTF on 2d grids. However, their results (specifically, their Corollary 8) can be refined using a tighter upper bound for the partial sum term $\sum_{i=i_0+1}^{q} \xi_i^{-2}$. No real further tightening is possible, since, as we show later, the results below match the minimax lower bound in rate, up to log factors.

Theorem 2. Assume that $d = 2$. For $k = 0$, $C_n = \|\Delta^{(1)}\theta_0\|_1$ (i.e., $C_n$ equal to the TV of $\theta_0$, as in (2)), and $\lambda \asymp \log n$, the GTF estimator in (5), (6) (i.e., the TV denoising estimator in (3)) satisfies
\[
\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\Big( \frac{1}{n} + \frac{\log n}{n}\, C_n \Big).
\]
For any integer $k \geq 1$, $C_n = \|\Delta^{(k+1)}\theta_0\|_1$ and $\lambda \asymp n^{\frac{k}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{-\frac{k}{k+2}}$, GTF satisfies
\[
\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\Big( \frac{1}{n} + n^{-\frac{2}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{\frac{2}{k+2}} \Big).
\]
Remark 1. The result for $k = 0$ in Theorem 2 was essentially already established by [11] (a small difference is that the above rate is sharper by a factor of $\log n$; though to be fair, [11] also take into account $\ell_0$ sparsity). It is interesting to note that the case $k = 0$ appears to be quite special, in that the GTF estimator, i.e., the TV denoising estimator, is adaptive to the underlying smoothness parameter $C_n$ (the prescribed choice of tuning parameter $\lambda \asymp \log n$ does not depend on $C_n$).

The technique for upper bounding $\sum_{i=i_0+1}^{q} \xi_i^{-2}$ in the proof of Theorem 2 can be roughly explained as follows. The GTF operator $\Delta^{(k+1)}$ on a 2d grid has squared singular values
\[
\Big( 4\sin^2\frac{\pi(i_1-1)}{2N} + 4\sin^2\frac{\pi(i_2-1)}{2N} \Big)^{k+1}, \quad i_1, i_2 = 1, \ldots, N.
\]
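This spectrum is easy to check numerically: for $k$ even the squared singular values of $\Delta^{(k+1)} = D L^{k/2}$ are the eigenvalues of $L^{k+1}$, so it suffices to verify the base case, namely that the eigenvalues of the 2d grid Laplacian are sums of 1d path graph Laplacian eigenvalues $4\sin^2(\pi(i-1)/2N)$. A quick NumPy verification (our own, for illustration):

```python
import numpy as np

def path_laplacian(N):
    # Laplacian of the path graph on N nodes: L1 = D1^T D1
    D1 = np.diff(np.eye(N), axis=0)
    return D1.T @ D1

def grid_laplacian_eigs(N):
    # Laplacian of the N x N grid, L = kron(I, L1) + kron(L1, I), and its spectrum
    L1 = path_laplacian(N)
    L = np.kron(np.eye(N), L1) + np.kron(L1, np.eye(N))
    return np.sort(np.linalg.eigvalsh(L))

def closed_form_eigs(N):
    # 4 sin^2(pi (i1-1) / (2N)) + 4 sin^2(pi (i2-1) / (2N)), i1, i2 = 1, ..., N
    lam = 4 * np.sin(np.pi * np.arange(N) / (2 * N)) ** 2
    return np.sort((lam[:, None] + lam[None, :]).ravel())
```

Raising these eigenvalues to the power $k+1$ then gives the displayed squared singular values of $\Delta^{(k+1)}$ for any $k$.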
We can upper bound the sum of reciprocal squared singular values by an integral over $[0,1]^2$, make use of the inequality $\sin x \geq x/2$ for small enough $x$, and then switch to polar coordinates to calculate the integral (similar to [11], in the analysis of TV denoising). The arguments needed to verify incoherence of the left singular vectors of $\Delta^{(k+1)}$ are themselves somewhat delicate, but were already given in [31].

Error bounds for Kronecker trend filtering. In comparison to the GTF case, the application of Theorem 1 to KTF is a much more difficult task, because (to the best of our knowledge) the KTF operator $\tilde\Delta^{(k+1)}$ does not admit closed-form expressions for its singular values and vectors. This is true in any dimension (i.e., even for $d = 1$, where KTF reduces to univariate trend filtering). As it turns out, the singular values can be handled with a relatively straightforward application of the Cauchy interlacing theorem. It is establishing the incoherence of the singular vectors that proves to be the real challenge. This is accomplished by leveraging specialized approximation bounds for the eigenvectors of Toeplitz matrices from [2].

Theorem 3. Assume that $d = 2$. For $k = 0$, since KTF reduces to GTF with $k = 0$ (and to TV denoising), it satisfies the result stated in the first part of Theorem 2. For any integer $k \geq 1$, $C_n = \|\tilde\Delta^{(k+1)}\theta_0\|_1$ and $\lambda \asymp n^{\frac{k}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{-\frac{k}{k+2}}$, the KTF estimator in (5), (9) satisfies
\[
\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\Big( \frac{1}{n} + n^{-\frac{2}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{\frac{2}{k+2}} \Big).
\]
The results in Theorems 2 and 3 match, in terms of their dependence on $n$, $k$, $d$, and the smoothness parameter $C_n$. As we will see in the next section, the smoothness classes defined by the GTF and KTF operators are similar, though not exactly the same, and each of GTF and KTF is minimax rate optimal with respect to its own smoothness class, up to log factors.

Beyond 2d?
To analyze GTF and KTF on grids of dimension $d \geq 3$, we would need to establish incoherence of the left singular vectors of the GTF and KTF operators. This should be possible by extending the arguments given in [31] (for GTF) and in the proof of Theorem 3 (for KTF), and is left to future work.

4 Lower bounds on estimation error

We present lower bounds on the minimax estimation error over smoothness classes defined by the operators from GTF (6) and KTF (9), denoted
\[
T^k_d(C_n) = \{\theta \in \mathbb{R}^n : \|\Delta^{(k+1)}\theta\|_1 \leq C_n\}, \tag{12}
\]
\[
\tilde T^k_d(C_n) = \{\theta \in \mathbb{R}^n : \|\tilde\Delta^{(k+1)}\theta\|_1 \leq C_n\}, \tag{13}
\]
respectively (where the subscripts mark the dependence on the dimension $d$ of the underlying grid graph). Before we derive such lower bounds, we examine embeddings of (the discretization of) the class of Hölder smooth functions into the GTF and KTF classes, both to understand the nature of these new classes and to define what we call a "canonical" scaling for the radius parameter $C_n$.

Embedding of Hölder spaces and canonical scaling. Given an integer $k \geq 0$ and $L > 0$, recall that the Hölder class $H(k+1, L; [0,1]^d)$ contains $k$ times differentiable functions $f : [0,1]^d \to \mathbb{R}$, such that for all integers $\alpha_1, \ldots, \alpha_d \geq 0$ with $\alpha_1 + \cdots + \alpha_d = k$,
\[
\Big| \frac{\partial^k f(x)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} - \frac{\partial^k f(z)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} \Big| \leq L \|x - z\|_2, \quad \text{for all } x, z \in [0,1]^d.
\]
To compare Hölder smoothness with the GTF and KTF classes defined in (12), (13), we discretize the class $H(k+1, L; [0,1]^d)$ by considering function evaluations over the grid $Z_d$, defining
\[
H^{k+1}_d(L) = \big\{\theta \in \mathbb{R}^n : \theta(x) = f(x),\ x \in Z_d, \text{ for some } f \in H(k+1, L; [0,1]^d)\big\}. \tag{14}
\]
Now we ask: how does the (discretized) Hölder class in (14) compare to the GTF and KTF classes in (12), (13)? Beginning with a comparison to KTF, fix $\theta \in H^{k+1}_d(L)$, corresponding to evaluations of $f \in H(k+1, L; [0,1]^d)$, and consider a point $x \in Z_d$ that is away from the boundary. Then the KTF penalty at $x$ is
\[
\big| \big(D_{x_j^{k+1}}\theta\big)(x) \big| = \big| \big(D_{x_j^k}\theta\big)(x + e_j/N) - \big(D_{x_j^k}\theta\big)(x) \big| \leq N^{-k} \Big| \frac{\partial^k}{\partial x_j^k} f(x + e_j/N) - \frac{\partial^k}{\partial x_j^k} f(x) \Big| + N^{-k} \delta(N) \leq L N^{-k-1} + c L N^{-k-1}. \tag{15}
\]
In the second line above, we define $\delta(N)$ to be the sum of absolute errors in the discrete approximations to the partial derivatives (i.e., the error in approximating $\partial^k f(x)/\partial x_j^k$ by $N^k (D_{x_j^k}\theta)(x)$, and similarly at $x + e_j/N$). In the third line, we use Hölder smoothness to upper bound the first term, and we use standard numerical analysis (details in the supplement) for the second term to ensure that $\delta(N) \leq cL/N$ for a constant $c > 0$ depending only on $k$. Summing the bound in (15) over $x \in Z_d$, as appropriate, gives a uniform bound on the KTF penalty at $\theta$, and leads to the next result.

Lemma 3. For any integers $k \geq 0$, $d \geq 1$, the (discretized) Hölder and KTF classes defined in (14), (13) satisfy
\[
H^{k+1}_d(L) \subseteq \tilde T^k_d\big(cLn^{1-(k+1)/d}\big),
\]
where $c > 0$ is a constant depending only on $k$.

This lemma has three purposes. First, it provides some supporting evidence that the KTF class is an interesting smoothness class to study, as it shows that the KTF class contains (discretizations of) Hölder smooth functions, which are a cornerstone of classical nonparametric regression theory. In fact, this containment is strict, and the KTF class contains more heterogeneous functions as well. Second, it leads us to define what we call the canonical scaling $C_n \asymp n^{1-(k+1)/d}$ for the radius of the KTF class (13). This will be helpful for interpreting our minimax lower bounds in what follows; at this scaling, note that we have $H^{k+1}_d(1) \subseteq \tilde T^k_d(C_n)$. Third and finally, it gives us an easy way to establish lower bounds on the minimax estimation error over KTF classes, by invoking well-known results on minimax rates for Hölder classes. This will be described shortly.

As for GTF, calculations similar to (15) are possible, but complications ensue for $x$ on the boundary of the grid $Z_d$. Importantly, unlike the KTF penalty, the GTF penalty includes discrete derivatives at the boundary, and these complications have serious consequences, as stated next.

Lemma 4.
For any integers $k, d \geq 1$, there are elements in the (discretized) Hölder class $H^{k+1}_d(1)$ in (14) that do not lie in the GTF class $T^k_d(C_n)$ in (12) for arbitrarily large $C_n$.

This lemma reveals a very subtle drawback of GTF caused by the use of discrete derivatives at the boundary of the grid. The fact that GTF classes do not contain (discretized) Hölder classes makes them seem less natural (and perhaps, in a sense, less interesting) than KTF classes. In addition, it means that we cannot use standard minimax theory for Hölder classes to establish lower bounds on the estimation error over GTF classes. However, as we will see next, we can construct lower bounds for GTF classes via another (more purely geometric) embedding strategy; interestingly, the resulting rates match the Hölder rates, suggesting that, while GTF classes do not contain all (discretized) Hölder functions, they do contain "enough" of these functions to admit the same lower bound rates.

Minimax rates for GTF and KTF classes. Following from classical minimax theory for Hölder classes [14, 29], and Lemma 3, we have the following result for the minimax rates over KTF classes.

Theorem 4. For any integers $k \geq 0$, $d \geq 1$, the KTF class defined in (13) has minimax estimation error satisfying
\[
R\big(\tilde T^k_d(C_n)\big) = \Omega\big(n^{-\frac{2d}{2k+2+d}}\, C_n^{\frac{2d}{2k+2+d}}\big).
\]
For GTF classes, we use a different strategy: we embed an ellipse, then rotate the parameter space and embed a hypercube, leading to the following result.

Theorem 5. For any integers $k \geq 0$, $d \geq 1$, the GTF class defined in (12) has minimax estimation error satisfying
\[
R\big(T^k_d(C_n)\big) = \Omega\big(n^{-\frac{2d}{2k+2+d}}\, C_n^{\frac{2d}{2k+2+d}}\big).
\]
Several remarks are in order.

Remark 2. Plugging the canonical scaling $C_n \asymp n^{1-(k+1)/d}$ into Theorems 4 and 5, we see that $R(\tilde T^k_d(C_n)) = \Omega(n^{-\frac{2k+2}{2k+2+d}})$ and $R(T^k_d(C_n)) = \Omega(n^{-\frac{2k+2}{2k+2+d}})$, both matching the usual rate for the Hölder class $H^{k+1}_d(1)$. For KTF, this should be expected, as its lower bound is constructed via the Hölder embedding given in Lemma 3.
But for GTF, it may come as somewhat of a surprise: despite the fact that it does not embed a Hölder class, as shown in Lemma 4, the GTF class shares the same rate, suggesting it still contains something like the "hardest" Hölder smooth signals.

Remark 3. For $d = 2$ and all $k \geq 0$, we can certify that the lower bound rate in Theorem 4 is tight, modulo log factors, by comparing it to the upper bound in Theorem 3. Likewise, we can certify that the lower bound rate in Theorem 5 is tight, up to log factors, by comparing it to the upper bound in Theorem 2. For $d \geq 3$, the lower bound rates in Theorems 4 and 5 will not be tight for some values of $k$. For example, when $k = 0$, at the canonical scaling $C_n \asymp n^{1-1/d}$, the lower bound rate (given by either theorem) is $n^{-2/(2+d)}$; however, [22] prove that the minimax error of the TV class scales (up to log factors) as $n^{-1/d}$ for $d \geq 2$, so we see there is a departure in the rates for $d \geq 3$.

Figure 2: Illustration of the two higher-order TV classes, namely the GTF and KTF classes, as they relate to the (discretized) Hölder class. The horizontally/vertically checkered region denotes the part of the Hölder class not contained in the GTF class. As explained in Section 4, this is due to the fact that the GTF operator penalizes discrete derivatives on the boundary of the grid graph. The diagonally checkered region (also colored in blue) denotes the part of the Hölder class contained in the GTF class. The minimax lower bound rates we derive for the GTF class in Theorem 5 match the well-known Hölder rates, suggesting that this region is actually sizeable and contains the "hardest" Hölder smooth signals.

In general, we conjecture that the Hölder embedding for the KTF class (and the ellipse embedding for GTF) will deliver tight lower bound rates, up to log factors, when $k$ is large enough compared to $d$.
This would have interesting implications for adaptivity to smoother signals (see the next remark); a precise study is left to future work, along with tight minimax lower bounds for all $k, d$.

Remark 4. Again by comparing Theorems 3 and 4, as well as Theorems 2 and 5, we find that, for $d = 2$ and all $k \geq 0$, KTF is rate optimal for the KTF smoothness class and GTF is rate optimal for the GTF smoothness class, modulo log factors. We conjecture that this will continue to hold for all $d \geq 3$, which will be examined in future work. Moreover, an immediate consequence of Theorem 3 and the Hölder embedding in Lemma 3 is that KTF adapts automatically to Hölder smooth signals, i.e., it achieves a rate (up to log factors) of $n^{-(k+1)/(k+2)}$ over $H^{k+1}_2(1)$, matching the well-known minimax rate for this more homogeneous Hölder class. It is not clear that GTF shares this property.

5 Discussion

In this paper, we studied two natural higher-order extensions of the TV estimator on $d$-dimensional grid graphs. The first was graph trend filtering (GTF) as defined in [31], applied to grids; the second was a new Kronecker trend filtering (KTF) method, built with the special (Euclidean-like) structure of grids in mind. GTF and KTF exhibit some similarities, but differ in important ways. Notably, the notion of smoothness defined using the KTF operator is somewhat more natural, and is a strict generalization of the standard notion of Hölder smoothness (in the sense that the KTF smoothness class strictly contains a Hölder class of an appropriate order). This is not true for the notion of smoothness defined using the GTF operator. Figure 2 gives an illustration. When $d = 2$, we derived tight upper bounds on the estimation error achieved by the GTF and KTF estimators, tight in the sense that these upper bounds match in rate (modulo log factors) the lower bounds on the minimax estimation errors for the GTF and KTF classes.
We constructed the lower bound for the KTF class by leveraging the fact that it embeds a Hölder class; for the GTF class, we used a different (more geometric) embedding. While these constructions proved to be tight for $d = 2$ and all $k \geq 0$, we suspect this will no longer be the case in general, when $d$ is large enough relative to $k$. We will examine this in future work, along with upper bounds for GTF and KTF when $d \geq 3$. Another important consideration for future work is the minimax linear rate over GTF and KTF classes, i.e., the minimax rate when we restrict our attention to linear estimators. We anticipate that a gap will exist between minimax linear and nonlinear rates for all $k, d$ (as it does for $k = 0$, as shown in [22]). This would, e.g., provide some rigorous backing to the claim that the KTF class is larger than its embedded Hölder class (the latter having matching minimax linear and nonlinear rates).

Acknowledgements. We thank Sivaraman Balakrishnan for helpful discussions regarding minimax rates for Hölder classes on grids. JS was supported by NSF Grant DMS-1712996. VS, YW, and RT were supported by NSF Grants DMS-1309174 and DMS-1554123.

References

[1] Alvaro Barbero and Suvrit Sra. Modular proximal optimization for multidimensional total-variation regularization. arXiv: 1411.0589, 2014.
[2] Johan M. Bogoya, Albrecht Bottcher, Sergei M. Grudsky, and Egor A. Maximenko. Eigenvectors of Hermitian Toeplitz matrices with smooth simple-loop symbols. Linear Algebra and its Applications, 493:606–637, 2016.
[3] Kristian Bredies, Karl Kunisch, and Thomas Pock. Total generalized variation. SIAM Journal on Imaging Sciences, 3(3):492–526, 2010.
[4] Antonin Chambolle and Jerome Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84:288–307, 2009.
[5] Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and related problems.
Numerische Mathematik, 76(2):167–188, 1997.
[6] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40:120–145, 2011.
[7] Laurent Condat. A direct algorithm for 1d total variation denoising. HAL: 00675043, 2012.
[8] David L. Donoho and Iain M. Johnstone. Minimax estimation via wavelet shrinkage. Annals of Statistics, 26(8):879–921, 1998.
[9] Zaid Harchaoui and Celine Levy-Leduc. Multiple change-point estimation with a total variation penalty. Journal of the American Statistical Association, 105(492):1480–1493, 2010.
[10] Holger Hoefling. A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4):984–1006, 2010.
[11] Jan-Christian Hutter and Philippe Rigollet. Optimal rates for total variation denoising. Annual Conference on Learning Theory, 29:1115–1146, 2016.
[12] Nicholas Johnson. A dynamic programming algorithm for the fused lasso and l0-segmentation. Journal of Computational and Graphical Statistics, 22(2):246–260, 2013.
[13] Seung-Jean Kim, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. ℓ1 trend filtering. SIAM Review, 51(2):339–360, 2009.
[14] Aleksandr P. Korostelev and Alexandre B. Tsybakov. Minimax Theory of Image Reconstruction. Springer, 2003.
[15] Arne Kovac and Andrew Smith. Nonparametric regression on a graph. Journal of Computational and Graphical Statistics, 20(2):432–447, 2011.
[16] Enno Mammen and Sara van de Geer. Locally adaptive regression splines. Annals of Statistics, 25(1):387–413, 1997.
[17] Oscar Hernan Madrid Padilla, James Sharpnack, James Scott, and Ryan J. Tibshirani. The DFS fused lasso: Linear-time denoising over general graphs. arXiv: 1608.03384, 2016.
[18] Christiane Poschl and Otmar Scherzer. Characterization of minimizers of convex regularization functionals. In Frames and Operator Theory in Analysis and Signal Processing, volume 451, pages 219–248.
AMS eBook Collections, 2008.
[19] Alessandro Rinaldo. Properties and refinements of the fused lasso. Annals of Statistics, 37(5):2922–2952, 2009.
[20] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
[21] Veeranjaneyulu Sadhanala and Ryan J. Tibshirani. Additive models via trend filtering. arXiv: 1702.05037, 2017.
[22] Veeranjaneyulu Sadhanala, Yu-Xiang Wang, and Ryan J. Tibshirani. Total variation classes beyond 1d: Minimax rates, and the limitations of linear smoothers. Advances in Neural Information Processing Systems, 29, 2016.
[23] James Sharpnack, Alessandro Rinaldo, and Aarti Singh. Sparsistency via the edge lasso. International Conference on Artificial Intelligence and Statistics, 15, 2012.
[24] Gabriel Steidl, Stephan Didas, and Julia Neumann. Splines in higher order TV regularization. International Journal of Computer Vision, 70(3):214–255, 2006.
[25] Wesley Tansey and James Scott. A fast and flexible algorithm for the graph-fused lasso. arXiv: 1505.06475, 2015.
[26] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67(1):91–108, 2005.
[27] Ryan J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. Annals of Statistics, 42(1):285–323, 2014.
[28] Ryan J. Tibshirani and Jonathan Taylor. The solution path of the generalized lasso. Annals of Statistics, 39(3):1335–1371, 2011.
[29] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[30] Yu-Xiang Wang, Alexander Smola, and Ryan J. Tibshirani. The falling factorial basis and its statistical applications. International Conference on Machine Learning, 31, 2014.
[31] Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan J. Tibshirani. Trend filtering on graphs. Journal of Machine Learning Research, 17(105):1–41, 2016.
Robust Conditional Probabilities

Yoav Wald, School of Computer Science and Engineering, Hebrew University, yoav.wald@mail.huji.ac.il
Amir Globerson, The Blavatnik School of Computer Science, Tel-Aviv University, gamir@mail.tau.ac.il

Abstract

Conditional probabilities are a core concept in machine learning. For example, optimal prediction of a label Y given an input X corresponds to maximizing the conditional probability of Y given X. A common approach to inference tasks is learning a model of conditional probabilities. However, these models are often based on strong assumptions (e.g., log-linear models), and hence their estimate of conditional probabilities is not robust and is highly dependent on the validity of their assumptions. Here we propose a framework for reasoning about conditional probabilities without assuming anything about the underlying distributions, except knowledge of their second order marginals, which can be estimated from data. We show how this setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured prediction. Finally, we apply them to semi-supervised deep learning, obtaining results competitive with variational autoencoders.

1 Introduction

In classification tasks the goal is to predict a label Y for an object X. Assuming that the joint distribution of these two variables is p∗(x, y), optimal prediction¹ corresponds to returning the label y that maximizes the conditional probability p∗(y|x). Thus, being able to reason about conditional probabilities is fundamental to machine learning and probabilistic inference. In the fully supervised setting, one can sidestep the task of estimating conditional probabilities by directly learning a classifier in a discriminative fashion. However, in unsupervised or semi-supervised settings, a reliable estimate of the conditional distributions becomes important.
For example, consider a self-training [17, 31] or active learning setting. In both scenarios, the learner has a set of unlabeled samples and it needs to choose which ones to tag. Given an unlabeled sample x, if we could reliably conclude that p∗(y|x) is close to 1 for some label y, we could easily decide whether to tag x or not. Intuitively, an active learner would prefer not to tag x, while a self-training algorithm would tag it. There are of course many approaches to "modelling" conditional distributions, from logistic regression to conditional random fields. However, these do not come with any guarantees of approximation to the true underlying conditional distributions of p∗, and thus cannot be used to reliably reason about them. This is due to the fact that such models make assumptions about the conditionals (e.g., conditional independence or parametric form), which are unlikely to be satisfied in practice.

As an illustrative example for our motivation and setup, consider a set of n binary variables X1, ..., Xn whose distribution we are interested in. Suppose we have enough data to obtain the joint marginals, P[Xi = xi, Xj = xj], of pairs i, j in a set E. If (1, 2) ∈ E and we concluded that P[X1 = 1 | X2 = 1] = 1, this lets us reason about many other probabilities. For example, we know that P[X1 = 1 | X2 = 1, X3 = x3, . . . , Xn = xn] = 1 for any setting of the x3, . . . , xn variables. This is a simple but powerful observation, as it translates knowledge about probabilities over small subsets to robust estimates of conditional probability over large subsets. Now, what happens when P[X1 = 1 | X2 = 1] = 0.99? In other words, what can we say about P[X1 = 1 | X2 = 1, . . . , Xn = xn] given information about the probabilities P[Xi = xi, Xj = xj]?

¹In the sense of minimizing prediction error.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
As we show here, it is still possible to reason about such conditional probabilities even under this partial knowledge. Motivated by the above, we propose a novel model-free approach for reasoning about conditional probabilities. Specifically, we shall show how conditional probabilities can be lower bounded when the only assumption made is that certain low-order marginals of the distribution are known. One of the surprising outcomes of our analysis is that these lower bounds can be calculated efficiently, and often have an elegant closed form. Finally, we show how these bounds can be used in a semi-supervised setting, obtaining results that are competitive with variational autoencoders [11].

2 Problem Setup

We begin by defining notations to be used in what follows. Let X denote a vector of random variables X1, . . . , Xn, which are the features, and let Y denote labels. If we have a single label we will denote it by Y; otherwise, a multivariate label will be denoted by Y1, . . . , Yr. X, Y are generated by an unknown underlying distribution p∗(X, Y). All variables are discrete (i.e., can take on a finite set of values). Here we will assume that although we do not know p∗, we have access to some of its low-order marginals, such as those of a single feature and a label:
\[
\mu_i(x_i, y) = \sum_{\bar x_1, \ldots, \bar x_n \,:\, \bar x_i = x_i} p^*(\bar x_1, \ldots, \bar x_n, y).
\]
Similarly, we may have access to the set of pairwise marginals µij(xi, xj, y) for all i, j ∈ E, where the set E corresponds to the edges of a graph G (see also [7]). Denote the set of all such marginals by µ. For simplicity we assume the marginals are exact. Generally they are of course only approximate, but concentration bounds can be used to quantify this accuracy as a function of data size. Furthermore, most of the methods described here can be extended to inexact marginals (e.g., see [6] for an approach that can be applied here).
Since µ does not uniquely specify a distribution p∗, we will be interested in the set of all distributions that attain these marginals. Denote this set by P(µ), namely:
\[
\mathcal{P}(\mu) = \Big\{ p \in \Delta : \sum_{\bar x_1, \ldots, \bar x_n \,:\, \bar x_i = x_i} p(\bar x_1, \ldots, \bar x_n, y) = \mu_i(x_i, y) \;\; \forall i \Big\} \tag{1}
\]
where ∆ is the probability simplex of the appropriate dimension. More generally, one may consider some vector function f : X, Y → R^d and its expected value according to p∗, denoted by a = E_{p∗}[f(X, Y)]. Then the corresponding set of distributions is:
\[
\mathcal{P}(a) = \{ p \in \Delta : \mathbb{E}_p[f(X, Y)] = a \}.
\]
Since marginals are expectations of indicator random variables [30], this generalizes the notation given above.

2.1 The Robust Conditionals Problem

Our approach is to reason about conditional distributions using only the fact that p∗ ∈ P(µ). Our key goal is to lower bound these conditionals, since this will allow us to conclude that certain labels are highly likely in cases where the lower bound is large. We shall also be interested in upper and lower bounding joint probabilities, since these will play a key role in bounding the conditionals. Our goal is thus to solve the following optimization problems:
\[
\min_{p \in \mathcal{P}(\mu)} p(x, y), \qquad \max_{p \in \mathcal{P}(\mu)} p(x, y), \qquad \min_{p \in \mathcal{P}(\mu)} p(y \mid x). \tag{2}
\]
In all three problems, the constraint set is linear in p. However, note that p is specified by an exponential number of variables (one per assignment x1, . . . , xn, y), and thus it is not feasible to plug these constraints into an LP solver. In terms of the objective, the min and max problems are linear, and the conditional is fractional linear. In what follows we show how all three problems can be solved efficiently for tree-shaped graphs.

3 Related Work

The problem of reasoning about a distribution based on its expected values has a long history, with many beautiful mathematical results. An early example is the classical Chebyshev inequality, which bounds the tail of a distribution given its first and second moments.
This was significantly extended in the Chebyshev-Markov-Stieltjes inequality [2]. More recently, various generalized Chebyshev inequalities have been developed [4, 22, 27] and some further results tying moments with bounds on probabilities have been shown (e.g., [18]). A typical statement of these is that several moments are given, and one seeks the minimum measure of some set S under any distribution that agrees with the moments. As [4] notes, most of these problems are NP-hard, with isolated cases of tractability. Such inequalities have been used to obtain minimax optimal linear classifiers in [14]. The moment problems we consider here are very different from those considered previously, in terms of the finite support we require and our focus on bounding probabilities and conditional probabilities of assignments. The above approaches consider worst-case bounds on probabilities of events for distributions in P(a). A different approach is to pick a particular distribution in P(a) as an approximation (or model) of p∗. The most common choice here is the maximum entropy distribution in P(a). Such log-linear models have found widespread use in statistics and machine learning. In particular, most graphical models can be viewed as distributions of this type (e.g., see [12, 13]). However, probabilities given by these models cannot be related to the true probabilities in any sense (e.g., upper or lower bound). This is where our approach markedly differs from entropy-based assumptions. Another approach to reduce modeling assumptions is robust optimization, where data and certain model parameters are assumed not to be known precisely, and optimality is sought in a worst-case adversarial setting. This approach has been applied to machine learning in various settings (e.g., see [32, 16]), establishing close links to regularization. None of these approaches considers bounding probabilities as is our focus here.
Finally, another elegant moment approach is that based on kernel mean embedding [23, 24]. In this approach, one maps a distribution into a set of expected values of a set of functions (possibly infinite). The key observation is that this mean embedding lies in an RKHS, and hence many operations can be done implicitly. Most of the applications of this idea assume that the set of functions is rich enough to fully specify the distribution (i.e., characteristic kernels [25]). The focus is thus different from ours, where moments are not assumed to be fully informative, and the set P(a) contains many possible distributions. It would however be interesting to study possible uses of RKHS methods in our setting.

4 Calculating Robust Conditional Probabilities

The optimization problems in Eq. (2) are linear programs (LPs) and fractional LPs, where the number of variables scales exponentially with n. Yet, as we show in this section and Section 5, it turns out that in many non-trivial cases they can be efficiently solved. Our focus below is on the case where the set of edges E corresponding to the pairwise marginals forms a tree-structured graph. The tree structure assumption is common in the literature on graphical models, although here we do not make an inductive assumption on the generating distribution (i.e., we make none of the conditional independence assumptions that are implied by tree-structured graphical models). In the following sections we study solutions of robust conditional probabilities under the tree assumption. We will also discuss some extensions to the cyclic case. Finally, note that although the derivations here are for pairwise marginals, these can be extended to the non-pairwise case by considering clique-trees [e.g., see 30]. Pairs are used here to allow a clearer presentation. In what follows, we show that the conditional lower bound has a simple structure as stated in Theorem 4.1.
This result does not immediately suggest an efficient algorithm since its denominator includes an exponentially sized LP. Next, in Section 4.2 we show how this LP can be reduced to polynomial size, resulting in an efficient algorithm for the lower bound. Finally, in Section 5 we show that in certain cases there is no need to use a general-purpose LP solver and the problem can be solved either in closed form or via combinatorial algorithms. Detailed proofs are provided in the supplementary file.

4.1 From Conditional Probabilities To Maximum Probabilities with Exclusion

The main result of this section will reduce calculation of the robust conditional probability for p(y | x) to one of maximizing the probability of all labels other than y. This reduction by itself will not allow for efficient calculation of the desired conditional probabilities, as the new problem is also a large LP that needs to be solved. Still, the result will take us one step further towards a solution, as it reveals the probability mass a minimizing distribution p will assign to x, y. This part of the solution is related to a result from [8], where the authors derive the solution of $\min_{p \in \mathcal{P}(\mu)} p(x, y)$. They prove that under the tree assumption this problem has a simple closed-form solution, given by the functional I(x, y ; µ):

$$I(x, y ; \mu) = \Big[\, \sum_i (1 - d_i)\,\mu_i(x_i, y) + \sum_{ij \in E} \mu_{ij}(x_i, x_j, y) \,\Big]_+ . \quad (3)$$

Here [·]_+ denotes the ReLU function [z]_+ = max{z, 0} and d_i is the degree of node i in G. It turns out that robust conditional probabilities will assign the event x, y its minimal possible probability as given in Eq. (3). Moreover, they will assign all other labels their maximum possible probability. This is indeed the behaviour one may expect from a robust bound; we formalize it in the main result for this part:

Theorem 4.1 Let µ be a vector of tree-structured pairwise marginals. Then

$$\min_{p \in \mathcal{P}(\mu)} p(y \mid x) = \frac{I(x, y ; \mu)}{I(x, y ; \mu) + \max_{p \in \mathcal{P}(\mu)} \sum_{\bar y \neq y} p(x, \bar y)}. \quad (4)$$
The proof of this theorem is rather technical and we leave it for the supplementary material. We note that the above result also applies to the "structured-prediction" setting where y is multivariate and we also assume knowledge of marginals µ(y_i, y_j). In this case, the expression for I(x, y ; µ) will also include edges between y_i variables, and incorporate their degrees in the graph. The important implication of Theorem 4.1 is that it reduces the minimum conditional problem to that of probability maximization with an assignment exclusion. Namely:

$$\max_{p \in \mathcal{P}(\mu)} \sum_{\bar y \neq y} p(x, \bar y). \quad (5)$$

Although this is still a problem with an exponential number of variables, we show in the next section that it can be solved efficiently.

4.2 Minimizing and Maximizing Probabilities

To provide an efficient solution for Eq. (5), we turn to a class of joint probability bounding problems. Assume we constrain each variable X_i and Y_j to a subset $\bar X_i, \bar Y_j$ of its domain and would like to reason about the probability of this constrained set of joint assignments:

$$U = \big\{\, (x, y) \mid x_i \in \bar X_i,\ y_j \in \bar Y_j \ \ \forall i \in [n],\ j \in [r] \,\big\}. \quad (6)$$

Under this setting, an efficient algorithm for solving

$$\max_{p \in \mathcal{P}(\mu)} \sum_{u \in U \setminus (x, y)} p(u)$$

will also solve Eq. (5). By the results of the last section, we will then also have an algorithm that calculates robust conditional probabilities. To see that this is indeed the case, assume we are given an assignment (x, y). Then setting $\bar X_i = \{x_i\}$ for all features and $\bar Y_j = \{1, \dots, |Y_j|\}$ for labels (i.e., U does not restrict labels) gives exactly Eq. (5). To derive the algorithm, we will find a compact representation of the LP, with a polynomial number of variables and constraints. The result is obtained by using tools from the literature on Graphical Models. It shows how to formulate probability maximization problems over U as LPs constrained by the local marginal polytope [30].
Its definition in our setting slightly deviates from the standard one, as it does not require that probabilities sum to 1:

Definition 1 The set of locally consistent pseudo-marginals over U is defined as:

$$\mathcal{M}_L(U) = \Big\{\, \tilde\mu \;\Big|\; \sum_{x_i \in \bar X_i} \tilde\mu_{ij}(x_i, x_j) = \tilde\mu_j(x_j) \ \ \forall (i, j) \in E,\ x_j \in \bar X_j \,\Big\}.$$

The partition function of $\tilde\mu$, $Z(\tilde\mu)$, is given by $\sum_{x_i \in \bar X_i} \tilde\mu_i(x_i)$.

The following theorem states that solving Eq. (5) is equivalent to solving an LP over $\mathcal{M}_L(U)$ with additional constraints.

Theorem 4.2 Let U be a universe of assignments as defined in Eq. (6), x ∈ U, and µ a vector of tree-structured pairwise marginals. Then the values of the following problems:

$$\max_{p \in \mathcal{P}(\mu)} \sum_{u \in U} p(u), \qquad \max_{p \in \mathcal{P}(\mu)} \sum_{u \in U \setminus (x, y)} p(u),$$

are equal (respectively) to:

$$\max_{\substack{\tilde\mu \in \mathcal{M}_L(U) \\ \tilde\mu \le \mu}} Z(\tilde\mu), \qquad \max_{\substack{\tilde\mu \in \mathcal{M}_L(U) \\ \tilde\mu \le \mu,\ I(x, y ;\, \tilde\mu) \le 0}} Z(\tilde\mu). \quad (7)$$

These LPs involve a polynomial number of constraints and variables, and thus can be solved efficiently. A proof of this result can be obtained by exploiting properties of functions that decompose over trees. In the supplementary material, we provide a proof similar to that given in [30] to show equality of the marginal and local-marginal polytopes in tree models. To conclude this section, we restate the main result: the robust conditional probability problem Eq. (2) can be solved in polynomial time by combining Theorems 4.1 and 4.2. As a by-product of this derivation we also presented efficient tools for bounding answers to a large class of probabilistic queries. While this is not the focus of the current paper, these tools may be useful in probabilistic modelling, where we often combine estimates of low-order marginals with assumptions on the data-generating process. Bounds like the ones presented in this section give a quantitative estimate of the uncertainty that is induced by the data and circumvented by such assumptions.

5 Closed Form Solutions and Combinatorial Algorithms

The results of the previous section imply that the minimum conditional can be found by solving a poly-sized LP.
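As a sanity check on spaces small enough to enumerate, the exponential-size LPs of Eq. (2) can also be written down explicitly. Below is a minimal sketch; the toy distribution, the variable layout, and the use of `scipy` are our own illustration, not the paper's implementation:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy space: two binary features and a binary label -> 8 joint atoms (x1, x2, y).
atoms = list(itertools.product([0, 1], [0, 1], [0, 1]))

# A ground-truth p* used only to generate the marginals mu; the LPs see p*
# only through those marginals.
rng = np.random.default_rng(0)
p_star = rng.dirichlet(np.ones(len(atoms)))

def marginal(i, xi, y):
    # mu_i(xi, y): marginalize p* over all features other than i.
    return sum(pk for pk, a in zip(p_star, atoms) if a[i] == xi and a[2] == y)

# Equality constraints defining P(mu): all single-feature/label marginals match,
# and the probabilities sum to one.
A_eq, b_eq = [], []
for i in (0, 1):
    for xi in (0, 1):
        for y in (0, 1):
            A_eq.append([float(a[i] == xi and a[2] == y) for a in atoms])
            b_eq.append(marginal(i, xi, y))
A_eq.append([1.0] * len(atoms))
b_eq.append(1.0)

# Min and max of p(x1=0, x2=0, y=1) over all p in P(mu), as in Eq. (2).
t = atoms.index((0, 0, 1))
c = np.zeros(len(atoms)); c[t] = 1.0
lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
assert lo - 1e-9 <= p_star[t] <= hi + 1e-9  # p* itself lies in P(mu)
```

Swapping in pairwise-marginal constraints, or the fractional objective for the conditional, recovers the other problems in Eq. (2); the point of the reductions above is precisely to avoid this exponential enumeration.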
Although this results in polynomial runtime, it is interesting to improve as much as possible on the complexity of this calculation. One reason is that applying the bounds might require solving them repeatedly within some larger learning problem. For instance, in classification tasks it may be necessary to solve Eq. (4) for each sample in the dataset. An even more demanding procedure will come up in our experimental evaluation, where we learn features that result in high confidence under our bounds. There, we need to solve Eq. (4) over mini-batches of training data just to calculate a gradient at each training iteration. Since using an LP solver in these scenarios is impractical, we next derive more efficient solutions for some special cases of Eq. (4).

5.1 Closed Form for Multiclass Problems

The multiclass setting is a special case of Eq. (4) where y is a single label variable (e.g., a digit label in MNIST with values y ∈ {0, . . . , 9}). The solution of course depends on the type of marginals provided in P(µ). Here we will assume that we have access to joint marginals of the label y and pairs of features x_i, x_j corresponding to edges ij ∈ E of a graph G. We note that we can obtain similar results for the cases where some additional "unlabeled" statistics µ_ij(x_i, x_j) are known. It turns out that in both cases Eq. (5) has a simple solution. Here we write it for the case without unlabeled statistics. The following lemma is based on a result stating that $\max_{p \in \mathcal{P}(\mu)} p(x) = \min_{ij} \mu_{ij}(x_i, x_j)$, which we prove in the supplementary material.

Lemma 5.1 Let x ∈ X and µ a vector of tree-structured pairwise marginals. Then

$$\min_{p \in \mathcal{P}(\mu)} p(y \mid x) = \frac{I(x, y ; \mu)}{I(x, y ; \mu) + \sum_{\bar y \neq y} \min_{ij} \mu_{ij}(x_i, x_j, \bar y)}. \quad (8)$$

5.2 Combinatorial Algorithms and Connection to Maximum Flow Problems

In some cases, fast algorithms for the optimization problem in Eq. (5) can be derived by exploiting a tight connection of our problems to the Max-Flow problem.
The problems are also closely related to the weighted Set-Cover problem. To observe the connection to the latter, consider an instance of Set-Cover defined as follows. The universe is all assignments x. Sets are defined for each i, j, x_i, x_j and are denoted by $S_{ij,x_i,x_j}$. The set $S_{ij,x_i,x_j}$ contains all assignments $\bar x$ whose values at i, j are x_i, x_j. Moreover, the set $S_{ij,x_i,x_j}$ has weight $w(S_{ij,x_i,x_j}) = \mu_{ij}(x_i, x_j)$. Note that the number of items in each set is exponential, but the number of sets is polynomial. Now consider using these sets to cover some set of assignments U with the minimum possible weight. It turns out that under the tree structure assumption, this problem is closely related to the problem of maximizing probabilities.

Lemma 5.2 Let U be a set of assignments and µ a vector of tree-structured marginals. Then

$$\max_{p \in \mathcal{P}(\mu)} \sum_{u \in U} p(u) \quad (9)$$

has the same value as the standard LP relaxation [28] of the Set-Cover problem above.

The connection to Set-Cover may not give a path to efficient algorithms, but it does illuminate some of the results presented earlier. It is simple to verify that $\min_{ij} \mu_{ij}(x_i, x_j, \bar y)$ is the weight of a cover of x, ȳ, while Eq. (3) equals one minus the weight of a set that covers all assignments but x, y. A connection that we may exploit to obtain more efficient algorithms is to Max-Flow. When the graph defined by E is a chain, we show in the supplementary material that the value of Eq. (9) can be found by solving a flow problem on a simple network. We note that using the same construction, Eq. (5) turns out to be Max-Flow under a budget constraint [1]. This may prove very beneficial for our goals, as it allows for efficient calculation of the robust conditionals we are interested in. Our conjecture is that this connection goes beyond chain graphs, but we leave this for exploration in future work. The proofs for the results in this section may also be found in the supplementary material.
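To illustrate how cheap the closed forms above are, the sketch below evaluates Eq. (3) and the multiclass bound of Eq. (8) directly; the dict-based marginal layout and the function name are our own invention for this example:

```python
def robust_conditional(x, y, labels, mu_single, mu_pair, edges):
    """Lower bound on p(y | x) via the closed form of Eq. (8).

    Assumed (hypothetical) marginal layout:
      mu_single[i][(xi, lab)]        -> mu_i(xi, lab)
      mu_pair[(i, j)][(xi, xj, lab)] -> mu_ij(xi, xj, lab), (i, j) in edges
    The graph over `edges` must be a tree.
    """
    degree = [0] * len(x)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1

    def I(lab):
        # Eq. (3): closed-form minimum of p(x, lab) over P(mu).
        s = sum((1 - degree[i]) * mu_single[i][(x[i], lab)] for i in range(len(x)))
        s += sum(mu_pair[(i, j)][(x[i], x[j], lab)] for i, j in edges)
        return max(s, 0.0)  # the [.]_+ (ReLU) operation

    num = I(y)
    # On trees, max_{p in P(mu)} p(x, y') = min_{ij} mu_ij(x_i, x_j, y').
    denom = num + sum(min(mu_pair[(i, j)][(x[i], x[j], yb)] for i, j in edges)
                      for yb in labels if yb != y)
    return num / denom if denom > 0 else 0.0  # degenerate case: trivial bound

# Sanity check on a uniform toy distribution (two binary features, binary label):
edges = [(0, 1)]
mu_single = {i: {(xi, y): 0.25 for xi in (0, 1) for y in (0, 1)} for i in (0, 1)}
mu_pair = {(0, 1): {(a, b, y): 0.125 for a in (0, 1) for b in (0, 1) for y in (0, 1)}}
bound = robust_conditional((0, 0), 1, [0, 1], mu_single, mu_pair, edges)
assert abs(bound - 0.5) < 1e-12  # matches the true conditional p(y | x) = 0.5
```

For the uniform distribution the bound is tight; in general it is only a valid lower bound on the true conditional, since the true distribution is one member of P(µ).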
6 Experiments To evaluate the utility of our bounds, we consider their use in settings of semi-supervised deep learning and structured prediction. For the bounds to be useful, the marginal distributions need to be sufficiently informative. In some datasets, the raw features already provide such information, as we show in Section 6.3. In other cases, such as images, a single raw feature (i.e., a pixel) does not provide sufficient information about the label. These cases are addressed in Section 6.1 where we show how to learn new features which do result in meaningful bounds. Using deep networks to learn these features turns out to be an effective method for semi-supervised settings, reaching results close to those demonstrated by Variational Autoencoders [11]. It would be interesting to use such feature learning methods for structured prediction too; however this requires incorporation of the max-flow algorithm into the optimization loop, and we defer this to future work. 6.1 Deep Semi-Supervised Learning A well known approach to semi-supervised learning is to optimize an empirical loss, while adding another term that measures prediction confidence on unlabeled data [9, 10]. Let us describe one such method and how to adapt it to use our bounds. Entropy Regularizer: Consider training a deep neural network where the last layer has n neurons z1, . . . , zn connected to a softmax layer of size |Y | (i.e. the number of labels), and the loss we use is a cross entropy loss. Denote the weights of the softmax layer by W ∈Rn×|Y |. Given an input x, define the softmax distribution at the output of the network as: ˜py = softmaxy(⟨Wy, z⟩) , (10) where Wy is the y’th row of W. The min-entropy regularizer [9] adds an entropy term βH(˜py) to the loss, for each unlabeled x in the training set. Plugging in Robust Conditional Probabilities: We suggest a simple adaptation of this method that uses our bounds. Let us remove the softmax layer and set the activations of the neurons z1, . . 
. , z_n to a sigmoid activation. Let Z_1, . . . , Z_n denote random variables that take on the values of the output neurons; these variables will be used as features in our bounds (in previous sections we referred to features as X_i; here we switch to Z_i since X_i is understood as the raw features of the problem, e.g., the pixel values of the image). Since our bounds apply to discrete variables, while z_1, . . . , z_n are real-valued, we use a smoothed version of our bounds.

Loss Function and Smoothed Bounds: A smoothed version of the marginals µ is calculated by considering Z_i as an indicator variable (e.g., the probability p(Z_i = 1) would just be the average of the Z_i values). Then the smoothed marginal ¯µ(z_i = 1, y) is the average of the z_i values over all training data labeled with y. In our experiments we used all the labeled data to estimate ¯µ at each iteration. The smoothed version of I(z, y; µ), which we shall call ¯I(z, y; µ), is then calculated with Eq. (3), replacing µ with ¯µ and the ReLU operator with a softplus. To define a loss function we take a distribution over all labels:

$$\tilde p_y = \mathrm{softmax}_y\!\left( \frac{\bar I(z, y ; \bar\mu)}{\bar I(z, y ; \bar\mu) + \sum_{\bar y \neq y} \min_{ij} \bar\mu_{ij}(z_i, z_j, \bar y)} \right). \quad (11)$$

This is very similar to the standard distribution taken in a neural net, but it uses our bounds to make a more robust estimate of the conditionals. Then we use the exact same loss as the entropy regularizer: a cross-entropy loss for labeled data with an added entropy term for unlabeled instances.

6.1.1 Algorithm Settings and Baselines

We implemented the min-entropy regularizer and our proposed method using a multilayer perceptron (MLP) with fully connected layers and a ReLU activation at each layer (except a sigmoid at the last layer for our method). In our experiments we used hidden layers of sizes 1000, 500, 50 (so we learn 50 features Z_1, . . . , Z_50).
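The robust softmax of Eq. (11) can be sketched as follows. This is our own reading, not the paper's code: ¯I and the excluded-label mass are assumed precomputed from the smoothed marginals ¯µ, and the softplus shown is the smooth stand-in for the ReLU of Eq. (3):

```python
import numpy as np

def softplus(t):
    # Numerically stable softplus, the smooth replacement for [.]_+ in Eq. (3).
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

def robust_softmax(I_bar, excl_mass):
    """Sketch of Eq. (11): class distribution from smoothed robust bounds.

    I_bar[y]     : smoothed bound I-bar(z, y; mu-bar) for each label y
    excl_mass[y] : sum over y' != y of min_ij mu-bar_ij(z_i, z_j, y')
    Both arrays are assumed precomputed; their names are ours.
    """
    ratios = I_bar / (I_bar + excl_mass)   # robust conditional lower bounds
    e = np.exp(ratios - ratios.max())      # numerically stable softmax
    return e / e.sum()

# Toy call: label 0 has a larger robust bound, so it gets more mass.
p = robust_softmax(np.array([0.2, 0.05]), np.array([0.3, 0.45]))
assert abs(p.sum() - 1.0) < 1e-12 and p[0] > p[1]
```

In training, `I_bar` would itself be computed from Eq. (3) with `softplus` in place of the ReLU, so that gradients flow through the bound.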
We also add ℓ2 regularization on the weights of the softmax layer for the entropy regularizer, since otherwise the entropy can always be driven to zero in the separable case. We also experimented with adding a hinge loss as a regularizer (as in Transductive SVM [10]), but omit it from the comparison because it did not yield a significant improvement over the entropy regularizer. We also compare our results with those obtained by Variational Autoencoders and Ladder Networks. Although we do not expect to reach accuracies as high as these methods, getting comparable numbers with a simple regularizer like the one we suggest (compared to the elaborate techniques used in these works) shows that the use of our bounds results in a very powerful method.

6.2 MNIST Dataset

We trained the above models on the MNIST dataset, using 100 and 1000 labeled samples (see [11] for a similar setup). We set the two regularization parameters required for the entropy regularizer and the one required for our minimum-probability regularizer with five-fold cross-validation. We used 10% of the training data as a validation set and compared error rates on the 10^4 samples of the test set. Results are shown in Figure 1. They show that in the 1000-sample case we are slightly outperformed by the VAE, and for 100 samples we lose by about 1%. Ladder networks outperform the other baselines.

N    | Ladder [21]  | VAE [11]     | Robust Probs | Entropy       | MLP+Noise
100  | 1.06 (±0.37) | 3.33 (±0.14) | 4.44 (±0.22) | 18.93 (±0.54) | 21.74 (±1.77)
1000 | 0.84 (±0.08) | 2.40 (±0.02) | 2.48 (±0.03) | 3.15 (±0.03)  | 5.70 (±0.20)

Figure 1: Error rates of several semi-supervised learning methods on the MNIST dataset with few training samples.

Accuracy vs. Coverage Curves: In self-training and co-training methods, a classifier adds its most confident predictions to the training set and then repeats training. A crucial factor in the success of such methods is the error in the predictions we add to the training pool.
Classifiers that use confidence over unlabelled data as a regularizer are natural choices for base classifiers in such a setting. Therefore an interesting comparison to make is the accuracy we would get over the unlabeled data if the classifier had to choose its k most confident predictions. We plot this curve as a function of k for the entropy regularizer and our min-probabilities regularizer. Samples in the unlabelled training data are sorted in descending order according to confidence. Confidence for a sample in the entropy-regularized MLP is calculated based on the value of the logit that the predicted label received in the output layer. For the robust-probabilities classifier, the confidence of a sample is the minimum conditional probability the predicted label received. As can be observed in Figure 2, our classifier ranks its predictions better than the entropy-based method. We attribute this to our classifier being trained to give robust bounds under minimal assumptions.

Figure 2: Accuracy for the k most confident samples in unlabelled data. The blue curve shows results for the Robust Probabilities Classifier, green for the Entropy Regularizer. Confidence is measured by conditional probabilities and logits, respectively.

6.3 Multilabel Structured Prediction

As mentioned earlier, in the structured prediction setting it is more difficult to learn features that yield high certainty. We therefore provide a demonstration of our method on a dataset where the raw features are relatively informative. The Genbase dataset, taken from [26], is a protein classification multilabel dataset. It has 662 instances, divided into a training set of 463 samples and a test set of 199; each sample has 1185 binary features and 27 binary labels.
We ran a structured-SVM algorithm, taken from [19], to obtain a classifier that outputs a labelling ŷ for each x in the dataset (the error of the resulting classifier was 2%). We then used our probabilistic bounds to rank the classifier's predictions by their robust conditional probabilities. The bounds were calculated based on the set of marginals µ_ij(x_i, y_j), estimated from the data for each pair of a feature and a label X_i, Y_j. The graph corresponding to these marginals is not a tree, and we handled it as discussed in Section 7. The value of our bounds was above 0.99 for 85% of the samples, indicating high certainty that the classifier is correct. Indeed, only 0.59% of these 85% were actually errors. The remaining errors made by the classifier were assigned a robust probability of 0 by our bounds, indicating a low level of certainty.

7 Discussion

We presented a method for bounding conditional probabilities of a distribution based only on knowledge of its low-order marginals. Our results can be viewed as a new type of moment problem, bounding a key component of machine learning systems, namely the conditional distribution. As we show, calculating these bounds raises many challenging optimization questions, which surprisingly result in closed-form expressions in some cases. While the results were limited to the tree-structured case, some of the methods have natural extensions to the cyclic case that still result in robust estimations. For instance, the local marginal polytope in Eq. (7) can be taken over a cyclic structure and still give a lower bound on maximum probabilities. Also in the presence of cycles, it is possible to find the spanning tree that induces the best bound on Eq. (3) using a maximum spanning tree algorithm. Plugging these solutions into Eq. (4) results in a tighter approximation, which we used in our experiments. Our method can be extended in many interesting directions.
Here we addressed the case of discrete random variables, although we also showed in our experiments how continuous features can be handled in this framework. It will be interesting to calculate bounds on conditional probabilities given expected values of continuous random variables. In this case, sums-of-squares characterizations play a key role [15, 20, 3], and their extension to the conditional case is an exciting challenge. It will also be interesting to study how these bounds can be used in the context of unsupervised learning. One natural approach here would be to learn constraint functions such that the lower bound is maximized. Finally, we plan to study the implications of our approach to diverse learning settings, from self-training to active learning and safe reinforcement learning.

Acknowledgments: This work was supported by the ISF Centers of Excellence grant 2180/15, and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).

References
[1] R. K. Ahuja and J. B. Orlin. A capacity scaling algorithm for the constrained maximum flow problem. Networks, 25(2):89–98, 1995.
[2] N. I. Akhiezer. The classical moment problem: and some related questions in analysis, volume 5. Oliver & Boyd, 1965.
[3] A. Benavoli, A. Facchini, D. Piga, and M. Zaffalon. SOS for bounded rationality. Proceedings of the Tenth International Symposium on Imprecise Probability: Theories and Applications, 2017.
[4] D. Bertsimas and I. Popescu. Optimal inequalities in probability theory: A convex optimization approach. SIAM Journal on Optimization, 15(3):780–804, 2005.
[5] R. G. Cowell, P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic networks and expert systems: Exact computational methods for Bayesian networks. Springer Science & Business Media, 2006.
[6] M. Dudík, S. J. Phillips, and R. E. Schapire. Maximum entropy density estimation with generalized regularization and an application to species distribution modeling.
Journal of Machine Learning Research, 8(Jun):1217–1260, 2007.
[7] E. Eban, E. Mezuman, and A. Globerson. Discrete Chebyshev classifiers. In Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR Workshop and Conference Proceedings Volume 32, pages 1233–1241, 2014.
[8] M. Fromer and A. Globerson. An LP view of the M-best MAP problem. In NIPS, volume 22, pages 567–575, 2009.
[9] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pages 529–536, 2005.
[10] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, June 27-30, 1999, pages 200–209, 1999.
[11] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3581–3589, 2014.
[12] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[14] G. R. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3(Dec):555–582, 2002.
[15] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
[16] R. Livni, K. Crammer, and A. Globerson. A simple geometric interpretation of SVM using stochastic adversaries.
In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 722–730. JMLR: W&CP, 2012.
[17] D. McClosky, E. Charniak, and M. Johnson. Effective self-training for parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Association for Computational Linguistics, 2006.
[18] E. Miranda, G. De Cooman, and E. Quaeghebeur. The Hausdorff moment problem under finite additivity. Journal of Theoretical Probability, 20(3):663–693, 2007.
[19] A. C. Müller and S. Behnke. PyStruct - learning structured prediction in Python. Journal of Machine Learning Research, 15:2055–2060, 2014.
[20] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming, 96(2):293–320, 2003.
[21] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3546–3554, 2015.
[22] J. E. Smith. Generalized Chebychev inequalities: theory and applications in decision analysis. Operations Research, 43(5):807–825, 1995.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.
[24] L. Song, K. Fukumizu, and A. Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Processing Magazine, 30(4):98–111, 2013.
[25] B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389–2410, July 2011.
[26] G. Tsoumakas, E. Spyromitros-Xioufis, J. Vilcek, and I. Vlahavas.
MULAN: A Java library for multi-label learning. Journal of Machine Learning Research, 12:2411–2414, 2011.
[27] L. Vandenberghe, S. Boyd, and K. Comanor. Generalized Chebyshev bounds via semidefinite programming. SIAM Review, 49(1):52–64, 2007.
[28] V. V. Vazirani. Approximation Algorithms. Springer Science & Business Media, 2013.
[29] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the performance of the max-product algorithm and its generalizations. Statistics and Computing, 14(2):143–166, 2004.
[30] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[31] D. Weiss, C. Alberti, M. Collins, and S. Petrov. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323–333, Beijing, China, July 2015. Association for Computational Linguistics.
[32] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. J. Mach. Learn. Res., 10:1485–1510, December 2009.
Attention Is All You Need Ashish Vaswani∗ Google Brain avaswani@google.com Noam Shazeer∗ Google Brain noam@google.com Niki Parmar∗ Google Research nikip@google.com Jakob Uszkoreit∗ Google Research usz@google.com Llion Jones∗ Google Research llion@google.com Aidan N. Gomez∗† University of Toronto aidan@cs.toronto.edu Łukasz Kaiser∗ Google Brain lukaszkaiser@google.com Illia Polosukhin∗‡ illia.polosukhin@gmail.com

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.

1 Introduction

Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [31, 21, 13]. ∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea.
Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.
†Work performed while at Google Brain. ‡Work performed while at Google Research.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_{t−1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [18] and conditional computation [26], while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 16].
In all but a few cases [22], however, such attention mechanisms are used in conjunction with a recurrent network. In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [11]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 22, 23, 19].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [28].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [14, 15] and [8].

3 Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29]. Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive [9], consuming the previously generated symbols as additional input when generating the next.

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

Figure 1: The Transformer - model architecture.

3.1 Encoder and Decoder Stacks

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

Decoder: The decoder is also composed of a stack of N = 6 identical layers.
In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

    Attention(Q, K, V) = softmax(Q K^T / √d_k) V    (1)

The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k.
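Equation (1) maps directly onto a few lines of array code. A minimal NumPy sketch for illustration (the function name, shapes, and random inputs are our own, not from the paper's codebase):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, as in Eq. (1)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 64))   # 5 queries, d_k = 64
K = rng.normal(size=(7, 64))   # 7 keys
V = rng.normal(size=(7, 32))   # 7 values, d_v = 32
out = scaled_dot_product_attention(Q, K, V)   # shape (5, 32)
```

Passing the identity matrix as V recovers the attention weights themselves, which is a convenient way to check that each row is a proper probability distribution.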
Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of d_k [3]. We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.⁴ To counteract this effect, we scale the dot products by 1/√d_k.

⁴To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.

3.2.2 Multi-Head Attention

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2. Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

    MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
    where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v} and W^O ∈ R^{h·d_v×d_model}. In this work we employ h = 8 parallel attention layers, or heads.
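A shape-level sketch of the projections above (NumPy; the random initialization and the absence of batching are simplifying assumptions, not the paper's actual setup):

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Scaled dot-product attention, Eq. (1)
    s = Q @ K.T / np.sqrt(Q.shape[-1])
    s = np.exp(s - s.max(axis=-1, keepdims=True))
    return (s / s.sum(axis=-1, keepdims=True)) @ V

def multi_head(Q, K, V, WQ, WK, WV, WO):
    """Concat(head_1, ..., head_h) W^O with head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)."""
    heads = [softmax_attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(WQ, WK, WV)]
    return np.concatenate(heads, axis=-1) @ WO

d_model, h = 512, 8
d_k = d_v = d_model // h                      # 64, as in the paper
rng = np.random.default_rng(0)
WQ = rng.normal(size=(h, d_model, d_k)) * 0.1
WK = rng.normal(size=(h, d_model, d_k)) * 0.1
WV = rng.normal(size=(h, d_model, d_v)) * 0.1
WO = rng.normal(size=(h * d_v, d_model)) * 0.1
x = rng.normal(size=(10, d_model))            # 10 positions
out = multi_head(x, x, x, WQ, WK, WV, WO)     # self-attention; shape (10, 512)
```

Because each head works in d_model/h dimensions, the h matrix multiplications together cost about the same as one full-dimensional attention, matching the remark that follows in the text.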
For each of these we use d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.

3.2.3 Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways:

• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [31, 2, 8].

• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

3.3 Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

    FFN(x) = max(0, x W_1 + b_1) W_2 + b_2    (2)

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1.
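Eq. (2) is just two affine maps with a ReLU between them, applied row by row. A minimal sketch (the weights here are random placeholders, not trained parameters):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """max(0, x W1 + b1) W2 + b2, applied independently at each position (Eq. 2)."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)
W1 = rng.normal(size=(d_model, d_ff)) * 0.02
b1 = np.zeros(d_ff)
W2 = rng.normal(size=(d_ff, d_model)) * 0.02
b2 = np.zeros(d_model)
x = rng.normal(size=(10, d_model))   # 10 positions
y = ffn(x, W1, b1, W2, b2)           # shape (10, 512)
```

The "position-wise" property means each output row depends only on the corresponding input row, which can be verified by feeding a single position through the same weights.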
The dimensionality of input and output is d_model = 512, and the inner layer has dimensionality d_ff = 2048.

3.4 Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by √d_model.

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

    Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length
    Self-Attention              | O(n² · d)            | O(1)                  | O(1)
    Recurrent                   | O(n · d²)            | O(n)                  | O(n)
    Convolutional               | O(k · n · d²)        | O(1)                  | O(log_k(n))
    Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)

3.5 Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [8]. In this work, we use sine and cosine functions of different frequencies:

    PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid.
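The sinusoids above can be generated in closed form. A small vectorized sketch (assuming an even d_model; the function name is ours):

```python
import numpy as np

def positional_encoding(n_positions, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(n_positions)[:, None]              # (n_positions, 1)
    two_i = np.arange(0, d_model, 2)[None, :]          # the even dimensions 2i
    angles = pos / np.power(10000.0, two_i / d_model)  # (n_positions, d_model/2)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(100, 512)   # same width as the embeddings, so they can be summed
```

At position 0 the sine dimensions are all 0 and the cosine dimensions all 1, and every entry stays in [−1, 1], which keeps the encoding on the same scale as the (rescaled) embeddings.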
The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}. We also experimented with using learned positional embeddings [8] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.

4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.

One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [11]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [31] and byte-pair [25] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.

A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [15], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d²). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.

As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
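The n-versus-d trade-off in the per-layer costs of Table 1 is easy to check numerically. With an illustrative sub-word sentence length and the paper's model width (constants dropped, so these are relative counts only):

```python
# Per-layer operation counts from Table 1 (big-O bodies only, constants dropped).
n, d, k = 50, 512, 3        # sequence length, representation dim, conv kernel size

self_attention = n * n * d        # O(n^2 * d)
recurrent      = n * d * d        # O(n * d^2)
convolutional  = k * n * d * d    # O(k * n * d^2)

# With n < d, self-attention does the least work per layer,
# while also needing only O(1) sequential operations.
print(self_attention, recurrent, convolutional)
```

Here self-attention is roughly an order of magnitude cheaper than the recurrent layer, and the gap grows as d increases relative to n.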
5 Training

This section describes the training regime for our models.

5.1 Training Data and Batching

We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [31]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

5.2 Hardware and Schedule

We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

5.3 Optimizer

We used the Adam optimizer [17] with β_1 = 0.9, β_2 = 0.98 and ε = 10^−9. We varied the learning rate over the course of training, according to the formula:

    lrate = d_model^(−0.5) · min(step_num^(−0.5), step_num · warmup_steps^(−1.5))    (3)

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.

5.4 Regularization

We employ three types of regularization during training:

Residual Dropout: We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.
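The warm-up schedule of Eq. (3) can be sketched in a few lines (the function name is ours):

```python
def lrate(step_num, d_model=512, warmup_steps=4000):
    """Eq. (3): d_model^-0.5 * min(step_num^-0.5, step_num * warmup_steps^-1.5)."""
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

# Linear increase for the first warmup_steps steps, then inverse-sqrt decay;
# the two branches meet exactly at step_num = warmup_steps.
peak = lrate(4000)
```

The min() selects the linear branch for step_num < warmup_steps and the decay branch afterwards, so the schedule is continuous with its maximum at the warm-up boundary.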
Label Smoothing: During training, we employed label smoothing of value ε_ls = 0.1 [30]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.

Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

    Model                           | BLEU EN-DE | BLEU EN-FR | Cost (FLOPs) EN-DE | Cost (FLOPs) EN-FR
    ByteNet [15]                    | 23.75      |            |                    |
    Deep-Att + PosUnk [32]          |            | 39.2       |                    | 1.0 · 10^20
    GNMT + RL [31]                  | 24.6       | 39.92      | 2.3 · 10^19        | 1.4 · 10^20
    ConvS2S [8]                     | 25.16      | 40.46      | 9.6 · 10^18        | 1.5 · 10^20
    MoE [26]                        | 26.03      | 40.56      | 2.0 · 10^19        | 1.2 · 10^20
    Deep-Att + PosUnk Ensemble [32] |            | 40.4       |                    | 8.0 · 10^20
    GNMT + RL Ensemble [31]         | 26.30      | 41.16      | 1.8 · 10^20        | 1.1 · 10^21
    ConvS2S Ensemble [8]            | 26.36      | 41.29      | 7.7 · 10^19        | 1.2 · 10^21
    Transformer (base model)        | 27.3       | 38.1       | 3.3 · 10^18        | 3.3 · 10^18
    Transformer (big)               | 28.4       | 41.0       | 2.3 · 10^19        | 2.3 · 10^19

6 Results

6.1 Machine Translation

On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.

On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate P_drop = 0.1, instead of 0.3.

For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints.
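The Transformer rows of the training-cost column in Table 2 can be reproduced from the hardware and schedule stated in Section 5.2, using the estimate (training time) × (number of GPUs) × (sustained single-precision TFLOPS per GPU), with 9.5 TFLOPS assumed for the P100:

```python
def train_flops(seconds, n_gpus, tflops_per_gpu):
    """Estimated training cost: time * GPUs * sustained single-precision throughput."""
    return seconds * n_gpus * tflops_per_gpu * 1e12

base = train_flops(12 * 3600, 8, 9.5)    # base model: 12 hours on 8 P100s
big = train_flops(3.5 * 86400, 8, 9.5)   # big model: 3.5 days on 8 P100s
# base is ~3.3e18 and big is ~2.3e19, matching the Transformer rows of Table 2
```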
We used beam search with a beam size of 4 and length penalty α = 0.6 [31]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [31]. Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.⁵

⁵We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.

6.2 Model Variations

To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.

In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.

Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.
           N | d_model | d_ff | h  | d_k | d_v | P_drop | ε_ls | steps | PPL (dev) | BLEU (dev) | params ×10^6
    base   6 | 512     | 2048 | 8  | 64  | 64  | 0.1    | 0.1  | 100K  | 4.92      | 25.8       | 65
    (A)      |         |      | 1  | 512 | 512 |        |      |       | 5.29      | 24.9       |
             |         |      | 4  | 128 | 128 |        |      |       | 5.00      | 25.5       |
             |         |      | 16 | 32  | 32  |        |      |       | 4.91      | 25.8       |
             |         |      | 32 | 16  | 16  |        |      |       | 5.01      | 25.4       |
    (B)      |         |      |    | 16  |     |        |      |       | 5.16      | 25.1       | 58
             |         |      |    | 32  |     |        |      |       | 5.01      | 25.4       | 60
    (C)    2 |         |      |    |     |     |        |      |       | 6.11      | 23.7       | 36
           4 |         |      |    |     |     |        |      |       | 5.19      | 25.3       | 50
           8 |         |      |    |     |     |        |      |       | 4.88      | 25.5       | 80
             | 256     |      |    | 32  | 32  |        |      |       | 5.75      | 24.5       | 28
             | 1024    |      |    | 128 | 128 |        |      |       | 4.66      | 26.0       | 168
             |         | 1024 |    |     |     |        |      |       | 5.12      | 25.4       | 53
             |         | 4096 |    |     |     |        |      |       | 4.75      | 26.2       | 90
    (D)      |         |      |    |     |     | 0.0    |      |       | 5.77      | 24.6       |
             |         |      |    |     |     | 0.2    |      |       | 4.95      | 25.5       |
             |         |      |    |     |     |        | 0.0  |       | 4.67      | 25.3       |
             |         |      |    |     |     |        | 0.2  |       | 5.47      | 25.7       |
    (E)    positional embedding instead of sinusoids               |       | 4.92      | 25.7       |
    big    6 | 1024    | 4096 | 16 |     |     | 0.3    |      | 300K  | 4.33      | 26.4       | 213

In Table 3 rows (B), we observe that reducing the attention key size d_k hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [8], and observe nearly identical results to the base model.

7 Conclusion

In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.

For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.

We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.
Making generation less sequential is another research goal of ours.

The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.

Acknowledgements

We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

References

[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
[7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
[8] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.
[9] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[11] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber.
Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[13] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[14] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017.
[16] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[18] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017.
[19] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
[20] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems (NIPS), 2016.
[21] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[22] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016.
[23] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
[24] Ofir Press and Lior Wolf. Using the output embedding to improve language models.
arXiv preprint arXiv:1608.05859, 2016.
[25] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
[26] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[27] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[28] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[32] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
A General Framework for Robust Interactive Learning∗

Ehsan Emamjomeh-Zadeh† and David Kempe‡

Abstract

We propose a general framework for interactively learning models, such as (binary or non-binary) classifiers, orderings/rankings of items, or clusterings of data points. Our framework is based on a generalization of Angluin's equivalence query model and Littlestone's online learning model: in each iteration, the algorithm proposes a model, and the user either accepts it or reveals a specific mistake in the proposal. The feedback is correct only with probability p > 1/2 (and adversarially incorrect with probability 1 − p), i.e., the algorithm must be able to learn in the presence of arbitrary noise. The algorithm's goal is to learn the ground truth model using few iterations.

Our general framework is based on a graph representation of the models and user feedback. To be able to learn efficiently, it is sufficient that there be a graph G whose nodes are the models, and (weighted) edges capture the user feedback, with the property that if s, s* are the proposed and target models, respectively, then any (correct) user feedback s′ must lie on a shortest s–s* path in G. Under this one assumption, there is a natural algorithm, reminiscent of the Multiplicative Weights Update algorithm, which will efficiently learn s* even in the presence of noise in the user's feedback. From this general result, we rederive with barely any extra effort classic results on learning of classifiers and a recent result on interactive clustering; in addition, we easily obtain new interactive learning algorithms for ordering/ranking.

1 Introduction

With the pervasive reliance on machine learning systems across myriad application domains in the real world, these systems frequently need to be deployed before they are fully trained. This is particularly true when the systems are supposed to learn a specific user's (or a small group of users') personal and idiosyncratic preferences.
As a result, we are seeing an increased practical interest in online and interactive learning across a variety of domains. A second feature of the deployment of such systems “in the wild” is that the feedback the system receives is likely to be noisy. Not only may individual users give incorrect feedback, but even if they do not, the preferences — and hence feedback — across different users may vary. Thus, interactive learning algorithms deployed in real-world systems must be resilient to noisy feedback. Since the seminal work of Angluin [2] and Littlestone [14], the paradigmatic application of (noisy) interactive learning has been online learning of a binary classifier when the algorithm is provided with feedback on samples it had previously classified incorrectly. However, beyond (binary or other) classifiers, there are many other models that must be frequently learned in an interactive manner. ∗A full version is available on the arXiv at https://arxiv.org/abs/1710.05422. The present version omits all proofs and several other details and discussions. †Department of Computer Science, University of Southern California, emamjome@usc.edu ‡Department of Computer Science, University of Southern California, dkempe@usc.edu 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Two particularly relevant examples are the following: (1) Learning an ordering/ranking of items is a key part of personalized Web search or other information-retrieval systems (e.g., [12, 18]). The user is typically presented with an ordering of items, and from her clicks or lack thereof, an algorithm can infer items that are in the wrong order. (2) Interactively learning a clustering [6, 5, 4] is important in many application domains, such as interactively identifying communities in social networks or partitioning an image into distinct objects.
The user will be shown a candidate clustering, and can express that two clusters should be merged, or a cluster should be split into two. In all three examples — classification, ranking, and clustering — the interactive algorithm proposes a model4 (a classifier, ranking, or clustering) as a solution. The user then provides — explicitly or implicitly — feedback on whether the model is correct or needs to be fixed/improved. This feedback may be incorrect with some probability. Based on the feedback, the algorithm proposes a new and possibly very different model, and the process repeats. This type of interaction is the natural generalization of Angluin’s equivalence query model [2, 3]. It is worth noting that in contrast to active learning, in interactive learning (which is the focus of this work), the algorithm cannot “ask” direct questions; it can only propose a model and receive feedback in return. The algorithm should minimize the number of user interactions, i.e., the number of times that the user needs to propose fixes. A secondary goal is to make the algorithm’s internal computations efficient as well. The main contribution of this article is a general framework for efficient interactive learning of models (even with noisy feedback), presented in detail in Section 2. We consider the set of all N models as nodes of a positively weighted undirected or directed graph G. The one key property that G must satisfy is the following: (*) If s is a proposed model, and the user (correctly) suggests changing it to s′, then the graph must contain the edge (s, s′); furthermore, (s, s′) must lie on a shortest path from s to the target model s∗ (which is unknown to the algorithm). We show that this single property is enough to learn the target model s∗ using at most log N queries5 to the user, in the absence of noise. When the feedback is correct with probability p > 1/2, the required number of queries gracefully deteriorates to O(log N); the constant depends on p.
We emphasize that the assumption (*) is not an assumption on the user. We do not assume that the user somehow “knows” the graph G and computes shortest paths in order to find a response. Rather, (*) states that G was correctly chosen to model the underlying domain, so that correct answers by the user must in fact have the property (*). To illustrate the generality of our framework, we apply it to ordering, clustering, and classification: 1. For ordering/ranking, each permutation is a node in G; one permutation is the unknown target. If the user can point out only adjacent elements that are out of order, then G is an adjacent transposition “BUBBLE SORT” graph, which naturally has the property (*). If the user can pick any element and suggest that it should precede an entire block of elements it currently follows, then we can instead use an “INSERTION SORT” graph; interestingly, to ensure the property (*), this graph must be weighted. On the other hand, as we show in Section 3, if the user can propose two arbitrary elements that should be swapped, there is no graph G with the property (*). Our framework directly leads to an interactive algorithm that will learn the correct ordering of n items in O(log(n!)) = O(n log n) queries; we show that this bound is optimal under the equivalence query model. 2. For learning a clustering of n items, the user can either propose merging two clusters, or splitting one cluster. In the interactive clustering model of [6, 5, 4], the user can specify that a particular cluster C should be split, but does not give a specific split. We show in Section 4 that there is a weighted directed graph with the property (*); then, if each cluster is from a “small” concept class of size at most M (such as having low VC-dimension), there is an algorithm finding the true clustering in O(k log M) queries, where k is the number of clusters (known ahead of time). 3.
For binary classification, G is simply an n-dimensional hypercube (where n is the number of sample points that are to be classified). As shown in Section 5, one immediately recovers a close variant of standard online learning algorithms within this framework. An extension to classification with more than two classes is very straightforward. 4We avoid the use of the term “concept,” as it typically refers to a binary function, and is thus associated specifically with a classifier. 5Unless specified otherwise, all logarithms are base 2. Due to space limits, all proofs and several other details and discussions are omitted. A full version is available on the arXiv at https://arxiv.org/abs/1710.05422. 2 Learning Framework We define a framework for query-efficient interactive learning of different types of models. Some prototypical examples of models to be learned are rankings/orderings of items, (unlabeled) clusterings of graphs or data points, and (binary or non-binary) classifiers. We denote the set of all candidate models (permutations, partitions, or functions from the hypercube to {0, 1}) by Σ, and individual models6 by s, s′, s∗, etc. We write N = |Σ| for the number of candidate models. We study interactive learning of such models in a natural generalization of the equivalence query model of Angluin [2, 3]. This model is equivalent to the more widely known online learning model of Littlestone [14], but more naturally fits the description of user interactions we follow here. It has also served as the foundation for the interactive clustering model of Balcan and Blum [6] and Awasthi et al. [5, 4]. In the interactive learning framework, there is an unknown ground truth model s∗ to be learned. In each round, the learning algorithm proposes a model s to the user. In response, with probability p > 1/2, the user provides correct feedback.
In the remaining case (i.e., with probability 1 − p), the feedback is arbitrary; in particular, it could be arbitrarily and deliberately misleading. Correct feedback is of the following form: if s = s∗, then the algorithm is told this fact in the form of a user response of s. Otherwise, the user reveals a model s′ ≠ s that is “more similar” to s∗ than s was. The exact nature of “more similar,” as well as the possibly restricted set of suggestions s′ that the user can propose, depend on the application domain. Indeed, the strength of our proposed framework is that it provides strong query complexity guarantees under minimal assumptions about the nature of the feedback; to employ the framework, one merely has to verify that the following assumption holds. Definition 2.1 (Graph Model for Feedback) Define a weighted graph G (directed or undirected) that contains one node for each model s ∈ Σ, and an edge (s, s′) with arbitrary positive edge length ω(s,s′) > 0 if the user is allowed to propose s′ in response to s. (Choosing the lengths of edges is an important part of using the framework.) G may contain additional edges not corresponding to any user feedback. The key property that G must satisfy is the following: (*) If the algorithm proposes s and the ground truth is s∗ ≠ s, then every correct user feedback s′ lies on a shortest path from s to s∗ in G with respect to the lengths ωe. If there are multiple candidate nodes s′, then there is no guarantee on which one the algorithm will be given by the user. 2.1 Algorithm and Guarantees Our algorithms are direct reformulations and slight generalizations of algorithms recently proposed by Emamjomeh-Zadeh et al. [10], which are themselves a significant generalization of the natural “Halving Algorithm” for learning a classifier (e.g., [14]). They studied the search problem as an abstract problem they termed “Binary Search in Graphs,” without discussing any applications.
Our main contribution here is the application of the abstract search problem to a large variety of interactive learning problems, and a framework that makes such applications easy. We begin with the simplest case p = 1, i.e., when the algorithm only receives correct feedback. Algorithm 1 gives essentially best-possible general guarantees [10]. To state the algorithm and its guarantees, we need the notion of an approximate median node of the graph G. First, we denote by N(s, s′) := {s} if s′ = s, and N(s, s′) := {ŝ | s′ lies on a shortest path from s to ŝ} if s′ ≠ s, the set of all models ŝ that are consistent with a user feedback of s′ to a model s. In anticipation of the noisy case, we allow models to be weighted7, and denote the node weights or likelihoods by µ(s) ≥ 0. If feedback is not noisy (i.e., p = 1), all the non-zero node weights are equal. For every subset of models S, we write µ(S) := Σ_{s∈S} µ(s) for the total node weight of the models in S. Now, for every model s, define Φµ(s) := (1/µ(Σ)) · max_{s′≠s, (s,s′)∈G} µ(N(s, s′)) to be the largest fraction (with respect to node weights) of models that could still be consistent with a worst-case response s′ to a proposed model of s. For every subset of models S, we denote by µS the likelihood function that assigns weight 1 to every node s ∈ S and 0 elsewhere. For simplicity of notation, we use ΦS(s) when the node weights are µS. 6When considering specific applications, we will switch to notation more in line with that used for the specific application. 7Edge lengths are part of the definition of the graph, but node weights will be assigned by our algorithm; they basically correspond to likelihoods. The simple key insight of [10] can be summarized and reformulated as the following proposition: Proposition 2.1 ([10], Proofs of Theorems 3 and 14) Let G be a (weighted) directed graph in which each edge e with length ωe is part of a cycle of total edge length at most c · ωe.
Then, for every node weight function µ, there exists a model s such that Φµ(s) ≤ (c − 1)/c. When G is undirected (and hence c = 2), for every node weight function µ, there exists an s such that Φµ(s) ≤ 1/2. In Algorithm 1, we always have uniform node weight for all the models which are consistent with all the feedback received so far, and node weight 0 for models that are inconsistent with at least one response. Prior knowledge about candidates for s∗ can be incorporated by providing the algorithm with the input Sinit ∋ s∗ to focus its search on; in the absence of prior knowledge, the algorithm can be given Sinit = Σ.
Algorithm 1 LEARNING A MODEL WITHOUT FEEDBACK ERRORS (Sinit)
1: S ← Sinit.
2: while |S| > 1 do
3:   Let s be a model with a “small” value of ΦS(s).
4:   Let s′ be the user’s feedback model.
5:   Set S ← S ∩ N(s, s′).
6: return the only remaining model in S.
Line 3 is underspecified as “small.” Typically, an algorithm would choose the s with smallest ΦS(s). But computational efficiency constraints or other restrictions (see Sections 2.2 and 5) may preclude this choice and force the algorithm to choose a suboptimal s. The guarantee of Algorithm 1 is summarized by the following Theorem 2.2. It is a straightforward generalization of Theorems 3 and 14 from [10]. Theorem 2.2 Let N0 = |Sinit| be the number of initial candidate models. If each model s chosen in Line 3 of Algorithm 1 has ΦS(s) ≤ β, then Algorithm 1 finds s∗ using at most log_{1/β} N0 queries. Corollary 2.3 When G is undirected and the optimal s is used in each iteration, β = 1/2 and Algorithm 1 finds s∗ using at most log_2 N0 queries. In the presence of noise, the algorithm is more complicated. The algorithm and its analysis are given in the full version. The performance of the robust algorithm is summarized in Theorem 2.4. Theorem 2.4 Let β ∈ [1/2, 1), define τ = βp + (1 − β)(1 − p), and let N0 = |Sinit|. Assume that log(1/τ) > H(p), where H(p) = −p log p − (1 − p) log(1 − p) denotes the entropy.
(When β = 1/2, this holds for every p > 1/2.) If in each iteration, the algorithm can find a model s with Φµ(s) ≤ β, then with probability at least 1 − δ, the robust algorithm finds s∗ using at most (1 − δ) log N0 / (log(1/τ) − H(p)) + o(log N0) + O(log^2(1/δ)) queries in expectation. Corollary 2.5 When the graph G is undirected and the optimal s is used in each iteration, then with probability at least 1 − δ, the robust algorithm finds s∗ using at most (1 − δ) log_2 N0 / (1 − H(p)) + o(log N0) + O(log^2(1/δ)) queries in expectation. 2.2 Computational Considerations and Sampling Corollaries 2.3 and 2.5 require the algorithm to find a model s with small Φµ(s) in each iteration. In most learning applications, the number N of candidate models is exponential in a natural problem parameter n, such as the number of sample points (classification), or the number of items to rank or cluster. If computational efficiency is a concern, this precludes explicitly keeping track of the set S or the weights µ(s). It also rules out determining the model s to query by exhaustive search over all models that have not yet been eliminated. In some cases, these difficulties can be circumvented by exploiting problem-specific structure. A more general approach relies on Monte Carlo techniques. We show that the ability to sample models s with probability (approximately) proportional to µ(s) (or approximately uniformly from S in the case of Algorithm 1) is sufficient to essentially achieve the results of Corollaries 2.3 and 2.5 with a computationally efficient algorithm. Notice that both in Algorithm 1 and the robust algorithm with noisy feedback (omitted from this version), the node weights µ(s) are completely determined by all the query responses the algorithm has seen so far and the probability p. Theorem 2.6 Let n be a natural measure of the input size and assume that log N is polynomial in n.
Assume that G = (V, E) is undirected8, all edge lengths are integers, and the maximum degree and diameter (both with respect to the edge lengths) are bounded by poly(n). Also assume w.l.o.g. that µ is normalized to be a distribution over the nodes9 (i.e., µ(Σ) = 1). Let 0 ≤ ∆ < 1/4 be a constant, and assume that there is an oracle that — given a set of query responses — runs in polynomial time in n and returns a model s drawn from a distribution µ′ with d_TV(µ, µ′) ≤ ∆. Also assume that there is a polynomial-time algorithm that, given a model s, decides whether s is consistent with every given query response. Then, for every ϵ > 0, in time poly(n, 1/ϵ), an algorithm can find a model s with Φµ(s) ≤ 1/2 + 2∆ + ϵ, with high probability. 3 Application I: Learning a Ranking As a first application, we consider the task of learning the correct order of n elements with supervision in the form of equivalence queries. This task is motivated by learning a user’s preference over web search results (e.g., [12, 18]), restaurant or movie orders (e.g., [9]), or many other types of entities. Using pairwise active queries (“Do you think that A should be ranked ahead of B?”), a learning algorithm could of course simulate standard O(n log n) sorting algorithms; this number of queries is necessary and sufficient. However, when using equivalence queries, the user must be presented with a complete ordering (i.e., a permutation π of the n elements), and the feedback will be a mistake in the proposed permutation. Here, we propose interactive algorithms for learning the correct ranking without additional information or assumptions.10 We first describe results for a setting with simple feedback in the form of adjacent transpositions; we then show a generalization to more realistic feedback of the kind one receives in applications such as search engines.
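Before specializing the framework, the noiseless Algorithm 1 can be sketched end-to-end for unweighted, undirected graphs. This is a brute-force illustration with our own helper names (`bfs_dist`, `consistent`, `phi`, `learn_model`), not the paper's implementation:

```python
# Sketch of Algorithm 1 for unweighted, undirected graphs (our own helper
# names; the paper also covers weighted and directed graphs).
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def consistent(adj, s, sp):
    """N(s, s'): all models t such that s' lies on a shortest s-t path."""
    if sp == s:
        return {s}
    ds, dsp = bfs_dist(adj, s), bfs_dist(adj, sp)
    return {t for t in adj if ds[sp] + dsp[t] == ds[t]}

def phi(adj, S, s):
    """Phi_S(s): worst-case fraction of S surviving a response to s."""
    return max(len(S & consistent(adj, s, sp)) for sp in adj[s]) / len(S)

def learn_model(adj, user_feedback):
    """Algorithm 1: query an (approximate) median until one model is left."""
    S = set(adj)
    queries = 0
    while len(S) > 1:
        s = min(adj, key=lambda v: phi(adj, S, v))  # smallest Phi_S(s)
        queries += 1
        sp = user_feedback(s)
        if sp == s:                                 # user confirms s = s*
            return s, queries
        S &= consistent(adj, s, sp)
    return S.pop(), queries
```

On a path graph this sketch degenerates to binary search: the chosen node is the median of the surviving candidates, so each query at least halves S, matching the log_2 N0 bound of Corollary 2.3.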
3.1 Adjacent Transpositions We first consider “BUBBLE SORT” feedback of the following form: the user specifies that elements i and i + 1 in the proposed permutation π are in the wrong relative order. An obvious correction for an algorithm would be to swap the two elements, and leave the rest of π intact. This algorithm would exactly implement BUBBLE SORT, and thus require Θ(n^2) equivalence queries. Our general framework allows us to easily obtain an algorithm with O(n log n) equivalence queries instead. We define the undirected and unweighted graph GBS as follows: • GBS contains N = n! nodes, one for each permutation π of the n elements; • it contains an edge between π and π′ if and only if π′ can be obtained from π by swapping two adjacent elements. 8It is actually sufficient that for every node weight function µ : V → R+, there exists a model s with Φµ(s) ≤ 1/2. 9For Algorithm 1, µ is uniform over all models consistent with all feedback up to that point. 10For example, [12, 18, 9] map items to feature vectors and assume linearity of the target function(s). Lemma 3.1 GBS satisfies Definition 2.1 with respect to BUBBLE SORT feedback. Hence, applying Corollary 2.3 and Theorem 2.4, we immediately obtain the existence of learning algorithms with the following properties: Corollary 3.2 Assume that in response to each equivalence query on a permutation π, the user responds with an adjacent transposition (or states that the proposed permutation π is correct). 1. If all query responses are correct, then the target ordering can be learned by an interactive algorithm using at most log N = log n! ≤ n log n equivalence queries. 2. If query responses are correct with probability p > 1/2, the target ordering can be learned by an interactive algorithm with probability at least 1 − δ using at most (1 − δ)/(1 − H(p)) · n log n + o(n log n) + O(log^2(1/δ)) equivalence queries in expectation.
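For tiny n, the search over GBS can be run by brute force. The sketch below is our own demo (not the paper's code): the greedy proposal rule is a cheap stand-in for the true median of GBS, and the simulated noiseless user reports the first adjacent pair of the proposal that the target orders the other way; such a response eliminates every permutation agreeing with the proposal on that pair.

```python
# Brute-force halving-style search over the BUBBLE SORT feedback graph
# (illustration only; feasible for tiny n since |S| starts at n!).
from itertools import permutations

def learn_ranking(n, target):
    """Learn `target` (a tuple) from adjacent-transposition feedback."""
    S = set(permutations(range(n)))
    queries = 0
    while True:
        # Greedy proposal: candidate whose worst-case response leaves the
        # fewest surviving permutations (a stand-in for the median of GBS).
        def worst(g):
            return max(sum(1 for s in S if s.index(g[i + 1]) < s.index(g[i]))
                       for i in range(n - 1))
        g = min(S, key=worst)
        queries += 1
        # Simulated user: first adjacent pair of g that target inverts.
        bad = [i for i in range(n - 1)
               if target.index(g[i]) > target.index(g[i + 1])]
        if not bad:                      # no adjacent inversion: g == target
            return g, queries
        a, b = g[bad[0]], g[bad[0] + 1]
        # Keep only permutations that order b before a, as the feedback says.
        S = {s for s in S if s.index(b) < s.index(a)}
```

The target always survives each elimination step while the rejected proposal does not, so the loop terminates with the correct ranking in the noiseless setting.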
Up to constants, the bound of Corollary 3.2 is optimal: Theorem 3.3 shows that Ω(n log n) equivalence queries are necessary in the worst case. Notice that Theorem 3.3 does not immediately follow from the classical lower bound for sorting with pairwise comparisons: while the result of a pairwise comparison always reveals one bit, there are n − 1 different possible responses to an equivalence query, so up to O(log n) bits might be revealed. For this reason, the proof of Theorem 3.3 explicitly constructs an adaptive adversary, and does not rely on a simple counting argument. Theorem 3.3 With adversarial responses, any interactive ranking algorithm can be forced to ask Ω(n log n) equivalence queries. This is true even if the true ordering is chosen uniformly at random, and only the query responses are adversarial. 3.2 Implicit Feedback from Clicks In the context of search engines, it has been argued (e.g., by [12, 18, 1]) that a user’s clicking behavior provides implicit feedback of a specific form on the ranking. Specifically, since users will typically read the search results from first to last, when a user skips some links that appear earlier in the ranking, and instead clicks on a link that appears later, her action suggests that the later link was more informative or relevant. Formally, when a user clicks on the element at index i, but did not previously click on any elements at indices j, j + 1, . . . , i − 1, this is interpreted as feedback that element i should precede all of elements j, j + 1, . . . , i − 1. Thus, the feedback is akin to an “INSERTION SORT” move. (The BUBBLE SORT feedback model is the special case in which j = i − 1 always.) To model this more informative feedback, the new graph GIS has more edges, and the edge lengths are non-uniform. It contains the same N nodes (one for each permutation).
For a permutation π and indices 1 ≤ j < i ≤ n, π_{j←i} denotes the permutation that is obtained by moving the ith element in π before the jth element (and thus shifting elements j, j + 1, . . . , i − 1 one position to the right). In GIS, for every permutation π and every 1 ≤ j < i ≤ n, there is an undirected edge from π to π_{j←i} with length i − j. Notice that for i > j + 1, there is actually no user feedback corresponding to the edge from π_{j←i} to π; however, additional edges are permitted, and Lemma 3.4 establishes that GIS does in fact satisfy the “shortest paths” property. Lemma 3.4 GIS satisfies Definition 2.1 with respect to INSERTION SORT feedback. As in the case of GBS, by applying Corollary 2.3 and Theorem 2.4, we immediately obtain the existence of interactive learning algorithms with the same guarantees as those of Corollary 3.2. Corollary 3.5 Assume that in response to each equivalence query, the user responds with a pair of indices j < i such that element i should precede all elements j, j + 1, . . . , i − 1. 1. If all query responses are correct, then the target ordering can be learned by an interactive algorithm using at most log N = log n! ≤ n log n equivalence queries. 2. If query responses are correct with probability p > 1/2, the target ordering can be learned by an interactive algorithm with probability at least 1 − δ using at most (1 − δ)/(1 − H(p)) · n log n + o(n log n) + O(log^2(1/δ)) equivalence queries in expectation. 3.3 Computational Considerations While Corollaries 3.2 and 3.5 imply interactive algorithms using O(n log n) equivalence queries, they do not guarantee that the internal computations of the algorithms are efficient. The naïve implementation requires keeping track of and comparing likelihoods on all N = n! nodes. When p = 1, i.e., the algorithm only receives correct feedback, it can be made computationally efficient using Theorem 2.6.
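The primitive that Theorem 2.6 asks for here is sampling a permutation consistent with all feedback so far. For tiny n this can be done by brute force; the sketch below (our own names) only illustrates the primitive, not the efficient Markov chain samplers discussed next:

```python
# Brute-force stand-in (small n only) for the sampling primitive behind
# Theorem 2.6: draw a uniformly random linear extension of a partial order.
import random
from itertools import permutations

def linear_extensions(n, before):
    """All orderings of 0..n-1 consistent with pairs (i, j): i before j."""
    return [p for p in permutations(range(n))
            if all(p.index(i) < p.index(j) for i, j in before)]

def sample_extension(n, before, rng=random):
    """Uniformly random consistent permutation (by exhaustive enumeration)."""
    return rng.choice(linear_extensions(n, before))
```

For example, with the single constraint that element 0 precede element 1 among three elements, exactly half of the 3! orderings survive, i.e., three linear extensions.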
To apply Theorem 2.6, it suffices to show that one can efficiently sample a (nearly) uniformly random permutation π consistent with all feedback received so far. Since the feedback is assumed to be correct, the set of all pairs (i, j) such that the user implied that element i must precede element j must be acyclic, and thus must form a partial order. The sampling problem is thus exactly the problem of sampling a linear extension of a given partial order. This is a well-known problem, and a beautiful result of Bubley and Dyer [8, 7] shows that the Karzanov-Khachiyan Markov Chain [13] mixes rapidly. Huber [11] shows how to modify the Markov Chain sampling technique to obtain an exactly (instead of approximately) uniformly random linear extension of the given partial order. For the purpose of our interactive learning algorithm, the sampling results can be summarized as follows: Theorem 3.6 (Huber [11]) Given a partial order over n elements, let L be the set of all linear extensions, i.e., the set of all permutations consistent with the partial order. There is an algorithm that runs in expected time O(n^3 log n) and returns a uniformly random sample from L. The maximum node degree in GBS is n − 1, while the maximum node degree in GIS is O(n^2). The diameter of both GBS and GIS is O(n^2). Substituting these bounds and the bound from Theorem 3.6 into Theorem 2.6, we obtain the following corollary: Corollary 3.7 Both under BUBBLE SORT feedback and INSERTION SORT feedback, if all feedback is correct, there is an efficient interactive learning algorithm using at most log n! ≤ n log n equivalence queries to find the target ordering. The situation is significantly more challenging when feedback could be incorrect, i.e., when p < 1. In this case, the user’s feedback is not always consistent and may not form a partial order. In fact, we prove the following hardness result. Theorem 3.8 There exists a p (depending on n) for which the following holds.
Given a set of user responses, let µ(π) be the likelihood of π given the responses, normalized so that Σ_π µ(π) = 1. Let 0 < ∆ < 1 be any constant. There is no polynomial-time algorithm to draw a sample from a distribution µ′ with d_TV(µ, µ′) ≤ 1 − ∆ unless RP = NP. It should be noted that the value of p in the reduction is exponentially close to 1. In this range, incorrect feedback is so unlikely that with high probability, the algorithm will always see a partial order. It might then still be able to sample efficiently. On the other hand, for smaller values of p (e.g., constant p), sampling approximately from the likelihood distribution might be possible via a metropolized Karzanov-Khachiyan chain or a different approach. This problem is still open. 4 Application II: Learning a Clustering Many traditional approaches for clustering optimize an (explicit) objective function or rely on assumptions about the data generation process. In interactive clustering, the algorithm repeatedly proposes a clustering, and obtains feedback that two proposed clusters should be merged, or a proposed cluster should be split into two. There are n items, and a clustering C is a partition of the items into disjoint sets (clusters) C1, C2, . . .. It is known that the target clustering has k clusters, but in order to learn it, the algorithm can query clusterings with more or fewer clusters as well. The user feedback has the following semantics, as proposed by Balcan and Blum [6] and Awasthi et al. [5, 4]. 1. MERGE(Ci, Cj): Specifies that all items in Ci and Cj belong to the same cluster. 2. SPLIT(Ci): Specifies that cluster Ci needs to be split, but not into which subclusters. Notice that feedback that two clusters be merged, or that a cluster be split (when the split is known), can be considered as adding constraints on the clustering (see, e.g., [21]); depending on whether feedback may be incorrect, these constraints are hard or soft.
We define a weighted and directed graph GUC on all clusterings C. Thus, the number of models is N = B_n ≤ n^n, the nth Bell number. When C′ is obtained by a MERGE of two clusters in C, GUC contains a directed edge (C, C′) of length 2. If C = {C1, C2, . . .} is a clustering, then for each Ci ∈ C, the graph GUC contains a directed edge of length 1 from C to C \ {Ci} ∪ {{v} | v ∈ Ci}. That is, GUC contains an edge from C to the clustering obtained from breaking Ci into singleton clusters of all its elements. While this may not be the “intended” split of the user, we can still associate this edge with the feedback. Lemma 4.1 GUC satisfies Definition 2.1 with respect to MERGE and SPLIT feedback. GUC is directed, and every edge makes up at least a 1/(3n) fraction of the total length of at least one cycle it participates in. Hence, Proposition 2.1 gives an upper bound of (3n − 1)/(3n) on the value of β in each iteration. A more careful analysis exploiting the specific structure of GUC gives us the following: Lemma 4.2 In GUC, for every non-negative node weight function µ, there exists a clustering C with Φµ(C) ≤ 1/2. In the absence of noise in the feedback, Lemmas 4.1 and 4.2 and Theorem 2.2 imply an algorithm that finds the true clustering using log N = log B_n = Θ(n log n) queries. Notice that this is worse than the “trivial” algorithm, which starts with each node as a singleton cluster and always executes the merge proposed by the user, until it has found the correct clustering; hence, this bound is itself rather trivial. Non-trivial bounds can be obtained when clusters belong to a restricted set, an approach also followed by Awasthi and Zadeh [5]. If there are at most M candidate clusters, then the number of clusterings is N0 ≤ M^k. For example, if there is a set system F of VC dimension at most d such that each cluster is in the range space of F, then M = O(n^d) by the Sauer-Shelah Lemma [19, 20].
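The outgoing feedback edges of GUC can be enumerated directly. The sketch below (our own helper name `feedback_edges`; clusters represented as frozensets) builds the MERGE edges of length 2 and the split-into-singletons edges of length 1:

```python
# Feedback edges leaving a clustering in the graph G_UC of Section 4
# (our own helper names; illustration only).
from itertools import combinations

def feedback_edges(clustering):
    """clustering: list of frozensets. Returns (neighbor, length) pairs."""
    edges = []
    # MERGE(C_i, C_j): replace the two clusters by their union (length 2).
    for Ci, Cj in combinations(clustering, 2):
        merged = [C for C in clustering if C not in (Ci, Cj)] + [Ci | Cj]
        edges.append((frozenset(merged), 2))
    # SPLIT(C_i): break C_i into singleton clusters (length 1).
    for Ci in clustering:
        if len(Ci) > 1:
            split = [C for C in clustering if C != Ci] \
                    + [frozenset({v}) for v in Ci]
            edges.append((frozenset(split), 1))
    return edges
```

For the clustering {{0, 1}, {2}} there are exactly two outgoing feedback edges: one merge (to the single cluster {0, 1, 2}, length 2) and one split (to all singletons, length 1).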
Combining Lemmas 4.1 and 4.2 with Theorems 2.2 and 2.4, we obtain the existence of learning algorithms with the following properties: Corollary 4.3 Assume that in response to each equivalence query, the user responds with MERGE or SPLIT. Also, assume that there are at most M different candidate clusters, and the clustering has (at most) k clusters. 1. If all query responses are correct, then the target clustering can be learned by an interactive algorithm using at most log N = O(k log M) equivalence queries. Specifically, when M = O(n^d), this bound is O(kd log n). This result recovers the main result of [5].11 2. If query responses are correct with probability p > 1/2, the target clustering can be learned with probability at least 1 − δ using at most (1 − δ) k log M / (1 − H(p)) + o(k log M) + O(log^2(1/δ)) equivalence queries in expectation. Our framework provides the noise tolerance “for free;” [5] instead obtain results for a different type of noise in the feedback. 5 Application III: Learning a Classifier Learning a binary classifier is the original and prototypical application of the equivalence query model of Angluin [2], which has seen a large amount of follow-up work since (see, e.g., [16, 17]). Naturally, if no assumptions are made on the classifier, then n queries are necessary in the worst case. In general, applications therefore restrict the concept classes to smaller sets, such as assuming that they have bounded VC dimension. We use F to denote the set of all possible concepts, and write M = |F|; when F has VC dimension d, the Sauer-Shelah Lemma [19, 20] implies that M = O(n^d). Learning a binary classifier for n points is an almost trivial application of our framework12. When the algorithm proposes a candidate classifier, the feedback it receives is a point with a corrected label (or the fact that the classifier was correct on all points).
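This feedback loop is exactly the setting of the classic Halving Algorithm that Section 5 recovers. A minimal noiseless sketch over a finite concept class (our own names; note that the majority-vote proposal may be "improper", i.e., lie outside the class, as discussed in Section 6):

```python
# Noiseless Halving Algorithm: learn the labels of n points from a finite
# concept class via equivalence queries (illustration, our own names).
def halving_learn(concepts, user_feedback):
    """concepts: set of {0,1}-label tuples over the same n points.
    user_feedback(c): None if c is the target, else an index where c errs."""
    F = set(concepts)
    n = len(next(iter(F)))
    queries = 0
    while True:
        # Propose the pointwise majority vote of the surviving concepts.
        maj = tuple(int(2 * sum(c[i] for c in F) > len(F)) for i in range(n))
        queries += 1
        i = user_feedback(maj)
        if i is None:
            return maj, queries
        # Concepts agreeing with the majority at point i are also wrong
        # there, so each response eliminates at least half of F.
        F = {c for c in F if c[i] != maj[i]}
```

With M surviving concepts, each incorrect proposal at least halves F, giving the log M query bound of Corollary 5.1 in the noiseless case.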
11In fact, the algorithm in [5] is implicitly computing and querying a node with small Φ in GUC. 12The results extend readily to learning a classifier with k ≥ 2 labels. We define the graph GCL to be the n-dimensional hypercube13 with unweighted and undirected edges between every pair of nodes at Hamming distance 1. Because the distance between two classifiers C, C′ is exactly the number of points on which they disagree, GCL satisfies Definition 2.1. Hence, we can apply Corollary 2.3 and Theorem 2.4 with Sinit equal to the set of all M candidate classifiers, recovering the classic result on learning a classifier in the equivalence query model when feedback is perfect, and extending it to the noisy setting. Corollary 5.1 1. With perfect feedback, the target classifier is learned using log M queries14. 2. When each query response is correct with probability p > 1/2, there is an algorithm learning the true binary classifier with probability at least 1 − δ using at most (1 − δ) log M / (1 − H(p)) + o(log M) + O(log^2(1/δ)) queries in expectation. 6 Discussion and Conclusions We defined a general framework for interactive learning from imperfect responses to equivalence queries, and presented a general algorithm that achieves a small number of queries. We then showed how query-efficient interactive learning algorithms in several domains can be derived with practically no effort as special cases; these include some previously known results (classification and clustering) as well as new results on ranking/ordering. Our work raises several natural directions for future work. Perhaps most importantly, for which domains can the algorithms be made computationally efficient (in addition to query-efficient)? We provided a positive answer for ordering with perfect query responses, but the question is open for ordering when feedback is imperfect.
For classification, when the class of possible classifiers has VC dimension d, the time is O(n^d), which is unfortunately still impractical for real-world values of d. Maass and Turán [15] show how to obtain better bounds specifically when the sample points form a d-dimensional grid; to the best of our knowledge, the question is open when the sample points are arbitrary. The Monte Carlo approach of Theorem 2.6 reduces the problem to sampling a uniformly random hyperplane, where the uniformity is over the partition induced by the hyperplane (rather than some geometric representation). For clustering, even less appears to be known.

It should be noted that our algorithms may incorporate “improper” learning steps: for instance, when trying to learn a hyperplane classifier, the algorithm in Section 5 may propose intermediate classifiers that are not themselves hyperplanes (though the final output is of course a hyperplane classifier). At an increase of a factor O(log d) in the number of queries, we can ensure that all steps are proper for hyperplane learning. An interesting question is whether similar bounds can be obtained for other concept classes, and for other problems (such as clustering).

Finally, our noise model is uniform. An alternative would be that the probability of an incorrect response depends on the type of response. In particular, false positives could be extremely likely, for instance, because the user did not try to classify a particular incorrectly labeled data point, or did not see an incorrect ordering of items far down in the ranking. Similarly, some wrong responses may be more likely than others; for example, a user proposing a merge of two clusters (or split of one) might be “roughly” correct, but miss out on a few points (the setting that [5, 4] studied).
We believe that several of these extensions should be fairly straightforward to incorporate into the framework, and would mostly lead to additional complexity in notation and in the definition of various parameters. But a complete and principled treatment would be an interesting direction for future work.

Acknowledgments

Research supported in part by NSF grant 1619458. We would like to thank Sanjoy Dasgupta, Ilias Diakonikolas, Shaddin Dughmi, Haipeng Luo, Shanghua Teng, and anonymous reviewers for useful feedback and suggestions.

¹³ When there are k labels, GCL is a graph with k^n nodes.
¹⁴ With k labels, this bound becomes (k − 1) log M.

References

[1] E. Agichtein, E. Brill, S. Dumais, and R. Ragno. Learning user interaction models for predicting web search result preferences. In Proc. 29th Intl. Conf. on Research and Development in Information Retrieval (SIGIR), pages 3–10, 2006.
[2] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[3] D. Angluin. Computational learning theory: Survey and selected bibliography. In Proc. 24th ACM Symp. on Theory of Computing, pages 351–369, 1992.
[4] P. Awasthi, M.-F. Balcan, and K. Voevodski. Local algorithms for interactive clustering. Journal of Machine Learning Research, 18:1–35, 2017.
[5] P. Awasthi and R. B. Zadeh. Supervised clustering. In Proc. 24th Advances in Neural Information Processing Systems, pages 91–99, 2010.
[6] M.-F. Balcan and A. Blum. Clustering with interactive feedback. In Proc. 19th Intl. Conf. on Algorithmic Learning Theory, pages 316–328, 2008.
[7] R. Bubley. Randomized Algorithms: Approximation, Generation, and Counting. Springer, 2001.
[8] R. Bubley and M. Dyer. Faster random generation of linear extensions. Discrete Mathematics, 201(1):81–88, 1999.
[9] K. Crammer and Y. Singer. Pranking with ranking. In Proc. 16th Advances in Neural Information Processing Systems, pages 641–647, 2002.
[10] E. Emamjomeh-Zadeh, D. Kempe, and V. Singhal.
Deterministic and probabilistic binary search in graphs. In Proc. 48th ACM Symp. on Theory of Computing, pages 519–532, 2016.
[11] M. Huber. Fast perfect sampling from linear extensions. Discrete Mathematics, 306(4):420–428, 2006.
[12] T. Joachims. Optimizing search engines using clickthrough data. In Proc. 8th Intl. Conf. on Knowledge Discovery and Data Mining, pages 133–142, 2002.
[13] A. Karzanov and L. Khachiyan. On the conductance of order Markov chains. Order, 8(1):7–15, 1991.
[14] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[15] W. Maass and G. Turán. On the complexity of learning from counterexamples and membership queries. In Proc. 31st IEEE Symp. on Foundations of Computer Science, pages 203–210, 1990.
[16] W. Maass and G. Turán. Lower bound methods and separation results for on-line learning models. Machine Learning, 9(2):107–145, 1992.
[17] W. Maass and G. Turán. Algorithms and lower bounds for on-line learning of geometrical concepts. Machine Learning, 14(3):251–269, 1994.
[18] F. Radlinski and T. Joachims. Query chains: Learning to rank from implicit feedback. In Proc. 11th Intl. Conf. on Knowledge Discovery and Data Mining, pages 239–248, 2005.
[19] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145–147, 1972.
[20] S. Shelah. A combinatorial problem; stability and order for models and theories in infinitary languages. Pacific Journal of Mathematics, 41(1):247–261, 1972.
[21] K. L. Wagstaff. Intelligent Clustering with Instance-Level Constraints. PhD thesis, Cornell University, 2002.
Sample and Computationally Efficient Learning Algorithms under S-Concave Distributions

Maria-Florina Balcan
Machine Learning Department
Carnegie Mellon University, USA
ninamf@cs.cmu.edu

Hongyang Zhang∗
Machine Learning Department
Carnegie Mellon University, USA
hongyanz@cs.cmu.edu

Abstract

We provide new results for noise-tolerant and sample-efficient learning algorithms under s-concave distributions. The new class of s-concave distributions is a broad and natural generalization of log-concavity, and includes many important additional distributions, e.g., the Pareto distribution and t-distribution. This class has been studied in the context of efficient sampling, integration, and optimization, but much remains unknown about the geometry of this class of distributions and their applications in the context of learning. The challenge is that unlike the commonly used distributions in learning (uniform or more generally log-concave distributions), this broader class is not closed under the marginalization operator and many such distributions are fat-tailed. In this work, we introduce new convex geometry tools to study the properties of s-concave distributions and use these properties to provide bounds on quantities of interest to learning including the probability of disagreement between two halfspaces, disagreement outside a band, and the disagreement coefficient. We use these results to significantly generalize prior results for margin-based active learning, disagreement-based active learning, and passive learning of intersections of halfspaces. Our analysis of geometric properties of s-concave distributions might be of independent interest to optimization more broadly.

1 Introduction

Developing provable learning algorithms is one of the central challenges in learning theory. The study of such algorithms has led to significant advances in both the theory and practice of passive and active learning.
In the passive learning model, the learning algorithm has access to a set of labeled examples sampled i.i.d. from some unknown distribution over the instance space and labeled according to some underlying target function. In the active learning model, however, the algorithm can access unlabeled examples and request labels of its own choice, and the goal is to learn the target function with significantly fewer labels. In this work, we study both learning models in the case where the underlying distribution belongs to the class of s-concave distributions.

∗ Corresponding author.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Prior work on noise-tolerant and sample-efficient algorithms mostly relies on the assumption that the distribution over the instance space is log-concave [1, 12, 7, 30]. A distribution is log-concave if the logarithm of its density is a concave function. The assumption of log-concavity has been made for a few purposes: for computational efficiency reasons and for sample efficiency reasons. For computational efficiency reasons, it was made to obtain a noise-tolerant algorithm even for seemingly simple decision surfaces like linear separators. These simple algorithms exist for noiseless scenarios, e.g., via linear programming [28], but they are notoriously hard once we have noise [15, 25, 19]; this is why progress on noise-tolerant algorithms has focused on uniform [22, 26] and log-concave distributions [4]. Other concept spaces, like intersections of halfspaces, even have no computationally efficient algorithm in the noise-free setting that works under general distributions, but there has been nice progress under uniform and log-concave distributions [27]. For sample efficiency reasons, in the context of active learning, we need distributional assumptions in order to obtain label complexity improvements [16].
The most concrete and general class for which prior work obtains such improvements is when the marginal distribution over the instance space satisfies log-concavity [32, 7]. In this work, we provide a broad generalization of all the above results, showing how they extend to s-concave distributions (s < 0). A distribution with density f(x) is s-concave if f satisfies the generalized-mean condition of Definition 1; for s < 0, since t ↦ t^{1/s} is decreasing, this is equivalent to f(x)^s being a convex function. We identify key properties of these distributions that allow us to simultaneously extend all the above results.

How general and important is the class of s-concave distributions? The class of s-concave distributions is very broad and contains many well-known (classes of) distributions as special cases. For example, when s → 0, s-concave distributions reduce to log-concave distributions. Furthermore, the s-concave class contains infinitely many fat-tailed distributions that do not belong to the class of log-concave distributions, e.g., Cauchy, Pareto, and t distributions, which have been widely applied in the context of theoretical physics and economics, but much remains unknown about how provable learning algorithms, such as active learning of halfspaces, perform under these realistic distributions. We also compare s-concave distributions with nearly-log-concave distributions, a slightly broader class of distributions than log-concave ones. A distribution with density f(x) is nearly-log-concave if for any λ ∈ [0, 1] and x1, x2 ∈ R^n, we have f(λx1 + (1 − λ)x2) ≥ e^{−0.0154} f(x1)^λ f(x2)^{1−λ} [7]. The class of s-concave distributions includes many important extra distributions which do not belong to the nearly-log-concave distributions: a nearly-log-concave distribution must have sub-exponential tails (see Theorem 11, [7]), while the tail probability of an s-concave distribution might decay much more slowly (see Theorem 1 (6)). We also note that efficient sampling, integration, and optimization algorithms for s-concave distributions are well understood [13, 23].
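The definition of s-concavity (stated informally above, and precisely as the generalized-mean condition of Definition 1 in Section 2.1) can be checked numerically on a midpoint grid. The density f(x) = (1 + x)^(−3) below is a hypothetical example chosen for illustration: f^s is linear at s = −1/3, so f is s-concave exactly for s ≤ −1/3 and not for larger s, consistent with the conditions weakening as s decreases.

```python
def s_mean(a, b, lam, s):
    """Weighted generalized (power) mean (lam*a^s + (1-lam)*b^s)^(1/s), s != 0."""
    return (lam * a**s + (1.0 - lam) * b**s) ** (1.0 / s)

def is_s_concave(f, s, xs, tol=1e-9):
    """Midpoint check of Definition 1: f((x+y)/2) >= s-mean of f(x), f(y)
    for all grid pairs. A necessary (grid-based) test, not a proof."""
    for x in xs:
        for y in xs:
            if f(0.5 * (x + y)) + tol < s_mean(f(x), f(y), 0.5, s):
                return False
    return True

# Hypothetical example density on [0, 10]: f(x) = (1+x)^(-3).
# f^s is linear for s = -1/3, so f is s-concave exactly for s <= -1/3.
f = lambda x: (1.0 + x) ** -3
xs = [0.5 * i for i in range(21)]
```

For instance, the check passes at s = −1/3 (with equality) and at s = −1/2, but fails at s = −1/4, where f^s = (1 + x)^0.75 is strictly concave and the generalized-mean lower bound exceeds f at the midpoint.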
Our analysis of s-concave distributions bridges these algorithms to the strong guarantees of noise-tolerant and sample-efficient learning algorithms.

1.1 Our Contributions

Structural Results. We study various geometric properties of s-concave distributions. These properties serve as the structural results for many provable learning algorithms, e.g., margin-based active learning [7], disagreement-based active learning [29, 21], learning intersections of halfspaces [27], etc. When s → 0, our results exactly reduce to those for log-concave distributions [7, 2, 4]. Below, we state our structural results informally:

Theorem 1 (Informal). Let D be an isotropic s-concave distribution in R^n. Then there exist closed-form functions γ(s, m), f1(s, n), f2(s, n), f3(s, n), f4(s, n), and f5(s, n) such that
1. (Weakly Closed under Marginal) The marginal of D over m arguments (or cumulative distribution function, CDF) is isotropic γ(s, m)-concave. (Theorems 3, 4)
2. (Lower Bound on Hyperplane Disagreement) For any two unit vectors u and v in R^n, f1(s, n) θ(u, v) ≤ Pr_{x∼D}[sign(u · x) ≠ sign(v · x)], where θ(u, v) is the angle between u and v. (Theorem 12)
3. (Probability of Band) There is a function d(s, n) such that for any unit vector w ∈ R^n and any 0 < t ≤ d(s, n), we have f2(s, n) t < Pr_{x∼D}[|w · x| ≤ t] ≤ f3(s, n) t. (Theorem 11)
4. (Disagreement outside Margin) For any absolute constant c1 > 0 and any function f(s, n), there exists a function f4(s, n) > 0 such that Pr_{x∼D}[sign(u · x) ≠ sign(v · x) and |v · x| ≥ f4(s, n) θ(u, v)] ≤ c1 f(s, n) θ(u, v). (Theorem 13)
5. (Variance in 1-D Direction) There is a function d(s, n) such that for any unit vectors u and a in R^n such that ∥u − a∥ ≤ r and for any 0 < t ≤ d(s, n), we have E_{x∼D_{u,t}}[(a · x)^2] ≤ f5(s, n)(r^2 + t^2), where D_{u,t} is the conditional distribution of D over the set {x : |u · x| ≤ t}. (Theorem 14)
6. (Tail Probability) We have Pr[∥x∥ > √n t] ≤ [1 − cst/(1 + ns)]^{(1+ns)/s}.
(Theorem 5)

If s → 0 (i.e., the distribution is log-concave), then γ(s, m) → 0 and the functions f(s, n), f1(s, n), f2(s, n), f3(s, n), f4(s, n), f5(s, n), and d(s, n) are all absolute constants.

Table 1: Comparisons with prior distributions for margin-based active learning, disagreement-based active learning, and Baum’s algorithm.

                            Prior Work                              Ours
Margin (Efficient, Noise)   uniform [3], log-concave [4]            s-concave
Disagreement                uniform [20], nearly-log-concave [7]    s-concave
Baum’s                      symmetric [9], log-concave [27]         s-concave

To prove Theorem 1, we introduce multiple new techniques, e.g., an extension of the Prékopa-Leindler theorem and reduction to a baseline function (see the supplementary material for our techniques), which might be of independent interest to optimization more broadly.

Margin Based Active Learning: We apply our structural results to margin-based active learning of a halfspace w∗ under any isotropic s-concave distribution for both realizable and adversarial noise models. In the realizable case, the instance X is drawn from an isotropic s-concave distribution and the label Y = sign(w∗ · X). In the adversarial noise model, an adversary can corrupt any η (≤ O(ϵ)) fraction of labels. For both cases, we show that there exists a computationally efficient algorithm that outputs a linear separator wT such that Pr_{x∼D}[sign(wT · x) ≠ sign(w∗ · x)] ≤ ϵ (see Theorems 15 and 16). The label complexity w.r.t. 1/ϵ improves exponentially over the passive learning scenario under s-concave distributions, though the underlying distribution might be fat-tailed. To the best of our knowledge, this is the first result concerning computationally efficient, noise-tolerant margin-based active learning under the broader class of s-concave distributions. Our work solves an open problem proposed by Awasthi et al. [4] about exploring wider classes of distributions for provable active learning algorithms.
Disagreement Based Active Learning: We apply our results to agnostic disagreement-based active learning under s-concave distributions. The key to the analysis is estimating the disagreement coefficient, a distribution-dependent measure of complexity that is used to analyze certain types of active learning algorithms, e.g., the CAL [14] and A2 algorithm [5]. We work out the disagreement coefficient under isotropic s-concave distributions (see Theorem 17). By composing it with the existing work on active learning [17], we obtain a bound on label complexity under the class of s-concave distributions. As far as we are aware, this is the first result concerning disagreement-based active learning that goes beyond log-concave distributions. Our bounds on the disagreement coefficient match the best known results for the much less general case of log-concave distributions [7]; furthermore, they apply to the s-concave case where we allow an arbitrary number of discontinuities, a case not captured by [18].

Learning Intersections of Halfspaces: Baum’s algorithm is one of the most famous algorithms for learning intersections of halfspaces. The algorithm was first proposed by Baum [9] under symmetric distributions, and later extended to log-concave distributions by Klivans et al. [27], as these distributions are almost symmetric. In this paper, we show that approximate symmetry also holds for the case of s-concave distributions. With this, we work out the label complexity of Baum’s algorithm under the broader class of s-concave distributions (see Theorem 18), and advance the state-of-the-art results (see Table 1). We provide lower bounds to partially show the tightness of our analysis. Our results can potentially be applied to other provable learning algorithms as well [24, 31, 10, 30, 8], which might be of independent interest. We discuss our techniques and other related papers in the supplementary material.
2 Preliminary

Before proceeding, we define some notations and clarify our problem setup in this section.

Notations: We will use capital or lower-case letters to represent random variables, D to represent an s-concave distribution, and D_{u,t} to represent the conditional distribution of D over the set {x : |u · x| ≤ t}. We define the sign function as sign(x) = +1 if x ≥ 0 and −1 otherwise. We denote by B(α, β) = ∫_0^1 t^{α−1}(1 − t)^{β−1} dt the beta function, and by Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt the gamma function. We will consider a single norm for the vectors in R^n, namely, the 2-norm denoted by ∥x∥. We will frequently use µ (or µ_f, µ_D) to represent the measure of the probability distribution D with density function f. The notation ball(w∗, t) represents the set {w ∈ R^n : ∥w − w∗∥ ≤ t}. For convenience, the symbol ⊕ slightly differs from the ordinary addition +: for f = 0 or g = 0, {f^s ⊕ g^s}^{1/s} = 0; otherwise, ⊕ and + are the same. For u, v ∈ R^n, we define the angle between them as θ(u, v).

2.1 From Log-Concavity to S-Concavity

We begin with the definition of s-concavity. There are slight differences among the definitions of s-concave density, s-concave distribution, and s-concave measure.

Definition 1 (S-Concave (Density) Function, Distribution, Measure). A function f : R^n → R_+ is s-concave, for −∞ ≤ s ≤ 1, if f(λx + (1 − λ)y) ≥ (λf(x)^s + (1 − λ)f(y)^s)^{1/s} for all λ ∈ [0, 1] and all x, y ∈ R^n.² A probability distribution D is s-concave if its density function is s-concave. A probability measure µ is s-concave if µ(λA + (1 − λ)B) ≥ [λµ(A)^s + (1 − λ)µ(B)^s]^{1/s} for any sets A, B ⊆ R^n and λ ∈ [0, 1].

Special classes of s-concave functions include concavity (s = 1), harmonic-concavity (s = −1), quasi-concavity (s = −∞), etc. The conditions in Definition 1 are progressively weaker as s becomes smaller: s1-concave densities (distributions, measures) are s2-concave if s1 ≥ s2. Thus one can verify [13]: concave (s = 1) ⊊ log-concave (s = 0) ⊊ s-concave (s < 0) ⊊ quasi-concave (s = −∞).
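The claim that the conditions of Definition 1 weaken as s decreases is exactly the power-mean inequality: the weighted generalized mean (λa^s + (1 − λ)b^s)^{1/s} is nondecreasing in s, with the weighted geometric mean as its s → 0 limit (recovering log-concavity). A quick numerical sketch, with arbitrarily chosen values:

```python
import math

def power_mean(a, b, lam, s):
    """Weighted power mean M_s(a, b) = (lam*a^s + (1-lam)*b^s)^(1/s).
    The s -> 0 limit is the weighted geometric mean a^lam * b^(1-lam)."""
    if s == 0:
        return math.exp(lam * math.log(a) + (1 - lam) * math.log(b))
    return (lam * a**s + (1 - lam) * b**s) ** (1.0 / s)

a, b, lam = 2.0, 8.0, 0.5
# M_s is nondecreasing in s, so a smaller s puts a weaker lower bound on
# f at the midpoint: s1-concave implies s2-concave whenever s1 >= s2.
means = [power_mean(a, b, lam, s) for s in (-2, -1, -0.5, 0, 0.5, 1)]
```

Here the s = 0 entry is the geometric mean √(2 · 8) = 4, sitting between the harmonic mean (s = −1) and the arithmetic mean (s = 1), matching the inclusion chain concave ⊊ log-concave ⊊ s-concave (s < 0) ⊊ quasi-concave.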
3 Structural Results of S-Concave Distributions: A Toolkit

In this section, we develop geometric properties of s-concave distributions. The challenge is that unlike the commonly used distributions in learning (uniform or more generally log-concave distributions), this broader class is not closed under the marginalization operator and many such distributions are fat-tailed. To address this issue, we introduce several new techniques. We first introduce the extension of the Prékopa-Leindler inequality so as to reduce the high-dimensional problem to the one-dimensional case. We then reduce the resulting one-dimensional s-concave function to a well-defined baseline function, and explore the geometric properties of that baseline function. We summarize our high-level proof ideas briefly by the following figure.

[Figure: high-level proof pipeline. An n-dimensional s-concave density is reduced to a one-dimensional γ-concave density via the extension of Prékopa-Leindler, and then to a baseline function.]

3.1 Marginal Distribution and Cumulative Distribution Function

We begin with the analysis of the marginal distribution, which forms the basis of other geometric properties of s-concave distributions (s ≤ 0). Unlike the (nearly) log-concave distribution, where the marginal remains (nearly) log-concave, the class of s-concave distributions is not closed under the marginalization operator. To study the marginal, our primary tool is the theory of convex geometry. Specifically, we will use an extension of the Prékopa-Leindler inequality developed by Brascamp and Lieb [11], which allows for a characterization of the integral of s-concave functions.

Theorem 2 ([11], Thm 3.3). Let 0 < λ < 1, and let H_s, G1, and G2 be non-negative integrable functions on R^m such that H_s(λx + (1 − λ)y) ≥ [λG1(x)^s ⊕ (1 − λ)G2(y)^s]^{1/s} for every x, y ∈ R^m. Then ∫_{R^m} H_s(x) dx ≥ [λ(∫_{R^m} G1(x) dx)^γ + (1 − λ)(∫_{R^m} G2(x) dx)^γ]^{1/γ} for s ≥ −1/m, with γ = s/(1 + ms).

Building on this, the following theorem plays a key role in our analysis of the marginal distribution.

Theorem 3 (Marginal).
Let f(x, y) be an s-concave density on a convex set K ⊆ R^{n+m} with s ≥ −1/m. Denote by K|R^n = {x ∈ R^n : ∃y ∈ R^m s.t. (x, y) ∈ K}. For every x in K|R^n, consider the section K(x) ≜ {y ∈ R^m : (x, y) ∈ K}. Then the marginal density g(x) ≜ ∫_{K(x)} f(x, y) dy is γ-concave on K|R^n, where γ = s/(1 + ms). Moreover, if f(x, y) is isotropic, then g(x) is isotropic.

Similar to the marginal, the CDF of an s-concave distribution might not remain in the same class. This is in sharp contrast to log-concave distributions. The following theorem studies the CDF of an s-concave distribution.

Theorem 4. The CDF of an s-concave distribution in R^n is γ-concave, where γ = s/(1 + ns) and s ≥ −1/n.

² When s → 0, we note that lim_{s→0} (λf(x)^s + (1 − λ)f(y)^s)^{1/s} = exp(λ log f(x) + (1 − λ) log f(y)). In this case, f(x) is known to be log-concave.

Theorems 3 and 4 serve as the bridge that connects high-dimensional s-concave distributions to one-dimensional γ-concave distributions. With them, we are able to reduce the high-dimensional problem to the one-dimensional one.

3.2 Fat-Tailed Density

Tail probability is one of the most distinct characteristics of s-concave distributions compared to (nearly) log-concave distributions. While it can be shown that a (nearly) log-concave distribution has an exponentially small tail (Theorem 11, [7]), the tail of an s-concave distribution is fat, as proved by the following theorem.

Theorem 5 (Tail Probability). Let x come from an isotropic distribution over R^n with an s-concave density. Then for every t ≥ 16, we have Pr[∥x∥ > √n t] ≤ [1 − cst/(1 + ns)]^{(1+ns)/s}, where c is an absolute constant.

Theorem 5 is almost tight for s < 0. To see this, consider X drawn from a one-dimensional Pareto distribution with density f(x) = (−1 − 1/s)^{−1/s} x^{1/s} for x ≥ (s + 1)/(−s). It can easily be seen that Pr[X > t] = [−st/(s + 1)]^{(s+1)/s} for t ≥ (s + 1)/(−s), which matches Theorem 5 up to an absolute constant factor.
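The Pareto witness for tightness can be checked numerically. At s = −1/3, the density above specializes to f(x) = 8x^(−3) on [2, ∞), with Pr[X > t] = (t/2)^(−2); the inverse-CDF sampler below is a standard construction (our illustration, not from the paper) used only to confirm the polynomial tail decay.

```python
import random

s = -1.0 / 3.0  # Pareto example from the text: f(x) = 8 x^(-3) on [2, inf)

def tail(t):
    """Closed-form tail Pr[X > t] = (-s*t/(s+1))^((s+1)/s), here (t/2)^(-2)."""
    return (-s * t / (s + 1.0)) ** ((s + 1.0) / s)

def sample(rng):
    """Inverse-CDF sampling: F(t) = 1 - (t/2)^(-2), so t = 2 / sqrt(1 - U)."""
    return 2.0 / (1.0 - rng.random()) ** 0.5

rng = random.Random(0)
n = 200_000
draws = [sample(rng) for _ in range(n)]
empirical = sum(d > 4.0 for d in draws) / n  # should be near tail(4.0) = 0.25
```

Doubling t only quarters the tail (0.25 at t = 4, 0.0625 at t = 8), a polynomial decay no log-concave density can exhibit.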
3.3 Geometry of S-Concave Distributions

We now investigate the geometry of s-concave distributions. We first consider one-dimensional s-concave distributions: we provide bounds on the density of centroid-centered halfspaces (Lemma 6) and the range of the density function (Lemma 7). Building upon these, we develop geometric properties of high-dimensional s-concave distributions by reducing the distributions to the one-dimensional case based on marginalization (Theorem 3).

3.3.1 One-Dimensional Case

We begin with the analysis of one-dimensional halfspaces. To bound the probability, a standard technique is to bound the centroid region and the tail region separately. However, the challenge is that the s-concave distribution is fat-tailed (Theorem 5). So while the probability of a one-dimensional halfspace is bounded below by an absolute constant for log-concave distributions, such a probability for s-concave distributions decays as s (≤ 0) becomes smaller. The following lemma captures this intuition.

Lemma 6 (Density of Centroid-Centered Halfspaces). Let X be drawn from a one-dimensional distribution with s-concave density for −1/2 ≤ s ≤ 0. Then Pr(X ≥ EX) ≥ (1 + γ)^{−1/γ} for γ = s/(1 + s).

We also study the image of a one-dimensional s-concave density. The condition s > −1/3 below is needed for the existence of the second-order moment.

Lemma 7. Let g : R → R_+ be an isotropic s-concave density function with s > −1/3. (a) For all x, g(x) ≤ (1 + s)/(1 + 3s); (b) We have g(0) ≥ √(1/(3(1 + γ)^{3/γ})), where γ = s/(s + 1).

3.3.2 High-Dimensional Case

We now move on to the high-dimensional case (n ≥ 2). In the following, we will assume −1/(2n + 3) ≤ s ≤ 0. Though this working range of s vanishes as n becomes larger, it is almost the broadest range of s that we can hopefully achieve: Chandrasekaran et al. [13] showed a lower bound of s ≥ −1/(n − 1) if one requires the s-concave distribution to have good geometric properties.
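As a quick sanity check (ours, not the paper's), the bound of Lemma 6 is attained with equality by the Pareto example of Section 3.2 at s = −1/3: there EX = 4 and Pr(X ≥ EX) = 1/4, which is exactly (1 + γ)^{−1/γ} for γ = s/(1 + s) = −1/2.

```python
# Pareto example from Section 3.2 at s = -1/3: density f(x) = 8 x^(-3) on [2, inf).
s = -1.0 / 3.0
gamma = s / (1.0 + s)                     # = -1/2
bound = (1.0 + gamma) ** (-1.0 / gamma)   # Lemma 6 bound: (1/2)^2 = 1/4

# Closed forms: EX = int_2^inf x * 8 x^(-3) dx = 4, and
# Pr(X >= EX) = int_4^inf 8 x^(-3) dx = 4 * 4^(-2) = 1/4.
mean = 4.0
prob_above_mean = 4.0 * mean ** -2.0
```

So for this fat-tailed density the mass to the right of the mean is exactly the lemma's lower bound, illustrating why the constant must decay as s decreases.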
In addition, we can see from Theorem 3 that if s < −1/(n − 1), the marginal of an s-concave distribution might not even exist; such a case does happen for certain s-concave distributions with s < −1/(n − 1), e.g., the Cauchy distribution. So our range of s is almost tight up to a 1/2 factor.

We start our analysis with the density of centroid-centered halfspaces in high-dimensional spaces.

Lemma 8 (Density of Centroid-Centered Halfspaces). Let f : R^n → R_+ be an s-concave density function, and let H be any halfspace containing its centroid. Then ∫_H f(x) dx ≥ (1 + γ)^{−1/γ} for γ = s/(1 + ns).

Proof. W.l.o.g., we assume H is orthogonal to the first axis. By Theorem 3, the first marginal of f is s/(1 + (n − 1)s)-concave. Then by Lemma 6, ∫_H f(x) dx ≥ (1 + γ)^{−1/γ}, where γ = s/(1 + ns).

The following theorem is an extension of Lemma 7 to high-dimensional spaces. The proofs basically reduce the n-dimensional density to its first marginal by Theorem 3, and apply Lemma 7 to bound the image.

Theorem 9 (Bounds on Density). Let f : R^n → R_+ be an isotropic s-concave density. Then
(a) Let d(s, n) = (1 + γ)^{−1/γ} (1 + 3β)/(3 + 3β), where β = s/(1 + (n − 1)s) and γ = β/(1 + β). For any u ∈ R^n such that ∥u∥ ≤ d(s, n), we have f(u) ≥ [(∥u∥/d)((2 − 2^{−(n+1)s})^{−1} − 1) + 1]^{1/s} f(0).
(b) f(x) ≤ f(0) [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} s − 1]^{1/s} for every x.
(c) There exists an x ∈ R^n such that f(x) > (4eπ)^{−n/2}.
(d) (4eπ)^{−n/2} [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} s − 1]^{−1/s} < f(0) ≤ (2 − 2^{−(n+1)s})^{1/s} nΓ(n/2)/(2π^{n/2} d^n).
(e) f(x) ≤ (2 − 2^{−(n+1)s})^{1/s} (nΓ(n/2)/(2π^{n/2} d^n)) [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} s − 1]^{1/s} for every x.
(f) For any line ℓ through the origin, ∫_ℓ f ≤ (2 − 2^{−ns})^{1/s} (n − 1)Γ((n − 1)/2)/(2π^{(n−1)/2} d^{n−1}).

Theorem 9 provides uniform bounds on the density function. To obtain a more refined upper bound on the image of s-concave densities, we have the following lemma. The proof is built upon Theorem 9.

Lemma 10 (More Refined Upper Bound on Densities). Let f : R^n → R_+ be an isotropic s-concave density.
Then f(x) ≤ β1(n, s)(1 − sβ2(n, s)∥x∥)^{1/s} for every x ∈ R^n, where
β1(n, s) = ((2 − 2^{−(n+1)s})^{1/s}/(2π^{n/2} d^n)) (1 − s)^{−1/s} nΓ(n/2) [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} s − 1]^{1/s},
β2(n, s) = (2π^{(n−1)/2} d^{n−1}/((n − 1)Γ((n − 1)/2))) (2 − 2^{−ns})^{−1/s} s[(a(n, s) + (1 − s)β1(n, s)^s)^{1+1/s} − a(n, s)^{1+1/s}]/(β1(n, s)^s (1 + s)(1 − s)),
a(n, s) = (4eπ)^{−ns/2} [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} s − 1]^{−1},
γ = β/(1 + β), β = s/(1 + (n − 1)s), and d = (1 + γ)^{−1/γ} (1 + 3β)/(3 + 3β).

We also give an absolute bound on the measure of a band.

Theorem 11 (Probability inside Band). Let D be an isotropic s-concave distribution in R^n. Denote by f3(s, n) = 2(1 + ns)/(1 + (n + 2)s). Then for any unit vector w, Pr_{x∼D}[|w · x| ≤ t] ≤ f3(s, n) t. Moreover, if t ≤ d(s, n) ≜ ((1 + 2γ)/(1 + γ))^{−(1+γ)/γ} (1 + 3γ)/(3 + 3γ), where γ = s/(1 + (n − 1)s), then Pr_{x∼D}[|w · x| ≤ t] > f2(s, n) t, where f2(s, n) = 2(2 − 2^{−2γ})^{−1/γ} (4eπ)^{−1/2} [2((1 + γ)/(1 + 3γ)) √3 ((1 + 2γ)/(1 + γ))^{(3+3γ)/(2γ)} γ − 1]^{−1/γ}.

To analyze the problem of learning linear separators, we are interested in studying the disagreement between the hypothesis of the output and the hypothesis of the target. The following theorem captures this characteristic under s-concave distributions.

Theorem 12 (Probability of Disagreement). Assume D is an isotropic s-concave distribution in R^n. Then for any two unit vectors u and v in R^n, we have d_D(u, v) = Pr_{x∼D}[sign(u · x) ≠ sign(v · x)] ≥ f1(s, n) θ(u, v), where f1(s, n) = c(2 − 2^{−3α})^{−1/α} [((1 + β)/(1 + 3β)) √(3(1 + γ)^{3/γ}) 2^{1+1/α} α − 1]^{−1/α} (1 + γ)^{−2/γ} ((1 + 3β)/(3 + 3β))^2, c is an absolute constant, α = s/(1 + (n − 2)s), β = s/(1 + (n − 1)s), and γ = s/(1 + ns).

Due to space constraints, all missing proofs are deferred to the supplementary material.

4 Applications: Provable Algorithms under S-Concave Distributions

In this section, we show that many algorithms that work under log-concave distributions behave well under s-concave distributions, by applying the above-mentioned geometric properties. For simplicity, we will frequently use the notations in Theorem 1.
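For intuition on Theorem 12, in a rotationally symmetric special case the disagreement probability is exactly linear in the angle: under a standard Gaussian (a log-concave member of the class, i.e., the s → 0 regime), Pr[sign(u · x) ≠ sign(v · x)] = θ(u, v)/π. The Monte Carlo check below is our illustration, with hypothetical sample sizes; it is not a computation from the paper.

```python
import math
import random

def disagreement(u, v, n_samples=100_000, rng=None):
    """Monte Carlo estimate of Pr_x[sign(u.x) != sign(v.x)] under a
    standard Gaussian, which is rotationally symmetric and log-concave."""
    rng = rng or random.Random(0)
    d = len(u)
    count = 0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        du = sum(a * b for a, b in zip(u, x))
        dv = sum(a * b for a, b in zip(v, x))
        count += (du > 0) != (dv > 0)
    return count / n_samples

theta = math.pi / 4
u = [1.0, 0.0]
v = [math.cos(theta), math.sin(theta)]
est = disagreement(u, v)  # close to theta / pi = 0.25
```

For rotationally symmetric densities the relation d_D(u, v) = θ(u, v)/π holds exactly, so the linear-in-θ lower bound of Theorem 12 is matched up to the constant f1(s, n); for general s-concave D only the one-sided bound is available.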
4.1 Margin Based Active Learning

We first investigate margin-based active learning under isotropic s-concave distributions in both realizable and adversarial noise models. The algorithm (see Algorithm 1) follows a localization technique: it proceeds in rounds, aiming to cut the error down by half in each round in the margin [6].

Algorithm 1 Margin Based Active Learning under S-Concave Distributions
Input: Parameters b_k, τ_k, r_k, m_k, κ, and T as in Theorem 16.
1: Draw m_1 examples from D, label them and put them into W.
2: For k = 1, 2, ..., T
3:   Find v_k ∈ ball(w_{k−1}, r_k) to approximately minimize the hinge loss over W s.t. ∥v_k∥ ≤ 1: ℓ_{τk}(v_k, W) ≤ min_{w ∈ ball(w_{k−1}, r_k) ∩ ball(0, 1)} ℓ_{τk}(w, W) + κ/8.
4:   Normalize v_k, yielding w_k = v_k/∥v_k∥; clear the working set W.
5:   While m_{k+1} additional data points are not labeled
6:     Draw sample x from D.
7:     If |w_k · x| ≥ b_k, reject x; else ask for the label of x and put it into W.
Output: Hypothesis w_T.

4.1.1 Relevant Properties of S-Concave Distributions

The analysis requires more refined geometric properties, as below. Theorem 13 basically claims that the error mostly concentrates in a band, and Theorem 14 guarantees that the variance in any 1-D direction cannot be too large. We defer the detailed proofs to the supplementary material.

Theorem 13 (Disagreement outside Band). Let u and v be two vectors in R^n and assume that θ(u, v) = θ < π/2. Let D be an isotropic s-concave distribution. Then for any absolute constant c1 > 0 and any function f1(s, n) > 0, there exists a function f4(s, n) > 0 such that Pr_{x∼D}[sign(u · x) ≠ sign(v · x) and |v · x| ≥ f4(s, n)θ] ≤ c1 f1(s, n)θ, where f4(s, n) = 4β1(2, α)B(−1/α − 3, 3)/(−c1 f1(s, n) α^3 β2(2, α)^3), B(·, ·) is the beta function, α = s/(1 + (n − 2)s), and β1(2, α) and β2(2, α) are given by Lemma 10.

Theorem 14 (1-D Variance). Assume that D is isotropic s-concave.
For d given by Theorem 9(a), there is an absolute constant C0 such that for all 0 < t ≤ d and for all a such that ∥u − a∥ ≤ r and ∥a∥ ≤ 1, E_{x∼D_{u,t}}[(a · x)^2] ≤ f5(s, n)(r^2 + t^2), where f5(s, n) = 16 + C0 · 8β1(2, η)B(−1/η − 3, 2)/(f2(s, n) β2(2, η)^3 (η + 1)η^2), (β1(2, η), β2(2, η)) and f2(s, n) are given by Lemma 10 and Theorem 11, and η = s/(1 + (n − 2)s).

4.1.2 Realizable Case

We show that margin-based active learning works under s-concave distributions in the realizable case.

Theorem 15. In the realizable case, let D be an isotropic s-concave distribution in R^n. Then for 0 < ϵ < 1/4, δ > 0, and an absolute constant c, there is an algorithm (see the supplementary material) that runs in T = ⌈log(1/(cϵ))⌉ iterations, requires m_k = O((f3 min{2^{−k} f4 f1^{−1}, d}/2^{−k}) (n log(f3 min{2^{−k} f4 f1^{−1}, d}/2^{−k}) + log((1 + s^{−k})/δ))) labels in the k-th round, and outputs a linear separator of error at most ϵ with probability at least 1 − δ. In particular, when s → 0 (a.k.a. log-concave), we have m_k = O(n + log((1 + s^{−k})/δ)).

By Theorem 15, we see that the algorithm of margin-based active learning under s-concave distributions works almost as well as under log-concave distributions in the realizable case, improving exponentially w.r.t. the variable 1/ϵ over passive learning algorithms.

4.1.3 Efficient Learning with Adversarial Noise

In the adversarial noise model, an adversary can choose any distribution P̃ over R^n × {+1, −1} such that the marginal D over R^n is s-concave, but an η fraction of labels can be flipped adversarially. The analysis builds upon an induction technique where in each round we do hinge loss minimization in the band and cut down the 0/1 loss by half. The algorithm was previously analyzed in [3, 4] for the special class of log-concave distributions. In this paper, we analyze it for the much more general class of s-concave distributions.

Theorem 16. Let D be an isotropic s-concave distribution in R^n over x, and let the label y obey the adversarial noise model.
If the rate $\eta$ of adversarial noise satisfies $\eta < c_0\epsilon$ for some absolute constant $c_0$, then for $0 < \epsilon < 1/4$, $\delta > 0$, and an absolute constant $c$, Algorithm 1 runs in $T = \lceil \log \frac{1}{c\epsilon} \rceil$ iterations and outputs a linear separator $w_T$ such that $\Pr_{x \sim D}[\mathrm{sign}(w_T \cdot x) \ne \mathrm{sign}(w^* \cdot x)] \le \epsilon$ with probability at least $1 - \delta$. The label complexity in the k-th round is $m_k = O\!\left(\frac{\left[b_{k-1}s + \tau_k(1+ns)\left[1 - (\delta/(\sqrt{n}(k+k^2)))^{s/(1+ns)}\right] + \tau_k s\right]^2}{\kappa^2 \tau_k^2 s^2}\left(n + \log \frac{k+k^2}{\delta}\right)\right)$, where $\kappa = \max\left\{\frac{f_3 \tau_k}{f_2 \min\{b_{k-1}, d\}}, \frac{b_{k-1}\sqrt{f_5}}{\tau_k \sqrt{f_2}}\right\}$, $\tau_k = \Theta\!\left(f_1^{-2} f_2^{-1/2} f_3 f_4^2 f_5^{1/2} 2^{-(k-1)}\right)$, and $b_k = \min\{\Theta(2^{-k} f_4 f_1^{-1}), d\}$. In particular, if $s \to 0$, $m_k = O\!\left(\log\frac{n}{\epsilon\delta}\left(n + \log \frac{k}{\delta}\right)\right)$.
By Theorem 16, the label complexity of margin-based active learning improves exponentially over that of passive learning w.r.t. $1/\epsilon$, even under fat-tailed s-concave distributions and the challenging adversarial noise model.
4.2 Disagreement Based Active Learning We apply our results to the analysis of disagreement-based active learning under s-concave distributions. The key is estimating the disagreement coefficient, a measure of the complexity of an active learning problem that can be used to bound the label complexity [20]. Recall the definition of the disagreement coefficient w.r.t. classifier $w^*$, precision $\epsilon$, and distribution $D$: for any $r > 0$, define $\mathrm{ball}_D(w, r) = \{u \in \mathcal{H} : d_D(u, w) \le r\}$, where $d_D(u, w) = \Pr_{x \sim D}[(u \cdot x)(w \cdot x) < 0]$. Define the disagreement region as $\mathrm{DIS}(H) = \{x : \exists u, v \in H \text{ s.t. } (u \cdot x)(v \cdot x) < 0\}$, and let the Alexander capacity be $\mathrm{cap}_{w^*,D}(r) = \frac{\Pr_D(\mathrm{DIS}(\mathrm{ball}_D(w^*, r)))}{r}$. The disagreement coefficient is defined as $\Theta_{w^*,D}(\epsilon) = \sup_{r \ge \epsilon}[\mathrm{cap}_{w^*,D}(r)]$. Below, we state our results on the disagreement coefficient under isotropic s-concave distributions.
Theorem 17 (Disagreement Coefficient). Let $D$ be an isotropic s-concave distribution over $\mathbb{R}^n$. For any $w^*$ and $r > 0$, the disagreement coefficient is $\Theta_{w^*,D}(\epsilon) = O\!\left(\frac{\sqrt{n}(1+ns)^2}{s(1+(n+2)s)f_1(s,n)}\left(1 - \epsilon^{\frac{s}{1+ns}}\right)\right)$. In particular, when $s \to 0$ (a.k.a. log-concave), $\Theta_{w^*,D}(\epsilon) = O(\sqrt{n}\log(1/\epsilon))$.
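As a quick numerical aside (ours, not part of the paper), the disagreement pseudo-metric $d_D(u, w) = \Pr_{x \sim D}[(u \cdot x)(w \cdot x) < 0]$ used above is easy to estimate by Monte Carlo. For the isotropic Gaussian, the $s \to 0$ log-concave member of the family, spherical symmetry gives $d_D(u, w) = \theta(u, w)/\pi$ exactly, which the sketch below checks:

```python
import numpy as np

def disagreement(u, w, X):
    """Empirical d_D(u, w) = Pr[(u.x)(w.x) < 0] over the sample rows of X."""
    return np.mean((X @ u) * (X @ w) < 0)

rng = np.random.default_rng(0)
n, P = 5, 200_000
X = rng.standard_normal((P, n))   # isotropic Gaussian: the s -> 0 case

theta = 0.3                       # planted angle between the two halfspaces
u = np.zeros(n); u[0] = 1.0
w = np.zeros(n); w[0], w[1] = np.cos(theta), np.sin(theta)

d_hat = disagreement(u, w, X)
print(d_hat, theta / np.pi)       # the two values should nearly agree
```

The same estimator works for any sampleable s-concave distribution; only the closed-form value $\theta/\pi$ is specific to spherically symmetric $D$.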
Our bounds on the disagreement coefficient match the best known results for the much less general case of log-concave distributions [7]; furthermore, they apply to the s-concave case, where we allow an arbitrary number of discontinuities, a case not captured by [18]. The result immediately implies concrete bounds on the label complexity of disagreement-based active learning algorithms, e.g., CAL [14] and A2 [5]. For instance, by composing it with the result from [17], we obtain a bound of $\tilde{O}\!\left(n^{3/2}\frac{(1+ns)^2}{s(1+(n+2)s)f(s)}\left(1 - \epsilon^{s/(1+ns)}\right)\left(\log^2 \frac{1}{\epsilon} + \frac{OPT^2}{\epsilon^2}\right)\right)$ for agnostic active learning under an isotropic s-concave distribution $D$; namely, it suffices to output a halfspace with error at most $OPT + \epsilon$, where $OPT = \min_w \mathrm{err}_D(w)$.
4.3 Learning Intersections of Halfspaces Baum [9] provided a polynomial-time algorithm for learning intersections of halfspaces w.r.t. symmetric distributions. Later, Klivans [27] extended the result by showing that the algorithm works under any distribution $D$ as long as $\mu_D(E) \approx \mu_D(-E)$ for any set $E$. In this section, we show that it is possible to learn intersections of halfspaces under the broader class of s-concave distributions.
Theorem 18. In the PAC realizable case, there is an algorithm (see the supplementary material) that outputs a hypothesis $h$ of error at most $\epsilon$ with probability at least $1 - \delta$ under isotropic s-concave distributions. The label complexity is $M(\epsilon/2, \delta/4, n^2) + \max\{2m_2/\epsilon, (2/\epsilon^2)\log(4/\delta)\}$, where $M(\epsilon, \delta, n) = O\!\left(\frac{n}{\epsilon}\log\frac{1}{\epsilon} + \frac{1}{\epsilon}\log\frac{1}{\delta}\right)$, $m_2 = M(\max\{\delta/(4eKm_1), \epsilon/2\}, \delta/4, n)$, $K = \beta_1(3, \kappa)\frac{B(-1/\kappa-3,\,3)}{(-\kappa\beta_2(3,\kappa))^{3+1/\kappa}}h(\kappa)d^{3+1/\kappa}$, $d = (1+\gamma)^{-1/\gamma}\frac{1+3\beta}{3+3\beta}$, $h(\kappa) = \frac{1}{d}\left((2 - 2^{-4\kappa})^{-1} - 1\right) + \frac{1}{\kappa}(4e\pi)^{-3/2}\left[\frac{1+\beta}{1+3\beta}\sqrt{3(1+\gamma)^{3/\gamma}}\,2^{2+\frac{1}{\kappa}}\kappa - 1\right]^{-1/\kappa}$, $\beta = \frac{\kappa}{1+2\kappa}$, $\gamma = \frac{\kappa}{1+\kappa}$, and $\kappa = \frac{s}{1+(n-3)s}$. In particular, if $s \to 0$ (a.k.a. log-concave), $K$ is an absolute constant.
5 Lower Bounds In this section, we give information-theoretic lower bounds on the label complexity of passive and active learning of homogeneous halfspaces under s-concave distributions.
Theorem 19. For a fixed value $-\frac{1}{2n+3} \le s \le 0$, we have: (a) For any s-concave distribution $D$ in $\mathbb{R}^n$ whose covariance matrix is of full rank, the sample complexity of learning origin-centered linear separators under $D$ in the passive learning scenario is $\Omega(n f_1(s, n)/\epsilon)$; (b) The label complexity of active learning of linear separators under s-concave distributions is $\Omega(n \log(f_1(s, n)/\epsilon))$.
If the covariance matrix of $D$ is not of full rank, then the intrinsic dimension is less than $n$. So our lower bounds essentially apply to all s-concave distributions. According to Theorem 19, it is possible to obtain an exponential improvement in label complexity w.r.t. $1/\epsilon$ over passive learning by active sampling, even though the underlying distribution is a fat-tailed s-concave distribution. This observation is captured by Theorems 15 and 16.
6 Conclusions In this paper, we study the geometric properties of s-concave distributions. Our work advances the state-of-the-art results on margin-based active learning, disagreement-based active learning, and learning intersections of halfspaces w.r.t. distributions over the instance space. When $s \to 0$, our results reduce to the best-known results for log-concave distributions. The geometric properties of s-concave distributions can potentially be applied to other learning algorithms, which might be of independent interest more broadly.
Acknowledgements. This work was supported in part by grants NSF-CCF 1535967, NSF CCF-1422910, NSF CCF-1451177, a Sloan Fellowship, and a Microsoft Research Fellowship.
References [1] D. Applegate and R. Kannan. Sampling and integration of near log-concave functions. In ACM Symposium on Theory of Computing, pages 156–163, 1991. [2] P. Awasthi, M.-F. Balcan, N. Haghtalab, and H. Zhang.
Learning and 1-bit compressed sensing under asymmetric noise. In Annual Conference on Learning Theory, pages 152–192, 2016. [3] P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. In ACM Symposium on Theory of Computing, pages 449–458, 2014. [4] P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. Journal of the ACM, 63(6):50, 2017. [5] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009. [6] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Annual Conference on Learning Theory, pages 35–50, 2007. [7] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In Annual Conference on Learning Theory, pages 288–316, 2013. [8] M.-F. Balcan and H. Zhang. Noise-tolerant life-long matrix completion via adaptive sampling. In Advances in Neural Information Processing Systems, pages 2955–2963, 2016. [9] E. B. Baum. A polynomial time algorithm that learns two hidden unit nets. Neural Computation, 2(4):510–522, 1990. [10] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In International Conference on Machine Learning, pages 49–56, 2009. [11] H. J. Brascamp and E. H. Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. Journal of Functional Analysis, 22(4):366–389, 1976. [12] C. Caramanis and S. Mannor. An inequality for nearly log-concave distributions with applications to learning. IEEE Transactions on Information Theory, 53(3):1043–1057, 2007. [13] K. Chandrasekaran, A. Deshpande, and S. Vempala. Sampling s-concave functions: The limit of convexity based isoperimetry. In Approximation, Randomization, and Combinatorial Optimization. 
Algorithms and Techniques, pages 420–433. 2009. [14] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994. [15] A. Daniely. Complexity theoretic limitations on learning halfspaces. In ACM Symposium on Theory of Computing, pages 105–117, 2016. [16] S. Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems, volume 17, pages 337–344, 2004. [17] S. Dasgupta, D. J. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, pages 353–360, 2007. [18] E. Friedman. Active learning for smooth problems. In Annual Conference on Learning Theory, 2009. [19] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. SIAM Journal on Computing, 39(2):742–765, 2009. [20] S. Hanneke. A bound on the label complexity of agnostic active learning. In International Conference on Machine Learning, pages 353–360, 2007. [21] S. Hanneke et al. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014. [22] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. SIAM Journal on Computing, 37(6):1777–1805, 2008. [23] A. T. Kalai and S. Vempala. Simulated annealing for convex optimization. Mathematics of Operations Research, 31(2):253–266, 2006. [24] D. M. Kane, S. Lovett, S. Moran, and J. Zhang. Active classification with comparison queries. In IEEE Symposium on Foundations of Computer Science, pages 355–366, 2017. [25] A. Klivans and P. Kothari. Embedding hard learning problems into Gaussian space. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 28:793–809, 2014. [26] A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research, 10:2715–2740, 2009. [27] A. R. Klivans, P. M. Long, and A. K. Tang.
Baum’s algorithm learns intersections of halfspaces with respect to log-concave distributions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 588–600. 2009. [28] R. A. Servedio. Efficient algorithms in computational learning theory. PhD thesis, Harvard University, 2001. [29] L. Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12(Jul):2269–2292, 2011. [30] Y. Xu, H. Zhang, A. Singh, A. Dubrawski, and K. Miller. Noise-tolerant interactive learning using pairwise comparisons. In Advances in Neural Information Processing Systems, pages 2428–2437, 2017. [31] S. Yan and C. Zhang. Revisiting perceptron: Efficient and label-optimal active learning of halfspaces. arXiv preprint arXiv:1702.05581, 2017. [32] C. Zhang and K. Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee Alireza Aghasi∗ Institute for Insight Georgia State University IBM TJ Watson aaghasi@gsu.edu Afshin Abdi Department of ECE Georgia Tech abdi@gatech.edu Nam Nguyen IBM TJ Watson nnguyen@us.ibm.com Justin Romberg Department of ECE Georgia Tech jrom@ece.gatech.edu Abstract We introduce and analyze a new technique for model reduction for deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affect the prediction accuracy and model variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The algorithms and associated analysis are applicable to neural networks operating with the rectified linear unit (ReLU) as the nonlinear activation. We present both parallel and cascade versions of the algorithm. While the latter can achieve slightly simpler models with the same generalization performance, the former can be computed in a distributed manner. In both cases, Net-Trim significantly reduces the number of connections in the network, while also providing enough regularization to slightly reduce the generalization error. We also provide a mathematical analysis of the consistency between the initial network and the retrained model. To analyze the model sample complexity, we derive the general sufficient conditions for the recovery of a sparse transform matrix. For a single layer taking independent Gaussian random vectors of length $N$ as inputs, we show that if the network response can be described using a maximum number of $s$ non-zero weights per node, these weights can be learned from $O(s \log N)$ samples.
1 Introduction With enough layers, neurons in each layer, and a sufficiently large set of training data, neural networks can learn structure of arbitrary complexity [1]. This model flexibility has made the deep neural network a pioneering machine learning tool over the past decade (see [2] for a comprehensive overview). In practice, multi-layer networks often have more parameters than can be reliably estimated from the amount of data available. This gives the training procedure a certain ambiguity: many different sets of parameter values can model the data equally well, and we risk instabilities due to overfitting. In this paper, we introduce a framework for sparsifying networks that have already been trained using standard techniques. This reduction in the number of parameters needed to specify the network makes it more robust and more computationally efficient to implement without sacrificing performance. ∗Corresponding Author 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In recent years there has been increasing interest in the mathematical understanding of deep networks. These efforts are mainly in the context of characterizing the minimizers of the underlying cost function [3, 4] and the geometry of the loss function [5]. Recently, the analysis of deep neural networks using compressed sensing tools has been considered in [6], where the distance preservability of feedforward networks at each layer is studied. There is also work on formulating the training of feedforward networks as an optimization problem [7, 8, 9], where the majority of the works approach their understanding of neural networks by sequentially studying individual layers. Various methods have been proposed to reduce overfitting via regularizing techniques and pruning strategies. These include explicit regularization using ℓ1 and ℓ2 penalties during training [10, 11], and techniques that randomly remove active connections in the training phase (e.g.
Dropout [12] and DropConnect [13]), making them more likely to produce sparse networks. There has also been recent work on explicit network compression (e.g., [14, 15, 16]) to remove the inherent redundancies. In what is perhaps the most closely related work to what is presented below, [14] proposes a pruning scheme that simply truncates the small weights of an already trained network, and then re-adjusts the remaining active weights using another round of training. These techniques are based on heuristics and lack general performance guarantees that help understand when and how well they work. We present a framework, called Net-Trim, for pruning the network layer-by-layer that is based on convex optimization. Each layer of the net consists of a linear map followed by a nonlinearity; the algorithms and theory presented below use a rectified linear unit (ReLU) applied point-wise to each output of the linear map. Net-Trim works by taking a trained network, and then finding the sparsest set of weights for each layer that keeps the output responses consistent with the initial training. More concisely, if $Y^{(\ell-1)}$ is the input (across the training examples) to layer ℓ, and $Y^{(\ell)}$ is the output following the ReLU operator, Net-Trim searches for a sparse $W$ such that $Y^{(\ell)} \approx \mathrm{ReLU}(W^\top Y^{(\ell-1)})$. Using the standard ℓ1 relaxation for sparsity and the fact that the ReLU function is piecewise linear allows us to perform this search by solving a convex program. In contrast to techniques based on thresholding (such as [14]), Net-Trim does not require multiple other time-consuming training steps after the initial pruning. Along with making the computations tractable, Net-Trim's convex formulation also allows us to derive theoretical guarantees on how far the retrained model is from the initial model, and to establish sample complexity arguments about the number of random samples required to retrain a presumably sparse layer.
To the best of our knowledge, Net-Trim is the first pruning scheme with such performance guarantees. In addition, it is easy to modify and adapt to other structural constraints on the weights by adding penalty terms or introducing additional convex constraints. An illustrative example is shown in Figure 1. Here, 200 points in the 2D plane are used to train a binary classifier. The regions corresponding to each class are nested spirals. We fit a classifier using a simple neural network with two fully connected hidden layers, each consisting of 200 neurons. Figure 1(b) shows the weighted adjacency matrix between the layers after training, and then again after Net-Trim is applied. With only a negligible change to the overall network response (panel (a) vs. panel (d)), Net-Trim is able to prune more than 93% of the links among the neurons, representing a significant model reduction. Even when the neural network is trained using sparsifying weight regularizers (here, Dropout [12] and an ℓ1 penalty), Net-Trim produces a model which is over 7 times sparser than the initial one, as presented in panel (c). The numerical experiments in Section 6 show that these kinds of results are not limited to toy examples; Net-Trim achieves significant compression ratios on large networks trained on real data sets. The remainder of the paper is structured as follows. In Section 2, we formally present the network model used in the paper. The proposed pruning schemes, both the parallel and cascade Net-Trim, are presented and discussed in Section 3. Section 4 is devoted to the convex analysis of the proposed framework and its sample complexity. The implementation details of the proposed convex scheme are presented in Section 5. Finally, in Section 6, we report some retraining experiments using Net-Trim and conclude the paper with some general remarks.
Along with some extended discussions, the proofs of all of the theoretical statements in the paper are presented in a supplementary note (specifically, §4 of the notes is devoted to the technical proofs). We very briefly summarize the notation used below. For a matrix $A$, we use $A_{\Gamma_1,:}$ to denote the submatrix formed by restricting the rows of $A$ to the index set $\Gamma_1$. Similarly, $A_{:,\Gamma_2}$ is the submatrix of columns indexed by $\Gamma_2$, and $A_{\Gamma_1,\Gamma_2}$ is formed by extracting both rows and columns.
Figure 1: Net-Trim pruning performance; (a) initial trained model; (b) the weighted adjacency matrix relating the two hidden layers before (left) and after (right) the application of Net-Trim; (c) left: the adjacency matrix after training the network with Dropout and ℓ1 regularization; right: after retraining via Net-Trim; (d) the retrained classifier.
For an $M \times N$ matrix $X$ with entries $x_{m,n}$, we use² $\|X\|_1 \triangleq \sum_{m=1}^M \sum_{n=1}^N |x_{m,n}|$ and $\|X\|_F$ as the Frobenius norm. For a vector $x$, $\|x\|_0$ is the cardinality of $x$, $\mathrm{supp}\,x$ is the set of indexes with non-zero entries, and $\mathrm{supp}^c\,x$ is the complement set. We will use the notation $x^+$ as shorthand for $\max(x, 0)$, where $\max(\cdot, 0)$ is applied to vectors and matrices component-wise. Finally, the vertical concatenation of two vectors $a$ and $b$ is denoted by $[a; b]$.
2 Feedforward Network Model In this section, we introduce some notational conventions related to the feedforward network model. We assume that we have $P$ training samples $x_p$, $p = 1, \dots, P$, where $x_p \in \mathbb{R}^N$ is an input to the network. We stack these samples into a matrix $X \in \mathbb{R}^{N \times P}$, structured as $X = [x_1, \dots, x_P]$. Considering $L$ layers for the network, the output of the network at the final layer is denoted by $Y^{(L)} \in \mathbb{R}^{N_L \times P}$, where each column in $Y^{(L)}$ is the response to the corresponding training column in $X$. The network activations are taken to be rectified linear units.
The output of the ℓ-th layer is $Y^{(\ell)} \in \mathbb{R}^{N_\ell \times P}$, generated by applying the adjoint of the weight matrix $W_\ell \in \mathbb{R}^{N_{\ell-1} \times N_\ell}$ to the output of the previous layer $Y^{(\ell-1)}$ and then applying a component-wise $\max(\cdot, 0)$ operation:
$Y^{(\ell)} = \max(W_\ell^\top Y^{(\ell-1)}, 0)$, $\ell = 1, \dots, L$, (1)
where $Y^{(0)} = X$ and $N_0 = N$. A trained neural network as outlined in (1) is represented by $\mathcal{TN}(\{W_\ell\}_{\ell=1}^L, X)$. For the sake of the theoretical analysis, all the results presented in this paper are stated for link-normalized networks, where $\|W_\ell\|_1 = 1$ for every layer $\ell = 1, \dots, L$. This presentation comes with no loss of generality, as any network of the form (1) can be converted to its link-normalized version by replacing $W_\ell$ with $W_\ell/\|W_\ell\|_1$ and $Y^{(\ell+1)}$ with $Y^{(\ell+1)}/\prod_{j=0}^{\ell}\|W_j\|_1$. Since $\max(\alpha x, 0) = \alpha\max(x, 0)$ for $\alpha > 0$, any weight processing on a network of the form (1) can be applied to the link-normalized version and later transferred back to the original domain via a suitable scaling.
3 Convex Pruning of the Network Our pruning strategy relies on redesigning the network so that, for the same training data, each layer's outcome stays more or less close to the initial trained model, while the weights associated with each layer are replaced with sparser versions to reduce the model complexity. Figure 2 presents the main idea, where the complex paths between the layer outcomes are replaced with simple paths. In a sense, if we consider each layer's response to the transmitted data as a checkpoint, Net-Trim assures that the checkpoints remain roughly the same, while a simpler path between the checkpoints is discovered.
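The positive homogeneity argument above — $\max(\alpha x, 0) = \alpha\max(x, 0)$ for $\alpha > 0$, so link-normalizing each $W_\ell$ only rescales the layer outcomes by the product of the norms — can be verified numerically. A minimal NumPy sketch (our illustration, not the authors' code):

```python
import numpy as np

def forward(Ws, X):
    """Feedforward pass of eq. (1): Y_l = max(W_l^T Y_{l-1}, 0)."""
    Y = X
    for W in Ws:
        Y = np.maximum(W.T @ Y, 0.0)
    return Y

def link_normalize(Ws):
    """Rescale each W_l to unit entrywise l1 norm, as in the text."""
    return [W / np.abs(W).sum() for W in Ws]

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 10))
Ws = [rng.standard_normal((4, 6)), rng.standard_normal((6, 3))]

Y = forward(Ws, X)
Yn = forward(link_normalize(Ws), X)
scale = np.prod([np.abs(W).sum() for W in Ws])
assert np.allclose(Yn * scale, Y)   # same network up to a known scaling
```

This is why pruning results obtained on the link-normalized network transfer back to the original weights by a simple rescaling.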
²The notation $\|X\|_1$ should not be confused with the matrix induced ℓ1 norm.
Figure 2: The main retraining idea: keeping the layer outcomes close to the initial trained model while finding a simpler path relating each layer input to the output ($X \to Y^{(1)} \to \cdots \to Y^{(L)}$ via $W_1, \dots, W_L$, replaced by $X \to \hat{Y}^{(1)} \to \cdots \to \hat{Y}^{(L)}$ via $\hat{W}_1, \dots, \hat{W}_L$).
Consider the first layer, where $X = [x_1, \dots, x_P]$ is the layer input, $W = [w_1, \dots, w_M]$ the layer coefficient matrix, and $Y = [y_{m,p}]$ the layer outcome. We require the new coefficient matrix $\hat{W}$ to be sparse and the new response to be close to $Y$. Using the sum of absolute entries as a proxy to promote sparsity, a natural strategy to retrain the layer is to address the nonlinear program
$\hat{W} = \arg\min_U \|U\|_1$ s.t. $\|\max(U^\top X, 0) - Y\|_F \le \epsilon$. (2)
Despite the convex objective, the constraint set in (2) is non-convex. However, we may approximate it with a convex set by imposing $Y$ and $\hat{Y} = \max(\hat{W}^\top X, 0)$ to have similar activation patterns. More specifically, knowing that $y_{m,p}$ is either zero or positive, we enforce the $\max(\cdot, 0)$ argument to be negative when $y_{m,p} = 0$, and close to $y_{m,p}$ elsewhere. To present the convex formulation, for $V = [v_{m,p}]$, throughout the paper we use the notation $U \in C_\epsilon(X, Y, V)$ to denote the constraint set
$\sum_{m,p:\, y_{m,p} > 0} (u_m^\top x_p - y_{m,p})^2 \le \epsilon^2$ and $u_m^\top x_p \le v_{m,p}$ for $m, p:\, y_{m,p} = 0$. (3)
Based on this definition, a convex proxy to (2) is
$\hat{W} = \arg\min_U \|U\|_1$ s.t. $U \in C_\epsilon(X, Y, 0)$. (4)
Basically, depending on the value of $y_{m,p}$, a different constraint is imposed on $u_m^\top x_p$ to emulate the ReLU operation. As a first observation towards establishing a retraining framework, we show that the solution of (4) is consistent with the desired constraint in (2), as follows.
Proposition 1. Let $\hat{W}$ be the solution to (4). For $\hat{Y} = \max(\hat{W}^\top X, 0)$ being the retrained layer response, $\|\hat{Y} - Y\|_F \le \epsilon$.
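The constraint set (3) and the claim of Proposition 1 can be sanity-checked numerically. The sketch below (an illustration we added, not the paper's solver) builds a feasible candidate $U$ by uniformly shrinking the true weights, which preserves the sign pattern of $W^\top X$, and confirms that membership in $C_\epsilon(X, Y, 0)$ indeed implies the retrained ReLU response stays ϵ-close:

```python
import numpy as np

def in_C(U, X, Y, eps, tol=1e-9):
    """Membership test for the constraint set C_eps(X, Y, 0) of eq. (3)."""
    Z = U.T @ X
    pos = Y > 0
    sq_err = np.sum((Z[pos] - Y[pos]) ** 2)
    return sq_err <= eps**2 + tol and np.all(Z[~pos] <= tol)

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 50))
W = rng.standard_normal((8, 5))
Y = np.maximum(W.T @ X, 0.0)

U = 0.9 * W        # shrinking the true weights keeps all constraints valid
pos = Y > 0
eps = np.sqrt(np.sum(((U.T @ X)[pos] - Y[pos]) ** 2))
assert in_C(U, X, Y, eps)

# Proposition 1: feasibility implies the retrained response stays eps-close.
Y_hat = np.maximum(U.T @ X, 0.0)
assert np.linalg.norm(Y_hat - Y) <= eps + 1e-9
```

The second assertion is exactly the content of Proposition 1: the quadratic constraint controls the active entries, and the sign constraints keep the inactive entries at zero after the ReLU.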
3.1 Parallel and Cascade Net-Trim Based on the exploration above, we propose two schemes to retrain a neural network: one exploits a computationally distributable structure, and the other proposes a cascading scheme that retrains the layers sequentially. The general idea, which originates from the relaxation in (4), is referred to as Net-Trim, qualified by its parallel or cascade nature. The parallel Net-Trim is a straightforward application of the convex program (4) to each layer in the network. Basically, each layer is processed independently based on the initial model's input and output, without taking into account the retraining result from the previous layer. Specifically, denoting $Y^{(\ell-1)}$ and $Y^{(\ell)}$ as the input and output of the ℓ-th layer of the initially trained neural network (see equation (1)), we propose to relearn the coefficient matrix $W_\ell$ via the convex program
$\hat{W}_\ell = \arg\min_U \|U\|_1$ s.t. $U \in C_\epsilon(Y^{(\ell-1)}, Y^{(\ell)}, 0)$. (5)
The optimization in (5) can be applied independently to every layer in the network and is hence computationally distributable. Algorithm 1 presents the pseudocode for the parallel Net-Trim. In this pseudocode, we use TRIM$(X, Y, V, \epsilon)$ as a function which returns the solution to a program like (4) with the constraint $U \in C_\epsilon(X, Y, V)$. With reference to the constraint in (5), if we only retrain the ℓ-th layer, the output of the retrained layer is in the ϵ-neighborhood of the output before retraining. However, when all the layers are retrained through (5), an immediate question is whether the retrained network produces an output which is controllably close to the initially trained model. In the following theorem, we show that the retraining error does not blow up across the layers and remains a multiple of ϵ.
Theorem 1. Let $\mathcal{TN}(\{W_\ell\}_{\ell=1}^L, X)$ be a link-normalized trained network with layer outcomes $Y^{(\ell)}$ described by (1). Form the retrained network $\mathcal{TN}(\{\hat{W}_\ell\}_{\ell=1}^L, X)$ by solving the convex programs (5), with $\epsilon = \epsilon_\ell$ at each layer.
Then the retrained layer outcomes $\hat{Y}^{(\ell)} = \max(\hat{W}_\ell^\top \hat{Y}^{(\ell-1)}, 0)$ obey $\|\hat{Y}^{(\ell)} - Y^{(\ell)}\|_F \le \sum_{j=1}^{\ell}\epsilon_j$.
When all the layers are retrained with a fixed parameter ϵ (as in Algorithm 1), a corollary of the theorem above bounds the overall discrepancy as $\|\hat{Y}^{(L)} - Y^{(L)}\|_F \le L\epsilon$. In the cascade Net-Trim, unlike the parallel scheme where each layer is retrained independently, the outcome of a retrained layer is fed into the program retraining the next layer. More specifically, having the first layer processed via (4), one would ideally seek to address (5) with the modified constraint $U \in C_\epsilon(\hat{Y}^{(\ell-1)}, Y^{(\ell)}, 0)$ to retrain the subsequent layers. However, as detailed in §1 of the supplementary note, such a program is not necessarily feasible and needs to be sufficiently slacked to warrant feasibility. In this regard, for every subsequent layer $\ell = 2, \dots, L$, the retrained weighting matrix $\hat{W}_\ell$ is obtained via
$\min_U \|U\|_1$ s.t. $U \in C_{\epsilon_\ell}(\hat{Y}^{(\ell-1)}, Y^{(\ell)}, W_\ell^\top \hat{Y}^{(\ell-1)})$, (6)
where for $W_\ell = [w_{\ell,1}, \dots, w_{\ell,N_\ell}]$ and $\gamma_\ell \ge 1$,
$\epsilon_\ell^2 = \gamma_\ell \sum_{m,p:\, y^{(\ell)}_{m,p} > 0} \left(w_{\ell,m}^\top \hat{y}^{(\ell-1)}_p - y^{(\ell)}_{m,p}\right)^2$. (7)
The constants $\gamma_\ell \ge 1$ (referred to as the inflation rates) are free parameters, which control the sparsity of the resulting matrices. In the following theorem, we prove that the output of the retrained network produced by Algorithm 2 is close to that of the network before retraining.
Theorem 2. Let $\mathcal{TN}(\{W_\ell\}_{\ell=1}^L, X)$ be a link-normalized trained network with layer outcomes $Y^{(\ell)}$. Form the retrained network $\mathcal{TN}(\{\hat{W}_\ell\}_{\ell=1}^L, X)$ by solving (5) for the first layer and (6) for the subsequent layers with $\epsilon_\ell$ as in (7), where $\hat{Y}^{(\ell)} = \max(\hat{W}_\ell^\top \hat{Y}^{(\ell-1)}, 0)$, $\hat{Y}^{(1)} = \max(\hat{W}_1^\top X, 0)$, and $\gamma_\ell \ge 1$. Then the outputs $\hat{Y}^{(\ell)}$ of the retrained network obey $\|\hat{Y}^{(\ell)} - Y^{(\ell)}\|_F \le \epsilon_1\left(\prod_{j=2}^{\ell}\gamma_j\right)^{1/2}$.
Algorithm 2 presents the pseudocode to implement the cascade Net-Trim for a link-normalized network with $\epsilon_1 = \epsilon$ and a constant inflation rate γ across all the layers.
In that case, a corollary of Theorem 2 bounds the network's overall discrepancy as $\|\hat{Y}^{(L)} - Y^{(L)}\|_F \le \gamma^{(L-1)/2}\epsilon$. We would like to note that focusing on a link-normalized network is only for the sake of presenting the theoretical results in a more compact form. In practice, such a conversion is not necessary: to retrain layer ℓ in the parallel Net-Trim we can take $\epsilon = \epsilon_r\|Y^{(\ell)}\|_F$, and we use $\epsilon = \epsilon_r\|Y^{(1)}\|_F$ for the cascade case, where $\epsilon_r$ plays a role similar to that of ϵ for a link-normalized network. Moreover, as detailed in §2 of the supplementary note, Theorems 1 and 2 apply identically to practical networks that follow (1) for the first L − 1 layers and skip the activation at the last layer.
Algorithm 1 Parallel Net-Trim
1: Input: $X$, $\epsilon > 0$, and normalized $W_1, \dots, W_L$
2: $Y^{(0)} \leftarrow X$   % generating initial layer outcomes
3: for $\ell = 1, \dots, L$ do
4:   $Y^{(\ell)} \leftarrow \max(W_\ell^\top Y^{(\ell-1)}, 0)$
5: end for   % retraining
6: for all $\ell = 1, \dots, L$ do
7:   $\hat{W}_\ell \leftarrow$ TRIM$(Y^{(\ell-1)}, Y^{(\ell)}, 0, \epsilon)$
8: end for
9: Output: $\hat{W}_1, \dots, \hat{W}_L$
Algorithm 2 Cascade Net-Trim
1: Input: $X$, $\epsilon > 0$, $\gamma > 1$, and normalized $W_1, \dots, W_L$
2: $Y \leftarrow \max(W_1^\top X, 0)$
3: $\hat{W}_1 \leftarrow$ TRIM$(X, Y, 0, \epsilon)$
4: $\hat{Y} \leftarrow \max(\hat{W}_1^\top X, 0)$
5: for $\ell = 2, \dots, L$ do
6:   $Y \leftarrow \max(W_\ell^\top Y, 0)$
7:   $\epsilon \leftarrow \left(\gamma \sum_{m,p:\, y_{m,p} > 0}(w_{\ell,m}^\top \hat{y}_p - y_{m,p})^2\right)^{1/2}$   % $w_{\ell,m}$ is the m-th column of $W_\ell$
8:   $\hat{W}_\ell \leftarrow$ TRIM$(\hat{Y}, Y, W_\ell^\top \hat{Y}, \epsilon)$
9:   $\hat{Y} \leftarrow \max(\hat{W}_\ell^\top \hat{Y}, 0)$
10: end for
11: Output: $\hat{W}_1, \dots, \hat{W}_L$
4 Convex Analysis and Sample Complexity In this section, we derive a sampling theorem for a single-layer, redundant network. Here, many sets of weights can induce the observed outputs given the input vectors. This scenario might arise when the number of training samples used to train a (large) network is small (smaller than the network's degrees of freedom). We will show that when the inputs into the layer are independent Gaussian random vectors, if there is a sparse set of weights that can generate the output, then with high probability, the Net-Trim program in (4) will find it.
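When the layer outputs can be matched exactly (zero mismatch), the per-neuron subproblem of (4) reduces to a linear program: minimize $\|w\|_1$ subject to $w^\top x_p = y_p$ on the active samples and $w^\top x_p \le 0$ on the inactive ones. A small recovery experiment with a planted sparse weight vector (our sketch, using SciPy rather than the authors' solver; the sizes here are illustrative, not those of the theory):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
N, P = 40, 60
w_true = np.zeros(N)
w_true[[3, 11, 27]] = [1.5, -2.0, 0.7]     # planted 3-sparse weights
X = rng.standard_normal((N, P))            # Gaussian feature matrix
y = np.maximum(X.T @ w_true, 0.0)          # rectified observations

# Split w = a - c with a, c >= 0 so the l1 objective becomes linear.
on = y > 0
A_eq = np.hstack([X[:, on].T, -X[:, on].T])     # w.x_p = y_p on active samples
A_ub = np.hstack([X[:, ~on].T, -X[:, ~on].T])   # w.x_p <= 0 on inactive ones
res = linprog(c=np.ones(2 * N), A_ub=A_ub, b_ub=np.zeros((~on).sum()),
              A_eq=A_eq, b_eq=y[on], bounds=(0, None))
w_hat = res.x[:N] - res.x[N:]

assert res.status == 0                                     # solved
assert np.abs(w_hat).sum() <= np.abs(w_true).sum() + 1e-6  # l1-optimal
```

With this many samples the program typically returns the planted weights exactly, matching the spirit of the recovery guarantee developed next.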
As noted above, in the case of a redundant layer, for a given input $X$ and output $Y$, the relation $Y = \max(W^\top X, 0)$ can be established via more than one $W$. In this case, we hope to find a sparse $W$ by setting $\epsilon = 0$ in (4). For this value of ϵ, our central convex program decouples into M convex programs, each searching for the m-th column of $\hat{W}$:
$\hat{w}_m = \arg\min_w \|w\|_1$ s.t. $w^\top x_p = y_{m,p}$ for $p: y_{m,p} > 0$, and $w^\top x_p \le 0$ for $p: y_{m,p} = 0$. (8)
By dropping the index m and introducing the slack variable $s$, program (8) can be cast as
$\min_{w,s} \|w\|_1$ s.t. $\tilde{X}\begin{bmatrix} w \\ s \end{bmatrix} = y$, $s \preceq 0$, (9)
where $\tilde{X} = \begin{bmatrix} X_{:,\Omega}^\top & 0 \\ X_{:,\Omega^c}^\top & -I \end{bmatrix}$, $y = \begin{bmatrix} y_\Omega \\ 0 \end{bmatrix}$, and $\Omega = \{p : y_p > 0\}$. For a general $\tilde{X}$, not necessarily structured as above, the following result states sufficient conditions under which a sparse pair $(w^*, s^*)$ is the unique minimizer of (9).
Proposition 2. Consider a pair $(w^*, s^*) \in (\mathbb{R}^{n_1}, \mathbb{R}^{n_2})$ which is feasible for the convex program (9). Suppose there exists a vector $\Lambda = [\Lambda_\ell] \in \mathbb{R}^{n_1+n_2}$ in the range of $\tilde{X}^\top$ with entries satisfying $-1 < \Lambda_\ell < 1$ for $\ell \in \mathrm{supp}^c\,w^*$; $0 < \Lambda_{n_1+\ell}$ for $\ell \in \mathrm{supp}^c\,s^*$; $\Lambda_\ell = \mathrm{sign}(w^*_\ell)$ for $\ell \in \mathrm{supp}\,w^*$; and $\Lambda_{n_1+\ell} = 0$ for $\ell \in \mathrm{supp}\,s^*$ (10), and that for $\tilde{\Gamma} = \mathrm{supp}\,w^* \cup \{n_1 + \mathrm{supp}\,s^*\}$ the restricted matrix $\tilde{X}_{:,\tilde{\Gamma}}$ is full column rank. Then the pair $(w^*, s^*)$ is the unique solution to (9).
The proposed optimality result can be related to the unique identification of a sparse $w^*$ from rectified observations of the form $y = \max(X^\top w^*, 0)$. Clearly, the structure of the feature matrix $X$ plays the key role here, and the construction of the dual certificate stated in Proposition 2 relies entirely on it. As an insightful case, we show that when $X$ is a Gaussian matrix (that is, the elements of $X$ are i.i.d. values drawn from a standard normal distribution), for a sufficiently large number of samples, the dual certificate can be constructed. As a result, we can warrant that learning $w^*$ can be performed with many fewer samples than the layer's degrees of freedom. Theorem 3.
Let $w^* \in \mathbb{R}^N$ be an arbitrary s-sparse vector, $X \in \mathbb{R}^{N \times P}$ a Gaussian matrix representing the samples, and $\mu > 1$ a fixed value. Given $P = (11s + 7)\mu\log N$ observations of the type $y = \max(X^\top w^*, 0)$, with probability exceeding $1 - N^{1-\mu}$ the vector $w^*$ can be learned exactly through (8).
The standard Gaussian assumption for the feature matrix $X$ allows us to relate the number of training samples to the number of active links in a layer. Such a feature structure could be a realistic assumption for the first layer of the neural network. As reflected in the proof of Theorem 3, because of the dependence of the set Ω on the entries of $X$, we need to take an analysis path entirely different from the standard concentration-of-measure arguments for the sum of independent random matrices. In fact, the proof requires establishing concentration bounds for the sum of dependent random matrices. While we focused on each column of $W^*$ individually, for the observations $Y = \max(W^{*\top}X, 0)$, using the union bound, an exact identification of $W^*$ can be warranted as a corollary of Theorem 3.
Corollary 1. Consider an arbitrary matrix $W^* = [w^*_1, \dots, w^*_M] \in \mathbb{R}^{N \times M}$, where $s_m = \|w^*_m\|_0$ and $0 < s_m \le s_{\max}$ for $m = 1, \dots, M$. For $X \in \mathbb{R}^{N \times P}$ being a Gaussian matrix, set $Y = \max(W^{*\top}X, 0)$. If $\mu > 1 + \log_N M$ and $P = (11s_{\max} + 7)\mu\log N$, then for $\epsilon = 0$, $W^*$ can be accurately learned through (4) with probability exceeding $1 - \sum_{m=1}^{M} N^{1 - \mu\frac{11s_{\max}+7}{11s_m+7}}$.
It can be shown that for the network model in (1), probing the network with an i.i.d. sample matrix $X$ generates, as the subsequent layer outcomes, subgaussian random matrices with independent columns. Under certain well-conditioning of the input covariance matrix of each layer, results similar to Theorem 3 extend to the subsequent layers. While such results are left for an extended presentation of this work, Theorem 3 serves as a good reference for the general performance of the proposed retraining scheme and the associated analysis theme.
5 Implementing the Convex Program

If the quadratic constraint in (3) is brought to the objective via a regularization parameter λ, the resulting convex program decouples into M smaller programs of the form
\[
\hat{w}_m = \arg\min_{u} \|u\|_1 + \lambda \sum_{p\,:\,y_{m,p}>0} (u^\top x_p - y_{m,p})^2 \quad \text{s.t.} \quad u^\top x_p \le v_{m,p} \;\; \text{for } p : y_{m,p} = 0, \tag{11}
\]
each recovering a column of Ŵ. Such decoupling of the regularized form is computationally attractive, since it makes the trimming task highly distributable among parallel processing units, with each column of Ŵ recovered on a separate unit. Addressing the original constrained form (4) in a fast and scalable way requires more sophisticated techniques, which are left to a more extended presentation of the work.

We can formulate the program in standard form by introducing the index sets \(\Omega_m = \{p : y_{m,p} > 0\}\) and \(\Omega_m^c = \{p : y_{m,p} = 0\}\). Denoting the m-th row of Y by \(y_m^\top\) and the m-th row of V by \(v_m^\top\), one can equivalently rewrite (11) in terms of u as
\[
\min_{u} \|u\|_1 + u^\top Q_m u + 2 q_m^\top u \quad \text{s.t.} \quad P_m u \preceq c_m, \tag{12}
\]
where
\[
Q_m = \lambda X_{:,\Omega_m} X_{:,\Omega_m}^\top, \quad q_m = -\lambda X_{:,\Omega_m} y_{m,\Omega_m} = -\lambda X y_m, \quad P_m = X_{:,\Omega_m^c}^\top, \quad c_m = v_{m,\Omega_m^c}. \tag{13}
\]
The ℓ1 term in the objective of (12) can be converted into a linear term by defining a new vector \(\tilde{u} = [u^+; -u^-]\), where \(u^+ = \max(u, 0)\) and \(u^- = \min(u, 0)\). This variable change naturally yields \(u = [I, -I]\tilde{u}\) and \(\|u\|_1 = \mathbf{1}^\top \tilde{u}\). The convex program (12) is now cast as the standard quadratic program
\[
\min_{\tilde{u}} \; \tilde{u}^\top \tilde{Q}_m \tilde{u} + (\mathbf{1} + 2\tilde{q}_m)^\top \tilde{u} \quad \text{s.t.} \quad \begin{bmatrix} \tilde{P}_m \\ -I \end{bmatrix} \tilde{u} \preceq \begin{bmatrix} c_m \\ 0 \end{bmatrix}, \tag{14}
\]
where
\[
\tilde{Q}_m = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \otimes Q_m, \qquad \tilde{q}_m = \begin{bmatrix} q_m \\ -q_m \end{bmatrix}, \qquad \tilde{P}_m = [P_m, \; -P_m].
\]
Once \(\tilde{u}_m^*\), the solution to (14), is found, the solution to (11) can be recovered via \(\hat{w}_m = [I, -I]\tilde{u}_m^*\). Aside from the variety of convex solvers that can address (14), we are specifically interested in using the alternating direction method of multipliers (ADMM). In fact, the main motivation to translate (11) into (14) is the availability of ADMM implementations for problems in the form of (14) that are reasonably fast and scalable (e.g., see [17]).
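A quick numerical check (our illustration, not the paper's code) of the variable change behind (14): with ũ = [u⁺; −u⁻] ⪰ 0, the ℓ1 norm becomes linear and the lifted Kronecker quadratic reproduces u⊺Qu:

```python
# Sanity check of the l1-to-linear variable change used to obtain (14).
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(6)
u_plus = np.maximum(u, 0.0)
u_minus = np.minimum(u, 0.0)
u_tilde = np.concatenate([u_plus, -u_minus])       # entrywise nonnegative

I = np.eye(6)
recovered = np.hstack([I, -I]) @ u_tilde           # u = [I, -I] u_tilde
l1_linear = np.ones(12) @ u_tilde                  # ||u||_1 = 1' u_tilde

# The lifted quadratic Q_tilde = [[1,-1],[-1,1]] kron Q reproduces u' Q u,
# since [I, -I] u_tilde = u regardless of how mass splits across the halves.
Q = rng.standard_normal((6, 6)); Q = Q @ Q.T       # an arbitrary PSD Q
Q_tilde = np.kron(np.array([[1.0, -1.0], [-1.0, 1.0]]), Q)
quad_lifted = u_tilde @ Q_tilde @ u_tilde
```

This is exactly why the nonnegativity constraint −Iũ ⪯ 0 appears in (14): without it, the split of u into positive and negative parts would not be forced to stay consistent with the ℓ1 value.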
The authors have made the implementation publicly available online.³

6 Experiments and Discussions

Aside from the major technical contribution of the paper in providing a theoretical understanding of the Net-Trim pruning process, in this section we present experiments to highlight its performance against state-of-the-art techniques.

³The code for the regularized Net-Trim implementation using the ADMM scheme is available at: https://github.com/DNNToolBox/Net-Trim-v1

The first set of experiments, associated with the example presented in the introduction (classification of 2D points on nested spirals), compares the pruning power of Net-Trim against the standard pruning strategies of ℓ1 regularization and Dropout. The experiments demonstrate how Net-Trim can significantly improve the pruning level of a given network and produce simpler and more understandable networks. We also compare the cascade Net-Trim against the parallel scheme. As expected, for a fixed level of discrepancy between the initial and retrained models, the cascade scheme is capable of producing sparser networks. However, the computational distributability of the parallel scheme makes it a more favorable approach for large-scale and big-data problems. Due to space limitations, these experiments are deferred to §3 of the supplementary note.

We next apply Net-Trim to the problem of classifying hand-written digits of the Modified National Institute of Standards and Technology (MNIST) dataset. The set contains 60,000 training samples and 10,000 test instances. To examine different settings, we consider six networks: NN2-10K, a 784⋅300⋅300⋅10 network (two hidden layers of 300 nodes) trained with 10,000 samples; NN3-30K, a 784⋅300⋅500⋅300⋅10 network trained with 30,000 samples; and NN3-60K, a 784⋅300⋅1000⋅300⋅10 network trained with 60,000 samples.
We also consider CNN-10K, CNN-30K and CNN-60K, which are topologically identical convolutional networks trained with 10,000, 30,000 and 60,000 samples, respectively. The convolutional networks contain two convolutional layers composed of 32 filters of size 5×5×1 for the first layer and 5×5×32 for the second layer, both followed by max pooling, and a fully connected layer of 512 neurons. While the linearity of the convolution would allow applying Net-Trim to the convolutional layers as well, here we only retrain the fully connected layers. To address the Net-Trim convex program, we use the regularized form outlined in Section 5, which is fully amenable to parallel processing. For our largest problem (associated with the fully connected layer in CNN-60K), retraining each column takes less than 20 seconds, and distributing the independent jobs among a cluster of processing units (in our case 64) or using a GPU reduces the overall retraining of a layer to a few minutes.

Table 1 summarizes the retraining experiments. Panel (a) corresponds to Net-Trim operating in a low-discrepancy mode (smaller ϵ), while in panel (b) we explore more sparsity by allowing larger discrepancies. Each neural network is trained three times with different initialization seeds, and average quantities are reported. In these tables, the first row corresponds to the test accuracy of the initial models. The second row reports the overall pruning rate, and the third row reports the overall discrepancy between the initial and Net-Trim retrained models. We also compare the results with the work by Han, Pool, Tran and Dally (HPTD) [14]. The basic idea in [14] is to truncate the small weights across the network and perform another round of training on the active weights. The fourth row reports the test accuracy after applying Net-Trim. To make the comparison fair, we truncate the same number of weights when applying the HPTD technique.
The accuracy of the model after this truncation is presented in the fifth row. Rows six and seven present the test accuracy of Net-Trim and HPTD after a fine-training process (optional for Net-Trim). An immediate observation is the close test accuracy of Net-Trim compared to the initial trained models (row four vs. row one). We can observe from the second and third rows of the two panels that allowing more discrepancy (larger ϵ) increases the pruning level. We can also observe that the basic Net-Trim process (row four) beats HPTD (row seven) in many scenarios, and if we allow a fine-training step after Net-Trim (row six), a better test accuracy is achieved in all scenarios.

A serious problem with HPTD is early minima trapping (EMT). When we simply truncate the layer transfer matrices, ignoring their actual contribution to the network, the introduced error can be very large (row five), and using this biased pattern as an initialization for the fine training can produce poor local minima with large errors. The EMT blocks in the table correspond to scenarios where all three random seeds failed to generate acceptable results for this approach. In the experiments where Net-Trim was followed by an additional fine-training step, this was never an issue, since the Net-Trim outcome is already a good model solution.

In Figure 3(a), we visualize Ŵ₁ after the Net-Trim process. We observe 28 bands (MNIST images are 28×28), where the zero columns represent the boundary pixels carrying the least image information. It is noteworthy that such an interpretable result is achieved by Net-Trim with no pre- or post-processing. A similar outcome of HPTD is depicted in panel (b). In fact, the authors of [14] present a visualization similar to panel (a), which is the result of applying the HPTD process iteratively and going through the retraining step many times.
Such a path incurs a substantial processing load and comes with no guarantee of convergence.

Table 1: Test accuracy of different models before and after Net-Trim (NT) and HPTD [14]. Without a fine-training (FT) step, Net-Trim produces pruned networks that are in the majority of cases more accurate than HPTD, with no risk of poor local minima. Adding an additional FT step makes Net-Trim consistently superior.

(a) Low-discrepancy mode (smaller ϵ):

| | NN2-10K | NN3-30K | NN3-60K | CNN-10K | CNN-30K | CNN-60K |
|---|---|---|---|---|---|---|
| Init. Mod. Acc. (%) | 95.59 | 97.58 | 98.18 | 98.37 | 99.11 | 99.25 |
| Total Pruning (%) | 40.86 | 30.69 | 29.38 | 43.91 | 39.11 | 45.74 |
| NT Overall Disc. (%) | 1.98 | 1.31 | 1.77 | 1.22 | 0.75 | 0.55 |
| NT No FT Acc. (%) | 95.47 | 97.55 | 98.1 | 98.31 | 99.15 | 99.25 |
| HPTD No FT Acc. (%) | 9.3 | 10.34 | 8.92 | 19.17 | 55.92 | 30.17 |
| NT + FT Acc. (%) | 95.85 | 97.67 | 98.12 | 98.35 | 99.21 | 99.33 |
| HPTD + FT Acc. (%) | 93.56 | 97.32 | EMT | 98.16 | EMT | EMT |

(b) High-discrepancy mode (larger ϵ):

| | NN2-10K | NN3-30K | NN3-60K | CNN-10K | CNN-30K | CNN-60K |
|---|---|---|---|---|---|---|
| Init. Mod. Acc. (%) | 95.59 | 97.58 | 98.18 | 98.37 | 99.11 | 99.25 |
| Total Pruning (%) | 75.87 | 75.82 | 77.40 | 76.18 | 77.63 | 81.62 |
| NT Overall Disc. (%) | 4.95 | 11.01 | 11.47 | 3.65 | 5.32 | 8.93 |
| NT No FT Acc. (%) | 94.92 | 95.97 | 97.35 | 97.91 | 99.08 | 98.96 |
| HPTD No FT Acc. (%) | 8.97 | 10.1 | 8.92 | 31.18 | 73.36 | 46.84 |
| NT + FT Acc. (%) | 95.89 | 97.69 | 98.19 | 98.40 | 99.17 | 99.26 |
| HPTD + FT Acc. (%) | 95.61 | EMT | 97.96 | EMT | 99.01 | 99.06 |

Figure 3: Visualization of Ŵ₁ in NN3-60K; (a) Net-Trim output; (b) standard HPTD.

Figure 4: Noise robustness of initial and retrained networks; (a) NN2-10K; (b) NN3-30K.

Also, for a deeper understanding of the robustness Net-Trim adds to the models, in Figure 4 we plot the classification accuracy of the initial and retrained models against the level of noise added to the test data (ranging from 0 to 160%). The Net-Trim improvement in accuracy becomes more noticeable as the noise level in the data increases. As expected, reducing model complexity makes the network more robust to outliers and noisy samples. It is also interesting to note that the NN3-30K initial model in panel (b), which is trained with more data, is robust to a larger level of noise than NN2-10K in panel (a). However, the retrained models behave rather similarly (blue curves), indicating the savings in training samples that Net-Trim can provide. In fact, Net-Trim can be particularly useful when the number of training samples is limited. While overfitting is likely to occur in such scenarios, Net-Trim reduces the complexity of the model by setting a significant portion of the weights at each layer to zero, while maintaining model consistency. This capability can also be viewed from a different perspective: Net-Trim simplifies the process of determining the network size. In other words, if the network used at the training phase is oversized, Net-Trim can reduce it to a size matching the data.
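The noise-robustness evaluation of Figure 4 can be sketched as follows. The data and the linear classifier below are toy stand-ins for the trained networks (our assumption, not the paper's setup); the evaluation loop itself mirrors the figure: Gaussian noise scaled to a percentage of the per-feature data standard deviation is added to the test inputs, and accuracy is recorded at each level.

```python
# Illustrative noise-robustness sweep with a toy stand-in classifier.
import numpy as np

rng = np.random.default_rng(0)

def accuracy(predict, X, labels):
    return float(np.mean(predict(X) == labels))

# Toy stand-in model: a linear rule on two well-separated 2D blobs.
n = 400
labels = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, 2)) + 6.0 * labels[:, None]   # separated classes
predict = lambda X: (X.sum(axis=1) > 6.0).astype(int)

noise_levels = [0.0, 0.4, 0.8, 1.2, 1.6]      # 0% .. 160% of data std
feature_std = X.std(axis=0)
accs = [accuracy(predict,
                 X + lvl * feature_std * rng.standard_normal(X.shape),
                 labels)
        for lvl in noise_levels]
```

Plotting `accs` against `noise_levels` for an initial and a retrained model would reproduce the style of Figure 4; the paper's observation is that the pruned model's curve decays more slowly.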
Finally, aside from the theoretical and practical contributions that Net-Trim brings to the understanding of deep neural networks, the idea can easily be generalized to retraining schemes with other regularizers (e.g., ridge or elastic-net type regularizers) or other structural constraints on the network.

References

[1] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[2] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[3] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[4] K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, 2016.
[5] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[6] R. Giryes, G. Sapiro, and A. M. Bronstein. Deep neural networks with random Gaussian weights: A universal classification strategy? IEEE Transactions on Signal Processing, 64(13):3444–3457, 2016.
[7] Y. Bengio, N. Le Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In Proceedings of the 18th International Conference on Neural Information Processing Systems, pages 123–130, 2005.
[8] F. Bach. Breaking the curse of dimensionality with convex neural networks. Technical report, 2014.
[9] O. Aslan, X. Zhang, and D. Schuurmans. Convex deep learning via normalized kernels. In Proceedings of the 27th International Conference on Neural Information Processing Systems, pages 3275–3283, 2014.
[10] S. Nowlan and G. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, 1992.
[11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures.
Neural Computation, 7(2):219–269, 1995.
[12] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[13] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[14] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
[15] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In International Conference on Machine Learning, pages 2285–2294, 2015.
[16] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[17] E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson. Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Transactions on Automatic Control, 60(3):644–658, 2015.
Robust Imitation of Diverse Behaviors

Ziyu Wang*, Josh Merel*, Scott Reed, Greg Wayne, Nando de Freitas, Nicolas Heess
DeepMind
{ziyu,jsmerel,reedscot,gregwayne,nandodefreitas,heess}@google.com

Abstract

Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated, with a correspondingly smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach by learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.

1 Introduction

Building versatile embodied agents, both in the form of real robots and animated avatars, capable of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced by toddlers.
Towards addressing this challenge, in this work we combine several deep generative approaches to imitation learning in a way that accentuates their individual strengths and addresses their limitations. The end product is a robust neural network policy that can imitate a large and diverse set of behaviors using few training demonstrations. We first introduce a variational autoencoder (VAE) [15, 26] for supervised imitation, consisting of a bi-directional LSTM [13, 32, 9] encoder mapping demonstration sequences to embedding vectors, and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory embedding and the current state to a continuous action vector. The second is a dynamics model mapping the embedding and previous state to the present state, while modelling correlations among states with a WaveNet [39]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker, implemented in the MuJoCo physics engine [38], show that the VAE learns a structured semantic embedding space, which allows for smooth policy interpolation. While supervised policies that condition on demonstrations (such as our VAE or the recent approach of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too much from the demonstration trajectories. These limitations of supervised learning for imitation, also known as behavioral cloning (BC) [24], are well known [28, 29].

*Joint first authors.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using another type of deep generative model called Generative Adversarial Networks (GANs) [8].
Their technique, called Generative Adversarial Imitation Learning (GAIL), uses reinforcement learning, allowing the agent to interact with the environment during training. GAIL allows one to learn more robust policies with fewer demonstrations, but adversarial training introduces another difficulty called mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset of modes of a probability distribution, resulting in a failure to produce adequately diverse samples. This will cause the learned policy to capture only a subset of control behaviors (which can be viewed as modes of a distribution), rather than allocating capacity to cover all modes. Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section 3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE policy alone is brittle and GAIL alone does not capture all of the diverse behaviors.

2 Background and Related Work

We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: \(\max_\theta \sum_i \log p_\theta(x_i)\). This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: \(D_{KL}(p_{data}(\cdot)\,\|\,p_\theta(\cdot))\). For highly expressive generative models, however, optimizing the log-likelihood is often intractable. One class of highly expressive yet tractable models are the auto-regressive models, which decompose the log-likelihood as \(\log p_\theta(x) = \sum_i \log p_\theta(x_i \mid x_{<i})\). Auto-regressive models have been highly effective in both image and audio generation [40, 39].
Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, \(q_\phi(z|x)\), and optimize a lower bound of the log-likelihood:
\[
\mathbb{E}_{q_\phi(z|x_i)} [\log p_\theta(x_i|z)] - D_{KL}\big(q_\phi(z|x_i)\,\|\,p(z)\big) \le \log p_\theta(x_i). \tag{1}
\]
For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models is often referred to as VAEs.

GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function:
\[
\min_G \max_D \; \mathbb{E}_{p_{data}(x)} [\log D(x)] + \mathbb{E}_{p(z)} [\log(1 - D(G(z)))]. \tag{2}
\]
Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier samples from contemporary VAE models [8]. However, unlike VAEs and auto-regressive models trained via maximum likelihood, they suffer from the mode collapse problem [7]. Recent work has focused on alleviating mode collapse in image modeling [2, 4, 19, 25, 42, 11, 27], but so far these methods have not been demonstrated in the control domain. Like GANs, auto-regressive models produce sharp and at times realistic image samples [40], but they tend to be slow to sample from and, unlike VAEs, do not immediately provide a latent vector representation of the data. This is why we used VAEs to learn representations of demonstration trajectories.

We now turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a behavior provided via a demonstration. It is natural to view imitation learning from the perspective of generative modeling.
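As a brief numerical aside on the bound (1): for a diagonal-Gaussian posterior against a standard-normal prior, the KL term has the well-known closed form 0.5 Σ(μ² + σ² − log σ² − 1), and the re-parameterization trick gives a matching Monte Carlo estimate. A small illustrative check (our toy numbers):

```python
# Closed-form KL of the bound (1) for q(z|x) = N(mu, diag(sigma^2)) vs.
# p(z) = N(0, I), cross-checked against a reparameterized Monte Carlo estimate.
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, sigma):
    return 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)

mu = np.array([0.5, -1.0]); sigma = np.array([0.8, 1.2])
kl_closed = kl_diag_gaussian_to_std_normal(mu, sigma)

rng = np.random.default_rng(0)
z = mu + sigma * rng.standard_normal((200_000, 2))   # reparameterization trick
log_q = -0.5 * (((z - mu) / sigma)**2 + np.log(2 * np.pi * sigma**2)).sum(axis=1)
log_p = -0.5 * (z**2 + np.log(2 * np.pi)).sum(axis=1)
kl_mc = np.mean(log_q - log_p)                       # E_q[log q - log p]
```

The agreement of `kl_closed` and `kl_mc` is exactly what makes the bound (1) amenable to low-variance stochastic optimization.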
However, unlike in image and audio modeling, in imitation the generation process is constrained by the environment and the agent's actions, with observations becoming accessible through interaction. Imitation learning brings its own unique challenges. In this paper, we assume that we have been provided with demonstrations \(\{\tau_i\}_i\), where the i-th trajectory of state-action pairs is \(\tau_i = \{x^i_1, a^i_1, \cdots, x^i_{T_i}, a^i_{T_i}\}\). These trajectories may have been produced by either an artificial or natural agent.

As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: \(\max_\theta \sum_i \sum_{t=1}^{T_i} \log p(x^i_{t+1} \mid x^i_t, \pi_\theta(x^i_t))\). If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: \(\max_\theta \sum_i \sum_{t=1}^{T_i} \log \pi_\theta(a^i_t \mid x^i_t)\). The latter approach is what we referred to as behavioral cloning (BC) in the introduction. When demonstrations are plentiful, BC is effective [24, 30, 6]. Without abundant data, BC is known to be inadequate [28, 29, 12]. The inefficiencies of BC stem from the sequential nature of the problem. When using BC, even the slightest errors in mimicking the demonstration behavior can quickly accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but for BC to achieve this, the corrective behaviors have to appear frequently in the training data.

GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and learn from these interactions. It constructs a reward function using GANs to measure the similarity between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the following objective function:
\[
\min_\theta \max_\psi \; \mathbb{E}_{\pi_E}[\log D_\psi(x, a)] + \mathbb{E}_{\pi_\theta}[\log(1 - D_\psi(x, a))], \tag{3}
\]
where \(\pi_E\) denotes the expert policy that generated the demonstration trajectories.
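The action-likelihood (BC) objective \(\max_\theta \sum_{i,t} \log \pi_\theta(a^i_t|x^i_t)\) discussed above has a particularly simple instance: for a linear-Gaussian policy π(a|x) = N(Wx, I), the maximum-likelihood fit is ordinary least squares on state-action pairs. A toy sketch (the expert, data, and dimensions below are hypothetical, not the paper's setup):

```python
# Minimal behavioral-cloning illustration: MLE for a linear-Gaussian policy
# reduces to least squares on demonstration (state, action) pairs.
import numpy as np

rng = np.random.default_rng(0)
W_expert = np.array([[1.0, -2.0], [0.5, 0.3]])             # hypothetical expert
X = rng.standard_normal((500, 2))                          # demonstration states
A = X @ W_expert.T + 0.01 * rng.standard_normal((500, 2))  # near-expert actions

# argmax_W sum_t log N(a_t | W x_t, I)  ==  argmin_W ||X W' - A||_F^2
sol, *_ = np.linalg.lstsq(X, A, rcond=None)
W_bc = sol.T
```

This also illustrates BC's limitation described above: the fit is only as good as the state distribution in the demonstrations, with no mechanism to correct once the unrolled policy drifts off that distribution.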
To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards \(r(x_t, a_t) = -\log(1 - D_\psi(x_t, a_t))\). Maximizing this reward, which may differ from the expert reward, drives \(\pi_\theta\) to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize the learning process [31]. GAIL has become a popular choice for imitation learning [16], and there already exist model-based [3] and third-person [36] extensions. Two recent GAIL-based approaches [17, 10] introduce additional reward signals that encourage the policy to make use of latent variables which would correspond to different types of demonstrations after training. These approaches are complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation. The literature on imitation, including BC, apprenticeship learning and inverse reinforcement learning, is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to recent authoritative surveys on the topic [5, 1, 14]. Inspired by recent works, including [12, 36, 6], we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations. In graphics, a significant effort has been devoted to the design of physics controllers that take advantage of motion capture data, or key-frames and other inputs provided by animators [33, 35, 43, 22]. Yet, as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires significant human insight. Our focus is on flexible, general imitation methods.

3 A Generative Modeling Approach to Imitating Diverse Behaviors

3.1 Behavioral cloning with variational autoencoders suited for control

In this section, we follow a similar approach to Duan et al.
[6], but opt for stochastic VAEs, whose posterior distribution \(q_\phi(z|x_{1:T})\) better regularizes the latent space. In our VAE, an encoder maps a demonstration sequence to an embedding vector z. Given z, we decode both the state and action trajectories, as shown in Figure 1. To train the model, we minimize the following loss:
\[
L(\alpha, w, \phi; \tau_i) = -\mathbb{E}_{q_\phi(z|x^i_{1:T_i})} \left[ \sum_{t=1}^{T_i} \log \pi_\alpha(a^i_t|x^i_t, z) + \log p_w(x^i_{t+1}|x^i_t, z) \right] + D_{KL}\big(q_\phi(z|x^i_{1:T_i})\,\|\,p(z)\big).
\]
Our encoder q uses a bi-directional LSTM. To produce the final embedding, it averages the outputs of the second layer of this LSTM over time before applying a final linear transformation to generate the mean and standard deviation of a Gaussian. We take one sample from this Gaussian as our demonstration encoding. The action decoder is an MLP that maps the concatenation of the state and the embedding to the parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model [39].

Figure 1: Schematic of the encoder-decoder architecture. LEFT: Bidirectional LSTM on demonstration states, followed by action and state decoders at each time step. RIGHT: State decoder model within a single time step, which is autoregressive over the state dimensions.

In particular, the state decoder conditions on the embedding z and previous state \(x_{t-1}\) to generate the vector \(x_t\) autoregressively; that is, the autoregression is over the components of the vector \(x_t\). The WaveNet lessens the load on the encoder, which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector. Finally, instead of a Softmax, we use a mixture of Gaussians as the output of the WaveNet.

3.2 Diverse generative adversarial imitation learning

As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations.
Our solution to obtain more robust policies from few demonstrations, which are also capable of diverse behaviors, is to build on GAIL. To enable GAIL to produce diverse solutions, we condition the discriminator on the embeddings generated by the VAE encoder and integrate out the GAIL objective with respect to the variational posterior \(q_\phi(z|x_{1:T})\). Specifically, we train the discriminator by optimizing the following objective:
\[
\max_\psi \; \mathbb{E}_{\tau_i \sim \pi_E} \left\{ \mathbb{E}_{q(z|x^i_{1:T_i})} \left[ \frac{1}{T_i} \sum_{t=1}^{T_i} \log D_\psi(x^i_t, a^i_t|z) + \mathbb{E}_{\pi_\theta}[\log(1 - D_\psi(x, a|z))] \right] \right\}. \tag{4}
\]
A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors from state trajectories, but the discriminator conditions on an annotated class label, as in conditional GANs [21]. We condition on unlabeled trajectories, which have been passed through a powerful encoder, and hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3. Since our discriminator is conditional, the reward function is also conditional: \(r_t(x_t, a_t|z) = -\log(1 - D_\psi(x_t, a_t|z))\). We also clip the reward so that it is upper-bounded. Conditioning on z allows us to generate an infinite number of reward functions, each of them tailored to imitating a different trajectory. Policy gradients, though mode-seeking, will not cause collapse into one particular mode due to the diversity of reward functions.

To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs:
\[
\min_G \max_D V(G, D) = \int_y p(y) \int_z q(z|y) \left[ \log D(y|z) + \int_{\hat{y}} G(\hat{y}|z) \log(1 - D(\hat{y}|z))\, d\hat{y} \right] dz\, dy.
\]
This function is a simplification of our objective function. Furthermore, it satisfies the following property. Lemma 1.
Assuming that q computes the true posterior distribution, that is, \(q(z|y) = \frac{p(y|z)p(z)}{p(y)}\), then
\[
V(G, D) = \int_z p(z) \left[ \int_y p(y|z) \log D(y|z)\, dy + \int_{\hat{y}} G(\hat{y}|z) \log(1 - D(\hat{y}|z))\, d\hat{y} \right] dz.
\]

Algorithm 1 Diverse generative adversarial imitation learning.
INPUT: Demonstration trajectories \(\{\tau_i\}_i\) and VAE encoder q.
repeat
  for \(j \in \{1, \cdots, n\}\) do
    Sample trajectory \(\tau_j\) from the demonstration set and sample \(z_j \sim q(\cdot|x^j_{1:T_j})\).
    Run policy \(\pi_\theta(\cdot|z_j)\) to obtain the trajectory \(\hat{\tau}_j\).
  end for
  Update policy parameters via TRPO with rewards \(r^j_t(x^j_t, a^j_t|z_j) = -\log(1 - D_\psi(x^j_t, a^j_t|z_j))\).
  Update discriminator parameters from \(\psi_i\) to \(\psi_{i+1}\) with gradient:
  \[
  \nabla_\psi \left\{ \frac{1}{n} \sum_{j=1}^{n} \left[ \frac{1}{T_j} \sum_{t=1}^{T_j} \log D_\psi(x^j_t, a^j_t|z_j) + \frac{1}{\hat{T}_j} \sum_{t=1}^{\hat{T}_j} \log(1 - D_\psi(\hat{x}^j_t, \hat{a}^j_t|z_j)) \right] \right\}
  \]
until max iterations or time reached.

If we further assume an optimal discriminator [8], the cost optimized by the generator becomes
\[
C(G) = 2 \int_z p(z)\, \mathrm{JSD}\big[ p(\,\cdot\,|z) \,\|\, G(\,\cdot\,|z) \big]\, dz - \log 4, \tag{5}
\]
where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize this divergence, and it is well documented that optimizing it leads to mode-seeking behavior [37]. The objective defined in (5) alleviates this problem. Consider an example where p(x) is a mixture of Gaussians and p(z) describes the distribution over the mixture components. In this case, the conditional distribution p(x|z) is not multi-modal, and therefore minimizing the Jensen-Shannon divergence is no longer problematic. In general, if the latent variable z removes most of the ambiguity, we can expect the conditional distributions to be close to uni-modal and therefore our generators to be non-degenerate. In light of this analysis, we would like q to be as close to the posterior as possible, hence our choice of training q with VAEs.

We now turn our attention to some algorithmic considerations. We can use the VAE policy \(\pi_\alpha(a_t|x_t, z)\) to accelerate the training of \(\pi_\theta(a_t|x_t, z)\). One possible route is to initialize the weights \(\theta\) to \(\alpha\).
However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix \(\alpha\) and structure the conditional policy as follows:
\[
\pi_\theta(\,\cdot\,|x, z) = \mathcal{N}\big(\,\cdot \mid \mu_\theta(x, z) + \mu_\alpha(x, z),\; \sigma_\theta(x, z)\big),
\]
where \(\mu_\alpha\) is the mean of the VAE policy. Finally, the policy parameterized by \(\theta\) is optimized with TRPO [31] while holding the parameters \(\alpha\) fixed, as shown in Algorithm 1.

4 Experiments

The primary focus of our experimental evaluation is to demonstrate that the architecture allows learning of robust controllers capable of producing the full spectrum of demonstration behaviors for a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm, a 9 DoF planar walker, and a 62 DoF complex humanoid (56 actuated joint angles, and a freely translating and rotating 3D root joint). While for the reaching task BC is sufficient to obtain a working controller, for the other two problems our full learning procedure is critical. We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible structure that can be exploited for control. Finally, we show that the encoder can be used to capture the gist of novel demonstration trajectories, which can then be reproduced by the controller. All experiments are conducted with the MuJoCo physics engine [38]. For details of the simulation and the experimental setup, please see the appendix.

4.1 Robotic arm reaching

We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a robotic arm developed by Kinova Robotics.

Figure 3: Interpolation in the latent space for the Jaco arm. Each column shows three frames of a target-reach trajectory (time increases across rows).
The left- and right-most columns correspond to the demonstration trajectories between which we interpolate. Intermediate columns show trajectories generated by our VAE policy conditioned on embeddings which are convex combinations of the embeddings of the demonstration trajectories. Interpolating in the latent space indeed corresponds to interpolation in the physical dimensions. To obtain demonstrations, we trained 60 independent policies to reach to random target locations2 in the workspace starting from the same initial configuration. We generated 30 trajectories from each of the first 50 policies. These serve as training data for the VAE model (1500 training trajectories in total). The remaining 10 policies were used to generate test data. Figure 2: Trajectories of the Jaco arm's end-effector on test set demonstrations. The trajectories produced by the VAE policy and the corresponding demonstrations are plotted with the same color, illustrating that the policy can imitate well. The reaching task is relatively simple, so with this amount of data the VAE policy is fairly robust. After training, the VAE encodes and reproduces the demonstrations as shown in Figure 2. Representative examples can be found in the video in the supplemental material. To further investigate the nature of the embedding space we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories. We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment are illustrated with a representative pair in Figure 3. We observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint) space, highlighting the semantic meaningfulness of the discovered latent space. 4.2 2D Walker We found reaching behavior to be relatively easy to imitate, presumably because it does not involve much physical contact.
As a more challenging test we consider bipedal locomotion. We train 60 neural network policies for a 2d walker to serve as demonstrations3. These policies are each trained to move at different speeds both forward and backward depending on a label provided as additional input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s): -1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve, see Figure 4 (top right). Besides the target speed the reward function imposes few constraints on the behavior. The resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for most purposes this diversity is undesirable, for the present experiment we consider it a feature. 2See appendix for details. 3See section A.2 in the appendix for details. Figure 4: LEFT: t-SNE plot of the embedding vectors of the training trajectories; marker color indicates average speed. The plot reveals a clear clustering according to speed. Insets show pairs of frames from selected example trajectories. Trajectories nearby in the plot tend to correspond to similar movement styles even when differing in speed (e.g. see the pair of trajectories on the right hand side of the plot). RIGHT, TOP: Distribution of walker speeds for the demonstration trajectories. RIGHT, BOTTOM: Difference in speed between the demonstration and imitation trajectories. Measured against the demonstration trajectories, we observe that the fine-tuned controllers tend to have less difference in speed compared to controllers without fine-tuning. We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each with a length of 400 steps or 10s of simulated time).
In this experiment our full approach is required: training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general, presumably because our relatively small training set does not cover the space of trajectories sufficiently densely. On this generated dataset, we also train policies with GAIL using the same architecture and hyper-parameters. Due to the lack of conditioning, GAIL does not coherently reproduce trajectories. Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL also exhibit dramatically less diversity; see video. A general problem of adversarial training is that there is no easy way to quantitatively assess the quality of learned models. Here, since we aim to imitate particular demonstration trajectories that were trained to achieve particular target speed(s), we can use the difference between the speed of the demonstration trajectory and the trajectory produced by the decoder as a surrogate measure of the quality of the imitation (cf. also [12]). The general quality of the learned model and the improvement achieved by the adversarial stage of our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories each for all 60 policies) from the training set, compute the corresponding embedding vectors using the encoder, and use both the VAE policy as well as the improved policy from the adversarial stage to imitate each of the trajectories. We determine the absolute values of the difference between the average speed of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4, the adversarial training greatly improves the reliability of the controller as well as the ability of the model to accurately match the speed of the demonstration. We also include additional quantitative analysis of our approach using this speed metric in Appendix B.
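The speed-difference surrogate can be sketched in a few lines of Python. The trajectory format (root x-positions sampled at a fixed control rate) and the 40 Hz rate below are illustrative assumptions, not the paper's exact data layout:

```python
import numpy as np

def avg_speed(xs, dt):
    """Average speed (m/s) of a trajectory given root x-positions
    sampled every dt seconds."""
    xs = np.asarray(xs, dtype=float)
    return (xs[-1] - xs[0]) / (dt * (len(xs) - 1))

def speed_error(demo_xs, imit_xs, dt):
    """Absolute difference in average speed between a demonstration
    and its imitation -- the surrogate imitation-quality metric."""
    return abs(avg_speed(demo_xs, dt) - avg_speed(imit_xs, dt))

# Example: a 1 m/s demonstration vs. a 0.8 m/s imitation at 40 Hz control.
demo = np.linspace(0.0, 10.0, 401)   # covers 10 m in 10 s
imit = np.linspace(0.0, 8.0, 401)    # covers 8 m in 10 s
print(speed_error(demo, imit, dt=0.025))  # speed mismatch of about 0.2 m/s
```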
Video of our agent imitating a diverse set of behaviors can be found in the supplemental material. To assess generalization to novel trajectories we encode and subsequently imitate trajectories not contained in the training set. The supplemental video contains several representative examples, demonstrating that the style of movement is successfully imitated for previously unseen trajectories. Finally, we analyze the structure of the embedding space. We embed training trajectories and perform dimensionality reduction with t-SNE [41]. The result is shown in Fig. 4. It reveals a clear clustering according to movement speeds, thus recovering the nature of the task context for the demonstration trajectories. We further find that trajectories that are nearby in embedding space tend to correspond to similar movement styles even when differing in speed. Figure 5: Left: examples of the demonstration trajectories in the CMU humanoid domain. The top row shows demonstrations from both the training and test set. The bottom row shows the corresponding imitation. Right: percentage of episodes in which the humanoid falls down before the end of the episode, with and without fine-tuning. 4.3 Complex humanoid We consider a humanoid body of high dimensionality that poses a hard control problem. The construction of this body and associated control policies is described in [20], and is briefly summarized in the appendix (section A.3) for completeness. We generate training trajectories with the existing controllers, which can produce instances of one of six different movement styles (see section A.3). Examples of such trajectories are shown in Fig. 5 and in the supplemental video. The training set consists of 250 random trajectories from 6 different neural network controllers that were trained to match 6 different movement styles from the CMU motion capture database4. Each trajectory is 334 steps or 10s long.
We use a second set of 5 controllers from which we generate trajectories for evaluation (3 of these policies were trained on the same movement styles as the policies used for generating training data). Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing sensible controllers: the VAE policy is reasonably good at imitating the demonstration trajectories, although it lacks the robustness to be practically useful. Adversarial training dramatically improves the stability of the controller. We analyze the improvement quantitatively by computing the percentage of episodes in which the humanoid falls down before the end of the episode while imitating either training or test policies. The results are summarized in Figure 5 (right). The figure further shows sequences of frames of representative demonstration and associated imitation trajectories. Videos of demonstration and imitation behaviors can be found in the supplemental video. For practical purposes it is desirable to allow the controller to transition from one behavior to another. We test this possibility in an experiment similar to the one for the Jaco arm: we determine the embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on the first embedding vector, and then transition from one behavior to the other half-way through the episode by linearly interpolating the embeddings of the two demonstration trajectories over a window of 20 control steps. Although not always successful, the learned controller often transitions robustly, despite not having been trained to do so. Representative examples of these transitions can be found in the supplemental video. 5 Conclusions We have proposed an approach for imitation learning that combines the favorable properties of techniques for density modeling with latent variables (VAEs) with those of GAIL.
The result is a model that learns, from a moderate number of demonstration trajectories, (1) a semantically well-structured embedding of behaviors, (2) a corresponding multi-task controller that allows robust execution of diverse behaviors from this embedding space, as well as (3) an encoder that can map new trajectories into the embedding space and hence allows for one-shot imitation. Our experimental results demonstrate that our approach can work on a variety of control problems, and that it scales even to very challenging ones such as the control of a simulated humanoid with a large number of degrees of freedom. 4See appendix for details. References [1] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009. [2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. Preprint arXiv:1701.07875, 2017. [3] N. Baram, O. Anschel, and S. Mannor. Model-based adversarial imitation learning. Preprint arXiv:1612.02179, 2016. [4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. Preprint arXiv:1703.10717, 2017. [5] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. In Springer Handbook of Robotics, pages 1371–1394. 2008. [6] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. Preprint arXiv:1703.07326, 2017. [7] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. Preprint arXiv:1701.00160, 2016. [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014. [9] A. Graves, S. Fernández, and J. Schmidhuber. Bidirectional LSTM networks for improved phoneme classification and recognition. Artificial Neural Networks: Formal Models and Their Applications–ICANN 2005, pages 753–753, 2005. [10] K.
Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. Lim. Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. Preprint arXiv:1705.10479, 2017. [11] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. Preprint arXiv:1702.08431, 2017. [12] J. Ho and S. Ermon. Generative adversarial imitation learning. In NIPS, pages 4565–4573, 2016. [13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [14] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys, 50(2):21, 2017. [15] D. Kingma and M. Welling. Auto-encoding variational Bayes. Preprint arXiv:1312.6114, 2013. [16] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer. Imitating driver behavior with generative adversarial networks. Preprint arXiv:1701.06699, 2017. [17] Y. Li, J. Song, and S. Ermon. Inferring the latent structure of human decision-making from raw visual inputs. Preprint arXiv:1703.08840, 2017. [18] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. Preprint arXiv:1509.02971, 2015. [19] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. Preprint arXiv:1611.04076, 2016. [20] J. Merel, Y. Tassa, TB. Dhruva, S. Srinivasan, J. Lemmon, Z. Wang, G. Wayne, and N. Heess. Learning human behaviors from motion capture by adversarial imitation. Preprint arXiv:1707.02201, 2017. [21] M. Mirza and S. Osindero. Conditional generative adversarial nets. Preprint arXiv:1411.1784, 2014. [22] U. Muico, Y. Lee, J. Popović, and Z. Popović. Contact-aware nonlinear control of dynamic characters. In SIGGRAPH, 2009. [23] X. B. Peng, G. Berseth, K. Yin, and M. van de Panne. DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning.
In SIGGRAPH, 2017. [24] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88–97, 1991. [25] G. J. Qi. Loss-sensitive generative adversarial networks on Lipschitz densities. Preprint arXiv:1701.06264, 2017. [26] D. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. [27] M. Rosca, B. Lakshminarayanan, D. Warde-Farley, and S. Mohamed. Variational approaches for autoencoding generative adversarial networks. Preprint arXiv:1706.04987, 2017. [28] S. Ross and A. Bagnell. Efficient reductions for imitation learning. In AIStats, 2010. [29] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AIStats, 2011. [30] A. Rusu, S. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell. Policy distillation. Preprint arXiv:1511.06295, 2015. [31] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015. [32] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997. [33] D. Sharon and M. van de Panne. Synthesis of controllers for stylized planar bipedal walking. In ICRA, pages 2387–2392, 2005. [34] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014. [35] K. W. Sok, M. Kim, and J. Lee. Simulating biped behaviors from human motion data. 2007. [36] B. C. Stadie, P. Abbeel, and I. Sutskever. Third-person imitation learning. Preprint arXiv:1703.01703, 2017. [37] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. Preprint arXiv:1511.01844, 2015. [38] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control.
In IROS, pages 5026–5033, 2012. [39] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. Preprint arXiv:1609.03499, 2016. [40] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, and A. Graves. Conditional image generation with PixelCNN decoders. In NIPS, 2016. [41] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008. [42] R. Wang, A. Cully, H. Jin Chang, and Y. Demiris. MAGAN: Margin adaptation for generative adversarial networks. Preprint arXiv:1704.03817, 2017. [43] K. Yin, K. Loken, and M. van de Panne. SIMBICON: Simple biped locomotion control. In SIGGRAPH, 2007.
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games Yuandong Tian1 Qucheng Gong1 Wenling Shang2 Yuxin Wu1 C. Lawrence Zitnick1 1Facebook AI Research 2Oculus 1{yuandong, qucheng, yuxinwu, zitnick}@fb.com 2wendy.shang@oculus.com Abstract In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames-per-second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open sourced at https://github.com/facebookresearch/ELF. 1 Introduction Game environments are commonly used for research in Reinforcement Learning (RL), i.e. how to train intelligent agents to behave properly from sparse rewards [4, 6, 5, 14, 29]. Compared to the real world, game environments offer an infinite amount of highly controllable, fully reproducible, and automatically labeled data.
Ideally, a game environment for fundamental RL research is: • Extensive: The environment should capture many diverse aspects of the real world, such as rich dynamics, partial information, delayed/long-term rewards, concurrent actions with different granularity, etc. Having an extensive set of features and properties increases the potential for trained agents to generalize to diverse real-world scenarios. • Lightweight: A platform should be fast and capable of generating samples hundreds or thousands of times faster than real-time with minimal computational resources (e.g., a single machine). Lightweight and efficient platforms help accelerate academic research of RL algorithms, particularly for methods which are heavily data-dependent. • Flexible: A platform that is easily customizable at different levels, including rich choices of environment content, easy manipulation of game parameters, accessibility of internal variables, and flexibility of training architectures. All are important for fast exploration of different algorithms. For example, changing environment parameters [35], as well as using internal data [15, 19], have been shown to substantially accelerate training. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. To our knowledge, no current game platforms satisfy all criteria. Modern commercial games (e.g., StarCraft I/II, GTA V) are extremely realistic, but are not customizable and require significant resources for complex visual effects and for computational costs related to platform-shifting (e.g., a virtual machine to host Windows-only SC I on Linux). Old games and their wrappers [4, 6, 5, 14] are substantially faster, but are less realistic with limited customizability. On the other hand, games designed for research purposes (e.g., MazeBase [29], µRTS [23]) are efficient and highly customizable, but are not very extensive in their capabilities.
Furthermore, none of the environments consider simulation concurrency, and thus have limited flexibility when different training architectures are applied. For instance, the interplay between RL methods and environments during training is often limited to providing simplistic interfaces (e.g., one interface for one game) in scripting languages like Python. In this paper, we propose ELF, a research-oriented platform that offers games with diverse properties, efficient simulation, and highly customizable environment settings. The platform allows for both game parameter changes and new game additions. The training of RL methods is deeply and flexibly integrated into the environment, with an emphasis on concurrent simulations. On ELF, we build a real-time strategy (RTS) game engine that includes three initial environments: Mini-RTS, Capture the Flag and Tower Defense. Mini-RTS is a miniature custom-made RTS game that captures all the basic dynamics of StarCraft (fog-of-war, resource gathering, troop building, defense/attack with troops, etc). Mini-RTS runs at 165K FPS on a 4-core laptop, which is faster than existing environments by an order of magnitude. This enables us for the first time to train end-to-end a full-game bot against built-in AIs. Moreover, training is accomplished in only one day using 6 CPUs and 1 GPU. The other two games can be trained with similar (or higher) efficiency. Many real-world scenarios and complex games (e.g. StarCraft) are hierarchical in nature. Our RTS engine has full access to the game data and has a built-in hierarchical command system, which allows training at any level of the command hierarchy. As we demonstrate, this allows us to train a full-game bot that acts on the top-level strategy in the hierarchy while lower-level commands are handled using built-in tactics. Previously, most research on RTS games focused only on lower-level scenarios such as tactical battles [34, 25].
The full access to the game data also allows for supervised training with small-scale internal data. ELF is resilient to changes in the topology of the environment-actor communication used for training, thanks to its hybrid C++/Python framework. These include one-to-one, many-to-one and one-to-many mappings. In contrast, existing environments (e.g., OpenAI Gym [6] and Universe [33]) wrap one game in one Python interface, which makes it cumbersome to change topologies. Parallelism is implemented in C++, which is essential for simulation acceleration. Finally, ELF is capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g. Chess and Go [32]), physics engines (e.g., Bullet [10]), etc., by writing a simple adaptor. Equipped with a flexible RL backend powered by PyTorch, we experiment with numerous baselines, and highlight effective techniques used in training. We show the first demonstration of end-to-end trained AIs for real-time strategy games with partial information. We use the Asynchronous Advantage Actor-Critic (A3C) model [21] and explore extensive design choices including frame-skip, temporal horizon, network structure, curriculum training, etc. We show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in full-game Mini-RTS. We also show stronger performance in the other two games. ELF and its RL platform are open-sourced at https://github.com/facebookresearch/ELF. 2 Architecture ELF follows a canonical and simple producer-consumer paradigm (Fig. 1). The producer plays N games, each in a single C++ thread. When a batch of M current game states are ready (M < N), the corresponding games are blocked and the batch is sent to the Python side via the daemon.
The consumers (e.g., actor, optimizer, etc.) get batched experience with history information via a Python/C++ interface and send back the replies to the blocked batch of the games, which are waiting for the next action and/or values, so that they can proceed. For simplicity, the producer and consumers are in the same process. However, they can also live in different processes, or even on different machines. Before the training (or evaluation) starts, different consumers register themselves for batches with different history lengths. Figure 1: Overview of ELF. For example, an actor might need a batch with short history, while an optimizer (e.g., T-step actor-critic) needs a batch with longer history. During training, the consumers use the batch in various ways. For example, the actor takes the batch and returns the probabilities of actions (and values), then the actions are sampled from the distribution and sent back. The batch received by the optimizer already contains the sampled actions from the previous steps, and can be used to drive reinforcement learning algorithms such as A3C. Here is a sample usage of ELF:

    # We run 1024 games concurrently.
    num_games = 1024

    # Wait for a batch of 256 games.
    batchsize = 256

    # The returned states contain keys 's', 'r' and 'terminal'.
    # The reply contains key 'a' to be filled from the Python side.
    # The definitions of the keys are in the wrapper of the game.
    input_spec = dict(s='', r='', terminal='')
    reply_spec = dict(a='')

    context = Init(num_games, batchsize, input_spec, reply_spec)

Initialization of ELF

    # Start all game threads and enter the main loop.
    context.Start()
    while True:
        # Wait for a batch of game states to be ready.
        # These games will be blocked, waiting for replies.
        batch = context.Wait()

        # Apply a model to the game state. The output has key 'pi'.
        output = model(batch)

        # Sample from the output to get the actions of this batch.
        reply['a'][:] = SampleFromDistribution(output)

        # Resume games.
        context.Steps()

    # Stop all game threads.
    context.Stop()

Main loop of ELF

Parallelism using C++ threads. Modern reinforcement learning methods often require heavy parallelism to obtain diverse experiences [21, 22]. Most existing RL environments (OpenAI Gym [6] and Universe [33], RLE [5], Atari [4], Doom [14]) provide Python interfaces which wrap only single game instances. As a result, parallelism needs to be built in Python when applying modern RL methods. However, thread-level parallelism in Python can only poorly utilize multi-core processors, due to the Global Interpreter Lock (GIL)1. Process-level parallelism will also introduce extra data-exchange overhead between processes and increase the complexity of framework design. In contrast, our parallelism is achieved with C++ threads for better scaling on multi-core CPUs. Flexible Environment-Model Configurations. In ELF, one or multiple consumers can be used. Each consumer knows the game environment identities of samples from received batches, and typically contains one neural network model. The models of different consumers may or may not share parameters, might update the weights, might reside in different processes or even on different machines. This architecture offers flexibility for switching topologies between game environments and models. We can assign one model to each game environment, or one-to-one (e.g., vanilla A3C [21]), in which each agent follows and updates its own copy of the model. Similarly, multiple environments can be assigned to a single model, or many-to-one (e.g., BatchA3C [35] or GA3C [1]), where the model can perform batched forward prediction to better utilize GPUs.
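The blocking batch-collection mechanism can be sketched in pure Python. This is illustrative only: ELF implements the daemon and game loops in C++ threads, and `BatchDaemon`, `submit`, and `wait` below are invented names, not the actual ELF API.

```python
import queue
import threading

class BatchDaemon:
    """Toy batch collector: games block until their state is batched,
    processed by a consumer, and a reply is routed back."""
    def __init__(self, batchsize):
        self.batchsize = batchsize
        self.pending = queue.Queue()

    def submit(self, game_id, state):
        # Called from a game thread; blocks until a reply arrives.
        reply_slot = queue.Queue(maxsize=1)
        self.pending.put((game_id, state, reply_slot))
        return reply_slot.get()

    def wait(self):
        # Called from the consumer: gather a full batch of blocked games.
        return [self.pending.get() for _ in range(self.batchsize)]

def run_game(daemon, game_id):
    state = 0
    while True:                      # games run indefinitely
        action = daemon.submit(game_id, state)   # game blocks here
        state += action                          # advance the game

# N = 4 concurrent games, consumer processes batches of M = 2 (M < N).
daemon = BatchDaemon(batchsize=2)
for i in range(4):
    threading.Thread(target=run_game, args=(daemon, i), daemon=True).start()

processed = 0
for _ in range(10):                  # consumer loop: 10 batches of 2
    batch = daemon.wait()
    for game_id, state, reply_slot in batch:
        reply_slot.put(1)            # stand-in "policy": always action 1
    processed += len(batch)
print(processed)                     # 20 states processed in batches
```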
We have also incorporated forward-planning methods (e.g., Monte-Carlo Tree Search (MCTS) [7, 32, 27]) and Self-Play [27], in which a single environment might emit multiple states processed by multiple models, or one-to-many. Using ELF, these training configurations can be tested with minimal changes. Highly customizable and unified interface. Games implemented with our RTS engine can be trained using raw pixel data or lower-dimensional internal game data. Using internal game data is typically more convenient for research focusing on reasoning tasks rather than perceptual ones. Note that web-based visual rendering is also supported (e.g., Fig. 3(a)) for case-by-case debugging. 1The GIL in Python forbids simultaneous interpretations of multiple statements even on multi-core CPUs. Figure 2: Hierarchical layout of ELF. In the current repository (https://github.com/facebookresearch/ELF, master branch), there are board games (e.g., Go [32]), the Atari learning environment [4], and a customized RTS engine that contains three simple games. Figure 3: Overview of the real-time strategy engine. (a) Visualization of the current game state. (b) The three different game environments and their descriptions:

    Game Name        | Description                                                   | Avg Game Length
    Mini-RTS         | Gather resources and build troops to destroy opponent's base. | 1000-6000 ticks
    Capture the Flag | Capture the flag and bring it to your own base.               | 1000-4000 ticks
    Tower Defense    | Build defensive towers to block enemy invasion.               | 1000-2000 ticks

ELF allows for a unified interface capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g. Go [32]), and a customized RTS engine, with a simple adaptor (Fig. 2). This enables easy multi-threaded training and evaluation using existing RL methods.
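Such an adaptor can be sketched as a minimal interface mirroring the 's'/'r'/'terminal' keys used in the sample code above. The names below are a hypothetical Python analogue, not ELF's actual C++ adaptor API:

```python
from abc import ABC, abstractmethod

class GameAdaptor(ABC):
    """Hypothetical adaptor interface: any environment exposing
    reset/step can be driven by the same generic loop."""

    @abstractmethod
    def reset(self):
        """Start a new episode; return the initial state 's'."""

    @abstractmethod
    def step(self, action):
        """Advance one tick; return (s, r, terminal)."""

class CounterGame(GameAdaptor):
    """A trivial 'game': the episode ends when the counter reaches 3."""

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3

# The same generic loop works for any adaptor.
game = CounterGame()
s, total, terminal = game.reset(), 0.0, False
while not terminal:
    s, r, terminal = game.step(0)
    total += r
print(total)   # episode return after 3 ticks
```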
In addition, we provide three concrete simple games based on the RTS engine (Sec. 3). Reinforcement Learning backend. We propose a Python-based RL backend. It has a flexible design that decouples RL methods from models. Multiple baseline methods (e.g., A3C [21], Policy Gradient [30], Q-learning [20], Trust Region Policy Optimization [26], etc.) are implemented, mostly with very few lines of Python code. 3 Real-time strategy Games Real-time strategy (RTS) games are considered to be one of the next grand AI challenges after Chess and Go [27]. In RTS games, players commonly gather resources, build units (facilities, troops, etc.), and explore the environment in the fog-of-war (i.e., regions outside the sight of units are invisible) to invade/defend the enemy, until one player wins. RTS games are known for their exponential and changing action space (e.g., 5^10 possible actions for 10 units with 5 choices each, and units of each player can be built/destroyed as the game advances), subtle game situations, and incomplete information due to limited sight and long-delayed rewards. Typically professional players take 200-300 actions per minute, and the game lasts for 20-30 minutes. Very few existing RTS engines can be used directly for research. Commercial RTS games (e.g., StarCraft I/II) have sophisticated dynamics, interactions and graphics. Their game-play strategies have long been proven to be complex. Moreover, they are closed-source with unknown internal states, and cannot be easily utilized for research. Open-source RTS games like Spring [12], OpenRA [24] and Warzone 2100 [28] focus on complex graphics and effects, convenient user interfaces, stable network play, flexible map editors and plug-and-play mods (i.e., game extensions). Most of them use rule-based AIs, do not intend to run faster than real-time, and offer no straightforward interface with modern machine learning architectures.
ORTS [8], BattleCode [2] and RoboCup Simulation League [16] are designed for coding competitions and focus on rule-based AIs. Research-oriented platforms (e.g., µRTS [23], MazeBase [29]) are fast and simple, often coming with various baselines, but often with much simpler dynamics than RTS games. Recently, TorchCraft [31] provides APIs for StarCraft I to access its internal game states. However, due to platform incompatibility, one docker is used to host one StarCraft engine, which is resource-consuming. Tbl. 1 summarizes the differences.

    Table 1: Comparison between different RTS engines.
                      | Realistic | Code | Resource | Rule AIs | Data AIs | RL backend
    StarCraft I/II    | High      | No   | High     | Yes      | No       | No
    TorchCraft        | High      | Yes  | High     | Yes      | Yes      | No
    ORTS, BattleCode  | Mid       | Yes  | Low      | Yes      | No       | No
    µRTS, MazeBase    | Low       | Yes  | Low      | Yes      | Yes      | No
    Mini-RTS          | Mid       | Yes  | Low      | Yes      | Yes      | Yes

    Table 2: Frame rate comparison (frames per second).
    Platform | ALE [4] | RLE [5] | Universe [33] | Malmo [13] | DeepMind Lab [3] | VizDoom [14] | TorchCraft [31]      | Mini-RTS
    FPS      | 6,000   | 530     | 60            | 120        | 287(C)/866(G)    | ~7,000       | 2,000 (frameskip=50) | 40,000

    Note that Mini-RTS does not render frames, but saves game information into a C structure which is used in Python without copying. For DeepMind Lab, FPS is 287 (CPU) and 866 (GPU) on a single 6 CPU + 1 GPU machine. Other numbers are for 1 CPU core.

3.1 Our approach Many popular RTS games and their variants (e.g., StarCraft, DoTA, League of Legends, Tower Defense) share the same structure: a few units are controlled by a player, to move, attack, gather or cast special spells, to influence their own or an enemy's army. With our command hierarchy, a new game can be created by changing (1) available commands, (2) available units, and (3) how each unit emits commands triggered by certain scenarios. For this, we offer simple yet effective tools. Researchers can change these variables either by adding commands in C++, or by writing game scripts (e.g., Lua).
All derived games share mechanisms such as hierarchical commands and replays. Rule-based AIs can also be extended similarly. We provide the following three games: Mini-RTS, Capture the Flag and Tower Defense (Fig. 3(b)). These games share the following properties: Gameplay. Units in each game move with real coordinates, have dimensions and collision checks, and perform durative actions. The RTS engine is tick-driven. At each tick, AIs make decisions by sending commands to units based on observed information. Then commands are executed, the game's state changes, and the game continues. Despite fairly complicated game mechanics, Mini-RTS runs at 40K frames per second per core on a laptop, an order of magnitude faster than most existing environments. As a result, bots can be trained in a day on a single machine. Built-in hierarchical command levels. An agent can issue strategic commands (e.g., more aggressive expansion), tactical commands (e.g., hit and run), or micro commands (e.g., move a particular unit backward to avoid damage). Ideally, strong agents master all levels; in practice, they may focus on a certain level of the command hierarchy and leave the others to hard-coded rules. For this, our RTS engine uses a hierarchical command system that offers different levels of control over the game. A high-level command may affect all units by issuing low-level commands. A low-level, unit-specific durative command lasts a few ticks until completion, during which per-tick immediate commands are issued. Built-in rule-based AIs. We have designed rule-based AIs along with the environment. These AIs have access to all information on the map and follow fixed strategies (e.g., build 5 tanks and attack the opponent base). These AIs act by sending high-level commands which are then translated into low-level ones and executed. 
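The command hierarchy above can be sketched as follows; this is an illustrative Python toy, not the actual ELF API (the engine implements this in C++, and all names here are hypothetical):

```python
# A high-level strategic command expands into low-level, unit-specific
# durative commands; each durative command lasts a number of ticks,
# emitting per-tick immediate commands until it completes.
class DurativeCommand:
    def __init__(self, unit_id, action, duration_ticks):
        self.unit_id = unit_id
        self.action = action
        self.remaining = duration_ticks

    def tick(self):
        """Emit one per-tick immediate command; return True while running."""
        self.remaining -= 1
        return self.remaining > 0

def expand_attack_base(unit_ids, travel_ticks=10, attack_ticks=5):
    """High-level 'attack the opponent base' -> per-unit move + attack."""
    low_level = []
    for uid in unit_ids:
        low_level.append(DurativeCommand(uid, "move_to_base", travel_ticks))
        low_level.append(DurativeCommand(uid, "attack_base", attack_ticks))
    return low_level

plan = expand_attack_base([1, 2, 3])  # 3 units -> 6 durative commands
```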
With ELF, for the first time, we are able to train full-game bots for real-time strategy games and achieve stronger performance than the built-in rule-based AIs. In contrast, existing RTS AIs are either rule-based or focused on tactics (e.g., 5 units vs. 5 units). We run experiments on the three games to demonstrate the usability of our platform.

Figure 4: Frames-per-second per CPU core (no hyper-threading) with respect to the number of CPUs/threads, for Mini-RTS and for Pong (Atari). ELF is 3x faster than OpenAI Gym [6] with 1024 threads. CPU used in testing: Intel E5-2680 @ 2.50GHz.

4 Experiments 4.1 Benchmarking ELF We run ELF on a single server with different numbers of CPU cores to test the efficiency of parallelism. Fig. 4(a) shows the results when running Mini-RTS. We can see that ELF scales well with the number of CPU cores used to run the environments. We also embed the Atari emulator [4] into our platform and compare the speed of a single-threaded ALE against our parallelized ALE per core (Fig. 4(b)). While a single-threaded engine gives around 5.8K FPS on Pong, our parallelized ALE runs at a comparable speed (5.1K FPS per core) with up to 16 cores, while OpenAI Gym (with Python threads) runs 3x slower (1.7K FPS per core) with 16 cores and 1024 threads, and degrades with more cores. The number of threads matters for training, since for a fixed number of CPUs it determines how diverse the collected experience can be. Apart from this, we observed that Python multiprocessing with Gym is even slower, due to the heavy communication of game frames among processes. Note that we used no hyper-threading in any experiment. 
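The thread-level batching behind this throughput can be illustrated with a toy Python sketch (the real ELF does this in C++; the structure and names here are illustrative only): many environment threads push frames into a shared queue, and a consumer assembles fixed-size batches so that model inference is amortized across environments.

```python
import queue
import threading

def run_envs(num_envs, steps_per_env, batch_size):
    """Collect frames from many environment threads and batch them."""
    frames = queue.Queue()

    def env_worker(env_id):
        for t in range(steps_per_env):
            frames.put((env_id, t))  # stand-in for a game-state frame

    threads = [threading.Thread(target=env_worker, args=(i,))
               for i in range(num_envs)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

    # Assemble fixed-size batches for the model.
    batches = []
    while frames.qsize() >= batch_size:
        batches.append([frames.get() for _ in range(batch_size)])
    return batches

batches = run_envs(num_envs=8, steps_per_env=4, batch_size=16)  # 32 frames -> 2 batches
```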
4.2 Baselines on Real-time Strategy Games We focus on 1-vs-1 full games between trained AIs and built-in AIs. Built-in AIs have access to full information (e.g., the number of the opponent's tanks), while trained AIs know only partial information within the fog of war, i.e., the game environment within the sight of their own units. There are exceptions: in Mini-RTS, the location of the opponent's base is known so that the trained AI can attack; in Capture the Flag, the flag location is known to all; Tower Defense is a game of complete information. Details of the built-in AIs. For Mini-RTS there are two rule-based AIs: SIMPLE gathers, builds five tanks and then attacks the opponent base; HIT N RUN often harasses, builds and attacks. For Capture the Flag, we have one built-in AI. For Tower Defense (TD), no AI is needed. We tested our built-in AIs against a human player and found that they are strong in combat but exploitable. For example, SIMPLE is vulnerable to hit-and-run style harassment. As a result, over 20 games a human player achieved win rates of 90% and 50% against SIMPLE and HIT N RUN, respectively. Action Space. For simplicity, we use 9 strategic (and thus global) actions with hard-coded execution details. For example, the AI may issue BUILD BARRACKS, which automatically picks a worker to build barracks at an empty location, if the player can afford it. Although this setting is simple, detailed commands (e.g., one command per unit) can easily be set up, which bear more resemblance to StarCraft. Similar settings apply to Capture the Flag and Tower Defense; please see the Appendix for detailed descriptions. Rewards. For Mini-RTS, the agent only receives a reward when the game ends (±1 for win/loss). An average game of Mini-RTS lasts around 4000 ticks, which results in 80 decisions for a frame skip of 50, so the reward is indeed long-delayed. For Capture the Flag, we give intermediate rewards when the flag moves towards the player's own base (and one point when the flag "touches down"). 
In Tower Defense, an intermediate penalty is given if enemy units leak through. 4.2.1 A3C baseline Next, we describe our baselines and their variants. Note that while we refer to these as baselines, we are the first to demonstrate end-to-end trained AIs for real-time strategy (RTS) games with partial information. For all games, we randomize the initial game states for more diverse experience, and use A3C [21] to train AIs to play the full game. We run all experiments 5 times and report means and standard deviations.

Table 3: Win rate of A3C models competing with built-in AIs over 10k games. Left: Mini-RTS (frame skip of the trained AI is 50). Right: for Capture the Flag, the frame skip of the trained AI is 10 while the opponent's is 50; for Tower Defense the frame skip of the trained AI is 50, with no opponent AI.

  Frameskip  SIMPLE      HIT N RUN
  50         68.4(±4.3)  63.6(±7.9)
  20         61.4(±5.8)  55.4(±4.7)
  10         52.8(±2.4)  51.1(±5.0)

              Capture Flag  Tower Defense
  Random      0.7 (±0.9)    36.3 (±0.3)
  Trained AI  59.9 (±7.4)   91.0 (±7.6)

Table 4: Win rate in % of A3C models using different network architectures. Frame skip of both sides is 50 ticks. The gap between medians and means shows that different instances of A3C can converge to very different solutions.

                   Mini-RTS SIMPLE       Mini-RTS HIT N RUN
                   Median  Mean (±std)   Median  Mean (±std)
  ReLU             52.8    54.7 (±4.2)   60.4    57.0 (±6.8)
  Leaky ReLU       59.8    61.0 (±2.6)   60.2    60.3 (±3.3)
  BN               61.0    64.4 (±7.4)   55.6    57.5 (±6.8)
  Leaky ReLU + BN  72.2    68.4 (±4.3)   65.5    63.6 (±7.9)

We use simple convolutional networks with two heads, one for actions and the other for values. The input features are composed of spatially structured (20-by-20) abstractions of the current game environment with multiple channels. At each (rounded) 2D location, the type and hit points of the unit at that location are quantized and written to the corresponding channels. For Mini-RTS, we also add an additional constant channel filled with the player's current resources. 
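A minimal NumPy sketch of this input encoding (the channel layout and names are assumptions for illustration, not the exact ELF format):

```python
import numpy as np

def encode_state(visible_units, map_size=20, num_unit_types=6, hp_bins=5,
                 resource=0.0):
    """Encode visible units as a (C, 20, 20) tensor: one-hot unit-type
    channels, quantized hit-point channels, and a constant resource channel.
    Each unit is (x, y, unit_type, hp_fraction) with real-valued coordinates
    that are rounded to a grid cell."""
    num_channels = num_unit_types + hp_bins + 1
    feat = np.zeros((num_channels, map_size, map_size), dtype=np.float32)
    for x, y, unit_type, hp_frac in visible_units:
        ix, iy = int(round(x)), int(round(y))          # rounded 2D location
        feat[unit_type, iy, ix] = 1.0                  # unit-type channel
        hp_bin = min(int(hp_frac * hp_bins), hp_bins - 1)
        feat[num_unit_types + hp_bin, iy, ix] = 1.0    # quantized hit points
    feat[-1] = resource                                # constant resource channel
    return feat

features = encode_state([(3.2, 4.7, 2, 0.8)], resource=0.5)
```

Only units visible to the player would be passed in, respecting the fog of war.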
The input features only contain the units within the sight of one player, respecting the fog of war. For Capture the Flag, immediate action is required in specific situations (e.g., when the opponent has just taken the flag), and A3C did not perform well there. We therefore use a frame skip of 10 for the trained AI and 50 for the opponent, giving the trained AI a slight advantage. All models are trained from scratch with curriculum training (Sec. 4.2.2). Note that several factors affect AI performance. Frame skip. A frame skip of 50 means that the AI acts once every 50 ticks. Against an opponent with a low frame skip (fast-acting), A3C's performance is generally lower (Tbl. 3). When the opponent has a high frame skip (e.g., 50 ticks), the trained agent is able to find strategies that exploit the long-delayed nature of the opponent. For example, in Mini-RTS it will send two tanks to the opponent's base. When one tank is destroyed, the opponent does not attack the other tank until the next tick divisible by 50 arrives. Interestingly, the trained model can adapt to different frame rates and learns to develop different strategies against faster-acting opponents. For Capture the Flag, the trained bot wins 60% of games against the built-in AI when given the frame-skip advantage; with equal frame skips, the trained AI's performance is low. Network Architectures. Since the input is sparse and heterogeneous, we experiment with CNN architectures using Batch Normalization [11] and Leaky ReLU [18]. BatchNorm stabilizes the gradient flow by normalizing the outputs of each filter. Leaky ReLU preserves the signal of negative linear responses, which is important when the input features are sparse. Tbl. 4 shows that these two modifications both improve and stabilize performance, and that they are complementary when combined. History length. The history length T affects the convergence speed, as well as the final performance, of A3C (Fig. 5). 
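The T-step return estimate that A3C bootstraps on, R_1 = Σ_{t=1}^{T} γ^{t-1} r_t + γ^T V(s_T), is easy to write down; the snippet below is an illustrative helper, not the paper's implementation:

```python
def t_step_return(rewards, v_last, gamma=0.99):
    """R_1 = sum_{t=1}^{T} gamma^(t-1) * r_t + gamma^T * V(s_T)."""
    T = len(rewards)
    discounted = sum(gamma ** (t - 1) * r for t, r in enumerate(rewards, start=1))
    return discounted + gamma ** T * v_last

# With a fully delayed reward (all zeros until the last step), the
# bootstrap term gamma^T * V(s_T) dominates for small T, so an inaccurate
# value estimate V(s_T) hurts short-horizon returns the most.
r = t_step_return([0.0, 0.0, 0.0, 0.0, 1.0], v_last=0.0, gamma=1.0)  # 1.0
```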
While vanilla A3C [21] uses T = 5 for Atari games, the reward in Mini-RTS is more delayed (∼80 actions before a reward). In this case, the T-step estimate of the return used in A3C,

  R_1 = \sum_{t=1}^{T} \gamma^{t-1} r_t + \gamma^T V(s_T),

does not yield a good estimate of the true return if V(s_T) is inaccurate, particularly for small T. For other experiments we use T = 6. Interesting behaviors. The trained AI learns to act promptly and to use sophisticated strategies (Fig. 6). Multiple videos are available at https://github.com/facebookresearch/ELF.

Figure 5: Win rate in Mini-RTS against AI_SIMPLE and AI_HIT_AND_RUN with respect to the amount of experience (best win rate in evaluation vs. samples used, in thousands) for different step counts T = 4, 8, 12, 16, 20 in A3C. Note that one sample (with history) at T = 2 is equivalent to two samples at T = 1. Longer T shows superior performance to smaller-step counterparts, even though its samples are more expensive.

Figure 6: Game screenshots between the trained AI (blue) and built-in SIMPLE (red); unit types are workers, short-range tanks, and long-range tanks, with player colors shown on the boundary of hit-point gauges. (a) Trained AI rushes the opponent using an early advantage. (b) Trained AI attacks one opponent unit at a time. (c) Trained AI defends against an enemy invasion by blocking its way. (d)-(e) Trained AI uses one long-range attacker (top) to distract enemy units and one melee attacker to attack the enemy's base.

4.2.2 Curriculum Training We find that curriculum training plays an important role in training AIs. All AIs shown in Tbl. 3 and Tbl. 4 are trained with curriculum training. For Mini-RTS, we let the built-in AI play the first k ticks, where k ∼ Uniform(0, 1000), then switch control to the AI being trained. 
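The warm-start scheme just described can be sketched in a few lines; the linear decay of the aid window is an assumption for illustration (the paper only states that the aid is gradually reduced):

```python
import random

def curriculum_handover_tick(progress, max_aid_ticks=1000, rng=random):
    """Tick at which the built-in AI hands control to the trained AI.
    progress in [0, 1]: 0 = start of training, 1 = fully trained.
    At progress 0 this is k ~ Uniform(0, 1000); the window shrinks
    (here, linearly) until no aid is given."""
    cap = max_aid_ticks * (1.0 - progress)
    return rng.uniform(0, cap)

rng = random.Random(0)
k_early = curriculum_handover_tick(0.0, rng=rng)  # somewhere in [0, 1000]
k_late = curriculum_handover_tick(1.0, rng=rng)   # exactly 0: no aid
```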
This warm start (1) reduces the initial difficulty of the game and (2) yields diverse situations for training, helping avoid local minima. During training, the aid of the built-in AI is gradually reduced until no aid is given. All reported win rates are obtained by running the trained agents alone with a greedy policy. We compare performance with and without curriculum training in Tbl. 6; performance clearly improves with curriculum training. Similarly, when fine-tuning models pre-trained against one type of opponent towards a mixture of opponents (e.g., 50% SIMPLE + 50% HIT N RUN), curriculum training is critical for good performance (Tbl. 5). Tbl. 5 also shows that AIs trained against one built-in AI do not perform well against another built-in AI in the same game. This demonstrates that training with diverse agents is important for obtaining AIs with low exploitability.

Table 5: Training against specific/combined AIs. Frame skip of both sides is 50. When playing against combined AIs (50% SIMPLE + 50% HIT N RUN), curriculum training is particularly important.

  Trained against           SIMPLE        HIT N RUN     Combined
  SIMPLE                    68.4 (±4.3)   26.6 (±7.6)   47.5 (±5.1)
  HIT N RUN                 34.6 (±13.1)  63.6 (±7.9)   49.1 (±10.5)
  Combined (no curriculum)  49.4 (±10.0)  46.0 (±15.3)  47.7 (±11.0)
  Combined                  51.8 (±10.6)  54.7 (±11.2)  53.2 (±8.5)

Table 6: Win rate of A3C models with and without curriculum training. Mini-RTS: frame skip of both sides is 50 ticks. Capture the Flag: frame skip of the trained AI is 10, while the opponent's is 50. The standard deviations of the win rates are large due to the instability of A3C training; for example, in Capture the Flag the highest win rate reaches 70% while the lowest is only 27%.

                            Mini-RTS SIMPLE  Mini-RTS HIT N RUN  Capture the Flag
  no curriculum training    66.0 (±2.4)      54.4 (±15.9)        54.2 (±20.0)
  with curriculum training  68.4 (±4.3)      63.6 (±7.9)         59.9 (±7.4)

4.2.3 Monte-Carlo Tree Search Monte-Carlo Tree Search (MCTS) can be used for planning when complete information about the game is available: the complete state s without fog of war, and the precise forward model s′ = s′(s, a). Rooted at the current game state, MCTS builds a game tree that is biased towards paths with high win rates. Leaves are expanded with all candidate moves, and win rate estimates are computed by random self-play until the game ends. We use 8 threads, each with 100 rollouts, with root parallelization [9], in which each thread independently expands a tree and the trees are combined to obtain the most visited action.

Table 7: Win rate using MCTS over 1000 games. Both players use a frame skip of 50.

          Mini-RTS SIMPLE  Mini-RTS HIT N RUN
  Random  24.2 (±3.9)      25.9 (±0.6)
  MCTS    73.2 (±0.6)      62.7 (±2.0)

As shown in Tbl. 7, MCTS achieves win rates comparable to models trained with RL. Note that the win rates of the two methods are not directly comparable, since the RL methods have no knowledge of the game dynamics and their state knowledge is reduced by the fog of war. Also, MCTS runs much slower (2-3 sec per move) than the trained RL AI (≤ 1 msec per move). 5 Conclusion and Future Work In this paper, we propose ELF, a research-oriented platform for concurrent game simulation that offers an extensive set of game-play options, a lightweight game simulator, and a flexible environment. Based on ELF, we build an RTS game engine and three initial environments (Mini-RTS, Capture the Flag and Tower Defense) that run at 40K FPS per core on a laptop. As a result, a full-game bot for these games can be trained end-to-end in one day on a single machine. In addition to the platform, we provide throughput benchmarks of ELF and extensive baseline results using state-of-the-art RL methods (e.g., A3C [21]) on Mini-RTS, and show interesting learned behaviors. ELF opens up many possibilities for future research. 
With this lightweight and flexible platform, RL methods for RTS games can be explored efficiently, including forward modeling, hierarchical RL, planning under uncertainty, RL with complicated action spaces, and so on. Furthermore, this exploration can be done with an affordable amount of resources. As future work, we will continue improving the platform and build a library of maps and bots to compete with.

References
[1] Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, and Jan Kautz. Reinforcement learning through asynchronous advantage actor-critic on a GPU. ICLR, 2017.
[2] BattleCode. BattleCode, MIT's AI programming competition. 2000. URL https://www.battlecode.org/.
[3] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
[4] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. CoRR, abs/1207.4708, 2012. URL http://arxiv.org/abs/1207.4708.
[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning environment. CoRR, abs/1611.02205, 2016. URL http://arxiv.org/abs/1611.02205.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016. URL http://arxiv.org/abs/1606.01540.
[7] Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[8] Michael Buro and Timothy Furtak. On the development of a free RTS game engine. In GameOn-NA Conference, pages 23–27, 2005.
[9] Guillaume M. J.-B. Chaslot, Mark H. M. Winands, and H. Jaap van den Herik. Parallel Monte-Carlo tree search. In International Conference on Computers and Games, pages 60–71. Springer, 2008.
[10] Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
[12] Stefan Johansson and Robin Westberg. Spring. 2008. URL https://springrts.com/.
[13] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016.
[14] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
[15] Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] RoboCup Simulation League. 1995. URL https://en.wikipedia.org/wiki/RoboCup_Simulation_League.
[17] Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[18] Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. 2013.
[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. ICLR, 2017.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[22] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296, 2015. URL http://arxiv.org/abs/1507.04296.
[23] Santiago Ontañón. The combinatorial multi-armed bandit problem and its application to real-time strategy games. In Proceedings of the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, pages 58–64. AAAI Press, 2013.
[24] OpenRA. 2007. URL http://www.openra.net/.
[25] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. CoRR, abs/1703.10069, 2017. URL http://arxiv.org/abs/1703.10069.
[26] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[27] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[28] Pumpkin Studios. Warzone 2100. 1999. URL https://wz2100.net/.
[29] Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. MazeBase: A sandbox for learning from games. CoRR, abs/1511.07401, 2015. URL http://arxiv.org/abs/1511.07401.
[30] Richard S. Sutton, David A. McAllester, Satinder P. Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063, 1999.
[31] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. TorchCraft: A library for machine learning research on real-time strategy games. CoRR, abs/1611.00625, 2016. URL http://arxiv.org/abs/1611.00625.
[32] Yuandong Tian and Yan Zhu. Better computer Go player with neural network and long-term prediction. arXiv preprint arXiv:1511.06410, 2015.
[33] Universe. 2016. URL universe.openai.com.
[34] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration for deep deterministic policies: An application to StarCraft micromanagement tasks. ICLR, 2017.
[35] Yuxin Wu and Yuandong Tian. Training agent for first-person shooter game with actor-critic curriculum learning. ICLR, 2017.
Task-based End-to-end Model Learning in Stochastic Optimization Priya L. Donti Dept. of Computer Science Dept. of Engr. & Public Policy Carnegie Mellon University Pittsburgh, PA 15213 pdonti@cs.cmu.edu Brandon Amos Dept. of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 bamos@cs.cmu.edu J. Zico Kolter Dept. of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 zkolter@cs.cmu.edu Abstract With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications. 1 Introduction While prediction algorithms commonly operate within some larger process, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them: the performance of the full “closed-loop” system on the ultimate task at hand. For instance, instead of merely classifying images in a standalone setting, one may want to use these classifications within planning and control tasks such as autonomous driving. While a typical image classification algorithm might optimize accuracy or log likelihood, in a driving task we may ultimately care more about the difference between classifying a pedestrian as a tree vs. classifying a garbage can as a tree. 
Similarly, when we use a probabilistic prediction algorithm to generate forecasts of upcoming electricity demand, we then want to use these forecasts to minimize the costs of a scheduling procedure that allocates generation for a power grid. As these examples suggest, instead of using a “generic loss,” we instead may want to learn a model that approximates the ultimate task-based “true loss.” This paper considers an end-to-end approach for learning probabilistic machine learning models that directly capture the objective of their ultimate task. Formally, we consider probabilistic models in the context of stochastic programming, where the goal is to minimize some expected cost over the models’ probabilistic predictions, subject to some (potentially also probabilistic) constraints. As mentioned above, it is common to approach these problems in a two-step fashion: first to fit a predictive model to observed data by minimizing some criterion such as negative log-likelihood, and then to use this model to compute or approximate the necessary expected costs in the stochastic programming setting. While this procedure can work well in many instances, it ignores the fact that the true cost of the system (the optimization objective evaluated on actual instantiations in the real world) may benefit from a model that actually attains worse overall likelihood, but makes more accurate predictions over certain manifolds of the underlying space. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. We propose to train a probabilistic model not (solely) for predictive accuracy, but so that–when it is later used within the loop of a stochastic programming procedure–it produces solutions that minimize the ultimate task-based loss. This formulation may seem somewhat counterintuitive, given that a “perfect” predictive model would of course also be the optimal model to use within a stochastic programming framework. 
However, the reality that all models do make errors illustrates that we should indeed look to a final task-based objective to determine the proper error tradeoffs within a machine learning setting. This paper proposes one way to evaluate task-based tradeoffs in a fully automated fashion, by computing derivatives through the solution to the stochastic programming problem in a manner that can improve the underlying model. We begin by presenting background material and related work in areas spanning stochastic programming, end-to-end training, and optimizing alternative loss functions. We then describe our approach within the formal context of stochastic programming, and give a generic method for propagating task loss through these problems in a manner that can update the models. We report on three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach outperforms traditional modeling and purely black-box policy optimization approaches. 2 Background and related work Stochastic programming Stochastic programming is a method for making decisions under uncertainty by modeling or optimizing objectives governed by a random process. It has applications in many domains such as energy [1], finance [2], and manufacturing [3], where the underlying probability distributions are either known or can be estimated. Common considerations include how to best model or approximate the underlying random variable, how to solve the resulting optimization problem, and how to then assess the quality of the resulting (approximate) solution [4]. 
In cases where the underlying probability distribution is known but the objective cannot be solved analytically, it is common to use Monte Carlo sample average approximation methods, which draw multiple iid samples from the underlying probability distribution and then use deterministic optimization methods to solve the resultant problems [5]. In cases where the underlying distribution is not known, it is common to learn or estimate some model from observed samples [6]. End-to-end training Recent years have seen a dramatic increase in the number of systems building on so-called “end-to-end” learning. Generally speaking, this term refers to systems where the end goal of the machine learning process is directly predicted from raw inputs [e.g. 7, 8]. In the context of deep learning systems, the term now traditionally refers to architectures where, for example, there is no explicit encoding of hand-tuned features on the data, but the system directly predicts what the image, text, etc. is from the raw inputs [9, 10, 11, 12, 13]. The context in which we use the term end-to-end is similar, but slightly more in line with its older usage: instead of (just) attempting to learn an output (with known and typically straightforward loss functions), we are specifically attempting to learn a model based upon an end-to-end task that the user is ultimately trying to accomplish. We feel that this concept–of describing the entire closed-loop performance of the system as evaluated on the real task at hand–is beneficial to add to the notion of end-to-end learning. Also highly related to our work are recent efforts in end-to-end policy learning [14], using value iteration effectively as an optimization procedure in similar networks [15], and multi-objective optimization [16, 17, 18, 19]. 
These lines of work fit more with the “pure” end-to-end approach we discuss later on (where models are eschewed for pure function approximation methods), but conceptually the approaches have similar motivations in modifying typically-optimized policies to address some task(s) directly. Of course, the actual methodological approaches are quite different, given our specific focus on stochastic programming as the black box of interest in our setting. Optimizing alternative loss functions There has been a great deal of work in recent years on using machine learning procedures to optimize different loss criteria than those “naturally” optimized by the algorithm. For example, Stoyanov et al. [20] and Hazan et al. [21] propose methods for optimizing loss criteria in structured prediction that are different from the inference procedure of the prediction algorithm; this work has also recently been extended to deep networks [22]. Recent work has also explored using auxiliary prediction losses to satisfy multiple objectives [23], learning 2 dynamics models that maximize control performance in Bayesian optimization [24], and learning adaptive predictive models via differentiation through a meta-learning optimization objective [25]. The work we have found in the literature that most closely resembles our approach is the work of Bengio [26], which uses a neural network model for predicting financial prices, and then optimizes the model based on returns obtained via a hedging strategy that employs it. We view this approach–of both using a model and then tuning that model to adapt to a (differentiable) procedure–as a philosophical predecessor to our own work. In concurrent work, Elmachtoub and Grigas [27] also propose an approach for tuning model parameters given optimization results, but in the context of linear programming and outside the context of deep networks. 
Whereas Bengio [26] and Elmachtoub and Grigas [27] use hand-crafted (but differentiable) algorithms to approximately attain some objective given a predictive model, our approach is tightly coupled to stochastic programming, where the explicit objective is to attempt to optimize the desired task cost via an exact optimization routine, but given underlying randomness. The notions of stochasticity are thus naturally quite different in our work, but we do hope that our work can bring back the original idea of task-based model learning. (Despite Bengio [26]’s original paper being nearly 20 years old, virtually all follow-on work has focused on the financial application, and not on what we feel is the core idea of using a surrogate model within a task-driven optimization procedure.) 3 End-to-end model learning in stochastic programming We first formally define the stochastic modeling and optimization problems with which we are concerned. Let (x ∈ X, y ∈ Y) ∼ D denote standard input-output pairs drawn from some (real, unknown) distribution D. We also consider actions z ∈ Z that incur some expected loss L_D(z) = E_{x,y∼D}[f(x, y, z)]. For instance, a power systems operator may try to allocate power generators z given past electricity demand x and future electricity demand y; this allocation’s loss corresponds to the over- or under-generation penalties incurred given future demand instantiations. If we knew D, then we could select optimal actions z*_D = argmin_z L_D(z). However, in practice, the true distribution D is unknown. In this paper, we are interested in modeling the conditional distribution y|x using some parameterized model p(y|x; θ) in order to minimize the real-world cost of the policy implied by this parameterization.
Specifically, we find some parameters θ to parameterize p(y|x; θ) (as in the standard statistical setting) and then determine optimal actions z*(x; θ) (via stochastic optimization) that correspond to our observed input x and the specific choice of parameters θ in our probabilistic model. Upon observing the costs of these actions z*(x; θ) relative to true instantiations of x and y, we update our parameterized model p(y|x; θ) accordingly, calculate the resultant new z*(x; θ), and repeat. The goal is to find parameters θ such that the corresponding policy z*(x; θ) optimizes the loss under the true joint distribution of x and y. Explicitly, we wish to choose θ to minimize the task loss L(θ) in the context of x, y ∼ D, i.e.

minimize_θ  L(θ) = E_{x,y∼D}[f(x, y, z*(x; θ))].  (1)

Since in reality we do not know the distribution D, we obtain z*(x; θ) via a proxy stochastic optimization problem for a fixed instantiation of parameters θ, i.e.

z*(x; θ) = argmin_z  E_{y∼p(y|x;θ)}[f(x, y, z)].  (2)

The above setting specifies z*(x; θ) using a simple (unconstrained) stochastic program, but in reality our decision may be subject to both probabilistic and deterministic constraints. We therefore consider more general decisions produced through a generic stochastic programming problem

z*(x; θ) = argmin_z  E_{y∼p(y|x;θ)}[f(x, y, z)]
subject to  E_{y∼p(y|x;θ)}[g_i(x, y, z)] ≤ 0,  i = 1, ..., n_ineq
            h_i(z) = 0,  i = 1, ..., n_eq.  (3)

(Footnote 1: It is standard to presume in stochastic programming that equality constraints depend only on decision variables (not random variables), as non-trivial random equality constraints are typically not possible to satisfy.) In this setting, the full task loss is more complex, since it captures both the expected cost and any deviations from the constraints.
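As a toy illustration of the proxy problem (2), consider a squared task loss f(x, y, z) = (z − y)², for which the inner argmin is simply the model's conditional mean. The sketch below recovers it by sample average approximation; the Gaussian model and all names are illustrative assumptions, not from the paper.

```python
import numpy as np

# Sketch of the proxy stochastic program (2) for a toy squared loss
# f(x, y, z) = (z - y)^2, with the (illustrative) model p(y|x; theta) = N(theta^T x, 1).
# The inner expectation then has argmin z*(x; theta) = E_theta[y | x] = theta^T x,
# which we recover here by sample average approximation.

def z_star(x, theta, n_samples=100_000, rng=None):
    """Approximate argmin_z E_{y ~ p(y|x;theta)} (z - y)^2 by Monte Carlo."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = rng.normal(loc=theta @ x, scale=1.0, size=n_samples)
    # For squared loss, the sample-average problem is minimized by the sample mean.
    return y.mean()

theta = np.array([1.0, -2.0])
x = np.array([0.5, 0.25])
z = z_star(x, theta)
print(round(z, 2))  # close to theta @ x = 0.0
```

For the constrained problem (3) the inner argmin is no longer a sample mean, which is why the paper resorts to a QP solver; this sketch only illustrates the unconstrained case (2).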
We can write this, for instance, as

L(θ) = E_{x,y∼D}[f(x, y, z*(x; θ))] + Σ_{i=1}^{n_ineq} I{E_{x,y∼D}[g_i(x, y, z*(x; θ))] ≤ 0} + Σ_{i=1}^{n_eq} E_x[I{h_i(z*(x; θ)) = 0}]  (4)

(where I(·) is the indicator function that is zero when its constraints are satisfied and infinite otherwise). However, the basic intuition behind our approach remains the same for both the constrained and unconstrained cases: in both settings, we attempt to learn parameters of a probabilistic model not to produce strictly “accurate” predictions, but such that when we use the resultant model within a stochastic programming setting, the resulting decisions perform well under the true distribution. Actually solving this problem requires that we differentiate through the “argmin” operator z*(x; θ) of the stochastic programming problem. This differentiation is not possible for all classes of optimization problems (the argmin operator may be discontinuous), but as we will show shortly, in many practical cases, including cases where the function and constraints are strongly convex, we can indeed efficiently compute these gradients even in the context of constrained optimization. 3.1 Discussion and alternative approaches We highlight our approach in contrast to two alternative existing methods: traditional model learning and model-free black-box policy optimization. In traditional machine learning approaches, it is common to choose θ to maximize the (conditional) log-likelihood of observed data under the model p(y|x; θ). This method corresponds to approximately solving the optimization problem

minimize_θ  E_{x,y∼D}[−log p(y|x; θ)].  (5)

If we then need to use the conditional distribution y|x to determine actions z within some later optimization setting, we commonly use the predictive model obtained from (5) directly. This approach has obvious advantages, in that the model-learning phase is well-justified independent of any future use in a task.
However, it is also prone to poor performance in the common setting where the true distribution y|x cannot be represented within the class of distributions parameterized by θ, i.e. where the procedure suffers from model bias. Conceptually, the log-likelihood objective implicitly trades off between model error in different regions of the input/output space, but does so in a manner largely opaque to the modeler, and may ultimately not employ the correct tradeoffs for a given task. In contrast, there is an alternative approach to solving (1) that we describe as the model-free “black-box” policy optimization approach. Here, we forgo learning any model at all of the random variable y. Instead, we attempt to learn a policy mapping directly from inputs x to actions z*(x; θ̄) that minimize the loss L(θ̄) presented in (4) (where here θ̄ defines the form of the policy itself, not a predictive model). While such model-free methods can perform well in many settings, they are often very data-inefficient, as the policy class must have enough representational power to describe sufficiently complex policies without recourse to any underlying model.

Algorithm 1 Task Loss Optimization
1: input: D // samples from true distribution
2: initialize θ // some initial parameterization
3: for t = 1, ..., T do
4:   sample (x, y) ∼ D
5:   compute z*(x; θ) via Equation (3)
6:   // step in violated constraint or objective
7:   if ∃ i s.t. g_i(x, y, z*(x; θ)) > 0 then
8:     update θ with ∇_θ g_i(x, y, z*(x; θ))
9:   else
10:    update θ with ∇_θ f(x, y, z*(x; θ))
11: end if
12: end for

Our approach offers an intermediate setting, where we do still use a surrogate model to determine an optimal decision z*(x; θ), yet we adapt this model based on the task loss instead of any model prediction accuracy. In practice, we typically want to minimize some weighted combination of log-likelihood and task loss, which can be easily accomplished given our approach.
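Algorithm 1 can be sketched for a toy unconstrained case in which z*(x; θ) has a closed form, so the Jacobian ∂z*/∂θ needed for the gradient step is explicit. The data distribution, squared task loss, and step size below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy sketch of Algorithm 1 (task-loss optimization), with hypothetical pieces:
# true data y = a^T x + noise, model p(y|x; theta) = N(theta^T x, 1), task loss
# f(x, y, z) = (z - y)^2, and no inequality constraints. Here z*(x; theta) has
# the closed form theta^T x, so dz*/dtheta = x and the chain rule is explicit.

rng = np.random.default_rng(0)
a_true = np.array([1.0, -2.0])          # unknown "true" parameters
theta = np.zeros(2)                      # some initial parameterization
lr = 0.05

for t in range(2000):
    x = rng.normal(size=2)
    y = a_true @ x + 0.1 * rng.normal()  # sample (x, y) ~ D
    z = theta @ x                        # z*(x; theta) for squared loss
    # grad_theta f(x, y, z*(x; theta)) = (df/dz) (dz*/dtheta) = 2 (z - y) x
    theta -= lr * 2.0 * (z - y) * x

print(np.round(theta, 1))  # approaches a_true
```

In the paper's actual setting z*(x; θ) is the solution of the stochastic program (3), so dz*/dθ must instead be obtained from the argmin differentiation of Section 3.3.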
3.2 Optimizing task loss To solve the generic optimization problem (4), we can in principle adopt a straightforward (constrained) stochastic gradient approach, as detailed in Algorithm 1. (Footnote 2: This distinction is roughly analogous to the policy search vs. model-based settings in reinforcement learning. However, for the purposes of this paper, we consider much simpler stochastic programs without the multiple rounds that occur in RL, and the extension of these techniques to a full RL setting remains as future work.)

Figure 1: Features x, model predictions y, and policy z for the three experiments: (a) the inventory stock problem, where randomly generated features x are mapped to a discrete predicted demand distribution p(y|x; θ) and a newspaper stocking decision z ∈ R; (b) the load forecasting problem, where past demand, past temperature, and temporal features are mapped to a demand prediction (with uncertainty) and a generation schedule; (c) the price forecasting problem, where past prices, past temperature, temporal features, and load forecasts are mapped to a price prediction (with uncertainty) and a battery schedule.

At each iteration, we solve the proxy stochastic programming problem (3) to obtain z*(x, θ), using the distribution defined by our current values of θ. Then, we compute the true loss L(θ) using the observed value of y. If any of the inequality constraints g_i in L(θ) are violated, we take a gradient step in the violated constraint; otherwise, we take a gradient step in the optimization objective f. We note that if any inequality constraints are probabilistic, Algorithm 1 must be adapted to employ mini-batches in order to determine whether these probabilistic constraints are satisfied. Alternatively, because even the g_i constraints are probabilistic, it is common in practice to simply move a weighted version of these constraints to the objective, i.e., we modify the objective by adding some appropriate penalty times the positive part of the function, λ[g_i(x, y, z)]_+, for some λ > 0.
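The penalty reformulation just described can be sketched on a one-dimensional toy problem; the objective, constraints, and penalty weights below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Sketch of the penalty reformulation above: rather than stepping in a single
# violated constraint g_i, we add lambda_i * [g_i]_+ to the objective, so one
# gradient step moves jointly in the objective and all violated constraints.

def penalized_loss(z, target, lams):
    f = (z - target) ** 2                    # task objective f
    g = np.array([z - 2.0, -z - 2.0])        # g_i(z) <= 0 encodes |z| <= 2
    return f + lams @ np.maximum(g, 0.0)     # f + sum_i lambda_i [g_i]_+

def grad(z, target, lams, eps=1e-6):         # central finite difference
    return (penalized_loss(z + eps, target, lams)
            - penalized_loss(z - eps, target, lams)) / (2 * eps)

z, target, lams = 0.0, 5.0, np.array([10.0, 10.0])
for _ in range(500):
    z -= 0.01 * grad(z, target, lams)
print(round(z, 2))  # hovers near the constraint boundary z = 2
```

Because the infeasible target (here 5.0) pulls harder than the boundary allows, the iterate settles near the edge of the feasible set, exactly the joint objective-plus-violated-constraints behavior described in the text.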
In practice, this has the effect of taking gradient steps jointly in all the violated constraints and the objective in the case that one or more inequality constraints are violated, often resulting in faster convergence. Note that we need only move stochastic constraints into the objective; deterministic constraints on the policy itself will always be satisfied by the optimizer, as they are independent of the model. 3.3 Differentiating the optimization solution to a stochastic programming problem While the above presentation highlights the simplicity of the proposed approach, it glosses over the chief technical challenge of this approach, which is computing the gradient of an objective that depends upon the argmin operation z*(x; θ). Specifically, we need to compute the term

∂L/∂θ = (∂L/∂z*)(∂z*/∂θ)  (6)

which involves the Jacobian ∂z*/∂θ. This is the Jacobian of the optimal solution with respect to the distribution parameters θ. Recent approaches have looked into similar argmin differentiations [28, 29], though the methodology we present here is more general and handles the stochasticity of the objective. At a high level, we begin by writing the KKT optimality conditions of the general stochastic programming problem (3). Differentiating these equations and applying the implicit function theorem gives a set of linear equations that we can solve to obtain the necessary Jacobians (with expectations over the distribution y ∼ p(y|x; θ) denoted E_yθ, and where g is the vector of inequality constraints):

[ ∇²_z E_yθ f(z) + Σ_{i=1}^{n_ineq} λ_i ∇²_z E_yθ g_i(z)   (∇_z E_yθ g(z))^T   A^T ] [ ∂z/∂θ ]   [ ∂(∇_z E_yθ f(z))/∂θ + Σ_{i=1}^{n_ineq} λ_i ∂(∇_z E_yθ g_i(z))/∂θ ]
[ diag(λ) (∇_z E_yθ g(z))                                  diag(E_yθ g(z))     0   ] [ ∂λ/∂θ ] = [ diag(λ) ∂(E_yθ g(z))/∂θ ]
[ A                                                        0                   0   ] [ ∂ν/∂θ ]   [ 0 ]
(7) The terms in these equations look somewhat complex, but fundamentally, the left side gives the optimality conditions of the convex problem, and the right side gives the derivatives of the relevant functions at the achieved solution with respect to the governing parameter θ. In practice, we calculate the right-hand terms by employing sequential quadratic programming [30] to find the optimal policy z*(x; θ) for the given parameters θ, using a recently-proposed approach for fast argmin differentiation of QPs [31] to solve the necessary linear equations; we then take the derivatives at the optimum produced by this strategy. Details of this approach are described in the appendix. 4 Experiments We consider three applications of our task-based method: a synthetic inventory stock problem, a real-world energy scheduling task, and a real-world battery arbitrage task. We demonstrate that the task-based end-to-end approach can substantially improve upon other alternatives. Source code for all experiments is available at https://github.com/locuslab/e2e-model-learning. 4.1 Inventory stock problem Problem definition To highlight the performance of the algorithm in a setting where the true underlying model is known to us, we consider a “conditional” variation of the classical inventory stock problem [4]. In this problem, a company must order some quantity z of a product to minimize costs over some stochastic demand y, whose distribution in turn is affected by some observed features x (Figure 1a). There are linear and quadratic costs on the amount of product ordered, plus different linear/quadratic costs on over-orders [z − y]_+ and under-orders [y − z]_+. The objective is given by

f_stock(y, z) = c_0 z + ½ q_0 z² + c_b [y − z]_+ + ½ q_b ([y − z]_+)² + c_h [z − y]_+ + ½ q_h ([z − y]_+)²,  (8)

where [v]_+ ≡ max{v, 0}. For a specific choice of probability model p(y|x; θ), our proxy stochastic programming problem can then be written as

minimize_z  E_{y∼p(y|x;θ)}[f_stock(y, z)].  (9)
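The implicit-function-theorem step behind Equation (7) can be illustrated in its simplest form: for an equality-constrained QP, the KKT conditions are linear, so the Jacobian of the solution solves the same KKT matrix with a new right-hand side. The 2-variable QP and the dependence c(θ) = θ below are illustrative assumptions; this is a toy stand-in for the paper's SQP-plus-[31] pipeline, not that implementation.

```python
import numpy as np

# Implicit differentiation through min_z 0.5 z^T Q z + c(theta)^T z  s.t.  A z = b,
# with c(theta) = theta. KKT: [Q A^T; A 0][z; nu] = [-c; b], so differentiating
# gives K [dz/dtheta; dnu/dtheta] = [-dc/dtheta; 0] with the same KKT matrix K.

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def solve_kkt(theta):
    K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-theta, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:2], K              # optimal z and the KKT matrix

theta = np.array([0.3, -0.7])
z, K = solve_kkt(theta)

# Solve K [dz/dtheta; dnu/dtheta] = [-I; 0] for the Jacobian dz/dtheta.
rhs = np.vstack([-np.eye(2), np.zeros((1, 2))])
dz_dtheta = np.linalg.solve(K, rhs)[:2]

# Check against finite differences.
eps = 1e-6
fd = np.column_stack([(solve_kkt(theta + eps * e)[0] - z) / eps for e in np.eye(2)])
print(np.allclose(dz_dtheta, fd, atol=1e-4))  # True
```

With inequality constraints, equation (7) adds the diag(λ) and diag(E g) blocks for the active-set conditions, but the structure is the same: one linear solve against the KKT matrix yields all the needed Jacobians.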
To simplify the setting, we further assume that the demands are discrete, taking on values d_1, ..., d_k with probabilities (conditional on x) (p_θ)_i ≡ p(y = d_i | x; θ). Thus our stochastic programming problem (9) can be written succinctly as a joint quadratic program

minimize_{z∈R, z_b, z_h∈R^k}  c_0 z + ½ q_0 z² + Σ_{i=1}^k (p_θ)_i ( c_b (z_b)_i + ½ q_b (z_b)_i² + c_h (z_h)_i + ½ q_h (z_h)_i² )
subject to  d − z·1 ≤ z_b,  z·1 − d ≤ z_h,  z, z_h, z_b ≥ 0.  (10)

Further details of this approach are given in the appendix. Experimental setup We examine our algorithm under two main conditions: where the true model is linear, and where it is nonlinear. In all cases, we generate problem instances by randomly sampling some x ∈ R^n and then generating p(y|x; θ) according to either p(y|x; θ) ∝ exp(Θ^T x) (linear true model) or p(y|x; θ) ∝ exp((Θ^T x)²) (nonlinear true model) for some Θ ∈ R^{n×k}. We compare the following approaches on these tasks: 1) the QP allocation based upon the true model (which performs optimally); 2) MLE approaches (with linear or nonlinear probability models) that fit a model to the data, and then compute the allocation by solving the QP; 3) pure end-to-end policy-optimizing models (using linear or nonlinear hypotheses for the policy); and 4) our task-based learning models (with linear or nonlinear probability models). In all cases, we evaluate test performance by running on 1000 random examples, and evaluate performance over 10 folds of different true θ* parameters. Figures 2(a) and (b) show the performance of these methods given a linear true model, with linear and nonlinear model hypotheses, respectively. As expected, the linear MLE approach performs best, as the true underlying model is in the class of distributions that it can represent, and thus solving the stochastic programming problem is a very strong proxy for solving the true optimization problem under the real distribution.
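The inner problem (9)-(10) can be sketched directly: the QP's slack variables z_b and z_h simply encode [d − z]_+ and [z − d]_+ at the optimum, so for a scalar z the expected cost can be evaluated in closed form and minimized by a fine grid search. The cost coefficients are the representative ones from the Figure 2 caption; the demand values and probabilities are illustrative assumptions.

```python
import numpy as np

# Expected newsvendor-style cost (8) under discrete demands d_i with
# probabilities p_i, as in (10); a grid search stands in for the QP solver.

c0, q0, cb, qb, ch, qh = 10.0, 2.0, 30.0, 14.0, 10.0, 2.0  # from Figure 2
d = np.array([1.0, 2.0, 5.0, 10.0])   # hypothetical discrete demand values
p = np.array([0.1, 0.4, 0.4, 0.1])    # hypothetical probabilities (p_theta)_i

def expected_cost(z):
    under = np.maximum(d - z, 0.0)    # [y - z]_+, unmet demand
    over = np.maximum(z - d, 0.0)     # [z - y]_+, excess stock
    per_d = cb * under + 0.5 * qb * under**2 + ch * over + 0.5 * qh * over**2
    return c0 * z + 0.5 * q0 * z**2 + p @ per_d

zs = np.linspace(0.0, 12.0, 12001)
z_opt = zs[np.argmin([expected_cost(z) for z in zs])]
print(round(z_opt, 2))  # the optimal stock level for these illustrative demands
```

The task-based method of Section 3 differentiates this optimum with respect to the probabilities (p_θ)_i, which is exactly where the QP form (10) and Equation (7) come in.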
While the true model is also contained within the nonlinear MLE’s generic nonlinear distribution class, we see that this method requires more data to converge, and when given less data makes error tradeoffs that are ultimately not the correct tradeoffs for the task at hand; our task-based approach thus outperforms this approach. The task-based approach also substantially outperforms the policy-optimizing neural network, highlighting the fact that it is more data-efficient to run the learning process “through” a reasonable model. Note that here it does not make a difference whether we use the linear or nonlinear model in the task-based approach. Figures 2(c) and (d) show performance in the case of a nonlinear true model, with linear and nonlinear model hypotheses, respectively. Case (c) represents the “non-realizable” case, where the true underlying distribution cannot be represented by the model hypothesis class. Here, the linear MLE, as expected, performs very poorly: it cannot capture the true underlying distribution, and thus the resultant stochastic programming solution would not be expected to perform well. The linear policy model similarly performs poorly. Importantly, the task-based approach with the linear model performs much better here: despite the fact that it still has a misspecified model, the task-based nature of the learning process lets us learn a different linear model than the MLE version, one that is particularly tuned to the distribution and loss of the task. (Footnote 3: This is referred to as a two-stage stochastic programming problem (though a very trivial example of one), where first-stage variables consist of the amount of product to buy before observing demand, and second-stage variables consist of how much to sell back or additionally purchase once the true demand has been revealed.)

Figure 2: Inventory problem results for 10 runs over a representative instantiation of true parameters (c_0 = 10, q_0 = 2, c_b = 30, q_b = 14, c_h = 10, q_h = 2). Cost is evaluated over 1000 testing samples (lower is better). The linear MLE performs best for a true linear model. In all other cases, the task-based models outperform their MLE and policy counterparts.

Finally, also as to be expected, the nonlinear models perform better than the linear models in this scenario, but again with the task-based nonlinear model outperforming the nonlinear MLE and end-to-end policy approaches. 4.2 Load forecasting and generator scheduling We next consider a more realistic grid-scheduling task, based upon over 8 years of real electrical grid data. In this setting, a power system operator must decide how much electricity generation z ∈ R^24 to schedule for each hour in the next 24 hours based on some (unknown) distribution over electricity demand (Figure 1b). Given a particular realization y of demand, we impose penalties for both generation excess (γ_e) and generation shortage (γ_s), with γ_s ≫ γ_e. We also add a quadratic regularization term, indicating a preference for generation schedules that closely match demand realizations. Finally, we impose a ramping constraint c_r restricting the change in generation between consecutive timepoints, reflecting physical limitations associated with quick changes in electricity output levels. These are reasonable proxies for the actual economic costs incurred by electrical grid operators when scheduling generation, and can be written as the stochastic programming problem

minimize_{z∈R^24}  Σ_{i=1}^{24} E_{y∼p(y|x;θ)}[ γ_s [y_i − z_i]_+ + γ_e [z_i − y_i]_+ + ½ (z_i − y_i)² ]
subject to  |z_i − z_{i−1}| ≤ c_r  ∀i,  (11)

where [v]_+ ≡ max{v, 0}.
Assuming (as we will in our model) that y_i is a Gaussian random variable with mean µ_i and variance σ_i², this expectation has a closed form that can be computed by analytically integrating the Gaussian PDF. (Footnote 4: Part of the philosophy behind applying this approach here is that we know the Gaussian assumption is incorrect: the true underlying load is neither Gaussian distributed nor homoskedastic. However, these assumptions are exceedingly common in practice, as they enable easy model learning and exact analytical solutions. Thus, training the (still Gaussian) system with a task-based loss retains computational tractability while still allowing us to modify the distribution’s parameters to improve actual performance on the task at hand.) We then use sequential quadratic programming (SQP) to iteratively approximate the resultant convex objective as a quadratic objective, iterate until convergence, and then compute the necessary Jacobians using the quadratic approximation at the solution, which gives the correct Hessian and gradient terms. Details are given in the appendix. To develop a predictive model, we make use of a highly-tuned load forecasting methodology. Specifically, we input the past day’s electrical load and temperature, the next day’s temperature forecast, and additional features such as non-linear functions of the temperatures, binary indicators of weekends or holidays, and yearly sinusoidal features. We then predict the electrical load over all 24 hours of the next day.

Figure 4: Results for 10 runs of the generation-scheduling problem for representative decision parameters γ_e = 0.5, γ_s = 50, and c_r = 0.4. (Lower loss is better.) As expected, the RMSE net achieves the lowest RMSE for its predictions. However, the task net outperforms the RMSE net on task loss by 38.6%, and the cost-weighted RMSE on task loss by 8.6%.
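The Gaussian closed form for each term of the objective (11) follows from the standard identities E[y − z]_+ = (µ − z)Φ((µ − z)/σ) + σφ((µ − z)/σ), E[z − y]_+ = (z − µ)Φ((z − µ)/σ) + σφ((µ − z)/σ), and E[(z − y)²] = (z − µ)² + σ². The sketch below verifies this against Monte Carlo; the values of z, µ, and σ are illustrative, and the coefficients are the representative ones from Figure 4.

```python
import math
import numpy as np

# Closed-form expectation of one term of (11) for Gaussian y ~ N(mu, sigma^2),
# checked against Monte Carlo (gamma_s = 50, gamma_e = 0.5 as in Figure 4).

def phi(a):  # standard normal pdf
    return math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)

def Phi(a):  # standard normal cdf
    return 0.5 * (1 + math.erf(a / math.sqrt(2)))

def expected_term(z, mu, sigma, gs=50.0, ge=0.5):
    a = (mu - z) / sigma
    e_short = sigma * phi(a) + (mu - z) * Phi(a)    # E[y - z]_+
    e_excess = sigma * phi(a) + (z - mu) * Phi(-a)  # E[z - y]_+
    return gs * e_short + ge * e_excess + 0.5 * ((z - mu) ** 2 + sigma ** 2)

z, mu, sigma = 1.2, 1.0, 0.3
rng = np.random.default_rng(0)
y = rng.normal(mu, sigma, 1_000_000)
mc = np.mean(50.0 * np.maximum(y - z, 0) + 0.5 * np.maximum(z - y, 0)
             + 0.5 * (z - y) ** 2)
print(abs(expected_term(z, mu, sigma) - mc) < 0.05)  # True (difference is MC error)
```

It is this smooth closed form, rather than the raw piecewise objective, that the SQP iterations quadratically approximate.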
We employ a 2-hidden-layer neural network for this purpose, with an additional residual connection from the inputs to the outputs initialized to the linear regression solution.

Figure 3: 2-hidden-layer neural network (hidden layer width 200) to predict hourly electric load y ∈ R^24 for the next day. Inputs x include past load, past temperature (and its square), future temperature (and its square and cube), weekday/holiday/DST indicators, and sinusoidal features of the day of year.

An illustration of the architecture is shown in Figure 3. We train the model to minimize the mean squared error between its predictions and the actual load (giving the mean prediction µ_i), and compute σ_i² as the (constant) empirical variance between the predicted and actual values. In all cases we use 7 years of data to train the model, and 1.75 subsequent years for testing. Using the (mean and variance) predictions of this base model, we obtain z*(x; θ) by solving the generator scheduling problem (11) and then adjusting network parameters to minimize the resultant task loss. We compare against a traditional stochastic programming model that minimizes just the RMSE, as well as a cost-weighted RMSE that periodically reweights training samples given their task loss. (A pure policy-optimizing network is not shown, as it could not sufficiently learn the ramp constraints. We could not obtain good performance for the policy optimizer even ignoring this infeasibility.) Figure 4 shows the performance of the three models. As expected, the RMSE model performs best with respect to the RMSE of its predictions (its objective). However, the task-based model substantially outperforms the RMSE model when evaluated on task loss, the actual objective that the system operator cares about: specifically, we improve upon the performance of the traditional stochastic programming method by 38.6%. The cost-weighted RMSE’s performance is extremely variable, and overall, the task net improves upon this method by 8.6%.
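The residual-connection idea in Figure 3 can be sketched as follows: with the nonlinear path's output layer zero-initialized and the residual path set to the least-squares solution, the network starts out as exactly the linear forecaster and learns nonlinear corrections on top of it. The dimensions, data, and zero-initialization detail here are assumptions for illustration, not the paper's exact training setup.

```python
import numpy as np

# Sketch of the Figure 3 architecture: 2 ReLU hidden layers (width 200) plus a
# residual linear path from inputs to outputs, initialized to linear regression.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 200, 24

X = rng.normal(size=(500, n_in))
Y = X @ rng.normal(size=(n_in, n_out)) + 0.1 * rng.normal(size=(500, n_out))

# Residual path initialized to the ordinary least-squares solution.
W_res, *_ = np.linalg.lstsq(X, Y, rcond=None)

W1 = rng.normal(size=(n_in, n_hid)) * 0.01
W2 = rng.normal(size=(n_hid, n_hid)) * 0.01
W3 = np.zeros((n_hid, n_out))   # zero-init: the net starts exactly linear

def forward(x):
    h = np.maximum(x @ W1, 0)   # ReLU hidden layer 1
    h = np.maximum(h @ W2, 0)   # ReLU hidden layer 2
    return h @ W3 + x @ W_res   # nonlinear correction + linear residual path

# At initialization the prediction equals the linear regression forecast.
print(np.allclose(forward(X), X @ W_res))  # True
```

This gives the task-based training a sensible starting point: early gradient steps only have to improve on, never recover, the linear forecaster.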
4.3 Price forecasting and battery storage Finally, we consider a battery arbitrage task, based upon 6 years of real electrical grid data. Here, a grid-scale battery must operate over a 24 hour period based on some (unknown) distribution over future electricity prices (Figure 1c). For each hour, the operator must decide how much to charge (z_in ∈ R^24) or discharge (z_out ∈ R^24) the battery, thus inducing a particular state of charge in the battery (z_state ∈ R^24). Given a particular realization y of prices, the operator optimizes over: 1) profits, 2) flexibility to participate in other markets, by keeping the battery near half its capacity B (with weight λ), and 3) battery health, by discouraging rapid charging/discharging (with weight ε, ε < λ). (Footnote 5: It is worth noting that a cost-weighted RMSE approach is only possible when direct costs can be assigned independently to each decision point, i.e. when costs do not depend on multiple decision points (as in this experiment). Our task-based method, however, accommodates the (typical) more general setting.)

Table 1: Task loss results for 10 runs each of the battery storage problem, given a lithium-ion battery with attributes B = 1, γ_eff = 0.9, c_in = 0.5, and c_out = 0.2. (Lower loss is better.) Our task-based net on average somewhat improves upon the RMSE net, and demonstrates more reliable performance.

  λ     ε      RMSE net       Task-based net (our method)   % Improvement
  0.1   0.05   −1.45 ± 4.67   −2.92 ± 0.30                  1.02
  1     0.5    4.96 ± 4.85    2.28 ± 2.99                   0.54
  10    5      131 ± 145      95.9 ± 29.8                   0.27
  35    15     173 ± 7.38     170 ± 2.16                    0.02

The battery also has a charging efficiency (γ_eff), limits on speed of charge (c_in) and discharge (c_out), and begins at half charge. This can be written as the stochastic programming problem

minimize_{z_in, z_out, z_state ∈ R^24}  E_{y∼p(y|x;θ)}[ Σ_{i=1}^{24} y_i (z_in − z_out)_i + λ‖z_state − B/2‖² + ε‖z_in‖² + ε‖z_out‖² ]
subject to  z_state,i+1 = z_state,i − z_out,i + γ_eff z_in,i  ∀i,  z_state,1 = B/2,
            0 ≤ z_in ≤ c_in,  0 ≤ z_out ≤ c_out,  0 ≤ z_state ≤ B.  (12)
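Because the prices y enter the objective of (12) only linearly, through Σ_i y_i (z_in − z_out)_i, the expectation reduces to evaluating that term at the mean, which is why only a mean forecast is needed. A quick Monte Carlo check of the price term, with all numbers (prices, schedules, noise level) as illustrative assumptions:

```python
import numpy as np

# Check that the expectation in (12) depends only on the price mean: the
# lambda and epsilon penalty terms do not involve y, so we compare only the
# price term E[y^T (z_in - z_out)] with mu^T (z_in - z_out).

rng = np.random.default_rng(0)
T = 24
mu = 20 + 5 * np.sin(np.arange(T) / 4)   # hypothetical mean prices
z_in = rng.uniform(0, 0.5, T)            # some fixed charge schedule
z_out = rng.uniform(0, 0.2, T)           # some fixed discharge schedule
w = z_in - z_out                          # price-facing part of the decision

samples = rng.normal(mu, 3.0, size=(200_000, T))  # y ~ N(mu, 3^2), illustrative
mc = np.mean(samples @ w)
print(abs(mc - mu @ w) < 0.05)  # True: only the mean of y matters
```

This linearity is what makes the battery problem's deterministic equivalent a plain QP in (z_in, z_out, z_state) once µ is predicted.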
Assuming (as we will in our model) that y_i is a random variable with mean µ_i, this expectation has a closed form that depends only on the mean. Further details are given in the appendix. To develop a predictive model for the mean, we use an architecture similar to that described in Section 4.2. In this case, we input the past day’s prices and temperature, the next day’s load forecasts and temperature forecasts, and additional features such as non-linear functions of the temperatures and temporal features similar to those in Section 4.2. We again train the model to minimize the mean squared error between the model’s predictions and the actual prices (giving the mean prediction µ_i), using about 5 years of data to train the model and 1 subsequent year for testing. Using the mean predictions of this base model, we then obtain the battery schedule by solving the optimization problem (12), again learning network parameters by minimizing the task loss. We compare against a traditional stochastic programming model that minimizes just the RMSE. Table 1 shows the performance of the two models. As energy prices are difficult to predict due to numerous outliers and price spikes, the models in this case are not as well-tuned as in our load forecasting experiment; thus, their performance is relatively variable. Even then, in all cases, our task-based model demonstrates better average performance than the RMSE model when evaluated on task loss, the objective most important to the battery operator (although the improvements are not statistically significant). More interestingly, our task-based method shows less (and in some cases, far less) variability in performance than the RMSE-minimizing method. Qualitatively, our task-based method hedges against perverse events such as price spikes that could substantially affect the performance of a battery charging schedule.
The task-based method thus yields more reliable performance than a pure RMSE-minimizing method in cases where the models are inaccurate due to a high level of stochasticity in the prediction task. 5 Conclusions and future work This paper proposes an end-to-end approach for learning machine learning models that will be used in the loop of a larger process. Specifically, we consider training probabilistic models in the context of stochastic programming to directly capture a task-based objective. Preliminary experiments indicate that our task-based learning model substantially outperforms MLE and policy-optimizing approaches in all but the (rare) case that the MLE model “perfectly” characterizes the underlying distribution. Our method also achieves a 38.6% performance improvement over a highly-optimized real-world stochastic programming algorithm for scheduling electricity generation based on predicted load. In the case of energy price prediction, where there is a high degree of inherent stochasticity in the problem, our method demonstrates more reliable task performance than a traditional predictive method. The task-based approach thus demonstrates promise in optimizing in-the-loop predictions. Future work includes an extension of our approach to stochastic learning models with multiple rounds, and further to model predictive control and full reinforcement learning settings. Acknowledgments This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1252522, and by the Department of Energy Computational Science Graduate Fellowship. References [1] Stein W Wallace and Stein-Erik Fleten. Stochastic programming models in energy. Handbooks in Operations Research and Management Science, 10:637–677, 2003. [2] William T Ziemba and Raymond G Vickson. Stochastic Optimization Models in Finance, volume 1. World Scientific, 2006. [3] John A Buzacott and J George Shanthikumar.
Stochastic Models of Manufacturing Systems, volume 4. Prentice Hall, Englewood Cliffs, NJ, 1993. [4] Alexander Shapiro and Andy Philpott. A tutorial on stochastic programming. Manuscript. Available at www2.isye.gatech.edu/ashapiro/publications.html, 17, 2007. [5] Jeff Linderoth, Alexander Shapiro, and Stephen Wright. The empirical behavior of sampling methods for stochastic programming. Annals of Operations Research, 142(1):215–241, 2006. [6] R Tyrrell Rockafellar and Roger J-B Wets. Scenarios and policy aggregation in optimization under uncertainty. Mathematics of Operations Research, 16(1):119–147, 1991. [7] Yann LeCun, Urs Muller, Jan Ben, Eric Cosatto, and Beat Flepp. Off-road obstacle avoidance through end-to-end learning. In NIPS, pages 739–746, 2005. [8] Ryan W Thomas, Daniel H Friend, Luiz A DaSilva, and Allen B MacKenzie. Cognitive networks: adaptation and learning to achieve end-to-end performance objectives. IEEE Communications Magazine, 44(12):51–57, 2006. [9] Kai Wang, Boris Babenko, and Serge Belongie. End-to-end scene text recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1457–1464. IEEE, 2011. [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. [11] Tao Wang, David J Wu, Adam Coates, and Andrew Y Ng. End-to-end text recognition with convolutional neural networks. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 3304–3308. IEEE, 2012. [12] Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pages 1764–1772, 2014. [13] Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin.
arXiv preprint arXiv:1512.02595, 2015. [14] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016. [15] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2146–2154, 2016. [16] Ken Harada, Jun Sakuma, and Shigenobu Kobayashi. Local search for multiobjective function optimization: Pareto descent method. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pages 659–666. ACM, 2006. [17] Kristof Van Moffaert and Ann Nowé. Multi-objective reinforcement learning using sets of Pareto dominating policies. Journal of Machine Learning Research, 15(1):3483–3512, 2014. [18] Hossam Mossalam, Yannis M Assael, Diederik M Roijers, and Shimon Whiteson. Multi-objective deep reinforcement learning. arXiv preprint arXiv:1610.02707, 2016. [19] Marco A Wiering, Maikel Withagen, and Mădălina M Drugan. Model-based multi-objective reinforcement learning. In Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2014 IEEE Symposium on, pages 1–6. IEEE, 2014. [20] Veselin Stoyanov, Alexander Ropson, and Jason Eisner. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. International Conference on Artificial Intelligence and Statistics, 15:725–733, 2011. ISSN 1532-4435. [21] Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems, pages 1594–1602, 2010. [22] Yang Song, Alexander G Schwing, Richard S Zemel, and Raquel Urtasun. Training deep neural networks via direct loss minimization. In Proceedings of The 33rd International Conference on Machine Learning, pages 2169–2177, 2016.
[23] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. [24] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven dynamics learning via bayesian optimization. arXiv preprint arXiv:1703.09260, 2017. [25] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017. [26] Yoshua Bengio. Using a financial training criterion rather than a prediction criterion. International Journal of Neural Systems, 8(04):433–443, 1997. [27] Adam N Elmachtoub and Paul Grigas. Smart "predict, then optimize". arXiv preprint arXiv:1710.08005, 2017. [28] Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and Edison Guo. On differentiating parameterized argmin and argmax problems with application to bi-level optimization. arXiv preprint arXiv:1607.05447, 2016. [29] Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. arXiv preprint arXiv:1609.07152, 2016. [30] Paul T Boggs and Jon W Tolle. Sequential quadratic programming. Acta numerica, 4:1–51, 1995. [31] Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural networks. arXiV preprint arXiv:1703.00443, 2017. [32] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 11 | 2017 | 181 |
Fader Networks: Manipulating Images by Sliding Attributes

Guillaume Lample1,2, Neil Zeghidour1,3, Nicolas Usunier1, Antoine Bordes1, Ludovic Denoyer2, Marc'Aurelio Ranzato1
{gl,neilz,usunier,abordes,ranzato}@fb.com ludovic.denoyer@lip6.fr

Abstract

This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. This property could allow for applications where users can modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait, or to update the color of some objects. Compared to the state of the art, which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and nicely scales to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of images.

1 Introduction

We are interested in the problem of manipulating natural images by controlling some attributes of interest. For example, given a photograph of the face of a person described by their gender, age, and expression, we want to generate a realistic version of this same person looking older or happier, or an image of a hypothetical twin of the opposite gender. This task and the related problem of unsupervised domain transfer recently received a lot of interest [18, 25, 10, 27, 22, 24], as a case study for conditional generative models but also for applications like automatic image editing.
The key challenge is that the transformations are ill-defined and training is unsupervised: the training set contains images annotated with the attributes of interest, but there are no examples of the transformations themselves. In many cases, such as the "gender swapping" example above, there are no pairs of images representing the same person as a male or as a female. In other cases, collecting examples requires a costly annotation process, like taking pictures of the same person with and without glasses.

Our approach relies on an encoder-decoder architecture where, given an input image x with its attributes y, the encoder maps x to a latent representation z, and the decoder is trained to reconstruct x given (z, y). At inference time, a test image is encoded in the latent space, and the user chooses the attribute values y that are fed to the decoder. Even with binary attribute values at train time, each attribute can be treated as a continuous variable during inference to control how much it is perceived in the final image. We call our architecture Fader Networks, in analogy to the sliders of an audio mixing console, since the user can choose how much of each attribute they want to incorporate.

1 Facebook AI Research. 2 Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6. 3 LSCP, ENS, EHESS, CNRS, PSL Research University, INRIA.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Interpolation between different attributes (zoom in for better resolution). Each line shows reconstructions of the same face with different attribute values, where each attribute is controlled as a continuous variable. It is then possible to make an old person look older or younger, to make a man look more or less masculine, or to imagine his female version. Left images are the originals.

The fundamental feature of our approach is to constrain the latent space to be invariant to the attributes of interest.
Concretely, it means that the distribution over images of the latent representations should be identical for all possible attribute values. This invariance is obtained by using a procedure similar to domain-adversarial training (see e.g., [21, 6, 15]). In this process, a classifier learns to predict the attributes y given the latent representation z during training, while the encoder-decoder is trained on two objectives at the same time. The first objective is the reconstruction error of the decoder, i.e., the latent representation z must contain enough information to allow for the reconstruction of the input. The second objective consists in fooling the attribute classifier, i.e., the latent representation must prevent it from predicting the correct attribute values. In this model, achieving invariance is a means to filter out, or hide, the properties of the image that are related to the attributes of interest. A single latent representation thus corresponds to different images that share a common structure but have different attribute values. The reconstruction objective then forces the decoder to use the attribute values to choose, from the latent representation, the intended image. Our motivation is to learn a disentangled latent space in which we have explicit control over some attributes of interest, without supervision of the intended result of modifying attribute values. With a similar motivation, several approaches have been tested on the same tasks [18, 25], on related image-to-image translation problems [10, 27], or for more specific applications like the creation of parametrized avatars [24]. In addition to a reconstruction loss, the vast majority of these works rely on adversarial training in pixel space, which, during training, compares images generated with an intentional change of attributes against genuine images that have the target attribute values.
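At inference time, the binary attribute code fed to the decoder can be slid continuously, which is what gives Fader Networks their name. A minimal sketch of such a code, using the paper's two-component [1, 0]/[0, 1] encoding for a binary attribute (the function name is hypothetical):

```python
def attribute_code(alpha):
    """Continuous 'fader' code for one binary attribute.

    alpha = 0.0 reproduces the code [1, 0] (attribute absent),
    alpha = 1.0 reproduces [0, 1] (attribute present); values in
    between blend the two, which is how a slider would be exposed
    to a user at inference time.
    """
    return [1.0 - alpha, alpha]

# Sliding from "no glasses" to "glasses" in five steps.
codes = [attribute_code(a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

The decoder was only ever trained with the two endpoint codes; intermediate values rely on the decoder interpolating smoothly between them.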
Our approach differs both in that we use adversarial training on the latent space instead of the output, and in that adversarial training aims at learning invariance to attributes. The assumption underlying our work is that high fidelity to the input image conflicts less with the invariance criterion than with a criterion that forces the hallucinated image to match images from the training set. As a consequence of this principle, our approach results in much simpler training pipelines than those based on adversarial training in pixel space, and is readily amenable to controlling multiple attributes, by adding new output variables to the discriminator of the latent space. As shown in Figure 1 on test images from the CelebA dataset [14], our model can make subtle changes to portraits that are sufficient to alter the perceived value of attributes while preserving the natural aspect of the image and the identity of the person. Our experiments show that our model outperforms previous methods based on adversarial training on the decoder's output, like [18], in terms of both reconstruction loss and generation quality as measured by human subjects. We believe this disentanglement approach is a serious competitor to the widespread adversarial losses on the decoder output for such tasks. In the remainder of the paper, we discuss related work in more detail in Section 2. We then present the training procedure in Section 3 before describing the network architecture and the implementation in Section 4. Experimental results are shown in Section 5.

2 Related work

There is substantial literature on attribute-based and/or conditional image generation, which can be split into three levels of required supervision. At one extreme are fully supervised approaches developed to model known transformations, where examples take the form of (input, transformation, result of the transformation).
In that case, the model needs to learn the desired transformation. This setting was previously explored to learn affine transformations [9], 3D rotations [26], lighting variations [12] and 2D video game animations [20]. The methods developed in these works however rely on the supervised setting, and thus cannot be applied in our setup. At the other extreme of the supervision spectrum lie fully unsupervised methods that aim at learning deep neural networks that disentangle the factors of variation in the data, without specification of the attributes. Example methods are InfoGAN [4] and the predictability minimization framework proposed in [21]. The neural photo editor [3] disentangles factors of variation in natural images for image editing. [8] introduced the beta-VAE, a modification of the variational autoencoder (VAE) framework that can learn latent factorized representations in a completely unsupervised manner. This setting is considerably harder than the one we consider, and in general, it may be difficult with these methods to automatically discover high-level concepts such as gender or age. Our work lies in between the two previous settings. It is related to the adversarial approach to disentangling factors of variation of [16]. Methods developed for unsupervised domain transfer [10, 27, 22, 24] can also be applied in our case: given two different domains of images such as "drawings" and "photographs", one wants to map an image from one domain to the other without supervision; in our case, a domain would correspond to an attribute value. The mappings are trained using adversarial training in pixel space as mentioned in the introduction, using separate encoders and/or decoders per domain, and thus do not scale well to multiple attributes.
In this line of work, but more specifically considering the problem of modifying attributes, the Invertible conditional GAN [18] first trains a GAN conditioned on the attribute values, and in a second step learns to map input images to the latent space of the GAN, hence the name invertible GAN. It is used as a baseline in our experiments. Antipov et al. [1] use a pre-trained face recognition system instead of a conditional GAN to learn the latent space, and focus only on the age attribute. The attribute-to-image approach [25] is a variational auto-encoder that disentangles foreground and background to generate images using attribute values only. Conditional generation is performed by inferring the latent state given the correct attributes and then changing the attributes. Additionally, our work is related to work on learning invariant latent spaces using adversarial training in domain adaptation [6], fair classification [5] and robust inference [15]. The training criterion we use for enforcing invariance is similar to the one used in those works; the difference is that the end goal of these works is only to filter out nuisance variables or sensitive information. In our case, we learn generative models, and invariance is used as a means to force the decoder to use attribute information in its reconstruction. Finally, for the application of automatically modifying faces using attributes, the feature interpolation approach of [23] presents a means to generate alterations of images based on attributes using a network pre-trained on ImageNet. While their approach is interesting from an application perspective, their inference is costly and, since it relies on pre-trained models, cannot naturally incorporate factors or attributes that were not foreseen during pre-training.
3 Fader Networks

Let X be an image domain and Y the set of possible attributes associated with images in X, where in the case of people's faces typical attributes are glasses/no glasses, man/woman, young/old. For simplicity, we consider here the case where attributes are binary, but our approach could be extended to categorical attributes. In that setting, Y = {0, 1}^n, where n is the number of attributes. We have a training set D = {(x_1, y_1), ..., (x_m, y_m)} of m (image, attribute) pairs (x_i ∈ X, y_i ∈ Y). The end goal is to learn from D a model that will generate, for any attribute vector y′, a version of an input image x whose attribute values correspond to y′.

Encoder-decoder architecture Our model, described in Figure 2, is based on an encoder-decoder architecture with domain-adversarial training on the latent space. The encoder E_{θ_enc} : X → R^N is a convolutional neural network with parameters θ_enc that maps an input image to its N-dimensional latent representation E_{θ_enc}(x). The decoder D_{θ_dec} : (R^N, Y) → X is a deconvolutional network with parameters θ_dec that produces a new version of the input image given its latent representation E_{θ_enc}(x) and any attribute vector y′. When the context is clear, we simply use D and E to denote D_{θ_dec} and E_{θ_enc}. The precise architectures of the neural networks are described in Section 4. The auto-encoding loss associated to this architecture is a classical mean squared error (MSE) that measures the quality of the reconstruction of a training input x given its true attribute vector y:

$$\mathcal{L}_{AE}(\theta_{enc}, \theta_{dec}) = \frac{1}{m} \sum_{(x,y)\in\mathcal{D}} \left\| D_{\theta_{dec}}\left(E_{\theta_{enc}}(x), y\right) - x \right\|_2^2$$

The exact choice of the reconstruction loss is not fundamental in our approach, and adversarial losses such as PatchGAN [13] could be used in addition to the MSE at this stage to obtain better textures or sharper images, as in [10]. Using a mean absolute or mean squared error is still necessary to ensure that the reconstruction matches the original image. Ideally, modifying y in D(E(x), y) would generate images with different perceived attributes, but similar to x in every other aspect. However, without additional constraints, the decoder learns to ignore the attributes, and modifying y at test time has no effect.

Learning attribute-invariant latent representations To avoid this behavior, our approach is to learn latent representations that are invariant with respect to the attributes. By invariance, we mean that given two versions of a same object x and x′ that are the same up to their attribute values, for instance two images of the same person with and without glasses, the two latent representations E(x) and E(x′) should be the same. When such an invariance is satisfied, the decoder must use the attribute to reconstruct the original image. Since the training set does not contain different versions of the same image, this constraint cannot be trivially added to the loss. We hence propose to incorporate this constraint by doing adversarial training on the latent space. This idea is inspired by the work on predictability minimization [21] and adversarial training for domain adaptation [6, 15], where the objective is also to learn an invariant latent representation using an adversarial formulation of the learning objective. To that end, an additional neural network called the discriminator is trained to identify the true attributes y of a training pair (x, y) given E(x). The invariance is obtained by learning the encoder E such that the discriminator is unable to identify the right attributes.
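The auto-encoding loss L_AE defined above is simply the squared L2 reconstruction error averaged over the m training pairs. A minimal numpy sketch, with precomputed reconstructions standing in for decoder outputs (array names hypothetical):

```python
import numpy as np

def autoencoding_loss(reconstructions, originals):
    """L_AE: mean over the m examples of the squared L2 distance
    between the decoder output D(E(x), y) and the original image x.

    Both arrays have shape (m, ...); trailing axes are image dims.
    """
    m = len(originals)
    diff = reconstructions.reshape(m, -1) - originals.reshape(m, -1)
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# Two 4-pixel "images", every pixel off by 1: per-example error 4.0.
x = np.zeros((2, 4))
x_hat = np.ones((2, 4))
assert autoencoding_loss(x_hat, x) == 4.0
```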
As in GANs [7], this corresponds to a two-player game where the discriminator aims at maximizing its ability to identify attributes, and E aims at preventing it from being a good discriminator. The exact structure of our discriminator is described in Section 4.

Discriminator objective The discriminator outputs the probabilities of an attribute vector P_{θ_dis}(y|E(x)), where θ_dis are the discriminator's parameters. Using the subscript k to refer to the k-th attribute, we have

$$\log P_{\theta_{dis}}(y \mid E(x)) = \sum_{k=1}^{n} \log P_{\theta_{dis},k}(y_k \mid E(x)).$$

Since the objective of the discriminator is to predict the attributes of the input image given its latent representation, its loss depends on the current state of the encoder and is written as:

$$\mathcal{L}_{dis}(\theta_{dis} \mid \theta_{enc}) = -\frac{1}{m} \sum_{(x,y)\in\mathcal{D}} \log P_{\theta_{dis}}\left(y \mid E_{\theta_{enc}}(x)\right) \quad (1)$$

Adversarial objective The objective of the encoder is now to compute a latent representation that optimizes two objectives. First, the decoder should be able to reconstruct x given E(x) and y, and at the same time the discriminator should not be able to predict y given E(x). We consider that a mistake is made when the discriminator predicts 1 − y_k for attribute k. Given the discriminator's parameters, the complete loss of the encoder-decoder architecture is then:

$$\mathcal{L}(\theta_{enc}, \theta_{dec} \mid \theta_{dis}) = \frac{1}{m} \sum_{(x,y)\in\mathcal{D}} \left[ \left\| D_{\theta_{dec}}\left(E_{\theta_{enc}}(x), y\right) - x \right\|_2^2 - \lambda_E \log P_{\theta_{dis}}\left(1 - y \mid E_{\theta_{enc}}(x)\right) \right] \quad (2)$$

where λ_E > 0 controls the trade-off between the quality of the reconstruction and the invariance of the latent representations. Large values of λ_E will restrain the amount of information about x contained in E(x) and result in blurry images, while low values limit the decoder's dependency on the attribute code y and will result in poor effects when altering attributes.

Figure 2: Main architecture. An (image, attribute) pair (x, y) is given as input. The encoder maps x to the latent representation z; the discriminator is trained to predict y given z, whereas the encoder is trained to make it impossible for the discriminator to predict y given z only. The decoder should reconstruct x given (z, y). At test time, the discriminator is discarded and the model can generate different versions of x when fed with different attribute values.

Learning algorithm Overall, given the current state of the encoder, the optimal discriminator parameters satisfy θ*_dis(θ_enc) ∈ argmin_{θ_dis} L_dis(θ_dis | θ_enc). If we ignore problems related to multiple (and local) minima, the overall objective function is

$$\theta^*_{enc}, \theta^*_{dec} = \operatorname*{argmin}_{\theta_{enc},\theta_{dec}} \mathcal{L}\left(\theta_{enc}, \theta_{dec} \mid \theta^*_{dis}(\theta_{enc})\right).$$

In practice, it is unreasonable to solve for θ*_dis(θ_enc) at each update of θ_enc. Following the practice of adversarial training for deep networks, we use stochastic gradient updates for all parameters, considering the current value of θ_dis as an approximation for θ*_dis(θ_enc). Given a training example (x, y), let us denote L_dis(θ_dis | θ_enc, x, y) the discriminator loss restricted to (x, y) and L(θ_enc, θ_dec | θ_dis, x, y) the corresponding encoder-decoder loss. The update at time t, given the current parameters θ^(t)_dis, θ^(t)_enc, and θ^(t)_dec and the training example (x^(t), y^(t)), is:

$$\theta^{(t+1)}_{dis} = \theta^{(t)}_{dis} - \eta \, \nabla_{\theta_{dis}} \mathcal{L}_{dis}\left(\theta^{(t)}_{dis} \mid \theta^{(t)}_{enc}, x^{(t)}, y^{(t)}\right)$$
$$\left[\theta^{(t+1)}_{enc}, \theta^{(t+1)}_{dec}\right] = \left[\theta^{(t)}_{enc}, \theta^{(t)}_{dec}\right] - \eta \, \nabla_{\theta_{enc},\theta_{dec}} \mathcal{L}\left(\theta^{(t)}_{enc}, \theta^{(t)}_{dec} \mid \theta^{(t+1)}_{dis}, x^{(t)}, y^{(t)}\right)$$
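The alternating scheme can be sketched as a single training step: first descend the discriminator loss, then update the encoder-decoder against the freshly updated discriminator, which approximates the ideal θ*_dis(θ_enc). A toy illustration with scalar parameters and caller-supplied gradient functions (all names hypothetical, not the paper's code):

```python
def adversarial_step(theta_dis, theta_enc_dec, grad_dis, grad_enc_dec, lr):
    """One alternating update: the discriminator moves first, then the
    encoder-decoder is updated against the *new* discriminator state."""
    theta_dis = theta_dis - lr * grad_dis(theta_dis, theta_enc_dec)
    theta_enc_dec = theta_enc_dec - lr * grad_enc_dec(theta_enc_dec, theta_dis)
    return theta_dis, theta_enc_dec

# Toy quadratic losses, just to exercise the update order:
# L_dis = theta_dis^2 and L = theta_enc_dec^2.
d, g = adversarial_step(
    1.0, 2.0,
    grad_dis=lambda d, g: 2 * d,       # d/dd of d^2
    grad_enc_dec=lambda g, d: 2 * g,   # d/dg of g^2
    lr=0.1,
)
```

In practice both gradients would come from automatic differentiation of losses (1) and (2) on the current minibatch.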
The details of training and models are given in the next section.

4 Implementation

We adapt the architecture of our network from [10]. Let C_k be a Convolution-BatchNorm-ReLU layer with k filters. Convolutions use a kernel of size 4 × 4, with a stride of 2 and a padding of 1, so that each layer of the encoder divides the size of its input by 2. We use leaky-ReLUs with a slope of 0.2 in the encoder, and simple ReLUs in the decoder. The encoder consists of the following 7 layers:

C16 − C32 − C64 − C128 − C256 − C512 − C512

Input images have a size of 256 × 256. As a result, the latent representation of an image consists of 512 feature maps of size 2 × 2. In our experiments, using 6 layers gave us similar results, while 8 layers significantly decreased the performance, even when using more feature maps in the latent state. To provide the decoder with image attributes, we append the latent code to each layer given as input to the decoder, where the latent code of an image is the concatenation of the one-hot vectors representing the values of its attributes (binary attributes are represented as [1, 0] and [0, 1]). We append the latent code as additional constant input channels for all the convolutions of the decoder.

Table 1: Perceptual evaluation of naturalness and swap accuracy for each model. The naturalness score is the percentage of images that were labeled as "real" by human evaluators to the question "Is this image a real photograph or a fake generated by a graphics engine?". The accuracy score is the classification accuracy by human evaluators on the values of each attribute.

             |      Naturalness      |       Accuracy
Model        | Mouth  Smile  Glasses | Mouth  Smile  Glasses
Real Image   |  92.6   87.0    88.6  |  89.0   88.3    97.6
IcGAN AE     |  22.7   21.7    14.8  |  88.1   91.7    86.2
IcGAN Swap   |  11.4   22.9     9.6  |  10.1    9.9    47.5
FadNet AE    |  88.4   75.2    78.8  |  91.8   90.1    94.5
FadNet Swap  |  79.0   31.4    45.3  |  66.2   97.1    76.6
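Each 4 × 4 convolution with stride 2 and padding 1 halves the spatial size, which is why seven encoder layers take a 256 × 256 input down to 2 × 2 latent maps. A quick sanity check using the standard convolution output-size formula:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of one Convolution-BatchNorm-ReLU layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 256
for _ in range(7):        # C16 ... C512: seven halving layers
    size = conv_out(size)
assert size == 2          # 512 feature maps of size 2 x 2
```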
Denoting by n the number of attributes (hence a code of size 2n), the decoder is symmetric to the encoder, but uses transposed convolutions for the up-sampling:

C512+2n − C512+2n − C256+2n − C128+2n − C64+2n − C32+2n − C16+2n

The discriminator is a C512 layer followed by a fully-connected neural network of two layers of size 512 and n respectively.

Dropout We found it beneficial to add dropout in our discriminator. We hypothesized that dropout helped the discriminator rely on a wider set of features in order to infer the current attributes, improving and stabilizing its accuracy, and consequently giving better feedback to the encoder. We set the dropout rate to 0.3 in all our experiments. Following [10], we also tried to add dropout in the first layers of the decoder, but in our experiments, this turned out to significantly decrease the performance.

Discriminator cost scheduling Similarly to [2], we use a variable weight for the discriminator loss coefficient λ_E. We initially set λ_E to 0 and the model is trained like a normal auto-encoder. Then, λ_E is linearly increased to 0.0001 over the first 500,000 iterations to slowly encourage the model to produce invariant representations. This scheduling turned out to be critical in our experiments. Without it, we observed that the encoder was too affected by the loss coming from the discriminator, even for low values of λ_E.

Model selection Model selection was first performed automatically using two criteria. First, we used the reconstruction error on original images as measured by the MSE. Second, we also want the model to properly swap the attributes of an image. For this second criterion, we train a classifier to predict image attributes. At the end of each epoch, we swap the attributes of each image in the validation set and measure how well the classifier performs on the decoded images. These two metrics were used to shortlist potentially good models.
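The discriminator cost schedule described above is a plain linear ramp held constant after the warm-up. A minimal sketch (function name hypothetical):

```python
def lambda_E(iteration, target=0.0001, warmup=500_000):
    """Linearly anneal the discriminator weight from 0 to its target
    over the warm-up iterations, then hold it constant."""
    return target * (min(iteration, warmup) / warmup)

# At iteration 0 the model trains as a plain auto-encoder;
# halfway through the warm-up the weight is half the target.
assert lambda_E(0) == 0.0
assert lambda_E(500_000) == 0.0001
```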
The final model was selected based on human evaluation of images from the train set reconstructed with swapped attributes.

5 Experiments

5.1 Experiments on the celebA dataset

Experimental setup We first present experiments on the celebA dataset [14], which contains 200,000 images of celebrities of shape 178 × 218, annotated with 40 attributes. We used the standard training, validation and test split. All pictures presented in the paper or used for evaluation have been taken from the test set. For pre-processing, we cropped images to 178 × 178, and resized them to 256 × 256, which is the resolution used in all figures of the paper. Image values were normalized to [−1, 1]. All models were trained with Adam [11], using a learning rate of 0.002, β1 = 0.5, and a batch size of 32. We performed data augmentation by horizontally flipping images with probability 0.5 at each iteration. As a baseline, we used IcGAN [18] with the model provided by the authors (https://github.com/Guim3/IcGAN) and trained on the same dataset.

Figure 3: Swapping the attributes of different faces. Zoom in for better resolution.

Qualitative evaluation Figure 3 shows examples of images generated when swapping different attributes: the generated images have a high visual quality and clearly handle the attribute value changes, for example by adding realistic glasses to the different faces. These generated images confirm that the latent representation learned by Fader Networks is both invariant to the attribute values and captures the information needed to generate any version of a face, for any attribute value. Indeed, when looking at the shape of the generated glasses, different glasses shapes and colors have been integrated into the original face depending on the face: our model is not only adding "generic" glasses to all faces, but generates plausible glasses depending on the input.
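The [−1, 1] normalization mentioned in the experimental setup above is an affine rescaling of 8-bit pixels. A sketch, assuming the common "divide by 127.5, subtract 1" convention, which the paper does not spell out (function name hypothetical):

```python
import numpy as np

def normalize(img_uint8):
    """Map 8-bit pixel values in [0, 255] to the [-1, 1] range
    used for training (assumed convention: x / 127.5 - 1)."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = normalize(img)
assert out.min() == -1.0 and out.max() == 1.0
```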
Quantitative evaluation protocol We performed a quantitative evaluation of Fader Networks on Mechanical Turk, using IcGAN as a baseline. We chose the three attributes Mouth (Open/Close), Smile (With/Without) and Glasses (With/Without), as they were the attributes in common between IcGAN and our model. We evaluated two different aspects of the generated images: the naturalness, which measures the quality of generated images, and the accuracy, which measures how well swapping an attribute value is reflected in the generation. Both measures are necessary to assess that we generate natural images, and that the swap is effective. We compare: REAL IMAGE, which provides original images without transformation; FADNET AE and ICGAN AE, which reconstruct original images without attribute alteration; and FADNET SWAP and ICGAN SWAP, which generate images with one swapped attribute, e.g., With Glasses → Without Glasses. Before being submitted to Mechanical Turk, all images were cropped and resized following the same processing as IcGAN. As a result, output images were displayed in 64 × 64 resolution, also preventing Workers from basing their judgment exclusively on the sharpness of the presented images. Technically, we should also assess that the identity of a person is preserved when swapping attributes. This seemed to be a problem for GAN-based methods, but the reconstruction quality of our model is very good (RMSE on test of 0.0009, to be compared to 0.028 for IcGAN), and we did not observe this issue. Therefore, we did not evaluate this aspect. For naturalness, the first 500 images from the test set, such that there are 250 images for each attribute value, were shown to Mechanical Turk Workers, 100 for each of the 5 different models presented above. For each image, we asked whether the image seems natural or generated.
The description given to the Workers to explain their task showed 4 examples of real images and 4 examples of fake images (1 FADNET AE, 1 FADNET SWAP, 1 ICGAN AE, 1 ICGAN SWAP). The accuracy of each model on each attribute was evaluated in a different classification task, resulting in a total of 15 experiments. For example, the FadNet/Glasses experiment consisted in asking Workers whether people to whom glasses were added by FADNET SWAP effectively possess glasses, and vice versa. This allows us to evaluate how perceptible the swaps are to the human eye. In each experiment, 100 images were shown (50 images per class, in the order they appear in the test set). In both quantitative evaluations, each experiment was performed by 10 Workers, resulting in 5,000 samples per experiment for naturalness, and 1,000 samples per classification experiment on swapped attributes. The results on both tasks are shown in Table 1.

Figure 4: (Zoom in for better resolution.) Examples of multi-attribute swaps (Gender / Opened eyes / Eye glasses) performed by the same model. Left images are the originals.

Quantitative results In the naturalness experiments, only around 90% of real images were classified as "real" by the Workers, indicating how demanding the evaluators are when judging natural images. Our model obtained high naturalness scores when reconstructing images without swapping attributes: 88.4%, 75.2% and 78.8%, compared to IcGAN reconstructions, whose scores do not exceed 23%, whether for reconstructed or swapped images. For the swap, FADNET SWAP still consistently outperforms ICGAN SWAP by a large margin. However, the naturalness score varies a lot depending on the swapped attribute: from 79.0% for the opening of the mouth down to 31.4% for the smile. Classification experiments show that reconstructions with FADNET AE and ICGAN AE have very high classification scores, and are even on par with real images on both Mouth and Smile.
FADNET SWAP obtains an accuracy of 66.2% for the mouth, 76.6% for the glasses and 97.1% for the smile, indicating that our model can swap these attributes with very high efficiency. On the other hand, with accuracies of 10.1%, 47.5% and 9.9% on these same attributes, ICGAN SWAP does not seem able to generate convincing swaps.

Multi-attribute swapping We present qualitative results for the ability of our model to swap multiple attributes at once in Figure 4, by jointly modifying the gender, open eyes and glasses attributes. Even in this more difficult setting, our model can generate convincing images with multiple swaps.

5.2 Experiments on the Flowers dataset

We performed additional experiments on the Oxford-102 dataset, which contains about 9,000 images of flowers classified into 102 categories [17]. Since the dataset does not contain labels other than the flower categories, we built a list of color attributes from the flower captions provided by [19]. Each flower comes with 10 different captions. For a given color, we gave a flower the associated color attribute if that color appears in at least 5 of its 10 captions. Although naive, this approach was enough to create accurate labels. We resized images to 64 × 64. Figure 5 shows reconstructed flowers with different values of the "pink" attribute. We can observe that the color of the flower changes in the desired direction, while the background stays cleanly unchanged.

Figure 5: Examples of reconstructed flowers with different values of the pink attribute. First-row images are the originals. Increasing the value of that attribute will turn flower colors into pink, while decreasing it in images with originally pink flowers will make them turn yellow or orange.

6 Conclusion

We presented a new approach to generate variations of images by changing attribute values. The approach is based on enforcing the invariance of the latent space w.r.t. the attributes.
A key advantage of our method compared to many recent models [27, 10] is that it generates realistic high-resolution images without needing to apply a GAN to the decoder output. As a result, it could easily be extended to other domains like speech or text, where backpropagation through the decoder can be very challenging, for instance because of the non-differentiable text generation process. However, methods commonly used in vision to assess the visual quality of generated images, like PatchGAN, could readily be applied on top of our model.

Acknowledgments

The authors would like to thank Yedid Hoshen for initial discussions about the core ideas of the paper, and Christian Pursch and Alexander Miller for their help in setting up the experiments and Mechanical Turk evaluations. The authors are also grateful to David Lopez-Paz and Mouhamadou Moustapha Cisse for useful feedback and support on this project.

References

[1] Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative adversarial networks. arXiv preprint arXiv:1702.01983, 2017.
[2] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[3] Andrew Brock, Theodore Lim, JM Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
[4] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[5] Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
[6] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky.
Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016. [7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. [8] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. Proceedings of ICLR 2017, 2017. [9] Geoffrey Hinton, Alex Krizhevsky, and Sida Wang. Transforming auto-encoders. Artificial Neural Networks and Machine Learning–ICANN 2011, pages 44–51, 2011. [10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016. [11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [12] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015. [13] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer, 2016. [14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015. [15] Gilles Louppe, Michael Kagan, and Kyle Cranmer. Learning to pivot with adversarial networks. arXiv preprint arXiv:1611.01046, 2016. [16] Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training.
In Advances in Neural Information Processing Systems, pages 5041–5049, 2016. [17] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP’08. Sixth Indian Conference on, pages 722–729. IEEE, 2008. [18] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M Álvarez. Invertible conditional gans for image editing. arXiv preprint arXiv:1611.06355, 2016. [19] Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 49–58, 2016. [20] Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In Advances in Neural Information Processing Systems, pages 1252–1260, 2015. [21] Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992. [22] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016. [23] Paul Upchurch, Jacob Gardner, Kavita Bala, Robert Pless, Noah Snavely, and Kilian Weinberger. Deep feature interpolation for image content changes. arXiv preprint arXiv:1611.05507, 2016. [24] Lior Wolf, Yaniv Taigman, and Adam Polyak. Unsupervised creation of parameterized avatars. arXiv preprint arXiv:1704.05693, 2017. [25] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016. [26] Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In Advances in Neural Information Processing Systems, pages 1099–1107, 2015. [27] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 
Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
VAE Learning via Stein Variational Gradient Descent Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, Lawrence Carin Department of Electrical and Computer Engineering, Duke University {yp42, zg27, r.henao, cl319, shaobo.han, lcarin}@duke.edu Abstract A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets. 1 Introduction There has been significant recent interest in the variational autoencoder (VAE) [11], a generalization of the original autoencoder [33]. VAEs are typically trained by maximizing a variational lower bound of the data log-likelihood [2, 10, 11, 12, 18, 21, 22, 23, 30, 34, 35]. To compute the variational expression, one must be able to explicitly evaluate the associated distribution of latent features, i.e., the stochastic encoder must have an explicit analytic form. This requirement has motivated design of encoders in which a neural network maps input data to the parameters of a simple distribution, e.g., Gaussian distributions have been widely utilized [1, 11, 27, 25]. The Gaussian assumption may be too restrictive in some cases [28]. Consequently, recent work has considered normalizing flows [28], in which random variables from (for example) a Gaussian distribution are fed through a series of nonlinear functions to increase the complexity and representational power of the encoder.
However, because of the need to explicitly evaluate the distribution within the variational expression used during learning, these nonlinear functions must be relatively simple, e.g., planar flows, and one may require many layers to achieve the desired representational power. We present a new approach for training a VAE. We recognize that the need for an explicit form of the encoder distribution is only a consequence of the fact that learning is performed based on the variational lower bound. For inference (e.g., at test time), we do not need an explicit form for the distribution of latent features; we only require fast sampling from the encoder. Consequently, rather than directly employing the traditional variational lower bound, we seek to minimize the Kullback-Leibler (KL) divergence between an approximating distribution and the true posterior of the model and latent parameters. Learning then becomes a novel application of Stein variational gradient descent (SVGD) [15], constituting its first application to training VAEs. We extend SVGD with importance sampling [1], and also demonstrate its novel use in semi-supervised VAE learning. The concepts developed here are demonstrated on a wide range of unsupervised and semi-supervised learning problems, including a large-scale semi-supervised analysis of the ImageNet dataset. These experimental results illustrate the advantage of SVGD-based VAE training, relative to traditional approaches. Moreover, the results demonstrate further improvements realized by integrating SVGD with importance sampling. Independent work by [3, 6] proposed similar models, in which the authors incorporated SVGD with VAEs [3] and importance sampling [6] for unsupervised learning tasks. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 2 Stein Learning of Variational Autoencoder (Stein VAE) 2.1 Review of VAE and Motivation for Use of SVGD Consider data $\mathcal{D} = \{x_n\}_{n=1}^N$, where $x_n$ is modeled via the decoder $x_n|z_n \sim p(x|z_n; \theta)$.
A prior $p(z)$ is placed on the latent codes. To learn parameters $\theta$, one is typically interested in maximizing the empirical expected log-likelihood, $\frac{1}{N}\sum_{n=1}^{N}\log p(x_n;\theta)$. A variational lower bound is often employed:
$$\mathcal{L}(\theta,\phi;x)=\mathbb{E}_{z|x;\phi}\log\Big[\frac{p(x|z;\theta)p(z)}{q(z|x;\phi)}\Big]=-\mathrm{KL}(q(z|x;\phi)\,\|\,p(z|x;\theta))+\log p(x;\theta)\,, \quad (1)$$
with $\log p(x;\theta)\ge\mathcal{L}(\theta,\phi;x)$, and where $\mathbb{E}_{z|x;\phi}[\cdot]$ is approximated by averaging over a finite number of samples drawn from the encoder $q(z|x;\phi)$. Parameters $\theta$ and $\phi$ are typically optimized iteratively via stochastic gradient descent [11], seeking to maximize $\sum_{n=1}^{N}\mathcal{L}(\theta,\phi;x_n)$. To evaluate the variational expression in (1), we require the ability to sample efficiently from $q(z|x;\phi)$, to approximate the expectation. We also require a closed form for this encoder, to evaluate $\log[p(x|z;\theta)p(z)/q(z|x;\phi)]$. In the proposed VAE learning framework, rather than maximizing the variational lower bound explicitly, we focus on the term $\mathrm{KL}(q(z|x;\phi)\,\|\,p(z|x;\theta))$, which we seek to minimize. This can be achieved by leveraging Stein variational gradient descent (SVGD) [15]. Importantly, for SVGD we need only be able to sample from $q(z|x;\phi)$; we need not possess its explicit functional form. In the above discussion, $\theta$ is treated as a parameter; below we treat it as a random variable, as was considered in the Appendix of [11]. Treatment of $\theta$ as a random variable allows for model averaging, and a point estimate of $\theta$ is revealed as a special case of the proposed method. The set of codes associated with all $x_n\in\mathcal{D}$ is represented $Z=\{z_n\}_{n=1}^N$. The prior on $\{\theta,Z\}$ is here represented as $p(\theta,Z)=p(\theta)\prod_{n=1}^N p(z_n)$. We desire the posterior $p(\theta,Z|\mathcal{D})$. Consider the revised variational expression
$$\mathcal{L}_1(q;\mathcal{D})=\mathbb{E}_{q(\theta,Z)}\log\Big[\frac{p(\mathcal{D}|Z,\theta)p(\theta,Z)}{q(\theta,Z)}\Big]=-\mathrm{KL}(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D}))+\log p(\mathcal{D};\mathcal{M})\,, \quad (2)$$
where $p(\mathcal{D};\mathcal{M})$ is the evidence for the underlying model $\mathcal{M}$. Learning $q(\theta,Z)$ such that $\mathcal{L}_1$ is maximized is equivalent to seeking $q(\theta,Z)$ that minimizes $\mathrm{KL}(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D}))$.
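As a concrete illustration of estimating the bound in (1) by averaging over encoder samples, here is a minimal numpy sketch; the function name `mc_elbo` and the toy log-densities in the usage example are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def mc_elbo(log_joint, log_q, z_samples):
    """Monte Carlo estimate of E_q[ log p(x, z) - log q(z|x) ],
    averaging over samples drawn from the encoder q(z|x)."""
    vals = [log_joint(z) - log_q(z) for z in z_samples]
    return float(np.mean(vals))
```

A useful sanity check: when $q(z|x)$ equals the true posterior $p(z|x)$, the integrand is constant and the estimate equals $\log p(x)$ exactly for any set of samples.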
By leveraging and generalizing SVGD, we will perform the latter. 2.2 Stein Variational Gradient Descent (SVGD) Rather than explicitly specifying a form for $p(\theta,Z|\mathcal{D})$, we sequentially refine samples of $\theta$ and $Z$, such that they are better matched to $p(\theta,Z|\mathcal{D})$. We alternate between updating the samples of $\theta$ and the samples of $Z$, analogous to how $\theta$ and $\phi$ are updated alternately in traditional VAE optimization of (1). We first consider updating samples of $\theta$, with the samples of $Z$ held fixed. Specifically, assume we have samples $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, and samples $\{z_{jn}\}_{j=1}^M$ drawn from distribution $q(Z)$. We wish to transform $\{\theta_j\}_{j=1}^M$ by feeding them through a function, and the corresponding (implicit) transformed distribution from which they are drawn is denoted $q_T(\theta)$. It is desired that, in a KL sense, $q_T(\theta)q(Z)$ is closer to $p(\theta,Z|\mathcal{D})$ than was $q(\theta)q(Z)$. The following theorem is useful for defining how to best update $\{\theta_j\}_{j=1}^M$. Theorem 1 Assume $\theta$ and $Z$ are random variables (RVs) drawn from distributions $q(\theta)$ and $q(Z)$, respectively. Consider the transformation $T(\theta)=\theta+\epsilon\psi(\theta;\mathcal{D})$ and let $q_T(\theta)$ represent the distribution of $\theta'=T(\theta)$. We have
$$\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0} = -\mathbb{E}_{\theta\sim q(\theta)}\big[\mathrm{trace}(\mathcal{A}_p(\theta;\mathcal{D}))\big]\,, \quad (3)$$
where $q_T=q_T(\theta)q(Z)$, $p=p(\theta,Z|\mathcal{D})$, $\mathcal{A}_p(\theta;\mathcal{D})=\nabla_\theta\log\tilde p(\theta;\mathcal{D})\psi(\theta;\mathcal{D})^T+\nabla_\theta\psi(\theta;\mathcal{D})$, $\log\tilde p(\theta;\mathcal{D})=\mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D},Z,\theta)]$, and $p(\mathcal{D},Z,\theta)=p(\mathcal{D}|Z,\theta)p(\theta,Z)$. The proof is provided in Appendix A. Following [15], we assume $\psi(\theta;\mathcal{D})$ lives in a reproducing kernel Hilbert space (RKHS) with kernel $k(\cdot,\cdot)$. Under this assumption, the solution for $\psi(\theta;\mathcal{D})$ that maximizes the decrease in the KL distance (3) is
$$\psi^*(\cdot;\mathcal{D})=\mathbb{E}_{q(\theta)}\big[k(\theta,\cdot)\nabla_\theta\log\tilde p(\theta;\mathcal{D})+\nabla_\theta k(\theta,\cdot)\big]\,. \quad (4)$$
Theorem 1 concerns updating samples from $q(\theta)$ assuming fixed $q(Z)$. Similarly, to update $q(Z)$ with $q(\theta)$ fixed, we employ a complementary form of Theorem 1 (omitted for brevity). In that case, we consider the transformation $T(Z)=Z+\epsilon\psi(Z;\mathcal{D})$, with $Z\sim q(Z)$, and the function $\psi(Z;\mathcal{D})$ is also assumed to be in an RKHS.
The expectations in (3) and (4) are approximated by samples, yielding the particle updates $\theta_j^{(t+1)}=\theta_j^{(t)}+\epsilon\Delta\theta_j^{(t)}$, with
$$\Delta\theta_j^{(t)}\approx\frac{1}{M}\sum_{j'=1}^{M}\Big[k_\theta(\theta_{j'}^{(t)},\theta_j^{(t)})\nabla_{\theta_{j'}^{(t)}}\log\tilde p(\theta_{j'}^{(t)};\mathcal{D})+\nabla_{\theta_{j'}^{(t)}}k_\theta(\theta_{j'}^{(t)},\theta_j^{(t)})\Big]\,, \quad (5)$$
with $\nabla_\theta\log\tilde p(\theta;\mathcal{D})\approx\frac{1}{M}\sum_{n=1}^{N}\sum_{j=1}^{M}\nabla_\theta\log[p(x_n|z_{jn},\theta)p(\theta)]$. A similar update of samples is manifested for the latent variables, $z_{jn}^{(t+1)}=z_{jn}^{(t)}+\epsilon\Delta z_{jn}^{(t)}$:
$$\Delta z_{jn}^{(t)}=\frac{1}{M}\sum_{j'=1}^{M}\Big[k_z(z_{j'n}^{(t)},z_{jn}^{(t)})\nabla_{z_{j'n}^{(t)}}\log\tilde p(z_{j'n}^{(t)};\mathcal{D})+\nabla_{z_{j'n}^{(t)}}k_z(z_{j'n}^{(t)},z_{jn}^{(t)})\Big]\,, \quad (6)$$
where $\nabla_{z_n}\log\tilde p(z_n;\mathcal{D})\approx\frac{1}{M}\sum_{j=1}^{M}\nabla_{z_n}\log[p(x_n|z_n,\theta'_j)p(z_n)]$. The kernels used to update samples of $\theta$ and $z_n$ are in general different, denoted respectively $k_\theta(\cdot,\cdot)$ and $k_z(\cdot,\cdot)$, and $\epsilon$ is a small step size. For notational simplicity, $M$ is the same in (5) and (6), but in practice a different number of samples may be used for $\theta$ and $Z$. If $M=1$ for parameter $\theta$, indices $j$ and $j'$ are removed in (5). Learning then reduces to gradient descent and a point estimate for $\theta$, identical to the optimization procedure used for the traditional VAE expression in (1), but with the (multiple) samples associated with $Z$ sequentially transformed via SVGD (and, importantly, without the need to assume a form for $q(z|x;\phi)$). Therefore, if only a point estimate of $\theta$ is desired, (1) can be optimized w.r.t. $\theta$, while SVGD is applied to update $Z$. 2.3 Efficient Stochastic Encoder At iteration $t$ of the above learning procedure, we realize a set of latent-variable (code) samples $\{z_{jn}^{(t)}\}_{j=1}^M$ for each $x_n\in\mathcal{D}$ under analysis. For large $N$, training may be computationally expensive. Further, the need to evolve (learn) samples $\{z_{j*}\}_{j=1}^M$ for each new test sample $x_*$ is undesirable. We therefore develop a recognition model that efficiently computes samples of latent codes for a data sample of interest. The recognition model draws samples via $z_{jn}=f_\eta(x_n,\xi_{jn})$ with $\xi_{jn}\sim q_0(\xi)$. The distribution $q_0(\xi)$ is selected such that it may be easily sampled, e.g., an isotropic Gaussian.
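To make the particle updates in (5) and (6) concrete, the following is a minimal numpy sketch of a generic SVGD step with an RBF kernel. The function names (`rbf_kernel`, `svgd_step`) and the use of the median of the pairwise squared distances as bandwidth are our own illustrative choices for this sketch, not specifications from the paper.

```python
import numpy as np

def rbf_kernel(particles):
    """RBF kernel matrix and its gradients for a (M, d) particle array."""
    diff = particles[:, None, :] - particles[None, :, :]        # (M, M, d)
    sq = np.sum(diff ** 2, axis=-1)                             # pairwise squared distances
    h = np.median(sq) + 1e-8                                    # median-heuristic bandwidth (assumed)
    K = np.exp(-sq / h)                                         # k(theta_i, theta_j)
    grad_K = -2.0 / h * diff * K[:, :, None]                    # grad wrt theta_i of k(theta_i, theta_j)
    return K, grad_K

def svgd_step(particles, grad_logp, eps=0.1):
    """One SVGD update: attraction via kernel-weighted scores plus kernel repulsion."""
    M = particles.shape[0]
    K, grad_K = rbf_kernel(particles)
    scores = grad_logp(particles)                               # (M, d) score evaluations
    phi = (K @ scores + grad_K.sum(axis=0)) / M                 # empirical version of psi*
    return particles + eps * phi
```

For a standard-normal target (score $-\theta$), repeated calls drive particles toward the target while the repulsion term keeps them spread out.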
After each iteration of updating the samples of $Z$, we refine the recognition model $f_\eta(x,\xi)$ to mimic the Stein sample dynamics. Assume recognition-model parameters $\eta^{(t)}$ have been learned thus far. Using $\eta^{(t)}$, latent codes for iteration $t$ are constituted as $z_{jn}^{(t)}=f_{\eta^{(t)}}(x_n,\xi_{jn})$, with $\xi_{jn}\sim q_0(\xi)$. These codes are computed for all data $x_n\in B_t$, where $B_t\subset\mathcal{D}$ is the minibatch of data at iteration $t$. The change in the codes is $\Delta z_{jn}^{(t)}$, as defined in (6). We then update $\eta$ to match the refined codes, as
$$\eta^{(t+1)}=\arg\min_\eta\sum_{x_n\in B_t}\sum_{j=1}^{M}\big\|f_\eta(x_n,\xi_{jn})-z_{jn}^{(t+1)}\big\|^2\,. \quad (7)$$
The analytic solution of (7) is intractable. We update $\eta$ with $K$ steps of gradient descent as $\eta^{(t,k)}=\eta^{(t,k-1)}-\delta\sum_{x_n\in B_t}\sum_{j=1}^{M}\Delta\eta_{jn}^{(t,k-1)}$, where $\Delta\eta_{jn}^{(t,k-1)}=\partial_\eta f_\eta(x_n,\xi_{jn})\big(f_\eta(x_n,\xi_{jn})-z_{jn}^{(t+1)}\big)\big|_{\eta=\eta^{(t,k-1)}}$, $\delta$ is a small step size, $\eta^{(t)}=\eta^{(t,0)}$, $\eta^{(t+1)}=\eta^{(t,K)}$, and $\partial_\eta f_\eta(x_n,\xi_{jn})$ is the transpose of the Jacobian of $f_\eta(x_n,\xi_{jn})$ w.r.t. $\eta$. Note that the use of minibatches mitigates the challenge of training with large training sets $\mathcal{D}$. The function $f_\eta(x,\xi)$ plays a role analogous to $q(z|x;\phi)$ in (1), in that it yields a means of efficiently drawing samples of latent codes $z$ given observed $x$; however, we do not impose an explicit functional form for the distribution of these samples. 3 Stein Variational Importance Weighted Autoencoder (Stein VIWAE) 3.1 Multi-sample importance-weighted KL divergence Recall the variational expression in (1) employed in conventional VAE learning. Recently, [1, 19] showed that the multi-sample ($k$ samples) importance-weighted estimator
$$\mathcal{L}_k(x)=\mathbb{E}_{z^1,\dots,z^k\sim q(z|x)}\Big[\log\frac{1}{k}\sum_{i=1}^{k}\frac{p(x,z^i)}{q(z^i|x)}\Big]\,, \quad (8)$$
provides a tighter lower bound and a better proxy for the log-likelihood, where $z^1,\dots,z^k$ are random variables sampled independently from $q(z|x)$. Recall from (3) that the KL divergence played a key role in the Stein-based learning of Section 2.
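The amortization step in (7) can be sketched as follows, with a simple linear map standing in for the MLP recognition model used in the paper; the function names (`recognition`, `fit_recognition`) and the linear parameterization are our own illustrative assumptions.

```python
import numpy as np

def recognition(eta, x, xi):
    """Linear stand-in for f_eta(x, xi): concatenate inputs, apply a weight matrix."""
    return np.concatenate([x, xi], axis=1) @ eta

def fit_recognition(eta, x, xi, z_target, delta=1e-2, K=500):
    """K gradient-descent steps on sum_j ||f_eta(x, xi) - z_target||^2, as in Eq. (7)."""
    inp = np.concatenate([x, xi], axis=1)        # (N, Dx + Dxi)
    for _ in range(K):
        resid = inp @ eta - z_target             # (N, Dz) prediction error
        eta = eta - delta * inp.T @ resid / len(x)
    return eta
```

In the full method the targets $z^{(t+1)}_{jn}$ are the SVGD-refined codes, so the recognition model learns to land its samples where the Stein dynamics would have moved them.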
Equation (8) motivates replacing the KL objective function with the multi-sample importance-weighted KL divergence
$$\mathrm{KL}_{q,p}^{k}(\Theta;\mathcal{D})\triangleq-\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\Big[\log\frac{1}{k}\sum_{i=1}^{k}\frac{p(\Theta^i|\mathcal{D})}{q(\Theta^i)}\Big]\,, \quad (9)$$
where $\Theta=(\theta,Z)$ and $\Theta^{1:k}=\Theta^1,\dots,\Theta^k$ are independent samples from $q(\theta,Z)$. Note that the special case $k=1$ recovers the standard KL divergence. Inspired by [1], the following theorem (proved in Appendix A) shows that increasing the number of samples $k$ is guaranteed to reduce the KL divergence and provide a better approximation of the target distribution. Theorem 2 For any natural number $k$, we have $\mathrm{KL}_{q,p}^{k}(\Theta;\mathcal{D})\ge\mathrm{KL}_{q,p}^{k+1}(\Theta;\mathcal{D})\ge 0$, and if $q(\Theta)/p(\Theta|\mathcal{D})$ is bounded, then $\lim_{k\to\infty}\mathrm{KL}_{q,p}^{k}(\Theta;\mathcal{D})=0$. We minimize (9) with a sample transformation based on a generalization of SVGD, and the recognition model (encoder) is trained in the same way as in Section 2.3. Specifically, we first draw samples $\{\theta_j^{1:k}\}_{j=1}^M$ and $\{z_{jn}^{1:k}\}_{j=1}^M$ from a simple distribution $q_0(\cdot)$, and convert these to approximate draws from $p(\theta^{1:k},Z^{1:k}|\mathcal{D})$ by minimizing the multi-sample importance-weighted KL divergence via a nonlinear functional transformation. 3.2 Importance-weighted SVGD for VAEs The following theorem generalizes Theorem 1 to the multi-sample importance-weighted KL divergence. Theorem 3 Let $\Theta^{1:k}$ be RVs drawn independently from distribution $q(\Theta)$, and let $\mathrm{KL}_{q,p}^{k}(\Theta;\mathcal{D})$ be the multi-sample importance-weighted KL divergence in (9). Let $T(\Theta)=\Theta+\epsilon\psi(\Theta;\mathcal{D})$ and let $q_T(\Theta)$ represent the distribution of $\Theta'=T(\Theta)$. We have
$$\nabla_\epsilon \mathrm{KL}_{q,p}^{k}(\Theta';\mathcal{D})\big|_{\epsilon=0}=-\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\big(\mathcal{A}_p^{k}(\Theta^{1:k};\mathcal{D})\big)\,. \quad (10)$$
The proof and detailed definitions are provided in Appendix A. The following corollaries generalize Theorem 1 and (4), respectively, via the use of importance sampling. Corollary 3.1 Let $\theta^{1:k}$ and $Z^{1:k}$ be RVs drawn independently from distributions $q(\theta)$ and $q(Z)$, respectively. Let $T(\theta)=\theta+\epsilon\psi(\theta;\mathcal{D})$, let $q_T(\theta)$ represent the distribution of $\theta'=T(\theta)$, and let $\Theta'=(\theta',Z)$.
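To make the multi-sample average inside (8) and (9) concrete, here is a small numpy sketch of $\log\frac{1}{k}\sum_i w_i$ computed stably from log-weights; the helper name `log_mean_w` is ours, and the max-subtraction (log-sum-exp) trick is a standard implementation choice rather than something specified in the paper.

```python
import numpy as np

def log_mean_w(log_w):
    """Given log-weights log_w with shape (n, k), return for each example
    log( (1/k) * sum_i exp(log_w_i) ), via the log-sum-exp trick."""
    m = log_w.max(axis=1, keepdims=True)                 # subtract max for stability
    lse = m[:, 0] + np.log(np.exp(log_w - m).sum(axis=1))
    return lse - np.log(log_w.shape[1])
```

Averaging this quantity over data with $\log w_i = \log p(x,z^i) - \log q(z^i|x)$ gives a Monte Carlo estimate of $\mathcal{L}_k$ in (8); the analogous quantity with posterior-to-$q$ ratios, negated, estimates (9).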
We have
$$\nabla_\epsilon \mathrm{KL}_{q_T,p}^{k}(\Theta';\mathcal{D})\big|_{\epsilon=0}=-\mathbb{E}_{\theta^{1:k}\sim q(\theta)}\big(\mathcal{A}_p^{k}(\theta^{1:k};\mathcal{D})\big)\,, \quad (11)$$
where $\mathcal{A}_p^{k}(\theta^{1:k};\mathcal{D})=\frac{1}{\tilde\omega}\sum_{i=1}^{k}\omega_i\mathcal{A}_p(\theta^i;\mathcal{D})$, $\omega_i=\mathbb{E}_{Z^i\sim q(Z)}\Big[\frac{p(\theta^i,Z^i,\mathcal{D})}{q(\theta^i)q(Z^i)}\Big]$, $\tilde\omega=\sum_{i=1}^{k}\omega_i$; $\mathcal{A}_p(\theta;\mathcal{D})$ and $\log\tilde p(\theta;\mathcal{D})$ are as defined in Theorem 1. Corollary 3.2 Assume $\psi(\theta;\mathcal{D})$ lives in a reproducing kernel Hilbert space (RKHS) with kernel $k_\theta(\cdot,\cdot)$. The solution for $\psi(\theta;\mathcal{D})$ that maximizes the decrease in the KL distance (11) is
$$\psi^*(\cdot;\mathcal{D})=\mathbb{E}_{\theta^{1:k}\sim q(\theta)}\Big[\frac{1}{\tilde\omega}\sum_{i=1}^{k}\omega_i\big(\nabla_{\theta^i}k_\theta(\theta^i,\cdot)+k_\theta(\theta^i,\cdot)\nabla_{\theta^i}\log\tilde p(\theta^i;\mathcal{D})\big)\Big]\,. \quad (12)$$
Corollary 3.1 and Corollary 3.2 provide a means of updating multiple samples $\{\theta_j^{1:k}\}_{j=1}^M$ from $q(\theta)$ via $T(\theta^i)=\theta^i+\epsilon\psi(\theta^i;\mathcal{D})$. The expectation w.r.t. $q(Z)$ is approximated via samples drawn from $q(Z)$. Similarly, we can employ a complementary form of Corollary 3.1 and Corollary 3.2 to update multiple samples $\{Z_j^{1:k}\}_{j=1}^M$ from $q(Z)$. This suggests an importance-weighted learning procedure that alternates between updates of the particles $\{\theta_j^{1:k}\}_{j=1}^M$ and $\{Z_j^{1:k}\}_{j=1}^M$, similar to the one in Section 2.2. Detailed update equations are provided in Appendix B. 4 Semi-Supervised Learning with Stein VAE Consider labeled data as pairs $\mathcal{D}_l=\{x_n,y_n\}_{n=1}^{N_l}$, where the label $y_n\in\{1,\dots,C\}$ and the decoder is modeled as $(x_n,y_n|z_n)\sim p(x,y|z_n;\theta,\tilde\theta)=p(x|z_n;\theta)p(y|z_n;\tilde\theta)$, where $\tilde\theta$ represents the parameters of the decoder for labels. The set of codes associated with all labeled data is represented as $Z_l=\{z_n\}_{n=1}^{N_l}$. We desire to approximate the posterior distribution on the entire dataset, $p(\theta,\tilde\theta,Z,Z_l|\mathcal{D},\mathcal{D}_l)$, via samples, where $\mathcal{D}$ represents the unlabeled data and $Z$ is the set of codes associated with $\mathcal{D}$. In the following, we discuss only how to update the samples of $\theta$, $\tilde\theta$ and $Z_l$. Updating samples $Z$ is the same as discussed in Sections 2 and 3.2 for Stein VAE and Stein VIWAE, respectively. Assume $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, $\{\tilde\theta_j\}_{j=1}^M$ drawn from distribution $q(\tilde\theta)$, and samples $\{z_{jn}\}_{j=1}^M$ drawn from the (distinct) distribution $q(Z_l)$.
The following corollary generalizes Theorem 1 and (4), and is useful for defining how to best update $\{\theta_j\}_{j=1}^M$. Corollary 3.3 Assume $\theta$, $\tilde\theta$, $Z$ and $Z_l$ are RVs drawn from distributions $q(\theta)$, $q(\tilde\theta)$, $q(Z)$ and $q(Z_l)$, respectively. Consider the transformation $T(\theta)=\theta+\epsilon\psi(\theta;\mathcal{D},\mathcal{D}_l)$, where $\psi(\theta;\mathcal{D},\mathcal{D}_l)$ lives in an RKHS with kernel $k_\theta(\cdot,\cdot)$. Let $q_T(\theta)$ represent the distribution of $\theta'=T(\theta)$. For $q_T=q_T(\theta)q(Z)q(\tilde\theta)$ and $p=p(\theta,\tilde\theta,Z|\mathcal{D},\mathcal{D}_l)$, we have
$$\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0}=-\mathbb{E}_{\theta\sim q(\theta)}\big(\mathcal{A}_p(\theta;\mathcal{D},\mathcal{D}_l)\big)\,, \quad (13)$$
where $\mathcal{A}_p(\theta;\mathcal{D},\mathcal{D}_l)=\nabla_\theta\psi(\theta;\mathcal{D},\mathcal{D}_l)+\nabla_\theta\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l)\psi(\theta;\mathcal{D},\mathcal{D}_l)^T$, $\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l)=\mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D}|Z,\theta)]+\mathbb{E}_{Z_l\sim q(Z_l)}[\log p(\mathcal{D}_l|Z_l,\theta)]$, and the solution for $\psi(\theta;\mathcal{D},\mathcal{D}_l)$ that maximizes the change in the KL distance (13) is
$$\psi^*(\cdot;\mathcal{D},\mathcal{D}_l)=\mathbb{E}_{q(\theta)}\big[k(\theta,\cdot)\nabla_\theta\log\tilde p(\theta;\mathcal{D},\mathcal{D}_l)+\nabla_\theta k(\theta,\cdot)\big]\,. \quad (14)$$
Further details are provided in Appendix C. 5 Experiments For all experiments, we use a radial basis-function (RBF) kernel as in [15], i.e., $k(x,x')=\exp(-\frac{1}{h}\|x-x'\|_2^2)$, where the bandwidth $h$ is the median of the pairwise distances between current samples. $q_0(\theta)$ and $q_0(\xi)$ are set to isotropic Gaussian distributions. We share the samples of $\xi$ across data points, i.e., $\xi_{jn}=\xi_j$ for $n=1,\dots,N$ (this is not necessary, but it saves computation). The samples of $\theta$ and $z$, and the parameters of the recognition model, $\eta$, are optimized via Adam [9] with learning rate 0.0002. We do not perform any dataset-specific tuning or regularization other than dropout [32] and early stopping on validation sets. We set $M=100$ and $k=50$, and use minibatches of size 64 for all experiments, unless otherwise specified. 5.1 Expressive power of the Stein recognition model Gaussian Mixture Model We synthesize data by (i) drawing $z_n\sim\frac{1}{2}\mathcal{N}(\mu_1,I)+\frac{1}{2}\mathcal{N}(\mu_2,I)$, where $\mu_1=[5,5]^T$, $\mu_2=[-5,-5]^T$; (ii) drawing $x_n\sim\mathcal{N}(\theta z_n,\sigma^2 I)$, where $\theta=\begin{pmatrix}2&-1\\1&-2\end{pmatrix}$ and $\sigma=0.1$. The recognition model $f_\eta(x_n,\xi_j)$ is specified as a multi-layer perceptron (MLP) with 100 hidden units, formed by first concatenating $\xi_j$ and $x_n$ into a long vector.
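The synthetic-data generation just described can be sketched directly. Note that the loading matrix was flattened in the source; reading it as $\theta=[[2,-1],[1,-2]]$ is our assumption, as is the function name `synthesize_gmm`.

```python
import numpy as np

def synthesize_gmm(n, sigma=0.1, seed=0):
    """Draw z_n from an equal-weight two-component Gaussian mixture,
    then x_n = theta @ z_n + sigma * noise."""
    rng = np.random.default_rng(seed)
    mu = np.array([[5.0, 5.0], [-5.0, -5.0]])
    theta = np.array([[2.0, -1.0], [1.0, -2.0]])   # assumed reading of the flattened matrix
    comp = rng.integers(0, 2, size=n)               # mixture-component assignments
    z = mu[comp] + rng.standard_normal((n, 2))
    x = z @ theta.T + sigma * rng.standard_normal((n, 2))
    return x, z
```

With a small $\sigma$, the observations fall in two tight clusters around $\theta\mu_1$ and $\theta\mu_2$, which is what makes the posterior over $z$ multi-modal and interesting for this comparison.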
The dimension of $\xi_j$ is set to 2. The recognition model for the standard VAE is also an MLP with 100 hidden units, and with the assumption of a Gaussian distribution for the latent codes [11]. Figure 1: Approximation of the posterior distribution: Stein VAE vs. VAE, with different numbers of Stein VAE samples: (left) 10 samples, (center) 50 samples, and (right) 100 samples. We generate N = 10,000 data points for training and 10 data points for testing. The analytic form of the true posterior distribution is provided in Appendix D. Figure 1 shows the performance of the Stein VAE approximations to the true posterior; other similar examples are provided in Appendix F. The Stein recognition model is able to capture the multi-modal posterior and produce an accurate density approximation. Figure 2: Univariate marginals and pairwise posteriors. Purple, red and green represent the distributions inferred from MCMC, the standard VAE and Stein VAE, respectively. Poisson Factor Analysis Given a discrete vector $x_n\in\mathbb{Z}_+^P$, Poisson factor analysis [36] assumes $x_n$ is a weighted combination of $V$ latent factors, $x_n\sim\mathrm{Pois}(\theta z_n)$, where $\theta\in\mathbb{R}_+^{P\times V}$ is the factor loadings matrix and $z_n\in\mathbb{R}_+^V$ is the vector of factor scores. We consider topic modeling with Dirichlet priors on $\theta_v$ (the $v$-th column of $\theta$) and gamma priors on each component of $z_n$. We evaluate our model on the 20 Newsgroups dataset containing N = 18,845 documents with a vocabulary of P = 2,000. The data are partitioned into 10,314 training, 1,000 validation and 7,531 test documents. The number of factors (topics) is set to V = 128. $\theta$ is first learned by Markov chain Monte Carlo (MCMC) [4]. We then fix $\theta$ at its MAP value, and only learn the recognition model $\eta$ using the standard VAE and Stein VAE; this is done, as in the previous example, to examine the accuracy of the recognition model in estimating the posterior of the latent factors, isolated from estimation of $\theta$. The recognition model is an MLP with 100 hidden units.
Table 1: Negative log-likelihood (NLL) on MNIST. †Trained with VAE and tested with IWAE. ‡Trained and tested with IWAE.

  Method                  NLL
  DGLM [27]               89.90
  Normalizing flow [28]   85.10
  VAE + IWAE [1]†         86.76
  IWAE + IWAE [1]‡        84.78
  Stein VAE + ELBO        85.21
  Stein VAE + S-ELBO      84.98
  Stein VIWAE + ELBO      83.01
  Stein VIWAE + S-ELBO    82.88

An analytic form of the true posterior distribution $p(z_n|x_n)$ is intractable for this problem. Consequently, we employ samples collected from MCMC as ground truth. With $\theta$ fixed, we sample $z_n$ via Gibbs sampling, using 2,000 burn-in iterations followed by 2,500 collection draws, retaining every 10th collection sample. We show the marginal and pairwise posteriors of one test data point in Figure 2. Additional results are provided in Appendix F. Stein VAE leads to a more accurate approximation than the standard VAE, compared to the MCMC samples. Considering Figure 2, note that the VAE significantly underestimates the variance of the posterior (examining the marginals), a well-known problem of variational Bayesian analysis [7]. In sharp contrast, Stein VAE yields highly accurate approximations to the true posterior. 5.2 Density estimation Data We consider five benchmark datasets: MNIST and four text corpora: 20 Newsgroups (20News), New York Times (NYT), Science and RCV1-v2 (RCV2). For MNIST, we used the standard split of 50K training, 10K validation and 10K test examples. The latter three text corpora consist of 133K, 166K and 794K documents, respectively. These three datasets are split into 1K validation, 10K testing and the rest for training. Evaluation Given new data $x_*$ (testing data), the marginal log-likelihood/perplexity values are estimated by the variational evidence lower bound (ELBO) while integrating out the decoder parameters $\theta$:
$$\log p(x_*)\ge\mathbb{E}_{q(z_*)}[\log p(x_*,z_*)]+\mathcal{H}(q(z_*))=\mathrm{ELBO}(q(z_*))\,,$$
where $\log p(x_*,z_*)=\mathbb{E}_{q(\theta)}[\log p(x_*,\theta,z_*)]$ and $\mathcal{H}(q(\cdot))=-\mathbb{E}_q[\log q(\cdot)]$ is the entropy.
The expectation is approximated with samples $\{\theta_j\}_{j=1}^M$ and $\{z_{*j}\}_{j=1}^M$, with $z_{*j}=f_\eta(x_*,\xi_j)$, $\xi_j\sim q_0(\xi)$. Directly evaluating $q(z_*)$ is intractable; it is thus estimated via the density transformation $q(z)=q_0(\xi)\,\big|\det\frac{\partial f_\eta(x,\xi)}{\partial\xi}\big|^{-1}$.

Table 2: Test perplexities on four text corpora.

  Method                  20News   NYT    Science   RCV2
  DocNADE [14]            896      2496   1725      742
  DEF [24]                —        2416   1576      —
  NVDM [17]               852      —      —         550
  Stein VAE + ELBO        849      2402   1499      549
  Stein VAE + S-ELBO      845      2401   1497      544
  Stein VIWAE + ELBO      837      2315   1453      523
  Stein VIWAE + S-ELBO    829      2277   1421      518

We further estimate the marginal log-likelihood/perplexity values via the stochastic variational lower bound, as the mean of a 5K-sample importance-weighting estimate [1]. Therefore, for each dataset, we report four results: (i) Stein VAE + ELBO, (ii) Stein VAE + S-ELBO, (iii) Stein VIWAE + ELBO and (iv) Stein VIWAE + S-ELBO; the first term denotes whether the training procedure is Stein VAE (Section 2) or Stein VIWAE (Section 3); the second term denotes whether the testing log-likelihood/perplexity is estimated by the ELBO or by the stochastic variational lower bound, S-ELBO [1]. Model For MNIST, we train the model with one stochastic layer, $z_n$, with 50 hidden units and two deterministic layers, each with 200 units. The nonlinearity is set to tanh. The visible layer, $x_n$, follows a Bernoulli distribution. For the text corpora, we build a three-layer deep Poisson network [24]. The sizes of the hidden units are 200, 200 and 50 for the first, second and third layers, respectively (see [24] for detailed architectures). Figure 3: NLL vs. training/testing time on MNIST with various numbers of samples for $\theta$. Results The log-likelihood/perplexity results are summarized in Tables 1 and 2.
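The density-transformation step above can be sketched for the simple case where $f_\eta(x,\cdot)$ is invertible in $\xi$ with a computable Jacobian; the helper name `log_q_z` and the use of `numpy.linalg.slogdet` are our illustrative choices for this sketch.

```python
import numpy as np

def log_q_z(log_q0_xi, jac_f_xi):
    """Change of variables for z = f(x, xi):
    log q(z) = log q0(xi) - log |det d f / d xi|."""
    sign, logabsdet = np.linalg.slogdet(jac_f_xi)  # stable log |det J|
    return log_q0_xi - logabsdet
```

For example, if $f$ simply scales a 2-D $\xi$ by 3, the Jacobian is $3I_2$ and the log-density drops by $2\log 3$, reflecting the volume expansion.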
On MNIST, our Stein VAE achieves a variational lower bound of -85.21 nats, which outperforms the standard VAE with the same model architecture. Our Stein VIWAE achieves a log-likelihood of -82.88 nats, exceeding normalizing flow (-85.1 nats) and the importance weighted autoencoder (-84.78 nats), the best prior result obtained with a feedforward neural network (FNN). DRAW [5] and PixelRNN [20], which exploit spatial structure, achieved log-likelihoods of around -80 nats. Our approach could also be applied to these models, but this is left as interesting future work. To further illustrate the benefit of model averaging, we vary the number of samples for $\theta$ (while retaining 100 samples for $Z$) and show the results together with training/testing time in Figure 3. When $M=1$ for $\theta$, our model reduces to a point estimate for that parameter. Increasing the number of samples of $\theta$ (model averaging) improves the negative log-likelihood (NLL). The testing time using 100 samples of $\theta$ is around 0.12 ms per image. 5.3 Semi-supervised Classification We consider semi-supervised classification on the MNIST and ImageNet [29] data. For each dataset, we report the results obtained by (i) VAE, (ii) Stein VAE, and (iii) Stein VIWAE. MNIST We randomly split the training set into labeled and unlabeled sets, and the number of labeled samples in each category varies from 10 to 300. We perform testing on the standard test set with 20 different training-set splits. The decoder for labels is implemented as $p(y_n|z_n,\tilde\theta)=\mathrm{softmax}(\tilde\theta z_n)$. We consider two types of decoders for images, $p(x_n|z_n,\theta)$, and encoders, $f_\eta(x,\xi)$: (i) FNN: Following [12], we use a 50-dimensional latent variable $z_n$ and two hidden layers, each with 600 hidden units, for both the encoder and decoder; softplus is employed as the nonlinear activation function. (ii) All convolutional nets (CNN): Inspired by [31], we replace the two hidden layers with 32 and 64 kernels of size 5 × 5 and a stride of 2.
A fully connected layer is stacked on the CNN to produce a 50-dimensional latent variable $z_n$. We use the leaky rectified activation [16]. The input of the encoder is formed by spatially aligning and stacking $x_n$ and $\xi$, while the output of the decoder is the image itself.

Table 3: Semi-supervised classification error (%) on MNIST. $N_\rho$ is the number of labeled images per class. §[12]; †our implementation.

  $N_\rho$   VAE§ (FNN)    Stein VAE (FNN)   Stein VIWAE (FNN)   VAE† (CNN)    Stein VAE (CNN)   Stein VIWAE (CNN)
  10         3.33 ± 0.14   2.78 ± 0.24       2.67 ± 0.09         2.44 ± 0.17   1.94 ± 0.24       1.90 ± 0.05
  60         2.59 ± 0.05   2.13 ± 0.08       2.09 ± 0.03         1.88 ± 0.05   1.44 ± 0.04       1.41 ± 0.02
  100        2.40 ± 0.02   1.92 ± 0.05       1.88 ± 0.01         1.47 ± 0.02   1.01 ± 0.03       0.99 ± 0.02
  300        2.18 ± 0.04   1.77 ± 0.03       1.75 ± 0.01         0.98 ± 0.02   0.89 ± 0.03       0.86 ± 0.01

Table 3 shows the classification results. Our Stein VAE and Stein VIWAE consistently achieve better performance than the VAE. We further observe that the variance of the Stein VIWAE results is much smaller than that of the Stein VAE results on small labeled sets, indicating that the former produces more robust parameter estimates. State-of-the-art results [26] are achieved by the Ladder network, which could be combined with our Stein-based approach; we consider this extension as future work.

Table 4: Semi-supervised classification accuracy (%) on ImageNet.

  Labeled   VAE            Stein VAE      Stein VIWAE    DGDN [21]
  1 %       35.92 ± 1.91   36.44 ± 1.66   36.91 ± 0.98   43.98 ± 1.15
  2 %       40.15 ± 1.52   41.71 ± 1.14   42.57 ± 0.84   46.92 ± 1.11
  5 %       44.27 ± 1.47   46.14 ± 1.02   46.20 ± 0.52   47.36 ± 0.91
  10 %      46.92 ± 1.02   47.83 ± 0.88   48.67 ± 0.31   48.41 ± 0.76
  20 %      50.43 ± 0.41   51.62 ± 0.24   51.77 ± 0.12   51.51 ± 0.28
  30 %      53.24 ± 0.33   55.02 ± 0.22   55.45 ± 0.11   54.14 ± 0.12
  40 %      56.89 ± 0.11   58.17 ± 0.16   58.21 ± 0.12   57.34 ± 0.18

ImageNet 2012 We consider the scalability of our model to large datasets. We split the 1.3 million training images into unlabeled and labeled sets, and vary the proportion of labeled images from 1% to 40%.
The classes are balanced to ensure that no particular class is over-represented, i.e., the ratio of labeled to unlabeled images is the same for each class. We repeat the training process 10 times for the settings with 1% to 10% labeled images, and 5 times for the settings with 20% to 40% labeled images, each time utilizing different sets of images as the unlabeled ones. We employ an all convolutional net [31] for both the encoder and decoder, which replaces deterministic pooling (e.g., max-pooling) with strided convolutions. Residual connections [8] are incorporated to encourage gradient flow. The model architecture is detailed in Appendix E. Following [13], images are resized to 256 × 256. A 224 × 224 crop is randomly sampled from each image or its horizontal flip, with the mean subtracted [13]. We set M = 20 and k = 10. Table 4 shows classification results indicating that Stein VAE and Stein VIWAE outperform the VAE in all the experiments, demonstrating the effectiveness of our approach for semi-supervised classification. When the proportion of labeled examples is too small (< 10%), DGDN [21] outperforms all the VAE-based models, which is not surprising given that our models are deeper and thus have considerably more parameters than DGDN [21]. 6 Conclusion We have employed SVGD to develop a new method for learning a variational autoencoder, in which we need not specify an a priori form for the encoder distribution. Fast inference is achieved by learning a recognition model that mimics the dynamics by which the inferred code samples evolve. The method is further generalized and improved by performing importance sampling. An extensive set of results, for unsupervised and semi-supervised learning, demonstrates excellent performance and scaling to large datasets. Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF. References [1] Y. Burda, R. Grosse, and R. Salakhutdinov.
Importance weighted autoencoders. In ICLR, 2016. [2] L. Chen, S. Dai, Y. Pu, C. Li, Q. Su, and L. Carin. Symmetric variational autoencoder and connections to adversarial learning. In arXiv, 2017. [3] Y. Feng, D. Wang, and Q. Liu. Learning to draw samples with amortized Stein variational gradient descent. In UAI, 2017. [4] Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin. Scalable deep Poisson factor analysis for topic modeling. In ICML, 2015. [5] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015. [6] J. Han and Q. Liu. Stein variational adaptive importance sampling. In UAI, 2017. [7] S. Han, X. Liao, D. B. Dunson, and L. Carin. Variational Gaussian copula inference. In AISTATS, 2016. [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. [9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [10] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improving variational inference with inverse autoregressive flow. In NIPS, 2016. [11] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014. [12] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014. [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. [14] H. Larochelle and S. Lauly. A neural autoregressive topic model. In NIPS, 2012. [15] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In NIPS, 2016. [16] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013. [17] Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. In ICML, 2016. [18] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks.
In ICML, 2014. [19] A. Mnih and D. J. Rezende. Variational inference for Monte Carlo objectives. In ICML, 2016. [20] A. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016. [21] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016. [22] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop, 2015. [23] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image model. In AISTATS, 2016. [24] R. Ranganath, L. Tang, L. Charlin, and D. M. Blei. Deep exponential families. In AISTATS, 2015. [25] R. Ranganath, D. Tran, and D. M. Blei. Hierarchical variational models. In ICML, 2016. [26] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015. [27] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. [28] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015. [29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2014. [30] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for text sequence matching. In arXiv, 2017. [31] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR workshop, 2015. [32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014. [33] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol.
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 2010. [34] Y. Pu, W. Wang, R. Henao, L. Chen, Z. Gan, C. Li, and L. Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017. [35] Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin. Deconvolutional paragraph representation learning. In NIPS, 2017. [36] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.
Approximation and Convergence Properties of Generative Adversarial Learning Shuang Liu University of California, San Diego shuangliu@ucsd.edu Olivier Bousquet Google Brain obousquet@google.com Kamalika Chaudhuri University of California, San Diego kamalika@cs.ucsd.edu Abstract Generative adversarial networks (GAN) approximate a target data distribution by jointly optimizing an objective function through a "two-player game" between a generator and a discriminator. Despite their empirical success, however, two very basic questions on how well they can approximate the target distribution remain unanswered. First, it is not known how restricting the discriminator family affects the approximation quality. Second, while a number of different objective functions have been proposed, we do not understand when convergence to the global minima of the objective function leads to convergence to the target distribution under various notions of distributional convergence. In this paper, we address these questions in a broad and unified setting by defining a notion of adversarial divergences that includes a number of recently proposed objective functions. We show that if the objective function is an adversarial divergence with some additional conditions, then using a restricted discriminator family has a moment-matching effect. Additionally, we show that for objective functions that are strict adversarial divergences, convergence in the objective function implies weak convergence, thus generalizing previous results. 1 Introduction Generative adversarial networks (GANs) have attracted an enormous amount of recent attention in machine learning. In a generative adversarial network, the goal is to produce an approximation to a target data distribution from which only samples are available. This is done iteratively via two components – a generator and a discriminator, which are usually implemented by neural networks.
The generator takes in random (usually Gaussian or uniform) noise as input and attempts to transform it to match the target distribution; the discriminator aims to accurately discriminate between samples from the target distribution and those produced by the generator. Estimation proceeds by iteratively refining the generator and the discriminator to optimize an objective function until the target distribution is indistinguishable from the distribution induced by the generator. The practical success of GANs has led to a large volume of recent literature on variants which have many desirable properties; examples are the f-GAN [10], the MMD-GAN [5, 9], the Wasserstein-GAN [2], among many others. In spite of their enormous practical success, unlike more traditional methods such as maximum likelihood inference, GANs are theoretically rather poorly-understood. In particular, two very basic questions on how well they can approximate the target distribution, even in the presence of a very large number of samples and perfect optimization, remain largely unanswered. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. The first relates to the role of the discriminator in the quality of the approximation. In practice, the discriminator is usually restricted to belong to some family, and it is not understood in what sense this restriction affects the distribution output by the generator. The second question relates to convergence; different variants of GANs have been proposed that involve different objective functions (to be optimized by the generator and the discriminator). However, it is not understood under what conditions minimizing the objective function leads to a good approximation of the target distribution.
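The two-player game described above can be made concrete with a small numerical sketch. Everything in the snippet is an illustrative choice of ours, not from the paper: a 1D Gaussian target, a generator that simply shifts Gaussian noise, and the original GAN criterion with the inner supremum approximated by grid search over a small sigmoid discriminator family. When the generated distribution matches the target, the best discriminator is the constant 1/2 and the criterion sits at its minimum of -log 4; a mismatched generator lets the discriminator do strictly better.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 5000)  # samples from a toy target distribution N(2, 1)
z = rng.normal(0.0, 1.0, 5000)  # generator input noise

def gan_objective(real, fake,
                  ws=np.linspace(-4, 4, 41), bs=np.linspace(-2, 6, 41)):
    """Approximate sup_u E[log u(x)] + E[log(1 - u(y))] over the sigmoid
    discriminator family u(x) = sigmoid(w * (x - b)), by grid search."""
    best = -np.inf
    for w in ws:
        for b in bs:
            u_real = 1.0 / (1.0 + np.exp(-w * (real - b)))
            u_fake = 1.0 / (1.0 + np.exp(-w * (fake - b)))
            val = (np.log(u_real + 1e-12).mean()
                   + np.log(1.0 - u_fake + 1e-12).mean())
            best = max(best, val)
    return best

mismatched = gan_objective(x, z)       # generator outputs N(0, 1): far from target
matched = gan_objective(x, z + 2.0)    # shift generator so its output matches N(2, 1)
print(mismatched, matched)             # matched value sits near -log(4)
```

The restricted sigmoid family plays the role of the discriminator class discussed below: restricting it changes which pairs of distributions the criterion can tell apart.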
More precisely, does a sequence of distributions output by the generator that converges to the global minimum under the objective function always converge to the target distribution under some standard notion of distributional convergence? In this work, we consider these two questions in a broad setting. We first characterize a very general class of objective functions that we call adversarial divergences, and we show that they capture the objective functions used by a variety of existing procedures that include the original GAN [7], f-GAN [10], MMD-GAN [5, 9], WGAN [2], improved WGAN [8], as well as a class of entropic regularized optimal transport problems [6]. We then define the class of strict adversarial divergences – a subclass of adversarial divergences where the minimizer of the objective function is uniquely the target distribution. This characterization allows us to address the two questions above in a unified setting, and translate the results to an entire class of GANs with little effort. First, we address the role of the discriminator in the approximation in Section 4. We show that if the objective function is an adversarial divergence that obeys certain conditions, then using a restricted class of discriminators has the effect of matching generalized moments. A concrete consequence of this result is that in linear f-GANs, where the discriminator family is the set of all affine functions over a vector of features maps, and the objective function is an f-GAN, the optimal distribution output by the GAN will satisfy Ex[ (x)] = Ex[ (x)] regardless of the specific f-divergence chosen in the objective function. Furthermore, we show that a neural network GAN is just a supremum of linear GANs, therefore has the same moment-matching effect. We next address convergence in Section 5. We show that convergence in an adversarial divergence implies some standard notion of topological convergence. 
Particularly, we show that provided an objective function is a strict adversarial divergence, convergence to in the objective function implies weak convergence of the output distribution to . While convergence properties of some isolated objective functions were known before [2], this result extends them to a broad class of GANs. An additional consequence of this result is the observation that as the Wasserstein distance metrizes weak convergence of probability distributions (see e.g. [14]), Wasserstein-GANs have the weakest1 objective functions in the class of strict adversarial divergences. 2 Notations We use bold constants (e.g., 0, 1, x0) to denote constant functions. We denote by f g the function composition of f and g. We denote by Y X the set of functions maps from the set X to the set Y . We denote by the product measure of and . We denote by int(X) the interior of the set X. We denote by E[f] the integral of f with respect to measure . Let f : R ! R [ f+1g be a convex function, we denote by dom f the effective domain of f, that is, dom f = fx 2 R; f(x) < +1g; and we denote by f the convex conjugate of f, that is, f (x) = supx2R fx x f(x)g. For a topological space , we denote by C( ) the set of continuous functions on , Cb( ) the set of bounded continuous functions on , rca( ) the set of finite signed regular Borel measures on , and P( ) the set of probability measures on . Given a non-empty subspace Y of a topological space X, denote by X=Y the quotient space equipped with the quotient topology Y , where for any a; b 2 X, a Y b if and only if a = b or a; b both belong to Y . The equivalence class of each element a 2 X is denoted as [a] = fb : a Y bg. 1Weakness is actually a desirable property since it prevents the divergence from being too discriminative (saturate), thus providing more information about how to modify the model to approximate the true distribution. 2 3 General Framework Let be the target data distribution from which we can draw samples. 
Our goal is to find a generative model ν to approximate μ. Informally, most GAN-style algorithms model this approximation as solving the following problem: inf_ν sup_{f∈F} E_{x∼μ, y∼ν}[f(x, y)], where F is a class of functions. The process is usually considered adversarial in the sense that it can be thought of as a two-player minimax game, where a generator ν is trying to mimic the true distribution μ, and an adversary f is trying to distinguish between the true and generated distributions. However, another way to look at it is as the minimization of the following objective function ν ↦ sup_{f∈F} E_{x∼μ, y∼ν}[f(x, y)] (1) This objective function measures how far the target distribution μ is from the current estimate ν. Hence, minimizing this function can lead to a good approximation of the target distribution μ. This leads us to the concept of adversarial divergence. Definition 1 (Adversarial divergence). Let X be a topological space, F ⊆ Cb(X²), F ≠ ∅. An adversarial divergence τ over X is a function τ : P(X) × P(X) → R ∪ {+∞}, (μ, ν) ↦ τ(μ‖ν) = sup_{f∈F} E_{μ⊗ν}[f]. (2) Observe that in Definition 1, if we have a fixed target distribution μ, then (2) reduces to the objective function (1). Also, notice that because τ(μ‖ν) is the supremum of a family of linear functions (in each of the variables μ and ν separately), it is convex in each of its variables. Definition 1 captures the objective functions used by a variety of existing GAN-style procedures. In practice, although the function class F can be complicated, it is usually a transformation of a simple function class V, which is the set of discriminators or critics, as they have been called in the GAN literature. We give some examples by specifying F and V for each objective function. (a) GAN [7]. F = {x, y ↦ log(u(x)) + log(1 − u(y)) : u ∈ V}, V = (0, 1)^X ∩ Cb(X). (b) f-GAN [10]. Let f : R → R ∪ {∞} be a convex lower semi-continuous function.
Assume f*(x) ≥ x for any x ∈ R, f* is continuously differentiable on int(dom f*), and there exists x₀ ∈ int(dom f*) such that f*(x₀) = x₀. F = {x, y ↦ v(x) − f*(v(y)) : v ∈ V}, V = (dom f*)^X ∩ Cb(X). (c) MMD-GAN [5, 9]. Let k : X² → R be a universal reproducing kernel. Let M be the set of signed measures on X. F = {x, y ↦ v(x) − v(y) : v ∈ V}, V = {x ↦ E_{μ'}[k(x, ·)] : μ' ∈ M, E_{μ'⊗μ'}[k] ≤ 1}. (d) Wasserstein-GAN (WGAN) [2]. Assume X is a metric space. F = {x, y ↦ v(x) − v(y) : v ∈ V}, V = {v ∈ Cb(X) : ‖v‖_Lip ≤ K}, where K is a positive constant and ‖·‖_Lip denotes the Lipschitz constant. (e) WGAN-GP (Improved WGAN) [8]. Assume X is a convex subset of a Euclidean space. F = {x, y ↦ v(x) − v(y) − λ E_{t∼U}[(‖∇v(tx + (1 − t)y)‖₂ − 1)^p] : v ∈ V}, V = C¹(X), where U is the uniform distribution on [0, 1], λ is a positive constant, and p ∈ (1, ∞). (f) (Regularized) Optimal Transport [6].² Let c : X² → R be some transportation cost function, and λ ≥ 0 the strength of regularization. If λ = 0 (no regularization), then F = {x, y ↦ u(x) + v(y) : (u, v) ∈ V}, (3) V = {(u, v) ∈ Cb(X) × Cb(X) : u(x) + v(y) ≤ c(x, y) for any x, y ∈ X}; if λ > 0, then F = {x, y ↦ u(x) + v(y) − λ exp((u(x) + v(y) − c(x, y))/λ) : u, v ∈ V}, (4) V = Cb(X). In order to study an adversarial divergence τ, it is critical to first understand at which points the divergence is minimized. More precisely, let τ be an adversarial divergence and μ be the target probability measure. We are interested in the set of probability measures that minimize the divergence when the first argument of τ is set to μ, i.e., the set arg min_ν τ(μ‖ν) = {ν : τ(μ‖ν) = inf_{ν'} τ(μ‖ν')}. Formally, we define the set OPT_{τ,μ} as follows. Definition 2 (OPT_{τ,μ}). Let τ be an adversarial divergence over a topological space X, μ ∈ P(X). Define OPT_{τ,μ} to be the set of probability measures that minimize the function τ(μ‖·). That is, OPT_{τ,μ} := {ν ∈ P(X) : τ(μ‖ν) = inf_{ν'∈P(X)} τ(μ‖ν')}. Ideally, the target probability measure should be one and the only one that minimizes the objective function.
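Two of the objective functions listed above have simple empirical estimates in one dimension, which makes the abstract definitions easy to probe. The sketch below (distributions, sample sizes, and kernel bandwidth are our illustrative choices) estimates the MMD-GAN objective of example (c) with a Gaussian kernel, and the WGAN objective of example (d), which for K = 1 on the real line reduces to the Wasserstein-1 distance between the empirical distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)  # samples from the target
y = rng.normal(0.5, 1.0, 2000)  # samples from a shifted candidate

def mmd2(a, b, bw=1.0):
    """Biased estimate of squared MMD with a Gaussian kernel (example (c))."""
    k = lambda u, v: np.exp(-((u[:, None] - v[None, :]) ** 2) / (2.0 * bw ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

def w1(a, b):
    """Empirical Wasserstein-1 distance in 1D (example (d) with K = 1):
    mean absolute difference of sorted samples of equal size."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

print(mmd2(x, y), w1(x, y))  # both strictly positive: the shift is detected
print(mmd2(x, x), w1(x, x))  # both zero when the two samples coincide
```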
The notion of strict adversarial divergence captures this property. Definition 3 (Strict adversarial divergence). Let τ be an adversarial divergence over a topological space X; τ is called a strict adversarial divergence if for any μ ∈ P(X), OPT_{τ,μ} = {μ}. For example, if the underlying space X is a compact metric space, then examples (c) and (d) induce metrics on P(X) (see, e.g., [12]), and are therefore strict adversarial divergences. In the next two sections, we will answer two questions regarding the set OPT_{τ,μ}: how well do the elements of OPT_{τ,μ} approximate the target distribution when we restrict the class of discriminators? (Section 4); and does a sequence of distributions that converges in an adversarial divergence also converge to OPT_{τ,μ} under some standard notion of distributional convergence? (Section 5) 4 Generalized Moment Matching To motivate the discussion in this section, recall example (b) in Section 3. It can be shown that under some mild conditions, τ, the objective function of the f-GAN, is actually the f-divergence, and the minimizer of τ(μ‖·) is only μ [10]. However, in practice, the discriminator class V is usually implemented by a feedforward neural network, and it is known that a fixed neural network has limited capacity (e.g., it cannot implement the set of all bounded continuous functions). Therefore, one could ask what will happen if we restrict V to a sub-class V₀? Obviously one would not expect μ to be the unique minimizer of τ(μ‖·) anymore; that is, OPT_{τ,μ} contains elements other than μ. What can we say about the elements of OPT_{τ,μ} now? Are all of them close to μ in a certain sense? In this section we will answer these questions. More formally, we consider F = {m_θ − r_θ : θ ∈ Θ} to be a function class indexed by a set Θ. We can think of Θ as the parameter set of a feedforward neural network. Each m_θ is thought of as a matching between two distributions, in the sense that μ and ν are matched under m_θ if and only if E_{μ⊗ν}[m_θ] = 0.
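When each matching function has the form m(x, y) = v(x) − v(y) with v restricted to norm-bounded linear functions of a fixed feature map, the supremum defining the divergence is available in closed form, and the matching condition is exactly equality of generalized moments. A small numerical sketch (the feature map ψ(x) = (x, x²) and the unit-norm constraint are our illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two visibly different distributions with identical first two moments:
x = rng.normal(0.0, 1.0, 200_000)                      # N(0, 1)
y = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), 200_000)  # uniform: mean 0, variance 1

def psi(s):
    """Feature map: the generalized moments are E[s] and E[s^2]."""
    return np.stack([s, s ** 2])

def linear_critic_divergence(a, b):
    """sup over critics v(x) = theta . psi(x) with ||theta|| <= 1 of E_a[v] - E_b[v].
    By Cauchy-Schwarz, the sup equals the norm of the moment gap."""
    gap = psi(a).mean(axis=1) - psi(b).mean(axis=1)
    return float(np.linalg.norm(gap))

print(linear_critic_divergence(x, y))        # ~0: moments match, linear critics are blind
print(linear_critic_divergence(x, y + 0.5))  # clearly positive once moments differ
```

This is the moment-matching effect in miniature: the restricted critic class cannot separate two genuinely different distributions whose moments agree on the chosen features.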
In particular, if each m is corresponding to some function v such that m(x; y) = v(x) v(y), then and are matched under m if and only if some generalized moment of and are equal: E[v] = E[v]. Each r can be thought as a residual. We will now relate the matching condition to the optimality of the divergence. In particular, define M 4= f : 8 2 ; E[v] = E[v]g ; We will give sufficients conditions for members of M to be in OPT;. 2To the best of our knowledge, neither (3) or (4) was used in any GAN algorithm. However, since our focus in this paper is not implementing new algorithms, we leave experiments with this formulation for future work. 4 Theorem 4. Let X be a topological space, Rn, V = fv 2 Cb(X) : 2 g, R = r 2 Cb(X2) : 2 . Let m(x; y) = v(x) v(y). If there exists c 2 R such that for any ; 2 P(X), inf2 E [r] = c and there exists some 2 such that E [r ] = c and E [m ] 0, then (jj) = sup2 E [m r] is an adversarial divergence over X and for any 2 P(X), OPT; M : We now review the examples (a)-(e) in Section 3, show how to write each f 2 F into m r, and specify in each case such that the conditions of Theorem 4 can be satisfied. (a) GAN. Note that for any x 2 (0; 1), log (1=(x(1 x))) log(4). Let u = 1 2, f(x; y) = log(u(x)) + log(1 u(y)) = log(u(x)) log(u(y)) | {z } m(x;y) note E h m i =0 log (1= (u(y)(1 u(y)))) | {z } r(x;y) note r(x;y)r (x;y)=log(4) : (b) f-GAN. Recall that f (x) x 0 for any x 2 R and f (x0) = x0. Let v = x0, f(x; y) = v(x) f (v(y)) = v(x) v(y) | {z } m(x;y) note E h m i =0 (f (v(y)) v(y)) | {z } r(x;y) note r(x;y)r (x;y)=0 : (5) (c, d) MMD-GAN or Wasserstein-GAN. Let v = 0, f(x; y) = v(x) v(y) | {z } m(x;y) note E h m i =0 0 |{z} r(x;y) note r(x;y)=r (x;y)=0 : (e) WGAN-GP. Note that the function x 7! xp is nonnegative on R. Let v = ( (x1; x2; ; xn) 7! Pn i=1 xi pn ; if E[Pn i=1 xi] E[Pn i=1 xi], (x1; x2; ; xn) 7! 
Pn i=1 xi pn ; otherwise; f(x; y) = v(x) v(y) | {z } m(x;y) note E h m i 0 EtU [(krv(tx + (1 t)y)k2 1)p] | {z } r(x;y) note r(x;y)r (x;y)=0 : We now refine the previous result and show that under some additional conditions on m and r, the optimal elements of are fully characterized by the matching condition, i.e. OPT; = M. Theorem 5. Under the assumptions of Theorem 4, if 2 int() and both 7! E [m] and 7! E [r] have gradients at , and E [m ] = 0 and 90; E [m0] 6= 0 =) r E [m] 6= 0: (6) Then for any 2 P(X), OPT; = M: We remark that Theorem 4 is relatively intuitive, while Theorem 5 requires extra conditions, and is quite counter-intuitive especially for algorithms like f-GANs. 4.1 Example: Linear f-GAN We first consider a simple algorithm called linear f-GAN. Suppose we are provided with a feature map that maps each point x in the sample space X to a feature vector ( 1(x); 2(x); ; n(x)) where each i 2 Cb(X). We are satisfied that any distribution is a good approximation of the target distribution as long as E[ ] = E[ ]. For example, if X R and k(x) = xk, to say E[ ] = E[ ] is equivalent to say the first n moments of and are matched. Recall that in the standard f-GAN (example (b) in Section 3), V = (dom f )X \ Cb(X). Now instead of using the discriminator class V, we use a restricted discriminator class V0 V, containing the linear (or more precisely, affine) transformations of – the set V0 = T( ; 1) : 2 V; where = 2 Rn+1 : 8x 2 X; T( (x); 1) 2 dom f . We will show that now OPT; contains exactly those such that E[ ] = E[ ], regardless of the specific f chosen. Formally, 5 Corollary 6 (linear f-GAN). Let X be a compact topological space. Let f be a function as defined in example (b) of Section 3. Let = ( i)n i=1 be a vector of continuously differentiable functions on X. Let = 2 Rn+1 : 8x 2 X; T( (x); 1) 2 dom f . 
Let be the objective function of the linear f-GAN (jj) = sup 2 E[T( ; 1)] E[f (T( ; 1))] : Then for any 2 P(X), OPT; = f : (jj) = 0g = f : E[ ] = E[ ]g 3 : A very concrete example of Corollary 6 could be, for example, the linear KL-GAN, where f(u) = u log u, f (t) = exp(t 1), = ( i)n i=1, = Rn+1. The objective function is (jj) = sup 2Rn+1 E[T( ; 1)] E[exp(T( ; 1) 1)] ; 4.2 Example: Neural Network f-GAN Next we consider a more general and practical example: an f-GAN where the discriminator class V0 = fv : 2 g is implemented through a feedforward neural network with weight parameter set . We assume that all the activation functions are continuously differentiable (e.g., sigmoid, tanh), and the last layer of the network is a linear transformation plus a bias. We also assume dom f = R (e.g., the KL-GAN where f (t) = exp(t 1)). Now observe that when all the weights before the last layer are fixed, the last layer acts as a discriminator in a linear f-GAN. More precisely, let pre be the index set for the weights before the last layer. Then each pre 2 pre corresponds to a feature map pre. Let the linear f-GAN that corresponds to pre be pre, the adversarial divergence induced by the Neural Network f-GAN is (jj) = sup pre2pre pre(jj) Clearly OPT; T pre2pre OPTpre;. For the other direction, note that by Corollary 6, for any pre 2 pre, pre(jj) 0 and pre(jj) = 0. Therefore (jj) 0 and (jj) = 0. If 2 OPT;, then (jj) = 0. As a consequence, pre(jj) = 0 for any pre 2 pre. Therefore OPT; T pre2pre OPTpre;. Therefore, by Corollary 6, OPT; = \ pre2pre OPTpre; = f : 8 2 ; E[v] = E[v]g : That is, the minimizer of the Neural Network f-GAN are exactly those distributions that are indistinguishable under the expectation of any discriminator network v. 5 Convergence To motivate the discussion in this section, consider the following question. Let x0 be the delta distribution at x0 2 R, that is, x = x0 with probability 1. Now, does the sequence of delta distributions 1=n converges to 1? 
Almost everyone would answer no. However, does the sequence of delta distributions δ_{1/n} converge to δ_0? Most people would answer yes, based on the intuition that 1/n → 0 and so does the sequence of corresponding delta distributions, even though the support of δ_{1/n} never has any intersection with the support of δ_0. Therefore, convergence can be defined for distributions not only in a point-wise way, but in a way that takes into consideration the underlying structure of the sample space. Now returning to our adversarial divergence framework. Given an adversarial divergence τ, is it possible that τ(δ_1‖δ_{1/n}) converges to the global minimum of τ(δ_1‖·)? How do we define convergence to a set of points instead of only one point, in order to explain the convergence behaviour of any adversarial divergence? In this section we will answer these questions. We start from two standard notions from functional analysis. Definition 7 (Weak-* topology on P(X) (see e.g. [11])). Let X be a compact metric space. By associating with each μ ∈ rca(X) the linear function f ↦ E_μ[f] on C(X), we have that rca(X) is the continuous dual of C(X) with respect to the uniform norm on C(X) (see e.g. [4]). Therefore we can equip rca(X) (and therefore P(X)) with a weak-* topology, which is the coarsest topology on rca(X) such that {μ ↦ E_μ[f] : f ∈ C(X)} is a set of continuous linear functions on rca(X). Definition 8 (Weak convergence of probability measures (see e.g. [11])). Let X be a compact metric space. A sequence of probability measures (μ_n) in P(X) is said to weakly converge to a measure μ ∈ P(X) if for all f ∈ C(X), E_{μ_n}[f] → E_μ[f]; or equivalently, if (μ_n) is weak-* convergent to μ. The definition of weak-* topology and weak convergence respect the topological structure of the sample space. For example, it is easy to check that the sequence of delta distributions δ_{1/n} weakly converges to δ_0, but not to δ_1.
Now note that Definition 8 only defines weak convergence of a sequence of probability measures to a single target measure. Here we generalize the definition for the single target measure to a set of target measures through quotient topology as follows. Definition 9 (Weak convergence of probability measures to a set). Let X be a compact metric space, equip P(X) with the weak-* topology and let A be a non-empty subspace of P(X). A sequence of probability measures (n) in P(X) is said to weakly converge to the set A if ([n]) converges to A in the quotient space P(X)=A. With everything properly defined, we are now ready to state our convergence result. Note that an adversarial divergence is not necessarily a metric, and therefore does not necessarily induce a topology. However, convergence in an adversarial divergence can still imply some type of topological convergence. More precisely, we show a convergence result that holds for any adversarial divergence as long as the sample space is a compact metric space. Informally, we show that for any target probability measure, if (jjn) converges to the global minimum of (jj), then n weakly converges to the set of measures that achieve the global minimum. Formally, Theorem 10. Let X be a compact metric space, be an adversarial divergence over X, 2 P(X), then OPT; 6= ;. Let (n) be a sequence of probability measures in P(X). If (jjn) ! inf0 (jj0), then (n) weakly converges to the set OPT;. As a special case of Theorem 10, if is a strict adversarial divergence, i.e., OPT; = fg, then converging to the minimizer of the objective function implies the usual weak convergence to the target probability measure. For example, it can be checked that the objective function of f-GAN is a strict adversarial divergence, therefore converging in the objective function of an f-GAN implies the usual weak convergence to the target probability measure. 
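The delta-sequence example can also be checked numerically. Under the Wasserstein-1 distance, which metrizes weak convergence, the distance from δ_{1/n} to δ_0 is exactly 1/n and vanishes, while a strong divergence such as total variation stays at its maximum for every n, because the supports never intersect. A small sketch with our own helper functions (not code from the paper):

```python
import numpy as np

def w1(a, b):
    """Empirical Wasserstein-1 distance between equal-size 1D samples."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

def tv_point_masses(a, b):
    """Total variation between two point masses: 0 if equal, else 1."""
    return 0.0 if a == b else 1.0

target = np.array([0.0])                  # delta_0 as a one-point sample
for n in (1, 10, 100, 1000):
    approx = np.array([1.0 / n])          # delta_{1/n}
    print(n, w1(approx, target), tv_point_masses(1.0 / n, 0.0))
# W1 shrinks as 1/n (weak convergence to delta_0), while TV is stuck at 1,
# so a TV-like divergence never registers this convergence.
```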
To compare this result with our intuition, we return to the example of a sequence of delta distributions and show that as long as is a strict adversarial divergence, (1jj1=n) does not converge to the global minimum of (1jj). Observe that if (1jj1=n) converges to the global minimum of (1jj), then according to Theorem 10, 1=n will weakly converge to 1, which leads to a contradiction. However Theorem 10 does more than excluding undesired possibilities. It also enables us to give general statements about the structure of the class of adversarial divergences. The structural result can be easily stated under the notion of relative strength between adversarial divergences, which is defined as follows. Definition 11 (Relative strength between adversarial divergences). Let 1 and 2 be two adversarial divergences, if for any sequence of probability measures (n) and any target probability measure , 1(jjn) ! inf 1(jj) implies 2(jjn) ! inf 2(jj), then we say 1 is stronger than 2 and 2 is weaker than 1. We say 1 is equivalent to 2 if 1 is both stronger and weaker than 2. We say 1 is strictly stronger (strictly weaker) than 2 if 1 is stronger (weaker) than 2 but not equivalent. We say 1 and 2 are not comparable if 1 is neither stronger nor weaker than 2. Not much is known about the relative strength between different adversarial divergences. If the underlying sample space is nice (e.g., subset of Euclidean space), then the variational (GAN-style) formulation of f-divergences using bounded continuous functions coincides with the original definition [15], and therefore f-divergences are adversarial divergences. [2] showed that the KL-divergence is stronger than the JS-divergence, which is equivalent to the total variation distance, which is strictly stronger than the Wasserstein-1 distance. However, the novel fact is that we can reach the weakest strict adversarial divergence. 
Indeed, one implication of Theorem 10 is that if X is a compact metric space and τ is a strict adversarial divergence over X, then τ-convergence implies the usual weak convergence on probability measures. In particular, since the Wasserstein distance metrizes weak convergence of probability distributions (see e.g. [14]), as a direct consequence of Theorem 10, the Wasserstein distance is in the equivalence class of the weakest strict adversarial divergences. In the other direction, there exists a trivial strict adversarial divergence τ_Trivial(μ‖ν) := 0 if μ = ν, +∞ otherwise, (7) that is stronger than any other strict adversarial divergence. We now incorporate our convergence results with some previous results and get the following structural result. Figure 1: Structure of the class of strict adversarial divergences. Corollary 12. The class of strict adversarial divergences over a bounded and closed subset of a Euclidean space has the structure shown in Figure 1, where Trivial is defined as in (7), MMD corresponds to example (c) in Section 3, Wasserstein corresponds to example (d) in Section 3, and KL, Reverse-KL, TV, JS, Hellinger correspond to example (b) in Section 3 with f(x) being x log(x), −log(x), ½|x − 1|, −(x + 1) log((x + 1)/2) + x log(x), (√x − 1)², respectively. Each rectangle in Figure 1 represents an equivalence class, inside of which are some examples. In particular, Trivial is in the equivalence class of the strongest strict adversarial divergences, while MMD and Wasserstein are in the equivalence class of the weakest strict adversarial divergences. 6 Related Work There has been an explosion of work on GANs over the past couple of years; however, most of the work has been empirical in nature. A body of literature has looked at designing variants of GANs which use different objective functions.
Examples include [10], which propose using the f-divergence between the target distribution and the generated distribution, and [5, 9], which propose the MMD distance. Inspired by previous work, we identify a family of GAN-style objective functions in full generality and show general properties of the objective functions in this family. There has also been some work on comparing different GAN-style objective functions in terms of their convergence properties, either in a GAN-related setting [2], or in a general IPM setting [12]. Unlike these results, which look at the relationship between several specific strict adversarial divergences, our results apply to an entire class of GAN-style objective functions and establish their convergence properties. For example, [2] shows that the KL-divergence, the JS-divergence, and the total-variation distance are all stronger than the Wasserstein distance, while our results generalize this part of their result and say that any strict adversarial divergence is stronger than the Wasserstein distance and its equivalents. Furthermore, our results also apply to non-strict adversarial divergences. That being said, this does not mean our results are a complete generalization of previous convergence results such as [2, 12]. Our results do not provide any method to compare two strict adversarial divergences if neither of them is equivalent to the Wasserstein distance or the trivial divergence. In contrast, [2] show that the KL-divergence is stronger than the JS-divergence, which is equivalent to the total variation distance, which is strictly stronger than the Wasserstein-1 distance. Finally, there has been some additional theoretical literature on understanding GANs, which considers orthogonal aspects of the problem. [3] address the question of whether we can achieve generalization bounds when training GANs. [13] focus on optimizing the estimating power of kernel distances. [5] study generalization bounds for MMD-GAN in terms of fat-shattering dimension.
7 Discussion and Conclusions

In conclusion, our results provide insights on the cost or loss functions that should be used in GANs. The choice of cost function plays a very important role in this setting, more so, for example, than data domains or network architectures: most works still use the DCGAN architecture while changing the cost function to achieve different levels of performance, and which cost function is better is still a matter of debate. In particular, we provide a framework for studying many different GAN criteria in a way that makes them more directly comparable, and under this framework we study both approximation and convergence properties of various loss functions.

8 Acknowledgments

We thank Iliya Tolstikhin, Sylvain Gelly, and Robert Williamson for helpful discussions. The work of KC and SL was partially supported by NSF under IIS 1617157.

References
[1] C. D. Aliprantis and O. Burkinshaw. Principles of real analysis. Academic Press, 1998.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
[3] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). CoRR, abs/1703.00573, 2017.
[4] H. G. Dales, F. K. Dashiell, A.-M. Lau, and D. Strauss. Banach Spaces of Continuous Functions as Dual Spaces. CMS Books in Mathematics. Springer International Publishing, 2016.
[5] G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI 2015.
[6] A. Genevay, M. Cuturi, G. Peyré, and F. R. Bach. Stochastic optimization for large-scale optimal transport. In NIPS 2016.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS 2014.
[8] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.
[9] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML 2015.
[10] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS 2016.
[11] W. Rudin. Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, Inc, 1991.
[12] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
[13] D. J. Sutherland, H. F. Tung, H. Strathmann, S. De, A. Ramdas, A. J. Smola, and A. Gretton. Generative models and model criticism via optimized maximum mean discrepancy. In ICLR 2017.
[14] C. Villani. Optimal transport, old and new. Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg, 2009.
[15] Y. Wu. Lecture notes: Information-theoretic methods for high-dimensional statistics. 2017.
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning

Akash Srivastava, School of Informatics, University of Edinburgh, akash.srivastava@ed.ac.uk
Lazar Valkov, School of Informatics, University of Edinburgh, L.Valkov@sms.ed.ac.uk
Chris Russell, The Alan Turing Institute, London, crussell@turing.ac.uk
Michael U. Gutmann, School of Informatics, University of Edinburgh, Michael.Gutmann@ed.ac.uk
Charles Sutton, School of Informatics & The Alan Turing Institute, University of Edinburgh, csutton@inf.ed.ac.uk

Abstract

Deep generative models provide powerful tools for modeling distributions over complicated manifolds, such as those of natural images. But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network that reverses the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise. In sharp contrast to a traditional autoencoder over data points, VEEGAN does not require specifying a loss function over the data, but rather only over the representations, which are standard normal by assumption. On an extensive set of synthetic and real-world image datasets, VEEGAN indeed resists mode collapse to a far greater extent than other recent GAN variants, and produces more realistic samples.

1 Introduction

Deep generative models are a topic of enormous recent interest, providing a powerful class of tools for the unsupervised learning of probability distributions over difficult manifolds such as natural images [7, 11, 18].
Deep generative models are usually implicit statistical models [3], also called implicit probability distributions, meaning that they do not induce a density function that can be tractably computed, but rather provide a simulation procedure to generate new data points. Generative adversarial networks (GANs) [7] are an attractive such method, which have seen promising recent successes [17, 20, 23]. GANs train two deep networks in concert: a generator network that maps random noise, usually drawn from a multivariate Gaussian, to data items; and a discriminator network that estimates the likelihood ratio of the generator network to the data distribution, and is trained using an adversarial principle. Despite an enormous amount of recent work, GANs are notoriously fickle to train, and it has been observed [1, 19] that they often suffer from mode collapse, in which the generator network learns how to generate samples from a few modes of the data distribution but misses many other modes, even though samples from the missing modes occur throughout the training data. To address this problem, we introduce VEEGAN,¹ a variational principle for estimating implicit probability distributions that avoids mode collapse. While the generator network maps Gaussian random noise to data items, VEEGAN introduces an additional reconstructor network that maps the true data distribution to Gaussian random noise. We train the generator and reconstructor networks jointly by introducing an implicit variational principle, which encourages the reconstructor network not only to map the data distribution to a Gaussian, but also to approximately reverse the action of the generator.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Intuitively, if the reconstructor learns both to map all of the true data to the noise distribution and to be an approximate inverse of the generator network, this will encourage the generator network to map from the noise distribution to the entirety of the true data distribution, thus resolving mode collapse. Unlike other adversarial methods that train reconstructor networks [4, 5, 22], the noise autoencoder dramatically reduces mode collapse. Unlike recent adversarial methods that also make use of a data autoencoder [1, 13, 14], VEEGAN autoencodes noise vectors rather than data items. This is a significant difference, because choosing an autoencoder loss for images is problematic, but for Gaussian noise vectors an ℓ2 loss is entirely natural. Experimentally, on both synthetic and real-world image datasets, we find that VEEGAN is dramatically less susceptible to mode collapse, and produces higher-quality samples, than other state-of-the-art methods.

2 Background

Implicit probability distributions are specified by a sampling procedure, but do not have a tractable density [3]. Although a natural choice in many settings, implicit distributions have historically been seen as difficult to estimate. However, recent progress in formulating density estimation as a problem of supervised learning has allowed methods from the classification literature to enable implicit model estimation, both in the general case [6, 10] and for deep generative adversarial networks (GANs) in particular [7]. Let x_1, …, x_N denote the training data, where each x_i ∈ R^D is drawn from an unknown distribution p(x). A GAN is a neural network Gγ that maps representation vectors z ∈ R^K, typically drawn from a standard normal distribution, to data items x ∈ R^D. Because this mapping defines an implicit probability distribution, training is accomplished by introducing a second neural network Dω, called a discriminator, whose goal is to distinguish generator samples from true data samples.
The parameters of these networks are estimated by solving the minimax problem

max_ω min_γ O_GAN(ω, γ) := E_z[log σ(Dω(Gγ(z)))] + E_x[log(1 − σ(Dω(x)))],

where E_z indicates an expectation over the standard normal z, E_x indicates an expectation over the data distribution p(x), and σ denotes the sigmoid function. At the optimum, in the limit of infinite data and arbitrarily powerful networks, we will have Dω = log qγ(x)/p(x), where qγ is the density that is induced by running the network Gγ on normally distributed input, and hence that qγ = p [7]. Unfortunately, GANs can be difficult and unstable to train [19]. One common pathology that arises in GAN training is mode collapse, which is when samples from qγ(x) capture only a few of the modes of p(x). An intuition behind why mode collapse occurs is that the only information that the objective function provides about γ is mediated by the discriminator network Dω. For example, if Dω is a constant, then O_GAN is constant with respect to γ, and so learning the generator is impossible. When this situation occurs in a localized region of input space, for example, when there is a specific type of image that the generator cannot replicate, this can cause mode collapse.

¹VEEGAN is a Variational Encoder Enhancement to Generative Adversarial Networks. https://akashgit.github.io/VEEGAN/

Figure 1: Illustration of how a reconstructor network Fθ can help to detect mode collapse in a deep generative network Gγ. The data distribution is p(x) and the Gaussian is p0(z). See text for details. (a) Suppose Fθ is trained to approximately invert Gγ. Then applying Fθ to true data is likely to produce a non-Gaussian distribution, allowing us to detect mode collapse. (b) When Fθ is trained to map the data to a Gaussian distribution, then treating Fθ ◦ Gγ as an autoencoder provides learning signal to correct Gγ.
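As a concrete illustration of the objective and of the constant-discriminator pathology just described, here is a minimal NumPy sketch; the one-dimensional generator, discriminator, and "data" below are toy assumptions for illustration, not the networks used in the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gan_objective(D, G, z, x):
    """Monte Carlo estimate of
    O_GAN = E_z[log sigma(D(G(z)))] + E_x[log(1 - sigma(D(x)))]."""
    gen_term = np.mean(np.log(sigmoid(D(G(z)))))
    data_term = np.mean(np.log(1.0 - sigmoid(D(x))))
    return gen_term + data_term

rng = np.random.default_rng(0)
z = rng.standard_normal(1000)           # codes from the standard normal
x = rng.normal(2.0, 1.0, size=1000)     # toy 1D "data" from N(2, 1)
G = lambda z: z + 2.0                   # a generator that matches the data
D = lambda t: 0.5 * t                   # an arbitrary discriminator

val = gan_objective(D, G, z, x)

# A constant discriminator makes the objective flat in the generator:
# with D = 0 the value is exactly 2*log(0.5), whatever G does.
flat = gan_objective(lambda t: 0.0 * t, G, z, x)
print(val, flat)
```

With the zero discriminator every term is log σ(0) = log 0.5, so the estimate carries no information about the generator, which is the intuition for mode collapse given above.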
3 Method

The main idea of VEEGAN is to introduce a second network Fθ, which we call the reconstructor network, that is learned both to map the true data distribution p(x) to a Gaussian and to approximately invert the generator network. To understand why this might prevent mode collapse, consider the example in Figure 1. In both columns of the figure, the middle vertical panel represents the data space, where in this example the true distribution p(x) is a mixture of two Gaussians. The bottom panel depicts the input to the generator, which is drawn from a standard normal distribution p0 = N(0, I), and the top panel depicts the result of applying the reconstructor network to the generated and the true data. The arrows labeled Gγ show the action of the generator. The purple arrows labelled Fθ show the action of the reconstructor on the true data, whereas the green arrows show the action of the reconstructor on data from the generator. In this example, the generator has captured only one of the two modes of p(x). The difference between Figures 1a and 1b is that the reconstructor networks are different. First, let us suppose (Figure 1a) that we have successfully trained Fθ so that it is approximately the inverse of Gγ. Since we have assumed mode collapse, however, the training data for the reconstructor network Fθ does not include data items from the "forgotten" mode of p(x), so the action of Fθ on data from that mode is ill-specified. This means that Fθ(X), for X ∼ p(x), is unlikely to be Gaussian, and we can use this mismatch as an indicator of mode collapse. Conversely, let us suppose (Figure 1b) that Fθ is successful at mapping the true data distribution to a Gaussian. In that case, if Gγ mode collapses, then Fθ will not map all Gγ(z) back to the original z, and the resulting penalty provides us with a strong learning signal for both γ and θ. Therefore, the learning principle for VEEGAN is to train Fθ to achieve both of these objectives simultaneously.
Another way of stating this intuition is that if the same reconstructor network maps both the true data and the generated data to a Gaussian distribution, then the generated data is likely to coincide with the true data. To measure whether Fθ approximately inverts Gγ, we use an autoencoder loss; more precisely, we minimize a loss function, such as the ℓ2 loss, between z ∼ p0 and Fθ(Gγ(z)). To quantify whether Fθ maps the true data distribution to a Gaussian, we use the cross entropy H(Z, Fθ(X)) between Z and Fθ(X). This boils down to learning γ and θ by minimising the sum of these two objectives, namely

O_entropy(γ, θ) = E[∥z − Fθ(Gγ(z))∥₂²] + H(Z, Fθ(X)).    (1)

While this objective captures the main idea of our paper, it cannot be easily computed and minimised. We next transform it into a computable version and derive theoretical guarantees.

3.1 Objective Function

Let us denote the distribution of the outputs of the reconstructor network when applied to a fixed data item x by pθ(z|x), and when applied to all X ∼ p(x) by pθ(z) = ∫ pθ(z|x) p(x) dx. The conditional distribution pθ(z|x) is Gaussian with unit variance and, with a slight abuse of notation, (deterministic) mean function Fθ(x). The entropy term H(Z, Fθ(X)) can thus be written as

H(Z, Fθ(X)) = −∫ p0(z) log pθ(z) dz = −∫ p0(z) log [∫ p(x) pθ(z|x) dx] dz.    (2)

This cross entropy is minimized with respect to θ when pθ(z) = p0(z) [2]. Unfortunately, the integral on the right-hand side of (2) cannot usually be computed in closed form. We thus introduce a variational distribution qγ(x|z); by Jensen's inequality, we have

−log pθ(z) = −log ∫ pθ(z|x) p(x) (qγ(x|z)/qγ(x|z)) dx ≤ −∫ qγ(x|z) log [pθ(z|x) p(x)/qγ(x|z)] dx,    (3)

which we use to bound the cross entropy in (2). In variational inference, strong parametric assumptions are typically made on qγ. Importantly, we here relax that assumption, instead representing qγ implicitly as a deep generative model, enabling us to learn very complex distributions.
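The claim after (2), that the cross entropy is minimized with respect to θ exactly when pθ(z) = p0(z), is an instance of Gibbs' inequality. The following small NumPy check illustrates it on toy discrete distributions (the five-state distributions are assumptions for illustration only):

```python
import numpy as np

def cross_entropy(p0, q):
    """Discrete analogue of -sum_z p0(z) log q(z)."""
    return -np.sum(p0 * np.log(q))

rng = np.random.default_rng(1)
p0 = rng.dirichlet(np.ones(5))      # a fixed "noise" distribution on 5 states

# Gibbs' inequality: H(p0, q) >= H(p0, p0) for every distribution q,
# with equality iff q = p0 -- i.e. the cross entropy in (2) is minimized
# exactly when p_theta matches p0.
baseline = cross_entropy(p0, p0)
worst_gap = min(cross_entropy(p0, rng.dirichlet(np.ones(5))) - baseline
                for _ in range(100))
print(baseline, worst_gap)
```

The gap H(p0, q) − H(p0, p0) is exactly the KL divergence KL[p0 ∥ q], which is why it is always non-negative.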
The variational distribution qγ(x|z) plays exactly the same role as the generator in a GAN, and for that reason we will parameterize qγ(x|z) as the output of a stochastic neural network Gγ(z). In practice, minimizing this bound is difficult if qγ is specified implicitly. For instance, it is challenging to train a discriminator network that accurately estimates the unknown likelihood ratio log p(x)/qγ(x|z), because qγ(x|z), as a conditional distribution, is much more peaked than the joint distribution p(x), making it too easy for a discriminator to tell the two distributions apart. Intuitively, the discriminator in a GAN works well when it is presented a difficult pair of distributions to distinguish. To circumvent this problem, we write (see supplementary material)

−∫ p0(z) log pθ(z) dz ≤ KL[qγ(x|z) p0(z) ∥ pθ(z|x) p(x)] − E[log p0(z)].    (4)

Here all expectations are taken with respect to the joint distribution p0(z) qγ(x|z). Now, moving to the second term in (1), we define the reconstruction penalty as an expectation of the cost of autoencoding noise vectors, that is, E[d(z, Fθ(Gγ(z)))]. The function d denotes a loss function in representation space R^K, such as the ℓ2 loss, and therefore the term is an autoencoder in representation space. To make this link explicit, we expand the expectation, assuming that we choose d to be the ℓ2 loss. This yields

E[d(z, Fθ(x))] = ∫ p0(z) ∫ qγ(x|z) ∥z − Fθ(x)∥² dx dz.

Unlike a standard autoencoder, however, rather than taking a data item as input and attempting to reconstruct it, we autoencode a representation vector. This makes a substantial difference in the interpretation and performance of the method, as we discuss in Section 4. For example, notice that we do not include a regularization weight on the autoencoder term in (5), because Proposition 1 below says that this is not needed to recover the data distribution.
Combining these two ideas, we obtain the final objective function

O(γ, θ) = KL[qγ(x|z) p0(z) ∥ pθ(z|x) p(x)] − E[log p0(z)] + E[d(z, Fθ(x))].    (5)

Rather than minimizing the intractable O_entropy(γ, θ), our goal in VEEGAN is to minimize the upper bound O with respect to γ and θ. Indeed, if the networks Fθ and Gγ are sufficiently powerful, then if we succeed in globally minimizing O, we can guarantee that qγ recovers the true data distribution. This statement is formalized in the following proposition.

Proposition 1. Suppose that there exist parameters θ∗, γ∗ such that O(γ∗, θ∗) = H[p0], where H denotes Shannon entropy. Then (γ∗, θ∗) minimizes O, and further

pθ∗(z) := ∫ pθ∗(z|x) p(x) dx = p0(z),  and  qγ∗(x) := ∫ qγ∗(x|z) p0(z) dz = p(x).

Because neural networks are universal approximators, the conditions in the proposition can be achieved when the networks G and F are sufficiently powerful.

3.2 Learning with Implicit Probability Distributions

Algorithm 1 VEEGAN training
1: while not converged do
2:   for i ∈ {1 . . . N} do
3:     Sample z_i ∼ p0(z)
4:     Sample x_g^i ∼ qγ(x|z_i)
5:     Sample x_i ∼ p(x)
6:     Sample z_g^i ∼ pθ(z_g|x_i)
7:   g_ω ← −∇_ω (1/N) Σ_i [log σ(Dω(z_i, x_g^i)) + log(1 − σ(Dω(z_g^i, x_i)))]    ▷ Compute ∇_ω Ô_LR
8:   g_θ ← ∇_θ (1/N) Σ_i d(z_i, x_g^i)    ▷ Compute ∇_θ Ô
9:   g_γ ← ∇_γ [(1/N) Σ_i Dω(z_i, x_g^i) + (1/N) Σ_i d(z_i, x_g^i)]    ▷ Compute ∇_γ Ô
10:  ω ← ω − η g_ω;  θ ← θ − η g_θ;  γ ← γ − η g_γ    ▷ Perform SGD updates for ω, θ, and γ

This subsection describes how to approximate O when we have implicit representations for qγ and pθ rather than explicit densities. In this case, we cannot optimize O directly, because the KL divergence in (5) depends on a density ratio which is unknown, both because qγ is implicit and also because p(x) is unknown. Following [4, 5], we estimate this ratio using a discriminator network Dω(x, z), which we will train to encourage

Dω(z, x) = log [qγ(x|z) p0(z) / (pθ(z|x) p(x))].
(6)

This will allow us to estimate O as

Ô(ω, γ, θ) = (1/N) Σ_{i=1}^N Dω(z_i, x_g^i) + (1/N) Σ_{i=1}^N d(z_i, x_g^i),    (7)

where (z_i, x_g^i) ∼ p0(z) qγ(x|z). In this equation, note that x_g^i is a function of γ; although we suppress this in the notation, we do take this dependency into account in the algorithm. We use an auxiliary objective function to estimate ω. As mentioned earlier, we omit the entropy term −E[log p0(z)] from Ô, as it is constant with respect to all parameters. In principle, any method for density ratio estimation could be used here; for example, see [9, 21]. In this work, we will use the logistic regression loss, much as in other methods for deep adversarial training, such as GANs [7], or for noise-contrastive estimation [8]. We will train Dω to distinguish samples from the joint distribution qγ(x|z) p0(z) from samples from pθ(z|x) p(x). The objective function for this is

O_LR(ω, γ, θ) = −E_γ[log(σ(Dω(z, x)))] − E_θ[log(1 − σ(Dω(z, x)))],    (8)

where E_γ denotes expectation with respect to the joint distribution qγ(x|z) p0(z), and E_θ with respect to pθ(z|x) p(x). We write Ô_LR to indicate the Monte Carlo estimate of O_LR. Our learning algorithm optimizes this pair of equations with respect to γ, ω, θ using stochastic gradient descent. In particular, the algorithm aims to find a simultaneous solution to min_ω Ô_LR(ω, γ, θ) and min_{θ,γ} Ô(ω, γ, θ). This training procedure is described in Algorithm 1. When this procedure converges, we will have that ω∗ = arg min_ω O_LR(ω, γ∗, θ∗), which means that Dω∗ has converged to the likelihood ratio (6). Therefore (γ∗, θ∗) have also converged to a minimum of O. We also found that pre-training the reconstructor network on samples from p(x) helps in some cases.

4 Relationships to Other Methods

An enormous amount of attention has been devoted recently to improved methods for GAN training, and we compare ourselves to the most closely related work in detail.
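The Monte Carlo estimates in (7) and (8) above can be sketched with NumPy, using toy linear stand-ins for Gγ, Fθ, and Dω; these stand-ins are assumptions for illustration only (the paper uses deep networks and stochastic gradient updates).

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def O_hat(D, d, z, x_g):
    """Eq. (7): discriminator term plus the noise-autoencoder term."""
    return np.mean(D(z, x_g)) + np.mean(d(z, x_g))

def O_LR_hat(D, z, x_g, z_g, x):
    """Eq. (8): logistic-regression loss for the density-ratio estimator."""
    return (-np.mean(np.log(sigmoid(D(z, x_g))))
            - np.mean(np.log(1.0 - sigmoid(D(z_g, x)))))

rng = np.random.default_rng(0)
N, K = 512, 2
z = rng.standard_normal((N, K))              # z_i ~ p0(z)
G = lambda z: 2.0 * z                        # toy deterministic generator
F = lambda x: 0.5 * x                        # its exact inverse as reconstructor
x = G(rng.standard_normal((N, K)))           # stand-in for data samples x_i
x_g, z_g = G(z), F(x)                        # generated data and reconstructed codes

D = lambda z, x: 0.1 * np.sum(z * x, axis=-1)       # toy joint discriminator
d = lambda z, x: np.sum((z - F(x)) ** 2, axis=-1)   # l2 noise reconstruction

# Here F inverts G exactly, so the reconstruction penalty vanishes and
# O_hat reduces to the discriminator term alone.
print(O_hat(D, d, z, x_g), O_LR_hat(D, z, x_g, z_g, x))
```

Because the toy reconstructor is an exact inverse of the toy generator, d(z_i, x_g^i) is identically zero, mirroring the ideal case described after Proposition 1.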
BiGAN/Adversarially Learned Inference. BiGAN [4] and Adversarially Learned Inference (ALI) [5] are two essentially identical recent adversarial methods for learning both a deep generative network Gγ and a reconstructor network Fθ. Likelihood-free variational inference (LFVI) [22] extends this idea to a hierarchical Bayesian setting. Like VEEGAN, all of these methods also use a discriminator Dω(z, x) on the joint (z, x) space. However, the VEEGAN objective function O(θ, γ) provides significant benefits over the logistic regression loss over θ and γ that is used in ALI/BiGAN, or the KL-divergence used in LFVI. In all of these methods, just as in vanilla GANs, the objective function depends on θ and γ only via the output Dω(z, x) of the discriminator; therefore, if there is a mode of data space in which Dω is insensitive to changes in θ and γ, there will be mode collapse. In VEEGAN, by contrast, the reconstruction term does not depend on the discriminator, and so can provide learning signal to γ or θ even when the discriminator is constant. We will show in Section 5 that indeed VEEGAN is dramatically less prone to mode collapse than ALI.

InfoGAN. While differently motivated, aiming to obtain a disentangled representation of the data, InfoGAN also uses a latent-code reconstruction-based penalty in its cost function. But unlike in VEEGAN, only a part of the latent code is reconstructed in InfoGAN. Thus, InfoGAN is similar to VEEGAN in that it also includes an autoencoder over the latent codes, but the key difference is that InfoGAN does not also train the reconstructor network on the true data distribution. We suggest that this may be the reason that InfoGAN was observed to require some of the same stabilization tricks as vanilla GANs, which are not required for VEEGAN.
Adversarial Methods for Autoencoders. A number of other recent methods have been proposed that combine adversarial methods and autoencoders, whether by explicitly regularizing the GAN loss with an autoencoder loss [1, 13], or by alternating optimization between the two losses [14]. In all of these methods, the autoencoder is over images, i.e., they incorporate a loss function of the form λ d(x, Gγ(Fθ(x))), where d is a loss function over images, such as pixel-wise ℓ2 loss, and λ is a regularization constant. Similarly, variational autoencoders [12, 18] also autoencode images rather than noise vectors. Finally, adversarial variational Bayes (AVB) [15] is an adaptation of VAEs to the case where the posterior distribution pθ(z|x) is implicit, but the data distribution qγ(x|z) must be explicit, unlike in our work. Because these methods autoencode data points, they share a crucial disadvantage: choosing a good loss function d over natural images can be problematic. For example, it has been commonly observed that minimizing an ℓ2 reconstruction loss on images can lead to blurry images. Indeed, if choosing a loss function over images were easy, we could simply train an autoencoder and dispense with adversarial learning entirely. By contrast, in VEEGAN we autoencode the noise vectors z, and choosing a good loss function for a noise autoencoder is easy: since the noise vectors z are drawn from a standard normal distribution, using an ℓ2 loss on z is entirely natural, and, as we will show in Section 5, it does not result in blurry images compared to purely adversarial methods.

5 Experiments

Quantitative evaluation of GANs is problematic because implicit distributions do not have a tractable likelihood term to quantify generative accuracy. Quantifying mode collapse is also not straightforward, except in the case of synthetic data with known modes.
For this reason, several indirect metrics have recently been proposed to evaluate GANs specifically for their mode collapsing behavior [1, 16]. However, none of these metrics are reliable on their own, and therefore we need to compare across a number of different methods. Therefore, in this section we evaluate VEEGAN on several synthetic and real datasets and compare its performance against vanilla GANs [7], Unrolled GAN [16], and ALI [5] on five different metrics. Our results strongly suggest that VEEGAN does indeed resolve mode collapse in GANs to a large extent. Generally, we found that VEEGAN performed well with default hyperparameter values, so we did not tune these. Full details are provided in the supplementary material.

5.1 Synthetic Dataset

Mode collapse can be accurately measured on synthetic datasets, since the true distribution and its modes are known. In this section we compare all four competing methods on three synthetic datasets of increasing difficulty: a mixture of eight 2D Gaussian distributions arranged in a ring, a mixture of twenty-five 2D Gaussian distributions arranged in a grid,² and a mixture of ten 700-dimensional Gaussian distributions embedded in a 1200-dimensional space. This mixture arrangement was chosen to mimic the higher-dimensional manifolds of natural images. All of the mixture components were isotropic Gaussians.

²This experiment follows [5]. Please note that for certain settings of parameters, vanilla GAN can also recover all 25 modes, as was pointed out to us by Paulina Grnarova.

Table 1: Sample quality and degree of mode collapse on mixtures of Gaussians. VEEGAN consistently captures the highest number of modes and produces better samples.

                2D Ring                 2D Grid                 1200D Synthetic
                Modes     % High       Modes     % High        Modes     % High
                (Max 8)   Quality      (Max 25)  Quality       (Max 10)  Quality
GAN             1         99.3         3.3       0.5           1.6       2.0
ALI             2.8       0.13         15.8      1.6           3         5.4
Unrolled GAN    7.6       35.6         23.6      16            0         0.0
VEEGAN          8         52.9         24.6      40            5.5       28.29
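A sketch of how such a ring-of-Gaussians dataset might be generated; the radius and standard deviation below are illustrative assumptions, not values specified in the text.

```python
import numpy as np

def gaussian_ring(n, n_modes=8, radius=1.0, std=0.05, seed=0):
    """Sample n points from a mixture of n_modes isotropic 2D Gaussians
    whose means are evenly spaced on a circle of the given radius."""
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(n_modes) / n_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    labels = rng.integers(n_modes, size=n)      # mixture component per sample
    samples = means[labels] + std * rng.standard_normal((n, 2))
    return samples, means

x, means = gaussian_ring(2500)   # 2500 samples, matching the evaluation protocol
print(x.shape)                   # (2500, 2)
```

The grid variant would differ only in how the means are laid out; the 1200D variant embeds the mixture means in a higher-dimensional ambient space.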
For a fair comparison of the different learning methods for GANs, we use the same network architectures for the reconstructors and the generators for all methods, namely fully-connected MLPs with two hidden layers. For the discriminator we use a two-layer MLP without dropout or normalization layers. The VEEGAN method works for both deterministic and stochastic generator networks; to allow the generator to be a stochastic map, we add an extra dimension of noise to the generator input that is not reconstructed. To quantify the mode collapsing behavior we report two metrics. We sample points from the generator network, and count a sample as high quality if it is within three standard deviations of the nearest mode, for the 2D datasets, or within 10 standard deviations of the nearest mode, for the 1200D dataset. Then, we report the number of modes captured as the number of mixture components whose mean is nearest to at least one high-quality sample. We also report the percentage of high-quality samples as a measure of sample quality. We generate 2500 samples from each trained model and average the numbers over five runs. For the unrolled GAN, we set the number of unrolling steps to five, as suggested in the authors' reference implementation. As shown in Table 1, VEEGAN captures the greatest number of modes on all the synthetic datasets, while consistently generating higher-quality samples. This is visually apparent in Figure 2, which plots the generator distributions for each method; the generators learned by VEEGAN are sharper and closer to the true distribution. This figure also shows why it is important to measure sample quality and mode collapse simultaneously, as either alone can be misleading. For instance, the GAN on the 2D ring has 99.3% sample quality, but this is simply because the GAN collapses all of its samples onto one mode (Figure 2b).
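The two metrics just described can be sketched directly; the helper below and the collapsed-generator example are hypothetical illustrations of why sample quality alone can be misleading.

```python
import numpy as np

def mode_metrics(samples, modes, std, k=3.0):
    """Modes captured and % high-quality samples, as described above:
    a sample is high quality if it lies within k standard deviations of
    its nearest mode; a mode counts as captured if it is the nearest
    mode of at least one high-quality sample."""
    dists = np.linalg.norm(samples[:, None, :] - modes[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    high_quality = dists[np.arange(len(samples)), nearest] <= k * std
    modes_captured = len(np.unique(nearest[high_quality]))
    return modes_captured, 100.0 * high_quality.mean()

# A collapsed "generator" that only ever emits samples near one of three
# hypothetical modes: sample quality looks excellent, yet only 1 of the
# 3 modes is captured -- which is why both numbers are reported together.
rng = np.random.default_rng(0)
modes = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
collapsed = modes[0] + 0.1 * rng.standard_normal((1000, 2))
n_modes, pct_hq = mode_metrics(collapsed, modes, std=0.1)
print(n_modes, pct_hq)
```

This reproduces, in miniature, the 2D-ring GAN row of Table 1: near-perfect sample quality with a single captured mode.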
On the other extreme, the unrolled GAN on the 2D grid captures almost all the modes in the true distribution, but this is simply because it generates highly dispersed samples (Figure 2i) that do not accurately represent the true distribution, hence the low sample quality. All methods had approximately the same running time, except for unrolled GAN, which is a few orders of magnitude slower due to the unrolling overhead.

5.2 Stacked MNIST

Following [16], we evaluate our methods on the stacked MNIST dataset, a variant of the MNIST data specifically designed to increase the number of discrete modes. The data is synthesized by stacking three randomly sampled MNIST digits along the color channel, resulting in a 28×28×3 image. We now expect 1000 modes in this dataset, corresponding to the number of possible triples of digits. Again, to focus the evaluation on the difference in the learning algorithms, we use the same generator architecture for all methods. In particular, the generator architecture is an off-the-shelf standard implementation³ of DCGAN [17]. For Unrolled GAN, we used a standard implementation of the DCGAN discriminator network. For ALI and VEEGAN, the discriminator architecture is described in the supplementary material. For the reconstructor in ALI and VEEGAN, we use a simple two-layer MLP without any regularization layers. Finally, for VEEGAN we pretrain the reconstructor by taking a few stochastic gradient steps with respect to θ before running Algorithm 1.

³https://github.com/carpedm20/DCGAN-tensorflow

Table 2: Degree of mode collapse, measured by modes captured and the inference via optimization measure (IvOM), and sample quality (as measured by KL) on Stacked-MNIST and CIFAR. VEEGAN captures the most modes and also achieves the highest quality.

                Stacked-MNIST                     CIFAR-10
                Modes (Max 1000)    KL            IvOM
DCGAN           99                  3.4           0.00844 ± 0.002
ALI             16                  5.4           0.0067 ± 0.004
Unrolled GAN    48.7                4.32          0.013 ± 0.0009
VEEGAN          150                 2.95          0.0068 ± 0.0001
For all methods other than VEEGAN, we use the enhanced generator loss function suggested in [7], since we were not able to get sufficient learning signal for the generator without it. VEEGAN did not require this adjustment for successful training. As the true locations of the modes in this data are unknown, the number of modes is estimated using a trained classifier, as described originally in [1]. We used a total of 26000 samples for all the models, and the results are averaged over five runs. As a measure of quality, following [16] again, we also report the KL divergence between the generator distribution and the data distribution. As reported in Table 2, VEEGAN not only captures the most modes, it also consistently matches the data distribution more closely than any other method. Generated samples from each of the models are shown in the supplementary material.

5.3 CIFAR

Finally, we evaluate the learning methods on the CIFAR-10 dataset, a well-studied and diverse dataset of natural images. We use the same discriminator, generator, and reconstructor architectures as in the previous section. However, the previous mode collapsing metric is inappropriate here, owing to CIFAR's greater diversity. Even within one of the 10 classes of CIFAR, the intra-class diversity is very high compared to any of the 10 classes of MNIST. Therefore, for CIFAR it is inappropriate to assume, as the metrics of the previous subsection do, that each labelled class corresponds to a single mode of the data distribution. Instead, we use a metric introduced by [16], which we will call the inference via optimization metric (IvOM). The idea behind this metric is to compare real images from the test set to the nearest generated image; if the generator suffers from mode collapse, then there will be some images for which this distance is large. To quantify this, we sample a real image x from the test set, and find the closest image that the GAN is capable of generating, i.e.
optimizing the ℓ2 loss between x and a generated image Gγ(z) with respect to z. If a method consistently attains low MSE, then it can be assumed to be capturing more modes than methods that attain a higher MSE. As before, this metric can still be fooled by highly dispersed generator distributions, and the ℓ2 metric may favour generators that produce blurry images. Therefore we also evaluate sample quality visually. All numerical results have been averaged over five runs. Finally, to evaluate whether the noise autoencoder in VEEGAN is indeed superior to a more traditional data autoencoder, we compare to a variant, which we call VEEGAN+DAE, that uses a data autoencoder instead, obtained by simply replacing d(z, Fθ(x)) in O with a data loss ∥x − Gγ(Fθ(x))∥₂². As shown in Table 2, ALI and VEEGAN achieve the best IvOM. Qualitatively, however, generated samples from VEEGAN seem better than those from the other methods. In particular, the samples from VEEGAN+DAE are meaningless. Generated samples from VEEGAN are shown in Figure 3b; samples from other methods are shown in the supplementary material. As another illustration of this, Figure 3 illustrates the IvOM metric by showing the nearest neighbors to real images that each of the GANs was able to generate; in general, the nearest neighbors will be more semantically meaningful than randomly generated images. We omit VEEGAN+DAE from this comparison because it did not produce plausible images. Across the methods, we see in Figure 3 that VEEGAN captures small details, such as the face of the poodle, that other methods miss.

Figure 2: Density plots of the true data and generator distributions from different GAN methods trained on mixtures of Gaussians arranged in a ring (top) or a grid (bottom). Panels: (a) True Data, (b) GAN, (c) ALI, (d) Unrolled, (e) VEEGAN; (f) True Data, (g) GAN, (h) ALI, (i) Unrolled, (j) VEEGAN.

Figure 3: Sample images from GANs trained on CIFAR-10. Best viewed magnified on screen.
(a) Generated samples nearest to real images from CIFAR-10. In each of the two panels, the first column shows real images, followed by the nearest images from DCGAN, ALI, Unrolled GAN, and VEEGAN respectively. (b) Random samples from the generator of VEEGAN trained on CIFAR-10. 6 Conclusion We have presented VEEGAN, a new training principle for GANs that combines a KL divergence in the joint space of representations and data points with an autoencoder over the representation space, motivated by a variational argument. Experimental results on synthetic data and real images show that our approach is much more effective than several state-of-the-art GAN methods at avoiding mode collapse while still generating good quality samples. Acknowledgements We thank Martin Arjovsky, Nicolas Collignon, Luke Metz, Casper Kaae Sønderby, Lucas Theis, Soumith Chintala, Stanisław Jastrzębski, Harrison Edwards, Amos Storkey and Paulina Grnarova for their helpful comments. We would like to specially thank Ferenc Huszár for insightful discussions and feedback. References [1] Che, Tong, Li, Yanran, Jacob, Athul Paul, Bengio, Yoshua, and Li, Wenjie. Mode regularized generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017. [2] Cover, Thomas M and Thomas, Joy A. Elements of information theory. John Wiley & Sons, 2012. [3] Diggle, Peter J. and Gratton, Richard J. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society, Series B (Methodological), 46(2):193–227, 1984. ISSN 00359246. URL http://www.jstor.org/stable/2345504. [4] Donahue, Jeff, Krähenbühl, Philipp, and Darrell, Trevor. Adversarial feature learning. In International Conference on Learning Representations (ICLR), 2017. [5] Dumoulin, Vincent, Belghazi, Ishmael, Poole, Ben, Mastropietro, Olivier, Lamb, Alex, Arjovsky, Martin, and Courville, Aaron. Adversarially learned inference.
In International Conference on Learning Representations (ICLR), 2017. [6] Dutta, Ritabrata, Corander, Jukka, Kaski, Samuel, and Gutmann, Michael U. Likelihood-free inference by ratio estimation. 2016. [7] Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. [8] Gutmann, Michael U. and Hyvärinen, Aapo. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13:307–361, 2012. [9] Gutmann, M.U. and Hirayama, J. Bregman divergence as general framework to estimate unnormalized statistical models. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pp. 283–290, Corvallis, Oregon, 2011. AUAI Press. [10] Gutmann, M.U., Dutta, R., Kaski, S., and Corander, J. Likelihood-free inference via classification. arXiv preprint arXiv:1407.4981, 2014. [11] Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. [12] Kingma, D.P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014. [13] Larsen, Anders Boesen Lindbo, Sønderby, Søren Kaae, Larochelle, Hugo, and Winther, Ole. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning (ICML), 2016. [14] Makhzani, Alireza, Shlens, Jonathon, Jaitly, Navdeep, and Goodfellow, Ian J. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. URL http://arxiv.org/abs/1511.05644. [15] Mescheder, Lars M., Nowozin, Sebastian, and Geiger, Andreas. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017. URL http://arxiv.org/abs/1701.04722. [16] Metz, Luke, Poole, Ben, Pfau, David, and Sohl-Dickstein, Jascha.
Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. [17] Radford, Alec, Metz, Luke, and Chintala, Soumith. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. [18] Rezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014. [19] Salimans, Tim, Goodfellow, Ian J., Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016. URL http://arxiv.org/abs/1606.03498. [20] Sønderby, Casper Kaae, Caballero, Jose, Theis, Lucas, Shi, Wenzhe, and Huszár, Ferenc. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016. [21] Sugiyama, M., Suzuki, T., and Kanamori, T. Density ratio estimation in machine learning. Cambridge University Press, 2012. [22] Tran, D., Ranganath, R., and Blei, D. M. Deep and hierarchical implicit models. arXiv e-prints, 2017. [23] Zhu, Jun-Yan, Park, Taesung, Isola, Phillip, and Efros, Alexei A. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
Local Aggregative Games Vikas K. Garg CSAIL, MIT vgarg@csail.mit.edu Tommi Jaakkola CSAIL, MIT tommi@csail.mit.edu Abstract Aggregative games provide a rich abstraction to model strategic multi-agent interactions. We introduce local aggregative games, where the payoff of each player is a function of its own action and the aggregate behavior of its neighbors in a connected digraph. We show the existence of a pure strategy ϵ-Nash equilibrium in such games when the payoff functions are convex or submodular. We prove an information theoretic lower bound, in a value oracle model, on approximating the structure of the digraph with non-negative monotone submodular cost functions on the edge set cardinality. We also define a new notion of structural stability, and introduce γ-aggregative games that generalize local aggregative games and admit an ϵ-Nash equilibrium that is stable with respect to small changes in some specified graph property. Moreover, we provide algorithms for our models that can meaningfully estimate the game structure and the parameters of the aggregator function from real voting data. 1 Introduction Structured prediction methods have been remarkably successful in learning mappings between input observations and output configurations [1; 2; 3]. The central guiding formulation involves learning a scoring function that recovers the configuration as the highest-scoring assignment. In contrast, in a game theoretic setting, myopic strategic interactions among players lead to a Nash equilibrium or locally optimal configuration rather than the highest-scoring global configuration. Learning games therefore involves, at best, enforcement of local consistency constraints, as recently advocated [4]. [4] introduced the notion of contextual potential games, and proposed a dual decomposition algorithm for learning these games from a set of pure strategy Nash equilibria.
However, since their setting was restricted to learning undirected, tree-structured potential games, it cannot handle (a) asymmetries in the strategic interactions, or (b) higher-order interactions. Moreover, a wide class of strategic games (e.g., anonymous games [5]) do not admit a potential function, and thus locally optimal configurations do not coincide with pure strategy Nash equilibria. In such games, only the existence of (approximate) mixed strategy equilibria is guaranteed [6]. In this work, we focus on learning local aggregative games to address some of these issues. In an aggregative game [7; 8; 9], every player gets a payoff that depends only on its own strategy and the aggregate of all the other players' strategies. Aggregative games and their generalizations form a very rich class of strategic games that subsumes Cournot oligopoly, public goods, anonymous, mean field, and cost and surplus sharing games [10; 11; 12; 13]. In a local aggregative game, a player's payoff is a function of its own strategy and the aggregate strategy of its neighbors (i.e., only a subset of the other players). We do not assume that the interactions are symmetric or confined to a tree structure, and therefore the game structure could, in general, be a spanning digraph, possibly with cycles. We consider local aggregative games where each player's payoff is a convex or submodular Lipschitz function of the aggregate of its neighbors. We prove sufficient conditions under which such games admit a pure strategy ϵ-Nash equilibrium. We then prove an information theoretic lower bound showing that, for a specified ϵ, approximating a game structure that minimizes a non-negative monotone submodular cost objective on the cardinality of the edge set may require exponentially many queries under a zero-order or value oracle model. Our result generalizes the approximability of the submodular minimum spanning tree problem to degree-constrained spanning digraphs [14].
We argue that this lower bound might be averted with a dataset of multiple ϵ-Nash equilibrium configurations sampled from the local aggregative game. We also introduce γ-aggregative games that generalize local aggregative games to accommodate the (relatively weaker) effect of players that are not neighbors. These games are shown to have a desirable stability property that makes their ϵ-Nash equilibria robust to small fluctuations in the aggregator input. We formulate learning these games as optimization problems that can be efficiently solved via branch and bound, outer approximation decomposition, or extended cutting plane methods [17; 18]. The information theoretic hardness results do not apply to our algorithms, since they have access to the (sub)gradients as well, unlike the value oracle model where only the function values may be queried. Our experiments strongly corroborate the efficacy of the local aggregative and γ-aggregative games in estimating the game structure on two real voting datasets, namely, the US Supreme Court Rulings and the Congressional Votes. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 2 Setting We consider an n-player game where each player i ∈ [n] ≜ {1, 2, . . . , n} plays a strategy (or action) from a finite set Ai. For any strategy profile a, ai denotes the strategy of the ith player, and a−i the strategies of the other players. We are interested in local aggregative games, which have the property that the payoff of each player i depends only on its own action and the aggregate action of its neighbors NG(i) = {j ∈ V(G) : (j, i) ∈ E(G)} in a connected digraph G = (V, E), where |V| = n. Since the graph is directed, the neighbors need not be symmetric, i.e., (j, i) ∈ E does not imply (i, j) ∈ E. For any strategy profile a, we will denote the strategy vector of the neighbors of player i by aNG(i).
We assume that player i has a payoff function of the form ui(ai, fG(a, i)), where fG(a, i) ≜ f(aNG(i)) is a local aggregator function, and ui is convex and Lipschitz in the aggregate fG(a, i) for all ai ∈ Ai. Since fG(a, i) may take only finitely many values, we will assume interpolation between these values such that they form a convex set. We can define the Lipschitz constant of G as

δ(G) ≜ max_{i, ai, a′−i, a″−i} { ui(ai, fG(a′, i)) − ui(ai, fG(a″, i)) },   (1)

where the vectors a′−i and a″−i differ in exactly one coordinate. Clearly, the payoff of any player in the network does not change by more than δ(G) when one of its neighbors changes its strategy. We can now talk about a class of aggregative games characterized by the Lipschitz constant: L(∆, n) = {G : |V(G)| = n, δ(G) ≤ ∆}. A strategy profile a = (ai, a−i) is said to be a pure strategy ϵ-Nash equilibrium (ϵ-PSNE) if no player can improve its payoff by more than ϵ by unilaterally switching its strategy. In other words, no player i can gain more than ϵ by playing an alternative strategy a′i while the other players continue to play a−i. More generally, instead of playing deterministic actions in response to the actions of others, each player can randomize its actions. The distributions over players' actions then constitute a mixed strategy ϵ-Nash equilibrium if any unilateral deviation could improve the expected payoff by at most ϵ. We will prove the existence of ϵ-PSNE in our setting. We will assume a training set S = {a1, a2, . . . , aM}, where each element is an ϵ-PSNE sampled from our game. Our objective is to recover the game digraph G and the payoff functions ui, i ∈ [n], from the set S. The rest of the paper is organized as follows. We first establish some important theoretical machinery for local aggregative games in Section 3. In Section 4, we introduce γ-aggregative games and show that γ-aggregators are structurally stable.
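The ϵ-PSNE condition above is easy to verify directly for small games. The following is a minimal sketch, assuming binary actions and a sum aggregator over in-neighbors (both illustrative choices, not the paper's fixed setting); `adj[i][j] == 1` encodes the edge (j, i):

```python
def is_eps_psne(a, adj, payoff, eps):
    """Check whether the binary profile `a` is an eps-PSNE.

    adj[i][j] == 1 means j is an in-neighbor of i; payoff(i, a_i, agg)
    is player i's utility given its action and its neighbors' aggregate
    (here, the sum of their actions).
    """
    n = len(a)
    for i in range(n):
        agg = sum(adj[i][j] * a[j] for j in range(n) if j != i)
        current = payoff(i, a[i], agg)
        deviation = payoff(i, 1 - a[i], agg)  # the only unilateral deviation
        if deviation - current > eps:
            return False
    return True
```

With ϵ = 0 this tests an exact pure strategy Nash equilibrium; larger ϵ tolerates the bounded incentive to deviate that the definition allows.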
We formulate the learning problem in Section 5, and describe our experimental setup and results in Section 6. We state the theoretical results in the main text, and provide the detailed proofs in the Supplementary (Section 7) for improved readability. 3 Theoretical foundations Any finite game is guaranteed to admit a mixed strategy ϵ-equilibrium due to a seminal result by Nash [6]. However, general games may not have any ϵ-PSNE (for small ϵ). We first prove a sufficient condition for the existence of ϵ-PSNE in local aggregative games with a small Lipschitz constant. A similar result holds when the payoff functions ui(·) are non-negative monotone submodular and Lipschitz (see the supplementary material for details). Theorem 1. Any local aggregative game on a connected digraph G, where G ∈ L(∆, n) and max_i |Ai| ≤ m, admits a 10∆√(ln(8mn))-PSNE. Proof. (Sketch.) The main idea behind the proof is to sample a random strategy profile from a mixed strategy Nash equilibrium of the game, and show that, with high probability, the sampled profile corresponds to an ϵ-PSNE when the Lipschitz constant is small. The proof is based on a novel application of Talagrand's concentration inequality. Theorem 1 implies the minimum degree d (which depends on the number of players n, the local aggregator function A, the Lipschitz constant ∆, and ϵ) of the game structure that ensures the existence of at least one ϵ-PSNE. One example is the following local generalization of binary summarization games [8]. Each player i plays ai ∈ {0, 1} and has access to an averaging aggregator that computes the fraction of its neighbors playing action 1. Then, the Lipschitz constant of G is 1/k, where k is the minimum in-degree of the underlying game digraph, and an ϵ-PSNE is guaranteed for k = Ω(√(ln n)/ϵ). In other words, k needs to grow only slowly (i.e., sub-logarithmically) in the number of players n.
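The 1/k claim for the averaging aggregator can be checked by brute force on small graphs. A sketch, assuming the identity payoff ui(ai, f) = f in the aggregate (an illustrative choice; the claim holds for any 1-Lipschitz payoff in the aggregate):

```python
from itertools import product

def lipschitz_constant(adj, payoff):
    """Brute-force delta(G): the largest payoff change any player sees
    when one other player flips its action, under the averaging
    aggregator (fraction of in-neighbors playing 1)."""
    n = len(adj)
    best = 0.0
    for a in product([0, 1], repeat=n):
        for i in range(n):
            nbrs = [j for j in range(n) if adj[i][j]]  # in-neighbors of i
            if not nbrs:
                continue  # assumes minimum in-degree >= 1 elsewhere
            frac_a = sum(a[j] for j in nbrs) / len(nbrs)
            for j in range(n):
                if j == i:
                    continue
                b = list(a)
                b[j] = 1 - b[j]  # one coordinate of a_{-i} flips
                frac_b = sum(b[k] for k in nbrs) / len(nbrs)
                best = max(best, abs(payoff(i, a[i], frac_a) - payoff(i, a[i], frac_b)))
    return best
```

On the complete digraph over 3 players (every in-degree 2) this returns 1/2, and on a directed 4-cycle (every in-degree 1) it returns 1, matching δ(G) = 1/k.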
An important follow-up question is to determine the complexity of recovering the underlying game structure in a local aggregative game with an ϵ-PSNE. We answer this question in a combinatorial setting with non-negative monotone submodular cost functions on the edge set cardinality. Specifically, we consider the following problem. Given a connected digraph G(V, E), a degree parameter d, and a submodular cost function h : 2^E → R+ that is normalized (i.e., h(∅) = 0) and monotone (i.e., h(S) ≤ h(T) for all S ⊆ T ∈ 2^E), we would like to find a spanning directed subgraph1 Gs of G such that h(Gs) is minimized, the in-degree of each player is at least d, and Gs admits some ϵ-Nash equilibrium when players play to maximize their individual payoffs. We first establish a technical lemma that provides tight lower and upper bounds on the probability that a directed random graph is disconnected, and thus extends a similar result for Erdős–Rényi random graphs [25] to the directed setting. The lemma will be invoked while proving a bound for the recovery problem, and might be of independent interest beyond this work. Lemma 2. Consider a directed random graph DG(n, p), where p ∈ (0, 1) is the probability of choosing any directed edge independently of the others. Define q = 1 − p, and let Pn be the probability that DG is connected. Then, the probability that DG is disconnected is 1 − Pn = n q^{2(n−1)} + O(n² q^{3n}). We will now prove an information theoretic lower bound for the recovery problem under the value oracle model [14]. A problem with an information theoretic lower bound of β has the property that any randomized algorithm that approximates the optimum to within a factor β with high probability needs to make a superpolynomial number of queries under the specified oracle model. In the value oracle model, each query Q corresponds to obtaining the cost/value of any candidate set by issuing Q to the value oracle (which acts as a black-box).
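The leading term of Lemma 2 can be sanity-checked by simulation, reading "connected" as connectivity of the underlying undirected graph (the notion used for spanning directed graphs in the paper's footnote); the isolated-vertex term n q^{2(n−1)} should dominate:

```python
import random

def disconnect_prob(n, p, trials, seed=0):
    """Monte Carlo estimate of the probability that DG(n, p) is
    disconnected (underlying undirected graph not connected)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        und = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j and rng.random() < p:  # directed edge (i, j)
                    und[i][j] = und[j][i] = True
        # depth-first search from node 0 over the undirected view
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in range(n):
                if und[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        bad += len(seen) < n
    return bad / trials
```

For n = 6 and p = 0.5 the prediction n q^{2(n−1)} = 6 · 2^{−10} ≈ 0.0059 agrees with the simulation to within the higher-order corrections.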
We invoke Yao's minimax principle [28], which relates distributional complexity and randomized complexity. Using Yao's principle, the performance of randomized algorithms can be lower bounded by proving that no deterministic algorithm can perform well on an appropriately defined distribution of hard inputs. Theorem 3. Let ϵ > 0, and α, δ ∈ (0, 1). Let n be the number of players in a local aggregative game, where each player i ∈ [n] is provided with some convex ∆-Lipschitz function ui and an aggregator A. Let Dn ≜ Dn(∆, ϵ, A, (ui)i∈[n]) be the sufficient in-degree (number of incoming edges) of each player such that the game admits some ϵ-PSNE when the players play to maximize their individual payoffs ui according to the local information provided by the aggregator A. Assume any non-negative monotone submodular cost function on the edge set cardinality. Then for any d ≥ max{Dn, n^α ln n}/(1 − α), any randomized algorithm that approximates the game structure to a factor n^{1−α}/((1 + δ)d) requires exponentially many queries under the value oracle model. Proof. (Sketch.) The main idea is to construct a digraph that has exponentially many spanning directed subgraphs, and to define two carefully designed submodular cost functions over the edges of the digraph, one of which is deterministic in the query size while the other depends on a distribution. We make it hard for a deterministic algorithm to tell one cost function from the other. This can be accomplished by ensuring two conditions: (a) the cost functions map to the same value on almost all queries, and (b) the discrepancy in the optimum value of the functions (on the optimum query) is large. The proof invokes Lemma 2, exploits the degree constraint for ϵ-PSNE, argues about the optimal query size, and appeals to Yao's minimax principle.
1A spanning directed graph spans all the vertices, and has the property that the (multi)graph obtained by replacing the directed edges with undirected edges is connected. Theorem 3 might sound pessimistic from a practical perspective; however, a closer look reveals why the query complexity turned out to be prohibitive. The proof hinged on the fact that all spanning subgraphs with the same edge cardinality that satisfied the sufficiency condition for the existence of an ϵ-PSNE were equally good with respect to our deterministic submodular function, and we created an instance with exponentially many such spanning subgraphs. However, we might be able to circumvent Theorem 3 by breaking the symmetry, e.g., by using data that specifies multiple distinct ϵ-Nash equilibria. Then, since the digraph instance would be required to satisfy these equilibria, fooling the deterministic algorithm would be more difficult. Thus data could, in principle, help us avoid the complexity result of Theorem 3. We will formulate optimization problems that enforce margin separability on the equilibrium profiles, which will further limit the number of potential digraphs and thus facilitate learning the aggregative game. Moreover, the hardness result does not apply to our estimation algorithms, which have access to the (sub)gradients in addition to the function values. 4 γ-Aggregative Games We now describe a generalization of local aggregative games, which we call γ-aggregative games. The main idea behind these games is that a player i ∈ [n] may often be influenced not only by the aggregate behavior of its neighbors, but also, to a lesser extent, by the aggregate behavior of the other players, whose influence on the payoff of i decreases as their distance to i increases. Let dG(i, j) be the number of intermediate nodes on a shortest path from j to i in the underlying digraph G = (V, E). That is, dG(i, j) = 0 if (j, i) ∈ E, and dG(i, j) = 1 + min_{k ∈ V \ {i,j}} ( dG(i, k) + dG(k, j) ) otherwise.
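The distance dG just defined can be computed with a breadth-first search that walks backwards along in-edges; a minimal sketch (the adjacency convention `adj[i][j] == 1` for edge (j, i) is ours, not the paper's):

```python
from collections import deque

def intermediaries(adj, i):
    """d_G(i, j) for every j with a path to i: the number of intermediate
    nodes on a shortest j -> i path (0 for direct in-neighbors of i)."""
    n = len(adj)
    dist = {i: -1}                 # sentinel so i's in-neighbors get level 0
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for j in range(n):
            if adj[u][j] and j not in dist:  # (j, u) is an edge of G
                dist[j] = dist[u] + 1
                queue.append(j)
    del dist[i]
    return dist
```

On the path 0 → 1 → 2, for example, node 1 is a direct in-neighbor of 2 (zero intermediaries) and node 0 reaches 2 through one intermediary.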
Let WG ≜ max_{i,j∈V} dG(i, j) be the width of G. For any strategy profile a ∈ {0, 1}^n and t ∈ {0, 1, . . . , WG}, let I^t_G(i) = {j : dG(i, j) = t} be the set of nodes that have exactly t intermediaries on a shortest path to i, and let a_{I^t_G(i)} be the strategy profile of the nodes in this set. We define aggregator functions f^t_G(a, i) ≜ f(a_{I^t_G(i)}) that return the aggregate at level t with respect to player i. Let γ ∈ (0, 1) be a discount rate. Define the γ-aggregator function

gG(a, γ, ℓ, i) ≜ ( Σ_{t=0}^{ℓ} γ^t f^t_G(a, i) ) / ( Σ_{t=0}^{ℓ} γ^t ),

which discounts the aggregates based on the distance ℓ ∈ {0, 1, . . . , WG} to i. We assume that player i ∈ [n] has a payoff function of the form ui(ai, ·), which is convex and η-Lipschitz in its second argument for each fixed ai. Finally, we define the Lipschitz constant of the γ-aggregative game as

δγ(G) ≜ max_{i, ai, a′−i, a″−i} { ui(ai, gG(a′, γ, WG, i)) − ui(ai, gG(a″, γ, WG, i)) },

where the vectors a′−i and a″−i differ in exactly one coordinate. The main criticism of the concept of ϵ-Nash equilibrium concerns its lack of stability: if any player deviates (due to the ϵ-incentive), then, in general, some other player may have a high incentive to deviate as well, resulting in a non-equilibrium profile. Worse, it may take exponentially many steps to reach an ϵ-equilibrium again. Thus, the stability of an ϵ-equilibrium is an important consideration. We will now introduce an appropriate notion of stability, and prove that γ-aggregative games admit a stable pure strategy ϵ-equilibrium, in the sense that a deviation by one player does not affect the equilibrium much. Structurally Stable Aggregator (SSA): Let G = (V, E) be a connected digraph and PG(w) be a property of G, where w denotes the parameters of PG. Let A be an aggregator function that depends on PG. Let M = (a1, a2, . . . , an) be an ϵ-PSNE when A aggregates information according to PG(w), where ai is the strategy of player i ∈ V = [n]. Suppose now that A aggregates information according to PG(w′).
Then, A is an (α, β)_{P,w,w′}-structurally stable aggregator (SSA) with respect to G, where α and β are functions of the gap between w and w′, if it satisfies the following conditions: (a) M is an (ϵ + α)-equilibrium under PG(w′), and (b) the payoff of each player at the equilibrium profile M under PG(w′) is at most β = O(α) worse than that under PG(w). An SSA with small values of α and β with respect to a small change in w is desirable, since that would discourage the players from deviating from their ϵ-equilibrium strategies; however, such an aggregator need not exist in general. The following result shows that the γ-aggregator is an SSA. Theorem 4. Let γ ∈ (0, 1), and let gG(·, ·, ℓ, ·) be the γ-aggregator defined above. Let PG(ℓ) be the property "the number of maximum permissible intermediaries on a shortest path of length ℓ in G". Then, gG is a (2ηκG, ηκG)_{P,WG,L}-SSA, where L < WG and κG depends on γ and WG − L. 5 Learning formulation We now formulate an optimization problem to recover the underlying graph structure, the parameters of the aggregator function, and the payoff functions. Let S = {a^1, a^2, . . . , a^M} be our training set, where each strategy profile a^m ∈ {0, 1}^n is an ϵ-PSNE, and a^m_i is the action of player i in example m ∈ [M]. Let f be a local aggregator function, and let a^m_{Ni} be the actions of the neighbors Ni of player i ∈ [n] on training example m. We will also represent N as a 0-1 adjacency matrix, with the interpretation that Nij = 1 implies j ∈ Ni, and Nij = 0 otherwise. We will use the notation Ni· ≜ {Nij : j ̸= i}. Note that since the underlying game structure is represented as a digraph, Nij and Nji need not be equal. Let h be a concave function such that h(0) = 0. Then Fi(h) ≜ h(|Ni|) is submodular, since a concave transformation of the cardinality function results in a submodular function. Moreover, F(h) = Σ_{i∈[n]} Fi(h) is submodular, since it is a sum of submodular functions. We will use F(h) as a sparsity-inducing prior.
Several choices of h have been advocated in the literature, including suitably normalized geometric, log, smooth log, and square root functions [15]. We denote the parameters of the aggregator function f by θf. The payoff functions depend on the choice of this parameterization. For a fixed aggregator f (such as the sum aggregator), linear parameterization is one possibility, where the payoff function for player i ∈ [n] takes the form

u^f_i(a^m, Ni·) = a^m_i wi1 (wf f(a^m_{Ni}) + bf) + (1 − a^m_i) wi0 (wf f(a^m_{Ni}) + bf),

where wi· = (wi0, wi1)⊤ and Ni· denote the independent parameters for player i, and θf = (wf, bf)⊤ are the shared parameters. Our setting is flexible, and we can easily accommodate more complex aggregators instead of the standard ones (e.g., the sum). Exchangeable functions over sets [16] provide one such example. An interesting instantiation is a neural network comprising one hidden layer and an output sum layer, with tied weights. Specifically, let W ∈ R^{n×(n−1)}, where all entries of W are equal to wNN. Let σ be an element-wise non-linearity (we used the ReLU function, σ(x) = max{x, 0}, in our experiments). Then, using the element-wise multiplication operator ⊙ and a vector 1 of all ones, ui may be expressed as

u^{fNN}_i(a^m, Ni·) = a^m_i wi1 fNN(a^m_{Ni}) + (1 − a^m_i) wi0 fNN(a^m_{Ni}),

where the permutation-invariant neural aggregator, parameterized by θ_{fNN} = (wNN, bNN)⊤, is fNN(a^m_{Ni}) = 1⊤ σ(W a^m_{−i} ⊙ Ni· + bNN). We could use more complex functions, such as deeper neural nets with parameter sharing, at the expense of increased computation. We believe this versatility makes local aggregative games particularly attractive, and provides a promising avenue for modeling structured strategic settings. Each a^m is an ϵ-PSNE, so it ensures a locally (near) optimal reward for each player. We will impose a margin constraint on the difference in the payoffs when player i unilaterally deviates from a^m_i. Note that Ni = {j ∈ Ni· : Nij = 1}.
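These margin constraints amount to computing, for each player and each observed profile, the slack needed to satisfy the margin. A minimal sketch, assuming binary actions, a sum aggregator, a generic payoff callback, and the reading that the unilateral deviation flips only player i's own action (all names are illustrative):

```python
def margin_slacks(profiles, adj, payoff, margin):
    """xi^m_i = max(0, margin - (u_i(a^m) - u_i(a^m with a_i flipped))):
    the slack needed for profile m to satisfy player i's margin
    constraint, given in-neighbors adj[i][j] == 1 for edge (j, i)."""
    slacks = []
    for a in profiles:
        n = len(a)
        row = []
        for i in range(n):
            agg = sum(adj[i][j] * a[j] for j in range(n) if j != i)
            gap = payoff(i, a[i], agg) - payoff(i, 1 - a[i], agg)
            row.append(max(0.0, margin - gap))
        slacks.append(row)
    return slacks
```

The largest slack over all players and examples then plays the role of ϵ: profiles with all slacks zero satisfy every margin constraint exactly.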
Then, introducing slack variables ξ^m_i and hyperparameters C, C′, Cf > 0, we obtain the following optimization problem in O(n²) variables:

min_{θf, w1·, . . . , wn·, N1·, . . . , Nn·, ξ}  (1/2) Σ_{i=1}^n ∥wi·∥² + (Cf / 2M) ∥θf∥² + (C′/n) Σ_{i=1}^n Fi(h) + (C/M) Σ_{i=1}^n Σ_{m=1}^M ξ^m_i
s.t. ∀i ∈ [n], m ∈ [M] : u^f_i(a^m, Ni·) − u^f_i(1 − a^m, Ni·) ≥ e(a^m, a′) − ξ^m_i
     ∀i ∈ [n], m ∈ [M] : ξ^m_i ≥ 0
     ∀i ∈ [n] : Ni· ∈ {0, 1}^{n−1},

where a^m and a′ differ in exactly one coordinate, and e is a margin-specific loss term, such as the Hamming loss eH(a, ã) = 1{a ̸= ã} or the scaled 0-1 loss es(a, ã) = 1{a ̸= ã}/n. From a game theoretic perspective, the scaled loss has a natural asymptotic interpretation: as the number of players n → ∞, es(a^m, a′) → 0, and we get ∀i ∈ [n], m ∈ [M] : u^f_i(a^m, Ni·) ≥ u^f_i(1 − a^m, Ni·) − ξ^m_i, i.e., each training example a^m is an ϵ-PSNE with ϵ = max_{i∈[n], m∈[M]} ξ^m_i. Once θf is fixed, the problem becomes separable, i.e., each player i can solve an independent sub-problem in O(n) variables. Each sub-problem includes both continuous and binary variables, and may be solved via branch and bound, outer approximation decomposition, or extended cutting plane methods (see [17; 18] for an overview of these techniques). The individual solutions can be forced to agree on θf via a standard dual decomposition procedure, and methods like the alternating direction method of multipliers (ADMM) [19] can be leveraged to facilitate rapid agreement of the continuous parameters wf and bf. The extension to learning γ-aggregative games is immediate. We now describe some other optimization variants for local aggregative games. Instead of constraining each player to a hard neighborhood, one might relax the constraints Nij ∈ {0, 1} to Nij ∈ [0, 1], where Nij is interpreted as the strength of the edge (j, i). The Lovász convex relaxation of F [20] is a natural prior for inducing sparsity in this case. Specifically, for an ordering of values |Ni(0)| ≥ |Ni(1)| ≥ · · ·
≥ |Ni(n−1)|, i ∈ [n], this prior is given by

Γh(N) = Σ_{i=1}^n Γh(N, i),  where  Γh(N, i) = Σ_{k=0}^{n−1} [h(k + 1) − h(k)] |Ni(k)|.

Since the transformation h encodes the preference for each degree, Γh(N) acts as a prior that encourages structured sparsity. One might also enforce other constraints on the structure of the local aggregative game. For instance, an undirected graph can be obtained by adding the constraints Nij = Nji for i ∈ [n], j ̸= i. Likewise, a minimum in-degree constraint may be enforced on player i by requiring Σ_j Nij ≥ d. Both these constraints are linear in Ni·, and thus do not add to the complexity of the problem. Finally, based on cues such as domain knowledge, one may wish to add a degree of freedom by not enforcing sharing of the aggregator parameters among the players. 6 Experiments We now present strong empirical evidence demonstrating the efficacy of local aggregative games in unraveling the aggregative game structure of two real voting datasets, namely, the US Supreme Court Rulings and the Congressional Votes. Our experiments span the different variants for recovering the structure of aggregative games, including settings where (a) the parameters of the aggregator are learned along with the payoffs, (b) the in-degree of each node is lower bounded, (c) γ-discounting is used, or (d) the parameters of the aggregator are fixed. We also demonstrate that our method compares favorably with the potential games method for tree-structured games [4], even when we relax the digraph setting to let the weights Nij take values in [0, 1] instead of {0, 1}, or force the game structure to be undirected by adding the constraints Nij = Nji. For our purposes, we used the smoothed square-root concave function h(i) = √(i + 1) − 1 + αi, parameterized by α, the sum and neural aggregators, and the scaled 0-1 loss function es(a, ã) = 1{a ̸= ã}/n. We found our model to perform well across a very wide range of hyperparameters.
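For a single player's relaxed neighborhood weights, Γh(N, i) is a sort-and-weight computation. A minimal sketch using the smoothed square-root h from the experiments; note that on binary rows it reduces to h(|Ni|), matching the submodular prior Fi(h):

```python
import math

def h(k, alpha=1.0):
    """Smoothed square-root concave function used in the experiments."""
    return math.sqrt(k + 1) - 1 + alpha * k

def gamma_prior_row(row, alpha=1.0):
    """Gamma_h(N, i): sort |N_ij| in decreasing order and weight the k-th
    largest value by the marginal cost h(k+1) - h(k)."""
    vals = sorted((abs(v) for v in row), reverse=True)
    return sum((h(k + 1, alpha) - h(k, alpha)) * vals[k] for k in range(len(vals)))
```

Because h is concave, the marginal costs h(k+1) − h(k) decrease in k, so the largest weights pay the largest penalties; this is what pushes each relaxed neighborhood toward few strong edges.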
All the experiments described below used the following setting of values: α = 1, C = 100, and Cf = 1. C′ was set to 0.01 in all settings except when the parameters of the aggregator were fixed, in which case we set C′ = 0.01√n. Figure 1: Supreme Court Rulings (full bench): The digraph recovered by the local aggregative and γ-aggregative games (ℓ ≤ 2, all γ) with the sum aggregator as well as the neural aggregator is consistent with the known behavior of the Justices: the conservative and liberal sides of the bench are well segregated from each other, while the moderate Justice Kennedy is positioned near the center. Numbers on the arrows are taken from an independent study [21] on the Justices' mutual voting patterns. 6.1 Dataset 1: Supreme Court Rulings We experimented with a dataset containing all non-unanimous rulings by the US Supreme Court bench during the year 2013. We denote the Justices of the bench by their last name initials, adding a second character to some names to avoid conflicts in the initials: Alito (A), Breyer (B), Ginsburg (G), Kennedy (K), Kagan (Ka), Roberts (R), Scalia (S), Sotomayor (So), and Thomas (T). We obtained a binary dataset following the procedure described in [4]. Figure 2: Comparison with the potential games method [4]. Panels: (a) Local Aggregative, (b) Potential Exhaustive Enumeration, (c) Local Aggregative (Undirected & Relaxed), (d) Potential Hamming. (a) The digraph produced by our method with the sum as well as the neural aggregator is consistent with the expected voting behavior of the Justices on the data used by [4] in their experiments. (c) Relaxing all Nij ∈ [0, 1] and enforcing Nij = Nji still resulted in a meaningful undirected structure.
(b) & (d) The tree structures obtained by the brute-force and the Hamming-distance-restricted methods [4] fail to capture higher-order interactions, e.g., the strongly connected component between Justices A, T, S, and R.

[Figure 3 diagrams, panels: (a) Local Aggregative (d ≥ 2); (b) γ-aggregative (ℓ = 2, γ = 0.9).]

Figure 3: Degree-constrained and γ-aggregative games: (a) Enforcing the degree of each node to be at least 2 reinforces the intra-Republican and the intra-Democrat affinity, reaffirming their respective jurisprudences, and (b) γ-aggregative games also support this observation: the same digraph as Fig. 2(a) is obtained unless ℓ and γ are set to high values (plot generated with ℓ = 2, γ = 0.9), when the strong effect of one-hop and two-hop neighbors overpowers the direct connection between B and G.

Fig. 1 shows the structure recovered by the local aggregative method. The method was able to distinguish the conservative side of the court (Justices A, R, S, and T) from the liberal side (B, G, Ka, and So). Also, the structure places Justice Kennedy in between the two extremes, which is consistent with his moderate jurisprudence. To put our method in perspective, we also compare the result of applying our method on the same subset of the full-bench data that was considered by [4] in their experiments. Fig. 2 demonstrates how the local aggregative approach estimated meaningful structures consistent with the full-bench structure, and compared favorably with both methods of [4]. Finally, Fig. 3(a) and 3(b) demonstrate the effect of enforcing minimum in-degree constraints in the local aggregative games, and of increasing ℓ and γ in the γ-aggregative games, respectively. As expected, the estimated γ-aggregative structure is stable unless γ and ℓ are set to high values, when non-local effects kick in. We provide some additional results on the degree-constrained local aggregative games (Fig. 4) and the γ-aggregative games (Fig. 5).
In particular, we see that the γ-aggregative games are indeed robust to small changes in the aggregator input, as expected in light of the stability result of Theorem 4.

[Figure 4 diagram: digraph over the nine Justices (conservative and liberal clusters), as in Figure 1.]

Figure 4: Degree-constrained local aggregative games (full bench): The digraph recovered by the local aggregative method when the degree of each node was constrained to be at least 2. Clearly, the cohesion among the Justices on the conservative side was strengthened by the degree constraint (likewise for the liberal side of the bench). On the other hand, no additional edges were added between the two sides.

[Figure 5 diagram: digraph over the nine Justices (conservative and liberal clusters), as in Figure 1.]

Figure 5: γ-Aggregative games (full bench): The digraph estimated by the γ-aggregative method for ℓ = 2, γ = 0.9, and lower values of γ and/or ℓ. Note that an identical structure was obtained by the local aggregative method (Fig. 1). This indicates that despite heavily weighting the effect of the nodes on a shortest path with one or two intermediary hops, the structure in Fig. 1 is very stable. This also substantiates our theoretical result about the stability of the γ-aggregative games.

6.2 Dataset 2: Congressional Votes

We also experimented with the Congressional Votes dataset [22], which contains the votes by the US Senators on all the bills of the 110th US Congress, Session 2. Each of the 100 Senators voted in favor of (treated as 1) or against (treated as 0) each bill. Fig. 6 shows that the local aggregative method provides meaningful insights into the voting patterns of the Senators as well. In particular, few connections exist between the nodes in red and those in blue, making the bipartisan structure quite apparent. In some cases, the intra-party connections might be bolstered by same-state affiliations, e.g., Senators Corker (28) and Alexander (2) represent Tennessee.
The cross connections may also capture some interesting collaborations or influences, e.g., Senators Allard (3) and Clinton (22) introduced the Autism Act. Likewise, Collins (26) and Carper (19) reintroduced the Fire Grants Reauthorization Act. The potential games methods [4] failed to estimate some of these strategic interactions. Likewise, Fig. 7 provides some interesting insights into Senators who follow a more centrist ideology than their respective political affiliations would suggest.

[Figure 6 diagram: digraph over 30 numbered Senators, with the Republicans and the Democrats forming separate clusters.]

Figure 6: Comparison with [4] on the Congressional Votes data: The digraph recovered by the local aggregative method, on the data used by [4], when the parameters of the sum aggregator were fixed (w_f = 1, b_f = 0). The segregation between the Republicans (shown in red) and the Democrats (shown in blue) strongly suggests that they are aligned according to their party policies.

Figure 7: Complete Congressional Votes data: The digraph recovered on fixing the parameters, relaxing N_ij to [0, 1], and thresholding at 0.05. The estimated structure not only separates the majority of the reds from the blues, but also closely associates the then-independent Senators Sanders (82) and Lieberman (62) with the Democrats. Moreover, the few reds among the blues generally identify with a more centrist ideology; Collins (26) and Snowe (87) are two prominent examples.

Conclusion

An overwhelming majority of the literature on machine learning is restricted to modeling non-strategic settings. Strategic interactions in several real-world systems, such as decision-making and voting, often exhibit local structure in terms of how players are guided by or respond to each other. In other words, different agents make rational moves in response to their neighboring agents, leading to locally stable configurations such as Nash equilibria. Another challenge with modeling strategic settings is that they are invariably unsupervised.
Consequently, standard learning techniques such as structured prediction that enforce global consistency constraints fall short in such settings (cf. [4]). As substantiated by our experiments, local aggregative games nicely encapsulate various strategic applications, and could be leveraged as a tool to glean important insights from voting data. Furthermore, the stability of approximate equilibria is a primary consideration from a conceptual viewpoint, and the γ-aggregative games introduced in this work add a fresh perspective by achieving structural stability.

References

[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data, ICML, 2001.
[2] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks, NIPS, 2003.
[3] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables, JMLR, 6(2), pp. 1453-1484, 2005.
[4] V. K. Garg and T. Jaakkola. Learning Tree Structured Potential Games, NIPS, 2016.
[5] C. Daskalakis and C. H. Papadimitriou. Approximate Nash equilibria in anonymous games, Journal of Economic Theory, 156, pp. 207-245, 2015.
[6] J. Nash. Non-Cooperative Games, Annals of Mathematics, 54(2), pp. 286-295, 1951.
[7] R. Selten. Preispolitik der Mehrproduktenunternehmung in der Statischen Theorie, Springer-Verlag, 1970.
[8] M. Kearns and Y. Mansour. Efficient Nash computation in large population games with bounded influence, UAI, 2002.
[9] R. Cummings, M. Kearns, A. Roth, and Z. S. Wu. Privacy and truthful equilibrium selection for aggregative games, WINE, 2015.
[10] R. Cornes and R. Hartley. Fully Aggregative Games, Economics Letters, 116, pp. 631-633, 2012.
[11] W. Novshek. On the Existence of Cournot Equilibrium, Review of Economic Studies, 52, pp. 86-98, 1985.
[12] M. K. Jensen. Aggregative Games and Best-Reply Potentials, Economic Theory, 43, pp. 45-66, 2010.
[13] J. M. Lasry and P. L. Lions. Mean field games, Japanese Journal of Mathematics, 2(1), pp. 229-260, 2007.
[14] G. Goel, C. Karande, P. Tripathi, and L. Wang. Approximability of Combinatorial Problems with Multiagent Submodular Cost Functions, FOCS, 2009.
[15] A. J. Defazio and T. S. Caetano. A convex formulation for learning scale-free networks via submodular relaxation, NIPS, 2012.
[16] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. Salakhutdinov, and A. Smola. Deep Sets, arXiv:1703.06114, 2017.
[17] P. Bonami et al. An algorithmic framework for convex mixed integer nonlinear programs, Discrete Optimization, 5(2), pp. 186-204, 2008.
[18] P. Bonami, M. Kilinç, and J. Linderoth. Algorithms and Software for Convex Mixed Integer Nonlinear Programs, Mixed Integer Nonlinear Programming, The IMA Volumes in Mathematics and its Applications, 154, Springer, 2012.
[19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3, 2011.
[20] F. Bach. Structured sparsity-inducing norms through submodular functions, NIPS, 2010.
[21] J. Bowers, A. Liptak, and D. Willis. Which Supreme Court Justices Vote Together Most and Least Often, The New York Times, 2014.
[22] J. Honorio and L. Ortiz. Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data, JMLR, 16, pp. 1157-1210, 2015.
[23] Y. Azrieli and E. Shmaya. Lipschitz Games, Mathematics of Operations Research, 38(2), pp. 350-357, 2013.
[24] E. Kalai. Large robust games, Econometrica, 72(6), pp. 1631-1665, 2004.
[25] E. N. Gilbert. Random Graphs, The Annals of Mathematical Statistics, 30(4), pp. 1141-1144, 1959.
[26] W. Feller. An Introduction to Probability Theory and its Applications, Vol. 1, Second edition, Wiley, 1957.
[27] M.-F. Balcan and N. J. A. Harvey. Learning Submodular Functions, STOC, 2011.
[28] A. Yao. Probabilistic computations: Toward a unified measure of complexity, FOCS, 1977.
[29] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions, FOCS, 2007.
[30] M. X. Goemans, N. J. A. Harvey, S. Iwata, and V. S. Mirrokni. Approximating submodular functions everywhere, SODA, 2009.
[31] Z. Svitkina and L. Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds, FOCS, 2008.
An Error Detection and Correction Framework for Connectomics

Jonathan Zung, Princeton University, jzung@princeton.edu
Ignacio Tartavull, Princeton University, tartavull@princeton.edu
Kisuk Lee, Princeton University and MIT, kisuklee@mit.edu
H. Sebastian Seung, Princeton University, sseung@princeton.edu

Abstract

We define and study error detection and correction tasks that are useful for 3D reconstruction of neurons from electron microscopic imagery, and for image segmentation more generally. Both tasks take as input the raw image and a binary mask representing a candidate object. For the error detection task, the desired output is a map of split and merge errors in the object. For the error correction task, the desired output is the true object. We call this object mask pruning, because the candidate object mask is assumed to be a superset of the true object. We train multiscale 3D convolutional networks to perform both tasks. We find that the error-detecting net can achieve high accuracy. The accuracy of the error-correcting net is enhanced if its input object mask is “advice” (union of erroneous objects) from the error-detecting net.

1 Introduction

While neuronal circuits can be reconstructed from volumetric electron microscopic imagery, the process has historically [30] and even recently [28] been highly laborious. One of the most time-consuming reconstruction tasks is the tracing of the brain's “wires,” or neuronal branches. This task is an example of instance segmentation, and can be automated through computer detection of the boundaries between neurons. Convolutional nets were first applied to neuronal boundary detection a decade ago [10, 29]. Since then convolutional nets have become the standard approach, and the accuracy of boundary detection has become impressively high [31, 1, 15, 6]. Given the low error rates, it becomes helpful to think of subsequent processing steps in terms of modules that detect and correct errors.
In the error detection task (Figure 1a), the input is the raw image and a binary mask that represents a candidate object. The desired output is a map containing the locations of split and merge errors in the candidate object. Related work on this problem has been restricted to the detection of merge errors only, by either hand-designed [18] or learned [24] computations. However, a typical segmentation contains both split and merge errors, so it is desirable to include both in the error detection task. In the error correction task (Figure 1b), the input is again the raw image and a binary mask that represents a candidate object. The candidate object mask is assumed to be a superset of a true object, which is the desired output. With this assumption, error correction is formulated as object mask pruning. Object mask pruning can be regarded as the splitting of undersegmented objects to create true objects. In this sense, it is the opposite of agglomeration, which merges oversegmented objects to create true objects [11, 21]. Object mask pruning can also be viewed as the subtraction of voxels from an object to create a true object. In this sense, it is the opposite of a flood-filling net [13, 12] or MaskExtend [18], each iteration of which is the addition of voxels to an object to create a true object. Iterative mask extension has been studied in other work on instance segmentation in computer vision [25, 23]. The task of generating an object mask de novo from an image has also been studied in computer vision [22].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Error detection and correction tasks. (a) Error detection task for split (top) and merge (bottom) errors. The desired output is an error map (red). A voxel in the error map is red if and only if a window centered on it contains a split or merge error. We also consider a variant of the task in which the object mask is the sole input; the grayscale image is not used. (b) The object mask pruning task. The input mask is assumed to be a superset of a true object. The desired output (right) is the true object containing the central voxel (black dot). In the first case there is nothing to prune, while in the second case the object not overlapping the central voxel is erased. For both tasks, the inputs are a candidate object mask (blue) and the original image (grayscale). Note that the diagrams are 2D for illustrative purposes, but in reality the inputs and outputs are 3D.

We implement both error detection and error correction using 3D multiscale convolutional networks. One can imagine multiple uses for these nets in a connectomics pipeline. For example, the error-detecting net could be used to reduce the amount of labor required for proofreading by directing human attention to locations in the image where errors are likely. This labor reduction could be substantial because the declining error rate of automated segmentation has made it more time-consuming for a human to find an error. We show that the error-detecting net can provide “advice” to the error-correcting net in the following way. To create the candidate object mask for the error-correcting net from a baseline segmentation, one can simply take the union of all erroneous segments as found by the error-detecting net. Since the error rate in the baseline segmentation is already low, this union is small and it is easy to select out a single object. The idea of using the error detector to choose locations for the error corrector was proposed previously, though not actually implemented [18]. Furthermore, the idea of using the error detector to not only choose locations but also provide “advice” is novel as far as we know. We contend that our approach decomposes the neuron segmentation problem into two strictly easier pieces. First, we hypothesize that recognizing an error is much easier than producing the correct answer.
Indeed, humans are often able to detect errors using only morphological cues such as abrupt terminations of axons, but may have difficulty actually finding the correct extension. On the other hand, if the error-detection network has high accuracy and the initial set of errors is sparse, then the error correction module only needs to prune away a small number of irrelevant parts from the candidate mask described above. This contrasts with the flood-filling task, which involves an unconstrained search for new parts to add. Given that most voxels are not part of the object to be reconstructed, an upper bound on the object is usually more informative than a lower bound. As an added benefit, selective application of the error correction module near likely errors makes efficient use of our computational budget [18]. In this paper, we support the intuition above by demonstrating high-accuracy detection of both split and merge errors. We also demonstrate a complete implementation of the stated error detection-correction framework, and report significant improvements upon our baseline segmentation. Some of the design choices we made in our neural networks may be of interest to other researchers. Our error-correcting net is trained to produce a vector field via metric learning instead of directly producing an object mask. The vector field resembles a semantic labeling of the image, so this approach blurs the distinction between instance and semantic segmentation. This idea is relatively new in computer vision [7, 4, 3]. Our multiscale convolutional net architecture, while similar in spirit to the popular U-Net [26], has some novelty. With proper weight sharing, our model can be viewed as a feedback recurrent convolutional net unrolled in time (see the appendix for details). Although our model architecture is closely related to the independent works of [27, 9, 5], we contribute a feedback recurrent convolutional net interpretation.
2 Error detection

2.1 Task specification: detecting split and merge errors

Given a single segment in a proposed segmentation presented as an object mask Obj, the error detection task is to produce a binary image called the error map, denoted Err_{px×py×pz}(Obj). The definition of the error map depends on a choice of a window size px × py × pz. A voxel i in the error map is 0 if and only if the restriction of the input mask to a window centred at i of size px × py × pz is voxel-wise equal to the restriction of some object in the ground truth. Observe that the error map is sensitive to both split and merge errors. A smaller window size allows us to localize errors more precisely. On the other hand, if the window radius is less than the width of a typical boundary between objects, it is possible that two objects participating in a merge error never appear in the same window. These merge errors would not be classified as an error in any window. We could use a less stringent measure than voxel-wise equality that disregards small perturbations of the boundaries of objects. However, our proposed segmentations are all composed of the same building blocks (supervoxels) as the ground truth segmentation, so this is not an issue for us. We define the combined error map as Σ_Obj Err(Obj) ∗ Obj, where ∗ represents pointwise multiplication. In other words, we restrict the error map for each object to the object itself, and then sum the results. The figures in this paper show the combined error map.

2.2 Architecture of the error-detecting net

We take a fully supervised approach to error detection. We implement error detection using a multiscale 3D convolutional network. The architecture is detailed in Figure 2. Its design is informed by experience with convolutional networks for neuronal boundary detection (see [15]) and reflects recent trends in neural network design [26, 8].
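To make the window-based definition of Section 2.1 concrete, here is a minimal 2D numpy sketch of the error map; the paper's version is 3D, and the function and variable names are illustrative:

```python
import numpy as np

def error_map(obj, gt_labels, window=(5, 5)):
    """A pixel is error-free (0) iff the candidate mask restricted to the
    window around it equals the restriction of some ground-truth object
    (label 0 is treated as background)."""
    H, W = obj.shape
    ry, rx = window[0] // 2, window[1] // 2
    err = np.zeros_like(obj, dtype=np.uint8)
    for y in range(ry, H - ry):
        for x in range(rx, W - rx):
            win = np.s_[y - ry:y + ry + 1, x - rx:x + rx + 1]
            patch = obj[win].astype(bool)
            if not patch.any():
                continue  # empty restriction matches any object absent from the window
            match = any(np.array_equal(patch, gt_labels[win] == lab)
                        for lab in np.unique(gt_labels[win]) if lab != 0)
            if not match:
                err[y, x] = 1
    return err
```

A split error is flagged because the candidate patch is a strict subset of every ground-truth object's restriction; a merge error because it is a strict superset.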
The network's field of view is Px × Py × Pz = 318 × 318 × 33 (which is roughly cubic in physical size given the anisotropic resolution of our dataset). The network computes (a downsampling of) Err_{46×46×7}. At test time, we perform inference in overlapping windows and conservatively blend the output from overlapping windows using a maximum operation. We trained two variants, one of which takes as input only Obj, and another which additionally receives the raw image.

3 Error correction

3.1 Task specification: object mask pruning

Given an image patch of size Px × Py × Pz and a candidate object mask of the same dimensions, the object mask pruning task is to erase all voxels which do not belong to the true object overlapping the central voxel. The candidate object mask is assumed to be a superset of the true object.

[Figure 2 diagrams: multiscale net architectures with input 318×318×33 (2 channels) and output 18×18×7; per-layer feature-map counts, filter sizes, and strides omitted here.]

Figure 2: Architectures for the error-detecting and error-correcting nets, respectively. Each node represents a layer and the number inside represents the number of feature maps. The layers closer to the top of the diagram have lower resolution than the layers near the bottom. We make savings in computation by minimizing the number of high-resolution feature maps. The diagonal arrows represent strided convolutions, while the horizontal arrows represent skip connections. Associated with the diagonal arrows, black numbers indicate filter size and red numbers indicate strides in x × y × z. Due to the anisotropy of the resolution of the images in our dataset, we design our nets so that the first convolutions are exclusively 2D while later convolutions are 3D. The field of view of a unit in the higher layers is therefore roughly cubic.
To limit the number of parameters in our model, we factorize all 3D convolutions into a 2D convolution followed by a 1D convolution in the z-dimension. We also use weight sharing between some convolutions at the same height. Note that the error-correcting net is a prolonged, symmetric version of the error-detecting net. For more detail on the error corrector, see the appendix.

3.2 Architecture of the error-correcting net

Yet again, we implement error correction using a multiscale 3D convolutional network. The architecture is detailed in Figure 2. One difficulty with training a neural network to reconstruct the object containing the central voxel is that the desired output can change drastically as the central voxel moves between objects. We use an intermediate representation whose role is to soften this dependence on the location of the central voxel. The desired intermediate representation is a k = 6 dimensional vector v(x, y, z) at each point (x, y, z) such that points within the same object have similar vectors and points in different objects have different vectors. We transform this vector field into a binary image M representing the object overlapping the central voxel as follows:

M(x, y, z) = exp(−‖v(x, y, z) − v(0, 0, 0)‖²),

where (0, 0, 0) is the central voxel. When an over-segmentation is available, we replace v(0, 0, 0) with the average of v over the supervoxel containing the central voxel. This trick makes it unnecessary to centre our windows far away from a boundary, as was necessary in [13]. Note that we backpropagate through the transform M, so the vector representation may be seen as an implementation detail and the final output of the network is just a (soft) binary image.

Figure 3: An example of a mistake in the initial segmentation. The dendrite is missing a spine. The red overlay on the left shows the combined error map (defined in Section 2.1); the stump in the centre of the image was clearly marked as an error.

Figure 4: The right shows all objects which contained a detected error in the vicinity. For clarity, each supervoxel was drawn with a different colour, and the objects were clipped to lie within the white box representing the field of view of our error correction network. The union of these objects is the binary mask which is provided as input to the error correction network. The output of the error correction network is overlaid in blue on the left.

Figure 5: The supervoxels assembled in accordance with the output of the error correction network.

4 How the error detector can “advise” the error corrector

Suppose that we would like to correct the errors in a baseline segmentation. Obviously, the error-detecting net can be used to find locations where the error-correcting net can be applied [18]. Less obviously, the error-detecting net can be used to construct the object mask that is the input to the error-correcting net. We refer to this object mask as the “advice mask”; its construction is important because the baseline object to be corrected might contain split as well as merge errors, while the object mask pruning task can correct only merge errors. The advice mask is defined as the union of the baseline object at the central pixel with all other baseline objects in the window that contain errors as judged by the error-detecting net. The advice mask is a superset of the true object overlapping the central voxel, assuming that the error-detecting net makes no mistakes. Therefore advice is suitable as an input to the object mask pruning task. The details of the above procedure are as follows. We begin with an initial baseline segmentation whose remaining errors are assumed to be sparsely distributed. During the error correction phase, we iteratively update a segmentation represented as the connected components of a graph G whose vertices are segments in a strict over-segmentation (henceforth called supervoxels).
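A minimal numpy sketch of the vector-field-to-mask transform M = exp(−‖v − v(0,0,0)‖²) from Section 3.2, including the supervoxel-averaging trick (shapes and names are ours):

```python
import numpy as np

def mask_from_vectors(v, supervoxels=None):
    """v has shape (X, Y, Z, k). v0 is the vector at the central voxel, or the
    mean vector over the supervoxel containing it when an over-segmentation
    is supplied."""
    cx, cy, cz = (s // 2 for s in v.shape[:3])
    if supervoxels is None:
        v0 = v[cx, cy, cz]
    else:
        # average v over the supervoxel containing the central voxel
        v0 = v[supervoxels == supervoxels[cx, cy, cz]].mean(axis=0)
    return np.exp(-np.sum((v - v0) ** 2, axis=-1))
```

Voxels whose vectors agree with the central voxel's get values near 1; voxels in other objects decay toward 0.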
We also maintain the combined error map associated with the current segmentation. We binarize the error map by thresholding it at 0.25. Now we iteratively choose a location ℓ = (x, y, z) which has value 1 in the binarized combined error map. In a Px × Py × Pz window centred on ℓ, we prepare an input for the error corrector by taking the union of all segments containing at least one white voxel in the error map. The error correction network produces from this input a binary image M representing the object containing the central voxel. For each supervoxel S touching the central Px/2 × Py/2 × Pz/2 window, let M(S) denote the average value of M inside S. If M(S) ∉ [0.1, 0.9] for all S in the relevant window (i.e., the error corrector is confident in its prediction for each supervoxel), we add to G a clique on {S | M(S) > 0.9} and delete from G all edges between {S | M(S) < 0.1} and {S | M(S) > 0.9}. The effect of these updates is to change G to locally agree with M. Finally, we update the combined error map by applying the error detector at all locations where its decision could have changed. We iterate until every location is zero in the error map or has been covered by a window at least t = 2 times by the error corrector. This stopping criterion guarantees that the algorithm terminates. In practice, the segmentation converges without this auxiliary stopping condition to a state in which the error corrector fails the confidence threshold everywhere. However, it is hard to certify convergence, since it is possible that the error corrector could give different outputs on slightly shifted windows. Based on our validation set, increasing t beyond 2 did not measurably improve performance. Note that this algorithm deals with split and merge errors, but cannot fix errors already present at the supervoxel level.
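One iteration of the graph update described above can be sketched as follows, assuming G is a plain adjacency dict over supervoxels and M_sv holds the corrector's mean output M(S) per supervoxel in the window (thresholds are the paper's 0.1/0.9):

```python
import itertools

def update_graph(G, M_sv, hi=0.9, lo=0.1):
    """Apply the update only when the corrector is confident about every
    supervoxel, i.e. no mean output lies inside [lo, hi]."""
    if any(lo <= m <= hi for m in M_sv.values()):
        return False                      # not confident; leave G unchanged
    keep = {s for s, m in M_sv.items() if m > hi}
    drop = {s for s, m in M_sv.items() if m < lo}
    for a, b in itertools.combinations(keep, 2):
        G[a].add(b); G[b].add(a)          # clique on confidently kept supervoxels
    for a in keep:
        for b in drop & G[a]:
            G[a].discard(b); G[b].discard(a)  # cut kept-vs-discarded edges
    return True
```

The segmentation itself would then be read off as the connected components of G.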
5 Experiments

5.1 Dataset

Our dataset is a sample of mouse primary visual cortex (V1) acquired using serial-section transmission electron microscopy at the Allen Institute for Brain Science. The voxel resolution is 3.6 nm × 3.6 nm × 40 nm. Human experts used the VAST software tool [14, 2] to densely reconstruct multiple volumes that amounted to 530 Mvoxels of ground truth annotation. These volumes were used to train a neuronal boundary detection network (see the appendix for the architecture). We applied the resulting boundary detector to a larger volume of size 4800 Mvoxels to produce a preliminary segmentation, which was then proofread by the tracers. This bootstrapped ground truth was used to train the error detector and corrector. A subvolume of size 910 Mvoxels was reserved for validation, and a subvolume of size 910 Mvoxels was reserved for testing. Producing the gold-standard segmentation required a total of ∼560 tracer hours, while producing the bootstrapped ground truth required ∼670 tracer hours.

5.2 Baseline segmentation

Our baseline segmentation was produced using a pipeline of multiscale convolutional networks for neuronal boundary detection, watershed, and mean affinity agglomeration [15]. We describe the pipeline in detail in the appendix. The segmentation performance values reported for the baseline are taken at a mean affinity agglomeration threshold of 0.23, which minimizes the variation of information error metric [17, 20] on the test volumes.

5.3 Training procedures

Sampling procedure. Here we describe our procedure for choosing a random point location in a segmentation. Uniformly random sampling is unsatisfactory, since large objects such as dendritic shafts would be overrepresented. Instead, given a segmentation, we sample a location (x, y, z) with probability inversely proportional to the fraction of a window of size 128 × 128 × 16 centred at (x, y, z) which is occupied by the object containing the central voxel.
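The sampling procedure can be sketched densely for a small toy volume (names are ours; a real pipeline would compute the window occupancy fractions far more efficiently than this triple loop):

```python
import numpy as np

def sample_locations(seg, window=(16, 128, 128), n=1, rng=None):
    """Sample voxels with probability inversely proportional to the fraction
    of the surrounding window occupied by the central voxel's object.
    seg has shape (Z, Y, X); the paper's window is 128 x 128 x 16 in x, y, z."""
    rng = rng or np.random.default_rng()
    Z, Y, X = seg.shape
    rz, ry, rx = (w // 2 for w in window)
    weights = np.zeros(seg.shape)
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                win = seg[max(z - rz, 0):z + rz + 1,
                          max(y - ry, 0):y + ry + 1,
                          max(x - rx, 0):x + rx + 1]
                # fraction of the window covered by the central voxel's object
                weights[z, y, x] = 1.0 / np.mean(win == seg[z, y, x])
    p = (weights / weights.sum()).ravel()
    idx = rng.choice(seg.size, size=n, p=p)
    return np.unravel_index(idx, seg.shape)
```

Small objects occupy a small fraction of their windows, so their voxels receive proportionally larger weights.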
Training of the error detector. An initial segmentation containing errors was produced using our baseline neuronal boundary detector combined with mean affinity agglomeration at a threshold of 0.3. Point locations were sampled according to the sampling procedure specified in Section 5.3. We augmented all of our data with rotations and reflections. We used a pixelwise cross-entropy loss.

Training of the error corrector. We sampled locations in the ground truth segmentation as in Section 5.3. At each location ℓ = (x, y, z), we generated a training example as follows. Let Obj_ℓ be the ground truth object touching ℓ. We selected a random subset of the objects in the window centred on ℓ, including Obj_ℓ. To be specific, we chose a number p uniformly at random from [0, 1], and then selected each segment in the window with probability p, in addition to Obj_ℓ. The input at ℓ was then a binary mask representing the union of the selected objects along with the raw EM image, and the desired output was a binary mask representing only Obj_ℓ. The dataset was augmented with rotations, reflections, simulated misalignments, and missing sections [15]. We used a pixelwise cross-entropy loss. Note that this training procedure uses only the ground truth segmentation and is completely independent of the error detector and the baseline segmentation. This convenient property is justified by the fact that if the error detector is perfect, the error corrector only ever receives as input unions of complete objects.

5.4 Error detection results

To measure the quality of error detection, we densely sampled points in our test volume as in Section 5.3. In order to remove ambiguity over the precise location of errors, we filtered out points which contained an error within a surrounding window of size 80 × 80 × 8 but not within a window of size 40 × 40 × 4. These locations were all unique, in that two locations in the same object were separated by at least 80, 80, 8 in x, y, z, respectively. Precision and recall simultaneously exceed 90% (see Figure 6).
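The error corrector's training-example construction described in Section 5.3 can be sketched in 2D (array and function names are ours):

```python
import numpy as np

def make_training_example(seg, image, center, rng=None):
    """Input = union of the true object at `center` with each other segment
    kept independently with probability p ~ Uniform[0, 1]; target = the true
    object alone (label 0 is background)."""
    rng = rng or np.random.default_rng()
    true_label = seg[center]
    p = rng.uniform()
    others = [l for l in np.unique(seg) if l not in (0, true_label)]
    keep = [l for l in others if rng.uniform() < p] + [true_label]
    input_mask = np.isin(seg, keep).astype(np.float32)
    target = (seg == true_label).astype(np.float32)
    # two input channels: candidate mask and raw image
    return np.stack([input_mask, image]), target
```

By construction the input mask is always a superset of the target, matching the object mask pruning assumption.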
Empirically, many of the false positive examples occur where a dendritic spine head curls back and touches its trunk. These examples locally appear to be incorrectly merged objects. We trained one error detector with access to the raw image and one without. The network's admirable performance even without access to the image, as seen in Figure 6, supports our hypothesis that error detection is a relatively easy task and can be performed using only shape cues. Merge errors qualitatively appear to be especially easy for the network to detect; an example is shown in Figure 7.

5.5 Error correction results

Table 1: Comparing segmentation performance

                 VI_merge  VI_split  Rand Recall  Rand Precision
Baseline           0.162     0.142      0.952         0.954
Without Advice     0.130     0.057      0.956         0.979
With Advice        0.088     0.052      0.974         0.980

In order to demonstrate the importance of error detection to error correction, we ran two experiments: one in which the binary mask input to the error corrector was simply the union of all segments in the window (“without advice”), and one in which the binary mask was the union of all segments with a detected error (“with advice”). In the “without advice” mode, the network is essentially asked to reconstruct the object overlapping the central voxel in one shot. Table 1 shows that advice confers a considerable performance advantage on the error corrector.

[Figure 6 plot: precision vs. recall curves for the “Obj + raw image” and “Obj only” experiments.]

Figure 6: Precision and recall for error detection, both with and without access to the raw image. In the test volume, there are 8248 error-free locations and 944 locations with errors. In practice, we use a threshold which guarantees > 95% recall and > 85% precision.

Figure 7: An example of a detected error. The right shows two incorrectly merged axons, and the left shows the predicted combined error map (defined in Section 2.1) overlaid in red on the corresponding 2D image.
Figure 8: A difficult location with missing data in one section combined with a misalignment between sections. The error-correcting net was able to trace across the missing data.

It is sometimes difficult to assess the significance of an improvement in variation of information or Rand score since changes can be dominated by modifications to a few large objects. Therefore, we decomposed the variation of information into a score for each object in the ground truth. Figure 9 summarizes the cumulative distribution of the values of VI(i) = VI_merge(i) + VI_split(i) for all segments i in the ground truth. See the appendix for a precise definition of VI(i). The number of errors from the set in Sec. 5.4 that were fixed or introduced by our iterative refinement procedure is shown in Table 2. These numbers should be taken with a grain of salt since topologically insignificant changes could count as errors. Regardless, it is clear that our iterative refinement procedure fixed a significant fraction of the remaining errors and that "advice" improves the error corrector. The results are qualitatively impressive as well. The error-correcting network is sometimes able to correctly merge disconnected objects, for example in Figure 8.

Table 2: Number of errors fixed and introduced relative to the baseline

                      # Errors   # Errors fixed   # Errors introduced
    Baseline             944
    Without Advice       474           547                 77
    With Advice          305           707                 68

Figure 9: Per-object VI scores for the 940 reconstructed objects in our test volume. Almost 800 objects are completely error-free in our segmentation. These objects are likely all axons; almost every dendrite is missing a few spines.

5.6 Computational cost analysis

Table 3 shows the computational cost of the most expensive parts of our segmentation pipeline.
Boundary detection and error detection are run on the entire image, while error correction is run on roughly 10% of the possible locations in the image. Error correction is still the most costly step, but it would be 10× more costly without restricting to the locations found by the error detection network. Therefore, the cost of error detection is more than justified by the subsequent savings during the error correction phase. The number of locations requiring error correction will fall even further if the precision of the error detector increases or the error rate of the initial segmentation decreases.

Table 3: Computation time for a 2048 × 2048 × 256 volume using a single TitanX Pascal GPU

    Boundary Detection   18 mins
    Error Detection      25 mins
    Error Correction     55 mins

6 Conclusion and future directions

We have developed an error detector for the neuronal segmentation problem and combined it with an error correction module. In particular, we have shown that our error detectors are able to exploit priors on neuron shape, achieving reasonable performance even without access to the raw image. We have made significant savings in computation by applying expensive error correction procedures only where predicted necessary by the error detector. Finally, we have demonstrated that the "advice" of error detection improves an error correction module, improving segmentation performance upon our baseline. We expect that significant improvements in the accuracy of error detection could come from aggressive data augmentation. We can mutilate a ground truth segmentation in arbitrary (or even adversarial) ways to produce unlimited examples of errors. An error detection module has many potential uses beyond the ones presented here. For example, we could use error detection to direct ground truth annotation effort toward mistakes. If sufficiently accurate, it could also be used directly as a learning signal for segmentation algorithms on unlabelled data.
The idea of co-training our error-correction and error-detection networks is natural in view of recent work on generative adversarial networks [19, 16].

Author contributions and acknowledgements

JZ conceptualized the study and conducted most of the experiments and evaluation. IT (along with Will Silversmith) created much of the infrastructure necessary for visualization and running our algorithms at scale. KL produced the baseline segmentation. HSS helped with the writing. We are grateful to Clay Reid, Nuno da Costa, Agnes Bodor, Adam Bleckert, Dan Bumbarger, Derrick Britain, JoAnn Buchannan, and Marc Takeno for acquiring the TEM dataset at the Allen Institute for Brain Science. The ground truth annotation was created by Ben Silverman, Merlin Moore, Sarah Morejohn, Selden Koolman, Ryan Willie, Kyle Willie, and Harrison MacGowan. We thank Nico Kemnitz for proofreading a draft of this paper. We thank Jeremy Maitin-Shepard at Google and the other contributors to the neuroglancer project for creating an invaluable visualization tool. We acknowledge NVIDIA Corporation for providing us with early access to the Titan X Pascal GPU used in this research, and Amazon for assistance through an AWS Research Grant. This research was supported by the Mathers Foundation, the Samsung Scholarship and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior / Interior Business Center (DoI/IBC) contract number D16PC0005. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
References

[1] Thorsten Beier, Constantin Pape, Nasim Rahaman, Timo Prange, Stuart Berg, Davi D Bock, Albert Cardona, Graham W Knott, Stephen M Plaza, Louis K Scheffer, et al. Multicut brings automated neurite segmentation closer to human performance. Nature Methods, 14(2):101–102, 2017.

[2] Daniel Berger. VAST Lite. URL https://software.rc.fas.harvard.edu/lichtman/vast/.

[3] Bert De Brabandere, Davy Neven, and Luc Van Gool. Semantic instance segmentation with a discriminative loss function. CoRR, abs/1708.02551, 2017. URL http://arxiv.org/abs/1708.02551.

[4] Alireza Fathi, Zbigniew Wojna, Vivek Rathod, Peng Wang, Hyun Oh Song, Sergio Guadarrama, and Kevin P. Murphy. Semantic instance segmentation via deep metric learning. CoRR, abs/1703.10277, 2017. URL http://arxiv.org/abs/1703.10277.

[5] Damien Fourure, Rémi Emonet, Élisa Fromont, Damien Muselet, Alain Trémeau, and Christian Wolf. Residual conv-deconv grid network for semantic segmentation. CoRR, abs/1707.07958, 2017. URL http://arxiv.org/abs/1707.07958.

[6] Jan Funke, Fabian David Tschopp, William Grisaitis, Chandan Singh, Stephan Saalfeld, and Srinivas C Turaga. A deep structured learning approach towards automating connectome reconstruction from 3d electron micrographs. arXiv preprint arXiv:1709.02974, 2017.

[7] Adam W. Harley, Konstantinos G. Derpanis, and Iasonas Kokkinos. Learning dense convolutional embeddings for semantic segmentation. CoRR, abs/1511.04377, 2015. URL http://arxiv.org/abs/1511.04377.

[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

[9] Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q. Weinberger. Multi-scale dense convolutional networks for efficient prediction. CoRR, abs/1703.09844, 2017. URL http://arxiv.org/abs/1703.09844.
[10] Viren Jain, Joseph F Murray, Fabian Roth, Srinivas Turaga, Valentin Zhigulin, Kevin L Briggman, Moritz N Helmstaedter, Winfried Denk, and H Sebastian Seung. Supervised learning of image restoration with convolutional networks. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pages 1–8. IEEE, 2007.

[11] Viren Jain, Srinivas C. Turaga, Kevin L. Briggman, Moritz Helmstaedter, Winfried Denk, and H. Sebastian Seung. Learning to agglomerate superpixel hierarchies. In NIPS, 2011.

[12] Michał Januszewski, Jörgen Kornfeld, Peter H Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy B Maitin-Shepard, Mike Tyka, Winfried Denk, and Viren Jain. High-precision automated reconstruction of neurons with flood-filling networks. bioRxiv, page 200675, 2017.

[13] Michał Januszewski, Jeremy Maitin-Shepard, Peter Li, Jörgen Kornfeld, Winfried Denk, and Viren Jain. Flood-filling networks, Nov 2016. URL https://arxiv.org/abs/1611.00421.

[14] Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, et al. Saturated reconstruction of a volume of neocortex. Cell, 162(3):648–661, 2015.

[15] Kisuk Lee, Jonathan Zung, Peter Li, Viren Jain, and H. Sebastian Seung. Superhuman accuracy on the SNEMI3D connectomics challenge. CoRR, abs/1706.00120, 2017. URL http://arxiv.org/abs/1706.00120.

[16] Pauline Luc, Camille Couprie, Soumith Chintala, and Jakob Verbeek. Semantic segmentation using adversarial networks. CoRR, abs/1611.08408, 2016. URL http://arxiv.org/abs/1611.08408.

[17] Marina Meilă. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007. ISSN 0047-259X. doi: 10.1016/j.jmva.2006.11.013. URL http://www.sciencedirect.com/science/article/pii/S0047259X06002016.
[18] Yaron Meirovitch, Alexander Matveev, Hayk Saribekyan, David Budden, David Rolnick, Gergely Odor, Seymour Knowles-Barley, Thouis Raymond Jones, Hanspeter Pfister, Jeff William Lichtman, and Nir Shavit. A multi-pass approach to large-scale connectomics, 2016.

[19] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014. URL http://arxiv.org/abs/1411.1784.

[20] Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, and Dmitri B. Chklovskii. Machine learning of hierarchical clustering to segment 2D and 3D images. PLOS ONE, 8(8):1–11, 08 2013. doi: 10.1371/journal.pone.0071715. URL https://doi.org/10.1371/journal.pone.0071715.

[21] Juan Nunez-Iglesias, Ryan Kennedy, Stephen M. Plaza, Anirban Chakraborty, and William T. Katz. Graph-based active learning of agglomeration (GALA): a python library to segment 2d and 3d neuroimages, 2014. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3983515/.

[22] Pedro O. Pinheiro, Ronan Collobert, and Piotr Dollár. Learning to segment object candidates. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS'15, pages 1990–1998, Cambridge, MA, USA, 2015. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969442.2969462.

[23] Mengye Ren and Richard S. Zemel. End-to-end instance segmentation and counting with recurrent attention. CoRR, abs/1605.09410, 2016. URL http://arxiv.org/abs/1605.09410.

[24] David Rolnick, Yaron Meirovitch, Toufiq Parag, Hanspeter Pfister, Viren Jain, Jeff W. Lichtman, Edward S. Boyden, and Nir Shavit. Morphological error detection in 3d segmentations. CoRR, abs/1705.10882, 2017. URL http://arxiv.org/abs/1705.10882.

[25] Bernardino Romera-Paredes and Philip H. S. Torr. Recurrent instance segmentation. CoRR, abs/1511.08250, 2015. URL http://arxiv.org/abs/1511.08250.

[26] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation, May 2015.
URL https://arxiv.org/abs/1505.04597.

[27] Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. CoRR, abs/1606.02492, 2016. URL http://arxiv.org/abs/1606.02492.

[28] Helene Schmidt, Anjali Gour, Jakob Straehle, Kevin M Boergens, Michael Brecht, and Moritz Helmstaedter. Axonal synapse sorting in medial entorhinal cortex. Nature, 549(7673):469, 2017.

[29] S C Turaga, J F Murray, V Jain, F Roth, M Helmstaedter, K Briggman, W Denk, and H S Seung. Convolutional networks can learn to generate affinity graphs for image segmentation, Feb 2010. URL https://www.ncbi.nlm.nih.gov/pubmed/19922289.

[30] John G White, Eileen Southgate, J Nichol Thomson, and Sydney Brenner. The structure of the nervous system of the nematode Caenorhabditis elegans: the mind of a worm. Phil. Trans. R. Soc. Lond, 314:1–340, 1986.

[31] Tao Zeng, Bian Wu, and Shuiwang Ji. DeepEM3D: approaching human-level performance on 3d anisotropic em image segmentation. Bioinformatics, 33(16):2555–2562, 2017. doi: 10.1093/bioinformatics/btx188. URL http://dx.doi.org/10.1093/bioinformatics/btx188.
Hindsight Experience Replay

Marcin Andrychowicz*, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel†, Wojciech Zaremba†
OpenAI

Abstract

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.

1 Introduction

Reinforcement learning (RL) combined with neural networks has recently led to a wide range of successes in learning policies for sequential decision-making problems. This includes simulated environments, such as playing Atari games (Mnih et al., 2015) and defeating the best human player at the game of Go (Silver et al., 2016), as well as robotic tasks such as helicopter control (Ng et al., 2006), hitting a baseball (Peters and Schaal, 2008), screwing a cap onto a bottle (Levine et al., 2015), or door opening (Chebotar et al., 2016).
However, a common challenge, especially for robotics, is the need to engineer a reward function that not only reflects the task at hand but is also carefully shaped (Ng et al., 1999) to guide the policy optimization. For example, Popov et al. (2017) use a cost function consisting of five relatively complicated terms which need to be carefully weighted in order to train a policy for stacking a brick on top of another one. The necessity of cost engineering limits the applicability of RL in the real world because it requires both RL expertise and domain-specific knowledge. Moreover, it is not applicable in situations where we do not know what admissible behaviour may look like. It is therefore of great practical relevance to develop algorithms which can learn from unshaped reward signals, e.g. a binary signal indicating successful task completion.

One ability humans have, unlike the current generation of model-free RL algorithms, is to learn almost as much from achieving an undesired outcome as from the desired one. Imagine that you are learning how to play hockey and are trying to shoot a puck into a net. You hit the puck but it misses the net on the right side. The conclusion drawn by a standard RL algorithm in such a situation would be that the performed sequence of actions does not lead to a successful shot, and little (if anything) would be learned. It is however possible to draw another conclusion, namely that this sequence of actions would be successful if the net had been placed further to the right.

* marcin@openai.com    † Equal advising.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper we introduce a technique called Hindsight Experience Replay (HER) which allows the algorithm to perform exactly this kind of reasoning and can be combined with any off-policy RL algorithm. It is applicable whenever there are multiple goals which can be achieved, e.g.
achieving each state of the system may be treated as a separate goal. Not only does HER improve the sample efficiency in this setting, but more importantly, it makes learning possible even if the reward signal is sparse and binary. Our approach is based on training universal policies (Schaul et al., 2015a) which take as input not only the current state, but also a goal state. The pivotal idea behind HER is to replay each episode with a different goal than the one the agent was trying to achieve, e.g. one of the goals which was achieved in the episode.

2 Background

2.1 Reinforcement Learning

We consider the standard reinforcement learning formalism consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is described by a set of states S, a set of actions A, a distribution of initial states p(s_0), a reward function r : S × A → R, transition probabilities p(s_{t+1} | s_t, a_t), and a discount factor γ ∈ [0, 1].

A deterministic policy is a mapping from states to actions: π : S → A. Every episode starts with sampling an initial state s_0. At every timestep t the agent produces an action based on the current state: a_t = π(s_t). Then it gets the reward r_t = r(s_t, a_t) and the environment's new state is sampled from the distribution p(· | s_t, a_t). A discounted sum of future rewards is called a return: R_t = Σ_{i=t}^∞ γ^{i−t} r_i. The agent's goal is to maximize its expected return E_{s_0}[R_0 | s_0]. The Q-function or action-value function is defined as Q^π(s_t, a_t) = E[R_t | s_t, a_t].

Let π* denote an optimal policy, i.e. any policy π* s.t. Q^{π*}(s, a) ≥ Q^π(s, a) for every s ∈ S, a ∈ A and any policy π. All optimal policies have the same Q-function, which is called the optimal Q-function and denoted Q*. It is easy to show that it satisfies the following equation, called the Bellman equation:

    Q*(s, a) = E_{s' ∼ p(·|s,a)} [ r(s, a) + γ max_{a' ∈ A} Q*(s', a') ].
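As a concrete check of the return definition above, here is a short computation of the truncated discounted sum R_t = Σ_k γ^k r_{t+k} for a finite reward sequence (a sketch; the paper's return is over an infinite horizon, and the sparse rewards shown are illustrative).

```python
def discounted_return(rewards, gamma, t=0):
    """R_t = sum_k gamma^k * r_{t+k} over the remaining finite episode."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[t:]))

# Sparse binary rewards as used later in the paper: -1 until the goal is reached.
rewards = [-1.0, -1.0, -1.0, 0.0]
print(discounted_return(rewards, gamma=0.5))  # -1 - 0.5 - 0.25 + 0 = -1.75
```

With sparse rewards like these, the return is almost flat in the policy parameters until the goal is occasionally reached, which is exactly why naive exploration struggles in the settings discussed below.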
2.2 Deep Q-Networks (DQN)

Deep Q-Networks (DQN) (Mnih et al., 2015) is a model-free RL algorithm for discrete action spaces. Here we sketch it only informally; see Mnih et al. (2015) for more details. In DQN we maintain a neural network Q which approximates Q*. A greedy policy w.r.t. Q is defined as π_Q(s) = argmax_{a ∈ A} Q(s, a). An ε-greedy policy w.r.t. Q is a policy which with probability ε takes a random action (sampled uniformly from A) and takes the action π_Q(s) with probability 1 − ε. During training we generate episodes using the ε-greedy policy w.r.t. the current approximation of the action-value function Q. The transition tuples (s_t, a_t, r_t, s_{t+1}) encountered during training are stored in the so-called replay buffer. The generation of new episodes is interleaved with neural network training. The network is trained using mini-batch gradient descent on the loss L, which encourages the approximated Q-function to satisfy the Bellman equation:

    L = E[(Q(s_t, a_t) − y_t)^2],   where y_t = r_t + γ max_{a' ∈ A} Q(s_{t+1}, a')

and the tuples (s_t, a_t, r_t, s_{t+1}) are sampled from the replay buffer¹.

2.3 Deep Deterministic Policy Gradients (DDPG)

Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) is a model-free RL algorithm for continuous action spaces. Here we sketch it only informally; see Lillicrap et al. (2015) for more details. In DDPG we maintain two neural networks: a target policy (also called an actor) π : S → A and an action-value function approximator (called the critic) Q : S × A → R. The critic's job is to approximate the actor's action-value function Q^π.

¹ The targets y_t depend on the network parameters, but this dependency is ignored during backpropagation. Moreover, DQN uses the so-called target network to make the optimization procedure more stable, but we omit it here as it is not relevant to our results.

Episodes are generated using a behavioral policy which is a noisy version of the target policy, e.g. π_b(s) = π(s) + N(0, 1).
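The DQN target computation above can be written out directly. The sketch below uses a tabular array as a stand-in for the Q-network and, as in the footnote, omits the target-network refinement; it is an illustration, not the paper's implementation.

```python
import numpy as np

def dqn_targets(Q, batch, gamma):
    """y_t = r_t + gamma * max_a' Q(s_{t+1}, a') for a batch of transitions.

    Q: array of shape (n_states, n_actions), a tabular stand-in for the network.
    batch: list of (s, a, r, s_next) tuples sampled from the replay buffer.
    """
    return np.array([r + gamma * Q[s_next].max() for (s, a, r, s_next) in batch])

Q = np.array([[0.0, 1.0],
              [2.0, 0.5]])
batch = [(0, 1, -1.0, 1), (1, 0, 0.0, 0)]
print(dqn_targets(Q, batch, gamma=0.9))  # [-1 + 0.9*2.0, 0 + 0.9*1.0] = [0.8, 0.9]
```

The squared difference between Q(s_t, a_t) and these targets is exactly the loss L above.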
The critic is trained in a similar way as the Q-function in DQN, but the targets y_t are computed using actions outputted by the actor, i.e. y_t = r_t + γ Q(s_{t+1}, π(s_{t+1})). The actor is trained with mini-batch gradient descent on the loss L_a = −E_s[Q(s, π(s))], where s is sampled from the replay buffer. The gradient of L_a w.r.t. the actor parameters can be computed by backpropagation through the combined critic and actor networks.

2.4 Universal Value Function Approximators (UVFA)

Universal Value Function Approximators (UVFA) (Schaul et al., 2015a) is an extension of DQN to the setup where there is more than one goal we may try to achieve. Let G be the space of possible goals. Every goal g ∈ G corresponds to some reward function r_g : S × A → R. Every episode starts with sampling a state-goal pair from some distribution p(s_0, g). The goal stays fixed for the whole episode. At every timestep the agent gets as input not only the current state but also the current goal, π : S × G → A, and gets the reward r_t = r_g(s_t, a_t). The Q-function now depends not only on a state-action pair but also on a goal: Q^π(s_t, a_t, g) = E[R_t | s_t, a_t, g]. Schaul et al. (2015a) show that in this setup it is possible to train an approximator to the Q-function using direct bootstrapping from the Bellman equation (just like in the case of DQN) and that a greedy policy derived from it can generalize to previously unseen state-action pairs. The extension of this approach to DDPG is straightforward.

3 Hindsight Experience Replay

3.1 A motivating example

Consider a bit-flipping environment with the state space S = {0, 1}^n and the action space A = {0, 1, . . . , n − 1} for some integer n, in which executing the i-th action flips the i-th bit of the state. For every episode we sample uniformly an initial state as well as a target state, and the policy gets a reward of −1 as long as it is not in the target state, i.e. r_g(s, a) = −[s ≠ g].

Figure 1: Bit-flipping experiment.
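The bit-flipping environment is simple enough to implement in a few lines. The sketch below is consistent with the description above, though the class interface itself is an assumption.

```python
import random

class BitFlipEnv:
    """State space {0,1}^n; action i flips bit i; reward -1 until state == goal."""

    def __init__(self, n, seed=0):
        self.n = n
        self.rng = random.Random(seed)

    def reset(self):
        # Sample the initial state and the target state uniformly at random.
        self.state = [self.rng.randint(0, 1) for _ in range(self.n)]
        self.goal = [self.rng.randint(0, 1) for _ in range(self.n)]
        return list(self.state), list(self.goal)

    def step(self, action):
        self.state[action] ^= 1            # flip the chosen bit
        done = self.state == self.goal
        reward = 0.0 if done else -1.0
        return list(self.state), reward, done
```

An optimal policy simply flips every bit where the state and the goal disagree; a randomly exploring agent, by contrast, essentially never stumbles on the goal for large n, which is the failure mode discussed next.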
Standard RL algorithms are bound to fail in this environment for n > 40 because they will never experience any reward other than −1. Notice that using techniques for improving exploration (e.g. VIME (Houthooft et al., 2016), count-based exploration (Ostrovski et al., 2017) or bootstrapped DQN (Osband et al., 2016)) does not help here because the real problem is not a lack of diversity in the states being visited; rather, it is simply impractical to explore such a large state space. The standard solution to this problem would be to use a shaped reward function which is more informative and guides the agent towards the goal, e.g. r_g(s, a) = −||s − g||^2. While using a shaped reward solves the problem in our toy environment, it may be difficult to apply to more complicated problems. We investigate the results of reward shaping experimentally in Sec. 4.4.

Instead of shaping the reward we propose a different solution which does not require any domain knowledge. Consider an episode with a state sequence s_1, . . . , s_T and a goal g ≠ s_1, . . . , s_T, which implies that the agent received a reward of −1 at every timestep. The pivotal idea behind our approach is to re-examine this trajectory with a different goal: while this trajectory may not help us learn how to achieve the state g, it definitely tells us something about how to achieve the state s_T. This information can be harvested by using an off-policy RL algorithm and experience replay where we replace g in the replay buffer by s_T. In addition we can still replay with the original goal g left intact in the replay buffer. With this modification at least half of the replayed trajectories contain rewards different from −1 and learning becomes much simpler. Fig. 1 compares the final performance of DQN with and without this additional replay technique, which we call Hindsight Experience Replay (HER). DQN without HER can only solve the task for n ≤ 13 while DQN with HER easily solves the task for n up to 50.
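The relabeling just described, storing the trajectory a second time with the goal replaced by s_T and the rewards recomputed, can be sketched as follows. The transition tuple layout and `reward_fn` name are illustrative assumptions, not the paper's exact data structures.

```python
def relabel_with_final_state(episode, reward_fn):
    """episode: list of (s, a, s_next) transitions from one trajectory.

    Returns hindsight transitions (s, a, r', s_next, g') whose goal g' is the
    final state s_T of the trajectory, with rewards recomputed for that goal.
    """
    s_T = episode[-1][2]                   # final state: the goal achieved in hindsight
    return [(s, a, reward_fn(s_next, s_T), s_next, s_T)
            for (s, a, s_next) in episode]

# Sparse reward: 0 if the next state equals the goal, else -1.
reward_fn = lambda s, g: 0.0 if s == g else -1.0
episode = [((0, 0), 1, (0, 1)), ((0, 1), 0, (1, 1))]
relabeled = relabel_with_final_state(episode, reward_fn)
print(relabeled[-1][2])  # 0.0: the last transition reaches the hindsight goal
```

Storing both the original and the relabeled transitions is what guarantees that at least half of the replayed data carries an informative (non-constant) reward.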
See Appendix A for the details of the experimental setup. Note that this approach combined with powerful function approximators (e.g., deep neural networks) allows the agent to learn how to achieve the goal g even if it has never observed it during training. We more formally describe our approach in the following sections.

3.2 Multi-goal RL

We are interested in training agents which learn to achieve multiple different goals. We follow the approach from Universal Value Function Approximators (Schaul et al., 2015a), i.e. we train policies and value functions which take as input not only a state s ∈ S but also a goal g ∈ G. Moreover, we show that training an agent to perform multiple tasks can be easier than training it to perform only one task (see Sec. 4.3 for details), and therefore our approach may be applicable even if there is only one task we would like the agent to perform (a similar situation was recently observed by Pinto and Gupta (2016)).

We assume that every goal g ∈ G corresponds to some predicate f_g : S → {0, 1} and that the agent's goal is to achieve any state s that satisfies f_g(s) = 1. In the case when we want to exactly specify the desired state of the system we may use S = G and f_g(s) = [s = g]. The goals can also specify only some properties of the state, e.g. suppose that S = R^2 and we want to be able to achieve an arbitrary state with a given value of the x coordinate. In this case G = R and f_g((x, y)) = [x = g].

Moreover, we assume that given a state s we can easily find a goal g which is satisfied in this state. More formally, we assume that there is a given mapping m : S → G s.t. ∀s ∈ S: f_{m(s)}(s) = 1. Notice that this assumption is not very restrictive and can usually be satisfied. In the case where each goal corresponds to a state we want to achieve, i.e. G = S and f_g(s) = [s = g], the mapping m is just the identity.
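The x-coordinate example above can be written out in a couple of lines, which makes the relationship between the predicate family f_g and the mapping m concrete (a trivial sketch; the names are ours):

```python
# Goals specify only the x coordinate of a 2-d state: G = R, f_g((x, y)) = [x = g].
f = lambda g: (lambda s: 1 if s[0] == g else 0)   # predicate for goal g
m = lambda s: s[0]                                # maps a state to a goal it satisfies

s = (3.0, 7.0)
assert f(m(s))(s) == 1      # f_{m(s)}(s) = 1 holds for every state s
assert f(4.0)(s) == 0       # s does not satisfy the goal x = 4
```

The assertion f(m(s))(s) == 1 is exactly the condition ∀s ∈ S: f_{m(s)}(s) = 1 required of m.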
For the case of 2-dimensional states and 1-dimensional goals from the previous paragraph this mapping is also very simple: m((x, y)) = x.

A universal policy can be trained using an arbitrary RL algorithm by sampling goals and initial states from some distributions, running the agent for some number of timesteps and giving it a negative reward at every timestep when the goal is not achieved, i.e. r_g(s, a) = −[f_g(s) = 0]. This does not however work very well in practice because this reward function is sparse and not very informative. In order to solve this problem we introduce the technique of Hindsight Experience Replay which is the crux of our approach.

3.3 Algorithm

The idea behind Hindsight Experience Replay (HER) is very simple: after experiencing some episode s_0, s_1, . . . , s_T we store in the replay buffer every transition s_t → s_{t+1} not only with the original goal used for this episode but also with a subset of other goals. Notice that the goal being pursued influences the agent's actions but not the environment dynamics, and therefore we can replay each trajectory with an arbitrary goal, assuming that we use an off-policy RL algorithm like DQN (Mnih et al., 2015), DDPG (Lillicrap et al., 2015), NAF (Gu et al., 2016) or SDQN (Metz et al., 2017).

One choice which has to be made in order to use HER is the set of additional goals used for replay. In the simplest version of our algorithm we replay each trajectory with the goal m(s_T), i.e. the goal which is achieved in the final state of the episode. We experimentally compare different types and quantities of additional goals for replay in Sec. 4.5. In all cases we also replay each trajectory with the original goal pursued in the episode. See Alg. 1 for a more formal description of the algorithm.

HER may be seen as a form of implicit curriculum as the goals used for replay naturally shift from ones which are simple to achieve even by a random agent to more difficult ones.
However, in contrast to explicit curriculum, HER does not require having any control over the distribution of initial environment states. Not only does HER learn with extremely sparse rewards, in our experiments it also performs better with sparse rewards than with shaped ones (see Sec. 4.4). These results are indicative of the practical challenges with reward shaping, and suggest that shaped rewards often constitute a compromise on the metric we truly care about (such as binary success/failure).

4 Experiments

The video presenting our experiments is available at https://goo.gl/SMrQnI.

4.1 Environments

There are no standard environments for multi-goal RL and therefore we created our own environments. We decided to use manipulation environments based on an existing hardware robot to ensure that the challenges we face correspond as closely as possible to the real world. In all experiments we use a 7-DOF Fetch Robotics arm which has a two-fingered parallel gripper. The robot is simulated using the MuJoCo (Todorov et al., 2012) physics engine. The whole training procedure is performed in the simulation, but we show in Sec. 4.6 that the trained policies perform well on the physical robot without any finetuning.

Policies are represented as Multi-Layer Perceptrons (MLPs) with Rectified Linear Unit (ReLU) activation functions. Training is performed using the DDPG algorithm (Lillicrap et al., 2015) with

Algorithm 1 Hindsight Experience Replay (HER)

Given:
  • an off-policy RL algorithm A,                 ▷ e.g. DQN, DDPG, NAF, SDQN
  • a strategy S for sampling goals for replay,   ▷ e.g. S(s_0, . . . , s_T) = m(s_T)
  • a reward function r : S × A × G → R.          ▷ e.g. r(s, a, g) = −[f_g(s) = 0]
Initialize A                                      ▷ e.g. initialize neural networks
Initialize replay buffer R
for episode = 1, M do
  Sample a goal g and an initial state s_0.
  for t = 0, T − 1 do
    Sample an action a_t using the behavioral policy from A:  a_t ← π_b(s_t || g)    ▷
|| denotes concatenation
    Execute the action a_t and observe a new state s_{t+1}
  end for
  for t = 0, T − 1 do
    r_t := r(s_t, a_t, g)
    Store the transition (s_t || g, a_t, r_t, s_{t+1} || g) in R         ▷ standard experience replay
    Sample a set of additional goals for replay G := S(current episode)
    for g' ∈ G do
      r' := r(s_t, a_t, g')
      Store the transition (s_t || g', a_t, r', s_{t+1} || g') in R      ▷ HER
    end for
  end for
  for t = 1, N do
    Sample a minibatch B from the replay buffer R
    Perform one step of optimization using A and minibatch B
  end for
end for

Adam (Kingma and Ba, 2014) as the optimizer. See Appendix A for more details and the values of all hyperparameters.

We consider 3 different tasks:

1. Pushing. In this task a box is placed on a table in front of the robot and the task is to move it to the target location on the table. The robot fingers are locked to prevent grasping. The learned behaviour is a mixture of pushing and rolling.

2. Sliding. In this task a puck is placed on a long slippery table and the target position is outside of the robot's reach, so that it has to hit the puck with such a force that it slides and then stops in the appropriate place due to friction.

3. Pick-and-place. This task is similar to pushing but the target position is in the air and the fingers are not locked. To make exploration in this task easier we recorded a single state in which the box is grasped and start half of the training episodes from this state².

The images showing the tasks being performed can be found in Appendix C.

States: The state of the system is represented in the MuJoCo physics engine.

Goals: Goals describe the desired position of the object (a box or a puck depending on the task) with some fixed tolerance ε, i.e. G = R^3 and f_g(s) = [|g − s_object| ≤ ε], where s_object is the position of the object in the state s. The mapping from states to goals used in HER is simply m(s) = s_object.
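Algorithm 1 translates almost line for line into Python. The sketch below implements the "final" strategy S(s_0, …, s_T) = m(s_T); the `env` and `agent` interfaces (`reset`, `step`, `act`, `optimize`) are placeholder assumptions standing in for the off-policy algorithm A.

```python
import random

def her_training(env, agent, reward_fn, m, episodes, T, opt_steps, buffer):
    """A sketch of Alg. 1: store each transition with the episode's goal and
    again with the hindsight goal m(s_T) (the 'final' replay strategy)."""
    for _ in range(episodes):
        s, g = env.reset()                        # sample a goal and an initial state
        trajectory = []
        for _ in range(T):
            a = agent.act(s, g)                   # behavioral policy on s || g
            s_next, _, _ = env.step(a)
            trajectory.append((s, a, s_next))
            s = s_next
        hindsight_goal = m(trajectory[-1][2])     # goal achieved in the final state
        for (s_t, a_t, s_t1) in trajectory:
            buffer.append((s_t, g, a_t, reward_fn(s_t1, g), s_t1))
            buffer.append((s_t, hindsight_goal, a_t,
                           reward_fn(s_t1, hindsight_goal), s_t1))  # HER replay
        for _ in range(opt_steps):
            batch = random.sample(buffer, min(len(buffer), 32))
            agent.optimize(batch)
```

Because every stored transition carries its own goal, the optimizer never needs to know which transitions are "real" and which are hindsight relabelings.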
Rewards: Unless stated otherwise we use binary and sparse rewards r(s, a, g) = −[f_g(s') = 0], where s' is the state after the execution of the action a in the state s. We compare sparse and shaped reward functions in Sec. 4.4.

State-goal distributions: For all tasks the initial position of the gripper is fixed, while the initial position of the object and the target are randomized. See Appendix A for details.

Observations: In this paragraph relative means relative to the current gripper position. The policy is given as input the absolute position of the gripper, the relative position of the object and the target³, as well as the distance between the fingers. The Q-function is additionally given the linear velocity of the gripper and fingers as well as the relative linear and angular velocity of the object. We decided to restrict the input to the policy in order to make deployment on the physical robot easier.

Actions: None of the problems we consider require gripper rotation and therefore we keep it fixed. The action space is 4-dimensional. Three dimensions specify the desired relative gripper position at the next timestep. We use MuJoCo constraints to move the gripper towards the desired position, but Jacobian-based control could be used instead⁴. The last dimension specifies the desired distance between the 2 fingers, which are position controlled.

Strategy S for sampling goals for replay: Unless stated otherwise HER uses replay with the goal corresponding to the final state in each episode, i.e. S(s_0, . . . , s_T) = m(s_T). We compare different strategies for choosing which goals to replay with in Sec. 4.5.

² This was necessary because we could not successfully train any policies for this task without using the demonstration state. We have later discovered that training is possible without this trick if only the goal position is sometimes on the table and sometimes in the air.

4.2 Does HER improve performance?
To verify whether HER improves performance, we evaluate DDPG with and without HER on all 3 tasks. Moreover, we compare against DDPG with count-based exploration5 (Strehl and Littman, 2005; Kolter and Ng, 2009; Tang et al., 2016; Bellemare et al., 2016; Ostrovski et al., 2017). For HER we store each transition in the replay buffer twice: once with the goal used for the generation of the episode and once with the goal corresponding to the final state from the episode (we call this strategy final). In Sec. 4.5 we perform ablation studies of different strategies S for choosing goals for replay; here we include the best version from Sec. 4.5 in the plot for comparison.
Figure 2: Multiple goals. Figure 3: Single goal.
Fig. 2 shows the learning curves for all 3 tasks6. DDPG without HER is unable to solve any of the tasks7, and DDPG with count-based exploration is only able to make some progress on the sliding task. On the other hand, DDPG with HER solves all tasks almost perfectly. This confirms that HER is a crucial element which makes learning from sparse, binary rewards possible.
4.3 Does HER improve performance even if there is only one goal we care about?
In this section we evaluate whether HER improves performance in the case where there is only one goal we care about. To this end, we repeat the experiments from the previous section, but the goal state is identical in all episodes. From Fig. 3 it is clear that DDPG+HER performs much better than pure DDPG even if the goal state is identical in all episodes. More importantly, comparing Fig. 2 and Fig. 3 we can also notice that HER learns faster if training episodes contain multiple goals, so in practice it is advisable to train on multiple goals even if we care only about one of them.
3 The target position is relative to the current object position.
4 The successful deployment on a physical robot (Sec. 4.6) confirms that our control model produces movements which are reproducible on the physical robot despite not being fully physically plausible.
5 We discretize the state space and use an intrinsic reward of the form α/√N, where α is a hyperparameter and N is the number of times the given state was visited. The discretization works as follows. We take the relative position of the box and the target and then discretize every coordinate using a grid with a stepsize β, which is a hyperparameter. We have performed a hyperparameter search over α ∈ {0.032, 0.064, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32}, β ∈ {1 cm, 2 cm, 4 cm, 8 cm}. The best results were obtained using α = 1 and β = 1 cm, and these are the results we report.
6 An episode is considered successful if the distance between the object and the goal at the end of the episode is less than 7 cm for pushing and pick-and-place and less than 20 cm for sliding. The results are averaged across 5 random seeds and shaded areas represent one standard deviation.
7 We also evaluated DQN (without HER) on our tasks and it was not able to solve any of them.
Figure 4: Ablation study of different strategies for choosing additional goals for replay. The top row shows the highest (across the training epochs) test performance and the bottom row shows the average test performance across all training epochs. On the top right plot the curves for final, episode and future coincide, as all these strategies achieve perfect performance on this task.
4.4 How does HER interact with reward shaping?
So far we only considered binary rewards of the form r(s, a, g) = −[|g − s_object| > ε]. In this section we check how the performance of DDPG with and without HER changes if we replace this reward with a shaped one. We considered reward functions of the form r(s, a, g) = λ|g − s_object|^p − |g − s′_object|^p, where s′ is the state of the environment after the execution of the action a in the state s, and λ ∈ {0, 1}, p ∈ {1, 2} are hyperparameters.
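The shaped reward family above can be written out directly; the following helper is a minimal sketch (function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def shaped_reward(s_object, s_next_object, g, lam=1, p=1):
    """Shaped reward r(s, a, g) = lam * |g - s_object|^p - |g - s'_object|^p.

    With lam = 0 the reward is minus the (powered) distance to the goal after
    the action; with lam = 1 it rewards the decrease in that distance.
    """
    d_before = np.linalg.norm(np.asarray(g) - np.asarray(s_object)) ** p
    d_after = np.linalg.norm(np.asarray(g) - np.asarray(s_next_object)) ** p
    return lam * d_before - d_after
```

Note that with λ = 1 the reward is positive exactly when the action moved the object closer to the goal, which is the kind of dense progress signal that, as the next paragraph discusses, turned out not to help in these tasks.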
Surprisingly, neither DDPG nor DDPG+HER was able to successfully solve any of the tasks with any of these reward functions8 (learning curves can be found in Appendix D). Our results are consistent with the fact that successful applications of RL to difficult manipulation tasks which do not use demonstrations usually have more complicated reward functions than the ones we tried (e.g. Popov et al. (2017)). The following two reasons can cause shaped rewards to perform so poorly: (1) There is a huge discrepancy between what we optimize (i.e. a shaped reward function) and the success condition (i.e. is the object within some radius from the goal at the end of the episode); (2) Shaped rewards penalize inappropriate behaviour (e.g. moving the box in a wrong direction), which may hinder exploration. This can cause the agent to learn not to touch the box at all if it cannot manipulate it precisely, and we noticed such behaviour in some of our experiments. Our results suggest that domain-agnostic reward shaping does not work well (at least in the simple forms we have tried). Of course, for every problem there exists a reward which makes it easy (Ng et al., 1999), but designing such shaped rewards requires a lot of domain knowledge and may in some cases not be much easier than directly scripting the policy. This strengthens our belief that learning from sparse, binary rewards is an important problem.
4.5 How many goals should we replay each trajectory with and how to choose them?
In this section we experimentally evaluate different strategies (i.e. S in Alg. 1) for choosing goals to use with HER. So far the only additional goals we used for replay were the ones corresponding to the final state of the environment, and we will call this strategy final.
Apart from it, we consider the following strategies: future — replay with k random states which come from the same episode as the transition being replayed and were observed after it; episode — replay with k random states coming from the same episode as the transition being replayed; random — replay with k random states encountered so far in the whole training procedure. All of these strategies have a hyperparameter k which controls the ratio of HER data to data coming from normal experience replay in the replay buffer.
8 We also tried to rescale the distances so that the range of rewards is similar to the case of binary rewards, clipping big distances, and adding a simple (linear or quadratic) term encouraging the gripper to move towards the object, but none of these techniques led to successful training.
Figure 5: The pick-and-place policy deployed on the physical robot.
The plots comparing different strategies and different values of k can be found in Fig. 4. We can see from the plots that all strategies apart from random solve pushing and pick-and-place almost perfectly regardless of the values of k. In all cases future with k equal to 4 or 8 performs best, and it is the only strategy which is able to solve the sliding task almost perfectly. The learning curves for future with k = 4 can be found in Fig. 2. This confirms that the most valuable goals for replay are the ones which are going to be achieved in the near future9. Notice that increasing the value of k above 8 degrades performance because the fraction of normal replay data in the buffer becomes very low.
4.6 Deployment on a physical robot
We took a policy for the pick-and-place task trained in the simulator (the version with the future strategy and k = 4 from Sec. 4.5) and deployed it on a physical Fetch robot without any finetuning. The box position was predicted using a separately trained CNN on raw Fetch head camera images. See Appendix B for details.
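The four goal-sampling strategies of Sec. 4.5 (final, future, episode, random) can be sketched as a single dispatcher; this is an illustrative rendering with hypothetical names and interfaces, not the paper's code:

```python
import random

def sample_her_goals(achieved, t, strategy="future", k=4, all_achieved=None):
    """Pick replay goals for the transition at index t of an episode.

    `achieved[i]` is the goal achieved at step i, i.e. m(s_i); `all_achieved`
    collects achieved goals over the whole training run (used by 'random').
    """
    if strategy == "final":
        return [achieved[-1]]
    if strategy == "future":          # states observed after this transition
        pool = achieved[t + 1:]
    elif strategy == "episode":       # any state from the same episode
        pool = achieved
    elif strategy == "random":        # any state encountered so far
        pool = all_achieved
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [random.choice(pool) for _ in range(k)] if pool else []
```

The ratio of HER data to ordinary replay data in the buffer is then roughly k : 1, which is why very large k starves the buffer of normal transitions.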
Initially the policy succeeded in 2 out of 5 trials. It was not robust to small errors in the box position estimation because it was trained on perfect state coming from the simulation. After retraining the policy with Gaussian noise (std = 1 cm) added to observations10, the success rate increased to 5/5. The video showing some of the trials is available at https://goo.gl/SMrQnI.
5 Related work
The technique of experience replay was introduced in Lin (1992) and became very popular after it was used in the DQN agent playing Atari (Mnih et al., 2015). Prioritized experience replay (Schaul et al., 2015b) is an improvement to experience replay which prioritizes transitions in the replay buffer in order to speed up training. It is orthogonal to our work, and both approaches can be easily combined. Learning policies for multiple tasks simultaneously has been heavily explored in the context of policy search, e.g. Schmidhuber and Huber (1990); Caruana (1998); Da Silva et al. (2012); Kober et al. (2012); Devin et al. (2016); Pinto and Gupta (2016). Learning off-policy value functions for multiple tasks was investigated by Foster and Dayan (2002) and Sutton et al. (2011). Our work is most heavily based on Schaul et al. (2015a), who consider training a single neural network approximating multiple value functions. Learning to perform multiple tasks simultaneously has also been investigated for a long time in the context of Hierarchical Reinforcement Learning, e.g. Bakker and Schmidhuber (2004); Vezhnevets et al. (2017). Our approach may be seen as a form of implicit curriculum learning (Elman, 1993; Bengio et al., 2009). While curriculum is now often used for training neural networks (e.g. Zaremba and Sutskever (2014); Graves et al. (2016)), the curriculum is almost always hand-crafted. The problem of automatic curriculum generation was approached by Schmidhuber (2004), who constructed an asymptotically optimal algorithm for this problem using program search.
Another interesting approach is PowerPlay (Schmidhuber, 2013; Srivastava et al., 2013), which is a general framework for automatic task selection. Graves et al. (2017) consider a setup where there is a fixed discrete set of tasks and empirically evaluate different strategies for automatic curriculum generation in this setting. Another approach, investigated by Sukhbaatar et al. (2017) and Held et al. (2017), uses self-play between the policy and a task-setter in order to automatically generate goal states which are on the border of what the current policy can achieve. Our approach is orthogonal to these techniques and can be combined with them.
9 We have also tried replaying the goals which are close to the ones achieved in the near future, but it has not performed better than the future strategy.
10 The Q-function approximator was trained using exact observations. It does not have to be robust to noisy observations because it is not used during the deployment on the physical robot.
6 Conclusions
We introduced a novel technique called Hindsight Experience Replay which makes it possible to apply RL algorithms to problems with sparse and binary rewards. Our technique can be combined with an arbitrary off-policy RL algorithm, and we demonstrated this experimentally with DQN and DDPG. We showed that HER allows training policies which push, slide and pick-and-place objects with a robotic arm to specified positions, while the vanilla RL algorithm fails to solve these tasks. We also showed that the policy for the pick-and-place task performs well on the physical robot without any finetuning. As far as we know, this is the first time such complex behaviours have been learned using only sparse, binary rewards.
Acknowledgments
We would like to thank Ankur Handa, Jonathan Ho, John Schulman, Matthias Plappert, Tim Salimans, and Vikash Kumar for providing feedback on previous versions of this manuscript.
We would also like to thank Rein Houthooft and the whole OpenAI team for fruitful discussions as well as Bowen Baker for performing some additional experiments. References Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Bakker, B. and Schmidhuber, J. (2004). Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent Autonomous Systems, pages 438–445. Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. (2016). Unifying countbased exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471–1479. Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. Caruana, R. (1998). Multitask learning. In Learning to learn, pages 95–133. Springer. Chebotar, Y., Kalakrishnan, M., Yahya, A., Li, A., Schaal, S., and Levine, S. (2016). Path integral guided policy search. arXiv preprint arXiv:1610.00529. Da Silva, B., Konidaris, G., and Barto, A. (2012). Learning parameterized skills. arXiv preprint arXiv:1206.6398. Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2016). Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088. Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99. Foster, D. and Dayan, P. (2002). Structure in the space of value functions. Machine Learning, 49(2):325–346. Graves, A., Bellemare, M. G., Menick, J., Munos, R., and Kavukcuoglu, K. (2017). Automated curriculum learning for neural networks. arXiv preprint arXiv:1704.03003. 
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476. Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016). Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748. Held, D., Geng, X., Florensa, C., and Abbeel, P. (2017). Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366. Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109–1117. Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kober, J., Wilhelm, A., Oztop, E., and Peters, J. (2012). Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4):361–379. Kolter, J. Z. and Ng, A. Y. (2009). Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513–520. ACM. Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2015). End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702. Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293–321. Metz, L., Ibarz, J., Jaitly, N., and Davidson, J. (2017). Discrete sequential prediction of continuous actions for deep rl. arXiv preprint arXiv:1705.05035. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M.
G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533. Ng, A., Coates, A., Diel, M., Ganapathi, V., Schulte, J., Tse, B., Berger, E., and Liang, E. (2006). Autonomous inverted helicopter flight via reinforcement learning. Experimental Robotics IX, pages 363–372. Ng, A. Y., Harada, D., and Russell, S. (1999). Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287. Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. (2016). Deep exploration via bootstrapped dqn. In Advances In Neural Information Processing Systems, pages 4026–4034. Ostrovski, G., Bellemare, M. G., Oord, A. v. d., and Munos, R. (2017). Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310. Peters, J. and Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients. Neural networks, 21(4):682–697. Pinto, L. and Gupta, A. (2016). Learning to push by grasping: Using multiple tasks for effective learning. arXiv preprint arXiv:1609.09025. Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and Riedmiller, M. (2017). Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073. Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015a). Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1312–1320. Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2015b). Prioritized experience replay. arXiv preprint arXiv:1511.05952. Schmidhuber, J. (2004). Optimal ordered problem solver. Machine Learning, 54(3):211–254. Schmidhuber, J. (2013). Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Frontiers in psychology, 4. Schmidhuber, J. 
and Huber, R. (1990). Learning to generate focus trajectories for attentive vision. Institut für Informatik. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489. Srivastava, R. K., Steunebrink, B. R., and Schmidhuber, J. (2013). First experiments with powerplay. Neural Networks, 41:130–136. Strehl, A. L. and Littman, M. L. (2005). A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd international conference on Machine learning, pages 856–863. ACM. Sukhbaatar, S., Kostrikov, I., Szlam, A., and Fergus, R. (2017). Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407. Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 761–768. International Foundation for Autonomous Agents and Multiagent Systems. Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). # exploration: A study of count-based exploration for deep reinforcement learning. arXiv preprint arXiv:1611.04717. Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. arXiv preprint arXiv:1703.06907. Todorov, E., Erez, T., and Tassa, Y. (2012). Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE. Vezhnevets, A.
S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017). Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161. Zaremba, W. and Sutskever, I. (2014). Learning to execute. arXiv preprint arXiv:1410.4615.
Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data Joel A. Tropp Caltech jtropp@caltech.edu Alp Yurtsever EPFL alp.yurtsever@epfl.ch Madeleine Udell Cornell mru8@cornell.edu Volkan Cevher EPFL volkan.cevher@epfl.ch Abstract Several important applications, such as streaming PCA and semidefinite programming, involve a large-scale positive-semidefinite (psd) matrix that is presented as a sequence of linear updates. Because of storage limitations, it may only be possible to retain a sketch of the psd matrix. This paper develops a new algorithm for fixed-rank psd approximation from a sketch. The approach combines the Nyström approximation with a novel mechanism for rank truncation. Theoretical analysis establishes that the proposed method can achieve any prescribed relative error in the Schatten 1-norm and that it exploits the spectral decay of the input matrix. Computer experiments show that the proposed method dominates alternative techniques for fixed-rank psd matrix approximation across a wide range of examples. 1 Motivation In recent years, researchers have studied many applications where a large positive-semidefinite (psd) matrix is presented as a series of linear updates. A recurring theme is that we only have space to store a small summary of the psd matrix, and we must use this information to construct an accurate psd approximation with specified rank. Here are two important cases where this problem arises. Streaming Covariance Estimation. Suppose that we receive a stream h_1, h_2, h_3, … ∈ R^n of high-dimensional vectors. The psd sample covariance matrix of these vectors has the linear dynamics A^(0) ← 0 and A^(i) ← (1 − i^{−1}) A^(i−1) + i^{−1} h_i h_i^*. When the dimension n and the number of vectors are both large, it is not possible to store the vectors or the sample covariance matrix.
Instead, we wish to maintain a small summary that allows us to compute the rank-r psd approximation of the sample covariance matrix A^(i) at a specified instant i. This problem and its variants are often called streaming PCA [3, 12, 14, 15, 25, 32]. Convex Low-Rank Matrix Optimization with Optimal Storage. A primary application of semidefinite programming (SDP) is to search for a rank-r psd matrix that satisfies additional constraints. Because of storage costs, SDPs are difficult to solve when the matrix variable is large. Recently, Yurtsever et al. [44] exhibited the first provable algorithm, called SketchyCGM, that produces a rank-r approximate solution to an SDP using optimal storage. Implicitly, SketchyCGM forms a sequence of approximate psd solutions to the SDP via the iteration A^(0) ← 0 and A^(i) ← (1 − η_i) A^(i−1) + η_i h_i h_i^*. The step size is η_i = 2/(i + 2), and the vectors h_i do not depend on the matrices A^(i). In fact, SketchyCGM only maintains a small summary of the evolving solution A^(i). When the iteration terminates, SketchyCGM computes a rank-r psd approximation of the final iterate using the method described by Tropp et al. [37, Alg. 9]. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 1.1 Notation and Background The scalar field F = R or F = C. Define α(R) = 1 and α(C) = 0. The asterisk ^* is the (conjugate) transpose, and the dagger † denotes the Moore–Penrose pseudoinverse. The notation A^{1/2} refers to the unique psd square root of a psd matrix A. For p ∈ [1, ∞], the Schatten p-norm ‖·‖_p returns the ℓ_p norm of the singular values of a matrix. As usual, σ_r refers to the r-th largest singular value. For a nonnegative integer r, the phrase "rank-r" and its variants mean "rank at most r." For a matrix M, the symbol ⟦M⟧_r denotes a (simultaneous) best rank-r approximation of the matrix M with respect to any Schatten p-norm. We can take ⟦M⟧_r to be any r-truncated singular value decomposition (SVD) of M [24, Sec. 6].
Every best rank-r approximation of a psd matrix is psd. 2 Sketching and Fixed-Rank PSD Approximation We begin with a streaming data model for a psd matrix that evolves via a sequence of general linear updates, and then describe a randomized linear sketch for tracking the psd matrix. To compute a fixed-rank psd approximation, we develop an algorithm based on the Nyström method [40], a technique from the literature on kernel methods. In contrast to previous approaches, our algorithm uses a distinct mechanism to truncate the rank of the approximation. The Streaming Model. Fix a rank parameter r in the range 1 ≤ r ≤ n. Initially, the psd matrix A ∈ F^{n×n} equals a known psd matrix A_init ∈ F^{n×n}. Then A evolves via a series of linear updates: A ← θ1 A + θ2 H, where θi ∈ R and H ∈ F^{n×n} is (conjugate) symmetric. (2.1) In many applications, the innovation H is low-rank and/or sparse. We assume that the evolving matrix A always remains psd. At one given instant, we must produce an accurate rank-r approximation of the psd matrix A induced by the stream of linear updates. The Sketch. Fix a sketch size parameter k in the range r ≤ k ≤ n. Independent from A, we draw and fix a random test matrix Ω ∈ F^{n×k}. (2.2) See Sec. 3 for a discussion of possible distributions. The sketch of the matrix A takes the form Y = AΩ ∈ F^{n×k}. (2.3) The sketch (2.3) supports updates of the form (2.1): Y ← θ1 Y + θ2 HΩ. (2.4) To find a good rank-r approximation, we must set the sketch size k larger than r. But storage costs and computation also increase with k. One of our main contributions is to clarify the role of k. Under the model (2.1), it is more or less necessary to use a randomized linear sketch to track A [28]. For psd matrices, sketches of the form (2.2)–(2.3) appear explicitly in Gittens's work [16, 17, 19]. Tropp et al. [37] relies on a more complicated sketch developed in [7, 42]. The Nyström Approximation. The Nyström method is a general technique for low-rank psd matrix approximation.
Various instantiations appear in the papers [5, 11, 13, 16, 17, 19, 22, 27, 34, 40]. Here is the application to the present situation. Given the test matrix Ω and the sketch Y = AΩ, the Nyström method constructs a rank-k psd approximation of the psd matrix A via the formula Â_nys = Y (Ω^* Y)^† Y^*. (2.5) In most work on the Nyström method, the test matrix Ω depends adaptively on A, so these approaches are not valid in the streaming setting. Gittens's framework [16, 17, 19] covers the streaming case. Fixed-Rank Nyström Approximation: Prior Art. To construct a Nyström approximation with exact rank r from a sketch of size k, the standard approach is to truncate the center matrix to rank r: Â_r^nysfix = Y (⟦Ω^* Y⟧_r)^† Y^*. (2.6) The truncated Nyström approximation (2.6) appears in many papers, including [5, 11, 18, 34]. We have found (Sec. 5) that the truncation method (2.6) performs poorly in the present setting. This observation motivated us to search for more effective techniques. Fixed-Rank Nyström Approximation: Proposal. The purpose of this paper is to develop, analyze, and evaluate a new approach for fixed-rank approximation of a psd matrix under the streaming model. We propose a more intuitive rank-r approximation: Â_r = ⟦Â_nys⟧_r. (2.7) That is, we report a best rank-r approximation of the full Nyström approximation (2.5). This "matrix nearness" approach to fixed-rank approximation appears in the papers [21, 22, 37]. The combination with the Nyström method (2.5) is totally natural. Let us emphasize that the approach (2.7) also applies to Nyström approximations outside the streaming setting. Summary of Contributions. This paper contains a number of advances over the prior art: 1. We propose a new technique (2.7) for truncating the Nyström approximation to rank r. This formulation differs from the published literature on fixed-rank Nyström approximations. 2.
We present a stable numerical implementation of (2.7) based on the best practices outlined in the paper [27]. This approach is essential for achieving high precision! (Sec. 3) 3. We establish informative error bounds for the method (2.7). In particular, we prove that it attains (1 + ε)-relative error in the Schatten 1-norm when k = Θ(r/ε). (Sec. 4) 4. We document numerical experiments on real and synthetic data to demonstrate that our method dominates existing techniques [18, 37] for fixed-rank psd approximation. (Sec. 5) Psd matrix approximation is a ubiquitous problem, so we expect these results to have a broad impact. Related Work. Randomized algorithms for low-rank matrix approximation were proposed in the late 1990s and developed into a technology in the 2000s; see [22, 30, 41]. In the absence of constraints, such as streaming, we recommend the general-purpose methods from [22, 23, 27]. Algorithms for low-rank matrix approximation in the important streaming data setting are discussed in [4, 7, 8, 15, 22, 37, 41, 42]. Few of these methods are designed for psd matrices. Nyström methods for low-rank psd matrix approximation appear in [11, 13, 16, 17, 19, 22, 26, 34, 37, 40, 43]. These works mostly concern kernel matrices; they do not focus on the streaming model. We are only aware of a few papers [16, 17, 19, 37] on algorithms for psd matrix approximation that operate under the streaming model (2.1). These papers form the comparison group. After this paper was submitted, we learned about two contemporary works [35, 39] that propose the fixed-rank approximation (2.7) in the context of kernel methods. Our research is distinctive because we focus on the streaming setting, we obtain precise error bounds, we address numerical stability, and we include an exhaustive empirical evaluation. Finally, let us mention two very recent theoretical papers [6, 33] that present existential results on algorithms for fixed-rank psd matrix approximation. 
The approach in [6] is only appropriate for sparse input matrices, while the work [33] is not valid in the streaming setting. 3 Implementation Distributions for the Test Matrix. To ensure that the sketch is informative, we must draw the test matrix (2.2) at random from a suitable distribution. The choice of distribution determines the computational requirements for the sketch (2.3), the linear updates (2.4), and the matrix approximation (2.7). It also affects the quality of the approximation (2.7). Let us outline some of the most useful distributions. A full discussion is outside the scope of our work, but see [17, 19, 22, 29, 30, 37, 41]. Isotropic Models. Mathematically, the most natural model is to construct a test matrix Ω ∈ F^{n×k} whose range is a uniformly random k-dimensional subspace in F^n. There are two approaches: 1. Gaussian. Draw each entry of the matrix Ω ∈ F^{n×k} independently at random from the standard normal distribution on F. 2. Orthonormal. Draw a Gaussian matrix G ∈ F^{n×k}, as above. Compute a thin orthogonal–triangular factorization G = ΩR to obtain the test matrix Ω ∈ F^{n×k}. Discard R. Gaussian and orthonormal test matrices both require storage of kn floating-point numbers in F for the test matrix Ω and another kn floating-point numbers for the sketch Y. In both cases, the cost of multiplying a vector in F^n into Ω is Θ(kn) floating-point operations.
Algorithm 1 Sketch Initialization. Implements (2.2)–(2.3) with a random orthonormal test matrix.
Input: Positive-semidefinite input matrix A ∈ F^{n×n}; sketch size parameter k
Output: Constructs test matrix Ω ∈ F^{n×k} and sketch Y = AΩ ∈ F^{n×k}
1 local: Ω, Y ▷ Internal variables for NYSTROMSKETCH
2 function NYSTROMSKETCH(A; k) ▷ Constructor
3   if F = R then
4     Ω ← randn(n, k)
5   if F = C then
6     Ω ← randn(n, k) + i∗randn(n, k)
7   Ω ← orth(Ω) ▷ Improve numerical stability
8   Y ← AΩ
Algorithm 2 Linear Update. Implements (2.4).
Input: Scalars θ1, θ2 ∈ R and conjugate symmetric H ∈ F^{n×n}
Output: Updates sketch to reflect linear innovation A ← θ1 A + θ2 H
1 local: Ω, Y ▷ Internal variables for NYSTROMSKETCH
2 function LINEARUPDATE(θ1, θ2, H)
3   Y ← θ1 Y + θ2 HΩ
For isotropic models, we can analyze the approximation (2.7) in detail. In exact arithmetic, Gaussian and orthonormal test matrices yield identical Nyström approximations (Supplement). In floating-point arithmetic, orthonormal matrices are more stable for large k, but we can generate Gaussian matrices with less arithmetic and communication. References for isotropic test matrices include [21, 22, 31]. Subsampled Scrambled Fourier Transform (SSFT). One shortcoming of the isotropic models is the cost of storing the test matrix and the cost of multiplying a vector into the test matrix. We can often reduce these costs using an SSFT test matrix. An SSFT takes the form Ω = Π1 F Π2 F R ∈ F^{n×k}. (3.1) The Πi ∈ F^{n×n} are independent, signed permutation matrices,1 chosen uniformly at random. The matrix F ∈ F^{n×n} is a discrete Fourier transform (F = C) or a discrete cosine transform (F = R). The matrix R ∈ F^{n×k} is a restriction to k coordinates, chosen uniformly at random. An SSFT Ω requires only Θ(n) storage, but the sketch Y still requires storage of kn numbers. We can multiply a vector in F^n into Ω using Θ(n log n) arithmetic operations via an FFT or FCT algorithm. Thus, for most choices of sketch size k, the SSFT improves over the isotropic models. In practice, the SSFT yields matrix approximations whose quality is identical to those we obtain with an isotropic test matrix (Sec. 5). Although the analysis for SSFTs is less complete, the empirical evidence confirms that the theory for isotropic models also offers excellent guidance for SSFTs. References for SSFTs and related test matrices include [1, 2, 9, 22, 29, 36, 42]. Numerically Stable Implementation. It requires care to compute the fixed-rank approximation (2.7).
The supplement shows that a poor implementation may produce an approximation with 100% error! Let us outline a numerically stable and very accurate implementation of (2.7), based on an idea from [27, 38]. Fix a small parameter ν > 0. Instead of approximating the psd matrix A directly, we approximate the shifted matrix Aν = A + νI and then remove the shift. Here are the steps:

1. Construct the shifted sketch Yν = Y + νΩ.
2. Form the matrix B = Ω* Yν.
3. Compute a Cholesky decomposition B = CC*.
4. Compute E = Yν C^{−1} by back-substitution.
5. Compute the (thin) singular value decomposition E = UΣV*.
6. Form Âr = U ⟦Σ² − νI⟧_r U*.

¹A signed permutation has exactly one nonzero entry in each row and column; the nonzero has modulus one.

Algorithm 3 Fixed-Rank PSD Approximation. Implements (2.7).
Input: Matrix A in the sketch must be psd; rank parameter 1 ≤ r ≤ k
Output: Returns factors U ∈ F^{n×r} with orthonormal columns and nonnegative, diagonal Λ ∈ F^{r×r} that form a rank-r psd approximation Âr = UΛU* of the sketched matrix A
 1  local: Ω, Y                          ▷ Internal variables for NYSTROMSKETCH
 2  function FIXEDRANKPSDAPPROX(r)
 3    ν ← µ · norm(Y)                    ▷ µ = 2.2 · 10^{−16} in double precision
 4    Y ← Y + νΩ                         ▷ Sketch of shifted matrix A + νI
 5    B ← Ω* Y
 6    C ← chol((B + B*)/2)               ▷ Force symmetry
 7    (U, Σ, ∼) ← svd(Y / C, ’econ’)     ▷ Solve least-squares problem; form thin SVD
 8    U ← U(:, 1:r) and Σ ← Σ(1:r, 1:r)  ▷ Truncate to rank r
 9    Λ ← max{0, Σ² − νI}                ▷ Square to get eigenvalues; remove shift
10    return (U, Λ)

The pseudocode addresses some additional implementation details. Related, but distinct, methods were proposed by Williams & Seeger [40] and analyzed in Gittens’s thesis [17].

Pseudocode. We present detailed pseudocode for the sketch (2.2)–(2.4) and the implementation of the fixed-rank psd approximation (2.7) described above. For simplicity, we only elaborate the case of a random orthonormal test matrix; we have also developed an SSFT implementation for empirical testing.
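For readers who want an executable reference point alongside the MATLAB-style pseudocode, the three routines can be sketched in Python/NumPy for the real field F = R. The class and method names below are our own, the complex case and the SSFT variant are omitted, and this sketch is illustrative rather than a drop-in replacement for a reference implementation.

```python
import numpy as np

class NystromSketch:
    """Rough transcription of Algorithms 1-3 for F = R (names are ours)."""

    def __init__(self, A, k):
        # Algorithm 1: random orthonormal test matrix and sketch Y = A @ Omega.
        n = A.shape[0]
        G = np.random.randn(n, k)
        self.Omega, _ = np.linalg.qr(G)      # orthonormalize for stability
        self.Y = A @ self.Omega

    def linear_update(self, theta1, theta2, H):
        # Algorithm 2: track A <- theta1*A + theta2*H using only the sketch.
        self.Y = theta1 * self.Y + theta2 * (H @ self.Omega)

    def fixed_rank_approx(self, r):
        # Algorithm 3: numerically stable rank-r psd approximation.
        nu = np.finfo(float).eps * np.linalg.norm(self.Y, 2)  # shift nu = mu*||Y||
        Y_nu = self.Y + nu * self.Omega                       # sketch of A + nu*I
        B = self.Omega.T @ Y_nu
        L = np.linalg.cholesky((B + B.T) / 2)                 # force symmetry; B = L L^T
        E = np.linalg.solve(L, Y_nu.T).T                      # E = Y_nu * L^{-T} (back-substitution)
        U, s, _ = np.linalg.svd(E, full_matrices=False)       # thin SVD
        U, s = U[:, :r], s[:r]                                # truncate to rank r
        lam = np.maximum(s ** 2 - nu, 0.0)                    # square to get eigenvalues; remove shift
        return U, lam                                         # A_hat_r = U @ diag(lam) @ U.T
```

The fixed-rank step mirrors the shifted, Cholesky-based procedure above: E = Yν C⁻¹ is obtained by a triangular solve, and the eigenvalues are recovered by squaring the singular values of E and removing the shift.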
The pseudocode uses both mathematical notation and MATLAB 2017A functions.

Algorithms and Computational Costs. Algorithm 1 constructs a random orthonormal test matrix, and computes the sketch (2.3) of an input matrix. The test matrix and sketch require the storage of 2kn floating-point numbers. Owing to the orthogonalization step, the construction of the test matrix requires Θ(k²n) floating-point operations. For a general input matrix, the sketch requires Θ(kn²) floating-point operations; this cost can be removed by initializing the input matrix to zero. Algorithm 2 implements the linear update (2.4) to the sketch. Nominally, the computation requires Θ(kn²) arithmetic operations, but this cost can be reduced when H has structure (e.g., low rank). Using the SSFT test matrix (3.1) also reduces this cost. Algorithm 3 computes the rank-r psd approximation (2.7). This method requires additional storage of Θ(kn). The arithmetic cost is Θ(k²n) operations, which is dominated by the SVD of the matrix E.

4 Theoretical Results

Relative Error Bound. Our first result is an accurate bound for the expected Schatten 1-norm error in the fixed-rank psd approximation (2.7).

Theorem 4.1 (Fixed-Rank Nyström: Relative Error). Assume 1 ≤ r < k ≤ n. Let A ∈ F^{n×n} be a psd matrix. Draw a test matrix Ω ∈ F^{n×k} from the Gaussian or orthonormal distribution, and form the sketch Y = AΩ. Then the approximation Âr given by (2.5) and (2.7) satisfies

    E ‖A − Âr‖_1 ≤ (1 + r/(k − r − α)) · ‖A − ⟦A⟧_r‖_1;    (4.1)
    E ‖A − Âr‖_∞ ≤ ‖A − ⟦A⟧_r‖_∞ + r/(k − r − α) · ‖A − ⟦A⟧_r‖_1.    (4.2)

The quantities α(R) = 1 and α(C) = 0. Similar results hold with high probability.

The proof appears in the supplement. In contrast to all previous analyses of randomized Nyström methods, Theorem 4.1 yields explicit, sharp constants. (The contemporary work [39, Thm. 1] contains only a less precise variant of (4.1).) As a consequence, the formulae (4.1)–(4.2) offer an a priori mechanism for selecting the sketch size k to achieve a desired error bound.
In particular, for each ε > 0,

    k = (1 + ε⁻¹) r + α implies E ‖A − Âr‖_1 ≤ (1 + ε) · ‖A − ⟦A⟧_r‖_1.

Thus, we can attain an arbitrarily small relative error in the Schatten 1-norm. In the streaming setting, the scaling k = Θ(r/ε) is optimal for this result [14, Thm. 4.2]. Furthermore, it is impossible [41, Sec. 6.2] to obtain “pure” relative error bounds in the Schatten ∞-norm unless k = Ω(n).

The Role of Spectral Decay. To circumvent these limitations, it is necessary to develop a different kind of error bound. Our second result shows that the fixed-rank psd approximation (2.7) automatically exploits decay in the spectrum of the input matrix.

Theorem 4.2 (Fixed-Rank Nyström: Spectral Decay). Instate the notation and assumptions of Theorem 4.1. Then

    E ‖A − Âr‖_1 ≤ ‖A − ⟦A⟧_r‖_1 + 2 min_{ϱ<k−α} [(1 + ϱ/(k − ϱ − α)) · ‖A − ⟦A⟧_ϱ‖_1];    (4.3)
    E ‖A − Âr‖_∞ ≤ ‖A − ⟦A⟧_r‖_∞ + 2 min_{ϱ<k−α} [(1 + ϱ/(k − ϱ − α)) · ‖A − ⟦A⟧_ϱ‖_1].    (4.4)

The index ϱ ranges over the natural numbers.

The proof of Theorem 4.2 appears in the supplement. Here is one way to understand this result. As the index ϱ increases, the quantity ϱ/(k − ϱ − α) increases while the rank-ϱ approximation error decreases. Theorem 4.2 states that the approximation (2.7) automatically achieves the best tradeoff between these two terms. When the spectrum of A decays, the rank-ϱ approximation error may be far smaller than the rank-r approximation error. In this case, Theorem 4.2 is tighter than Theorem 4.1, although the prediction is more qualitative.

Additional Results. The proofs can be extended to obtain high-probability bounds, as well as results for other Schatten norms or for other test matrices (Supplement).

5 Numerical Performance

Experimental Setup. In many streaming applications, such as [44], it is essential that the sketch uses as little memory as possible and that the psd approximation achieves the best possible error. For the methods we consider, the arithmetic costs of linear updates and psd approximation are roughly comparable.
Therefore, we only assess storage and accuracy. For the numerical experiments, the field F = C except when noted explicitly. Choose a psd input matrix A ∈ F^{n×n} and a target rank r. Then fix a sketch size parameter k with r ≤ k ≤ n. For each trial, draw the test matrix Ω from the orthonormal or the SSFT distribution, and form the sketch Y = AΩ of the input matrix. Using Algorithm 3, compute the rank-r psd approximation Âr defined in (2.7). We evaluate the performance using the relative error metric:

    Schatten p-norm relative error = ‖A − Âr‖_p / ‖A − ⟦A⟧_r‖_p − 1.    (5.1)

We perform 20 independent trials and report the average error.

We compare our method (2.7) with the standard truncated Nyström approximation (2.6); the best reference for this type of approach is [18, Sec. 2.2]. The approximation (2.6) is constructed from the same sketch as (2.7), so the experimental procedure is identical. We also consider the sketching method and psd approximation algorithm [37, Alg. 9] based on earlier work from [7, 22, 42]. We implemented this sketch with orthonormal matrices and also with SSFT matrices. The sketch has two different parameters (k, ℓ), so we select the parameters that result in the minimum relative error. Otherwise, the experimental procedure is the same.

We apply the methods to representative input matrices; see the Supplement for plots of the spectra.

Synthetic Examples. The synthetic examples are diagonal with dimension n = 10³; results for larger and non-diagonal matrices are similar. These matrices are parameterized by an effective rank parameter R, which takes values in {5, 10, 20}. We compute approximations with rank r = 10.

1. Low-Rank + PSD Noise. These matrices take the form

    A = diag(1, . . . , 1, 0, . . . , 0) + ξ n⁻¹ W ∈ F^{n×n},

where the first R diagonal entries equal one.
[Figure 5.1 appears here; panels (A) PhaseRetrieval (r = 1), (B) PhaseRetrieval (r = 5), (C) MaxCut (r = 1), (D) MaxCut (r = 14), each plotting Schatten 1-norm relative error against storage T for [TYUC17, Alg. 9], Standard (2.6), and Proposed (2.7).]

FIGURE 5.1: Application Examples, Approximation Rank r, Schatten 1-Norm Error. The data series show the performance of three algorithms for rank-r psd approximation. Solid lines are generated from the Gaussian sketch; dashed lines are from the SSFT sketch. Each panel displays the Schatten 1-norm relative error (5.1) as a function of storage cost T. See Sec. 5 for details.

The matrix W ∈ F^{n×n} has the WISHART(n, n; F) distribution; that is, W = GG* where G ∈ F^{n×n} is standard normal. The parameter ξ controls the signal-to-noise ratio. We consider three examples: LowRankLowNoise (ξ = 10⁻⁴), LowRankMedNoise (ξ = 10⁻²), LowRankHiNoise (ξ = 10⁻¹).

2. Polynomial Decay. These matrices take the form

    A = diag(1, . . . , 1, 2⁻ᵖ, 3⁻ᵖ, . . . , (n − R + 1)⁻ᵖ) ∈ F^{n×n},

where the first R diagonal entries equal one. The parameter p > 0 controls the rate of polynomial decay. We consider three examples: PolyDecaySlow (p = 0.5), PolyDecayMed (p = 1), PolyDecayFast (p = 2).

3. Exponential Decay. These matrices take the form

    A = diag(1, . . . , 1, 10⁻ᑫ, 10⁻²ᑫ, . . . , 10⁻⁽ⁿ⁻ᴿ⁾ᑫ) ∈ F^{n×n},

where the first R diagonal entries equal one. The parameter q > 0 controls the rate of exponential decay. We consider three examples: ExpDecaySlow (q = 0.1), ExpDecayMed (q = 0.25), ExpDecayFast (q = 1).

Application Examples. We also consider non-diagonal matrices inspired by the SDP algorithm [44].

1. MaxCut: This is a real-valued psd matrix with dimension n = 2,000, and its effective rank R = 14. We form approximations with rank r ∈ {1, 14}. The matrix is an approximate solution to the MAXCUT SDP [20] for the sparse graph G40 [10].
2. PhaseRetrieval: This is a psd matrix with dimension n = 25,921.
It has exact rank 250, but its effective rank R = 5. We form approximations with rank r ∈ {1, 5}. The matrix is an approximate solution to a phase retrieval SDP; the data is drawn from our paper [44].

Experimental Results. Figures 5.1–5.2 display the performance of the three fixed-rank psd approximation methods for a subcollection of the input matrices. The vertical axis is the Schatten 1-norm relative error (5.1).

[Figure 5.2 appears here; panels (A) LowRankLowNoise, (B) LowRankMedNoise, (C) LowRankHiNoise, (D) PolyDecayFast, (E) PolyDecayMed, (F) PolyDecaySlow, (G) ExpDecayFast, (H) ExpDecayMed, (I) ExpDecaySlow, each plotting relative error against storage T for [TYUC17, Alg. 9], Standard (2.6), and Proposed (2.7).]

FIGURE 5.2: Synthetic Examples with Effective Rank R = 10, Approximation Rank r = 10, Schatten 1-Norm Error. The data series show the performance of three algorithms for rank-r psd approximation with r = 10. Solid lines are generated from the Gaussian sketch; dashed lines are from the SSFT sketch. Each panel displays the Schatten 1-norm relative error (5.1) as a function of storage cost T.

The variable T on the horizontal axis is proportional to the storage required for the sketch only. For the Nyström-based approximations (2.6)–(2.7), we have the correspondence T = k. For the approximation [37, Alg. 9], we set T = k + ℓ. The experiments demonstrate that the proposed method (2.7) has a significant benefit over the alternatives for input matrices that admit a good low-rank approximation. It equals or improves on the competitors for almost all other examples and storage budgets.
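For readers reproducing comparisons of this kind, the error metric (5.1) and one of the synthetic test-matrix families are straightforward to set up. The following is a minimal sketch with our own function names, using a small dimension rather than the paper's n = 10³.

```python
import numpy as np

def schatten1_relative_error(A, A_hat, r):
    """Relative error metric (5.1) with p = 1, for a psd input matrix A."""
    s = np.linalg.svd(A, compute_uv=False)                 # singular values, descending
    best = s[r:].sum()                                     # ||A - [[A]]_r||_1
    err = np.linalg.svd(A - A_hat, compute_uv=False).sum() # ||A - A_hat||_1
    return err / best - 1.0

def poly_decay_matrix(n, R, p):
    """PolyDecay test matrix: R unit eigenvalues, then 2^-p, 3^-p, ..."""
    tail = np.arange(2, n - R + 2, dtype=float) ** (-p)
    return np.diag(np.concatenate([np.ones(R), tail]))
```

By construction, the metric evaluates to zero for the best rank-r approximation (here, the diagonal truncation) and is positive for any worse approximation.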
The supplement contains additional numerical results; these experiments only reinforce the message of Figures 5.1–5.2.

Conclusions. This paper makes the case for using the proposed fixed-rank psd approximation (2.7) in lieu of the alternatives (2.6) or [37, Alg. 9]. Theorem 4.1 shows that the proposed fixed-rank psd approximation (2.7) can attain any prescribed relative error, and Theorem 4.2 shows that it can exploit spectral decay. Furthermore, our numerical work demonstrates that the proposed approximation improves (almost) uniformly over the competitors for a range of examples. These results are timely because of the recent arrival of compelling applications, such as [44], for sketching psd matrices.

Acknowledgments. The authors wish to thank Mark Tygert and Alex Gittens for helpful feedback on preliminary versions of this work. JAT gratefully acknowledges partial support from ONR Award N00014-17-1-2146 and the Gordon & Betty Moore Foundation. VC and AY were supported in part by the European Commission under Grant ERC Future Proof, SNF 200021-146750, and SNF CRSII2-147633. MU was supported in part by DARPA Award FA8750-17-2-0101.

References

[1] N. Ailon and B. Chazelle. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput., 39(1):302–322, 2009.
[2] C. Boutsidis and A. Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. SIAM J. Matrix Anal. Appl., 34(3):1301–1340, 2013.
[3] C. Boutsidis, D. Garber, Z. Karnin, and E. Liberty. Online principal components analysis. In Proc. 26th Ann. ACM-SIAM Symp. Discrete Algorithms (SODA), pages 887–901, 2015.
[4] C. Boutsidis, D. Woodruff, and P. Zhong. Optimal principal component analysis in distributed and streaming models. In Proc. 48th ACM Symp. Theory of Computing (STOC), 2016.
[5] J. Chiu and L. Demanet. Sublinear randomized algorithms for skeleton decompositions. SIAM J. Matrix Anal. Appl., 34(3):1361–1383, 2013.
[6] K. Clarkson and D. Woodruff.
Low-rank PSD approximation in input-sparsity time. In Proc. 28th Ann. ACM-SIAM Symp. Discrete Algorithms (SODA), pages 2061–2072, Jan. 2017.
[7] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In Proc. 41st ACM Symp. Theory of Computing (STOC), 2009.
[8] M. B. Cohen, S. Elder, C. Musco, C. Musco, and M. Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proc. 47th ACM Symp. Theory of Computing (STOC), pages 163–172. ACM, New York, 2015.
[9] M. B. Cohen, J. Nelson, and D. P. Woodruff. Optimal approximate matrix product in terms of stable rank. In 43rd Int. Coll. Automata, Languages, and Programming (ICALP), volume 55, pages 11:1–11:14, 2016.
[10] T. A. Davis and Y. Hu. The University of Florida sparse matrix collection. ACM Trans. Math. Softw., 38(1):1:1–1:25, 2011.
[11] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. J. Mach. Learn. Res., 6:2153–2175, 2005.
[12] D. Feldman, M. Volkov, and D. Rus. Dimensionality reduction of massive sparse datasets using coresets. In Adv. Neural Information Processing Systems 29 (NIPS), 2016.
[13] C. Fowlkes, S. Belongie, F. Chung, and J. Malik. Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Mach. Intell., 26(2):214–225, Jan. 2004.
[14] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM J. Comput., 45(5):1762–1792, 2016.
[15] A. C. Gilbert, J. Y. Park, and M. B. Wakin. Sketched SVD: Recovering spectral features from compressed measurements. Available at http://arXiv.org/abs/1211.0361, Nov. 2012.
[16] A. Gittens. The spectral norm error of the naïve Nyström extension. Available at http://arXiv.org/abs/1110.5305, Oct. 2011.
[17] A. Gittens. Topics in Randomized Numerical Linear Algebra. PhD thesis, California Institute of Technology, 2013.
[18] A. Gittens and M. W. Mahoney.
Revisiting the Nyström method for improved large-scale machine learning. Available at http://arXiv.org/abs/1303.1849, Mar. 2013.
[19] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. J. Mach. Learn. Res., 17:Paper No. 117, 65, 2016.
[20] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. Assoc. Comput. Mach., 42(6):1115–1145, 1995.
[21] M. Gu. Subspace iteration randomization and singular value problems. SIAM J. Sci. Comput., 37(3):A1139–A1173, 2015.
[22] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, 2011.
[23] N. Halko, P.-G. Martinsson, Y. Shkolnisky, and M. Tygert. An algorithm for the principal component analysis of large data sets. SIAM J. Sci. Comput., 33(5):2580–2594, 2011.
[24] N. J. Higham. Matrix nearness problems and applications. In Applications of Matrix Theory (Bradford, 1988), pages 1–27. Oxford Univ. Press, New York, 1989.
[25] P. Jain, C. Jin, S. M. Kakade, P. Netrapalli, and A. Sidford. Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja’s algorithm. In 29th Ann. Conf. Learning Theory (COLT), pages 1147–1164, 2016.
[26] S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nyström method. J. Mach. Learn. Res., 13:981–1006, Apr. 2012.
[27] H. Li, G. C. Linderman, A. Szlam, K. P. Stanton, Y. Kluger, and M. Tygert. Algorithm 971: An implementation of a randomized algorithm for principal component analysis. ACM Trans. Math. Softw., 43(3):28:1–28:14, Jan. 2017.
[28] Y. Li, H. L. Nguyen, and D. P. Woodruff. Turnstile streaming algorithms might as well be linear sketches. In Proc. 2014 ACM Symp.
Theory of Computing (STOC), pages 174–183. ACM, 2014.
[29] E. Liberty. Accelerated dense random projections. PhD thesis, Yale Univ., New Haven, 2009.
[30] M. W. Mahoney. Randomized algorithms for matrices and data. Found. Trends Mach. Learn., 3(2):123–224, 2011.
[31] P.-G. Martinsson, V. Rokhlin, and M. Tygert. A randomized algorithm for the decomposition of matrices. Appl. Comput. Harmon. Anal., 30(1):47–68, 2011.
[32] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In Adv. Neural Information Processing Systems 26 (NIPS), pages 2886–2894, 2013.
[33] C. Musco and D. Woodruff. Sublinear time low-rank approximation of positive semidefinite matrices. Available at http://arXiv.org/abs/1704.03371, Apr. 2017.
[34] J. C. Platt. FastMap, MetricMap, and Landmark MDS are all Nyström algorithms. In Proc. 10th Int. Workshop Artificial Intelligence and Statistics (AISTATS), pages 261–268, 2005.
[35] F. Pourkamali-Anaraki and S. Becker. Randomized clustered Nyström for large-scale kernel machines. Available at http://arXiv.org/abs/1612.06470, Dec. 2016.
[36] J. A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal., 3(1-2):115–126, 2011.
[37] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher. Randomized single-view algorithms for low-rank matrix approximation. ACM Report 2017-01, Caltech, Pasadena, Jan. 2017. Available at http://arXiv.org/abs/1609.00048, v1.
[38] M. Tygert. Beta versions of Matlab routines for principal component analysis. Available at http://tygert.com/software.html, 2014.
[39] S. Wang, A. Gittens, and M. W. Mahoney. Scalable kernel K-means clustering with Nyström approximation: relative-error bounds. Available at http://arXiv.org/abs/1706.02803, June 2017.
[40] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. Neural Information Processing Systems 13 (NIPS), 2000.
[41] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Found.
Trends Theor. Comput. Sci., 10(1-2):iv+157, 2014.
[42] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert. A fast randomized algorithm for the approximation of matrices. Appl. Comput. Harmon. Anal., 25(3):335–366, 2008.
[43] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Adv. Neural Information Processing Systems 25 (NIPS), pages 476–484, 2012.
[44] A. Yurtsever, M. Udell, J. A. Tropp, and V. Cevher. Sketchy decisions: Convex low-rank matrix optimization with optimal storage. In Proc. 20th Int. Conf. Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, May 2017.
High-Order Attention Models for Visual Question Answering

Idan Schwartz, Department of Computer Science, Technion, idansc@cs.technion.ac.il
Alexander G. Schwing, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, aschwing@illinois.edu
Tamir Hazan, Department of Industrial Engineering & Management, Technion, tamir.hazan@gmail.com

Abstract

The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.

1 Introduction

The quest for algorithms which enable cognitive abilities is an important part of machine learning and appears in many facets, e.g., in visual question answering tasks [6], image captioning [26], visual question generation [18, 10] and machine comprehension [8]. A common trait in these recent cognitive-like tasks is that they take into account different data modalities, for example, visual and textual data. To address these tasks, recently, attention mechanisms have emerged as a powerful common theme, which provides not only some form of interpretability if applied to deep net models, but also often improves performance [8]. The latter effect is attributed to more expressive yet concise forms of the various data modalities.
Present day attention mechanisms, like for example [15, 26], are however often lacking in two main aspects. First, the systems generally extract abstract representations of data in an ad-hoc and entangled manner. Second, present day attention mechanisms are often geared towards a specific form of input and therefore hand-crafted for a particular task. To address both issues, we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. For example, second order correlations can model interactions between two data modalities, e.g., an image and a question, and more generally, k-th order correlations can model interactions between k modalities. Learning these correlations effectively directs the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our novel attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the VQA dataset [2]. Some of our results are visualized in Fig. 1, where we show how the visual attention correlates with the textual attention.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1 appears here; for one image and the questions “What does the man have on his head?” and “How many cars are in the picture?”, the columns show the original image, the unary potentials, the pairwise potentials, and the final attention.]

Figure 1: Results of our multi-modal attention for one image and two different questions (1st column). The unary image attention is identical by construction. The pairwise potentials differ for both questions and images since both modalities are taken into account (3rd column). The final attention is illustrated in the 4th column.
We begin by reviewing the related work. We subsequently provide details of our proposed technique, focusing on the high-order nature of our attention models. We then conclude by presenting the application of our high-order attention mechanism to VQA and compare it to the state-of-the-art.

2 Related work

Attention mechanisms have been investigated for both image and textual data. In the following we review mechanisms for both.

Image attention mechanisms: Over the past few years, single image embeddings extracted from a deep net (e.g., [17, 16]) have been extended to a variety of image attention modules, when considering VQA. For example, a textual long short term memory net (LSTM) may be augmented with a spatial attention [29]. Similarly, Andreas et al. [1] employ a language parser together with a series of neural net modules, one of which attends to regions in an image. The language parser suggests which neural net module to use. Stacking of attention units was also investigated by Yang et al. [27]. Their stacked attention network predicts the answer successively. Dynamic memory network modules which capture contextual information from neighboring image regions have been considered by Xiong et al. [24]. Shih et al. [23] use object proposals and rank regions according to relevance. The multi-hop attention scheme of Xu et al. [25] was proposed to extract fine-grained details. A joint attention mechanism was discussed by Lu et al. [15], and Fukui et al. [7] suggest an efficient outer product mechanism to combine visual representation and text representation before applying attention over the combined representation. Additionally, they suggested the use of glimpses. Very recently, Kazemi et al. [11] showed a similar approach using concatenation instead of outer product. Importantly, all of these approaches model attention as a single network.
The fact that multiple modalities are involved is often not considered explicitly, which contrasts the aforementioned approaches from the technique we present. Very recently, Kim et al. [14] presented a technique that also interprets attention as a multi-variate probabilistic model, to incorporate structural dependencies into the deep net. Other recent techniques are work by Nam et al. [19] on dual attention mechanisms and work by Kim et al. [13] on bilinear models. In contrast to the latter two models, our approach is easy to extend to any number of data modalities.

Textual attention mechanisms: We also want to provide a brief review of textual attention. To address some of the challenges, e.g., long sentences, faced by translation models, Hermann et al. [8] proposed RNNSearch. To address the challenges which arise by fixing the latent dimension of neural nets processing text data, Bahdanau et al. [3] first encode a document and a query via a bidirectional LSTM, which are then used to compute attentions. This mechanism was later refined in [22], where a word based technique reasons about sentence representations. Joint attention between two CNN hierarchies is discussed by Yin et al. [28]. Among all those attention mechanisms, relevant to our approach is work by Lu et al. [15] and the approach presented by Xu et al. [25]. Both discuss attention mechanisms which operate jointly over two modalities. Xu et al. [25] use pairwise interactions in the form of a similarity matrix, but ignore the attentions on individual data modalities. Lu et al. [15] suggest an alternating model that directly combines the features of the modalities before attending. Additionally, they suggested a parallel model which uses a similarity matrix to map features from one modality to the other. It is hard to extend this approach to more than two modalities.
In contrast, our model develops a probabilistic model, based on high-order potentials, and performs mean-field inference to obtain marginal probabilities. This permits trivial extension of the model to any number of modalities. Additionally, Jabri et al. [9] propose a model where answers are also used as inputs. Their approach questions the need for attention mechanisms and develops an alternative solution based on binary classification. In contrast, our approach captures high-order attention correlations, which we found to improve performance significantly.

Overall, while there is early work that proposes a combination of language and image attention for VQA, e.g., [15, 25, 12], attention mechanisms with several potentials haven’t been discussed in detail yet. In the following we present our approach for joint attention over any number of modalities.

3 Higher order attention models

Attention modules are a crucial component for present day decision making systems. Particularly when taking into account more and more data of different modalities, attention mechanisms are able to provide insights into the inner workings of the oftentimes abstract and automatically extracted representations of our systems. An example of such a system that captured a lot of research efforts in recent years is visual question answering (VQA). Considering VQA as an example, we immediately note its dependence on two or even three different data modalities, the visual input V, the question Q and the answer A, which get processed simultaneously. More formally, we let V ∈ R^{nv×d}, Q ∈ R^{nq×d}, A ∈ R^{na×d} denote representations for the visual input, the question and the answer, respectively. Hereby, nv, nq and na are the number of pixels, the number of words in the question, and the number of possible answers. We use d to denote the dimensionality of the data. For simplicity of the exposition we assume d to be identical across all data modalities.
Due to this dependence on multiple data modalities, present day decision making systems can be decomposed into three major parts: (i) the data embedding; (ii) attention mechanisms; and (iii) the decision making. For a state-of-the-art VQA system such as the one we developed here, those three parts are immediately apparent when considering the high-level system architecture outlined in Fig. 2.

3.1 Data embedding

Attention modules deliver to the decision making component a succinct representation of the relevant data modalities. As such, their performance depends on how we represent the data modalities themselves. Oftentimes, an attention module tends to use expressive yet concise data embedding algorithms to better capture their correlations and consequently to improve the decision making performance. For example, data embeddings based on convolutional deep nets constitute the state-of-the-art in many visual recognition and scene understanding tasks. Language embeddings heavily rely on LSTMs, which are able to capture context in sequential data, such as words, phrases and sentences. We give a detailed account of our data embedding architectures for VQA in Sec. 4.1.

[Figure 2 appears here; the architecture combines a ResNet image embedding, word embeddings with LSTM and 1D-convolution for the question and the multiple-choice answers, unary, pairwise and ternary potentials followed by softmax attention, and MCB pooling for the decision; the three stages are labeled Data embedding (Sec. 3.1), Attention (Sec. 3.2), and Decision (Sec. 3.3).]

Figure 2: Our state-of-the-art VQA system.

3.2 Attention

As apparent from the aforementioned description, attention is the crucial component connecting data embeddings with decision making modules. Subsequently we denote attention over the nq words in the question via PQ(iq), where iq ∈ {1, . . . , nq} is the word index. Similarly, attention over the image is referred to via PV(iv), where iv ∈ {1, . . .
, nv}, and attention over the possible answers is denoted PA(ia), where ia ∈ {1, . . . , na}.

We consider the attention mechanism as a probability model, with each attention mechanism computing “potentials.” First, unary potentials θV, θQ, θA denote the importance of each feature (e.g., question word representations, multiple choice answer representations, and image patch features) for the VQA task. Second, pairwise potentials θV,Q, θV,A, θQ,A express correlations between two modalities. Last, the third-order potential θV,Q,A captures dependencies between the three modalities. To obtain the marginal probabilities PQ, PV and PA from the potentials, our model performs mean-field inference. We combine the unary potential, the marginalized pairwise potentials and the marginalized third-order potential linearly, including a bias term:

    PV(iv) = smax(α1 θV(iv) + α2 θV,Q(iv) + α3 θA,V(iv) + α4 θV,Q,A(iv) + α5),
    PQ(iq) = smax(β1 θQ(iq) + β2 θV,Q(iq) + β3 θA,Q(iq) + β4 θV,Q,A(iq) + β5),    (1)
    PA(ia) = smax(γ1 θA(ia) + γ2 θA,V(ia) + γ3 θA,Q(ia) + γ4 θV,Q,A(ia) + γ5).

Hereby αi, βi, and γi are learnable parameters and smax(·) refers to the soft-max operation over iv ∈ {1, . . . , nv}, iq ∈ {1, . . . , nq} and ia ∈ {1, . . . , na}, respectively. The soft-max converts the combined potentials to probability distributions, which corresponds to a single mean-field iteration. Such a linear combination of potentials provides extra flexibility for the model, since it can learn the reliability of the potential from the data. For instance, we observe that question attention relies more on the unary question potential and on pairwise question and answer potentials. In contrast, the image attention relies more on the pairwise question and image potential.

Given the aforementioned probabilities PV, PQ, and PA, the attended image, question and answer vectors are denoted by aV ∈ R^d, aQ ∈ R^d and aA ∈ R^d. The attended modalities are calculated as the weighted sum of the image features V = [v1, . . .
, vnv]T ∈Rnv×d, the question features Q = [q1, . . . , qnq]T ∈Rnq×d, and the answer features A = [a1, . . . , ana]T ∈Rna×d, i.e., aV = nv X iv=1 PV (iv)viv, aQ = nq X iq=1 PQ(iq)qiq, and aV = na X ia=1 PA(ia)aia. 4 Ternary Potential Corr3 conv conv tanh tanh conv tanh Pairwise Potential Corr2 conv conv tanh tanh Unary Potential conv tanh conv Unary Potential Pairwise Potential Threeway Potential V θV Ternary Potential conv conv Corr3 conv conv tanh tanh conv conv tanh Pairwise Potential conv conv Corr2 conv conv tanh tanh Unary Potential conv tanh conv Unary Potential Pairwise Potential Threeway Potential Q V θQ,V (iq) θQ,V (iv) Ternary Potential conv conv Corr3 conv conv tanh tanh conv conv tanh Pairwise Potential conv conv Corr2 conv conv tanh tanh Unary Potential conv tanh conv Unary Potential Pairwise Potential Threeway Potential Q V A θQ,V,A(iq) θQ,V,A(iv) θQ,V,A(ia) (a) (b) (c) Figure 3: Illustration of our k−order attention. (a) unary attention module (e.g., visual). (b) pairwise attention module (e.g., visual and question) marginalized over its two data modalities. (c) ternary attention module (e.g., visual, question and answer) marginalized over its three data modalities.. The attended modalities, which effectively focus on the data relevant for the task, are passed to a classifier for decision making, e.g., the ones discussed in Sec. 3.3. In the following we now describe the attention mechanisms for unary, pairwise and ternary potentials in more detail. 3.2.1 Unary potentials We illustrate the unary attention schematically in Fig. 3 (a). The input to the unary attention module is a data representation, i.e., either the visual representation V , the question representation Q, or the answer representation A. 
Using those representations, we obtain the ‘unary potentials’ θV , θQ and θA using a convolution operation with kernel size 1 × 1 over the data representation as an additional embedding step, followed by a non-linearity (tanh in our case), followed by another convolution operation with kernel size 1 × 1 to reduce embedding dimensionality. Since convolutions with kernel size 1 × 1 are identical to matrix multiplies we formally obtain the unary potentials via θV (iv) = tanh(V Wv2)Wv1, θQ(iq) = tanh(QWq2)Wq1, θA(ia) = tanh(AWa2)Wa1. where Wv1, Wq1, Wa1 ∈Rd×1, and Wv2, Wq2, Wa2 ∈Rd×d are trainable parameters. 3.2.2 Pairwise potentials Besides the mentioned mechanisms to generate unary potentials, we specifically aim at taking advantage of pairwise attention modules, which are able to capture the correlation between the representation of different modalities. Our approach is illustrated in Fig. 3 (b). We use a similarity matrix between image and question modalities C2 = QWq(V Wv)⊤. Alternatively, the (i, j)-th entry is the correlation (inner-product) of the i-th column of QWq and the j-th column of V Wv: (C2)i,j = corr2((QWq):,i, (V Wv):,j), corr2(q, v) = d X l=1 qlvl. where Wq, Wv ∈Rd×d are trainable parameters. We consider (C2)i,j as a pairwise potential that represents the correlation of the i-th word in a question and the j-th patch in an image. Therefore, to retrieve the attention for a specific word, we convolve the matrix along the visual dimension using a 1 × 1 dimensional kernel. Specifically, θV,Q(iq) = tanh nv X iv=1 wiv(C2)iv,iq ! , and θV,Q(iv) = tanh nq X iq=1 wiq(C2)iv,iq . Similarly, we obtain θA,V and θA,Q, which we omit due to space limitations. These potentials are used to compute the attention probabilities as defined in Eq. (1). 3.2.3 Ternary Potentials To capture the dependencies between all three modalities, we consider their high-order correlations. (C3)i,j,k = corr3((QWq):,i, (V Wv):,j, (AWa):,k), corr3(q, v, a) = d X l=1 qlvlal. 
5 v h v Unary Potential Pairwise Potential Threeway Potential MCT MCB Outer Product Space aQ aV T MCT Outer Product Space MCB Outer Product Space aQ aV aA (a) (b) Figure 4: Illustration of correlation units used for decision making. (a) MCB unit approximately sample from outer product space of two attention vectors, (b) MCT unit approximately sample from outer product space of three attention vectors. Where Wq, Wv, Wa ∈Rd×d are trainable parameters. Similarly to the pairwise potentials, we use the C3 tensor to obtain correlated attention for each modality: θV,Q,A(iq)=tanh nv X iv=1 na X ia=1 wiv,ia(C3)iq,iv,ia ! , θV,Q,A(iv)=tanh nq X iq=1 na X ia=1 wiq,ia(C3)iq,iv,ia , and θV,Q,A(ia) = tanh nv X iv=1 nq X iq=1 wiq,ia(C3)iq,iv,ia . These potentials are used to compute the attention probabilities as defined in Eq. (1). 3.3 Decision making The decision making component receives as input the attended modalities and predicts the desired output. Each attended modality is a vector that consists of the relevant data for making the decision. While the decision making component can consider the modalities independently, the nature of the task usually requires to take into account correlations between the attended modalities. The correlation of a set of attended modalities are represented by the outer product of their respective vectors, e.g., the correlation of two attended modalities is represented by a matrix and the correlation of k-attended modalities is represented by a k-dimensional tensor. Ideally, the attended modalities and their high-order correlation tensors are fed into a deep net which produces the final decision. The number of parameters in such a network grows exponentially in the number of modalities, as seen in Fig. 4. To overcome this computational bottleneck, we follow the tensor sketch algorithm of Pham and Pagh [21], which was recently applied to attention models by Fukui et al. 
[7] via Multimodal Compact Bilinear Pooling (MCB) in the pairwise setting or Multimodal Compact Trilinear Pooling (MCT), an extension of MCB that pools data from three modalities. The tensor sketch algorithm enables us to reduce the dimension of any rank-one tensor while referring to it implicitly. It relies on the count sketch technique [4] that randomly embeds an attended vector a ∈Rd1 into another Euclidean space Ψ(a) ∈Rd2. The tensor sketch algorithm then projects the rank-one tensor ⊗k i=1ai which consists of attention correlations of order k using the convolution Ψ(⊗k i=1ai) = ∗k i=1Ψ(ai). For example, for two attention modalities, the correlation matrix a1a⊤ 2 = a1⊗a2 is randomly projected to Rd2 by the convolution Ψ(a1⊗a2) = Ψ(a1)∗Ψ(a2). The attended modalities Ψ(ai) and their high-order correlations Ψ(⊗k i=1ai) are fed into a fully connected neural net to complete decision making. 4 Visual question answering In the following we evaluate our approach qualitatively and quantitatively. Before doing so we describe the data embeddings. 4.1 Data embedding The attention module requires the question representation Q ∈Rnq×d, the image representation V ∈Rnv×d, and the answer representation A ∈Rna×d, which are computed as follows. Image embedding: To embed the image, we use pre-trained convolutional deep nets (i.e., VGG-19, ResNet). We extract the last layer before the fully connected units. Its dimension in the VGG net case is 512 × 14 × 14 and the dimension in the ResNet case is 2048 × 14 × 14. Hence we obtain 6 Table 1: Comparison of results on the Multiple-Choice VQA dataset for a variety of methods. We observe the combination of all three unary, pairwise and ternary potentials to yield the best result. 
test-dev test-std Method Y/N Num Other All All Naive Bayes [15] 79.7 40.1 57.9 64.9 HieCoAtt (ResNet) [15] 79.7 40.0 59.8 65.8 66.1 RAU (ResNet) [20] 81.9 41.1 61.5 67.7 67.3 MCB (ResNet) [7] 68.6 DAN (VGG) [19] 67.0 DAN (ResNet) [19] 69.1 69.0 MLB (ResNet) [13] 68.9 2-Modalities: Unary+Pairwis (ResNet) 80.9 36.0 61.6 66.7 3-Modalities: Unary+Pairwise (ResNet) 82.0 42.7 63.3 68.7 68.7 3-Modalities: Unary + Pairwise + Ternary (VGG) 81.2 42.7 62.3 67.9 3-Modalities: Unary + Pairwise + Ternary (ResNet) 81.6 43.3 64.8 69.4 69.3 nv = 196 and we embed both the 196 VGG-19 or ResNet features into a d = 512 dimensional space to obtain the image representation V . Question embedding: To obtain a question representation, Q ∈Rnq×d, we first map a 1-hot encoding of each word in the question into a d-dimensional embedding space using a linear transformation plus corresponding bias terms. To obtain a richer representation that accounts for neighboring words, we use a 1-dimensional temporal convolution with filter of size 3. While a combination of multiple sized filters is suggested in the literature [15], we didn’t find any benefit from using such an approach. Subsequently, to capture long-term dependencies, we used a Long Short Term Memory (LSTM) layer. To reduce overfitting caused by the LSTM units, we used two LSTM layers with d/2 hidden dimension, one uses as input the word embedding representation, and the other one operates on the 1D conv layer output. Their output is then concatenated to obtain Q. We also note that nq is a constant hyperparameter, i.e., questions with more than nq words are cut, while questions with less words are zero-padded. Answer embedding: To embed the possible answers we use a regular word embedding. The vocabulary is specified by taking only the most frequent answers in the training set. Answers that are not included in the top answers are embedded to the same vector. Answers containing multiple words are embedded as n-grams to a single vector. 
We assume there is no real dependency between the answers, therefore there is no need of using additional 1D conv, or LSTM layers. 4.2 Decision making For our VQA example we investigate two techniques to combine vectors from three modalities. First, the attended feature representation for each modality, i.e., aV , aA and aQ, are combined using an MCT unit. Each feature element is of the form ((aV )i · (aQ)j · (aA)k). While this first solution is most general, in some cases like VQA, our experiments show that it is better to use our second approach, a 2-layer MCB unit combination. This permits greater expressiveness as we employ features of the form ((aV )i · (aQ)j · (aQ)k · (aA)t) therefore also allowing image features to interact with themselves. Note that in terms of parameters both approaches are identical as neither MCB nor MCT are parametric modules. Beyond MCB, we tested several other techniques that were suggested in the literature, including element-wise multiplication, element-wise addition and concatenation [13, 15, 11], optionally followed by another hidden fully connected layer. The tensor sketching units consistently performed best. 4.3 Results Experimental setup: We use the RMSProp optimizer with a base learning rate of 4e−4 and α = 0.99 as well as ϵ = 1e−8. The batch size is set to 300. The dimension d of all hidden layers is set to 512. The MCB unit feature dimension was set to d = 8192. We apply dropout with a rate of 0.5 after the word embeddings, the LSTM layer, and the first conv layer in the unary potential units. Additionally, for the last fully connected layer we use a dropout rate of 0.3. We use the top 3000 most frequent 7 How many glasses are on the table? How many glasses are on the table? How many glasses are on the table? Is anyone in the scene wearing blue? Is anyone in the scene wearing blue? Is anyone in the scene wearing blue? What kind of flooring is in the bathroom? What kind of flooring is in the bathroom? 
What kind of flooring is in the bathroom? What room is this? What room is this? What room is this? Figure 5: For each image (1st column) we show the attention generated for two different questions in columns 2-4 and columns 5-7 respectively. The attentions are ordered as unary attention, pairwise attention and combined attention for both the image and the question. We observe the combined attention to significantly depend on the question. Is this animal drinking water? 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Attention no ... this red no yes white forks 4 1 tomatoes presidential blue 3 13 green 2 fila i ... don't Is this animal drinking water? What kind of animal is this? 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Attention blue red cutting ... cake green bear 1 white objazd 3 elephant 4 giraffe yes reject cow 2 spain no What kind of animal is this? What is on the wall? 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Attention yes next ... to blue green parka 3 1 pirates gadzoom picture ... of 2 picture clock no 4 white photo red What is on the wall? Is a light on? 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Attention 3 not white 1 on ... boy's red yes 4 no 2 aspro pimp player blue pain if ... you've green no ... image Is a light on? Figure 6: The attention generated for two different questions over three modalities. We find the attention over multiple choice answers to emphasis the unusual answers. answers as possible outputs, which covers 91% of all answers in the train set. We implemented our models using the Torch framework1 [5]. As a comparison for our attention mechanism we use the approach of Lu et al. [15] and the technique of Fukui et al. [7]. Their methods are based on a hierarchical attention mechanism and multi-modal compact bilinear (MCB) pooling. In contrast to their approach we demonstrate a relatively simple technique based on a probabilistic intuition grounded on potentials. 
For comparative reasons only, the visualized attention is based on two modalities: image and question. We evaluate our attention modules on the VQA real-image test-dev and test-std datasets [2]. The dataset consists of 123, 287 training images and 81, 434 test set images. Each image comes with 3 questions along with 18 multiple choice answers. Quantitative evaluation: We first evaluate the overall performance of our model and compare it to a variety of baselines. Tab. 1 shows the performance of our model and the baselines on the test-dev and the test-standard datasets for multiple choice (MC) questions. To obtain multiple choice results we follow common practice and use the highest scoring answer among the provided ones. Our approach (Fig. 2) for the multiple choice answering task achieved the reported result after 180,000 iterations, which requires about 40 hours of training on the ‘train+val’ dataset using a TitanX GPU. Despite the fact that our model has only 40 million parameters, while techniques like [7] use over 70 million parameters, we observe state-of-the-art behavior. Additionally, we employ a 2-modality model having a similar experimental setup. We observe a significant improvement for our 3-modality model, which shows the importance of high-order attention models. Due to the fact that we use a lower embedding dimension of 512 (similar to [15]) compared to 2048 of existing 2-modality models [13, 7], the 2-modality model achieves inferior performance. We believe that higher embedding dimension and proper tuning can improve our 2-modality starting point. Additionally, we compared our proposed decision units. MCT, which is a generic extension of MCB for 3-modalities, and 2-layers MCB which has greater expressiveness (Sec. 4.2). Evaluating on the ’val’ dataset while training on the ’train’ part using the VGG features, the MCT setup yields 63.82% 1https://github.com/idansc/HighOrderAtten 8 Is she using a battery-operated device? 
Is she using a battery-operated device? Is she using a battery device? Ours: yes [15]: no [7]: no GT: yes Is this a boy or a girl? Is this a boy or a girl? Is this boy or a girl? Ours: girl [15]: boy [7]: girl GT: girl Figure 7: Comparison of our attention results (2nd column) with attention provided by [15] (3rd column) and [7] (4th column). The fourth column provides the question and the answer of the different techniques. What color is the table? What color is the table? What color is the table? What color is the table? GT: brown Ours: blue What color is the umbrella? What color is the umbrella? What color is the umbrella? What color is the umbrella? GT: blue Ours: blue Figure 8: Failure cases: Unary, pairwise and combined attention of our approach. Our system focuses on the colorful umbrella as opposed to the table in the first row. where 2-layer MCB yields 64.57%. We also tested a different ordering of the input to the 2-modality MCB and found them to yield inferior results. Qualitative evaluation: Next, we evaluate our technique qualitatively. In Fig. 5 we illustrate the unary, pairwise and combined attention of our approach based on the two modality architecture, without the multiple choice as input. For each image we show multiple questions. We observe the unary attention usually attends to strong features of the image, while pairwise potentials emphasize areas that correlate with question words. Importantly, the combined result is dependent on the provided question. For instance, in the first row we observe for the question “How many glasses are on the table?,” that the pairwise potential reacts to the image area depicting the glass. In contrast, for the question “Is anyone in the scene wearing blue?” the pairwise potentials reacts to the guy with the blue shirt. In Fig. 6, we illustrate the attention for our 3-modality model. We find the attention over multiple choice answers to favor the more unusual results. In Fig. 
7, we compare the final attention obtained from our approach to the results obtained with techniques discussed in [15] and [7]. We observe that our approach attends to reasonable pixel and question locations. For example, considering the first row in Fig. 7, the question refers to the battery operated device. Compared to existing approaches, our technique attends to the laptop, which seems to help in choosing the correct answer. In the second row, the question wonders “Is this a boy or a girl?”. Both of the correct answers were produced when the attention focuses on the hair. In Fig. 8, we illustrate a failure case, where the attention of our approach is identical, despite two different input questions. Our system focuses on the colorful umbrella as opposed to the object queried for in the question. 5 Conclusion In this paper we investigated a series of techniques to design attention for multimodal input data. Beyond demonstrating state-of-the-art performance using relatively simple models, we hope that this work inspires researchers to work in this direction. 9 Acknowledgments: This research was supported in part by The Israel Science Foundation (grant No. 948/15). This material is based upon work supported in part by the National Science Foundation under Grant No. 1718221. We thank Nvidia for providing GPUs used in this research. References [1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705, 2016. [2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015. [3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. [4] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In ICALP. Springer, 2002. 
[5] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. [6] Abhishek Das, Harsh Agrawal, C Lawrence Zitnick, Devi Parikh, and Dhruv Batra. Human attention in visual question answering: Do humans and deep networks look at the same regions? arXiv preprint arXiv:1606.03556, 2016. [7] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016. [8] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pages 1693–1701, 2015. [9] Allan Jabri, Armand Joulin, and Laurens van der Maaten. Revisiting visual question answering baselines. In ECCV. Springer, 2016. [10] U. Jain∗, Z. Zhang∗, and A. G. Schwing. Creativity: Generating Diverse Questions using Variational Autoencoders. In CVPR, 2017. ∗equal contribution. [11] Vahid Kazemi and Ali Elqursh. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162, 2017. [12] Jin-Hwa Kim, Sang-Woo Lee, Donghyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, and ByoungTak Zhang. Multimodal residual learning for visual qa. In NIPS, 2016. [13] Jin-Hwa Kim, Kyoung Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard Product for Low-rank Bilinear Pooling. In ICLR, 2017. [14] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. Structured attention networks. arXiv preprint arXiv:1702.00887, 2017. [15] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016. [16] Lin Ma, Zhengdong Lu, and Hang Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015. 
[17] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015. [18] Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. Generating natural questions about an image. arXiv preprint arXiv:1603.06059, 2016. [19] Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. Dual attention networks for multimodal reasoning and matching. arXiv preprint arXiv:1611.00471, 2016. [20] Hyeonwoo Noh and Bohyung Han. Training recurrent answering units with joint loss minimization for vqa. arXiv preprint arXiv:1606.03647, 2016. [21] Ninh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In SIGKDD. ACM, 2013. 10 [22] Tim Rocktäschel, Edward Grefenstette, Moritz Hermann, Karl, Tomáš Koˇcisk`y, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016. [23] Kevin J Shih, Saurabh Singh, and Derek Hoiem. Where to look: Focus regions for visual question answering. In CVPR, 2016. [24] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. arXiv preprint arXiv:1603.01417, 2016. [25] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, pages 451–466. Springer, 2016. [26] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. [27] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In CVPR, 2016. [28] Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193, 2015. [29] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. 
Visual7w: Grounded question answering in images. In CVPR, 2016. 11 | 2017 | 19 |
6,663 | The Numerics of GANs Lars Mescheder Autonomous Vision Group MPI Tübingen lars.mescheder@tuebingen.mpg.de Sebastian Nowozin Machine Intelligence and Perception Group Microsoft Research sebastian.nowozin@microsoft.com Andreas Geiger Autonomous Vision Group MPI Tübingen andreas.geiger@tuebingen.mpg.de Abstract In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train. 1 Introduction Generative Adversarial Networks (GANs) [10] have been very successful in learning probability distributions. Since their first appearance, GANs have been successfully applied to a variety of tasks, including image-to-image translation [12], image super-resolution [13], image in-painting [27] domain adaptation [26], probabilistic inference [14, 9, 8] and many more. While very powerful, GANs are known to be notoriously hard to train. The standard strategy for stabilizing training is to carefully design the model, either by adapting the architecture [21] or by selecting an easy-to-optimize objective function [23, 4, 11]. In this work, we examine the general problem of finding local Nash-equilibria of smooth games. We revisit the de-facto standard algorithm for finding such equilibrium points, simultaneous gradient ascent. 
We theoretically show that the main factors preventing the algorithm from converging are the presence of eigenvalues of the Jacobian of the associated gradient vector field with zero real-part and eigenvalues with a large imaginary part. The presence of the latter is also one of the reasons that make saddle-point problems more difficult than local optimization problems. Utilizing these insights, we design a new algorithm that overcomes some of these problems. Experimentally, we show that our algorithm leads to stable training on many GAN architectures, including some that are known to be hard to train. Our technique is orthogonal to strategies that try to make the GAN-game well-defined, e.g. by adding instance noise [24] or by using the Wasserstein-divergence [4, 11]: while these strategies try to ensure the existence of Nash-equilibria, our paper deals with their computation and the numerical difficulties that can arise in practice. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In summary, our contributions are as follows: • We identify the main reasons why simultaneous gradient ascent often fails to find local Nash-equilibria. • By utilizing these insights, we design a new, more robust algorithm for finding Nashequilibria of smooth two-player games. • We empirically demonstrate that our method enables stable training of GANs on a variety of architectures and divergence measures. The proofs for the theorems in this paper can be found the supplementary material.1 2 Background In this section we first revisit the concept of Generative Adversarial Networks (GANs) from a divergence minimization point of view. We then introduce the concept of a smooth (non-convex) two-player game and define the terminology used in the rest of the paper. Finally, we describe simultaneous gradient ascent, the de-facto standard algorithm for finding Nash-equilibria of such games, and derive some of its properties. 
2.1 Divergence Measures and GANs Generative Adversarial Networks are best understood in the context of divergence minimization: assume we are given a divergence function D, i.e. a function that takes a pair of probability distributions as input, outputs an element from [0, ∞] and satisfies D(p, p) = 0 for all probability distributions p. Moreover, assume we are given some target distribution p0 from which we can draw i.i.d. samples and a parametric family of distributions qθ that also allows us to draw i.i.d. samples. In practice qθ is usually implemented as a neural network that acts on a hidden code z sampled from some known distribution and outputs an element from the target space. Our goal is to find ¯θ that minimizes the divergence D(p0, qθ), i.e. we want to solve the optimization problem min θ D(p0, qθ). (1) Most divergences that are used in practice can be represented in the following form [10, 16, 4]: D(p, q) = max f∈F Ex∼q [g1(f(x))] −Ex∼p [g2(f(x))] (2) for some function class F ⊆X →R and convex functions g1, g2 : R →R. Together with (1), this leads to mini-max problems of the form min θ max f∈F Ex∼qθ [g1(f(x))] −Ex∼p0 [g2(f(x))] . (3) These divergences include the Jensen-Shannon divergence [10], all f-divergences [16], the Wasserstein divergence [4] and even the indicator divergence, which is 0 if p = q and ∞otherwise. In practice, the function class F in (3) is approximated with a parametric family of functions, e.g. parameterized by a neural network. Of course, when minimizing the divergence w.r.t. this approximated family, we no longer minimize the correct divergence. However, it can be verified that taking any class of functions in (3) leads to a divergence function for appropriate choices of g1 and g2. Therefore, some authors call these divergence functions neural network divergences [5]. 2.2 Smooth Two-Player Games A differentiable two-player game is defined by two utility functions f(φ, θ) and g(φ, θ) defined over a common space (φ, θ) ∈Ω1 × Ω2. 
Ω1 corresponds to the possible actions of player 1, Ω2 corresponds to the possible actions of player 2. The goal of player 1 is to maximize f, whereas player 2 tries to maximize g. In the context of GANs, Ω1 is the set of possible parameter values for the generator, whereas Ω2 is the set of possible parameter values for the discriminator. We call a game a zero-sum game if f = −g. Note that the derivation of the GAN-game in Section 2.1 leads to a zero-sum game, 1The code for all experiments in this paper is available under https://github.com/LMescheder/ TheNumericsOfGANs. 2 Algorithm 1 Simultaneous Gradient Ascent (SimGA) 1: while not converged do 2: vφ ←∇φf(θ, φ) 3: vθ ←∇θg(θ, φ) 4: φ ←φ + hvφ 5: θ ←θ + hvθ 6: end while whereas in practice people usually employ a variant of this formulation that is not a zero-sum game for better convergence [10]. Our goal is to find a Nash-equilibrium of the game, i.e. a point ¯x = (¯φ, ¯θ) given by the two conditions ¯φ ∈argmax φ f(φ, ¯θ) and ¯θ ∈argmax θ g(¯φ, θ). (4) We call a point (¯φ, ¯θ) a local Nash-equilibrium, if (4) holds in a local neighborhood of (¯φ, ¯θ). Every differentiable two-player game defines a vector field v(φ, θ) = ∇φf(φ, θ) ∇θg(φ, θ) . (5) We call v the associated gradient vector field to the game defined by f and g. For the special case of zero-sum two-player games, we have g = −f and thus v′(φ, θ) = ∇2 φf(φ, θ) ∇φ,θf(φ, θ) −∇φ,θf(φ, θ) −∇2 θf(φ, θ) . (6) As a direct consequence, we have the following: Lemma 1. For zero-sum games, v′(x) is negative (semi-)definite if and only if ∇2 φf(φ, θ) is negative (semi-)definite and ∇2 θf(φ, θ) is positive (semi-)definite. Corollary 2. For zero-sum games, v′(¯x) is negative semi-definite for any local Nash-equilibrium ¯x. Conversely, if ¯x is a stationary point of v(x) and v′(¯x) is negative definite, then ¯x is a local Nash-equilibrium. Note that Corollary 2 is not true for general two-player games. 
2.3 Simultaneous Gradient Ascent The de-facto standard algorithm for finding Nash-equilibria of general smooth two-player games is Simultaneous Gradient Ascent (SimGA), which was described in several works, for example in [22] and, more recently also in the context of GANs, in [16]. The idea is simple and is illustrated in Algorithm 1. We iteratively update the parameters of the two players by simultaneously applying gradient ascent to the utility functions of the two players. This can also be understood as applying the Euler-method to the ordinary differential equation d dtx(t) = v(x(t)), (7) where v(x) is the associated gradient vector field of the two-player game. It can be shown that simultaneous gradient ascent converges locally to a Nash-equilibrium for a zero-sum game, if the Hessian of both players is negative definite [16, 22] and the learning rate is small enough. Unfortunately, in the context of GANs the former condition is rarely met. We revisit the properties of simultaneous gradient ascent in Section 3 and also show a more subtle property, namely that even if the conditions for the convergence of simultaneous gradient ascent are met, it might require extremely small step sizes for convergence if the Jacobian of the associated gradient vector field has eigenvalues with large imaginary part. 3 ℜ(z) ℑ(z) (a) Illustration how the eigenvalues are projected into unit ball. ℜ(z) ℑ(z) (b) Example where h has to be chosen extremely small. ℜ(z) ℑ(z) (c) Illustration how our method alleviates the problem. Figure 1: Images showing how the eigenvalues of A are projected into the unit circle and what causes problems: when discretizing the gradient flow with step size h, the eigenvalues of the Jacobian at a fixed point are projected into the unit ball along rays from 1. However, this is only possible if the eigenvalues lie in the left half plane and requires extremely small step sizes h if the eigenvalues are close to the imaginary axis. 
The proposed method moves the eigenvalues to the left in order to make the problem better posed, thus allowing the algorithm to converge with reasonable step sizes.

3 Convergence Theory

In this section, we analyze the convergence properties of the most common method for training GANs, simultaneous gradient ascent.² We show that two major failure causes for this algorithm are eigenvalues of the Jacobian of the associated gradient vector field with zero real part, as well as eigenvalues with large imaginary part. For our theoretical analysis, we start with the following classical theorem about the convergence of fixed-point iterations:

Proposition 3. Let F : Ω → Ω be a continuously differentiable function on an open subset Ω of R^n and let x̄ ∈ Ω be such that

1. F(x̄) = x̄, and
2. the absolute values of the eigenvalues of the Jacobian F′(x̄) are all smaller than 1.

Then there is an open neighborhood U of x̄ such that for all x₀ ∈ U, the iterates F^(k)(x₀) converge to x̄. The rate of convergence is at least linear. More precisely, the error ‖F^(k)(x₀) − x̄‖ is in O(|λ_max|^k) for k → ∞, where λ_max is the eigenvalue of F′(x̄) with the largest absolute value.

Proof. See [6], Proposition 4.4.1.

In numerics, we often consider functions of the form

F(x) = x + h G(x)   (8)

for some h > 0. Finding fixed points of F is then equivalent to finding solutions of the nonlinear equation G(x) = 0. For F as in (8), the Jacobian is given by

F′(x) = I + h G′(x).   (9)

Note that in general neither F′(x) nor G′(x) is symmetric, so both can have complex eigenvalues. The following lemma gives an easy condition under which a fixed point of F as in (8) satisfies the conditions of Proposition 3.

²A similar analysis of alternating gradient ascent, a popular alternative to simultaneous gradient ascent, can be found in the supplementary material.

Lemma 4. Assume that A ∈ R^{n×n} only has eigenvalues with negative real part and let h > 0.
Then the eigenvalues of the matrix I + h A lie in the unit ball if and only if

h < (1 / |ℜ(λ)|) · 2 / (1 + (ℑ(λ)/ℜ(λ))²)   (10)

for all eigenvalues λ of A.

Corollary 5. If v′(x̄) only has eigenvalues with negative real part at a stationary point x̄, then Algorithm 1 is locally convergent to x̄ for h > 0 small enough.

Equation (10) shows that there are two major factors that determine the maximum possible step size h: (i) the maximum value of ℜ(λ) and (ii) the maximum value q of |ℑ(λ)/ℜ(λ)|. Note that as q goes to infinity, we have to choose h in O(q⁻²), which can quickly become extremely small. This is visualized in Figure 1: if G′(x̄) has an eigenvalue with small absolute real part but large imaginary part, h needs to be chosen extremely small to still achieve convergence. Moreover, even if we make h small enough, most eigenvalues of F′(x̄) will be very close to 1, which by Proposition 3 leads to very slow convergence of the algorithm. This is in particular a problem of simultaneous gradient ascent for two-player games (in contrast to gradient ascent for local optimization), where the Jacobian G′(x̄) is not symmetric and can therefore have non-real eigenvalues.

4 Consensus Optimization

In this section, we derive the proposed method and analyze its convergence properties.

4.1 Derivation

Finding stationary points of the vector field v(x) is equivalent to solving the equation v(x) = 0. In the context of two-player games, this means solving the two equations

∇φ f(φ, θ) = 0  and  ∇θ g(φ, θ) = 0.   (11)

A simple strategy for finding such stationary points is to minimize L(x) = (1/2)‖v(x)‖² with respect to x. Unfortunately, this can result in unstable stationary points of v or other local minima of (1/2)‖v(x)‖², and in practice we found it did not work well. We therefore consider a modified vector field w(x) that is as close as possible to the original vector field v(x), but at the same time still minimizes L(x) (at least locally).
A sensible candidate for such a vector field is

w(x) = v(x) − γ ∇L(x)   (12)

for some γ > 0. A simple calculation shows that the gradient ∇L(x) is given by

∇L(x) = v′(x)ᵀ v(x).   (13)

This vector field is the gradient vector field associated with the modified two-player game given by the two modified utility functions

f̃(φ, θ) = f(φ, θ) − γ L(φ, θ)  and  g̃(φ, θ) = g(φ, θ) − γ L(φ, θ).   (14)

The regularizer L(φ, θ) encourages agreement between the two players. We therefore call the resulting algorithm Consensus Optimization (Algorithm 2).³ ⁴

Algorithm 2 Consensus Optimization
1: while not converged do
2:   vφ ← ∇φ (f(φ, θ) − γ L(φ, θ))
3:   vθ ← ∇θ (g(φ, θ) − γ L(φ, θ))
4:   φ ← φ + h vφ
5:   θ ← θ + h vθ
6: end while

4.2 Convergence

For analyzing convergence, we consider a more general algorithm than in Section 4.1, given by iteratively applying a function F of the form

F(x) = x + h A(x) v(x)   (15)

for some step size h > 0 and an invertible matrix A(x) to x. Consensus optimization is a special case of this algorithm for A(x) = I − γ v′(x)ᵀ. We assume that 1/γ is not an eigenvalue of v′(x)ᵀ for any x, so that A(x) is indeed invertible.

³This algorithm requires backpropagation through the squared norm of the gradient with respect to the weights of the network. This is sometimes called double backpropagation and is, for example, supported by the deep learning frameworks TensorFlow [1] and PyTorch [19].
⁴As was pointed out by Ferenc Huszár in one of his blog posts on www.inference.vc, naively implementing this algorithm in a mini-batch setting leads to biased estimates of L(x). However, the bias goes down linearly with the batch size, which justifies the usage of consensus optimization in a mini-batch setting. Alternatively, it is possible to debias the estimate by subtracting a multiple of the sample variance of the gradients; see the supplementary material for details.

Lemma 6. Assume h > 0 and A(x) invertible for all x.
Then x̄ is a fixed point of (15) if and only if it is a stationary point of v. Moreover, if x̄ is a stationary point of v, we have

F′(x̄) = I + h A(x̄) v′(x̄).   (16)

Lemma 7. Let A(x) = I − γ v′(x)ᵀ and assume that v′(x̄) is negative semi-definite and invertible.⁵ Then A(x̄) v′(x̄) is negative definite.

As a consequence of Lemma 6 and Lemma 7, we can show local convergence of our algorithm to a local Nash equilibrium:

Corollary 8. Let v(x) be the associated gradient vector field of a two-player zero-sum game and A(x) = I − γ v′(x)ᵀ. If x̄ is a local Nash equilibrium, then there is an open neighborhood U of x̄ such that for all x₀ ∈ U, the iterates F^(k)(x₀) converge to x̄ for h > 0 small enough.

Our method solves the problem of eigenvalues of the Jacobian with (approximately) zero real part. As the next lemma shows, it also alleviates the problem of eigenvalues with a large imaginary-to-real-part quotient:

Lemma 9. Assume that A ∈ R^{n×n} is negative semi-definite. Let q(γ) be the maximum of |ℑ(λ)| / |ℜ(λ)| (possibly infinite) over the eigenvalues λ of A − γAᵀA, where ℜ(λ) and ℑ(λ) denote the real and imaginary part of λ, respectively. Moreover, assume that A is invertible with |Av| ≥ ρ|v| for some ρ > 0, and let

c = min_{v ∈ S(Cⁿ)} |v̄ᵀ(A + Aᵀ)v| / |v̄ᵀ(A − Aᵀ)v|,   (17)

where S(Cⁿ) denotes the unit sphere in Cⁿ. Then

q(γ) ≤ 1 / (c + 2ρ²γ).   (18)

Lemma 9 shows that the imaginary-to-real-part quotient can be made arbitrarily small for an appropriate choice of γ. According to Proposition 3, this leads to better convergence properties near a local Nash equilibrium.

5 Experiments

Mixture of Gaussians. In our first experiment, we evaluate our method on a simple 2D example where our goal is to learn a mixture of 8 Gaussians with standard deviations equal to 10⁻² and modes

⁵Note that v′(x̄) is usually not symmetric, and it is therefore possible that v′(x̄) is negative semi-definite and invertible but not negative definite.
uniformly distributed around the unit circle. While simplistic, algorithms for training GANs often fail to converge even on such simple examples without extensive fine-tuning of the architecture and hyperparameters [15]. For both the generator and critic we use fully connected neural networks with 4 hidden layers and 16 hidden units in each layer. For all layers, we use ReLU nonlinearities. We use a 16-dimensional Gaussian prior for the latent code z and set up the game between the generator and critic using the utility functions as in [10]. To test our method, we run both SimGA and our method with RMSProp and a learning rate of 10⁻⁴ for 20000 steps. For our method, we use a regularization parameter of γ = 10. The results produced by SimGA and our method after 0, 5000, 10000 and 20000 iterations are depicted in Figure 2.

Figure 2: Comparison of (a) Simultaneous Gradient Ascent and (b) Consensus Optimization on a circular mixture of Gaussians. The images depict, from left to right, the resulting densities of the algorithm after 0, 5000, 10000 and 20000 iterations, as well as the target density (in red).

Figure 3: Empirical distribution of eigenvalues before and after training using consensus optimization. The first column shows the distribution of the eigenvalues of the Jacobian v′(x) of the unmodified vector field v(x). The second column shows the eigenvalues of the Jacobian w′(x) of the regularized vector field w(x) = v(x) − γ∇L(x) used in consensus optimization. We see that v′(x) has eigenvalues close to the imaginary axis near the Nash equilibrium. As predicted theoretically, this is not the case for the regularized vector field w(x). For visualization purposes, the real part of the spectrum of w′(x) before training was clipped.
We see that while SimGA jumps around the modes of the distribution and fails to converge, our method converges smoothly to the target distribution (shown in red). Figure 3 shows the empirical distribution of the eigenvalues of the Jacobian of v(x) and the regularized vector field w(x). It can be seen that near the Nash equilibrium most eigenvalues are indeed very close to the imaginary axis, and that the proposed modification of the vector field used in consensus optimization moves the eigenvalues to the left.

Figure 4: Samples generated from a model where both the generator and discriminator are given as in [21], but without batch normalization: (a) CIFAR-10, (b) celebA. For celebA, we also use a constant number of filters in each layer and add additional ResNet layers.

Figure 5: (a) and (b): Comparison of the generator and discriminator loss on a DCGAN architecture with 3 convolutional layers trained on CIFAR-10 for consensus optimization (without batch normalization) and alternating gradient ascent (with batch normalization). We observe that while alternating gradient ascent leads to highly fluctuating losses, consensus optimization successfully stabilizes the training and makes the losses almost constant during training. (c): Comparison of the inception score over time, computed using 6400 samples. We see that on this architecture both methods have comparable rates of convergence and consensus optimization achieves slightly better end results.

CIFAR-10 and CelebA. In our second experiment, we apply our method to the CIFAR-10 and celebA datasets, using a DCGAN-like architecture [21] without batch normalization in the generator or the discriminator. For celebA, we additionally use a constant number of filters in each layer and add additional ResNet layers. These architectures are known to be hard to optimize using simultaneous (or alternating) gradient ascent [21, 4].
Figures 4a and 4b depict samples from the models trained with our method. We see that our method successfully trains the models, and we also observe that, unlike with alternating gradient ascent, the generator and discriminator losses remain almost constant during training; this is illustrated in Figure 5. For a quantitative evaluation, we also measured the inception score [23] over time (Figure 5c), showing that our method compares favorably to a DCGAN trained with alternating gradient ascent. The improvement of consensus optimization over alternating gradient ascent is even more significant if we use 4 instead of 3 convolutional layers; see Figure 11 in the supplementary material for details. Additional experimental results can be found in the supplementary material.

6 Discussion

While we could prove local convergence of our method in Section 4, we believe that even more insights can be gained by examining its global convergence properties. In particular, our analysis from Section 4 cannot explain why the generator and discriminator losses remain almost constant during training.

Our theoretical results assume the existence of a Nash equilibrium. When we are trying to minimize an f-divergence and the dimensionality of the generator distribution is misspecified, this might not be the case [3]. Nonetheless, we found that our method works well in practice, and we leave a closer theoretical investigation of this fact to future research.

In practice, our method can potentially make formerly unstable stationary points of the gradient vector field stable if the regularization parameter is chosen to be high. This may lead to poor solutions. We also found that our method becomes less stable for deeper architectures, which we attribute to the fact that the gradients can have very different scales in such architectures, so that the simple L2 penalty from Section 4 needs to be rescaled accordingly.
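The stabilizing mechanics discussed in Section 4 can be checked numerically on the bilinear game f(φ, θ) = φ·θ from Section 2, where plain SimGA spirals outward. The following is a toy sketch with illustrative values (not the paper's implementation): the regularizer turns the purely imaginary spectrum of the Jacobian into one with strictly negative real parts, and the iteration contracts.

```python
import numpy as np

# Consensus optimization on the bilinear zero-sum game f(phi, theta) = phi*theta.
# Gradient field: v(phi, theta) = (theta, -phi), with constant Jacobian J.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def consensus_step(x, h, gamma):
    v = J @ x                     # v(x) for this game
    w = v - gamma * (J.T @ v)     # w(x) = v(x) - gamma * v'(x)^T v(x)
    return x + h * w

x = np.array([1.0, 0.0])
for _ in range(200):
    x = consensus_step(x, h=0.1, gamma=1.0)
print(np.linalg.norm(x))  # contracts toward the Nash equilibrium (0, 0)

# The regularizer shifts the spectrum: the eigenvalues of J are +-i
# (zero real part), while J - gamma * J^T J = J - gamma*I has
# eigenvalues -gamma +- i, strictly in the left half-plane.
print(np.linalg.eigvals(J).real.max())
print(np.linalg.eigvals(J - J.T @ J).real.max())
```

With h = 0.1 and γ = 1, the update matrix I + h(J − γI) has spectral radius sqrt(0.82) < 1, so the iterates decay geometrically, matching the behavior predicted by Corollary 8.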
Our method can be regarded as an approximation to the implicit Euler method for integrating the gradient vector field. It can be shown that the implicit Euler method has appealing stability properties [7] that can be translated into convergence theorems for local Nash equilibria. However, the implicit Euler method requires the solution of a nonlinear equation in each iteration. Nonetheless, we believe that further progress can be made by finding better approximations to the implicit Euler method. An alternative interpretation is to view our method as a second-order method. We hence believe that further progress can also be made by revisiting second-order optimization methods [2, 18] in the context of saddle point problems.

7 Related Work

Saddle point problems do not arise only in the context of training GANs. For example, the popular actor-critic models [20] in reinforcement learning are also special cases of saddle point problems. Finding a stable algorithm for training GANs is a long-standing problem, and multiple solutions have been proposed. Unrolled GANs [15] unroll the optimization with respect to the critic, thereby giving the generator more informative gradients. Though unrolling the optimization was shown to stabilize training, it can be cumbersome to implement and also results in a large model. As was recently shown, the stability of GAN training can be improved by using objectives derived from the Wasserstein-1 distance (induced by the Kantorovich-Rubinstein norm) instead of f-divergences [4, 11]. While Wasserstein-GANs often provide a good solution for the stable training of GANs, they require keeping the critic optimal, which can be time-consuming and in practice can only be achieved approximately, thus violating the conditions for theoretical guarantees. Moreover, some methods like Adversarial Variational Bayes [14] explicitly prescribe the divergence measure to be used, thus making it impossible to apply Wasserstein-GANs.
Other approaches that try to stabilize training either design an easy-to-optimize architecture [23, 21] or make use of additional labels [23, 17]. In contrast to all the approaches described above, our work focuses on stabilizing training for a wide range of architectures and divergence functions.

8 Conclusion

In this work, starting from the GAN objective functions, we analyzed the general difficulties of finding local Nash equilibria in smooth two-player games. We pinpointed the major numerical difficulties that arise in current state-of-the-art algorithms and, using our insights, we presented a new algorithm for training generative adversarial networks. Our algorithm has favorable properties in theory and practice: from the theoretical viewpoint, we showed that it is locally convergent to a Nash equilibrium even if the eigenvalues of the Jacobian are problematic. This is particularly interesting for games that arise in the context of GANs, where such problems are common. From the practical viewpoint, our algorithm can be used in combination with any GAN architecture whose objective can be formulated as a two-player game to stabilize training. We demonstrated experimentally that our algorithm stabilizes training and successfully combats training issues like mode collapse. We believe our work is a first step towards an understanding of the numerics of GAN training and of more general deep learning objectives.

Acknowledgements

This work was supported by Microsoft Research through its PhD Scholarship Programme.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016.

[2] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.

[3] Martín Arjovsky and Léon Bottou.
Towards principled methods for training generative adversarial networks. CoRR, abs/1701.04862, 2017.

[4] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.

[5] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 224–232, 2017.

[6] Dimitri P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 2014.

[7] John Charles Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, 2016.

[8] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. CoRR, abs/1605.09782, 2016.

[9] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martín Arjovsky, Olivier Mastropietro, and Aaron C. Courville. Adversarially learned inference. CoRR, abs/1606.00704, 2016.

[10] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672–2680, 2014.

[11] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.

[12] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. CoRR, abs/1611.07004, 2016.

[13] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.

[14] Lars M.
Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2391–2400, 2017.

[15] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. CoRR, abs/1611.02163, 2016.

[16] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 271–279, 2016.

[17] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2642–2651, 2017.

[18] Razvan Pascanu and Yoshua Bengio. Natural gradient revisited. CoRR, abs/1301.3584, 2013.

[19] Adam Paszke and Soumith Chintala. PyTorch, 2017.

[20] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. CoRR, abs/1610.01945, 2016.

[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.

[22] Lillian J. Ratliff, Samuel Burden, and S. Shankar Sastry. Characterization and computation of local Nash equilibria in continuous games. In 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton 2013, Allerton Park & Retreat Center, Monticello, IL, USA, October 2-4, 2013, pages 917–924, 2013.

[23] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs.
In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2226–2234, 2016.

[24] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. CoRR, abs/1610.04490, 2016.

[25] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012.

[26] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. CoRR, abs/1702.05464, 2017.

[27] Raymond Yeh, Chen Chen, Teck-Yian Lim, Mark Hasegawa-Johnson, and Minh N. Do. Semantic image inpainting with perceptual and contextual losses. CoRR, abs/1607.07539, 2016.
Cortical microcircuits as gated-recurrent neural networks

Rui Ponte Costa*
Centre for Neural Circuits and Behaviour, Dept. of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
rui.costa@cncb.ox.ac.uk

Yannis M. Assael*
Dept. of Computer Science, University of Oxford, Oxford, UK and DeepMind, London, UK
yannis.assael@cs.ox.ac.uk

Brendan Shillingford*
Dept. of Computer Science, University of Oxford, Oxford, UK and DeepMind, London, UK
brendan.shillingford@cs.ox.ac.uk

Nando de Freitas
DeepMind, London, UK
nandodefreitas@google.com

Tim P. Vogels
Centre for Neural Circuits and Behaviour, Dept. of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
tim.vogels@cncb.ox.ac.uk

Abstract

Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, such principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve similar performance to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems, and they propose a novel view of their computational function. Overall, our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts.
1 Introduction

Over the last decades, neuroscience research has collected enormous amounts of data on the architecture and dynamics of cortical circuits, unveiling complex but stereotypical structures across the neocortex (Markram et al., 2004; Harris and Mrsic-Flogel, 2013; Jiang et al., 2015). One of the most prevalent features of cortical nets is their laminar organisation and their high degree of recurrence, even at the level of local (micro-)circuits (Douglas et al., 1995; Song et al., 2005; Harris and Mrsic-Flogel, 2013; Jiang et al., 2015) (Fig. 1a). Another key feature of cortical circuits is the detailed and tight balance of excitation and inhibition, which has received growing support both at the experimental (Froemke et al., 2007; Xue et al., 2014; Froemke, 2015) and theoretical level (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Vogels and Abbott, 2009; Hennequin et al., 2014, 2017). However, the computational processes that are facilitated by these architectures and dynamics are still elusive. There remains a fundamental disconnect between the underlying biophysical networks and the emergence of intelligent and complex behaviours.

*These authors contributed equally to this work.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Artificial recurrent neural networks (RNNs), on the other hand, are crafted to perform specific computations. In fact, RNNs have recently proven very successful at solving complex tasks such as language modelling, speech recognition, and other perceptual tasks (Graves, 2013; Graves et al., 2013; Sutskever et al., 2014; van den Oord et al., 2016; Assael et al., 2016). In these tasks, the input data contain information across multiple timescales that needs to be filtered and processed according to its relevance. The ongoing presentation of stimuli makes it difficult to learn to separate meaningful stimuli from background noise (Hochreiter et al., 2001; Pascanu et al., 2012).
RNNs, and in particular gated RNNs, can solve this problem by maintaining a representation of relevant input sequences until needed, without interference from new stimuli. In principle, such protected memories conserve past inputs and thus allow back-propagation of errors further backwards in time (Pascanu et al., 2012). Because of their memory properties, one of the first and most successful types of gated RNN was named the "long short-term memory" network (LSTM; Hochreiter and Schmidhuber (1997); Fig. 1c). Here we note that the architectural features of LSTMs overlap closely with known cortical structures, but with a few important differences regarding the mechanistic implementation of the gates in a cortical network versus in LSTMs (Fig. 1b). In LSTMs, the gates control the memory cell as a multiplicative factor, but in biological networks the gates, i.e. inhibitory neurons, act (to a first approximation) subtractively: excitatory and inhibitory (EI) currents cancel each other linearly at the level of the postsynaptic membrane potential (Kandel et al., 2000; Gerstner et al., 2014). Moreover, such a subtractive inhibitory mechanism must be well balanced (i.e. closely match the excitatory input) to act as a gate on the inputs in the 'closed' state, without perturbing activity flow with too much inhibition. Previous models have explored gating in subtractive excitatory-inhibitory balanced networks (Vogels and Abbott, 2009; Kremkow et al., 2010), but without a clear computational role. On the other hand, predictive coding RNNs with EI features have been studied (Bastos et al., 2012; Deneve and Machens, 2016), but without a clear match to state-of-the-art machine learning networks. Regarding previous neuroscientific interpretations of LSTMs, there have been suggestions of LSTMs as models of working memory and different brain areas (e.g.
prefrontal cortex, basal ganglia and hippocampus) (O'Reilly and Frank, 2006; Krueger and Dayan, 2009; Cox and Dean, 2014; Marblestone et al., 2016; Hassabis et al., 2017; Bhalla, 2017), but without a clear interpretation of the individual components of LSTMs and a specific mapping to known circuits. We propose to map the architecture and function of LSTMs directly onto cortical circuits, with gating provided by lateral subtractive inhibition. Our networks have the potential to exhibit the excitation-inhibition balance observed in experiments (Douglas et al., 1989; Bastos et al., 2012; Harris and Mrsic-Flogel, 2013) and yield simpler gradient propagation than multiplicative gating. We study these dynamics through our empirical evaluation, showing that subLSTMs achieve similar performance to LSTMs on the Penn Treebank and Wikitext-2 language modelling tasks, as well as on pixel-wise sequential MNIST classification. By transferring the functionality of LSTMs into a biologically more plausible network, our work provides testable hypotheses for the most recently emerging, technologically advanced experiments on the functionality of entire cortical microcircuits.

2 Biological motivation

The architecture of LSTM units, with their general feedforward structure aided by additional recurrent memory and controlled by lateral gates, is remarkably similar to the columnar architecture of cortical circuits (Fig. 1; see also Fig. S1 for a more detailed neocortical schematic). The central element in LSTMs and similar RNNs is the memory cell, which we hypothesise to be implemented by local recurrent networks of pyramidal cells in layer-5. This is in line with previous studies showing a relatively high level of recurrence and non-random connectivity between pyramidal cells in layer-5 (Douglas et al., 1995; Thomson et al., 2002; Song et al., 2005).
Furthermore, layer-5 pyramidal networks display rich activity on (relatively) long timescales in vivo (Barthó et al., 2009; Sakata and Harris, 2009; Luczak et al., 2015; van Kerkoerle et al., 2017) and in slices (Egorov et al., 2002; Wang et al., 2006), consistent with LSTM-like function. There is strong evidence for persistent neuronal activity both in higher cortical areas (Goldman-Rakic, 1995) and in sensory areas (Huang et al., 2016; van Kerkoerle et al., 2017; Kornblith et al., 2017). Relatively speaking, sensory areas (e.g. visual cortex) exhibit shorter timescales than higher brain areas (e.g. prefrontal cortex), which we would expect given the different temporal requirements these brain areas have. A similar behaviour is expected in multi-area (or multi-layer) LSTMs. Note that such longer timescales can also be present in more superficial layers (e.g. layer-2/3) (Goldman-Rakic, 1995; van Kerkoerle et al., 2017), suggesting the possibility of more than one memory cell per cortical microcircuit. Slow memory decay in these networks may be controlled through short- (York and van Rossum, 2009; Costa et al., 2013, 2017a) and long-term synaptic plasticity (Abbott and Nelson, 2000; Senn et al., 2001; Pfister and Gerstner, 2006; Zenke et al., 2015; Costa et al., 2015, 2017a,b) at recurrent excitatory synapses. The gates that protect a given memory in LSTMs can be mapped onto lateral inhibitory inputs in cortical circuits. We propose that, similar to LSTMs, the input gate is implemented by inhibitory neurons in layer-2/3 (or layer-4; Fig. 1a). Such lateral inhibition is consistent with the canonical view of microcircuits (Douglas et al., 1989; Bastos et al., 2012; Harris and Mrsic-Flogel, 2013) and with sparse sensory-evoked responses in layer-2/3 (Sakata and Harris, 2009; Harris and Mrsic-Flogel, 2013).
In the brain, this inhibition is believed to originate from (parvalbumin) basket cells, providing a near-exact, balanced inhibitory counter-signal to a given excitatory feedforward input (Froemke et al., 2007; Xue et al., 2014; Froemke, 2015). Excitatory and inhibitory inputs thus cancel each other, and arriving signals are ignored by default. Consequently, any activity within the downstream memory network remains largely unperturbed, unless it is altered through targeted modulation of the inhibitory activity (Harris and Mrsic-Flogel, 2013; Vogels and Abbott, 2009; Letzkus et al., 2015). Similarly, the memory cell itself can only affect the output of the LSTM when its activity is unaccompanied by congruent inhibition (mapped onto layer-5, layer-6 or layer-2/3 in the same microcircuit, which are known to project to higher brain areas (Harris and Mrsic-Flogel, 2013); see Fig. S1), i.e. when lateral inhibition is turned down and the gate is open.

2.1 Why subtractive neural integration?

When a presynaptic cell fires, neurotransmitter is released by its synaptic terminals. The neurotransmitter is subsequently bound by postsynaptic receptors, where it prompts a structural change of an ion channel that allows the flow of electrically charged ions into or out of the postsynaptic cell. Depending on the receptor type, the ion flux will either increase (depolarise) or decrease (hyperpolarise) the postsynaptic membrane potential. If sufficiently depolarising "excitatory" input is provided, the postsynaptic potential will reach a threshold and fire a stereotyped action potential ("spike", Kandel et al. (2000)).
This behaviour can be formalised as an RC circuit (R = resistance, C = capacitance), which follows Ohm's law u = RI and yields the standard leaky integrate-and-fire neuron model (Gerstner and Kistler, 2002),

τm u̇ = −u + R Iexc − R Iinh,

where τm = RC is the membrane time constant, and Iexc and Iinh are the excitatory and inhibitory (hyperpolarising) synaptic input currents, respectively. In this standard model, action potentials are initiated when the membrane potential hits a hard threshold θ (Brette and Gerstner, 2005; Gerstner et al., 2014). They are modelled as a momentary pulse followed by a reset to a resting potential. Neuronal excitation and inhibition have opposite effects, such that inhibitory inputs act linearly and subtractively on the membrane potential. The leaky integrate-and-fire model can be approximated at the level of firing rates as

rate ≈ [τm ln( R(Iexc − Iinh) / (R(Iexc − Iinh) − θ) )]^(−1)

(see Fig. 1a for the input-output response; Gerstner and Kistler, 2002), which we used to demonstrate the impact of subtractive gating (Fig. 1b) and contrast it with multiplicative gating (Fig. 1c). This firing-rate approximation forms the basis for our gated-RNN model, which has a similar subtractive behaviour and input-output function (cf. Fig. 1b, bottom). Moreover, the rate formulation also allows a cleaner comparison to LSTM units and the use of existing machine learning optimisation methods. It could be argued that a different form of inhibition (shunting inhibition), which counteracts excitatory inputs by decreasing the overall membrane resistance, has a characteristic multiplicative gating effect on the membrane potential. However, when analysed at the level of the output firing rate, its effect becomes subtractive (Holt and Koch, 1997; Prescott and De Koninck, 2003).
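As a rough illustration (our own sketch, not the paper's code), the firing-rate approximation above can be evaluated directly; the function name and the parameter defaults below are our choices:

```python
import math

def lif_rate(i_exc, i_inh, r=1.0, tau_m=0.02, theta=1.0):
    """Firing-rate approximation of a leaky integrate-and-fire neuron.

    rate = [tau_m * ln(R*I / (R*I - theta))]^(-1) for R*I > theta, else 0,
    where I = i_exc - i_inh is the net, subtractively gated, input current.
    """
    drive = r * (i_exc - i_inh)
    if drive <= theta:
        return 0.0  # subthreshold: the neuron never reaches threshold
    return 1.0 / (tau_m * math.log(drive / (drive - theta)))
```

Because inhibition enters only through the difference I = Iexc − Iinh, adding inhibition shifts the input-output curve rightward rather than scaling it, which is the subtractive effect contrasted with multiplicative gating in Fig. 1.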
Figure 1: Biological and artificial gated recurrent neural networks. (a) Example unit of a simplified cortical recurrent neural network. Sensory (or downstream) input arrives at pyramidal cells in layer-2/3 (L2/3, or layer-4), which is then fed onto memory cells (recurrently connected pyramidal cells in layer-5). The memory decays with a decay time constant f. Input onto layer-5 is balanced out by inhibitory basket cells (BC). The balance is represented by the diagonal 'equal' connection. The output of the memory cell is gated by basket cells at layer-6, 2/3 or 4 within the same area (or at an upstream brain area). (b) Implementation of (a), following a similar notation to LSTM units, but with it and ot as the input and output subtractive gates. Dashed connections represent the potential to have a balance between excitatory and inhibitory input (weights are set to 1). (c) LSTM recurrent neural network cell (see main text for details). The plots below illustrate the different gating modes: (a) using a simple current-based noisy leaky-integrate-and-fire neuron (capped to 200 Hz) with subtractive inhibition; (b) sigmoidal activation functions with subtractive gating; (c) sigmoidal activation functions with multiplicative gating. Output rate represents the number of spikes per second (Hz), as in biological circuits.

This is consistent with
our approach in that our model is framed at the firing-rate level (rather than at the level of membrane potentials).

3 Subtractive-gated long short-term memory

In an LSTM unit (Hochreiter and Schmidhuber, 1997; Greff et al., 2015) the access to the memory cell ct is controlled by an input gate it (see Fig. 1c). At the same time, a forget gate ft controls the decay of this memory (note that this leak is controlled by the input and recurrent units, which may be biologically unrealistic), and the output gate ot controls whether the content of the memory cell ct is transmitted to the rest of the network. An LSTM network consists of many LSTM units, each containing its own memory cell ct and input it, forget ft and output ot gates. The LSTM state is described as ht = f(xt, ht−1, it, ft, ot), and the unit follows the dynamics given below, alongside the corresponding subLSTM updates (introduced in full in the next section):

LSTM:
[ft, ot, it]ᵀ = σ(W xt + R ht−1 + b)
zt = tanh(W xt + R ht−1 + b)
ct = ct−1 ⊙ ft + zt ⊙ it
ht = tanh(ct) ⊙ ot

subLSTM:
[ft, ot, it]ᵀ = σ(W xt + R ht−1 + b)
zt = σ(W xt + R ht−1 + b)
ct = ct−1 ⊙ ft + zt − it
ht = σ(ct) − ot

Here, ct is the memory cell (note the multiplicative control of the input gate in the LSTM), ⊙ denotes element-wise multiplication and zt is the new weighted input, with xt and ht−1 being the input vector and the recurrent input from other LSTM units, respectively. The overall output of the LSTM unit is then computed as ht. LSTM networks can have multiple layers with millions of parameters (weights and biases), which are typically trained using stochastic gradient descent in a supervised setting. Above, the parameters are W, R and b. The multiple gates allow the network to adapt the flow of information depending on the task at hand. In particular, they enable writing to the memory cell (controlled by the input gate, it), adjusting the timescale of the memory (controlled by the forget gate, ft) and exposing the memory to the network (controlled by the output gate, ot).
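The two update rules can be written side by side in a few lines of NumPy. This is our own sketch; in particular, the order in which the [f, o, i, z] weight blocks are stacked is an arbitrary choice, not specified by the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, R, b):
    """One LSTM timestep. W, R, b stack the [f, o, i, z] blocks row-wise."""
    n = h_prev.size
    pre = W @ x + R @ h_prev + b
    f, o, i = (sigmoid(pre[k * n:(k + 1) * n]) for k in range(3))
    z = np.tanh(pre[3 * n:4 * n])
    c = c_prev * f + z * i          # multiplicative input gate
    h = np.tanh(c) * o              # multiplicative output gate
    return h, c

def sublstm_step(x, h_prev, c_prev, W, R, b):
    """One subLSTM timestep: gates act subtractively, all activations sigmoid."""
    n = h_prev.size
    pre = W @ x + R @ h_prev + b
    f, o, i, z = (sigmoid(pre[k * n:(k + 1) * n]) for k in range(4))
    c = c_prev * f + z - i          # subtractive input gate
    h = sigmoid(c) - o              # subtractive output gate
    return h, c
```

Note that the subLSTM output ht = σ(ct) − ot always lies in (−1, 1), since both terms lie in (0, 1), mirroring the bounded output of the tanh-gated LSTM.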
The combined effect of these gates makes it possible for LSTM units to capture temporal (contextual) dependencies across multiple timescales. Here, we introduce and study a new RNN unit, the subLSTM. SubLSTM units are a mapping of LSTMs onto known canonical excitatory-inhibitory cortical microcircuits (Douglas et al., 1995; Song et al., 2005; Harris and Mrsic-Flogel, 2013). Similarly to LSTMs, subLSTMs are defined as ht = f(xt, ht−1, it, ft, ot) (Fig. 1b); however, here the gating is subtractive rather than multiplicative. A subLSTM is defined by a memory cell ct, the transformed input zt and the input gate it. In our model we use a simplified notion of balance in the gating, (θz_t^j − θi_t^j) for the jth unit, where θ = 1. For the memory forgetting we consider two options: (i) controlled by gates (as in an LSTM unit), ft = σ(W xt + R ht−1 + b), or (ii) a more biologically plausible learned simple decay f ∈ [0, 1], referred to in the results as fix-subLSTM. Similarly to its input, the subLSTM's output ht is also gated, through a subtractive output gate ot (see the equations above). We evaluated different activation functions and found that sigmoidal transformations performed best. The key difference to other gated RNNs is the subtractive inhibitory gating (it and ot), which has the potential to be balanced with the excitatory input (zt and ct, respectively; Fig. 1b). A more detailed comparison of the different gating modes follows.

3.1 Subtractive versus multiplicative gating in RNNs

The key difference between subLSTMs and LSTMs lies in the implementation of the gating mechanism. LSTMs typically use a multiplicative factor to control the amplitude of the input signal, whereas subLSTMs use a more biologically plausible interaction of excitation and inhibition. An important consequence of subtractive gating is the potential for improved gradient flow backwards towards the input layers. To illustrate this, we can compare the gradients of subLSTMs and LSTMs in a simple example.
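To make the contrast concrete at the activation level (cf. Fig. 1b,c), here is a small NumPy sketch of the two gating modes; the function names are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def subtractive_gate(z, gate):
    # inhibition is subtracted before the nonlinearity: shifts the curve
    return sigmoid(z - gate)

def multiplicative_gate(z, gate):
    # the gate scales the activated input: a closed gate (0) silences it
    return sigmoid(z) * gate

z = np.linspace(-4.0, 4.0, 9)
closed_mult = multiplicative_gate(z, 0.0)   # identically zero output
shifted_sub = subtractive_gate(z, 2.0)      # same sigmoid, shifted right by 2
```

A closed multiplicative gate nulls the output entirely, whereas subtractive inhibition only shifts the effective threshold of the same sigmoid, leaving a graded, input-dependent response.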
First, we review the derivatives of the loss with respect to the various components of the subLSTM, using notation based on Greff et al. (2015). In this notation, δa represents the derivative of the loss with respect to a, and ∆t := dloss/dht is the error from the layer above. Then, by the chain rule, we have:

δht = ∆t
δōt = −δht ⊙ σ′(ōt)
δct = δht ⊙ σ′(ct) + δct+1 ⊙ ft+1
δf̄t = δct ⊙ ct−1 ⊙ σ′(f̄t)
δīt = −δct ⊙ σ′(īt)
δz̄t = δct ⊙ σ′(z̄t)

For comparison, the corresponding derivatives for an LSTM unit are given by:

δht = ∆t
δōt = δht ⊙ tanh(ct) ⊙ σ′(ōt)
δct = δht ⊙ ot ⊙ tanh′(ct) + δct+1 ⊙ ft+1
δf̄t = δct ⊙ ct−1 ⊙ σ′(f̄t)
δīt = δct ⊙ zt ⊙ σ′(īt)
δz̄t = δct ⊙ it ⊙ tanh′(z̄t)

where σ(·) is the sigmoid activation function and the barred variables ōt, f̄t, etc. are the pre-activation values of a gate or input transformation (e.g. ōt = Wo xt + Ro ht−1 + bo for the output gate of a subLSTM). Note that, compared to those of an LSTM, subLSTMs provide a simpler gradient with fewer multiplicative factors.

Footnote 2: Note that we consider two versions of subLSTMs: one with a forget gate as in LSTMs (subLSTM) and another with a simple memory decay, i.e. a scalar in [0, 1] that defines the memory time constant (fix-subLSTM).

Footnote 3: These weights could also be optimised, but for this model we decided to keep the number of parameters to a minimum for simplicity and ease of comparison with LSTMs.

Now, the LSTM's weights Wz of the input transformation z are updated according to

δWz = Σ_{t=0}^{T} Σ_{t′=t}^{T} ∆t′ (∂ht′/∂ct′) ⋯ (∂ct/∂zt) (∂zt/∂Wz),   (1)

where T is the total number of temporal steps and the ellipsis abbreviates the recurrent gradient paths through time, containing the paths backwards through time via hs and cs for t ≤ s ≤ t′. For simplicity of analysis, we ignore these recurrent connections, as they are the same in the LSTM and the subLSTM, and only consider the depth-wise path through the network; we call this the tth-timestep depth-only contribution to the derivative, (δWz)t.
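As a sanity check (ours, not from the paper), the subLSTM backward equations above can be verified against finite differences for a single timestep, taking loss = Σ h_t and no future-timestep term (δc_{t+1} = 0):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sp(x):
    """Derivative of the sigmoid, sigma'(x)."""
    s = sigmoid(x)
    return s * (1.0 - s)

def loss(f_bar, o_bar, i_bar, z_bar, c_prev):
    """One subLSTM step on pre-activations, with loss = sum(h_t)."""
    c = c_prev * sigmoid(f_bar) + sigmoid(z_bar) - sigmoid(i_bar)
    h = sigmoid(c) - sigmoid(o_bar)
    return h.sum()

rng = np.random.default_rng(0)
f_bar, o_bar, i_bar, z_bar, c_prev = (rng.normal(size=3) for _ in range(5))

# analytic deltas from the equations above (delta h_t = 1, delta c_{t+1} = 0)
c = c_prev * sigmoid(f_bar) + sigmoid(z_bar) - sigmoid(i_bar)
dh = np.ones(3)
do_bar = -dh * sp(o_bar)
dc = dh * sp(c)
df_bar = dc * c_prev * sp(f_bar)
di_bar = -dc * sp(i_bar)
dz_bar = dc * sp(z_bar)

def num_grad(fn, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function at x."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (fn(x + e) - fn(x - e)) / (2.0 * eps)
    return g
```

Each analytic delta should agree with the numerical gradient of the loss with respect to the corresponding pre-activation, which confirms the signs of the subtractive terms (−δht ⊙ σ′(ōt) and −δct ⊙ σ′(īt)).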
For an LSTM, by this slight abuse of notation, we have

(δWz)t = ∆t (∂ht/∂ct) (∂ct/∂zt) (∂zt/∂Wz) = ∆t ⊙ ot ⊙ tanh′(ct) ⊙ it ⊙ tanh′(z̄t) x⊤t,   (2)

where tanh′(·) is the derivative of tanh, and ot and it are the output and input gates. Notice that when either the input or the output gate is set to zero (closed), the corresponding contribution to the gradient is zero. For a network with subtractive gating, the depth-only derivative contribution becomes

(δWz)t = ∆t ⊙ σ′(ct) ⊙ σ′(z̄t) x⊤t,   (3)

where σ′(·) is the sigmoid derivative. In this case, the input and output gates, ot and it, are not present. As a result, the subtractive gates in subLSTMs do not (directly) impair error propagation.

4 Results

The aims of our work were two-fold. First, inspired by cortical circuits, we aimed to propose a biologically plausible implementation of an LSTM unit, which would allow us to better understand cortical architectures and their dynamics. To compare the performance of subLSTM units to LSTMs, we first compared the learning dynamics of subtractive and multiplicative networks mathematically. In a second step, we empirically compared subLSTM and fix-subLSTM with LSTM networks on two tasks: sequential MNIST classification and word-level language modelling on Penn Treebank (Marcus et al., 1993) and Wikitext-2 (Merity et al., 2016). The network weights are initialised with Glorot initialisation (Glorot and Bengio, 2010), and LSTM units have an initial forget gate bias of 1. We selected the number of units for fix-subLSTM such that the number of parameters is held constant across experiments, to facilitate a fair comparison with LSTMs and subLSTMs.

4.1 Sequential MNIST

In the "sequential" MNIST digit classification task, each digit image from the MNIST dataset is presented to the RNN as a sequence of pixels (Le et al., 2015; Fig. 2a). We decompose the MNIST images of 28×28 pixels into sequences of 784 steps.
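The image-to-sequence preprocessing can be sketched as follows (row-major pixel ordering is our assumption; the paper does not specify the scan order):

```python
import numpy as np

def to_pixel_sequence(image):
    """Flatten a 28x28 image into a length-784 sequence of single pixels,
    read row by row, as in the sequential MNIST task."""
    assert image.shape == (28, 28)
    return image.reshape(784, 1)  # shape: (timesteps, input_dim)
```

Each of the 784 timesteps then feeds one scalar pixel value into the recurrent network, so the model must integrate information over very long sequences to classify the digit.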
The network was optimised using RMSProp with momentum (Tieleman and Hinton, 2012), a learning rate of 10−4, one hidden layer and 100 hidden units. Our results show that subLSTMs achieve results similar to LSTMs (Fig. 2b): test accuracies were 97.96% (LSTM), 97.29% (subLSTM) and 97.27% (fix-subLSTM). Our results are comparable to previous RNN results on the same task (Le et al., 2015).

Figure 2: Comparison of LSTM and subLSTM networks for sequential pixel-by-pixel MNIST, using 100 hidden units. (a) Samples from the MNIST dataset. We converted each matrix of 28×28 pixels into a temporal sequence of 784 timesteps. (b) Classification accuracy on the test set. fix-subLSTM has a fixed but learned forget gate.

4.2 Language modelling

Language modelling represents a more challenging task for RNNs, with both short- and long-term dependencies. RNN language models (RNN LMs) model the probability of text by autoregressively predicting a sequence of words. Each timestep is trained to predict the following word; in other words, we model the word sequence as a product of conditional multinoulli distributions. We evaluate the RNN LMs by measuring their perplexity, defined for a sequence of n words as

perplexity = P(w1, . . . , wn)^(−1/n).   (4)

We first used the Penn Treebank (PTB) dataset to train our model on word-level language modelling (929k training, 73k validation and 82k test words, with a vocabulary of 10k words). All RNNs tested have 2 hidden layers; backpropagation is truncated to 35 steps and the batch size is 20. To optimise the networks we used RMSProp with momentum. We also performed a hyperparameter search on the validation set over the input, output and update dropout rates, the learning rate and the weight decay. The hyperparameter search was done with Google Vizier (Golovin et al., 2017), which performs black-box optimisation using Gaussian process bandits and transfer learning. Tables 2 and 3 show the resulting hyperparameters.
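In practice, the perplexity of Eq. (4) is computed from per-word log-probabilities for numerical stability; a minimal sketch (ours):

```python
import math

def perplexity(word_log_probs):
    """Perplexity from per-word log-probabilities log P(w_t | w_<t).

    Equivalent to P(w_1 .. w_n)^(-1/n), computed in log space so the
    product of many small probabilities does not underflow.
    """
    n = len(word_log_probs)
    return math.exp(-sum(word_log_probs) / n)
```

As a sanity check, a model that assigns the uniform probability 1/V to every word in a vocabulary of size V has perplexity exactly V, which is why perplexity is often read as an effective branching factor.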
Table 1 reports perplexity on the test set. To understand how subLSTMs scale with network size, we varied the number of hidden units between 10, 100, 200 and 650. We also tested the Wikitext-2 language modelling dataset, based on Wikipedia articles. This dataset is twice as large as the PTB dataset (2,000k training, 217k validation and 245k test words) and also features a larger vocabulary (33k words). It is therefore well suited to evaluate model performance on longer-term dependencies, and it reduces the likelihood of overfitting. On both datasets, our results show that subLSTMs achieve perplexity similar to LSTMs (Tables 1a and 1b). Interestingly, the more biologically plausible version of the subLSTM (with a simple decay as forget gate) achieves performance similar to or better than subLSTMs.

(a) Penn Treebank (PTB) test perplexity

size   subLSTM   fix-subLSTM   LSTM
10     222.80    213.86        215.93
100    91.46     91.84         88.39
200    79.59     81.97         74.60
650    76.17     70.58         64.34

(b) Wikitext-2 test perplexity

size   subLSTM   fix-subLSTM   LSTM
10     268.33    259.89        271.44
100    103.36    105.06        102.77
200    89.00     94.33         86.15
650    78.92     79.49         74.27

Table 1: Language modelling (word-level) test set perplexities on (a) Penn Treebank and (b) Wikitext-2. The models have two layers, and fix-subLSTM uses a fixed but learned forget gate f ∈ [0, 1] for each unit. Size indicates the number of units. The number of units for fix-subLSTM was chosen such that the number of parameters matched those of LSTM and subLSTM, facilitating a fair comparison.

5 Conclusions & future work

Cortical microcircuits exhibit complex and stereotypical network architectures that support rich dynamics, but their computational power and dynamics have yet to be properly understood.
It is known that excitatory and inhibitory neuron types interact closely to process sensory information with great accuracy, but making sense of these interactions is beyond the scope of most contemporary experimental approaches. LSTMs, on the other hand, are a well-understood and powerful tool for contextual tasks, and their structure maps intriguingly well onto the stereotyped connectivity of cortical circuits. Here, we analysed whether biologically constrained LSTMs (i.e. subLSTMs) could perform similarly well, and indeed,

Model         hidden units   input dropout   output dropout   update dropout   learning rate   weight decay
LSTM          10     0.026   0.047   0.002   0.01186   0.000020
subLSTM       10     0.012   0.045   0.438   0.01666   0.000009
fix-subLSTM   11     0.009   0.043   0       0.01006   0.000029
LSTM          100    0.099   0.074   0.015   0.00906   0.000532
subLSTM       100    0.392   0.051   0.246   0.01186   0.000157
fix-subLSTM   115    0.194   0.148   0.042   0.00400   0.000218
LSTM          200    0.473   0.345   0.013   0.00496   0.000191
subLSTM       200    0.337   0.373   0.439   0.01534   0.000076
fix-subLSTM   230    0.394   0.472   0.161   0.00382   0.000066
LSTM          650    0.607   0.630   0.083   0.00568   0.000145
subLSTM       650    0.562   0.515   0.794   0.00301   0.000227
fix-subLSTM   750    0.662   0.730   0.530   0.00347   0.000136

Table 2: Penn Treebank hyperparameters.

Model         hidden units   input dropout   output dropout   update dropout   learning rate   weight decay
LSTM          10     0.015   0.039   0       0.01235   0
subLSTM       10     0.002   0.030   0.390   0.00859   0.000013
fix-subLSTM   11     0.033   0.070   0.013   0.00875   0
LSTM          100    0.198   0.154   0.002   0.01162   0.000123
subLSTM       100    0.172   0.150   0.009   0.00635   0.000177
fix-subLSTM   115    0.130   0.187   0       0.00541   0.000172
LSTM          200    0.379   0.351   0       0.00734   0.000076
subLSTM       200    0.342   0.269   0.018   0.00722   0.000111
fix-subLSTM   230    0.256   0.273   0       0.00533   0.000160
LSTM          650    0.572   0.566   0.071   0.00354   0.000112
subLSTM       650    0.633   0.567   0.257   0.00300   0.000142
fix-subLSTM   750    0.656   0.590   0.711   0.00321   0.000122

Table 3: Wikitext-2 hyperparameters.
such subtractively gated excitation-inhibition recurrent neural networks show promise compared against LSTMs on benchmarks such as sequence classification and word-level language modelling. While it is notable that subLSTMs could not (yet) outperform their traditional counterparts, we hope that our work will serve as a platform to discuss and develop ideas of cortical function and to establish links to relevant experimental work on the role of excitatory and inhibitory neurons in contextual learning (Froemke et al., 2007; Froemke, 2015; Poort et al., 2015; Pakan et al., 2016; Kuchibhotla et al., 2017). In future work, it will be interesting to study how additional biological detail may affect performance. Next steps should aim to include Dale's principle (i.e. that a given neuron can only make either excitatory or inhibitory connections; Strata and Harvey, 1999), and naturally focus on the perplexing diversity of inhibitory cell types (Markram et al., 2004) and behaviours, such as shunting inhibition and mixed subtractive and divisive control (Doiron et al., 2001; Mejias et al., 2013; El Boustani and Sur, 2014; Seybold et al., 2015). Overall, given the success of multiplicatively gated LSTMs, it will be most insightful to understand whether some of the biological tricks of cortical networks may give LSTMs a further performance boost.

Acknowledgements

We would like to thank Everton Agnes, Çağlar Gülçehre, Gabor Melis and Jake Stroud for helpful comments and discussion. R.P.C. and T.P.V. were supported by a Sir Henry Dale Fellowship from the Wellcome Trust and the Royal Society (WT 100000). Y.M.A. was supported by the EPSRC and the Research Councils UK (RCUK). B.S. was supported by the Clarendon Fund.

References

Abbott, L. F. and Nelson, S. B. (2000). Synaptic plasticity: taming the beast. Nature Neuroscience, 3:1178.

Assael, Y. M., Shillingford, B., Whiteson, S., and de Freitas, N. (2016). Lipnet: Sentence-level lipreading. arXiv preprint arXiv:1611.01599.
Barthó, P., Curto, C., Luczak, A., Marguet, S. L., and Harris, K. D. (2009). Population coding of tone stimuli in auditory cortex: dynamic rate vector analysis. European Journal of Neuroscience, 30(9):1767–1778.

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4):695–711.

Bhalla, U. S. (2017). Dendrites, deep learning, and sequences in the hippocampus. Hippocampus.

[Footnote 4, from the Conclusions: Although here we have focused on a comparison with LSTMs, similar points would also apply to other gated RNNs, such as Gated Recurrent Units (Chung et al., 2014).]

Brette, R. and Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5):3637.

Brunel, N. (2000). Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience, 8(3):183–208.

Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014). Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv.org.

Costa, R. P., Froemke, R. C., Sjostrom, P. J., and van Rossum, M. C. W. (2015). Unified pre- and postsynaptic long-term plasticity enables reliable and flexible learning. eLife, 4:e09457.

Costa, R. P., Mizusaki, B. E. P., Sjostrom, P. J., and van Rossum, M. C. W. (2017a). Functional consequences of pre- and postsynaptic expression of synaptic plasticity. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 372(1715):20160153.

Costa, R. P., Padamsey, Z., D'amour, J. A., Emptage, N. J., Froemke, R. C., and Vogels, T. P. (2017b). Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity. Neuron, 96(1):177–189.e7.

Costa, R. P., Sjostrom, P. J., and van Rossum, M. C. W. (2013). Probabilistic inference of short-term synaptic plasticity in neocortical microcircuits. Frontiers in Computational Neuroscience, 7:75.
Cox, D. D. and Dean, T. (2014). Neural Networks and Neuroscience-Inspired Computer Vision. Current Biology, 24(18):R921–R929.

Deneve, S. and Machens, C. K. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3):375–382.

Doiron, B., Longtin, A., Berman, N., and Maler, L. (2001). Subtractive and divisive inhibition: effect of voltage-dependent inhibitory conductances and noise. Neural Computation, 13(1):227–248.

Douglas, R., Koch, C., Mahowald, M., Martin, K., and Suarez, H. (1995). Recurrent excitation in neocortical circuits. Science, 269(5226):981–985.

Douglas, R. J., Martin, K. A. C., and Whitteridge, D. (1989). A Canonical Microcircuit for Neocortex. Neural Computation, 1(4):480–488.

Egorov, A. V., Hamam, B. N., Fransén, E., Hasselmo, M. E., and Alonso, A. A. (2002). Graded persistent activity in entorhinal cortex neurons. Nature, 420(6912):173–178.

El Boustani, S. and Sur, M. (2014). Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo. Nature Communications, 5:5689.

Froemke, R. C. (2015). Plasticity of cortical excitatory-inhibitory balance. Annual Review of Neuroscience, 38(1):195–219.

Froemke, R. C., Merzenich, M. M., and Schreiner, C. E. (2007). A synaptic memory trace for cortical receptive field plasticity. Nature.

Gerstner, W. and Kistler, W. M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.

Gerstner, W., Kistler, W. M., Naud, R., and Paninski, L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press.

Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256.

Goldman-Rakic, P. S. (1995). Cellular basis of working memory. Neuron, 14(3):477–485.

Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., and Sculley, D. (2017).
Google Vizier: A service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1487–1495. ACM.

Graves, A. (2013). Generating Sequences With Recurrent Neural Networks. arXiv.org.

Graves, A., Mohamed, A.-r., and Hinton, G. (2013). Speech recognition with deep recurrent neural networks. arXiv preprint arXiv:1303.5778.

Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., and Schmidhuber, J. (2015). LSTM: A Search Space Odyssey. arXiv.org.

Harris, K. D. and Mrsic-Flogel, T. D. (2013). Cortical connectivity and sensory coding. Nature, 503(7474):51–58.

Hassabis, D., Kumaran, D., Summerfield, C., and Botvinick, M. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2):245–258.

Hennequin, G., Agnes, E. J., and Vogels, T. P. (2017). Inhibitory Plasticity: Balance, Control, and Codependence. Annual Review of Neuroscience, 40(1):557–579.

Hennequin, G., Vogels, T. P., and Gerstner, W. (2014). Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. Neuron, 82(6):1394–1406.

Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.

Holt, G. R. and Koch, C. (1997). Shunting inhibition does not have a divisive effect on firing rates. Neural Computation, 9(5):1001–1013.

Huang, Y., Matysiak, A., Heil, P., König, R., Brosch, M., and King, A. J. (2016). Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates. eLife, 5:e15441.

Jiang, X., Shen, S., Cadwell, C. R., Berens, P., Sinz, F., Ecker, A. S., Patel, S., and Tolias, A. S. (2015). Principles of connectivity among morphologically defined cell types in adult neocortex. Science, 350(6264):aac9462.

Kandel, E. R., Schwartz, J. H., Jessell, T. M., and Siegelbaum, S. A. (2000). Principles of Neural Science.

Kornblith, S., Quiroga, R. Q., Koch, C., Fried, I., and Mormann, F. (2017). Persistent Single-Neuron Activity during Working Memory in the Human Medial Temporal Lobe. Current Biology, 0(0).

Kremkow, J., Aertsen, A., and Kumar, A. (2010). Gating of signal propagation in spiking neural networks by balanced and correlated excitation and inhibition. The Journal of Neuroscience, 30(47):15760–15768.

Krueger, K. A. and Dayan, P. (2009). Flexible shaping: How learning in small steps helps. Cognition, 110(3):380–394.

Kuchibhotla, K. V., Gill, J. V., Lindsay, G. W., Papadoyannis, E. S., Field, R. E., Sten, T. A. H., Miller, K. D., and Froemke, R. C. (2017). Parallel processing by cortical inhibition enables context-dependent behavior. Nature Neuroscience, 20(1):62–71.

Le, Q. V., Jaitly, N., and Hinton, G. E. (2015). A Simple Way to Initialize Recurrent Networks of Rectified Linear Units. arXiv.org.

Letzkus, J. J., Wolff, S., and Lüthi, A. (2015). Disinhibition, a Circuit Mechanism for Associative Learning and Memory. Neuron.

Luczak, A., McNaughton, B. L., and Harris, K. D. (2015). Packet-based communication in the cortex. Nature Reviews Neuroscience.

Marblestone, A. H., Wayne, G., and Kording, K. P. (2016). Toward an Integration of Deep Learning and Neuroscience. Frontiers in Computational Neuroscience, 10:94.

Marcus, M. P., Marcinkiewicz, M. A., and Santorini, B. (1993). Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330.

Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., and Wu, C. (2004). Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience, 5(10):793–807.

Mejias, J. F., Kappen, H. J., Longtin, A., and Torres, J. J. (2013). Short-term synaptic plasticity and heterogeneity in neural systems. 1510:185.

Merity, S., Xiong, C., Bradbury, J., and Socher, R. (2016).
Pointer Sentinel Mixture Models. arXiv.org.

O'Reilly, R. C. and Frank, M. J. (2006). Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Computation, 18(2):283–328.

Pakan, J. M., Lowe, S. C., Dylda, E., Keemink, S. W., Currie, S. P., Coutts, C. A., Rochefort, N. L., and Mrsic-Flogel, T. D. (2016). Behavioral-state modulation of inhibition is context-dependent and cell type specific in mouse visual cortex. eLife, 5:e14985.

Pascanu, R., Mikolov, T., and Bengio, Y. (2012). On the difficulty of training Recurrent Neural Networks. arXiv.org.

Pfister, J.-P. and Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience, 26(38):9673–9682.

Poort, J., Khan, A. G., Pachitariu, M., Nemri, A., Orsolic, I., Krupic, J., Bauza, M., Sahani, M., Keller, G. B., Mrsic-Flogel, T. D., and Hofer, S. B. (2015). Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex. Neuron, 86(6):1478–1490.

Prescott, S. A. and De Koninck, Y. (2003). Gain control of firing rate by shunting inhibition: roles of synaptic noise and dendritic saturation. Proc. Natl. Acad. Sci. USA, 100(4):2076–2081.

Sakata, S. and Harris, K. D. (2009). Laminar structure of spontaneous and sensory-evoked population activity in auditory cortex. Neuron, 64(3):404–418.

Senn, W., Markram, H., and Tsodyks, M. (2001). An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Computation, 13(1):35–67.

Seybold, B. A., Phillips, E. A. K., Schreiner, C. E., and Hasenstaub, A. R. (2015). Inhibitory Actions Unified by Network Integration. Neuron, 87(6):1181–1192.

Song, S., Sjöström, P. J., Reigl, M., Nelson, S., and Chklovskii, D. B. (2005). Highly Nonrandom Features of Synaptic Connectivity in Local Cortical Circuits. PLoS Biology, 3(3):e68.

Strata, P. and Harvey, R. (1999). Dale's principle.
Brain Research Bulletin, 50(5):349–350.

Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to Sequence Learning with Neural Networks. arXiv.org.

Thomson, A. M., West, D. C., Wang, Y., and Bannister, A. P. (2002). Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2-5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro. Cerebral Cortex, 12(9):936–953.

Tieleman, T. and Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26–31.

van den Oord, A., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., and Kavukcuoglu, K. (2016). Conditional Image Generation with PixelCNN Decoders. arXiv.org.

van Kerkoerle, T., Self, M. W., and Roelfsema, P. R. (2017). Layer-specificity in the effects of attention and working memory on activity in primary visual cortex. Nature Communications, 8:13804.

van Vreeswijk, C. and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724–1726.

Vogels, T. P. and Abbott, L. F. (2009). Gating multiple signals through detailed balance of excitation and inhibition in spiking networks. Nature Neuroscience, 12(4):483.

Wang, Y., Markram, H., Goodman, P. H., Berger, T. K., Ma, J., and Goldman-Rakic, P. S. (2006). Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nature Neuroscience, 9(4):534–542.

Xue, M., Atallah, B. V., and Scanziani, M. (2014). Equalizing excitation-inhibition ratios across visual cortical neurons. Nature, 511(7511):596–600.

York, L. C. and van Rossum, M. C. W. (2009). Recurrent networks with short term synaptic depression. Journal of Computational Neuroscience, 27(3):607–620.

Zenke, F., Agnes, E. J., and Gerstner, W. (2015).
Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications, 6:6922.
Deep Lattice Networks and Partial Monotonic Functions

Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya R. Gupta
Google Research
1600 Amphitheatre Parkway, Mountain View, CA 94043
{siyou,dwding,canini,janpf,mayagupta}@google.com

Abstract

We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network. We implement the layers and projections with new computational graph nodes in TensorFlow and use the Adam optimizer and batched stochastic gradients. Experiments on benchmark and real-world datasets show that six-layer monotonic deep lattice networks achieve state-of-the-art performance for classification and regression with monotonicity guarantees.

1 Introduction

We propose building models with multiple layers of lattices, which we refer to as deep lattice networks (DLNs). While we hypothesize that DLNs may generally be useful, we focus on the challenge of learning flexible partially-monotonic functions, that is, models that are guaranteed to be monotonic with respect to a user-specified subset of the inputs. For example, if one is predicting whether to give someone else a loan, we expect and would like to constrain the prediction to be monotonically increasing with respect to the applicant's income, if all other features are unchanged. Imposing monotonicity acts as a regularizer, improves generalization to test data, and makes the end-to-end model more interpretable, debuggable, and trustworthy. To learn more flexible partial monotonic functions, we propose architectures that alternate three kinds of layers: linear embeddings, calibrators, and ensembles of lattices, each of which is trained discriminatively to optimize a structural risk objective and obey any given monotonicity constraints. See Fig.
2 for an example DLN with nine such layers. Lattices are interpolated look-up tables, as shown in Fig. 1. Lattices have been shown to be an efficient nonlinear function class that can be constrained to be monotonic by adding appropriate sparse linear inequalities on the parameters [1], and can be trained in a standard empirical risk minimization framework [2, 1]. Recent work showed lattices could be jointly trained as an ensemble to learn flexible monotonic functions for an arbitrary number of inputs [3]. Calibrators are one-dimensional lattices, which nonlinearly transform a single input [1]; see Fig. 1 for an example. They have been used to pre-process inputs in two-layer models: calibrators-then-linear models [4], calibrators-then-lattice models [1], and calibrators-then-ensemble-of-lattices models [3]. Here, we extend their use to discriminatively normalize between other layers of the deep model, as well as act as a pre-processing layer. We also find that using a calibrator for a last layer can help nonlinearly transform the outputs to better match the labels. We first describe the proposed DLN layers in detail in Section 2. In Section 3, we review more related work in learning flexible partial monotonic functions. We provide theoretical results characterizing the flexibility of the DLN in Section 4, followed by details on our open-source TensorFlow implementation and numerical optimization choices in Section 5. Experimental results demonstrate the potential on benchmark and real-world scenarios in Section 6.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Left: Example calibrator (1-d lattice) with fixed input range [−10, 10] and five fixed uniformly-spaced keypoints and corresponding discriminatively-trained outputs (look-up table values). Middle: Example lattice on three inputs in fixed input range [0, 1]^3, with 8 discriminatively-trained parameters (shown as gray-values), each corresponding to one of the 2^3 vertices of the unit hypercube. The parameters are linearly interpolated for any input in [0, 1]^3 to form the lattice function’s output. If the parameters are increasing in any direction, then the function is monotonic increasing in that direction. In this example, the gray-value parameters get lighter in all three directions, so the function is monotonic increasing in all three inputs. Right: Three examples of lattice values are shown in italics, each interpolated from the 8 lattice parameters.
Figure 2: Illustration of a nine-layer DLN: calibrators, linear embedding, calibrators, ensemble of lattices, calibrators, ensemble of lattices, calibrators, lattice, calibrator.
2 Deep Lattice Network Layers We describe in detail the three types of layers we propose for learning flexible functions that can be constrained to be monotonic with respect to any subset of the inputs. Without loss of generality, we assume monotonic means monotonic non-decreasing (one can flip the sign of an input if non-increasing monotonicity is desired). Let x_t ∈ R^{D_t} be the input vector to the tth layer, with D_t inputs, and let x_t[d] denote the dth input for d = 1, . . . , D_t. Table 1 summarizes the parameters and hyperparameters for each layer. For notational simplicity, in some places we drop the subscript t if it is clear from the context. We also denote as x^m_t the subset of x_t that is to be monotonically constrained, and as x^n_t the subset of x_t that is non-monotonic.
Linear Embedding Layer: Each linear embedding layer consists of two linear matrices, one matrix W^m_t ∈ R^{D^m_{t+1} × D^m_t} that linearly embeds the monotonic inputs x^m_t, a separate matrix W^n_t ∈ R^{(D_{t+1} − D^m_{t+1}) × (D_t − D^m_t)} that linearly embeds the non-monotonic inputs x^n_t, and one bias vector b_t. To preserve monotonicity on the embedded vector W^m_t x^m_t, we impose the following linear inequality constraints:

W^m_t[i, j] ≥ 0 for all (i, j).    (1)

The output of the linear embedding layer is the stacked vector

x_{t+1} = [x^m_{t+1}; x^n_{t+1}] = [W^m_t x^m_t; W^n_t x^n_t] + b_t.

Only the first D^m_{t+1} coordinates of x_{t+1} need to be monotonic inputs to the (t+1)th layer. These two linear embedding matrices and the bias vector are discriminatively trained.

Calibration Layer: Each calibration layer consists of a separate one-dimensional piecewise linear transform for each input at that layer, c_{t,d}(x_t[d]), that maps R to [0, 1], so that

x_{t+1} := [c_{t,1}(x_t[1]) c_{t,2}(x_t[2]) · · · c_{t,D_t}(x_t[D_t])]^T.

Here each c_{t,d} is a 1D lattice with K key-value pairs (a ∈ R^K, b ∈ R^K), and the function for each input is linearly interpolated between the two b values corresponding to the input’s surrounding a values. An example is shown on the left in Fig. 1. Each 1D calibration function is equivalent to a sum of weighted-and-shifted rectified linear units (ReLUs); that is, a calibrator function c(x[d]; a, b) can be equivalently expressed as

c(x[d]; a, b) = Σ_{k=1}^{K} α[k] ReLU(x[d] − a[k]) + b[1],    (2)

where

α[k] := (b[k+1] − b[k])/(a[k+1] − a[k]) − (b[k] − b[k−1])/(a[k] − a[k−1])  for k = 2, · · · , K − 1,
α[1] := (b[2] − b[1])/(a[2] − a[1]),
α[K] := −(b[K] − b[K−1])/(a[K] − a[K−1]).

However, enforcing monotonicity and boundedness constraints for the calibrator output is much simpler with the (a, b) parameterization of each keypoint’s input-output values, as we discuss shortly. Before training the DLN, we fix the input range for each calibrator to [a_min, a_max], and we fix the K keypoints a ∈ R^K to be uniformly spaced over [a_min, a_max].
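To make the two calibrator parameterizations concrete, here is a minimal pure-Python sketch (not the paper's TensorFlow operators) that evaluates a calibrator both by keypoint interpolation and as the weighted sum of shifted ReLUs in eq. (2); the keypoints a and outputs b below are made-up values for illustration:

```python
def relu(z):
    return max(0.0, z)

def calib_interp(x, a, b):
    """Calibrator via keypoint interpolation, clipping x to [a[0], a[-1]]."""
    if x <= a[0]:
        return b[0]
    if x >= a[-1]:
        return b[-1]
    for k in range(len(a) - 1):
        if a[k] <= x <= a[k + 1]:
            t = (x - a[k]) / (a[k + 1] - a[k])
            return (1 - t) * b[k] + t * b[k + 1]

def calib_relu(x, a, b):
    """The same calibrator as a sum of weighted-and-shifted ReLUs, eq. (2).
    Note b[0] here plays the role of b[1] in the paper's 1-indexed notation."""
    K = len(a)
    slope = [(b[k + 1] - b[k]) / (a[k + 1] - a[k]) for k in range(K - 1)]
    alpha = [slope[0]] + [slope[k] - slope[k - 1] for k in range(1, K - 1)] + [-slope[-1]]
    return sum(alpha[k] * relu(x - a[k]) for k in range(K)) + b[0]

# made-up keypoints and outputs; the two forms agree everywhere
a, b = [0.0, 1.0, 2.0], [0.0, 0.5, 2.0]
for x in [-1.0, 0.25, 1.5, 2.0, 3.0]:
    assert abs(calib_interp(x, a, b) - calib_relu(x, a, b)) < 1e-12
```

The last α flattens the ReLU sum beyond a[K], which is why the two forms also agree outside the keypoint range, matching the clipping behavior described below.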
Inputs that fall outside [a_min, a_max] are clipped to that range. The calibrator output parameters b ∈ [0, 1]^K are discriminatively trained. For monotonic inputs, we can constrain the calibrator functions to be monotonic by constraining the calibrator parameters b ∈ [0, 1]^K to be monotonic, by adding the linear inequality constraints

b[k] ≤ b[k + 1] for k = 1, . . . , K − 1    (3)

into the training objective [3]. We also experimented with constraining all calibrators to be monotonic (even for non-monotonic inputs) for more stable/regularized training.

Ensemble of Lattices Layer: Each ensemble of lattices layer consists of G lattices. Each lattice is a linearly interpolated multidimensional look-up table; for an example, see the middle and right pictures in Fig. 1. Each S-dimensional look-up table takes inputs over the S-dimensional unit hypercube [0, 1]^S, and has 2^S parameters θ ∈ R^{2^S}, specifying the lattice’s output for each of the 2^S vertices of the unit hypercube. Inputs in-between the vertices are linearly interpolated, which forms a smooth but nonlinear function over the unit hypercube. Two interpolation methods have been used: multilinear interpolation and simplex interpolation [1] (also known as the Lovász extension [5]). We use multilinear interpolation for all our experiments, which can be expressed as ψ(x)^T θ, where the nonlinear feature transformation ψ(x) : [0, 1]^S → [0, 1]^{2^S} gives the 2^S linear interpolation weights that input x puts on each of the 2^S parameters θ, so that the interpolated value for x is ψ(x)^T θ, with

ψ(x)[j] = Π_{d=1}^{S} x[d]^{v_j[d]} (1 − x[d])^{1 − v_j[d]},

where v_j[·] ∈ {0, 1} is the coordinate vector of the jth vertex of the unit hypercube, for j = 1, · · · , 2^S. For example, when S = 2, v_1 = (0, 0), v_2 = (0, 1), v_3 = (1, 0), v_4 = (1, 1), and ψ(x) = ((1 − x[1])(1 − x[2]), (1 − x[1])x[2], x[1](1 − x[2]), x[1]x[2]). The ensemble of lattices layer produces G outputs, one per lattice.
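Multilinear interpolation ψ(x)^T θ can be sketched directly from the weight formula above; this is a minimal illustration with θ stored per hypercube vertex, and the parameter values are invented for the example:

```python
from itertools import product

def multilinear(x, theta):
    """psi(x)^T theta: theta[v] is the parameter at 0/1 vertex v."""
    out = 0.0
    for v in product((0, 1), repeat=len(x)):
        w = 1.0
        for d, xd in enumerate(x):
            w *= xd if v[d] == 1 else 1.0 - xd   # x[d]^{v[d]} (1-x[d])^{1-v[d]}
        out += w * theta[v]
    return out

# S = 2 example matching the text; made-up vertex parameters
theta = {(0, 0): 0.0, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 1.0}
assert multilinear((0, 1), theta) == 0.2                      # vertices reproduce parameters
assert abs(multilinear((0.5, 0.5), theta) - 0.425) < 1e-12    # center = average of corners
```

Since the parameters here increase in both coordinate directions, this particular lattice is monotonic increasing in both inputs, as described for Fig. 1.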
When initializing the DLN, if the (t + 1)th layer is an ensemble of lattices, we randomly permute the outputs of the previous layer to be assigned to the G_{t+1} × S_{t+1} inputs of the ensemble. If a lattice has at least one monotonic input, then that lattice’s output is constrained to be a monotonic input to the next layer to guarantee end-to-end monotonicity. Each lattice is constrained to be monotonic by enforcing monotonicity constraints on each pair of lattice parameters that are adjacent in the monotonic directions; for details see Gupta et al. [1].

Table 1: DLN layers and hyperparameters
Layer t          | Parameters                                         | Hyperparameters
Linear Embedding | b_t ∈ R^{D_{t+1}}, W^m_t ∈ R^{D^m_{t+1} × D^m_t},  | D_{t+1}
                 | W^n_t ∈ R^{(D_{t+1} − D^m_{t+1}) × (D_t − D^m_t)}  |
Calibrators      | B_t ∈ R^{D_t × K}                                  | K ∈ N+ keypoints, input range [ℓ, u]
Lattice Ensemble | θ_{t,g} ∈ R^{2^{S_t}} for g = 1, . . . , G_t       | G_t lattices, S_t inputs per lattice

End-to-end monotonicity: The DLN is constructed to preserve end-to-end monotonicity with respect to a user-specified subset of the inputs. As we described, the parameters for each component (matrix, calibrator, lattice) can be constrained to be monotonic with respect to a subset of inputs by satisfying certain linear inequality constraints [1]. Also, if a component has a monotonic input, then the output of that component is treated as a monotonic input to the following layer. Because the composition of monotonic functions is monotonic, the constructed DLN belongs to the partial monotonic function class. The arrows in Figure 2 illustrate this construction, i.e., how the tth layer output becomes a monotonic input to the (t + 1)th layer.

2.1 Hyperparameters We detail the hyperparameters for each type of DLN layer in Table 1.
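The end-to-end monotonicity argument above (composition of monotone components is monotone) can be checked numerically with a toy stack: monotone calibrator → non-negative weight → monotone calibrator. All keypoints and weights below are invented for illustration, not taken from the paper:

```python
def calibrator(x, keys, vals):
    """Piecewise-linear 1D calibrator; nondecreasing vals => monotone."""
    if x <= keys[0]:
        return vals[0]
    if x >= keys[-1]:
        return vals[-1]
    for k in range(len(keys) - 1):
        if keys[k] <= x <= keys[k + 1]:
            t = (x - keys[k]) / (keys[k + 1] - keys[k])
            return (1 - t) * vals[k] + t * vals[k + 1]

def dln_1d(x):
    """Tiny monotone stack: calibrator -> nonnegative linear -> calibrator."""
    h = calibrator(x, [-10.0, 0.0, 10.0], [0.0, 0.3, 1.0])   # monotone layer 1
    h = 2.0 * h - 0.5                                         # weight 2.0 >= 0, cf. eq. (1)
    return calibrator(h, [-0.5, 0.5, 1.5], [0.0, 0.9, 1.0])   # monotone layer 3

xs = [-12.0, -3.0, 0.0, 4.0, 11.0]
ys = [dln_1d(x) for x in xs]
assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))    # monotone end to end
```

Flipping the sign of the linear weight (violating the constraint in eq. (1)) would break the guarantee, which is exactly what the per-component projections enforce during training.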
Some of these hyperparameters constrain each other, since the number of outputs from each layer must equal the number of inputs to the next layer; for example, if you have a linear embedding layer with D_{t+1} = 1000 outputs, then there are 1000 inputs to the next layer, and if that next layer is a lattice ensemble, its hyperparameters must obey G_t × S_t = 1000.

3 Related Work Low-dimensional monotonic models have a long history in statistics, where they are called shape constraints, and often use isotonic regression [6]. Learning monotonic single-layer neural nets by constraining the neural net weights to be positive dates back to Archer and Wang in 1993 [7], and that basic idea has been re-visited by others [8, 9, 10, 11], but with some negative results about the obtainable flexibility, even with multiple hidden layers [12]. Sill [13] proposed a three-layer monotonic network that used monotonic linear embedding and max-and-min-pooling. Daniels and Velikova [12] extended Sill’s result to learn a partial monotonic function by combining min-max-pooling, also known as adaptive logic networks [14], with partial monotonic linear embedding, and showed that their proposed architecture is a universal approximator for partial monotone functions. None of these prior neural networks were demonstrated on problems with more than D = 10 features, nor trained on more than a few thousand examples. For our experiments we implemented a positive neural network and a min-max-pooling network [12] with TensorFlow. This paper extends recent work in learning multidimensional flexible partial monotonic two-layer networks consisting of a layer of calibrators followed by an ensemble of lattices [3], with parameters appropriately constrained for monotonicity, which built on earlier work of Gupta et al. [1]. This work differs in three key regards. First, we alternate layers to form a deeper, and hence potentially more flexible, network. Second, a key question addressed in Canini et al.
[3] is how to decide which features should be put together in each lattice in their ensemble. They found that random assignment worked well, but required large ensembles. They showed that smaller (and hence faster) models with the same accuracy could be trained by using a heuristic pre-processing step they proposed (crystals) to identify which features interact nonlinearly. This pre-processing step requires training a lattice for each pair of inputs to judge that pair’s strength of interaction, which scales as O(D^2), and we found it can be a large fraction of overall training time for D > 50. We solve this problem of determining which inputs should interact in each lattice by using a linear embedding layer before an ensemble of lattices layer to discriminatively and adaptively learn during training how to map the features to the first ensemble-layer lattices’ inputs. This strategy also means each input to a lattice can be a linear combination of the features. This use of a jointly trained linear embedding is the second key difference from that prior work [3]. The third difference is that in previous work [4, 1, 3], the calibrator keypoint values were fixed a priori based on the quantiles of the features, which is challenging to do for the calibration layers mid-DLN, because the quantiles of their inputs are evolving during training. Instead, we fix the keypoint values uniformly over the bounded calibrator domain.

4 Function Class of Deep Lattice Networks We offer some results and hypotheses about the function class of deep lattice networks, depending on whether the lattices are interpolated with multilinear interpolation (which forms multilinear polynomials) or simplex interpolation (which forms locally linear surfaces).
4.1 Cascaded multilinear lookup tables We show that a deep lattice network made up only of cascaded layers of lattices (without intervening layers of calibrators or linear embeddings) is equivalent to a single lattice defined on the D input features if multilinear interpolation is used. It is easy to construct counter-examples showing that this result does not hold for simplex-interpolated lattices.

Lemma 1. Suppose that a lattice has L inputs that can each be expressed in the form θ_i^T ψ(x[s_i]), where the s_i are mutually disjoint and ψ represents multilinear interpolation weights. Then the output can be expressed in the form θ̂^T ψ̂(x[∪ s_i]). That is, the lattice preserves the functional form of its inputs, changing only the values of the coefficients θ and the linear interpolation weights ψ.

Proof. Each input i of the lattice can be expressed in the following form:

f_i = θ_i^T ψ(x[s_i]) = Σ_{k=1}^{2^{|s_i|}} θ_i[v_{ik}] Π_{d∈s_i} x[d]^{v_{ik}[d]} (1 − x[d])^{1 − v_{ik}[d]}.

This is a multilinear polynomial on x[s_i]. The output can be expressed in the following form:

F = Σ_{j=1}^{2^L} θ[v_j] Π_{i=1}^{L} f_i^{v_j[i]} (1 − f_i)^{1 − v_j[i]}.

Note the product in the expression: f_i and 1 − f_i are both multilinear polynomials, but within each term of the product, only one is present, since one of the two has exponent 0 and the other has exponent 1. Furthermore, since each f_i is a function of a different subset of x, we conclude that the entire product is a multilinear polynomial. Since the sum of multilinear polynomials is still a multilinear polynomial, we conclude that F is a multilinear polynomial. Any multilinear polynomial on k variables can be converted to a k-dimensional multilinear lookup table, which concludes the proof.

Lemma 1 can be applied inductively to every layer of cascaded lattices down to the final output F(x). We have shown that cascaded lattices using multilinear interpolation are equivalent to a single multilinear lattice defined on all D features.
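Lemma 1 can be sanity-checked numerically: a multilinear polynomial is affine in each coordinate with the others held fixed, so the cascade's output at x[d] = 0.5 must equal the average of its outputs at x[d] = 0 and x[d] = 1. The lattice parameters below are arbitrary made-up values:

```python
import random
from itertools import product

def multilinear(x, theta):
    """Multilinear interpolation: theta[v] is the parameter at 0/1 vertex v."""
    out = 0.0
    for v in product((0, 1), repeat=len(x)):
        w = 1.0
        for d, xd in enumerate(x):
            w *= xd if v[d] == 1 else 1.0 - xd
        out += w * theta[v]
    return out

# two inner lattices on disjoint inputs, cascaded into an outer lattice
t1 = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.9}
t2 = {(0,): 0.2, (1,): 0.8}
tout = {(0, 0): 0.0, (0, 1): 0.3, (1, 0): 0.6, (1, 1): 1.0}

def F(x):
    """x has 3 coordinates: x[0:2] feed lattice 1, x[2] feeds lattice 2."""
    return multilinear((multilinear(x[:2], t1), multilinear(x[2:], t2)), tout)

# multilinear <=> affine in each coordinate separately
random.seed(0)
for d in range(3):
    x = [random.random() for _ in range(3)]
    lo, hi, mid = list(x), list(x), list(x)
    lo[d], hi[d], mid[d] = 0.0, 1.0, 0.5
    assert abs(F(mid) - 0.5 * (F(lo) + F(hi))) < 1e-12
```

Per the lemma, this composed F could therefore be rewritten as a single 3-input multilinear lattice with suitably chosen parameters.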
4.2 Universal approximation of partial monotone functions Theorem 4.1 in [12] states that partial monotone linear embedding followed by min and max pooling can approximate any partial monotone function on the hypercube up to arbitrary precision, given a sufficiently high embedding dimension. We show in the next lemma that simplex-interpolated lattices can represent min or max pooling. Thus one can use a DLN constructed with a linear embedding layer followed by two cascaded simplex-interpolated lattice layers to approximate any partial monotone function on the hypercube.

Lemma 2. Let θ_min = (0, 0, · · · , 0, 1) ∈ R^{2^n} and θ_max = (1, 0, · · · , 0) ∈ R^{2^n}, and let ψ_simplex be the simplex interpolation weights. Then

min(x[1], · · · , x[n]) = ψ_simplex(x)^T θ_min
max(x[1], · · · , x[n]) = ψ_simplex(x)^T θ_max

Proof. From the definition of simplex interpolation [1], ψ_simplex(x)^T θ = θ[1] x[π[1]] + · · · + θ[2^n] x[π[n]], where π is the sorted order such that x[π[1]] ≥ · · · ≥ x[π[n]], and due to sparsity, θ_min and θ_max select the min and the max.

4.3 Locally linear functions If simplex interpolation [1] (aka the Lovász extension) is used, the deep lattice network produces a locally linear function, because each layer is locally linear, and compositions of locally linear functions are locally linear. Note that a D-input lattice interpolated with simplex interpolation has D! linear pieces [1]. If one cascades an ensemble of D lattices into a lattice, then the number of possible locally linear pieces is of the order O((D!)!).

5 Numerical Optimization Details for the DLN Operators: We implemented 1D calibrators and multilinear interpolation over a lattice as new C++ operators in TensorFlow [15] and express each layer as a computational graph node using these new and existing TensorFlow operators. Our implementation is open sourced and can be found at https://github.com/tensorflow/lattice. We use the Adam optimizer [16] and batched stochastic gradients to update model parameters.
After each batched gradient update, we project parameters to satisfy their monotonicity constraints. The linear embedding layer’s constraints are element-wise non-negativity constraints, so its projection clips each negative component to zero. This projection can be done in O(# of elements in a monotonic linear embedding matrix). Projection for each calibrator is isotonic regression with chain ordering, which we implement with the pool-adjacent-violators algorithm [17] for each calibrator. This can be done in O(# of calibration keypoints). Projection for each lattice is isotonic regression with a partial ordering that imposes O(S 2^S) linear constraints for each lattice [1]. We solved it with consensus optimization and the alternating direction method of multipliers [18] to parallelize the projection computations, with a convergence criterion of ϵ = 10^{−7}. This can be done in O(S 2^S log(1/ϵ)).

Initialization: For linear embedding layers, we initialize each component of the linear embedding matrix with i.i.d. Gaussian noise N(2, 1). The initial mean of 2 biases the initial parameters to be positive so that they are not clipped to zero by the first monotonicity projection. However, because the calibration layer before the linear embedding outputs values in [0, 1] and thus is expected to have output E[x_t] = 0.5, initializing the linear embedding with a mean of 2 introduces an initial bias: E[x_{t+1}] = E[W_t x_t] = D_t. To counteract that, we initialize each component of the bias vector b_t to −D_t, so that the initial expected output of the linear layer is E[x_{t+1}] = E[W_t x_t + b_t] = 0. We initialize each lattice’s parameters to be a linear function spanning [0, 1], and add i.i.d. Gaussian noise N(0, 1/S^2) to each parameter, where S is the number of inputs to the lattice. We initialize each calibrator to be a linear function that maps [x_min, x_max] to [0, 1] (and do not add any noise).
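The two cheap projections described above can be sketched as follows, assuming squared-error (L2) projections as in standard isotonic regression; this is illustrative pure Python, not the paper's parallel ADMM implementation:

```python
def project_linear(W):
    """Monotonicity projection for W^m: clip each negative entry to zero."""
    return [[max(0.0, w) for w in row] for row in W]

def project_calibrator(b):
    """Project calibrator outputs onto nondecreasing sequences (L2) using
    the pool-adjacent-violators algorithm: merge violating adjacent blocks
    into their weighted mean until the sequence is monotone."""
    vals, wts = [], []
    for v in b:
        vals.append(float(v))
        wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:], wts[-2:] = [m], [w]
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * w)
    return out

assert project_linear([[2.0, -0.3]]) == [[2.0, 0.0]]
assert project_calibrator([0.1, 0.5, 0.4, 0.9]) == [0.1, 0.45, 0.45, 0.9]
```

Both run in time linear in the number of parameters, consistent with the complexity claims above; only the lattice projection (partial ordering) needs the heavier ADMM machinery.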
6 Experiments We present results on the same benchmark dataset (Adult) with the same monotonic features as in Canini et al. [3], and for three problems from Google where the monotonicity constraints were specified by product groups. For each experiment, every model considered is trained with monotonicity guarantees on the same set of inputs. See Table 2 for a summary of the datasets.

Table 2: Dataset Summary
Dataset     | Type     | # Features (# Monotonic) | # Training | # Validation | # Test
Adult       | Classify | 90 (4)                   | 26,065     | 6,496        | 16,281
User Intent | Classify | 49 (19)                  | 241,325    | 60,412       | 176,792
Rater Score | Regress  | 10 (10)                  | 1,565,468  | 195,530      | 195,748
Usefulness  | Classify | 9 (9)                    | 62,220     | 7,764        | 7,919

Table 3: User Intent Case Study Results
                | Validation Accuracy | Test Accuracy | # Parameters | G × S
DLN             | 74.39%              | 72.48%        | 27,903       | 30 × 5D
Crystals        | 74.24%              | 72.01%        | 15,840       | 80 × 7D
Min-Max network | 73.89%              | 72.02%        | 31,500       | 90 × 7D

For classification problems, we used logistic loss, and for the regression, we used squared error. For each problem, we used a validation set to optimize the hyperparameters for each model architecture: the learning rate, the number of training steps, etc. For an ensemble of lattices, we tune the number of lattices, G, and the number of inputs to each lattice, S. All calibrators for all models used a fixed number of 100 keypoints, with [−100, 100] as the input range. In all experiments, we use the six-layer DLN architecture: Calibrators → Linear Embedding → Calibrators → Ensemble of Lattices → Calibrators → Linear Embedding, and validate the number of lattices in the ensemble, G, the number of inputs to each lattice, S, the Adam stepsize, and the number of loops. For crystals [3] we validated the number of lattices, G, and the number of inputs to each lattice, S, as well as the Adam stepsize and number of loops. For the min-max net [12], we validated the number of groups, G, and the dimension of each group, S, as well as the Adam stepsize and number of loops.
For datasets where all features are monotonic, we also train a deep neural network with a non-negative weight matrix and ReLU activation units, with a final fully connected layer with a non-negative weight matrix, which we call a monotonic DNN, akin to the proposals of [7, 8, 9, 10, 11]. We tune the depth of hidden layers, G, and the number of activation units in each layer, S. All the result tables are sorted by validation accuracy, and contain an additional column for the chosen hyperparameters; 2 × 5D means G = 2 and S = 5.

6.1 User Intent Case Study (Classification) For this real-world Google problem, the task is to classify the user intent. This experiment is set up to test generalization ability to non-i.i.d. test data. The train and validation examples are collected from the U.S., and the test set is collected from 20 other countries; as a result of this difference between the train/validation and test distributions, there is a notable difference between the validation and the test accuracy. The results in Table 3 show a 0.5% gain in test accuracy for the DLN.

6.2 Adult Benchmark Dataset (Classification) We compare accuracy on the benchmark Adult dataset [19], where a model predicts whether a person’s income is at least $50,000 or not. Following Canini et al. [3], we require all models to be monotonically increasing in capital-gain, weekly hours of work and education level, and the gender wage gap. We used one-hot encoding for the other categorical features, for 90 features in total. We randomly split the usual train set [19] 80-20 and trained over the 80%, and validated over the 20%.

Table 4: Adult Results
                | Validation Accuracy | Test Accuracy | # Parameters | G × S
DLN             | 86.50%              | 86.08%        | 40,549       | 70 × 5D
Crystals        | 86.02%              | 85.87%        | 3,360        | 60 × 4D
Min-Max network | 85.28%              | 84.63%        | 57,330       | 70 × 9D

Results in Table 4 show the DLN provides better validation and test accuracy than the min-max network or crystals.
6.3 Rater Score Prediction Case Study (Regression) For this real-world Google problem, we train a model to predict a rater score for a candidate result, where each rater score is averaged over 1-5 raters, and takes on 5-25 possible real values. All 10 features are required to be monotonic. Results in Table 5 show the DLN has slightly better test MSE than the two-layer crystals model, and much better MSE than the other monotonic networks.

Table 5: Rater Score Prediction (Monotonic Features Only) Results
                | Validation MSE | Test MSE | # Parameters | G × S
DLN             | 1.2078         | 1.2096   | 81,601       | 50 × 9D
Crystals        | 1.2101         | 1.2109   | 1,980        | 10 × 7D
Min-Max network | 1.3474         | 1.3447   | 5,500        | 100 × 5D
Monotonic DNN   | 1.3920         | 1.3939   | 2,341        | 20 × 100D

6.4 Usefulness Case Study (Classifier) For this real-world Google problem, we train a model to predict whether a candidate result adds useful information given the presence of another result. All 9 features are required to be monotonic. Table 6 shows the DLN has slightly better validation and test accuracy than crystals, and both are notably better than the min-max network or positive-weight DNN.

Table 6: Usefulness Results
                | Validation Accuracy | Test Accuracy | # Parameters | G × S
DLN             | 66.08%              | 65.26%        | 81,051       | 50 × 9D
Crystals        | 65.45%              | 65.13%        | 9,920        | 80 × 6D
Min-Max network | 64.62%              | 63.65%        | 4,200        | 70 × 6D
Monotonic DNN   | 64.27%              | 62.88%        | 2,012        | 1 × 1000D

7 Conclusions In this paper, we proposed combining three types of layers, (1) calibrators, (2) linear embeddings, and (3) multidimensional lattices, to produce a new class of models we call deep lattice networks, which combines the flexibility of deep networks with the regularization, interpretability, and debuggability advantages that come with being able to impose monotonicity constraints on some inputs.

References [1] M. R. Gupta, A. Cotter, J. Pfeifer, K. Voevodski, K. Canini, A. Mangylov, W. Moczydlowski, and A. Van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 17(109):1–47, 2016. [2] E.
K. Garcia and M. R. Gupta. Lattice regression. In Advances in Neural Information Processing Systems (NIPS), 2009. [3] K. Canini, A. Cotter, M. M. Fard, M. R. Gupta, and J. Pfeifer. Fast and flexible monotonic functions with ensembles of lattices. Advances in Neural Information Processing Systems (NIPS), 2016. [4] A. Howard and T. Jebara. Learning monotonic transformations for classification. Advances in Neural Information Processing Systems (NIPS), 2007. [5] L. Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art, pages 235–257. Springer, 1983. [6] P. Groeneboom and G. Jongbloed. Nonparametric estimation under shape constraints. Cambridge University Press, New York, USA, 2014. [7] N. P. Archer and S. Wang. Application of the back propagation neural network algorithm with monotonicity constraints for two-group classification problems. Decision Sciences, 24(1):60–75, 1993. [8] S. Wang. A neural network method of density estimation for univariate unimodal data. Neural Computing & Applications, 2(3):160–167, 1994. [9] H. Kay and L. H. Ungar. Estimating monotonic functions and their bounds. AIChE Journal, 46(12):2426–2434, 2000. [10] C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia. Incorporating functional knowledge in neural networks. Journal of Machine Learning Research, 2009. [11] A. Minin, M. Velikova, B. Lang, and H. Daniels. Comparison of universal approximators incorporating partial monotonicity by structure. Neural Networks, 23(4):471–475, 2010. [12] H. Daniels and M. Velikova. Monotone and partially monotone neural networks. IEEE Trans. Neural Networks, 21(6):906–917, 2010. [13] J. Sill. Monotonic networks. Advances in Neural Information Processing Systems (NIPS), 1998. [14] W. W. Armstrong and M. M. Thomas. Adaptive logic networks. Handbook of Neural Computation, Section C1.8, IOP Publishing and Oxford U. Press, 1996. [15] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A.
Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [17] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution function for sampling with incomplete information. The Annals of Mathematical Statistics, 26(4):641–647, 1955. [18] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011. [19] C. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
Zap Q-Learning Adithya M. Devraj Sean P. Meyn Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32608. adithyamdevraj@ufl.edu, meyn@ece.ufl.edu Abstract The Zap Q-learning algorithm introduced in this paper is an improvement of Watkins’ original algorithm and recent competitors in several respects. It is a matrix-gain algorithm designed so that its asymptotic variance is optimal. Moreover, an ODE analysis suggests that the transient behavior is a close match to a deterministic Newton-Raphson implementation. This is made possible by a two time-scale update equation for the matrix gain sequence. The analysis suggests that the approach will lead to stable and efficient computation even for non-ideal parameterized settings. Numerical experiments confirm the quick convergence, even in such non-ideal cases.

1 Introduction It is recognized that algorithms for reinforcement learning such as TD- and Q-learning can be slow to converge. The poor performance of Watkins’ Q-learning algorithm was first quantified in [25], and since then many papers have appeared with proposed improvements, such as [9, 1]. An emphasis in much of the literature is computation of finite-time PAC (probably approximately correct) bounds as a metric for performance. Explicit bounds were obtained in [25] for Watkins’ algorithm, and in [1] for the “speedy” Q-learning algorithm that was introduced by those authors. A general theory is presented in [18] for stochastic approximation algorithms. In each of the models considered in prior work, the update equation for the parameter estimates can be expressed as

θ_{n+1} = θ_n + α_n [f(θ_n) + Δ_{n+1}], n ≥ 0,    (1)

in which {α_n} is a positive gain sequence, and {Δ_n} is a martingale difference sequence. This representation is critical in analysis, but unfortunately is not typical in reinforcement learning applications outside of these versions of Q-learning.
For Markovian models, the usual transformation used to obtain a representation similar to (1) results in an error sequence {Δ_n} that is the sum of a martingale difference sequence and a telescoping sequence [15]. It is the telescoping sequence that prevents easy analysis of Markovian models. This gap in the research literature carries over to the general theory of Markov chains. Examples of concentration bounds for i.i.d. sequences or martingale-difference sequences include the finite-time bounds of Hoeffding and Bennett. Extensions to Markovian models either offer very crude bounds [17] or restrictive assumptions [14, 11]; this remains an active area of research [20]. In contrast, asymptotic theory for stochastic approximation (as well as general state space Markov chains) is mature. Large Deviations or Central Limit Theorem (CLT) limits hold under very general assumptions [3, 13, 4]. The CLT will be a guide to algorithm design in the present paper. For a typical stochastic approximation algorithm, this takes the following form: denoting the error sequence by {θ̃_n := θ_n − θ*, n ≥ 0}, under general conditions the scaled sequence {√n θ̃_n : n ≥ 1} converges in distribution to a Gaussian distribution, N(0, Σ_θ). Typically, the scaled covariance is also convergent to the limit, which is known as the asymptotic covariance:

Σ_θ = lim_{n→∞} n E[θ̃_n θ̃_n^T].    (2)

An asymptotic bound such as (2) may not be satisfying for practitioners of stochastic optimization or reinforcement learning, given the success of finite-n performance bounds in prior research. However, the fact that the asymptotic covariance Σ_θ has a simple representation, and can therefore be easily improved or optimized, makes it a compelling tool to consider.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
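The asymptotic covariance (2) can be illustrated with the simplest stochastic approximation recursion, estimating a mean with gain α_n = 1/n (a toy example, not from the paper). Here θ̃_n is exactly the sample mean of the noise, so n E[θ̃_n²] equals the noise variance:

```python
import random

# theta_{n+1} = theta_n + alpha_{n+1}(Y_{n+1} - theta_n), alpha_n = 1/n,
# estimates E[Y]; for this recursion the asymptotic covariance (2) is Var(Y).
random.seed(0)
N, trials = 2000, 400
est = 0.0
for _ in range(trials):
    theta = 0.0
    for n in range(1, N + 1):
        y = random.gauss(0.0, 1.0)        # theta* = 0, noise variance 1
        theta += (y - theta) / n          # recursion of the form (1)
    est += N * theta * theta
est /= trials                             # Monte Carlo estimate of n E[theta_err^2]
assert 0.7 < est < 1.3                    # close to Sigma_theta = 1
```

With α_n = 1/n this recursion is exactly the running sample mean, so θ̃_N is N(0, 1/N) and the Monte Carlo average of N θ̃_N² concentrates near 1, matching (2).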
Moreover, as the examples in this paper suggest, the asymptotic covariance is often a good predictor of finite-time performance, since the CLT approximation is accurate for reasonable values of n. Two approaches are known for optimizing the asymptotic covariance. First is the remarkable averaging technique introduced in [21, 22, 24] (also see [12]). Second is stochastic Newton-Raphson, based on a special choice of matrix gain for the algorithm [13, 23]. The algorithms proposed here use the second approach. Matrix-gain variants of TD-learning [10, 19, 29, 30] and Q-learning [27] are available in the literature, but none are based on optimizing the asymptotic variance. It is a fortunate coincidence that LSTD(λ) of [6] achieves this goal [8]. In addition to accelerating the convergence rate of the standard Q-learning algorithm, it is hoped that this paper will lead to entirely new algorithms. In particular, there is little theory to support Q-learning in non-ideal settings in which the optimal “Q-function” does not lie in the parameterized function class. Convergence results have been obtained for a class of optimal stopping problems [31], and for deterministic models [16]. There is now intense practical interest, despite an incomplete theory. A stronger supporting theory will surely lead to more efficient algorithms.

Contributions A new class of Q-learning algorithms is proposed, called Zap Q-learning, designed to more accurately mimic the classical Newton-Raphson algorithm. It is based on a two time-scale stochastic approximation algorithm, constructed so that the matrix gain tracks the gain that would be used in a deterministic Newton-Raphson method. A full analysis is presented for the special case of a complete parameterization (similar to the setting of Watkins’ algorithm [28]). It is found that the associated ODE has a remarkable and simple representation, which implies consistency under suitable assumptions.
Extensions to non-ideal parameterized settings are also proposed, and numerical experiments show dramatic variance reductions. Moreover, results obtained from finite-n experiments show close solidarity with asymptotic theory. The remainder of the paper is organized as follows. The new Zap Q-learning algorithm is introduced in Section 2, which contains a summary of the theory from the extended version of this paper [8]. Numerical results are surveyed in Section 3, and conclusions are contained in Section 4. 2 Zap Q-Learning Consider an MDP model with state space X, action space U, cost function c: X × U → R, and discount factor β ∈ (0, 1). It is assumed that the state and action spaces are finite: denote ℓ = |X|, ℓu = |U|, and Pu the ℓ × ℓ conditional transition probability matrix, conditioned on u ∈ U. The state-action process (X, U) is adapted to a filtration {Fn : n ≥ 0}, and Q1 is assumed throughout: Q1: The joint process (X, U) is an irreducible Markov chain, with unique invariant pmf ϖ. The minimal value function is the unique solution to the discounted-cost optimality equation:

h∗(x) = min_{u∈U} Q∗(x, u) := min_{u∈U} { c(x, u) + β Σ_{x′∈X} Pu(x, x′) h∗(x′) },   x ∈ X.

The “Q-function” solves a similar fixed point equation:

Q∗(x, u) = c(x, u) + β Σ_{x′∈X} Pu(x, x′) Q∗(x′),   x ∈ X, u ∈ U,    (3)

in which Q(x) := min_{u∈U} Q(x, u) for any function Q: X × U → R. Given any function ς: X × U → R, let Q(ς) denote the corresponding solution to the fixed point equation (3), with c replaced by ς: the function q = Q(ς) is the solution to the fixed point equation

q(x, u) = ς(x, u) + β Σ_{x′} Pu(x, x′) min_{u′} q(x′, u′),   x ∈ X, u ∈ U.

The mapping Q is a bijection on the set of real-valued functions on X × U. It is also piecewise linear, concave and monotone (see [8] for proofs and discussions).
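Since the right-hand side of the fixed point equation (3) is a β-contraction, Q∗ can be computed by successive approximation (value iteration). A minimal sketch on a made-up 2-state, 2-action MDP; all transition probabilities and costs below are invented for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, used only to exercise equation (3).
X, U, beta = 2, 2, 0.9
c = np.array([[1.0, 0.5],
              [0.2, 0.8]])                 # c[x, u]
P = np.zeros((U, X, X))                    # P[u, x, x'] transition matrices
P[0] = [[0.9, 0.1], [0.3, 0.7]]
P[1] = [[0.2, 0.8], [0.6, 0.4]]

# Value iteration on (3): Q(x,u) <- c(x,u) + beta * sum_x' P_u(x,x') min_u' Q(x',u')
Q = np.zeros((X, U))
for _ in range(1000):
    Q_min = Q.min(axis=1)                  # Q(x) = min_u Q(x, u)
    Q = c + beta * np.stack([P[u] @ Q_min for u in range(U)], axis=1)

# Bellman residual: how far Q is from satisfying (3) exactly
residual = np.max(np.abs(
    Q - (c + beta * np.stack([P[u] @ Q.min(axis=1) for u in range(U)], axis=1))))
```

After 1000 iterations the residual is at the level of floating-point precision, since the error contracts by a factor β per sweep.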
It is known that Watkins’ Q-learning algorithm can be regarded as a stochastic approximation method [26, 5] to obtain the solution θ∗ ∈ R^d to the steady-state mean equations,

E[ (c(Xn, Un) + β Q^{θ∗}(Xn+1) − Q^{θ∗}(Xn, Un)) ζn(i) ] = 0,   1 ≤ i ≤ d,    (4)

where {ζn} are d-dimensional Fn-measurable functions and Qθ = θᵀψ for basis functions {ψi : 1 ≤ i ≤ d}. In Watkins’ algorithm ζn = ψ(Xn, Un), and the basis functions are indicator functions: ψk(x, u) = I{x = x^k, u = u^k}, 1 ≤ k ≤ d, with d = ℓ × ℓu the total number of state-action pairs [26]. In this special case we identify Q^{θ∗} = Q∗, and the parameter θ is identified with the estimate Qθ. A stochastic approximation algorithm to solve (4) coincides with Watkins’ algorithm [28]:

θn+1 = θn + αn+1 [c(Xn, Un) + β θn(Xn+1) − θn(Xn, Un)] ψ(Xn, Un),    (5)

where θn(x) := min_u θn(x, u), using the identification of θ with Qθ. One very general technique that is used to analyze convergence of stochastic approximation algorithms is to consider the associated limiting ODE, which is the continuous-time, deterministic approximation of the original recursion [4, 5]. For (5), denoting the continuous time approximation of {θn} to be {qt}, and under standard assumptions on the gain sequence {αn}, the associated ODE is of the form

(d/dt) qt(x, u) = ϖ(x, u) { c(x, u) + β Σ_{x′} Pu(x, x′) min_{u′} qt(x′, u′) − qt(x, u) }.    (6)

Under Q1, {qt} converges to Q∗; this is a key step in the proof of convergence of {θn} to the same limit. While Watkins’ Q-learning (5) is consistent, it is argued in [8] that the asymptotic covariance of this algorithm is typically infinite. This conclusion is complementary to the finite-n analysis of [25]: Theorem 2.1. Watkins’ Q-learning algorithm with step-size αn ≡ 1/n is consistent under Assumption Q1. Suppose that in addition max_{x,u} ϖ(x, u) ≤ (1/2)(1 − β)^{-1}, and the conditional variance of h∗(Xt) is positive:

Σ_{x,x′,u} ϖ(x, u) Pu(x, x′) [h∗(x′) − Pu h∗(x)]² > 0.

Then the asymptotic covariance is infinite: lim_{n→∞} n E[∥θn − θ∗∥²] = ∞. The assumption max_{x,u} ϖ(x, u) ≤ (1/2)(1 − β)^{-1} is satisfied whenever β ≥ 1/2.
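Watkins' recursion (5) in the tabular setting takes only a few lines of code. The toy MDP below is invented for illustration, and the sketch uses the asynchronous counterpart of αn ≡ 1/n, a step size 1/n(x,u) based on the visit count of each state-action pair, with uniformly random exploration:

```python
import numpy as np

# Toy deterministic MDP (illustrative only): next state is (x + u) mod 2.
X, U, beta = 2, 2, 0.5
c = np.array([[1.0, 0.2],
              [0.4, 0.6]])
step = lambda x, u: (x + u) % 2

# Ground-truth Q* via value iteration on (3), for checking the result.
Qstar = np.zeros((X, U))
for _ in range(200):
    Qstar = c + beta * np.array([[Qstar[step(x, u)].min() for u in range(U)]
                                 for x in range(X)])

# Watkins' recursion (5) with uniformly random exploration and per-(x,u)
# step sizes alpha = 1/n(x,u), where n(x,u) counts visits to the pair.
rng = np.random.default_rng(1)
Q = np.zeros((X, U))
visits = np.zeros((X, U))
x = 0
for _ in range(50_000):
    u = int(rng.integers(U))
    x_next = step(x, u)
    visits[x, u] += 1
    d = c[x, u] + beta * Q[x_next].min() - Q[x, u]   # temporal difference
    Q[x, u] += d / visits[x, u]
    x = x_next

err = np.abs(Q - Qstar).max()
```

On this noiseless toy problem the estimates settle close to Q∗; the slow convergence and infinite asymptotic covariance discussed in Theorem 2.1 only bite in noisier, high-discount settings.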
Matrix-gain stochastic approximation algorithms have appeared in previous literature. In particular, matrix gain techniques have been used to speed up the rate of convergence of Q-learning (see [7] and the second example in Section 3). The general G-Q(λ) algorithm is described as follows, based on a sequence of d × d matrices G = {Gn} and λ ∈ [0, 1]: For initialization θ0, ζ0 ∈ R^d, the sequence of estimates is defined recursively:

θn+1 = θn + αn+1 Gn+1 ζn dn+1,
dn+1 = c(Xn, Un) + β Q^{θn}(Xn+1) − Q^{θn}(Xn, Un),
ζn+1 = λβ ζn + ψ(Xn+1, Un+1).    (7)

The special case based on stochastic Newton-Raphson is Zap Q(λ)-learning:

Algorithm 1 Zap Q(λ)-learning
Input: θ0 ∈ R^d, ζ0 = ψ(X0, U0), Â0 ∈ R^{d×d}, n = 0, T ∈ Z+   ▷ Initialization
1: repeat
2:   φn(Xn+1) := arg min_u Q^{θn}(Xn+1, u);
3:   dn+1 := c(Xn, Un) + β Q^{θn}(Xn+1, φn(Xn+1)) − Q^{θn}(Xn, Un);   ▷ Temporal difference
4:   An+1 := ζn [β ψ(Xn+1, φn(Xn+1)) − ψ(Xn, Un)]ᵀ;
5:   Ân+1 = Ân + γn+1 (An+1 − Ân);   ▷ Matrix gain update rule
6:   θn+1 = θn − αn+1 Ân+1^{-1} ζn dn+1;   ▷ Zap-Q update rule
7:   ζn+1 := λβ ζn + ψ(Xn+1, Un+1);   ▷ Eligibility vector update rule
8:   n = n + 1
9: until n ≥ T

A special case is considered in the analysis here: the basis is chosen as in Watkins’ algorithm, λ = 0, and αn ≡ 1/n. An equivalent representation for the parameter recursion is thus

θn+1 = θn − αn+1 Ân+1^{-1} [Ψn c + An+1 θn],

in which c and θn are treated as d-dimensional vectors rather than functions on X × U, and Ψn = ψ(Xn, Un) ψ(Xn, Un)ᵀ. Part of the analysis is based on a recursion for the following d-dimensional sequence: Ĉn = −Π^{-1} Ân θn, n ≥ 1, where Π is the d × d diagonal matrix with entries ϖ (the steady-state distribution of (X, U)). The sequence {Ĉn} admits a very simple recursion in the special case γ ≡ α:

Ĉn+1 = Ĉn + αn+1 [Π^{-1} Ψn c − Ĉn].    (8)

It follows that Ĉn converges to c as n → ∞, since (8) is essentially a Monte-Carlo average of {Π^{-1} Ψn c : n ≥ 0}. Analysis for this case is complicated since Ân is obtained as a uniform average of {An}.
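The following is a rough sketch of Algorithm 1 in the special case analyzed here (Watkins basis, λ = 0, αn ≡ 1/n, γn ≡ n^{-0.85}), on the same kind of invented toy MDP as before. One liberty is taken for numerical robustness: a pseudo-inverse replaces Ân^{-1}, since in the earliest iterations the running average Ân can be rank-deficient; this is a convenience of the sketch, not part of the algorithm as stated:

```python
import numpy as np

X, U, beta, d = 2, 2, 0.5, 4
c = np.array([[1.0, 0.2],
              [0.4, 0.6]])
step = lambda x, u: (x + u) % 2
idx = lambda x, u: x * U + u                       # one-hot index of psi_{(x,u)}

# Ground-truth Q* via value iteration on (3), for checking the result.
Qstar = np.zeros((X, U))
for _ in range(200):
    Qstar = c + beta * np.array([[Qstar[step(x, u)].min() for u in range(U)]
                                 for x in range(X)])

rng = np.random.default_rng(2)
theta = np.zeros(d)                                # Q_theta(x, u) = theta[idx(x, u)]
A_hat = -np.eye(d)                                 # any negative-definite initialization
x = 0
for n in range(1, 20_001):
    u = int(rng.integers(U))
    x_next = step(x, u)
    phi = min(range(U), key=lambda v: theta[idx(x_next, v)])           # step 2: greedy action
    d_n = c[x, u] + beta * theta[idx(x_next, phi)] - theta[idx(x, u)]  # step 3: TD term
    zeta = np.zeros(d); zeta[idx(x, u)] = 1.0
    psi_next = np.zeros(d); psi_next[idx(x_next, phi)] = 1.0
    A_n = np.outer(zeta, beta * psi_next - zeta)                       # step 4
    A_hat += (n + 1) ** -0.85 * (A_n - A_hat)                          # step 5: matrix gain
    theta -= (1.0 / n) * np.linalg.pinv(A_hat) @ (zeta * d_n)          # step 6: Zap-Q update
    x = x_next

err = np.abs(theta.reshape(X, U) - Qstar).max()
```

The two time-scale structure is visible in the code: the matrix gain Ân is updated with the larger gain n^{-0.85}, while θ moves on the slower 1/n time scale.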
The main contributions of this paper concern a two time-scale implementation for which

Σ γn = ∞,   Σ γn² < ∞,   and   lim_{n→∞} αn/γn = 0.    (9)

In our analysis, we restrict to γn ≡ 1/n^ρ, for some fixed ρ ∈ (1/2, 1). Through ODE analysis, it is argued that the Zap Q-learning algorithm closely resembles an implementation of Newton-Raphson in this case. This analysis suggests that {Ân} more closely tracks the mean of {An}. Theorem 2.2 summarizes the main results under Q1 and the following additional assumptions: Q2: The optimal policy φ∗ is unique. Q3: The sequence of policies {φn} satisfies

Σ_{n=1}^{∞} γn I{φn+1 ≠ φn} < ∞,   a.s.

The assumption Q3 is used to address the discontinuity in the recursion for {Ân} resulting from the dependence of An+1 on φn. Theorem 2.2. Suppose that Assumptions Q1–Q3 hold, and the gain sequences α and γ satisfy: αn = n^{-1}, γn = n^{-ρ}, n ≥ 1, for some fixed ρ ∈ (1/2, 1). Then, (i) The parameter sequence {θn} obtained using the Zap-Q algorithm converges to Q∗ a.s. (ii) The asymptotic covariance (2) is minimized over all G-Q(0) matrix gain versions of Watkins’ Q-learning algorithm. (iii) An ODE approximation holds for the sequence {θn, Ĉn}, by continuous functions (q, ς) satisfying

qt = Q(ςt),   (d/dt) ςt = −ςt + c.    (10)

This ODE approximation is exponentially asymptotically stable, with lim_{t→∞} qt = Q∗. The ODE result (10) is an important aspect of this work. It says that the sequence {qt}, a continuous time approximation of the parameter estimates {θn} that are obtained using the Zap Q-learning algorithm, evolves as the Q-function of some time-varying cost function ςt. Furthermore, this time-varying cost function ςt has dynamics independent of qt, and converges to c, the cost function defined in the MDP model. Convergence follows from the continuity of the mapping Q:

lim_{n→∞} θn = lim_{t→∞} qt = lim_{t→∞} Q(ςt) = Q(c) = Q∗.

The reader is referred to [8] for complete proofs and technical details.
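The ODE approximation (10) can be checked numerically on a toy model: Euler-integrate the cost dynamics (d/dt)ςt = −ςt + c from an arbitrary initial cost function, and verify that qt = Q(ςt) approaches Q∗ = Q(c). The MDP below is invented for illustration:

```python
import numpy as np

# Toy MDP (illustrative only); Q_map implements the bijection sigma -> Q(sigma)
# by solving the fixed point equation with cost sigma via value iteration.
X, U, beta = 2, 2, 0.9
c = np.array([[1.0, 0.5], [0.2, 0.8]])
P = np.zeros((U, X, X))
P[0] = [[0.9, 0.1], [0.3, 0.7]]
P[1] = [[0.2, 0.8], [0.6, 0.4]]

def Q_map(cost):
    q = np.zeros((X, U))
    for _ in range(2000):
        q = cost + beta * np.stack([P[u] @ q.min(axis=1) for u in range(U)], axis=1)
    return q

# Euler integration of (10): d/dt varsigma_t = -varsigma_t + c
rng = np.random.default_rng(3)
varsigma = rng.normal(size=(X, U))        # arbitrary initial cost function
dt = 0.01
for _ in range(2000):                     # integrate to t = 20; e^{-20} is negligible
    varsigma += dt * (c - varsigma)

gap = np.abs(Q_map(varsigma) - Q_map(c)).max()
```

Because ςt converges to c exponentially fast and Q is continuous, the gap between qt and Q∗ is driven to zero, exactly the limit chain displayed above.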
3 Numerical Results Results from numerical experiments are surveyed here to illustrate the performance of the Zap Q-learning algorithm.

[Figure 1: Graph for MDP; an undirected graph on the six nodes 1–6.]

Finite state-action MDP Consider first a simple path-finding problem. The state space X = {1, . . . , 6} coincides with the six nodes on the undirected graph shown in Fig. 1. The action space U = {e_{x,x′}}, x, x′ ∈ X, consists of all feasible edges along which the agent can travel, including each “self-loop”, u = e_{x,x}. The goal is to reach the state x∗ = 6 and maximize the time spent there. The reader is referred to [8] for details on the cost function and other modeling assumptions. Six variants of Q-learning were tested: Watkins’ algorithm (5), Watkins’ algorithm with Ruppert-Polyak-Juditsky (RPJ) averaging [21, 22, 24], Watkins’ algorithm with a “polynomial learning rate” αn ≡ n^{-0.6} [9], Speedy Q-learning [1], and two versions of Zap Q-learning: γn ≡ αn ≡ n^{-1}, and γn ≡ αn^{0.85} ≡ n^{-0.85}. Fig. 2 shows the normalized trace of the asymptotic covariance of Watkins’ algorithm with step-size αn = g/n, as a function of g > 0. Based on this observation or on Theorem 2.1, it follows that the asymptotic covariance is not finite for the standard Watkins’ algorithm with αn ≡ 1/n. In simulations it was found that the parameter estimates are not close to θ∗ even after many millions of iterations.

[Figure 2: Normalized trace of the asymptotic covariance, as a function of the gain g, for β = 0.8 and β = 0.99.]

It was also found that Watkins’ algorithm performed poorly in practice for any scalar gain. For example, more than half of the 10³ experiments using β = 0.8 and g = 70 resulted in values of θn(15) exceeding θ∗(15) by 10⁴ (with θ∗(15) ≈ 500), even with n = 10⁶. The algorithm performed well with the introduction of projection (to ensure that the parameter estimates evolve on a bounded set) in the case β = 0.8. With β = 0.99, the performance was unacceptable for any scalar gain, even with projection. Fig.
3 shows normalized histograms of {W_n^i(k) = √n (θ_n^i(k) − θn(k)) : 1 ≤ i ≤ N} for the projected Watkins Q-learning with gain g = 70, and the Zap algorithm, γn ≡ αn^{0.85}. The theoretical predictions were based on the solution to a Lyapunov equation [8]. Results for β = 0.99 contained in [8] show similar solidarity with asymptotic theory.

[Figure 3: Asymptotic variance for Watkins’ algorithm with g = 70 and Zap Q-learning, γn ≡ αn^{0.85}; β = 0.8. Panels: (a) Wn(18) with n = 10⁴, (b) Wn(18) with n = 10⁶, (c) Wn(10) with n = 10⁴, (d) Wn(10) with n = 10⁶; each panel compares the experimental histogram against the theoretical and experimental pdfs.]

Bellman Error The Bellman error at iteration n is denoted:

Bn(x, u) = θn(x, u) − r(x, u) − β Σ_{x′∈X} Pu(x, x′) max_{u′∈U} θn(x′, u′).

This is identically zero if and only if θn = Q∗. Fig. 4 contains plots of the maximal error Bn = max_{x,u} |Bn(x, u)| for the six algorithms. Though all six algorithms perform reasonably well when β = 0.8, Zap Q-learning is the only one that achieves near zero Bellman error within n = 10⁶ iterations in the case β = 0.99. Moreover, the performance of the two time-scale algorithm is clearly superior to the one time-scale algorithm. It is also observed that the Watkins algorithm with an optimized scalar gain (i.e., step-size αn ≡ g∗/n with g∗ chosen so that the asymptotic variance is minimized) has the best performance among scalar-gain algorithms.

[Figure 4: Maximum Bellman error {Bn : n ≥ 0} for the six Q-learning algorithms (Watkins, RPJ, polynomial learning rate, Speedy, and the two Zap variants), for β = 0.8 (g = 70) and β = 0.99 (g = 1500).]

Fig.
4 shows only the typical behavior — repeated trials were run to investigate the range of possible outcomes. Plots of the mean and 2σ confidence intervals of Bn are shown in Fig. 5 for β = 0.99.

[Figure 5: Simulation-based 2σ confidence intervals of the Bellman error for the six Q-learning algorithms in the case β = 0.99 (RPJ, Speedy, polynomial learning rate, scalar gains g = 500, 1500, 5000, and the two Zap variants γn ≡ αn and γn ≡ αn^{0.85}), together with normalized histograms of the Bellman error at n = 10⁶.]

Finance model The next example is taken from [27, 7]. The reader is referred to these references for complete details of the problem set-up and the reinforcement learning architecture used in this prior work. The example is of interest because it shows how the Zap Q-learning algorithm can be used with a more general basis, and also how the technique can be extended to optimal stopping time problems. The Markovian state process for the model evolves in X = R¹⁰⁰. The “time to exercise” is modeled as a discrete-valued stopping time τ. The associated expected reward is defined as E[β^τ r(X_τ)], where β ∈ (0, 1), r(Xn) := Xn(100) = p̃n / p̃_{n−100}, and {p̃t : t ∈ R} is a geometric Brownian motion (derived from an exogenous price-process). The objective of finding a policy that maximizes the expected reward is modeled as an optimal stopping time problem. The value function is defined to be the supremum over all stopping times:

h∗(x) = sup_{τ>0} E[β^τ r(X_τ) | X0 = x].

This solves the Bellman equation: for each x ∈ X,

h∗(x) = max{ r(x), β E[h∗(Xn+1) | Xn = x] }.

The associated Q-function is denoted Q∗(x) := β E[h∗(Xn+1) | Xn = x], and solves a similar fixed point equation:

Q∗(x) = β E[max(r(Xn+1), Q∗(Xn+1)) | Xn = x].
The Q(0)-learning algorithm considered in [27] is defined as follows:

θn+1 = θn + αn+1 ψ(Xn) [ β max(Xn+1(100), Q^{θn}(Xn+1)) − Q^{θn}(Xn) ],   n ≥ 0.

In [7] the authors attempt to improve the performance of the Q(0) algorithm through the use of a sequence of matrix gains, which can be regarded as an instance of the G-Q(0)-learning algorithm defined in (7). For details see this prior work as well as the extended version of this paper [8]. A gain sequence {Gn} was introduced in [7] to improve performance. Denoting G and A to be the steady-state means of {Gn} and {An} respectively, the eigenvalues corresponding to the matrix GA are shown on the right-hand side of Fig. 6. It is observed that the sufficient condition for a finite asymptotic covariance is “just” satisfied in this algorithm: the maximum eigenvalue of GA is approximately λ ≈ −0.525 < −1/2 (see Theorem 2.1 of [8]). It is worth stressing that the finite asymptotic covariance was not a design goal in this prior work. It is only now on revisiting this paper that we find that the sufficient condition λ < −1/2 is satisfied. The Zap Q-learning algorithm for this example is defined by the following recursion:

θn+1 = θn − αn+1 Ân+1^{-1} ψ(Xn) [ β max(Xn+1(100), Q^{θn}(Xn+1)) − Q^{θn}(Xn) ],
Ân+1 = Ân + γn [An+1 − Ân],   An+1 = ψ(Xn) ϕᵀ(θn, Xn+1),
ϕ(θn, Xn+1) = β ψ(Xn+1) I{Q^{θn}(Xn+1) ≥ Xn+1(100)} − ψ(Xn).

High performance despite ill-conditioned matrix gain The real parts of the eigenvalues of A are shown on a logarithmic scale on the left-hand side of Fig. 6. These eigenvalues have a wide spread: the ratio of the largest to the smallest real parts of the eigenvalues is of the order 10⁴. This presents a challenge in applying any method. In particular, it was found that the performance of any scalar-gain algorithm was extremely poor, even with projection of parameter estimates.
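The eigenvalue test discussed above, that every eigenvalue of GA must satisfy Re(λ) < −1/2 for a finite asymptotic covariance, is easy to check numerically once estimates of G and A are available. A sketch with synthetic matrices (not the finance model's A):

```python
import numpy as np

# Check the sufficient condition for a finite asymptotic covariance:
# all eigenvalues of G @ A must have real part strictly less than -1/2.
def covariance_condition_holds(G, A):
    return bool(np.all(np.linalg.eigvals(G @ A).real < -0.5))

rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5))
A = -(M @ M.T + np.eye(5))                # a synthetic stable steady-state mean

# Newton-Raphson-style gain G = -A^{-1} gives GA = -I: all eigenvalues equal -1.
assert covariance_condition_holds(-np.linalg.inv(A), A)

# A scalar gain g*I that is too small violates the condition:
g = 0.4 / np.abs(np.linalg.eigvals(A).real).max()    # deliberately undersized
print(covariance_condition_holds(g * np.eye(5), A))  # prints False
```

This mirrors the observation in the text: the Newton-Raphson-style matrix gain satisfies the condition by construction, while a scalar gain only satisfies it when g is large enough relative to the eigenvalues of A.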
[Figure 6: Eigenvalues of A and GA for the finance example. Left: the real parts of λi(A) on a logarithmic scale. Right: λi(GA) in the complex plane, with the maximum real part approximately −0.525.]

[Figure 7: Theoretical and empirical variance for the finance example. Panels show Wn(1) and Wn(7) for n = 2 × 10⁴ and n = 2 × 10⁶, comparing the experimental histogram against the theoretical and experimental pdfs, for Zap-Q.]

In applying the Zap Q-learning algorithm it was found that the estimates {Ân} defined in the above recursion are nearly singular. Despite the unfavorable setting for this approach, the performance of the algorithm was better than any alternative that was tested. Fig. 7 contains normalized histograms of {W_n^i(k) = √n (θ_n^i(k) − θn(k)) : 1 ≤ i ≤ N} for the Zap-Q algorithm, with γn ≡ αn^{0.85} ≡ n^{-0.85}. The variance for finite n is close to the theoretical predictions based on the optimal asymptotic covariance. The histograms were generated for two values of n, and k = 1, 7. Of the d = 10 possibilities, the histogram for k = 1 had the worst match with theoretical predictions, and k = 7 was the closest. The histograms for the G-Q(0) algorithm contained in [8] showed extremely high variance, and the experimental results did not match theoretical predictions.
[Figure 8: Histograms of average reward for G-Q(0) learning (g = 100 and g = 200) and Zap-Q-learning, γn ≡ αn^ρ ≡ n^{-ρ} with ρ = 0.8, 0.85, 1.0, at n = 2 × 10⁴, 2 × 10⁵ and 2 × 10⁶.]

Table 1: Percentage of outliers observed in N = 1000 runs. Each entry is the percentage of runs which resulted in an average reward hθn(x) below the stated threshold.

                    (a) hθn(x) ≤ 0.999     (b) hθn(x) ≤ 0.95      (c) hθn(x) ≤ 0.5
n                   2e4    2e5    2e6      2e4    2e5    2e6      2e4    2e5    2e6
G-Q(0) g = 100      82.7   77.5   68       81.1   75.5   65.4     54.5   49.7   39.5
G-Q(0) g = 200      82.4   72.5   55.9     80.6   70.6   53.7     64.1   51.8   39
Zap-Q ρ = 1.0       35.7   0      0        0.55   0      0        0      0      0
Zap-Q ρ = 0.8       0.17   0.03   0        0      0      0        0      0      0
Zap-Q ρ = 0.85      0.13   0.03   0        0      0      0        0      0      0

Histograms of the average reward hθn(x) obtained from N = 1000 simulations are contained in Fig. 8, for n = 2 × 10⁴, 2 × 10⁵ and 2 × 10⁶, and x(i) = 1, 1 ≤ i ≤ 100. Omitted in this figure are outliers: values of the reward in the interval [0, 1). Table 1 lists the percentage of outliers for each case. The asymptotic covariance of the G-Q(0) algorithm was not far from optimal (its trace is about 15 times larger than obtained using Zap Q-learning). However, it is observed that this algorithm suffers from much larger outliers. 4 Conclusions Watkins’ Q-learning algorithm is elegant, but subject to two common and valid complaints: it can be very slow to converge, and it is not obvious how to extend this approach to obtain a stable algorithm in non-trivial parameterized settings (i.e., without a look-up table representation for the Q-function). This paper addresses both concerns with the new Zap Q(λ) algorithms that are motivated by asymptotic theory of stochastic approximation. The potential complexity introduced by the matrix gain is not of great concern in many cases, because of the dramatic acceleration in the rate of convergence.
Moreover, the main contribution of this paper is not a single algorithm but a class of algorithms, wherein the computational complexity can be dealt with separately. For example, in a parameterized setting, the basis functions can be intelligently pruned via random projection [2]. There are many avenues for future research. It would be valuable to find an alternative to Assumption Q3 that is readily verified. Based on the ODE analysis, it seems likely that the conclusions of Theorem 2.2 hold without this additional assumption. No theory has been presented here for non-ideal parameterized settings. It is conjectured that conditions for stability of Zap Q(λ)-learning will hold under general conditions. Consistency is a more challenging problem. In terms of algorithm design, it is remarkable to see how well the scalar-gain algorithms perform, provided projection is employed and the ratio of largest to smallest real parts of the eigenvalues of A is not too large. It is possible to estimate the optimal scalar gain based on estimates of the matrix A that is central to this paper. How to do so without introducing high complexity is an open question. On the other hand, the performance of RPJ averaging is unpredictable. In many experiments it is found that the asymptotic covariance is a poor indicator of finite-n performance. There are many suggestions in the literature for improving this technique. The results in this paper suggest new approaches that we hope will simultaneously (i) reduce complexity and potential numerical instability of matrix inversion, (ii) improve transient performance, and (iii) maintain optimality of the asymptotic covariance. Acknowledgments: This research was supported by the National Science Foundation under grants EPCN-1609131 and CPS-1259040. References [1] M. G. Azar, R. Munos, M. Ghavamzadeh, and H. Kappen. Speedy Q-learning. In Advances in Neural Information Processing Systems, 2011. [2] K. Barman and V. S. Borkar.
A note on linear function approximation using random projections. Systems & Control Letters, 57(9):784–786, 2008. [3] A. Benveniste, M. Métivier, and P. Priouret. Adaptive algorithms and stochastic approximations, volume 22 of Applications of Mathematics (New York). Springer-Verlag, Berlin, 1990. Translated from the French by Stephen S. Wilson. [4] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Hindustan Book Agency and Cambridge University Press (jointly), Delhi, India and Cambridge, UK, 2008. [5] V. S. Borkar and S. P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control Optim., 38(2):447–469, 2000. (also presented at the IEEE CDC, December, 1998). [6] J. A. Boyan. Technical update: Least-squares temporal difference learning. Mach. Learn., 49(2-3):233–246, 2002. [7] D. Choi and B. Van Roy. A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning. Discrete Event Dynamic Systems: Theory and Applications, 16(2):207–239, 2006. [8] A. M. Devraj and S. P. Meyn. Fastest Convergence for Q-learning. ArXiv e-prints, July 2017. [9] E. Even-Dar and Y. Mansour. Learning rates for Q-learning. Journal of Machine Learning Research, 5(Dec):1–25, 2003. [10] A. Givchi and M. Palhang. Quasi Newton temporal difference learning. In Asian Conference on Machine Learning, pages 159–172, 2015. [11] P. W. Glynn and D. Ormoneit. Hoeffding’s inequality for uniformly ergodic Markov chains. Statistics and Probability Letters, 56:143–146, 2002. [12] V. R. Konda and J. N. Tsitsiklis. Convergence rate of linear two-time-scale stochastic approximation. Ann. Appl. Probab., 14(2):796–819, 2004. [13] H. J. Kushner and G. G. Yin. Stochastic approximation algorithms and applications, volume 35 of Applications of Mathematics (New York). Springer-Verlag, New York, 1997. [14] R. B. Lund, S. P. Meyn, and R. L. Tweedie.
Computable exponential convergence rates for stochastically ordered Markov processes. Ann. Appl. Probab., 6(1):218–237, 1996. [15] D.-J. Ma, A. M. Makowski, and A. Shwartz. Stochastic approximations for finite-state Markov chains. Stochastic Process. Appl., 35(1):27–45, 1990. [16] P. G. Mehta and S. P. Meyn. Q-learning and Pontryagin’s minimum principle. In IEEE Conference on Decision and Control, pages 3598–3605, Dec. 2009. [17] S. P. Meyn and R. L. Tweedie. Computable bounds for convergence rates of Markov chains. Ann. Appl. Probab., 4:981–1011, 1994. [18] E. Moulines and F. R. Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems 24, pages 451–459. Curran Associates, Inc., 2011. [19] Y. Pan, A. M. White, and M. White. Accelerated gradient temporal difference learning. In AAAI, pages 2464–2470, 2017. [20] D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electron. J. Probab., 20:32 pp., 2015. [21] B. T. Polyak. A new method of stochastic approximation type. Avtomatika i telemekhanika (in Russian). Translated in Automat. Remote Control, 51 (1991), pages 98–107, 1990. [22] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, 1992. [23] D. Ruppert. A Newton-Raphson version of the multivariate Robbins-Monro procedure. The Annals of Statistics, 13(1):236–245, 1985. [24] D. Ruppert. Efficient estimators from a slowly convergent Robbins-Monro process. Technical Report Tech. Rept. No. 781, Cornell University, School of Operations Research and Industrial Engineering, Ithaca, NY, 1988. [25] C. Szepesvári. The asymptotic convergence-rate of Q-learning. In Proceedings of the 10th International Conference on Neural Information Processing Systems, pages 1064–1070. MIT Press, 1997. [26] J. Tsitsiklis. Asynchronous stochastic approximation and Q-learning.
Machine Learning, 16:185–202, 1994. [27] J. N. Tsitsiklis and B. Van Roy. Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Trans. Automat. Control, 44(10):1840–1851, 1999. [28] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King’s College, Cambridge, Cambridge, UK, 1989. [29] H. Yao, S. Bhatnagar, and C. Szepesvári. LMS-2: Towards an algorithm that is as cheap as LMS and almost as efficient as RLS. In Decision and Control, 2009 held jointly with the 2009 28th Chinese Control Conference. CDC/CCC 2009. Proceedings of the 48th IEEE Conference on, pages 1181–1188. IEEE, 2009. [30] H. Yao and Z.-Q. Liu. Preconditioned temporal difference learning. In Proceedings of the 25th international conference on Machine learning, pages 1208–1215. ACM, 2008. [31] H. Yu and D. P. Bertsekas. Q-learning and policy iteration algorithms for stochastic shortest path problems. Annals of Operations Research, 208(1):95–132, 2013.
Contrastive Learning for Image Captioning Bo Dai Dahua Lin Department of Information Engineering, The Chinese University of Hong Kong db014@ie.cuhk.edu.hk dhlin@ie.cuhk.edu.hk Abstract Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learning (CL), for image captioning. Specifically, via two constraints formulated on top of a reference model, the proposed method can encourage distinctiveness, while maintaining the overall quality of the generated captions. We tested our method on two challenging datasets, where it improves the baseline model by significant margins. We also showed in our studies that the proposed method is generic and can be used for models with various structures. 1 Introduction Image captioning, a task to generate natural descriptions of images, has been an active research topic in computer vision and machine learning. Thanks to the advances in deep neural networks, especially the wide adoption of RNN and LSTM, there has been substantial progress on this topic in recent years [23, 24, 15, 19]. However, studies [1, 3, 2, 10] have shown that even the captions generated by state-of-the-art models still leave a lot to be desired. Compared to human descriptions, machine-generated captions are often quite rigid and tend to favor a “safe” (i.e. matching parts of the training captions in a word-by-word manner) but restrictive way. As a consequence, captions generated for different images, especially those that contain objects of the same categories, are sometimes very similar [1], despite their differences in other aspects.
We argue that distinctiveness, a property often overlooked in previous work, is significant in natural language descriptions. To be more specific, when people describe an image, they often mention or even emphasize the distinctive aspects of an image that distinguish it from others. With a distinctive description, someone can easily identify the image it is referring to, among a number of similar images. In this work, we performed a self-retrieval study (see Section 4.1), which reveals that the lack of distinctiveness affects the quality of descriptions. From a technical standpoint, the lack of distinctiveness is partly related to the way that the captioning model was learned. A majority of image captioning models are learned by Maximum Likelihood Estimation (MLE), where the probabilities of training captions conditioned on corresponding images are maximized. While well grounded in statistics, this approach does not explicitly promote distinctiveness. Specifically, the differences among the captions of different images are not explicitly taken into account. We found empirically that the resultant captions highly resemble the training set in a word-by-word manner, but are not distinctive. In this paper, we propose Contrastive Learning (CL), a new learning method for image captioning, which explicitly encourages distinctiveness, while maintaining the overall quality of the generated captions. Specifically, it employs a baseline, e.g. a state-of-the-art model, as a reference. During learning, in addition to true image-caption pairs, denoted as (I, c), this method also takes as input mismatched pairs, denoted as (I, c/), where c/ is a caption describing another image. Then, the target model is learned to meet two goals, namely (1) giving higher probabilities p(c|I) to positive pairs, and (2) lower probabilities p(c/|I) to negative pairs, compared to the reference model.
The former ensures that the overall performance of the target model is not inferior to the reference, while the latter encourages distinctiveness. It is noteworthy that the proposed learning method (CL) is generic. While in this paper we focused on models based on recurrent neural networks [23, 15], the proposed method can also generalize well to models based on other formulations, e.g. probabilistic graphical models [4, 9]. Also, by choosing the state-of-the-art model as the reference model in CL, one can build on top of the latest advancement in image captioning to obtain improved performances. 2 Related Work Models for Image Captioning The history of image captioning dates back decades. Early attempts are mostly based on detections, which first detect visual concepts (e.g. objects and their attributes) [9, 4] followed by template filling [9] or nearest neighbor retrieving for caption generation [2, 4]. With the development of neural networks, a more powerful paradigm, encoder-and-decoder, was proposed by [23], which has since become the core of most state-of-the-art image captioning models. It uses a CNN [20] to represent the input image with a feature vector, and applies an LSTM net [6] upon the feature to generate words one by one. Based on the encoder-and-decoder, many variants have been proposed, where the attention mechanism [24] appears to be the most effective add-on. Specifically, the attention mechanism replaces the feature vector with a set of feature vectors, such as the features from different regions [24], and those under different conditions [27]. It also uses the LSTM net to generate words one by one, where the difference is that at each step a mixed guiding feature over the whole feature set will be dynamically computed. In recent years, there are also approaches combining the attention mechanism and detection. Instead of doing attention on features, they consider the attention on a set of detected visual concepts, such as attributes [25] and objects [26].
Regardless of its specific structure, any image captioning model is able to give p(c|I), the probability of a caption conditioned on an image. Therefore, all image captioning models can be used as the target or the reference in the CL method. Learning Methods for Image Captioning Many state-of-the-art image captioning models adopt Maximum Likelihood Estimation (MLE) as their learning method, which maximizes the conditional log-likelihood of the training samples:

Σ_{(ci, Ii) ∈ D} Σ_{t=1}^{Ti} ln p(wi^(t) | Ii, wi^(t−1), ..., wi^(1), θ),    (1)

where θ is the parameter vector, and Ii and ci = (wi^(1), wi^(2), ..., wi^(Ti)) are a training image and its caption. Although effective, some issues, including high resemblance among model-generated captions, have been observed [1] in models learned by MLE. Facing these issues, alternative learning methods have been proposed in recent years. Techniques of reinforcement learning (RL) were introduced to image captioning by [19] and [14]. RL sees the procedure of caption generation as a procedure of sequentially sampling actions (words) in a policy space (vocabulary). The rewards in RL are defined to be evaluation scores of sampled captions. Note that distinctiveness is not considered in either approach, RL or MLE. Prior to this work, some relevant ideas have been explored [21, 16, 1]. Specifically, [21, 16] proposed an introspective learning (IL) approach that learns the target model by comparing its outputs on (I, c) and (I/, c). Note that IL uses the target model itself as a reference. On the contrary, the reference model in CL provides more independent and stable indications about distinctiveness. In addition, (I/, c) in IL is pre-defined and fixed across the learning procedure, while the negative sample in CL, i.e. (I, c/), is dynamically sampled, making it more diverse and random.
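The MLE objective (1) is simply a sum of per-word conditional log-probabilities under teacher forcing. A toy numerical illustration; the vocabulary, caption, and probability table below are invented:

```python
import numpy as np

# Hypothetical 4-word vocabulary and a single training caption.
vocab = ["a", "man", "rides", "<eos>"]
caption = [0, 1, 2, 3]                      # "a man rides <eos>"

# p[t] stands in for the model's distribution over the t-th word given the
# image and the word prefix; a real model would compute this with an LSTM.
p = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.1, 0.2, 0.6, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

# One term of the outer sum in (1): the caption's conditional log-likelihood.
log_lik = sum(np.log(p[t, w]) for t, w in enumerate(caption))
```

MLE sums this quantity over all (image, caption) pairs in the dataset and maximizes it in θ, which is exactly why it rewards word-by-word agreement with the training captions without ever comparing captions across images.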
Recently, Generative Adversarial Networks (GAN) were also adopted for image captioning [1], involving an evaluator that may help promote distinctiveness. However, this evaluator is learned as a parameterized approximation that directly measures distinctiveness, and the approximation accuracy is not ensured in GAN. In CL, the fixed reference provides stable bounds on distinctiveness, and the bounds are supported by the reference model's performance on image captioning. Besides, [1] is specifically designed for models that generate captions word by word, while CL is more generic.

Figure 1: This figure illustrates a nondistinctive and a distinctive caption of an image ("A man doing a trick on a skateboard"; "A man performing stunt in the air at skate park"), where the nondistinctive one fails to retrieve the original image in the self-retrieval task.

Self Retrieval Top-K Recall
Captioning Method            1      5      50     500    ROUGE_L  CIDEr
Neuraltalk2 [8]              0.02   0.32   3.02   27.50  0.652    0.827
AdaptiveAttention [15]       0.10   0.96   11.76  78.46  0.689    1.004
AdaptiveAttention + CL       0.32   1.18   11.84  80.96  0.695    1.029

Table 1: This table lists results of self retrieval and captioning of different models, reported on the standard MSCOCO test set. See sec 4.1 for more details.

3 Background

Our formulation is partly inspired by Noise Contrastive Estimation (NCE) [5]. NCE was originally introduced for estimating probability distributions whose partition functions are difficult or even infeasible to compute. To estimate a parametric distribution pm(·; θ), which we refer to as the target distribution, NCE employs not only the observed samples X = (x1, x2, ..., x_Tm), but also samples drawn from a reference distribution pn, denoted Y = (y1, y2, ..., y_Tn). Instead of estimating pm(·; θ) directly, NCE estimates the density ratio pm/pn by training a classifier based on logistic regression.
Specifically, let U = (u1, ..., u_{Tm+Tn}) be the union of X and Y. A binary class label Ct is assigned to each ut, where Ct = 1 if ut ∈ X and Ct = 0 if ut ∈ Y. The posterior probabilities of the class labels are therefore

$$P(C=1 \mid u, \theta) = \frac{p_m(u;\theta)}{p_m(u;\theta) + \nu\, p_n(u)}, \qquad P(C=0 \mid u, \theta) = \frac{\nu\, p_n(u)}{p_m(u;\theta) + \nu\, p_n(u)}, \qquad (2)$$

where ν = Tn/Tm. Let G(u; θ) = ln pm(u; θ) − ln pn(u) and h(u; θ) = P(C = 1|u, θ); then we can write

$$h(u;\theta) = r_\nu\big(G(u;\theta)\big), \quad \text{with} \quad r_\nu(z) = \frac{1}{1 + \nu \exp(-z)}. \qquad (3)$$

The objective function of NCE is the joint conditional log-probability of the labels Ct given the samples U, which can be written as

$$\mathcal{L}(\theta; X, Y) = \sum_{t=1}^{T_m} \ln\big[h(x_t;\theta)\big] + \sum_{t=1}^{T_n} \ln\big[1 - h(y_t;\theta)\big]. \qquad (4)$$

Maximizing this objective with respect to θ yields an estimate of G(·; θ), the logarithm of the density ratio pm/pn. As pn is a known distribution, pm(·; θ) can then be readily derived.

4 Contrastive Learning for Image Captioning

Learning a model by characterizing desired properties relative to a strong baseline is a convenient and often quite effective approach in situations where it is hard to describe these properties directly. In image captioning specifically, it is difficult to characterize the distinctiveness of natural image descriptions via a set of rules without risking that some subtle but significant points are missed. Our idea in this work is to introduce a baseline model as a reference and try to enhance distinctiveness on top of it, while maintaining the overall quality of the generated captions. In the following, we first present an empirical study on the correlation between the distinctiveness of a captioning model's generated captions and its overall performance. Subsequently, we introduce the main framework of Contrastive Learning in detail.

4.1 Empirical Study: Self Retrieval

In most existing learning methods for image captioning, models are asked to generate a caption that best describes the semantics of a given image.
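Eqs (2)-(4) reduce to a logistic-regression objective on the log density ratio. A minimal NumPy sketch, assuming the log-densities of pm and pn have already been evaluated on the sample sets (the function and variable names are our own, not from the paper):

```python
import numpy as np

def nce_loss(log_pm_x, log_pn_x, log_pm_y, log_pn_y):
    """NCE objective, Eqs (2)-(4).

    log_pm_*, log_pn_*: log-densities of the target (pm) and reference (pn)
    distributions, evaluated on the observed samples X and noise samples Y.
    """
    nu = len(log_pm_y) / len(log_pm_x)          # nu = Tn / Tm
    def h(log_pm, log_pn):                       # h = r_nu(G), Eq (3)
        G = log_pm - log_pn                      # per-sample log density ratio
        return 1.0 / (1.0 + nu * np.exp(-G))
    # Eq (4): log h on data samples, log(1 - h) on noise samples.
    return np.sum(np.log(h(log_pm_x, log_pn_x))) + \
           np.sum(np.log(1.0 - h(log_pm_y, log_pn_y)))
```

When pm and pn agree everywhere (G = 0) and ν = 1, every h is 0.5 and the loss equals (Tm + Tn)·ln 0.5, the classifier's chance-level value.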
Meanwhile, the distinctiveness of a caption, which conversely requires the image to be the best match among all images for that caption, has not been explored. However, distinctiveness is crucial for high-quality captions. A study by Jas [7] showed that specificity is common in human descriptions, implying that image descriptions often involve distinctive aspects. Intuitively, a caption satisfying this property is very likely to contain key and unique content of the image, so that the original image can easily be retrieved when the caption is presented. To verify this intuition, we conducted an empirical study which we refer to as self retrieval. In this experiment, we try to retrieve the original image given its model-generated caption and investigate top-k recalls, as illustrated in Figure 1. Specifically, we randomly sampled 5,000 images (I1, I2, ..., I5000) from the standard MSCOCO [13] test set as the experiment benchmark. For an image captioning model pm(·; θ), we first ran it on the benchmark to obtain corresponding captions (c1, c2, ..., c5000) for the images. After that, using each caption ct as a query, we computed the conditional probabilities (pm(ct|I1), pm(ct|I2), ..., pm(ct|I5000)), which were used to produce a ranked list of images, denoted rt. Based on all ranked lists, we computed top-k recalls, i.e. the fraction of images that appear within the top k positions of their corresponding ranked lists. The top-k recalls are good indicators of how well a model captures the distinctiveness of descriptions. In this experiment, we compared three models: Neuraltalk2 [8] and AdaptiveAttention [15], both learned by MLE, as well as AdaptiveAttention learned by our method. The top-k recalls are listed in Table 1, along with the overall performance of these models in terms of Rouge [12] and Cider [22].
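The self-retrieval protocol above can be sketched as follows, assuming a precomputed score matrix of conditional log-probabilities pm(ct|Ij); in practice the matrix would come from a trained captioning model, and the helper name is ours:

```python
import numpy as np

def topk_recalls(score_matrix, ks=(1, 5)):
    """Self-retrieval recall: score_matrix[t, j] = log p_m(c_t | I_j).

    Caption c_t was generated from image I_t, so image t is the ground
    truth for query t. Returns the fraction of queries whose own image
    ranks within the top k of the descending-sorted list.
    """
    n = score_matrix.shape[0]
    ranks = np.array([
        int(np.where(np.argsort(-score_matrix[t]) == t)[0][0])
        for t in range(n)
    ])
    return {k: float(np.mean(ranks < k)) for k in ks}
```

A model whose captions always score highest on their own image gets recall@1 = 1.0; nondistinctive captions drag the recalls down.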
These results clearly show that the self-retrieval recalls are positively correlated with the performance of image captioning models on classical captioning metrics. Although most of the models are not explicitly learned to promote distinctiveness, the one with better self-retrieval recalls, i.e. whose generated captions are more distinctive, performs better in the image captioning evaluation. This positive correlation clearly demonstrates the significance of distinctiveness for captioning performance.

4.2 Contrastive Learning

In Contrastive Learning (CL), we learn a target image captioning model pm(·; θ) with parameters θ by constraining its behavior relative to a reference model pn(·; φ) with parameters φ. The learning procedure requires two sets of data: (1) the observed data X, a set of ground-truth image-caption pairs ((c1, I1), (c2, I2), ..., (c_Tm, I_Tm)), readily available in any image captioning dataset; and (2) the noise set Y, which contains mismatched pairs ((c/1, I1), (c/2, I2), ..., (c/Tn, I_Tn)) and can be generated by randomly sampling c/t ∈ C/It for each image It, where C/It is the set of all ground-truth captions except those of image It. We refer to X as positive pairs and Y as negative pairs. For any pair (c, I), the target model and the reference model respectively give their estimated conditional probabilities pm(c|I, θ) and pn(c|I, φ). We wish pm(ct|It, θ) to be greater than pn(ct|It, φ) for any positive pair (ct, It), and vice versa for any negative pair (c/t, It). Following this intuition, our initial attempt was to define D((c, I); θ, φ), the difference between pm(c|I, θ) and pn(c|I, φ), as

$$D((c, I); \theta, \phi) = p_m(c \mid I, \theta) - p_n(c \mid I, \phi), \qquad (5)$$

and set the loss function to be

$$\mathcal{L}'(\theta; X, Y, \phi) = \sum_{t=1}^{T_m} D((c_t, I_t); \theta, \phi) - \sum_{t=1}^{T_n} D((c_{/t}, I_t); \theta, \phi). \qquad (6)$$

In practice, this formulation runs into several difficulties. First, pm(c|I, θ) and pn(c|I, φ) are very small (∼1e-8), which may cause numerical problems.
Second, Eq (6) treats easy samples, hard samples, and mistaken samples equally, which is not the most effective strategy. For example, when D((ct, It); θ, φ) ≫ 0 for some positive pair, further increasing D((ct, It); θ, φ) is probably not as effective as updating D((ct′, It′); θ, φ) for another positive pair for which D((ct′, It′); θ, φ) is much smaller. To resolve these issues, we adopted an alternative formulation inspired by NCE (sec 3), where we replace the difference function D((c, I); θ, φ) with a log-ratio function G((c, I); θ, φ):

$$G((c, I); \theta, \phi) = \ln p_m(c \mid I, \theta) - \ln p_n(c \mid I, \phi), \qquad (7)$$

and further apply the logistic function rν (Eq (3)) to G((c, I); θ, φ) to saturate the influence of easy samples. Following the notation of NCE, we let ν = Tn/Tm and turn D((c, I); θ, φ) into

$$h((c, I); \theta, \phi) = r_\nu\big(G((c, I); \theta, \phi)\big). \qquad (8)$$

Note that h((c, I); θ, φ) ∈ (0, 1). Then, we define our updated loss function as

$$\mathcal{L}(\theta; X, Y, \phi) = \sum_{t=1}^{T_m} \ln\big[h((c_t, I_t); \theta, \phi)\big] + \sum_{t=1}^{T_n} \ln\big[1 - h((c_{/t}, I_t); \theta, \phi)\big]. \qquad (9)$$

For the setting of ν = Tn/Tm, we choose ν = 1, i.e. Tn = Tm, to ensure balanced influence from positive and negative pairs. This setting consistently yields good performance in our experiments. Furthermore, we copy X K times and sample K different sets Y, in order to involve more diverse negative pairs without overfitting to them. In practice we found K = 5 is sufficient to make learning stable. Finally, our objective function is defined as

$$J(\theta) = \frac{1}{K} \frac{1}{T_m} \sum_{k=1}^{K} \mathcal{L}(\theta; X, Y_k, \phi). \qquad (10)$$

Note that J(θ) attains its upper bound 0 if positive and negative pairs can be perfectly distinguished, namely, for all t, h((ct, It); θ, φ) = 1 and h((c/t, It); θ, φ) = 0. In this case, G((ct, It); θ, φ) → ∞ and G((c/t, It); θ, φ) → −∞, which indicates that the target model gives higher probability p(ct|It) and lower probability p(c/t|It) than the reference model.
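Putting Eqs (7)-(10) together with ν = 1, the CL objective can be sketched directly from the target and reference log-probabilities. A minimal NumPy sketch (function names are ours; in practice the log-probabilities come from the two captioning models):

```python
import numpy as np

def cl_objective(log_pm_pos, log_pn_pos, log_pm_neg_k, log_pn_neg_k):
    """Contrastive Learning objective J(theta), Eqs (7)-(10), with nu = 1.

    log_pm_pos, log_pn_pos: target/reference log p(c|I) on the Tm positive
    pairs X. log_pm_neg_k, log_pn_neg_k: lists of K arrays, one per sampled
    negative set Y_k (each of size Tn = Tm).
    """
    def h(log_pm, log_pn):                 # Eq (8) via Eq (3)
        G = log_pm - log_pn                # log-ratio, Eq (7)
        return 1.0 / (1.0 + np.exp(-G))    # sigmoid, since nu = 1
    K, Tm = len(log_pm_neg_k), len(log_pm_pos)
    total = 0.0
    for log_pm_neg, log_pn_neg in zip(log_pm_neg_k, log_pn_neg_k):
        # Eq (9): positives pushed toward h -> 1, negatives toward h -> 0
        total += np.sum(np.log(h(log_pm_pos, log_pn_pos))) + \
                 np.sum(np.log(1.0 - h(log_pm_neg, log_pn_neg)))
    return total / (K * Tm)                # Eq (10)
```

J(θ) is always ≤ 0 and reaches 0 only when the classifier is perfect, matching the upper-bound discussion above.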
Toward this goal, the learning process encourages distinctiveness by suppressing negative pairs, while maintaining the overall performance by maximizing the probability values on positive pairs.

4.3 Discussion

Maximum Likelihood Estimation (MLE) is a popular learning method in image captioning [23, 24, 15]. The objective of MLE is to maximize only the probabilities of ground-truth image-caption pairs, which may lead to issues [1] such as high resemblance among generated captions. In CL, by contrast, the probabilities of ground-truth pairs are indirectly ensured by the positive constraint (the first term in Eq (9)), while the negative constraint (the second term in Eq (9)) suppresses the probabilities of mismatched pairs, forcing the target model to also learn from distinctiveness. The Generative Adversarial Network (GAN) [1] is a similar learning method involving an auxiliary model. However, in GAN the auxiliary model and the target model pursue two opposite goals, while in CL they are models on the same track. Moreover, in CL the auxiliary model is stable across the learning procedure, whereas in GAN it itself requires careful learning. It is worth noting that although our CL method bears a certain resemblance to Noise Contrastive Estimation (NCE) [5], the motivation and the actual technical formulation of CL and NCE are essentially different. For example, in NCE the logistic function arises from computing posterior probabilities, while in CL it is explicitly introduced to saturate the influence of easy samples. As CL requires only pm(c|I) and pn(c|I), the choices of the target model and the reference model can range from models based on LSTMs [6] to models in other formats, such as MRFs [4] and memory networks [18]. On the other hand, although in CL the reference model is usually fixed across the learning procedure, one can periodically replace the reference model with the latest target model.
The reasons are: (1) ∇J(θ) ≠ 0 even when the target model and the reference model are identical; (2) the latest target model is usually stronger than the reference model; and (3) a stronger reference model can provide stronger bounds and lead to a stronger target model.

COCO Online Testing Server C5
Method                          B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
Google NIC [23]                 0.713  0.542  0.407  0.309  0.254   0.530    0.943
Hard-Attention [24]             0.705  0.528  0.383  0.277  0.241   0.516    0.865
AdaptiveAttention [15]          0.735  0.569  0.429  0.323  0.258   0.541    1.001
AdaptiveAttention + CL (Ours)   0.742  0.577  0.436  0.326  0.260   0.544    1.010
PG-BCMR [14]                    0.754  0.591  0.445  0.332  0.257   0.550    1.013
ATT-FCN† [26]                   0.731  0.565  0.424  0.316  0.250   0.535    0.943
MSM† [25]                       0.739  0.575  0.436  0.330  0.256   0.542    0.984
AdaptiveAttention† [15]         0.746  0.582  0.443  0.335  0.264   0.550    1.037
Att2in† [19]                    -      -      -      0.344  0.268   0.559    1.123

COCO Online Testing Server C40
Method                          B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
Google NIC [23]                 0.895  0.802  0.694  0.587  0.346   0.682    0.946
Hard-Attention [24]             0.881  0.779  0.658  0.537  0.322   0.654    0.893
AdaptiveAttention [15]          0.906  0.823  0.717  0.607  0.347   0.689    1.004
AdaptiveAttention + CL (Ours)   0.910  0.831  0.728  0.617  0.350   0.695    1.029
PG-BCMR [14]                    -      -      -      -      -       -        -
ATT-FCN† [26]                   0.900  0.815  0.709  0.599  0.335   0.682    0.958
MSM† [25]                       0.919  0.842  0.740  0.632  0.350   0.700    1.003
AdaptiveAttention† [15]         0.918  0.842  0.740  0.633  0.359   0.706    1.051
Att2in† [19]                    -      -      -      -      -       -        -

Table 2: This table lists published results of state-of-the-art image captioning models on the online COCO testing server. † indicates an ensemble model; "-" indicates not reported. In this table, CL improves the base model (AdaptiveAttention [15]) to achieve the best results among all single models on C40.

5 Experiment

5.1 Datasets

We use two large-scale datasets to test our contrastive learning method. The first is MSCOCO [13], which contains 122,585 images for training and validation. Each image in MSCOCO has 5 human-annotated captions.
Following the splits in [15], we reserved 2,000 images for validation. A more challenging dataset, InstaPIC-1.1M [18], is used as the second dataset; it contains 648,761 images for training and 5,000 images for testing. The images and their ground-truth captions were acquired from Instagram, where people post images with related descriptions. Each image in InstaPIC-1.1M is paired with 1 caption. This dataset is challenging, as its captions are natural posts with varying formats. In practice, we reserved 2,000 images from the training set for validation. On both datasets, non-alphabet characters except emojis are removed, and alphabet characters are converted to lowercase. Words and emojis that appear fewer than 5 times are replaced with UNK, and all captions are truncated to at most 18 words and emojis. As a result, we obtained a vocabulary of size 9,567 on MSCOCO and of size 22,886 on InstaPIC-1.1M.

5.2 Settings

To study the generalization ability of the proposed CL method, we tested it on two different image captioning models, namely Neuraltalk2 [8] and AdaptiveAttention [15]. Both models are based on the encoder-and-decoder [23]; no attention mechanism is used in the former, while an adaptive attention component is used in the latter. Both models were pretrained by MLE, and we use the pretrained checkpoints as initializations. In all experiments except the one on model choices, we choose the same model and use the same initialization for the target model and the reference model. In all our experiments, we fixed the learning rate to 1e-6 for all components and used the Adam optimizer. Seven evaluation metrics were selected to compare the performance of different models: Bleu-1,2,3,4 [17], Meteor [11], Rouge [12] and Cider [22]. All ablation studies are conducted on the validation set of MSCOCO.
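The preprocessing described above (rare-token replacement with UNK, truncation to 18 tokens) can be sketched as follows; tokenization and emoji handling are simplified relative to the paper, and the helper name is ours:

```python
from collections import Counter

def build_vocab_and_truncate(captions, min_count=5, max_len=18, unk="UNK"):
    """Keep tokens appearing at least min_count times; replace the rest
    with UNK; truncate each caption to max_len tokens. Whitespace
    tokenization stands in for the paper's full pipeline.
    """
    counts = Counter(tok for cap in captions for tok in cap.lower().split())
    vocab = {tok for tok, n in counts.items() if n >= min_count}
    processed = [
        [tok if tok in vocab else unk for tok in cap.lower().split()][:max_len]
        for cap in captions
    ]
    return vocab, processed
```

The vocabulary sizes reported in the text (9,567 for MSCOCO, 22,886 for InstaPIC-1.1M) would fall out of this thresholding step.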
Figure 2: This figure illustrates several images with captions generated by different models, where AA represents AdaptiveAttention [15] learned by MLE, and AA + CL represents the same model learned by CL. Compared to AA, AA + CL generated more distinctive captions for these images. Caption pairs (AA vs. AA + CL): "Three clocks are mounted to the side of a building" vs. "Three three clocks with three different time zones"; "Two people on a yellow yellow and yellow motorcycle" vs. "Two people riding a yellow motorcycle in a forest"; "A baseball player pitching a ball on top of a field" vs. "A baseball game in progress with pitcher throwing the ball"; "A bunch of lights hanging from a ceiling" vs. "A bunch of baseballs bats hanging from a ceiling"; "Two people on a tennis court playing tennis" vs. "Two tennis players shaking hands on a tennis court"; "A fighter jet flying through a blue sky" vs. "A fighter jet flying over a lush green field"; "A row of boats on a river near a river" vs. "A row of boats docked in a river"; "A bathroom with a toilet and a sink" vs. "A bathroom with a red toilet and red walls".

Method                          B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
Google NIC [23]                 0.055  0.019  0.007  0.003  0.038   0.081    0.004
Hard-Attention [24]             0.106  0.015  0.000  0.000  0.026   0.140    0.049
CSMN [18]                       0.079  0.032  0.015  0.008  0.037   0.120    0.133
AdaptiveAttention [15]          0.065  0.026  0.011  0.005  0.029   0.093    0.126
AdaptiveAttention + CL (Ours)   0.072  0.028  0.013  0.006  0.032   0.101    0.144

Table 3: This table lists results of different models on the test split of InstaPIC-1.1M [18], where CL improves the base model (AdaptiveAttention [15]) by significant margins, achieving the best result on Cider.

5.3 Results

Overall Results We compared our best model (AdaptiveAttention [15] learned by CL) with state-of-the-art models on two datasets. On MSCOCO, we submitted the results to the online COCO testing server. The results, along with other published results, are listed in Table 2.
Compared to MLE-learned AdaptiveAttention, CL improves its performance by significant margins across all metrics. While most state-of-the-art results are achieved by ensembling multiple models, our improved AdaptiveAttention obtains competitive results as a single model. Specifically, on Cider, CL improves AdaptiveAttention from 1.003 to 1.029, the best single-model result on C40 among all published ones. In terms of Cider, with MLE one needs to combine 5 models to get a 4.5% boost on C40 for AdaptiveAttention; using CL, we improve the performance by 2.5% with just a single model. On InstaPIC-1.1M, CL improves the performance of AdaptiveAttention by 14% in terms of Cider, which is the state of the art. Some qualitative results are shown in Figure 2. It is worth noting that the proposed learning method can be used with stronger base models to obtain better results without any modification.

Compare Learning Methods Using AdaptiveAttention learned by MLE as the base model and initialization, we compared our CL with similar learning methods, including CL(P) and CL(N), which

Method                          B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
AdaptiveAttention [15] (Base)   0.733  0.572  0.433  0.327  0.260   0.540    1.042
Base + IL [21]                  0.706  0.544  0.408  0.307  0.253   0.530    1.004
Base + GAN [1]                  0.629  0.437  0.290  0.190  0.212   0.458    0.700
Base + CL(P)                    0.735  0.573  0.437  0.334  0.262   0.545    1.059
Base + CL(N)                    0.539  0.411  0.299  0.212  0.246   0.479    0.603
Base + CL(Full)                 0.755  0.598  0.460  0.353  0.271   0.559    1.142

Table 4: This table lists results of a model learned by different methods. The best result is obtained by the one learned with full CL, containing both the positive constraint and the negative constraint.
respectively contain only the positive constraint and the negative constraint of CL. We also compared with IL [21] and GAN [1]. The results on MSCOCO are listed in Table 4, where: (1) among IL, CL and GAN, only CL improves the performance of the base model, while both IL and GAN decrease the results, indicating that the trade-off between learning distinctiveness and maintaining overall performance is not well settled in IL and GAN; (2) comparing models learned by CL(P), CL(N) and CL, we found that using the positive constraint or the negative constraint alone is insufficient, as only one source of guidance is provided. While CL(P) gives the base model a smaller improvement than full CL, CL(N) degrades the base model, indicating overfitting on distinctiveness. Combining CL(P) and CL(N), CL is able to encourage distinctiveness while also emphasizing overall performance, resulting in the largest improvements on all metrics.

Target Model  Reference Model  B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
NT            -                0.697  0.525  0.389  0.291  0.238   0.516    0.882
NT            NT               0.708  0.536  0.399  0.300  0.242   0.524    0.905
NT            AA               0.716  0.547  0.411  0.311  0.249   0.533    0.956
AA            -                0.733  0.572  0.433  0.327  0.260   0.540    1.042
AA            AA               0.755  0.598  0.460  0.353  0.271   0.559    1.142

Table 5: This table lists results of different model choices on MSCOCO. In this table, NT represents Neuraltalk2 [8] and AA represents AdaptiveAttention [15]. "-" indicates the target model is learned using MLE.

Run  B-1    B-2    B-3    B-4    METEOR  ROUGE_L  CIDEr
0    0.733  0.572  0.433  0.327  0.260   0.540    1.042
1    0.755  0.598  0.460  0.353  0.271   0.559    1.142
2    0.756  0.598  0.460  0.353  0.272   0.559    1.142

Table 6: This table lists results of periodic replacement of the reference in CL. In run 0, the model is learned by MLE and is then used as both the target and the reference in run 1. In run 2, the reference is replaced with the best target from run 1.
Compare Model Choices To study the generalization ability of CL, AdaptiveAttention and Neuraltalk2 are each chosen as both the target and the reference in CL. In addition, AdaptiveAttention learned by MLE, being a better model, is chosen as the reference for Neuraltalk2. The results are listed in Table 5: compared to models learned by MLE, both AdaptiveAttention and Neuraltalk2 are improved after learning with CL. For example, on Cider, AdaptiveAttention improves from 1.042 to 1.142, and Neuraltalk2 improves from 0.882 to 0.905. Moreover, by using a stronger model, AdaptiveAttention, as the reference, Neuraltalk2 improves further from 0.905 to 0.956, which indicates that stronger references empirically provide tighter bounds for both the positive constraint and the negative constraint.

Reference Replacement As discussed in sec 4.3, one can periodically replace the reference with the latest best target model to further improve performance. In our study, starting from AdaptiveAttention learned by MLE, in each run we fix the reference model until the target saturates its performance on the validation set; we then replace the reference with the latest best target model and rerun the learning. As listed in Table 6, in the second run the relative improvement of the target model is incremental compared to its improvement in the first run. Therefore, when learning a model using CL with a sufficiently strong reference, the improvement usually saturates in the first run, and there is no need, in terms of overall performance, to replace the reference multiple times.

6 Conclusion

In this paper, we propose Contrastive Learning, a new learning method for image captioning. By employing a state-of-the-art model as a reference, the proposed method is able to maintain the optimality of the target model while encouraging it to learn from distinctiveness, an important property of high-quality captions.
On two challenging datasets, namely MSCOCO and InstaPIC-1.1M, the proposed method improves the target model by significant margins and achieves state-of-the-art results across multiple metrics. In comparative studies, the proposed method extends well to models with different structures, which clearly shows its generalization ability.

Acknowledgment

This work is partially supported by the Big Data Collaboration Research grant from SenseTime Group (CUHK Agreement No. TS1610626), the General Research Fund (GRF) of Hong Kong (No. 14236516) and the Early Career Scheme (ECS) of Hong Kong (No. 24204215).

References

[1] Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. Towards diverse and natural image descriptions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
[2] Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, and C Lawrence Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015.
[3] Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. From captions to visual concepts and back. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1473–1482, 2015.
[4] Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. Every picture tells a story: Generating sentences from images. In European Conference on Computer Vision, pages 15–29. Springer, 2010.
[5] Michael U Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(Feb):307–361, 2012.
[6] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[7] Mainak Jas and Devi Parikh. Image specificity.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2727–2736, 2015.
[8] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[9] Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2891–2903, 2013.
[10] Polina Kuznetsova, Vicente Ordonez, Tamara L Berg, and Yejin Choi. Treetalk: Composition and compression of trees for image descriptions. TACL, 2(10):351–362, 2014.
[11] Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. ACL 2014, page 376, 2014.
[12] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8. Barcelona, Spain, 2004.
[13] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[14] Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. Optimization of image description metrics using policy gradient methods. arXiv preprint arXiv:1612.00370, 2016.
[15] Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. arXiv preprint arXiv:1612.01887, 2016.
[16] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20, 2016.
[17] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics, 2002.
[18] Cesc Chunseong Park, Byeongchang Kim, and Gunhee Kim. Attend to you: Personalized image captioning with context sequence memory networks. In CVPR, 2017.
[19] Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563, 2016.
[20] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[21] Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. Context-aware captions from context-agnostic supervision. arXiv preprint arXiv:1701.02870, 2017.
[22] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575, 2015.
[23] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
[24] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, volume 14, pages 77–81, 2015.
[25] Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei. Boosting image captioning with attributes. arXiv preprint arXiv:1611.01646, 2016.
[26] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4651–4659, 2016.
[27] Luowei Zhou, Chenliang Xu, Parker Koch, and Jason J Corso. Image caption generation with text-conditional semantic attention. arXiv preprint arXiv:1606.04621, 2016.
Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net

Anirudh Goyal, MILA, Université de Montréal, anirudhgoyal9119@gmail.com
Nan Rosemary Ke, MILA, École Polytechnique de Montréal, rosemary.nan.ke@gmail.com
Surya Ganguli, Stanford University, sganguli@stanford.edu
Yoshua Bengio, MILA, Université de Montréal, yoshua.umontreal@gmail.com

Abstract

We propose a novel method to directly learn a stochastic transition operator whose repeated application provides generated samples. Traditional undirected graphical models approach this problem indirectly by learning a Markov chain model whose stationary distribution obeys detailed balance with respect to a parameterized energy function. The energy function is then modified so the model and data distributions match, with no guarantee on the number of steps required for the Markov chain to converge. Moreover, the detailed balance condition is highly restrictive: energy based models corresponding to neural networks must have symmetric weights, unlike biological neural circuits. In contrast, we develop a method for directly learning arbitrarily parameterized transition operators capable of expressing nonequilibrium stationary distributions that violate detailed balance, thereby enabling us to learn more biologically plausible asymmetric neural networks and more general non-energy based dynamical systems. The proposed training objective, which we derive via principled variational methods, encourages the transition operator to "walk back" (prefer to revert its steps) in multi-step trajectories that start at datapoints, as quickly as possible back to the original data points. We present a series of experimental results illustrating the soundness of the proposed approach, Variational Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating superior samples compared to earlier attempts to learn a transition operator.
We also show that although each rapid training trajectory is limited to a finite but variable number of steps, our transition operator continues to generate good samples well past the length of such trajectories, thereby demonstrating the match of its non-equilibrium stationary distribution to the data distribution.

Source Code: http://github.com/anirudh9119/walkback_nips17

1 Introduction

A fundamental goal of unsupervised learning involves training generative models that can understand sensory data and employ this understanding to generate, or sample, new data and make new inferences. In machine learning, the vast majority of probabilistic generative models that can learn complex probability distributions over data fall into one of two classes: (1) directed graphical models, corresponding to a finite-time feedforward generative process (e.g. variants of the Helmholtz machine (Dayan et al., 1995) like the Variational Auto-Encoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014)), or (2) energy-function-based undirected graphical models, corresponding to sampling from a stochastic process whose equilibrium stationary distribution obeys detailed balance with respect to the energy function (e.g. various Boltzmann machines (Salakhutdinov and Hinton, 2009)).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

This detailed balance condition is highly restrictive: for example, energy-based undirected models corresponding to neural networks require symmetric weight matrices and very specific computations which may not match well with what biological neurons or analog hardware could compute. In contrast, biological neural circuits are capable of powerful generative dynamics enabling us to model the world and imagine new futures. Cortical computation is highly recurrent and therefore its generative dynamics cannot simply map to the purely feed-forward, finite-time generative process of a directed model.
Moreover, the recurrent connectivity of biological circuits is not symmetric, and so their generative dynamics cannot correspond to sampling from an energy-based undirected model. Thus, the asymmetric biological neural circuits of our brain instantiate a type of stochastic dynamics arising from the repeated application of a transition operator* whose stationary distribution over neural activity patterns is a non-equilibrium distribution that does not obey detailed balance with respect to any energy function. Despite these fundamental properties of brain dynamics, machine learning approaches to training generative models currently lack effective methods to model complex data distributions through the repeated application of a transition operator that is not indirectly specified through an energy function, but rather is directly parameterized in ways that are inconsistent with the existence of any energy function. Indeed, the lack of such methods constitutes a glaring gap in the pantheon of machine learning methods for training probabilistic generative models. The fundamental goal of this paper is to provide a step toward filling this gap by proposing a novel method to learn such directly parameterized transition operators, thereby providing an empirical method to control the stationary distributions of non-equilibrium stochastic processes that do not obey detailed balance, and to match these distributions to data. The basic idea underlying our training approach is to start from a training example and iteratively apply the transition operator while gradually increasing the amount of noise being injected (i.e., the temperature). This heating process yields a trajectory that starts from the data manifold and walks away from the data, due both to the heating and to the mismatch between the model and the data distribution.
Similarly to the update of a denoising autoencoder, we then modify the parameters of the transition operator so as to make the reverse of this heated trajectory more likely under a reverse cooling schedule. This encourages the transition operator to generate stochastic trajectories that evolve towards the data distribution, by learning to walk back the heated trajectories starting at data points. This walkback idea had been introduced for generative stochastic networks (GSNs) and denoising autoencoders (Bengio et al., 2013b) as a heuristic, and without temperature annealing. Here, we derive the specific objective function for learning the parameters through a principled variational lower bound, hence we call our training method variational walkback (VW). Despite the fact that the training procedure involves walking back a set of trajectories that last a finite, but variable number of time-steps, we find empirically that this yields a transition operator that continues to generate sensible samples for many more time-steps than are used to train, demonstrating that our finite time training procedure can sculpt the non-equilibrium stationary distribution of the transition operator to match the data distribution. We show how VW emerges naturally from a variational derivation, with the need for annealing arising out of the objective of making the variational bound as tight as possible. We then describe experimental results illustrating the soundness of the proposed approach on the MNIST, CIFAR-10, SVHN and CelebA datasets. Intriguingly, we find that our finite time VW training process involves modifications of variational methods for training directed graphical models, while our potentially asymptotically infinite generative sampling process corresponds to non-equilibrium generalizations of energy based undirected models. 
Thus VW goes beyond the two disparate model classes of undirected and directed graphical models, while simultaneously incorporating good ideas from each.

2 The Variational Walkback Training Process

Our goal is to learn a stochastic transition operator p_T(s'|s) such that its repeated application yields samples from the data manifold. Here T reflects an underlying temperature, which we will modify during the training process. The transition operator is further specified by other parameters which must be learned from data. When K steps are chosen to generate a sample, the generative process has joint probability

p(s_0^K) = p(s_K) \prod_{t=1}^{K} p_{T_t}(s_{t-1} \mid s_t),

where T_t is the temperature at step t. We first give an intuitive description of our learning algorithm before deriving it via variational methods in the next section. The basic idea, as illustrated in Fig. 1 and Algorithm 1, is to follow a walkback strategy similar to that introduced in Alain and Bengio (2014).

* A transition operator maps the previous-state distribution to a next-state distribution, and is implemented by a stochastic transformation which, from the previous state of a Markov chain, generates the next state.

Figure 1: Variational Walkback framework. The generative process is represented by the blue arrows with the sequence of p_{T_t}(s_{t-1}|s_t) transitions. The destructive forward process starts at a datapoint (from q_{T_0}(s_0)) and gradually heats it through applications of q_{T_t}(s_t|s_{t-1}). Larger temperatures on the right correspond to a flatter distribution, so the whole destructive forward process maps the data distribution to a Gaussian and the creation process operates in reverse.

In particular, imagine a destructive process q_{T_{t+1}}(s_{t+1}|s_t) (red arrows in Fig. 1), which starts from a data point s_0 = x and evolves it stochastically to obtain a trajectory s_0, ..., s_K ≡ s_0^K, i.e.,

q(s_0^K) = q(s_0) \prod_{t=1}^{K} q_{T_t}(s_t \mid s_{t-1}),

where q(s_0) is the data distribution.
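As a concrete illustration, the destructive q chain can be sketched numerically. Everything here is a hypothetical stand-in, not the paper's trained operator: a contractive linear map with Gaussian noise whose variance scales linearly with temperature, as assumed throughout the paper. It only shows how heating walks a sample away from a data point.

```python
import numpy as np

rng = np.random.default_rng(0)

def heated_trajectory(x0, temps, alpha=0.5, sigma2=0.1):
    """Destructive process q: start at a datapoint and repeatedly apply a
    toy contractive transition whose injected noise variance is T * sigma2."""
    s = np.asarray(x0, dtype=float)
    traj = [s]
    for T in temps:
        s = (1.0 - alpha) * s + rng.normal(scale=np.sqrt(T * sigma2), size=s.shape)
        traj.append(s)
    return traj

# doubling temperatures flatten the chain's distribution step by step
traj = heated_trajectory(np.zeros(2), temps=[1.0, 2.0, 4.0, 8.0, 16.0])
```

Running the same loop with the temperatures reversed (cooling) and a trained operator is exactly the generative direction described in the text.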
Note that the p and q chains share the same parameters for the transition operator (one running backwards in time and one forwards), but they start from different priors for their first step: q(s_0) is the data distribution, while p(s_0) is a flat factorized prior (e.g. Gaussian). The training procedure trains the transition operator p_T to make reverse transitions of the destructive process more likely. For this reason we index time so that the destructive process operates forward in time, while the reverse generative process operates backwards in time, with the data distribution occurring at t = 0. In particular, we need only train the transition operator to reverse time by one step at each step, making it unnecessary to solve a deep credit assignment problem by performing backpropagation through time across multiple walk-back steps. Overall, the destructive process generates trajectories that walk away from the data manifold, and the transition operator p_T learns to walk back these trajectories to sculpt the stationary distribution of p_T at T = 1 to match the data distribution. Because we choose q_T to have the same parameters as p_T, they have the same transition operator but not the same joint over the whole sequence, because of differing initial distributions for each trajectory. We also choose to increase temperature with time in the destructive process, following a temperature schedule T_1 ≤ · · · ≤ T_K. Thus the forward destructive (reverse generative) process corresponds to a heating (cooling) protocol. This training procedure is similar in spirit to DAEs (Vincent et al., 2008) or NET (Sohl-Dickstein et al., 2015), but with one major difference: the destructive process in these works corresponds to the addition of random noise which knows nothing about the current generative process during training.
To understand why tying together destruction and creation may be a good idea, consider the special case in which p_T corresponds to a stochastic process whose stationary distribution obeys detailed balance with respect to the energy function of an undirected graphical model. Learning any such model involves two fundamental goals: the model must place probability mass (i.e. lower the energy function) where the data is located, and remove probability mass (i.e. raise the energy function) elsewhere. Probability modes where there is no data are known as spurious modes, and a fundamental goal of learning is to hunt down these spurious modes and remove them. Making the destructive process identical to the transition operator to be learned is motivated by the notion that the destructive process should then efficiently explore the spurious modes of the current transition operator. The walkback training will then destroy these modes. In contrast, in DAEs and NETs, since the destructive process corresponds to the addition of unstructured noise that knows nothing about the generative process, it is not clear that such an agnostic destructive process will efficiently seek out the spurious modes of the reverse, generative process. We chose the annealing schedule empirically to minimize training time. The generative process starts by sampling a state s_K from a broad Gaussian p*(s_K), whose variance is initially equal to the total data variance σ²_max (but can be later adapted to match the final samples from the inference trajectories). Then we sample from p_{T_max}(s_{K−1}|s_K), where T_max is a high enough temperature that the resultant injected noise can move the state across the whole domain of the data. The injected noise used to simulate the effects of finite temperature has variance linearly proportional to temperature.
Thus if σ² is the equivalent noise injected by the transition operator p_T at T = 1, we choose T_max = σ²_max / σ² to achieve the goal of the first sample s_{K−1} being able to move across the entire range of the data distribution. Then we successively cool the temperature as we sample "previous" states s_{t−1} according to p_T(s_{t−1}|s_t), with T reduced by a factor of 2 at each step, followed by n steps at temperature 1. This cooling protocol requires the number of steps to be

K = \log_2 T_{max} + n,   (1)

in order to go from T = T_max to T = 1 in K steps. We choose K from a random distribution. Thus the training procedure trains p_T to rapidly transition from a simple Gaussian distribution to the data distribution in a finite but variable number of steps. Ideally, this training procedure should then indirectly create a transition operator p_T at T = 1 whose repeated iteration samples the data distribution with a relatively rapid mixing time. Interestingly, this intuitive learning algorithm for a recurrent dynamical system, formalized in Algorithm 1, can be derived in a principled manner from variational methods that are usually applied to directed graphical models, as we see next.

Algorithm 1 VariationalWalkback(θ)
Train a generative model associated with a transition operator p_T(s|s′) at temperature T (temperature 1 for sampling from the actual model), parameterized by θ. This transition operator injects noise of variance Tσ² at each step, where σ² is the noise level at temperature 1.
Require: Transition operator p_T(s|s′) from which one can both sample and compute the gradient of log p_T(s|s′) with respect to parameters θ, given s and s′.
Require: Precomputed σ²_max, initially the data variance (or squared diameter).
Require: N₁ > 1, the number of initial temperature-1 steps of a q trajectory (or ending a p trajectory).
repeat
  Set p* to be a Gaussian with the mean and variance of the data.
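Equation (1) and the halving schedule can be written out directly. `cooling_schedule` is a hypothetical helper name; its only inputs are the quantities defined in the text (σ²_max, σ², and the number n of final temperature-1 steps), and it uses the ceiling from Algorithm 1.

```python
import math

def cooling_schedule(sigma2_max, sigma2, n):
    """Return K = ceil(log2(T_max)) + n and the temperature sequence that
    starts at T_max = sigma2_max / sigma2, halves each step, then holds T = 1."""
    T_max = sigma2_max / sigma2
    K = math.ceil(math.log2(T_max)) + n      # Eq. (1), with the ceil of Algorithm 1
    temps = [T_max / 2.0 ** i for i in range(K - n)] + [1.0] * n
    return K, temps

K, temps = cooling_schedule(sigma2_max=16.0, sigma2=1.0, n=3)
# K = 7 and temps = [16.0, 8.0, 4.0, 2.0, 1.0, 1.0, 1.0]
```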
  T_max ← σ²_max / σ²
  Sample n as a uniform integer between 0 and N₁
  K ← ceil(log₂ T_max) + n
  Sample x ∼ data (or equivalently sample a minibatch, to parallelize computation and process each element of the minibatch independently)
  Let s₀ = x, set the initial temperature T = 1, and initialize L = 0
  for t = 1 to K do
    Sample s_t ∼ p_T(s|s_{t−1})
    Increment L ← L + log p_T(s_{t−1}|s_t)
    Update parameters with the log-likelihood gradient ∂ log p_T(s_{t−1}|s_t)/∂θ
    If t > n, increase the temperature with T ← 2T
  end for
  Increment L ← L + log p*(s_K)
  Update the mean and variance of p* to match the accumulated first- and second-moment statistics of the samples of s_K
until convergence, monitoring L on a validation set and doing early stopping

3 Variational Derivation of Walkback

The marginal probability of a data point s_0 at the end of the K-step generative cooling process is

p(s_0) = \sum_{s_1^K} p_{T_0}(s_0 \mid s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} \mid s_t) \right) p^*(s_K)   (2)

where s_1^K = (s_1, s_2, ..., s_K), and v = s_0 is a visible variable in our generative process, while the cooling trajectory that led to it can be thought of as a latent, hidden variable h = s_1^K. Recall the decomposition of the marginal log-likelihood via a variational lower bound,

\ln p(v) \equiv \ln \sum_h p(v \mid h) p(h) = \underbrace{\sum_h q(h \mid v) \ln \frac{p(v, h)}{q(h \mid v)}}_{L} + D_{KL}[q(h \mid v) \,\|\, p(h \mid v)].   (3)

Here L is the variational lower bound which motivates the proposed training procedure, and q(h|v) is a variational approximation to p(h|v). Applying this decomposition to v = s_0 and h = s_1^K, we find

\ln p(s_0) = \sum_{s_1^K} q(s_1^K \mid s_0) \ln \frac{p(s_0 \mid s_1^K)\, p(s_1^K)}{q(s_1^K \mid s_0)} + D_{KL}[q(s_1^K \mid s_0) \,\|\, p(s_1^K \mid s_0)].   (4)

Similarly to the EM algorithm, we aim to approximately maximize the log-likelihood with a 2-step procedure. Let θ_p be the parameters of the generative model p and θ_q be the parameters of the approximate inference procedure q. Before seeing the next example we have θ_q = θ_p. Then in the first step we update θ_p towards maximizing the variational bound L, for example by a stochastic gradient descent step.
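Algorithm 1 can be exercised end to end on a one-dimensional toy model. Everything specific here is an assumption for illustration, not the paper's implementation: the transition operator is Gaussian with mean (1−α)s + αθs and variance Tσ², θ is its single learned parameter, the data are draws from N(0, σ²_max), and the gradient of log p_T is computed analytically (with clipping for stability) rather than by a framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy instance of Algorithm 1 (all names hypothetical): p_T(s'|s) = N(mu(s), T*sigma2)
alpha, sigma2, sigma2_max, N1, lr = 0.5, 0.25, 4.0, 3, 0.01
theta = 0.9                                   # the only parameter of mu

def mu(s):
    return (1.0 - alpha) * s + alpha * theta * s

def grad_log_p(s_prev, s, T):                 # d/dtheta of log p_T(s_prev | s)
    return (s_prev - mu(s)) * alpha * s / (T * sigma2)

T_max = sigma2_max / sigma2
for _ in range(200):                          # loop over datapoints
    n = int(rng.integers(0, N1 + 1))
    K = int(np.ceil(np.log2(T_max))) + n
    s, T = rng.normal(scale=np.sqrt(sigma2_max)), 1.0   # s_0 = x ~ data
    for t in range(1, K + 1):
        s_next = mu(s) + rng.normal(scale=np.sqrt(T * sigma2))   # s_t ~ p_T(.|s_{t-1})
        # walkback step: raise log p_T(s_{t-1} | s_t)
        theta += lr * float(np.clip(grad_log_p(s, s_next, T), -5.0, 5.0))
        s = s_next
        if t > n:
            T *= 2.0                          # heat after the n temperature-1 steps
```

The inner loop is exactly the heated q trajectory; each update credits the operator for the one-step reverse (cooling) transition, so no backpropagation through time is needed.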
In the second step, we update θ_q by setting θ_q ← θ_p, with the objective of reducing the KL term in the above decomposition. See Sec. 3.1 below regarding conditions for the tightness of the bound, which may not be perfect, yielding a possibly biased gradient when we force the constraint θ_p = θ_q. We continue iterating this procedure with training examples s_0. We can obtain an unbiased Monte Carlo estimator of L from a single trajectory as follows:

L(s_0) \approx \sum_{t=1}^{K} \ln \frac{p_{T_t}(s_{t-1} \mid s_t)}{q_{T_t}(s_t \mid s_{t-1})} + \ln p^*(s_K)   (5)

with respect to p_θ, where s_0 is sampled from the data distribution q_{T_0}(s_0), and the single sequence s_1^K is sampled from the heating process q(s_1^K | s_0). We are making the reverse of heated trajectories more likely under the cooling process, leading to Algorithm 1. Such variational bounds have been used successfully in many learning algorithms in the past, such as the VAE (Kingma and Welling, 2013), except that they use an explicitly different set of parameters for p and q. Some VAE variants (Sønderby et al., 2016; Kingma et al., 2016), however, mix the p-parameters implicitly in forming q, by using the likelihood gradient to iteratively form the approximate posterior.

3.1 Tightness of the variational lower bound

As seen in (4), the gap between L(s_0) and ln p(s_0) is controlled by D_{KL}[q(s_1^K|s_0) \| p(s_1^K|s_0)], and is therefore tight when the distribution of the heated trajectory, starting from a point s_0, matches the posterior distribution of the cooled trajectory ending at s_0. Explicitly, this KL divergence is given by

D_{KL} = \sum_{s_1^K} q(s_1^K \mid s_0) \ln \left[ \frac{p(s_0)}{p^*(s_K)} \prod_{t=1}^{K} \frac{q_{T_t}(s_t \mid s_{t-1})}{p_{T_t}(s_{t-1} \mid s_t)} \right].   (6)

As the heating process q unfolds forward in time, while the cooling process p unfolds backwards in time, we introduce the time reversal of the transition operator p_T, denoted by p_T^R, as follows. Under repeated application of the transition operator p_T, state s settles into a stationary distribution π_T(s) at temperature T.
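The single-trajectory estimator (5) is easy to compute once the transition density is tractable. The Gaussian operator below is a hypothetical stand-in with a closed-form log-density; p and q share it, as in the text, so only the conditioning direction of the forward and reverse transitions differs.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha, sigma2 = 0.5, 0.25

def log_gauss(x, m, v):
    return -0.5 * ((x - m) ** 2 / v + np.log(2.0 * np.pi * v))

def log_trans(s_to, s_from, T):
    """Shared operator: log p_T(s_to | s_from) = log q_T(s_to | s_from)."""
    return log_gauss(s_to, (1.0 - alpha) * s_from, T * sigma2)

def estimate_L(s0, temps, prior_var=4.0):
    """One-sample Monte Carlo estimate of Eq. (5): heat forward under q,
    score the reverse transitions under p, then add log p*(s_K)."""
    L, s = 0.0, s0
    for T in temps:
        s_next = (1.0 - alpha) * s + rng.normal(scale=np.sqrt(T * sigma2))
        L += log_trans(s, s_next, T) - log_trans(s_next, s, T)
        s = s_next
    return L + log_gauss(s, 0.0, prior_var)

L_hat = estimate_L(s0=0.3, temps=[1.0, 2.0, 4.0, 8.0])
```

Averaging `estimate_L` over many heated trajectories from the same s_0 would tighten the Monte Carlo estimate of the bound.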
The probability of observing a transition s_t → s_{t-1} under p_T in its stationary state is then p_T(s_{t-1}|s_t) π_T(s_t). The time reversal p_T^R is the transition operator that makes the reverse transition equally likely for all state pairs, and therefore obeys

p_T(s_{t-1} \mid s_t)\, \pi_T(s_t) = p_T^R(s_t \mid s_{t-1})\, \pi_T(s_{t-1})   (7)

for all pairs of states s_{t-1} and s_t. It is well known that p_T^R is a valid stochastic transition operator and has the same stationary distribution π_T(s) as p_T. Furthermore, the process p_T obeys detailed balance if and only if it is invariant under time reversal, so that p_T = p_T^R. To better understand the KL divergence in (6), at each temperature T_t we use relation (7) to replace the cooling process p_{T_t}, which occurs backwards in time, with its time reversal, unfolding forward in time, at the expense of introducing ratios of stationary probabilities. We also exploit the fact that q and p are the same transition operator. With these substitutions in (6), we find

D_{KL} = \sum_{s_1^K} q(s_1^K \mid s_0) \ln \prod_{t=1}^{K} \frac{p_{T_t}(s_t \mid s_{t-1})}{p_{T_t}^R(s_t \mid s_{t-1})} + \sum_{s_1^K} q(s_1^K \mid s_0) \ln \left[ \frac{p(s_0)}{p^*(s_K)} \prod_{t=1}^{K} \frac{\pi_{T_t}(s_t)}{\pi_{T_t}(s_{t-1})} \right].   (8)

The first term in (8) is simply the KL divergence between the distribution over heated trajectories and the time reversal of the cooled trajectories. Since the heating (q) and cooling (p) processes are tied, this KL divergence is 0 if and only if p_{T_t} = p_{T_t}^R for all t. This time-reversal invariance requirement for vanishing KL divergence is equivalent to the transition operator p_T obeying detailed balance at all temperatures. Now intuitively, the second term can be made small in the limit where K is large and the temperature sequence is annealed slowly. To see why, note that we can write the ratio of probabilities in this term as

\frac{p(s_0)}{\pi_{T_1}(s_0)} \cdot \frac{\pi_{T_1}(s_1)}{\pi_{T_2}(s_1)} \cdots \frac{\pi_{T_{K-1}}(s_{K-1})}{\pi_{T_K}(s_{K-1})} \cdot \frac{\pi_{T_K}(s_K)}{p^*(s_K)},
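Equation (7) and the surrounding claims are easy to verify on a small finite-state chain, where π and all transition probabilities are explicit. The 3-state matrix below is an arbitrary example chosen to violate detailed balance; `stationary` and `time_reversal` are hypothetical helper names.

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a row-stochastic matrix P (leading left eigenvector)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def time_reversal(P):
    """Eq. (7): P_R[i, j] = pi[j] * P[j, i] / pi[i]."""
    pi = stationary(P)
    return np.diag(1.0 / pi) @ P.T @ np.diag(pi)

# an asymmetric chain that mostly cycles 0 -> 1 -> 2 -> 0, so P != P_R
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
P_R = time_reversal(P)

assert np.allclose(P_R.sum(axis=1), 1.0)            # P_R is a valid operator ...
assert np.allclose(stationary(P_R), stationary(P))  # ... with the same pi
assert not np.allclose(P, P_R)                      # and detailed balance fails here
```

For a chain obeying detailed balance the last assertion would fail, since detailed balance is exactly time-reversal invariance, P = P_R.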
(9)

which is similar in shape (but arising in a different context) to the product of probability ratios computed for annealed importance sampling (Neal, 2001) and reverse annealed importance sampling (Burda et al., 2014). Here it is manifest that, under slow incremental annealing schedules, we are comparing probabilities of the same state under slightly different distributions, so all ratios are close to 1. For example, under many steps with slow annealing, the generative process approximately reaches its stationary distribution, p(s_0) ≈ π_{T_1}(s_0). This slow annealing to go from p*(s_K) to p(s_0) corresponds to the quasistatic limit in statistical physics, where the work required to perform the transformation is equal to the free-energy difference between states. To go faster, one must perform excess work, above and beyond the free-energy difference, and this excess work is dissipated as heat into the surrounding environment. By writing the distributions in terms of energies and free energies, \pi_{T_t}(s_t) \propto e^{-E(s_t)/T_t}, p^*(s_K) = e^{-[E_K(s_K) - F_K]}, and p(s_0) = e^{-[E_0(s_0) - F_0]}, one can see that the second term in the KL divergence is closely related to the average heat dissipation in a finite-time heating process (see e.g. Crooks (2000)). This intriguing connection between the size of the gap in a variational lower bound and the excess heat dissipation in a finite-time heating process opens the door to exploiting a wealth of work in statistical physics for finding optimal thermodynamic paths that minimize heat dissipation (Schmiedl and Seifert, 2007; Sivak and Crooks, 2012; Gingrich et al., 2016), which may provide new ideas to improve variational inference. In summary, tightness of the variational bound can be achieved if: (1) the transition operator of p approximately obeys detailed balance, and (2) the temperature annealing is done slowly over many steps.
And intriguingly, the magnitude of the looseness of the bound is related to two physical quantities: (1) the degree of irreversibility of the transition operator p, as measured by the KL divergence between p and its time reversal p^R, and (2) the excess physical work, or equivalently, excess heat dissipated, in performing the heating trajectory. To check, post hoc, potential looseness of the variational lower bound, we can measure the degree of irreversibility of p_T by estimating the KL divergence D_{KL}(p_T(s' \mid s)\,\pi_T(s) \,\|\, p_T(s \mid s')\,\pi_T(s')), which is 0 if and only if p_T obeys detailed balance and is therefore time-reversal invariant. This quantity can be estimated by

\frac{1}{K} \sum_{t=1}^{K} \ln \frac{p_T(s_{t+1} \mid s_t)}{p_T(s_t \mid s_{t+1})},

where s_1^K is a long sequence sampled by repeatedly applying the transition operator p_T from a draw s_1 ∼ π_T. If this quantity is strongly positive (negative), then forward transitions are more (less) likely than reverse transitions, and the process p_T is not time-reversal invariant. This estimated KL divergence can be normalized by the corresponding entropy to get a relative value (with 3.6% measured on a trained model, as detailed in the Appendix).

3.2 Estimating log-likelihood via importance sampling

We can derive an importance sampling estimate of the negative log-likelihood by the following procedure. For each training example x, we sample a large number of destructive paths (as in Algorithm 1). We then estimate the log-likelihood log p(x) via

\log p(x) \approx \log \mathbb{E}_{x \sim p_D,\; s_1^K \sim q_{T_1}(s_1 \mid s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t \mid s_{t-1})} \left[ \frac{ p_{T_0}(s_0 = x \mid s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} \mid s_t) \right) p^*(s_K) }{ q_{T_0}(x)\, q_{T_1}(s_1 \mid s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t \mid s_{t-1}) } \right]   (10)

3.3 VW transition operators and their convergence

The VW approach allows considerable freedom in choosing transition operators, obviating the need for specifying them indirectly through an energy function. Here we consider Bernoulli and isotropic Gaussian transition operators for binary and real-valued data respectively.
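The post-hoc irreversibility estimate above can be demonstrated on finite-state chains, where π and the transition probabilities are explicit. Both chains and the helper below are illustrative assumptions: a symmetric two-state chain, for which detailed balance holds and the estimate is exactly zero, versus a cyclic chain, for which it is strongly positive.

```python
import numpy as np

rng = np.random.default_rng(3)

def irreversibility(P, K=10_000, burn_in=200):
    """(1/K) * sum_t log P(s[t+1]|s[t]) / P(s[t]|s[t+1]) along one trajectory."""
    n = P.shape[0]
    s = int(rng.integers(n))
    for _ in range(burn_in):                 # approximate a draw from pi
        s = int(rng.choice(n, p=P[s]))
    total = 0.0
    for _ in range(K):
        s_next = int(rng.choice(n, p=P[s]))
        total += np.log(P[s, s_next] / P[s_next, s])
        s = s_next
    return total / K

symmetric = np.array([[0.5, 0.5],
                      [0.5, 0.5]])           # obeys detailed balance
cyclic = np.array([[0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.8, 0.1, 0.1]])         # mostly cycles 0 -> 1 -> 2 -> 0
d_sym = irreversibility(symmetric)           # every term is log(0.5/0.5) = 0
d_cyc = irreversibility(cyclic)              # well above 0
```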
The form of the stochastic state update imitates a discretized version of the Langevin differential equation. The Bernoulli transition operator computes the element-wise probability as

\rho = \mathrm{sigmoid}\!\left( \frac{(1-\alpha)\, s_{t-1} + \alpha\, F_\rho(s_{t-1})}{T_t} \right).

The Gaussian operator computes a conditional mean and standard deviation via \mu = (1-\alpha)\, s_{t-1} + \alpha\, F_\mu(s_{t-1}) and \sigma = T_t \log(1 + e^{F_\sigma(s_{t-1})}). Here the functions F can be arbitrary parametrized functions, such as a neural net, and T_t is the temperature at time step t.

A natural question is when the finite-time VW training process will learn a transition operator whose stationary distribution matches the data distribution, so that repeated sampling far beyond the training time continues to yield data samples. To partially address this, we prove the following theorem:

Proposition 1. If p has enough capacity, training data and training time, with slow enough annealing and a small departure from reversibility so that p can match q, then at convergence of VW training, the transition operator p_T at T = 1 has the data-generating distribution as its stationary distribution.

A proof can be found in the Appendix, but the essential intuition is that if the finite-time generative process converges to the data distribution at multiple different VW walkback time-steps, then it remains on the data distribution for all future time at T = 1. We cannot always guarantee the preconditions of this theorem, but we find experimentally that its essential outcome holds in practice.

4 Related Work

A variety of learning algorithms can be cast in the framework of Fig. 1. For example, for directed graphical models like VAEs (Kingma and Welling, 2013; Rezende et al., 2014), DBNs (Hinton et al., 2006), and Helmholtz machines in general, q corresponds to a recognition model, transforming data to a latent space, while p corresponds to a generative model that goes from latent to visible data in a finite number of steps.
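One sampling step of each operator can be written directly from these formulas. The F functions below are placeholder linear maps standing in for the neural nets; σ is used exactly as written in the text (σ = T_t log(1 + e^{F_σ})), so the injected noise grows with temperature.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bernoulli_step(s, T, F_rho, alpha=0.5):
    """One Bernoulli-operator step: rho = sigmoid(((1-a)s + a*F(s)) / T)."""
    rho = sigmoid(((1.0 - alpha) * s + alpha * F_rho(s)) / T)
    return (rng.random(s.shape) < rho).astype(float)

def gaussian_step(s, T, F_mu, F_sigma, alpha=0.5):
    """One Gaussian-operator step with mu and sigma as in the text."""
    mu = (1.0 - alpha) * s + alpha * F_mu(s)
    sigma = T * np.log1p(np.exp(F_sigma(s)))      # softplus scaled by temperature
    return mu + sigma * rng.normal(size=s.shape)

s_bin = bernoulli_step(rng.integers(0, 2, size=8).astype(float), T=2.0,
                       F_rho=lambda s: 2.0 * s - 1.0)
s_real = gaussian_step(np.zeros(8), T=2.0,
                       F_mu=lambda s: 0.5 * s, F_sigma=lambda s: s)
```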
None of these directed models are designed to learn transition operators that can be iterated ad infinitum, as we do. Moreover, learning such models involves a complex, deep credit assignment problem, limiting the number of unobserved latent layers that can be used to generate data. Similar issues of limited trainable depth in a finite-time feedforward generative process apply to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which further eschew the goal of specifically assigning probabilities to data points. Our method circumvents this deep credit assignment problem by providing training targets at each time-step; in essence, each past time-step of the heated trajectory constitutes a training target for the future output of the generative operator p_T, thereby obviating the need for backpropagation across multiple steps. In contrast to VW, Generative Stochastic Networks (GSN) (Bengio et al., 2014) and DRAW (Gregor et al., 2015) also require training iterative operators by backpropagating across multiple computational steps. VW is similar in spirit to DAE (Bengio et al., 2013b) and NET approaches (Sohl-Dickstein et al., 2015), but it retains two crucial differences. First, in each of these frameworks, q corresponds to a very simple destruction process in which unstructured Gaussian noise is injected into the data. This agnostic destruction process has no knowledge of the underlying generative process p that is to be learned, and therefore cannot be expected to efficiently explore spurious modes, or regions of space unoccupied by data to which p assigns high probability. VW has the advantage of using a high-temperature version of the model p itself as part of the destructive process, and so should be better than random noise injection at finding these spurious modes.
A second crucial difference is that VW ties the weights of the transition operator across time-steps, thereby enabling us to learn a bona fide transition operator that can be iterated well beyond the training time, unlike DAEs and NET. Another related approach to learning a transition operator with a denoising cost, developed in parallel, is Infusion training (Bordes et al., 2017), which tries to reconstruct the target data in the chain, instead of the previous step in the destructive chain.

5 Experiments

VW is evaluated on four datasets: MNIST, CIFAR-10 (Krizhevsky and Hinton, 2009), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015). The MNIST, SVHN and CIFAR-10 datasets were used as is, except for uniform noise added to MNIST and CIFAR-10, as per Theis et al. (2016); the aligned and cropped version of CelebA was scaled from 218 x 178 pixels to 78 x 64 pixels and center-cropped at 64 x 64 pixels (Liu et al., 2015). We used the Adam optimizer (Kingma and Ba, 2014) and the Theano framework (Al-Rfou et al., 2016). More details are in the Appendix, and code for training and generation is at http://github.com/anirudh9119/walkback_nips17. Table 1 compares with published NET results on CIFAR-10.

Image Generation. Figures 3, 5, 6, 7 and 8 (see supplementary section) show VW samples on each of the datasets. For MNIST, real-valued views of the data are modeled.

Image Inpainting. We clamped the bottom part of CelebA test images (for each step during sampling) and ran the images through the model. Figure 1 (see supplementary section) shows the generated conditional samples.
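The clamping used for inpainting can be sketched generically: at every sampling step the observed region is reset to its known values, so the chain only resamples the masked part. The transition step below is an arbitrary placeholder, not the trained VW operator.

```python
import numpy as np

rng = np.random.default_rng(5)

def inpaint(step_fn, x_obs, mask, n_steps=50):
    """mask == 1 marks observed pixels; step_fn applies one transition-operator step."""
    s = rng.normal(size=x_obs.shape)
    for _ in range(n_steps):
        s = step_fn(s)
        s = mask * x_obs + (1.0 - mask) * s     # clamp the known region every step
    return s

x_obs = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[2:, :] = 1.0                                # clamp the bottom half, as for CelebA
result = inpaint(lambda s: 0.9 * s + 0.1 * rng.normal(size=s.shape), x_obs, mask)
```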
Table 1: Comparisons on CIFAR-10; test-set average number of bits per data dimension (lower is better).

Model                                     | bits/dim ≤
NET (Sohl-Dickstein et al., 2015)         | 5.40
VW (20 steps)                             | 5.20
Deep VAE                                  | < 4.54
VW (30 steps)                             | 4.40
DRAW (Gregor et al., 2015)                | < 4.13
ResNet VAE with IAF (Kingma et al., 2016) | 3.11

6 Discussion

6.1 Summary of results

Our main advance involves using variational inference to learn recurrent transition operators that can rapidly approach the data distribution and then be iterated much longer than the training time while still remaining on the data manifold. The innovations enabling us to achieve this are: (a) tying weights across time, (b) tying the destruction and generation processes together to efficiently destroy spurious modes, (c) using the past of the destructive process to train the future of the creation process, thereby circumventing issues with deep credit assignment (like NET), (d) introducing an aggressive temperature-annealing schedule to rapidly approach the data distribution (e.g. NET takes 1000 steps while VW takes only 30 steps to do so), and (e) introducing variable trajectory lengths during training to encourage the generator to stay on the data manifold for times longer than the training sequence length. Indeed, it is often difficult to sample from recurrent neural networks for many more time steps than the duration of their training sequences, especially for non-symmetric networks that could exhibit chaotic activity. Transition operators learned by VW can be stably sampled for exceedingly long times; for example, in experiments (see supplementary section) we trained our model on CelebA for 30 steps, while at test time we sampled for 100,000 time-steps. Overall, our method of learning a transition operator outperforms previous attempts at learning transition operators (i.e. VAE, GSN and NET) using a local learning rule.
Overall, we introduced a new approach to learning non-energy-based transition operators which inherits advantages from several previous generative models, including a training objective that requires rapidly generating the data in a finite number of steps (as in directed models), re-using the same parameters for each step (as in undirected models), directly parametrizing the generator (as in GANs and DAEs), and using the model itself to quickly find its own spurious modes (the walk-back idea). We also anchor the algorithm in a variational bound and show how its analysis suggests using the same transition operator for the destruction (inference) process and the creation (generation) process, with a cooling schedule during generation and a reverse heating schedule during inference.

6.2 New bridges between variational inference and non-equilibrium statistical physics

We connected the variational gap to physical notions like reversibility and heat dissipation. This novel bridge between variational inference and concepts like excess heat dissipation in non-equilibrium statistical physics could potentially open the door to improving variational inference by exploiting a wealth of work in statistical physics. For example, physical methods for finding optimal thermodynamic paths that minimize heat dissipation (Schmiedl and Seifert, 2007; Sivak and Crooks, 2012; Gingrich et al., 2016) could potentially be exploited to tighten lower bounds in variational inference. Moreover, motivated by the relation between the variational gap and reversibility, we verified empirically that the model converges towards an approximately reversible chain (see Appendix), making the variational bound tighter.

6.3 Neural weight asymmetry

A fundamental aspect of our approach is that we can train stochastic processes that need not exactly obey detailed balance, yielding access to a larger and potentially more powerful space of models.
In particular, this enables us to relax the weight-symmetry constraint of undirected graphical models corresponding to neural networks, yielding a more brain-like iterative computation characteristic of asymmetric biological neural circuits. Our approach thus avoids the biologically implausible requirement of weight transport (Lillicrap et al., 2014), which arises as a consequence of imposing weight symmetry as a hard constraint. With VW, this hard constraint is removed, although the training procedure itself may converge towards more symmetry. Such an approach towards symmetry is consistent with both empirical observations (Vincent et al., 2010) and theoretical analysis (Arora et al., 2015) of auto-encoders, for which symmetric weights are associated with minimizing reconstruction error.

6.4 A connection to the neurobiology of dreams

The learning rule underlying VW, when applied to an asymmetric stochastic neural network, yields a speculative but intriguing connection to the neurobiology of dreams. As discussed in Bengio et al. (2015), spike-timing dependent plasticity (STDP), a plasticity rule found in the brain (Markram and Sakmann, 1995), corresponds to increasing the probability of configurations towards which the network intrinsically likes to go (i.e., remembering observed configurations), while reverse STDP corresponds to forgetting or unlearning the states towards which the network goes (which potentially may occur during sleep). In the VW update applied to a neural network, the resultant learning rule does indeed strengthen synapses for which a presynaptic neuron is active before a postsynaptic neuron in the generative cooling process (STDP), and it weakens synapses in which a postsynaptic neuron is active before a presynaptic neuron in the heated destructive process (reverse STDP).
If, as suggested, the neurobiological function of sleep involves re-organizing memories and in particular unlearning spurious modes through reverse STDP, then the heating destructive process may map to sleep states, in which the brain is hunting down and destroying spurious modes. In contrast, the cooling generative dynamics of VW may map to awake states, in which STDP reinforces neural trajectories moving towards observed sensory data. Under this mapping, the relative incoherence of dreams compared to reality is qualitatively consistent with the heated destructive dynamics of VW, compared to the cooled transition operator in place during awake states.

6.5 Future work

Many questions remain open in terms of analyzing and extending VW. Of particular interest is the incorporation of latent layers. The state at each step would now include both visible x and latent h components. Essentially the same procedure can be run, except for the chain initialization, with s_0 = (x, h_0), where h_0 is a sample from the posterior distribution of h given x. Another interesting direction is to replace the log-likelihood objective at each step by a GAN-like objective, thereby avoiding the need to inject noise independently on each of the pixels during each transition step, and allowing latent-variable sampling to inject the required high-level decisions associated with the transition. Based on earlier results from Bengio et al. (2013a), sampling in the latent space rather than in the pixel space should allow for better generative models and even better mixing between modes (Bengio et al., 2013a). Overall, our work takes a step toward filling a relatively open niche in the machine learning literature on directly training non-energy-based iterative stochastic operators, and we hope that the many possible extensions of this approach could lead to a rich new class of more powerful brain-like machine learning models.
Acknowledgments

The authors would like to thank Benjamin Scellier, Ben Poole, Tim Cooijmans, Philemon Brakel, Gaétan Marceau Caron, and Alex Lamb for their helpful feedback and discussions, as well as NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and Compute Canada for computing resources. S.G. would like to thank the Simons, McKnight, James S. McDonnell, and Burroughs Wellcome Foundations and the Office of Naval Research for support. Y.B. would also like to thank Geoff Hinton for an analogy which is used in this work, raised while discussing contrastive divergence (personal communication). The authors would also like to express a debt of gratitude towards those who contributed to Theano over the years (as it is no longer maintained), making it such a great tool.

References

Al-Rfou, R., Alain, G., Almahairi, A., et al. (2016). Theano: A Python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688.

Alain, G. and Bengio, Y. (2014). What regularized auto-encoders learn from the data-generating distribution. J. Mach. Learn. Res., 15(1):3563–3593.

Arora, S., Liang, Y., and Ma, T. (2015). Why are deep nets reversible: A simple theory, with implications for training. Technical report, arXiv:1511.05653.

Bengio, Y., Mesnard, T., Fischer, A., Zhang, S., and Wu, Y. (2015). An objective function for STDP. CoRR, abs/1509.05936.

Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013a). Better mixing via deep representations.

Bengio, Y., Thibodeau-Laufer, É., Alain, G., and Yosinski, J. (2014). Deep generative stochastic networks trainable by backprop. In Proceedings of the 31st International Conference on Machine Learning, ICML'14, pages II-226–II-234. JMLR.org.

Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013b). Generalized denoising auto-encoders as generative models. In NIPS'2013, arXiv:1305.6663.

Bordes, F., Honari, S., and Vincent, P. (2017).
Learning to generate samples from noise through infusion training. CoRR, abs/1703.06975.

Burda, Y., Grosse, R. B., and Salakhutdinov, R. (2014). Accurate and conservative estimates of MRF log-likelihood using reverse annealing. CoRR, abs/1412.8566.

Crooks, G. E. (2000). Path-ensemble averages in systems driven far from equilibrium. Physical Review E, 61(3):2361.

Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The Helmholtz machine. Neural Comput., 7(5):889–904.

Gingrich, T. R., Rotskoff, G. M., Crooks, G. E., and Geissler, P. L. (2016). Near-optimal protocols in complex nonequilibrium transformations. Proceedings of the National Academy of Sciences, page 201606273.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.

Gregor, K., Danihelka, I., Graves, A., and Wierstra, D. (2015). DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623.

Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554.

Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kingma, D. P., Salimans, T., and Welling, M. (2016). Improving variational inference with inverse autoregressive flow. CoRR, abs/1606.04934.

Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.

Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images.

Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. (2014). Random feedback weights support learning in deep neural networks. arXiv:1411.0247.

Liu, Z., Luo, P., Wang, X., and Tang, X. (2015). Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738.

Markram, H.
and Sakmann, B. (1995). Action potentials propagating back into dendrites trigger changes in efficacy. Soc. Neurosci. Abs, 21.

Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2):125–139.

Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 5.

Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.

Salakhutdinov, R. and Hinton, G. (2009). Deep Boltzmann machines. In Artificial Intelligence and Statistics.

Schmiedl, T. and Seifert, U. (2007). Optimal finite-time processes in stochastic thermodynamics. Physical Review Letters, 98(10):108301.

Sivak, D. A. and Crooks, G. E. (2012). Thermodynamic metrics and optimal paths. Physical Review Letters, 108(19):190602.

Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585.

Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. (2016). Ladder variational autoencoders. In Advances in Neural Information Processing Systems 29, Barcelona, Spain, pages 3738–3746.

Theis, L., van den Oord, A., and Bethge, M. (2016). A note on the evaluation of generative models. In International Conference on Learning Representations.

Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM.

Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010).
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Machine Learning Res., 11.
Linear Time Computation of Moments in Sum-Product Networks

Han Zhao, Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213. han.zhao@cs.cmu.edu
Geoff Gordon, Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213. ggordon@cs.cmu.edu

Abstract

Bayesian online algorithms for Sum-Product Networks (SPNs) need to update their posterior distribution after seeing a single additional instance. To do so, they must compute moments of the model parameters under this distribution. The best existing method for computing such moments scales quadratically in the size of the SPN, although it scales linearly for trees. This unfortunate scaling makes Bayesian online algorithms prohibitively expensive, except for small or tree-structured SPNs. We propose an optimal linear-time algorithm that works even when the SPN is a general directed acyclic graph (DAG), which significantly broadens the applicability of Bayesian online algorithms for SPNs. There are three key ingredients in the design and analysis of our algorithm: 1). For each edge in the graph, we construct a linear time reduction from the moment computation problem to a joint inference problem in SPNs. 2). Using the property that each SPN computes a multilinear polynomial, we give an efficient procedure for polynomial evaluation by differentiation without expanding the network polynomial, which may contain exponentially many monomials. 3). We propose a dynamic programming method to further reduce the computation of the moments of all the edges in the graph from quadratic to linear. We demonstrate the usefulness of our linear time algorithm by applying it to develop a linear time assumed density filter (ADF) for SPNs.

1 Introduction

Sum-Product Networks (SPNs) have recently attracted some interest because of their flexibility in modeling complex distributions as well as the tractability of performing exact marginal inference [11, 5, 6, 9, 16–18, 10].
They are general-purpose inference machines over which one can perform exact joint, marginal and conditional queries in time linear in the size of the network. It has been shown that discrete SPNs are equivalent to arithmetic circuits (ACs) [3, 8] in the sense that one can transform each SPN into an equivalent AC, and vice versa, in linear time and space with respect to the network size [13]. SPNs are also closely connected to probabilistic graphical models: by interpreting each sum node in the network as a hidden variable and each product node as a rule encoding context-specific conditional independence [1], every SPN can be equivalently converted into a Bayesian network where compact data structures are used to represent the local probability distributions [16]. This relationship characterizes the probabilistic semantics encoded by the network structure and allows practitioners to design principled and efficient parameter learning algorithms for SPNs [17, 18].

Most existing batch learning algorithms for SPNs can be straightforwardly adapted to the online setting, where the network updates its parameters after it receives one instance at each time step. This online learning setting makes SPNs more widely applicable in various real-world scenarios, including the case where either the data set is too large to store at once, or the network needs to adapt to changes in the external data distribution.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Recently, Rashwan et al. [12] proposed an online Bayesian Moment Matching (BMM) algorithm to learn the probability distribution of the model parameters of SPNs based on the method of moments. Later, Jaini et al. [7] extended this algorithm to the continuous case where the leaf nodes in the network are assumed to be Gaussian distributions.
At a high level, BMM can be understood as an instance of the general assumed density filtering framework [14], where the algorithm finds an approximate posterior distribution within a tractable family of distributions by the method of moments. Specifically, BMM for SPNs works by matching the first and second order moments of the approximate tractable posterior distribution to the exact but intractable posterior. An essential sub-routine of the above two algorithms [12, 7] is to efficiently compute the exact first and second order moments of the one-step update posterior distribution (cf. Section 3.2). Rashwan et al. [12] designed a recursive algorithm to achieve this goal in linear time when the underlying network structure is a tree, and this algorithm is also used by Jaini et al. [7] in the continuous case. However, the algorithm only works when the underlying network structure is a tree, and a naive computation of such moments in a DAG will scale quadratically w.r.t. the network size. Often this quadratic computation is prohibitively expensive even for SPNs of moderate size.

In this paper we propose a linear time (and space) algorithm that is able to compute any moments of all the network parameters simultaneously, even when the underlying network structure is a DAG. There are three key ingredients in the design and analysis of our algorithm: 1). A linear time reduction from the moment computation problem to the joint inference problem in SPNs; 2). A succinct procedure for evaluating a polynomial by differentiation without expanding it; and 3). A dynamic programming method to further reduce the quadratic computation to linear. The differential approach [3] used for polynomial evaluation can also be applied for exact inference in Bayesian networks. This technique has also been implicitly used in the recent development of a concave-convex procedure (CCCP) for optimizing the weights of SPNs [18].
Essentially, by reducing the moment computation problem to a joint inference problem in SPNs, we are able to exploit the fact that the network polynomial of an SPN computes a multilinear function in the model parameters, so we can efficiently evaluate this polynomial by differentiation even if it contains exponentially many monomials, provided that the polynomial admits a tractable circuit complexity. Dynamic programming can further be used to trade off a constant factor in space complexity (using two additional copies of the network) to reduce the quadratic time complexity to linear, so that all the edge moments can be computed simultaneously in two passes of the network. To demonstrate the usefulness of our linear time sub-routine for computing moments, we apply it to design an efficient assumed density filter [14] to learn the parameters of SPNs in an online fashion. ADF runs in linear time and space due to our efficient sub-routine. As an additional contribution, we also show that ADF and BMM can both be understood under a general framework of moment matching, where the only difference lies in the moments chosen to be matched and how to match the chosen moments.

2 Preliminaries

We use $[n]$ to abbreviate $\{1, 2, \ldots, n\}$; we reserve $S$ to represent an SPN, and use $|S|$ to mean the size of an SPN, i.e., the number of edges plus the number of nodes in the graph.

2.1 Sum-Product Networks

A sum-product network $S$ is a computational circuit over a set of random variables $\mathbf{X} = \{X_1, \ldots, X_n\}$. It is a rooted directed acyclic graph. The internal nodes of $S$ are sums or products and the leaves are univariate distributions over $X_i$. In its simplest form, the leaves of $S$ are indicator variables $\mathbb{I}_{X=x}$, which can also be understood as categorical distributions whose entire probability mass is on a single value. Edges from sum nodes are parameterized with positive weights.
A sum node computes a weighted sum of its children, and a product node computes the product of its children. If we interpret each node in an SPN as a function of the leaf nodes, then the scope of a node in an SPN is defined as the set of variables that appear in this function. More formally, for any node $v$ in an SPN, if $v$ is a terminal node, say, an indicator variable over $X$, then $\mathrm{scope}(v) = \{X\}$; else $\mathrm{scope}(v) = \bigcup_{\tilde{v} \in \mathrm{Ch}(v)} \mathrm{scope}(\tilde{v})$. An SPN is complete iff each sum node has children with the same scope, and is decomposable iff for every product node $v$, $\mathrm{scope}(v_i) \cap \mathrm{scope}(v_j) = \emptyset$ for every pair $(v_i, v_j)$ of children of $v$. It has been shown that every valid SPN can be converted into a complete and decomposable SPN with at most a quadratic increase in size [16] without changing the underlying distribution. As a result, in this work we assume that all the SPNs we discuss are complete and decomposable.

Let $\mathbf{x}$ be an instantiation of the random vector $\mathbf{X}$. We associate an unnormalized probability $V_k(\mathbf{x}; \mathbf{w})$ with each node $k$ when the input to the network is $\mathbf{x}$ with network weights set to be $\mathbf{w}$:

$$V_k(\mathbf{x}; \mathbf{w}) = \begin{cases} p(X_i = x_i) & \text{if } k \text{ is a leaf node over } X_i \\ \prod_{j \in \mathrm{Ch}(k)} V_j(\mathbf{x}; \mathbf{w}) & \text{if } k \text{ is a product node} \\ \sum_{j \in \mathrm{Ch}(k)} w_{k,j} V_j(\mathbf{x}; \mathbf{w}) & \text{if } k \text{ is a sum node} \end{cases} \quad (1)$$

where $\mathrm{Ch}(k)$ is the child list of node $k$ in the graph and $w_{k,j}$ is the edge weight associated with sum node $k$ and its child node $j$. The probability of a joint assignment $\mathbf{X} = \mathbf{x}$ is computed by the value at the root of $S$ with input $\mathbf{x}$ divided by a normalization constant: $p(\mathbf{x}) = V_{\mathrm{root}}(\mathbf{x}; \mathbf{w}) / V_{\mathrm{root}}(\mathbf{1}; \mathbf{w})$, where $V_{\mathrm{root}}(\mathbf{1}; \mathbf{w})$ is the value of the root node when all the values of the leaf nodes are set to 1. This essentially corresponds to marginalizing out the random vector $\mathbf{X}$, which ensures that $p(\mathbf{x})$ defines a proper probability distribution. Remarkably, all queries w.r.t. $\mathbf{x}$, including joint, marginal, and conditional, can be answered in linear time in the size of the network.
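The recursion in Eq. (1) can be sketched in a few lines of Python. This is an illustrative toy implementation, not the paper's code; the node encoding and all names are our own.

```python
# Bottom-up evaluation of Eq. (1) on a toy SPN (illustrative only).

def evaluate(nodes, leaf_values):
    """Evaluate V_k(x; w) for every node, in children-first order.

    nodes: list of (name, kind, children); kind is 'leaf', 'prod', or 'sum'.
           For a sum node, children is a list of (child_name, weight) pairs;
           for a product node, a list of child names.
    leaf_values: dict name -> p(X_i = x_i); set a leaf to 1.0 to marginalize it.
    """
    val = {}
    for name, kind, children in nodes:
        if kind == 'leaf':
            val[name] = leaf_values[name]
        elif kind == 'prod':
            v = 1.0
            for c in children:
                v *= val[c]
            val[name] = v
        else:  # 'sum': weighted sum of children
            val[name] = sum(w * val[c] for c, w in children)
    return val

# Toy complete and decomposable SPN over {X1, X2}:
# root = 0.6 * (l1a * l2a) + 0.4 * (l1b * l2b)
nodes = [
    ('l1a', 'leaf', None), ('l1b', 'leaf', None),
    ('l2a', 'leaf', None), ('l2b', 'leaf', None),
    ('p1', 'prod', ['l1a', 'l2a']),
    ('p2', 'prod', ['l1b', 'l2b']),
    ('root', 'sum', [('p1', 0.6), ('p2', 0.4)]),
]
v_x = evaluate(nodes, {'l1a': 0.9, 'l2a': 0.5, 'l1b': 0.2, 'l2b': 0.7})['root']
# Normalizer V_root(1; w): all leaves set to 1.
v_1 = evaluate(nodes, dict.fromkeys(['l1a', 'l1b', 'l2a', 'l2b'], 1.0))['root']
p_x = v_x / v_1  # p(x) = V_root(x; w) / V_root(1; w)
```

Since the weights here are already locally normalized, the normalizer is 1 and $p(\mathbf{x})$ equals the raw root value.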
2.2 Bayesian Networks and Mixture Models

We provide two alternative interpretations of SPNs that will be useful later to design our linear time moment computation algorithm. The first one relates SPNs with Bayesian networks (BNs). Informally, any complete and decomposable SPN $S$ over $\mathbf{X} = \{X_1, \ldots, X_n\}$ can be converted into a bipartite BN of size $O(n|S|)$ [16]. In this construction, each internal sum node in $S$ corresponds to one latent variable in the constructed BN, and each leaf distribution node corresponds to one observable variable in the BN. Furthermore, the constructed BN will be a simple bipartite graph with one layer of local latent variables pointing to one layer of observable variables $\mathbf{X}$. An observable variable is a child of a local latent variable if and only if the observable variable appears as a descendant of the latent variable (sum node) in the original SPN. This means that the SPN $S$ can be understood as a BN where the number of latent variables per instance is $O(|S|)$.

The second perspective is to view an SPN $S$ as a mixture model with exponentially many mixture components [4, 18]. More specifically, we can decompose each complete and decomposable SPN $S$ into a sum of induced trees, where each tree corresponds to a product of univariate distributions. To proceed, we first formally define what we call induced trees:

Definition 1 (Induced tree SPN). Given a complete and decomposable SPN $S$ over $\mathbf{X} = \{X_1, \ldots, X_n\}$, $\mathcal{T} = (\mathcal{T}_V, \mathcal{T}_E)$ is called an induced tree SPN from $S$ if 1). $\mathrm{Root}(S) \in \mathcal{T}_V$; 2). If $v \in \mathcal{T}_V$ is a sum node, then exactly one child of $v$ in $S$ is in $\mathcal{T}_V$, and the corresponding edge is in $\mathcal{T}_E$; 3). If $v \in \mathcal{T}_V$ is a product node, then all the children of $v$ in $S$ are in $\mathcal{T}_V$, and the corresponding edges are in $\mathcal{T}_E$.

It has been shown that Def. 1 produces subgraphs of $S$ that are trees as long as the original SPN $S$ is complete and decomposable [4, 18]. One useful result based on the concept of induced trees is:

Theorem 1 ([18]). Let $\tau_S = V_{\mathrm{root}}(\mathbf{1}; \mathbf{1})$.
$\tau_S$ counts the number of unique induced trees in $S$, and $V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})$ can be written as $\sum_{t=1}^{\tau_S} \prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j} \prod_{i=1}^{n} p_t(X_i = x_i)$, where $\mathcal{T}_t$ is the $t$-th unique induced tree of $S$ and $p_t(X_i)$ is a univariate distribution over $X_i$ appearing in $\mathcal{T}_t$ as a leaf node.

Thm. 1 shows that $\tau_S = V_{\mathrm{root}}(\mathbf{1}; \mathbf{1})$ can be computed efficiently by setting all the edge weights to 1. In general, counting problems are in the #P complexity class [15], and the fact that both the probabilistic inference and the counting problem are tractable in SPNs implies that SPNs work on subsets of distributions that have succinct/efficient circuit representations. Without loss of generality, assuming that sum layers alternate with product layers in $S$, we have $\tau_S = \Omega(2^{H(S)})$, where $H(S)$ is the height of $S$. Hence the mixture model represented by $S$ has a number of mixture components that is exponential in the height of $S$. Thm. 1 characterizes both the number of components and the form of each component in the mixture model, as well as their mixture weights. For the convenience of later discussion, we call $V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})$ the network polynomial of $S$.

Corollary 1. The network polynomial $V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})$ is a multilinear function of $\mathbf{w}$ with positive coefficients on each monomial.

Corollary 1 holds since each monomial corresponds to an induced tree and each edge appears at most once in the tree. This property will be crucial in our derivation of a linear time algorithm for moment computation in SPNs.

3 Linear Time Exact Moment Computation

3.1 Exact Posterior Has Exponentially Many Modes

Let $m$ be the number of sum nodes in $S$. Suppose we are given a fully factorized prior distribution $p_0(\mathbf{w}; \boldsymbol{\alpha}) = \prod_{k=1}^{m} p_0(\mathbf{w}_k; \boldsymbol{\alpha}_k)$ over $\mathbf{w}$. It is worth pointing out that the fully factorized prior distribution is well justified by the bipartite graph structure of the equivalent BN we introduced in Section 2.2. We are interested in computing the moments of the posterior distribution after we receive one observation from the world.
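Thm. 1's counting trick is easy to sanity-check in code: setting every edge weight and leaf value to 1 and evaluating bottom-up reads off $\tau_S$ at the root. The toy structure below is our own illustration, chosen so the count is easy to verify by hand.

```python
# Sketch of Thm. 1: tau_S = V_root(1; 1), computed in one bottom-up pass
# with unit weights and unit leaves. Structure is illustrative only.

def count_induced_trees(nodes):
    """nodes: children-first list of (name, kind, children); root comes last.
    Sum-node weights are all replaced by 1, as in Thm. 1."""
    val = {}
    for name, kind, children in nodes:
        if kind == 'leaf':
            val[name] = 1          # each leaf contributes a factor of 1
        elif kind == 'prod':
            v = 1
            for c in children:
                v *= val[c]
            val[name] = v
        else:  # sum node with unit weights
            val[name] = sum(val[c] for c in children)
    return val[nodes[-1][0]]

# Two sum nodes with two children each, joined by a product node at the root:
# every induced tree picks one child per sum node, so tau_S = 2 * 2 = 4.
nodes = [
    ('a', 'leaf', None), ('b', 'leaf', None),
    ('c', 'leaf', None), ('d', 'leaf', None),
    ('s1', 'sum', ['a', 'b']),
    ('s2', 'sum', ['c', 'd']),
    ('root', 'prod', ['s1', 's2']),
]
tau = count_induced_trees(nodes)  # tau == 4
```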
Essentially, this is the Bayesian online learning setting where we update the belief about the distribution of model parameters as we observe data from the world sequentially. Note that $\mathbf{w}_k$ corresponds to the weight vector associated with sum node $k$, so $\mathbf{w}_k$ is a vector that satisfies $\mathbf{w}_k > 0$ and $\mathbf{1}^T \mathbf{w}_k = 1$. Let us assume that the prior distribution for each $\mathbf{w}_k$ is Dirichlet, i.e.,

$$p_0(\mathbf{w}; \boldsymbol{\alpha}) = \prod_{k=1}^{m} \mathrm{Dir}(\mathbf{w}_k; \boldsymbol{\alpha}_k) = \prod_{k=1}^{m} \frac{\Gamma(\sum_j \alpha_{k,j})}{\prod_j \Gamma(\alpha_{k,j})} \prod_j w_{k,j}^{\alpha_{k,j}-1}$$

After observing one instance $\mathbf{x}$, the exact posterior distribution is $p(\mathbf{w} \mid \mathbf{x}) = p_0(\mathbf{w}; \boldsymbol{\alpha}) p(\mathbf{x} \mid \mathbf{w}) / p(\mathbf{x})$. Let $Z_{\mathbf{x}} \triangleq p(\mathbf{x})$ and note that the network polynomial also computes the likelihood $p(\mathbf{x} \mid \mathbf{w})$. Plugging the expression for the prior distribution as well as the network polynomial into the above Bayes formula, we have

$$p(\mathbf{w} \mid \mathbf{x}) = \frac{1}{Z_{\mathbf{x}}} \sum_{t=1}^{\tau_S} \prod_{k=1}^{m} \mathrm{Dir}(\mathbf{w}_k; \boldsymbol{\alpha}_k) \prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j} \prod_{i=1}^{n} p_t(x_i)$$

Since the Dirichlet is conjugate to the multinomial, each term in the summation is an updated Dirichlet with a multiplicative constant, so the above equation shows that the exact posterior distribution becomes a mixture of $\tau_S$ Dirichlets after one observation. On a data set of $D$ instances, the exact posterior becomes a mixture of $\tau_S^D$ components, which is intractable to maintain since $\tau_S = \Omega(2^{H(S)})$. The hardness of maintaining the exact posterior distribution calls for an approximate scheme in which we can sequentially update our belief about the distribution while efficiently maintaining the approximation. Assumed density filtering [14] is such a framework: the algorithm chooses an approximate distribution from a tractable family of distributions after observing each instance. A typical choice is to match the moments of the approximation to those of the exact posterior.

3.2 The Hardness of Computing Moments

In order to find an approximate distribution to match the moments of the exact posterior, we need to be able to compute those moments under the exact posterior.
This is not a problem for traditional mixture models, including mixtures of Gaussians, latent Dirichlet allocation, etc., since the number of mixture components in those models is assumed to be a small constant. However, this is not the case for SPNs, where the effective number of mixture components is $\tau_S = \Omega(2^{H(S)})$, which also depends on the input network $S$. To simplify the notation, for each $t \in [\tau_S]$, we define $c_t \triangleq \prod_{i=1}^{n} p_t(x_i)$ (for ease of notation, we omit the explicit dependency of $c_t$ on the instance $\mathbf{x}$) and $u_t \triangleq \int_{\mathbf{w}} p_0(\mathbf{w}) \prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j} \, d\mathbf{w}$. That is, $c_t$ corresponds to the product of leaf distributions in the $t$-th induced tree $\mathcal{T}_t$, and $u_t$ is the moment of $\prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j}$, i.e., the product of tree edges, under the prior distribution $p_0(\mathbf{w})$. Since the posterior distribution needs to satisfy the normalization constraint, we have:

$$\sum_{t=1}^{\tau_S} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) \prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j} \, d\mathbf{w} = \sum_{t=1}^{\tau_S} c_t u_t = Z_{\mathbf{x}} \quad (2)$$

Note that the prior distribution for a sum node is a Dirichlet distribution. In this case we can compute a closed form expression for $u_t$:

$$u_t = \prod_{(k,j) \in \mathcal{T}_{tE}} \int_{\mathbf{w}_k} p_0(\mathbf{w}_k) w_{k,j} \, d\mathbf{w}_k = \prod_{(k,j) \in \mathcal{T}_{tE}} \mathbb{E}_{p_0(\mathbf{w}_k)}[w_{k,j}] = \prod_{(k,j) \in \mathcal{T}_{tE}} \frac{\alpha_{k,j}}{\sum_{j'} \alpha_{k,j'}} \quad (3)$$

More generally, let $f(\cdot)$ be a function applied to each edge weight in an SPN. We use the notation $M_p(f)$ to mean the moment of the function $f$ evaluated under distribution $p$. We are interested in computing $M_p(f)$ where $p = p(\mathbf{w} \mid \mathbf{x})$, which we call the one-step update posterior distribution. More specifically, for each edge weight $w_{k,j}$, we would like to compute the following quantity:

$$M_p(f(w_{k,j})) = \int_{\mathbf{w}} f(w_{k,j}) p(\mathbf{w} \mid \mathbf{x}) \, d\mathbf{w} = \frac{1}{Z_{\mathbf{x}}} \sum_{t=1}^{\tau_S} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) f(w_{k,j}) \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \, d\mathbf{w} \quad (4)$$

We note that (4) is not trivial to compute, as it involves $\tau_S = \Omega(2^{H(S)})$ terms. Furthermore, in order to conduct moment matching, we need to compute the above moment for each edge $(k, j)$ emanating from a sum node. A naive computation will lead to a total time complexity of $\Omega(|S| \cdot 2^{H(S)})$.
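The closed form in Eq. (3) amounts to multiplying prior Dirichlet means along the edges of an induced tree. A minimal sketch (the node names and parameter values are illustrative only):

```python
# Eq. (3): under a factorized Dirichlet prior, u_t is the product over tree
# edges of the prior means alpha_kj / sum_j' alpha_kj'. Illustrative example.

def u_t(alpha, tree_edges):
    """alpha: dict mapping a sum node to its Dirichlet parameter list.
    tree_edges: (sum_node, child_index) pairs belonging to induced tree T_t."""
    u = 1.0
    for k, j in tree_edges:
        u *= alpha[k][j] / sum(alpha[k])  # E[w_kj] under Dir(alpha_k)
    return u

alpha = {'s1': [2.0, 1.0], 's2': [1.0, 3.0]}
u = u_t(alpha, [('s1', 0), ('s2', 1)])  # (2/3) * (3/4) = 0.5
```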
A linear time algorithm to compute these moments has been designed by Rashwan et al. [12] for the case where the underlying structure of $S$ is a tree. This algorithm recursively computes the moments in a top-down fashion along the tree. However, the algorithm breaks down when the graph is a DAG. In what follows we present an $O(|S|)$ time and space algorithm that is able to compute all the moments simultaneously for general SPNs with DAG structures. We first show a linear time reduction from the moment computation in (4) to a joint inference problem in $S$, and then proceed to use the differential trick to efficiently compute (4) for each edge in the graph. The final component is a dynamic program that simultaneously computes (4) for all edges $w_{k,j}$ in the graph by trading constant factors of space complexity for a reduction of the time complexity from quadratic to linear.

3.3 Linear Time Reduction from Moment Computation to Joint Inference

Let us first compute (4) for a fixed edge $(k, j)$. Our strategy is to partition all the induced trees based on whether they contain the tree edge $(k, j)$ or not. Define $\mathcal{T}_F = \{\mathcal{T}_t \mid (k,j) \notin \mathcal{T}_t, t \in [\tau_S]\}$ and $\mathcal{T}_T = \{\mathcal{T}_t \mid (k,j) \in \mathcal{T}_t, t \in [\tau_S]\}$. In other words, $\mathcal{T}_F$ is the set of trees that do not contain edge $(k, j)$ and $\mathcal{T}_T$ is the set of trees that contain edge $(k, j)$. Then,

$$M_p(f(w_{k,j})) = \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) f(w_{k,j}) \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \, d\mathbf{w} + \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) f(w_{k,j}) \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \, d\mathbf{w} \quad (5)$$

For the induced trees that contain edge $(k, j)$, we have

$$\frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) f(w_{k,j}) \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \, d\mathbf{w} = \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t M_{p'_{0,k}}(f(w_{k,j})) \quad (6)$$

where $p'_{0,k}$ is the one-step update posterior Dirichlet distribution for sum node $k$ after absorbing the term $w_{k,j}$. Similarly, for the induced trees that do not contain the edge $(k, j)$:

$$\frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t \int_{\mathbf{w}} p_0(\mathbf{w}) f(w_{k,j}) \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \, d\mathbf{w} = \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t M_{p_{0,k}}(f(w_{k,j})) \quad (7)$$

where $p_{0,k}$ is the prior Dirichlet distribution for sum node $k$.
The above equation holds by changing the order of integration and realizing that, since $(k, j)$ is not in tree $\mathcal{T}_t$, the product $\prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'}$ does not contain the term $w_{k,j}$. Note that both $M_{p_{0,k}}(f(w_{k,j}))$ and $M_{p'_{0,k}}(f(w_{k,j}))$ are independent of specific induced trees, so we can combine the above two parts to express $M_p(f(w_{k,j}))$ as:

$$M_p(f(w_{k,j})) = \left( \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t \right) M_{p_{0,k}}(f(w_{k,j})) + \left( \frac{1}{Z_{\mathbf{x}}} \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t \right) M_{p'_{0,k}}(f(w_{k,j})) \quad (8)$$

From (2) we have

$$\frac{1}{Z_{\mathbf{x}}} \sum_{t=1}^{\tau_S} c_t u_t = 1 \quad \text{and} \quad \sum_{t=1}^{\tau_S} c_t u_t = \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t + \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t$$

This implies that $M_p(f)$ is in fact a convex combination of $M_{p_{0,k}}(f)$ and $M_{p'_{0,k}}(f)$. In other words, since both $M_{p_{0,k}}(f)$ and $M_{p'_{0,k}}(f)$ can be computed in closed form for each edge $(k, j)$, in order to compute (4) we only need to be able to compute the two coefficients efficiently. Recall that for each induced tree $\mathcal{T}_t$ we have $u_t = \prod_{(k,j) \in \mathcal{T}_{tE}} \alpha_{k,j} / \sum_{j'} \alpha_{k,j'}$, so the term $\sum_{t=1}^{\tau_S} c_t u_t$ can be expressed as:

$$\sum_{t=1}^{\tau_S} c_t u_t = \sum_{t=1}^{\tau_S} \prod_{(k,j) \in \mathcal{T}_{tE}} \frac{\alpha_{k,j}}{\sum_{j'} \alpha_{k,j'}} \prod_{i=1}^{n} p_t(x_i) \quad (9)$$

The key observation that allows us to find the linear time reduction lies in the fact that (9) shares exactly the same functional form as the network polynomial, with the only difference being the specification of the edge weights in the network. The following lemma formalizes our argument.

Lemma 1. $\sum_{t=1}^{\tau_S} c_t u_t$ can be computed in $O(|S|)$ time and space in a bottom-up evaluation of $S$.

Proof. Compare the form of (9) to the network polynomial:

$$p(\mathbf{x} \mid \mathbf{w}) = V_{\mathrm{root}}(\mathbf{x}; \mathbf{w}) = \sum_{t=1}^{\tau_S} \prod_{(k,j) \in \mathcal{T}_{tE}} w_{k,j} \prod_{i=1}^{n} p_t(x_i) \quad (10)$$

Clearly (9) and (10) share the same functional form; the only difference lies in that the edge weight used in (9) is given by $\alpha_{k,j} / \sum_{j'} \alpha_{k,j'}$ while the edge weight used in (10) is given by $w_{k,j}$, both of which are constrained to be positive and locally normalized.
This means that in order to compute the value of (9), we can replace all the edge weights $w_{k,j}$ with $\alpha_{k,j} / \sum_{j'} \alpha_{k,j'}$, and a bottom-up evaluation pass of $S$ will give us the desired result at the root of the network. The linear time and space complexity then follows from the linear time and space inference complexity of SPNs. ■

In other words, we reduce the original moment computation problem for edge $(k, j)$ to a joint inference problem in $S$ with a set of weights determined by $\boldsymbol{\alpha}$.

3.4 Efficient Polynomial Evaluation by Differentiation

To evaluate (8), we also need to compute $\sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t$ efficiently, where the sum is over the subset of induced trees that contain edge $(k, j)$. Again, due to the exponential lower bound on the number of unique induced trees, a brute force computation is infeasible in the worst case. The key observation is that we can use the differential trick to solve this problem, by realizing that $Z_{\mathbf{x}} = \sum_{t=1}^{\tau_S} c_t u_t$ is a multilinear function in $\alpha_{k,j} / \sum_{j'} \alpha_{k,j'}, \forall k, j$, and that it has a tractable circuit representation since it shares the same network structure with $S$.

Lemma 2. $\sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t = w_{k,j} \left( \partial \sum_{t=1}^{\tau_S} c_t u_t / \partial w_{k,j} \right)$, and it can be computed in $O(|S|)$ time and space in a top-down differentiation of $S$.

Proof. Define $w_{k,j} \triangleq \alpha_{k,j} / \sum_{j'} \alpha_{k,j'}$. Then

$$\sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t = \sum_{\mathcal{T}_t \in \mathcal{T}_T} \prod_{(k',j') \in \mathcal{T}_{tE}} w_{k',j'} \prod_{i=1}^{n} p_t(x_i) = w_{k,j} \sum_{\mathcal{T}_t \in \mathcal{T}_T} \prod_{\substack{(k',j') \in \mathcal{T}_{tE} \\ (k',j') \neq (k,j)}} w_{k',j'} \prod_{i=1}^{n} p_t(x_i) + 0 \cdot \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t$$

$$= w_{k,j} \left( \frac{\partial}{\partial w_{k,j}} \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t + \frac{\partial}{\partial w_{k,j}} \sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t \right) = w_{k,j} \frac{\partial}{\partial w_{k,j}} \left( \sum_{t=1}^{\tau_S} c_t u_t \right)$$

where the second equality is by Corollary 1, i.e., the network polynomial is a multilinear function of $w_{k,j}$, and the third equality holds because $\mathcal{T}_F$ is the set of trees that do not contain $w_{k,j}$. The last equality follows by simple algebraic transformations. In summary, the above lemma holds because the differential operator applied to a multilinear function acts as a selector for all the monomials containing a specific variable.
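Lemma 1 can be checked numerically on a toy SPN: plugging the prior means in as edge weights, a single bottom-up pass reproduces the brute-force sum over induced trees. The structure and numbers below are our own illustration, not the paper's experiment.

```python
# Lemma 1 sanity check on a toy SPN: root = product of two sum nodes s1, s2,
# each over two leaves, so there are 2 * 2 = 4 induced trees.
from itertools import product

alpha = {'s1': [2.0, 1.0], 's2': [1.0, 3.0]}   # Dirichlet parameters
leaf = {'s1': [0.9, 0.2], 's2': [0.5, 0.7]}    # leaf likelihoods p_t(x_i)

# Prior means alpha_kj / sum_j' alpha_kj' used as edge weights.
mean = {k: [a / sum(alpha[k]) for a in alpha[k]] for k in alpha}

# Brute force over the 4 induced trees: sum_t c_t u_t.
brute = sum(leaf['s1'][i] * leaf['s2'][j] * mean['s1'][i] * mean['s2'][j]
            for i, j in product(range(2), range(2)))

# Lemma 1: a single bottom-up pass with the prior means as weights.
v_s1 = sum(w * v for w, v in zip(mean['s1'], leaf['s1']))
v_s2 = sum(w * v for w, v in zip(mean['s2'], leaf['s2']))
bottom_up = v_s1 * v_s2
```

On this toy network the agreement is just distributivity, but the same pass costs $O(|S|)$ while the enumeration grows exponentially with the height.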
Hence $\sum_{\mathcal{T}_t \in \mathcal{T}_F} c_t u_t = \sum_{t=1}^{\tau_S} c_t u_t - \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t$ can also be computed. To show the linear time and space complexity, recall that the differentiation w.r.t. $w_{k,j}$ can be efficiently computed by back-propagation in a top-down pass of $S$ once we have computed $\sum_{t=1}^{\tau_S} c_t u_t$ in a bottom-up pass of $S$. ■

Remark. The fact that we can compute the differentiation w.r.t. $w_{k,j}$ using the original circuit without expanding it underlies many recent advances in the algorithmic design of SPNs. Zhao et al. [18, 17] used the above differential trick to design a linear time collapsed variational algorithm and the concave-convex procedure for parameter estimation in SPNs. A different but related approach, where the differential operator is taken w.r.t. input indicators rather than model parameters, is applied in computing marginal probabilities in Bayesian networks and junction trees [3, 8]. We finish this discussion by concluding that when the polynomial computed by the network is a multilinear function of the model parameters or input indicators (as in SPNs), the differential operator w.r.t. a variable can be used as an efficient way to compute the sum of the subset of monomials that contain that specific variable.

3.5 Dynamic Programming: from Quadratic to Linear

Define $D_k(\mathbf{x}; \mathbf{w}) = \partial V_{\mathrm{root}}(\mathbf{x}; \mathbf{w}) / \partial V_k(\mathbf{x}; \mathbf{w})$. Then the differentiation term $\partial \sum_{t=1}^{\tau_S} c_t u_t / \partial w_{k,j}$ in Lemma 2 can be computed via back-propagation in a top-down pass of the network as follows:

$$\frac{\partial \sum_{t=1}^{\tau_S} c_t u_t}{\partial w_{k,j}} = \frac{\partial V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})}{\partial V_k(\mathbf{x}; \mathbf{w})} \frac{\partial V_k(\mathbf{x}; \mathbf{w})}{\partial w_{k,j}} = D_k(\mathbf{x}; \mathbf{w}) V_j(\mathbf{x}; \mathbf{w}) \quad (11)$$

Let $\lambda_{k,j} = w_{k,j} V_j(\mathbf{x}; \mathbf{w}) D_k(\mathbf{x}; \mathbf{w}) / V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})$ and $f_{k,j} = f(w_{k,j})$. Then the final formula for computing the moment of edge weight $w_{k,j}$ under the one-step update posterior $p$ is given by

$$M_p(f_{k,j}) = (1 - \lambda_{k,j}) M_{p_0}(f_{k,j}) + \lambda_{k,j} M_{p'_0}(f_{k,j}) \quad (12)$$

Corollary 2. For each edge $(k, j)$, (8) can be computed in $O(|S|)$ time and space.
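The two passes behind Eq. (11) can be sketched as follows: one bottom-up pass for the values $V_k$ and one top-down pass for $D_k = \partial V_{\mathrm{root}} / \partial V_k$, from which $\lambda_{k,j}$ follows for every sum edge. The graph encoding and all names are our own toy illustration, not the paper's implementation.

```python
# Forward (V_k) and backward (D_k) passes on a toy SPN, then lambda_kj.

def forward(nodes, leaf_values):
    val = {}
    for name, kind, children in nodes:
        if kind == 'leaf':
            val[name] = leaf_values[name]
        elif kind == 'prod':
            v = 1.0
            for c in children:
                v *= val[c]
            val[name] = v
        else:
            val[name] = sum(w * val[c] for c, w in children)
    return val

def backward(nodes, val):
    """Top-down differentiation: a sum parent k sends w_kj * D_k to child j;
    a product parent sends D_k times the product of its other children."""
    D = {name: 0.0 for name, _, _ in nodes}
    D[nodes[-1][0]] = 1.0  # root is last; dV_root/dV_root = 1
    for name, kind, children in reversed(nodes):
        if kind == 'sum':
            for c, w in children:
                D[c] += w * D[name]
        elif kind == 'prod':
            for c in children:
                other = 1.0
                for c2 in children:
                    if c2 != c:
                        other *= val[c2]
                D[c] += D[name] * other
    return D

nodes = [
    ('l1a', 'leaf', None), ('l2a', 'leaf', None),
    ('l1b', 'leaf', None), ('l2b', 'leaf', None),
    ('p1', 'prod', ['l1a', 'l2a']),
    ('p2', 'prod', ['l1b', 'l2b']),
    ('root', 'sum', [('p1', 2 / 3), ('p2', 1 / 3)]),
]
val = forward(nodes, {'l1a': 0.9, 'l2a': 0.5, 'l1b': 0.2, 'l2b': 0.7})
D = backward(nodes, val)
# lambda_kj = w_kj * V_j * D_k / V_root for each sum edge (k, j).
lam = {('root', c): w * val[c] * D['root'] / val['root']
       for c, w in nodes[-1][2]}
```

Since every induced tree of this toy network uses exactly one root edge, the two $\lambda$ values sum to 1.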
Figure 1: The moment computation only needs three quantities: the forward evaluation value at node $j$, the backward differentiation value at node $k$, and the weight of edge $(k, j)$.

The corollary simply follows from Lemma 1 and Lemma 2 under the assumption that the moments under the prior have closed form solutions. By definition, we also have $\lambda_{k,j} = \sum_{\mathcal{T}_t \in \mathcal{T}_T} c_t u_t / Z_{\mathbf{x}}$, hence $0 \leq \lambda_{k,j} \leq 1, \forall (k, j)$. This shows that $\lambda_{k,j}$ computes the ratio of all the induced trees that contain edge $(k, j)$ to the whole network. Roughly speaking, this measures how important the contribution of a specific edge is to the whole network polynomial. As a result, we can interpret (12) as follows: the more important the edge is, the larger the portion of the moment that comes from the new observation. We visualize our moment computation method for a single edge $(k, j)$ in Fig. 1.

Remark. CCCP for SPNs was originally derived using a sequential convex relaxation technique, where in each iteration a concave surrogate function is constructed and optimized. The key update in each iteration of CCCP ([18], (7)) is given as follows: $w'_{k,j} \propto w_{k,j} V_j(\mathbf{x}; \mathbf{w}) D_k(\mathbf{x}; \mathbf{w}) / V_{\mathrm{root}}(\mathbf{x}; \mathbf{w})$, where the R.H.S. is exactly the same as $\lambda_{k,j}$ defined above. From this perspective, CCCP can also be understood as implicitly applying the differential trick to compute $\lambda_{k,j}$, i.e., the relative importance of edge $(k, j)$, and then taking updates according to this importance measure.

In order to compute the moments of all the edge weights $w_{k,j}$, a naive computation would scale as $O(|S|^2)$, because there are $O(|S|)$ edges in the graph and by Cor. 2 each such computation takes $O(|S|)$ time. The key observation that allows us to further reduce the complexity to linear comes from the structure of $\lambda_{k,j}$: $\lambda_{k,j}$ only depends on three terms, i.e., the forward evaluation value $V_j(\mathbf{x}; \mathbf{w})$, the backward differentiation value $D_k(\mathbf{x}; \mathbf{w})$, and the original weight of the edge $w_{k,j}$.
This implies that we can use dynamic programming to cache both $V_j(x; w)$ and $D_k(x; w)$, in a bottom-up evaluation pass and a top-down differentiation pass, respectively. At a high level, we trade off a constant factor in space complexity (two additional copies of the network) to reduce the quadratic time complexity to linear.

Theorem 2. For all edges $(k, j)$, (8) can be computed in $O(|S|)$ time and space.

Proof. During the bottom-up evaluation pass, in order to compute the value $V_{root}(x; w)$ at the root of $S$, we also obtain the values $V_j(x; w)$ at every node $j$ in the graph. So instead of discarding these intermediate values, we cache them by allocating additional space at each node $j$. After one bottom-up evaluation pass of the network we thus have all the $V_j(x; w)$, at the cost of one additional copy of the network. Similarly, during the top-down differentiation pass, by the chain rule we also obtain all the intermediate $D_k(x; w)$ at each node $k$; again, we cache them. Once we have both $V_j(x; w)$ and $D_k(x; w)$ for each edge $(k, j)$, from (12) we can obtain the moments for all the weighted edges in $S$ simultaneously. Because the whole process requires only one bottom-up evaluation pass and one top-down differentiation pass of $S$, the time complexity is $2|S|$. Since we use two additional copies of $S$, the space complexity is $3|S|$. ■

We summarize the linear time algorithm for moment computation in Alg. 1.

Algorithm 1 Linear Time Exact Moment Computation
Input: Prior $p_0(w \mid \alpha)$, moment $f$, SPN $S$ and input $x$.
Output: $M_p(f(w_{k,j}))$, $\forall (k, j)$.
1: $w_{k,j} \leftarrow \alpha_{k,j} / \sum_{j'} \alpha_{k,j'}$, $\forall (k, j)$.
2: Compute $M_{p_0}(f(w_{k,j}))$ and $M_{p_0'}(f(w_{k,j}))$, $\forall (k, j)$.
3: Bottom-up evaluation pass of $S$ with input $x$. Record $V_k(x; w)$ at each node $k$.
4: Top-down differentiation pass of $S$ with input $x$. Record $D_k(x; w)$ at each node $k$.
5: Compute the exact moment for each $(k, j)$: $M_p(f_{k,j}) = (1 - \lambda_{k,j}) M_{p_0}(f_{k,j}) + \lambda_{k,j} M_{p_0'}(f_{k,j})$.
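To make the two passes concrete, here is a minimal Python sketch of Alg. 1 on a toy SPN with one sum node over two product nodes. The dict-based encoding, the node names, and the choice of the mean as the moment $f$ are our illustrative assumptions, not the paper's implementation.

```python
# Toy SPN: root sum node "S" over product nodes "P1", "P2"; each product
# multiplies two leaf values (fixed here for one input x).
leaves = {"A": 0.8, "B": 0.3, "C": 0.6, "D": 0.9}
prods = {"P1": ["A", "B"], "P2": ["C", "D"]}
alpha = {("S", "P1"): 2.0, ("S", "P2"): 3.0}  # Dirichlet prior on root weights
w = {e: a / sum(alpha.values()) for e, a in alpha.items()}  # line 1 of Alg. 1

# Bottom-up evaluation pass: cache V at every node (line 3).
V = dict(leaves)
for p, kids in prods.items():
    V[p] = V[kids[0]] * V[kids[1]]
V["S"] = sum(w[("S", c)] * V[c] for c in prods)

# Top-down differentiation pass: cache D_k = dV_root/dV_k (line 4).
D = {"S": 1.0}
for c in prods:
    D[c] = D["S"] * w[("S", c)]

# Moment mixing (line 5): lambda weighs the prior mean against the mean of
# the one-pseudo-count posterior p0'.
a_sum = sum(alpha.values())
moments = {}
for (k, j), a in alpha.items():
    lam = w[(k, j)] * V[j] * D[k] / V["S"]
    m_prior = a / a_sum                 # mean of f(w_kj) under p0
    m_post = (a + 1.0) / (a_sum + 1.0)  # mean under p0'
    moments[(k, j)] = (1.0 - lam) * m_prior + lam * m_post

print(moments)
```

On this toy example the coefficients $\lambda$ for the two edges of the root sum node add up to 1, matching the reading of (12) as an importance-weighted blend of the prior and the one-step posterior.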
4 Applications in Online Moment Matching

In this section we use Alg. 1 as a sub-routine to develop a new Bayesian online learning algorithm for SPNs based on assumed density filtering [14]. To do so, we find an approximate distribution by minimizing the KL divergence between the one-step update posterior and the approximate distribution. Let $\mathcal{P} = \{q \mid q = \prod_{k=1}^m \mathrm{Dir}(w_k; \beta_k)\}$, i.e., $\mathcal{P}$ is the space of products of Dirichlet densities that are decomposable over all the sum nodes in $S$. Note that since $p_0(w; \alpha)$ is fully decomposable, we have $p_0 \in \mathcal{P}$. One natural choice is to find an approximate distribution $q \in \mathcal{P}$ that minimizes the KL divergence to $p(w \mid x)$, i.e.,

$$\hat{p} = \arg\min_{q \in \mathcal{P}} \mathrm{KL}(p(w \mid x) \,\|\, q)$$

It is not hard to show that when $q$ is an exponential family distribution, which is the case in our setting, the minimization problem reduces to solving the following moment matching equation:

$$E_{p(w \mid x)}[T(w_k)] = E_{q(w)}[T(w_k)] \quad (13)$$

where $T(w_k)$ is the vector of sufficient statistics of $q(w_k)$. When $q(\cdot)$ is a Dirichlet, we have $T(w_k) = \log w_k$, where the log is taken elementwise. This principle of finding an approximate distribution is also known as reverse information projection in the information theory literature [2]. As a comparison, information projection corresponds to minimizing $\mathrm{KL}(q \,\|\, p(w \mid x))$ within the same family of distributions $q \in \mathcal{P}$. Utilizing our efficient linear time algorithm for exact moment computation, we propose a Bayesian online learning algorithm for SPNs based on the above moment matching principle, called assumed density filtering (ADF). The pseudocode is shown in Alg. 2. In the ADF algorithm, for each edge $w_{k,j}$ the moment matching equation amounts to solving

$$\psi(\beta_{k,j}) - \psi\Big(\sum_{j'} \beta_{k,j'}\Big) = E_{p(w \mid x)}[\log w_{k,j}]$$

where $\psi(\cdot)$ is the digamma function. This is a system of nonlinear equations in $\beta$ whose R.H.S. can be computed using Alg.
1 in $O(|S|)$ time for all the edges $(k, j)$. To solve it efficiently, we take $\exp(\cdot)$ on both sides of the equation and approximate the L.H.S. using the fact that $\exp(\psi(\beta_{k,j})) \approx \beta_{k,j} - \frac{1}{2}$ for $\beta_{k,j} > 1$. Expanding the R.H.S. of the above equation using the identity from (12), we have:

$$\exp\Big(\psi(\beta_{k,j}) - \psi\Big(\sum_{j'} \beta_{k,j'}\Big)\Big) = \exp\big(E_{p(w \mid x)}[\log w_{k,j}]\big) \;\Leftrightarrow\; \frac{\beta_{k,j} - \frac{1}{2}}{\sum_{j'} \beta_{k,j'} - \frac{1}{2}} = \left(\frac{\alpha_{k,j} - \frac{1}{2}}{\sum_{j'} \alpha_{k,j'} - \frac{1}{2}}\right)^{1 - \lambda_{k,j}} \times \left(\frac{\alpha_{k,j} + \frac{1}{2}}{\sum_{j'} \alpha_{k,j'} + \frac{1}{2}}\right)^{\lambda_{k,j}} \quad (14)$$

Note that $(\alpha_{k,j} - \frac{1}{2})/(\sum_{j'} \alpha_{k,j'} - \frac{1}{2})$ is approximately the mean of the prior Dirichlet $p_0$, and $(\alpha_{k,j} + \frac{1}{2})/(\sum_{j'} \alpha_{k,j'} + \frac{1}{2})$ is approximately the mean of $p_0'$, where $p_0'$ is the posterior obtained by adding one pseudo-count to $w_{k,j}$. So (14) essentially finds a posterior with hyperparameter $\beta$ whose mean is approximately the weighted geometric mean of the means given by $p_0$ and $p_0'$, weighted by $\lambda_{k,j}$.

Instead of matching the moments given by the sufficient statistics, also known as the natural moments, BMM tries to find an approximate distribution $q$ by matching the first order moments, i.e., the means of the prior and the one-step update posterior. Using the same notation, we want $q$ to satisfy:

$$E_{q(w)}[w_k] = E_{p(w \mid x)}[w_k] \;\Leftrightarrow\; \frac{\beta_{k,j}}{\sum_{j'} \beta_{k,j'}} = (1 - \lambda_{k,j}) \frac{\alpha_{k,j}}{\sum_{j'} \alpha_{k,j'}} + \lambda_{k,j} \frac{\alpha_{k,j} + 1}{\sum_{j'} \alpha_{k,j'} + 1} \quad (15)$$

Again, we can interpret the above equation as finding the posterior hyperparameter $\beta$ such that the posterior mean is the weighted arithmetic mean of the means given by $p_0$ and $p_0'$, weighted by $\lambda_{k,j}$. Notice that, due to the normalization constraint, we cannot solve for $\beta$ directly from the above equations; one more equation would have to be added to the system. However, from line 1 of Alg. 1, what we need in the next iteration of the algorithm is not $\beta$ itself but only its normalized version. So we can drop the additional equation and use (15) directly as the update formula in our algorithm. Using Alg.
1 as a sub-routine, both ADF and BMM enjoy linear running time, the same order of time complexity as CCCP. However, since CCCP directly optimizes the data log-likelihood, in practice we observe that CCCP often outperforms ADF and BMM in log-likelihood scores.

Algorithm 2 Assumed Density Filtering for SPN
Input: Prior $p_0(w \mid \alpha)$, SPN $S$ and inputs $\{x_i\}_{i=1}^\infty$.
1: $p(w) \leftarrow p_0(w \mid \alpha)$
2: for $i = 1, \ldots, \infty$ do
3:   Apply Alg. 1 to compute $E_{p(w \mid x_i)}[\log w_{k,j}]$ for all edges $(k, j)$.
4:   Find $\hat{p} = \arg\min_{q \in \mathcal{P}} \mathrm{KL}(p(w \mid x_i) \,\|\, q)$ by solving the moment matching equation (13).
5:   $p(w) \leftarrow \hat{p}(w)$.
6: end for

5 Conclusion

We propose an optimal linear time algorithm to efficiently compute the moments of model parameters in SPNs in the online setting. The key techniques used in the design of our algorithm are the linear time reduction from moment computation to joint inference, the differential trick that efficiently evaluates a multilinear function, and the dynamic programming that further reduces redundant computations. Using the proposed algorithm as a sub-routine, we improve the time complexity of BMM from quadratic to linear on general SPNs with DAG structures. We also use the proposed algorithm as a sub-routine to design a new online algorithm, ADF. As a future direction, we hope to apply the proposed moment computation algorithm in the design of efficient structure learning algorithms for SPNs. We also expect that the analysis techniques we develop may find other uses in learning SPNs.

Acknowledgements

HZ thanks Pascal Poupart for providing insightful comments. HZ and GG are supported in part by ONR award N000141512365.

References

[1] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proceedings of the Twelfth International Conference on Uncertainty in Artificial Intelligence, pages 115–123. Morgan Kaufmann Publishers Inc., 1996. [2] I. Csiszár and F. Matus.
Information projections revisited. IEEE Transactions on Information Theory, 49(6):1474–1490, 2003. [3] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM (JACM), 50(3):280–305, 2003. [4] A. Dennis and D. Ventura. Greedy structure search for sum-product networks. In International Joint Conference on Artificial Intelligence, volume 24, 2015. [5] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In Advances in Neural Information Processing Systems, pages 3248–3256, 2012. [6] R. Gens and P. Domingos. Learning the structure of sum-product networks. In Proceedings of The 30th International Conference on Machine Learning, pages 873–880, 2013. [7] P. Jaini, A. Rashwan, H. Zhao, Y. Liu, E. Banijamali, Z. Chen, and P. Poupart. Online algorithms for sum-product networks with continuous variables. In Proceedings of the Eighth International Conference on Probabilistic Graphical Models, pages 228–239, 2016. [8] J. D. Park and A. Darwiche. A differential semantics for jointree algorithms. Artificial Intelligence, 156(2):197–216, 2004. [9] R. Peharz, S. Tschiatschek, F. Pernkopf, and P. Domingos. On theoretical properties of sum-product networks. In AISTATS, 2015. [10] R. Peharz, R. Gens, F. Pernkopf, and P. Domingos. On the latent variable interpretation in sum-product networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(10):2030–2044, 2017. [11] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In Proc. 12th Conf. on Uncertainty in Artificial Intelligence, pages 2551–2558, 2011. [12] A. Rashwan, H. Zhao, and P. Poupart. Online and distributed Bayesian moment matching for parameter learning in sum-product networks. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1469–1477, 2016. [13] A. Rooshenas and D. Lowd. Learning sum-product networks with direct and indirect variable interactions. In ICML, 2014. [14] H. W.
Sorenson and A. R. Stubberud. Non-linear filtering by approximation of the a posteriori density. International Journal of Control, 8(1):33–51, 1968. [15] L. G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189–201, 1979. [16] H. Zhao, M. Melibari, and P. Poupart. On the relationship between sum-product networks and Bayesian networks. In ICML, 2015. [17] H. Zhao, T. Adel, G. Gordon, and B. Amos. Collapsed variational inference for sum-product networks. In ICML, 2016. [18] H. Zhao, P. Poupart, and G. Gordon. A unified approach for learning the parameters of sum-product networks. In NIPS, 2016. | 2017 | 196 |
6,670 | SGD Learns the Conjugate Kernel Class of the Network Amit Daniely Hebrew University and Google Research amit.daniely@mail.huji.ac.il

Abstract

We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely et al. [2016]. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth greater than two. As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results.

1 Introduction

While stochastic gradient descent (SGD) from a random initialization is probably the most popular supervised learning algorithm today, we have very few results that depict conditions guaranteeing its success. Indeed, to the best of our knowledge, Andoni et al. [2014] provides the only known result of this form, and it is valid in a rather restricted setting. Namely, for depth-2 networks, where the underlying distribution is Gaussian, the algorithm is full gradient descent (rather than SGD), and the task is regression where the learnt function is a constant degree polynomial. We build on the framework of Daniely et al. [2016] to establish guarantees on SGD in a rather general setting. Daniely et al. [2016] defined a framework that associates a reproducing kernel to a network architecture. They also connected the kernel to the network via the random initialization.
Namely, they showed that right after the random initialization, any function in the kernel space can be approximated by changing the weights of the last layer. The quality of the approximation depends on the size of the network and the norm of the function in the kernel space. As optimizing the last layer is a convex procedure, the result of Daniely et al. [2016] intuitively shows that the optimization process starts from a favourable point for learning a function in the conjugate kernel space. In this paper we verify this intuition. Namely, for a fairly general family of architectures (containing fully connected networks and convolutional networks) and supervised learning tasks, we show that if the network is large enough, the learning rate is small enough, and the number of SGD steps is large enough, then SGD is guaranteed to learn any function in the corresponding kernel space. We emphasize that the number of steps and the size of the network are only required to be polynomial (which is best possible) in the relevant parameters: the norm of the function, the required accuracy parameter ($\epsilon$), and the dimension of the input and the output of the network. Likewise, the result holds for any input distribution. To evaluate our result, one should understand which functions it guarantees that SGD will learn. Namely, what functions reside in the conjugate kernel space, how rich it is, and how good those functions are as predictors. From an empirical perspective, in [Daniely et al., 2017] it is shown that for standard convolutional networks the conjugate class contains functions whose performance is close to the performance of the function that is actually learned by the network. This is based on experiments on the standard CIFAR-10 dataset. From a theoretical perspective, we list below a few implications that demonstrate the richness of the conjugate kernel space.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
These implications are valid for fully connected networks of any depth between 2 and log(n), where n is the input dimension. Likewise, they are also valid for convolutional networks of any depth between 2 and log(n), and with constantly many convolutional layers.

• SGD is guaranteed to learn in polynomial time constant degree polynomials with polynomially bounded coefficients. As a corollary, SGD is guaranteed to learn in polynomial time conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. These function classes comprise a considerable fraction of the function classes that are known to be poly-time (PAC) learnable by any method. Exceptions include constant degree polynomial thresholds with no restriction on the coefficients, decision lists, and parities.

• SGD is guaranteed to learn, not necessarily in polynomial time, any continuous function. This complements classical universal approximation results showing that neural networks can (approximately) express any continuous function (see Scarselli and Tsoi [1998] for a survey). Our results strengthen those results by showing that networks are not only able to express those functions, but are actually guaranteed to learn them.

1.1 Related work

Guarantees on SGD. As noted above, there are very few results that provide polynomial time guarantees for SGD on NN. One notable exception is the work of Andoni et al. [2014], which proves a result similar to ours, but in a substantially more restricted setting. Concretely, their result holds for depth-2 fully connected networks, as opposed to the rather general architectures and constant or logarithmic depth in our case. Likewise, the marginal distribution on the instance space is assumed to be Gaussian or uniform, as opposed to arbitrary in our case.
In addition, the algorithm they consider is full gradient descent, which corresponds to SGD with an infinitely large mini-batch, as opposed to SGD with arbitrary mini-batch size in our case. Finally, the underlying task is regression in which the target function is a constant degree polynomial, whereas we consider a rather general supervised learning setting.

Other polynomial time guarantees on learning deep architectures. Various recent papers show that poly-time learning is possible in the case that the learnt function can be realized by a neural network with certain (usually fairly strong) restrictions on the weights [Livni et al., 2014, Zhang et al., 2016a, 2015, 2016b], or under the assumption that the data is generated by a generative model that is derived from the network architecture [Arora et al., 2014, 2016]. We emphasize that the main difference between those results and ours (and the results of Andoni et al. [2014]) is that they do not provide guarantees on the standard SGD learning algorithm. Rather, they show that under the aforementioned conditions there are some algorithms, usually very different from SGD on the network, that are able to learn in polynomial time.

Connection to kernels. As mentioned earlier, our paper builds on Daniely et al. [2016], who developed the association of kernels to NN on which we rely. Several previous papers [Mairal et al., 2014, Cho and Saul, 2009, Rahimi and Recht, 2009, 2007, Neal, 2012, Williams, 1997, Kar and Karnick, 2012, Pennington et al., 2015, Bach, 2015, 2014, Hazan and Jaakkola, 2015, Anselmi et al., 2015] investigated such associations, but in more restricted settings (i.e., for fewer architectures). Some of those papers [Rahimi and Recht, 2009, 2007, Daniely et al., 2016, Kar and Karnick, 2012, Bach, 2015, 2014] also provide measure of concentration results, showing that w.h.p.
the random initialization of the network's weights is rich enough to approximate the functions in the corresponding kernel space. As a result, these papers provide polynomial time guarantees on the variant of SGD where only the last layer is trained. We remark that with the exception of Daniely et al. [2016], those results apply only to depth-2 networks.

1.2 Discussion and future directions

We next want to place this work in the appropriate learning theoretic context, and to elaborate further on this paper's approach to investigating neural networks. For the sake of concreteness, let us restrict the discussion to binary classification over the Boolean cube. Namely, given examples from a distribution $D$ on $\{\pm 1\}^n \times \{0, 1\}$, the goal is to learn a function $h : \{\pm 1\}^n \to \{0, 1\}$ whose 0-1 error, $L^{0-1}_D(h) = \Pr_{(x,y) \sim D}(h(x) \neq y)$, is as small as possible. We will use a bit of terminology. A model is a distribution $D$ on $\{\pm 1\}^n \times \{0, 1\}$, and a model class is a collection $\mathcal{M}$ of models. We note that any function class $\mathcal{H} \subset \{0,1\}^{\{\pm 1\}^n}$ defines a model class, $\mathcal{M}(\mathcal{H})$, consisting of all models $D$ such that $L^{0-1}_D(h) = 0$ for some $h \in \mathcal{H}$. We define the capacity of a model class as the minimal number $m$ for which there is an algorithm such that for every $D \in \mathcal{M}$ the following holds: given $m$ samples from $D$, the algorithm is guaranteed to return, w.p. $\geq 9/10$ over the samples and its internal randomness, a function $h : \{\pm 1\}^n \to \{0, 1\}$ with 0-1 error $\leq 1/10$. We note that for function classes the capacity is the VC dimension, up to a constant factor. Learning theory analyzes learning algorithms via model classes. Concretely, one fixes some model class $\mathcal{M}$ and shows that the algorithm is guaranteed to succeed whenever the underlying model is from $\mathcal{M}$. Often, the connection between the algorithm and the class at hand is very clear. For example, in the case that the model is derived from a function class $\mathcal{H}$, the algorithm might simply be one that finds a function in $\mathcal{H}$ that makes no mistake on the given sample.
The natural choice of a model class for analyzing SGD on NN would be the class of all functions that can be realized by the network, possibly with some reasonable restrictions on the weights. Unfortunately, this approach is probably doomed to fail, as implied by various computational hardness results [Blum and Rivest, 1989, Kearns and Valiant, 1994, Blum et al., 1994, Kharitonov, 1993, Klivans and Sherstov, 2006, 2007, Daniely et al., 2014, Daniely and Shalev-Shwartz, 2016]. So, what model classes should we consider? With a few isolated exceptions (e.g. Bshouty et al. [1998]), all known efficiently learnable model classes are either linear model classes, or contained in efficiently learnable linear model classes; namely, function classes composed of compositions of some predefined embedding with linear threshold functions, or linear functions over some finite field. Coming up with new tractable models would be fascinating progress. Still, as linear function classes are the main tool that learning theory currently has for providing guarantees on learning, it seems natural to try to analyze SGD via linear model classes. Our work follows this line of thought, and we believe that there is much more to achieve via this approach. Concretely, while our bounds are polynomial, the degree of the polynomials is rather large, and possibly much better quantitative bounds can be achieved. To be more concrete, consider a simple fully connected architecture with 2 layers, ReLU activation, and n hidden neurons. In this case, the capacity of the model class that our results guarantee SGD will learn is $\Theta(n^{1/3})$. For comparison, the capacity of the class of all functions that are realized by this network is $\Theta(n^2)$. As a challenge, we encourage the reader to prove that with this architecture (possibly with an activation different from the ReLU), SGD is guaranteed to learn some model class of capacity that is super-linear in n.

2 Preliminaries

Notation.
We denote vectors by bold-face letters (e.g. $\mathbf{x}$), matrices by upper case letters (e.g. $W$), and collections of matrices by bold-face upper case letters (e.g. $\mathbf{W}$). The $p$-norm of $\mathbf{x} \in \mathbb{R}^d$ is denoted $\|\mathbf{x}\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$. We will also use the convention $\|\mathbf{x}\| = \|\mathbf{x}\|_2$. For functions $\sigma : \mathbb{R} \to \mathbb{R}$ we let

$$\|\sigma\| := \sqrt{E_{X \sim \mathcal{N}(0,1)}\, \sigma^2(X)} = \sqrt{\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sigma^2(x)\, e^{-x^2/2}\, dx}\,.$$

Let $G = (V, E)$ be a directed acyclic graph. The set of neighbors incoming to a vertex $v$ is denoted $\mathrm{in}(v) := \{u \in V \mid uv \in E\}$. We also denote $\deg(v) = |\mathrm{in}(v)|$. Given a weight function $\delta : V \to [0, \infty)$ and $U \subset V$ we let $\delta(U) = \sum_{u \in U} \delta(u)$. The $d-1$ dimensional sphere is denoted $\mathbb{S}^{d-1} = \{\mathbf{x} \in \mathbb{R}^d \mid \|\mathbf{x}\| = 1\}$. We use $[x]_+$ to denote $\max(x, 0)$.

Input space. Throughout the paper we assume that each example is a sequence of $n$ elements, each of which is represented as a unit vector. Namely, we fix $n$ and take the input space to be $\mathcal{X} = \mathcal{X}_{n,d} = \left(\mathbb{S}^{d-1}\right)^n$. Each input example is denoted

$$\mathbf{x} = (\mathbf{x}_1, \ldots, \mathbf{x}_n), \text{ where } \mathbf{x}_i \in \mathbb{S}^{d-1}. \quad (1)$$

While this notation is slightly non-standard, it unifies input types seen in various domains (see Daniely et al. [2016]).

Supervised learning. The goal in supervised learning is to devise a mapping from the input space $\mathcal{X}$ to an output space $\mathcal{Y}$ based on a sample $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$, where $(\mathbf{x}_i, y_i) \in \mathcal{X} \times \mathcal{Y}$ are drawn i.i.d. from a distribution $D$ over $\mathcal{X} \times \mathcal{Y}$. A supervised learning problem is further specified by an output length $k$ and a loss function $\ell : \mathbb{R}^k \times \mathcal{Y} \to [0, \infty)$, and the goal is to find a predictor $h : \mathcal{X} \to \mathbb{R}^k$ whose loss, $L_D(h) := E_{(\mathbf{x},y) \sim D}\, \ell(h(\mathbf{x}), y)$, is small. The empirical loss $L_S(h) := \frac{1}{m} \sum_{i=1}^m \ell(h(\mathbf{x}_i), y_i)$ is commonly used as a proxy for $L_D$. When $h$ is defined by a vector $\mathbf{w}$ of parameters, we will use the notations $L_D(\mathbf{w}) = L_D(h)$, $L_S(\mathbf{w}) = L_S(h)$ and $\ell_{(\mathbf{x},y)}(\mathbf{w}) = \ell(h(\mathbf{x}), y)$. Regression problems correspond to $k = 1$, $\mathcal{Y} = \mathbb{R}$ and, for instance, the squared loss $\ell^{\mathrm{square}}(\hat{y}, y) = (\hat{y} - y)^2$. Binary classification is captured by $k = 1$, $\mathcal{Y} = \{\pm 1\}$ and, say, the zero-one loss $\ell^{0-1}(\hat{y}, y) = 1[\hat{y}y \leq 0]$ or the hinge loss $\ell^{\mathrm{hinge}}(\hat{y}, y) = [1 - \hat{y}y]_+$.
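As a quick illustration of these definitions, the following sketch implements the binary losses above together with the empirical loss $L_S$; the function names and toy values are ours, not the paper's.

```python
# Binary losses for labels y in {+1, -1} and the empirical loss
# L_S(h) = (1/m) * sum_i loss(h(x_i), y_i) used as a proxy for L_D.

def squared(yhat, y):
    return (yhat - y) ** 2

def zero_one(yhat, y):
    return 1.0 if yhat * y <= 0 else 0.0

def hinge(yhat, y):
    return max(0.0, 1.0 - yhat * y)  # [1 - yhat*y]_+

def empirical_loss(loss, predictions, labels):
    return sum(loss(p, y) for p, y in zip(predictions, labels)) / len(labels)

preds = [0.8, -0.3, 2.0]
labels = [1.0, 1.0, -1.0]
print(empirical_loss(zero_one, preds, labels))  # 2/3: two sign mistakes
print(empirical_loss(hinge, preds, labels))     # (0.2 + 1.3 + 3.0) / 3 = 1.5
```

Note that the hinge loss is a convex, 1-Lipschitz upper bound on the zero-one loss, which is why it fits the Lipschitz/convexity conditions used later.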
Multiclass classification is captured by $k$ being the number of classes, $\mathcal{Y} = [k]$, and, say, the zero-one loss $\ell^{0-1}(\hat{\mathbf{y}}, y) = 1[\hat{y}_y \leq \max_{y' \neq y} \hat{y}_{y'}]$ or the logistic loss $\ell^{\log}(\hat{\mathbf{y}}, y) = -\log(p_y(\hat{\mathbf{y}}))$, where $p : \mathbb{R}^k \to \Delta^{k-1}$ is given by $p_i(\hat{\mathbf{y}}) = e^{\hat{y}_i} / \sum_{j=1}^k e^{\hat{y}_j}$. A loss $\ell$ is $L$-Lipschitz if for all $y \in \mathcal{Y}$, the function $\ell_y(\hat{\mathbf{y}}) := \ell(\hat{\mathbf{y}}, y)$ is $L$-Lipschitz. Likewise, it is convex if $\ell_y$ is convex for every $y \in \mathcal{Y}$.

Neural network learning. We define a neural network $\mathcal{N}$ to be a vertex-weighted directed acyclic graph (DAG) whose nodes are denoted $V(\mathcal{N})$ and edges $E(\mathcal{N})$. The weight function, denoted $\delta : V(\mathcal{N}) \to [0, \infty)$, has the sole role of dictating the distribution of the initial weights. We will refer to $\mathcal{N}$'s nodes as neurons. Each non-input neuron, i.e. neuron with incoming edges, is associated with an activation function $\sigma_v : \mathbb{R} \to \mathbb{R}$. In this paper, an activation can be any function $\sigma : \mathbb{R} \to \mathbb{R}$ that is right and left differentiable, square integrable with respect to the Gaussian measure on $\mathbb{R}$, and normalized in the sense that $\|\sigma\| = 1$. The neurons having only incoming edges are called the output neurons. To match the setup of supervised learning defined above, a network $\mathcal{N}$ has $nd$ input neurons and $k$ output neurons, denoted $o_1, \ldots, o_k$. A network $\mathcal{N}$ together with a weight vector $\mathbf{w} = \{w_{uv} \mid uv \in E\} \cup \{b_v \mid v \in V \text{ is an internal neuron}\}$ defines a predictor $h_{\mathcal{N},\mathbf{w}} : \mathcal{X} \to \mathbb{R}^k$ whose prediction is given by "propagating" $\mathbf{x}$ forward through the network. Concretely, we define $h_{v,\mathbf{w}}(\cdot)$ to be the output of the subgraph of the neuron $v$ as follows: for an input neuron $v$, $h_{v,\mathbf{w}}$ outputs the corresponding coordinate in $\mathbf{x}$; for internal neurons, we define $h_{v,\mathbf{w}}$ recursively as

$$h_{v,\mathbf{w}}(\mathbf{x}) = \sigma_v\Big(\sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x}) + b_v\Big)\,.$$

For output neurons, we define $h_{v,\mathbf{w}}(\mathbf{x}) = \sum_{u \in \mathrm{in}(v)} w_{uv}\, h_{u,\mathbf{w}}(\mathbf{x})$. Finally, we let $h_{\mathcal{N},\mathbf{w}}(\mathbf{x}) = (h_{o_1,\mathbf{w}}(\mathbf{x}), \ldots, h_{o_k,\mathbf{w}}(\mathbf{x}))$. We next describe the learning algorithm that we analyze in this paper.
While there is no standard training algorithm for neural networks, the algorithms used in practice are usually quite similar to the one we describe, both in the way the weights are initialized and in the way they are updated. We will use the popular Xavier initialization [Glorot and Bengio, 2010] for the network weights. Fix $0 \leq \beta \leq 1$. We say that $\mathbf{w}^0 = \{w^0_{uv}\}_{uv \in E} \cup \{b_v\}_{v \in V \text{ is an internal neuron}}$ are $\beta$-biased random weights (or a $\beta$-biased random initialization) if each weight $w_{uv}$ is sampled independently from a normal distribution with mean 0 and variance $(1-\beta)\, d\, \delta(u)/\delta(\mathrm{in}(v))$ if $u$ is an input neuron, and $(1-\beta)\, \delta(u)/\delta(\mathrm{in}(v))$ otherwise. Finally, each bias term $b_v$ is sampled independently from a normal distribution with mean 0 and variance $\beta$. We note that the rationale behind this initialization scheme is that for every example $\mathbf{x}$ and every neuron $v$ we have $E_{\mathbf{w}^0}\,(h_{v,\mathbf{w}^0}(\mathbf{x}))^2 = 1$ (see Glorot and Bengio [2010]).

Kernel classes. A function $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a reproducing kernel, or simply a kernel, if for every $\mathbf{x}_1, \ldots, \mathbf{x}_r \in \mathcal{X}$, the $r \times r$ matrix $\Gamma_{i,j} = \kappa(\mathbf{x}_i, \mathbf{x}_j)$ is positive semi-definite. Each kernel induces a Hilbert space $\mathcal{H}_\kappa$ of functions from $\mathcal{X}$ to $\mathbb{R}$ with a corresponding norm $\|\cdot\|_\kappa$. For $h \in \mathcal{H}^k_\kappa$ we denote $\|h\|_\kappa = \sqrt{\sum_{i=1}^k \|h_i\|^2_\kappa}$. A kernel and its corresponding space are normalized if $\kappa(\mathbf{x}, \mathbf{x}) = 1$ for all $\mathbf{x} \in \mathcal{X}$.

Algorithm 1 Generic Neural Network Training
Input: Network $\mathcal{N}$, learning rate $\eta > 0$, batch size $m$, number of steps $T > 0$, bias parameter $0 \leq \beta \leq 1$, flag zero prediction layer $\in \{\text{True}, \text{False}\}$.
Let $\mathbf{w}^0$ be $\beta$-biased random weights
if zero prediction layer then
  Set $w^0_{uv} = 0$ whenever $v$ is an output neuron
end if
for $t = 1, \ldots, T$ do
  Obtain a mini-batch $S_t = \{(\mathbf{x}^t_i, y^t_i)\}_{i=1}^m \sim D^m$
  Using back-propagation, calculate a stochastic gradient $\mathbf{v}_t = \nabla L_{S_t}(\mathbf{w}_t)$
  Update $\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \mathbf{v}_t$
end for

Figure 1: Examples of computation skeletons.

Kernels give rise to popular benchmarks for learning algorithms. Fix a normalized kernel $\kappa$ and $M > 0$.
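The $\beta$-biased initialization just described can be sketched for a single fully connected layer, under the simplifying assumption that every neuron has weight $\delta = 1$, so that $\delta(\mathrm{in}(v))$ is just the fan-in; the function name and layer shapes are our illustrative choices, not the paper's.

```python
# beta-biased random weights for one layer with delta ≡ 1:
#   weight variance = (1 - beta) * scale / fan_in, where scale = d for edges
#   leaving input neurons and 1 otherwise; bias variance = beta.
import math
import random

def beta_biased_layer(fan_in, fan_out, beta, is_input_layer, d=1, rng=random):
    """Sample weights W[out][in] and biases b[out] for one layer."""
    scale = d if is_input_layer else 1
    std_w = math.sqrt((1.0 - beta) * scale / fan_in)
    W = [[rng.gauss(0.0, std_w) for _ in range(fan_in)]
         for _ in range(fan_out)]
    b = [rng.gauss(0.0, math.sqrt(beta)) for _ in range(fan_out)]
    return W, b

random.seed(0)
W, b = beta_biased_layer(fan_in=100, fan_out=4, beta=0.1, is_input_layer=False)
# each weight has variance (1 - 0.1)/100 = 0.009, i.e. Xavier-style 1/fan_in
# scaling damped by (1 - beta), with the removed mass moved into the biases
```

With $\delta \equiv 1$ this reduces exactly to the familiar $1/\text{fan-in}$ Xavier variance when $\beta = 0$, which is what keeps $E\,(h_{v,\mathbf{w}^0}(\mathbf{x}))^2 = 1$ at every neuron.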
It is well known that for an $L$-Lipschitz loss $\ell$, the SGD algorithm is guaranteed to return a function $h$ such that $E\, L_D(h) \leq \min_{h' \in \mathcal{H}^k_\kappa,\, \|h'\|_\kappa \leq M} L_D(h') + \epsilon$ using $\left(\frac{LM}{\epsilon}\right)^2$ examples. In the context of multiclass classification, for $\gamma > 0$ we define $\ell^\gamma : \mathbb{R}^k \times [k] \to \mathbb{R}$ by $\ell^\gamma(\hat{\mathbf{y}}, y) = 1[\hat{y}_y \leq \gamma + \max_{y' \neq y} \hat{y}_{y'}]$. We say that a distribution $D$ on $\mathcal{X} \times [k]$ is $M$-separable w.r.t. $\kappa$ if there is $h^* \in \mathcal{H}^k_\kappa$ such that $\|h^*\|_\kappa \leq M$ and $L^1_D(h^*) = 0$. In this case, the perceptron algorithm is guaranteed to return a function $h$ such that $E\, L^{0-1}_D(h) \leq \epsilon$ using $\frac{2M^2}{\epsilon}$ examples. We note that both for the perceptron and SGD, the above mentioned results are best possible, in the sense that any algorithm with the same guarantees would have to use at least the same number of examples, up to a constant factor.

Computation skeletons [Daniely et al., 2016]. In this section we define a simple structure which we term a computation skeleton. The purpose of a computation skeleton is to compactly describe a feed-forward computation from an input to an output. A single skeleton encompasses a family of neural networks that share the same skeletal structure. Likewise, it defines a corresponding normalized kernel.

Definition 1. A computation skeleton $\mathcal{S}$ is a DAG with $n$ inputs, whose non-input nodes are labeled by activations, and which has a single output node $\mathrm{out}(\mathcal{S})$.

Figure 1 shows four example skeletons, omitting the designation of the activation functions. We denote by $|\mathcal{S}|$ the number of non-input nodes of $\mathcal{S}$. The following definition shows how a skeleton, accompanied by a replication parameter $r \geq 1$ and a number of output nodes $k$, induces a neural network architecture.

Figure 2: A $(5, 4)$-realization of the computation skeleton $\mathcal{S}$ with $d = 2$.

Definition 2 (Realization of a skeleton). Let $\mathcal{S}$ be a computation skeleton and consider input coordinates in $\mathbb{S}^{d-1}$ as in (1). For $r, k \geq 1$ we define the following neural network $\mathcal{N} = \mathcal{N}(\mathcal{S}, r, k)$. For each input node in $\mathcal{S}$, $\mathcal{N}$ has $d$ corresponding input neurons with weight $1/d$.
For each internal node $v \in \mathcal{S}$ labelled by an activation $\sigma$, $\mathcal{N}$ has $r$ neurons $v^1, \ldots, v^r$, each with activation $\sigma$ and weight $1/r$. In addition, $\mathcal{N}$ has $k$ output neurons $o_1, \ldots, o_k$ with the identity activation $\sigma(x) = x$ and weight 1. There is an edge $v^i u^j \in E(\mathcal{N})$ whenever $uv \in E(\mathcal{S})$. For every output node $v$ in $\mathcal{S}$, each neuron $v^j$ is connected to all output neurons $o_1, \ldots, o_k$. We term $\mathcal{N}$ the $(r, k)$-fold realization of $\mathcal{S}$. Note that the replication parameter $r$ corresponds, in the terminology of convolutional networks, to the number of channels taken in a convolutional layer and to the number of hidden neurons taken in a fully-connected layer.

In addition to network architectures, a computation skeleton $\mathcal{S}$ also defines a normalized kernel $\kappa_{\mathcal{S}} : \mathcal{X} \times \mathcal{X} \to [-1, 1]$. To define the kernel, we use the notion of a conjugate activation. For $\rho \in [-1, 1]$, we denote by $N_\rho$ the multivariate Gaussian distribution on $\mathbb{R}^2$ with mean 0 and covariance matrix $\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$.

Definition 3 (Conjugate activation). The conjugate activation of an activation $\sigma$ is the function $\hat{\sigma} : [-1, 1] \to \mathbb{R}$ defined as $\hat{\sigma}(\rho) = E_{(X,Y) \sim N_\rho}\, \sigma(X)\sigma(Y)$.

The following definition gives the kernel corresponding to a skeleton.

Definition 4 (Compositional kernels). Let $\mathcal{S}$ be a computation skeleton and let $0 \leq \beta \leq 1$. For every node $v$, inductively define a kernel $\kappa^\beta_v : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as follows. For an input node $v$ corresponding to the $i$th coordinate, define $\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \langle \mathbf{x}_i, \mathbf{y}_i \rangle$. For a non-input node $v$, define

$$\kappa^\beta_v(\mathbf{x}, \mathbf{y}) = \hat{\sigma}_v\Big((1-\beta)\, \frac{\sum_{u \in \mathrm{in}(v)} \kappa^\beta_u(\mathbf{x}, \mathbf{y})}{|\mathrm{in}(v)|} + \beta\Big)\,.$$

The final kernel $\kappa^\beta_{\mathcal{S}}$ is $\kappa^\beta_{\mathrm{out}(\mathcal{S})}$. The resulting Hilbert space and norm are denoted $\mathcal{H}_{\mathcal{S},\beta}$ and $\|\cdot\|_{\mathcal{S},\beta}$ respectively.

3 Main results

An activation $\sigma : \mathbb{R} \to \mathbb{R}$ is called $C$-bounded if $\|\sigma\|_\infty, \|\sigma'\|_\infty, \|\sigma''\|_\infty \leq C$. Fix a skeleton $\mathcal{S}$ and a 1-Lipschitz¹ convex loss $\ell$.
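Definition 3 can be checked numerically. The sketch below estimates the conjugate activation of the normalized ReLU, $\sigma(x) = \sqrt{2}\,\max(x, 0)$, by Monte Carlo over $N_\rho$, and compares it with the closed form of the ReLU's dual, $\hat{\sigma}(\rho) = (\sqrt{1-\rho^2} + \rho(\pi - \arccos\rho))/\pi$; that closed form is the degree-1 arc-cosine kernel from Cho and Saul [2009], not a formula stated in this paper.

```python
# Monte Carlo estimate of sigma_hat(rho) = E[sigma(X) sigma(Y)], (X, Y) ~ N_rho.
import math
import random

def conjugate_activation_mc(sigma, rho, n=200_000, rng=random):
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        z = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho * rho) * z  # (x, y) ~ N_rho
        total += sigma(x) * sigma(y)
    return total / n

relu_norm = lambda t: math.sqrt(2.0) * max(t, 0.0)   # ||sigma|| = 1
relu_hat = lambda r: (math.sqrt(1.0 - r * r)
                      + r * (math.pi - math.acos(r))) / math.pi

random.seed(1)
rho = 0.5
est = conjugate_activation_mc(relu_norm, rho)
# est should be close to relu_hat(0.5) ≈ 0.609, within Monte Carlo noise
```

Note the sanity checks built into the closed form: $\hat{\sigma}(1) = 1$ (a normalized activation paired with itself) and $\hat{\sigma}(0) = 1/\pi$, which is why $\kappa^\beta_{\mathcal{S}}$ stays a normalized kernel under the composition of Definition 4.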
Define $\mathrm{comp}(\mathcal{S}) = \prod_{i=1}^{\mathrm{depth}(\mathcal{S})} \max_{v \in \mathcal{S},\, \mathrm{depth}(v) = i} (\deg(v) + 1)$ and $C(\mathcal{S}) = (8C)^{\mathrm{depth}(\mathcal{S})}\, \mathrm{comp}(\mathcal{S})$, where $C$ is the minimal number for which all the activations in $\mathcal{S}$ are $C$-bounded, and $\mathrm{depth}(v)$ is the maximal length of a path from an input node to $v$. We also define $C'(\mathcal{S}) = (4C')^{\mathrm{depth}(\mathcal{S})}\, \mathrm{comp}(\mathcal{S})$, where $C'$ is the minimal number for which all the activations in $\mathcal{S}$ are $C'$-Lipschitz and satisfy $|\sigma(0)| \leq C'$. Throughout this and the remaining sections we use $\lesssim$ to hide universal constants. Likewise, we fix the bias parameter $\beta$ and therefore omit it from the relevant notation.

¹If $\ell$ is $L$-Lipschitz, we can replace $\ell$ by $\frac{1}{L}\ell$ and the learning rate $\eta$ by $L\eta$. The operation of Algorithm 1 is identical to its operation before the modification. Given this observation, it is very easy to derive results for general $L$ from our results. Hence, to save one parameter, we assume that $L = 1$.

We note that for constant depth skeletons with maximal degree that is polynomial in $n$, $C(\mathcal{S})$ and $C'(\mathcal{S})$ are polynomial in $n$. These quantities are polynomial in $n$ also for various log-depth skeletons. For example, this is true for fully connected skeletons, or more generally, layered skeletons with constantly many layers that are not fully connected.

Theorem 1. Suppose that all activations are $C$-bounded. Let $M, \epsilon > 0$. Suppose that we run Algorithm 1 on the network $\mathcal{N}(\mathcal{S}, r, k)$ with the following parameters:
• $\eta = \eta'/r$ for $\eta' \lesssim \frac{\epsilon}{(C(\mathcal{S}))^2}$
• $T \gtrsim \frac{M^2}{\epsilon\, \eta'}$
• $r \gtrsim \frac{C^4 (T\eta')^2 M^2 (C(\mathcal{S}))^4 \log\!\big(\frac{C|\mathcal{S}|}{\delta}\big)}{\epsilon^2} + d$
• Zero initialized prediction layer
• Arbitrary $m$
Then, w.p. $\geq 1 - \delta$ over the choice of the initial weights, there is $t \in [T]$ such that $E\, L_D(\mathbf{w}_t) \leq \min_{h \in \mathcal{H}^k_{\mathcal{S}},\, \|h\|_{\mathcal{S}} \leq M} L_D(h) + \epsilon$. Here, the expectation is over the training examples.

We next consider ReLU activations. Here, $C'(\mathcal{S}) = (\sqrt{32})^{\mathrm{depth}(\mathcal{S})}\, \mathrm{comp}(\mathcal{S})$.

Theorem 2. Suppose that all activations are the ReLU. Let $M, \epsilon > 0$.
Suppose that we run Algorithm 1 on the network N(S, r, k) with the following parameters:
• η = η′/r, for η′ ≲ ε/(C′(S))²
• T ≳ M²/(η′ε)
• r ≳ (Tη′)²M²(C′(S))⁴ log(|S|/δ)/ε² + d
• Zero-initialized prediction layer
• Arbitrary m
Then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E[L_D(w_t)] ≤ min_{h∈H^k_S, ‖h‖_S≤M} L_D(h) + ε. Here, the expectation is over the training examples. Finally, we consider the case in which the last layer is also initialized randomly. Here, we provide guarantees in a more restricted setting of supervised learning. Concretely, we consider multiclass classification, when D is separable with margin, and ℓ is the logistic loss.

Theorem 3. Suppose that all activations are C-bounded, that D is separable with margin M w.r.t. κ_S, and let ε > 0. Suppose we run Algorithm 1 on N(S, r, k) with the following parameters:
• η = η′/r, for η′ ≲ ε²/(M²(C(S))⁴)
• T ≳ log(k)M²/(η′ε²)
• r ≳ C⁴(C(S))⁴M²(Tη′)² log(C|S|) + k + d
• Randomly initialized prediction layer
• Arbitrary m
Then, w.p. ≥ 1/4 over the choice of the initial weights and the training examples, there is t ∈ [T] such that L^{0−1}_D(w_t) ≤ ε.

3.1 Implications

To demonstrate our results, let us elaborate on a few implications for specific network architectures. To this end, let us fix the instance space X to be either {±1}^n or S^{n−1}. Also, fix a bias parameter 1 ≥ β > 0, a batch size m, and a skeleton S that is a skeleton of a fully connected network of depth between 2 and log(n). Finally, we also fix the activation function to be either the ReLU or a C-bounded activation, assume that the prediction layer is initialized to 0, and fix the loss function to be some convex and Lipschitz loss function. Very similar results are valid for convolutional networks with constantly many convolutional layers. We omit the details, however, for brevity. Our first implication shows that SGD is guaranteed to efficiently learn constant-degree polynomials with polynomially bounded weights.
To this end, let us denote by P_t the collection of degree-t polynomials. Furthermore, for any polynomial p we denote by ‖p‖ the ℓ₂ norm of its coefficients.

Corollary 4. Fix any positive integers t₀, t₁. Suppose that we run Algorithm 1 on the network N(S, r, 1) with the following parameters:
• η ≲ poly(ε/n)
• T, r ≲ poly(n/ε, log(1/δ))
Then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E[L_D(w_t)] ≤ min_{p∈P_{t₀}, ‖p‖≤n^{t₁}} L_D(p) + ε. Here, the expectation is over the training examples.

We note that several hypothesis classes that were studied in PAC learning can be realized by polynomial threshold functions with polynomially bounded coefficients. This includes conjunctions, DNF and CNF formulas with constantly many terms, and DNF and CNF formulas with constantly many literals in each term. If we take the loss function to be the logistic loss or the hinge loss, Corollary 4 implies that SGD efficiently learns these hypothesis classes as well. Our second implication shows that any continuous function is learnable (not necessarily in polynomial time) by SGD.

Corollary 5. Fix a continuous function h* : S^{n−1} → R and ε, δ > 0. Assume that D is realized² by h*. Assume that we run Algorithm 1 on the network N(S, r, 1). If η > 0 is sufficiently small and T and r are sufficiently large, then, w.p. ≥ 1 − δ over the choice of the initial weights, there is t ∈ [T] such that E[L_D(w_t)] ≤ ε.

3.2 Extensions

We next remark on two extensions of our main results. The extended results can be proved in a similar fashion to our results. To avoid cumbersome notation, we restrict the proofs to the main theorems as stated, and will elaborate on the extended results in an extended version of this manuscript. First, we assume that the replication parameter is the same for all nodes. In practice, replication parameters for different nodes are different. This can be captured by a vector {r_v}_{v∈Int(S)}.
Our main results can be extended to this case if, for all v, r_v ≤ Σ_{u∈in(v)} r_u (a requirement that usually holds in practice). Second, we assume that there is no weight sharing, which is standard in convolutional networks. Our results can be extended to convolutional networks with weight sharing. We also note that we assume that in each step of Algorithm 1, a fresh batch of examples is given. In practice this is often not the case; rather, the algorithm is given a training set of examples, and at each step it samples from that set. In this case, our results provide guarantees on the training loss. If the training set is large enough, this also implies guarantees on the population loss via standard sample complexity results.

Acknowledgments
The author thanks Roy Frostig, Yoram Singer and Kunal Talwar for valuable discussions and comments.

²That is, if (x, y) ∼ D then y = h*(x) with probability 1.
Learning to Pivot with Adversarial Networks

Gilles Louppe, New York University, g.louppe@nyu.edu
Michael Kagan, SLAC National Accelerator Laboratory, makagan@slac.stanford.edu
Kyle Cranmer, New York University, kyle.cranmer@nyu.edu

Abstract

Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated to the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot – a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.

1 Introduction

Machine learning techniques have been used to enhance a number of scientific disciplines, and they have the potential to transform even more of the scientific process. One of the challenges of applying machine learning to scientific problems is the need to incorporate systematic uncertainties, which affect both the robustness of inference and the metrics used to evaluate a particular analysis strategy. In this work, we focus on supervised learning techniques where systematic uncertainties can be associated to a data generation process that is not uniquely specified.
In other words, the lack of systematic uncertainties corresponds to the (rare) case in which the process that generates the training data is unique, fully specified, and an accurate representation of the real-world data. By contrast, a common situation when systematic uncertainty is present is that the training data are not representative of the real data. Several techniques for domain adaptation have been developed to create models that are more robust to this binary type of uncertainty. A more generic situation is that there are several plausible data generation processes, specified as a family parametrized by continuous nuisance parameters, as is typically found in scientific domains. In this broader context, statisticians have long been working on robust inference techniques based on the concept of a pivot – a quantity whose distribution is invariant with the nuisance parameters (see e.g., (DeGroot and Schervish, 1975)). Assuming a probability model p(X, Y, Z), where X are the data, Y are the target labels, and Z are the nuisance parameters, we consider the problem of learning a predictive model f(X) for Y conditional on the observed values of X that is robust to uncertainty in the unknown value of Z. We introduce a flexible learning procedure based on adversarial networks (Goodfellow et al., 2014) for enforcing that f(X) is a pivot with respect to Z. We derive theoretical results proving that the procedure converges towards a model that is both optimal and statistically independent of the nuisance parameters (if that model exists) or for which one can tune a trade-off between accuracy and robustness (e.g., as driven by a higher-level objective). In particular, and to the best of our knowledge, our contribution is the first solution for imposing pivotal constraints on a predictive model, working regardless of the type of the nuisance parameter (discrete or continuous) or of its prior. Finally, we demonstrate the effectiveness of the approach with a toy example and examples from particle physics.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Architecture for the adversarial training of a binary classifier f against a nuisance parameter Z. The adversary r models the distribution p(z|f(X; θf) = s) of the nuisance parameters as observed only through the output f(X; θf) of the classifier. By maximizing the antagonistic objective Lr(θf, θr), the classifier f forces p(z|f(X; θf) = s) towards the prior p(z), which happens when f(X; θf) is independent of the nuisance parameter Z and therefore pivotal.

2 Problem statement

We begin with a family of data generation processes p(X, Y, Z), where X ∈ X are the data, Y ∈ Y are the target labels, and Z ∈ Z are the nuisance parameters, which can be continuous or categorical. Let us assume that, prior to incorporating the effect of uncertainty in Z, our goal is to learn a regression function f : X → S with parameters θf (e.g., a neural network-based probabilistic classifier) that minimizes a loss Lf(θf) (e.g., the cross-entropy). In classification, values s ∈ S = R^|Y| correspond to the classifier scores used for mapping to hard predictions y ∈ Y, while S = Y for regression. We augment our initial objective so that inference based on f(X; θf) will be robust to the value z ∈ Z of the nuisance parameter Z – which remains unknown at test time. A formal way of enforcing robustness is to require that the distribution of f(X; θf) conditional on Z (and possibly Y) be invariant with the nuisance parameter Z. Thus, we wish to find a function f such that p(f(X; θf) = s|z) = p(f(X; θf) = s|z′) (1) for all z, z′ ∈ Z and all values s ∈ S of f(X; θf). In words, we are looking for a predictive function f which is a pivotal quantity with respect to the nuisance parameters.
This implies that f(X; θf) and Z are independent random variables. As stated in Eqn. 1, the pivotal quantity criterion is imposed with respect to p(X|Z) where Y is marginalized out. In some situations however (see e.g., Sec. 5.2), class-conditional independence of f(X; θf) on the nuisance Z is preferred, which can then be stated as requiring p(f(X; θf) = s|z, y) = p(f(X; θf) = s|z′, y) (2) for one or several specified values y ∈ Y.

3 Method

Joint training of adversarial networks was first proposed by (Goodfellow et al., 2014) as a way to build a generative model capable of producing samples from random noise z. More specifically, the authors pit a generative model g : R^n → R^p against an adversarial classifier d : R^p → [0, 1] whose antagonistic objective is to recognize real data X from generated data g(Z). Both models g and d are trained simultaneously, in such a way that g learns to produce samples that are difficult to identify by d, while d incrementally adapts to changes in g. At the equilibrium, g models a distribution whose samples can be identified by d only by chance. That is, assuming enough capacity in d and g, the distribution of g(Z) eventually converges towards the real distribution of X.

Algorithm 1 Adversarial training of a classifier f against an adversary r.
Inputs: training data {x_i, y_i, z_i}_{i=1}^N; Outputs: θ̂f, θ̂r.
1: for t = 1 to T do
2:   for k = 1 to K do
3:     Sample minibatch {x_m, z_m, s_m = f(x_m; θf)}_{m=1}^M of size M;
4:     With θf fixed, update r by ascending its stochastic gradient ∇_{θr} E(θf, θr) := ∇_{θr} Σ_{m=1}^M log p_{θr}(z_m|s_m);
5:   end for
6:   Sample minibatch {x_m, y_m, z_m, s_m = f(x_m; θf)}_{m=1}^M of size M;
7:   With θr fixed, update f by descending its stochastic gradient ∇_{θf} E(θf, θr) := ∇_{θf} Σ_{m=1}^M [−log p_{θf}(y_m|x_m) + log p_{θr}(z_m|s_m)], where p_{θf}(y_m|x_m) denotes 1(y_m = 0)(1 − s_m) + 1(y_m = 1)s_m;
8: end for

In this work, we repurpose adversarial networks as a means to constrain the predictive model f in order to satisfy Eqn. 1. As illustrated in Fig.
1, we pit f against an adversarial model r := p_{θr}(z|f(X; θf) = s) with parameters θr and associated loss Lr(θf, θr). This model takes as input realizations s of f(X; θf) and produces as output a function modeling the posterior probability density p_{θr}(z|f(X; θf) = s). Intuitively, if p(f(X; θf) = s|z) varies with z, then the corresponding correlation can be captured by r. By contrast, if p(f(X; θf) = s|z) is invariant with z, as we require, then r should perform poorly and be close to random guessing. Training f such that it additionally minimizes the performance of r therefore acts as a regularization towards Eqn. 1. If Z takes discrete values, then p_{θr} can be represented as a probabilistic classifier R → R^|Z| whose jth output (for j = 1, . . . , |Z|) is the estimated probability mass p_{θr}(z_j|f(X; θf) = s). Similarly, if Z takes continuous values, then we can model the posterior probability density p(z|f(X; θf) = s) with a sufficiently flexible parametric family of distributions P(γ1, γ2, . . . ), where the parameters γj depend on f(X; θf) and θr. The adversary r may take any form, i.e. it does not need to be a neural network, as long as it exposes a differentiable function p_{θr}(z|f(X; θf) = s) of sufficient capacity to represent the true distribution. Fig. 1 illustrates a concrete example where p_{θr}(z|f(X; θf) = s) is a mixture of gaussians, as modeled with a mixture density network (Bishop, 1994). The jth output corresponds to the estimated value of the corresponding parameter γj of that distribution (e.g., the mean, variance and mixing coefficients of its components). The estimated probability density p_{θr}(z|f(X; θf) = s) can then be evaluated for any z ∈ Z and any score s ∈ S. As with generative adversarial networks, we propose to train f and r simultaneously, which we carry out by considering the value function E(θf, θr) = Lf(θf) − Lr(θf, θr) (3) that we optimize by finding the minimax solution θ̂f, θ̂r = arg min_{θf} max_{θr} E(θf, θr).
(4) Without loss of generality, the adversarial training procedure to obtain (θ̂f, θ̂r) is formally presented in Algorithm 1 in the case of a binary classifier f : R^p → [0, 1] modeling p(Y = 1|X). For reasons further explained in Sec. 4, Lf and Lr are respectively set to the expected value of the negative log-likelihood of Y |X under f and of Z|f(X; θf) under r: Lf(θf) = E_{x∼X} E_{y∼Y|x}[−log p_{θf}(y|x)], (5) Lr(θf, θr) = E_{s∼f(X;θf)} E_{z∼Z|s}[−log p_{θr}(z|s)]. (6) The optimization algorithm consists in alternately using stochastic gradient descent to solve Eqn. 4. Finally, in the case of a class-conditional pivot, the settings are the same, except that the adversarial term Lr(θf, θr) is restricted to Y = y.

4 Theoretical results

In this section, we show that in the setting of Algorithm 1, where Lf and Lr are respectively set to the expected value of the negative log-likelihood of Y |X under f and of Z|f(X; θf) under r, the minimax solution of Eqn. 4 corresponds to a classifier f which is a pivotal quantity. In this setting, the nuisance parameter Z is considered as a random variable of prior p(Z), and our goal is to find a function f(·; θf) such that f(X; θf) and Z are independent random variables. Importantly, classification of Y with respect to X is considered in the context where Z is marginalized out, which means that the classifier minimizing Lf is optimal with respect to Y |X, but not necessarily with respect to Y |X, Z. Results hold for a nuisance parameter Z taking either categorical or continuous values. By abuse of notation, H(Z) denotes the differential entropy in this latter case. Finally, the proposition below is derived in a non-parametric setting, by assuming that both f and r have enough capacity.

Proposition 1. If there exists a minimax solution (θ̂f, θ̂r) for Eqn. 4 such that E(θ̂f, θ̂r) = H(Y |X) − H(Z), then f(·; θ̂f) is both an optimal classifier and a pivotal quantity. Proof.
For fixed θf, the adversary r is optimal at θ̂r = arg max_{θr} E(θf, θr) = arg min_{θr} Lr(θf, θr), (7) in which case p_{θ̂r}(z|f(X; θf) = s) = p(z|f(X; θf) = s) for all z and all s, and Lr reduces to the expected entropy E_{s∼f(X;θf)}[H(Z|f(X; θf) = s)] of the conditional distribution of the nuisance parameters. This expectation corresponds to the conditional entropy of the random variables Z and f(X; θf) and can be written as H(Z|f(X; θf)). Accordingly, the value function E can be restated as a function depending on θf only: E′(θf) = Lf(θf) − H(Z|f(X; θf)). (8) In particular, we have the lower bound H(Y |X) − H(Z) ≤ Lf(θf) − H(Z|f(X; θf)) (9) where the equality holds at θ̂f = arg min_{θf} E′(θf) when:
• θ̂f minimizes the negative log-likelihood of Y |X under f, which happens when θ̂f are the parameters of an optimal classifier. In this case, Lf reduces to its minimum value H(Y |X).
• θ̂f maximizes the conditional entropy H(Z|f(X; θf)), since H(Z|f(X; θf)) ≤ H(Z) from the properties of entropy. Note that this latter inequality holds for both the discrete and the differential definitions of entropy.
By assumption, the lower bound is active, thus we have H(Z|f(X; θf)) = H(Z) because of the second condition, which happens exactly when Z and f(X; θf) are independent variables. In other words, the optimal classifier f(·; θ̂f) is also a pivotal quantity. Proposition 1 suggests that if at each step of Algorithm 1 the adversary r is allowed to reach its optimum given f (e.g., by setting K sufficiently high) and if f is updated to improve Lf(θf) − H(Z|f(X; θf)) with sufficiently small steps, then f should converge to a classifier that is both optimal and pivotal, provided such a classifier exists. Therefore, the adversarial term Lr can be regarded as a way to select, among the class of all optimal classifiers, a function f that is also pivotal. Despite the former theoretical characterization of the minimax solution of Eqn.
4, let us note that formal guarantees of convergence towards that solution by Algorithm 1, in the case where a finite number K of steps is taken for r, remain to be proven. In practice, the assumption of existence of an optimal and pivotal classifier may not hold because the nuisance parameter directly shapes the decision boundary. In this case, the lower bound H(Y |X) − H(Z) < Lf(θf) − H(Z|f(X; θf)) (10) is strict: f can either be an optimal classifier or a pivotal quantity, but not both simultaneously.

Figure 2: Toy example. (Left) Conditional probability densities of the decision scores at Z = −σ, 0, σ without adversarial training. The resulting densities are dependent on the continuous parameter Z, indicating that f is not pivotal. (Middle left) The associated decision surface, highlighting the fact that samples are easier to classify for values of Z above σ, hence explaining the dependency. (Middle right) Conditional probability densities of the decision scores at Z = −σ, 0, σ when f is built with adversarial training. The resulting densities are now almost identical to each other, indicating only a small dependency on Z. (Right) The associated decision surface, illustrating how adversarial training bends the decision function vertically to erase the dependency on Z.

In this situation, it is natural to rewrite the value function E as E_λ(θf, θr) = Lf(θf) − λLr(θf, θr), (11)
where λ ≥ 0 is a hyper-parameter controlling the trade-off between the performance of f and its independence with respect to the nuisance parameter. Setting λ to a large value will preferentially enforce f to be pivotal, while setting λ close to 0 will rather constrain f to be optimal. When the lower bound is strict, let us note however that there may exist distinct but equally good solutions θf, θr minimizing Eqn. 11. In this zero-sum game, an increase in accuracy would exactly be compensated by a decrease in pivotality and vice-versa. How to best navigate this Pareto frontier to maximize a higher-level objective remains an open question for future work. Interestingly, let us finally emphasize that our results hold using only the (1D) output s of f(·; θf) as input to the adversary. We could similarly enforce an intermediate representation of the data to be pivotal, e.g. as in (Ganin and Lempitsky, 2014), but this is not necessary.

5 Experiments

In this section, we empirically demonstrate the effectiveness of the approach with a toy example and examples from particle physics. Notably, there are no other approaches to compare to in the case of continuous nuisance parameters, as further explained in Sec. 6. In the case of binary parameters, we do not expect results to be much different from previous works. The source code to reproduce the experiments is available online¹.

5.1 A toy example with a continuous nuisance parameter

As a guiding toy example, let us consider the binary classification of 2D data drawn from multivariate gaussians with equal priors, such that x ∼ N((0, 0), ((1, −0.5), (−0.5, 1))) when Y = 0, (12) and x|Z = z ∼ N((1, 1 + z), ((1, 0), (0, 1))) when Y = 1. (13) The continuous nuisance parameter Z here represents our uncertainty about the location of the mean of the second gaussian. Our goal is to build a classifier f(·; θf) for predicting Y given X, but such that the probability distribution of f(X; θf) is invariant with respect to the nuisance parameter Z.
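The generative process of Eqns. 12–13, together with the gaussian prior on Z used below, can be sampled directly. The sketch is ours (function name and sample-size defaults are arbitrary):

```python
import numpy as np

def sample_toy(n, seed=0):
    """Draw (x, y, z) with equal class priors:
    x|Y=0 ~ N((0,0), [[1,-0.5],[-0.5,1]]), z ~ N(0,1),
    and x|Y=1,Z=z ~ N((1, 1+z), I)."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)
    z = rng.standard_normal(n)
    cov0 = np.array([[1.0, -0.5], [-0.5, 1.0]])
    x = np.empty((n, 2))
    n0 = int((y == 0).sum())
    x[y == 0] = rng.multivariate_normal([0.0, 0.0], cov0, size=n0)
    x[y == 1] = rng.standard_normal((n - n0, 2)) + np.column_stack(
        [np.ones(n - n0), 1.0 + z[y == 1]])
    return x, y, z

x, y, z = sample_toy(50_000)
print(x[y == 0].mean(axis=0))  # ~ (0, 0)
print(x[y == 1].mean(axis=0))  # ~ (1, 1), since E[Z] = 0
```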
Assuming a gaussian prior z ∼ N(0, 1), we generate data {x_i, y_i, z_i}_{i=1}^N, from which we train a neural network f minimizing Lf(θf) without considering its adversary r. The network architecture comprises 2 dense hidden layers of 20 nodes respectively with tanh and ReLU activations, followed by a dense output layer with a single node with a sigmoid activation. As shown in Fig. 2, the resulting classifier is not pivotal, as the conditional probability densities of its decision scores f(X; θf) show large discrepancies between values z of the nuisance parameters. While not shown here, a classifier trained only from data generated at the nominal value Z = 0 would also not be pivotal.

¹https://github.com/glouppe/paper-learning-to-pivot

Figure 3: Toy example. Training curves for Lf(θf), Lr(θf, θr) and Lf(θf) − λLr(θf, θr). Initialized with a pre-trained classifier f, adversarial training was performed for 200 iterations, mini-batches of size M = 128, K = 500 and λ = 50.

Figure 4: Physics example. Approximate median significance as a function of the decision threshold on the output of f. At λ = 10, trading accuracy for independence to pileup results in a net benefit in terms of statistical significance.

Let us now consider the joint training of f against an adversary r implemented as a mixture density network modeling Z|f(X; θf) as a mixture of five gaussians. The network architecture of r comprises 2 dense hidden layers of 20 nodes with ReLU activations, followed by an output layer of 15 nodes corresponding to the means, standard deviations and mixture coefficients of the gaussians.
Output nodes for the mean values come with linear activations, output nodes for the standard deviations with exponential activations to ensure positivity, while output nodes for the mixture coefficients implement the softmax function to ensure positivity and normalization. When running Algorithm 1 as initialized with the classifier f obtained previously, adversarial training effectively reshapes the decision function so that it becomes almost independent of the nuisance parameter, as shown in Fig. 2. The conditional probability densities of the decision scores f(X; θf) are now very similar to each other, indicating only a residual dependency on the nuisance, as theoretically expected. The dynamics of adversarial training are illustrated in Fig. 3, where the losses Lf, Lr and Lf − λLr are evaluated after each iteration. In the first iterations, we observe that the global objective Lf − λLr is minimized by making the classifier less accurate, hence the corresponding increase of Lf, which results in a classifier that is more pivotal, hence the associated increase of Lr and the total net benefit. As learning proceeds, minimizing E requires making predictions that are more accurate, hence decreasing Lf, or that are even less dependent on Z, hence shaping p_{θr} towards the prior p(Z). Indeed, Lf eventually starts decreasing, while remaining bounded from below by min_{θf} Lf(θf), as approximated by the dashed line in the first plot. Similarly, Lr tends towards the differential entropy H(Z) of the prior (where H(Z) = log(σ√(2πe)) = 1.419 in the case of a standard normal), as shown by the dashed line in the second plot. Finally, let us note that the ideal situation of a classifier that is both optimal and pivotal is unreachable for this problem, as shown in the third plot by the offset between Lf − λLr and the dashed line approximating H(Y |X) − λH(Z).
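For intuition, the alternating updates of Algorithm 1 on this toy problem can be written out in a few dozen lines of numpy. The stand-ins below are deliberately simplified choices of ours, not the networks used above: f is a plain logistic model, the adversary is a single unit-variance gaussian (rather than the five-component mixture), so that −log p_{θr}(z|s) reduces to a squared error up to a constant, λ plays the role it has in Eqn. 11, and all gradients are written by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in the spirit of Eqns. 12-13 (identity covariance for class 0, for brevity).
n = 4000
y = rng.integers(0, 2, n)
z = rng.standard_normal(n)
x = rng.standard_normal((n, 2))
x[y == 1] += np.column_stack([np.ones(n), 1.0 + z])[y == 1]

w, b = np.zeros(2), 0.0   # classifier f: p(Y=1|x) = sigmoid(w.x + b)
a, c = 0.0, 0.0           # adversary r: Z | f(x)=s  ~  N(a*s + c, 1)
lr, lam, K, T, M = 0.05, 1.0, 5, 200, 128

def f(xb):
    return 1.0 / (1.0 + np.exp(-(xb @ w + b)))

for t in range(T):
    for _ in range(K):                 # inner loop: fit the adversary (theta_f fixed)
        i = rng.integers(0, n, M)
        s = f(x[i])
        resid = z[i] - (a * s + c)     # ascend the gaussian log-likelihood of Z|s
        a += lr * np.mean(resid * s)
        c += lr * np.mean(resid)
    i = rng.integers(0, n, M)          # outer step: update the classifier (theta_r fixed)
    s = f(x[i])
    resid = z[i] - (a * s + c)
    # gradient w.r.t. the logit of [cross-entropy - lam * adversary squared error]
    dlogit = ((s - y[i]) + lam * a * resid * s * (1 - s)) / M
    w -= lr * dlogit @ x[i]
    b -= lr * dlogit.sum()

s_all = f(x)
L_f = -np.mean(y * np.log(s_all + 1e-9) + (1 - y) * np.log(1 - s_all + 1e-9))
print(L_f)  # classification loss after adversarially constrained training
```

The adversarial term pushes f towards relying on the first input coordinate, whose class separation does not depend on Z, mirroring the bending of the decision surface in Fig. 2.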
5.2 High energy physics examples

Binary Case. Experiments at high energy colliders like the LHC (Evans and Bryant, 2008) are searching for evidence of new particles beyond those described by the Standard Model (SM) of particle physics. A wide array of theories predict the existence of new massive particles that would decay to known particles in the SM such as the W boson. The W boson is unstable and can decay to two quarks, each of which produces collimated sprays of particles known as jets. If the exotic particle is heavy, then the W boson will be moving very fast, and relativistic effects will cause the two jets from its decay to merge into a single ‘W-jet’. These W-jets have a rich internal substructure. However, jets are also produced ubiquitously at high energy colliders through more mundane processes in the SM, which leads to a challenging classification problem that is beset with a number of sources of systematic uncertainty. The classification challenge used here is common in jet substructure studies (see e.g. (CMS Collaboration, 2014; ATLAS Collaboration, 2015, 2014)): we aim to distinguish between normal jets produced copiously at the LHC (Y = 0) and W-jets (Y = 1) potentially coming from an exotic process. We reuse the datasets used in (Baldi et al., 2016a). Challenging in its own right, this classification problem is made all the more difficult by the presence of pileup, or multiple proton-proton interactions occurring simultaneously with the primary interaction. These pileup interactions produce additional particles that can contribute significant energies to jets unrelated to the underlying discriminating information. The number of pileup interactions can vary with the running conditions of the collider, and we want the classifier to be robust to these conditions.
Taking some liberty, we consider an extreme case with a categorical nuisance parameter, where Z = 0 corresponds to events without pileup and Z = 1 corresponds to events with pileup, for which there are an average of 50 independent pileup interactions overlaid. We do not expect that we will be able to find a function f that simultaneously minimizes the classification loss Lf and is pivotal. Thus, we need to optimize the hyper-parameter λ of Eqn. 11 with respect to a higher-level objective. In this case, the natural higher-level context is a hypothesis test of a null hypothesis with no Y = 1 events against an alternate hypothesis that is a mixture of Y = 0 and Y = 1 events. In the absence of systematic uncertainties, optimizing Lf simultaneously optimizes the power of a classical hypothesis test in the Neyman-Pearson sense. When we include systematic uncertainties we need to balance the classification performance against the robustness to uncertainty in Z. Since we are still performing a hypothesis test against the null, we only wish to impose the pivotal property on Y = 0 events. To this end, we use as a higher level objective the Approximate Median Significance (AMS), which is a natural generalization of the power of a hypothesis test when systematic uncertainties are taken into account (see Eqn. 20 of Adam-Bourdarios et al. (2014)). For several values of λ, we train a classifier using Algorithm 1 but consider the adversarial term Lr conditioned on Y = 0 only, as outlined in Sec. 2. The architecture of f comprises 3 hidden layers of 64 nodes respectively with tanh, ReLU and ReLU activations, and is terminated by a single final output node with a sigmoid activation. The architecture of r is the same, but uses only ReLU activations in its hidden nodes. As in the previous example, adversarial training is initialized with f pre-trained. 
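For reference, the AMS objective cited above has, in the HiggsML variant of Adam-Bourdarios et al. (2014), the closed form AMS = sqrt(2((s + b + b_reg) ln(1 + s/(b + b_reg)) − s)), where s and b are the expected signal and background counts passing the selection and b_reg = 10 is a regularization term. A small helper, under the assumption that this is the variant meant here:

```python
import math

def ams(s, b, b_reg=10.0):
    """Approximate median significance (regularized HiggsML-style variant):
    s, b are expected signal/background counts passing the selection threshold."""
    return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))

# For s << b, AMS approaches the familiar s/sqrt(b + b_reg) approximation.
print(ams(10.0, 1000.0))  # close to 10/sqrt(1010) ~ 0.315
print(ams(0.0, 1000.0))   # 0.0
```

Scanning such a function over decision thresholds on f, as in Fig. 4, is how λ can be selected against this higher-level objective.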
Experiments are performed on a subset of 150,000 samples for training, while AMS is evaluated on an independent test set of 5,000,000 samples. Both training and testing samples are weighted such that the null hypothesis corresponds to 1000 Y = 0 events and the alternate hypothesis includes an additional 100 Y = 1 events prior to any thresholding on f. This allows us to probe the efficacy of the method proposed here in a representative background-dominated high energy physics environment. Results reported below are averages over 5 runs. As Fig. 4 illustrates, without adversarial training (at λ = 0|Z = 0 when building a classifier at the nominal value Z = 0 only, or at λ = 0 when building a classifier on data sampled from p(X, Y, Z)), the AMS peaks at 7. By contrast, as the pivotal constraint is made stronger (for λ > 0) the AMS peak moves higher, with a maximum value around 7.8 for λ = 10. Trading classification accuracy for robustness to pileup thereby results in a net benefit in terms of the power of the hypothesis test. Setting λ too high, however (e.g. λ = 500), results in a decrease of the maximum AMS, by focusing the capacity of f too strongly on independence with Z at the expense of accuracy. In effect, optimizing λ yields a principled and effective approach to control the trade-off between accuracy and robustness that ultimately maximizes the power of the enveloping hypothesis test.

Continuous Case. Recently, an independent group has used our approach to learn jet classifiers that are independent of the jet mass (Shimmin et al., 2017), which is a continuous attribute. The results of their studies show that the adversarial training strategy works very well for real-world problems with continuous attributes, thus enhancing the sensitivity of searches for new physics at the LHC.
6 Related work

Learning to pivot can be related to the problem of domain adaptation (Blitzer et al., 2006; Pan et al., 2011; Gopalan et al., 2011; Gong et al., 2013; Baktashmotlagh et al., 2013; Ajakan et al., 2014; Ganin and Lempitsky, 2014), where the goal is often stated as trying to learn a domain-invariant representation of the data. Likewise, our method also relates to the problem of enforcing fairness in classification (Kamishima et al., 2012; Zemel et al., 2013; Feldman et al., 2015; Edwards and Storkey, 2015; Zafar et al., 2015; Louizos et al., 2015), which is stated as learning a classifier that is independent of some chosen attribute such as gender, color or age. For both families of methods, the problem can equivalently be stated as learning a classifier which is a pivotal quantity with respect to either the domain or the selected feature. As an example, unsupervised domain adaptation with labeled data from a source domain and unlabeled data from a target domain can be recast as learning a predictive model f (i.e., trained to minimize Lf evaluated on labeled source data only) that is also a pivot with respect to the domain Z (i.e., trained to maximize Lr evaluated on both source and target data). In this context, Ganin and Lempitsky (2014) and Edwards and Storkey (2015) are certainly among the closest to our work, in which domain invariance and fairness are enforced through an adversarial minimax setup composed of a classifier and an adversarial discriminator. Following this line of work, our method can be regarded as a unified generalization that also supports a continuously parametrized family of domains, or that enforces fairness over continuous attributes. Most related work is based on the strong and limiting assumption that Z is a binary random variable (e.g., Z = 0 for the source domain, and Z = 1 for the target domain).
In particular, Pan et al. (2011); Gong et al. (2013); Baktashmotlagh et al. (2013); Zemel et al. (2013); Ganin and Lempitsky (2014); Ajakan et al. (2014); Edwards and Storkey (2015); Louizos et al. (2015) are all based on the minimization of some form of divergence between the two distributions of f(X)|Z = 0 and f(X)|Z = 1. For this reason, these works cannot be directly generalized to non-binary or continuous nuisance parameters, from both a practical and a theoretical point of view. Notably, Kamishima et al. (2012) enforce fairness through a prejudice regularization term based on empirical estimates of p(f(X)|Z). While this approach is in principle sufficient for handling non-binary nuisance parameters Z, it requires accurate empirical estimates of p(f(X)|Z = z) for all values z, which quickly becomes impractical as the cardinality of Z increases. By contrast, our approach models the conditional dependence through an adversarial network, which allows for generalization without necessarily requiring an exponentially growing number of training examples. A common approach to account for systematic uncertainties in a scientific context (e.g. in high energy physics) is to take as fixed a classifier f built from training data for a nominal value z0 of the nuisance parameter, and then propagate uncertainty by estimating p(f(x)|z) with a parametrized calibration procedure. Clearly, this classifier is not optimal for z ≠ z0. To overcome this issue, the classifier f is sometimes built instead on a mixture of training data generated from several plausible values z0, z1, . . . of the nuisance parameter. While this certainly improves classification performance with respect to the marginal model p(X, Y), there is no reason to expect the resulting classifier to be pivotal, as shown previously in Sec. 5.1.
As an alternative, parametrized classifiers (Cranmer et al., 2015; Baldi et al., 2016b) directly take (nuisance) parameters as additional input variables, hence ultimately providing the most statistically powerful approach for incorporating the effect of systematics on the underlying classification task. In practice, however, parametrized classifiers are computationally expensive to build and evaluate. In particular, calibrating their decision function, i.e. approximating p(f(x, z)|y, z) as a continuous function of z, remains an open challenge. By contrast, constraining f to be pivotal yields a classifier that can be directly used in a wider range of applications, since the dependence on the nuisance parameter Z has already been eliminated.

7 Conclusions

In this work, we proposed a flexible learning procedure for building a predictive model that is independent of continuous or categorical nuisance parameters by jointly training two neural networks in an adversarial fashion. From a theoretical perspective, we motivated the proposed algorithm by showing that the minimax value of its value function corresponds to a predictive model that is both optimal and pivotal (if such a model exists) or for which one can tune the trade-off between power and robustness. From an empirical point of view, we confirmed the effectiveness of our method on a toy example and on a particle physics example. In terms of applications, our solution can be used in any situation where the training data may not be representative of the real data the predictive model will be applied to in practice. In the scientific context, the presence of systematic uncertainty can be incorporated by considering a family of data generation processes, and it would be worth revisiting those scientific problems that utilize machine learning in light of this technique.
The approach also extends to cases where independence of the predictive model with respect to observed random variables is desired, as in fairness for classification.

8 Acknowledgements

We would like to thank the authors of Baldi et al. (2016a) for sharing the data used in their studies. KC and GL are both supported through NSF ACI-1450310; additionally, KC is supported through PHY-1505463 and PHY-1205376. MK is supported by the US Department of Energy (DOE) under grant DE-AC02-76SF00515 and by the SLAC Panofsky Fellowship.

References

Adam-Bourdarios, C., Cowan, G., Germain, C., Guyon, I., Kégl, B., and Rousseau, D. (2014). The Higgs boson machine learning challenge. In NIPS 2014 Workshop on High-energy Physics and Machine Learning, volume 42, page 37.

Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., and Marchand, M. (2014). Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446.

ATLAS Collaboration (2014). Performance of Boosted W Boson Identification with the ATLAS Detector. Technical Report ATL-PHYS-PUB-2014-004, CERN, Geneva.

ATLAS Collaboration (2015). Identification of boosted, hadronically-decaying W and Z bosons in √s = 13 TeV Monte Carlo Simulations for ATLAS. Technical Report ATL-PHYS-PUB-2015-033, CERN, Geneva.

Baktashmotlagh, M., Harandi, M., Lovell, B., and Salzmann, M. (2013). Unsupervised domain adaptation by domain invariant projection. In Proceedings of the IEEE International Conference on Computer Vision, pages 769–776.

Baldi, P., Bauer, K., Eng, C., Sadowski, P., and Whiteson, D. (2016a). Jet substructure classification in high-energy physics with deep neural networks. Physical Review D, 93(9):094034.

Baldi, P., Cranmer, K., Faucett, T., Sadowski, P., and Whiteson, D. (2016b). Parameterized neural networks for high-energy physics. Eur. Phys. J., C76(5):235.

Bishop, C. M. (1994). Mixture density networks.

Blitzer, J., McDonald, R., and Pereira, F. (2006). Domain adaptation with structural correspondence learning.
In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128. Association for Computational Linguistics.

CMS Collaboration (2014). Identification techniques for highly boosted W bosons that decay into hadrons. JHEP, 12:017.

Cranmer, K., Pavez, J., and Louppe, G. (2015). Approximating likelihood ratios with calibrated discriminative classifiers. arXiv preprint arXiv:1506.02169.

DeGroot, M. H. and Schervish, M. J. (1975). Probability and Statistics. 1st edition.

Edwards, H. and Storkey, A. J. (2015). Censoring representations with an adversary. arXiv preprint arXiv:1511.05897.

Evans, L. and Bryant, P. (2008). LHC Machine. JINST, 3:S08001.

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., and Venkatasubramanian, S. (2015). Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268. ACM.

Ganin, Y. and Lempitsky, V. (2014). Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495.

Gong, B., Grauman, K., and Sha, F. (2013). Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In Proceedings of The 30th International Conference on Machine Learning, pages 222–230.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.

Gopalan, R., Li, R., and Chellappa, R. (2011). Domain adaptation for object recognition: An unsupervised approach. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 999–1006. IEEE.

Kamishima, T., Akaho, S., Asoh, H., and Sakuma, J. (2012). Fairness-aware classifier with prejudice remover regularizer. Machine Learning and Knowledge Discovery in Databases, pages 35–50.

Louizos, C., Swersky, K., Li, Y., Welling, M., and Zemel, R.
(2015). The variational fair autoencoder. arXiv preprint arXiv:1511.00830.

Pan, S. J., Tsang, I. W., Kwok, J. T., and Yang, Q. (2011). Domain adaptation via transfer component analysis. Neural Networks, IEEE Transactions on, 22(2):199–210.

Shimmin, C., Sadowski, P., Baldi, P., Weik, E., Whiteson, D., Goul, E., and Søgaard, A. (2017). Decorrelated jet substructure tagging using adversarial neural networks.

Zafar, M. B., Valera, I., Rodriguez, M. G., and Gummadi, K. P. (2015). Fairness constraints: A mechanism for fair classification. arXiv preprint arXiv:1507.05259.

Zemel, R. S., Wu, Y., Swersky, K., Pitassi, T., and Dwork, C. (2013). Learning fair representations. ICML (3), 28:325–333.
Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration

Jason Altschuler MIT jasonalt@mit.edu Jonathan Weed MIT jweed@mit.edu Philippe Rigollet MIT rigollet@mit.edu

Abstract

Computing optimal transport distances such as the earth mover’s distance is a fundamental problem in machine learning, statistics, and computer vision. Despite the recent introduction of several algorithms with good empirical performance, it is unknown whether general optimal transport distances can be approximated in near-linear time. This paper demonstrates that this ambitious goal is in fact achieved by Cuturi’s Sinkhorn Distances. This result relies on a new analysis of Sinkhorn iterations, which also directly suggests a new greedy coordinate descent algorithm GREENKHORN with the same theoretical guarantees. Numerical simulations illustrate that GREENKHORN significantly outperforms the classical SINKHORN algorithm in practice.

Dedicated to the memory of Michael B. Cohen

1 Introduction

Computing distances between probability measures on metric spaces, or more generally between point clouds, plays an increasingly preponderant role in machine learning [SL11, MJ15, LG15, JSCG16, ACB17], statistics [FCCR16, PZ16, SR04, BGKL17] and computer vision [RTG00, BvdPPH11, SdGP+15]. A prominent example of such distances is the earth mover’s distance introduced in [WPR85] (see also [RTG00]), which is a special case of Wasserstein distance, or optimal transport (OT) distance [Vil09]. While OT distances exhibit a unique ability to capture geometric features of the objects at hand, they suffer from a heavy computational cost that had been prohibitive in large scale applications until the recent introduction to the machine learning community of Sinkhorn Distances by Cuturi [Cut13].
Combined with other numerical tricks, these recent advances have enabled the treatment of large point clouds in computer graphics such as triangle meshes [SdGP+15] and high-resolution neuroimaging data [GPC15]. Sinkhorn Distances rely on the idea of entropic penalization, which has been implemented in similar problems at least since Schrödinger [Sch31, Leo14]. This powerful idea has been successfully applied to a variety of contexts not only as a statistical tool for model selection [JRT08, RT11, RT12] and online learning [CBL06], but also as an optimization gadget in first-order optimization methods such as mirror descent and proximal methods [Bub15].

Related work. Computing an OT distance amounts to solving the following linear program:

min_{P ∈ U_{r,c}} ⟨P, C⟩,  U_{r,c} := { P ∈ IR^{n×n}_+ : P1 = r, P^⊤1 = c },  (1)

where 1 is the all-ones vector in IR^n, C ∈ IR^{n×n}_+ is a given cost matrix, and r ∈ IR^n, c ∈ IR^n are given vectors with positive entries that sum to one. Typically C is a matrix containing pairwise distances (and is thus dense), but in this paper we allow C to be an arbitrary non-negative dense matrix with bounded entries since our results are more general. For brevity, this paper focuses on square matrices C and P, since extensions to the rectangular case are straightforward.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

This paper is at the intersection of two lines of research: a theoretical one that aims at finding (near) linear time approximation algorithms for simple problems that are already known to run in polynomial time, and a practical one that pursues fast algorithms for solving optimal transport approximately for large datasets. Noticing that (1) is a linear program with O(n) linear constraints and certain graphical structure, one can use the recent Lee-Sidford linear solver to find a solution in time Õ(n^2.5) [LS14], improving over the previous standard of O(n^3.5) [Ren88].
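As a concrete illustration of problem (1): for n = 2, the feasible set U_{r,c} is a one-parameter family, so the exact optimum of the linear objective is attained at an endpoint of an interval. The sketch below (our own toy instance, not from the paper) computes the exact OT cost this way.

```python
import numpy as np

# A tiny n = 2 instance of problem (1): cost matrix and marginals.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
r = np.array([0.4, 0.6])
c = np.array([0.5, 0.5])

# Any P in U_{r,c} has the form
#   P = [[t, r0 - t], [c0 - t, 1 - r0 - c0 + t]]
# with t ranging over [max(0, r0 + c0 - 1), min(r0, c0)].
lo = max(0.0, r[0] + c[0] - 1.0)
hi = min(r[0], c[0])

def cost(t):
    P = np.array([[t, r[0] - t],
                  [c[0] - t, 1.0 - r[0] - c[0] + t]])
    return (P * C).sum()

# A linear objective over an interval is minimized at an endpoint.
t_star = min((lo, hi), key=cost)
ot = cost(t_star)  # here ot == 0.1: 0.4 units matched at cost 0, 0.1 moved at cost 1
```

For n > 2 the feasible polytope has many more vertices, which is exactly why general-purpose LP solvers, and the near-linear time alternatives studied in this paper, are needed.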
While no practical implementation of the Lee-Sidford algorithm is known, it provides a theoretical benchmark for our methods. Their result is part of a long line of work initiated by the seminal paper of Spielman and Teng [ST04] on solving linear systems of equations, which has provided a building block for near-linear time approximation algorithms in a variety of combinatorially structured linear problems. A separate line of work has focused on obtaining faster algorithms for (1) by imposing additional assumptions. For instance, [AS14] obtain approximations to (1) when the cost matrix C arises from a metric, but their running times are not truly near-linear. [SA12, ANOY14] develop even faster algorithms for (1), but require C to arise from a low-dimensional ℓ_p metric. Practical algorithms for computing OT distances include Orlin’s algorithm for the Uncapacitated Minimum Cost Flow problem via a standard reduction. Like interior point methods, it has a provable complexity of O(n^3 log n). This dependence on the dimension is also observed in practice, thereby preventing large-scale applications. To overcome the limitations of such general solvers, various ideas ranging from graph sparsification [PW09] to metric embedding [IT03, GD04, SJ08] have been proposed over the years to deal with particular cases of OT distance. Our work complements both lines of work, theoretical and practical, by providing the first near-linear time guarantee to approximate (1) for general non-negative cost matrices. Moreover, we show that this performance is achieved by algorithms that are also very efficient in practice. Central to our contribution are recent developments of scalable methods for general OT that leverage the idea of entropic regularization [Cut13, BCC+15, GCPB16]. However, the apparent practical efficacy of these approaches came without theoretical guarantees.
In particular, showing that this regularization yields an algorithm to compute or approximate general OT distances in time nearly linear in the input size n^2 was an open question before this work.

Our contribution. The contribution of this paper is twofold. First, we demonstrate that, with an appropriate choice of parameters, the algorithm for Sinkhorn Distances introduced in [Cut13] is in fact a near-linear time approximation algorithm for computing OT distances between discrete measures. This is the first proof that such near-linear time results are achievable for optimal transport. We also provide previously unavailable guidance for parameter tuning in this algorithm. Core to our work is a new and arguably more natural analysis of the Sinkhorn iteration algorithm, which we show converges in a number of iterations independent of the dimension n of the matrix to balance. In particular, this analysis directly suggests a greedy variant of Sinkhorn iteration that also provably runs in near-linear time and significantly outperforms the classical algorithm in practice. Finally, while most approximation algorithms output an approximation of the optimum value of the linear program (1), we also describe a simple, parallelizable rounding algorithm that provably outputs a feasible solution to (1). Specifically, for any ε > 0 and bounded, non-negative cost matrix C, we describe an algorithm that runs in time Õ(n^2/ε^3) and outputs P̂ ∈ U_{r,c} such that

⟨P̂, C⟩ ≤ min_{P ∈ U_{r,c}} ⟨P, C⟩ + ε.

We emphasize that our analysis does not require the cost matrix C to come from an underlying metric; we only require C to be non-negative. This implies that our results also give, for example, near-linear time approximation algorithms for Wasserstein p-distances between discrete measures.

Notation. We denote non-negative real numbers by IR_+, the set of integers {1, . . . , n} by [n], and the n-dimensional simplex by ∆_n := {x ∈ IR^n_+ : Σ_{i=1}^n x_i = 1}.
For two probability distributions p, q ∈ ∆_n such that p is absolutely continuous w.r.t. q, we define the entropy H(p) of p and the Kullback-Leibler divergence K(p‖q) between p and q respectively by

H(p) = Σ_{i=1}^n p_i log(1/p_i),  K(p‖q) := Σ_{i=1}^n p_i log(p_i/q_i).

Similarly, for a matrix P ∈ IR^{n×n}_+, we define the entropy H(P) entrywise as Σ_{ij} P_ij log(1/P_ij). We use 1 and 0 to denote the all-ones and all-zeroes vectors in IR^n. For a matrix A = (A_ij), we denote by exp(A) the matrix with entries (e^{A_ij}). For A ∈ IR^{n×n}, we denote its row and column sums by r(A) := A1 ∈ IR^n and c(A) := A^⊤1 ∈ IR^n, respectively. The coordinates r_i(A) and c_j(A) denote the ith row sum and jth column sum of A, respectively. We write ‖A‖_∞ = max_ij |A_ij| and ‖A‖_1 = Σ_ij |A_ij|. For two matrices of the same dimension, we denote the Frobenius inner product of A and B by ⟨A, B⟩ = Σ_ij A_ij B_ij. For a vector x ∈ IR^n, we write D(x) ∈ IR^{n×n} to denote the diagonal matrix with entries (D(x))_ii = x_i. For any two nonnegative sequences (u_n), (v_n), we write u_n = Õ(v_n) if there exist positive constants C, c such that u_n ≤ C v_n (log n)^c. For any two real numbers, we write a ∧ b = min(a, b).

2 Optimal Transport in near-linear time

In this section, we describe the main algorithm studied in this paper. Pseudocode appears in Algorithm 1.

Algorithm 1 APPROXOT(C, r, c, ε)
η ← (4 log n)/ε, ε′ ← ε/(8‖C‖_∞)
\\ Step 1: Approximately project onto U_{r,c}
1: A ← exp(−ηC)
2: B ← PROJ(A, U_{r,c}, ε′)
\\ Step 2: Round to feasible point in U_{r,c}
3: Output P̂ ← ROUND(B, U_{r,c})

Algorithm 2 ROUND(F, U_{r,c})
1: X ← D(x) with x_i = (r_i / r_i(F)) ∧ 1
2: F′ ← XF
3: Y ← D(y) with y_j = (c_j / c_j(F′)) ∧ 1
4: F′′ ← F′Y
5: err_r ← r − r(F′′), err_c ← c − c(F′′)
6: Output G ← F′′ + err_r err_c^⊤ / ‖err_r‖_1

The core of our algorithm is the computation of an approximate Sinkhorn projection of the matrix A = exp(−ηC) (Step 1), details for which will be given in Section 3.
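The rounding step (Algorithm 2), by contrast, is elementary to implement. A NumPy sketch of our own (with a guard for the already-feasible case, where ‖err_r‖_1 = 0):

```python
import numpy as np

def round_to_feasible(F, r, c):
    """Round a nonnegative matrix F onto U_{r,c} (Algorithm 2)."""
    # Scale rows down so that no row sum exceeds its target r_i.
    x = np.minimum(r / F.sum(axis=1), 1.0)
    F = x[:, None] * F
    # Scale columns down so that no column sum exceeds its target c_j.
    y = np.minimum(c / F.sum(axis=0), 1.0)
    F = F * y[None, :]
    # Distribute the remaining mass with a rank-one correction;
    # the down-scaling makes every entry of err_r and err_c nonnegative.
    err_r = r - F.sum(axis=1)
    err_c = c - F.sum(axis=0)
    s = err_r.sum()
    if s <= 0.0:  # already feasible (up to floating point)
        return F
    return F + np.outer(err_r, err_c) / s
```

The output has exactly the marginals r and c, and by Lemma 7 below it stays ℓ_1-close to F whenever F’s marginals are close to the targets.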
Since our approximate Sinkhorn projection is not guaranteed to lie in the feasible set, we round our approximation to ensure that it lies in U_{r,c} (Step 2). Pseudocode for a simple, parallelizable rounding procedure is given in Algorithm 2. Algorithm 1 hinges on two subroutines: PROJ and ROUND. We give two algorithms for PROJ: SINKHORN and GREENKHORN. We devote Section 3 to their analysis, which is of independent interest. On the other hand, ROUND is fairly simple. Its analysis is postponed to Section 4. Our main theorem about Algorithm 1 is the following accuracy and runtime guarantee. The proof is postponed to Section 4, since it relies on the analysis of PROJ and ROUND.

Theorem 1. Algorithm 1 returns a point P̂ ∈ U_{r,c} satisfying ⟨P̂, C⟩ ≤ min_{P ∈ U_{r,c}} ⟨P, C⟩ + ε in time O(n^2 + S), where S is the running time of the subroutine PROJ(A, U_{r,c}, ε′). In particular, if ‖C‖_∞ ≤ L, then S can be O(n^2 L^3 (log n) ε^{-3}), so that Algorithm 1 runs in O(n^2 L^3 (log n) ε^{-3}) time.

Remark 1. The time complexity in the above theorem reflects only elementary arithmetic operations. In the interest of clarity, we ignore questions of bit complexity that may arise from taking exponentials. The effect of this simplification is marginal since it can be easily shown [KLRS08] that the maximum bit complexity throughout the iterations of our algorithm is O(L(log n)/ε). As a result, factoring in bit complexity leads to a runtime of O(n^2 L^4 (log n)^2 ε^{-4}), which is still truly near-linear.

3 Linear-time approximate Sinkhorn projection

The core of our OT algorithm is the entropic penalty proposed by Cuturi [Cut13]:

P_η := argmin_{P ∈ U_{r,c}} ⟨P, C⟩ − η^{-1} H(P).  (2)

The solution to (2) can be characterized explicitly by analyzing its first-order conditions for optimality.

Lemma 1. [Cut13] For any cost matrix C and r, c ∈ ∆_n, the minimization program (2) has a unique minimum at P_η ∈ U_{r,c} of the form P_η = XAY, where A = exp(−ηC) and X, Y ∈ IR^{n×n}_+ are both diagonal matrices.
The matrices (X, Y) are unique up to a constant factor. We call the matrix P_η appearing in Lemma 1 the Sinkhorn projection of A, denoted Π_S(A, U_{r,c}), after Sinkhorn, who proved uniqueness in [Sin67]. Computing Π_S(A, U_{r,c}) exactly is impractical, so we implement instead an approximate version PROJ(A, U_{r,c}, ε′), which outputs a matrix B = XAY that may not lie in U_{r,c} but satisfies the condition ‖r(B) − r‖_1 + ‖c(B) − c‖_1 ≤ ε′. We stress that this condition is very natural from a statistical standpoint, since it requires that r(B) and c(B) are close to the target marginals r and c in total variation distance.

3.1 The classical Sinkhorn algorithm

Given a matrix A, Sinkhorn proposed a simple iterative algorithm to approximate the Sinkhorn projection Π_S(A, U_{r,c}), which is now known as the Sinkhorn-Knopp algorithm or RAS method. Despite the simplicity of this algorithm and its good performance in practice, it has been difficult to analyze. As a result, recent work showing that Π_S(A, U_{r,c}) can be approximated in near-linear time [AZLOW17, CMTV17] has bypassed the Sinkhorn-Knopp algorithm entirely.¹ In our work, we obtain a new analysis of the simple and practical Sinkhorn-Knopp algorithm, showing that it also approximates Π_S(A, U_{r,c}) in near-linear time.

Algorithm 3 SINKHORN(A, U_{r,c}, ε′)
1: Initialize k ← 0
2: A^(0) ← A/‖A‖_1, x^0 ← 0, y^0 ← 0
3: while dist(A^(k), U_{r,c}) > ε′ do
4:   k ← k + 1
5:   if k odd then
6:     x_i ← log(r_i / r_i(A^(k−1))) for i ∈ [n]
7:     x^k ← x^(k−1) + x, y^k ← y^(k−1)
8:   else
9:     y_j ← log(c_j / c_j(A^(k−1))) for j ∈ [n]
10:    y^k ← y^(k−1) + y, x^k ← x^(k−1)
11:  A^(k) = D(exp(x^k)) A D(exp(y^k))
12: Output B ← A^(k)

Pseudocode for the Sinkhorn-Knopp algorithm appears in Algorithm 3. In brief, it is an alternating projection procedure which renormalizes the rows and columns of A in turn so that they match the desired row and column marginals r and c. At each step, it prescribes to either modify all the rows by multiplying row i by r_i/r_i(A) for i ∈ [n], or to do the analogous operation on the columns.
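The alternating scalings take only a few lines in NumPy. The sketch below is our own variant of Algorithm 3, folding one row update and one column update into each loop pass and checking the ℓ_1 stopping condition between passes:

```python
import numpy as np

def sinkhorn(A, r, c, eps):
    """Approximate Sinkhorn projection of a positive matrix A onto U_{r,c}."""
    A = A / A.sum()
    while np.abs(A.sum(axis=1) - r).sum() + np.abs(A.sum(axis=0) - c).sum() > eps:
        A = A * (r / A.sum(axis=1))[:, None]   # renormalize rows to marginals r
        A = A * (c / A.sum(axis=0))[None, :]   # renormalize columns to marginals c
    return A
```

For a strictly positive input matrix the iteration converges, and by Theorem 2 below the number of row/column passes needed is independent of the dimension n.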
(We interpret the quantity 0/0 as 1 in this algorithm if ever it occurs.) The algorithm terminates when the matrix A^(k) is sufficiently close to the polytope U_{r,c}.

3.2 Prior work

Before this work, the best analysis of Algorithm 3 showed that Õ((ε′)^{-2}) iterations suffice to obtain a matrix close to U_{r,c} in ℓ_2 distance:

Proposition 1. [KLRS08] Let A be a strictly positive matrix. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖_2 + ‖c(A) − c‖_2 outputs a matrix B satisfying ‖r(B) − r‖_2 + ‖c(B) − c‖_2 ≤ ε′ in O(ρ(ε′)^{-2} log(s/ℓ)) iterations, where s = Σ_ij A_ij, ℓ = min_ij A_ij, and ρ > 0 is such that r_i, c_i ≤ ρ for all i ∈ [n].

Unfortunately, this analysis is not strong enough to obtain a true near-linear time guarantee. Indeed, the ℓ_2 norm is not an appropriate measure of closeness between probability vectors, since very different distributions on large alphabets can nevertheless have small ℓ_2 distance: for example, (n^{-1}, . . . , n^{-1}, 0, . . . , 0) and (0, . . . , 0, n^{-1}, . . . , n^{-1}) in ∆_{2n} have ℓ_2 distance √(2/n) even though they have disjoint support. As noted above, for statistical problems, including computation of the OT distance, it is more natural to measure distance in the ℓ_1 norm. The following corollary gives the best ℓ_1 guarantee available from Proposition 1.

Corollary 1. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖_2 + ‖c(A) − c‖_2 outputs a matrix B satisfying ‖r(B) − r‖_1 + ‖c(B) − c‖_1 ≤ ε′ in O(nρ(ε′)^{-2} log(s/ℓ)) iterations.

The extra factor of n in the runtime of Corollary 1 is the price to pay to convert an ℓ_2 bound to an ℓ_1 bound.

¹Replacing the PROJ step in Algorithm 1 with the matrix-scaling algorithm developed in [CMTV17] results in a runtime that is a single factor of ε faster than what we present in Theorem 1. The benefit of our approach is that it is extremely easy to implement, whereas the matrix-scaling algorithm of [CMTV17] relies heavily on near-linear time Laplacian solver subroutines, which are not implementable in practice.
Note that ρ ≥ 1/n, so nρ is always larger than 1. If r = c = 1_n/n are uniform distributions, then nρ = 1 and no dependence on the dimension appears. However, in the extreme where r or c contains an entry of constant size, we get nρ = Ω(n).

3.3 New analysis of the Sinkhorn algorithm

Our new analysis allows us to obtain a dimension-independent bound on the number of iterations beyond the uniform case.

Theorem 2. Algorithm 3 with dist(A, U_{r,c}) = ‖r(A) − r‖_1 + ‖c(A) − c‖_1 outputs a matrix B satisfying ‖r(B) − r‖_1 + ‖c(B) − c‖_1 ≤ ε′ in O((ε′)^{-2} log(s/ℓ)) iterations, where s = Σ_ij A_ij and ℓ = min_ij A_ij.

Comparing our result with Corollary 1, we see that our bound is always stronger, by up to a factor of n. Moreover, our analysis is extremely short. Our improved results and simplified proof follow directly from the fact that we carry out the analysis entirely with respect to the Kullback–Leibler divergence, a common measure of statistical distance. This measure possesses a close connection to the total-variation distance via Pinsker’s inequality (Lemma 4, below), from which we obtain the desired ℓ_1 bound. Similar ideas can be traced back at least to [GY98], where an analysis of Sinkhorn iterations for bistochastic targets is sketched in the context of a different problem: detecting the existence of a perfect matching in a bipartite graph. We first define some notation. Given a matrix A and desired row and column sums r and c, we define the potential (Lyapunov) function f : IR^n × IR^n → IR by

f(x, y) = Σ_ij A_ij e^{x_i + y_j} − ⟨r, x⟩ − ⟨c, y⟩.

This auxiliary function has appeared in much of the literature on Sinkhorn projections [KLRS08, CMTV17, KK96, KK93]. We call the vectors x and y scaling vectors. It is easy to check that a minimizer (x*, y*) of f yields the Sinkhorn projection of A: writing X = D(exp(x*)) and Y = D(exp(y*)), first-order optimality conditions imply that XAY lies in U_{r,c}, and therefore XAY = Π_S(A, U_{r,c}).
The following lemma exactly characterizes the improvement in the potential function f from an iteration of Sinkhorn, in terms of our current divergence to the target marginals.

Lemma 2. If k ≥ 2, then

f(x^(k−1), y^(k−1)) − f(x^k, y^k) = K(r‖r(A^(k−1))) + K(c‖c(A^(k−1))).

Proof. Assume without loss of generality that k is odd, so that c(A^(k−1)) = c and r(A^(k)) = r. (If k is even, interchange the roles of r and c.) By definition,

f(x^(k−1), y^(k−1)) − f(x^k, y^k) = Σ_ij (A^(k−1)_ij − A^(k)_ij) + ⟨r, x^k − x^(k−1)⟩ + ⟨c, y^k − y^(k−1)⟩
 = Σ_i r_i (x^k_i − x^(k−1)_i)
 = K(r‖r(A^(k−1))) + K(c‖c(A^(k−1))),

where we have used that: ‖A^(k−1)‖_1 = ‖A^(k)‖_1 = 1 and y^k = y^(k−1); for all i, r_i(x^k_i − x^(k−1)_i) = r_i log(r_i / r_i(A^(k−1))); and K(c‖c(A^(k−1))) = 0 since c = c(A^(k−1)).

The next lemma has already appeared in the literature and we defer its proof to the supplement.

Lemma 3. If A is a positive matrix with ‖A‖_1 ≤ s and smallest entry ℓ, then

f(x^1, y^1) − min_{x,y} f(x, y) ≤ f(0, 0) − min_{x,y} f(x, y) ≤ log(s/ℓ).

Lemma 4 (Pinsker’s Inequality). For any probability measures p and q, ‖p − q‖_1 ≤ √(2 K(p‖q)).

Proof of Theorem 2. Let k* be the first iteration such that ‖r(A^(k*)) − r‖_1 + ‖c(A^(k*)) − c‖_1 ≤ ε′. Pinsker’s inequality implies that for any k < k*, we have

(ε′)^2 < (‖r(A^(k)) − r‖_1 + ‖c(A^(k)) − c‖_1)^2 ≤ 4(K(r‖r(A^(k))) + K(c‖c(A^(k)))),

so Lemmas 2 and 3 imply that we terminate in k* ≤ 4(ε′)^{-2} log(s/ℓ) steps, as claimed.

3.4 Greedy Sinkhorn

In addition to a new analysis of SINKHORN, we propose a new algorithm GREENKHORN, which enjoys the same convergence guarantee but performs better in practice. Instead of performing alternating updates of all rows and columns of A, the GREENKHORN algorithm updates only a single row or column at each step. Thus GREENKHORN updates only O(n) entries of A per iteration, rather than O(n^2). In this respect, GREENKHORN is similar to the stochastic algorithm for Sinkhorn projection proposed by [GCPB16].
There is a natural interpretation of both algorithms as coordinate descent algorithms in the dual space corresponding to row/column violations. Nevertheless, our algorithm differs from theirs in several key ways. Instead of choosing a row or column to update randomly, GREENKHORN chooses the best row or column to update greedily. Additionally, GREENKHORN does an exact line search on the coordinate in question, since there is a simple closed form for the optimum, whereas the algorithm proposed by [GCPB16] updates in the direction of the average gradient. Our experiments establish that GREENKHORN performs better in practice; more details appear in the Supplement. We emphasize that although this algorithm is an extremely natural modification of SINKHORN, previous analyses of SINKHORN cannot be modified to extract any meaningful performance guarantees on GREENKHORN. On the other hand, our new analysis of SINKHORN from Section 3.3 applies to GREENKHORN with only trivial modifications.

Algorithm 4 GREENKHORN(A, U_{r,c}, ε′)
1: A^(0) ← A/‖A‖_1, x ← 0, y ← 0
2: A ← A^(0)
3: while dist(A, U_{r,c}) > ε′ do
4:   I ← argmax_i ρ(r_i, r_i(A))
5:   J ← argmax_j ρ(c_j, c_j(A))
6:   if ρ(r_I, r_I(A)) > ρ(c_J, c_J(A)) then
7:     x_I ← x_I + log(r_I / r_I(A))
8:   else
9:     y_J ← y_J + log(c_J / c_J(A))
10:  A ← D(exp(x)) A^(0) D(exp(y))
11: Output B ← A

Pseudocode for GREENKHORN appears in Algorithm 4. We let dist(A, U_{r,c}) = ‖r(A) − r‖_1 + ‖c(A) − c‖_1 and define the distance function ρ : IR_+ × IR_+ → [0, +∞] by

ρ(a, b) = b − a + a log(a/b).

The choice of ρ is justified by its appearance in Lemma 5, below. While ρ is not a metric, it is easy to see that ρ is nonnegative and satisfies ρ(a, b) = 0 iff a = b. We note that after r(A) and c(A) are computed once at the beginning of the algorithm, GREENKHORN can easily be implemented such that each iteration runs in only O(n) time.

Theorem 3. The algorithm GREENKHORN outputs a matrix B satisfying ‖r(B) − r‖_1 + ‖c(B) − c‖_1 ≤ ε′ in O(n(ε′)^{-2} log(s/ℓ)) iterations, where s = Σ_ij A_ij and ℓ = min_ij A_ij.
Since each iteration takes O(n) time, such a matrix can be found in O(n^2(ε′)^{-2} log(s/ℓ)) time. The analysis requires the following lemma, which is an easy modification of Lemma 2.

Lemma 5. Let A′ and A′′ be successive iterates of GREENKHORN, with corresponding scaling vectors (x′, y′) and (x′′, y′′). If A′′ was obtained from A′ by updating row I, then
$$f(x', y') - f(x'', y'') = \rho(r_I, r_I(A')),$$
and if it was obtained by updating column J, then
$$f(x', y') - f(x'', y'') = \rho(c_J, c_J(A')).$$

We also require the following extension of Pinsker's inequality (proof in Supplement).

Lemma 6. For any $\alpha \in \Delta_n$ and $\beta \in \mathbb{R}^n_+$, define $\rho(\alpha, \beta) = \sum_i \rho(\alpha_i, \beta_i)$. If $\rho(\alpha, \beta) \le 1$, then
$$\|\alpha - \beta\|_1 \le \sqrt{7\rho(\alpha, \beta)}.$$

Proof of Theorem 3. We follow the proof of Theorem 2. Since the row or column update is chosen greedily, at each step we make progress of at least $\frac{1}{2n}(\rho(r, r(A)) + \rho(c, c(A)))$. If ρ(r, r(A)) and ρ(c, c(A)) are both at most 1, then under the assumption that ∥r(A) − r∥_1 + ∥c(A) − c∥_1 > ε′, our progress is at least
$$\frac{1}{2n}\big(\rho(r, r(A)) + \rho(c, c(A))\big) \ge \frac{1}{14n}\big(\|r(A) - r\|_1^2 + \|c(A) - c\|_1^2\big) \ge \frac{1}{28n}\varepsilon'^2.$$
Likewise, if either ρ(r, r(A)) or ρ(c, c(A)) is larger than 1, our progress is at least $\frac{1}{2n} \ge \frac{1}{28n}\varepsilon'^2$. Therefore, we terminate in at most $28n\varepsilon'^{-2}\log(s/\ell)$ iterations.

4 Proof of Theorem 1

First, we present a simple guarantee for the rounding Algorithm 2. The following lemma shows that the ℓ_1 distance between the input matrix F and the rounded matrix G = ROUND(F, U_{r,c}) is controlled by the total-variation distance between the input matrix's marginals r(F) and c(F) and the desired marginals r and c.

Lemma 7. If $r, c \in \Delta_n$ and $F \in \mathbb{R}^{n \times n}_+$, then Algorithm 2 takes O(n^2) time to output a matrix G ∈ U_{r,c} satisfying
$$\|G - F\|_1 \le 2\big[\|r(F) - r\|_1 + \|c(F) - c\|_1\big].$$

The proof of Lemma 7 is simple and left to the Supplement. (We also describe in the Supplement a randomized variant of Algorithm 2 that achieves a slightly better bound than Lemma 7.) We are now ready to prove Theorem 1.

Proof of Theorem 1. ERROR ANALYSIS.
Let B be the output of PROJ(A, U_{r,c}, ε′), and let $P^* \in \operatorname{argmin}_{P \in U_{r,c}} \langle P, C \rangle$ be an optimal solution to the original OT program. We first show that ⟨B, C⟩ is not much larger than ⟨P*, C⟩. To that end, write r′ := r(B) and c′ := c(B). Since B = XAY for positive diagonal matrices X and Y, Lemma 1 implies that B is the optimal solution to
$$\min_{P \in U_{r',c'}} \langle P, C \rangle - \eta^{-1} H(P). \qquad (3)$$
By Lemma 7, there exists a matrix P′ ∈ U_{r′,c′} such that $\|P' - P^*\|_1 \le 2(\|r' - r\|_1 + \|c' - c\|_1)$. Moreover, since B is an optimal solution of (3), we have
$$\langle B, C \rangle - \eta^{-1} H(B) \le \langle P', C \rangle - \eta^{-1} H(P').$$
Thus, by Hölder's inequality,
$$\langle B, C \rangle - \langle P^*, C \rangle = \langle B, C \rangle - \langle P', C \rangle + \langle P', C \rangle - \langle P^*, C \rangle \le \eta^{-1}(H(B) - H(P')) + 2(\|r' - r\|_1 + \|c' - c\|_1)\|C\|_\infty \le 2\eta^{-1}\log n + 2(\|r' - r\|_1 + \|c' - c\|_1)\|C\|_\infty, \qquad (4)$$
where we have used the fact that 0 ≤ H(B), H(P′) ≤ 2 log n. Lemma 7 implies that the output $\hat P$ of ROUND(B, U_{r,c}) satisfies $\|B - \hat P\|_1 \le 2(\|r' - r\|_1 + \|c' - c\|_1)$. This fact, together with (4) and Hölder's inequality, yields
$$\langle \hat P, C \rangle \le \min_{P \in U_{r,c}} \langle P, C \rangle + 2\eta^{-1}\log n + 4(\|r' - r\|_1 + \|c' - c\|_1)\|C\|_\infty.$$
Applying the guarantee of PROJ(A, U_{r,c}, ε′), we obtain
$$\langle \hat P, C \rangle \le \min_{P \in U_{r,c}} \langle P, C \rangle + \frac{2\log n}{\eta} + 4\varepsilon'\|C\|_\infty.$$
Plugging in the values of η and ε′ prescribed in Algorithm 1 finishes the error analysis.

RUNTIME ANALYSIS. Lemma 7 shows that Step 2 of Algorithm 1 takes O(n^2) time. The runtime of Step 1 is dominated by the PROJ(A, U_{r,c}, ε′) subroutine. Theorems 2 and 3 imply that both the SINKHORN and GREENKHORN algorithms accomplish this in S = O(n^2(ε′)^{-2} log(s/ℓ)) time, where s is the sum of the entries of A and ℓ is the smallest entry of A. Since the matrix C is nonnegative, the entries of A are bounded above by 1, so s ≤ n^2. The smallest entry of A is $e^{-\eta\|C\|_\infty}$, so $\log(1/\ell) = \eta\|C\|_\infty$. We obtain S = O(n^2(ε′)^{-2}(log n + η∥C∥_∞)). The proof is finished by plugging in the values of η and ε′ prescribed in Algorithm 1.

5 Empirical results

[Figure 1: Synthetic image.]
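Before turning to the experiments, the full APPROXOT pipeline whose guarantees were just proved (entropic scaling, approximate Sinkhorn projection, rounding onto U_{r,c}) can be sketched end to end. The constants η = 4 log(n)/ε and ε′ = ε/(8∥C∥_∞) follow the error analysis above, but the code itself, including the rounding step, is an illustrative sketch rather than the reference implementation.

```python
import numpy as np

def approx_ot(C, r, c, eps):
    """Sketch of APPROXOT: entropic kernel, approximate Sinkhorn
    projection to accuracy eps', then rounding onto U_{r,c}.
    Constants chosen to match the error analysis; illustrative only."""
    n = C.shape[0]
    eta = 4.0 * np.log(n) / eps
    eps_p = eps / (8.0 * C.max())
    A = np.exp(-eta * C)
    A /= A.sum()
    # approximate Sinkhorn projection
    while np.abs(A.sum(1) - r).sum() + np.abs(A.sum(0) - c).sum() > eps_p:
        A *= (r / A.sum(1))[:, None]
        A *= (c / A.sum(0))[None, :]
    # rounding: shrink overfull rows/columns, repair with a rank-one term
    A *= np.minimum(r / A.sum(1), 1.0)[:, None]
    A *= np.minimum(c / A.sum(0), 1.0)[None, :]
    er, ec = r - A.sum(1), c - A.sum(0)
    if er.sum() > 0:
        A += np.outer(er, ec) / er.sum()
    return A
```

The rank-one correction is feasible because, after the two shrinking passes, both marginal deficits er and ec are nonnegative and have equal total mass.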
Cuturi [Cut13] already gave experimental evidence that using SINKHORN to solve (2) outperforms state-of-the-art techniques for optimal transport. In this section, we provide strong empirical evidence that our proposed GREENKHORN algorithm significantly outperforms SINKHORN. We consider transportation between pairs of m × m greyscale images, normalized to have unit total mass. The target marginals r and c represent the two images in a pair, and $C \in \mathbb{R}^{m^2 \times m^2}$ is the matrix of ℓ_1 distances between pixel locations. Therefore, we aim to compute the earth mover's distance. We run experiments on two datasets: real images from MNIST, and synthetic images as in Figure 1.

5.1 MNIST

We first compare the behavior of GREENKHORN and SINKHORN on real images. To that end, we choose 10 random pairs of images from the MNIST dataset, and for each pair we analyze the performance of APPROXOT when using either GREENKHORN or SINKHORN for the approximate projection step. We add negligible noise of 0.01 to each background pixel with intensity 0. Figure 2 paints a clear picture: GREENKHORN significantly outperforms SINKHORN in both the short and the long term.

5.2 Random images

[Figure 2: Comparison of GREENKHORN and SINKHORN on pairs of MNIST images of dimension 28 × 28 (top) and random images of dimension 20 × 20 with 20% foreground (bottom). Left: distance dist(A, U_{r,c}) to the transport polytope (average over 10 random pairs of images). Right: maximum, median, and minimum values of the competitive ratio ln(dist(A_S, U_{r,c})/dist(A_G, U_{r,c})) over 10 runs.]

To better understand the empirical behavior of both algorithms in a number of different regimes, we devised a synthetic and tunable framework in which we generate images by choosing a randomly positioned "foreground" square in an otherwise black background. The size of this square is a tunable parameter, varied between 20%, 50%, and 80% of the total image area.
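The ground cost used throughout these experiments is the m² × m² matrix of ℓ_1 distances between pixel locations. A small helper to build it (our own construction, shown only to make the setup concrete):

```python
import numpy as np

def pixel_cost_matrix(m):
    """l1 (Manhattan) distance between every pair of pixel locations of
    an m x m grid, in row-major order: the (m^2, m^2) ground cost for
    the earth mover's distance between two m x m images."""
    ii, jj = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1)  # (m^2, 2)
    return np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=2)
```

With the two images flattened into the marginals r and c, this C is exactly what APPROXOT takes as input.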
Intensities of background pixels are drawn uniformly from [0, 1]; foreground pixels are drawn uniformly from [0, 50]. Such an image is depicted in Figure 1, and results appear in Figure 2. We perform two other experiments with random images in Figure 3. In the first, we vary the number of background pixels and show that GREENKHORN performs better when the number of background pixels is larger. We conjecture that this is related to the fact that GREENKHORN only updates salient rows and columns at each step, whereas SINKHORN wastes time updating rows and columns corresponding to background pixels, which have negligible impact. This suggests that GREENKHORN is an especially good choice when the data is sparse, which is often the case in practice. In the second, we consider the role of the regularization parameter η. Our analysis requires taking η of order log n/ε, but Cuturi [Cut13] observed that in practice η can be much smaller. Cuturi showed that SINKHORN outperforms state-of-the-art techniques for computing OT distances even when η is a small constant, and Figure 3 shows that GREENKHORN runs faster than SINKHORN in this regime with no loss in accuracy.

[Figure 3: Left: Comparison of the median competitive ratio for random images containing 20%, 50%, and 80% foreground. Right: Performance of GREENKHORN and SINKHORN for small values of η.]

Acknowledgments

We thank Michael Cohen, Adrian Vladu, John Kelner, Justin Solomon, and Marco Cuturi for helpful discussions. We are grateful to Pablo Parrilo for drawing our attention to the fact that GREENKHORN is a coordinate descent algorithm, and to Alexandr Andoni for references. JA and JW were generously supported by NSF Graduate Research Fellowship 1122374. PR is supported in part by grants NSF CAREER DMS-1541099, NSF DMS-1541100, NSF DMS-1712596, DARPA W911NF-16-1-0551, ONR N00014-17-1-2147, and a grant from the MIT NEC Corporation.

References

[ACB17] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN.
arXiv:1701.07875, January 2017.

[ANOY14] A. Andoni, A. Nikolov, K. Onak, and G. Yaroslavtsev. Parallel algorithms for geometric graph problems. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 574–583, New York, NY, USA, 2014. ACM.

[AS14] P. K. Agarwal and R. Sharathkumar. Approximation algorithms for bipartite matching with metric and geometric costs. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 555–564, New York, NY, USA, 2014. ACM.

[AZLOW17] Z. Allen-Zhu, Y. Li, R. Oliveira, and A. Wigderson. Much faster algorithms for matrix scaling. arXiv:1704.02315, 2017.

[BCC+15] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111–A1138, 2015.

[BGKL17] J. Bigot, R. Gouet, T. Klein, and A. López. Geodesic PCA in the Wasserstein space by convex PCA. Ann. Inst. H. Poincaré Probab. Statist., 53(1):1–26, 2017.

[Bub15] S. Bubeck. Convex optimization: Algorithms and complexity. Found. Trends Mach. Learn., 8(3-4):231–357, 2015.

[BvdPPH11] N. Bonneel, M. van de Panne, S. Paris, and W. Heidrich. Displacement interpolation using Lagrangian mass transport. ACM Trans. Graph., 30(6):158:1–158:12, December 2011.

[CBL06] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.

[CMTV17] M. B. Cohen, A. Madry, D. Tsipras, and A. Vladu. Matrix scaling and balancing via box constrained Newton's method and interior point methods. arXiv:1704.02310, 2017.

[Cut13] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2292–2300. Curran Associates, Inc., 2013.

[FCCR16] R. Flamary, M. Cuturi, N. Courty, and A. Rakotomamonjy.
Wasserstein discriminant analysis. arXiv:1608.08063, 2016.

[GCPB16] A. Genevay, M. Cuturi, G. Peyré, and F. Bach. Stochastic optimization for large-scale optimal transport. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3440–3448. Curran Associates, Inc., 2016.

[GD04] K. Grauman and T. Darrell. Fast contour matching using approximate earth mover's distance. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 1, pages I-220–I-227, June 2004.

[GPC15] A. Gramfort, G. Peyré, and M. Cuturi. Fast optimal transport averaging of neuroimaging data, pages 261–272. Springer International Publishing, 2015.

[GY98] L. Gurvits and P. Yianilos. The deflation-inflation method for certain semidefinite programming and maximum determinant completion problems. Technical report, NECI, 1998.

[IT03] P. Indyk and N. Thaper. Fast image retrieval via embeddings. In Third International Workshop on Statistical and Computational Theories of Vision, 2003.

[JRT08] A. Juditsky, P. Rigollet, and A. Tsybakov. Learning by mirror averaging. Ann. Statist., 36(5):2183–2206, 2008.

[JSCG16] W. Jitkrittum, Z. Szabó, K. P. Chwialkowski, and A. Gretton. Interpretable distribution features with maximum testing power. In Advances in Neural Information Processing Systems 29, pages 181–189, Barcelona, Spain, 2016.

[KK93] B. Kalantari and L. Khachiyan. On the rate of convergence of deterministic and randomized RAS matrix scaling algorithms. Oper. Res. Lett., 14(5):237–244, 1993.

[KK96] B. Kalantari and L. Khachiyan. On the complexity of nonnegative-matrix scaling. Linear Algebra Appl., 240:87–103, 1996.

[KLRS08] B. Kalantari, I. Lari, F. Ricca, and B. Simeone. On the complexity of general matrix scaling and entropy minimization via the RAS algorithm.
Math. Program., 112(2, Ser. A):371–401, 2008.

[Leo14] C. Leonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete and Continuous Dynamical Systems, 34(4):1533–1574, 2014.

[LG15] J. R. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pages 829–837, Cambridge, MA, USA, 2015. MIT Press.

[LS14] Y. T. Lee and A. Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, FOCS '14, pages 424–433, Washington, DC, USA, 2014. IEEE Computer Society.

[MJ15] J. Mueller and T. Jaakkola. Principal differences analysis: Interpretable characterization of differences between distributions. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pages 1702–1710, Cambridge, MA, USA, 2015. MIT Press.

[PW09] O. Pele and M. Werman. Fast and robust earth mover's distances. In 2009 IEEE 12th International Conference on Computer Vision, pages 460–467, September 2009.

[PZ16] V. M. Panaretos and Y. Zemel. Amplitude and phase variation of point processes. Ann. Statist., 44(2):771–812, 2016.

[Ren88] J. Renegar. A polynomial-time algorithm, based on Newton's method, for linear programming. Mathematical Programming, 40(1):59–93, 1988.

[RT11] P. Rigollet and A. Tsybakov. Exponential screening and optimal rates of sparse estimation. Ann. Statist., 39(2):731–771, 2011.

[RT12] P. Rigollet and A. Tsybakov. Sparse estimation by exponential weighting. Statistical Science, 27(4):558–575, 2012.

[RTG00] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. Int. J. Comput. Vision, 40(2):99–121, November 2000.

[SA12] R. Sharathkumar and P. K. Agarwal.
A near-linear time ε-approximation algorithm for geometric bipartite matching. In H. J. Karloff and T. Pitassi, editors, Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19-22, 2012, pages 385–394. ACM, 2012.

[Sch31] E. Schrödinger. Über die Umkehrung der Naturgesetze. Angewandte Chemie, 44(30):636–636, 1931.

[SdGP+15] J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph., 34(4):66:1–66:11, July 2015.

[Sin67] R. Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums. The American Mathematical Monthly, 74(4):402–405, 1967.

[SJ08] S. Shirdhonkar and D. W. Jacobs. Approximate earth mover's distance in linear time. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2008.

[SL11] R. Sandler and M. Lindenbaum. Nonnegative matrix factorization with earth mover's distance metric for image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1590–1602, August 2011.

[SR04] G. J. Székely and M. L. Rizzo. Testing for equal distributions in high dimension. InterStat (London), 11(5):1–16, 2004.

[ST04] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, STOC '04, pages 81–90, New York, NY, USA, 2004. ACM.

[Vil09] C. Villani. Optimal Transport: Old and New, volume 338 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2009.

[WPR85] M. Werman, S. Peleg, and A. Rosenfeld. A distance metric for multidimensional histograms. Computer Vision, Graphics, and Image Processing, 32(3):328–336, 1985.
Joint distribution optimal transportation for domain adaptation

Nicolas Courty* (Université de Bretagne Sud, IRISA, UMR 6074, CNRS, courty@univ-ubs.fr)
Rémi Flamary* (Université Côte d'Azur, Lagrange, UMR 7293, CNRS, OCA, remi.flamary@unice.fr)
Amaury Habrard (Univ Lyon, UJM-Saint-Etienne, CNRS, Lab. Hubert Curien UMR 5516, F-42023, amaury.habrard@univ-st-etienne.fr)
Alain Rakotomamonjy (Normandie Université, Université de Rouen, LITIS EA 4108, alain.rakoto@insa-rouen.fr)

Abstract

This paper deals with the unsupervised domain adaptation problem, in which one wants to estimate a prediction function f in a given target domain without any labeled sample, by exploiting knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domains, Ps and Pt, that can be estimated with optimal transport. We propose a solution to this problem that recovers an estimated target distribution Pf_t = (X, f(X)) by optimizing simultaneously the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and we provide an efficient algorithmic solution for which convergence is proved. The versatility of our approach, both in terms of classes of hypotheses and loss functions, is demonstrated on real-world classification and regression problems, for which we reach or surpass state-of-the-art results.

1 Introduction

In the context of supervised learning, one generally assumes that the test data is a realization of the same process that generated the learning set. Yet, in many practical applications this is often not the case, since several factors can slightly alter the process.
The particular case of visual adaptation [1] in computer vision is a good example: given a new dataset of images without any labels, one may want to exploit a different annotated dataset, provided the two share sufficient common information and labels. However, the generating process can differ in several respects, such as the conditions and devices used for acquisition, different pre-processing, different compression, etc. Domain adaptation techniques aim at alleviating this issue by transferring knowledge between domains [2]. We propose in this paper a principled and theoretically founded way of tackling this problem.

*Both authors contributed equally.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

The domain adaptation (DA) problem is not new and has received a lot of attention during the past ten years. State-of-the-art methods mainly differ in the assumptions made about the change in data distributions. Under the covariate shift assumption, the differences between the domains are characterized by a change in the feature distributions P(X), while the conditional distributions P(Y|X) remain unchanged (X and Y being respectively the instance and label spaces). Importance re-weighting can be used to learn a new classifier (e.g. [3]), provided that the overlap between the distributions is large enough. Kernel alignment [4] has also been considered for the same purpose. Other types of methods, denoted as Invariant Components by Gong and co-authors [5], look for a transformation T such that the new representations of the input data match, i.e. Ps(T(X)) = Pt(T(X)). Methods then differ in: i) the considered class of transformations, generally defined as projections (e.g.
[6, 7, 8, 9, 5]), affine transformations [4], or non-linear transformations as expressed by neural networks [10, 11]; ii) the types of divergences used to compare Ps(T(X)) and Pt(T(X)), such as the Kullback–Leibler divergence [12] or the Maximum Mean Discrepancy [9, 5]. Those divergences usually require that the distributions share a common support in order to be defined. A particular case is the use of optimal transport, introduced for domain adaptation by [13, 14]. T is then defined as a push-forward operator such that Ps(X) = Pt(T(X)) that minimizes a global transportation effort, or cost, between the distributions. The associated divergence is the so-called Wasserstein metric, which has a natural Lagrangian formulation and avoids estimating continuous distributions by means of kernels. As such, it also alleviates the need for a shared support.

The methods discussed above implicitly assume that the conditional distributions are unchanged by T, i.e. Ps(Y|T(X)) ≈ Pt(Y|T(X)), but there is no clear reason for this assumption to hold. A more general approach is to adapt both the marginal feature and conditional distributions by minimizing a global divergence between them. However, this task is usually hard, since no labels are available in the target domain and therefore no empirical version of Pt(Y|X) can be used. It has been achieved by restricting to specific classes of transformations such as projections [9, 5].

Contributions and outline. In this work we propose a novel framework for unsupervised domain adaptation between joint distributions. We propose to find a function f that predicts an output value given an input x ∈ X and that minimizes the optimal transport loss between the joint source distribution Ps and an estimated target joint distribution Pf_t = (X, f(X)) depending on f (detailed in Section 2). The method is denoted JDOT, for "Joint Distribution Optimal Transport", in the remainder.
We show that the resulting optimization problem amounts to minimizing a bound on the target error of f (Section 3), and we propose an efficient algorithm to solve it (Section 4). Our approach is very general and does not require learning an explicit transformation, as it directly solves for the best function. We show that it can handle both regression and classification problems with a large class of functions f, including kernel machines and neural networks. We finally provide several numerical experiments on real regression and classification problems that show the performance of JDOT against the state of the art (Section 5).

2 Joint distribution optimal transport

Let $\Omega \subset \mathbb{R}^d$ be a compact input measurable space of dimension d and C the set of labels. P(Ω) denotes the set of all probability measures over Ω. The standard learning paradigm classically assumes the existence of a set of data $X_s = \{x_i^s\}_{i=1}^{N_s}$ associated with a set of class labels $Y_s = \{y_i^s\}_{i=1}^{N_s}$, $y_i^s \in C$ (the learning set), and a data set with unknown labels $X_t = \{x_i^t\}_{i=1}^{N_t}$ (the testing set). In order to determine the set of labels Y_t associated with X_t, one usually relies on an empirical estimate of the joint probability distribution P(X, Y) ∈ P(Ω × C) from (X_s, Y_s), and on the assumption that X_s and X_t are drawn from the same distribution µ ∈ P(Ω). In the adaptation problem considered here, one assumes the existence of two distinct joint probability distributions Ps(X, Y) and Pt(X, Y), corresponding respectively to the source and target domains. We write µs and µt for their respective marginal distributions over X.

2.1 Optimal transport in domain adaptation

The Monge problem seeks a map $T_0 : \Omega \to \Omega$ that pushes µs toward µt, defined as
$$T_0 = \operatorname*{argmin}_{T} \int_\Omega d(x, T(x))\, d\mu_s(x), \quad \text{s.t. } T\#\mu_s = \mu_t,$$
where T#µs is the image measure of µs by T, verifying
$$T\#\mu_s(A) = \mu_s(T^{-1}(A)), \quad \forall\, \text{Borel subset } A \subset \Omega, \qquad (1)$$
and $d : \Omega \times \Omega \to \mathbb{R}^+$ is a metric.
In the remainder, we always consider, without further notice, the case where d is the squared Euclidean metric. When T_0 exists, it is called an optimal transport map, but such a map does not always exist (e.g. assume that µs is defined by one Dirac measure and µt by two). A relaxed version of this problem was proposed by Kantorovich [15], who instead seeks a transport plan (or, equivalently, a joint probability distribution) γ ∈ P(Ω × Ω) such that
$$\gamma_0 = \operatorname*{argmin}_{\gamma \in \Pi(\mu_s, \mu_t)} \int_{\Omega \times \Omega} d(x_1, x_2)\, d\gamma(x_1, x_2), \qquad (2)$$
where $\Pi(\mu_s, \mu_t) = \{\gamma \in P(\Omega \times \Omega) \mid p^+\#\gamma = \mu_s,\ p^-\#\gamma = \mu_t\}$, and p^+ and p^- denote the two marginal projections of Ω × Ω onto Ω. Minimizers of this problem are called optimal transport plans. Should γ_0 be of the form (id × T)#µs, the solutions to the Kantorovich and Monge problems coincide. As such, the Kantorovich relaxation can be seen as a generalization of the Monge problem, with fewer constraints on the existence and uniqueness of solutions [16]. Optimal transport has been used in DA as a principled way to bring the source and target distributions closer [13, 14, 17], by seeking a transport plan between the empirical distributions of X_s and X_t and interpolating X_s through a barycentric mapping [14], or by estimating a mapping which is not the solution of the Monge problem but allows mapping unseen samples [17]. Moreover, these works show that further constraining the structure of γ through entropic or classwise regularization terms helps to achieve better empirical results.

2.2 Joint distribution optimal transport loss

The main idea of this work is to handle a change in both the marginal and the conditional distributions. As such, we are looking for a transformation T that directly aligns the joint distributions Ps and Pt.
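For discrete measures, the Kantorovich problem (2) above is a finite linear program over couplings and can be solved exactly for small instances. The sketch below formulates it directly for scipy's LP solver; the helper name and this brute-force formulation are ours, and in practice one would use a network simplex or regularized solver instead.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich_plan(mu_s, mu_t, C):
    """Discrete Kantorovich problem: minimize <gamma, C> over nonnegative
    gamma with row marginals mu_s and column marginals mu_t.
    Exact LP sketch; only suitable for small ns, nt."""
    ns, nt = len(mu_s), len(mu_t)
    # Equality constraints on the flattened plan: row sums then column sums.
    A_eq = np.zeros((ns + nt, ns * nt))
    for i in range(ns):
        A_eq[i, i * nt:(i + 1) * nt] = 1.0
    for j in range(nt):
        A_eq[ns + j, j::nt] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu_s, mu_t]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(ns, nt)
```

The returned matrix is a discrete optimal transport plan γ_0; for uniform marginals of equal size it often concentrates on a permutation, recovering a Monge map.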
Following the Kantorovich formulation (2), T will be implicitly expressed through a coupling between the two joint distributions:
$$\gamma_0 = \operatorname*{argmin}_{\gamma \in \Pi(P_s, P_t)} \int_{(\Omega \times C)^2} \mathcal{D}(x_1, y_1; x_2, y_2)\, d\gamma(x_1, y_1; x_2, y_2), \qquad (3)$$
where D(x_1, y_1; x_2, y_2) = αd(x_1, x_2) + L(y_1, y_2) is a joint cost combining the distance between samples and a loss function L measuring the discrepancy between y_1 and y_2. While this joint cost is specific (separable), we leave the analysis of generic joint cost functions for future work. Put in words, matching close source and target samples with similar labels costs little. α is a positive parameter balancing the metric in the feature space against the loss; when α → +∞, the cost is dominated by the metric in the input feature space, and the solution of the coupling problem is the same as in [14]. It can be shown that a minimizer of (3) always exists and is unique provided D(·) is lower semi-continuous (see [18], Theorem 4.1), which is the case when d(·) is a norm and for every usual loss function [19].

In the unsupervised DA problem, one does not have access to labels in the target domain, so it is not possible to find the optimal coupling directly. Since our goal is to find a function on the target domain f : Ω → C, we propose to replace y_2 by the proxy f(x_2). This leads to the following joint distribution, which uses a given function f as a proxy for y:
$$P^f_t = (x, f(x))_{x \sim \mu_t}. \qquad (4)$$
In practice we consider empirical versions of Ps and Pf_t, i.e. $\hat P_s = \frac{1}{N_s}\sum_{i=1}^{N_s} \delta_{x_i^s, y_i^s}$ and $\hat P^f_t = \frac{1}{N_t}\sum_{i=1}^{N_t} \delta_{x_i^t, f(x_i^t)}$. γ is then a matrix belonging to ∆, the transportation polytope of nonnegative matrices coupling uniform distributions. Since our goal is to estimate a prediction f on the target domain, we propose to find the one whose predictions are optimally matched, via the transport plan, to the source labels of the aligned target instances.
For this purpose, we propose to solve the following problem, which defines JDOT:
$$\min_{f,\, \gamma \in \Delta} \sum_{ij} \mathcal{D}(x_i^s, y_i^s; x_j^t, f(x_j^t))\, \gamma_{ij} \;\equiv\; \min_f W_1(\hat P_s, \hat P^f_t), \qquad (5)$$
where W_1 is the 1-Wasserstein distance for the loss D(x_1, y_1; x_2, y_2) = αd(x_1, x_2) + L(y_1, y_2). We make clear in the next section that the function f we retrieve is theoretically sound with respect to the target error. Note that in practice we add a regularization term on f in order to avoid overfitting, as discussed in Section 4.

[Figure 1: Illustration of JDOT on a 1D regression problem. (left) Source and target empirical distributions and marginals. (middle left) Source and target models. (middle right) OT matrices on the empirical joint distributions and with the JDOT proxy joint distribution. (right) Estimated prediction function f.]

An illustration of JDOT for a regression problem is given in Figure 1. In this figure, we have very different joint and marginal distributions, but we want
Also note that α is strongly linked to the smoothness of the loss L and of the optimal labelling functions and can be seen as a Lipschitz constant in the bound of Theorem 3.1. Relation to other optimal transport based DA methods. Previous DA methods based on optimal transport [14, 17] do not not only differ by the nature of the considered distributions, but also in the way the optimal plan is used to find f. They learn a complex mapping between the source and target distributions when the objective is only to estimate a prediction function f on target. To do so, they rely on a barycentric mapping that minimizes only approximately the Wasserstein distance between the distributions. As discussed in Section 4, JDOT uses the optimal plan to propagate and fuse the labels from the source to target. Not only are the performances enhanced, but we also show how this approach is more theoretically well grounded in next section 3. Relation to Transport Lp distances. Recently, Thorpe and co-authors introduced the Transportation Lp distance [20]. Their objective is to compute a meaningful distance between multi-dimensional signals. Interestingly their distance can be seen as optimal transport between two distributions of the form (4) where the functions are known and the label loss L is chosen as a Lp distance. While their approach is inspirational, JDOT is different both in its formulation, where we introduce a more general class of loss L, and in its objective, as our goal is to estimate the target function f which is not known a priori. Finally we show theoretically and empirically that our formulation addresses successfully the problem of domain adaptation. 3 A Bound on the Target Error Let f be an hypothesis function from a given class of hypothesis H. We define the expected loss in the target domain errT (f) as errT (f) def = E(x,y)∼Pt L(y, f(x)). We define similarly errS(f) for the source domain. 
We assume the loss function L to be bounded, symmetric, k-lipschitz and satisfying the triangle inequality. To provide some guarantees on our method, we consider an adaptation of the notion probabilistic Lipschitzness introduced in [21, 22] which assumes that two close instances must have the same labels with high probability. It corresponds to a relaxation of the classic Lipschitzness allowing one to model the marginal-label relatedness such as in Nearest-Neighbor classification, linear classification or cluster assumption. We propose an extension of this notion in a domain adaptation context by assuming that a labeling function must comply with two close instances of each domain w.r.t. a coupling Π. 4 Definition (Probabilistic Transfer Lipschitzness) Let µs and µt be respectively the source and target distributions. Let φ : R →[0, 1]. A labeling function f : Ω→R and a joint distribution Π(µs, µt) over µs and µt are φ-Lipschitz transferable if for all λ > 0: Pr(x1,x2)∼Π(µs,µt) [|f(x1) −f(x2)| > λd(x1, x2)] ≤φ(λ). Intuitively, given a deterministic labeling functions f and a coupling Π, it bounds the probability of finding pairs of source-target instances labelled differently in a (1/λ)-ball with respect to Π. We can now give our main result (simplified version): Theorem 3.1 Let f be any labeling function of ∈ H. Let Π∗ = argminΠ∈Π(Ps,Pf t ) R (Ω×C)2 αd(xs, xt) + L(ys, yt)dΠ(xs, ys; xt, yt) and W1( ˆ Ps, ˆ Pf t ) the associated 1-Wasserstein distance. Let f ∗∈H be a Lipschitz labeling function that verifies the φ-probabilistic transfer Lipschitzness (PTL) assumption w.r.t. Π∗and that minimizes the joint error errS(f ∗) + errT (f ∗) w.r.t all PTL functions compatible with Π∗. We assume the input instances are bounded s.t. |f ∗(x1) −f ∗(x2)| ≤M for all x1, x2. Let L be any symmetric loss function, k-Lipschitz and satisfying the triangle inequality. 
Consider a sample of Ns labeled source instances drawn from Ps and Nt unlabeled instances drawn from µt. Then for all λ > 0, with α = kλ, we have with probability at least 1 − δ that:
errT(f) ≤ W1(P̂s, P̂t^f) + √( (2/c′) log(2/δ) ) ( 1/√Ns + 1/√Nt ) + errS(f*) + errT(f*) + kMφ(λ).
The detailed proof of Theorem 3.1 is given in the supplementary material. The bound on the target error above is instructive to interpret. The first two terms correspond to the objective function (5) we propose to minimize, together with a sampling bound. The last term, kMφ(λ), accounts for the probability with which the probabilistic Lipschitzness does not hold. The remaining two terms involving f* correspond to the joint error minimizer, illustrating that domain adaptation can work only if we can predict well in both domains, similarly to existing results in the literature [23, 24]. If the last terms are small enough, adaptation is possible if we are able to align Ps and Pt^f well, provided that f* and Π* verify the PTL. Finally, note that α = kλ, so tuning this parameter is actually related to finding the Lipschitz constants of the problem.
4 Learning with Joint Distribution OT
In this section, we provide some details about JDOT's optimization problem given in Equation (5) and discuss algorithms for solving it. We will assume that the function space H to which f belongs is either an RKHS or a function space parametrized by a vector w ∈ R^p. This framework encompasses linear models, neural networks, and kernel methods. Accordingly, we define a regularization term Ω(f) on f. Depending on how H is defined, Ω(f) is either a non-decreasing function of the squared norm induced by the RKHS (so that the representer theorem is applicable) or a squared norm on the parameter vector. We will further assume that Ω(f) is continuously differentiable.
As discussed above, f is learned according to the following optimization problem:
min_{f∈H, γ∈∆} Σ_{i,j} γ_{i,j} [ α d(x_i^s, x_j^t) + L(y_i^s, f(x_j^t)) ] + λ Ω(f)   (6)
where the loss function L is continuous and differentiable with respect to its second argument. Note that while the above problem does not involve any regularization term on the coupling matrix γ, this is essentially for the sake of simplicity and readability. Regularizers like entropic regularization [25], which is relevant when the number of samples is very large, can still be used without significant change to the algorithmic framework.
Optimization procedure. Under the above hypotheses on f and L, Problem (6) is smooth and the constraints are separable w.r.t. f and γ. Hence, a natural way to solve Problem (6) is to rely on alternate optimization over the two blocks of parameters γ and f. This algorithm is well known as block coordinate descent (BCD) or the Gauss-Seidel method (pseudo-code is given in the appendix). The block optimization steps are discussed in further detail in the following.
Solving with fixed f boils down to a classical OT problem with a loss matrix C such that C_{i,j} = α d(x_i^s, x_j^t) + L(y_i^s, f(x_j^t)). We can use classical OT solvers such as the network simplex algorithm, but other strategies can be considered, such as regularized OT [25] or stochastic versions [26]. The optimization problem with fixed γ leads to a new learning problem expressed as
min_{f∈H} Σ_{i,j} γ_{i,j} L(y_i^s, f(x_j^t)) + λ Ω(f)   (7)
Note how the data-fitting term elegantly and naturally encodes the transfer of source labels y_i^s through estimated labels of test samples, with a weighting depending on the optimal transport matrix. However, this comes at the price of having a quadratic number NsNt of terms, which can be computationally expensive. We will see in the sequel that we can exploit the structure of the chosen loss to greatly reduce this complexity.
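To make the alternation concrete, the following sketch implements the BCD loop for the squared loss with a linear model f(x) = ⟨w, x⟩. It is a minimal illustration, not the paper's reference implementation: it assumes Ns = Nt with uniform weights, so each OT step reduces to a linear assignment problem solvable with SciPy (general marginals require an OT solver such as the network simplex, e.g. from the POT package), and the f-step is closed-form ridge regression. The function name jdot_bcd is illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def jdot_bcd(Xs, ys, Xt, alpha=1.0, lam=1e-2, n_iter=10):
    """Block coordinate descent for JDOT (Eq. 6), squared loss, linear f.

    Assumes Ns == Nt with uniform weights, so the OT step is a linear
    assignment problem; general marginals need a full OT solver.
    """
    ns, nt = len(Xs), len(Xt)
    assert ns == nt, "assignment shortcut requires equal sample sizes"
    d_feat = cdist(Xs, Xt, "sqeuclidean")           # d(x_i^s, x_j^t)
    w = np.zeros(Xs.shape[1])
    for _ in range(n_iter):
        # OT step (fixed f): cost C_ij = alpha*d + (y_i^s - f(x_j^t))^2
        C = alpha * d_feat + (ys[:, None] - (Xt @ w)[None, :]) ** 2
        rows, cols = linear_sum_assignment(C)        # gamma = permutation/ns
        # f step (fixed gamma): propagated labels yhat_j = nt * sum_i g_ij y_i^s
        yhat = np.empty(nt)
        yhat[cols] = ys[rows]
        # ridge regression: min (1/nt)*||yhat - Xt w||^2 + lam*||w||^2
        w = np.linalg.solve(Xt.T @ Xt + nt * lam * np.eye(Xt.shape[1]),
                            Xt.T @ yhat)
    return w
```

When the two domains coincide, the OT step matches each point with itself and the procedure reduces to plain ridge regression on the source labels, which gives a simple sanity check.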
In addition, we emphasize that when H is an RKHS, owing to the kernel trick and the representer theorem, Problem (7) can be re-expressed as an optimization problem with Nt real parameters. Let us now briefly discuss the convergence of the proposed algorithm. Owing to the 2-block coordinate descent structure, to the differentiability of the objective function in Problem (6), and to the fact that the constraint sets on f (or its kernel expansion parameters) and γ are closed, non-empty and convex, the convergence result of Grippo et al. [27] on 2-block Gauss-Seidel methods directly applies. It states that if the sequence {γ^k, f^k} produced by the algorithm has limit points, then every limit point of the sequence is a critical point of Problem (6).
Estimating f for least-squares regression problems. We detail the use of JDOT for the transfer least-squares regression problem, i.e. when L is the squared loss. In this context, when the optimal transport matrix γ is fixed, the learning problem boils down to
min_{f∈H} Σ_j (1/n_t) ||ŷ_j − f(x_j^t)||² + λ||f||²   (8)
where ŷ_j = n_t Σ_i γ_{i,j} y_i^s is a weighted average of the source label values. Note that this simplification results from the properties of the quadratic loss, and that it may not occur for more complex regression losses.
Estimating f for hinge loss classification problems. We now aim at estimating a multiclass classifier with a one-against-all strategy. We suppose that the data-fitting term is the binary squared hinge loss L(y, f(x)) = max(0, 1 − y f(x))². In a one-against-all strategy one often uses the binary matrix P^s such that P^s_{i,k} = 1 if sample i is of class k and P^s_{i,k} = 0 otherwise. Denote by f_k ∈ H the decision function related to the k-vs-all problem. The learning problem (7) can now be expressed as
min_{f_k∈H} Σ_{j,k} [ P̂_{j,k} L(1, f_k(x_j^t)) + (1 − P̂_{j,k}) L(−1, f_k(x_j^t)) ] + λ Σ_k ||f_k||²   (9)
where P̂ is the transported class proportion matrix P̂ = N_t γ^⊤ P^s.
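A minimal sketch of the two label-propagation quantities above. The helper names are illustrative, and the n_t scaling used here is an assumption on my part: it is the normalization under which each row of the transported class-proportion matrix sums to one, so that the hinge weights form a convex combination as described in the text.

```python
import numpy as np

def propagated_targets(gamma, ys):
    """yhat_j = n_t * sum_i gamma_ij * y_i^s: source labels transported
    onto target samples through the coupling (squared-loss case, Eq. 8)."""
    nt = gamma.shape[1]
    return nt * gamma.T @ ys

def class_proportions(gamma, labels_s, n_classes):
    """Transported class-proportion matrix for the hinge-loss case:
    P^s one-hot encodes the source labels; with the n_t scaling assumed
    here, each row of the result sums to one."""
    Ps = np.eye(n_classes)[labels_s]
    nt = gamma.shape[1]
    return nt * gamma.T @ Ps
```

With a uniform coupling, every target point receives the mean source label and the uniform class proportions, which matches the intuition that an uninformative transport plan propagates no class structure.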
Interestingly, this formulation shows that for each target sample, the data-fitting term is a convex combination of hinge losses for a positive and a negative label, with weights given by γ.
5 Numerical experiments
In this section we evaluate the performance of our method (JDOT) on two different transfer tasks of classification and regression on real datasets².
Caltech-Office classification dataset. This dataset [28] is dedicated to visual adaptation. It contains images from four different domains: Amazon, the Caltech-256 image collection, Webcam and DSLR. Several factors (such as presence/absence of background, lighting conditions, image quality, etc.) induce a distribution shift between the domains, and it is therefore relevant to consider a domain adaptation task to perform the classification. Following [14], we choose deep learning features to represent the images, extracted as the activations of the 6th fully connected layer of the DECAF convolutional neural network [29], pre-trained on ImageNet. The final feature vector is a sparse 4096-dimensional vector.
²Open-source Python implementation of JDOT: https://github.com/rflamary/JDOT
Table 1: Accuracy on the Caltech-Office Dataset. Best value in bold.
Domains Base SurK SA ARTL OT-IT OT-MM JDOT
caltech→amazon 92.07 91.65 90.50 92.17 89.98 92.59 91.54
caltech→webcam 76.27 77.97 81.02 80.00 80.34 78.98 88.81
caltech→dslr 84.08 82.80 85.99 88.54 78.34 76.43 89.81
amazon→caltech 84.77 84.95 85.13 85.04 85.93 87.36 85.22
amazon→webcam 79.32 81.36 85.42 79.32 74.24 85.08 84.75
amazon→dslr 86.62 87.26 89.17 85.99 77.71 79.62 87.90
webcam→caltech 71.77 71.86 75.78 72.75 84.06 82.99 82.64
webcam→amazon 79.44 78.18 81.42 79.85 89.56 90.50 90.71
webcam→dslr 96.18 95.54 94.90 100.00 99.36 99.36 98.09
dslr→caltech 77.03 76.94 81.75 78.45 85.57 83.35 84.33
dslr→amazon 83.19 82.15 83.19 83.82 90.50 90.50 88.10
dslr→webcam 96.27 92.88 88.47 98.98 96.61 96.61 96.61
Mean 83.92 83.63 85.23 85.41 86.02 86.95 89.04
Mean rank 5.33 5.58 4.00 3.75 3.50 2.83 2.50
p-value < 0.01 < 0.01 0.01 0.04 0.25 0.86 −
Table 2: Accuracy on the Amazon review experiment. Maximum value in bold font.
Domains NN DANN JDOT (mse) JDOT (Hinge)
books→dvd 0.805 0.806 0.794 0.795
books→kitchen 0.768 0.767 0.791 0.794
books→electronics 0.746 0.747 0.778 0.781
dvd→books 0.725 0.747 0.761 0.763
dvd→kitchen 0.760 0.765 0.811 0.821
dvd→electronics 0.732 0.738 0.778 0.788
kitchen→books 0.704 0.718 0.732 0.728
kitchen→dvd 0.723 0.730 0.764 0.765
kitchen→electronics 0.847 0.846 0.844 0.845
electronics→books 0.713 0.718 0.740 0.749
electronics→dvd 0.726 0.726 0.738 0.737
electronics→kitchen 0.855 0.850 0.868 0.872
Mean 0.759 0.763 0.783 0.787
p-value 0.004 0.006 0.025 −
We compare our method with four other methods: the surrogate kernel approach ([4], denoted SurK), subspace alignment, for its simplicity and good performance on visual adaptation ([8], SA), Adaptation Regularization based Transfer Learning ([30], ARTL), and two variants of regularized optimal transport [14]: entropy-regularized OT-IT, and classwise regularization implemented with the Majorization-Minimization algorithm (OT-MM), which was shown to give better results in practice than its group-lasso
counterpart. The classification is conducted with an SVM with a linear kernel for every method. The results of this SVM when learned on the source domain and tested on the target domain are also reported to serve as a baseline (Base). All the methods have hyper-parameters, which are selected using the reverse cross-validation of Zhong and colleagues [31]. The dimension d for SA is chosen from {1, 4, 7, . . . , 31}. The entropy regularization parameter for OT-IT and OT-MM is taken from {10^2, . . . , 10^5}, 10^2 being the minimum value that prevents numerical errors in the Sinkhorn algorithm. Finally, the η parameter of OT-MM is selected from {1, . . . , 10^5} and the α in JDOT from {10^-5, 10^-4, . . . , 1}. The classification accuracy for all the methods is reported in Table 1. We can see that JDOT consistently outperforms the baseline (by 5 points on average), indicating that the adaptation is successful in every case. Its mean accuracy is the best, as is its average ranking. We conducted a Wilcoxon signed-rank test to assess whether JDOT is statistically better than the other methods, and report the p-values in the tables. This test shows that JDOT is statistically better than the considered methods, except for the OT-based ones, which were state of the art on this dataset [14].
Amazon review classification dataset. We now consider the Amazon review dataset [32], which contains online reviews of different products collected on the Amazon website. Reviews are encoded with bag-of-words unigram and bigram features as input. The problem is to predict a positive (more than 3 stars) or negative (3 stars or less) rating of reviews (binary classification). Since different
Table 3: Comparison of different methods on the Wifilocalization dataset. Maximum value in bold.
Domains KRR SurK DIP DIP-CC GeTarS CTC CTC-TIP JDOT
t1→t2 80.84±1.14 90.36±1.22 87.98±2.33 91.30±3.24 86.76±1.91 89.36±1.78 89.22±1.66 93.03±1.24
t1→t3 76.44±2.66 94.97±1.29 84.20±4.29 84.32±4.57 90.62±2.25 94.80±0.87 92.60±4.50 90.06±2.01
t2→t3 67.12±1.28 85.83±1.31 80.58±2.10 81.22±4.31 82.68±3.71 87.92±1.87 89.52±1.14 86.76±1.72
hallway1 60.02±2.60 76.36±2.44 77.48±2.68 76.24±5.14 84.38±1.98 86.98±2.02 86.78±2.31 98.83±0.58
hallway2 49.38±2.30 64.69±0.77 78.54±1.66 77.8±2.70 77.38±2.09 87.74±1.89 87.94±2.07 98.45±0.67
hallway3 48.42±1.32 65.73±1.57 75.10±3.39 73.40±4.06 80.64±1.76 82.02±2.34 81.72±2.25 99.27±0.41
words are employed to qualify the different categories of products, a domain adaptation task can be formulated if one wants to predict positive reviews of a product from labelled reviews of a different product. Following [33, 11], we consider only a subset of four different types of product: books, DVDs, electronics and kitchens. This yields 12 possible adaptation tasks. Each domain contains 2000 labelled samples and approximately 4000 unlabelled ones. We therefore use these unlabelled samples to perform the transfer, and test on the 2000 labelled data. The goal of this experiment is to compare to the state-of-the-art method on this subset, namely the domain-adversarial neural network ([11], denoted DANN), and to show the versatility of our method, which can adapt to any type of classifier. The neural network used for all methods in this experiment is a simple 2-layer model with a sigmoid activation function in the hidden layer to promote non-linearity; 50 neurons are used in this hidden layer. For DANN, hyper-parameters are set through the reverse cross-validation proposed in [11], and following the recommendation of the authors the learning rate is set to 10^-3. In the case of JDOT, we used the heuristic setting α = 1/max_{i,j} d(x_i^s, x_j^t), and as such we do not need any cross-validation.
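The heuristic choice of α above can be sketched in a few lines, assuming (as in the experiments) that d is the squared Euclidean distance; the helper name is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def heuristic_alpha(Xs, Xt):
    """alpha = 1 / max_{i,j} d(x_i^s, x_j^t), with d the squared Euclidean
    distance: this caps the feature-space part of the joint cost so that
    it cannot dominate the label loss."""
    return 1.0 / cdist(Xs, Xt, "sqeuclidean").max()
```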
The squared Euclidean norm is used as the metric in the feature space, and we test as loss functions both the mean squared error (mse) and the hinge loss. 10 iterations of the block coordinate descent are performed. For each method, we stop the learning process of the network after 5 epochs. Classification accuracies are presented in Table 2. The neural network (NN), trained on the source and tested on the target, is also reported as a baseline. JDOT surpasses DANN in 11 out of 12 tasks (all except books→dvd). The hinge loss is better than mse in 10 out of 12 cases, which is expected given the superiority of the hinge loss on classification tasks [19].
Wifilocalization regression dataset. For the regression task, we use the cross-domain indoor Wifi localization dataset that was proposed by Zhang and co-authors [4] and recently studied in [5]. From a multi-dimensional signal (a collection of signal strengths perceived from several access points), the goal is to locate the device in a hallway, discretized into a grid of 119 squares, by learning a mapping from the signal to the grid element. This translates into a regression problem. As the signals were acquired at different time periods by different devices, a shift can be encountered, which calls for adaptation. In the following, we use the exact same experimental protocol as in [4, 5] for ease of comparison. Two cases of adaptation are considered: transfer across periods, for which three time periods t1, t2 and t3 are considered, and transfer across devices, where three different devices are used to collect the signals in the same straight-line hallways (hallway1–3), leading to three different adaptation tasks in each case.
We compare the results of our method with several state-of-the-art methods: kernel ridge regression with an RBF kernel (KRR), the surrogate kernel ([4], denoted SurK), domain-invariant projection and its cluster-regularized version ([7], denoted DIP and DIP-CC respectively), generalized target shift ([34], denoted GeTarS), and conditional transferable components, with its target information preservation regularization ([5], denoted CTC and CTC-TIP respectively). As in [4, 5], the hyper-parameters of the competing methods are cross-validated on a small subset of the target domain. In the case of JDOT, we simply set α to the heuristic value α = 1/max_{i,j} d(x_i^s, x_j^t) as discussed previously, and f is estimated with kernel ridge regression. Following [4], the accuracy is measured in the following way: a prediction is said to be correct if it falls within a range of three meters for transfer across periods, and six meters for transfer across devices. For each experiment, we randomly sample sixty percent of the source and target domains, and report the mean and standard deviation of the accuracies over ten repetitions in Table 3. For transfer across periods, JDOT performs best in one out of three tasks. For transfer across devices, the superiority of JDOT is clearly established, as it reaches an average score above 98%, at least ten points ahead of the best competing method on every task. These extremely good results may be explained by the fact that optimal transport can handle large distribution shifts, with which divergence-based criteria (such as the maximum mean discrepancy used in CTC) or reweighting strategies cannot cope.
6 Discussion and conclusion
We have presented in this paper Joint Distribution Optimal Transport for domain adaptation, a principled way of performing domain adaptation with optimal transport.
JDOT assumes the existence of a transfer map that transforms the source domain joint distribution Ps(X, Y) into its target domain counterpart Pt(X, Y). Through this transformation, both the feature space and the conditional distributions are aligned, which allows us to devise an efficient algorithm that simultaneously optimizes a coupling between Ps and Pt and a prediction function that solves the transfer problem. We also proved that learning with JDOT amounts to minimizing a bound on the target error. We have demonstrated through experiments on classical real-world benchmark datasets the superiority of our approach w.r.t. several state-of-the-art methods, including previous work on optimal transport based domain adaptation, domain-adversarial neural networks and transfer components, on a variety of tasks including classification and regression. We have also shown the versatility of our method, which can accommodate several types of loss functions (mse, hinge) and hypothesis classes (including kernel machines and neural networks). Potential follow-ups of this work include a semi-supervised extension (using unlabelled examples in the source domain) and investigating stochastic techniques for efficiently solving the adaptation problem. From a theoretical standpoint, future work includes a deeper study of probabilistic transfer Lipschitzness and the development of guarantees able to take into account the complexity of the hypothesis class and the space of possible transport plans.
Acknowledgements
This work benefited from the support of the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR), the Normandie Projet GRR-DAISI, European funding FEDER DAISI and CNRS funding from the Défi Imag'In. The authors also wish to thank Kai Zhang and Qiaojun Wang for providing the Wifilocalization dataset.
References
[1] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual domain adaptation: an overview of recent advances.
IEEE Signal Processing Magazine, 32(3), 2015.
[2] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[3] M. Sugiyama, S. Nakajima, H. Kashima, P.V. Buenau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In NIPS, 2008.
[4] K. Zhang, V. W. Zheng, Q. Wang, J. T. Kwok, Q. Yang, and I. Marsic. Covariate shift in Hilbert space: A solution via surrogate kernels. In ICML, 2013.
[5] M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf. Domain adaptation with conditional transferable components. In ICML, volume 48, pages 2839–2848, 2016.
[6] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[7] M. Baktashmotlagh, M. Harandi, B. Lovell, and M. Salzmann. Unsupervised domain adaptation by domain invariant projection. In ICCV, pages 769–776, 2013.
[8] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013.
[9] M. Long, J. Wang, G. Ding, J. Sun, and P. Yu. Transfer joint matching for unsupervised domain adaptation. In CVPR, pages 1410–1417, 2014.
[10] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180–1189, 2015.
[11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
[12] S. Si, D. Tao, and B. Geng. Bregman divergence-based regularization for transfer subspace learning. IEEE Transactions on Knowledge and Data Engineering, 22(7):929–942, July 2010.
[13] N. Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In ECML/PKDD, 2014.
[14] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[15] L. Kantorovich. On the translocation of masses. C.R. (Doklady) Acad. Sci. URSS (N.S.), 37:199–201, 1942.
[16] F. Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 2015.
[17] M. Perrot, N. Courty, R. Flamary, and A. Habrard. Mapping estimation for discrete optimal transport. In NIPS, pages 4197–4205, 2016.
[18] C. Villani. Optimal transport: old and new. Grund. der mathematischen Wissenschaften. Springer, 2009.
[19] Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, and Alessandro Verri. Are loss functions all the same? Neural Computation, 16(5):1063–1076, 2004.
[20] M. Thorpe, S. Park, S. Kolouri, G. Rohde, and D. Slepcev. A transportation Lp distance for signal analysis. CoRR, abs/1609.08669, 2016.
[21] R. Urner, S. Shalev-Shwartz, and S. Ben-David. Access to unlabeled data can speed up prediction time. In Proceedings of ICML, pages 641–648, 2011.
[22] S. Ben-David, S. Shalev-Shwartz, and R. Urner. Domain adaptation–can quantity compensate for quality? In Proc. of ISAIM, 2012.
[23] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Proc. of COLT, 2009.
[24] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.
[25] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013.
[26] A. Genevay, M. Cuturi, G. Peyré, and F. Bach. Stochastic optimization for large-scale optimal transport. In NIPS, pages 3432–3440, 2016.
[27] Luigi Grippo and Marco Sciandrone. On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Operations Research Letters, 26(3):127–136, 2000.
[28] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, LNCS, pages 213–226, 2010.
[29] J. Donahue, Y. Jia, O.
Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[30] M. Long, J. Wang, G. Ding, S. Jialin Pan, and P.S. Yu. Adaptation regularization: A general framework for transfer learning. IEEE TKDE, 26(7):1076–1089, 2014.
[31] E. Zhong, W. Fan, Q. Yang, O. Verscheure, and J. Ren. Cross validation framework to choose amongst models and datasets for transfer learning. In ECML/PKDD, 2010.
[32] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proc. of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128, 2006.
[33] M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation. In ICML, 2012.
[34] K. Zhang, M. Gong, and B. Schölkopf. Multi-source domain adaptation: A causal view. In AAAI Conference on Artificial Intelligence, pages 3150–3157, 2015.
FALKON: An Optimal Large Scale Kernel Method
Alessandro Rudi∗ INRIA – Sierra Project-team, École Normale Supérieure, Paris
Luigi Carratino, University of Genoa, Genova, Italy
Lorenzo Rosasco, University of Genoa, LCSL, IIT & MIT
Abstract
Kernel methods provide a principled way to perform non-linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points. FALKON is derived by combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n√n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.
1 Introduction
The goal in supervised learning is to learn from examples a function that predicts well on new data. Nonparametric methods are often crucial, since the functions to be learned can be non-linear and complex. Kernel methods are probably the most popular among nonparametric learning methods, but despite excellent theoretical properties, they have limited applicability in large scale learning because of time and memory requirements, typically at least quadratic in the number of data points. Overcoming these limitations has motivated a variety of practical approaches, including gradient methods, as well as accelerated, stochastic and preconditioned extensions, to improve time complexity [1, 2, 3, 4, 5, 6].
Random projections provide an approach to reduce memory requirements, with popular methods including Nyström [7, 8], random features [9], and their numerous extensions. From a theoretical perspective, a key question has become to characterize statistical and computational trade-offs, that is, whether, or under which conditions, computational gains come at the expense of statistical accuracy. In particular, recent results on least squares show that there is a large class of problems for which, by combining Nyström or random features approaches [10, 11, 12, 13, 14, 15] with ridge regression, it is possible to substantially reduce computations while preserving the same optimal statistical accuracy of exact kernel ridge regression (KRR). While statistical lower bounds exist for this setting, there are no corresponding computational lower bounds. The state of the art approximations of KRR for which optimal statistical bounds are known typically require complexities that are roughly O(n²) in time and memory (or possibly O(n) in memory, if kernel computations are made on the fly). In this paper, we propose and study FALKON, a new algorithm that, to the best of our knowledge, has the best known theoretical guarantees. At the same time, FALKON provides an efficient approach to apply kernel methods to millions of points and, tested on a variety of large scale problems,
∗E-mail: alessandro.rudi@inria.fr. This work was done when A.R. was working at the Laboratory of Computational and Statistical Learning (Istituto Italiano di Tecnologia).
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Our new algorithm, exploits the idea of using Nystr¨om methods to approximate the KRR problem, but also to efficiently compute a preconditioning to be used in conjugate gradient. To the best of our knowledge this is the first time all these ideas are combined and put to fruition. Our theoretical analysis derives optimal statistical rates both in a basic setting and under benign conditions for which fast rates are possible. The potential benefits of different sampling strategies are also analyzed. Most importantly, the empirical performances are thoroughly tested on available large scale data-sets. Our results show that, even on a single machine, FALKON can outperforms state of the art methods on most problems both in terms of time efficiency and prediction accuracy. In particular, our results suggest that FALKON could be a viable kernel alternative to deep fully connected neural networks for large scale problems. The rest of the paper is organized as follows. In Sect. 2 we give some background on kernel methods. In Sect. 3 we introduce FALKON, while in Sect. 4 we present and discuss the main technical results. Finally in Sect. 5 we present experimental results. 2 Statistical and Computational Trade-offs in Kernel Methods We consider the supervised learning problem of estimating a function from random noisy samples. In statistical learning theory, this can be formalized as the problem of solving inf f∈H E(f), E(f) = Z (f(x) −y)2dρ(x, y), (1) given samples (xi, yi)n i=1 from ρ, which is fixed but unknown and where, H is a space of candidate solutions. Ideally, a good empirical solution bf should have small excess risk R( bf ) = E( bf ) −inf f∈H E(f), (2) since this implies it will generalize/predict well new data. In this paper, we are interested in both computational and statistical aspects of the above problem. In particular, we investigate the computational resources needed to achieve optimal statistical accuracy, i.e. minimal excess risk. 
Our focus is on the most popular class of nonparametric methods, namely kernel methods.
Kernel methods and ridge regression. Kernel methods consider a space H of functions
f(x) = Σ_{i=1}^n α_i K(x, x_i),   (3)
where K is a positive definite kernel². The coefficients α_1, . . . , α_n are typically derived from a convex optimization problem, which for the square loss is
f̂_{n,λ} = argmin_{f∈H} (1/n) Σ_{i=1}^n (f(x_i) − y_i)² + λ||f||²_H,   (4)
and defines the so-called kernel ridge regression (KRR) estimator [16]. An advantage of least squares approaches is that they reduce computations to a linear system
(K_nn + λnI) α = ŷ,   (5)
where K_nn is the n × n matrix with (K_nn)_{ij} = K(x_i, x_j) and ŷ = (y_1, . . . , y_n). We next comment on the computational and statistical properties of KRR.
Computations. Solving Eq. (5) for large datasets is challenging. A direct approach requires O(n²) space, to allocate K_nn; O(n²) kernel evaluations; and O(n²c_K + n³) time, to compute and invert K_nn (c_K is the kernel evaluation cost, assumed constant and omitted throughout).
Statistics. Under basic assumptions, KRR achieves an error R(f̂_{λ_n}) = O(n^{-1/2}) for λ_n = n^{-1/2}, which is optimal in a minimax sense and can be improved only under more stringent assumptions [17, 18].
²K is positive definite if the matrix with entries K(x_i, x_j) is positive semidefinite for all x_1, . . . , x_N, N ∈ N [16].
The question is then whether it is possible to achieve the statistical properties of KRR with fewer computations.
Gradient methods and early stopping. A natural idea is to consider iterative solvers, and in particular gradient methods, because of their simplicity and low iteration cost. A basic example is computing the coefficients in (3) by the descent step
α_t = α_{t−1} − τ [(K_nn α_{t−1} − ŷ) + λn α_{t−1}],   (6)
for a suitable choice of step size τ.
Computations. In this case, if t is the number of iterations, gradient methods require O(n²t) time, O(n²) memory and O(n²) kernel evaluations, if the kernel matrix is stored.
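A minimal sketch of the gradient iteration (6), written as a descent step, with the step size chosen from the largest eigenvalue of K_nn so that the iteration converges to the solution of the linear system (5); the function name is illustrative.

```python
import numpy as np

def krr_gradient(K, y, lam, n_iter):
    """Gradient iteration for the KRR system (K + lam*n*I) alpha = y:
        alpha <- alpha - tau * ((K @ alpha - y) + lam*n*alpha),
    with tau = 1/(sigma_max + lam*n) to guarantee convergence."""
    n = len(y)
    tau = 1.0 / (np.linalg.eigvalsh(K).max() + lam * n)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        alpha = alpha - tau * ((K @ alpha - y) + lam * n * alpha)
    return alpha
```

As the text notes, the number of iterations needed scales with the condition number of K_nn + λnI, which is what motivates the preconditioning discussed later.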
Note that the kernel matrix can also be computed on the fly with only O(n) memory, but O(n²t) kernel evaluations are then required. We note that, beyond the above simple iteration, several variants have been considered, including accelerated [1, 19] and stochastic extensions [20].
Statistics. The statistical properties of iterative approaches are well studied, also in the case where λ is set to zero and regularization is performed by choosing a suitable stopping time [21]. In this latter case, the number of iterations can roughly be thought of as 1/λ, and O(√n) iterations are needed for basic gradient descent, O(n^{1/4}) for accelerated methods, and possibly O(1) iterations/epochs for stochastic methods. Importantly, we note that, unlike most optimization studies, here we are considering the number of iterations needed to solve (1), rather than (4). While the time complexity of these methods dramatically improves over KRR, and computations can be done in blocks, the memory requirements (or number of kernel evaluations) still make application to large scale settings cumbersome. Randomization provides an approach to tackle this challenge.
Random projections. The rough idea is to use random projections to compute K_nn only approximately. The most popular examples in this class of approaches are Nyström [7, 8] and random features [9] methods. In the following we focus in particular on a basic Nyström approach, based on considering functions of the form
f̃_{λ,M}(x) = Σ_{i=1}^M α̃_i K(x, x̃_i),   with {x̃_1, . . . , x̃_M} ⊆ {x_1, . . . , x_n},   (7)
defined considering only a subset of M training points, sampled uniformly. In this case, there are only M coefficients that, following the approach in (4), can be derived from the linear system
H α̃ = z,   where H = K_nM^⊤ K_nM + λn K_MM,   z = K_nM^⊤ ŷ.   (8)
Here K_nM is the n × M matrix with (K_nM)_{ij} = K(x_i, x̃_j) and K_MM is the M × M matrix with (K_MM)_{ij} = K(x̃_i, x̃_j).
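The Nyström linear system (8) can be solved directly as sketched below; the tiny jitter added to the system matrix is a numerical safeguard for (near) rank-deficient K_MM, not part of the method as stated.

```python
import numpy as np

def nystrom_krr(K_nM, K_MM, y, lam):
    """Solve the Nystrom system (Eq. 8):
        (K_nM^T K_nM + lam*n*K_MM) alpha = K_nM^T y.
    A tiny jitter keeps a (near) rank-deficient system solvable."""
    n, M = K_nM.shape
    H = K_nM.T @ K_nM + lam * n * K_MM
    return np.linalg.solve(H + 1e-10 * np.eye(M), K_nM.T @ y)
```

A useful sanity check: with M = n and the centers equal to all training points, K_nM = K_MM = K_nn, and the solution coincides with that of exact KRR (5) whenever K_nn is invertible, since H = K_nn(K_nn + λnI).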
This method consists in subsampling the columns of $K_{nn}$ and can be seen as a particular form of random projection.

Computations. Direct methods for solving (8) require $O(nM^2)$ in time to form $K_{nM}^\top K_{nM}$ and $O(M^3)$ to solve the linear system, and only $O(nM)$ kernel evaluations. The naive memory requirement is $O(nM)$ to store $K_{nM}$; however, if $K_{nM}^\top K_{nM}$ is computed in blocks of dimension at most $M \times M$, only $O(M^2)$ memory is needed. Iterative approaches as in (6) can also be combined with random projections [22, 23, 24] to slightly reduce time requirements (see Table 1, or Sect. F in the appendix, for more details).

Statistics. The key point, though, is that random projections allow one to dramatically reduce memory requirements as soon as $M \ll n$, and the question arises of whether this comes at the expense of statistical accuracy. Interestingly, recent results considering this question show that there are large classes of problems for which $M = \tilde{O}(\sqrt{n})$ suffices for the same optimal statistical accuracy as exact KRR [11, 12, 13]. In summary, in this case the computations needed for optimal statistical accuracy are reduced from $O(n^2)$ to $O(n\sqrt{n})$ kernel evaluations, but the best time complexity is basically $O(n^2)$. In the rest of the paper we discuss how this requirement can indeed be dramatically reduced.

3 FALKON

Our approach is based on a novel combination of randomized projections with iterative solvers plus preconditioning. The main novelty is that we use random projections to approximate both the problem and the preconditioning.

Preliminaries: preconditioning and KRR. We begin by recalling the basic idea behind preconditioning. The key quantity is the condition number, which for a linear system is the ratio between the largest and smallest singular values of the matrix defining the problem [25]. For example, for problem (5) the condition number is given by

  $\mathrm{cond}(K_{nn} + \lambda n I) = (\sigma_{\max} + \lambda n)/(\sigma_{\min} + \lambda n)$,

with $\sigma_{\max}, \sigma_{\min}$ the largest and smallest eigenvalues of $K_{nn}$, respectively.
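The condition number formula above is easy to check numerically. The sketch below (synthetic data and parameter values are our own illustrative choices) shows how larger regularization $\lambda n$ shrinks the condition number of $K_{nn} + \lambda n I$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
Knn = np.exp(-((X - X.T) ** 2) / (2 * 0.3 ** 2))   # Gaussian kernel matrix

eigs = np.linalg.eigvalsh(Knn)
sigma_min = max(eigs[0], 0.0)   # clip tiny negative rounding error; Knn is PSD
sigma_max = eigs[-1]

def cond_regularized(lam_n):
    # cond(K_nn + lam_n * I) = (sigma_max + lam_n) / (sigma_min + lam_n)
    return (sigma_max + lam_n) / (sigma_min + lam_n)

c_small_reg = cond_regularized(1e-6)   # nearly unregularized: huge condition number
c_large_reg = cond_regularized(1e-1)   # stronger regularization: much better conditioned
```

Since the iteration count of gradient methods scales with this ratio, a small $\lambda$ (needed for statistical accuracy) directly translates into many iterations, which is what preconditioning will fix.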
The importance of the condition number is that it captures the time complexity of iteratively solving the corresponding linear system. For example, if simple gradient descent (6) is used, the number of iterations needed for an $\epsilon$-accurate solution of problem (5) is

  $t = O(\mathrm{cond}(K_{nn} + \lambda n I)\log(1/\epsilon))$.

It is shown in [23] that in this case $t \approx \sqrt{n}\log n$ iterations are needed to achieve a solution with good statistical properties. Indeed, it can be shown that roughly $t \approx \frac{1}{\lambda}\log(\frac{1}{\epsilon})$ iterations are needed, where $\lambda = 1/\sqrt{n}$ and $\epsilon = 1/n$. The idea behind preconditioning is to use a suitable matrix $B$ to define an equivalent linear system with better condition number. For (5), an ideal choice is $B$ such that

  $BB^\top = (K_{nn} + \lambda n I)^{-1}$    (9)

and $B^\top(K_{nn} + \lambda n I)B\,\beta = B^\top\hat{y}$. Clearly, if $\beta_*$ solves the latter problem, $\alpha_* = B\beta_*$ is a solution of problem (5). Using a preconditioner $B$ as in (9), one iteration is sufficient, but computing $B$ is typically as hard as the original problem. The challenge is to derive a preconditioner such that (9) holds only approximately, but that can be computed efficiently. The derivation of efficient preconditioners for the exact KRR problem (5) has been the subject of recent studies [3, 4, 26, 5, 6]. In particular, [4, 26, 5, 6] consider random projections to approximately compute a preconditioner. Clearly, while preconditioning (5) leads to computational speed-ups in terms of the number of iterations, requirements in terms of memory/kernel evaluations are the same as for standard kernel ridge regression. The key idea to tackle this problem is to consider an efficient preconditioning approach for problem (8) rather than (5).

Basic FALKON algorithm. We begin by illustrating a basic version of our approach. The key ingredient is the following preconditioner for Eq. (8),

  $BB^\top = \left(\frac{n}{M} K_{MM}^2 + \lambda n K_{MM}\right)^{-1}$,    (10)

which is itself based on a Nyström approximation.³
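Eq. (9) can be verified directly on a toy problem. In the sketch below (synthetic data, our own parameter choices), one valid ideal preconditioner is built from a Cholesky factorization; the preconditioned matrix becomes the identity, i.e. condition number 1:

```python
import numpy as np

# Small SPD system: K_nn + lambda*n*I for a Gaussian kernel on synthetic 1-D inputs.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(80, 1))
Knn = np.exp(-((X - X.T) ** 2) / (2 * 0.3 ** 2))
lam_n = 1e-3
Areg = Knn + lam_n * np.eye(80)

# Ideal preconditioner of Eq. (9): any B with B B^T = (K_nn + lambda n I)^{-1}.
# One valid choice is B = L^{-T}, where Areg = L L^T is the Cholesky factorization,
# since then B B^T = L^{-T} L^{-1} = Areg^{-1}.
L = np.linalg.cholesky(Areg)
B = np.linalg.inv(L).T

preconditioned = B.T @ Areg @ B   # B^T (K_nn + lambda n I) B: identity in exact arithmetic
cond_before = np.linalg.cond(Areg)
cond_after = np.linalg.cond(preconditioned)
```

Of course forming this $B$ costs as much as solving (5) directly, which is exactly the point made in the text: FALKON instead approximates it cheaply through the Nyström blocks of Eq. (10).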
The above preconditioner is a natural approximation of the ideal preconditioner of problem (8), that is $BB^\top = (K_{nM}^\top K_{nM} + \lambda n K_{MM})^{-1}$, and reduces to it if $M = n$. Our theoretical analysis shows that $M \ll n$ suffices for deriving optimal statistical rates. In its basic form, FALKON is derived by combining the above preconditioner and gradient descent,

  $\hat{f}_{\lambda,M,t}(x) = \sum_{i=1}^{M} \alpha_{t,i} K(x, \tilde{x}_i)$, with $\alpha_t = B\beta_t$, and    (11)

  $\beta_k = \beta_{k-1} - \frac{\tau}{n} B^\top \left[ K_{nM}^\top (K_{nM}(B\beta_{k-1}) - \hat{y}) + \lambda n K_{MM}(B\beta_{k-1}) \right]$,    (12)

for $t \in \mathbb{N}$, $\beta_0 = 0$, $1 \le k \le t$, and a suitably chosen $\tau$. In practice, a refined version of FALKON is preferable, where a faster gradient iteration is used and additional care is taken in organizing computations.

FALKON. The actual version of FALKON we propose is Alg. 1 (see Sect. A, Alg. 2 for the complete algorithm). It consists in solving the system $B^\top H B \beta = B^\top z$ via conjugate gradient [25], since it is a fast gradient method and does not require specifying the step-size. Moreover, to compute $B$ quickly and with reduced numerical errors, we consider the following strategy

  $B = \frac{1}{\sqrt{n}}\, T^{-1} A^{-1}$, $T = \mathrm{chol}(K_{MM})$, $A = \mathrm{chol}\left(\frac{1}{M} T T^\top + \lambda I\right)$,    (13)

where $\mathrm{chol}(\cdot)$ is the Cholesky decomposition (see Sect. A for the strategy for non-invertible $K_{MM}$).

³ For the sake of simplicity, here we assume $K_{MM}$ to be invertible and the Nyström centers selected with uniform sampling from the training set; see Sect. A and Alg. 2 in the appendix for the general algorithm.

Algorithm 1 MATLAB code for FALKON. It requires $O(nMt + M^3)$ in time and $O(M^2)$ in memory. See Sect. A and Alg. 2 in the appendix for the complete algorithm.
Input: Dataset $X = (x_i)_{i=1}^n \in \mathbb{R}^{n \times D}$, $\hat{y} = (y_i)_{i=1}^n \in \mathbb{R}^n$, centers $C = (\tilde{x}_j)_{j=1}^M \in \mathbb{R}^{M \times D}$, KernelMatrix computing the kernel matrix given two sets of points, regularization parameter $\lambda$, number of iterations $t$.
Output: Nyström coefficients $\alpha$.
function alpha = FALKON(X, C, Y, KernelMatrix, lambda, t)
    n = size(X,1); M = size(C,1);
    KMM = KernelMatrix(C,C);
    T = chol(KMM + eps*M*eye(M));
    A = chol(T*T'/M + lambda*eye(M));

    function w = KnM_times_vector(u, v)
        w = zeros(M,1);
        ms = ceil(linspace(0, n, ceil(n/M)+1));
        for i=1:ceil(n/M)
            Kr = KernelMatrix( X(ms(i)+1:ms(i+1),:), C );
            w = w + Kr'*(Kr*u + v(ms(i)+1:ms(i+1),:));
        end
    end

    BHB = @(u) A'\(T'\(KnM_times_vector(T\(A\u), zeros(n,1))/n) + lambda*(A\u));
    r = A'\(T'\KnM_times_vector(zeros(M,1), Y/n));
    alpha = T\(A\conjgrad(BHB, r, t));
end

Computations. In Alg. 1, $B$ is never built explicitly, and $A$, $T$ are two upper-triangular matrices, so computing $A^{-\top}u$ or $A^{-1}u$ for a vector $u$ costs $O(M^2)$, and the same holds for $T$. The cost of computing the preconditioner is only $\frac{4}{3}M^3$ floating point operations (consisting of two Cholesky decompositions and one product of two triangular matrices). FALKON then requires $O(nMt + M^3)$ in time and the same $O(M^2)$ memory requirement as the basic Nyström method, if matrix/vector multiplications at each iteration are performed in blocks. This implies that $O(nMt)$ kernel evaluations are needed. The question remains to characterize the $M$ and the number of iterations needed for good statistical accuracy. Indeed, in the next section we show that roughly $O(n\sqrt{n})$ computations and $O(n)$ memory are sufficient for optimal accuracy. This implies that FALKON is currently the most efficient kernel method with the same optimal statistical accuracy as KRR; see Table 1.

4 Theoretical Analysis

In this section, we characterize the generalization properties of FALKON, showing that it achieves the optimal generalization error of KRR with dramatically reduced computations. This result is given in Thm. 3 and derived in two steps. First, we study the difference between the excess risk of FALKON and that of the basic Nyström estimator (8), showing that it depends on the condition number induced by the preconditioning, hence on $M$ (see Thm. 1).
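Before the analysis, it may help to see Alg. 1 end to end. The sketch below is our own numpy translation, not the paper's code: it mirrors the structure of Alg. 1 (upper-triangular Cholesky factors as in Eq. (13), conjugate gradient on $B^\top H B\,\beta = B^\top z$), but forms $K_{nM}$ in one piece instead of streaming it in blocks, uses a fixed jitter instead of `eps*M`, and runs on illustrative synthetic data:

```python
import numpy as np

def gaussian(A, B, s=0.3):
    # Gaussian kernel matrix between row-sets A (n x d) and B (m x d).
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * s * s))

def conjgrad(apply_A, b, t):
    # Plain conjugate gradient: run t iterations on apply_A(x) = b.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(t):
        Ap = apply_A(p)
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def falkon(X, y, C, lam, t):
    # Mirrors Alg. 1: T = chol(KMM), A = chol(T T'/M + lam I), CG on B'HB beta = B'z.
    n, M = X.shape[0], C.shape[0]
    KMM = gaussian(C, C)
    T = np.linalg.cholesky(KMM + 1e-10 * M * np.eye(M)).T   # upper factor: T' T = KMM (+ jitter)
    A = np.linalg.cholesky(T @ T.T / M + lam * np.eye(M)).T
    KnM = gaussian(X, C)   # Alg. 1 streams this in M x M blocks; formed whole here for clarity
    Ti, Ai = np.linalg.inv(T), np.linalg.inv(A)

    def BHB(u):
        c = Ai @ u                      # A \ u
        w = Ti @ c                      # T \ (A \ u)
        w = KnM.T @ (KnM @ w) / n       # KnM' * (KnM * w) / n
        return Ai.T @ (Ti.T @ w + lam * c)

    r = Ai.T @ (Ti.T @ (KnM.T @ (y / n)))
    beta = conjgrad(BHB, r, t)
    return Ti @ (Ai @ beta)             # alpha = T \ (A \ beta)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
C = np.linspace(-1, 1, 20).reshape(-1, 1)   # Nystrom centers (a uniform grid for simplicity)
alpha = falkon(X, y, C, lam=1e-4, t=20)
mse = np.mean((gaussian(X, C) @ alpha - y) ** 2)
```

Only $M \times M$ triangular solves and $K_{nM}$ matrix-vector products appear inside the CG loop, which is where the $O(nMt + M^3)$ time and $O(M^2)$ memory counts come from.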
Deriving these results requires some care since, differently from standard optimization results, our goal is to solve (1), i.e. achieve small excess risk, not to minimize the empirical error. Second, we show that choosing $M = \tilde{O}(1/\lambda)$ allows one to make this difference as small as $e^{-t/2}$ (see Thm. 2). Finally, recalling that the basic Nyström estimator for $\lambda = 1/\sqrt{n}$ has essentially the same statistical properties as KRR [13], we answer the question posed at the end of the last section and show that roughly $\log n$ iterations are sufficient for optimal statistical accuracy. Following the discussion in the previous section, this means that the computational requirements for optimal accuracy are $\tilde{O}(n\sqrt{n})$ in time/kernel evaluations and $\tilde{O}(n)$ in space. Later in this section, faster rates under further regularity assumptions are also derived, and the effect of different selection methods for the Nyström centers is considered. The proofs for this section are provided in Sect. E of the appendix.

4.1 Main Result

The first result is interesting in its own right, since it corresponds to translating optimization guarantees into statistical results. In particular, we derive a relation between the excess risk of the FALKON algorithm $\hat{f}_{\lambda,M,t}$ from Alg. 1 and that of the Nyström estimator $\tilde{f}_{\lambda,M}$ from Eq. (8) with uniform sampling.

Table 1: Computational complexity required by different algorithms for optimal generalization. Logarithmic terms are not shown.

  Algorithm | train time | kernel evaluations | memory | test time
  SVM / KRR + direct method | $n^3$ | $n^2$ | $n^2$ | $n$
  KRR + iterative [1, 2] | $n^2\,n^{1/4}$ | $n^2$ | $n^2$ | $n$
  Doubly stochastic [22] | $n^2\sqrt{n}$ | $n^2\sqrt{n}$ | $n$ | $n$
  Pegasos / KRR + sgd [27] | $n^2$ | $n^2$ | $n$ | $n$
  KRR + iter + precond [3, 28, 4, 5, 6] | $n^2$ | $n^2$ | $n$ | $n$
  Divide & Conquer [29] | $n^2$ | $n\sqrt{n}$ | $n$ | $n$
  Nyström, random features [7, 8, 9] | $n^2$ | $n\sqrt{n}$ | $n$ | $\sqrt{n}$
  Nyström + iterative [23, 24] | $n^2$ | $n\sqrt{n}$ | $n$ | $\sqrt{n}$
  Nyström + sgd [20] | $n^2$ | $n\sqrt{n}$ | $n$ | $\sqrt{n}$
  FALKON (see Thm. 3) | $n\sqrt{n}$ | $n\sqrt{n}$ | $n$ | $\sqrt{n}$

Theorem 1. Let $n, M \ge 3$, $t \in \mathbb{N}$, $0 < \lambda \le \lambda_1$ and $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that $K(x, x) \le \kappa^2$ for any $x \in X$.
Then the following inequality holds with probability $1 - \delta$:

  $R(\hat{f}_{\lambda,M,t})^{1/2} \le R(\tilde{f}_{\lambda,M})^{1/2} + 4\hat{v}\, e^{-\nu t} \sqrt{1 + \frac{9\kappa^2}{\lambda n} \log\frac{n}{\delta}}$,

where $\hat{v}^2 = \frac{1}{n}\sum_{i=1}^{n} y_i^2$ and $\nu = \log\!\big(1 + 2/(\mathrm{cond}(B^\top H B)^{1/2} - 1)\big)$, with $\mathrm{cond}(B^\top H B)$ the condition number of $B^\top H B$. Note that $\lambda_1 > 0$ is a constant not depending on $\lambda, n, M, \delta, t$.

The additive term in the bound above decreases exponentially in the number of iterations. If the condition number of $B^\top H B$ is smaller than a small universal constant (e.g. 17), then $\nu > 1/2$ and the additive term decreases as $e^{-t/2}$. The next theorems derive a condition on $M$ that allows one to control $\mathrm{cond}(B^\top H B)$, and hence to derive such an exponential decay.

Theorem 2. Under the same conditions of Thm. 1, if

  $M \ge 5\left[1 + \frac{14\kappa^2}{\lambda}\right] \log\frac{8\kappa^2}{\lambda\delta}$,

then the exponent $\nu$ in Thm. 1 satisfies $\nu \ge 1/2$.

The above result gives the desired exponential bound, showing that after $\log n$ iterations the excess risk of FALKON is controlled by that of the basic Nyström estimator; more precisely, $R(\hat{f}_{\lambda,M,t}) \le 2R(\tilde{f}_{\lambda,M})$ when

  $t \ge \log\frac{1}{R(\tilde{f}_{\lambda,M})} + \log\left(1 + \frac{9\kappa^2}{\lambda n}\log\frac{n}{\delta}\right) + \log(16\hat{v}^2)$.

Finally, we derive an excess risk bound for FALKON. By the no-free-lunch theorem, this requires some conditions on the learning problem. We first consider a standard basic setting where we only assume that there exists $f_{\mathcal{H}} \in \mathcal{H}$ such that $\mathcal{E}(f_{\mathcal{H}}) = \inf_{f \in \mathcal{H}} \mathcal{E}(f)$.

Theorem 3. Let $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that $K(x, x) \le \kappa^2$ for any $x \in X$, and $y \in [-\frac{a}{2}, \frac{a}{2}]$ almost surely, for some $a > 0$. There exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$, if

  $\lambda = \frac{1}{\sqrt{n}}$, $M \ge 75\sqrt{n}\log\frac{48\kappa^2 n}{\delta}$, $t \ge \frac{1}{2}\log(n) + 5 + 2\log(a + 3\kappa)$,

then with probability $1 - \delta$,

  $R(\hat{f}_{\lambda,M,t}) \le \frac{c_0 \log^2\frac{24}{\delta}}{\sqrt{n}}$.

In particular, $n_0, c_0$ do not depend on $\lambda, M, n, t$, and $c_0$ does not depend on $\delta$.

The above result provides the desired bound, and all the constants are given in the appendix. The obtained learning rate is the same as for the full KRR estimator and is known to be optimal in a minimax sense [17], hence not improvable.
As mentioned before, the same bound is also achieved by the basic Nyström method, but with much worse time complexity. Indeed, as discussed before, using a simple iterative solver typically requires $O(\sqrt{n}\log n)$ iterations, while we need only $O(\log n)$. Considering the choice for $M$, this leads to a computational time of $O(nMt) = O(n\sqrt{n})$ for optimal generalization (omitting logarithmic terms). To the best of our knowledge, FALKON currently provides the best time/space complexity to achieve the statistical accuracy of KRR. Beyond the basic setting considered above, in the next section we show that FALKON can achieve much faster rates under refined regularity assumptions, and we also consider the potential benefits of leverage score sampling.

4.2 Fast learning rates and Nyström with approximate leverage scores

Considering fast rates and Nyström with more general sampling is considerably more technical, and heavier notation is needed. Our analysis applies to any approximation scheme (e.g. [30, 12, 31]) satisfying the definition of $q$-approximate leverage scores [13], i.e.

  $q^{-1} l_i(\lambda) \le \hat{l}_i(\lambda) \le q\, l_i(\lambda)$, for all $i \in \{1, \dots, n\}$.

Here $\lambda > 0$, $l_i(\lambda) = (K_{nn}(K_{nn} + \lambda n I)^{-1})_{ii}$ are the leverage scores, and $q \ge 1$ controls the quality of the approximation. In particular, given $\lambda$, the Nyström points are sampled independently from the dataset with probability $p_i \propto \hat{l}_i(\lambda)$. We need a few more definitions. Let $K_x = K(x, \cdot)$ for any $x \in X$, and let $\mathcal{H}$ be the reproducing kernel Hilbert space [32] of functions given by $\mathcal{H} = \overline{\mathrm{span}}\{K_x \mid x \in X\}$, closed with respect to the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ defined by $\langle K_x, K_{x'} \rangle_{\mathcal{H}} = K(x, x')$ for all $x, x' \in X$. Define $C: \mathcal{H} \to \mathcal{H}$ to be the linear operator given by $\langle f, Cg \rangle_{\mathcal{H}} = \int_X f(x)g(x)\, d\rho_X(x)$ for all $f, g \in \mathcal{H}$. Finally, define the following quantities:

  $\mathcal{N}_\infty(\lambda) = \sup_{x \in X} \|(C + \lambda I)^{-1/2} K_x\|_{\mathcal{H}}^2$, $\quad \mathcal{N}(\lambda) = \mathrm{Tr}(C(C + \lambda I)^{-1})$.

The latter quantity is known as the degrees of freedom, or effective dimension, and can be seen as a measure of the size of $\mathcal{H}$.
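The finite-sample analogues of these quantities are easy to compute. In the sketch below (our own illustrative setup, with a Gaussian kernel on synthetic data), the empirical leverage scores are the diagonal of $K_{nn}(K_{nn}+\lambda n I)^{-1}$, and their sum, the trace, plays the role of the effective dimension $\mathcal{N}(\lambda)$, growing as $\lambda$ shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 300, 1e-3
X = rng.uniform(-1, 1, size=(n, 1))
Knn = np.exp(-((X - X.T) ** 2) / (2 * 0.3 ** 2))

# Ridge leverage scores l_i(lambda) = (K_nn (K_nn + lambda n I)^{-1})_ii; each lies in [0, 1].
lev = np.diag(Knn @ np.linalg.inv(Knn + lam * n * np.eye(n)))

def eff_dim(lam):
    # Empirical effective dimension: the trace (= sum of leverage scores) at level lambda.
    return np.trace(Knn @ np.linalg.inv(Knn + lam * n * np.eye(n)))

n_eff_small_lam = eff_dim(1e-6)   # weak regularization: larger effective dimension
n_eff_large_lam = eff_dim(1e-1)   # strong regularization: much smaller effective dimension
```

Sampling Nyström centers proportionally to `lev` is the (exact) leverage-score sampling whose approximate version the theorems above allow, and the effective dimension is what bounds the number of centers $M$ needed.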
The quantity $\mathcal{N}_\infty(\lambda)$ can be seen to provide a uniform bound on the leverage scores. In particular, note that $\mathcal{N}(\lambda) \le \mathcal{N}_\infty(\lambda) \le \frac{\kappa^2}{\lambda}$ [13]. We can now provide a refined version of Thm. 2.

Theorem 4. Under the same conditions of Thm. 1, the exponent $\nu$ in Thm. 1 satisfies $\nu \ge 1/2$ when
1. either Nyström uniform sampling is used with $M \ge 70\,[1 + \mathcal{N}_\infty(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$,
2. or Nyström with $q$-approximate leverage scores [13] is used, with

  $\lambda \ge \frac{19\kappa^2}{n}\log\frac{n}{2\delta}$, $\quad n \ge 405\kappa^2\log\frac{12\kappa^2}{\delta}$, $\quad M \ge 215\,[2 + q^2\mathcal{N}(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$.

We then recall the standard, albeit technical, assumptions leading to fast rates [17, 18]. The capacity condition requires the existence of $\gamma \in (0, 1]$ and $Q \ge 0$ such that $\mathcal{N}(\lambda) \le Q^2\lambda^{-\gamma}$. Note that this condition is always satisfied with $Q = \kappa$ and $\gamma = 1$. The source condition requires the existence of $r \in [1/2, 1]$ and $g \in \mathcal{H}$ such that $f_{\mathcal{H}} = C^{r-1/2}g$. Intuitively, the capacity condition measures the size of $\mathcal{H}$: if $\gamma$ is small, then $\mathcal{H}$ is small and rates are faster. The source condition measures the regularity of $f_{\mathcal{H}}$: if $r$ is big, then $f_{\mathcal{H}}$ is regular and rates are faster. The case $r = 1/2$ and $\gamma = D/(2s)$ (for a kernel with smoothness $s$ and input space $\mathbb{R}^D$) recovers the classic Sobolev condition. For further discussion of the interpretation of the conditions above, see [17, 18, 11, 13]. We can then state our main result on fast rates.

Theorem 5. Let $\delta \in (0, 1]$. Assume there exists $\kappa \ge 1$ such that $K(x, x) \le \kappa^2$ for any $x \in X$, and $y \in [-\frac{a}{2}, \frac{a}{2}]$ almost surely, with $a > 0$. There exists an $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ the following holds. When

  $\lambda = n^{-\frac{1}{2r+\gamma}}$, $\quad t \ge \log(n) + 5 + 2\log(a + 3\kappa^2)$,

1. and either Nyström uniform sampling is used with $M \ge 70\,[1 + \mathcal{N}_\infty(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$,
2. or Nyström with $q$-approximate leverage scores [13] is used with $M \ge 220\,[2 + q^2\mathcal{N}(\lambda)]\log\frac{8\kappa^2}{\lambda\delta}$,

then with probability $1 - \delta$,

  $R(\hat{f}_{\lambda,M,t}) \le c_0 \log^2\frac{24}{\delta}\, n^{-\frac{2r}{2r+\gamma}}$,

where $\hat{f}_{\lambda,M,t}$ is the FALKON estimator (Sect. 3, Alg. 1; see Sect. A, Alg. 2 in the appendix for the complete version). In particular, $n_0, c_0$ do not depend on $\lambda, M, n, t$, and $c_0$ does not depend on $\delta$.
[Figure 1: FALKON is compared to stochastic gradient, gradient descent and conjugate gradient applied to Problem (8), and to the NYTRO variants described in [23]. The graph shows the test error (MSE) on the HIGGS dataset ($1.1 \times 10^7$ examples) with respect to the number of iterations (epochs for the stochastic algorithms). Curves: Nyström GD, Nyström SGD, Nyström CG, NYTRO GD, NYTRO SGD, NYTRO CG, FALKON.]

The above result shows that FALKON achieves the same fast rates as KRR, under the same conditions [17]. For $r = 1/2$, $\gamma = 1$, the rate in Thm. 3 is recovered. If $\gamma < 1$, $r > 1/2$, FALKON achieves a rate close to $O(1/n)$. When the Nyström points are selected by uniform sampling, a bigger $M$ could be needed for fast rates (albeit always less than $n$). However, when approximate leverage scores are used, an $M$ smaller than $n^{\gamma/2} \ll \sqrt{n}$ is always enough for optimal generalization. This shows that FALKON with approximate leverage scores is the first algorithm to achieve fast rates with a computational complexity that is $O(n\mathcal{N}(\lambda)) = O(n^{1+\frac{\gamma}{2r+\gamma}}) \le O(n^{1+\frac{\gamma}{2}})$ in time.

5 Experiments

We present FALKON's performance on a range of large scale datasets. As shown in Tables 2 and 3, FALKON achieves state of the art accuracy and typically outperforms previous approaches on all the considered large scale datasets, including IMAGENET. This is remarkable considering that FALKON required only a fraction of the competitors' computational resources. Indeed, we used a single machine equipped with two Intel Xeon E5-2630 v3, one NVIDIA Tesla K40c and 128 GB of RAM, and a basic MATLAB FALKON implementation, while the results for competing algorithms were typically obtained on clusters of GPU workstations (accuracies, times and architectures are cited from the corresponding papers). A minimal MATLAB implementation of FALKON is presented in Appendix G.
The code necessary to reproduce the following experiments, together with a FALKON version that is able to use the GPU, is available on GitHub at https://github.com/LCSL/FALKON_paper. The error is measured with MSE, RMSE or relative error for regression problems, and with classification error (c-err) or AUC for classification problems, to be consistent with the literature. For datasets which do not have a fixed test set, we set apart 20% of the data for testing. For all datasets except YELP and IMAGENET, we normalize the features by their z-score. From now on, we denote by $n$ the cardinality of the dataset and by $d$ its dimensionality. A comparison of FALKON with respect to other methods to compute the Nyström estimator, in terms of the MSE test error on the HIGGS dataset, is given in Figure 1.

MillionSongs [36] (Table 2, $n = 4.6 \times 10^5$, $d = 90$, regression). We used a Gaussian kernel with $\sigma = 6$, $\lambda = 10^{-6}$ and $10^4$ Nyström centers. Moreover, with $5 \times 10^4$ centers, FALKON achieves a 79.20 MSE and $4.49 \times 10^{-3}$ relative error in 630 s.

TIMIT (Table 2, $n = 1.2 \times 10^6$, $d = 440$, multiclass classification). We used the same preprocessed dataset as [6] and a Gaussian kernel with $\sigma = 15$, $\lambda = 10^{-9}$ and $10^5$ Nyström centers.

YELP (Table 2, $n = 1.5 \times 10^6$, $d = 6.52 \times 10^7$, regression). We used the same dataset as [24]. We extracted the 3-grams from the plain text with the same pipeline as [24], then mapped them into a sparse binary vector which records whether the 3-gram is present or not in the example. We used a linear kernel with $5 \times 10^4$ Nyström centers. With $10^5$ centers, we get an RMSE of 0.828 in 50 minutes.

Table 2: Architectures: ‡ cluster of 128 EC2 r3.2xlarge machines; † cluster of 8 EC2 r3.8xlarge machines; ≀ single machine with two Intel Xeon E5-2620, one NVIDIA GTX Titan X GPU, 128 GB RAM; ⋆ cluster with IBM POWER8 12-core processor, 512 GB RAM; ∗ unknown platform.

  FALKON | MillionSongs: MSE 80.10, rel. err 4.51e-3, 55 s | YELP: RMSE 0.833, 20 m | TIMIT: c-err 32.3%, 1.5 h
  Prec. KRR [4] | MillionSongs: rel. err 4.58e-3, 289 s†
  Hierarchical [33] | MillionSongs: rel. err 4.56e-3, 293 s⋆
  D&C [29] | MillionSongs: MSE 80.35, 737 s∗
  Rand. Feat. [29] | MillionSongs: MSE 80.93, 772 s∗
  Nyström [29] | MillionSongs: MSE 80.38, 876 s∗
  ADMM R. F. [4] | MillionSongs: rel. err 5.01e-3, 958 s†
  BCD R. F. [24] | YELP: RMSE 0.949, 42 m‡ | TIMIT: c-err 34.0%, 1.7 h‡
  BCD Nyström [24] | YELP: RMSE 0.861, 60 m‡ | TIMIT: c-err 33.7%, 1.7 h‡
  EigenPro [6] | TIMIT: c-err 32.6%, 3.9 h≀
  KRR [33] [24] | MillionSongs: rel. err 4.55e-3 | YELP: RMSE 0.854, 500 m‡ | TIMIT: c-err 33.5%, 8.3 h‡
  Deep NN [34] | TIMIT: c-err 32.4%
  Sparse Kernels [34] | TIMIT: c-err 30.9%
  Ensemble [35] | TIMIT: c-err 33.5%

Table 3: Architectures: † cluster with IBM POWER8 12-core CPU, 512 GB RAM; ≀ single machine with two Intel Xeon E5-2620, one NVIDIA GTX Titan X GPU, 128 GB RAM; ‡ single machine [37].

  FALKON | SUSY: c-err 19.6%, AUC 0.877, 4 m | HIGGS: AUC 0.833, 3 h | IMAGENET: c-err 20.7%, 4 h
  EigenPro [6] | SUSY: c-err 19.8%, 6 m≀
  Hierarchical [33] | SUSY: c-err 20.1%, 40 m†
  Boosted Decision Tree [38] | SUSY: AUC 0.863 | HIGGS: AUC 0.810
  Neural Network [38] | SUSY: AUC 0.875 | HIGGS: AUC 0.816
  Deep Neural Network [38] | SUSY: AUC 0.879, 4680 m‡ | HIGGS: AUC 0.885, 78 h‡
  Inception-V4 [39] | IMAGENET: c-err 20.0%

SUSY (Table 3, $n = 5 \times 10^6$, $d = 18$, binary classification). We used a Gaussian kernel with $\sigma = 4$, $\lambda = 10^{-6}$ and $10^4$ Nyström centers.

HIGGS (Table 3, $n = 1.1 \times 10^7$, $d = 28$, binary classification). Each feature was normalized by subtracting its mean and dividing by its variance. We used a Gaussian kernel with a diagonal-matrix width learned by cross validation on a small validation set, $\lambda = 10^{-8}$ and $10^5$ Nyström centers. With a single $\sigma = 5$ we reach an AUC of 0.825.

IMAGENET (Table 3, $n = 1.3 \times 10^6$, $d = 1536$, multiclass classification). We report the top-1 c-err over the validation set of ILSVRC 2012 with a single crop. The features are obtained from the convolutional layers of a pre-trained Inception-V4 [39]. We used a Gaussian kernel with $\sigma = 19$, $\lambda = 10^{-9}$ and $5 \times 10^4$ Nyström centers. Note that with a linear kernel we achieve c-err = 22.2%.

Acknowledgments. The authors would like to thank Mikhail Belkin, Benjamin Recht and Siyuan Ma, Eric Fosler-Lussier, Shivaram Venkataraman, Stephen L.
Tu, for providing their features of the TIMIT and YELP datasets, and NVIDIA Corporation for the donation of the Tesla K40c GPU used for this research. This work is funded by the Air Force project FA9550-17-1-0390 (European Office of Aerospace Research and Development) and by the FIRB project RBFR12M3AC (Italian Ministry of Education, University and Research).

References

[1] A. Caponnetto and Yuan Yao. Adaptive rates for regularization operators in learning theory. Analysis and Applications, 08, 2010.
[2] L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20(7):1873–1897, 2008.
[3] Gregory E Fasshauer and Michael J McCourt. Stable evaluation of gaussian radial basis function interpolants. SIAM Journal on Scientific Computing, 34(2):A737–A762, 2012.
[4] Haim Avron, Kenneth L Clarkson, and David P Woodruff. Faster kernel ridge regression using sketching and preconditioning. arXiv preprint arXiv:1611.03220, 2016.
[5] Alon Gonen, Francesco Orabona, and Shai Shalev-Shwartz. Solving ridge regression using sketched preconditioned svrg. arXiv preprint arXiv:1602.02350, 2016.
[6] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. arXiv preprint arXiv:1703.10622, 2017.
[7] Christopher Williams and Matthias Seeger. Using the Nyström Method to Speed Up Kernel Machines. In NIPS, pages 682–688. MIT Press, 2000.
[8] Alex J. Smola and Bernhard Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In ICML, pages 911–918. Morgan Kaufmann, 2000.
[9] Ali Rahimi and Benjamin Recht. Random Features for Large-Scale Kernel Machines. In NIPS, pages 1177–1184. Curran Associates, Inc., 2007.
[10] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems, pages 1313–1320, 2009.
[11] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, volume 30 of JMLR Proceedings, pages 185–209. JMLR.org, 2013.
[12] Ahmed Alaoui and Michael W Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In Advances in Neural Information Processing Systems 28, pages 775–783, 2015.
[13] Alessandro Rudi, Raffaello Camoriano, and Lorenzo Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1648–1656, 2015.
[14] Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. arXiv preprint arXiv:1602.04474, 2016.
[15] Francis Bach. On the equivalence between kernel quadrature rules and random feature expansions. Journal of Machine Learning Research, 18(21):1–38, 2017.
[16] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2002.
[17] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[18] Ingo Steinwart, Don R Hush, Clint Scovel, et al. Optimal rates for regularized least squares regression. In COLT, 2009.
[19] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. Journal of Complexity, 23(1):52–72, 2007.
[20] Aymeric Dieuleveut and Francis Bach. Non-parametric stochastic approximation with large step sizes. arXiv preprint arXiv:1408.0361, 2014.
[21] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
[22] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[23] Raffaello Camoriano, Tomás Angles, Alessandro Rudi, and Lorenzo Rosasco. Nytro: When subsampling meets early stopping. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1403–1411, 2016.
[24] Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, and Benjamin Recht. Large scale kernel learning using block coordinate descent. arXiv preprint arXiv:1602.05310, 2016.
[25] Yousef Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[26] Kurt Cutajar, Michael Osborne, John Cunningham, and Maurizio Filippone. Preconditioning kernel matrices. In International Conference on Machine Learning, pages 2529–2538, 2016.
[27] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for svm. Mathematical Programming, 127(1):3–30, 2011.
[28] Yun Yang, Mert Pilanci, and Martin J Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. arXiv preprint arXiv:1501.06195, 2015.
[29] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Divide and Conquer Kernel Ridge Regression. In COLT, volume 30 of JMLR Proceedings, pages 592–617. JMLR.org, 2013.
[30] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. JMLR, 13:3475–3506, 2012.
[31] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform Sampling for Matrix Approximation. In ITCS, pages 181–190. ACM, 2015.
[32] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer New York, 2008.
[33] Jie Chen, Haim Avron, and Vikas Sindhwani. Hierarchically compositional kernels for scalable nonparametric learning. CoRR, abs/1608.00860, 2016.
[34] Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurelien Bellet, Linxi Fan, Michael Collins, Daniel J.
Hsu, Brian Kingsbury, Michael Picheny, and Fei Sha. Kernel approximation methods for speech recognition. CoRR, abs/1701.03577, 2017.
[35] Po-Sen Huang, Haim Avron, Tara N. Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on timit. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 205–209, 2014.
[36] Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In ISMIR, 2011.
[37] Alexandre Alves. Stacking machine learning classifiers to identify higgs bosons at the lhc. CoRR, abs/1612.07725, 2016.
[38] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5, 2014.
[39] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. pages 4278–4284, 2017.
[40] Michael Reed and Barry Simon. Methods of Modern Mathematical Physics: Vol. 1: Functional Analysis. Academic Press, 1980.
[41] Ernesto D Vito, Lorenzo Rosasco, Andrea Caponnetto, Umberto D Giovannini, and Francesca Odone. Learning from examples as an inverse problem. In Journal of Machine Learning Research, pages 883–904, 2005.
[42] Alessandro Rudi, Guillermo D Canas, and Lorenzo Rosasco. On the Sample Complexity of Subspace Learning. In NIPS, pages 2067–2075, 2013.
[43] Stéphane Boucheron, Gábor Lugosi, and Olivier Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, 2004.
Universal Style Transfer via Feature Transforms

Yijun Li, UC Merced, yli62@ucmerced.edu; Chen Fang, Adobe Research, cfang@adobe.com; Jimei Yang, Adobe Research, jimyang@adobe.com; Zhaowen Wang, Adobe Research, zhawang@adobe.com; Xin Lu, Adobe Research, xinl@adobe.com; Ming-Hsuan Yang, UC Merced, NVIDIA Research, mhyang@ucmerced.edu

Abstract

Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying inference efficiency, are mainly limited by an inability to generalize to unseen styles or by compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded in an image reconstruction network. The whitening and coloring transforms reflect a direct matching of the feature covariance of the content image to that of a given style image, which shares a similar spirit with the optimization of the Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images, with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures via simple feature coloring.

1 Introduction

Style transfer is an important image editing task which enables the creation of new artistic works. Given a pair of examples, i.e., the content and style image, it aims to synthesize an image that preserves some notion of the content but carries characteristics of the style. The key challenge is how to extract effective representations of the style and then match it in the content image. The seminal work by Gatys et al.
[8, 9] shows that the correlation between features, i.e., the Gram matrix or covariance matrix (shown to be as effective as the Gram matrix in [20]), extracted by a trained deep neural network has a remarkable ability to capture visual styles. Since then, significant efforts have been made to synthesize stylized images by minimizing Gram/covariance matrix based loss functions, through either iterative optimization [9] or trained feed-forward networks [27, 16, 20, 2, 6]. Despite the recent rapid progress, these existing works often trade off between generalization, quality and efficiency: optimization-based methods can handle arbitrary styles with pleasing visual quality but at the expense of high computational costs, while feed-forward approaches can be executed efficiently but are limited to a fixed number of styles or compromised visual quality. To date, the problem of universal style transfer remains a daunting task, as it is challenging to develop neural networks that achieve generalization, quality and efficiency at the same time. The main issue is how to properly and effectively apply the extracted style characteristics (feature correlations) to content images in a style-agnostic manner. In this work, we propose a simple yet effective method for universal style transfer, which enjoys style-agnostic generalization ability with marginally compromised visual quality and execution efficiency. The transfer task is formulated as an image reconstruction process, with the content features

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
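The Gram matrix style representation referred to above is just the matrix of channel-wise inner products of a feature map. A minimal numpy sketch, with random arrays standing in for real network features (the shapes and the mean-normalized formulation are our own illustrative choices):

```python
import numpy as np

def gram(F):
    # F: feature map (C, H, W) -> (C, C) Gram matrix of channel correlations.
    C = F.shape[0]
    V = F.reshape(C, -1)
    return V @ V.T / V.shape[1]

def gram_style_loss(F_out, F_style):
    # Squared Frobenius distance between Gram matrices, in the spirit of the style cost of [9].
    diff = gram(F_out) - gram(F_style)
    return float(np.sum(diff ** 2))

rng = np.random.default_rng(0)
F_a = rng.standard_normal((8, 16, 16))
F_b = rng.standard_normal((8, 16, 16))
loss_same = gram_style_loss(F_a, F_a)   # identical features: zero style loss
loss_diff = gram_style_loss(F_a, F_b)   # different features: positive style loss
```

Note that the Gram matrix discards spatial layout entirely; only second-order channel statistics remain, which is why it works as a style, rather than content, descriptor.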
Figure 1: Universal style transfer pipeline. (a) We first pre-train five decoder networks DecoderX (X=1,2,...,5) through image reconstruction to invert different levels of VGG features. (b) With both VGG and DecoderX fixed, and given the content image C and style image S, our method performs the style transfer through whitening and coloring transforms. (c) We extend single-level to multi-level stylization in order to match the statistics of the style at all levels. The result obtained by matching higher-level statistics of the style is treated as the new content to continue to match lower-level information of the style. being transformed at intermediate layers with regard to the statistics of the style features, in the midst of feed-forward passes. In each intermediate layer, our main goal is to transform the extracted content features such that they exhibit the same statistical characteristics as the style features of the same layer, and we found that the classic signal whitening and coloring transforms (WCTs) on those features are able to achieve this goal in an almost effortless manner. In this work, we first employ the VGG-19 network [26] as the feature extractor (encoder), and train a symmetric decoder to invert the VGG-19 features to the original image, which is essentially the image reconstruction task (Figure 1(a)). Once trained, both the encoder and the decoder are fixed through all the experiments. To perform style transfer, we apply WCT to one layer of content features such that its covariance matrix matches that of the style features, as shown in Figure 1(b).
The transformed features are then fed forward into the downstream decoder layers to obtain the stylized image. In addition to this single-level stylization, we further develop a multi-level stylization pipeline, as depicted in Figure 1(c), where we apply WCT sequentially to multiple feature layers. The multi-level pipeline generates stylized images whose visual quality is comparable to or even better than that of prior methods, at much lower computational cost. We also introduce a control parameter that defines the degree of style transfer so that users can choose the balance between stylization and content preservation. The entire procedure of our algorithm only requires learning the image reconstruction decoder, with no style images involved. So when given a new style, we simply need to extract its feature covariance matrices and apply them to the content features via WCT. Note that this learning-free scheme is fundamentally different from existing feed-forward networks that require learning with pre-defined styles and fine-tuning for new styles. Therefore, our approach is able to achieve style transfer universally. The main contributions of this work are summarized as follows: • We propose to use feature transforms, i.e., whitening and coloring, to directly match content feature statistics to those of a style image in the deep feature space. • We couple the feature transforms with a pre-trained general encoder-decoder network, such that the transfer process can be implemented by simple feed-forward operations. • We demonstrate the effectiveness of our method for universal style transfer with high-quality visual results, and also show its application to universal texture synthesis. 2 Related Work Existing style transfer methods are mostly example-based [13, 25, 24, 7, 21]. The image analogy method [13] aims to determine the relationship between a pair of images and then apply it to stylize other images.
As it is based on finding dense correspondence, analogy-based approaches [25, 24, 7, 21] often require that a pair of images depict the same type of scene. Therefore, these methods do not scale well to arbitrary style images. Recently, Gatys et al. [8, 9] proposed an algorithm for arbitrary stylization based on matching the correlations (Gram matrix) between deep features extracted by a trained classification network within an iterative optimization framework. Numerous methods have since been developed to address different aspects, including speed [27, 19, 16], quality [28, 18, 32, 31], user control [10], diversity [29, 20], semantic understanding [7, 1] and photorealism [23]. It is worth mentioning that one of the major drawbacks of [8, 9] is the inefficiency of the optimization process. The improvement in efficiency in [27, 19, 16] is realized by formulating the stylization as learning a feed-forward image transformation network. However, these methods are limited by the requirement of training one network per style due to the lack of generalization in the network design. Most recently, a number of methods have been proposed to empower a single network to transfer multiple styles, including a model conditioned on binary selection units [20], a network that learns a set of new filters for every new style [2], and a conditional normalization layer that learns normalization parameters for each style [6]. To achieve arbitrary style transfer, Chen et al. [3] first proposed to swap the content feature with the closest style feature locally. Meanwhile, inspired by [6], two subsequent works [30, 11] instead learn a general mapping from the style image to style parameters. The most closely related work [15] directly adjusts the content feature to match the mean and variance of the style feature. However, the generalization ability of the learned models to unseen styles is still limited.
Different from the existing methods, our approach performs style transfer efficiently in a feed-forward manner while achieving generalization and visual quality on arbitrary styles. Our approach is closely related to [15], where the content feature at a particular (higher) layer is adaptively instance-normalized by the mean and variance of the style feature. This step can be viewed as a sub-optimal approximation of the WCT operation, thereby leading to less effective results on both training and unseen styles. Moreover, our encoder-decoder network is trained solely for image reconstruction, while [15] requires learning such a module specifically for the stylization task. We evaluate the proposed algorithm against existing approaches extensively on both style transfer and texture synthesis tasks and present in-depth analysis. 3 Proposed Algorithm We formulate style transfer as an image reconstruction process coupled with feature transformations, i.e., whitening and coloring. The reconstruction part is responsible for inverting features back to the RGB space, and the feature transformation matches the statistics of a content image to those of a style image. 3.1 Reconstruction decoder We construct an auto-encoder network for general image reconstruction. We employ VGG-19 [26] as the encoder, fix it, and train a decoder network simply to invert VGG features to the original image, as shown in Figure 1(a). The decoder is designed to be symmetrical to the VGG-19 network (up to the Relu_X_1 layer), with nearest-neighbor upsampling layers used for enlarging feature maps. To evaluate features extracted at different layers, we select feature maps at five layers of VGG-19, i.e., Relu_X_1 (X=1,2,3,4,5), and train five decoders accordingly.
The pixel reconstruction loss [5] and feature loss [16, 5] are employed for reconstructing an input image: L = ∥Io − Ii∥₂² + λ∥Φ(Io) − Φ(Ii)∥₂², (1) where Ii and Io are the input image and the reconstruction output, and Φ is the VGG encoder that extracts the Relu_X_1 features. In addition, λ is the weight balancing the two losses. After training, the decoder is fixed (i.e., will not be fine-tuned) and used as a feature inverter. 3.2 Whitening and coloring transforms Given a pair of content image Ic and style image Is, we first extract their vectorized VGG feature maps fc ∈ ℜ^(C×HcWc) and fs ∈ ℜ^(C×HsWs) at a certain layer (e.g., Relu_4_1), where Hc, Wc (Hs, Ws) are the height and width of the content (style) feature, and C is the number of channels. Figure 2: Inverting whitened features. We invert the whitened VGG Relu_4_1 feature as an example. Left: original images, Right: inverted results (pixel intensities are rescaled for better visualization). The whitened features still maintain global content structures. Figure 3: Comparisons between different feature transform strategies. Results are obtained by our multi-level stylization framework in order to match all levels of information of the style. The decoder will reconstruct the original image Ic if fc is directly fed into it. We next propose to use a whitening and coloring transform to adjust fc with respect to the statistics of fs. The goal of WCT is to directly transform fc so that its covariance matrix matches that of fs. It consists of two steps: a whitening and a coloring transform. Whitening transform. Before whitening, we first center fc by subtracting its mean vector mc.
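As an illustration, the objective in (1) can be written as a small function. The feature extractor phi below is a hypothetical stand-in for the fixed VGG encoder Φ; any callable mapping an image to a feature array works for the sketch.

```python
import numpy as np

def reconstruction_loss(I_i, I_o, phi, lam=1.0):
    """Pixel loss plus feature loss, as in Eq. (1).

    I_i, I_o : input image and decoder output (equal-shape arrays).
    phi      : feature extractor standing in for the fixed VGG encoder
               that returns the Relu_X_1 features.
    lam      : weight lambda balancing the two terms (set to 1 in the paper).
    """
    pixel_loss = np.sum((I_o - I_i) ** 2)        # ||Io - Ii||_2^2
    feature_loss = np.sum((phi(I_o) - phi(I_i)) ** 2)  # ||Phi(Io) - Phi(Ii)||_2^2
    return pixel_loss + lam * feature_loss
```

A perfect reconstruction drives both terms to zero regardless of λ, which is what makes the trained decoder usable as a feature inverter.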
Then we transform fc linearly as in (2) so that we obtain f̂c whose feature maps are uncorrelated (f̂c f̂c⊤ = I): f̂c = Ec Dc^(−1/2) Ec⊤ fc, (2) where Dc is a diagonal matrix with the eigenvalues of the covariance matrix fc fc⊤ ∈ ℜ^(C×C), and Ec is the corresponding orthogonal matrix of eigenvectors, satisfying fc fc⊤ = Ec Dc Ec⊤. To validate what is encoded in the whitened feature f̂c, we invert it to the RGB space with our previous decoder trained for reconstruction only. Figure 2 shows two visualization examples, which indicate that the whitened features still maintain the global structures of the image contents while largely removing information related to styles. We note especially that, for the Starry_night example on the right, the detailed stroke patterns across the original image are gone. In other words, the whitening step helps peel off the style from an input image while preserving the global content structure. The outcome of this operation is ready to be transformed with the target style. Coloring transform. We first center fs by subtracting its mean vector ms, and then carry out the coloring transform [14], which is essentially the inverse of the whitening step: we transform f̂c linearly as in (3) to obtain f̂cs, which has the desired correlations between its feature maps (f̂cs f̂cs⊤ = fs fs⊤): f̂cs = Es Ds^(1/2) Es⊤ f̂c, (3) where Ds is a diagonal matrix with the eigenvalues of the covariance matrix fs fs⊤ ∈ ℜ^(C×C), and Es is the corresponding orthogonal matrix of eigenvectors. Finally, we re-center f̂cs with the mean vector ms of the style, i.e., f̂cs ← f̂cs + ms. To demonstrate the effectiveness of WCT, we compare it with a commonly used feature adjustment technique, histogram matching (HM), in Figure 3. The channel-wise histogram matching [12] method determines a mapping function such that the mapped fc has the same cumulative histogram as fs.
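A minimal NumPy sketch of the two transforms on vectorized features follows; the style-weight blending of (4) is folded in as an optional α, and the eps floor on tiny eigenvalues is our own regularization for rank-deficient features, not part of the paper's formulation.

```python
import numpy as np

def wct(fc, fs, alpha=1.0, eps=1e-8):
    """Whitening and coloring transform, Eqs. (2)-(3), with Eq. (4) blending.

    fc : content features, shape (C, Hc*Wc)
    fs : style features,   shape (C, Hs*Ws)
    alpha : style weight of Eq. (4); eps regularizes near-zero eigenvalues.
    """
    mc = fc.mean(axis=1, keepdims=True)
    ms = fs.mean(axis=1, keepdims=True)
    fc_c, fs_c = fc - mc, fs - ms                      # center both feature sets

    # Whitening: f_hat = Ec Dc^{-1/2} Ec^T fc_c, so f_hat f_hat^T = I.
    Dc, Ec = np.linalg.eigh(fc_c @ fc_c.T)
    f_hat = Ec @ np.diag(1.0 / np.sqrt(np.maximum(Dc, eps))) @ Ec.T @ fc_c

    # Coloring: fcs = Es Ds^{1/2} Es^T f_hat, then re-center with ms,
    # so the centered output has the style's second-moment matrix.
    Ds, Es = np.linalg.eigh(fs_c @ fs_c.T)
    fcs = Es @ np.diag(np.sqrt(np.maximum(Ds, eps))) @ Es.T @ f_hat + ms

    # Blend with the original content feature (Eq. 4).
    return alpha * fcs + (1.0 - alpha) * fc
```

With α = 1, the centered output satisfies f̂cs f̂cs⊤ = fs fs⊤ up to numerical error, which matches the stated goal of the coloring step.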
In Figure 3, it is clear that the HM method transfers the global color of the style image well but fails to capture salient visual patterns, e.g., patterns are broken into pieces and local structures are misrepresented. In contrast, our WCT captures patterns that better reflect the style image. This can be explained by the fact that the HM method does not consider the correlations between feature channels, which are exactly what the covariance matrix is designed to capture. After the WCT, we may blend f̂cs with the content feature fc as in (4) before feeding it to the decoder, in order to provide user control over the strength of stylization effects: f̂cs = α f̂cs + (1 − α) fc, (4) where α serves as the style weight for users to control the transfer effect. Figure 4: Single-level stylization using different VGG features. The content image is from Figure 2. Figure 5: (a)-(c) Intermediate results of our coarse-to-fine multi-level stylization framework in Figure 1(c). The style and content images are from Figure 4. I1 is the final output of our multi-level pipeline. (d) Reversed fine-to-coarse multi-level pipeline. 3.3 Multi-level coarse-to-fine stylization Based on the single-level stylization framework shown in Figure 1(b), we use different layers of VGG features Relu_X_1 (X=1,2,...,5) and show the corresponding stylized results in Figure 4. They clearly show that higher-layer features capture more complicated local structures, while lower-layer features carry more low-level information (e.g., colors). This can be explained by the increasing size of the receptive field and feature complexity in the network hierarchy. Therefore, it is advantageous to use features at all five layers to fully capture the characteristics of a style from low to high levels. Figure 1(c) shows our multi-level stylization pipeline.
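The coarse-to-fine loop of Figure 1(c) can be sketched as follows. The encoders and decoders here are placeholder callables standing in for the fixed VGG encoders (up to Relu_X_1) and the five trained decoders, and wct_fn is any single-level transform such as the WCT of Section 3.2; nothing here is tied to a particular network implementation.

```python
def multi_level_stylize(content, style, encoders, decoders, wct_fn, alpha=0.6):
    """Coarse-to-fine stylization in the spirit of Figure 1(c).

    encoders[X] / decoders[X] : callables standing in for the fixed VGG
        encoder up to Relu_X_1 and its trained inverse decoder
        (hypothetical stand-ins; the paper uses five pre-trained pairs).
    wct_fn : single-level feature transform, e.g. the WCT of Section 3.2.

    Levels run from coarse (X=5) down to fine (X=1); each stylized output
    becomes the new content image for the next level.
    """
    image = content
    for level in sorted(encoders, reverse=True):   # 5, 4, 3, 2, 1
        fc = encoders[level](image)                # features of current content
        fs = encoders[level](style)                # features of the style
        image = decoders[level](wct_fn(fc, fs, alpha))
    return image
```

Because each level's output is re-encoded at the next level, higher-level style statistics are matched first and lower levels only refine colors and fine detail, which is the ordering the paper argues for.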
We start by applying the WCT on Relu_5_1 features to obtain a coarse stylized result and regard it as the new content image for further adjusting features in lower layers. Examples of intermediate results are shown in Figure 5. The intermediate results I5, I4, I1 show obvious differences, indicating that the higher-layer features first capture salient patterns of the style and the lower-layer features further refine details. If we reverse the feature processing order (i.e., fine-to-coarse layers) by starting with Relu_1_1, low-level information cannot be preserved after manipulating higher-level features, as shown in Figure 5(d). 4 Experimental Results 4.1 Decoder training For the multi-level stylization approach, we separately train five reconstruction decoders for features at the VGG-19 Relu_X_1 (X=1,2,...,5) layers. Each decoder is trained on the Microsoft COCO dataset [22], and the weight λ balancing the two losses in (1) is set to 1. Figure 6: Results from different style transfer methods. The content images are from Figures 2-3. We evaluate various styles including paintings, abstract styles, and styles with obvious texton elements. We adjust the style weight of each method to obtain the best stylized effect. For our results, we set the style weight α = 0.6. Table 1: Differences between our approach and other methods.
                Chen et al. [3]   Huang et al. [15]   TNet [27]   DeepArt [9]   Ours
Arbitrary             √                  √                ×            √          √
Efficient             √                  √                √            ×          √
Learning-free         ×                  ×                ×            √          √
4.2 Style transfer To demonstrate the effectiveness of the proposed algorithm, we list the differences with existing methods in Table 1 and present stylized results in Figure 6. We adjust the style weight of other methods to obtain the best stylized effect. The optimization-based work of [9] handles arbitrary styles but is likely to encounter unexpected local minima issues (e.g., 5th and 6th rows of Figure 6(e)).
Although the method [27] greatly improves the stylization speed, it trades off quality and generality for efficiency, generating repetitive patterns that overlay the image contents (Figure 6(d)). Table 2: Quantitative comparisons between different stylization methods in terms of the covariance matrix difference (Ls), user preference and run-time, tested on images of size 256 × 256 and a 12GB TITAN X.
                 Chen et al. [3]   Huang et al. [15]   TNet [27]   Gatys et al. [9]   Ours
log(Ls)               7.4               7.0               6.8             6.7          6.3
Preference/%         15.7              24.9              12.7            16.4         30.3
Time/sec              2.1              0.20              0.18            21.2         0.83
Figure 7: Controlling the stylization scale and weight. Closest to our work in terms of generalization are the recent methods [3, 15], but the quality of their stylized results is less appealing. The work of [3] replaces the content feature with the most similar style feature based on patch similarity and hence has limited capability, i.e., the content is strictly preserved while the style is not well reflected, with only low-level information (e.g., colors) transferred, as shown in Figure 6(b). In [15], the content feature is simply adjusted to have the same mean and variance as the style feature, which is not effective in capturing high-level representations of the style. Even when trained with a set of styles, it does not generalize well to unseen styles. Results in Figure 6(c) indicate that the method in [15] is not effective at capturing and synthesizing salient style patterns, especially for complicated styles with rich local structures and non-smooth regions. Figure 6(f) shows the stylized results of our approach. Without learning any style, our method is able to capture visually salient patterns in style images (e.g., the brick wall on the 6th row).
Moreover, key components in the content images (e.g., bridge, eye, mouth) are also well stylized in our results, while other methods only transfer patterns to relatively smooth regions (e.g., sky, face). The models and code are available at https://github.com/Yijunmaverick/UniversalStyleTransfer. In addition, we quantitatively evaluate different methods by computing the covariance matrix difference (Ls) on all five levels of VGG features between the stylized results and the given style image. We randomly select 10 content images from [22] and 40 style images from [17], compute the average difference over all styles, and show the results in Table 2 (1st row). The quantitative results show that we generate stylized results with lower Ls, i.e., closer to the statistics of the style. User study. Evaluating artistic style transfer has been an open question in the community. Since qualitative assessment is highly subjective, we conduct a user study to evaluate the 5 methods shown in Figure 6. We use 5 content images and 30 style images, and generate 150 results per method, one for each content/style pair. We randomly select 15 style images for each subject to evaluate. We display the stylized images of the 5 compared methods side-by-side on a webpage in random order. Each subject is asked to vote for his/her single favorite result for each style. We collect feedback from 80 subjects, for a total of 1,200 votes, and show the percentage of votes each method received in Table 2 (2nd row). The study shows that our method receives the most votes for better stylized results. It would be an interesting direction to develop evaluation metrics based on human visual perception for general image synthesis problems. Efficiency. In Table 2 (3rd row), we also compare our approach with other methods in terms of efficiency. The method by Gatys et al. [9] is slow due to its optimization loop and usually requires at least 500 iterations to generate good results.
The methods [27] and [15] are efficient as their scheme is based on one feed-forward pass with a trained network. The approach [3] is feed-forward based but relatively slower because the feature swapping operation needs to be carried out for thousands of patches. Our approach is also efficient but a little slower than [27, 15] because of an eigenvalue decomposition step in the WCT. Note, however, that the computational cost of this step does not increase with the image size because the dimension of the covariance matrix only depends on the number of filters (or channels), which is at most 512 (Relu_5_1). Currently, the decomposition step is implemented on the CPU. Our future work includes a more efficient GPU implementation of the proposed algorithm. Figure 8: Spatial control in transferring, which enables users to edit the content with different styles. Figure 9: Texture synthesis. In each panel, Left: original textures, Right: our synthesized results. Texture images are mostly from the Describable Textures Dataset (DTD) [4]. User Controls. Given a content/style pair, our approach is not only as simple as a one-click transfer, but also flexible enough to accommodate different user requirements by providing several controls over the stylization, including scale, weight and spatial control. The style input at different scales will lead to different extracted statistics due to the fixed receptive field of the network. Therefore, scale control is easily achieved by adjusting the style image size. In the middle of Figure 7, we show two examples where the brick texture can be transferred at either a small or a large scale. Weight control refers to controlling the balance between stylization and content preservation. As shown on the right of Figure 7, our method enjoys this flexibility in simple feed-forward passes by simply adjusting the style weight α in (4).
However, in [9] and [27], obtaining visual results with different weight settings requires a new round of time-consuming optimization or model training. Moreover, our blending directly works in the deep feature space before inversion/reconstruction, which is fundamentally different from [9, 27], where the blending is formulated as a weighted sum of the content and style losses that may not always lead to a good balance point. Spatial control is also highly desired when users want to edit an image with different styles transferred to different parts of the image. Figure 8 shows an example of spatially controlling the stylization. A set of masks M (Figure 8(b)) is additionally required as input to indicate the spatial correspondence between content regions and styles. By replacing the content feature fc in (3) with M ⊙ fc, where ⊙ is a simple mask-out operation, we are able to stylize the specified region only. 4.3 Texture synthesis By setting the content image as a random noise image (e.g., Gaussian noise), our stylization framework can be easily applied to texture synthesis. An alternative is to directly initialize f̂c in (3) as white noise. Both approaches achieve similar results. Figure 9 shows a few examples of the synthesized textures. We empirically find that it is better to run the multi-level pipeline a few times (e.g., three) to get more visually pleasing results. Figure 10: Interpolation between two texture examples. Left: original textures, Middle: our interpolation results, Right: interpolated results of [9]. β controls the weight of interpolation. Our method is also able to synthesize an interpolation of two textures. Given two texture examples s1 and s2, we first perform the WCT on the input noise and obtain the transformed features f̂cs1 and f̂cs2, respectively. Then we blend these two features, f̂cs = β f̂cs1 + (1 − β) f̂cs2, and feed the
combined feature into the decoder to generate mixed effects. Note that our interpolation directly works in the deep feature space. By contrast, the method in [9] generates the interpolation by matching the weighted sum of the Gram matrices of two textures at the loss end. Figure 10 shows that the result by [9] is simply an overlay of two textures, while our method generates new textural effects, e.g., bricks in a stripe pattern. One important aspect of texture synthesis is diversity. By sampling different noise images, our method can generate diverse synthesized results for each texture. While [27] can generate different results driven by the input noise, the learned networks are very likely to be trapped in local optima. In other words, the noise is marginalized out and thus fails to drive the network to generate large visual variations. In contrast, our approach preserves the effect of each input noise better, because the network is never trained on textures and hence does not absorb the variations in the input noise. We compare the diverse outputs of our model with those of [27] in Figure 11. Figure 11: Comparisons of diverse synthesized results between TNet [27] and our model. Note that a common diagonal layout is shared across different results of [27], which leads to unsatisfying visual results. The comparison shows that our method achieves diversity in a more natural and flexible manner. 5 Concluding Remarks In this work, we propose a universal style transfer algorithm that does not require learning for each individual style. By unfolding the image generation process via training an auto-encoder for image reconstruction, we integrate the whitening and coloring transforms into the feed-forward passes to match the statistical distributions and correlations between the intermediate features of content and style. We also present a multi-level stylization pipeline, which takes all levels of information of a style into account, for improved results.
In addition, the proposed approach is shown to be equally effective for texture synthesis. Experimental results demonstrate that the proposed algorithm achieves favorable performance against the state-of-the-art methods in generalizing to arbitrary styles. Acknowledgments This work is supported in part by the NSF CAREER Grant #1149783, gifts from Adobe and NVIDIA. References [1] A. J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016. [2] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. In CVPR, 2017. [3] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016. [4] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014. [5] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016. [6] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In ICLR, 2017. [7] O. Frigo, N. Sabater, J. Delon, and P. Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In CVPR, 2016. [8] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In NIPS, 2015. [9] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. [10] L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. In CVPR, 2017. [11] G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. In BMVC, 2017. [12] R. C. Gonzalez and R. E. Woods. Digital image processing (3rd edition). Prentice Hall, 2008. [13] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin.
Image analogies. In SIGGRAPH, 2001. [14] M. Hossain. Whitening and coloring transforms for multivariate gaussian random variables. Project Rhea, 2016. [15] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. [16] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. [17] S. Karayev, M. Trentacoste, H. Han, A. Agarwala, T. Darrell, A. Hertzmann, and H. Winnemoeller. Recognizing image style. In BMVC, 2014. [18] C. Li and M. Wand. Combining markov random fields and convolutional neural networks for image synthesis. In CVPR, 2016. [19] C. Li and M. Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In ECCV, 2016. [20] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In CVPR, 2017. [21] J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang. Visual attribute transfer through deep image analogy. arXiv preprint arXiv:1705.01088, 2017. [22] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. [23] F. Luan, S. Paris, E. Shechtman, and K. Bala. Deep photo style transfer. In CVPR, 2017. [24] Y. Shih, S. Paris, C. Barnes, W. T. Freeman, and F. Durand. Style transfer for headshot portraits. In SIGGRAPH, 2014. [25] Y. Shih, S. Paris, F. Durand, and W. T. Freeman. Data-driven hallucination of different times of day from a single outdoor photo. In SIGGRAPH, 2013. [26] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [27] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. [28] D. Ulyanov, A. Vedaldi, and V. Lempitsky.
Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016. [29] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR, 2017. [30] H. Wang, X. Liang, H. Zhang, D.-Y. Yeung, and E. P. Xing. Zm-net: Real-time zero-shot image manipulation network. arXiv preprint arXiv:1703.07255, 2017. [31] X. Wang, G. Oxholm, D. Zhang, and Y.-F. Wang. Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer. In CVPR, 2017. [32] P. Wilmot, E. Risser, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893, 2017. | 2017 | 200 |
6,676 | Ensemble Sampling Xiuyuan Lu Stanford University lxy@stanford.edu Benjamin Van Roy Stanford University bvr@stanford.edu Abstract Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applications for which Thompson sampling is viable. We establish a theoretical basis that supports the approach and present computational results that offer further insight. 1 Introduction Thompson sampling [8] has emerged as an effective heuristic for trading off between exploration and exploitation in a broad range of online decision problems. To select an action, the algorithm samples a model of the system from the prevailing posterior distribution and then determines which action maximizes expected immediate reward according to the sampled model. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. With complex models such as neural networks, exact computation of posterior distributions becomes intractable. One can resort to the Laplace approximation, as discussed, for example, in [2, 5], but this approach is suitable only when posterior distributions are unimodal, and computations become an obstacle with complex models like neural networks because compute time requirements grow quadratically with the number of parameters. An alternative is to leverage Markov chain Monte Carlo methods, but those are computationally onerous, especially when the model is complex.
A practical approximation to Thompson sampling that can address complex models and problems requiring frequent decisions should facilitate fast incremental updating. That is, the time required per period to learn from new data and generate a new sample model should be small and should not grow with time. Such a fast incremental method, building on the Laplace approximation concept, is presented in [5]. In this paper, we study a fast incremental method that applies more broadly, without relying on unimodality. As a sanity check, we offer theoretical assurances that apply to the special case of linear bandits. We also present computational results involving simple bandit problems as well as complex neural network models that demonstrate the efficacy of the approach. Our approach is inspired by [6], which applies a similar concept to the more complex context of deep reinforcement learning, but without any theoretical analysis. The essential idea is to maintain and incrementally update an ensemble of statistically plausible models, and to sample uniformly from this set in each time period as an approximation to sampling from the posterior distribution. Each model is initially sampled from the prior and then updated in a manner that incorporates data and random perturbations that diversify the models. The intention is for the ensemble to approximate the posterior distribution and for the variance among models to diminish as the posterior concentrates. We refine this methodology and bound the incremental regret relative to exact Thompson sampling for a broad class of online decision problems. Our bound indicates that it suffices to maintain a number of models that grows only logarithmically with the horizon of the decision problem, ensuring computational tractability of the approach. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
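To make the essential idea concrete, the following is a minimal sketch of ensemble sampling for a linear-Gaussian model. The perturbed least-squares update used here is one common way to diversify the members; it is illustrative, under our own design choices, and not a transcription of the paper's algorithm.

```python
import numpy as np

class EnsembleSampler:
    """Minimal sketch of ensemble sampling for a linear-Gaussian bandit.

    M models stand in for the exact posterior. Each member m starts from
    its own prior draw and is updated on rewards perturbed by fresh
    N(0, sigma_w^2) noise, so the spread of the ensemble tracks posterior
    uncertainty. Acting means drawing one member uniformly at random and
    playing its greedy action.
    """

    def __init__(self, mu0, Sigma0, sigma_w, M, rng):
        self.rng, self.sigma_w, self.M = rng, sigma_w, M
        self.Prec = np.linalg.inv(Sigma0)              # shared precision matrix
        prior_draws = rng.multivariate_normal(mu0, Sigma0, size=M)
        self.b = prior_draws @ self.Prec.T             # per-member statistics

    def theta(self, m):
        # Member m's current perturbed regularized least-squares estimate.
        return np.linalg.solve(self.Prec, self.b[m])

    def act(self, actions):
        m = self.rng.integers(self.M)                  # uniform model draw
        th = self.theta(m)
        return max(actions, key=lambda a: float(np.dot(a, th)))

    def update(self, a, r):
        a = np.asarray(a, dtype=float)
        self.Prec += np.outer(a, a) / self.sigma_w ** 2
        # Perturb the observed reward independently for each member so the
        # models stay diverse while all of them absorb the new data point.
        w = self.rng.normal(0.0, self.sigma_w, size=self.M)
        self.b += np.outer(r + w, a) / self.sigma_w ** 2
```

Early on the members disagree, which drives exploration; as observations accumulate, the shared precision grows and all members concentrate around the data-consistent parameter, mirroring how the posterior itself concentrates.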
2 Problem formulation We consider a broad class of online decision problems to which Thompson sampling could, in principle, be applied, though that would typically be hindered by intractable computational requirements. We will define random variables with respect to a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ endowed with a filtration $(\mathcal{F}_t : t = 0, \ldots, T)$. As a convention, random variables we index by $t$ will be $\mathcal{F}_t$-measurable, and we use $\mathbb{P}_t$ and $\mathbb{E}_t$ to denote probabilities and expectations conditioned on $\mathcal{F}_t$. The decision-maker chooses actions $A_0, \ldots, A_{T-1} \in \mathcal{A}$ and observes outcomes $Y_1, \ldots, Y_T \in \mathcal{Y}$. There is a random variable $\theta$, which represents a model index. Conditioned on $(\theta, A_{t-1})$, $Y_t$ is independent of $\mathcal{F}_{t-1}$. Further, $\mathbb{P}(Y_t = y \mid \theta, A_{t-1})$ does not depend on $t$. This can be thought of as a Bayesian formulation, where randomness in $\theta$ reflects prior uncertainty about which model corresponds to the true nature of the system. We assume that $\mathcal{A}$ is finite and that each action $A_t$ is chosen by a randomized policy $\pi = (\pi_0, \ldots, \pi_{T-1})$. Each $\pi_t$ is $\mathcal{F}_t$-measurable, and each realization is a probability mass function over actions $\mathcal{A}$; $A_t$ is sampled independently from $\pi_t$. The agent associates a reward $R(y)$ with each outcome $y \in \mathcal{Y}$, where the reward function $R$ is fixed and known. Let $R_t = R(Y_t)$ denote the reward realized at time $t$, and let $R_\theta(a) = \mathbb{E}[R(Y_t) \mid \theta, A_{t-1} = a]$. Uncertainty about $\theta$ induces uncertainty about the true optimal action, which we denote by $A^* \in \arg\max_{a \in \mathcal{A}} R_\theta(a)$. Let $R^* = R_\theta(A^*)$. The $T$-period conditional regret when the actions $(A_0, \ldots, A_{T-1})$ are chosen according to $\pi$ is defined by $$\mathrm{Regret}(T, \pi, \theta) = \mathbb{E}\left[\sum_{t=1}^{T} (R^* - R_t) \,\middle|\, \theta\right], \quad (1)$$ where the expectation is taken over the randomness in actions $A_t$ and outcomes $Y_t$, conditioned on $\theta$. We illustrate with a couple of examples that fit our formulation. Example 1. (linear bandit) Let $\theta$ be drawn from $\Re^N$ and distributed according to a $N(\mu_0, \Sigma_0)$ prior. There is a set of $K$ actions $\mathcal{A} \subseteq \Re^N$. At each time t = 0, 1, . . .
, T −1, an action At ∈A is selected, after which a reward Rt+1 = Yt+1 = θ⊤At + Wt+1 is observed, where Wt+1 ∼N(0, σ2 w). Example 2. (neural network) Let gθ : ℜN 7→ℜK denote a mapping induced by a neural network with weights θ. Suppose there are K actions A ⊆ℜN, which serve as inputs to the neural network, and the goal is to select inputs that yield desirable outputs. At each time t = 0, 1, . . . , T −1, an action At ∈A is selected, after which Yt+1 = gθ(At) + Wt+1 is observed, where Wt+1 ∼N(0, σ2 wI). A reward Rt+1 = R(Yt+1) is associated with each observation. Let θ be distributed according to a N(µ0, Σ0) prior. The idea here is that data pairs (At, Yt+1) can be used to fit a neural network model, while actions are selected to trade off between generating data pairs that reduce uncertainty in neural network weights and those that offer desirable immediate outcomes. 3 Algorithms Thompson sampling offers a heuristic policy for selecting actions. In each time period, the algorithm samples an action from the posterior distribution pt(a) = Pt(A∗= a) of the optimal action. In other words, Thompson sampling uses a policy πt = pt. It is easy to see that this is equivalent to sampling a model index ˆθt from the posterior distribution of models and then selecting an action At = arg max a∈A Rˆθt(a) that optimizes the sampled model. Thompson sampling is computationally tractable for some problem classes, like the linear bandit problem, where the posterior distribution is Gaussian with parameters (µt, Σt) that can be updated incrementally and efficiently via Kalman filtering as outcomes are observed. However, when dealing with complex models, like neural networks, computing the posterior distribution becomes intractable. Ensemble sampling serves as an approximation to Thompson sampling for such contexts. 2 Algorithm 1 EnsembleSampling 1: Sample: ˜θ0,1, . . . , ˜θ0,M ∼p0 2: for t = 0, . . . , T −1 do 3: Sample: m ∼unif({1, . . . 
, M})
4: Act: $A_t = \arg\max_{a \in \mathcal{A}} R_{\tilde{\theta}_{t,m}}(a)$
5: Observe: $Y_{t+1}$
6: Update: $\tilde{\theta}_{t+1,1}, \ldots, \tilde{\theta}_{t+1,M}$
7: end for
The posterior can be interpreted as a distribution of “statistically plausible” models, by which we mean models that are sufficiently consistent with prior beliefs and the history of observations. With this interpretation in mind, Thompson sampling can be thought of as randomly drawing from the range of statistically plausible models. Ensemble sampling aims to maintain, incrementally update, and sample from a finite set of such models. In the spirit of particle filtering, this set of models approximates the posterior distribution. The workings of ensemble sampling are in some ways more intricate than conventional uses of particle filtering, however, because interactions between the ensemble of models and selected actions can skew the distribution. While elements of ensemble sampling require customization, a general template is presented as Algorithm 1. The algorithm begins by sampling M models from the prior distribution. Then, over each time period, a model is sampled uniformly from the ensemble, an action is selected to maximize expected reward under the sampled model, the resulting outcome is observed, and each of the M models is updated. To produce an explicit algorithm, we must specify a model class, prior distribution, and algorithms for sampling from the prior and updating models. For a concrete illustration, let us consider the linear bandit (Example 1). Though ensemble sampling is unwarranted in this case, since Thompson sampling is efficient, the linear bandit serves as a useful context for understanding the approach. Standard algorithms can be used to sample models from the $N(\mu_0, \Sigma_0)$ prior. One possible procedure for updating models maintains a covariance matrix, updating it according to $$\Sigma_{t+1} = \left(\Sigma_t^{-1} + A_t A_t^\top / \sigma_w^2\right)^{-1},$$ and generates model parameters incrementally according to $$\tilde{\theta}_{t+1,m} = \Sigma_{t+1}\left(\Sigma_t^{-1} \tilde{\theta}_{t,m} + A_t (R_{t+1} + \tilde{W}_{t+1,m}) / \sigma_w^2\right),$$ for $m = 1, \ldots, M$, where $(\tilde{W}_{t,m} : t = 1, \ldots, T,\ m = 1, \ldots, M)$ are independent $N(0, \sigma_w^2)$ random samples drawn by the updating algorithm. It is easy to show that the resulting parameter vectors satisfy $$\tilde{\theta}_{t,m} = \arg\min_\nu \left( \frac{1}{\sigma_w^2} \sum_{\tau=0}^{t-1} \left(R_{\tau+1} + \tilde{W}_{\tau+1,m} - A_\tau^\top \nu\right)^2 + (\nu - \tilde{\theta}_{0,m})^\top \Sigma_0^{-1} (\nu - \tilde{\theta}_{0,m}) \right),$$ which admits an intuitive interpretation: each $\tilde{\theta}_{t,m}$ is a model fit to a randomly perturbed prior and randomly perturbed observations. As we establish in the appendix, for any deterministic sequence $A_0, \ldots, A_{t-1}$, conditioned on $\mathcal{F}_t$, the models $\tilde{\theta}_{t,1}, \ldots, \tilde{\theta}_{t,M}$ are independent and identically distributed according to the posterior distribution of $\theta$. In this sense, the ensemble approximates the posterior. It is not a new observation that, for deterministic action sequences, such a scheme generates exact samples of the posterior distribution (see, e.g., [7]). However, for stochastic action sequences selected by Algorithm 1, it is not immediately clear how well the ensemble approximates the posterior distribution. We will provide a bound in the next section which establishes that, as the number of models M increases, the regret of ensemble sampling quickly approaches that of Thompson sampling. The ensemble sampling algorithm we have described for the linear bandit problem motivates an analogous approach for the neural network model of Example 2. This approach would again begin with M models, with connection weights $\tilde{\theta}_{0,1}, \ldots, \tilde{\theta}_{0,M}$ sampled from a $N(\mu_0, \Sigma_0)$ prior. It could be natural here to let $\mu_0 = 0$ and $\Sigma_0 = \sigma_0^2 I$ for some variance $\sigma_0^2$ chosen so that the range of probable models spans plausible outcomes. To incrementally update parameters, at each time $t$, each model $m$ applies some number of stochastic gradient descent iterations to reduce a loss function of the form $$L_t(\nu) = \frac{1}{\sigma_w^2} \sum_{\tau=0}^{t-1} \left(Y_{\tau+1} + \tilde{W}_{\tau+1,m} - g_\nu(A_\tau)\right)^2 + (\nu - \tilde{\theta}_{0,m})^\top \Sigma_0^{-1} (\nu - \tilde{\theta}_{0,m}).$$ We present computational results in Section 5.2 that demonstrate viability of this approach.
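The linear-bandit instantiation described above can be sketched in a few lines. The following is an illustrative reimplementation (the function name, variable names, and the toy problem instance are ours, not the authors' code), combining Algorithm 1 with the covariance recursion and perturbed incremental update just described:

```python
import numpy as np

def ensemble_sampling_linear(actions, theta_true, mu0, Sigma0, sigma_w, T, M, rng):
    """Run ensemble sampling on a linear bandit; return total reward collected.

    actions: (K, N) array of action vectors; theta_true: (N,) true parameter.
    """
    # Sample M models from the N(mu0, Sigma0) prior.
    models = rng.multivariate_normal(mu0, Sigma0, size=M)  # shape (M, N)
    Sigma = Sigma0.copy()
    total_reward = 0.0
    for t in range(T):
        m = rng.integers(M)                           # uniform model draw
        a = actions[np.argmax(actions @ models[m])]   # greedy w.r.t. sampled model
        r = theta_true @ a + sigma_w * rng.standard_normal()  # observe reward
        total_reward += r
        # Covariance update: Sigma_{t+1} = (Sigma_t^{-1} + a a^T / sigma_w^2)^{-1}
        Sigma_inv = np.linalg.inv(Sigma)
        Sigma_new = np.linalg.inv(Sigma_inv + np.outer(a, a) / sigma_w**2)
        # Perturbed incremental update of every model (one perturbation per model).
        W = sigma_w * rng.standard_normal(M)
        models = (Sigma_new @ (Sigma_inv @ models.T
                               + np.outer(a, r + W) / sigma_w**2)).T
        Sigma = Sigma_new
    return total_reward
```

The single shared covariance update reflects that, in the linear-Gaussian case, the posterior covariance does not depend on the observed rewards, so it need not be tracked per model.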
4 Analysis of ensemble sampling for the linear bandit Past analyses of Thompson sampling have relied on independence between models sampled over time periods. Ensemble sampling introduces dependencies that may adversely impact performance. It is not immediately clear whether the degree of degradation should be tolerable and how that depends on the number of models in the ensemble. In this section, we establish a bound for the linear bandit context. Our result serves as a sanity check for ensemble sampling and offers insight that should extend to broader model classes, though we leave formal analysis beyond the linear bandit for future work. Consider the linear bandit problem described in Example 1. Let $\pi^{\mathrm{TS}}$ and $\pi^{\mathrm{ES}}$ denote the Thompson and ensemble sampling policies for this problem, with the latter based on an ensemble of $M$ models, generated and updated according to the procedure described in Section 3. Let $R_* = \min_{a \in \mathcal{A}} \theta^\top a$ denote the worst mean reward and let $\Delta(\theta) = R^* - R_*$ denote the gap between maximal and minimal mean rewards. The following result bounds the difference in regret as a function of the gap, ensemble size, and number of actions. Theorem 3. For all $\epsilon > 0$, if $$M \geq \frac{4|\mathcal{A}|}{\epsilon^2} \log \frac{4|\mathcal{A}| T}{\epsilon^3},$$ then $$\mathrm{Regret}(T, \pi^{\mathrm{ES}}, \theta) \leq \mathrm{Regret}(T, \pi^{\mathrm{TS}}, \theta) + \epsilon \Delta(\theta) T.$$ This inequality bounds the regret realized by ensemble sampling by a sum of the regret realized by Thompson sampling and an error term $\epsilon \Delta(\theta) T$. Since we are talking about cumulative regret, the error term bounds the per-period degradation relative to Thompson sampling by $\epsilon \Delta(\theta)$. The value of $\epsilon$ can be made arbitrarily small by increasing $M$. Hence, with a sufficiently large ensemble, the per-period loss will be small. This supports the viability of ensemble sampling. An important implication of this result is that it suffices for the ensemble size to grow logarithmically in the horizon $T$. Since Thompson sampling requires independence between models sampled over time, in a sense it relies on $T$ models, one per time period.
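To get a feel for the sufficient condition in Theorem 3, one can evaluate the bound numerically. The sketch below is a direct transcription of the theorem's condition (the constants come straight from the statement); the function name is ours:

```python
import math

def min_ensemble_size(num_actions, T, eps):
    """Smallest integer M satisfying the sufficient condition of Theorem 3:
    M >= (4|A| / eps^2) * log(4|A| T / eps^3)."""
    return math.ceil((4 * num_actions / eps**2)
                     * math.log(4 * num_actions * T / eps**3))
```

Because T enters only inside the logarithm, scaling the horizon from thousands to millions of periods adds only an additive term to the required ensemble size, which is the logarithmic growth highlighted above.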
So to be useful, ensemble sampling should operate effectively with a much smaller number, and the logarithmic dependence is suitable. The bound also grows with |A| log |A|, which is manageable when there are a modest number of actions. We conjecture that a similar bound holds that depends instead on a multiple of N log N, where N is the linear dimension, which would offer a stronger guarantee when the number of actions becomes large or infinite, though we leave proof of this alternative bound for future work. The bound of Theorem 3 is on a notion of regret conditioned on the realization of θ. A Bayesian regret bound that removes dependence on this realization can be obtained by taking an expectation, integrating over θ: E Regret(T, πES, θ) ≤E Regret(T, πTS, θ) + ϵE [∆(θ)] T. We provide a complete proof of Theorem 3 in the appendix. Due to space constraints, we only offer a sketch here. Sketch of Proof. Let A denote an Ft-adapted action process (A0, . . . , AT −1). Our procedure for generating and updating models with ensemble sampling is designed so that, for any deterministic A, conditioned on the history of rewards (R1, . . . , Rt), models ˜θt,1, . . . , ˜θt,M that comprise the ensemble are independent and identically distributed according to the posterior distribution of θ. This can be verified via some algebra, as is done in the appendix. 4 Recall that pt(a) denotes the posterior probability Pt(A∗= a) = P (A∗= a|A0, R1, . . . , At−1, Rt). To explicitly indicate dependence on the action process, we will use a superscript: pt(a) = pA t (a). Let ˆpA t denote an approximation to pA t , given by ˆpA t (a) = 1 M PM m=1 I a = arg maxa′ ˜θ⊤ t,ma′ . Note that given an action process A, at time t Thompson sampling would sample the next action from pA t , while ensemble sampling would sample the next action from ˆpA t . If A is deterministic then, since ˜θt,1, . . . , ˜θt,M, conditioned on the history of rewards, are i.i.d. 
and distributed as θ, ˆpA t represents an empirical distribution of samples drawn from pA t . It follows from this and Sanov’s Theorem that, for any deterministic A, P dKL(ˆpA t ∥pA t ) ≥ϵ|θ ≤(M + 1)|A|e−Mϵ. A naive application of the union bound over all deterministic action sequences would establish that, for any A (deterministic or stochastic), P dKL(ˆpA t ∥pA t ) ≥ϵ|θ ≤P max a∈At dKL(ˆpa t ∥pa t ) ≥ϵ θ ≤|A|t(M + 1)|A|e−Mϵ However, our proof takes advantage of the fact that, for any deterministic A, pA t and ˆpA t do not depend on the ordering of past actions and observations. To make it precise, we encode the sequence of actions in terms of action counts c0, . . . , cT −1. In particular, let ct,a = |{τ ≤t : Aτ = a}| be the number of times that action a has been selected by time t. We apply a coupling argument that introduces dependencies between the noise terms Wt and action counts, without changing the distributions of any observable variables. We let (Zn,a : n ∈N, a ∈A) be i.i.d. N(0, 1) random variables, and let Wt+1 = Zct,At,At. Similarly, we let ( ˜Zn,a,m : n ∈N, a ∈A, m = 1, . . . , M) be i.i.d N(0, 1) random variables, and let ˜Wt+1,m = ˜Zct,At,At,m. To make explicit the dependence on A, we will use a superscript and write cA t to denote the action counts at time t when the action process is given by A. It is not hard to verify, as is done in the appendix, that if a, a ∈AT are two deterministic action sequences such that ca t−1 = ca t−1, then pa t = pa t and ˆpa t = ˆpa t . This allows us to apply the union bound over action counts, instead of action sequences, and we get that for any A (deterministic or stochastic), P dKL(ˆpA t ∥pA t ) ≥ϵ|θ ≤P max ca t−1:a∈At dKL(ˆpa t ∥pa t ) ≥ϵ θ ! ≤(t + 1)|A|(M + 1)|A|e−Mϵ. Now, we specialize the action process A to the action sequence At = AES t selected by ensemble sampling, and we will omit the superscripts in pA t and ˆpA t . 
We can decompose the per-period regret of ensemble sampling as E R∗−θ⊤At|θ = E (R∗−θ⊤At)I (dKL(ˆpt∥pt) ≥ϵ) |θ + E (R∗−θ⊤At)I (dKL(ˆpt∥pt) < ϵ) |θ . (2) The first term can be bounded by E (R∗−θ⊤At)I (dKL(ˆpt∥pt) ≥ϵ) |θ ≤ ∆(θ)P (dKL(ˆpt∥pt) ≥ϵ|θ) ≤ ∆(θ)(t + 1)|A|(M + 1)|A|e−Mϵ. To bound the second term, we will use another coupling argument that couples the actions that would be selected by ensemble sampling with those that would be selected by Thompson sampling. Let ATS t denote the action that Thompson sampling would select at time t. On {dKL(ˆpt∥pt) ≤ϵ}, we have ∥ˆpt −pt∥TV ≤ √ 2ϵ by Pinsker’s inequality. Conditioning on ˆpt and pt, if dKL(ˆpt∥pt) ≤ϵ, we can construct random variables ˜AES t and ˜ATS t such that they have the same distributions as AES t and ATS t , respectively. Using maximal coupling, we can make ˜AES t = ˜ATS t with probability at least 1 −1 2∥ˆpt −pt∥TV ≥1 − p ϵ/2. Then, the second term of the sum in (2) can be decomposed into E (R∗−θ⊤At)I (dKL(ˆpt∥pt) ≤ϵ) |θ = E h E h (R∗−θ⊤˜AES t )I dKL(ˆpt∥pt) ≤ϵ, ˜AES t = ˜ATS t ˆpt, pt, θ i θ i +E h E h (R∗−θ⊤˜AES t )I dKL(ˆpt∥pt) ≤ϵ, ˜AES t ̸= ˜ATS t ˆpt, pt, θ i θ i , 5 which, after some algebraic manipulations, leads to E (R∗−θ⊤At)I (dKL(ˆpt∥pt) < ϵ) |θ ≤E R∗−θ⊤ATS t |θ + p ϵ/2 ∆(θ). The result then follows from some straightforward algebra. 5 Computational results In this section, we present computational results that demonstrate viability of ensemble sampling. We will start with a simple case of independent Gaussian bandits in Section 5.1 and move on to more complex models of neural networks in Section 5.2. Section 5.1 serves as a sanity check for the empirical performance of ensemble sampling, as Thompson sampling can be efficiently applied in this case and we are able to compare the performances of these two algorithms. In addition, we provide simulation results that demonstrate how the ensemble size grows with the number of actions. 
Section 5.2 goes beyond our theoretical analysis in Section 4 and gives computational evidence of the efficacy of ensemble sampling when applied to more complex models such as neural networks. We show that ensemble sampling, even with a few models, achieves efficient learning and outperforms ϵ-greedy and dropout on the example neural networks. 5.1 Gaussian bandits with independent arms We consider a Gaussian bandit with K actions, where action k has mean reward θk. Each θk is drawn i.i.d. from N(0, 1). During each time step t = 0, . . . , T −1, we select an action k ∈{1, . . . , K} and observe reward Rt+1 = θk + Wt+1, where Wt+1 ∼N(0, 1). Note that this is a special case of Example 1. Since the posterior distribution of θ can be explicitly computed in this case, we use it as a sanity check for the performance of ensemble sampling. Figure 1a shows the per-period regret of Thompson sampling and ensemble sampling applied to a Gaussian bandit with 50 independent arms. We see that as the number of models increases, ensemble sampling better approximates Thompson sampling. The results were averaged over 2,000 realizations. Figure 1b shows the minimum number of models required so that the expected per-period regret of ensemble sampling is no more than ϵ plus the expected per-period regret of Thompson sampling at some large time horizon T across different numbers of actions. All results are averaged over 10,000 realizations. We chose T = 2000 and ϵ = 0.03. The plot shows that the number of models needed seems to grow sublinearly with the number of actions, which is stronger than the bound proved in Section 4. 
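The exact-posterior baseline used in this sanity check is easy to reproduce in miniature. The sketch below is our own illustrative code, on a much smaller instance than the paper's 50-arm, 2,000-realization experiment: it runs Thompson sampling with exact conjugate Gaussian updates, which is the reference point that ensemble sampling approximates by replacing the posterior draw with a uniform draw from a finite ensemble.

```python
import numpy as np

def thompson_gaussian(K, T, rng):
    """Thompson sampling on a Gaussian bandit with K independent arms.

    Arm means theta_k ~ N(0, 1); reward noise N(0, 1). Returns per-period regret.
    """
    theta = rng.standard_normal(K)          # true arm means
    mu = np.zeros(K)                        # posterior means
    prec = np.ones(K)                       # posterior precisions (prior N(0, 1))
    regret = 0.0
    for _ in range(T):
        # Draw one sample per arm from the exact posterior N(mu_k, 1/prec_k).
        sample = mu + rng.standard_normal(K) / np.sqrt(prec)
        k = int(np.argmax(sample))
        r = theta[k] + rng.standard_normal()  # observe noisy reward
        # Conjugate update (observation noise has unit precision).
        mu[k] = (prec[k] * mu[k] + r) / (prec[k] + 1.0)
        prec[k] += 1.0
        regret += theta.max() - theta[k]
    return regret / T
```

Swapping the posterior draw for `models[rng.integers(M)]`, with models updated via the perturbed rule of Section 3, turns this into ensemble sampling for the same problem.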
Figure 1: (a) Ensemble sampling compared with Thompson sampling on a Gaussian bandit with 50 independent arms (per-period regret for Thompson sampling and ensembles of 5, 10, 20, and 30 models). (b) Minimum number of models required so that the expected per-period regret of ensemble sampling is no more than $\epsilon = 0.03$ plus the expected per-period regret of Thompson sampling at $T = 2000$, for Gaussian bandits across different numbers of arms. 5.2 Neural networks In this section, we follow Example 2 and show computational results of ensemble sampling applied to neural networks. Figure 2 shows ϵ-greedy and ensemble sampling applied to a bandit problem where the mapping from actions to expected rewards is represented by a single neuron. More specifically, we have a set of $K$ actions $\mathcal{A} \subseteq \Re^N$. The mean reward of selecting an action $a \in \mathcal{A}$ is given by $g_\theta(a) = \max(0, \theta^\top a)$, where the weights $\theta \in \Re^N$ are drawn from $N(0, \lambda I)$. During each time period, we select an action $A_t \in \mathcal{A}$ and observe reward $R_{t+1} = g_\theta(A_t) + Z_{t+1}$, where $Z_{t+1} \sim N(0, \sigma_z^2)$. We set the input dimension $N = 100$, the number of actions $K = 100$, the prior variance $\lambda = 10$, and the noise variance $\sigma_z^2 = 100$. Each dimension of each action was sampled uniformly from $[-1, 1]$, except for the last dimension, which was set to 1. In Figure 3, we consider a bandit problem where the mapping from actions to expected rewards is represented by a two-layer neural network with weights $\theta \equiv (W_1, W_2)$, where $W_1 \in \Re^{D \times N}$ and $W_2 \in \Re^D$. Each entry of the weight matrices is drawn independently from $N(0, \lambda)$. There is a set of $K$ actions $\mathcal{A} \subseteq \Re^N$. The mean reward of choosing an action $a \in \mathcal{A}$ is $g_\theta(a) = W_2^\top \max(0, W_1 a)$. During each time period, we select an action $A_t \in \mathcal{A}$ and observe reward $R_{t+1} = g_\theta(A_t) + Z_{t+1}$, where $Z_{t+1} \sim N(0, \sigma_z^2)$.
We used N = 100 for the input dimension, D = 50 for the dimension of the hidden layer, number of actions K = 100, prior variance λ = 1, and noise variance σ2 z = 100. Each dimension of each action was sampled uniformly from [−1, 1], except for the last dimension, which was set to 1. Ensemble sampling with M models starts by sampling ˜θm from the prior distribution independently for each model m. At each time step, we pick a model m uniformly at random and apply the greedy action with respect to that model. We update the ensemble incrementally. During each time period, we apply a few steps of stochastic gradient descent for each model m with respect to the loss function Lt(θ) = 1 σ2z t−1 X τ=0 (Rτ+1 + ˜Zτ+1,m −gθ(Aτ))2 + 1 λ∥θ −˜θm∥2 2, where perturbations ( ˜Zt,m : t = 1, . . . , T, m = 1, . . . , M) are drawn i.i.d. from N(0, σ2 z). Besides ensemble sampling, there are other heuristics for sampling from an approximate posterior distribution over neural networks, which may be used to develop approximate Thompson sampling. Gal and Ghahramani proposed an approach based on dropout [4] to approximately sample from a posterior over neural networks. In Figure 3, we include results from using dropout to approximate Thompson sampling on the two-layer neural network bandit. To facilitate gradient flow, we used leaky ReLUs of the form max(0.01x, x) internally in all agents, while the target neural nets still use regular ReLUs as described above. We took 3 stochastic gradient steps with a minibatch size of 64 for each model update. We used a learning rate of 1e-1 for ϵ-greedy and ensemble sampling, and a learning rate of 1e-2, 1e-2, 2e-2, and 5e-2 for dropout with dropping probabilities 0.25, 0.5, 0.75, and 0.9 respectively. All results were averaged over around 1,000 realizations. Figure 2 plots the per-period regret of ϵ-greedy and ensemble sampling on the single neuron bandit. 
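As a concrete illustration of the perturbed objective, the sketch below evaluates $L_t$ and its (sub)gradient for the single-neuron model $g_\theta(a) = \max(0, \theta^\top a)$. This is our own minimal transcription of the loss defined above: it omits minibatching, the leaky-ReLU substitution, and the experiment-specific learning rates, and the function name is illustrative.

```python
import numpy as np

def perturbed_loss_grad(theta, theta0, A, R, Z_pert, sigma_z, lam):
    """Loss and (sub)gradient of the perturbed regularized objective
    L(theta) = (1/sigma_z^2) * sum_t (R_t + Z_t - relu(theta . A_t))^2
               + (1/lam) * ||theta - theta0||^2,
    where theta0 is this ensemble member's perturbed prior sample."""
    pre = A @ theta                         # pre-activations, shape (t,)
    out = np.maximum(0.0, pre)              # relu outputs
    resid = R + Z_pert - out                # perturbed residuals
    loss = (resid @ resid) / sigma_z**2 \
         + (theta - theta0) @ (theta - theta0) / lam
    # relu subgradient: 1 where the pre-activation is positive.
    grad = (-2.0 / sigma_z**2) * (A.T @ (resid * (pre > 0))) \
         + (2.0 / lam) * (theta - theta0)
    return loss, grad
```

Each ensemble member keeps its own `theta0` and `Z_pert`, so repeated gradient steps pull the members toward distinct, randomly perturbed fits, which is what diversifies the ensemble.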
We see that ensemble sampling, even with 10 models, performs better than ϵ-greedy with the best tuned parameters. Increasing the size of the ensemble further improves the performance. An ensemble of size 50 achieves orders of magnitude lower regret than ϵ-greedy. Figures 3a and 3b show different versions of ϵ-greedy applied to the two-layer neural network model. We see that ϵ-greedy with an annealing schedule tends to perform better than a fixed ϵ. Figure 3c plots the per-period regret of the dropout approach with different dropping probabilities, which seems to perform worse than ϵ-greedy. Figure 3d plots the per-period regret of ensemble sampling on the neural net bandit. Again, we see that ensemble sampling, with a moderate number of models, outperforms the other approaches by a significant amount. 6 Conclusion Ensemble sampling offers a potentially efficient means to approximate Thompson sampling when using complex models such as neural networks. We have provided an analysis that offers theoretical assurances for the case of linear bandit models and computational results that demonstrate efficacy with complex neural network models. We are motivated largely by the need for effective exploration methods that can efficiently be applied in conjunction with complex models such as neural networks. Ensemble sampling offers one approach
Figure 2: (a) ϵ-greedy (fixed ϵ ∈ {0.05, 0.1, 0.2} and annealing schedules 50/(50+t), 150/(150+t), 300/(300+t)) and (b) ensemble sampling (5, 10, 30, and 50 models) applied to a single neuron bandit.
Figure 3: (a) Fixed ϵ-greedy, (b) annealing ϵ-greedy, (c) dropout (dropping probabilities 0.25, 0.5, 0.75, 0.9), and (d) ensemble sampling (5, 10, 30, and 50 models) applied to a two-layer neural network bandit.
to representing uncertainty in neural network models, and there are others that might also be brought to bear in developing approximate versions of Thompson sampling [1, 4]. The analysis of various other forms of approximate Thompson sampling remains open. Ensemble sampling loosely relates to ensemble learning methods [3], though an important difference in motivation lies in the fact that the latter learns multiple models for the purpose of generating a more accurate model through their combination, while the former learns multiple models to reflect uncertainty in the posterior distribution over models. That said, combining the two related approaches may be fruitful. In particular, there may be practical benefit to learning many forms of models (neural networks, tree-based models, etc.) and viewing the ensemble as representing uncertainty from which one can sample. Acknowledgments This work was generously supported by a research grant from Boeing and a Marketing Research Award from Adobe. References [1] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), pages 1613–1622. JMLR.org, 2015. [2] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q.
Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2249–2257. Curran Associates, Inc., 2011. [3] Thomas G Dietterich. Ensemble learning. The handbook of brain theory and neural networks, 2:110–125, 2002. [4] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1050–1059, New York, New York, USA, 20–22 Jun 2016. PMLR. [5] Carlos Gómez-Uribe. Online algorithms for parameter mean and variance estimation in dynamic regression. arXiv preprint arXiv:1605.05697v1, 2016. [6] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4026–4034. Curran Associates, Inc., 2016. [7] George Papandreou and Alan L Yuille. Gaussian sampling by local perturbations. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1858–1866. Curran Associates, Inc., 2010. [8] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933. 9 | 2017 | 201 |
6,677 | Practical Data-Dependent Metric Compression with Provable Guarantees Piotr Indyk∗ MIT Ilya Razenshteyn∗ MIT Tal Wagner∗ MIT Abstract We introduce a new distance-preserving compact representation of multidimensional point-sets. Given n points in a d-dimensional space where each coordinate is represented using B bits (i.e., dB bits per point), it produces a representation of size O(d log(dB/ϵ) + log n) bits per point from which one can approximate the distances up to a factor of 1 ± ϵ. Our algorithm almost matches the recent bound of [6] while being much simpler. We compare our algorithm to Product Quantization (PQ) [7], a state-of-the-art heuristic metric compression method. We evaluate both algorithms on several data sets: SIFT (used in [7]), MNIST [11], New York City taxi time series [4] and a synthetic one-dimensional data set embedded in a high-dimensional space. With appropriately tuned parameters, our algorithm produces representations that are comparable to or better than those produced by PQ, while having provable guarantees on its performance. 1 Introduction Compact distance-preserving representations of high-dimensional objects are very useful tools in data analysis and machine learning. They compress each data point in a data set using a small number of bits while preserving the distances between the points up to a controllable accuracy. This makes it possible to run data analysis algorithms, such as similarity search, machine learning classifiers, etc., on data sets of reduced size. The benefits of this approach include: (a) reduced running time, (b) reduced storage, and (c) reduced communication cost (between machines, between CPU and RAM, between CPU and GPU, etc.). These three factors make the computation more efficient overall, especially on modern architectures where the communication cost is often the dominant factor in the running time, so fitting the data in a single processing unit is highly beneficial.
Because of these benefits, various compact representations have been extensively studied over the last decade, for applications such as speeding up similarity search [3, 5, 10, 19, 22, 7, 15, 18], scalable learning algorithms [21, 12], streaming algorithms [13], and other tasks. For example, a recent paper [8] describes a similarity search software package based on one such method (Product Quantization (PQ)) that has been used to solve very large similarity search problems over billions of points on GPUs at Facebook. The methods for designing such representations can be classified into data-dependent and data-oblivious. The former analyze the whole data set in order to construct the point-set representation, while the latter apply a fixed procedure individually to each data point. A classic example of the data-oblivious approach is based on randomized dimensionality reduction [9], which states that any set of n points in the Euclidean space of arbitrary dimension D can be mapped into a space of dimension d = O(ϵ−2 log n), such that the distances between all pairs of points are preserved up to a factor of 1 ± ϵ. This allows representing each point using d(B + log D) bits, where B is the number of bits of precision in the coordinates of the original pointset.2 (∗Authors ordered alphabetically.) 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. More efficient representations are possible if the goal is to preserve only the distances in a certain range. In particular, O(ϵ−2 log n) bits are sufficient to distinguish between distances smaller than 1 and greater than 1 + ϵ, independently of the precision parameter [10] (see also [16] for kernel generalizations). Even more efficient methods are known if the coordinates are binary [3, 12, 18]. Data-dependent methods compute the bit representations of points “holistically”, typically by solving a global optimization problem.
Examples of this approach include Semantic Hashing [17], Spectral Hashing [22] or Product Quantization [7] (see also the survey [20]). Although successful, most of the results in this line of research are empirical in nature, and we are not aware of any worst-case accuracy vs. compression tradeoff bounds for those methods along the lines of the aforementioned data-oblivious approaches. A recent work [6] shows that it is possible to combine the two approaches and obtain algorithms that adapt to the data while providing worst-case accuracy/compression tradeoffs. In particular, the latter paper shows how to construct representations of d-dimensional pointsets that preserve all distances up to a factor of 1 ± ϵ while using only O((d + log n) log(1/ϵ) + log(Bn)) bits per point. Their algorithm uses hierarchical clustering in order to group close points together, and represents each point by a displacement vector from a nearby point that has already been stored. The displacement vector is then appropriately rounded to reduce the representation size. Although theoretically interesting, that algorithm is rather complex and (to the best of our knowledge) has not been implemented. Our results. The main contribution of this paper is QuadSketch (QS), a simple data-adaptive algorithm, which is both provable and practical. It represents each point using O(d log(dB/ϵ) + log n) bits, where (as before) we can set d = O(ϵ−2 log n) using the Johnson-Lindenstrauss lemma. Our bound significantly improves over the “vanilla” O(dB) bound (obtained by storing all d coordinates to full precision), and comes close to the bound of [6].
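The improvement over the vanilla bound is easy to quantify. The sketch below is our own back-of-the-envelope calculation (big-O constants treated as 1, logarithms base 2, and the parameter choices, such as B = 32, are illustrative rather than from the paper):

```python
import math

def vanilla_bits(d, B):
    """Bits per point when all d coordinates are stored at full B-bit precision."""
    return d * B

def quadsketch_bits(d, B, eps, n):
    """Bits per point under the stated QuadSketch bound: d*log2(dB/eps) + log2(n)."""
    return d * math.log2(d * B / eps) + math.log2(n)
```

For example, with d = 128 (as in SIFT descriptors), B = 32, ϵ = 0.1, and n = 10^6, the QuadSketch expression evaluates to roughly half the vanilla 4096 bits per point, even before applying the Johnson-Lindenstrauss reduction of d.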
At the same time, the algorithm is quite simple and intuitive: it computes a d-dimensional quadtree3 and appropriately prunes its edges and nodes.4 We evaluate QuadSketch experimentally on both real and synthetic data sets: a SIFT feature data set from [7], MNIST [11], time series data reflecting taxi ridership in New York City [4] and a synthetic data set (Diagonal) containing random points from a one-dimensional subspace (i.e., a line) embedded in a high-dimensional space. The data sets are quite diverse: SIFT and MNIST data sets are de-facto “standard” test cases for nearest neighbor search and distance preserving sketches, NYC taxi data was designed to contain anomalies and “irrelevant” dimensions, while Diagonal has extremely low intrinsic dimension. We compare our algorithms to Product Quantization (PQ) [7], a state of the art method for computing distance-preserving sketches, as well as a baseline simple uniform quantization method (Grid). The sketch length/accuracy tradeoffs for QS and PQ are comparable on SIFT and MNIST data, with PQ having higher accuracy for shorter sketches while QS having better accuracy for longer sketches. On NYC taxi data, the accuracy of QS is higher over the whole range of sketch lengths . Finally, Diagonal exemplifies a situation where the low dimensionality of the data set hinders the performance of PQ, while QS naturally adapts to this data set. Overall, QS performs well on “typical” data sets, while its provable guarantees ensure robust performance in a wide range of scenarios. Both algorithms improve over the baseline quantization method. 2 Formal Statement of Results Preliminaries. Let X = {x1, . . . , xn} ⊂Rd be a pointset in Euclidean space. A compression scheme constructs from X a bit representation referred to as a sketch. 
Given the sketch, and without access to the original pointset, one can decompress the sketch into an approximate pointset X̃ = {x̃₁, . . . , x̃ₙ} ⊂ Rᵈ. The goal is to minimize the size of the sketch, while approximately preserving the geometric properties of the pointset, in particular the distances and near neighbors. In the previous section we parameterized the sketch size in terms of the number of points n, the dimension d, and the bits per coordinate B. In fact, our results are more general, and can be stated in terms of the aspect ratio of the pointset, denoted by Φ and defined as the ratio of the largest to the smallest distance,

Φ = (max_{1≤i<j≤n} ∥xᵢ − xⱼ∥) / (min_{1≤i<j≤n} ∥xᵢ − xⱼ∥).

Note that log(Φ) ≤ log d + B, so our bounds, stated in terms of log Φ, immediately imply analogous bounds in terms of B. We will use [n] to denote {1, . . . , n}, and Õ(f) to suppress polylogarithmic factors in f.

QuadSketch. Our compression algorithm, described in detail in Section 3, is based on a randomized variant of a quadtree followed by a pruning step. In its simplest variant, the trade-off between the sketch size and the compression quality is governed by a single parameter Λ. Specifically, Λ controls the pruning step, in which the algorithm identifies “non-important” bits among those stored in the quadtree (i.e., bits whose omission would have little effect on the approximation quality) and removes them from the sketch.

² The bounds can be stated more generally in terms of the aspect ratio Φ of the pointset; see Section 2 for the discussion.
³ Traditionally, the term “quadtree” is used for the case d = 2, while its higher-dimensional variants are called “hyperoctrees” [23]. However, for the sake of simplicity, in this paper we use the same term “quadtree” for any value of d.
⁴ We note that a similar idea (using kd-trees instead of quadtrees) was proposed earlier in [1]. However, we are not aware of any provable space/distortion tradeoffs for the latter algorithm.
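For illustration, the aspect ratio Φ defined above can be computed directly. A hypothetical helper of ours (not the paper's code), quadratic in n and intended only for small pointsets:

```python
# Computes the aspect ratio Phi of a pointset: the ratio of the largest
# to the smallest pairwise Euclidean distance (hypothetical helper).
import itertools
import math

def aspect_ratio(points):
    """points: list of equal-length coordinate tuples, with at least
    two distinct points (so the minimum distance is nonzero)."""
    dists = [math.dist(p, q) for p, q in itertools.combinations(points, 2)]
    return max(dists) / min(dists)
```

For example, three collinear points at 0, 1, and 3 on a line have largest distance 3 and smallest distance 1, giving Φ = 3.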
Higher values of Λ result in sketches that are longer but have better approximation quality.

Approximate nearest neighbors. Our main theorem provides the following guarantee for the basic variant of QuadSketch: for each point, the distances from that point to all other points are preserved up to a factor of 1 ± ϵ with constant probability.

Theorem 1. Given ϵ, δ > 0, let Λ = O(log(d log Φ / ϵδ)) and L = log Φ + Λ. QuadSketch runs in time Õ(ndL) and produces a sketch of size O(ndΛ + n log n) bits, with the following guarantee: for every i ∈ [n],

Pr[ ∀ j ∈ [n] : ∥x̃ᵢ − x̃ⱼ∥ = (1 ± ϵ)∥xᵢ − xⱼ∥ ] ≥ 1 − δ.

In particular, with probability 1 − δ, if x̃ᵢ* is the nearest neighbor of x̃ᵢ in X̃, then xᵢ* is a (1 + ϵ)-approximate nearest neighbor of xᵢ in X. Note that the theorem allows us to compress the input pointset into a sketch and then decompress it back into a pointset which can be fed to a black-box similarity search algorithm. Alternatively, one can decompress only specific points and approximate the distance between them. For example, if d = O(ϵ⁻² log n) and Φ is polynomially bounded in n, then Theorem 1 uses Λ = O(log log n + log(1/ϵ)) bits per coordinate to preserve (1 + ϵ)-approximate nearest neighbors. The full version of QuadSketch, described in Section 3, allows extra fine-tuning by exposing additional parameters of the algorithm. The guarantees for the full version are summarized by Theorem 3 in Section 3.

Maximum distortion. We also show that a recursive application of QuadSketch makes it possible to approximately preserve the distances between all pairs of points. This is the setting considered in [6]. (In contrast, Theorem 1 preserves the distances from any single point.)

Theorem 2. Given ϵ > 0, let Λ = O(log(d log Φ / ϵ)) and L = log Φ + Λ. There is a randomized algorithm that runs in time Õ(ndL) and produces a sketch of size O(ndΛ + n log n) bits, such that with high probability, every distance ∥xᵢ − xⱼ∥ can be recovered from the sketch up to distortion 1 ± ϵ.
Theorem 2 yields a smaller sketch size than the “vanilla” bound, and only slightly larger than that of [6]. For example, for d = O(ϵ⁻² log n) and Φ = poly(n), it improves over the “vanilla” bound by a factor of O(log n / log log n) and is lossier than the bound of [6] by a factor of O(log log n). However, compared to the latter, our construction time is nearly linear in n. The comparison is summarized in Table 1.

Table 1: Comparison of Euclidean metric sketches with maximum distortion 1 ± ϵ, for d = O(ϵ⁻² log n) and log Φ = O(log n).

Reference        | Bits per point                        | Construction time
“Vanilla” bound  | O(ϵ⁻² log² n)                         | –
Algorithm of [6] | O(ϵ⁻² log n log(1/ϵ))                 | Õ(n^{1+α} + ϵ⁻²n) for α ∈ (0, 1]
Theorem 2        | O(ϵ⁻² log n (log log n + log(1/ϵ)))   | Õ(ϵ⁻²n)

We remark that Theorem 2 does not let us recover an approximate embedding of the pointset, x̃₁, . . . , x̃ₙ, as Theorem 1 does. Instead, the sketch functions as an oracle that accepts queries of the form (i, j) and returns an approximation of the distance ∥xᵢ − xⱼ∥.

3 The Compression Scheme

The sketching algorithm takes as input the pointset X, and two parameters L and Λ that control the amount of compression.

Step 1: Randomly shifted grid. The algorithm starts by imposing a randomly shifted axis-parallel grid on the points. We first enclose the whole pointset in an axis-parallel hypercube H. Let ∆′ = max_{i∈[n]} ∥x₁ − xᵢ∥, and ∆ = 2^{⌈log ∆′⌉}. Set up H to be centered at x₁ with side length 4∆. Now choose σ₁, . . . , σ_d ∈ [−∆, ∆] independently and uniformly at random, and shift H in each coordinate j by σⱼ. By the choice of side length 4∆, one can see that H after the shift still contains the whole pointset. For every integer ℓ such that −∞ < ℓ ≤ log(4∆), let G_ℓ denote the axis-parallel grid with cell side 2^ℓ which is aligned with H. Note that this step can often be eliminated in practice without affecting the empirical performance of the algorithm, but it is necessary in order to achieve guarantees for arbitrary pointsets.
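Step 1 can be sketched as follows (a minimal illustration of ours; names such as `shifted_enclosing_cube` are not from the paper):

```python
# Step 1 sketch: enclose the pointset in a hypercube H centered at x_1
# with side 4*Delta, then shift H by an independent uniform offset in
# [-Delta, Delta] per coordinate. Since |shift_j| <= Delta and every point
# is within Delta' <= Delta of x_1, the shifted cube still contains X.
import math
import random

def shifted_enclosing_cube(points, rng=random):
    """points: list of coordinate tuples, at least two, not all equal to x_1."""
    x1 = points[0]
    d = len(x1)
    delta_prime = max(math.dist(x1, x) for x in points[1:])
    delta = 2 ** math.ceil(math.log2(delta_prime))  # round up to a power of 2
    side = 4 * delta
    shift = [rng.uniform(-delta, delta) for _ in range(d)]
    # Lower corner of the shifted cube; H = [corner_j, corner_j + side] per axis.
    corner = [x1[j] - side / 2 + shift[j] for j in range(d)]
    return corner, side
```

The containment argument from the text holds for any realization of the random shift, which the sketch makes easy to check numerically.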
Step 2: Quadtree construction. The 2ᵈ-ary quadtree on the nested grids G_ℓ is naturally defined by associating every grid cell c in G_ℓ with the tree node at level ℓ, such that its children are the 2ᵈ grid cells in G_{ℓ−1} which are contained in c. The edge connecting a node v to a child v′ is labeled with a bitstring of length d defined as follows: the jth bit is 0 if v′ coincides with the bottom half of v along coordinate j, and 1 if v′ coincides with the upper half along that coordinate. In order to construct the tree, we start with H as the root, and bucket the points contained in it into the 2ᵈ child cells. We only add child nodes for cells that contain at least one point of X. Then we continue by recursing on the child nodes. The quadtree construction is finished after L levels. We denote the resulting edge-labeled tree by T*. A construction for L = 2 is illustrated in Figure 1.

Figure 1: Quadtree construction for points x, y, z. The x and y coordinates are written as binary numbers.

We define the level of a tree node with side length 2^ℓ to be ℓ (note that ℓ can be negative). The degree of a node in T* is its number of children. Since all leaves are located at the bottom level, each point xᵢ ∈ X is contained in exactly one leaf, which we henceforth denote by vᵢ.

Step 3: Pruning. Consider a downward path u₀, u₁, . . . , u_k in T*, such that u₁, . . . , u_{k−1} are nodes with degree 1, and u₀, u_k are nodes with degree other than 1 (u_k may be a leaf). For every such path in T*, if k > Λ + 1, we remove the nodes u_{Λ+1}, . . . , u_{k−1} from T* with all their adjacent edges (and edge labels). Instead we connect u_k directly to u_Λ as its child. We refer to that edge as the long edge, and label it with the length of the path it replaces (k − Λ). The original edges from T* are called short edges. At the end of the pruning step, we denote the resulting tree by T.

The sketch. For each point xᵢ ∈ X, the sketch stores the index of the leaf vᵢ that contains it.
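The construction of T* just described can be sketched as follows (a hypothetical minimal implementation of ours, with points assumed pre-quantized to nonnegative L-bit integer coordinates within H; the pruning step of long degree-1 chains is omitted for brevity):

```python
# Quadtree construction sketch: the edge label at depth t is the d-bit
# tuple formed by the t-th most significant bit of each coordinate.
# The tree is stored as a dict mapping the root-to-node label path to
# the set of child edge labels (only nonempty cells are materialized).

def build_quadtree(points, L):
    tree = {(): set()}   # label path -> set of child edge labels
    leaf_of = []         # full label path of the leaf containing each point
    for x in points:
        path = ()
        for t in range(L - 1, -1, -1):          # most significant bit first
            label = tuple((c >> t) & 1 for c in x)
            tree.setdefault(path, set()).add(label)
            path += (label,)
            tree.setdefault(path, set())
        leaf_of.append(path)
    return tree, leaf_of
```

With d = 2 and L = 2, the points (2, 1) and (3, 1) (binary 10/01 and 11/01) share the top-level cell labeled (1, 0) and then split into two leaves, matching the bucketing described above.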
In addition, it stores the structure of the tree T, encoded using the Euler tour technique.⁵ Specifically, starting at the root, we traverse T in depth-first search (DFS) order. In each step, DFS either explores a child of the current node (downward step), or returns to the parent node (upward step). We encode a downward step by 0 and an upward step by 1. With each downward step we also store the label of the traversed edge (a length-d bitstring for a short edge, or the edge length for a long edge, plus an additional bit marking whether the edge is short or long).

Decompression. Recovering x̃ᵢ from the sketch is done simply by following the downward path from the root of T to the associated leaf vᵢ, collecting the edge labels of the short edges, and placing zeros instead of the missing bits of the long edges. The collected bits then correspond to the binary expansion of the coordinates of x̃ᵢ. More formally, for every node u (not necessarily a leaf) we define c(u) ∈ Rᵈ as follows: for j ∈ {1, . . . , d}, concatenate the jth bit of every short edge label traversed along the downward path from the root to u. When traversing a long edge labeled with length k, concatenate k zeros.⁶ Then, place a binary point in the resulting bitstring, after the bit corresponding to level 0. (Recall that the levels in T are defined by the grid cell side lengths, and T might not have any nodes at level 0; in this case we need to pad with 0’s either on the right or on the left until there is a bit in the location corresponding to level 0.) The resulting binary string is the binary expansion of the jth coordinate of c(u). Now x̃ᵢ is defined to be c(vᵢ).

Block QuadSketch. We can further modify QuadSketch in a manner similar to Product Quantization [7]. Specifically, we partition the d dimensions into m blocks B₁, . . . , B_m of size d/m each, and apply QuadSketch separately to each block. More formally, for each Bᵢ, we apply QuadSketch to the pointset (x₁)_{Bᵢ}, . . . ,
(xₙ)_{Bᵢ}, where x_B denotes the (d/m)-dimensional vector obtained by projecting x onto the dimensions in B. The following statement is an immediate corollary of Theorem 1.

Theorem 3. Given ϵ, δ > 0, and m dividing d, set the pruning parameter Λ to O(log(d log Φ / ϵδ)) and the number of levels L to log Φ + Λ. The m-block variant of QuadSketch runs in time Õ(ndL) and produces a sketch of size O(ndΛ + nm log n) bits, with the following guarantee: for every i ∈ [n],

Pr[ ∀ j ∈ [n] : ∥x̃ᵢ − x̃ⱼ∥ = (1 ± ϵ)∥xᵢ − xⱼ∥ ] ≥ 1 − mδ.

It can be seen that increasing the number of blocks m up to a certain threshold (dΛ / log n) does not affect the asymptotic bound on the sketch size. Although we cannot prove that varying m improves the accuracy of the sketch, this seems to be the case empirically, as demonstrated in the experimental section.

⁵ See, e.g., https://en.wikipedia.org/wiki/Euler_tour_technique.
⁶ This is the “lossy” step in our sketching method: the original bits could be arbitrary, but they are replaced with zeros.

Table 2: Datasets used in our empirical evaluation. The aspect ratio of SIFT and MNIST is estimated on a random sample.

Dataset              | Points    | Dimension | Aspect ratio (Φ)
SIFT                 | 1,000,000 | 128       | ≥ 83.2
MNIST                | 60,000    | 784       | ≥ 9.2
NYC Taxi             | 8,874     | 48        | 49.5
Diagonal (synthetic) | 10,000    | 128       | 20,478,740.2

4 Experiments

We evaluate QuadSketch experimentally and compare its performance to Product Quantization (PQ) [7], a state-of-the-art compression scheme for approximate nearest neighbors, and to a baseline of uniform scalar quantization, which we refer to as Grid. For each dimension of the dataset, Grid places k equally spaced landmark scalars on the interval between the minimum and the maximum values along that dimension, and rounds each coordinate to the nearest landmark. All three algorithms work by partitioning the data dimensions into blocks, and performing a quantization step in each block independently of the other ones.
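The Grid baseline just described admits a direct implementation (our own sketch, not the authors' code):

```python
# Grid baseline sketch: per dimension, place k equally spaced landmarks
# between the min and max values observed in the data, then round each
# coordinate to the nearest landmark.

def grid_quantize(points, k):
    d = len(points[0])
    lo = [min(p[j] for p in points) for j in range(d)]
    hi = [max(p[j] for p in points) for j in range(d)]
    out = []
    for p in points:
        q = []
        for j in range(d):
            if hi[j] == lo[j]:          # constant dimension: one landmark
                q.append(lo[j])
                continue
            step = (hi[j] - lo[j]) / (k - 1)
            idx = round((p[j] - lo[j]) / step)   # nearest of the k landmarks
            q.append(lo[j] + idx * step)
        out.append(tuple(q))
    return out
```

With k landmarks per dimension, each coordinate costs ⌈log₂ k⌉ bits, which is the "bits per coordinate" budget this baseline trades against accuracy.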
QuadSketch and PQ take the number of blocks as a parameter, and Grid uses blocks of size 1. The quantization step is the basic algorithm described in Section 3 for QuadSketch, k-means for PQ, and uniform scalar quantization for Grid. We test the algorithms on four datasets: the SIFT data used in [7], MNIST [11] (with all vectors normalized to 1), NYC Taxi ridership data [4], and a synthetic dataset called Diagonal, consisting of random points on a line embedded in a high-dimensional space. The properties of the datasets are summarized in Table 2. Note that we were not able to compute the exact diameters for MNIST and SIFT, hence we only report estimates of Φ for these data sets, obtained via random sampling. The Diagonal dataset consists of 10,000 points of the form (x, x, . . . , x), where x is chosen independently and uniformly at random from the interval [0, 40000]. This yields a dataset with a very large aspect ratio Φ, and one on which partitioning into blocks is not expected to be beneficial, since all coordinates are maximally correlated. For SIFT and MNIST we use the standard query set provided with each dataset. For Taxi and Diagonal we use 500 queries chosen at random from each dataset. For the sake of consistency, for all data sets, we apply the same quantization process jointly to both the point set and the query set, for both PQ and QS. We note, however, that both algorithms can be run on “out of sample” queries. For each dataset, we enumerate the number of blocks over all divisors of the dimension d. For QuadSketch, L ranges over 2, . . . , 20, and Λ ranges over 1, . . . , L − 1. For PQ, the number of k-means landmarks per block ranges over 2⁵, 2⁶, . . . , 2¹². For both algorithms we include the results for all combinations of the parameters, and plot the envelope of the best performing combinations.
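The envelope of the best performing combinations can be computed as the Pareto frontier of (sketch size, accuracy) pairs; a hypothetical post-processing helper of ours:

```python
# Keeps only Pareto-optimal (sketch_bits, accuracy) pairs: a pair is
# dropped if some configuration with smaller-or-equal sketch size has
# strictly higher accuracy.

def pareto_envelope(results):
    """results: iterable of (sketch_bits, accuracy) pairs."""
    best = []
    top = float("-inf")
    # Sort by ascending size, breaking ties by descending accuracy,
    # so dominated equal-size pairs are skipped.
    for bits, acc in sorted(results, key=lambda r: (r[0], -r[1])):
        if acc > top:
            best.append((bits, acc))
            top = acc
    return best
```

Plotting the surviving pairs yields the monotone tradeoff curves shown in the figures.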
We report two measures of performance for each dataset: (a) the accuracy, defined as the fraction of queries for which the sketch returns the true nearest neighbor, and (b) the average distortion, defined as the ratio between the (true) distances from the query to the reported near neighbor and to the true nearest neighbor. The sketch size is measured in bits per coordinate. The results appear in Figures 2 to 5. Note that the vertical coordinate in the distortion plots corresponds to the value of ϵ, not 1 + ϵ. For SIFT, we also include a comparison with Cartesian k-Means (CKM) [14], in Figure 6.

4.1 QuadSketch Parameter Setting

We plot how the different parameters of QuadSketch affect its performance. Recall that L determines the number of levels in the quadtree prior to the pruning step, and Λ controls the amount of pruning. By construction, the higher we set these parameters, the larger and more accurate the sketch. The empirical tradeoff for the SIFT dataset is plotted in Figure 7.

Figure 2: Results for the SIFT dataset.
Figure 3: Results for the MNIST dataset.
Figure 4: Results for the Taxi dataset.
Figure 5: Results for the Diagonal dataset.
Figure 6: Additional results for the SIFT dataset.
Figure 7: On the left, L varies from 2 to 11 for a fixed setting of 16 blocks and Λ = L − 1 (no pruning). On the right, Λ varies from 1 to 9 for a fixed setting of 16 blocks and L = 10. Increasing Λ beyond 6 has no further effect on the resulting sketch.

The optimal setting for the number of blocks is not monotone, and generally depends on the specific dataset. It was noted in [7] that on SIFT data an intermediate number of blocks gives the best results, and this is confirmed by our experiments. Figure 8 lists the performance on the SIFT dataset for a varying number of blocks, for a fixed setting of L = 6 and Λ = 5.
It shows that the sketch quality remains essentially the same, while the size varies significantly, with the optimal size attained at 16 blocks.

# Blocks | Bits per coordinate | Accuracy | Average distortion
1        | 5.17                | 0.719    | 1.0077
2        | 4.523               | 0.717    | 1.0076
4        | 4.02                | 0.722    | 1.0079
8        | 3.272               | 0.712    | 1.0079
16       | 2.795               | 0.712    | 1.008
32       | 3.474               | 0.712    | 1.0082
64       | 4.032               | 0.713    | 1.0081
128      | 4.079               | 0.72     | 1.0078

Figure 8: QuadSketch accuracy on SIFT data by number of blocks, with L = 6 and Λ = 5.

References

[1] R. Arandjelović and A. Zisserman. Extremely low bit-rate nearest neighbor search using a set compression tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(12):2396–2406, 2014.
[2] Y. Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, pages 184–193. IEEE, 1996.
[3] A. Z. Broder. On the resemblance and containment of documents. In Compression and Complexity of Sequences 1997, pages 21–29. IEEE, 1997.
[4] S. Guha, N. Mishra, G. Roy, and O. Schrijvers. Robust random cut forest based anomaly detection on streams. In International Conference on Machine Learning, pages 2712–2721, 2016.
[5] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
[6] P. Indyk and T. Wagner. Near-optimal (Euclidean) metric compression. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 710–723. SIAM, 2017.
[7] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.
[8] J. Johnson, M. Douze, and H. Jégou. Billion-scale similarity search with GPUs. CoRR, abs/1702.08734, 2017.
[9] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space.
Contemporary Mathematics, 26:189–206, 1984.
[10] E. Kushilevitz, R. Ostrovsky, and Y. Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. SIAM Journal on Computing, 30(2):457–474, 2000.
[11] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
[12] P. Li, A. Shrivastava, J. L. Moore, and A. C. König. Hashing algorithms for large-scale learning. In Advances in Neural Information Processing Systems, pages 2672–2680, 2011.
[13] S. Muthukrishnan et al. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2):117–236, 2005.
[14] M. Norouzi and D. J. Fleet. Cartesian k-means. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, 2013.
[15] M. Norouzi, D. J. Fleet, and R. R. Salakhutdinov. Hamming distance metric learning. In Advances in Neural Information Processing Systems, pages 1061–1069, 2012.
[16] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In Advances in Neural Information Processing Systems, pages 1509–1517, 2009.
[17] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
[18] A. Shrivastava and P. Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, pages 557–565, 2014.
[19] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pages 1–8. IEEE, 2008.
[20] J. Wang, W. Liu, S. Kumar, and S.-F. Chang. Learning to hash for indexing big data: a survey. Proceedings of the IEEE, 104(1):34–57, 2016.
[21] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113–1120. ACM, 2009.
[22] Y. Weiss, A.
Torralba, and R. Fergus. Spectral hashing. In Advances in Neural Information Processing Systems, pages 1753–1760, 2009.
[23] M.-M. Yau and S. N. Srihari. A hierarchical data structure for multidimensional digital images. Communications of the ACM, 26(7):504–515, 1983.
Partial Hard Thresholding: Towards A Principled Analysis of Support Recovery

Jie Shen, Department of Computer Science, School of Arts and Sciences, Rutgers University, New Jersey, USA. js2007@rutgers.edu
Ping Li, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, New Jersey, USA. pingli@stat.rutgers.edu

Abstract

In machine learning and compressed sensing, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements. In this paper, we present a principled analysis of the support recovery performance for a family of hard thresholding algorithms. To this end, we appeal to the partial hard thresholding (PHT) operator proposed recently by Jain et al. [IEEE Trans. Information Theory, 2017]. We show that under proper conditions, PHT recovers an arbitrary s-sparse signal within O(sκ log κ) iterations, where κ is an appropriate condition number. Specializing the PHT operator, we obtain the best known results for hard thresholding pursuit and orthogonal matching pursuit with replacement. Experiments on simulated data complement our theoretical findings and also illustrate the effectiveness of PHT.

1 Introduction

This paper is concerned with the problem of recovering an arbitrary sparse signal from a set of its (compressed) measurements. We say that a signal x̄ ∈ Rᵈ is s-sparse if there are no more than s non-zeros in x̄. This problem, together with its many variants, has found a variety of successful applications in compressed sensing, machine learning, and statistics. Of particular interest is the setting where x̄ is the true signal and only a small number of linear measurements are given, referred to as compressed sensing.
This setting has been extensively studied in the last decade, along with a large body of elegant work devoted to efficient algorithms, including ℓ₁-based convex optimization and hard thresholding based greedy pursuits [7, 6, 15, 8, 3, 5, 11]. Another quintessential example is the sparsity-constrained minimization program recently considered in machine learning [30, 2, 14, 22], for which the goal is to efficiently learn the global sparse minimizer x̄ from a set of training data. Though in most cases the underlying signal can be categorized into either of the two classes, we note that it could also be another object, such as the parameter of a logistic regression model [19]. Hence, for a unified analysis, this paper copes with an arbitrary sparse signal, and the results to be established quickly apply to the special instances above. It is also worth mentioning that while one can characterize the performance of an algorithm and evaluate the obtained estimate from various aspects, we are specifically interested in the quality of support recovery. Recall that for sparse recovery problems, there are two prominent metrics: the ℓ₂ distance and support recovery. Theoretical results phrased in terms of the ℓ₂ metric are also referred to as parameter estimation, which most of the previous papers emphasized. Under this metric, many popular algorithms, e.g., the Lasso [24, 27] and hard thresholding based algorithms [9, 3, 15, 8, 10, 22], are guaranteed to produce accurate approximations up to the energy of the noise. Support recovery is another important factor in evaluating an algorithm, and is also known as feature selection or variable selection. In one of the earliest works, [25] offered sufficient and necessary conditions under which orthogonal matching pursuit and basis pursuit identify the support.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The theory was then developed by [35, 32, 27] for the Lasso estimator and by [29] for the garrotte estimator. Typically, recovering the support of a target signal is more challenging than parameter estimation. For instance, [18] showed that the restricted eigenvalue condition suffices for the Lasso to produce an accurate estimate, whereas in order to recover the sign pattern, a more stringent mutual incoherence condition has to be imposed [27]. However, as has been recognized, if the support is detected precisely by a method, then the solution attains the optimal statistical rate [27]. In this regard, research on support recovery continues to be a central theme in recent years [33, 34, 31, 4, 17]. Our work follows this line and studies the support recovery performance of hard thresholding based algorithms, which enjoy superior computational efficiency over convex programs when manipulating a huge volume of data [26]. We note that though [31, 4] have developed theoretical understanding for hard thresholding pursuit (HTP) [10], showing that HTP identifies the support of a signal within a few iterations, neither of them obtained the general results in this paper. In more detail, under the restricted isometry property (RIP) condition [6], our iteration bound holds for an arbitrary sparse signal of interest, while the results of [31, 4] hold either for the global sparse minimizer or for the true sparse signal. Using a relaxed sparsity condition, we obtain a clear iteration complexity O(sκ log κ), where κ is a proper condition number. In contrast, it is hard to quantify the bound of [31] (see Theorem 3 therein). From the algorithmic perspective, we consider a more general algorithm than HTP. In fact, we appeal to the recently proposed partial hard thresholding (PHT) operator [13] and demonstrate novel results, which in turn indicate the best known iteration complexity for HTP and orthogonal matching pursuit with replacement (OMPR) [12].
Thereby, the results in this paper considerably extend our earlier work on HTP [23]. It is also worth mentioning that, though our analysis hinges on the PHT operator, the support recovery results to be established are stronger than the results in [13], since that work only showed parameter estimation for PHT. Finally, we remark that while a number of previous works considered signals that are not exactly sparse (e.g., [4]), in this paper we focus on the sparse case. Extensions to generic signals are left as an interesting future direction.

Contribution. The contribution of this paper is summarized as follows. We study the iteration complexity of the PHT algorithm, and show that under the RIP condition or the relaxed sparsity condition (to be clarified), PHT recovers the support of an arbitrary s-sparse signal within O(sκ log κ) iterations. This strengthens the theoretical results of [13], where only parameter estimation for PHT was established. Thanks to the generality of the PHT operator, our results shed light on the support recovery performance of a family of prevalent iterative algorithms. As two extreme cases of PHT, the new results immediately apply to HTP and OMPR, and imply the best known bounds.

Roadmap. The remainder of the paper is organized as follows. We describe the problem setting, as well as the partial hard thresholding operator, in Section 2, followed by the main results regarding the iteration complexity. In Section 3, we sketch the proof of the main results and list some useful lemmas which might be of independent interest. Numerical results are illustrated in Section 4, and Section 5 concludes the paper and poses several interesting directions for future work. The detailed proofs of our theoretical results are deferred to the appendix (see the supplementary file).

Notation. We collect the notation used in this paper.
The letter C and its subscript variants (e.g., C₁) are used to denote absolute constants whose values may change from appearance to appearance. For a vector x ∈ Rᵈ, its ℓ₂ norm is denoted by ∥x∥. The support set of x is denoted by supp(x), which indexes the non-zeros in x. With a slight abuse, supp(x, k) is the set of indices of the k largest (in magnitude) elements; ties are broken lexicographically. We interchangeably write ∥x∥₀ or |supp(x)| to signify the cardinality of supp(x). We will also consider a vector restricted to a support set. That is, for a d-dimensional vector x and a support set T ⊂ {1, 2, . . . , d}, depending on the context, x_T can either be a |T|-dimensional vector obtained by extracting the elements belonging to T, or a d-dimensional vector obtained by setting the elements outside T to zero. The complement of a set T is denoted by T̄. We reserve x̄ ∈ Rᵈ for the target s-sparse signal, whose support is denoted by S. The quantity x̄_min > 0 is the minimum absolute element of x̄_S, where we recall that x̄_S ∈ Rˢ consists of the non-zeros of x̄. The PHT algorithm will depend on a carefully chosen function F(x). We write its gradient as ∇F(x), and we use ∇_k F(x) as a shorthand for (∇F(x))_{supp(∇F(x), k)}, i.e., the top k absolute components of ∇F(x).

2 Partial Hard Thresholding

To pursue a sparse solution, hard thresholding has been broadly invoked by many popular greedy algorithms. In the present work, we are interested in the partial hard thresholding operator, which sheds light upon a unified design and analysis for iterative algorithms employing this operator and the hard thresholding operator [13]. Formally, given a support set T and a freedom parameter r > 0, the PHT operator, which is used to produce a k-sparse approximation to z, is defined as follows:

PHT_k(z; T, r) := argmin_{x ∈ Rᵈ} ∥x − z∥,  s.t.  ∥x∥₀ ≤ k,  |T \ supp(x)| ≤ r.   (1)

The first constraint simply enforces a k-sparse solution.
To gain intuition on the second constraint, consider that T is the support set of the last iterate of an iterative algorithm, for which |T| ≤ k. Then the second constraint ensures that the new support set differs from the previous one in at most r positions. As a special case, one may have noticed that the PHT operator reduces to standard hard thresholding when the freedom parameter r ≥ k. At the other end of the spectrum, if we look at the case r = 1, the PHT operator yields the interesting algorithm termed orthogonal matching pursuit with replacement [12], which in general replaces one element in each iteration. It has been shown in [13] that the PHT operator can be computed in an efficient manner for a general support set T and freedom parameter r. In this paper, our major focus will be on the case |T| = k.¹ Then Lemma 1 of [13] indicates that PHT_k(z; T, r) is given as follows:

top = supp(z_{T̄}, r),   PHT_k(z; T, r) = HT_k(z_{T ∪ top}),   (2)

where HT_k(·) is the standard hard thresholding operator that sets all but the k largest absolute components of a vector to zero. Equipped with the PHT operator, we are now in a position to describe a general iterative greedy algorithm, termed PHT(r), where r is the freedom parameter in (1). At the t-th iteration, the algorithm reveals the last iterate x^{t−1} as well as its support set S^{t−1}, and returns a new solution as follows:

z^t = x^{t−1} − η ∇F(x^{t−1}),
y^t = PHT_k(z^t; S^{t−1}, r),
S^t = supp(y^t),
x^t = argmin_{x ∈ Rᵈ} F(x),  s.t.  supp(x) ⊂ S^t.

Above, we note that η > 0 is a step size and F(x) is a proxy function which should be carefully chosen (to be clarified later). Typically, the sparsity parameter k equals s, the sparsity of the target signal x̄. In this paper, we consider a more general choice of k, which leads to novel results. For further clarity, several comments on F(x) are in order.
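Before turning to those comments, the closed form (2) and one PHT(r) update can be sketched concretely (a NumPy illustration of ours, not the authors' code; `grad_F` and `solve_on_support` stand in for problem-specific components, e.g., a least-squares solve when F(x) = ∥y − Ax∥²):

```python
import numpy as np

def hard_threshold(z, k):
    """HT_k: keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(-np.abs(z), kind="stable")[:k]
    out[keep] = z[keep]
    return out

def pht(z, T, r, k):
    """Closed form (2): top = indices of the r largest |z_j| outside T,
    then PHT_k(z; T, r) = HT_k(z restricted to T union top)."""
    T = np.fromiter(T, dtype=int)
    outside = np.setdiff1d(np.arange(len(z)), T)
    top = outside[np.argsort(-np.abs(z[outside]), kind="stable")[:r]]
    mask = np.zeros(len(z), dtype=bool)
    mask[np.concatenate([T, top])] = True
    return hard_threshold(np.where(mask, z, 0.0), k)

def pht_step(x, grad_F, solve_on_support, eta, r, k):
    """One PHT(r) iteration; solve_on_support(S) returns argmin of F over
    vectors supported on S (both callables assumed supplied by the caller)."""
    z = x - eta * grad_F(x)
    y = pht(z, np.flatnonzero(x), r, k)
    return solve_on_support(np.flatnonzero(y))
```

Note how `pht` can swap at most r elements of T out of the support, matching the constraint |T \ supp(x)| ≤ r in (1).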
First, one may have observed that in the context of sparsity-constrained minimization, the proxy function F(x) used above is chosen as the objective function [30, 14]. In that scenario, the target signal is a global optimum and PHT(r) proceeds as projected gradient descent. Nevertheless, recall that our goal is to estimate an arbitrary signal ¯x. It is not realistic to look for a function F(x) such that our target happens to be its global minimizer. The remedy we will offer is characterizing a deterministic condition between ¯x and ∇F(¯x) which is analogous to the signal-to-noise ratio condition, so that any function F(x) fulfilling that condition suffices. In this light, we find that F(x) behaves more like a proxy that guides the algorithm to a given target. Remarkably, our analysis also encompasses the situation considered in [30, 14]. Second, though it is not being made explicitly, one should think of F(x) as containing the measurements or the training data. Consider, for example, recovering ¯x from y = A¯x where A is a design matrix and y is the response (both are known). A natural way would be running the PHT(r) algorithm with F(x) = ∥y −Ax∥2. One may also think of the logistic regression model where y is a binary vector (label), A is a collection of training data (feature), and F(x) is the logistic loss evaluated on the training samples. With the above clarification, we are ready to make assumptions on F(x). It turns out that two properties of F(x) are vital for our analysis: restricted strong convexity and restricted smoothness. These two conditions were proposed by [16] and have been standard in the literature [34, 1, 14, 22]. 1Our results actually hold for |T | ≤k. But we observe that the size of T we will consider is usually equal to k. Hence, for ease of exposition, we take |T | = k. This is also the case considered in [12]. 3 Definition 1. 
We say that a differentiable function F(x) satisfies restricted strong convexity (RSC) at sparsity level s with parameter ρ−_s > 0 if for all x, x′ ∈ R^d with ∥x − x′∥₀ ≤ s,

F(x) − F(x′) − ⟨∇F(x′), x − x′⟩ ≥ (ρ−_s/2)∥x − x′∥².

Likewise, we say that F(x) satisfies restricted smoothness (RSS) at sparsity level s with parameter ρ+_s > 0 if for all x, x′ ∈ R^d with ∥x − x′∥₀ ≤ s,

F(x) − F(x′) − ⟨∇F(x′), x − x′⟩ ≤ (ρ+_s/2)∥x − x′∥².

We call κ_s = ρ+_s/ρ−_s the condition number of the problem, since it is essentially the condition number of the Hessian of F(x) restricted to s-sparse directions.

2.1 Deterministic Analysis

The following proposition shows that under very mild conditions, PHT(r) either terminates or recovers the support of an arbitrary s-sparse signal x̄ within O(sκ_{2s} log κ_{2s}) iterations.

Proposition 2. Consider the PHT(r) algorithm with k = s. Suppose that F(x) is ρ−_{2s}-RSC and ρ+_{2s}-RSS, and that the step size η ∈ (0, 1/ρ+_{2s}). Let κ := ρ+_{2s}/ρ−_{2s}. Then PHT(r) either terminates or recovers the support of x̄ within O(sκ log κ) iterations, provided that

x̄_min ≥ ((4√2 + 2√κ)/ρ−_{2s}) ∥∇_{2s}F(x̄)∥.

A few remarks are in order. First, we remind the reader that under the conditions stated above, PHT(r) is not guaranteed to succeed. We say that PHT(r) fails if it terminates at some time t but S^t ≠ S. This indeed happens if, for example, we feed it a bad initial point and pick a very small step size. In particular, if x⁰_min > η∥∇F(x⁰)∥_∞, then the algorithm makes no progress. The crux of remedying this issue is to impose a lower bound on η or to look at more coordinates in each iteration, which is the theme below. The proposition is still useful, however, because it asserts that as long as PHT(r) runs long enough (i.e., O(sκ log κ) iterations), it recovers the support of an arbitrary sparse signal. We also note that neither the RIP condition nor a relaxed sparsity is assumed in this proposition.

The x̄_min-condition above is natural, and can be viewed as a generalization of the well-known signal-to-noise ratio (SNR) condition. To see this, consider the noisy compressed sensing problem, where y = Ax̄ + e and F(x) = ∥y − Ax∥², with e some noise vector. Now RSC and RSS imply, for any 2s-sparse x,

ρ−_{2s}∥x∥² ≤ ∥Ax∥² ≤ ρ+_{2s}∥x∥².

Hence ∥∇_{2s}F(x̄)∥ = ∥(A⊤e)_{2s}∥ = Θ(∥e∥). In fact, the x̄_min-condition has been used many times to establish support recovery; see, for example, [31, 4, 23].

In the following, we strengthen Prop. 2 by imposing the RIP condition, which requires a well-bounded condition number, i.e., κ ≤ O(1).

Theorem 3. Consider the PHT(r) algorithm with k = s. Suppose that F(x) is ρ−_{2s+r}-RSC and ρ+_{2s+r}-RSS. Let κ := ρ+_{2s+r}/ρ−_{2s+r} be the condition number, and assume κ < 1 + 1/(√2 + ν), where ν = √(s − r) + 2. Pick the step size η = η′/ρ+_{2s+r} such that κ − 1/(√2 + ν) < η′ ≤ 1. Then PHT(r) recovers the support of x̄ within

t_max = ( log κ/log(1/β) + log(√2/(1 − λ))/log(1/β) + 2 ) ∥x̄∥₀

iterations, provided that for some constant λ ∈ (0, 1),

x̄_min ≥ ((2ν + 6)/(λρ−_{2s+r})) ∥∇_{s+r}F(x̄)∥.

Above, β = (√2 + ν)(κ − η′) ∈ (0, 1).

We remark on several aspects of the theorem. Most importantly, Theorem 3 offers the theoretical justification that PHT(r) always recovers the support. This is achieved by imposing an RIP condition (i.e., bounding the condition number from above) and using a proper step size. We also make the iteration bound explicit in order to examine the parameter dependence. First, note that t_max scales approximately linearly with λ. This confirms the intuition that a small λ indicates a large signal-to-noise ratio, which makes it easy to distinguish the support of interest from the noise. The freedom parameter r enters the coefficient β mainly through the quantity ν. Observe that increasing r yields a smaller β, and hence fewer iterations. This is not surprising, since a large value of r grants the algorithm more freedom to update the current iterate. Indeed, in the best case, PHT(s) is able to recover the support in O(1) iterations while PHT(1) has to take O(s) steps. However, inspecting the conditions, we find that a stronger RSC/RSS condition is needed to afford a large freedom parameter.
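For a quadratic proxy F(x) = ½∥y − Ax∥² (the normalization used later in Section 2.2), the RSC/RSS constants of Definition 1 at level s are exactly the extreme eigenvalues of A_S⊤A_S over supports of size s. A brute-force sketch on a toy design (sizes and names hypothetical) makes the definition concrete:

```python
import itertools
import numpy as np

def restricted_extremes(A, s):
    """Smallest/largest eigenvalue of A_S^T A_S over all supports |S| = s.
    For F(x) = 0.5*||y - Ax||^2 these are the RSC/RSS constants of
    Definition 1 at sparsity level s (feasible only for small d)."""
    d = A.shape[1]
    lo, hi = np.inf, -np.inf
    for S in itertools.combinations(range(d), s):
        G = A[:, S].T @ A[:, S]
        w = np.linalg.eigvalsh(G)        # ascending eigenvalues
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10)) / np.sqrt(40)   # normalized toy design
rho_minus, rho_plus = restricted_extremes(A, 4)   # level 2s with s = 2
kappa = rho_plus / rho_minus                      # restricted condition number
```

The exhaustive enumeration is exponential in s, which is precisely why the analysis works with high-probability bounds such as (3) rather than computing these constants.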
It is also interesting to contrast Theorem 3 with [31, 4], which independently established state-of-the-art support recovery results for HTP. As mentioned, [31] made use of the optimality of the target signal, which is a restricted setting compared to our result. Their iteration bound (see Theorem 1 therein), though it provides an appealing insight, does not have a clear dependence on the natural parameters of the problem (e.g., sparsity and condition number). [4] developed O(∥x̄∥₀) iteration complexity for compressed sensing, but again confined to a special signal, whereas we carry out a generalization that allows us to analyze a family of algorithms.

Though the RIP condition is ubiquitous in the literature, many researchers have pointed out that it is not realistic in practical applications [18, 20, 21]. This is true for large-scale machine learning problems, where the condition number may grow with the sample size (hence one cannot upper bound it by a constant). A clever solution was first (to our knowledge) suggested by [14], who showed that choosing the sparsity parameter k = O(κ²s) guarantees convergence of projected gradient descent. The idea was recently employed by [22, 31] to show an RIP-free condition for sparse recovery, though in a technically different way. The following theorem borrows this elegant idea to prove RIP-free results for PHT(r).

Theorem 4. Consider the PHT(r) algorithm. Suppose that F(x) is ρ−_{2k}-RSC and ρ+_{2k}-RSS. Let κ := ρ+_{2k}/ρ−_{2k} be the condition number. Further pick

k ≥ s + (1 + 4/(η²(ρ−_{2k})²)) min{s, r},

where η ∈ (0, 1/ρ+_{2k}). Then the support of x̄ is included in the iterate of PHT(r) within

t_max = ( 3 log κ/log(1/µ) + 2 log(2/(1 − λ))/log(1/µ) + 2 ) ∥x̄∥₀

iterations, provided that for some constant λ ∈ (0, 1),

x̄_min ≥ ((√κ + 3)/(λρ−_{2k})) ∥∇_{k+s}F(x̄)∥.

Above, µ = 1 − ηρ−_{2k}(1 − ηρ+_{2k})/2.

We discuss the salient features of Theorem 4 compared to Prop. 2 and Theorem 3.
First, note that we can pick η = 1/(2ρ+_{2k}) in the above theorem, which gives µ = O(1 − 1/κ). The iteration complexity is then essentially O(sκ log κ), similar to the one in Prop. 2. However, in Theorem 4, the sparsity parameter k is set to O(s + κ² min{s, r}), which guarantees support inclusion. We pose the open question of whether the x̄_min-condition can be refined, as it currently scales with √κ, which is stringent for ill-conditioned problems. Another important consequence of the theorem is that the sparsity parameter k depends on the minimum of s and r. Consider r = 1, which corresponds to the OMPR algorithm; then k = O(s + κ²) suffices. In contrast, previous work [14, 31, 22, 23] only obtained theoretical results for k = O(κ²s), owing to a restricted problem setting. We also note that even in the original OMPR paper [12] and its latest version [13], such an RIP-free condition was not established.

2.2 Statistical Results

Until now, all of our theoretical results have been phrased in terms of deterministic conditions (i.e., RSC, RSS and x̄_min). It is known that these conditions are satisfied by prevalent statistical models such as linear regression and logistic regression. Here, we give detailed statistical results for sparse linear regression, and refer the reader to [1, 14, 22, 23] for other applications. Consider the sparse linear regression model

y_i = ⟨a_i, x̄⟩ + e_i,  1 ≤ i ≤ N,

where the a_i are drawn i.i.d. from a sub-gaussian distribution with zero mean and covariance Σ ∈ R^{d×d}, and the e_i are drawn i.i.d. from N(0, ω²). We presume that the diagonal elements of Σ are properly scaled, i.e., Σ_jj ≤ 1 for 1 ≤ j ≤ d. Let A = (a⊤_1; . . . ; a⊤_N) and y = (y_1; . . . ; y_N). Our goal is to recover x̄ from the knowledge of A and y. To this end, we may choose F(x) = ½∥y − Ax∥². Let σ_min(Σ) and σ_max(Σ) be the smallest and largest singular values of Σ, respectively.
Then it is known that with high probability, F(x) satisfies the RSC and RSS properties at sparsity level K with parameters

ρ−_K = σ_min(Σ) − C₁·(K log d)/N,  ρ+_K = σ_max(Σ) + C₂·(K log d)/N,  (3)

respectively. It is also known that with high probability,

∥∇_K F(x̄)∥ ≤ 2ω√((K log d)/N).  (4)

See [1] for a detailed discussion. With these probabilistic arguments at hand, we investigate the sufficient conditions under which the preceding deterministic results hold. For Prop. 2, recall that the sparsity level of RSC and RSS is 2s. Hence, if we pick the sample size N = q·2C₁s log d/σ_min(Σ) for some q > 1, then

((4√2 + 2√κ_{2s})/ρ−_{2s}) ∥∇_{2s}F(x̄)∥ ≤ 4ω( 2√2 + √( (σ_max(Σ)/σ_min(Σ))·(1 + C₂/(qC₁))/(1 − 1/q) ) ) / ( (1 − 1/q)√(qC₁σ_min(Σ)) ).

The right-hand side is monotonically decreasing in q, so as soon as we pick q large enough, it becomes smaller than x̄_min. To be concrete, suppose the covariance matrix Σ is the identity, for which σ_min(Σ) = σ_max(Σ) = 1, and that q ≥ 2. This gives the upper bound

((4√2 + 2√κ_{2s})/ρ−_{2s}) ∥∇_{2s}F(x̄)∥ ≤ 8ω(2√2 + √(2 + C₂/C₁))/√(qC₁).

Thus, to fulfill the x̄_min-condition in Prop. 2, it suffices to pick

q = max{ 2, ( 8ω(2√2 + √(2 + C₂/C₁)) / (√C₁ · x̄_min) )² }.

Theorem 3 essentially asks for a well-conditioned design matrix at sparsity level 2s + r. Note that (3) implies κ_{2s+r} ≥ σ_max(Σ)/σ_min(Σ), which in turn requires a well-conditioned covariance matrix. Thus, to guarantee that κ_{2s+r} ≤ 1 + ε for some ε > 0, it suffices to choose Σ such that σ_max(Σ)/σ_min(Σ) < 1 + ε and pick N = q·C₁(2s + r) log d/σ_min(Σ) with

q = (1 + ε + C₁⁻¹C₂·σ_max(Σ)/σ_min(Σ)) / (1 + ε − σ_max(Σ)/σ_min(Σ)).

Finally, Theorem 4 asserts support inclusion by expanding the support size of the iterates. Suppose that η = 1/(2ρ+_{2k}), which results in k ≥ s + (16κ²_{2k} + 1) min{r, s}. Given that the condition number κ_{2k} is always greater than 1, we can pick k ≥ s + 20κ²_{2k} min{r, s}.
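Bound (4) can be sanity-checked by Monte Carlo. With the design normalized so that the Hessian is A⊤A/N (i.e., taking F(x) = ∥y − Ax∥²/(2N)) and identity covariance, the gradient at the true signal is −A⊤e/N, and the norm of its K largest entries should fall below the right-hand side of (4). The sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, K, omega = 500, 100, 10, 0.5
A = rng.standard_normal((N, d))          # rows a_i, identity covariance
e = omega * rng.standard_normal(N)       # N(0, omega^2) noise

# gradient of F(x) = ||y - Ax||^2 / (2N) at the true signal is -A^T e / N
g = A.T @ e / N
topK = np.sort(np.abs(g))[-K:]           # K largest-magnitude entries
grad_norm = np.sqrt(np.sum(topK ** 2))   # ||grad_K F(x_bar)||
bound = 2 * omega * np.sqrt(K * np.log(d) / N)   # right-hand side of (4)
```

Each coordinate of g is roughly N(0, ω²/N), so even the K largest of d such entries have norm of order ω√(K log d / N), comfortably below the constant-2 bound.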
At first sight, this may seem odd, in that k depends on the condition number κ_{2k}, which itself relies on the choice of k. In the following, we present concrete sample complexities showing that this condition can be met. We focus on two extreme cases: r = 1 and r = s. For r = 1, we require k ≥ s + 20κ²_{2k}. Let us pick N = 4C₁k log d/σ_min(Σ). In this way, we obtain ρ−_{2k} = ½σ_min(Σ) and ρ+_{2k} ≤ (1 + C₂/(2C₁))σ_max(Σ). It then follows that the condition number of the design matrix satisfies κ_{2k} ≤ (2 + C₂/C₁)·σ_max(Σ)/σ_min(Σ). Consequently, we can set the parameter

k = s + 20(2 + C₂/C₁)²·(σ_max(Σ)/σ_min(Σ))².

Note that the above quantities depend only on the covariance matrix. Again, if Σ is the identity matrix, the sample complexity is O(s log d). For r = s, likewise k ≥ 20κ²_{2k}s suffices. Following the deduction above, we get

k = 20(2 + C₂/C₁)²·(σ_max(Σ)/σ_min(Σ))²·s.

3 Proof Sketch

We sketch the proof and list some useful lemmas which may be of independent interest. The high-level technique follows the recent work of [4], which performs an RIP analysis for compressed sensing. For our purpose, however, we have to deal with the freedom parameter r as well as the RIP-free condition. We also need to generalize the arguments in [4] to show support recovery for arbitrary sparse signals. Indeed, we prove the following lemma, which is crucial for our analysis. Below we assume without loss of generality that the elements of x̄ are in descending order of magnitude.

Lemma 5. Consider the PHT(r) algorithm. Assume that F(x) is ρ−_{2k}-RSC and ρ+_{2k}-RSS. Further assume that the sequence {x^t}_{t≥0} satisfies

∥x^t − x̄∥ ≤ α·β^t ∥x⁰ − x̄∥ + ψ₁,  (5)
∥x^t − x̄∥ ≤ γ∥x̄_{S̄^t}∥ + ψ₂,  (6)

for positive α, ψ₁, γ, ψ₂ and 0 < β < 1, where S̄^t denotes the complement of S^t. Suppose that at the n-th iteration (n ≥ 0), S^n contains the indices of the top p (in magnitude) elements of x̄. Then, for any integer 1 ≤ q ≤ s − p, there exists an integer ∆ ≥ 1, determined by

√2 |x̄_{p+q}| > αγ·β^{∆−1} ∥x̄_{{p+1,...,s}}∥ + Ψ,  where Ψ = αψ₂ + ψ₁ + (1/ρ−_{2k})∥∇_{2k}F(x̄)∥,

such that S^{n+∆} contains the indices of the top p + q elements of x̄, provided that Ψ ≤ √2·λ·x̄_min for some constant λ ∈ (0, 1).

We isolate this lemma here because we find it inspiring and general. It states that, under proper conditions, once the sequence is shown to satisfy (5) and (6), PHT(r) captures more correct indices after a few iterations. Note that condition (5) states that the sequence contracts at a geometric rate, while condition (6) follows immediately from the fully corrective step (i.e., minimizing F(x) over the new support set). The next theorem concludes that under the conditions of Lemma 5, the total iteration complexity for support recovery is proportional to the sparsity of the underlying signal.

Theorem 6. Assume the same conditions as in Lemma 5. Then PHT(r) successfully identifies the support of x̄ using

( log 2/(2 log(1/β)) + log(αγ/(1 − λ))/log(1/β) + 2 ) ∥x̄∥₀

iterations.

The detailed proofs of these two results are given in the appendix. Armed with them, it remains to show that PHT(r) satisfies condition (5) under the different settings.

Proof Sketch for Prop. 2. We start by comparing F(z^t_{S^t}) and F(x^{t−1}). For this sake, we record several important properties. First, due to the fully corrective step, ∇F(x^{t−1}) vanishes on S^{t−1}. That means z^t_Ω = x^{t−1}_Ω for any subset Ω ⊂ S^{t−1}, and z^t_Ω = −η∇_Ω F(x^{t−1}) for any Ω outside S^{t−1}. We also note that, due to the PHT operator, no element of z^t_{S^t\S^{t−1}} is smaller in magnitude than any element of z^t_{S^{t−1}\S^t}. These facts, together with the RSS condition, give

F(x^t) − F(x^{t−1}) ≤ F(z^t_{S^t}) − F(x^{t−1}) ≤ −η(1 − ηρ+_{2s}) ∥∇F(x^{t−1})_{S^t\S^{t−1}}∥².

Since S^t \ S^{t−1} consists of the top elements of ∇F(x^{t−1}), we can show that
∥∇F(x^{t−1})_{S^t\S^{t−1}}∥² ≥ ( 2ρ−_{2s}|S^t \ S^{t−1}| / (|S^t \ S^{t−1}| + |S \ S^{t−1}|) ) (F(x^{t−1}) − F(x̄)).

Using the argument of Prop. 2, we establish the linear convergence of the iterates, i.e., condition (5). The result then follows.

Proof Sketch for Theorem 3. To prove this theorem, we present a more careful analysis of the problem structure. In particular, let T = supp(∇F(x^{t−1}), r), J^t = S^{t−1} ∪ T, and consider the elements of ∇F(x^{t−1}). Since T contains the largest elements, any element outside T is smaller than those of T. We may then compare the elements of ∇F(x^{t−1}) on T \ S and S \ T. Though these sets have different numbers of components, we can relate the averaged energies:

(1/|T \ S|) ∥∇F(x^{t−1})_{T\S}∥² ≥ (1/|S \ T|) ∥∇F(x^{t−1})_{S\T}∥².

Using this inequality followed by some standard relaxation, we can bound
∥x̄_{J̄^t}∥ in terms of ∥x^{t−1} − x̄∥ as follows, where J̄^t denotes the complement of J^t.

Lemma 7. Assume that F(x) satisfies the properties of RSC and RSS at sparsity level k + s + r. Let ρ− := ρ−_{k+s+r} and ρ+ := ρ+_{k+s+r}. Consider the support set J^t = S^{t−1} ∪ supp(∇F(x^{t−1}), r). We have, for any 0 < θ ≤ 1/ρ+,
∥x̄_{J̄^t}∥ ≤ ν(1 − θρ−) ∥x^{t−1} − x̄∥ + (ν/ρ−) ∥∇_{s+r}F(x̄)∥,

where ν = √(s − r) + 2. In particular, picking θ = 1/ρ+ gives
∥x̄_{J̄^t}∥ ≤ ν(1 − 1/κ) ∥x^{t−1} − x̄∥ + (ν/ρ−) ∥∇_{s+r}F(x̄)∥.

Note that the lemma also applies to two-stage thresholding algorithms (e.g., CoSaMP [15]) whose first step is expanding the support set. On the other hand, we also know that ∥z^t_{J^t\S^t}∥ ≤ ∥z^t_{J^t\S}∥, because J^t \ S^t contains the r smallest elements of z^t_{J^t}. It then follows that ∥x̄_{J^t\S^t}∥ can be upper bounded by ∥x^{t−1} − x̄∥. Finally, we note that S̄^t = (J^t \ S^t) ∪ J̄^t. Hence, (5) follows.

Proof Sketch for Theorem 4. The proof idea of Theorem 4 is inspired by [31], though we give a tighter and more general analysis. We first observe that S^t \ S^{t−1} contains larger elements than S^{t−1} \ S^t, due to PHT. This enables us to show that

F(x^t) − F(x^{t−1}) ≤ −((1 − ηρ+_{2k})/(2η)) ∥z^t_{S^t} − x^{t−1}∥² ≤ −(η(1 − ηρ+_{2k})/2) ∥∇F(x^{t−1})_{S^t\S^{t−1}}∥².

Then we prove the claim
∥∇F(x^{t−1})_{S^t\S^{t−1}}∥² ≥ ρ−_{2k} (F(x^{t−1}) − F(x̄)).

To this end, we consider whether r is larger than s. If r ≥ s, then it is possible that |S^t \ S^{t−1}| ≥ s. In this case, using the RSC condition and the PHT property, we can show that

∥∇F(x^{t−1})_{S^t\S^{t−1}}∥² ≥ ∥∇F(x^{t−1})_{S\S^{t−1}}∥² ≥ ρ−_{2k} (F(x^{t−1}) − F(x̄)).

If |S^t \ S^{t−1}| < s ≤ r, then the above does not hold. But we may partition the set S \ S^{t−1} as the union of T₁ = S \ (S^t ∪ S^{t−1}) and T₂ = (S^t \ S^{t−1}) ∩ S, and show that the ℓ₂-norm of ∇F(x^{t−1}) on T₁ is smaller than that on T₂ if k = s + κ²s. In addition, the RSC condition gives

(ρ−_{2k}/4) ∥x̄ − x^{t−1}∥² ≤ F(x̄) − F(x^{t−1}) + (1/ρ−_{2k}) ∥∇F(x^{t−1})_{T₁}∥² + (1/ρ−_{2k}) ∥∇F(x^{t−1})_{S^t\S^{t−1}}∥².

Since T₂ ⊂ S^t \ S^{t−1}, rearranging the terms yields the desired bound. The case r < s follows in a similar way. The proof is complete.

4 Simulation

We complement our theoretical results with numerical experiments. We are interested in two aspects: first, the number of iterations required to identify the support of an s-sparse signal; second, the tradeoff between the iteration count and the percentage of success resulting from different choices of the freedom parameter r. We consider the compressed sensing model y = Ax̄ + 0.01e, where the dimension d = 200 and the entries of A and e are i.i.d. normal variables. Given a sparsity level s, we first uniformly choose the support of x̄, and assign values to the non-zeros with i.i.d. normals. There are two configurations: the sparsity s and the sample size N. Given s and N, we independently generate 100 signals and test PHT(r) on them. We say PHT(r) succeeds in a trial if it returns an iterate with the correct support within 10,000 iterations; otherwise we mark the trial as a failure. Reported iteration numbers are counted only over the successful trials. The step size η is fixed to one, though it could be tuned by cross-validation for better performance.

Figure 1: Iteration number and success percentage against sparsity and sample size. The first panel shows that the iteration number grows linearly with the sparsity. The choice r = 5 suffices to guarantee a minimum iteration complexity.
The second panel shows comparable statistical performance for the different choices of r. The third panel illustrates how the iteration number changes with the sample size, and the last panel depicts a phase transition.

To study how the iteration number scales with the sparsity in practice, we fix N = 200 and vary s from 1 to 100, testing different freedom parameters r on these signals. The results are shown in the leftmost subfigure of Figure 1. As our theory predicts, PHT(r) precisely identifies the true support within O(s) iterations. In the second subfigure, we plot the percentage of success against the sparsity. The curves for the different values of r lie on top of each other, possibly because we used a sufficiently large sample size. Next, we fix s = 10 and vary N from 1 to 200. Surprisingly, in the rightmost subfigure, we do not observe any performance degradation from using a large freedom parameter. We therefore conjecture that the x̄_min-condition we established can be refined. Figure 1 also illustrates an interesting phenomenon: beyond a particular threshold, say r = 5, increasing r does not significantly reduce the iteration number. This cannot be explained by the theorems in this paper, and we leave it as a promising research direction.

5 Conclusion and Future Work

In this paper, we have presented a principled analysis of a family of hard thresholding algorithms. To facilitate our analysis, we appealed to the recently proposed partial hard thresholding operator. We have shown that under the RIP condition or the relaxed sparsity condition, the PHT(r) algorithm recovers the support of an arbitrary sparse signal x̄ within O(∥x̄∥₀ κ log κ) iterations, provided that a generalized signal-to-noise ratio condition is satisfied. On account of our unified analysis, we have established the best known bounds for HTP and OMPR. We have also shown that the simulation results agree with our finding that the iteration number is proportional to the sparsity.
There are several interesting future directions. First, it would be interesting to examine whether the logarithmic factor log κ in the iteration bound can be removed. Second, it would be useful to study RIP-free conditions for two-stage PHT algorithms such as CoSaMP. Finally, we pose the open question of whether one can improve the √κ factor in the x̄_min-condition.

Acknowledgements. The work is supported in part by NSF-Bigdata-1419210 and NSF-III-1360971. We thank the anonymous reviewers for valuable comments.

References

[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452–2482, 2012.
[2] S. Bahmani, B. Raj, and P. T. Boufounos. Greedy sparsity-constrained optimization. Journal of Machine Learning Research, 14(1):807–841, 2013.
[3] T. Blumensath and M. E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009.
[4] J.-L. Bouchot, S. Foucart, and P. Hitczenko. Hard thresholding pursuit algorithms: number of iterations. Applied and Computational Harmonic Analysis, 41(2):412–435, 2016.
[5] T. T. Cai and L. Wang. Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Information Theory, 57(7):4680–4688, 2011.
[6] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Information Theory, 51(12):4203–4215, 2005.
[7] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[8] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Information Theory, 55(5):2230–2249, 2009.
[9] I. Daubechies, M. Defrise, and C. D. Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413–1457, 2004.
[10] S. Foucart.
Hard thresholding pursuit: An algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543–2563, 2011.
[11] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkhäuser, 2013.
[12] P. Jain, A. Tewari, and I. S. Dhillon. Orthogonal matching pursuit with replacement. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pages 1215–1223, 2011.
[13] P. Jain, A. Tewari, and I. S. Dhillon. Partial hard thresholding. IEEE Trans. Information Theory, 63(5):3029–3038, 2017.
[14] P. Jain, A. Tewari, and P. Kar. On iterative hard thresholding methods for high-dimensional M-estimation. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems, pages 685–693, 2014.
[15] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, 2009.
[16] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, pages 1348–1356, 2009.
[17] S. Osher, F. Ruan, J. Xiong, Y. Yao, and W. Yin. Sparse recovery via differential inclusions. Applied and Computational Harmonic Analysis, 41(2):436–469, 2016.
[18] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[19] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Trans. Information Theory, 59(1):482–494, 2013.
[20] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241–2259, 2010.
[21] M. Rudelson and S. Zhou.
Reconstruction from anisotropic random measurements. IEEE Trans. Information Theory, 59(6):3434–3447, 2013. [22] J. Shen and P. Li. A tight bound of hard thresholding. arXiv preprint arXiv:1605.01656, 2016. [23] J. Shen and P. Li. On the iteration complexity of support recovery via hard thresholding pursuit. In Proceedings of the 34th International Conference on Machine Learning, pages 3115–3124, 2017. [24] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996. [25] J. A. Tropp. Greed is good: algorithmic results for sparse approximation. IEEE Trans. Information Theory, 50(10):2231–2242, 2004. [26] J. A. Tropp and S. J. Wright. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98(6):948–958, 2010. [27] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55(5):2183– 2202, 2009. [28] J. Wang, S. Kwon, P. Li, and B. Shim. Recovery of sparse signals via generalized orthogonal matching pursuit: A new analysis. IEEE Trans. Signal Processing, 64(4):1076–1089, 2016. [29] M. Yuan and Y. Lin. On the non-negative garrotte estimator. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2):143–161, 2007. [30] X.-T. Yuan, P. Li, and T. Zhang. Gradient hard thresholding pursuit for sparsity-constrained optimization. In Proceedings of the 31st International Conference on Machine Learning, pages 127–135, 2014. [31] X.-T. Yuan, P. Li, and T. Zhang. Exact recovery of hard thresholding pursuit. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems, pages 3558–3566, 2016. [32] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555–568, 2009. [33] T. Zhang. 
Some sharp performance bounds for least squares regression with L1 regularization. The Annals of Statistics, 37(5A):2109–2144, 2009.
[34] T. Zhang. Sparse recovery with orthogonal matching pursuit under RIP. IEEE Trans. Information Theory, 57(9):6215–6221, 2011.
[35] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
Selective Classification for Deep Neural Networks

Yonatan Geifman, Computer Science Department, Technion – Israel Institute of Technology, yonatan.g@cs.technion.ac.il
Ran El-Yaniv, Computer Science Department, Technion – Israel Institute of Technology, rani@cs.technion.ac.il

Abstract

Selective classification techniques (also known as reject option) have not yet been considered in the context of deep neural networks (DNNs). These techniques can potentially significantly improve DNNs prediction performance by trading-off coverage. In this paper we propose a method to construct a selective classifier given a trained neural network. Our method allows a user to set a desired risk level. At test time, the classifier rejects instances as needed, to grant the desired risk (with high probability). Empirical results over CIFAR and ImageNet convincingly demonstrate the viability of our method, which opens up possibilities to operate DNNs in mission-critical applications. For example, using our method an unprecedented 2% error in top-5 ImageNet classification can be guaranteed with probability 99.9%, and almost 60% test coverage.

1 Introduction

While self-awareness remains an elusive, hard-to-define concept, a rudimentary kind of self-awareness, which is much easier to grasp, is the ability to know what you don't know, which can make you smarter. The subfield dealing with such capabilities in machine learning is called selective prediction (also known as prediction with a reject option), which has been around for 60 years [1, 5]. The main motivation for selective prediction is to reduce the error rate by abstaining from prediction when in doubt, while keeping coverage as high as possible. An ultimate manifestation of selective prediction is a classifier equipped with a "dial" that allows for precise control of the desired true error rate (which should be guaranteed with high probability), while keeping the coverage of the classifier as high as possible.
Many present and future tasks performed by (deep) predictive models can be dramatically enhanced by high quality selective prediction. Consider, for example, autonomous driving. Since we cannot rely on the advent of "singularity", where AI is superhuman, we must manage with standard machine learning, which sometimes errs. But what if our deep autonomous driving network were capable of knowing that it doesn't know how to respond in a certain situation, disengaging itself in advance and alerting the human driver (hopefully not sleeping at that time) to take over? There are plenty of other mission-critical applications that would likewise greatly benefit from effective selective prediction. The literature on the reject option is quite extensive and mainly discusses rejection mechanisms for various hypothesis classes and learning algorithms, such as SVM, boosting, and nearest-neighbors [8, 13, 3]. The reject option has rarely been discussed in the context of neural networks (NNs), and so far has not been considered for deep NNs (DNNs). Existing NN works consider a cost-based rejection model [2, 4], whereby the costs of misclassification and abstaining must be specified, and a rejection mechanism is optimized for these costs. The proposed mechanism for classification is based on applying a carefully selected threshold on the maximal neuronal response of the softmax layer. We call this mechanism softmax response (SR). The cost model can be very useful when we can quantify the involved costs, but in many applications of interest meaningful costs are hard to reason about. (Imagine trying to set up appropriate rejection/misclassification costs for disengaging an autopilot driving system.)

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Here we consider the alternative risk-coverage view for selective classification discussed in [5].
Ensemble techniques have been considered for selective (and confidence-rated) prediction, where rejection mechanisms are typically based on ensemble statistics [18, 7]. However, such techniques are presently hard to realize in the context of DNNs, for which it can be very costly to train sufficiently many ensemble members. Recently, Gal and Ghahramani [9] proposed an ensemble-like method for measuring uncertainty in DNNs which bypasses the need to train several ensemble members. Their method works by sampling multiple dropout applications of the forward pass to perturb the network prediction randomly. While this Monte-Carlo dropout (MC-dropout) technique was not mentioned in the context of selective prediction, it can be directly applied as a viable selective prediction method using a threshold, as we discuss here. In this paper we consider classification tasks, and our goal is to learn a selective classifier (f, g), where f is a standard classifier and g is a rejection function. The selective classifier has to allow full, guaranteed control over the true risk. The ideal method should be able to classify samples in production at any desired level of risk, with the optimal coverage rate. It is reasonable to assume that this optimal performance can only be obtained if the pair (f, g) is trained together. As a first step, however, we consider a simpler setting where a (deep) neural classifier f is already given, and our goal is to learn a rejection function g that will guarantee, with high probability, a desired error rate. To this end, we consider the above two known techniques for rejection (SR and MC-dropout), and devise a learning method that chooses an appropriate threshold that ensures the desired risk. For a given classifier f, confidence level δ, and desired risk r*, our method outputs a selective classifier (f, g) whose test error will be no larger than r* with probability at least 1 − δ.
Using the well-known VGG-16 architecture, we apply our method on CIFAR-10, CIFAR-100 and ImageNet (on ImageNet we also apply the RESNET-50 architecture). We show that both SR and dropout lead to extremely effective selective classification. On both the CIFAR datasets, these two mechanisms achieve nearly identical results. However, on ImageNet, the simpler SR mechanism is significantly superior. More importantly, we show that almost any desirable risk level can be guaranteed with a surprisingly high coverage. For example, an unprecedented 2% error in top-5 ImageNet classification can be guaranteed with probability 99.9%, and almost 60% test coverage.

2 Problem Setting

We consider a standard multi-class classification problem. Let X be some feature space (e.g., raw image data) and Y a finite label set, Y = {1, 2, 3, . . . , k}, representing k classes. Let P(X, Y) be a distribution over X × Y. A classifier f is a function f : X → Y, and the true risk of f w.r.t. P is

R(f|P) ≜ E_{P(X,Y)}[ℓ(f(x), y)],

where ℓ : Y × Y → R+ is a given loss function, for example the 0/1 error. Given a labeled set Sm = {(xi, yi)}_{i=1}^m ⊆ (X × Y), sampled i.i.d. from P(X, Y), the empirical risk of the classifier f is

r̂(f|Sm) ≜ (1/m) Σ_{i=1}^m ℓ(f(xi), yi).

A selective classifier [5] is a pair (f, g), where f is a classifier and g : X → {0, 1} is a selection function, which serves as a binary qualifier for f as follows:

(f, g)(x) ≜ f(x), if g(x) = 1; “don’t know”, if g(x) = 0.

Thus, the selective classifier abstains from prediction at a point x iff g(x) = 0. The performance of a selective classifier is quantified using coverage and risk. Fixing P, coverage, defined to be φ(f, g) ≜ E_P[g(x)], is the probability mass of the non-rejected region in X. The selective risk of (f, g) is

R(f, g) ≜ E_P[ℓ(f(x), y) g(x)] / φ(f, g).   (1)

Clearly, the risk of a selective classifier can be traded off for coverage.
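The coverage and selective-risk quantities defined above have direct sample estimates. The following is a minimal Python sketch with illustrative helper names, not the authors' code:

```python
def empirical_coverage(selected):
    """Estimate of phi(f, g): the fraction of samples that g accepts."""
    return sum(selected) / len(selected)

def empirical_selective_risk(losses, selected):
    """Estimate of R(f, g): average loss over the accepted samples only."""
    n_accepted = sum(selected)
    if n_accepted == 0:
        return 0.0  # zero coverage: treat the selective risk as vacuous
    return sum(l for l, s in zip(losses, selected) if s) / n_accepted

# Six samples: 0/1 losses of f, and the accept/reject decisions of g.
losses = [0, 1, 0, 0, 1, 0]
selected = [1, 1, 1, 0, 0, 1]
print(empirical_coverage(selected))                # 4/6 of the sample is covered
print(empirical_selective_risk(losses, selected))  # 1 error among 4 accepted: 0.25
```

Note that the denominator is the number of accepted samples, not the sample size, which is exactly the coverage normalization in equation (1).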
The entire performance profile of such a classifier can be specified by its risk-coverage curve, defined to be risk as a function of coverage [5]. Consider the following problem. We are given a classifier f, a training sample Sm, a confidence parameter δ > 0, and a desired risk target r∗ > 0. Our goal is to use Sm to create a selection function g such that the selective risk of (f, g) satisfies

Pr_{Sm} {R(f, g) > r∗} < δ,   (2)

where the probability is over training samples, Sm, sampled i.i.d. from the unknown underlying distribution P. Among all classifiers satisfying (2), the best ones are those that maximize the coverage. For a fixed f and a given class G (which will be discussed below), our goal in this paper is to select g ∈ G such that the selective risk R(f, g) satisfies (2) while the coverage φ(f, g) is maximized.

3 Selection with Guaranteed Risk Control

In this section, we present a general technique for constructing a selection function with guaranteed performance, based on a given classifier f and a confidence-rate function κf : X → R+ for f. We do not assume anything about κf; the interpretation is that κf can rank predictions, in the sense that if κf(x1) ≥ κf(x2), for x1, x2 ∈ X, then the confidence in the prediction f(x2) is not higher than the confidence in the prediction f(x1). In this section we are not concerned with the question of what makes a good κf (which is discussed in Section 4); our goal is to generate a selection function g with guaranteed performance for a given κf. For the remainder of this paper, the loss function ℓ is taken to be the standard 0/1 loss (unless explicitly mentioned otherwise). Let Sm = {(xi, yi)}_{i=1}^m ⊆ (X × Y)^m be a training set, assumed to be sampled i.i.d. from an unknown distribution P(X, Y). Also given are a confidence parameter δ > 0 and a desired risk target r∗ > 0.
Based on Sm, our goal is to learn a selection function g such that the selective risk of the classifier (f, g) satisfies (2). For θ > 0, we define the selection function gθ : X → {0, 1} as

gθ(x) = gθ(x|κf) ≜ 1, if κf(x) ≥ θ; 0, otherwise.   (3)

For any selective classifier (f, g), we define its empirical selective risk with respect to the labeled sample Sm as

r̂(f, g|Sm) ≜ [ (1/m) Σ_{i=1}^m ℓ(f(xi), yi) g(xi) ] / φ̂(f, g|Sm),

where φ̂ is the empirical coverage, φ̂(f, g|Sm) ≜ (1/m) Σ_{i=1}^m g(xi). For any selection function g, denote by g(Sm) the g-projection of Sm, g(Sm) ≜ {(x, y) ∈ Sm : g(x) = 1}. The selection with guaranteed risk (SGR) learning algorithm appears in Algorithm 1. The algorithm receives as input a classifier f, a confidence-rate function κf, a confidence parameter δ > 0, a target risk r∗ (see footnote 1), and a training set Sm. The algorithm performs a binary search to find the optimal bound guaranteeing the required risk with sufficient confidence. The SGR algorithm outputs a selective classifier (f, g) and a risk bound b∗. In the rest of this section we analyze the SGR algorithm. We make use of the following lemma, which gives the tightest possible numerical generalization bound for a single classifier, based on a test over a labeled sample.

Lemma 3.1 (Gascuel and Caraux, 1992, [10]) Let P be any distribution and consider a classifier f whose true error w.r.t. P is R(f|P). Let 0 < δ < 1 be given and let r̂(f|Sm) be the empirical error of f w.r.t. the labeled set Sm, sampled i.i.d. from P. Let B∗(r̂, δ, Sm) be the solution b of the following equation:

Σ_{j=0}^{m·r̂(f|Sm)} C(m, j) b^j (1 − b)^{m−j} = δ,   (4)

where C(m, j) is the binomial coefficient. Then, Pr_{Sm}{R(f|P) > B∗(r̂, δ, Sm)} < δ.

We emphasize that the numerical bound of Lemma 3.1 is the tightest possible in this setting. As discussed in [10], the analytic bounds derived using, e.g., Hoeffding's inequality (or other concentration inequalities) approximate this numerical bound and incur some slack.
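The bound of Lemma 3.1 has no closed form, but the left-hand side of equation (4) is the binomial CDF, which decreases monotonically in b, so it can be solved by bisection; the SGR binary search then wraps this solver. The following is an illustrative, simplified sketch (it assumes 0/1 losses and distinct confidence scores, and returns the last threshold whose bound met r∗), not the authors' implementation:

```python
import math

def binom_cdf(k, m, b):
    # Left-hand side of equation (4): P[Bin(m, b) <= k].
    return sum(math.comb(m, j) * b**j * (1 - b)**(m - j) for j in range(k + 1))

def bound_b_star(r_hat, delta, m, tol=1e-10):
    # Solve binom_cdf(m * r_hat, m, b) = delta for b by bisection
    # (the CDF decreases monotonically in b, so the root is unique).
    k = int(m * r_hat)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if binom_cdf(k, m, mid) > delta:
            lo = mid  # CDF still above delta: the bound must be larger
        else:
            hi = mid
    return hi

def sgr(kappa, losses, delta, r_star):
    # Binary search over thresholds, in the spirit of SGR. kappa[i] is the
    # confidence score of sample i and losses[i] its 0/1 error.
    pairs = sorted(zip(kappa, losses))              # ascending confidence
    scores = [s for s, _ in pairs]
    errs = [e for _, e in pairs]
    m = len(pairs)
    k_iters = math.ceil(math.log2(m))
    zmin, zmax = 0, m - 1
    theta, bound = scores[0], 1.0                   # vacuous fallback
    for _ in range(k_iters):
        z = (zmin + zmax + 1) // 2
        accepted = errs[z:]                         # samples with kappa >= scores[z]
        r_hat = sum(accepted) / len(accepted)
        b_i = bound_b_star(r_hat, delta / k_iters, len(accepted))
        if b_i < r_star:
            zmax, theta, bound = z, scores[z], b_i  # feasible: try more coverage
        else:
            zmin = z                                # infeasible: raise the threshold
    return theta, bound
```

For r̂ = 0 the bound reduces to b = 1 − δ^{1/m}, which is a quick sanity check on the bisection.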
Footnote 1: Whenever the triplet Sm, δ, and r∗ is infeasible, the algorithm will return a vacuous solution with zero coverage.

Algorithm 1 Selection with Guaranteed Risk (SGR)
1: SGR(f, κf, δ, r∗, Sm)
2: Sort Sm according to κf(xi), xi ∈ Sm (and now assume w.l.o.g. that indices reflect this ordering).
3: zmin = 1; zmax = m
4: for i = 1 to k ≜ ⌈log2 m⌉ do
5:   z = ⌈(zmin + zmax)/2⌉
6:   θ = κf(xz)
7:   gi = gθ  {see (3)}
8:   r̂i = r̂(f, gi|Sm)
9:   b∗i = B∗(r̂i, δ/⌈log2 m⌉, gi(Sm))  {see Lemma 3.1}
10:  if b∗i < r∗ then
11:    zmax = z
12:  else
13:    zmin = z
14:  end if
15: end for
16: Output: (f, gk) and the bound b∗k.

For any selection function g, let Pg(X, Y) be the projection of P over g; that is, Pg(X, Y) ≜ P(X, Y | g(X) = 1). The following theorem is a uniform convergence result for the SGR procedure.

Theorem 3.2 (SGR) Let Sm be a given labeled set, sampled i.i.d. from P, and consider an application of the SGR procedure. For k ≜ ⌈log2 m⌉, let (f, gi) and b∗i, i = 1, . . . , k, be the selective classifier and bound computed by SGR in its ith iteration. Then,

Pr_{Sm} {∃i : R(f|Pgi) > B∗(r̂i, δ/k, gi(Sm))} < δ.

Proof Sketch: For any i = 1, . . . , k, let mi = |gi(Sm)| be the random variable giving the number of accepted examples from Sm on the ith iteration of SGR. For any fixed value of 0 ≤ mi ≤ m, by Lemma 3.1, applied with the projected distribution Pgi(X, Y) and a sample Smi consisting of mi examples drawn from the product distribution (Pgi)^mi,

Pr_{Smi ∼ (Pgi)^mi} {R(f|Pgi) > B∗(r̂i, δ/k, gi(Sm))} < δ/k.   (5)

The sampling distribution of mi labeled examples in SGR is determined by the following process: sample a set Sm of m examples from the product distribution P^m, and then use gi to filter Sm, resulting in a (random) number mi of examples. Therefore, the left-hand side of (5) equals

Pr_{Sm ∼ P^m} {R(f|Pgi) > B∗(r̂i, δ/k, gi(Sm)) | gi(Sm) = mi}.

Clearly, R(f|Pgi) = E_{Pgi}[ℓ(f(x), y)] = E_P[ℓ(f(x), y) gi(x)] / φ(f, gi) = R(f, gi).
Therefore,

Pr_{Sm}{R(f, gi) > B∗(r̂i, δ/k, gi(Sm))}
  = Σ_{n=0}^m Pr_{Sm}{R(f, gi) > B∗(r̂i, δ/k, gi(Sm)) | gi(Sm) = n} · Pr{gi(Sm) = n}
  ≤ (δ/k) Σ_{n=0}^m Pr{gi(Sm) = n} = δ/k.

An application of the union bound completes the proof. □

4 Confidence-Rate Functions for Neural Networks

Consider a classifier f, assumed to be trained for some unknown distribution P. In this section we consider two confidence-rate functions, κf, based on previous work [9, 2]. We note that an ideal confidence-rate function κf(x) for f should reflect true loss monotonicity: given (x1, y1) ∼ P and (x2, y2) ∼ P, we would like the following to hold: κf(x1) ≤ κf(x2) if and only if ℓ(f(x1), y1) ≥ ℓ(f(x2), y2). Obviously, one cannot expect to have an ideal κf. Given a confidence-rate function κf, a useful way to analyze its effectiveness is to draw the risk-coverage curve of its induced rejection function, gθ(x|κf), as defined in (3). This risk-coverage curve shows the relationship between θ and R(f, gθ). For example, see Figure 2(a), where two (nearly identical) risk-coverage curves are plotted. While the confidence-rate functions we consider are not ideal, they will be shown empirically to be extremely effective (see footnote 2). The first confidence-rate function we consider has been around in the NN folklore for years, and is explicitly mentioned by [2, 4] in the context of the reject option. It works as follows: given any neural network classifier f(x) whose last layer is a softmax, we denote by f(x|j) the soft response output for the jth class. The confidence-rate function is defined as κ ≜ max_{j∈Y} f(x|j). We call this function softmax response (SR). Softmax responses are often treated as probabilities (responses are positive and sum to 1), but some authors criticize this approach [9].
Noting that, for our purposes, the ideal confidence-rate function need only provide a coherent ranking rather than absolute probability values, softmax responses are potentially good candidates for relative confidence rates. We are not familiar with a rigorous explanation for SR, but it can be intuitively motivated by observing neuron activations. For example, Figure 1 depicts the average response value of every neuron in the second-to-last layer for true positives and false positives for the class ‘8’ in the MNIST dataset (qualitatively similar behavior occurs in all MNIST classes). The x-axis corresponds to neuron indices in that layer (1–128); the y-axis shows the average responses, where green squares are averages of true positives, boldface squares highlight strong responses, and red circles correspond to the average response of false positives. It is evident that the true positive activation response in the active neurons is much higher than the false positive response, which is expected to be reflected in the final softmax layer response. Moreover, it can be seen that the large activation values are spread over many neurons, indicating that the confidence signal arises from numerous patterns detected by neurons in this layer. Qualitatively similar behavior can be observed in deeper layers.

Figure 1: Average response values of neuron activations for class “8” on the MNIST dataset; green squares: true positives; red circles: false positives.

The MC-dropout technique we consider was recently proposed to quantify uncertainty in neural networks [9]. To estimate uncertainty for a given instance x, we run a number of feed-forward iterations over x, each applied with dropout in the last fully connected layer. Uncertainty is taken as the variance in the responses of the neuron corresponding to the most probable class. We take minus this uncertainty as the MC-dropout confidence rate.
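Both confidence-rate functions are cheap to compute on top of an existing network. A minimal sketch follows; the `forward_pass` callable standing in for a dropout-perturbed network is a hypothetical placeholder, and for simplicity the "most probable class" is taken from the averaged sampled responses rather than a separate deterministic pass:

```python
import math

def softmax_response(logits):
    # SR: kappa(x) = max_j softmax(f(x))_j, computed in a numerically stable way.
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    return max(exps) / sum(exps)

def mc_dropout_confidence(forward_pass, x, n_samples=100):
    # Minus the variance, across dropout-perturbed forward passes, of the
    # response for the (on average) most probable class; higher = more confident.
    runs = [forward_pass(x) for _ in range(n_samples)]
    top = max(range(len(runs[0])), key=lambda j: sum(r[j] for r in runs))
    vals = [r[top] for r in runs]
    mean = sum(vals) / len(vals)
    return -sum((v - mean) ** 2 for v in vals) / len(vals)
```

Because only the ranking of κ values matters for the induced rejection function gθ, any monotone transformation of these scores yields the same selective classifiers.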
5 Empirical Results

In Section 4 we introduced the SR and MC-dropout confidence-rate functions, defined for a given model f. We trained VGG models [17] for CIFAR-10, CIFAR-100 and ImageNet. For each of these models f, we considered both the SR and MC-dropout confidence-rate functions, κf, and the induced rejection function, gθ(x|κf). (Footnote 2: While Theorem 3.2 always holds, we note that if κf is severely skewed (far from ideal), the bound of the resulting selective classifier can be far from the target risk.) In Figure 2 we present the risk-coverage curves obtained for each of the three datasets. These curves were obtained by computing a validation risk and coverage for many θ values. It is evident that the risk-coverage profiles of SR and MC-dropout are nearly identical for both CIFAR datasets. For the ImageNet set we plot the curves corresponding to the top-1 (dashed curves) and top-5 tasks (solid curves). On this dataset, we see that SR is significantly better than MC-dropout on both tasks. For example, in the top-1 task at 60% coverage, SR rejection has 10% error while MC-dropout rejection incurs more than 20% error. But most importantly, these risk-coverage curves show that selective classification can potentially be used to dramatically reduce the error on all three datasets. Due to the relative advantage of SR, in the rest of our experiments we focus only on the SR rating.

Figure 2: Risk-coverage curves for (a) CIFAR-10, (b) CIFAR-100, and (c) ImageNet (top-1 task: dashed curves; top-5 task: solid curves); the SR method is in blue and MC-dropout in red.

We now report on experiments with our SGR routine, which we apply on each of the datasets to construct high-probability risk-controlled selective classifiers for the three datasets.
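A risk-coverage curve of the kind shown in Figure 2 can be traced by sweeping the threshold over all observed confidence values; a minimal sketch (illustrative, not the authors' evaluation code):

```python
def risk_coverage_curve(kappa, losses):
    # Sweep the threshold over all observed confidence values, most confident
    # first, recording (coverage, selective risk) points of the curve.
    # kappa[i] is the confidence score of sample i; losses[i] its 0/1 error.
    ranked = sorted(zip(kappa, losses), key=lambda t: -t[0])
    curve, errors = [], 0
    for n, (_, loss) in enumerate(ranked, start=1):
        errors += loss
        curve.append((n / len(ranked), errors / n))
    return curve
```

Each prefix of the ranking corresponds to one threshold θ, so the full curve costs a single sort plus one linear pass.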
Table 1: Risk control results for CIFAR-10 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.01 | 0.0079 | 0.7822 | 0.0092 | 0.7856 | 0.0099
0.02 | 0.0160 | 0.8482 | 0.0149 | 0.8466 | 0.0199
0.03 | 0.0260 | 0.8988 | 0.0261 | 0.8966 | 0.0298
0.04 | 0.0362 | 0.9348 | 0.0380 | 0.9318 | 0.0399
0.05 | 0.0454 | 0.9610 | 0.0486 | 0.9596 | 0.0491
0.06 | 0.0526 | 0.9778 | 0.0572 | 0.9784 | 0.0600

5.1 Selective Guaranteed Risk for CIFAR-10

We now consider CIFAR-10; see [14] for details. We used the VGG-16 architecture [17] and adapted it to the CIFAR-10 dataset by adding massive dropout, exactly as described in [15]. We used data augmentation containing horizontal flips, vertical and horizontal shifts, and rotations, and trained using SGD with momentum of 0.9, an initial learning rate of 0.1, and weight decay of 0.0005. We multiplicatively dropped the learning rate by 0.5 every 25 epochs, and trained for 250 epochs. With this setting we reached a validation accuracy of 93.54%, and used the resulting network f10 as the basis for our selective classifier. We applied the SGR algorithm on f10 with the SR confidence-rating function, where the training set for SGR, Sm, was taken as half of the standard CIFAR-10 validation set, which was randomly split into two equal parts. The other half, which was not consumed by SGR for training, was reserved for testing the resulting bounds. Thus, the training and test sets each contained approximately 5,000 samples. We applied the SGR routine with several desired risk values, r∗, and obtained, for each such r∗, a corresponding selective classifier and risk bound b∗. All our applications of the SGR routine (for this dataset and the rest) were with a particularly small confidence level δ = 0.001 (see footnote 3). We then applied these selective classifiers to the reserved test set, and computed, for each selective classifier, the test risk and test coverage.
The results are summarized in Table 1, where we also include the train risk and train coverage that were computed, for each selective classifier, over the training set. Observing the results in Table 1, we see that the risk bound, b∗, is always very close to the target risk, r∗. Moreover, the test risk is always bounded above by the bound b∗, as required. We compared this result to a basic baseline in which the threshold is defined to be the value that maximizes coverage while keeping the train error smaller than r∗. For this simple baseline we found that in over 50% of the cases (1000 random train/test splits), the bound r∗ was violated over the test set, with a mean violation of 18% relative to the requested r∗. Finally, we see that with this method it is possible to guarantee an amazingly small 1% error while covering more than 78% of the domain.

5.2 Selective Guaranteed Risk for CIFAR-100

Using the same VGG architecture (now adapted to 100 classes), we trained a model for CIFAR-100 while applying the same data augmentation routine as in the CIFAR-10 experiment. Following precisely the same experimental design as in the CIFAR-10 case, we obtained the results in Table 2.

Table 2: Risk control results for CIFAR-100 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.02 | 0.0119 | 0.2010 | 0.0187 | 0.2134 | 0.0197
0.05 | 0.0425 | 0.4286 | 0.0413 | 0.4450 | 0.0499
0.10 | 0.0927 | 0.5736 | 0.0938 | 0.5952 | 0.0998
0.15 | 0.1363 | 0.6546 | 0.1327 | 0.6752 | 0.1498
0.20 | 0.1872 | 0.7650 | 0.1810 | 0.7778 | 0.1999
0.25 | 0.2380 | 0.8716 | 0.2395 | 0.8826 | 0.2499

Here again, SGR generated tight bounds, very close to the desired target risk, and the bounds were never violated by the true risk. Also, we see again that it is possible to dramatically reduce the risk with only a moderate compromise in coverage.
While the architecture we used is not state-of-the-art, at a coverage of 67% we easily surpassed the best known result for CIFAR-100, which currently stands at 18.85% using the wide residual network architecture [19]. It is very likely that by using the wide residual network architecture ourselves we could obtain significantly better results.

5.3 Selective Guaranteed Risk for ImageNet

We used an already trained ImageNet VGG-16 model based on ILSVRC2014 [16]. We repeated the same experimental design, but now the sizes of the training and test sets were approximately 25,000. The SGR results for the top-1 and top-5 classification tasks are summarized in Tables 3 and 4, respectively. We also implemented the RESNET-50 architecture [12] in order to see if qualitatively similar results can be obtained with a different architecture. The RESNET-50 results for the ImageNet top-1 and top-5 classification tasks are summarized in Tables 5 and 6, respectively.

Table 3: SGR results for the ImageNet dataset using VGG-16 top-1 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.02 | 0.0161 | 0.2355 | 0.0131 | 0.2322 | 0.0200
0.05 | 0.0462 | 0.4292 | 0.0446 | 0.4276 | 0.0500
0.10 | 0.0964 | 0.5968 | 0.0948 | 0.5951 | 0.1000
0.15 | 0.1466 | 0.7164 | 0.1467 | 0.7138 | 0.1500
0.20 | 0.1937 | 0.8131 | 0.1949 | 0.8154 | 0.2000
0.25 | 0.2441 | 0.9117 | 0.2445 | 0.9120 | 0.2500

Footnote 3: With this small δ, and the small number of reported experiments (6–7 rows in each table), we did not perform a Bonferroni correction (which can easily be added).
Table 4: SGR results for the ImageNet dataset using VGG-16 top-5 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.01 | 0.0080 | 0.3391 | 0.0078 | 0.3341 | 0.0100
0.02 | 0.0181 | 0.5360 | 0.0179 | 0.5351 | 0.0200
0.03 | 0.0281 | 0.6768 | 0.0290 | 0.6735 | 0.0300
0.04 | 0.0381 | 0.7610 | 0.0379 | 0.7586 | 0.0400
0.05 | 0.0481 | 0.8263 | 0.0496 | 0.8262 | 0.0500
0.06 | 0.0563 | 0.8654 | 0.0577 | 0.8668 | 0.0600
0.07 | 0.0663 | 0.9093 | 0.0694 | 0.9114 | 0.0700

Table 5: SGR results for the ImageNet dataset using RESNET-50 top-1 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.02 | 0.0161 | 0.2613 | 0.0164 | 0.2585 | 0.0199
0.05 | 0.0462 | 0.4906 | 0.0474 | 0.4878 | 0.0500
0.10 | 0.0965 | 0.6544 | 0.0988 | 0.6502 | 0.1000
0.15 | 0.1466 | 0.7711 | 0.1475 | 0.7676 | 0.1500
0.20 | 0.1937 | 0.8688 | 0.1955 | 0.8677 | 0.2000
0.25 | 0.2441 | 0.9634 | 0.2451 | 0.9614 | 0.2500

These results show that even for the challenging ImageNet, with both the VGG and RESNET architectures, our selective classifiers are extremely effective, and with an appropriate coverage compromise, our classifier easily surpasses the best known results for ImageNet. Not surprisingly, RESNET, which is known to achieve better results than VGG on this set, preserves its advantage over VGG across all r∗ values.

6 Concluding Remarks

We presented an algorithm for learning a selective classifier whose risk can be fully controlled and guaranteed with high confidence. Our empirical study validated this algorithm on challenging image classification datasets, and showed that guaranteed risk control is achievable. Our methods can be immediately used by deep learning practitioners, helping them cope with mission-critical tasks. We believe that our work is only the first significant step in this direction, and many research questions are left open. The starting point in our approach is a trained neural classifier f (supposedly trained to optimize risk under full coverage).
While the rejection mechanisms we considered were extremely effective, it might be possible to identify superior mechanisms for a given classifier f. We believe, however, that the most challenging open question is to simultaneously train both the classifier f and the selection function g to optimize coverage for a given risk level. Selective classification is intimately related to active learning in the context of linear classifiers [6, 11]. It would be very interesting to explore this potential relationship in the context of (deep) neural classification. In this paper we only studied selective classification under the 0/1 loss. It would be of great importance to extend our techniques to other loss functions, and specifically to regression, and to fully control false-positive and false-negative rates.

Table 6: SGR results for the ImageNet dataset using RESNET-50 top-5 for δ = 0.001

Desired risk (r∗) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b∗)
0.01 | 0.0080 | 0.3796 | 0.0085 | 0.3807 | 0.0099
0.02 | 0.0181 | 0.5938 | 0.0189 | 0.5935 | 0.0200
0.03 | 0.0281 | 0.7122 | 0.0273 | 0.7096 | 0.0300
0.04 | 0.0381 | 0.8180 | 0.0358 | 0.8158 | 0.0400
0.05 | 0.0481 | 0.8856 | 0.0464 | 0.8846 | 0.0500
0.06 | 0.0581 | 0.9256 | 0.0552 | 0.9231 | 0.0600
0.07 | 0.0663 | 0.9508 | 0.0629 | 0.9484 | 0.0700

This work has many applications. In general, any classification task where a controlled risk is critical would benefit from using our methods. An obvious example is that of medical applications, where utmost precision is required and rejections should be handled by human experts. In such applications the existence of performance guarantees, as we propose here, is essential. Financial investment applications are also obvious, where there are a great many opportunities from which one should cherry-pick the most certain ones. A more futuristic application is that of robotic sales representatives, where it could be extremely harmful if the bot tried to answer questions it does not fully understand.
Acknowledgments

This research was supported by The Israel Science Foundation (grant No. 1890/14).

References
[1] Chao K. Chow. An optimum character recognition system using decision functions. IRE Transactions on Electronic Computers, (4):247–254, 1957.
[2] Luigi Pietro Cordella, Claudio De Stefano, Francesco Tortorella, and Mario Vento. A method for improving classification reliability of multilayer perceptrons. IEEE Transactions on Neural Networks, 6(5):1140–1147, 1995.
[3] Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri. Boosting with abstention. In Advances in Neural Information Processing Systems, pages 1660–1668, 2016.
[4] Claudio De Stefano, Carlo Sansone, and Mario Vento. To reject or not to reject: that is the question-an answer in case of neural classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 30(1):84–94, 2000.
[5] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. Journal of Machine Learning Research, 11:1605–1641, 2010.
[6] Ran El-Yaniv and Yair Wiener. Active learning via perfect selective classification. Journal of Machine Learning Research (JMLR), 13(Feb):255–279, 2012.
[7] Yoav Freund, Yishay Mansour, and Robert E. Schapire. Generalization bounds for averaged classifiers. Annals of Statistics, pages 1698–1722, 2004.
[8] Giorgio Fumera and Fabio Roli. Support vector machines with embedded reject option. In Pattern Recognition with Support Vector Machines, pages 68–82. Springer, 2002.
[9] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1050–1059, 2016.
[10] O. Gascuel and G. Caraux. Distribution-free performance bounds with the resubstitution error estimate. Pattern Recognition Letters, 13:757–764, 1992.
[11] R. Gelbhart and R. El-Yaniv. The Relationship Between Agnostic Selective Classification and Active Learning.
ArXiv e-prints, January 2017.
[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[13] Martin E. Hellman. The nearest neighbor classification rule with a reject option. IEEE Transactions on Systems Science and Cybernetics, 6(3):179–185, 1970.
[14] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[15] Shuying Liu and Weihong Deng. Very deep convolutional neural network based image classification using small training sample size. In Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on, pages 730–734. IEEE, 2015.
[16] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[17] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[18] Kush R. Varshney. A risk bound for ensemble classification with a reject option. In Statistical Signal Processing Workshop (SSP), 2011 IEEE, pages 769–772. IEEE, 2011.
[19] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space

Liwei Wang, Alexander G. Schwing, Svetlana Lazebnik
{lwang97, aschwing, slazebni}@illinois.edu
University of Illinois at Urbana-Champaign

Abstract

This paper explores image caption generation using conditional variational autoencoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a “vanilla” CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise.

1 Introduction

Automatic image captioning [9, 11, 18–20, 24] is a challenging open-ended conditional generation task. State-of-the-art captioning techniques [23, 32, 36, 1] are based on recurrent neural nets with long short-term memory (LSTM) units [13], which take as input a feature representation of a provided image, and are trained to maximize the likelihood of reference human descriptions. Such methods are good at producing relatively short, generic captions that roughly fit the image content, but they are unsuited for sampling multiple diverse candidate captions given the image. The ability to generate such candidates is valuable because captioning is profoundly ambiguous: not only can the same image be described in many different ways, but also, images can be hard to interpret even for humans, let alone machines relying on imperfect visual features.
In short, we would like the posterior distribution of captions given the image, as estimated by our model, to accurately capture both the open-ended nature of language and any uncertainty about what is depicted in the image. Achieving more diverse image description is a major theme in several recent works [6, 14, 27, 31, 35]. Deep generative models are a natural fit for this goal, and to date, Generative Adversarial Networks (GANs) have attracted the most attention. Dai et al. [6] proposed jointly learning a generator to produce descriptions and an evaluator to assess how well a description fits the image. Shetty et al. [27] changed the training objective of the generator from reproducing ground-truth captions to generating captions that are indistinguishable from those produced by humans. In this paper, we also explore a generative model for image description, but unlike the GAN-style training of [6, 27], we adopt the conditional variational auto-encoder (CVAE) formalism [17, 29]. Our starting point is the work of Jain et al. [14], who trained a “vanilla” CVAE to generate questions given images. At training time, given an image and a sentence, the CVAE encoder samples a latent z vector from a Gaussian distribution in the encoding space whose parameters (mean and variance) come from a Gaussian prior with zero mean and unit variance. This z vector is then fed into a decoder that uses it, together with the features of the input image, to generate a question. The encoder and the decoder are jointly trained to maximize (an upper bound on) the likelihood of the reference questions
Figure 1: Example output of our proposed AG-CVAE approach compared to an LSTM baseline (see Section 4 for details). For each method, we show the top five sentences following consensus re-ranking [10]. The captions produced by our method are both more diverse and more accurate.
Figure 2: Illustration of how our additive latent space structure controls the image description process. Modifying the object labels changes the weight vectors associated with semantic components in the latent space. In turn, this shifts the mean from which the z vectors are drawn and modifies the resulting descriptions in an intuitive way.

given the images. At test time, the decoder is seeded with an image feature and different z samples, so that multiple z’s result in multiple questions. While Jain et al. [14] obtained promising question generation performance with the above CVAE model equipped with a fixed Gaussian prior, for the task of image captioning, we observed a tendency for the learned conditional posteriors to collapse to a single mode, yielding little diversity in candidate captions sampled given an image.
To improve the behavior of the CVAE, we propose using a set of K Gaussian priors in the latent z space with different means and standard deviations, corresponding to different “modes” or types of image content. For concreteness, we identify these modes with specific object categories, such as ‘dog’ or ‘cat.’ If ‘dog’ and ‘cat’ are detected in an image, we would like to encourage the generated captions to capture both of them. Starting with the idea of multiple Gaussian priors, we propose two different ways of structuring the latent z space. The first is to represent the distribution of z vectors using a Gaussian mixture model (GMM). Due to the intractability of Gaussian mixtures in the VAE framework, we also introduce a novel Additive Gaussian (AG) prior that directly adds multiple semantic aspects in the z space. If an image contains several objects or aspects, each corresponding to a mean µ_k in the latent space, then we require the mean of the encoder distribution to be close to a weighted linear combination of the respective means. Our CVAE formulation with this additive Gaussian prior (AG-CVAE) is able to model a richer, more flexible encoding space, resulting in more diverse and accurate captions, as illustrated in Figure 1. As an additional advantage, the additive prior gives us an interpretable mechanism for controlling the captions based on the image content, as shown in Figure 2. The experiments of Section 4 will show that both GMM-CVAE and AG-CVAE outperform the LSTM and “vanilla” CVAE baselines on the challenging MSCOCO dataset [5], with AG-CVAE showing marginally higher accuracy and by far the best diversity and controllability.

2 Background

Our proposed framework for image captioning extends the standard variational auto-encoder [17] and its conditional variant [29]. We briefly set up the necessary background here.

Variational auto-encoder (VAE): Given samples x from a dataset, VAEs aim at modeling the data likelihood p(x).
To this end, VAEs assume that the data points x cluster around a low-dimensional manifold parameterized by embeddings or encodings z. To obtain the sample x corresponding to an embedding z, we employ the decoder p(x|z), which is often based on deep nets. Since the decoder’s posterior p(z|x) is not tractably computable, we approximate it with a distribution q(z|x), which is referred to as the encoder. Taking together all those ingredients, VAEs are based on the identity

log p(x) − D_KL[q(z|x), p(z|x)] = E_{q(z|x)}[log p(x|z)] − D_KL[q(z|x), p(z)],  (1)

which relates the likelihood p(x) and the conditional p(z|x). It is hard to compute the KL-divergence D_KL[q(z|x), p(z|x)] because the posterior p(z|x) is not readily available from the decoder distribution p(x|z) if we use deep nets. However, by choosing an encoder distribution q(z|x) with sufficient capacity, we can assume that the non-negative KL-divergence D_KL[q(z|x), p(z|x)] is small. Thus, we know that the right-hand side is a lower bound on the log-likelihood log p(x), which can be maximized w.r.t. both encoder and decoder parameters.

Conditional variational auto-encoder (CVAE): In tasks like image captioning, we are interested in modeling the conditional distribution p(x|c), where x are the desired descriptions and c is some representation of the content of the input image. The VAE identity can be straightforwardly extended by conditioning both the encoder and decoder distributions on c. Training of the encoder and decoder proceeds by maximizing the lower bound on the conditional data log-likelihood p(x|c), i.e.,

log p_θ(x|c) ≥ E_{q_φ(z|x,c)}[log p_θ(x|z, c)] − D_KL[q_φ(z|x, c), p(z|c)],  (2)

where θ and φ are the parameters of the decoder distribution p_θ(x|z, c) and the encoder distribution q_φ(z|x, c), respectively. In practice, the following stochastic objective is typically used:

max_{θ,φ} (1/N) Σ_{i=1}^{N} log p_θ(x_i|z_i, c_i) − D_KL[q_φ(z|x, c), p(z|c)],  s.t. ∀i  z_i ∼ q_φ(z|x, c).
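When the prior is a fixed standard normal (the “vanilla” CVAE choice discussed later), the KL term above has a well-known closed form for a diagonal-Gaussian encoder. A minimal plain-Python sketch (function name ours, not from the paper):

```python
import math

def kl_gaussian_vs_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions:
    0.5 * sum_d (sigma_d^2 + mu_d^2 - 1 - log sigma_d^2)."""
    return sum(0.5 * (s * s + m * m - 1.0 - math.log(s * s))
               for m, s in zip(mu, sigma))
```

The term is zero exactly when the encoder output matches the prior (mu = 0, sigma = 1), and grows as the encoder distribution drifts away from it.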
It approximates the expectation E_{q_φ(z|x,c)}[log p_θ(x|z, c)] using N samples z_i drawn from the approximate posterior q_φ(z|x, c) (typically, just a single sample is used). Backpropagation through the encoder that produces samples z_i is achieved via the reparameterization trick [17], which is applicable if we restrict the encoder distribution q_φ(z|x, c) to be, e.g., a Gaussian with mean and standard deviation output by a deep net.

3 Gaussian Mixture Prior and Additive Gaussian Prior

Our key observation is that the behavior of the trained CVAE crucially depends on the choice of the prior p(z|c). The prior determines how the learned latent space is structured, because the KL-divergence term in Eq. (2) encourages q_φ(z|x, c), the encoder distribution over z given a particular description x and image content c, to be close to this prior distribution. In the vanilla CVAE formulation, such as the one adopted in [14], the prior is not dependent on c and is fixed to a zero-mean unit-variance Gaussian. While this choice is the most computationally convenient, our experiments in Sec. 4 will demonstrate that for the task of image captioning, the resulting model has poor diversity and worse accuracy than the standard maximum-likelihood-trained LSTM. Clearly, the prior has to change based on the content of the image. However, because of the need to efficiently compute the KL-divergence in closed form, it still needs to have a simple structure, ideally a Gaussian or a mixture of Gaussians. Motivated by the above considerations, we encourage the latent z space to have a multi-modal structure composed of K modes or clusters, each corresponding to different types of image content. Given an image I, we assume that we can obtain a distribution c(I) = (c_1(I), . . . , c_K(I)), where the entries c_k are nonnegative and sum to one.
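The reparameterization trick mentioned above can be sketched as follows: z is computed as a deterministic function of the encoder outputs (µ, σ) and an auxiliary noise draw ε ∼ N(0, I), so gradients can flow through µ and σ. A plain-Python sketch (names ours):

```python
import random

def reparameterize(mu, sigma, rng=random):
    """Draw z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps with eps ~ N(0, I),
    making the sample a differentiable function of (mu, sigma)."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]
```

In an autodiff framework the same expression would be written on tensors, with the noise treated as a constant.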
In our current work, for concreteness, we identify these with a set of object categories that can be reliably detected automatically, such as ‘car,’ ‘person,’ or ‘cat.’ The MSCOCO dataset, on which we conduct our experiments, has direct supervision for 80 such categories. Note, however, that our formulation is general and can be applied to other definitions of modes or clusters, including latent topics automatically obtained in an unsupervised fashion.

GMM-CVAE: We can model p(z|c) as a Gaussian mixture with weights c_k and components with means µ_k and standard deviations σ_k:

p(z|c) = Σ_{k=1}^{K} c_k N(z | µ_k, σ_k² I),  (3)

where the c_k are the weights defined above and µ_k is the mean vector of the k-th component. In practice, we use the same standard deviation σ for all components.

Figure 3: Overview of GMM-CVAE and AG-CVAE models. To sample z vectors given an image, GMM-CVAE (a) switches from one cluster center to another, while AG-CVAE (b) encourages the embedding z for an image to be close to the average of its objects’ means.

It is not directly tractable to optimize Eq. (2) with the above GMM prior. We therefore approximate the KL-divergence stochastically [12]. In each step during training, we first draw a discrete component k according to the cluster probability c(I), and then sample z from the resulting Gaussian component. Then we have

D_KL[q_φ(z|x, c_k), p(z|c_k)] = log(σ_k/σ_φ) + (1/(2σ_k²)) E_{q_φ(z|x,c_k)}[∥z − µ_k∥²] − 1/2
                             = log(σ_k/σ_φ) + (σ_φ² + ∥µ_φ − µ_k∥²)/(2σ_k²) − 1/2,  k ∼ c(I).  (4)

We plug the above KL term into Eq. (2) to obtain an objective function, which we optimize w.r.t. the encoder and decoder parameters φ and θ using stochastic gradient descent (SGD).
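The stochastic approximation described above can be sketched in plain Python: draw a component index k from c(I), then evaluate the closed-form Gaussian-to-Gaussian KL for that component. This sketch assumes spherical covariances and sums the KL over dimensions; function names are ours:

```python
import math
import random

def sample_gmm_prior(weights, means, sigmas, rng=random):
    """Draw k ~ Categorical(weights), then z ~ N(mu_k, sigma_k^2 I)."""
    k = rng.choices(range(len(weights)), weights=weights)[0]
    z = [m + sigmas[k] * rng.gauss(0.0, 1.0) for m in means[k]]
    return k, z

def kl_to_component(mu_phi, sigma_phi, mu_k, sigma_k):
    """Closed-form KL( N(mu_phi, sigma_phi^2 I) || N(mu_k, sigma_k^2 I) ),
    summed over dimensions (the per-dimension form of Eq. (4))."""
    d = len(mu_phi)
    sq = sum((a - b) ** 2 for a, b in zip(mu_phi, mu_k))
    return (d * math.log(sigma_k / sigma_phi)
            + (d * sigma_phi ** 2 + sq) / (2.0 * sigma_k ** 2)
            - d / 2.0)
```

During training, `kl_to_component` would be evaluated for the sampled k and added to the reconstruction loss.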
In principle, the prior parameters µ_k and σ_k can also be trained, but we obtained good results by keeping them fixed (the means are drawn randomly and all standard deviations are set to the same constant, as will be further explained in Section 4). At test time, in order to generate a description given an image I, we first sample a component index k from c(I), and then sample z from the corresponding component distribution. One limitation of this procedure is that, if an image contains multiple objects, each individual description is still conditioned on just a single object.

AG-CVAE: We would like to structure the z space in a way that can directly reflect object co-occurrence. To this end, we propose a simple novel conditioning mechanism with an additive Gaussian prior. If an image contains several objects with weights c_k, each corresponding to a mean µ_k in the latent space, we want the mean of the encoder distribution to be close to the linear combination of the respective means with the same weights:

p(z|c) = N(z | Σ_{k=1}^{K} c_k µ_k, σ² I),  (5)

where σ² I is a spherical covariance matrix with σ² = Σ_{k=1}^{K} c_k² σ_k². Figure 3 illustrates the difference between this AG-CVAE model and the GMM-CVAE model introduced above. In order to train the AG-CVAE model using the objective of Eq. (2), we need to compute the KL-divergence D_KL[q_φ(z|x, c), p(z|c)], where q_φ(z|x, c) = N(z | µ_φ(x, c), σ_φ²(x, c) I) and the prior p(z|c) is given by Eq. (5). Its analytic expression can be derived to be

D_KL[q_φ(z|x, c), p(z|c)] = log(σ/σ_φ) + (1/(2σ²)) E_{q_φ}[∥z − Σ_{k=1}^{K} c_k µ_k∥²] − 1/2
                          = log(σ/σ_φ) + (σ_φ² + ∥µ_φ − Σ_{k=1}^{K} c_k µ_k∥²)/(2σ²) − 1/2.

We plug the above KL-divergence term into Eq. (2) to obtain the stochastic objective function for training the encoder and decoder parameters. We initialize the mean and variance parameters µ_k and σ_k in the same way as for GMM-CVAE and keep them fixed throughout training.

Figure 4: Illustration of our encoder (left) and decoder (right). See text for details.

Next, we need to specify our architectures for the encoder and decoder, which are shown in Fig. 4. The encoder uses an LSTM to map an image I, its vector c(I), and a caption into a point in the latent space. More specifically, the LSTM receives the image feature in the first step, the cluster vector in the second step, and then the caption word by word. The hidden state h_T after the last step is transformed into K mean vectors, µ_φk, and K log variances, log σ_φk², using a linear layer for each. For AG-CVAE, the µ_φk and σ_φk² are then summed with weights c_k and c_k², respectively, to generate the desired µ_φ and σ_φ² encoder outputs. Note that the encoder is used at training time only, and the input cluster vectors are produced from ground-truth object annotations. The decoder uses a different LSTM that receives as input first the image feature, then the cluster vector, then a z vector sampled from the conditional distribution of Eq. (5). Next, it receives a ‘start’ symbol and proceeds to output a sentence word by word until it produces an ‘end’ symbol. During training, its c(I) inputs are derived from the ground truth, same as for the encoder, and the log-loss is used to encourage reconstruction of the provided ground-truth caption.
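The additive prior of Eq. (5) is cheap to form: its mean is the weighted combination of the component means, and its variance is the weighted combination σ² = Σ_k c_k² σ_k². A minimal plain-Python sketch (function name ours):

```python
import math

def additive_gaussian_prior(weights, means, sigmas):
    """AG prior of Eq. (5): mean = sum_k c_k * mu_k (element-wise),
    spherical std sigma with sigma^2 = sum_k c_k^2 * sigma_k^2."""
    dim = len(means[0])
    mu = [sum(c * m[d] for c, m in zip(weights, means)) for d in range(dim)]
    var = sum(c * c * s * s for c, s in zip(weights, sigmas))
    return mu, math.sqrt(var)
```

With the prior mean and std in hand, the closed-form Gaussian KL against the encoder distribution follows the same pattern as in the GMM case.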
At test time, ground-truth object vectors are not available, so we rely on automatic object detection, as explained in Section 4.

4 Experiments

4.1 Implementation Details

We test our methods on the MSCOCO dataset [5], which is the largest “clean” image captioning dataset available to date. The current (2014) release contains 82,783 training and 40,504 validation images with five reference captions each, but many captioning works re-partition this data to enlarge the training set. We follow the train/val/test split released by [23]. It allocates 118,287 images for training, 4,000 for validation, and 1,000 for testing.

Features. As image features, we use 4,096-dimensional activations from the VGG-16 network [28]. The cluster or object vectors c(I) are 80-dimensional, corresponding to the 80 MSCOCO object categories. At training time, c(I) consists of binary indicators corresponding to ground-truth object labels, rescaled to sum to one. For example, an image with labels ‘person,’ ‘car,’ and ‘dog’ results in a cluster vector with weights of 1/3 for the corresponding objects and zeros elsewhere. For test images I, c(I) is obtained automatically through object detection. We train a Faster R-CNN detector [26] for the MSCOCO categories using our train/val split by fine-tuning the VGG-16 net [28]. At test time, we use a threshold of 0.5 on the per-class confidence scores output by this detector to determine whether the image contains a given object (i.e., all the weights are once again equal).

Baselines. Our LSTM baseline is obtained by deleting the z vector input from the decoder architecture shown in Fig. 4. This gives a strong baseline comparable to NeuralTalk2 [1] or Google Show and Tell [33]. To generate different candidate sentences using the LSTM, we use beam search with a width of 10. Our second baseline is given by the “vanilla” CVAE with a fixed Gaussian prior following [14].
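The cluster-vector construction described above (binary indicators rescaled to sum to one, whether from ground-truth labels or from thresholded detector scores) can be sketched as follows; the function name is ours:

```python
def cluster_vector(scores, threshold=0.5):
    """Turn per-category confidence scores (or binary ground-truth labels)
    into equal nonzero weights that sum to one."""
    hits = [1.0 if s >= threshold else 0.0 for s in scores]
    total = sum(hits)
    return [h / total for h in hits] if total else hits
```

For an image with three detected categories, each receives weight 1/3, matching the ‘person’/‘car’/‘dog’ example above.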
For completeness, we report the performance of our method as well as all baselines, both with and without the cluster vector input c(I).

Parameter settings and training. For all the LSTMs, we use a one-hot encoding with a vocabulary size of 11,488, which is the number of words in the training set. This input gets projected into a word embedding layer of dimension 256, and the LSTM hidden space dimension is 512. We found that the same LSTM settings worked well for all models. For our three models (CVAE, GMM-CVAE, and AG-CVAE), we use a dimension of 150 for the z space. We wanted it to be at least equal to the number of categories to make sure that each z vector corresponds to a unique set of cluster weights. The means µ_k of clusters for GMM-CVAE and AG-CVAE are randomly initialized on the unit ball

Method     obj  #z   std  beam   B4     B3     B2     B1     C      R      M      S
LSTM             –    –    10    0.413  0.515  0.643  0.790  1.157  0.597  0.285  0.218
LSTM       ✓     –    –    10    0.428  0.529  0.654  0.797  1.202  0.607  0.290  0.223
CVAE            20   0.1   –     0.261  0.381  0.538  0.742  0.860  0.531  0.246  0.184
CVAE       ✓    20   2     –     0.312  0.421  0.565  0.733  0.910  0.541  0.244  0.176
GMM-CVAE        20   0.1   –     0.371  0.481  0.619  0.778  1.080  0.582  0.274  0.209
GMM-CVAE   ✓    20   2     –     0.423  0.533  0.666  0.813  1.216  0.617  0.298  0.233
GMM-CVAE   ✓    20   2     2     0.449  0.553  0.680  0.821  1.251  0.624  0.299  0.232
GMM-CVAE   ✓   100   2     –     0.494  0.597  0.719  0.856  1.378  0.659  0.325  0.261
GMM-CVAE   ✓   100   2     2     0.527  0.625  0.740  0.865  1.430  0.670  0.329  0.263
AG-CVAE         20   0.1   –     0.431  0.537  0.668  0.814  1.230  0.622  0.300  0.235
AG-CVAE    ✓    20   2     –     0.451  0.557  0.686  0.829  1.259  0.630  0.305  0.243
AG-CVAE    ✓    20   2     2     0.471  0.573  0.698  0.834  1.308  0.638  0.309  0.244
AG-CVAE    ✓   100   2     –     0.532  0.631  0.749  0.876  1.478  0.682  0.342  0.278
AG-CVAE    ✓   100   2     2     0.557  0.654  0.767  0.883  1.517  0.690  0.345  0.277

Table 1: Oracle (upper bound) performance according to each metric. Obj indicates whether the object (cluster) vector is used; #z is the number of z samples; std is the test-time standard deviation; beam is the beam width if beam search is used. For the caption quality metrics, C is short for CIDEr, R for ROUGE, M for METEOR, S for SPICE.
Method     obj  #z   std  beam   B4     B3     B2     B1     C      R      M      S
LSTM             –    –    10    0.286  0.388  0.529  0.702  0.915  0.510  0.235  0.165
LSTM       ✓     –    –    10    0.292  0.395  0.536  0.711  0.947  0.516  0.238  0.170
CVAE            20   0.1   –     0.245  0.347  0.495  0.674  0.775  0.491  0.217  0.147
CVAE       ✓    20   2     –     0.265  0.372  0.521  0.698  0.834  0.506  0.225  0.158
GMM-CVAE        20   0.1   –     0.271  0.376  0.522  0.702  0.890  0.507  0.231  0.166
GMM-CVAE   ✓    20   2     –     0.278  0.388  0.538  0.718  0.932  0.516  0.238  0.170
GMM-CVAE   ✓    20   2     2     0.289  0.394  0.538  0.715  0.941  0.513  0.235  0.169
GMM-CVAE   ✓   100   2     –     0.292  0.402  0.552  0.728  0.972  0.520  0.241  0.174
GMM-CVAE   ✓   100   2     2     0.307  0.413  0.557  0.729  0.986  0.525  0.242  0.177
AG-CVAE         20   0.1   –     0.287  0.394  0.540  0.715  0.942  0.518  0.238  0.168
AG-CVAE    ✓    20   2     –     0.286  0.391  0.537  0.716  0.953  0.517  0.239  0.172
AG-CVAE    ✓    20   2     2     0.299  0.402  0.544  0.716  0.963  0.518  0.237  0.173
AG-CVAE    ✓   100   2     –     0.301  0.410  0.557  0.732  0.991  0.527  0.243  0.177
AG-CVAE    ✓   100   2     2     0.311  0.417  0.559  0.732  1.001  0.528  0.245  0.179

Table 2: Consensus re-ranking using CIDEr. See the caption of Table 1 for the legend.

and are not changed throughout training. The standard deviations σ_k are set to 0.1 at training time and tuned on the validation set at test time (the values used for our results are reported in the tables). All networks are trained with SGD with a learning rate of 0.01 for the first 5 epochs, reduced by half every 5 epochs. On average, all models converge within 50 epochs.

4.2 Results

A big part of the motivation for generating diverse candidate captions is the prospect of being able to re-rank them using some discriminative method. Because the performance of any re-ranking method is upper-bounded by the quality of the best candidate caption in the set, we will first evaluate different methods assuming an oracle that can choose the best sentence among all the candidates. Next, for a more realistic evaluation, we will use a consensus re-ranking approach [10] to automatically select a single top candidate per image. Finally, we will assess the diversity of the generated captions using uniqueness and novelty metrics.

Oracle evaluation.
Table 1 reports caption evaluation metrics in the oracle setting, i.e., taking the maximum of each relevant metric over all the candidates. We compare caption quality using five metrics: BLEU [25], METEOR [7], CIDEr [30], SPICE [2], and ROUGE [21]. These are calculated using the MSCOCO caption evaluation tool [5] augmented by the author of SPICE [2]. For the LSTM baseline, we report the scores attained among 10 candidates generated using beam search (as suggested in [23]). For CVAE, GMM-CVAE, and AG-CVAE, we sample a fixed number of z vectors from the corresponding prior distributions (the numbers of samples are given in the table). The high-level trend is that “vanilla” CVAE falls short even of the LSTM baseline, while the upper-bound performance for GMM-CVAE and AG-CVAE considerably exceeds that of the LSTM given

Method     obj  #z   std  beam  % unique per image  % novel sentences
LSTM       ✓     –    –   10    –                   0.656
CVAE       ✓    20    2    –    0.118               0.820
GMM-CVAE   ✓    20    2    –    0.594               0.809
GMM-CVAE   ✓    20    2    2    0.539               0.716
GMM-CVAE   ✓   100    2    –    0.376               0.767
GMM-CVAE   ✓   100    2    2    0.326               0.688
AG-CVAE    ✓    20    2    –    0.764               0.795
AG-CVAE    ✓    20    2    2    0.698               0.707
AG-CVAE    ✓   100    2    –    0.550               0.745
AG-CVAE    ✓   100    2    2    0.474               0.667

Table 3: Diversity evaluation. For each method, we report the percentage of unique candidates generated per image by sampling different numbers of z vectors. We also report the percentage of novel sentences (i.e., sentences not seen in the training set) out of (at most) the top 10 sentences following consensus re-ranking. It should be noted that for CVAE, there are 2,466 novel sentences out of 3,006. For GMM-CVAE and AG-CVAE, we get roughly 6,200-7,800 novel sentences.
Figure 5: Comparison of captions produced by our AG-CVAE method and the LSTM baseline. For each method, the top five captions following consensus re-ranking are shown.

the right choice of standard deviation and a large enough number of z samples. AG-CVAE obtains the highest upper bound. A big advantage of the CVAE variants over the LSTM is that they can easily be used to generate more candidate sentences simply by increasing the number of z samples, while the only way to do so for the LSTM is to increase the beam width, which is computationally prohibitive. In more detail, the top two lines of Table 1 compare the performance of the LSTM with and without the additional object (cluster) vector input, and show that it does not make a dramatic difference. That is, improving over the LSTM baseline is not just a matter of adding stronger conditioning information as input. Similarly, for CVAE, GMM-CVAE, and AG-CVAE, using the object vector as additional conditioning information in the encoder and decoder can increase accuracy somewhat, but does not account for all the improvements that we see. One thing we noticed about the models without the object vector is that they are more sensitive to the standard deviation parameter and require more careful tuning (to demonstrate this, the table includes results for several values of σ for the CVAE models).

Consensus re-ranking evaluation. For a more realistic evaluation, we next compare the same models after consensus re-ranking [10, 23]. Specifically, for a given test image, we first find its nearest neighbors in the training set in the cross-modal embedding space learned by a two-branch network proposed in [34]. Then we take all the ground-truth reference captions of those neighbors and calculate the consensus re-ranking scores between them and the candidate captions.
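The re-ranking step just described can be sketched as follows: each candidate is scored by its mean similarity to the reference captions of the test image's nearest training neighbors. The paper uses CIDEr as the similarity; here it is an arbitrary callback, and the function name is ours:

```python
def consensus_rerank(candidates, neighbor_refs, similarity):
    """Sort candidate captions by mean similarity to the reference captions
    of the nearest training neighbors (highest score first)."""
    def score(cand):
        return sum(similarity(cand, ref) for ref in neighbor_refs) / len(neighbor_refs)
    return sorted(candidates, key=score, reverse=True)
```

A toy word-overlap similarity is enough to exercise the ranking; in the actual pipeline the CIDEr metric would be plugged in.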
For this, we use the CIDEr metric, based on the observation of [22, 30] that it can give more human-consistent evaluations than BLEU.

Figure 6: Comparison of captions produced by GMM-CVAE and AG-CVAE for two different versions of input object vectors for the same images. For both models, we draw 20 z samples and show the resulting unique captions.

Table 2 shows the evaluation based on the single top-ranked sentence for each test image. While the re-ranked performance cannot get near the upper bounds of Table 1, the numbers follow a similar trend, with GMM-CVAE and AG-CVAE achieving better performance than the baselines on almost all metrics. It should also be noted that, while it is not our goal to outperform the state of the art in absolute terms, our performance is actually better than that of some of the best methods to date [23, 37], although [37] was trained on a different split. AG-CVAE tends to get slightly higher numbers than GMM-CVAE, although the advantage is smaller than for the upper-bound results in Table 1. One of the most important take-aways for us is that there is still a big gap between upper-bound and re-ranking performance, and that improving the re-ranking of candidate sentences is an important future direction.

Diversity evaluation. To compare the generative capabilities of our different methods, we report two indicative numbers in Table 3. One is the average percentage of unique captions in the set of candidates generated for each image. This number is only meaningful for the CVAE models, where we sample candidates by drawing different z samples, and multiple z’s can result in the same caption. For the LSTM, the candidates are obtained using beam search and are by definition distinct.
From Table 3, we observe that CVAE has very little diversity, GMM-CVAE is much better, but AG-CVAE has the decisive advantage. Similarly to [27], we also report the percentage of all generated sentences for the test set that have not been seen in the training set. It only really makes sense to assess novelty for sentences that are plausible, so we compute this percentage based on (at most) the top 10 sentences per image after consensus re-ranking. Based on the novelty ratio, CVAE does well. However, since it generates fewer distinct candidates per image, the absolute numbers of novel sentences are much lower than for GMM-CVAE and AG-CVAE (see table caption for details).

Qualitative results. Figure 5 compares captions generated by AG-CVAE and the LSTM baseline on four example images. The AG-CVAE captions tend to exhibit a more diverse sentence structure with a wider variety of nouns and verbs used to describe the same image. Often this yields captions that are more accurate (‘open refrigerator’ vs. ‘refrigerator’ in (a)) and better reflective of the cardinality and types of entities in the image (in (b), our captions mention both the person and the horse while the LSTM tends to mention only one). Even when AG-CVAE does not manage to generate any correct candidates, as in (d), it still gets the right number of people in some candidates. A shortcoming of AG-CVAE is that detected objects frequently end up omitted from the candidate sentences if the LSTM language model cannot accommodate them (‘bear’ in (b) and ‘backpack’ in (c)). On the one hand, this shows that the capacity of the LSTM decoder to generate combinatorially complex sentences is still limited, but on the other hand, it provides robustness against false positive detections.

Controllable sentence generation. Figure 6 illustrates how the output of our GMM-CVAE and AG-CVAE models changes when we change the input object vectors in an attempt to control the generation process.
Consistent with Table 3, we observe that for the same number of z samples, AG-CVAE produces more unique candidates than GMM-CVAE. Further, AG-CVAE is more flexible than GMM-CVAE and more responsive to the content of the object vectors. For the first image showing a cat, when we add the additional object label ‘chair,’ AG-CVAE is able to generate some captions mentioning a chair, but GMM-CVAE is not. Similarly, in the second example, when we add the concepts of ‘sandwich’ and ‘cake,’ only AG-CVAE can generate some sentences that capture them. Still, the controllability of AG-CVAE leaves something to be desired, since, as observed above, it has trouble mentioning more than two or three objects in the same sentence, especially in unusual combinations.

5 Discussion

Our experiments have shown that both our proposed GMM-CVAE and AG-CVAE approaches generate image captions that are more diverse and more accurate than standard LSTM baselines. While GMM-CVAE and AG-CVAE have very similar bottom-line accuracies according to Table 2, AG-CVAE has a clear edge in terms of diversity (unique captions per image) and controllability, both quantitatively (Table 3) and qualitatively (Figure 6).

Related work. To date, CVAEs have been used for image question generation [14], but as far as we know, our work is the first to apply them to captioning. In [8], a mixture-of-Gaussians prior is used in CVAEs for colorization. Their approach is essentially similar to our GMM-CVAE, though it is based on mixture density networks [4] and uses a different approximation scheme during training. Our CVAE formulation has some advantages over the CGAN approach adopted by other recent works aimed at the same general goals [6, 27]. GANs do not expose control over the structure of the latent space, while our additive prior results in an interpretable way to control the sampling process.
GANs are also notoriously tricky to train, in particular for discrete sampling problems like sentence generation (Dai et al. [6] have to resort to reinforcement learning and Shetty et al. [27] to an approximate Gumbel sampler [15]). Our CVAE training is much more straightforward. While we represent the z space as a simple vector space with multiple modes, it is possible to impose on it a more general graphical model structure [16], though this incurs a much greater level of complexity. Finally, from the viewpoint of inference, our work is also related to general approaches to diverse structured prediction, which focus on extracting multiple modes from a single energy function [3]. This is a hard problem necessitating sophisticated approximations, and we prefer to circumvent it by cheaply generating a large number of diverse and plausible candidates, so that “good enough” ones can be identified using simple re-ranking mechanisms.
Future work. We would like to investigate more general formulations for the conditioning information c(I), not necessarily relying on object labels whose supervisory information must be provided separately from the sentences. These can be obtained, for example, by automatically clustering nouns or noun phrases extracted from reference sentences, or even clustering vector representations of entire sentences. We are also interested in other tasks, such as question generation, where the cluster vectors can represent the question type (‘what is,’ ‘where is,’ ‘how many,’ etc.) as well as the image content. Control of the output by modifying the c vector would in this case be particularly natural.
Acknowledgments: This material is based upon work supported in part by the National Science Foundation under Grants No. 1563727 and 1718221, and by the Sloan Foundation. We would like to thank Jian Peng and Yang Liu for helpful discussions.
References
[1] Neuraltalk2. https://github.com/karpathy/neuraltalk2. [2] P. Anderson, B. Fernando, M.
Johnson, and S. Gould. Spice: Semantic propositional image caption evaluation. In ECCV, 2016. [3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-Best Solutions in Markov Random Fields. In ECCV, 2012. [4] C. M. Bishop. Mixture density networks. 1994. [5] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. [6] B. Dai, D. Lin, R. Urtasun, and S. Fidler. Towards diverse and natural image descriptions via a conditional gan. ICCV, 2017. [7] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014. [8] A. Deshpande, J. Lu, M.-C. Yeh, and D. Forsyth. Learning diverse image colorization. CVPR, 2017. [9] J. Devlin, H. Cheng, H. Fang, S. Gupta, L. Deng, X. He, G. Zweig, and M. Mitchell. Language models for image captioning: The quirks and what works. arXiv preprint arXiv:1505.01809, 2015. [10] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015. [11] A. Farhadi, M. Hejrati, M. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences from images. In ECCV, 2010. [12] J. R. Hershey and P. A. Olsen. Approximating the kullback leibler divergence between gaussian mixture models. In ICASSP, 2007. [13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. [14] U. Jain, Z. Zhang, and A. Schwing. Creativity: Generating diverse questions using variational autoencoders. CVPR, 2017. [15] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. ICLR, 2017. [16] M. J. Johnson, D. Duvenaud, A. Wiltschko, S. Datta, and R. Adams. 
Structured vaes: Composing probabilistic graphical models and variational autoencoders. NIPS, 2016. [17] D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014. [18] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal neural language models. In ICML, 2014. [19] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2891–2903, 2013. [20] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi. Generalizing image captions for image-text parallel corpus. In ACL, 2013. [21] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004. [22] S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy. Improved image captioning via policy gradient optimization of spider. ICCV, 2017. [23] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR, 2015. [24] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III. Midge: Generating image descriptions from computer vision detections. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 747–756. Association for Computational Linguistics, 2012. [25] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL. Association for Computational Linguistics, 2002. [26] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. [27] R. Shetty, M. Rohrbach, L. A. Hendricks, M. Fritz, and B. Schiele. Speaking the same language: Matching machine to human captions by adversarial training. ICCV, 2017. [28] K. 
Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [29] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In NIPS, 2015. [30] R. Vedantam, C. Lawrence Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015. [31] A. K. Vijayakumar, M. Cogswell, R. R. Selvaraju, Q. Sun, S. Lee, D. Crandall, and D. Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424, 2016. [32] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. [33] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: Lessons learned from the 2015 mscoco image captioning challenge. IEEE transactions on pattern analysis and machine intelligence, 2016. [34] L. Wang, Y. Li, and S. Lazebnik. Learning deep structure-preserving image-text embeddings. In CVPR, 2016. [35] Z. Wang, F. Wu, W. Lu, J. Xiao, X. Li, Z. Zhang, and Y. Zhuang. Diverse image captioning via grouptalk. In IJCAI, 2016. [36] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. [37] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. In CVPR, 2016.
Deconvolutional Paragraph Representation Learning
Yizhe Zhang Dinghan Shen Guoyin Wang Zhe Gan Ricardo Henao Lawrence Carin
Department of Electrical & Computer Engineering, Duke University
Abstract
Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrates the potential for better utilization of long unlabeled text data.
1 Introduction
A central task in natural language processing is to learn representations (features) for sentences or multi-sentence paragraphs. These representations are typically a required first step toward more applied tasks, such as sentiment analysis [1, 2, 3, 4], machine translation [5, 6, 7], dialogue systems [8, 9, 10] and text summarization [11, 12, 13]. An approach for learning sentence representations from data is to leverage an encoder-decoder framework [14]. In a standard autoencoding setup, a vector representation is first encoded from an embedding of an input sequence, then decoded to the original domain to reconstruct the input sequence. Recent advances in Recurrent Neural Networks (RNNs) [15], especially Long Short-Term Memory (LSTM) [16] and variants [17], have achieved great success in numerous tasks that heavily rely on sentence-representation learning.
RNN-based methods typically model sentences recursively as a generative Markov process with hidden units, where the one-step-ahead word from an input sentence is generated by conditioning on previous words and hidden units, via emission and transition operators modeled as neural networks. In principle, the neural representations of input sequences aim to encapsulate sufficient information about their structure, to subsequently recover the original sentences via decoding. However, due to the recursive nature of the RNN, challenges exist for RNN-based strategies to fully encode a sentence into a vector representation. Typically, during training, the RNN generates words in sequence conditioning on previous ground-truth words, i.e., teacher forcing training [18], rather than decoding the whole sentence solely from the encoded representation vector. This teacher forcing strategy has proven important because it forces the output sequence of the RNN to stay close to the ground-truth sequence. However, allowing the decoder to access ground-truth information when reconstructing the sequence weakens the encoder’s ability to produce self-contained representations that carry enough information to steer the decoder through the decoding process without additional guidance. Aiming to solve this problem, [19] proposed a scheduled sampling approach during training, which gradually shifts from learning via both latent representation and ground-truth signals to solely using the encoded latent representation. Unfortunately, [20] showed that scheduled sampling is a fundamentally inconsistent training strategy, in that it produces largely unstable results in practice. As a result, training may fail to converge on occasion.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
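The difference between teacher-forced training and free-running decoding can be illustrated with a toy decoding loop. The step function below is a hypothetical stand-in for one RNN decoder step (emit an output, update the state), not the paper's model:

```python
# Sketch: teacher-forced vs. free-running decoding.
# `step_fn` is a hypothetical one-step decoder: (prev_token, state) -> (output, state).
def decode(step_fn, h0, start_token, targets, teacher_forcing=True):
    outputs, h, prev = [], h0, start_token
    for t in range(len(targets)):
        y, h = step_fn(prev, h)
        outputs.append(y)
        # Teacher forcing feeds the ground-truth token back in;
        # free-running decoding feeds the model's own prediction.
        prev = targets[t] if teacher_forcing else y
    return outputs

# Toy decoder that always predicts "previous token + 1".
step = lambda prev, h: (prev + 1, h)
forced = decode(step, None, 0, [0, 1, 2, 3], teacher_forcing=True)   # [1, 1, 2, 3]
free = decode(step, None, 0, [0, 1, 2, 3], teacher_forcing=False)    # [1, 2, 3, 4]
```

Under teacher forcing the loop is re-anchored to the ground truth at every step, so a one-step error does not propagate; in free-running mode the same error compounds step after step, which is the exposure-bias effect discussed in the text.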
During inference, for which ground-truth sentences are not available, upcoming words can only be generated by conditioning on previously generated words through the representation vector. Consequently, decoding error compounds in proportion to the length of the sequence. This means that generated sentences quickly deviate from the ground truth once an error has been made, and increasingly so as the sentence progresses. This phenomenon was coined exposure bias in [19]. We propose a simple yet powerful purely convolutional framework for learning sentence representations. Conveniently, without RNNs in our framework, issues connected to teacher forcing training and exposure bias are not relevant. The proposed approach uses a Convolutional Neural Network (CNN) [21, 22, 23] as encoder and a deconvolutional (i.e., transposed convolutional) neural network [24, 25] as decoder. To the best of our knowledge, the proposed framework is the first to force the encoded latent representation to capture information from the entire sentence via a multi-layer CNN specification, to achieve high reconstruction quality without leveraging RNN-based decoders. Our multi-layer CNN allows representation vectors to abstract information from the entire sentence, irrespective of order or length, making it an appealing choice for tasks involving long sentences or paragraphs. Further, since our framework does not involve recursive encoding or decoding, it can be very efficiently parallelized using convolution-specific Graphics Processing Unit (GPU) primitives, yielding significant computational savings compared to RNN-based models.
2 Convolutional Auto-encoding for Text Modeling
2.1 Convolutional encoder
Let wt denote the t-th word in a given sentence. Each word wt is embedded into a k-dimensional word vector xt = We[wt], where We ∈ R^(k×V) is a (learned) word embedding matrix, V is the vocabulary size, and We[v] denotes the v-th column of We.
All columns of We are normalized to have unit ℓ2-norm, i.e., ||We[v]||_2 = 1 for all v, by dividing each column by its ℓ2-norm. After embedding, a sentence of length T (padded where necessary) is represented as X ∈ R^(k×T), by concatenating its word embeddings, i.e., xt is the t-th column of X. For sentence encoding, we use a CNN architecture similar to [26], though originally proposed for image data. The CNN consists of L layers (L−1 convolutional, and the L-th fully-connected) that ultimately summarize an input sentence into a (fixed-length) latent representation vector, h. Layer l ∈ {1, ..., L} consists of p_l filters, learned from data. For the i-th filter in layer 1, a convolutional operation with stride length r^(1) applies filter W_c^(i,1) ∈ R^(k×h) to X, where h is the convolution filter size. This yields the latent feature map

c^(i,1) = γ(X ∗ W_c^(i,1) + b^(i,1)) ∈ R^((T−h)/r^(1)+1),

where γ(·) is a nonlinear activation function, b^(i,1) ∈ R^((T−h)/r^(1)+1), and ∗ denotes the convolutional operator. In our experiments, γ(·) is a Rectified Linear Unit (ReLU) [27]. Note that the original embedding dimension, k, changes after the first convolutional layer, as c^(i,1) ∈ R^((T−h)/r^(1)+1) for i = 1, ..., p1. Concatenating the results from the p1 filters of layer 1 yields the feature map C^(1) = [c^(1,1) ... c^(p1,1)] ∈ R^(p1×[(T−h)/r^(1)+1]). After this first convolutional layer, we apply the convolution operation to the feature map C^(1), using the same filter size h, and repeat this in sequence for L−1 layers. Each time, the length along the spatial coordinate is reduced to T^(l+1) = ⌊(T^(l) − h)/r^(l) + 1⌋, where r^(l) is the stride length, T^(l) is the spatial length of layer l, and ⌊·⌋ is the floor function. For the final layer L, the feature map C^(L−1) is fed into a fully-connected layer to produce the latent representation h.
Implementation-wise, we use a convolutional layer with filter size equal to T^(L−1) (regardless of h), which is equivalent to a fully-connected layer; this implementation trick has also been utilized in [26]. This last layer summarizes all remaining spatial coordinates, T^(L−1), into scalar features that encapsulate sentence sub-structures throughout the entire sentence, characterized by the filters {W_c^(i,l)} for i = 1, ..., p_l and l = 1, ..., L, where W_c^(i,l) denotes filter i of layer l. This also implies that the extracted feature is of fixed dimensionality, independent of the length of the input sentence.

[Figure 1: Convolutional auto-encoding architecture. Encoder: the input sequence is first expanded to an embedding matrix, X, then fully compressed to a representation vector h, through a multi-layer convolutional encoder with stride. In the last layer, the spatial dimension is collapsed to remove the spatial dependency. Decoder: the latent vector h is fed through a multi-layer deconvolutional decoder with stride to reconstruct X as X̂, via a cosine-similarity cross-entropy loss.]

Having pL filters on the last layer results in the pL-dimensional representation vector h = C^(L) for the input sentence. For example, in Figure 1, the encoder consists of L = 3 layers, which for a sentence of length T = 60, embedding dimension k = 300, stride lengths {r^(1), r^(2), r^(3)} = {2, 2, 1}, filter sizes h = {5, 5, 12} and numbers of filters {p1, p2, p3} = {300, 600, 500}, results in intermediate feature maps C^(1) and C^(2) of sizes 28×300 and 12×600, respectively. The last feature map, of size 1×500, corresponds to the latent representation vector, h.
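The spatial-size bookkeeping above can be checked numerically. A minimal sketch that reproduces the Figure 1 example sizes (28, 12 and 1 along the spatial coordinate; function names are illustrative):

```python
# Spatial-size recursion for strided convolutions:
# T^(l+1) = floor((T^(l) - h) / r^(l)) + 1
def conv_out_len(T, h, r):
    return (T - h) // r + 1

def encoder_sizes(T, filter_sizes, strides):
    """Spatial length after each convolutional layer."""
    sizes = []
    for h, r in zip(filter_sizes, strides):
        T = conv_out_len(T, h, r)
        sizes.append(T)
    return sizes

# Figure 1 example: T = 60, filter sizes {5, 5, 12}, strides {2, 2, 1}.
sizes = encoder_sizes(60, [5, 5, 12], [2, 2, 1])  # [28, 12, 1]
```

The last layer's filter size equals the remaining spatial length (12), which is why it collapses the spatial dimension to 1, acting as a fully-connected layer.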
Conceptually, filters from the lower layers capture primitive sentence information (h-grams, analogous to edges in images), while higher-level filters capture more sophisticated linguistic features, such as semantic and syntactic structures (analogous to image elements). Such a bottom-up architecture models sentences by hierarchically stacking text segments (h-grams) as building blocks for the representation vector, h. This is similar in spirit to modeling linguistic grammar formalisms via concrete syntax trees [28]; however, we do not pre-specify a tree structure based on some syntactic structure (i.e., English language), but rather abstract it from data via a multi-layer convolutional network.
2.2 Deconvolutional decoder
We apply deconvolution with stride (i.e., convolutional transpose), as the conjugate operation of convolution, to decode the latent representation, h, back to the source (discrete) text domain. As the deconvolution operation proceeds, the spatial resolution gradually increases, mirroring the convolutional steps described above, as illustrated in Figure 1. The spatial dimension is first expanded to match the spatial dimension of the (L−1)-th layer of the convolutional encoder, then progressively expanded as T^(l+1) = (T^(l) − 1) ∗ r^(l) + h, for l = 1, ..., up to the L-th deconvolutional layer (which corresponds to the input layer of the convolutional encoder). The output of the L-layer deconvolution operation aims to reconstruct the word embedding matrix, which we denote as X̂. In line with the word embedding matrix We, the columns of X̂ are normalized to have unit ℓ2-norm. Denoting ŵt as the t-th word in the reconstructed sentence ŝ, the probability of ŵt being word v is specified as

p(ŵt = v) = exp[τ^(−1) Dcos(x̂t, We[v])] / Σ_{v′∈V} exp[τ^(−1) Dcos(x̂t, We[v′])] ,   (1)

where Dcos(x, y) is the cosine similarity, defined as ⟨x, y⟩/(||x|| ||y||), We[v] is the v-th column of We, x̂t is the t-th column of X̂, and τ is a positive number we denote as the temperature parameter [29].
This parameter is akin to the concentration parameter of a Dirichlet distribution, in that it controls the spread of the probability vector [p(ŵt = 1) ... p(ŵt = V)]; a large τ encourages uniformly distributed probabilities, whereas a small τ encourages sparse, concentrated probability values. In the experiments we set τ = 0.01. Note that in our setting, the cosine similarity can be obtained as an inner product, provided that the columns of We and X̂ have unit ℓ2-norm by specification. This deconvolutional module can also be leveraged as a building block in VAEs [30, 31] or GANs [32, 33].
2.3 Model learning
The objective of the convolutional autoencoder described above can be written as the word-wise log-likelihood for all sentences s ∈ D, i.e.,

Lae = Σ_{d∈D} Σ_t log p(ŵt^d = wt^d) ,   (2)

where D denotes the set of observed sentences. The simple maximum-likelihood objective in (2) is optimized via stochastic gradient descent. Details of the implementation are provided in the experiments. Note that (2) differs from prior related work in two ways: i) [22, 34] use pooling and un-pooling operators, while we use convolution/deconvolution with stride; and ii) more importantly, [22, 34] do not use a cosine-similarity reconstruction as in (1), but an RNN-based decoder. A further discussion of related work is provided in Section 3. We could use pooling and un-pooling instead of striding (a particular case of deterministic pooling/un-pooling); however, in early experiments (not shown) we did not observe significant performance gains, while convolution/deconvolution operations with stride are considerably more efficient in terms of memory footprint. Compared to a standard LSTM-based RNN sequence autoencoder with roughly the same number of parameters, computations in our case are considerably faster (see experiments) using a single NVIDIA TITAN X GPU. This is due to the high parallelization efficiency of CNNs via cuDNN primitives [35].
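The output distribution in (1) — a softmax over cosine similarities between a reconstructed embedding and the columns of the word embedding matrix, scaled by 1/τ — can be sketched as follows (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def word_probs(x_hat, W_e, tau=0.01):
    """Softmax over cosine similarities between x_hat and columns of W_e.

    x_hat: reconstructed embedding, shape (k,); W_e: embedding matrix, shape (k, V).
    """
    W = W_e / np.linalg.norm(W_e, axis=0, keepdims=True)  # unit-norm columns
    x = x_hat / np.linalg.norm(x_hat)
    logits = (W.T @ x) / tau     # cosine similarity = inner product of unit vectors
    logits -= logits.max()       # stabilize the softmax
    p = np.exp(logits)
    return p / p.sum()
```

With τ = 0.01 the distribution is sharply peaked at the vocabulary word whose embedding is most cosine-similar to the reconstructed column, which is what makes the word-wise likelihood in (2) effective.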
Comparison between deconvolutional and RNN decoders
The proposed framework can be seen as a complementary building block for natural language modeling. The deconvolutional decoder imposes, in general, a less strict sequence dependency than RNN architectures. Specifically, generating a word from an RNN requires a vector of hidden units that recursively accumulates information from the entire sentence in an order-preserving manner (long-term dependencies are heavily down-weighted), while for a deconvolutional decoder, generation depends only on a representation vector that encapsulates information from throughout the sentence without a pre-specified ordering structure. As a result, for language generation tasks, an RNN decoder will usually generate more coherent text than a deconvolutional decoder. Conversely, a deconvolutional decoder is better at accounting for distant dependencies in long sentences, which can be very beneficial in feature extraction for classification and text summarization tasks.
2.4 Semi-supervised classification and summarization
Identifying related topics or sentiments, and abstracting (short) summaries from user-generated content such as blogs or product reviews, has recently received significant interest [1, 3, 4, 36, 37, 13, 11]. In many practical scenarios, unlabeled data are abundant; however, there are not many practical cases where the potential of such unlabeled data is fully realized. Motivated by this opportunity, here we seek to complement scarcer but more valuable labeled data, to improve the generalization ability of supervised models. By ingesting unlabeled data, the model can learn to abstract latent representations that capture the semantic meaning of all available sentences, irrespective of whether or not they are labeled. This can be done prior to the supervised model training, as a two-step process.
Recently, RNN-based methods exploiting this idea have been widely utilized and have achieved state-of-the-art performance in many tasks [1, 3, 4, 36, 37]. Alternatively, one can learn the autoencoder and classifier jointly, by specifying a classification model whose input is the latent representation, h; see for instance [38, 31]. In the case of product reviews, for example, each review may contain hundreds of words. This poses challenges when training RNN-based sequence encoders, in the sense that the RNN has to abstract information on-the-fly as it moves through the sentence, which often leads to loss of information, particularly in long sentences [39]. Furthermore, the decoding process uses ground-truth information during training, thus the learned representation may not necessarily keep all information from the input text that is necessary for proper reconstruction, summarization or classification. We consider applying our convolutional autoencoding framework to semi-supervised learning from long sentences and paragraphs. Instead of pre-training a fully unsupervised model as in [1, 3], we cast the semi-supervised task as a multi-task learning problem similar to [40], i.e., we simultaneously train a sequence autoencoder and a supervised model. In principle, by using this joint training strategy, the learned paragraph embedding vector will preserve both reconstruction and classification ability. Specifically, we consider the following objective:

Lsemi = α Σ_{d∈{Dl+Du}} Σ_t log p(ŵt^d = wt^d) + Σ_{d∈Dl} Lsup(f(hd), yd) ,   (3)

where α > 0 is an annealing parameter balancing the relative importance of the supervised and unsupervised losses, and Dl and Du denote the sets of labeled and unlabeled data, respectively. The first term in (3) is the sequence autoencoder loss in (2) for the d-th sequence. Lsup(·) is the supervision loss for the d-th sequence (labeled only).
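The combination in (3) with an annealed weight α can be sketched as below. The paper anneals α from 1 to α_min = 0.01 on a schedule; the linear schedule here is an assumption for illustration, and all names are hypothetical:

```python
# Sketch of the semi-supervised objective (3) with annealed alpha.
# Assumption: a linear anneal from 1 to alpha_min; the paper states only
# that alpha is scheduled from 1 down to a small positive value.
def alpha_schedule(step, total_steps, alpha_min=0.01):
    frac = min(step / total_steps, 1.0)
    return (1.0 - frac) * 1.0 + frac * alpha_min

def semi_supervised_loss(recon_loss, sup_loss, alpha):
    # recon_loss: negative word-wise log-likelihood over labeled + unlabeled data
    # sup_loss: supervision loss over labeled data only
    return alpha * recon_loss + sup_loss
```

Early in training α ≈ 1, so the model focuses on abstracting paragraph features via reconstruction; as α decays toward α_min the supervised term dominates and the learned features are refined for the downstream task.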
The classifier function, f(·), which attempts to reconstruct yd from hd, can be either a Multi-Layer Perceptron (MLP) in classification tasks, or a CNN/RNN in text summarization tasks. For the latter, we are interested in a purely convolutional specification; however, we also consider an RNN for comparison. For classification, we use a standard cross-entropy loss, and for text summarization we use either (2) for the CNN or the standard LSTM loss for the RNN. In practice, we adopt a scheduled annealing strategy for α as in [41, 42], rather than fixing it a priori as in [1]. During training, (3) gradually transitions from focusing solely on the unsupervised sequence autoencoder to the supervised task, by annealing α from 1 to a small positive value αmin. We set αmin = 0.01 in the experiments. The motivation for this annealing strategy is to first focus on abstracting paragraph features, then to selectively refine the learned features that are most informative to the supervised task.
3 Related Work
Previous work has considered leveraging CNNs as encoders for various natural language processing tasks [22, 34, 21, 43, 44]. Typically, CNN-based encoder architectures apply a single convolution layer followed by a pooling layer, which essentially acts as a detector of specific classes of h-grams, given a convolution filter window of size h. The deep architecture in our framework will, in principle, enable the higher-level layers to capture more sophisticated language features. We use convolutions with stride rather than pooling operators, e.g., max-pooling, for spatial downsampling, following [26, 45], where it is argued that fully convolutional architectures are able to learn their own spatial downsampling. Further, [46] uses a 29-layer CNN for text classification. Our CNN encoder is considerably simpler in structure (convolutions with stride and no more than 4 layers) while still achieving good performance. Language decoders other than RNNs are less well studied.
Recently, [47] proposed a hybrid model coupling a convolutional-deconvolutional network with an RNN, where the RNN acts as the decoder and the deconvolutional model as a bridge between the encoder (convolutional network) and decoder. Additionally, [42, 48, 49, 50] considered CNN variants, such as PixelCNN [51], for text generation. Nevertheless, to achieve good empirical results, these methods still require the sentences to be generated sequentially, conditioning on ground-truth historical information, akin to RNN-based decoders, thus still suffering from exposure bias. Other efforts have been made to improve embeddings from long paragraphs using unsupervised approaches [2, 52]. The paragraph vector [2] learns a fixed-length vector by concatenating it with a word2vec [53] embedding of the history sequence to predict future words. The hierarchical neural autoencoder [52] builds a hierarchical attentive RNN, then uses the paragraph-level hidden units of that RNN as the embedding. Our work differs from these approaches in that we force the sequence to be fully restored from the latent representation, without aid from any history information. Previous methods have considered leveraging unlabeled data for semi-supervised sequence classification tasks. Typically, RNN-based methods consider either i) training a sequence-to-sequence RNN autoencoder, or an RNN classifier that is robust to adversarial perturbation, as initialization for the encoder in the supervised model [1, 4]; or ii) learning latent representations via a sequence-to-sequence RNN autoencoder, and then using them as inputs to a classifier that also takes features extracted from a CNN as inputs [3]. For summarization tasks, [54] considered a semi-supervised approach based on support vector machines; however, so far, research on semi-supervised text summarization using deep models is scarce.
4 Experiments
Experimental setup. For all the experiments, we use a 3-layer convolutional encoder followed by a 3-layer deconvolutional decoder (recall the implementation details for the top layer). Filter size, stride and word embedding dimension are set to h = 5, r^(l) = 2 for l = 1, ..., 3, and k = 300, respectively. The dimension of the latent representation vector varies for each experiment, thus is reported separately. For notational convenience, we denote our convolutional-deconvolutional autoencoder as CNN-DCNN. In most comparisons, we also considered two standard autoencoders as baselines: a) CNN-LSTM: CNN encoder coupled with LSTM decoder; and b) LSTM-LSTM: LSTM encoder with LSTM decoder. An LSTM-DCNN configuration is not included because it yields similar performance to CNN-DCNN while being more computationally expensive. The complete experimental setup and baseline details are provided in the Supplementary Material (SM). CNN-DCNN has the fewest parameters. For example, using 500 as the dimension of h results in about 9, 13 and 15 million total trainable parameters for CNN-DCNN, CNN-LSTM and LSTM-LSTM, respectively.

Table 1: Reconstructed paragraph of the Hotel Reviews example, used in [52].
Ground-truth: on every visit to nyc , the hotel beacon is the place we love to stay . so conveniently located to central park , lincoln center and great local restaurants . the rooms are lovely . beds so comfortable , a great little kitchen and new wizz bang coffee maker . the staff are so accommodating and just love walking across the street to the fairway supermarket with every imaginable goodies to eat .
Hier. LSTM [52]: every time in new york , lighthouse hotel is our favorite place to stay . very convenient , central park , lincoln center , and great restaurants . the room is wonderful , very comfortable bed , a kitchenette and a large explosion of coffee maker . the staff is so inclusive , just across the street to walk to the supermarket channel love with all kinds of what to eat .
Our LSTM-LSTM: on every visit to nyc , the hotel beacon is the place to relax and wanting to become conveniently located . hotel , in the evenings out good budget accommodations . the views are great and we were more than two couples . manny the doorman has a great big guy come and will definitly want to leave during my stay and enjoy a wonderfully relaxing wind break in having for 24 hour early rick’s cafe . oh perfect ! easy easy walking distance to everything imaginable groceries . if you may want to watch yours !
Our CNN-DCNN: on every visit to nyc , the hotel beacon is the place we love to stay . so closely located to central park , lincoln center and great local restaurants . biggest rooms are lovely . beds so comfortable , a great little kitchen and new UNK suggestion coffee maker . the staff turned so accommodating and just love walking across the street to former fairway supermarket with every food taxes to eat .

Table 2: Reconstruction evaluation results on the Hotel Reviews Dataset.
Model | BLEU | ROUGE-1 | ROUGE-2
LSTM-LSTM [52] | 24.1 | 57.1 | 30.2
Hier. LSTM-LSTM [52] | 26.7 | 59.0 | 33.0
Hier. + att. LSTM-LSTM [52] | 28.5 | 62.4 | 35.5
CNN-LSTM | 18.3 | 56.6 | 28.2
CNN-DCNN | 94.2 | 97.0 | 94.2

[Figure 2: BLEU score vs. sentence length for Hotel Review data, comparing CNN-DCNN, CNN-LSTM and LSTM-LSTM.]

Paragraph reconstruction. We first investigate the performance of the proposed autoencoder in terms of learning representations that can preserve paragraph information. We adopt evaluation criteria from [52], i.e., ROUGE score [55] and BLEU score [56], to measure the closeness of the reconstructed paragraph (model output) to the input paragraph. Briefly, ROUGE and BLEU scores measure the n-gram recall and precision between the model outputs and the (ground-truth) references. We use BLEU-4 and ROUGE-1, 2 in our evaluation, in alignment with [52].
In addition to the CNN-LSTM and LSTM-LSTM autoencoders, we also compare with the hierarchical LSTM autoencoder [52]. The comparison is performed on the Hotel Reviews dataset, following the experimental setup from [52], i.e., we only keep reviews with sentence length ranging from 50 to 250 words, resulting in 348,544 training samples and 39,023 testing samples. For all comparisons, we set the dimension of the latent representation to h = 500. From Table 1, we see that for long paragraphs the LSTM decoder in CNN-LSTM and LSTM-LSTM suffers heavily from exposure bias. We further evaluate the performance of each model with different paragraph lengths. As shown in Figure 2 and Table 2, CNN-DCNN demonstrates a clear advantage on this task, and the advantage becomes more substantial as sentence length increases. For LSTM-based methods, the quality of the reconstruction deteriorates quickly as sequences get longer. In contrast, the reconstruction quality of CNN-DCNN is stable and consistent regardless of sentence length. Furthermore, the computational cost, measured as wall-clock time, is significantly lower for CNN-DCNN: roughly, CNN-LSTM is 3 times slower than CNN-DCNN, and LSTM-LSTM is 5 times slower, on a single GPU. Details are reported in the SM. Character-level and word-level correction This task evaluates whether the deconvolutional decoder can overcome exposure bias, which severely limits LSTM-based decoders. We consider a denoising autoencoder where the input is tweaked slightly with certain modifications, and the model attempts to denoise (correct) the unknown modification, thus recovering the original sentence. For character-level correction, we consider the Yahoo! Answer dataset [57]. The dataset description and setup for word-level correction are provided in the SM. We follow the experimental setup in [58] for word-level and character-level spelling correction (see details in the SM).
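The corruption process for this denoising task — each symbol independently replaced with some probability (the experiments below use η = 0.30) — can be sketched as follows; the helper name and the toy vocabulary are ours:

```python
import random

def corrupt(tokens, vocab, eta=0.30, seed=0):
    """Replace each token with a different, randomly chosen vocabulary
    item with probability eta; leave the remaining tokens untouched."""
    rng = random.Random(seed)
    out = []
    for t in tokens:
        if rng.random() < eta:
            out.append(rng.choice([v for v in vocab if v != t]))
        else:
            out.append(t)
    return out

chars = list("can anyone suggest some good books ?")
vocab = list("abcdefghijklmnopqrstuvwxyz")
noisy = corrupt(chars, vocab, eta=0.30)
# on average about 30% of the characters now differ from the input
```

The autoencoder is trained to map the noisy sequence back to the clean one, so the latent representation must encode more than a copy of the surface input.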
We consider substituting each word/character with a different one at random with probability η, using η = 0.30. For character-level analysis, we first map all characters into a 40-dimensional embedding vector, with the network structure for word- and character-level models kept the same.

Figure 3: CER vs. training time (hours) for CNN-DCNN, CNN-LSTM and LSTM-LSTM. Black triangles indicate the end of an epoch.

Figure 4: Spelling error denoising comparison; in the original figure, darker colors indicate higher uncertainty. All models are trained on modified sentences. The two examples read:
Original: can anyone suggest some good books ?
Modified: cap anyonk wuggest xohe iord yooku ?
Actor-critic: can anyone withest to e ford you u ?
LSTM-LSTM: can anyone suggest joke food young ?
CNN-LSTM: can anyone guites some owe pooks ?
CNN-DCNN: can anyone suggest some wood books ?

Original: what s your idea of a stepping stone to better things to come ?
Modified: wuat s yogr idem of t stepukng jtzne ti better thingz tt coee ?
Actor-critic: what s your idem of t stepuang jokne ti better thing itt come ?
LSTM-LSTM: what s your idea of a speaking stand to better things to come ?
CNN-LSTM: what s your idem of a stepping start to better thing to come ?
CNN-DCNN: what s your idea of a stepping stone to better things to come ?

Table 3: CER and WER comparison on Yahoo and ArXiv data.
Model | Yahoo (CER)
Actor-critic [58] | 0.2284
LSTM-LSTM | 0.2621
CNN-LSTM | 0.2035
CNN-DCNN | 0.1323

Model | ArXiv (WER)
LSTM-LSTM | 0.7250
CNN-LSTM | 0.3819
CNN-DCNN | 0.3067

We employ the Character Error Rate (CER) [58] and Word Error Rate (WER) [59] for evaluation. CER/WER measure the ratio of the Levenshtein distance (a.k.a. edit distance) between model predictions and the ground truth to the total length of the sequence; lower CER/WER indicates better performance. We use LSTM-LSTM and CNN-LSTM denoising autoencoders for comparison. The architecture for the word-level baseline models is the same as in the previous experiment. For character-level correction, we set the dimension of h to 900. We also compare to actor-critic training [58], following their experimental guidelines (see details in the SM). As shown in Figure 3 and Table 3, CNN-DCNN achieves both lower CER and faster convergence. Further, CNN-DCNN delivers stable denoising performance irrespective of the noise location within the sentence, as seen in Figure 4.
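The CER computation described above — edit distance normalized by reference length — can be sketched as follows; this is a standard dynamic-programming Levenshtein distance, not code from the paper:

```python
def levenshtein(a, b):
    """Edit distance between sequences a and b, counting insertions,
    deletions, and substitutions, via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def cer(prediction, reference):
    """Character Error Rate: edit distance over reference length.
    Passing word lists instead of strings gives WER."""
    return levenshtein(prediction, reference) / len(reference)

print(cer("wood books", "good books"))  # one substitution in 10 chars -> 0.1
```

For WER, the same function is applied to token lists rather than character strings, matching the Yahoo (CER) and ArXiv (WER) columns of Table 3.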
For CNN-DCNN, even when an error is detected but not exactly corrected (darker colors in Figure 4 indicate higher uncertainty), the denoising of subsequent words is not affected, while for CNN-LSTM and LSTM-LSTM the error gradually accumulates as sequences get longer, as expected. For word-level correction, we consider both word substitutions only and mixed perturbations of three kinds: substitution, deletion and insertion. Generally, CNN-DCNN outperforms CNN-LSTM and LSTM-LSTM, and is faster. We provide experimental details and comparative results in the SM. Semi-supervised sequence classification & summarization We investigate whether our CNN-DCNN framework can improve supervised natural language tasks that leverage features learned from paragraphs. In principle, a good unsupervised feature extractor should improve generalization in a semi-supervised learning setting. We evaluate our approach on three popular natural language tasks: sentiment analysis, paragraph topic prediction and text summarization. The first two tasks are essentially sequence classification, while summarization involves both language comprehension and language generation. We consider three large-scale document classification datasets: DBPedia, Yahoo! Answers and Yelp Review Polarity [57]. The partition into training, validation and test sets for all datasets follows the settings from [57]. Detailed summary statistics of all datasets are given in the SM. To demonstrate the advantage of incorporating the reconstruction objective into the training of text classifiers, we further evaluate our model with different amounts of labeled data (0.1%, 0.15%, 0.25%, 1%, 10% and 100%), using the whole training set as unlabeled data. Our purely supervised baseline model (supervised CNN) uses the same convolutional encoder architecture described above, with a 500-dimensional latent representation, followed by an MLP classifier with one hidden layer of 300 hidden units.
The dropout rate is set to 50%. Word embeddings are initialized at random. As shown in Table 4, the joint training strategy consistently and significantly outperforms the purely supervised strategy across datasets, even when all labels are available. We hypothesize that during the early phase of training, when reconstruction is emphasized, features from text fragments can be readily learned. As training proceeds, the most discriminative text fragment features are selected. Further, the subset of features responsible for both reconstruction and discrimination presumably encapsulates longer-range dependency structure than the features learned with a purely supervised strategy. Figure 5 shows the behavior of our model in a semi-supervised setting on the Yelp Review dataset. Results for Yahoo! Answers and DBpedia are provided in the SM.

Table 4: Test error rates of document classification (%). Results from other methods were obtained from [57].
Model | DBpedia | Yelp P. | Yahoo
ngrams TFIDF | 1.31 | 4.56 | 31.49
Large Word ConvNet | 1.72 | 4.89 | 29.06
Small Word ConvNet | 1.85 | 5.54 | 30.02
Large Char ConvNet | 1.73 | 5.89 | 29.55
Small Char ConvNet | 1.98 | 6.53 | 29.84
SA-LSTM (word-level) | 1.40 | - | -
Deep ConvNet | 1.29 | 4.28 | 26.57
Ours (purely supervised) | 1.76 | 4.62 | 27.42
Ours (joint training with CNN-LSTM) | 1.36 | 4.21 | 26.32
Ours (joint training with CNN-DCNN) | 1.17 | 3.96 | 25.82

Figure 5: Semi-supervised classification accuracy vs. proportion of labeled data on Yelp review data (supervised baseline vs. semi-supervised CNN-DCNN and CNN-LSTM).

For summarization, we use a dataset composed of 58,000 abstract-title pairs from arXiv. Abstract-title pairs are selected if the length of the title and abstract do not exceed 50 and 500 words, respectively. We partition the data into training, validation and test sets of 55,000, 2,000 and 1,000 pairs, respectively.
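A minimal sketch of the semi-supervised joint objective discussed above: every example contributes a reconstruction term, while only labeled examples add a classification term. The per-example loss dictionaries and the weighting alpha are illustrative assumptions of ours; the paper's exact objective and weighting are given in the SM:

```python
def semi_supervised_loss(batch, alpha=1.0):
    """Mean joint objective over a mixed batch: a reconstruction term for
    every example (labeled or not), plus a supervised classification term
    (weighted by alpha) for the labeled subset only."""
    total = 0.0
    for ex in batch:
        total += ex["recon_loss"]             # L_recon: always available
        if ex.get("label_loss") is not None:  # L_sup: labeled subset only
            total += alpha * ex["label_loss"]
    return total / len(batch)

batch = [
    {"recon_loss": 0.8, "label_loss": 0.5},   # labeled example
    {"recon_loss": 1.2, "label_loss": None},  # unlabeled example
]
print(semi_supervised_loss(batch, alpha=1.0))  # (0.8 + 0.5 + 1.2) / 2 = 1.25
```

This structure is what lets the unlabeled 99.9% of the training set still shape the encoder in the low-label regimes of Figure 5.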
We train a sequence-to-sequence model to generate the title given the abstract, using a randomly selected subset of paired data with proportion σ ∈ {5%, 10%, 50%, 100%}. For every value of σ, we consider both purely supervised summarization using just abstract-title pairs, and semi-supervised summarization, leveraging additional abstracts without titles. We also compare LSTM and deconvolutional decoders for generating titles at σ = 100%.

Table 5: Summarization task on arXiv data, using the ROUGE-L metric. The first four columns are for the LSTM decoder; the last column is for the deconvolutional decoder (100% observed).
Obs. proportion σ | 5% | 10% | 50% | 100% | DCNN dec.
Supervised | 12.40 | 13.07 | 15.87 | 16.37 | 14.75
Semi-sup. | 16.04 | 16.62 | 17.64 | 18.14 | 16.83

Table 5 summarizes quantitative results using ROUGE-L (longest common subsequence) [55]. In general, the additional abstracts without titles improve generalization on the test set. Interestingly, even when σ = 100% (all titles are observed), the joint training objective still yields better performance than using Lsup alone. Presumably, since the joint training objective requires the latent representation to reconstruct the input paragraph in addition to generating a title, the learned representation may better capture the entire structure (meaning) of the paragraph. We also empirically observe that titles generated under the joint training objective are more likely to reuse words appearing in the corresponding paragraph (i.e., they are more extractive), while titles generated with the purely supervised objective Lsup tend to use wording more freely, and are thus more abstractive.
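ROUGE-L is based on the longest common subsequence (LCS). A minimal sketch of its recall side follows; the full metric combines LCS-based precision and recall into an F-measure, and the function names here are ours:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b,
    computed row by row with O(len(b)) memory."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def rouge_l_recall(candidate, reference):
    """LCS length normalized by reference length (the recall side of ROUGE-L)."""
    return lcs_length(candidate, reference) / len(reference)

ref = "deep generative models for text".split()
hyp = "generative models of text".split()
print(rouge_l_recall(hyp, ref))  # LCS = [generative, models, text] -> 3/5 = 0.6
```

Unlike BLEU-n, the LCS rewards in-order word matches even when they are not contiguous, which suits title generation where titles drop function words.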
One possible explanation is that, for the joint training strategy, since the reconstructed paragraph and title are both generated from the latent representation h, the text fragments used to reconstruct the input paragraph are more likely to be leveraged when "building" the title; the title thus bears more resemblance to the input paragraph. As expected, the titles produced by the deconvolutional decoder are less coherent than those from an LSTM decoder. Presumably, since each paragraph can be summarized with multiple plausible titles, the deconvolutional decoder may have trouble positioning text segments. We provide discussion and titles generated under different setups in the SM. Designing a framework that takes the best of these two worlds, LSTM for generation and CNN for decoding, will be an interesting future direction.

5 Conclusion We proposed a general framework for text modeling using purely convolutional and deconvolutional operations. The proposed method is free of sequential conditional generation, avoiding issues associated with exposure bias and teacher-forcing training. Our approach enables the model to fully encapsulate a paragraph in a latent representation vector, which can be decompressed to reconstruct the original input sequence. Empirically, the proposed approach achieves excellent long-paragraph reconstruction quality and outperforms existing algorithms on spelling correction and on semi-supervised sequence classification and summarization, at largely reduced computational cost.

Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA and ONR.

References
[1] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.
[2] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
[3] Rie Johnson and Tong Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings. arXiv, February 2016.
[4] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classification. In ICLR, May 2017.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
[6] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[7] Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. Encoding source language with convolutional neural network for machine translation. In ACL, 2015.
[8] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv, 2015.
[9] Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv, 2016.
[10] Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue generation. arXiv:1701.06547, 2017.
[11] Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In CoNLL, 2016.
[12] Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B Cohen. Neural extractive summarization with side information. arXiv, April 2017.
[13] Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In EMNLP, 2015.
[14] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[15] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
[16] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
[17] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv, 2014.
[18] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270–280, 1989.
[19] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS, 2015.
[20] Ferenc Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv, 2015.
[21] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. In ACL, 2014.
[22] Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
[23] Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. Learning generic sentence representations using convolutional neural networks. In EMNLP, 2017.
[24] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. PixelVAE: A latent variable model for natural images. arXiv, 2016.
[25] Yunchen Pu, Xin Yuan, Andrew Stevens, Chunyuan Li, and Lawrence Carin. A deep generative deconvolutional image model. In Artificial Intelligence and Statistics, pages 741–750, 2016.
[26] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 2015.
[27] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, pages 807–814, 2010.
[28] Ian Chiswell and Wilfrid Hodges. Mathematical logic, volume 3. OUP Oxford, 2007.
[29] Emil Julius Gumbel and Julius Lieblein. Statistical theory of extreme values and some practical applications: a series of lectures. 1954.
[30] Yunchen Pu, Xin Yuan, and Lawrence Carin. A generative model for deep convolutional learning. arXiv preprint arXiv:1504.04054, 2015.
[31] Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[32] Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In ICML, 2017.
[33] Zhe Gan, Liqun Chen, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, and Lawrence Carin. Triangle generative adversarial networks. arXiv preprint arXiv:1709.06548, 2017.
[34] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. JMLR, 2011.
[35] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv, 2014.
[36] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. Hierarchical attention networks for document classification. In NAACL, 2016.
[37] Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A recurrent neural network with long-range semantic dependency. In ICLR, 2016.
[38] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[39] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[40] Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP, 2011.
[41] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv, 2015.
[42] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved variational autoencoders for text modeling using dilated convolutions. arXiv, February 2017.
[43] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architectures for matching natural language sentences. In NIPS, 2014.
[44] Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. In NAACL HLT, 2015.
[45] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv, 2014.
[46] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[47] Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. A hybrid convolutional variational autoencoder for text generation. arXiv, February 2017.
[48] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv, 2016.
[49] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. arXiv, December 2016.
[50] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. arXiv, May 2017.
[51] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. In NIPS, pages 4790–4798, 2016.
[52] Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and documents. In ACL, 2015.
[53] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[54] Kam-Fai Wong, Mingli Wu, and Wenjie Li. Extractive summarization using supervised and semi-supervised learning. In ICCL, 2008.
[55] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In ACL workshop, 2004.
[56] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
[57] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, pages 649–657, 2015.
[58] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv, 2016.
[59] JP Woodard and JT Nelson. An information theoretic measure of speech recognition performance. In Workshop on standardisation for speech I/O, 1982.
Learning to See Physics via Visual De-animation Jiajun Wu MIT CSAIL Erika Lu University of Oxford Pushmeet Kohli DeepMind William T. Freeman MIT CSAIL, Google Research Joshua B. Tenenbaum MIT CSAIL Abstract We introduce a paradigm for understanding physical scenes without human annotations. At the core of our system is a physical world representation that is first recovered by a perception module and then utilized by physics and graphics engines. During training, the perception module and the generative models learn by visual de-animation — interpreting and reconstructing the visual information stream. During testing, the system first recovers the physical world state, and then uses the generative models for reasoning and future prediction. Even more so than forward simulation, inverting a physics or graphics engine is a computationally hard problem; we overcome this challenge by using a convolutional inversion network. Our system quickly recognizes the physical world state from appearance and motion cues, and has the flexibility to incorporate both differentiable and non-differentiable physics and graphics engines. We evaluate our system on both synthetic and real datasets involving multiple physical scenes, and demonstrate that our system performs well on both physical state estimation and reasoning problems. We further show that the knowledge learned on the synthetic dataset generalizes to constrained real images. 1 Introduction Inspired by human abilities, we wish to develop machine systems that understand scenes. Scene understanding has multiple defining characteristics which break down broadly into two features. First, human scene understanding is rich. Scene understanding is physical, predictive, and causal: rather than simply knowing what is where, one can also predict what may happen next, or what actions one can take, based on the physics afforded by the objects, their properties, and relations.
These predictions, hypotheticals, and counterfactuals are probabilistic, integrating uncertainty as to what is more or less likely to occur. Second, human scene understanding is fast. Most of the computation has to happen in a single, feedforward, bottom-up pass. There have been many systems proposed recently to tackle these challenges, but existing systems have architectural features that allow them to address one of these features but not the other. Typical approaches based on inverting graphics engines and physics simulators [Kulkarni et al., 2015b] achieve richness at the expense of speed. Conversely, neural networks such as PhysNet [Lerer et al., 2016] are fast, but their ability to generalize to rich physical predictions is limited. We propose a new approach to combine the best of both. Our overall framework for representation is based on graphics and physics engines, where graphics is run in reverse to build the initial physical scene representation, and physics is then run forward to imagine what will happen next or what can be done. Graphics can also be run in the forward direction to visualize the outputs of the physics simulation as images of what we expect to see in the future, or under different viewing conditions. Rather than use traditional, often slow inverse graphics methods [Kulkarni et al., 2015b], we learn to invert the graphics engine efficiently with a convolutional inversion network. Specifically, we use deep learning to train recognition models on the objects in our world for object detection, structure and viewpoint estimation, and physical property estimation.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Visual de-animation — we would like to recover the physical world representation behind the visual input, and combine it with generative physics simulation and rendering engines.
Bootstrapping from these predictions, we then infer the remaining scene properties through inference via forward simulation of the physics engine. Without human supervision, our system learns by visual de-animation: interpreting and reconstructing visual input. We show the problem formulation in Figure 1. The simulation and rendering engines in the framework force the perception module to extract physical world states that best explain the data. As the physical world states are inputs to physics and graphics engines, we simultaneously obtain an interpretable, disentangled, and compact physical scene representation. Our framework is flexible and adaptable to a number of graphics and physics engines. We present model variants that use neural, differentiable physics engines [Chang et al., 2017], and variants that use traditional physics engines, which are more mature but non-differentiable [Coumans, 2010]. We also explore various graphics engines operating at different levels, ranging from mid-level cues such as object velocity, to pixel-level rendering of images. We demonstrate our system on real and synthetic datasets across multiple domains: synthetic billiard videos [Fragkiadaki et al., 2016], in which balls have varied physical properties, real billiard videos from the web, and real images of block towers from Facebook AI Research [Lerer et al., 2016]. Our contributions are three-fold. First, we propose a novel generative pipeline for physical scene understanding, and demonstrate its flexibility by incorporating various graphics and physics engines. Second, we introduce the problem of visual de-animation – learning rich scene representations without supervision by interpreting and reconstructing visual input. Third, we show that our system performs well across multiple scenarios and on both synthetic and constrained real videos. 
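The de-animation loop described in this section — perceive an initial physical state, roll the physics engine forward, re-render, and score the reconstruction against the observed frames — can be illustrated with a toy 1-D example. All functions below are schematic placeholders of ours, not the paper's learned perception module or its physics/graphics engines:

```python
def simulate(state, dt=1.0):
    """Toy stand-in for a physics engine: constant-velocity update."""
    pos, vel = state
    return (pos + vel * dt, vel)

def render(state):
    """Toy stand-in for a graphics engine: observe only the position."""
    return state[0]

def de_animation_loss(frames, predicted_state):
    """Score a perceived initial state by rolling physics forward,
    re-rendering each step, and summing squared reconstruction errors
    against the observed frame sequence."""
    state, loss = predicted_state, 0.0
    for frame in frames:
        loss += (render(state) - frame) ** 2  # compare rendering to observation
        state = simulate(state)               # physics runs forward one step
    return loss

frames = [0.0, 2.0, 4.0]                      # object moving at velocity 2
print(de_animation_loss(frames, (0.0, 2.0)))  # correct state -> 0.0
print(de_animation_loss(frames, (0.0, 1.0)))  # wrong velocity -> 1 + 4 = 5.0
```

In the paper's setting this reconstruction error is what trains the perception module without human annotations: states that explain the video score low, so minimizing the loss recovers the physical world representation.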
2 Related Work

Physical scene understanding has attracted increasing attention in recent years [Gupta et al., 2010, Jia et al., 2015, Lerer et al., 2016, Zheng et al., 2015, Battaglia et al., 2013, Mottaghi et al., 2016b, Fragkiadaki et al., 2016, Battaglia et al., 2016, Mottaghi et al., 2016a, Chang et al., 2017, Agrawal et al., 2016, Pinto et al., 2016, Finn et al., 2016, Hamrick et al., 2017, Ehrhardt et al., 2017, Shao et al., 2014, Zhang et al., 2016]. Researchers have attempted to go beyond the traditional goals of high-level computer vision, inferring "what is where", to capture the physics needed to predict the immediate future of dynamic scenes, and to infer the actions an agent should take to achieve a goal. Most of these efforts do not attempt to learn physical object representations from raw observations. Some systems emphasize learning from pixels but without an explicitly object-based representation [Lerer et al., 2016, Mottaghi et al., 2016b, Fragkiadaki et al., 2016, Agrawal et al., 2016, Pinto et al., 2016, Li et al., 2017], which makes generalization challenging. Others learn a flexible model of the dynamics of object interactions, but assume a decomposition of the scene into physical objects and their properties rather than learning directly from images [Chang et al., 2017, Battaglia et al., 2016].

Figure 2: Our visual de-animation (VDA) model contains three major components: a convolutional perception module (I), a physics engine (II), and a graphics engine (III). The perception module efficiently inverts the graphics engine by inferring the physical object state for each segment proposal in input (a), and combines them to obtain a physical world representation (b).
The generative physics and graphics engines then run forward to reconstruct the visual data (e). See Section 3 for details.

There have been some works that aim to estimate physical object properties [Wu et al., 2016, 2015, Denil et al., 2017]. Wu et al. [2015] explored an analysis-by-synthesis approach that is easily generalizable, but less efficient. Their framework also lacked a perception module. Denil et al. [2017] instead proposed a reinforcement learning approach. These approaches, however, assumed strong priors over the scene, and approximated object shapes with primitives. Wu et al. [2016] used a feed-forward network for physical property estimation without assuming prior knowledge of the environment, but the constrained setup did not allow interactions between multiple objects. By incorporating physics and graphics engines, our approach can jointly learn the perception module and physical model, optionally in a Helmholtz machine style [Hinton et al., 1995], and recover an explicit physical object representation in a range of scenarios. Another line of related work is on future state prediction in either image pixels [Xue et al., 2016, Mathieu et al., 2016] or object trajectories [Kitani et al., 2017, Walker et al., 2015]. There has also been abundant research making use of physical models for human or scene tracking [Salzmann and Urtasun, 2011, Kyriazis and Argyros, 2013, Vondrak et al., 2013, Brubaker et al., 2009]. Our model builds upon and extends these ideas by jointly modeling an approximate physics engine and a perceptual module, with wide applications including, but not limited to, future prediction. Our framework also relates to the field of "vision as inverse graphics" [Zhu and Mumford, 2007, Yuille and Kersten, 2006, Bai et al., 2012].
Connected to but different from traditional analysis-by-synthesis approaches, recent works explored using deep neural networks to efficiently explain an object [Kulkarni et al., 2015a, Rezende et al., 2016], or a scene with multiple objects [Ba et al., 2015, Huang and Murphy, 2015, Eslami et al., 2016]. In particular, Wu et al. [2017] proposed "scene de-rendering", building an object-based, structured representation from a static image. Our work incorporates inverse graphics with simulation engines for physical scene understanding and scene dynamics modeling.

3 Visual De-animation

Our visual de-animation (VDA) model consists of an efficient inverse graphics component to build the initial physical world representation from visual input, a physics engine for physical reasoning of the scene, and a graphics engine for rendering videos. We show the framework in Figure 2. In this section, we first present an overview of the system, and then describe each component in detail.

3.1 Overview

The first component of our system is an approximate inverse graphics module for physical object and scene understanding, as shown in Figure 2-I. Specifically, the system sequentially computes object proposals, recognizes objects and estimates their physical state, and recovers the scene layout. The second component of our system is a physics engine, which uses the physical scene representation recovered by the inverse graphics module to simulate future dynamics of the environment (Figure 2-II). Our system adapts to both neural, differentiable simulators, which can be jointly trained with the perception module, and rigid-body, non-differentiable simulators, which can be incorporated using methods such as REINFORCE [Williams, 1992]. The third component of our framework is a graphics engine (Figure 2-III), which takes the scene representations from the physics engine and re-renders the video at various levels (e.g. optical flow, raw pixel).
The graphics engine may need additional appearance cues such as object shape or color (Figure 2c). Here, we approximate them using simple heuristics, as they are not a focus of our paper. There is a tradeoff between various rendering levels: while pixel-level reconstruction captures details of the scene, rendering at a more abstract level (e.g. silhouettes) may generalize better. We then use a likelihood function (Figure 2d) to evaluate the difference between synthesized and observed signals, and compute gradients or rewards for differentiable and non-differentiable systems, respectively. Our model combines efficient and powerful deep networks for recognition with rich simulation engines for forward prediction. This provides two major advantages over existing methods: first, simulation engines take an interpretable representation of the physical world, and can thus easily generalize and supply rich physical predictions; second, the model learns by explaining the observations — it can be trained in a self-supervised manner without requiring human annotations.

3.2 Physical Object and Scene Modeling

We now discuss each component in detail, starting with the perception module.

Object proposal generation Given one or a few frames (Figure 2a), we first generate a number of object proposals. The masked images are then used as input to the following stages of the pipeline.

Physical object state estimation For each segment proposal, we use a convolutional network to recognize the physical state of the object, which consists of intrinsic properties such as shape, mass, and friction, as well as extrinsic properties such as 3D position and pose. The input to the network is the masked image of the proposal, and the output is an interpretable vector for its physical state.

Physical world reconstruction Given objects' physical states, we first apply non-maximum suppression to remove object duplicates, and then reconstruct the physical world according to object states.
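The duplicate-removal step above can be sketched as greedy non-maximum suppression over scored proposals (our own illustration; the box format and overlap threshold are assumptions, not the paper's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(proposals, thresh=0.5):
    """Greedy NMS: visit proposals (score, box) in decreasing score order;
    keep a proposal only if it does not overlap an already-kept one."""
    kept = []
    for score, box in sorted(proposals, key=lambda p: -p[0]):
        if all(iou(box, kb) < thresh for _, kb in kept):
            kept.append((score, box))
    return kept
```

The surviving proposals, with their estimated physical states, form the reconstructed physical world.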
The physical world representation (Figure 2b) will be employed by the physics and graphics engines for simulation and rendering.

3.3 Physical Simulation and Prediction

We explore two types of physics engines in this paper: a neural, differentiable physics engine and a standard rigid-body simulation engine.

Neural physics engines The neural physics engine is an extension of the recent work from Chang et al. [2017], which simulates scene dynamics by taking as input object mass, position, and velocity. We extend their framework to model object friction in our experiments on billiard table videos. Though basic, the neural physics engine is differentiable, and can thus be trained end-to-end with our perception module to explain videos. Please refer to Chang et al. [2017] for details of the neural physics engine.

Rigid body simulation engines There exist rather mature rigid-body physics simulation engines, e.g., Bullet [Coumans, 2010]. Such physics engines are much more powerful, but non-differentiable. In our experiments on block towers, we used a non-differentiable simulator with multi-sample REINFORCE [Rezende et al., 2016, Mnih and Rezende, 2016] for joint training.

3.4 Re-rendering with a Graphics Engine

In this work, we consider two graphics engines operating at different levels: for the billiard table scenario, we use a renderer that takes the output of a physics engine and generates pixel-level rendering; for block towers, we use one that computes only object silhouettes.
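For the non-differentiable simulators of Section 3.3, gradients cannot flow through the physics step, so a score-function (REINFORCE) estimator is used instead. A minimal sketch of a multi-sample estimator with a leave-one-out baseline (our simplification to a single Gaussian-distributed scalar state; the black-box reward stands in for the likelihood of the simulated rollout):

```python
import numpy as np

def reinforce_grad(mu, sigma, reward_fn, k=16, rng=np.random.default_rng(0)):
    """Multi-sample score-function estimate of d E[r(z)] / d mu for
    z ~ N(mu, sigma^2), using a leave-one-out baseline to reduce variance."""
    z = rng.normal(mu, sigma, size=k)
    r = np.array([reward_fn(zi) for zi in z])
    # Leave-one-out baseline: mean reward of the other k-1 samples.
    baseline = (r.sum() - r) / (k - 1)
    score = (z - mu) / sigma ** 2  # d log N(z; mu, sigma) / d mu
    return np.mean((r - baseline) * score)
```

For example, with reward r(z) = -(z - 2)^2 and mu = 0, the true gradient of the expected reward is 4, and the estimate concentrates around that value as k grows.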
Figure 3: The three settings of our synthetic billiard videos: (a) balls have the same appearance and physical properties, where the system learns to discover them and simulate the dynamics; (b) balls have varied appearance and physics, and the system learns to associate appearance cues with underlying object states, even from a single image; (c) balls have the same appearance but different physics, and the system learns their physics from motion.

Figure 4: Results on the billiard videos, comparing ground truth videos with our predictions. We show two of three input frames (in red) due to space constraints. Left: balls share appearance and physics (I), where our framework learns to discover objects and simulate scene dynamics. Top right: balls have different appearance and physics (II), where our model learns to associate appearance with physics and simulate collisions. It learns that the green ball should move further than the heavier blue ball after the collision. Bottom right: balls share appearance but have different frictions (III), where our model learns to associate motion with friction. It realizes from three input frames that the right-most ball in the first frame has a large friction coefficient and will stop before the other balls.

4 Evaluation

We evaluate variants of our frameworks in three scenarios: synthetic billiard videos, real billiard videos, and block towers. We also test how models trained on synthetic data generalize to real cases.

4.1 Billiard Tables: A Motivating Example

We begin with synthetic billiard videos to explore end-to-end learning of the perceptual module along with differentiable simulation engines.
We explore how our framework learns the physical object state (position, velocity, mass, and friction) from its appearance and/or motion.

Data For the billiard table scenario, we generate data using the released code from Fragkiadaki et al. [2016]. We updated the code to allow balls of different mass and friction. We used the billiard table scenario as an initial exploration of whether our models can learn to associate visual object appearance and motion with physical properties. As shown in Figure 3, we generated three subsets, in which balls may have shared or differing appearance (color) and physical properties. For each case, we generated 9,000 videos for training and 200 for testing.

(I) Shared appearance and physics (Figure 3a): balls all have the same appearance and the same physical properties. This basic setup evaluates whether we can jointly learn an object (ball) discoverer and a physics engine for scene dynamics.

| Appear. | Physics | Method     | Recon. (Pixel MSE) | Pos. 1st | Pos. 5th | Pos. 10th | Pos. 20th | Vel. 1st | Vel. 5th | Vel. 10th | Vel. 20th |
|---------|---------|------------|--------------------|----------|----------|-----------|-----------|----------|----------|-----------|-----------|
| Same    | Same    | Baseline   | 0.046 | 4.58 | 18.20 | 46.06  | 119.97 | 2.95  | 5.63  | 7.32  | 8.45  |
| Same    | Same    | VDA (init) | 0.046 | 3.46 | 6.61  | 12.76  | 26.10  | 1.41  | 1.97  | 2.34  | 2.65  |
| Same    | Same    | VDA (full) | 0.044 | 3.25 | 6.52  | 12.34  | 25.55  | 1.37  | 1.87  | 2.22  | 2.55  |
| Diff.   | Diff.   | Baseline   | 0.058 | 6.57 | 26.38 | 70.47  | 180.04 | 3.78  | 7.62  | 10.51 | 12.02 |
| Diff.   | Diff.   | VDA (init) | 0.058 | 3.82 | 8.92  | 17.09  | 34.65  | 1.65  | 2.27  | 3.02  | 3.21  |
| Diff.   | Diff.   | VDA (full) | 0.055 | 3.55 | 8.58  | 16.33  | 32.97  | 1.64  | 2.20  | 2.89  | 3.05  |
| Same    | Diff.   | Baseline   | 0.038 | 9.58 | 79.78 | 143.67 | 202.56 | 12.37 | 23.42 | 25.08 | 23.98 |
| Same    | Diff.   | VDA (init) | 0.038 | 6.06 | 19.75 | 34.24  | 46.40  | 3.37  | 5.16  | 5.01  | 3.77  |
| Same    | Diff.   | VDA (full) | 0.035 | 5.77 | 19.34 | 33.25  | 43.42  | 3.23  | 4.98  | 4.77  | 3.35  |

Table 1: Quantitative results on synthetic billiard table videos. We evaluate our visual de-animation (VDA) model before and after joint training (init vs. full). For scene reconstruction, we compute MSE between rendered images and ground truth.
For future prediction, we compute the Manhattan distance in pixels between predicted object position and velocity and the ground truth, at the 1st, 5th, 10th, and 20th future frames. Our full model performs better. See qualitative results in Figure 4.

Figure 5: Behavioral study results on future position prediction of billiard videos where balls have the same physical properties (a), and balls have varied physical properties (b). For each model and humans, we compare how their long-term relative prediction error grows, by measuring the ratio with respect to errors in predicting the first next frame. Compared to the baseline model, the behavior of our prediction models aligns well with human predictions.

(II) Varied appearance and physics (Figure 3b): balls can be of three different masses (light, medium, heavy), and two different friction coefficients. Each of the six possible combinations is associated with a unique color (appearance). In this setup, the scene de-rendering component should be able to associate object appearance with its physical properties, even from a single image.

(III) Shared appearance, varied physics (Figure 3c): balls have the same appearance, but have one of two different friction coefficients. Here, the perceptual component should be able to associate object motion with its corresponding friction coefficients, from just a few input images.

Setup For this task, the physical state of an object is its intrinsic properties, including mass m and friction f, and its extrinsic properties, including 2D position {x, y} and velocity v.
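The per-ball state above can be packaged as a small record, together with a hand-coded stand-in for one simulation step (a sketch; the field names and the simple friction model are our own illustration, and the learned neural engine replaces this update):

```python
from dataclasses import dataclass
import math

@dataclass
class BallState:
    """Intrinsic (mass, friction) and extrinsic (position, velocity)
    properties of one ball; field names are ours, not the paper's."""
    mass: float
    friction: float
    x: float
    y: float
    vx: float
    vy: float

    def step(self, dt=0.1):
        """One Euler step: friction decelerates the ball along its
        velocity direction, never reversing the motion."""
        speed = math.hypot(self.vx, self.vy)
        if speed > 0:
            scale = max(0.0, 1.0 - self.friction * dt / speed)
            self.vx *= scale
            self.vy *= scale
        self.x += self.vx * dt
        self.y += self.vy * dt
```

A ball with a larger friction coefficient loses speed faster and stops earlier, which is the cue the perception module exploits in setting (III).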
Our system takes three 256×256 RGB frames I1, I2, I3 as input. It first obtains flow fields from I1 to I2 and from I2 to I3 with a pretrained spatial pyramid network (SPyNet) [Ranjan and Black, 2017]. It then generates object proposals by applying color filters to the input images. Our perceptual model is a ResNet-18 [He et al., 2015], which takes as input three masked RGB frames and two masked flow images of each object proposal, and recovers the object's physical state. We use a differentiable, neural physics engine with object intrinsic properties as parameters; at each step, it predicts objects' extrinsic properties (position {x, y} and velocity v) in the next frame, based on their current estimates. We employ a graphics engine that renders original images from the predicted positions, where the color of the balls is set as the mean color of the input object proposal. The likelihood function compares, at a pixel level, these rendered images and observations. It is straightforward to compute the gradient of the loss with respect to object position from rendered RGB images and ground truth. Thus, this simple graphics engine is also differentiable, making our system end-to-end trainable.

Figure 6: Sample results on web videos of real billiard games and computer games with realistic rendering. Left: our method correctly estimates the trajectories of multiple objects. Right: our framework correctly predicts the two collisions (white vs. red, white vs. blue), despite the motion blur in the input, though it underestimates the velocity of the red ball after the collision. Note that the billiard table is a chaotic system, and highly accurate long-term prediction is intractable.

Our training paradigm consists of two steps. First, we pretrain the perception module and the neural physics engine separately on synthetic data, where ground truth is available. The second step is end-to-end fine-tuning without annotations.
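The pixel-level likelihood and its position gradient can be illustrated with a toy differentiable renderer (a sketch under our own simplification that each ball renders as an isotropic Gaussian intensity blob; the paper's renderer instead uses the proposal's mean color):

```python
import numpy as np

def render(x, y, size=32, sigma=2.0):
    """Render a ball at (x, y) as a Gaussian intensity blob."""
    gy, gx = np.mgrid[0:size, 0:size]
    return np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))

def loss_and_grad(x, y, observed, sigma=2.0):
    """Pixel-level MSE between rendering and observation, with the
    analytic gradient of the loss with respect to object position."""
    size = observed.shape[0]
    gy, gx = np.mgrid[0:size, 0:size]
    img = render(x, y, size, sigma)
    diff = img - observed
    loss = np.mean(diff ** 2)
    # d img / d x = img * (gx - x) / sigma^2, by the chain rule.
    dldx = np.mean(2 * diff * img * (gx - x) / sigma ** 2)
    dldy = np.mean(2 * diff * img * (gy - y) / sigma ** 2)
    return loss, dldx, dldy
```

Gradient descent on (x, y) under this loss pulls the rendered ball toward the observed one, which is exactly what makes the full pipeline end-to-end trainable.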
We observe that the framework does not converge well without pre-training, possibly due to the multiple hypotheses that can explain a scene (e.g., we can only observe relative, not absolute, masses from collisions). We train our framework using SGD, with a learning rate of 0.001 and a momentum of 0.9. We implement our framework in Torch7 [Collobert et al., 2011]. During testing, the perception module recovers object physical states, and the learned physics engine then runs forward for future prediction.

Results Our formulation recovers a rich representation of the scene. With the generative models, we show results in scene reconstruction and future prediction. We compare two variants of our algorithm: the initial system has its perception module and neural physics engine separately trained, while the full system has an additional end-to-end fine-tuning step, as discussed above. We also compare with a baseline, which has the same perception model, but in prediction simply repeats object dynamics from the past without considering interactions among objects.

Scene reconstruction: given input frames, we are able to reconstruct the images based on inferred physical states. For evaluation, we compute pixel-level MSE between reconstructions and observed images. We show qualitative results in Figure 4 and quantitative results in Table 1.

Future prediction: with the learned neural simulation engine, our system is able to predict future events based on physical world states. We show qualitative results in Figure 4 and quantitative results in Table 1, where we compute the Manhattan distance in pixels between predicted positions and velocities and the ground truth. Our model achieves good performance in reconstructing the scene, understanding object physics, and predicting scene dynamics. See caption for details.
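The prediction metric used in Table 1, the mean Manhattan (L1) distance in pixels, can be computed as follows (a sketch; the per-object array layout is our assumption):

```python
import numpy as np

def manhattan_error(pred, gt):
    """Mean Manhattan (L1) distance, in pixels, between predicted and
    ground-truth per-object quantities (e.g., positions or velocities).
    pred, gt: (num_objects, 2) arrays of (x, y) values."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return np.abs(pred - gt).sum(axis=1).mean()
```

The same function applies to either positions or velocities at each future-frame horizon.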
Behavioral studies We further conduct behavioral studies, where we select 50 test cases, show the first three frames of each to human subjects, and ask them to predict the positions of each ball in future frames. We test 3 subjects per case on Amazon Mechanical Turk. For each model and humans, we compare how their long-term relative prediction error grows, by measuring the ratio with respect to errors in predicting the first next frame. As shown in Figure 5, the behavior of our models resembles human predictions much better than the baseline model does.

4.2 Billiard Tables: Transferring to Real Videos

Data We also collected videos from YouTube, segmenting them into two-second clips. Some videos are from real billiard competitions, and the others are from computer games with realistic rendering. We use these clips as an out-of-sample test set for evaluating the model's generalization ability.

Setup and Results Our setup is the same as that in Section 4.1, except that we now re-train the perceptual model on the synthetic data of varied physics, but with flow images as input instead of RGB images. Flow images abstract away appearance changes (color, lighting, etc.), allowing the model to generalize better to real data. We show qualitative results of reconstruction and future prediction in Figure 6 by rendering our inferred representation using the graphics software Blender.

(a) Our reconstruction and prediction results given a single frame (marked in red). From top to bottom: ground truth, our results, results from Lerer et al. [2016].
| Methods     | 2  | 3  | 4  | Mean |
|-------------|----|----|----|------|
| Chance      | 50 | 50 | 50 | 50   |
| Humans      | 67 | 62 | 62 | 64   |
| PhysNet     | 66 | 66 | 73 | 68   |
| GoogLeNet   | 70 | 70 | 70 | 70   |
| VDA (init)  | 73 | 74 | 72 | 73   |
| VDA (joint) | 75 | 76 | 73 | 75   |
| VDA (full)  | 76 | 76 | 74 | 75   |

(b) Accuracy (%) of stability prediction on the blocks dataset, by number of blocks

| Methods     | 2  | 3  | 4  | Mean |
|-------------|----|----|----|------|
| PhysNet     | 56 | 68 | 70 | 65   |
| GoogLeNet   | 70 | 67 | 71 | 69   |
| VDA (init)  | 74 | 74 | 67 | 72   |
| VDA (joint) | 75 | 77 | 70 | 74   |
| VDA (full)  | 76 | 76 | 72 | 75   |

(c) Accuracy (%) of stability prediction when trained on synthetic towers of 2 and 4 blocks, and tested on all block tower sizes

(d) Our reconstruction and prediction results given a single frame (marked in red)

Figure 7: Results on the blocks dataset [Lerer et al., 2016]. For quantitative results (b), we compare three variants of our visual de-animation (VDA) model: perceptual module trained without fine-tuning (init), joint fine-tuning with REINFORCE (joint), and full model considering the stability constraint (full). We also compare with PhysNet [Lerer et al., 2016] and GoogLeNet [Szegedy et al., 2015].

4.3 The Blocks World

We now look into a different scenario — block towers. In this experiment, we demonstrate the applicability of our model to explain and reason from a static image, instead of a video. We focus on reasoning about object states in the 3D world, instead of physical properties such as mass. We also explore how our framework performs with non-differentiable simulation engines, and how physics signals (e.g., stability) could help in physical reasoning, even when given only a static image.

Data Lerer et al. [2016] built a dataset of 492 images of real block towers, with ground truth stability values. Each image may contain 2, 3, or 4 blocks of red, blue, yellow, or green color. Though the blocks are the same size, their sizes in each 2D image differ due to the 3D-to-2D perspective transformation. Objects are made of the same material and thus have identical mass and friction.
Figure 8: Examples of predicting hypothetical scenarios and actively engaging with the scene. Left: predictions of the outcome of forces applied to two stable towers. Right: multiple ways to stabilize two unstable towers.

Setup Here, the physical state of an object (block) consists of its 3D position {x, y, z} and 3D rotation (roll, pitch, yaw, each quantized into 20 bins). Our perceptual model is again a ResNet-18 [He et al., 2015], which takes block silhouettes generated by simple color filters as input, and recovers the object's physical state. For this task, we implement an efficient, non-differentiable, rigid-body simulator to predict whether the blocks are stable. We also implement a graphics engine to render object silhouettes for reconstructing the input. Our likelihood function consists of two terms: the MSE between rendered silhouettes and observations, and the binary cross-entropy between the predicted stability and the ground truth stability. Our training paradigm resembles the classic wake-sleep algorithm [Hinton et al., 1995]: first, generate 10,000 training images using the simulation engines; second, train the perception module on synthetic data with ground truth physical states; third, fine-tune the perceptual module end-to-end by explaining an additional 100,000 synthetic images without annotations of physical states, but with binary annotations of stability. We use multi-sample REINFORCE [Rezende et al., 2016, Mnih and Rezende, 2016] with 16 samples per input, assuming each position parameter follows a Gaussian distribution and each rotation parameter follows a multinomial distribution (quantized into 20 bins). We observe that this training paradigm helps the framework converge. The other settings are the same as in Section 4.1.

Results We show results on two tasks: scene reconstruction and stability prediction.
For each task, we compare three variants of our algorithm: the initial system has its perception module trained without fine-tuning; an intermediate system has joint end-to-end fine-tuning, but without considering the physics constraint; and the full system considers both reconstruction and physical stability during fine-tuning. We show qualitative results on scene reconstruction in Figures 7a and 7d, where we also demonstrate future prediction results by exporting our inferred physical states into Blender. We show quantitative results on stability prediction in Table 7b, where we compare our models with PhysNet [Lerer et al., 2016] and GoogLeNet [Szegedy et al., 2015]. Given only a static image as test input, our algorithms achieve higher prediction accuracy (75% vs. 70%) efficiently (<10 milliseconds per image). Our framework also generalizes well. We test out-of-sample generalization, where we train our model on 2- and 4-block towers, but test it on all tower sizes. We show results in Table 7c. Further, in Figure 8, we show examples where our physical scene representation, combined with a physics engine, can easily make conditional predictions, answering "What happens if...?"-type questions. Specifically, we show frame prediction of external forces on stable block towers, as well as ways that an agent can stabilize currently unstable towers, with the help of rich simulation engines.

5 Discussion

We propose combining efficient, bottom-up, neural perception modules with rich, generalizable simulation engines for physical scene understanding. Our framework is flexible and can incorporate various graphics and physics engines. It performs well across multiple synthetic and real scenarios, reconstructing the scene and making future predictions accurately and efficiently.
We expect our framework to have wider applications in the future, due to the rapid development of scene description languages, 3D reconstruction methods, simulation engines and virtual environments.

Acknowledgements

We thank Michael Chang, Donglai Wei, and Joseph Lim for helpful discussions. This work is supported by NSF #1212849 and #1447476, ONR MURI N00014-16-1-2007, the Center for Brain, Minds and Machines (NSF #1231216), Toyota Research Institute, Samsung, Shell, and the MIT Advanced Undergraduate Research Opportunities Program (SuperUROP).

References

Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Learning to poke by poking: Experiential learning of intuitive physics. In NIPS, 2016. 2

Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015. 3

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, and Ravi Ramamoorthi. Selectively de-animating video. ACM TOG, 31(4):66, 2012. 3

Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. PNAS, 110(45):18327–18332, 2013. 2

Peter W Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In NIPS, 2016. 2

Marcus A. Brubaker, David J. Fleet, and Aaron Hertzmann. Physics-based person tracking using the anthropomorphic walker. IJCV, 87(1-2):140–155, 2009. 3

Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. In ICLR, 2017. 2, 4

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. 7

Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010. 2, 4

Misha Denil, Pulkit Agrawal, Tejas D Kulkarni, Tom Erez, Peter Battaglia, and Nando de Freitas.
Learning to perform physics experiments via deep reinforcement learning. In ICLR, 2017. 3

Sebastien Ehrhardt, Aron Monszpart, Niloy J Mitra, and Andrea Vedaldi. Learning a physical long-term predictor. arXiv:1703.00247, 2017. 2

SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In NIPS, 2016. 3

Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016. 2

Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. In ICLR, 2016. 2, 5

Abhinav Gupta, Alexei A Efros, and Martial Hebert. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In ECCV, 2010. 2

Jessica B Hamrick, Andrew J Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W Battaglia. Metacontrol for adaptive imagination-based optimization. In ICLR, 2017. 2

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2015. 6, 9

Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158, 1995. 3, 9

Jonathan Huang and Kevin Murphy. Efficient inference in occlusion-aware generative models of images. In ICLR Workshop, 2015. 3

Zhaoyin Jia, Andy Gallagher, Ashutosh Saxena, and Tsuhan Chen. 3d reasoning from blocks to stability. IEEE TPAMI, 37(5):905–918, 2015. 2

Kris M. Kitani, De-An Huang, and Wei-Chiu Ma. Activity forecasting. In Group and Crowd Behavior for Computer Vision, pages 273–294. Elsevier, 2017. 3

Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, 2015a. 3

Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum.
Deep convolutional inverse graphics network. In NIPS, 2015b. 1

Nikolaos Kyriazis and Antonis Argyros. Physically plausible 3d scene tracking: The single actor hypothesis. In CVPR, 2013. 3

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In ICML, 2016. 1, 2, 8, 9

Wenbin Li, Ales Leonardis, and Mario Fritz. Visual stability prediction for robotic manipulation. In ICRA, 2017. 2

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016. 3

Andriy Mnih and Danilo J Rezende. Variational inference for monte carlo objectives. In ICML, 2016. 4, 9

Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, and Ali Farhadi. Newtonian scene understanding: Unfolding the dynamics of objects in static images. In CVPR, 2016a. 2

Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. "what happens if..." learning to predict the effect of forces in images. In ECCV, 2016b. 2

Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, and Abhinav Gupta. The curious robot: Learning visual representations via physical interactions. In ECCV, 2016. 2

Anurag Ranjan and Michael J Black. Optical flow estimation using a spatial pyramid network. In CVPR, 2017. 6

Danilo Jimenez Rezende, SM Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016. 3, 4, 9

Mathieu Salzmann and Raquel Urtasun. Physically-based motion models for 3d tracking: A convex formulation. In ICCV, 2011. 3

Tianjia Shao, Aron Monszpart, Youyi Zheng, Bongjin Koo, Weiwei Xu, Kun Zhou, and Niloy J Mitra. Imagining the unseen: Stability-based cuboid arrangements for scene understanding. ACM TOG, 33(6), 2014. 2

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
8, 9

Marek Vondrak, Leonid Sigal, and Odest Chadwicke Jenkins. Dynamical simulation priors for human motion tracking. IEEE TPAMI, 35(1):52–65, 2013. 3

Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense optical flow prediction from a static image. In ICCV, 2015. 3

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. MLJ, 8(3-4):229–256, 1992. 4

Jiajun Wu, Ilker Yildirim, Joseph J Lim, William T Freeman, and Joshua B Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, 2015. 3

Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. Physics 101: Learning physical object properties from unlabeled videos. In BMVC, 2016. 3

Jiajun Wu, Joshua B Tenenbaum, and Pushmeet Kohli. Neural scene de-rendering. In CVPR, 2017. 3

Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016. 3

Alan Yuille and Daniel Kersten. Vision as bayesian inference: analysis by synthesis? TiCS, 10(7):301–308, 2006. 3

Renqiao Zhang, Jiajun Wu, Chengkai Zhang, William T Freeman, and Joshua B Tenenbaum. A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding. In CogSci, 2016. 2

Bo Zheng, Yibiao Zhao, Joey Yu, Katsushi Ikeuchi, and Song-Chun Zhu. Scene understanding by reasoning stability and safety. IJCV, 112(2):221–238, 2015. 2

Song-Chun Zhu and David Mumford. A stochastic grammar of images. Foundations and Trends® in Computer Graphics and Vision, 2(4):259–362, 2007. 3
6,683 | Adversarial Symmetric Variational Autoencoder Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li and Lawrence Carin Department of Electrical and Computer Engineering, Duke University {yp42, ww109, r.henao, lc267, zg27, cl319, lcarin}@duke.edu Abstract A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits of observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets. 1 Introduction Recently there has been increasing interest in developing generative models of data, offering the promise of learning based on the often vast quantity of unlabeled data. With such learning, one typically seeks to build rich, hierarchical probabilistic models that are able to fit to the distribution of complex real data, and are also capable of realistic data synthesis. Generative models are often characterized by latent variables (codes), and the variability in the codes encompasses the variation in the data [1, 2]. The generative adversarial network (GAN) [3] employs a generative model in which the code is drawn from a simple distribution (e.g., isotropic Gaussian), and then the code is fed through a sophisticated deep neural network (decoder) to manifest the data.
In the context of data synthesis, GANs have shown tremendous capabilities in generating realistic, sharp images from models that learn to mimic the structure of real data [3, 4, 5, 6, 7, 8]. The quality of GAN-generated images has been evaluated by somewhat ad hoc metrics like inception score [9]. However, the original GAN formulation does not allow inference of the underlying code, given observed data. This makes it difficult to quantify the quality of the generative model, as it is not possible to compute the quality of model fit to data. To provide a principled quantitative analysis of model fit, not only should the generative model synthesize realistic-looking data, one also desires the ability to infer the latent code given data (using an encoder). Recent GAN extensions [10, 11] have sought to address this limitation by learning an inverse mapping (encoder) to project data into the latent space, achieving encouraging results on semi-supervised learning. However, these methods still fail to obtain faithful reproductions of the input data, partly due to model underfitting when learning from a fully adversarial objective [10, 11]. Variational autoencoders (VAEs) are designed to learn both an encoder and decoder, leading to excellent data reconstruction and the ability to quantify a bound on the log-likelihood fit of the model to data [12, 13, 14, 15, 16, 17, 18, 19]. In addition, the inferred latent codes can be utilized in downstream applications, including classification [20] and image captioning [21]. However, new images synthesized by VAEs tend to be unspecific and/or blurry, with relatively low resolution. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. These limitations of VAEs are becoming increasingly understood. Specifically, the traditional VAE seeks to maximize a lower bound on the log-likelihood of the generative model, and therefore VAEs inherit the limitations of maximum-likelihood (ML) learning [22].
Specifically, in ML-based learning one optimizes the (one-way) Kullback-Leibler (KL) divergence between the distribution of the underlying data and the distribution of the model; such learning does not penalize a model that is capable of generating data that are different from that used for training. Based on the above observations, it is desirable to build a generative-model learning framework with which one can compute and assess the log-likelihood fit to real (observed) data, while also being capable of generating synthetic samples of high realism. Since GANs and VAEs have complementary strengths, their integration appears desirable, with this a principal contribution of this paper. While integration seems natural, we make important changes to both the VAE and GAN setups, to leverage the best of both. Specifically, we develop a new form of the variational lower bound, manifested jointly for the expected log-likelihood of the observed data and for the latent codes. Optimizing this variational bound involves maximizing the expected log-likelihood of the data and codes, while simultaneously minimizing a symmetric KL divergence involving the joint distribution of data and codes. To compute parts of this variational lower bound, a new form of adversarial learning is invoked. The proposed framework is termed Adversarial Symmetric VAE (AS-VAE), since within the model (i) the data and codes are treated in a symmetric manner, (ii) a symmetric form of KL divergence is minimized when learning, and (iii) adversarial training is utilized. To illustrate the utility of AS-VAE, we perform an extensive set of experiments, demonstrating state-of-the-art data reconstruction and generation on several benchmark datasets. 2 Background and Foundations Consider an observed data sample x, modeled as being drawn from pθ(x|z), with model parameters θ and latent code z.
The prior distribution on the code is denoted p(z), typically a distribution that is easy to draw from, such as isotropic Gaussian. The posterior distribution on the code given data x is pθ(z|x), and since this is typically intractable, it is approximated as qφ(z|x), parameterized by learned parameters φ. Conditional distributions qφ(z|x) and pθ(x|z) are typically designed such that they are easily sampled and, for flexibility, modeled in terms of neural networks [12]. Since z is a latent code for x, qφ(z|x) is also termed a stochastic encoder, with pθ(x|z) a corresponding stochastic decoder. The observed data are assumed drawn from q(x), for which we do not have an explicit form, but from which we have samples, i.e., the ensemble {xi}i=1,...,N used for learning. Our goal is to learn the model pθ(x) = ∫ pθ(x|z)p(z) dz such that it synthesizes samples that are well matched to those drawn from q(x). We simultaneously seek to learn a corresponding encoder qφ(z|x) that is both accurate and efficient to implement. Samples x are synthesized via x ∼ pθ(x|z) with z ∼ p(z); z ∼ qφ(z|x) provides an efficient coding of observed x, that may be used for other purposes (e.g., classification or caption generation when x is an image [21]). 2.1 Traditional Variational Autoencoders and Their Limitations Maximum likelihood (ML) learning of θ based on direct evaluation of pθ(x) is typically intractable. The VAE [12, 13] seeks to bound pθ(x) by maximizing variational expression LVAE(θ, φ), with respect to parameters {θ, φ}, where LVAE(θ, φ) = Eqφ(x,z)[log pθ(x, z) − log qφ(z|x)] = Eq(x)[log pθ(x) − KL(qφ(z|x)∥pθ(z|x))] (1) = −KL(qφ(x, z)∥pθ(x, z)) + const , (2) with expectations Eqφ(x,z) and Eq(x) performed approximately via sampling. Specifically, to evaluate Eqφ(x,z) we draw a finite set of samples zi ∼ qφ(zi|xi), with xi ∼ q(x) denoting the observed data, and for Eq(x), we directly use observed data xi ∼ q(x).
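The single-sample evaluation of the bound in (1), via a reparameterized draw z ∼ qφ(z|x), can be sketched numerically. Below is a minimal numpy illustration assuming a toy linear Gaussian encoder and decoder; the functions and numbers are illustrative, not the paper's networks.

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    # log N(x; mu, diag(sigma^2)), summed over dimensions
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * ((x - mu) / sigma) ** 2)

def elbo_estimate(x, encode, decode, rng):
    """Single-sample estimate of E_q[log p(x|z) + log p(z) - log q(z|x)]."""
    mu_z, sigma_z = encode(x)
    eps = rng.standard_normal(mu_z.shape)
    z = mu_z + sigma_z * eps                      # reparameterized z ~ q(z|x)
    mu_x, sigma_x = decode(z)
    log_p_x_given_z = gaussian_logpdf(x, mu_x, sigma_x)
    log_p_z = gaussian_logpdf(z, np.zeros_like(z), np.ones_like(z))  # isotropic prior
    log_q_z_given_x = gaussian_logpdf(z, mu_z, sigma_z)
    return log_p_x_given_z + log_p_z - log_q_z_given_x

# toy linear "encoder" and "decoder" (illustrative only)
encode = lambda x: (0.5 * x, np.full_like(x, 0.8))
decode = lambda z: (2.0 * z, np.full_like(z, 1.0))
rng = np.random.default_rng(0)
x = np.array([0.3, -1.2])
vals = [elbo_estimate(x, encode, decode, rng) for _ in range(2000)]
print(np.mean(vals))   # Monte Carlo estimate of the bound for this x
```

Averaging many single-sample estimates reduces the Monte Carlo noise, but each single draw is already an unbiased estimate of the bound, which is what stochastic gradient training exploits.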
When learning {θ, φ}, the expectation using samples from zi ∼ qφ(zi|xi) is implemented via the “reparametrization trick” [12]. Maximizing LVAE(θ, φ) wrt {θ, φ} provides a lower bound on (1/N) ∑_{i=1}^N log pθ(xi), hence the VAE setup is an approximation to ML learning of θ. Learning θ based on (1/N) ∑_{i=1}^N log pθ(xi) is equivalent to learning θ based on minimizing KL(q(x)∥pθ(x)), again implemented in terms of the N observed samples of q(x). As discussed in [22], such learning does not penalize θ severely for yielding x of relatively high probability in pθ(x) while being simultaneously of low probability in q(x). This means that θ seeks to match pθ(x) to the properties of the observed data samples, but pθ(x) may also have high probability of generating samples that do not look like data drawn from q(x). This is a fundamental limitation of ML-based learning [22], inherited by the traditional VAE in (1). One reason for the failing of ML-based learning of θ is that the cumulative posterior on latent codes ∫ pθ(z|x)q(x)dx ≈ ∫ qφ(z|x)q(x)dx = qφ(z) is typically different from p(z), which implies that x ∼ pθ(x|z), with z ∼ p(z) may yield samples x that are different from those generated from q(x). Hence, when learning {θ, φ} one may seek to match pθ(x) to samples of q(x), as done in (1), while simultaneously matching qφ(z) to samples of p(z). The expression in (1) provides a variational bound for matching pθ(x) to samples of q(x), thus one may naively think to simultaneously set a similar variational expression for qφ(z), with these two variational expressions optimized jointly. However, to compute this additional variational expression we require an analytic expression for qφ(x, z) = qφ(z|x)q(x), which also means we need an analytic expression for q(x), which we do not have. Examining (2), we also note that LVAE(θ, φ) approximates −KL(qφ(x, z)∥pθ(x, z)), which has limitations aligned with those discussed above for ML-based learning of θ.
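This asymmetry is easy to see in a small numerical toy (not from the paper): on a discrete support, the forward KL used in ML learning barely penalizes a model that spreads mass onto states the data never visits, whereas the reverse direction penalizes it heavily.

```python
import numpy as np

def kl(q, p):
    # KL(q || p) over a discrete support; terms with q = 0 contribute 0
    mask = q > 0
    return np.sum(q[mask] * (np.log(q[mask]) - np.log(p[mask])))

# "data" distribution q concentrated on the first two states
q = np.array([0.5, 0.5, 0.0, 0.0])

# model A matches q wherever q has mass
p_a = np.array([0.5, 0.5, 1e-12, 1e-12]); p_a = p_a / p_a.sum()
# model B wastes half its mass on states the data never visits
p_b = np.array([0.25, 0.25, 0.25, 0.25])

print(kl(q, p_a), kl(q, p_b))   # forward KL: B is only mildly worse than A
print(kl(p_b, q + 1e-12))       # reverse KL: B's spurious mass is punished hard
```

Forward KL gives model B a penalty of only log 2 for putting half its probability on never-observed states, while the reverse KL of the same pair is orders of magnitude larger.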
Analogous to the above discussion, we would also like to consider −KL(pθ(x, z)∥qφ(x, z)). So motivated, in Section 3 we develop a new form of variational lower bound, applicable to maximizing (1/N) ∑_{i=1}^N log pθ(xi) and (1/M) ∑_{j=1}^M log qφ(zj), where zj ∼ p(z) is the j-th of M samples from p(z). We demonstrate that this new framework leverages both KL(pθ(x, z)∥qφ(x, z)) and KL(qφ(x, z)∥pθ(x, z)), by extending ideas from adversarial networks. 2.2 Adversarial Learning The original idea of GAN [3] was to build an effective generative model pθ(x|z), with z ∼ p(z), as discussed above. There was no desire to simultaneously design an inference network qφ(z|x). More recently, authors [10, 11, 23] have devised adversarial networks that seek both pθ(x|z) and qφ(z|x). As an important example, Adversarial Learned Inference (ALI) [10] considers the following objective function: min_{θ,φ} max_ψ LALI(θ, φ, ψ) = Eqφ(x,z)[log σ(fψ(x, z))] + Epθ(x,z)[log(1 − σ(fψ(x, z)))] , (3) where the expectations are approximated with samples, as in (1). The function fψ(x, z), termed a discriminator, is typically implemented using a neural network with parameters ψ [10, 11]. Note that in (3) we need only sample from pθ(x, z) = pθ(x|z)p(z) and qφ(x, z) = qφ(z|x)q(x), avoiding the need for an explicit form for q(x). The framework in (3) can, in theory, match pθ(x, z) and qφ(x, z), by finding a Nash equilibrium of their respective non-convex objectives [3, 9]. However, training of such adversarial networks is typically based on stochastic gradient descent, which is designed to find a local mode of a cost function, rather than locating an equilibrium [9]. This objective mismatch may lead to the well-known instability issues associated with GAN training [9, 22].
To alleviate this problem, some researchers add a regularization term, such as reconstruction loss [24, 25, 26] or mutual information [4], to the GAN objective, to restrict the space of suitable mapping functions, thus avoiding some of the failure modes of GANs, i.e., mode collapsing. Below we will formally match the joint distributions as in (3), and reconstruction-based regularization will be manifested by generalizing the VAE setup via adversarial learning. Toward this goal we consider the following lemma, which is analogous to Proposition 1 in [3, 23]. Lemma 1 Consider Random Variables (RVs) x and z with joint distributions p(x, z) and q(x, z). The optimal discriminator D∗(x, z) = σ(f∗(x, z)) for the following objective max_f Ep(x,z) log[σ(f(x, z))] + Eq(x,z)[log(1 − σ(f(x, z)))] , (4) is f∗(x, z) = log p(x, z) − log q(x, z). Under Lemma 1, we are able to estimate log qφ(x, z) − log pθ(x)p(z) and log pθ(x, z) − log q(x)qφ(z) using the following corollary. Corollary 1.1 For RVs x and z with encoder joint distribution qφ(x, z) = q(x)qφ(z|x) and decoder joint distribution pθ(x, z) = p(z)pθ(x|z), consider the following objectives: max_{ψ1} LA1(ψ1) = Ex∼q(x),z∼qφ(z|x) log[σ(fψ1(x, z))] + Ex∼pθ(x|z′),z′∼p(z),z∼p(z)[log(1 − σ(fψ1(x, z)))] , (5) max_{ψ2} LA2(ψ2) = Ez∼p(z),x∼pθ(x|z) log[σ(fψ2(x, z))] + Ez∼qφ(z|x′),x′∼q(x),x∼q(x)[log(1 − σ(fψ2(x, z)))] . (6) If the parameters φ and θ are fixed, with fψ∗1 the optimal discriminator for (5) and fψ∗2 the optimal discriminator for (6), then fψ∗1(x, z) = log qφ(x, z) − log pθ(x)p(z), fψ∗2(x, z) = log pθ(x, z) − log qφ(z)q(x) . (7) The proof is provided in Appendix A. We also assume in Corollary 1.1 that fψ1(x, z) and fψ2(x, z) are sufficiently flexible such that there are parameters ψ∗1 and ψ∗2 capable of achieving the equalities in (7). Toward that end, fψ1 and fψ2 are implemented as ψ1- and ψ2-parameterized neural networks (details below), to encourage universal approximation [27].
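Lemma 1 can be checked numerically on a small discrete support, where the expectations in (4) become finite sums; this sketch uses random distributions and a free per-state discriminator value in place of the paper's neural network.

```python
import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def objective(f, p, q):
    # E_p[log sigma(f)] + E_q[log(1 - sigma(f))] on a discrete support
    return np.sum(p * np.log(sigmoid(f))) + np.sum(q * np.log(1.0 - sigmoid(f)))

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5))
q = rng.dirichlet(np.ones(5))

f_star = np.log(p) - np.log(q)   # Lemma 1: the claimed maximizer
best = objective(f_star, p, q)

# the objective is pointwise concave in f, so no perturbation should improve on f*
for _ in range(100):
    f = f_star + 0.5 * rng.standard_normal(5)
    assert objective(f, p, q) <= best + 1e-12
print(best)
```

At the optimum, σ(f∗) = p/(p+q) at every state, which is the familiar optimal-discriminator form for GAN-style objectives.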
3 Adversarial Symmetric Variational Auto-Encoder (AS-VAE) Consider variational expressions LVAEx(θ, φ) = Eq(x) log pθ(x) − KL(qφ(x, z)∥pθ(x, z)) (8) LVAEz(θ, φ) = Ep(z) log qφ(z) − KL(pθ(x, z)∥qφ(x, z)) , (9) where all expectations are again performed approximately using samples from q(x) and p(z). Recall that Eq(x) log pθ(x) = −KL(q(x)∥pθ(x)) + const, and Ep(z) log qφ(z) = −KL(p(z)∥qφ(z)) + const, thus (8) is maximized when q(x) = pθ(x) and qφ(x, z) = pθ(x, z). Similarly, (9) is maximized when p(z) = qφ(z) and qφ(x, z) = pθ(x, z). Hence, (8) and (9) impose desired constraints on both the marginal and joint distributions. Note that the log-likelihood terms in (8) and (9) are analogous to the data-fit regularizers discussed above in the context of ALI, but here implemented in a generalized form of the VAE. Direct evaluation of (8) and (9) is not possible, as it requires an explicit form for q(x) to evaluate qφ(x, z) = qφ(z|x)q(x). One may readily demonstrate that LVAEx(θ, φ) = Eqφ(x,z)[log pθ(x)p(z) − log qφ(x, z) + log pθ(x|z)] = Eqφ(x,z)[log pθ(x|z) − fψ∗1(x, z)] . A similar expression holds for LVAEz(θ, φ), in terms of fψ∗2(x, z). This naturally suggests the cumulative variational expression LVAExz(θ, φ, ψ1, ψ2) = LVAEx(θ, φ) + LVAEz(θ, φ) = Eqφ(x,z)[log pθ(x|z) − fψ1(x, z)] + Epθ(x,z)[log qφ(z|x) − fψ2(x, z)] , (10) where ψ1 and ψ2 are updated using the adversarial objectives in (5) and (6), respectively. Note that to evaluate (10) we must be able to sample from qφ(x, z) = q(x)qφ(z|x) and pθ(x, z) = p(z)pθ(x|z), both of which are readily available, as discussed above. Further, we require explicit expressions for qφ(z|x) and pθ(x|z), which we have. For (5) and (6) we similarly must be able to sample from the distributions involved, and we must be able to evaluate fψ1(x, z) and fψ2(x, z), each of which is implemented via a neural network.
Note as well that the bound in (1) for Eq(x) log pθ(x) is in terms of the KL distance between conditional distributions qφ(z|x) and pθ(z|x), while (8) utilizes the KL distance between joint distributions qφ(x, z) and pθ(x, z) (use of joint distributions is related to ALI). By combining (8) and (9), the complete variational bound LVAExz employs the symmetric KL between these two joint distributions. By contrast, from (2), the original variational lower bound only addresses a one-way KL distance between qφ(x, z) and pθ(x, z). While [23] had a similar idea of employing adversarial methods in the context of variational learning, it was only done within the context of the original form in (1), the limitations of which were discussed in Section 2.1. In the original VAE, in which (1) was optimized, the reparametrization trick [12] was invoked wrt qφ(z|x), with samples zφ(x, ϵ) and ϵ ∼ N(0, I), as the expectation was performed wrt this distribution; this reparametrization is convenient for computing gradients wrt φ. In the AS-VAE in (10), expectations are also needed wrt pθ(x|z). Hence, to implement gradients wrt θ, we also constitute a reparametrization of pθ(x|z). Specifically, we consider samples xθ(z, ξ) with ξ ∼ N(0, I). LVAExz(θ, φ, ψ1, ψ2) in (10) is re-expressed as LVAExz(θ, φ, ψ1, ψ2) = Ex∼q(x),ϵ∼N(0,I)[fψ1(x, zφ(x, ϵ)) − log pθ(x|zφ(x, ϵ))] + Ez∼p(z),ξ∼N(0,I)[fψ2(xθ(z, ξ), z) − log qφ(z|xθ(z, ξ))] . (11) The expectations in (11) are approximated via samples drawn from q(x) and p(z), as well as samples of ϵ and ξ. xθ(z, ξ) and zφ(x, ϵ) can be implemented with a Gaussian assumption [12] or via density transformation [14, 16], detailed when presenting experiments in Section 5. The complete objective of the proposed Adversarial Symmetric VAE (AS-VAE) requires the cumulative variational expression in (11), which we maximize wrt ψ1 and ψ2 as in (5) and (6), using the results in (7). Hence, we write min_{θ,φ} max_{ψ1,ψ2} −LVAExz(θ, φ, ψ1, ψ2) .
(12) The following proposition characterizes the solutions of (12) in terms of the joint distributions of x and z. Proposition 1 The equilibrium for the min-max objective in (12) is achieved by specification {θ∗, φ∗, ψ∗1, ψ∗2} if and only if (7) holds, and pθ∗(x, z) = qφ∗(x, z). The proof is provided in Appendix A. This theoretical result implies that (i) θ∗ is an estimator that yields good reconstruction, and (ii) φ∗ matches the aggregated posterior qφ(z) to prior distribution p(z). 4 Related Work VAEs [12, 13] represent one of the most successful deep generative models developed recently. Aided by the reparameterization trick, VAEs can be trained with stochastic gradient descent. The original VAEs implement a Gaussian assumption for the encoder. More recently, there has been a desire to remove this Gaussian assumption. Normalizing flow [14] employs a sequence of invertible transformations to make the distribution of the latent codes arbitrarily flexible. This work was followed by inverse auto-regressive flow [16], which uses recurrent neural networks to make the latent codes more expressive. More recently, SteinVAE [28] applies Stein variational gradient descent [29] to infer the distribution of latent codes, discarding the assumption of a parametric form of posterior distribution for the latent code. However, these methods are not able to address the fundamental limitation of ML-based models, as they are all based on the variational formulation in (1). GANs [3] constitute another recent framework for learning a generative model. Recent extensions of GAN have focused on boosting the performance of image generation by improving the generator [5], discriminator [30] or the training algorithm [9, 22, 31]. More recently, some researchers [10, 11, 33] have employed a bidirectional network structure within the adversarial learning framework, which in theory guarantees the matching of joint distributions over two domains.
However, non-identifiability issues are raised in [32]. For example, they have difficulties in providing good reconstruction in latent variable models, or discovering the correct pairing relationship in domain transformation tasks. It was shown that these problems are alleviated in DiscoGAN [24], CycleGAN [26] and ALICE [32] via additional ℓ1, ℓ2 or adversarial losses. However, these methods lack explicit probabilistic modeling of the observations, and thus cannot directly evaluate the likelihood of given data samples. A key component of the proposed framework concerns integrating a new VAE formulation with adversarial learning. There are several recent approaches that have tried to combine VAE and GAN [34, 35]; Adversarial Variational Bayes (AVB) [23] is the one most closely related to our work. AVB employs adversarial learning to estimate the posterior of the latent codes, which makes the encoder arbitrarily flexible. However, AVB seeks to optimize the original VAE formulation in (1), and hence it inherits the limitations of ML-based learning of θ. Unlike AVB, the proposed use of adversarial learning is based on a new VAE setup, that seeks to minimize the symmetric KL distance between pθ(x, z) and qφ(x, z), while simultaneously seeking to maximize the marginal expected likelihoods Eq(x)[log pθ(x)] and Ep(z)[log qφ(z)]. 5 Experiments We evaluate our model on three datasets: MNIST, CIFAR-10 and ImageNet. To balance performance and computational cost, pθ(x|z) and qφ(z|x) are approximated with a normalizing flow [14] of length 80 for the MNIST dataset, and a Gaussian approximation for CIFAR-10 and ImageNet data. All network architectures are provided in Appendix B. All parameters were initialized with Xavier [36], and optimized via Adam [37] with learning rate 0.0001. We do not perform any dataset-specific tuning or regularization other than dropout [38]. Early stopping is employed based on average reconstruction loss of x and z on validation sets.
We show three types of results, using part of or all of our model to illustrate each component. i) AS-VAE-r: This model is trained with the first half of the objective in (11) to minimize LVAEx(θ, φ) in (8); it is an ML-based method which focuses on reconstruction. ii) AS-VAE-g: This model is trained with the second half of the objective in (11) to minimize LVAEz(θ, φ) in (9); it can be considered as maximizing the likelihood of qφ(z), and designed for generation. iii) AS-VAE: This is our proposed model, developed in Section 3. 5.1 Evaluation We evaluate our model on both reconstruction and generation. The performance of the former is evaluated using negative log-likelihood (NLL) estimated via the variational lower bound defined in (1). Images are modeled as continuous. To do this, we add [0, 1]-uniform noise to natural images (one color channel at a time), then divide by 256 to map 8-bit images (256 levels) to the unit interval. This technique is widely used in applications involving natural images [12, 14, 16, 39, 40], since it can be proved that in terms of log-likelihood, modeling in the discrete space is equivalent to modeling in the continuous space (with added noise) [39, 41]. During testing, the likelihood is computed as p(x = i|z) = pθ(x ∈ [i/256, (i + 1)/256]|z) where i = 0, . . . , 255. This is done to guarantee a fair comparison with prior work (that assumed quantization). For the MNIST dataset, we treat the [0, 1]-mapped continuous input as the probability of a binary pixel value (on or off) [12]. The inception score (IS), defined as exp(Eq(x)KL(p(y|x)∥p(y))), is employed to quantitatively evaluate the quality of generated natural images, where p(y) is the empirical distribution of labels (we do not leverage any label information during training) and p(y|x) is the output of the Inception model [42] on each generated image. To the authors’ knowledge, we are the first to report both inception score (IS) and NLL for natural images from a single model.
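Given per-image label distributions p(y|x), the inception score defined above is direct to compute; a toy numpy sketch (the real metric uses the Inception network's class posteriors over many generated images, not these hand-made distributions).

```python
import numpy as np

def inception_score(p_y_given_x):
    # p_y_given_x: (num_images, num_classes), each row a label distribution p(y|x)
    p_y = p_y_given_x.mean(axis=0)                       # marginal p(y)
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x) - np.log(p_y)), axis=1)
    return np.exp(kl.mean())                             # exp(E_x KL(p(y|x) || p(y)))

# confident predictions spread over different classes -> high score
sharp = np.eye(4) * 0.97 + 0.01
sharp = sharp / sharp.sum(axis=1, keepdims=True)
# uncertain, uniform predictions -> score near 1
blurry = np.full((4, 4), 0.25)

print(inception_score(sharp), inception_score(blurry))
```

The score rewards samples whose individual label predictions are confident (low-entropy p(y|x)) while the marginal over samples is diverse (high-entropy p(y)).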
For comparison, we implemented DCGAN [5] and PixelCNN++ [40] as baselines. The implementation of DCGAN is based on a similar network architecture as our model. Note that for NLL a lower value is better, whereas for IS a higher value is better. 5.2 MNIST We first evaluate our model on the MNIST dataset. The log-likelihood results are summarized in Table 1.

Table 1: NLL on MNIST.
Method      NF (k=80) [14]   IAF [16]   AVB [23]   PixelRNN [39]   AS-VAE-r   AS-VAE-g   AS-VAE
NLL (nats)  85.1             80.9       79.5       79.2            81.14      146.32     82.51

Our AS-VAE achieves a negative log-likelihood of 82.51 nats, outperforming normalizing flow (85.1 nats) with a similar architecture. The performance of AS-VAE-r (81.14 nats) is competitive with the state-of-the-art (79.2 nats). The generated samples are shown in Figure 1. AS-VAE-g and AS-VAE both generate good samples while the results of AS-VAE-r are slightly more blurry, partly due to the fact that AS-VAE-r is an ML-based model. 5.3 CIFAR Next we evaluate our models on the CIFAR-10 dataset. The quantitative results are listed in Table 2. AS-VAE-r and AS-VAE-g achieve encouraging results on reconstruction and generation, respectively, while our AS-VAE model (leveraging the full objective) achieves a good balance between these two tasks, which demonstrates the benefit of optimizing a symmetric objective. Compared with state-of-the-art ML-based models [39, 40], we achieve competitive results on reconstruction but provide a much better performance on generation, also outperforming other adversarially-trained models. Note that our negative ELBO (evidence lower bound) is an upper bound of NLL as reported in [39, 40]. We also achieve a smaller root-mean-square-error (RMSE). Generated samples are shown in Figure 2. Additional results are provided in Appendix C. Table 2: Quantitative Results on CIFAR-10; † 2.96 is based on our implementation and 2.92 is reported in [40].
Method                    NLL (bits)     RMSE    IS
WGAN [43]                 –              –       3.82
MIX+WassersteinGAN [43]   –              –       4.05
DCGAN [5]                 –              –       4.89
ALI                       –              14.53   4.79
PixelRNN [39]             3.06           –       –
PixelCNN++ [40]           2.96 (2.92)†   3.289   5.51
AS-VAE-r                  3.09           3.17    2.91
AS-VAE-g                  93.12          13.12   6.89
AS-VAE                    3.32           3.36    6.34

ALI [10], which also seeks to match the joint encoder and decoder distribution, is also implemented as a baseline. Since the decoder in ALI is a deterministic network, the NLL of ALI is impractical to compute. Alternatively, we report the RMSE of reconstruction as a reference. Figure 3 qualitatively compares the reconstruction performance of our model, ALI and VAE. As can be seen, the reconstruction of ALI is related to but not a faithful reproduction of the input data, which evidences the limitation in reconstruction ability of adversarial learning. This is also consistent in terms of RMSE. 5.4 ImageNet ImageNet 2012 is used to evaluate the scalability of our model to large datasets. The images are resized to 64×64. The quantitative results are shown in Table 3. Our model significantly improves the performance on generation compared with DCGAN and PixelCNN++, while achieving competitive results on reconstruction compared with PixelRNN and PixelCNN++.

Table 3: Quantitative Results on ImageNet.
Method           NLL     IS
DCGAN [5]        –       5.965
PixelRNN [39]    3.63    –
PixelCNN++ [40]  3.27    7.65
AS-VAE           3.71    11.14

Note that PixelCNN++ takes more than two weeks (44 hours per epoch) for training and 52.0 seconds/image for generating samples while our model only requires less than 2 days (4 hours per epoch) for training and 0.01 seconds/image for generation on a single TITAN X GPU. As a reference, the true validation set of ImageNet 2012 achieves 53.24% accuracy. This is because ImageNet has much greater variety of images than CIFAR-10. Figure 4 shows generated samples from models trained on ImageNet, compared with DCGAN and PixelCNN++.
Our model is able to produce sharp images without label information while capturing more local spatial dependencies than PixelCNN++, and without suffering from mode collapse as DCGAN does. Additional results are provided in Appendix C. 6 Conclusions We presented Adversarial Symmetric Variational Autoencoders, a novel deep generative model for unsupervised learning. The learning objective is to minimize a symmetric KL divergence between the joint distribution of data and latent codes from encoder and decoder, while simultaneously maximizing the expected marginal likelihood of data and codes. An extensive set of results demonstrated excellent performance on both reconstruction and generation, while scaling to large datasets. A possible direction for future work is to apply AS-VAE to semi-supervised learning tasks. Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF. Figure 1: Generated samples trained on MNIST. (Left) AS-VAE-r; (Middle) AS-VAE-g; (Right) AS-VAE. Figure 2: Samples generated by AS-VAE when trained on CIFAR-10. Figure 3: Comparison of reconstruction with ALI [10]. In each block: column one for ground-truth, column two for ALI and column three for AS-VAE. Figure 4: Generated samples trained on ImageNet. (Top) AS-VAE; (Middle) DCGAN [5]; (Bottom) PixelCNN++ [40]. References [1] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image model. Artificial Intelligence and Statistics (AISTATS), 2016. [2] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop, 2015. [3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014. [4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016. [5] A. Radford, L. Metz, and S. Chintala.
Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. [6] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016. [7] Y. Zhang, Z. Gan, K. Fan, Z. Chen, R. Henao, D. Shen, and L. Carin. Adversarial feature matching for text generation. In ICML, 2017. [8] Y. Zhang, Z. Gan, and L. Carin. Generating text with adversarial training. In NIPS workshop, 2016. [9] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016. [10] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville. Adversarially learned inference. In ICLR, 2017. [11] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2017. [12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014. [13] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. [14] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015. [15] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016. [16] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improving variational inference with inverse autoregressive flow. In NIPS, 2016. [17] Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin. Deconvolutional paragraph representation learning. In NIPS, 2017. [18] L. Chen, S. Dai, Y. Pu, C. Li, Q. Su, and L. Carin. Symmetric variational autoencoder and connections to adversarial learning. In arXiv, 2017. [19] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for text sequence matching. In arXiv, 2017. [20] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014. [21] Y.
Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016. [22] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017. [23] L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In arXiv, 2016. [24] T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. In arXiv, 2017. [25] C. Li, K. Xu, J. Zhu, and B. Zhang. Triple generative adversarial nets. In arXiv, 2017. [26] J.-Y. Zhu, T. Park, P. Isola, and A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In arXiv, 2017. [27] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural networks, 1989. [28] Y. Pu, Z. Gan, R. Henao, C. Li, S. Han, and L. Carin. VAE learning via Stein variational gradient descent. In NIPS, 2017. [29] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In NIPS, 2016. [30] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. In ICLR, 2017. [31] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In arXiv, 2017. [32] C. Li, H. Liu, C. Chen, Y. Pu, L. Chen, R. Henao, and L. Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017. [33] Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and L. Carin. Triangle generative adversarial networks. In NIPS, 2017. [34] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. In arXiv, 2015. [35] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016. [36] X. Glorot and Y. Bengio.
Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010. [37] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [38] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014. [39] A. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural network. In ICML, 2016. [40] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. In ICLR, 2017. [41] L. Thei, A. Oord, and M. Bethge. A note on the evaluation of generative models. In ICLR, 2016. [42] C. Szegedy, W. Liui, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. [43] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets. In arXiv, 2017. 10 | 2017 | 208 |
Model evidence from nonequilibrium simulations

Michael Habeck
Statistical Inverse Problems in Biophysics, Max Planck Institute for Biophysical Chemistry & Institute for Mathematical Stochastics, University of Göttingen, 37077 Göttingen, Germany
email: mhabeck@gwdg.de

Abstract

The marginal likelihood, or model evidence, is a key quantity in Bayesian parameter estimation and model comparison. For many probabilistic models, computation of the marginal likelihood is challenging, because it involves a sum or integral over an enormous parameter space. Markov chain Monte Carlo (MCMC) is a powerful approach to compute marginal likelihoods. Various MCMC algorithms and evidence estimators have been proposed in the literature. Here we discuss the use of nonequilibrium techniques for estimating the marginal likelihood. Nonequilibrium estimators build on recent developments in statistical physics and are known as annealed importance sampling (AIS) and reverse AIS in probabilistic machine learning. We introduce estimators for the model evidence that combine forward and backward simulations and show for various challenging models that the evidence estimators outperform forward and reverse AIS.

1 Introduction

The marginal likelihood or model evidence is a central quantity in Bayesian inference [1, 2], but notoriously difficult to compute. If the likelihood L(x) ≡ p(y|x, M) models data y and the prior π(x) ≡ p(x|M) expresses our knowledge about the parameters x of the model M, the posterior p(x|y, M) and the model evidence Z are given by

    p(x|y, M) = p(y|x, M) p(x|M) / p(y|M) = L(x) π(x) / Z,    Z ≡ p(y|M) = ∫ L(x) π(x) dx.    (1)

Parameter estimation proceeds by drawing samples from p(x|y, M), and different ways to model the data are ranked by their evidence. For example, two models M1 and M2 can be compared via their Bayes factor, which is proportional to the ratio of their marginal likelihoods p(y|M1)/p(y|M2) [3].
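In low dimensions, the integral in Eq. (1) can be evaluated by brute-force quadrature, which is a useful sanity check for the estimators discussed later. A minimal sketch for a one-dimensional Gaussian model (the conjugate likelihood/prior pair and all function names are illustrative choices, not taken from the paper):

```python
import numpy as np

def evidence_quadrature(y, sigma=1.0, tau=2.0):
    """Z = integral of L(x) pi(x) dx for a Gaussian likelihood
    L(x) = N(y; x, sigma^2) and Gaussian prior pi(x) = N(x; 0, tau^2),
    computed by brute-force quadrature on a wide grid."""
    x, dx = np.linspace(-30.0, 30.0, 200001, retstep=True)
    lik = np.exp(-0.5 * (y - x) ** 2 / sigma**2) / np.sqrt(2 * np.pi * sigma**2)
    prior = np.exp(-0.5 * x**2 / tau**2) / np.sqrt(2 * np.pi * tau**2)
    return np.sum(lik * prior) * dx

def evidence_exact(y, sigma=1.0, tau=2.0):
    """Closed form for this conjugate pair: Z = N(y; 0, sigma^2 + tau^2)."""
    s2 = sigma**2 + tau**2
    return np.exp(-0.5 * y**2 / s2) / np.sqrt(2 * np.pi * s2)
```

The same integral over hundreds or thousands of dimensions is hopeless on a grid, which is why the rest of the paper resorts to MCMC-based estimators.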
Often the posterior (and perhaps also the prior) is intractable in the sense that it is not possible to compute the normalizing constant, and therefore also the evidence, analytically. This makes it difficult to compare different models via their posterior probability and model evidence. Markov chain Monte Carlo (MCMC) algorithms [4] only require unnormalized probability distributions and are among the most powerful and accurate methods to estimate the marginal likelihood, but they are computationally expensive. Therefore, it is important to develop efficient MCMC algorithms that can sample from the posterior and allow us to compute the marginal likelihood.

There is a close analogy between the marginal likelihood and the log-partition function or free energy from statistical physics [5]. Therefore, many concepts and algorithms originating in statistical physics have been applied to problems arising in probabilistic inference. Here we show that nonequilibrium fluctuation theorems (FTs) [6, 7, 8] can be used to estimate the marginal likelihood from forward and reverse simulations.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Marginal likelihood estimation by annealed importance sampling

A common MCMC strategy to sample from the posterior and estimate the evidence is to simulate a sequence of distributions pk that bridge between the prior and the posterior [9]. Samples can either be generated in sequential order as in annealed importance sampling (AIS) [10] or in parallel as in replica-exchange Monte Carlo or parallel tempering [11, 12]. Several methods have been proposed to estimate the marginal likelihood from MCMC samples, including thermodynamic integration (TI) [9], annealed importance sampling (AIS) [10], nested sampling (NS) [13] and the density of states (DOS) [14].
Most of these approaches (TI, DOS and NS) assume that we can draw exact samples from the intermediate distributions pk, typically after a sufficiently large number of equilibration steps has been simulated. AIS, on the other hand, does not assume that the samples are equilibrated after each annealing step, which makes AIS very attractive for analyzing complex models for which equilibration is hard to achieve.

AIS employs a sequence of K + 1 probability distributions pk and Markov transition operators Tk whose stationary distributions are pk, i.e. ∫ Tk(x|x′) pk(x′) dx′ = pk(x). In a Bayesian setting, p0 is the prior and pK the posterior. Typically, pk is intractable, meaning that we only know an unnormalized version fk, but not the normalizer Zk, i.e. pk(x) = fk(x)/Zk where Zk = ∫ fk(x) dx, and the evidence is Z = ZK/Z0. Often it is convenient to write fk as an energy-based model fk(x) = exp{−Ek(x)}. In Bayesian inference, a popular choice is fk(x) = [L(x)]^βk π(x) with prior π(x) and likelihood L(x); βk is an inverse temperature schedule that starts at β0 = 0 (prior) and ends at βK = 1 (posterior). AIS samples paths x = [x0, x1, . . . , xK−1] according to the probability

    Pf[x] = TK−1(xK−1|xK−2) · · · T1(x1|x0) p0(x0)    (2)

where, following Crooks [8], calligraphic symbols and square brackets denote quantities that depend on the entire path. The subscript indicates that the path is generated by a forward simulation, which starts from p0 and propagates the initial state through a sequence of new states by the successive action of the Markov kernels T1, T2, . . . , TK−1. The importance weight of a path is

    w[x] = ∏_{k=0}^{K−1} fk+1(xk)/fk(xk) = exp{−∑_{k=0}^{K−1} [Ek+1(xk) − Ek(xk)]}.    (3)

The average weight over many paths is a consistent and unbiased estimator of the model evidence Z = ZK/Z0, which follows from [15, 10] (see supplementary material for details):

    ⟨w⟩f = ∫ w[x] Pf[x] D[x] = Z    (4)

where the average ⟨·⟩f is an integral over all possible paths generated by the forward sequence of transition kernels (D[x] = dx0 · · · dxK−1). The average weight of M forward paths x^(i) is an estimate of the model evidence: Z ≈ (1/M) ∑_i w[x^(i)]. This estimator is at the core of AIS and its variants [10, 16]. To avoid overflow problems, it is numerically more stable to compute log weights. Logarithmic weights also naturally occur from a physical perspective, where −log w[x] is identified as the work required to generate path x.

3 Nonequilibrium fluctuation theorems

Nonequilibrium fluctuation theorems (FTs) quantify the degree of irreversibility of a stochastic process by relating the probability of generating a path by a forward simulation to the probability of generating the exact same path by a time-reversed simulation. If the Markov kernels Tk satisfy detailed balance, time reversal is achieved by applying the same sequence of kernels in reverse order. For Markov kernels not satisfying detailed balance, the definition is slightly more general [7, 10]. Here we assume that all kernels Tk satisfy detailed balance, which is valid for Markov chains based on the Metropolis algorithm and its variants [4].

Under these assumptions, the probability of generating the path x by a reverse simulation starting in xK−1 is

    Pr[x] = T1(x0|x1) · · · TK−1(xK−2|xK−1) pK(xK−1).    (5)

Averages over the reverse paths are indicated by ⟨·⟩r.
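Equations (2)-(4) translate almost directly into code. The following sketch runs forward AIS on a one-dimensional Gaussian example, using a geometric path between a normalized broad Gaussian f0 (so Z0 = 1) and an unnormalized target fK with known ZK, and random-walk Metropolis kernels. All endpoint choices and tuning constants (K, number of Metropolis sweeps, step size) are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f0(x):
    # normalized prior N(0, 3^2), hence Z0 = 1
    return -0.5 * x**2 / 9.0 - 0.5 * np.log(2 * np.pi * 9.0)

def log_fK(x):
    # unnormalized target exp(-x^2/2), hence ZK = sqrt(2*pi)
    return -0.5 * x**2

def ais_log_evidence(M=500, K=200, n_mh=2, step=1.0):
    """Forward AIS: returns log of (1/M) sum_i w[x^(i)], cf. Eq. (4)."""
    betas = np.linspace(0.0, 1.0, K + 1)
    x = 3.0 * rng.standard_normal(M)          # exact samples from p0
    log_w = np.zeros(M)
    for b_old, b_new in zip(betas[:-1], betas[1:]):
        def log_f(z, b=b_new):                # log f_{k+1} on geometric path
            return (1 - b) * log_f0(z) + b * log_fK(z)
        # accumulate log f_{k+1}(x_k) - log f_k(x_k), cf. Eq. (3)
        log_w += log_f(x) - ((1 - b_old) * log_f0(x) + b_old * log_fK(x))
        for _ in range(n_mh):                 # Metropolis kernel T_{k+1}
            prop = x + step * rng.standard_normal(M)
            accept = np.log(rng.random(M)) < log_f(prop) - log_f(x)
            x = np.where(accept, prop, x)
    # log of the average weight, computed stably in log space
    return np.logaddexp.reduce(log_w) - np.log(M)
```

Here Z = ZK/Z0 = √(2π), so the estimate should be close to log √(2π) ≈ 0.92. The log-space accumulation and the final log-sum-exp implement the overflow-safe evaluation mentioned in the text.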
The detailed fluctuation theorem [6, 8] relates the probabilities of generating x in a forward and a reverse simulation (see supplementary material):

    Pf[x] / Pr[x] = (ZK/Z0) ∏_{k=0}^{K−1} fk(xk)/fk+1(xk) = Z / w[x] = exp{W[x] − ∆F}    (6)

where the physical analogs of the path weight and the marginal likelihood were introduced, namely the work W[x] = −log w[x] = ∑_k [Ek+1(xk) − Ek(xk)] and the free energy difference ∆F = −log Z = −log(ZK/Z0). Various demonstrations of relation (6) exist in the physics and machine learning literature [6, 7, 8, 10, 17].

Lower and upper bounds sandwiching the log evidence [17, 18, 16] follow directly from equation (6) and the non-negativity of the relative entropy:

    DKL(Pf ∥ Pr) = ∫ Pf[x] log(Pf[x]/Pr[x]) D[x] = ⟨W⟩f − ∆F ≥ 0.

From DKL(Pr ∥ Pf) ≥ 0 we obtain an upper bound on log Z, such that overall we have

    ⟨log w⟩f = −⟨W⟩f ≤ log Z ≤ −⟨W⟩r = ⟨log w⟩r.    (7)

Grosse et al. use these bounds to assess the convergence of bidirectional Monte Carlo [18]. Thanks to the detailed fluctuation theorem (Eq. 6), we can also relate the marginal distributions of the work resulting from many forward and reverse simulations:

    pf(W) = ∫ δ(W − W[x]) Pf[x] D[x] = pr(W) e^{W − ∆F}    (8)

which is Crooks’ fluctuation theorem (CFT) [7]. CFT tells us that the work distributions pf and pr cross exactly at W = ∆F. Therefore, by plotting histograms of the work obtained in forward and backward simulations we can read off an estimate for the negative log evidence. The Jarzynski equality (JE) [15] follows directly from CFT due to the normalization of pr:

    ∫ pf(W) e^{−W} dW = ⟨e^{−W}⟩f = e^{−∆F}.    (9)

JE restates the AIS estimator ⟨w⟩f = Z (Eq. 4) in terms of the physical quantities. Remarkably, JE holds for any stochastic dynamics bridging between the initial and target distribution. This suggests using fast annealing protocols to drag samples from the prior into the posterior. However, the JE involves an exponential average in which paths requiring the least work contribute most strongly.
These paths correspond to work realizations that reside in the left tail of pf. With faster annealing, the chance of generating a minimal-work path decreases exponentially and becomes a rare event.

A key feature of CFT and JE is that they do not require exact samples from the stationary distributions pk, which are needed in applications of TI or DOS. For complex probabilistic models, the states generated by the kernels Tk will typically “lag behind” due to slow mixing, especially near phase transitions. The k-th state of the forward path will follow the intermediate distribution

    qk(xk) = ∫ ∏_{l=1}^{k} Tl(xl|xl−1) p0(x0) dx0 · · · dxk−1,    q0(x) = p0(x).    (10)

Unless the transition kernels Tk mix very rapidly, qk ≠ pk for k > 0. Consider the common case in Bayesian inference where Ek(x) = βk E(x) and E(x) = −log L(x). Then, according to inequalities (7), we have the following lower bound on the marginal likelihood:

    ⟨log w⟩f = −⟨W⟩f = −∑_{k=0}^{K−1} (βk+1 − βk) ⟨E⟩_{qk}    (11)

and an analogous expression for the upper bound/reverse direction, in which the average energies along the forward path ⟨E⟩_{qk} need to be replaced by the corresponding average energies along the backward path.

Figure 1: Nonequilibrium analysis of a Gaussian toy model. (A) Work distributions pf and pr shown in blue and green. The correct free energy difference (minus log evidence) is indicated by a dashed line. (B) Comparison of stationary distributions pk and marginal distributions qk generated by the transition kernels. Shown is a 1σ band about the mean positions for pk (blue) and qk (green). (C) Lower and upper bounds of the log evidence (Eq. 7) and logarithm of the exponential average over the forward work distribution for increasingly slow annealing schedules.
The difference between the forward and reverse averages is called “hysteresis” in physics. The larger the hysteresis, the more strongly the marginal likelihood bounds will disagree and the more uncertain our guess of the model evidence will be. The opposite limiting case is slow annealing and full equilibration, where the bound (Eq. 11) approaches thermodynamic integration (see supplementary material). So we expect a tradeoff between switching fast in order to save computation time and the desire to control the amount of hysteresis, which otherwise makes it difficult to extract accurate evidence estimates from the simulations.

4 Illustration for a tractable model

Let us illustrate the nonequilibrium results for a tractable model where the initial, the target and all intermediate distributions are Gaussians pk(x) = N(x; µk, σk²) with means µk and standard deviations σk > 0. The transition kernels are also Gaussian: Tk(x|x′) = N(x; (1 − τk)µk + τk x′, (1 − τk²)σk²) with τk ∈ [0, 1] controlling the speed of convergence: for τk = 0 convergence is immediate, whereas for τk → 1, the chain generated by Tk converges infinitely slowly. Note that the kernels Tk satisfy detailed balance; therefore backward simulations apply the same kernels in reverse order. The energies and exact log partition functions are Ek(x) = (x − µk)²/(2σk²) and log Zk = log(2πσk²)/2.

We bridge between an initial distribution with mean µ0 = 20 and standard deviation σ0 = 10 and a target with µK = 0, σK = 1 using K = 10 intermediate distributions and compute the work distributions resulting from forward/backward simulations. Both distributions indeed cross at W = −log Z = log(σ0/σK) = log 10, as predicted by CFT (see Fig. 1A). Figure 1B illustrates the difference between the marginal distribution of the samples after k annealing steps, qk (Eq. 10), and the stationary distribution pk.
The marginal distributions qk are also Gaussian, but their means and variances diverge from the means and variances of the stationary distributions pk. This divergence results in hysteresis if the annealing process is forced to progress very rapidly without equilibrating the samples (quenching). Figure 1C confirms the validity of the JE (Eq. 9) and of the lower and upper bounds (Eq. 7). For short annealing protocols, the bounds are very conservative, whereas the Jarzynski equality gives the correct evidence even for fast protocols (small K). In realistic applications, however, we cannot compute the work distribution pf over the entire range of work values. In fast annealing simulations, it becomes increasingly difficult to explore the left tail of the work distribution, such that in practice the accuracy of the JE estimator deteriorates for too small K.

Algorithm 1 Bennett’s acceptance ratio (BAR)
Require: work values W_f^(i), W_r^(i) from M forward and reverse nonequilibrium simulations; tolerance δ (e.g. δ = 10^−4)
  Z ← (1/M) ∑_i exp{−W_f^(i)}    (Jarzynski estimator)
  repeat
    LHS ← ∑_i 1 / (1 + Z exp{W_f^(i)})
    RHS ← ∑_i 1 / (1 + Z^−1 exp{−W_r^(i)})
    Z ← Z × LHS/RHS
  until |log(LHS/RHS)| < δ
  return Z

5 Using the fluctuation theorem to estimate the evidence

To use the fluctuation theorem for evidence estimation, we run two sets of simulations. As in AIS, forward simulations start from a prior sample which is successively propagated by the transition kernels Tk. For each forward path x^(i) the total work W_f^(i) is recorded. We also run reverse simulations starting from a posterior sample. For complex inference problems it is generally impossible to generate an exact sample from the posterior. However, in some cases the mode of the posterior is known or powerful methods for locating the posterior maximum exist. We can then generate a posterior sample by applying the transition operator TK many times starting from the posterior mode.
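Algorithm 1 above can be transcribed almost line by line into NumPy. This is a sketch of the direct form; for strongly dissipative simulations one would prefer to work with log Z and log-space sums throughout to avoid overflow:

```python
import numpy as np

def bar_evidence(W_f, W_r, delta=1e-4, max_iter=10000):
    """Bennett's acceptance ratio (Algorithm 1): solve the BAR equation
    for Z by multiplicative updates, starting from the Jarzynski estimator."""
    W_f, W_r = np.asarray(W_f), np.asarray(W_r)
    Z = np.mean(np.exp(-W_f))                  # Jarzynski initialization
    for _ in range(max_iter):
        lhs = np.sum(1.0 / (1.0 + Z * np.exp(W_f)))
        rhs = np.sum(1.0 / (1.0 + np.exp(-W_r) / Z))
        Z *= lhs / rhs
        if abs(np.log(lhs / rhs)) < delta:
            break
    return Z
```

As a sanity check: if pf is Gaussian with mean µ and variance σ², CFT implies that pr is Gaussian with mean µ − σ² and that ∆F = µ − σ²/2, which the estimator should recover from sampled work values.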
The reverse simulations could also be started from the final states generated by the forward simulations, drawn according to their importance weights w_f^(i) ∝ exp{−W_f^(i)}. Another possibility to generate a posterior sample is to start from the data, if we want to evaluate an intractable generative model such as a restricted Boltzmann machine. The posterior sample is then propagated by the reverse chain of transition operators. Again, we accumulate the total work generated by the reverse simulation, W_r^(i). The reverse simulation corresponds to reverse AIS proposed by Burda et al. [16].

5.1 Jarzynski and cumulant estimators

There are various options to estimate the evidence from forward and backward simulations. We can apply the Jarzynski equality to W_f^(i) and W_r^(i), which corresponds to the estimators used in AIS [10] and reverse AIS [16]. According to Eq. (7) we can also compute an interval that likely contains the log evidence, but typically this interval will be quite large. Hummer [19] has developed estimators based on the cumulant expansion of pf and pr:

    log Z ≈ −⟨W⟩f + varf(W)/2,    log Z ≈ −⟨W⟩r − varr(W)/2    (12)

where varf(W) and varr(W) indicate the sample variances of the work generated during the forward and reverse simulations. The cumulant estimators increase/decrease the lower/upper bound of the log evidence (Eq. 7) by half the sample variance of the work. The forward and reverse cumulant estimators can also be combined into a single estimate [19]:

    log Z ≈ −(⟨W⟩f + ⟨W⟩r)/2 + (varf(W) − varr(W))/12.    (13)

5.2 Bennett’s acceptance ratio

Another powerful method is Bennett’s acceptance ratio (BAR) [20, 21], which is based on the observation that, according to CFT (Eq. 8),

    ∫ h(W; ∆F) pf(W) e^{−W} dW = ∫ h(W; ∆F) pr(W) e^{−∆F} dW

for any function h. Therefore, any choice of h gives an implicit estimator for ∆F.
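The cumulant estimators (12) and (13) are one-liners given the sampled work values; a minimal sketch (function names are ours; sample variances use ddof=1):

```python
import numpy as np

def log_Z_forward_cumulant(W_f):
    """Second-order cumulant estimator, forward direction (Eq. 12)."""
    W_f = np.asarray(W_f)
    return -np.mean(W_f) + 0.5 * np.var(W_f, ddof=1)

def log_Z_reverse_cumulant(W_r):
    """Second-order cumulant estimator, reverse direction (Eq. 12)."""
    W_r = np.asarray(W_r)
    return -np.mean(W_r) - 0.5 * np.var(W_r, ddof=1)

def log_Z_combined_cumulant(W_f, W_r):
    """Hummer's combined estimator (Eq. 13)."""
    W_f, W_r = np.asarray(W_f), np.asarray(W_r)
    return (-0.5 * (np.mean(W_f) + np.mean(W_r))
            + (np.var(W_f, ddof=1) - np.var(W_r, ddof=1)) / 12.0)
```

The forward estimator is exact when pf is Gaussian, which explains why it degrades under fast annealing, where the work distribution becomes skewed.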
Bennett showed [20, 9] that the minimum mean squared error is achieved for h ∝ (pf + pr)^−1, leading to the implicit equation

    ∑_i 1 / (1 + Z exp{W_f^(i)}) = ∑_i 1 / (1 + Z^−1 exp{−W_r^(i)}).    (14)

By numerically solving equation (14) for Z, we obtain an estimator of the evidence based on Bennett’s acceptance ratio (BAR). A straightforward way to solve the BAR equation is to iterate the multiplicative update Z^(t+1) ← Z^(t) LHS(Z^(t))/RHS(Z^(t)) where LHS and RHS are the left and right hand sides of equation (14) and the superscript (t) indicates an iteration index. Algorithm (1) provides pseudocode to compute the BAR estimator (further details are given in the supplementary material).

Figure 2: Performance of evidence estimators on the Gaussian toy model. M = 100 forward and reverse simulations were run for schedules of increasing length K. This experiment was repeated 1000 times to probe the stability of the estimators. Shown is the difference between the log evidence estimate and its true value −log 10. The average over all repetitions is shown as a red line; the light band indicates one standard deviation over all repetitions. (A) Cumulant estimator (Eq. 12) based on the forward simulation. (B) The combined cumulant estimator (Eq. 13). (C) Forward AIS estimator. (D) Reverse AIS. (E) BAR. (F) Histogram estimator.

5.3 Histogram estimator

Here we introduce yet another way of combining forward/backward simulations and estimating the model evidence. According to CFT, we have

    W_f^(i) ∼ pf(W),    W_r^(i) ∼ pr(W) = pf(W) e^{−W}/Z.

The idea is to combine all samples W_f^(i) and W_r^(i) to estimate pf, from which we can then obtain the evidence by using the JE (Eq. 9). Thanks to the CFT, the samples from the reverse simulation contribute most strongly to the integral in the JE. Therefore, if we can use the reverse paths to estimate the forward work distribution, pf will be quite accurate in the region that is most relevant for evaluating JE.
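The pooling idea of the histogram estimator can be realized as a self-consistent iteration over the combined work samples, in the spirit of two-state binless WHAM/MBAR. The paper defers the precise iterations to its supplement, so the following sketch is our own construction (function name and iteration count are illustrative), working in log space throughout:

```python
import numpy as np

def histogram_log_evidence(W_f, W_r, n_iter=2000):
    """Non-parametric estimate of pf from pooled forward/reverse work values,
    with log Z obtained from the JE; self-consistent WHAM/MBAR-style
    iteration over the weights p_j and the evidence Z."""
    W = np.concatenate([np.asarray(W_f), np.asarray(W_r)])
    M_f, M_r = len(W_f), len(W_r)
    log_Z = np.log(np.mean(np.exp(-np.asarray(W_f))))   # Jarzynski start
    for _ in range(n_iter):
        # p_j proportional to 1 / (M_f + M_r * exp(-W_j)/Z), then normalized,
        # since sample j may have come from pf or from pr = pf * exp(-W)/Z
        log_p = -np.logaddexp(np.log(M_f), np.log(M_r) - W - log_Z)
        log_p -= np.logaddexp.reduce(log_p)
        # Z = sum_j p_j exp(-W_j), evaluated in log space
        log_Z = np.logaddexp.reduce(log_p - W)
    return log_Z
```

On the same synthetic check as for BAR (Gaussian pf with mean µ and variance σ², so pr has mean µ − σ² and ∆F = µ − σ²/2), the self-consistent solution should agree with BAR up to numerical tolerance.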
Estimating pf from W_f^(i) and W_r^(i) is mathematically equivalent to estimating the density of states (DOS) (i.e. the marginal distribution of the log likelihood) from equilibrium simulations run at two inverse temperatures β = 0 and β = 1. We can therefore directly apply histogram techniques [14, 22] used to analyze equilibrium simulations to estimate pf from nonequilibrium simulations (details are given in the supplementary material). Histogram techniques result in a non-parametric estimate of the work distribution

    pf(W) ≈ ∑_j pj δ(W − Wj)    (15)

where all sampled work values, W_f^(i) and W_r^(i), are combined into a single set Wj and pj are normalized weights associated with each Wj. Using the JE, we obtain

    Z ≈ ∑_j pj e^{−Wj}    (16)

which is best evaluated in log space. The histogram iterations [14] used to determine pj and Z are very similar to the multiplicative updates that solve the BAR equation (Eq. 14). After running the histogram iterations, we obtain a non-parametric maximum likelihood estimate of pf (Eq. 15). It is also possible to carry out a Bayesian analysis and derive a Gibbs sampler for pf, which not only provides a point estimate for log Z but also quantifies its uncertainty (see supplementary material for details).

We studied the performance of the evidence estimators on forward/backward simulations of the Gaussian toy model. The cumulant estimators (Figs. 2A, B) are systematically biased in case of rapid annealing (small K). The combined cumulant estimator (Fig. 2B) is a significant improvement over the forward estimator, which does not take the reverse simulation data into account. The forward and reverse AIS estimators are shown in Figs. 2C and 2D. For this system, the evidence estimates derived from the reverse simulation are systematically more accurate than the AIS estimate based on the forward simulation, which is clear given that the work distribution from reverse simulations pr is much more concentrated than the forward work distribution pf (see Fig. 1A). The most accurate, least biased and most stable estimators are BAR (Fig. 2E) and the histogram estimator (Fig. 2F), which both combine forward and backward simulations into a single evidence estimate.

6 Experiments

We studied the performance of the nonequilibrium marginal likelihood estimators on various challenging probabilistic models including Markov random fields and Gaussian mixture models. A Python package implementing the work simulations and evidence estimators can be downloaded from https://github.com/michaelhabeck/paths.

6.1 Ising model

Our first test system is a 32 × 32 Ising model for which the log evidence can be computed exactly: log Z = 1339.27 [23]. A single configuration consists of 1024 spins xi = ±1. The energies of the intermediate distributions are Ek(x) = −βk ∑_{i∼j} xi xj where i ∼ j indicates nearest neighbors on a 2D square lattice.

Figure 3: Evidence estimation for the 32 × 32 Ising model. (A) Work distributions obtained for K = 1000 annealing and N = 1000 equilibration steps. (B) Average energies ⟨E⟩f and ⟨E⟩r at different annealing steps k in comparison to the average energy of the stationary distribution ⟨E⟩β. Shown is a zoom into the inverse temperature range from 0.4 to 0.7; the average energies agree quite well outside this interval. (C) Evidence estimates for increasing number of equilibration steps N. Light/dark blue: lower/upper bounds ⟨log w⟩f / ⟨log w⟩r; light/dark green: forward/reverse AIS estimators log⟨w⟩f / log⟨w⟩r; light red: BAR; dark red: histogram estimator. For N > 1000, BAR and the histogram estimator produce virtually identical evidence estimates.
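A compact sketch of the forward work simulation for a (much smaller) lattice, with single-spin-flip Metropolis kernels and work accumulated as W += (β_{k+1} − β_k) E(x_k), cf. Eq. (3). The sign convention E(x) = −∑_{i∼j} xi xj is used; lattice size, schedule and step counts are toy settings, not those of the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(x):
    """E(x) = -sum over nearest-neighbor pairs of x_i x_j, periodic lattice."""
    return -np.sum(x * np.roll(x, 1, axis=0)) - np.sum(x * np.roll(x, 1, axis=1))

def forward_work(L=8, K=60, n_eq=3):
    """Anneal one path from beta = 0 to beta = 1 and return the total work."""
    betas = np.linspace(0.0, 1.0, K + 1)
    x = rng.choice([-1, 1], size=(L, L))       # exact sample from p0 (beta = 0)
    W = 0.0
    for b_old, b_new in zip(betas[:-1], betas[1:]):
        W += (b_new - b_old) * ising_energy(x)
        for _ in range(n_eq * L * L):          # Metropolis spin flips at b_new
            i, j = rng.integers(L, size=2)
            # energy change of flipping spin (i, j): 2 x_ij * (sum of neighbors)
            dE = 2 * x[i, j] * (x[(i+1) % L, j] + x[(i-1) % L, j]
                                + x[i, (j+1) % L] + x[i, (j-1) % L])
            if rng.random() < np.exp(-b_new * dE):
                x[i, j] = -x[i, j]
    return W

works = np.array([forward_work() for _ in range(10)])
log_w = -works
lower = log_w.mean()                               # lower bound <log w>_f
ais = np.logaddexp.reduce(log_w) - np.log(len(log_w))   # AIS estimate log <w>_f
```

By Jensen's inequality the lower bound ⟨log w⟩f can never exceed the AIS estimate log ⟨w⟩f, which is a useful invariant to check in any implementation.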
We generate M = 1000 forward and reverse paths using a linear inverse temperature schedule that interpolates between β0 = 0 and βK = 1 where K = 1000. Forward simulations start from random spin configurations. For the reverse simulations, we start in one of the two ground states with all spins either −1 or +1. The Tk are Metropolis kernels based on pk: a new spin configuration is proposed by flipping a randomly selected spin and accepted or rejected according to Metropolis’ rule. The single spin-flip transitions are repeated N times at constant βk, i.e. N is the number of equilibration steps after a perturbation was induced by lowering the temperature. The larger N, the more time we allow the simulation to equilibrate, and the closer qk will be to pk.

Figure 3A shows the work distributions obtained with N = 1000 equilibration steps per temperature perturbation. Even though the forward and reverse work distributions overlap only weakly, the evidence estimates obtained with BAR and the histogram estimator are quite accurate with 1338.05 (BAR) and 1338.28 (histogram estimator), which differ only by approx. 1 nat from the true evidence and correspond to relative errors of ∼9 × 10^−4 (BAR) and 7 × 10^−4 (histogram estimator). Forward and reverse AIS provide less accurate estimates of the log evidence: 1333.66 (AIS) and 1342.05 (RAISE). The lower and upper bounds are very broad, ⟨log w⟩f = 1290.5 and ⟨log w⟩r = 1352.0, which results from hysteresis effects. Figure 3B zooms into the average energies obtained during the forward and reverse simulations and compares them with the average energy of a fully equilibrated simulation. The average energies differ most strongly at inverse temperatures close to the critical value βcrit ≈ 0.44 at which the Ising model undergoes a second-order phase transition. We also tested the performance of the estimators as a function of the number of equilibration steps N. As already observed for the Gaussian toy model, BAR and the histogram estimator outperform the Jarzynski estimators (AIS and RAISE) also in the case of the Ising model (see Fig. 3C).

Figure 4: Evidence estimation for the Potts model and RBM. (A) Estimated log evidence of the Potts model for a fixed computational budget K × N = 10^9 where M = 100 and ten repetitions were computed. The reference value log Z = 1742 (obtained with parallel tempering) is shown as a dashed black line. (B) log Z distributions obtained with the Gibbs sampling version of the histogram estimator for K = 1000 and varying numbers of equilibration steps. (C) Work distributions obtained for a marginal and a full RBM (light/dark blue: forward/reverse simulation of the marginal model; light/dark green: forward/reverse simulation of the full model).

6.2 Ten-state Potts model

Next we performed simulations of the ten-state Potts model defined over a 32 × 32 lattice, where the spins of the Ising model are replaced by integer colors xi ∈ {1, . . . , 10} and an interaction energy 2δ(xi, xj). This model is significantly more challenging than the Ising model, because it undergoes a first-order phase transition and has an astronomically larger number of states (10^1024 colorings rather than 2^1024 spin configurations). We performed forward/backward simulations using a linear inverse temperature schedule with β0 = 0, βK = 1 and a fixed computational budget K × N = 10^9. Figure 4A shows that there seems to be no advantage in increasing the number of intermediate distributions at the cost of reducing the number of equilibration steps. Again, BAR and the histogram estimator perform very similarly. The Gibbs sampling version of the histogram estimator also provides the posterior of log Z (see Fig. 4B).
For too few equilibration steps N, this distribution is rather broad or even slightly biased, but for large N the log Z posterior concentrates around the correct log evidence.

6.3 Restricted Boltzmann machine

The restricted Boltzmann machine (RBM) is a common building block of deep learning hierarchies. The RBM is an intractable MRF with bipartite interactions: E(v, h) = −(a^T v + b^T h + v^T W h) where a, b are the visible and hidden biases and W are the couplings between the visible and hidden units vi and hj. Here we compare annealing of the full model Ek(v, h) = βk E(v, h) against annealing of the marginal model Ek(h) = −βk log ∑_v exp{−E(v, h)}. The full model can be simulated using a Gibbs sampler, which is straightforward since the conditional distributions are Bernoulli. To sample from the marginal model, we use a Metropolis kernel similar to the one used for the Ising model. To start the reverse simulations, we randomly pick an image from the training set and generate an initial hidden state by sampling from the conditional distribution p(h|v). We then run 100 steps of Gibbs sampling with TK to obtain a posterior sample.

We ran tests on an RBM with 784 visible and 500 hidden units trained on the MNIST handwritten digits dataset [24] with contrastive divergence using 25 steps [25]. Since the true log evidence is not known, we use a reference value obtained with parallel tempering (PT): log Z ≈ 451.42. Figure 4C compares evidence estimates based on annealing simulations of the full model against the marginal model. Both annealing approaches provide very similar evidence estimates, 451.43 (full model) and 451.48 (marginal model), that are close to the PT result. However, simulation of the marginal model is three times faster compared to the full model. Therefore, it seems beneficial to evaluate and train RBMs based on sampling and annealing of the marginal model p(h) rather than the full model p(v, h).
6.4 Gaussian mixture model

Finally, we consider a sort of “data annealing” strategy in which independent data points are added one by one as in sequential Monte Carlo [10]: Ek(x) = −∑_{l<k} log p(yl|x, M). We applied thermal and data annealing to a three-component Gaussian mixture model with means −5, 0, 5, standard deviations 1, 3, 0.5 and equal weights. We generated K = 100 data points, and applied both types of annealing to estimate the mixture parameters and marginal likelihood. Parallel tempering produced a reference log evidence of −259.49. A Gibbs sampler utilizing cluster responsibilities as auxiliary variables served as transition kernel. Forward simulations started from prior samples, where conjugate priors were used for the component means, widths and weights. The reverse simulations started from a posterior sample obtained by running K-means followed by 100 Gibbs sampling iterations. Thermal annealing with as many temperatures as data points and 10 Gibbs sampling steps per temperature estimated a log evidence of −259.72 ± 0.60 (M = 100, 10 repetitions). For 100 Gibbs steps, we obtain −259.47 ± 0.36. Data annealing with 10 Gibbs steps per addition of a data point yields −257.52 ± 0.97, which seems to be slightly biased. Increasing the number of Gibbs steps to 100 improves the accuracy of the log evidence estimate: −258.32 ± 1.21. This shows that there might be some potential in a data annealing strategy, especially for larger datasets.

7 Summary

This paper applies nonequilibrium techniques to estimate the marginal likelihood of an intractable probabilistic model. We outline the most important results from nonequilibrium statistical physics that are relevant to marginal likelihood estimation and relate them to machine learning algorithms such as AIS [10], RAISE [16] and bidirectional Monte Carlo [17, 18]. We introduce two estimators, BAR and the histogram estimator, that are currently not used in the context of probabilistic inference.
We study the performance of the estimators on a toy system and various challenging probabilistic models including Markov random fields and Gaussian mixture models. The two evidence estimators perform very similarly and are superior to forward/reverse AIS and the cumulant estimators. Compared to BAR, the histogram estimator has the additional advantage that it also quantifies the uncertainty of the evidence estimate. Acknowledgments This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) SFB 860, subproject B09. References [1] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge UK, 2003. [2] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge UK, 2003. [3] R. Kass and A. Raftery. Bayes factors. American Statistical Association, 90:773–775, 1995. [4] J. S. Liu. Monte Carlo strategies in scientific computing. Springer, 2001. 9 [5] K. H. Knuth, M. Habeck, N. K. Malakar, A. M. Mubeen, and B. Placek. Bayesian evidence and model selection. Digit. Signal Process., 47(C):50–67, 2015. [6] G. E. Crooks. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. Journal of Statistical Physics, 90(5-6):1481–1487, 1998. [7] G. E. Crooks. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys Rev E, 60:2721–2726, 1999. [8] G. E. Crooks. Excursions in statistical dynamics. PhD thesis, University of California at Berkeley, 1999. [9] A. Gelman and X. Meng. Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Statistical Science, 13:163–185, 1998. [10] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001. [11] R. H. Swendsen and J.-S. Wang. Replica Monte Carlo simulation of spin glasses. Phys Rev Lett, 57:2607– 2609, 1986. [12] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. 
In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, pages 156–163, 1991. [13] J. Skilling. Nested sampling for general Bayesian computation. Bayesian Analysis, 1:833–860, 2006. [14] M. Habeck. Evaluation of marginal likelihoods using the density of states. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 22, pages 486–494. JMLR: W&CP 22, 2012. [15] C. Jarzynski. Nonequilibrium equality for free energy differences. Phys Rev Lett, 78:2690–2693, 1997. [16] Y. Burda, R. Grosse, and R. Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Artificial Intelligence and Statistics, pages 102–110, 2015. [17] R. B. Grosse, Z. Ghahramani, and R. P. Adams. Sandwiching the marginal likelihood using bidirectional Monte Carlo. arXiv preprint arXiv:1511.02543, 2015. [18] R. B. Grosse, S. Ancha, and D. M. Roy. Measuring the reliability of MCMC inference with bidirectional Monte Carlo. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2451–2459. Curran Associates, Inc., 2016. [19] G. Hummer. Fast-growth thermodynamic integration: Error and efficiency analysis. The Journal of Chemical Physics, 114(17):7330–7337, 2001. [20] C. H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. J. Comput. Phys., 22:245, 1976. [21] M. R. Shirts, E. Bair, G. Hooker, and V. S Pande. Equilibrium free energies from nonequilibrium measurements using maximum-likelihood methods. Phys Rev Lett, 91(14):140601, 2003. [22] M. Habeck. Bayesian estimation of free energies from equilibrium simulations. Phys Rev Lett, 109(10):100601, 2012. [23] P. D. Beale. Exact Distribution of Energies in the Two-Dimensional Ising Model. Phys Rev Lett, 76:78–81, 1996. [24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 
Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [25] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771–1800, 2002.
Generalized Linear Model Regression under Distance-to-set Penalties Jason Xu University of California, Los Angeles jqxu@ucla.edu Eric C. Chi North Carolina State University eric_chi@ncsu.edu Kenneth Lange University of California, Los Angeles klange@ucla.edu Abstract Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and regularization penalties, and avoid the drawback of shrinkage. To optimize distance penalized objectives, we make use of the majorization-minimization principle. Resulting algorithms constructed within this framework are amenable to acceleration and come with global convergence guarantees. Applications to shape constraints, sparse regression, and rank-restricted matrix regression on synthetic and real data showcase strong empirical performance, even under non-convex constraints. 1 Introduction and Background In classical linear regression, the response variable y follows a Gaussian distribution whose mean xtβ depends linearly on a parameter vector β through a vector of predictors x. Generalized linear models (GLMs) extend classical linear regression by allowing y to follow any exponential family distribution, and the conditional mean of y to be a nonlinear function h(xtβ) of xtβ [24]. This encompasses a broad class of important models in statistics and machine learning. For instance, count data and binary classification come within the purview of generalized linear regression. In many settings, it is desirable to impose constraints on the regression coefficients. Sparse regression is a prominent example.
In high-dimensional problems where the number of predictors n exceeds the number of cases m, inference is possible provided the regression function lies in a low-dimensional manifold [11]. In this case, the coefficient vector β is sparse, and just a few predictors explain the response y. The goals of sparse regression are to correctly identify the relevant predictors and to estimate their effect sizes. One approach, best subset regression, is known to be NP-hard. Penalizing the likelihood by including an ℓ0 penalty ∥β∥0 (the number of nonzero coefficients) is a possibility, but the resulting objective function is nonconvex and discontinuous. The convex relaxation of ℓ0 regression replaces ∥β∥0 by the ℓ1 norm ∥β∥1. This LASSO proxy for ∥β∥0 restores convexity and continuity [31]. While LASSO regression has been a great success, it has the downside of simultaneously inducing both sparsity and parameter shrinkage. Unfortunately, shrinkage often has the undesirable side effect of including spurious predictors (false positives) with the true predictors. Motivated by sparse regression, we now consider the alternative of penalizing the log-likelihood by the squared distance from the parameter vector β to the constraint set. If there are several constraints, then we add a distance penalty for each constraint set. Our approach is closely related to the proximal distance algorithm [19, 20] and proximity function approaches to convex feasibility problems [5]. Neither of these prior algorithm classes explicitly considers generalized linear models. Beyond sparse regression, distance penalization applies to a wide class of statistically relevant constraint sets, including isotonic constraints and matrix rank constraints. To maximize distance penalized log-likelihoods, we advocate the majorization-minimization (MM) principle [2, 18, 19]. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
MM algorithms are increasingly popular in solving the large-scale optimization problems arising in statistics and machine learning [22]. Although distance penalization preserves convexity when it already exists, neither the objective function nor the constraint sets need be convex to carry out estimation. The capacity to project onto each constraint set is necessary. Fortunately, many projection operators are known. Even in the absence of convexity, we are able to prove that our algorithm converges to a stationary point. In the presence of convexity, the stationary points are global minima. In subsequent sections, we begin by briefly reviewing GLM regression and shrinkage penalties. We then present our distance penalty method and a sample of statistically relevant problems that it can address. Next we lay out in detail our distance penalized GLM algorithm, discuss how it can be accelerated, summarize our convergence results, and compare its performance to that of competing methods on real and simulated data. We close with a summary and a discussion of future directions. GLMs and Exponential Families: In linear regression, the vector of responses y is normally distributed with mean vector E(y) = Xβ and covariance matrix V(y) = σ²I. A GLM preserves the independence of the responses y_i but assumes that they are generated from a shared exponential family distribution. The response y_i is postulated to have mean µ_i(β) = E[y_i | β] = h(x_i^T β), where x_i is the ith row of a design matrix X, and the inverse link function h(s) is smooth and strictly increasing [24]. The functional inverse h⁻¹(s) of h(s) is called the link function. The likelihood of any exponential family can be written in the canonical form p(y_i | θ_i, τ) = c_1(y_i, τ) exp{[y_i θ_i − ψ(θ_i)] / c_2(τ)}. (1) Here τ is a fixed scale parameter, and the positive functions c_1 and c_2 are constant with respect to the natural parameter θ_i. The function ψ is smooth and convex; a brief calculation shows that µ_i = ψ′(θ_i).
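The identity µ_i = ψ′(θ_i) is easy to check numerically in the Bernoulli case, where ψ(θ) = ln(1 + e^θ) and the mean function is the sigmoid; a minimal sketch (function names are our own, not the paper's):

```python
import math

def psi(theta):
    # Cumulant function of the Bernoulli family: psi(theta) = ln(1 + e^theta)
    return math.log1p(math.exp(theta))

def sigmoid(theta):
    # Mean function mu = psi'(theta) for the Bernoulli family
    return 1.0 / (1.0 + math.exp(-theta))

# A central finite difference of psi matches the sigmoid at any theta
theta, h = 0.7, 1e-6
numeric = (psi(theta + h) - psi(theta - h)) / (2 * h)
assert abs(numeric - sigmoid(theta)) < 1e-8
```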
The canonical link function h⁻¹(s) is defined by the condition h⁻¹(µ_i) = x_i^T β = θ_i. In this case, h(θ_i) = ψ′(θ_i), and the log-likelihood ln p(y | β, X, τ) is concave in β. Because c_1 and c_2 are not functions of θ, we may drop these terms and work with the log-likelihood up to proportionality. We denote this by L(β | y, X) ∝ ln p(y | β, X, τ). The gradient and second differential of L(β | y, X) amount to ∇L = Σ_{i=1}^m [y_i − ψ′(x_i^T β)] x_i and d²L = −Σ_{i=1}^m ψ″(x_i^T β) x_i x_i^T. (2) As an example, when ψ(θ) = θ²/2 and c_2(τ) = τ², the density (1) is the Gaussian likelihood, and GLM regression under the identity link coincides with standard linear regression. Choosing ψ(θ) = ln[1 + exp(θ)] and c_2(τ) = 1 corresponds to logistic regression under the canonical link h⁻¹(s) = ln[s/(1 − s)] with inverse link h(s) = e^s/(1 + e^s). GLMs unify a range of regression settings, including Poisson, logistic, gamma, and multinomial regression. Shrinkage penalties: The least absolute shrinkage and selection operator (LASSO) [12, 31] solves β̂ = argmin_β [λ∥β∥1 − (1/m) Σ_{j=1}^m L(β | y_j, x_j)], (3) where λ > 0 is a tuning constant that controls the strength of the ℓ1 penalty. The ℓ1 relaxation is a popular approach to promote a sparse solution, but there is no obvious map between λ and the sparsity level k. In practice, a suitable value of λ is found by cross-validation. Relying on global shrinkage towards zero, LASSO notoriously leads to biased estimates. This bias can be ameliorated by re-estimating under the model containing only the selected variables, known as the relaxed LASSO [25], but success of this two-stage procedure relies on correct support recovery in the first step. In many cases, LASSO shrinkage is known to introduce false positives [30], resulting in spurious covariates that cannot be corrected. To combat these shortcomings, one may replace the LASSO penalty by a non-convex penalty with milder effects on large coefficients.
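The shrinkage the ℓ1 penalty in (3) induces can be seen concretely in its proximal operator, which soft-thresholds every coefficient toward zero; a small standalone illustration (our own code, not from the paper):

```python
import numpy as np

def soft_threshold(beta, lam):
    # Proximal operator of lam * ||beta||_1: every entry moves toward 0
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

beta = np.array([3.0, -0.5, 1.2, 0.0])
shrunk = soft_threshold(beta, 1.0)
# Small entries (|beta| <= lam) vanish; large ones survive but are biased
# toward zero by exactly lam -- the source of LASSO's estimation bias.
assert np.allclose(shrunk, [2.0, 0.0, 0.2, 0.0])
```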
The smoothly clipped absolute deviation (SCAD) penalty [10] and minimax concave penalty (MCP) [34] are even functions defined through their derivatives q′_γ(β_i, λ) = λ[1{|β_i| ≤ λ} + ((γλ − |β_i|)_+ / ((γ − 1)λ)) 1{|β_i| > λ}] and q′_γ(β_i, λ) = λ(1 − |β_i|/(γλ))_+ for β_i > 0. Both penalties reduce bias, interpolate between hard thresholding and LASSO shrinkage, and significantly outperform the LASSO in some settings, especially in problems with extreme sparsity. SCAD, MCP, as well as the relaxed lasso come with the disadvantage of requiring an extra tuning parameter γ > 0 to be selected. 2 Regression with distance-to-constraint set penalties As an alternative to shrinkage, we consider penalizing the distance between the parameter vector β and constraints defined by sets C_i. Penalized estimation seeks the solution β̂ = argmin_β (1/2) Σ_i v_i dist(β, C_i)² − (1/m) Σ_{j=1}^m L(β | y_j, x_j) := argmin_β f(β), (4) where the v_i are weights on the distance penalty to constraint set C_i. The Euclidean distance can also be written as dist(β, C_i) = ∥β − P_{Ci}(β)∥2, where P_{Ci}(β) denotes the projection of β onto C_i. The projection operator is uniquely defined when C_i is closed and convex. If C_i is merely closed, then P_{Ci}(β) may be multi-valued for a few unusual external points β. Notice the distance penalty dist(β, C_i)² is 0 precisely when β ∈ C_i. The solution (4) represents a tradeoff between maximizing the log-likelihood and satisfying the constraints. When each C_i is convex, the objective function is convex as a whole. Sending all of the penalty constants v_i to ∞ produces in the limit the constrained maximum likelihood estimate. This is the philosophy behind the proximal distance algorithm [19, 20]. In practice, it often suffices to find the solution (4) under fixed large v_i. The reader may wonder why we employ squared distances rather than distances. The advantage is that squaring renders the penalties differentiable. Indeed, ∇(1/2) dist(x, C_i)² = x − P_{Ci}(x) whenever P_{Ci}(x) is single valued.
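This gradient identity can be verified numerically for a simple convex set such as the nonnegative orthant, comparing the closed form x − P_C(x) against finite differences of the squared-distance penalty (a sketch with hypothetical names):

```python
import numpy as np

def proj_orthant(x):
    # Euclidean projection onto the nonnegative orthant
    return np.maximum(x, 0.0)

def half_sq_dist(x):
    # (1/2) * dist(x, C)^2 for C = nonnegative orthant
    return 0.5 * np.sum((x - proj_orthant(x)) ** 2)

x = np.array([1.5, -2.0, 0.3, -0.1])
analytic = x - proj_orthant(x)  # the claimed gradient

# Central finite differences of the squared-distance penalty
h = 1e-6
numeric = np.array([
    (half_sq_dist(x + h * e) - half_sq_dist(x - h * e)) / (2 * h)
    for e in np.eye(len(x))
])
assert np.allclose(analytic, numeric, atol=1e-6)
```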
This is almost always the case. In contrast, dist(x, C_i) is typically nondifferentiable at boundary points of C_i even when C_i is convex. The following examples motivate distance penalization by considering constraint sets and their projections for several important models. Sparse regression: Sparsity can be imposed directly through the constraint set C_k = {z ∈ R^n : ∥z∥0 ≤ k}. Projecting a point β onto C_k is trivially accomplished by setting all but the k largest entries in magnitude of β equal to 0, the same operation behind iterative hard thresholding algorithms. Instead of solving the ℓ1-relaxation (3), our algorithm approximately solves the original ℓ0-constrained problem by repeatedly projecting onto the sparsity set C_k. Unlike LASSO regression, this strategy enables one to directly incorporate prior knowledge of the sparsity level k in an interpretable manner. When no such information is available, k can be selected by cross-validation just as the LASSO tuning constant λ is selected. Distance penalization escapes the NP-hard dilemma of best subset regression at the cost of possible convergence to a local minimum. Shape and order constraints: As an example of shape and order restrictions, consider isotonic regression [1]. For data y ∈ R^n, isotonic regression seeks to minimize (1/2)∥y − β∥2² subject to the condition that the β_i are non-decreasing. In this case, the relevant constraint set is the isotone convex cone C = {β : β_1 ≤ β_2 ≤ … ≤ β_n}. Projection onto C is straightforward and efficiently accomplished using the pooled adjacent violators algorithm [1, 8]. More complicated order constraints can be imposed analogously: for instance, β_i ≤ β_j might be required of all edges i → j in a directed graph model. Notably, isotonic linear regression applies to changepoint problems [32]; our approach allows isotonic constraints in GLM estimation. One noteworthy application is Poisson regression where the intensity parameter is assumed to be nondecreasing with time.
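Both projections above have short implementations; the following sketch (our own code, not the authors') projects onto the sparsity set C_k by hard thresholding, and onto the isotone cone by a pooled-adjacent-violators pass:

```python
import numpy as np

def proj_sparsity(beta, k):
    # Keep the k largest-magnitude entries of beta, zero out the rest
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-k:]
    out[keep] = beta[keep]
    return out

def proj_isotone(y):
    # Pooled adjacent violators: Euclidean projection onto {b : b_1 <= ... <= b_n}
    blocks = []  # each block is [block mean, block size]
    for v in y:
        blocks.append([v, 1])
        # Merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            b2, b1 = blocks.pop(), blocks.pop()
            size = b1[1] + b2[1]
            blocks.append([(b1[0] * b1[1] + b2[0] * b2[1]) / size, size])
    return np.concatenate([np.full(n, v) for v, n in blocks])

beta = np.array([0.2, -3.0, 0.1, 1.5])
assert np.allclose(proj_sparsity(beta, 2), [0.0, -3.0, 0.0, 1.5])
# The out-of-order pair (3.0, 2.0) is pooled to its mean 2.5
assert np.allclose(proj_isotone(np.array([1.0, 3.0, 2.0, 4.0])), [1.0, 2.5, 2.5, 4.0])
```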
Rank restriction: Consider GLM regression where the predictors X_i and regression coefficients B are matrix-valued. To impose structure in high-dimensional settings, rank restriction serves as an appropriate matrix counterpart to sparsity for vector parameters. Prior work suggests that imposing matrix sparsity is much less effective than restricting the rank of B in achieving model parsimony [37]. The matrix analog of the LASSO penalty is the nuclear norm penalty. The nuclear norm of a matrix B is defined as the sum of its singular values, ∥B∥_* = Σ_j σ_j(B) = trace(√(B^*B)). Notice ∥B∥_* is a convex relaxation of rank(B). Including a nuclear norm penalty entails shrinkage and induces low-rankness by proxy. Distance penalization of rank involves projecting onto the set C_r = {Z ∈ R^{n×n} : rank(Z) ≤ r} for a given rank r. Despite sacrificing convexity, distance penalization of rank is, in our view, both more natural and more effective than nuclear norm penalization. Avoiding shrinkage works to the advantage of distance penalization, which we will see empirically in Section 4. According to the Eckart-Young theorem, the projection of a matrix B onto C_r is achieved by extracting the singular value decomposition of B and truncating all but the top r singular values. Truncating the singular value decomposition is a standard numerical task best computed by Krylov subspace methods [14]. Simple box constraints, hyperplanes, and balls: Many relevant set constraints reduce to closed convex sets with trivial projections. For instance, enforcing non-negative parameter values is accomplished by projecting onto the non-negative orthant. This is an example of a box constraint. Specifying linear equality and inequality constraints entails projecting onto a hyperplane or half-space, respectively. A Tikhonov or ridge penalty constraint ∥β∥2 ≤ r requires spherical projection. Finally, we stress that it is straightforward to consider combinations of the aforementioned constraints.
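The Eckart-Young projection onto C_r takes a few lines with a dense SVD; at scale one would substitute the Krylov methods the paper recommends. A sketch (our own code):

```python
import numpy as np

def proj_rank(B, r):
    # Eckart-Young: zero out all but the top r singular values
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt  # U @ diag(s) @ Vt

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 5))
P = proj_rank(B, 2)
assert np.linalg.matrix_rank(P) == 2
# Projecting a point already in C_r changes nothing
assert np.allclose(proj_rank(P, 2), P)
```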
Multiple norm penalties are already in common use. To encourage selection of correlated variables [38], the elastic net includes both ℓ1 and ℓ2 regularization terms. Further examples include matrix fitting subject to both sparse and low-rank matrix constraints [29] and LASSO regression subject to linear equality and inequality constraints [13]. In our setting the relative importance of different constraints can be controlled via the weights v_i. 3 Majorization-minimization Figure 1: Illustrative example of two MM iterates with surrogates g(x | x_k) majorizing f(x) = cos(x). To solve the minimization problem (4), we exploit the principle of majorization-minimization. An MM algorithm successively minimizes a sequence of surrogate functions g(β | β_k) majorizing the objective function f(β) around the current iterate β_k. See Figure 1. Forcing g(β | β_k) downhill automatically drives f(β) downhill as well [19, 22]. Every expectation-maximization (EM) algorithm [9] for maximum likelihood estimation is an MM algorithm. Majorization requires two conditions: tangency at the current iterate, g(β_k | β_k) = f(β_k), and domination, g(β | β_k) ≥ f(β) for all β ∈ R^m. The iterates of the MM algorithm are defined by β_{k+1} := argmin_β g(β | β_k), although all that is absolutely necessary is that g(β_{k+1} | β_k) < g(β_k | β_k). Whenever this holds, the descent property f(β_{k+1}) ≤ g(β_{k+1} | β_k) ≤ g(β_k | β_k) = f(β_k) follows. This simple principle is widely applicable and converts many hard optimization problems (non-convex or non-smooth) into a sequence of simpler problems. To majorize the objective (4), it suffices to majorize each distance penalty dist(β, C_i)². The majorization dist(β, C_i)² ≤ ∥β − P_{Ci}(β_k)∥2² is an immediate consequence of the definitions of the set distance dist(β, C_i)² and the projection operator P_{Ci}(β) [8]. The surrogate function g(β | β_k) = (1/2) Σ_i v_i ∥β − P_{Ci}(β_k)∥2² − (1/m) Σ_{j=1}^m L(β | y_j, x_j)
has gradient ∇g(β | β_k) = Σ_i v_i [β − P_{Ci}(β_k)] − (1/m) Σ_{j=1}^m ∇L(β | y_j, x_j) and second differential d²g(β | β_k) = Σ_i v_i I_n − (1/m) Σ_{j=1}^m d²L(β | y_j, x_j) := H_k. (5) The score ∇L(β | y_j, x_j) and information −d²L(β | y_j, x_j) appear in equation (2). Note that for GLMs under the canonical link, the observed and expected information matrices coincide, and their common value is thus positive semidefinite. Adding a multiple of the identity I_n to the information matrix is analogous to the Levenberg-Marquardt maneuver against ill-conditioning in ordinary regression [26]. Our algorithm therefore naturally benefits from this safeguard. Since solving the stationarity equation ∇g(β | β_k) = 0 is not analytically feasible in general, we employ one step of Newton’s method in the form β_{k+1} = β_k − η_k d²g(β_k | β_k)⁻¹ ∇f(β_k), where η_k ∈ (0, 1] is a stepsize multiplier chosen via backtracking. Note here our application of the gradient identity ∇f(β_k) = ∇g(β_k | β_k), valid for all smooth surrogate functions. Because the Newton increment is a descent direction, some value of η_k is bound to produce a decrease in the surrogate and therefore in the objective. The following theorem, proved in the Supplement, establishes global convergence of our algorithm under simple Armijo backtracking for choosing η_k: Theorem 3.1 Consider the algorithm map M(β) = β − η_β H(β)⁻¹ ∇f(β), where the step size η_β has been selected by Armijo backtracking. Assume that f(β) is coercive in the sense lim_{∥β∥→∞} f(β) = +∞. Then the limit points of the sequence β_{k+1} = M(β_k) are stationary points of f(β). Moreover, the set of limit points is compact and connected. We remark that stationary points are necessarily global minimizers when f(β) is convex. Furthermore, coercivity of f(β) is a very mild assumption, and is satisfied whenever either the distance penalty or the negative log-likelihood is coercive. For instance, the negative log-likelihoods of the Poisson and Gaussian distributions are coercive functions.
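Putting the pieces together, one Newton-type MM update with Armijo backtracking can be sketched for distance-penalized sparse logistic regression. This is our own illustrative implementation of the scheme just described (all names and default constants are ours), not the authors' code:

```python
import numpy as np

def mm_sparse_logistic(X, y, k, v=10.0, iters=100, alpha=0.1, sigma=0.5):
    """Minimize f(b) = (v/2) dist(b, C_k)^2 - (1/m) loglik(b) for logistic data."""
    m, n = X.shape
    b = np.zeros(n)

    def proj(b):
        # Projection onto the sparsity set C_k (hard thresholding)
        out = np.zeros_like(b)
        keep = np.argsort(np.abs(b))[-k:]
        out[keep] = b[keep]
        return out

    def f(b):
        t = X @ b
        loglik = np.sum(y * t - np.logaddexp(0.0, t)) / m
        return 0.5 * v * np.sum((b - proj(b)) ** 2) - loglik

    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))               # psi'(x_i^T b)
        grad = v * (b - proj(b)) - X.T @ (y - p) / m      # gradient of f and surrogate
        H = v * np.eye(n) + (X.T * (p * (1 - p))) @ X / m # surrogate Hessian, PD
        d = -np.linalg.solve(H, grad)                     # Newton direction
        eta = 1.0
        while f(b + eta * d) > f(b) + alpha * eta * (grad @ d):  # Armijo
            eta *= sigma
        b = b + eta * d
    return b
```

For large v the iterates are pushed toward the sparsity set, and a final hard threshold yields an exactly k-sparse estimate; the v·I_n term in H plays the Levenberg-Marquardt role noted above.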
While this is not the case for the Bernoulli distribution, adding a small ℓ2 penalty ω∥β∥2² restores coerciveness. Including such a penalty in logistic regression is a common remedy to the well-known problem of numerical instability in parameter estimates caused by a poorly conditioned design matrix X [27]. Since L(β) is concave in β, the compactness of one or more of the constraint sets C_i is another sufficient condition for coerciveness. Generalization to Bregman divergences: Although we have focused on penalizing GLM likelihoods with Euclidean distance penalties, this approach holds more generally for objectives containing non-Euclidean measures of distance. As reviewed in the Supplement, the Bregman divergence D_φ(v, u) = φ(v) − φ(u) − dφ(u)(v − u) generated by a convex function φ(v) provides a general notion of directed distance [4]. The Bregman divergence associated with the choice φ(v) = (1/2)∥v∥2², for instance, is the squared Euclidean distance. One can rewrite the GLM penalized likelihood as a sum of multiple Bregman divergences f(β) = Σ_i v_i D_φ[P^φ_{Ci}(β), β] + Σ_{j=1}^m w_j D_ζ[y_j, h̃_j(β)]. (6)
Algorithm 1 MM algorithm to solve the distance-penalized objective (4)
1: Initialize k = 0, starting point β_0, initial step size α ∈ (0, 1), and halving parameter σ ∈ (0, 1)
2: repeat
3: ∇f_k ← Σ_i v_i [β_k − P_{Ci}(β_k)] − (1/m) Σ_{j=1}^m ∇L(β_k | y_j, x_j)
4: H_k ← Σ_i v_i I_n − (1/m) Σ_{j=1}^m d²L(β_k | y_j, x_j)
5: v ← −H_k⁻¹ ∇f_k
6: η ← 1
7: while f(β_k + ηv) > f(β_k) + αη ∇f_k^T v do
8: η ← ση
9: end while
10: β_{k+1} ← β_k + ηv
11: k ← k + 1
12: until convergence
The first sum in equation (6) represents the distance penalty to the constraint sets C_i. The projection P^φ_{Ci}(β) denotes the closest point to β in C_i measured under D_φ. The second sum generalizes the GLM log-likelihood term, where h̃_j(β) = h⁻¹(x_j^T β). Every exponential family likelihood uniquely corresponds to a Bregman divergence D_ζ generated by the conjugate of its cumulant function ζ = ψ* [28].
Hence, −L(β | y, X) is proportional to (1/m) Σ_{j=1}^m D_ζ[y_j, h⁻¹(x_j^T β)]. The functional form (6) immediately broadens the class of objectives to include quasi-likelihoods and distances to constraint sets measured under a broad range of divergences. Objective functions of this form are closely related to proximity function minimization in the convex feasibility literature [5, 6, 7, 33]. The MM principle makes possible the extension of the projection algorithms of [7] to minimize this general objective. Our MM algorithm for distance penalized GLM regression is summarized in Algorithm 1. Although for the sake of clarity the algorithm is written for vector-valued arguments, it holds more generally for matrix-variate regression. In this setting the regression coefficients B and predictors X_i are matrix valued, and response y_j has mean h[trace(X_i^T B)] = h[vec(X_i)^T vec(B)]. Here the vec operator stacks the columns of its matrix argument. Thus, the algorithm immediately applies if we replace B by vec(B) and X_1, …, X_m by X = [vec(X_1), …, vec(X_m)]^T. Projections requiring the matrix structure are performed by reshaping vec(B) into matrix form. In contrast to shrinkage approaches, these maneuvers obviate the need for new algorithms in matrix regression [37]. Acceleration: Here we mention two modifications to the MM algorithm that translate to large practical differences in computational cost. Inverting the n-by-n matrix d²g(β_k | β_k) naively requires O(n³) flops. When the number of cases m ≪ n, invoking the Woodbury formula requires solving a substantially smaller m × m linear system at each iteration. This computational savings is crucial in the analysis of the EEG data of Section 4. The Woodbury formula says (vI_n + UV)⁻¹ = v⁻¹I_n − v⁻²U(I_m + v⁻¹VU)⁻¹V when U and V are n×m and m×n matrices, respectively. Inspection of equations (2) and (5) shows that d²g(β_k | β_k) takes the required form.
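The Woodbury identity invoked here is easy to verify numerically; the following standalone check (our own code) inverts only an m × m matrix on the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, v = 8, 3, 2.0
U = rng.standard_normal((n, m))
V = rng.standard_normal((m, n))

# Direct inverse of the n x n matrix
direct = np.linalg.inv(v * np.eye(n) + U @ V)

# Woodbury: only the m x m matrix I_m + (1/v) V U is inverted
woodbury = np.eye(n) / v - (U @ np.linalg.inv(np.eye(m) + V @ U / v) @ V) / v**2
assert np.allclose(direct, woodbury)
```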
Under Woodbury’s formula the dominant computation is the matrix-matrix product VU, which requires only O(nm²) flops. The second modification to the MM algorithm is quasi-Newton acceleration. This technique exploits secant approximations derived from iterates of the algorithm map to approximate the differential of the map. As few as two secant approximations can lead to orders of magnitude reduction in the number of iterations until convergence. We refer the reader to [36] for a detailed description of quasi-Newton acceleration and a summary of its performance on various high-dimensional problems. 4 Results and performance We first compare the performance of our distance penalization method to leading shrinkage methods in sparse regression. Our simulations involve a sparse length n = 2000 coefficient vector β with 10 nonzero entries. Nonzero coefficients have uniformly random effect sizes. The entries of the design matrix X are N(0, 0.1) Gaussian random deviates. We then recover β from undersampled responses y_j following Poisson and Bernoulli distributions with canonical links. Figure 2: The left figure displays relative errors among nonzero predictors in underdetermined Poisson and logistic regression with m = 1000 cases. It is clear that LASSO suffers the most shrinkage and bias, while MM appears to outperform MCP and SCAD. The right figure displays MSE as a function of m, favoring MM most notably for logistic regression. Figure 2 compares solutions obtained using our distance penalties (MM) to those obtained under MCP, SCAD, and LASSO penalties. Relative errors (left) with m = 1000 cases clearly show that LASSO suffers the most shrinkage and bias; MM seems to outperform MCP and SCAD.
For a more detailed comparison, the right side of the figure plots mean squared error (MSE) as a function of the number of cases averaged over 50 trials. All methods significantly outperform LASSO, which is omitted for scale, with MM achieving lower MSE than competitors, most noticeably in logistic regression. As suggested by an anonymous reviewer, similar results from additional experiments for Gaussian (linear) regression with comparison to relaxed LASSO are included in the Supplement. Figure 3: (a) Sparsity constraint; (b) regularize ∥B∥_*; (c) restrict rank(B) = 2; (d) vary rank(B) = 1, …, 8. The true B_0 in the top left of each set of 9 images has rank 2. The other 8 images in (a)-(c) display solutions as ϵ varies over the set {0, 0.1, …, 0.7}. Figure (a) applies our MM algorithm with sparsity rather than rank constraints to illustrate how failing to account for matrix structure misses the true signal; Zhou and Li [37] report similar findings comparing spectral regularization to ℓ1 regularization. Figure (b) performs spectral shrinkage [37] and displays solutions under optimal λ values via BIC, while (c) uses our MM algorithm restricting rank(B) = 2. Figure (d) fixes ϵ = 0.1 and uses MM with rank(B) ∈ {1, …, 8} to illustrate robustness to rank over-specification. For underdetermined matrix regression, we compare to the spectral regularization method developed by Zhou and Li [37]. We generate their cross-shaped 32 × 32 true signal B_0 and in all trials sample m = 300 responses y_i ∼ N[tr(X_i^T B), ϵ]. Here the design tensor X is generated with standard normal entries. Figure 3 demonstrates that imposing sparsity alone fails to recover B_0 and that rank-set projections visibly outperform spectral norm shrinkage as ϵ varies. The rightmost panel also shows that our method is robust to over-specification of the rank of the true signal to an extent. We consider two real datasets.
We apply our method to count data of global temperature anomalies relative to the 1961-1990 average, collected by the Climate Research Unit [17]. We assume a nondecreasing solution, illustrating an instance of isotonic regression. Figure 4: The leftmost plot shows our isotonic fit to temperature anomaly data [17]. The right figures display the estimated coefficient matrix B on EEG alcoholism data using distance penalization, nuclear norm shrinkage [37], and LASSO shrinkage, respectively. The fitted solution displayed in Figure 4 has mean squared error 0.009, clearly obeys the isotonic constraint, and is consistent with that obtained on a previous version of the data [32]. We next focus on rank-constrained matrix regression for electroencephalography (EEG) data, collected by [35] to study the association between alcoholism and voltage patterns over times and channels. The study consists of 77 individuals with alcoholism and 45 controls, providing 122 binary responses y_i indicating whether subject i has alcoholism. The EEG measurements are contained in 256 × 64 predictor matrices X_i; the dimension n is thus greater than 16,000. Further details about the data appear in the Supplement. Previous studies apply dimension reduction [21] and propose algorithms to seek the optimal rank 1 solution [16]. These methods could not handle the size of the original data directly, and the spectral shrinkage approach proposed in [37] is the first to consider the full EEG data. Figure 4 shows that our regression solution is qualitatively similar to that obtained under nuclear norm penalization [37], revealing similar time-varying patterns among channels 20-30 and 50-60. In contrast, ignoring matrix structure and penalizing the ℓ1 norm of B yields no useful information, consistent with findings in [37].
However, our distance penalization approach achieves a lower misclassification error of 0.1475. The lowest misclassification rate reported in previous analyses is 0.139 by [16]. As their approach is strictly more restrictive than ours in seeking a rank 1 solution, we agree with [37] in concluding that the lower misclassification error can be largely attributed to benefits from data preprocessing and dimension reduction. While not visually distinguishable, we also note that shrinking the eigenvalues via nuclear norm penalization [37] fails to produce a low-rank solution on this dataset. We omit detailed timing comparisons throughout since the various methods were run across platforms and depend heavily on implementation. We note that MCP regression relies on the MM principle, and the LQA and LLA algorithms used to fit models with SCAD penalties are also instances of MM algorithms [11]. Almost all MM algorithms share an overall linear rate of convergence. While these require several seconds of compute time on a standard laptop machine, coordinate-descent implementations of LASSO outstrip our algorithm in terms of computational speed. Our MM algorithm required 31 seconds to converge on the EEG data, the largest example we considered. 5 Discussion GLM regression is one of the most widely employed tools in statistics and machine learning. Imposing constraints upon the solution is integral to parameter estimation in many settings. This paper considers GLM regression under distance-to-set penalties when seeking a constrained solution. Such penalties allow a flexible range of constraints, and are competitive with standard shrinkage methods for sparse and low-rank regression in high dimensions. The MM principle yields a reliable solution method with theoretical guarantees and strong empirical results over a number of practical examples. 
These examples emphasize promising performance under non-convex constraints, and demonstrate how distance penalization avoids the disadvantages of shrinkage approaches. Several avenues for future work may be pursued. The primary computational bottleneck we face is matrix inversion, which limits the algorithm when faced with extremely large and high-dimensional datasets. Further improvements may be possible using modifications of the algorithm tailored to specific problems, such as coordinate or block descent variants. Since the linear systems encountered in our parameter updates are well conditioned, a conjugate gradient algorithm may be preferable to direct methods of solution in such cases. The updates within our algorithm can be recast as weighted least squares minimization, and a re-examination of this classical problem may suggest even better iterative solvers. As the methods apply to a generalized objective comprised of multiple Bregman divergences, it will be fruitful to study penalties under alternate measures of distance, and settings beyond GLM regression such as quasi-likelihood estimation. While our experiments primarily compare against shrinkage approaches, an anonymous referee points us to recent work revisiting best subset selection using modern advances in mixed integer optimization [3]. These exciting developments make best subset regression possible for much larger problems than previously thought possible. As [3] focus on the linear case, it is of interest to consider how ideas in this paper may offer extensions to GLMs, and to compare the performance of such generalizations. Best subsets constitutes a gold standard for sparse estimation in the noiseless setting; whether it outperforms shrinkage methods seems to depend on the noise level and is a topic of much recent discussion [15, 23]. Finally, these studies as well as our present paper focus on estimation, and it will be fruitful to examine variable selection properties in future work.
Recent work demonstrates an inevitable trade-off between false and true positives under LASSO shrinkage in the linear sparsity regime [30]. The authors show that this need not be the case with ℓ0 methods, remarking that computationally efficient methods which also enjoy good model performance would be highly desirable, as ℓ0 and ℓ1 approaches each possess one property but not the other [30]. Our results suggest that distance penalties, together with the MM principle, seem to enjoy benefits from both worlds on a number of statistical tasks.

Acknowledgements: We would like to thank Hua Zhou for helpful discussions about matrix regression and the EEG data. JX was supported by NSF MSPRF #1606177.

References
[1] Barlow, R. E., Bartholomew, D. J., Bremner, J., and Brunk, H. D. Statistical inference under order restrictions: The theory and application of isotonic regression. Wiley, New York, 1972.
[2] Becker, M. P., Yang, I., and Lange, K. EM algorithms without missing data. Statistical Methods in Medical Research, 6:38–54, 1997.
[3] Bertsimas, D., King, A., and Mazumder, R. Best subset selection via a modern optimization lens. The Annals of Statistics, 44(2):813–852, 2016.
[4] Bregman, L. M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967.
[5] Byrne, C. and Censor, Y. Proximity function minimization using multiple Bregman projections, with applications to split feasibility and Kullback–Leibler distance minimization. Annals of Operations Research, 105(1-4):77–98, 2001.
[6] Censor, Y. and Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms, 8(2):221–239, 1994.
[7] Censor, Y., Elfving, T., Kopf, N., and Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems, 21(6):2071–2084, 2005.
[8] Chi, E.
C., Zhou, H., and Lange, K. Distance majorization and its applications. Mathematical Programming, Series A, 146(1-2):409–436, 2014.
[9] Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), pages 1–38, 1977.
[10] Fan, J. and Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[11] Fan, J. and Lv, J. A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1):101, 2010.
[12] Friedman, J., Hastie, T., and Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
[13] Gaines, B. R. and Zhou, H. Algorithms for fitting the constrained lasso. arXiv preprint arXiv:1611.01511, 2016.
[14] Golub, G. H. and Van Loan, C. F. Matrix Computations, volume 3. JHU Press, 2012.
[15] Hastie, T., Tibshirani, R., and Tibshirani, R. J. Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv preprint arXiv:1707.08692, 2017.
[16] Hung, H. and Wang, C.-C. Matrix variate logistic regression model with application to EEG data. Biostatistics, 14(1):189–202, 2013.
[17] Jones, P., Parker, D., Osborn, T., and Briffa, K. Global and hemispheric temperature anomalies – land and marine instrumental records. Trends: A Compendium of Data on Global Change, 2016.
[18] Lange, K., Hunter, D. R., and Yang, I. Optimization transfer using surrogate objective functions (with discussion). Journal of Computational and Graphical Statistics, 9:1–20, 2000.
[19] Lange, K. MM Optimization Algorithms. SIAM, 2016.
[20] Lange, K. and Keys, K. L. The proximal distance algorithm. arXiv preprint arXiv:1507.07598, 2015.
[21] Li, B., Kim, M. K., and Altman, N. On dimension folding of matrix- or array-valued statistical objects.
The Annals of Statistics, pages 1094–1121, 2010.
[22] Mairal, J. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[23] Mazumder, R., Radchenko, P., and Dedieu, A. Subset selection with shrinkage: Sparse linear modeling when the SNR is low. arXiv preprint arXiv:1708.03288, 2017.
[24] McCullagh, P. and Nelder, J. A. Generalized Linear Models, volume 37. CRC Press, 1989.
[25] Meinshausen, N. Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393, 2007.
[26] Moré, J. J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis, pages 105–116. Springer, 1978.
[27] Park, M. Y. and Hastie, T. L1-regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society: Series B (Methodological), 69(4):659–677, 2007.
[28] Polson, N. G., Scott, J. G., and Willard, B. T. Proximal algorithms in statistics and machine learning. Statistical Science, 30(4):559–581, 2015.
[29] Richard, E., Savalle, P.-A., and Vayatis, N. Estimation of simultaneously sparse and low rank matrices. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1351–1358, 2012.
[30] Su, W., Bogdan, M., and Candès, E. False discoveries occur early on the lasso path. The Annals of Statistics, 45(5), 2017.
[31] Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), pages 267–288, 1996.
[32] Wu, W. B., Woodroofe, M., and Mentz, G. Isotonic regression: Another look at the changepoint problem. Biometrika, pages 793–804, 2001.
[33] Xu, J., Chi, E. C., Yang, M., and Lange, K. A majorization-minimization algorithm for split feasibility problems. arXiv preprint arXiv:1612.05614, 2017.
[34] Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[35] Zhang, X.
L., Begleiter, H., Porjesz, B., Wang, W., and Litke, A. Event related potentials during object recognition tasks. Brain Research Bulletin, 38(6):531–538, 1995.
[36] Zhou, H., Alexander, D., and Lange, K. A quasi-Newton acceleration for high-dimensional optimization algorithms. Statistics and Computing, 21:261–273, 2011.
[37] Zhou, H. and Li, L. Regularized matrix regression. Journal of the Royal Statistical Society: Series B (Methodological), 76(2):463–483, 2014.
[38] Zou, H. and Hastie, T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Methodological), 67(2):301–320, 2005.
Learning Non-Gaussian Multi-Index Model via Second-Order Stein's Method

Zhuoran Yang* Krishna Balasubramanian* Zhaoran Wang† Han Liu†

Abstract

We consider estimating the parametric components of semiparametric multi-index models in high dimensions. To bypass the requirements of Gaussianity or elliptical symmetry of covariates in existing methods, we propose to leverage a second-order Stein's method with score function-based corrections. We prove that our estimator achieves a near-optimal statistical rate of convergence even when the score function or the response variable is heavy-tailed. To establish the key concentration results, we develop a data-driven truncation argument that may be of independent interest. We supplement our theoretical findings with simulations.

1 Introduction

We consider the semiparametric index model that relates the response Y ∈ R and the covariate X ∈ R^d as Y = f(⟨β*_1, X⟩, ..., ⟨β*_k, X⟩) + ε, where each coefficient β*_ℓ ∈ R^d (ℓ ∈ [k]) is s*-sparse and the noise term ε is zero-mean. Such a model is known as a sparse multiple index model (MIM). Given n i.i.d. observations {X_i, Y_i}_{i=1}^n of the above model with possibly d ≫ n, we aim to estimate the parametric component {β*_ℓ}_{ℓ∈[k]} when the nonparametric component f is unknown. More importantly, we do not impose the assumption that X is Gaussian, which is commonly made in the literature. Special cases of our model include phase retrieval, for which k = 1, and dimensionality reduction, for which k ≥ 1. Motivated by these applications, we make a distinction between the cases of k = 1, which is also known as the single index model (SIM), and k > 1 in the rest of the paper. Estimating the parametric component {β*_ℓ}_{ℓ∈[k]} without knowing the exact form of the link function f naturally arises in various applications.
For example, in one-bit compressed sensing [3, 39] and sparse generalized linear models [36], we are interested in recovering the underlying signal vector based on nonlinear measurements. In sufficient dimension reduction, where k is typically a fixed number greater than one but much less than d, we aim to estimate the projection onto the subspace spanned by {β*_ℓ}_{ℓ∈[k]} without knowing f. Furthermore, in deep neural networks, which are cascades of MIMs, the nonparametric component corresponds to the activation function, which is prespecified, and the goal is to estimate the linear parametric component, which is used for prediction at the test stage. Hence, it is crucial to develop estimators for the parametric component with both statistical accuracy and computational efficiency for a broad class of possibly unknown link functions.

Challenging aspects of index models: Several subtle issues arise in the optimal estimation of SIM and MIM. Specifically, most existing results depend crucially on restrictive assumptions on X and f, and fail to hold when those assumptions are relaxed. Such issues arise even in the low-dimensional setting with n ≫ d. Let us consider, for example, the case of k = 1 with the known link function f(z) = z². This corresponds to phase retrieval, a challenging inverse problem that has regained interest in the last few years along with the success of compressed sensing. A straightforward way to estimate β* is via nonlinear least squares regression [17], which is a nonconvex optimization problem. [6] propose an estimator based on convex relaxation. Although their estimator is optimal when X is sub-Gaussian, it is not agnostic to the link function, i.e., the same result does not hold if the link function is not quadratic.

*Princeton University, email: {zy6, kb18}@princeton.edu
†Tencent AI Lab & Northwestern University, email: {zhaoranwang, hanliu.cmu}@gmail.com
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1: Histogram of the score function based on 10000 independent realizations of the Gamma distribution with shape parameter 5 and scale parameter 0.2. The dark solid histogram concentrated around zero corresponds to the Gamma distribution, and the transparent histogram corresponds to the distribution of the score function of the same Gamma distribution.]

Direct optimization of the nonconvex phase retrieval problem is considered by [5] and [30], which propose statistically optimal estimators based on iterative algorithms. However, they rely on the assumption that X is Gaussian. A careful look at their proofs shows that extending them to a broader class of distributions is significantly more challenging; for example, they require sharp concentration inequalities for polynomials of degree four of X, which leads to a suboptimal statistical rate when X is sub-Gaussian. Furthermore, their results are also not agnostic to the link function. Similar observations can be made for both convex [21] and nonconvex [4] estimators for sparse phase retrieval in high dimensions. In addition, a surprising result for SIM is established in [28]. They show that when X is Gaussian, even when the link function is unknown, one can estimate β* at the optimal statistical rate with the Lasso. Unfortunately, their assumptions on the link function are rather restrictive, and rule out several interesting models including phase retrieval. Furthermore, none of the above approaches are applicable to MIM. A line of work pioneered by Ker-Chau Li [18–20] focuses on the estimation of MIM in low dimensions. We will provide a discussion of this line of work in the related work section, but it again requires restrictive assumptions on either the link function or the distribution of X.
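The phenomenon shown in Figure 1 is easy to reproduce numerically: a Gamma draw is light-tailed, but its score is not. A minimal sketch follows, assuming the score convention s₀(u) = −p₀′(u)/p₀(u), which for a Gamma(shape k, scale θ) density gives s₀(u) = 1/θ − (k−1)/u.

```python
import numpy as np

# Score of Gamma(shape=k, scale=theta):
#   log p0(u) = (k-1)*log(u) - u/theta + const
#   s0(u) = -d/du log p0(u) = 1/theta - (k-1)/u
k, theta = 5.0, 0.2
rng = np.random.default_rng(0)
u = rng.gamma(shape=k, scale=theta, size=10_000)
score = 1.0 / theta - (k - 1.0) / u

# The draws themselves stay small, but the score blows up for draws near 0,
# so its empirical distribution is far more spread out than the data's.
spread_data = u.max() - u.min()
spread_score = score.max() - score.min()
```

Plotting histograms of `u` and `score` reproduces the two histograms of Figure 1: the data concentrate near their mean while the score exhibits a long tail driven by draws close to zero.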
For example, in most cases X is assumed to be elliptically symmetric, which limits applicability. To summarize, several subtleties arise from the interplay between the assumptions on X and f in SIM and MIM. An interesting question is whether it is possible to estimate the parametric component in SIM and MIM under milder assumptions on both X and f in the high-dimensional setting. In this work, we provide a partial answer to this question. We construct estimators that work for a broad class of link functions, including the quadratic link function in phase retrieval, and for a large family of distributions of X, which are assumed to be known a priori. We particularly focus on the case where X follows a non-Gaussian distribution, which is not necessarily elliptically symmetric or sub-Gaussian, thereby making our method applicable in various situations that were not feasible previously. Our estimators are based on a second-order variant of Stein's identity for non-Gaussian random variables, which utilizes the score function of the distribution of X. As we show in Figure 1, even when the distribution of X is light-tailed, the distribution of the score function of X can be arbitrarily heavy-tailed. In order to develop consistent estimators in this context, we threshold the score function in a data-driven fashion. This enables us to obtain tight concentration bounds that lead to near-optimal statistical rates of convergence. Moreover, our results also shed light on two related problems. First, we provide an alternative interpretation of the initialization in [5] for phase retrieval. Second, our estimators are constructed based on a sparsity-constrained semidefinite programming (SDP) formulation, which is related to a similar formulation of the sparse principal component analysis (PCA) problem (see Section 4 for a detailed discussion).
A consequence of our results for SIM and MIM is a near-optimal statistical rate of convergence for sparse PCA with heavy-tailed data in the moderate sample size regime. In summary, our contributions are as follows:
• We construct estimators for the parametric component of high-dimensional SIM and MIM for a class of unknown link functions, under the assumption that the covariate distribution is non-Gaussian but known a priori.
• We establish near-optimal statistical rates for our estimators. Our results complement existing ones in the literature and hold in several cases that were previously not feasible.
• We provide numerical simulations that confirm our theoretical results.

Related work: There is a significant body of work on SIMs in the low-dimensional setting. We do not attempt to cover all of it, as we concentrate on the high-dimensional setting. The success of the Lasso and related regression estimators in high dimensions enables the exploration of high-dimensional SIMs, although this is still very much work in progress. As mentioned previously, [25, 26, 28] show that Lasso and phase retrieval estimators can also work for SIM in high dimensions, assuming the covariate is Gaussian and the link function satisfies certain properties. Very recently, [10] relax the Gaussian assumption and show that a modified Lasso-type estimator works for elliptically symmetric distributions. For the case of a monotone link function, [38] analyze a nonconvex least squares estimator under the assumption that the covariate is sub-Gaussian. However, the success of their estimator hinges on knowledge of the link function. Furthermore, [15, 23, 31, 32, 40] analyze the sliced inverse regression estimator in the high-dimensional setting, focusing primarily on support recovery and consistency properties.
The Gaussian assumption on the covariate prevents these methods from being applicable to various real-world applications involving heavy-tailed or non-symmetric covariates, for example, problems in economics [9, 12]. Furthermore, several results are established on a case-by-case basis for specific link functions. Specifically, [1, 3, 8, 39] consider one-bit compressed sensing and matrix completion, respectively, where the link function is assumed to be the sign function. Also, [4] propose nonconvex estimators for phase retrieval in high dimensions, where the link function is quadratic. This line of work, except [1], makes Gaussian assumptions on the covariate and is specialized to particular link functions. The non-asymptotic result obtained in [1] is under sub-Gaussian assumptions, but the estimator therein lacks asymptotic consistency. For MIMs, relatively less work studies the high-dimensional setting. In the low-dimensional setting, a line of work on the estimation of MIM was pioneered by Ker-Chau Li, including inverse regression [18], principal Hessian directions [19], and regression under link violation [20]. The proposed estimators are applicable for a class of unknown link functions under the assumption that the covariate follows Gaussian or symmetric elliptical distributions. Such an assumption is restrictive, as the covariate is often heavy-tailed or skewed [9, 12]. Furthermore, they concentrate only on the low-dimensional setting and establish asymptotic results. The estimation of high-dimensional MIM under the subspace sparsity assumption was previously considered in [7, 32], but also under rather restrictive distributional assumptions on the covariate.

Notation: We employ [n] to denote the set {1, ..., n}. For a vector v ∈ R^d, we denote by ‖v‖_p the ℓ_p-norm of v for any p ≥ 1. In addition, we define the support of v ∈ R^d as supp(v) = {j ∈ [d] : v_j ≠ 0}. We denote by λ_min(A) the minimum eigenvalue of a matrix A.
Moreover, we denote the elementwise ℓ_1-norm, elementwise ℓ_∞-norm, operator norm, and Frobenius norm of a matrix A ∈ R^{d1×d2} by ‖·‖_1, ‖·‖_∞, ‖·‖_op, and ‖·‖_F, respectively. We denote by vec(A) the vectorization of a matrix A, which is a vector in R^{d1·d2}. For two matrices A, B ∈ R^{d1×d2}, we denote the trace inner product by ⟨A, B⟩ = Trace(A^⊤B); note that it can be viewed as the vector inner product between vec(A) and vec(B). For a univariate function g: R → R, we denote by g∘(v) and g∘(A) the result of applying g to each element of the vector v and the matrix A, respectively. Finally, for a random variable X ∈ R with density p, we use p^{⊗d}: R^d → R to denote the joint density of X_1, ..., X_d, which are d identical copies of X.

2 Models and Assumptions

As mentioned previously, we consider the cases of k = 1 (SIM) and k > 1 (MIM) separately. We first discuss the motivation for our estimators, which highlights the assumptions on the link function as well. Recall that our estimators are based on the second-order Stein's identity. To begin with, we present the first-order Stein's identity, which motivates Lasso-type estimators for SIMs [25, 28].

Proposition 2.1 (First-Order Stein's Identity [29]). Let X ∈ R^d be a real-valued random vector with density p. We assume that p: R^d → R is differentiable. In addition, let g: R^d → R be a continuous function such that E[∇g(X)] exists. Then it holds that
E[g(X) · S(X)] = E[∇g(X)],
where S(x) = −∇p(x)/p(x) is the score function of p.

One can apply the above Stein's identity to SIMs to obtain an estimator of β*. To see this, note that when X ∼ N(0, I_d) we have S(x) = x for x ∈ R^d. In this case, since E(ε · X) = 0, we have
E(Y · X) = E[f(⟨X, β*⟩) · X] = E[f′(⟨X, β*⟩)] · β*.
Thus, one can estimate β* by estimating E(Y · X). This observation leads to the estimator proposed in [25, 28]. However, in order for the estimator to work, it is necessary to assume E[f′(⟨X, β*⟩)] ≠ 0.
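For the Gaussian case above, where S(x) = x, the first-order identity is easy to verify by Monte Carlo. The following sketch uses an illustrative cubic link of our own choosing: for z ∼ N(0,1), E[f′(z)] = E[3z²] = 3, so E[Y · X] should be approximately 3β*.

```python
import numpy as np

# Monte Carlo check of the first-order Stein identity for X ~ N(0, I_d),
# where the score is S(x) = x:  E[g(X) * X] = E[grad g(X)] = E[f'(<X,b>)] * b.
rng = np.random.default_rng(1)
d, n = 10, 200_000
beta = np.zeros(d)
beta[0] = 1.0                               # unit-norm direction
X = rng.standard_normal((n, d))
z = X @ beta
f = lambda t: t ** 3                        # illustrative link; E[f'(z)] = 3
lhs = (f(z)[:, None] * X).mean(axis=0)      # sample estimate of E[Y * X]
rhs = 3.0 * beta                            # E[f'(<X, beta>)] * beta
```

The agreement of `lhs` and `rhs` up to Monte Carlo error is exactly the identity that licenses Lasso-type estimation of β* from E(Y · X).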
Such a restriction prevents it from being applicable to some widely used cases of SIM, for example, phase retrieval, in which f is the quadratic function. This limitation of the first-order Stein's identity motivates us to examine the second-order Stein's identity, which is summarized as follows.

Proposition 2.2 (Second-Order Stein's Identity [13]). We assume the density of X is twice differentiable. We define the second-order score function T: R^d → R^{d×d} as T(x) = ∇²p(x)/p(x). For any twice differentiable function g: R^d → R such that E[∇²g(X)] exists, we have
E[g(X) · T(X)] = E[∇²g(X)].    (2.1)

Back to the phase retrieval example: when X ∼ N(0, I_d), the second-order score function is T(x) = xx^⊤ − I_d for x ∈ R^d. Setting g(x) = ⟨x, β*⟩² in (2.1), we have
E[g(X) · T(X)] = E[g(X) · (XX^⊤ − I_d)] = E[⟨X, β*⟩² · (XX^⊤ − I_d)] = 2β*β*^⊤.    (2.2)
Hence, for phase retrieval, one can extract ±β* based on the second-order Stein's identity even in situations where the first-order Stein's identity fails. In fact, (2.2) is implicitly used in [5] to provide a spectral initialization for the Wirtinger flow algorithm in the case of Gaussian phase retrieval. Here, we establish an alternative justification, based on Stein's identity, for why such an initialization works. Motivated by this key observation, we propose to employ the second-order Stein's identity to estimate the parametric component of SIM and MIM with a broad class of unknown link functions as well as non-Gaussian covariates. The precise statistical models we consider are defined as follows.

Definition 2.3 (SIM with Second-Order Link). The response Y ∈ R and the covariate X ∈ R^d are linked via
Y = f(⟨X, β*⟩) + ε,    (2.3)
where f: R → R is an unknown function, β* ∈ R^d is the parameter of interest, and ε ∈ R is exogenous noise with E(ε) = 0. We assume the entries of X are i.i.d. random variables with density p₀ and that β* is s*-sparse, i.e., β* contains only s* nonzero entries.
Moreover, since the norm of β* can be absorbed into f, we assume that ‖β*‖₂ = 1 for identifiability. Finally, we assume that f and X satisfy E[f″(⟨X, β*⟩)] > 0.

Note that in Definition 2.3 we assume, without any loss of generality, that E[f″(⟨X, β*⟩)] is positive. If E[f″(⟨X, β*⟩)] is negative, one can replace f by −f by flipping the sign of Y. In other words, we essentially only require that E[f″(⟨X, β*⟩)] is nonzero. Intuitively, this restriction on f implies that the second-order cross-moments contain information about β*. Thus, we call this type of link function a second-order link. Similarly, we define MIM with a second-order link.

Definition 2.4 (MIM with Second-Order Link). The response Y ∈ R and the covariate X ∈ R^d are linked via
Y = f(⟨X, β*_1⟩, ..., ⟨X, β*_k⟩) + ε,    (2.4)
where f: R^k → R is an unknown link function, {β*_ℓ}_{ℓ∈[k]} ⊆ R^d are the parameters of interest, and ε ∈ R is exogenous random noise satisfying E(ε) = 0. In addition, we assume that the entries of X are i.i.d. random variables with density p₀ and that {β*_ℓ}_{ℓ∈[k]} span a k-dimensional subspace of R^d.

Let B* = (β*_1, ..., β*_k) ∈ R^{d×k}. The model in (2.4) can be reformulated as Y = f(XB*) + ε. By QR factorization, we can write B* as Q*R*, where Q* ∈ R^{d×k} is an orthonormal matrix and R* ∈ R^{k×k} is invertible. Since f is unknown, R* can be absorbed into the link function. Thus, we assume that B* is orthonormal for identifiability. We further assume that B* is s*-row sparse, that is, B* contains only s* nonzero rows. Note that this definition of row sparsity does not depend on the choice of coordinate system. Finally, we assume that f and X satisfy λ_min(E[∇²f(XB*)]) > 0.

In Definition 2.4, the assumption that E[∇²f(XB*)] is positive definite is a multivariate generalization of the condition E[f″(⟨X, β*⟩)] > 0 for SIM in Definition 2.3.
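The QR-based identifiability argument in Definition 2.4 amounts to two lines of numpy: replacing B* by the orthonormal factor Q* leaves the column space, and hence the subspace to be estimated, unchanged. A small self-contained check with an arbitrary random B:

```python
import numpy as np

# Identifiability sketch: absorb the invertible factor R* into the link by
# replacing B* with the orthonormal Q* from its QR factorization; the column
# space (the estimand in Definition 2.4) is unchanged.
rng = np.random.default_rng(3)
d, k = 20, 3
B = rng.standard_normal((d, k))
Q, R = np.linalg.qr(B)            # B = Q R with Q orthonormal, R invertible
P_B = B @ np.linalg.pinv(B)       # projector onto col(B)
P_Q = Q @ Q.T                     # projector onto col(Q)
```

Since the two projectors coincide, any procedure that recovers the column space of Q* has recovered everything that is identifiable about B*.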
It essentially guarantees that estimating the projection onto the subspace spanned by {β*_ℓ}_{ℓ∈[k]} is information-theoretically feasible.

3 Estimation Method and Main Results

We now introduce our estimators and establish their statistical rates of convergence. Discussion of the optimality of the established rates and the connection to sparse PCA is deferred to §4. Recall that we focus on the case in which X has i.i.d. entries with density p₀: R → R. Hence, the joint density of X is p(x) = p₀^{⊗d}(x) = ∏_{j=1}^d p₀(x_j). For notational simplicity, let s₀(u) = −p₀′(u)/p₀(u). Then the first-order score function associated with p is S(x) = s₀∘(x); equivalently, the j-th entry of the first-order score function associated with p is given by [S(x)]_j = s₀(x_j). Moreover, the second-order score function is
T(x) = S(x)S(x)^⊤ − ∇S(x) = S(x)S(x)^⊤ − diag[s₀′∘(x)].    (3.1)

Before we present our estimator, we introduce the assumption on Y and s₀(·).

Assumption 3.1 (Bounded Moment). We assume there exists a constant M such that E_{p₀}[s₀(U)⁶] ≤ M and E(Y⁶) ≤ M. We denote σ₀² = E_{p₀}[s₀(U)²] = Var_{p₀}[s₀(U)].

The assumption that E_{p₀}[s₀(U)⁶] ≤ M allows for a broad family of distributions, including Gaussian and more heavy-tailed random variables. Furthermore, we do not require the covariate to be elliptically symmetric, as is commonly required by existing methods, which enables our estimator to be applicable to skewed covariates. As for the assumption that E(Y⁶) ≤ M, note that in the case of SIM we have E(Y⁶) ≤ C(E(ε⁶) + E[f⁶(⟨X, β*⟩)]). Thus this assumption is satisfied as long as both ε and f(⟨X, β*⟩) have bounded sixth moments. This is a mild assumption that allows for heavy-tailed responses. Now we are ready to present our estimator for the sparse SIM in Definition 2.3. Recall that by Proposition 2.2 we have
E[Y · T(X)] = C₀ · β*β*^⊤,    (3.2)
where C₀ = 2E[f″(⟨X, β*⟩)] > 0 as in Definition 2.3.
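The population identity (3.2) is what makes the spectral approach below work, and it is easy to confirm by Monte Carlo in the Gaussian quadratic-link case, where (2.2) gives E[Y · T(X)] = 2β*β*^⊤. The sketch below is illustrative only: it uses a dense β*, no noise, and no sparsity penalty.

```python
import numpy as np

# For X ~ N(0, I_d) the second-order score is T(x) = x x^T - I_d, and with
# Y = <X, beta>^2 identity (2.2) gives E[Y * T(X)] = 2 beta beta^T.
rng = np.random.default_rng(2)
d, n = 8, 100_000
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)            # unit norm, as in Definition 2.3
X = rng.standard_normal((n, d))
Y = (X @ beta) ** 2                     # noiseless phase retrieval responses

# Sample version of E[Y * (X X^T - I_d)].
M = (X * Y[:, None]).T @ X / n - Y.mean() * np.eye(d)

vals, vecs = np.linalg.eigh(M)
v = vecs[:, -1]                         # leading eigenvector, recovers +/- beta
```

The leading eigenvalue concentrates near 2 and the leading eigenvector aligns with ±β*, which is precisely the population picture that the SDP in (3.3) exploits while additionally enforcing sparsity.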
Hence, one way to estimate β* is to take the leading eigenvector of the sample version of E[Y · T(X)]. Moreover, as β* is sparse, we formulate our estimator as a semidefinite program:
maximize ⟨W, Σ̃⟩ − λ‖W‖_1 subject to 0 ⪯ W ⪯ I_d, Trace(W) = 1.    (3.3)
Here Σ̃ is an estimator of Σ* = E[Y · T(X)], defined as follows. Note that both the score T(X) and the response variable Y can be heavy-tailed. In order to obtain near-optimal estimates in the finite-sample setting, we apply a truncation technique to handle the heavy tails. Specifically, for a positive threshold parameter τ ∈ R, we define the truncated random variables
Ỹ_i = sign(Y_i) · min{|Y_i|, τ} and [T̃(X_i)]_{jk} = sign(T_{jk}(X_i)) · min{|T_{jk}(X_i)|, τ²}.    (3.4)
Then we define the robust estimator of Σ* as
Σ̃ = (1/n) ∑_{i=1}^n Ỹ_i · T̃(X_i).    (3.5)
We denote by Ŵ the solution of the convex optimization problem in (3.3), where λ is a regularization parameter to be specified later. The final estimator β̂ is defined as the leading eigenvector of Ŵ. The following theorem quantifies the statistical rate of convergence of the proposed estimator.

Theorem 3.2. Let λ = 10√(M log d / n) in (3.3) and τ = (1.5Mn / log d)^{1/6} in (3.4). Then under Assumption 3.1, we have ‖β̂ − β*‖₂ ≤ 4√(2/C₀) · s*λ with probability at least 1 − d⁻².

Now we introduce the estimator of B* for the sparse MIM in Definition 2.4. Proposition 2.2 implies that E[Y · T(X)] = B*D₀B*^⊤, where D₀ = E[∇²f(XB*)] is positive definite. Similarly to (3.3), we recover the column space of B* by solving
maximize ⟨W, Σ̃⟩ − λ‖W‖_1 subject to 0 ⪯ W ⪯ I_d, Trace(W) = k,    (3.6)
where Σ̃ is defined in (3.5), λ > 0 is a regularization parameter, and k is the number of indices, which is assumed to be known. Let Ŵ be the solution of (3.6), and let the final estimator B̂ contain the top k leading eigenvectors of Ŵ as columns. For such an estimator, we have the following theorem quantifying its statistical rate of convergence. Let ρ₀ = λ_min(E[∇²f(XB*)]).

Theorem 3.3.
Let λ = 10√(M log d / n) in (3.6) and τ = (1.5Mn / log d)^{1/6} in (3.4). Then under Assumption 3.1, with probability at least 1 − d⁻², we have
inf_{O ∈ O_k} ‖B̂ − B*O‖_F ≤ 4√(2/ρ₀) · s*λ,
where O_k ⊆ R^{k×k} is the set of all possible rotation matrices.

Minimax lower bounds for subspace estimation in MIM are established in [22]. For fixed k, Theorem 3.3 is near-optimal from a minimax point of view; the gap between the optimal rate and the above theorem is roughly a factor of √s*. We discuss this gap further in Section 4. The proofs of Theorems 3.2 and 3.3 are provided in the supplementary material.

Remark 3.4. Recall that our discussion above is under the assumption that the entries of X are i.i.d. This can be relaxed to the case of weak dependence between the covariates without any significant loss in the statistical rates presented above. We do not pursue this extension here, as we aim to convey the main message of the paper in a simpler setting.

4 Optimality and Connection to Sparse PCA

We now discuss the optimality of the results presented in §3. Throughout the discussion, we assume that k is fixed and does not grow with d and n. The estimators for SIM in (3.3) and MIM in (3.6) are closely related to the semidefinite-program-based estimator for sparse PCA [33]. Specifically, let X ∈ R^d be a random vector with E(X) = 0 and covariance Σ = E(XX^⊤), which is symmetric and positive definite. The goal of sparse PCA is to estimate the projection onto the subspace spanned by the top k eigenvectors {v*_ℓ}_{ℓ∈[k]} of Σ, under the subspace sparsity assumption specified in Definition 2.4. An estimator based on semidefinite programming is introduced in [33, 34], based on solving
maximize ⟨W, Σ̂⟩ − λ‖W‖_1 subject to 0 ⪯ W ⪯ I_d, Trace(W) = k.    (4.1)
Here Σ̂ = n⁻¹ ∑_{i=1}^n X_iX_i^⊤ is the sample covariance matrix given n i.i.d. observations {X_i}_{i=1}^n of X.
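Returning to the robust estimator in (3.4)-(3.5): the truncation is elementwise and cheap. The sketch below uses standard Gaussian entries (score s₀(u) = u, s₀′(u) = 1) and a quadratic link so that the leading eigenvector of Σ̃ should align with β*; the threshold and sample sizes are illustrative choices of ours, not the tuned values from Theorem 3.2.

```python
import numpy as np

def second_order_score(x):
    """T(x) = S(x)S(x)^T - diag(s0' o x) from (3.1); here s0(u) = u for N(0,1)."""
    return np.outer(x, x) - np.eye(x.size)

def truncated_sigma(X, Y, tau):
    """Robust estimate of Sigma* = E[Y * T(X)] via the truncation in (3.4)-(3.5)."""
    n, d = X.shape
    Yt = np.sign(Y) * np.minimum(np.abs(Y), tau)            # responses clipped at tau
    Sigma = np.zeros((d, d))
    for i in range(n):
        T = second_order_score(X[i])
        Tt = np.sign(T) * np.minimum(np.abs(T), tau ** 2)   # scores clipped at tau^2
        Sigma += Yt[i] * Tt
    return Sigma / n

rng = np.random.default_rng(4)
d, n, tau = 5, 40_000, 30.0
beta = np.zeros(d)
beta[0] = 1.0
X = rng.standard_normal((n, d))
Y = (X @ beta) ** 2 + rng.standard_normal(n)   # noisy phase retrieval responses
Sigma_t = truncated_sigma(X, Y, tau)
b_hat = np.linalg.eigh(Sigma_t)[1][:, -1]      # leading eigenvector estimates +/- beta
```

With light-tailed Gaussian data the clipping almost never binds, so Σ̃ behaves like the plain sample version; the point of (3.4) is that the very same estimator remains well behaved when Y or the score is heavy-tailed.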
Note that the main difference between the SIM estimator and the sparse PCA estimator is the use of Σ̃ in place of Σ̂. It is known that the sparse PCA problem exhibits an interesting statistical-computational tradeoff [16, 34, 35], which naturally appears in the context of SIM as well. In particular, while the optimal statistical rate for sparse PCA is O(√(s* log d / n)), the SDP-based estimator can only attain O(s*√(log d / n)) under the assumption that X is light-tailed. It is known that when n = Ω(s*² log d), one can obtain the optimal statistical rate of O(√(s* log d / n)) by a nonconvex method [37]. However, their results rely on the sharp concentration of Σ̂ around Σ in the restricted operator norm:
‖Σ̂ − Σ‖_{op,s} = sup{w^⊤(Σ̂ − Σ)w : ‖w‖₂ = 1, ‖w‖₀ ≤ s} = O(√(s log d / n)).    (4.2)
When X has heavy-tailed entries, for example with only a bounded fourth moment, it is highly unlikely that (4.2) holds.

Heavy-tailed sparse PCA: Recall that our estimators leverage a data-driven truncation argument to handle heavy-tailed distributions. Owing to the close relationship between our SIM/MIM estimators and the sparse PCA estimator, it is natural to ask whether such a truncation argument can yield a sparse PCA estimator for heavy-tailed X. Below we show that it is indeed possible to obtain a near-optimal estimator for heavy-tailed sparse PCA based on the truncation technique. For a vector v ∈ R^d, let ψ(v) be the truncation operator that acts entrywise as [ψ(v)]_j = sign(v_j) · min{|v_j|, τ} for j ∈ [d]. Then our estimator is defined as
maximize ⟨W, Σ̄⟩ − λ‖W‖_1 subject to 0 ⪯ W ⪯ I_d, Trace(W) = k,    (4.3)
where Σ̄ = n⁻¹ ∑_{i=1}^n X̄_iX̄_i^⊤ and X̄_i = ψ(X_i) for i = 1, ..., n. For this estimator, we have the following theorem under the assumption that X has heavy-tailed marginals. Let V* = (v*_1, ..., v*_k) ∈ R^{d×k}, and assume that ρ₀ = λ_k(Σ) − λ_{k+1}(Σ) > 0.

Theorem 4.1. Let Ŵ be the solution of the optimization problem in (4.3), and let V̂ ∈ R^{d×k} contain the top k leading eigenvectors of Ŵ.
Also, we set the regularization parameter in (4.3) to λ = C₁√(M log d / n) and the truncation parameter to τ = (C₂Mn / log d)^{1/4}, where C₁ and C₂ are positive constants. Moreover, we assume that V* contains only s* nonzero rows and that X satisfies E|X_j|⁴ ≤ M and E|X_i · X_j|² ≤ M. Then, with probability at least 1 − d⁻², we have
inf_{O ∈ O_k} ‖V̂ − V*O‖_F ≤ 4√(2/ρ₀) · s*λ,
where O_k ⊆ R^{k×k} is the set of all possible rotation matrices.

The proof of the above theorem is identical to that of Theorem 3.3, so we omit it. The theorem shows that with elementwise truncation, as long as X satisfies a bounded fourth moment condition, the SDP estimator for sparse PCA achieves the near-optimal statistical rate of O(s*√(log d / n)). We end this section with the following questions based on the above discussion:
1. Can we obtain the minimax-optimal statistical rate O(√(s* log d / n)) for sparse PCA in the high-sample-size regime with n = Ω(s*² log d) if X has only a bounded fourth moment?
2. Can we obtain the minimax-optimal statistical rate O(√(s* log d / n)) given n = Ω(s*² log d) when f, X, and Y satisfy the bounded moment condition in Assumption 3.1 for MIM?
The answers to both questions lie in constructing truncation-based estimators that concentrate sharply in the restricted operator norm defined in (4.2), or, more realistically, exhibit one-sided concentration bounds (see, e.g., [24] and [27] for related results and discussion). Obtaining such an estimator seems challenging for heavy-tailed sparse PCA, and it is not immediately clear whether it is even possible. We plan to report our findings on this problem in the near future.

5 Experimental Results

In this section, we evaluate the finite-sample error of the proposed estimators on simulated data. We concentrate on the case of sparse phase retrieval.
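The elementwise truncation ψ behind the heavy-tailed sparse PCA estimator in (4.3) above is a one-liner, and the truncated covariance it induces is automatically symmetric with bounded entries. A brief sketch with Student-t covariates; the degrees of freedom and the threshold are illustrative choices of ours, not values from the theorem.

```python
import numpy as np

def psi(x, tau):
    """Elementwise truncation: sign(x_j) * min(|x_j|, tau)."""
    return np.sign(x) * np.minimum(np.abs(x), tau)

rng = np.random.default_rng(5)
n, d, tau = 5_000, 10, 3.0
X = rng.standard_t(df=5, size=(n, d))   # heavy-tailed entries with finite 4th moment
Xt = psi(X, tau)
Sigma_bar = Xt.T @ Xt / n               # truncated covariance estimate, as in (4.3)
```

Every entry of `Sigma_bar` is bounded by τ² regardless of how heavy the tails of X are, which is what makes sharp concentration arguments possible where the raw sample covariance fails.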
Recall that in this case the link function is known, and existing convex and nonconvex estimators are applicable predominantly to Gaussian or light-tailed data. The question of which assumptions on the measurement vectors are necessary for (sparse) phase retrieval to work is an intriguing one [11]. In the sequel, we demonstrate that with the proposed score-based estimators one can use heavy-tailed and skewed measurements as well, which significantly extends the class of measurement vectors applicable to sparse phase retrieval. Recall that the covariate X has i.i.d. entries with distribution p_0. Throughout this section, we set p_0 to be the Gamma distribution with shape parameter 5 and scale parameter 1, or the Rayleigh distribution with scale parameter 2. The random noise ε is standard Gaussian. Moreover, we solve the optimization problems in (3.3) and (3.6) via the alternating direction method of multipliers (ADMM), which introduces a dual variable to handle the constraints and updates the primal and dual variables iteratively. For SIM, we consider the link functions f_1(u) = u², f_2(u) = |u|, and f_3(u) = 4u² + 3cos(u). Here f_1 corresponds to the phase retrieval model, and f_2 and f_3 can be viewed as robust extensions of it. Throughout the experiment we vary n and fix d = 500 and s* = 5. The support of β* is chosen uniformly at random from all subsets of [d] with cardinality s*. For each j ∈ supp(β*), we set β*_j = γ_j / √s*, where the γ_j are i.i.d. Rademacher random variables. Furthermore, we fix the regularization parameter λ = 4√(log d / n) and the threshold parameter τ = 20. In addition, we adopt the cosine distance cos∠(β̂, β*) = 1 − |⟨β̂, β*⟩| to measure the estimation error. We plot the cosine distance against the theoretical statistical rate of convergence s*√(log d / n) in Figure 2(a)-(c) for each link function, respectively. The plot is based on 100 independent trials for each n.
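A minimal sketch of this data-generating process and of the error metric (our own code and naming; the ADMM solver itself is omitted) is:

```python
import numpy as np

def make_sim_data(n, d, s, rng):
    """Synthetic single-index-model data y = f1(<x, beta*>) + eps with
    heavy-tailed Gamma(5, 1) covariates, mirroring the experimental setup.
    The support of beta* is uniform over size-s subsets, with Rademacher
    signs scaled by 1/sqrt(s) so that beta* has unit norm."""
    X = rng.gamma(shape=5.0, scale=1.0, size=(n, d))
    beta = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    beta[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    y = (X @ beta) ** 2 + rng.standard_normal(n)  # link f1(u) = u^2
    return X, y, beta

def cosine_distance(b_hat, b_star):
    """1 - |<b_hat, b_star>| after normalizing both vectors; for unit-norm
    inputs this matches the cosine distance used in the experiments."""
    inner = abs(b_hat @ b_star)
    return 1.0 - inner / (np.linalg.norm(b_hat) * np.linalg.norm(b_star))
```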
It shows that the estimation error is bounded by a linear function of s*√(log d / n), which corroborates the theory.

Figure 2: Cosine distances between the true parameter β* and the estimated parameter β̂ in the sparse SIM with link function f_1(u) = u², f_2(u) = |u|, or f_3(u) = 4u² + 3cos(u). Here we set d = 500, s* = 5, and vary n.

6 Discussion

In this work, we study estimation of the parametric component of SIM and MIM in high dimensions, under fairly general assumptions on the link function f and the response Y. Furthermore, our estimators are applicable in the non-Gaussian setting, in which X is not required to satisfy restrictive Gaussian or elliptical-symmetry assumptions. Our estimators are based on a data-driven truncation technique combined with a second-order Stein's identity. In the low-dimensional setting, [14] propose a tensor-based method for estimating the parametric component of two-layer neural networks. Their estimators are sub-optimal even when X is assumed Gaussian. An immediate application of our truncation-based estimators yields optimal results for a fairly general class of covariate distributions in the low-dimensional setting. Obtaining optimal or near-optimal results in the high-dimensional setting is of great interest for two-layer neural networks, albeit challenging. We plan to extend the results of the current paper to two-layer neural networks in high dimensions and report our findings in the near future.

References

[1] Albert Ai, Alex Lapanowski, Yaniv Plan, and Roman Vershynin. One-bit compressed sensing with non-Gaussian measurements. Linear Algebra and its Applications, 2014.
[2] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence.
Oxford University Press, 2013.
[3] Petros T Boufounos and Richard G Baraniuk. 1-bit compressive sensing. In Annual Conference on Information Sciences and Systems, pages 16–21. IEEE, 2008.
[4] T Tony Cai, Xiaodong Li, and Zongming Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.
[5] Emmanuel J Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[6] Emmanuel J Candes, Thomas Strohmer, and Vladislav Voroninski. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
[7] Xin Chen, Changliang Zou, and Dennis Cook. Coordinate-independent sparse sufficient dimension reduction and variable selection. The Annals of Statistics, 38(6):3696–3723, 2010.
[8] Mark A Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3):189–223, 2014.
[9] Jianqing Fan, Jinchi Lv, and Lei Qi. Sparse high-dimensional models in economics. Annual Review of Economics, 3(1):291–317, 2011.
[10] Larry Goldstein, Stanislav Minsker, and Xiaohan Wei. Structured signal recovery from nonlinear and heavy-tailed measurements. arXiv preprint arXiv:1609.01025, 2016.
[11] David Gross, Felix Krahmer, and Richard Kueng. A partial derandomization of phaselift using spherical designs. Journal of Fourier Analysis and Applications, 2015.
[12] Joel L Horowitz. Semiparametric and nonparametric methods in econometrics, volume 12. Springer, 2009.
[13] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Score function features for discriminative learning: Matrix and tensor framework. arXiv preprint arXiv:1412.2863, 2014.
[14] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar.
Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
[15] Bo Jiang and Jun S Liu. Variable selection for general index models via sliced inverse regression. The Annals of Statistics, 42(5):1751–1786, 2014.
[16] Robert Krauthgamer, Boaz Nadler, and Dan Vilenchik. Do semidefinite relaxations solve sparse PCA up to the information limit? The Annals of Statistics, 43(3):1300–1322, 2015.
[17] Guillaume Lecué and Shahar Mendelson. Minimax rate of convergence and the performance of empirical risk minimization in phase retrieval. Electronic Journal of Probability, 20(57):1–29, 2015.
[18] Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327, 1991.
[19] Ker-Chau Li. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. Journal of the American Statistical Association, 87(420):1025–1039, 1992.
[20] Ker-Chau Li and Naihua Duan. Regression analysis under link violation. The Annals of Statistics, 17(3):1009–1052, 1989.
[21] Xiaodong Li and Vladislav Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM Journal on Mathematical Analysis, 45(5):3019–3033, 2013.
[22] Qian Lin, Xinran Li, Dongming Huang, and Jun S Liu. On the optimality of sliced inverse regression in high dimensions. arXiv preprint arXiv:1701.06009, 2017.
[23] Qian Lin, Zhigen Zhao, and Jun S Liu. On consistency and sparsity for sliced inverse regression in high dimensions. arXiv preprint arXiv:1507.03895, 2015.
[24] Shahar Mendelson. Learning without concentration. In Conference on Learning Theory, pages 25–39, 2014.
[25] Matey Neykov, Jun S Liu, and Tianxi Cai. ℓ₁-regularized least squares for support recovery of high dimensional single index models with Gaussian designs. Journal of Machine Learning Research, 17(87):1–37, 2016.
[26] Matey Neykov, Zhaoran Wang, and Han Liu. Agnostic estimation for misspecified phase retrieval models. In Advances in Neural Information Processing Systems, pages 4089–4097, 2016.
[27] Roberto Imbuzeiro Oliveira. The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties. arXiv preprint arXiv:1312.2903, 2013.
[28] Yaniv Plan and Roman Vershynin. The generalized lasso with non-linear observations. IEEE Transactions on Information Theory, 62(3):1528–1537, 2016.
[29] Charles Stein, Persi Diaconis, Susan Holmes, and Gesine Reinert. Use of exchangeable pairs in the analysis of simulations. In Stein's Method. Institute of Mathematical Statistics, 2004.
[30] Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. arXiv preprint arXiv:1602.06664, 2016.
[31] Kean Ming Tan, Zhaoran Wang, Han Liu, and Tong Zhang. Sparse generalized eigenvalue problem: Optimal statistical rates via truncated Rayleigh flow. arXiv preprint arXiv:1604.08697, 2016.
[32] Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang, and Dennis Cook. A convex formulation for high-dimensional sparse sliced inverse regression. Manuscript, 2016.
[33] Vincent Q Vu, Juhee Cho, Jing Lei, and Karl Rohe. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In Advances in Neural Information Processing Systems, pages 2670–2678, 2013.
[34] Tengyao Wang, Quentin Berthet, and Richard J Samworth. Statistical and computational tradeoffs in estimation of sparse principal components. The Annals of Statistics, 44(5):1896–1930, 2016.
[35] Zhaoran Wang, Quanquan Gu, and Han Liu. Sharp computational-statistical phase transitions via oracle computational model. arXiv preprint arXiv:1512.08861, 2015.
[36] Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014.
[37] Zhaoran Wang, Huanran Lu, and Han Liu. Tighten after relax: Minimax-optimal sparse PCA in polynomial time. In Advances in Neural Information Processing Systems, pages 3383–3391, 2014.
[38] Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C Eldar, and Tong Zhang. Sparse nonlinear regression: Parameter estimation and asymptotic inference. International Conference on Machine Learning, 2015.
[39] Xinyang Yi, Zhaoran Wang, Constantine Caramanis, and Han Liu. Optimal linear estimation under unknown nonlinear transform. In Advances in Neural Information Processing Systems, pages 1549–1557, 2015.
[40] Lixing Zhu, Baiqi Miao, and Heng Peng. On sliced inverse regression with high-dimensional covariates. Journal of the American Statistical Association, 101(474):630–643, 2006.
Learning spatiotemporal piecewise-geodesic trajectories from longitudinal manifold-valued data

Juliette Chevallier, CMAP, École polytechnique, juliette.chevallier@polytechnique.edu
Pr Stéphane Oudard, Oncology Department, USPC, AP-HP, HEGP
Stéphanie Allassonnière, CRC, Université Paris Descartes, stephanie.allassonniere@parisdescartes.fr

Abstract

We introduce a hierarchical model which allows us to estimate a group-average piecewise-geodesic trajectory in the Riemannian space of measurements together with individual variability. This model falls into the well-defined class of mixed-effects models. The subject-specific trajectories are defined through spatial and temporal transformations of the group-average piecewise-geodesic path, component by component. Thus we can apply our model to a wide variety of situations. Due to the non-linearity of the model, we use the Stochastic Approximation Expectation-Maximization (SAEM) algorithm to estimate the model parameters. Experiments on synthetic data validate this choice. The model is then applied to the monitoring of metastatic renal cancer chemotherapy: we run estimations on RECIST scores of treated patients and estimate the time at which they escape from the treatment. Experiments highlight the role of the different parameters in the response to treatment.

1 Introduction

During the past few years, the way we treat metastatic renal cancer has profoundly changed: a new class of anti-angiogenic therapies, targeting the tumor vessels instead of the tumor cells, has emerged and drastically improved survival, by a factor of three (Escudier et al., 2016). These new drugs, however, do not cure the cancer; they only succeed in delaying the tumor growth, requiring the use of successive therapies which must be continued or interrupted at the appropriate moment according to the patient's response. This new medical practice has also created a new scientific challenge: how to choose the most efficient drug therapy.
This means that one has to properly understand how the patient reacts to the possible treatments. Actually, there are several strategies, and taking the right decision is a contested issue (Rothermundt et al., 2015, 2017). To achieve that goal, physicians have taken an interest in mathematical modeling. Mathematics has already demonstrated its efficiency and played a role in the change of stopping criteria for a given treatment (Burotto et al., 2014). However, to the best of our knowledge, there exists only one model which was designed by medical practitioners. Although very basic mathematically, it seems to show that this point of view may produce interesting results. Introduced by Stein et al. in 2008, the model performs a non-linear least-squares regression to fit an increasing and/or decreasing exponential curve. This model is still used but suffers from limitations. First, as the profiles are fitted individual by individual, independently, the model cannot explain a global dynamic. Second, the choice of exponential growth prevents the emergence of the plateau effects which are often observed in practice. This opens the way to new models which would explain both a population and each individual, with other constraints on the shape of the response.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Learning models of disease progression from such databases raises great methodological challenges. We propose here a very generic model which can be adapted to a large number of situations. For a given population, our model amounts to estimating an average trajectory in the set of measurements, together with individual variability. Then we can define continuous subject-specific trajectories in view of the population progression. Trajectories need to be registered in space and time, to allow for anatomical variability (such as different tumor sizes), different paces of progression, and different sensitivities to treatments.
The framework of mixed-effects models is well suited to deal with this hierarchical problem. Mixed-effects models for longitudinal measurements were introduced in the seminal paper of Laird and Ware (1982) and have been widely developed since then. The recent generic approach of Schiratti et al. (2015) to align patients is even more suitable. First, anatomical data are naturally modeled as points on Riemannian manifolds, while the usual mixed-effects models are defined for Euclidean data. Secondly, the model was built with the aim of granting individual temporal and spatial variability, through individual variations of a common time-line and parallel shifting of the average trajectory. However, Schiratti et al. (2015) made a strong hypothesis in building their model, as they consider that the mean evolution is a geodesic. In our targeted situation, this would mean that the cancer either goes on evolving or is always sensitive to the treatment. Unfortunately, the anti-angiogenic treatments may be inefficient, efficient, or only temporarily efficient, leading to a re-progression of the metastases. Therefore, we want to relax this assumption of the model. In this paper, we propose a generic statistical framework for the definition and estimation of spatiotemporal piecewise-geodesic trajectories from longitudinal manifold-valued data. Riemannian geometry allows us to derive a method that makes few assumptions about the data and applications dealt with. We first introduce our model in its most generic formulation and then make it explicit for RECIST (Therasse et al., 2000) score monitoring, i.e. for one-dimensional manifolds. Experimental results on those scores are given in section 4.2. The introduction of a more general model is a deliberate choice, as we expect to apply our model to the corresponding medical images.
Because of the non-linearity of the model, we have to use a stochastic version of the Expectation-Maximization algorithm (Dempster et al., 1977), namely the MCMC-SAEM algorithm, for which theoretical convergence results have been proved in Delyon et al. (1999) and Allassonnière et al. (2010), and whose numerical efficiency has been demonstrated for these types of models (Schiratti et al. (2015), MONOLIX – MOdèles NOn LInéaires à effets miXtes).

2 Mixed-effects model for piecewise-geodesically distributed data

We consider a longitudinal dataset obtained by repeated measurements of n ∈ N* individuals, where each individual i ∈ ⟦1, n⟧ is observed k_i ∈ N* times, at the time points t_i = (t_{i,j})_{1⩽j⩽k_i}, and where y_i = (y_{i,j})_{1⩽j⩽k_i} denotes the sequence of observations for this individual. We also denote by k = Σ_{i=1}^n k_i the total number of observations. We assume that each observation y_{i,j} is a point on a d-dimensional geodesically complete Riemannian manifold (M, g), so that y = (y_{i,j})_{1⩽i⩽n, 1⩽j⩽k_i} ∈ M^k. We generalize the idea of Schiratti et al. (2015) and build our model in a hierarchical way. We see our data points as samples along trajectories and suppose that each individual trajectory derives from a group-average scenario through spatiotemporal transformations. Key to our model is that the group-average trajectory is no longer assumed to be geodesic but piecewise-geodesic.

2.1 Generic piecewise-geodesic curves model

Let m ∈ N* and let t_R = (−∞ < t_R^1 < … < t_R^{m−1} < +∞) be a subdivision of R, called the breaking-up times sequence. Let M_0 be a d-dimensional geodesically complete manifold and (γ̄_0^ℓ)_{1⩽ℓ⩽m} a family of geodesics on M_0. To completely define our average trajectory, we introduce m isometries φ_0^ℓ : M_0 → M_0^ℓ := φ_0^ℓ(M_0). This defines m new geodesics on the corresponding spaces M_0^ℓ by setting γ_0^ℓ = φ_0^ℓ ∘ γ̄_0^ℓ. The isometric nature of the mappings φ_0^ℓ ensures that the manifolds M_0^ℓ remain Riemannian and that the curves γ_0^ℓ remain geodesic.
In particular, each γ_0^ℓ remains parametrizable (Gallot et al., 2004). We define the average trajectory by

∀t ∈ R,  γ_0(t) = γ_0^1(t) 1_{]−∞, t_R^1]}(t) + Σ_{ℓ=2}^{m−1} γ_0^ℓ(t) 1_{]t_R^{ℓ−1}, t_R^ℓ]}(t) + γ_0^m(t) 1_{]t_R^{m−1}, +∞[}(t).

In this framework, M_0 may be understood as a manifold-template of the geodesic components of the curve γ_0. Because of the piecewise nature of our average trajectory, constraints have to be formulated on each interval of the subdivision t_R. Following the formulation of the local existence and uniqueness theorem (Gallot et al., 2004), constraints on geodesics are generally formulated by forcing a value and a tangent vector at a given time point. However, such an approach cannot ensure that the curve γ_0 is even continuous. That is why we re-formulate these constraints in our model as boundary conditions. Let Ā = (Ā_0, …, Ā_m) ∈ (M_0)^{m+1} be a sequence, t_0 ∈ R an initial time and t_1 ∈ R a final time. We impose¹ that for all ℓ ∈ ⟦1, m−1⟧,

γ̄_0^1(t_0) = Ā_0,  γ̄_0^ℓ(t_R^ℓ) = Ā_ℓ,  γ̄_0^{ℓ+1}(t_R^ℓ) = Ā_ℓ  and  γ̄_0^m(t_1) = Ā_m.

Notably, the 2m constraints are defined step by step. In one dimension (cf. section 2.2), the geodesics can be written explicitly, and such constraints do not complicate the model much. In higher dimensions, we have to use shooting or matching methods to enforce them. In practice, the choice of the isometries φ_0^ℓ and the geodesics γ̄_0^ℓ has to be made with the aim of being "as regular as possible" (at least continuous, as said above) at the rupture points t_R^ℓ. In one dimension, for instance, we build trajectories that are continuous, not differentiable, but with a very similar slope on each side of the breaking-points. We want the individual trajectories to represent a wide variety of behaviors and to derive from the group-average path by spatiotemporal transformations. To do so, we define for each component ℓ of the piecewise-geodesic curve γ_0 a couple of transformations (φ_i^ℓ, ψ_i^ℓ).
These transformations, namely the diffeomorphic component deformations and the time component reparametrizations, characterize respectively the spatial and the temporal variability of propagation among the population. Thus, individual trajectories may be written in the form

∀t ∈ R,  γ_i(t) = γ_i^1(t) 1_{]−∞, t_{R,i}^1]}(t) + Σ_{ℓ=2}^{m−1} γ_i^ℓ(t) 1_{]t_{R,i}^{ℓ−1}, t_{R,i}^ℓ]}(t) + γ_i^m(t) 1_{]t_{R,i}^{m−1}, +∞[}(t)    (⋆)

where the functions γ_i^ℓ are obtained from γ_0^ℓ through the application of the two transformations φ_i^ℓ and ψ_i^ℓ described below. Note that, in particular, each individual possesses his own sequence of rupture times t_{R,i} = (t_{R,i}^ℓ)_{1⩽ℓ<m}. Moreover, we require the fewest constraints possible in the construction: at least continuity and control of the slopes at these breaking-up points. For compactness, we will from now on abusively write t_R^0 for t_0 and t_R^m for t_1. To allow different paces in the progression and different breaking-up times for each individual, we introduce temporal transformations ψ_i^ℓ, called time-warps, defined for the subject i ∈ ⟦1, n⟧ and the geodesic component ℓ ∈ ⟦1, m⟧ by

ψ_i^ℓ(t) = α_i^ℓ (t − t_R^{ℓ−1} − τ_i^ℓ) + t_R^{ℓ−1}.

The parameters τ_i^ℓ correspond to the time-shifts between the mean and the individual progression onsets, and the α_i^ℓ are acceleration factors that describe the pace of individuals, being faster or slower than the average. To ensure a good adjunction at the rupture points, we require the individual breaking-up times t_{R,i}^ℓ and the time-warps to satisfy ψ_i^ℓ(t_{R,i}^ℓ) = t_R^ℓ and ψ_i^ℓ(t_{R,i}^{ℓ−1}) = t_R^{ℓ−1}. Hence the subdivision t_{R,i} is constrained by the time reparametrizations, which are themselves constrained. Only the acceleration factors α_i^ℓ and the first time shift τ_i^1 are free: for all ℓ ∈ ⟦1, m⟧, the constraints rewrite step by step as

t_{R,i}^ℓ = t_R^{ℓ−1} + τ_i^ℓ + (t_R^ℓ − t_R^{ℓ−1}) / α_i^ℓ  and  τ_i^ℓ = t_{R,i}^{ℓ−1} − t_R^{ℓ−1}.
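These step-by-step constraints are straightforward to unroll numerically. The following sketch (our own code and naming) recovers an individual's breaking-up times from the population subdivision, the acceleration factors α_i^ℓ and the first time shift τ_i^1:

```python
def individual_rupture_times(t_R, alphas, tau1):
    """Unroll t_{R,i}^l = t_R^{l-1} + tau_i^l + (t_R^l - t_R^{l-1}) / alpha_i^l
    together with tau_i^l = t_{R,i}^{l-1} - t_R^{l-1}, starting from tau_i^1.

    t_R    : population times [t_0, t_R^1, ..., t_R^{m-1}, t_1] (length m + 1)
    alphas : acceleration factors [alpha_i^1, ..., alpha_i^m]
    tau1   : first time shift tau_i^1 (the only free shift)
    Returns the individual times [t_{0,i}, t_{R,i}^1, ..., t_{R,i}^m]."""
    times = [t_R[0] + tau1]  # individual onset: psi_i^1 maps it to t_0
    tau = tau1
    for l, alpha in enumerate(alphas, start=1):
        t_next = t_R[l - 1] + tau + (t_R[l] - t_R[l - 1]) / alpha
        times.append(t_next)
        tau = t_next - t_R[l]  # tau_i^{l+1}, fixed by the adjunction constraint
    return times
```

With all α_i^ℓ = 1 and τ_i^1 = 0, the individual subdivision coincides with the population one, as expected.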
Concerning the space variability, we introduce m diffeomorphic deformations φ_i^ℓ which enable the different components of the individual trajectories to vary more independently of each other. We only enforce the adjunction to be at least continuous, and therefore the diffeomorphisms φ_i^ℓ have to satisfy φ_i^ℓ ∘ γ_0^ℓ(t_R^ℓ) = φ_i^{ℓ+1} ∘ γ_0^{ℓ+1}(t_R^ℓ). Note that the mappings φ_i^ℓ no longer need to be isometric, as the individual trajectories are no longer required to be geodesic. Finally, for all i ∈ ⟦1, n⟧ and ℓ ∈ ⟦1, m⟧, we set γ_i^ℓ = φ_i^ℓ ∘ γ_0^ℓ ∘ ψ_i^ℓ and define γ_i as in (⋆). The observations y_i = (y_{i,j}) are assumed to be distributed along the curve γ_i and perturbed by an additive Gaussian noise ε_i ∼ N(0, σ²I_{k_i}):

∀(i, j) ∈ ⟦1, n⟧ × ⟦1, k_i⟧,  y_{i,j} = γ_i(t_{i,j}) + ε_{i,j}  where ε_{i,j} ∼ N(0, σ²).

¹ By defining A_ℓ = φ_0^ℓ(Ā_ℓ) for each ℓ, we can apply the constraints to γ_0^ℓ instead of γ̄_0^ℓ.

The choice of the isometries φ_0^ℓ and the diffeomorphisms φ_i^ℓ induces a large panel of piecewise-geodesic models. For example, if m = 1, φ_0 = Id, and φ_i^1 denotes the application that maps the curve γ_0 onto its parallel curve for a given non-zero tangent vector w_i, we recover the model proposed by Schiratti et al. (2015). In the following paragraph we propose another specific model, which can be used for chemotherapy monitoring for instance (see section 4.2).

2.2 Piecewise-logistic curve model

In the following, we focus on the piecewise-logistic model, which is of real interest for our target application (cf. section 4.2). We assume that m = 2 and d = 1, and we set M_0 = ]0, 1[ equipped with the logistic metric. Given three real numbers γ_0^init, γ_0^escap and γ_0^fin, we set

φ_0^1 : x ↦ (γ_0^init − γ_0^escap) x + γ_0^escap  and  φ_0^2 : x ↦ (γ_0^fin − γ_0^escap) x + γ_0^escap.

Thus, we can map M_0 onto the intervals ]γ_0^escap, γ_0^init[ and ]γ_0^escap, γ_0^fin[ respectively: if γ̄_0 denotes the sigmoid function, φ_0^1 ∘ γ̄_0 will be a logistic curve, growing from γ_0^escap to γ_0^init.
In this way, there is essentially a single breaking-up time; we denote it t_R at the population level and t_R^i at the individual level. Moreover, due to our target applications, we force the first logistic to be decreasing and the second one increasing (this condition may be relaxed). Logistics are defined on open intervals, with asymptotic constraints. As we want to formulate our constraints at non-infinite time points, as explained in the previous paragraph, we set a positive threshold ν close to zero and demand that the logistics γ_0^1 and γ_0^2 be ν-near their corresponding asymptotes. More precisely, we impose the average trajectory γ_0 to be of the form γ_0 = γ_0^1 1_{]−∞, t_R]} + γ_0^2 1_{]t_R, +∞[}, where

γ_0^1 : R → ]γ_0^escap, γ_0^init[,  γ_0^1(t) = (γ_0^init + γ_0^escap e^{at+b}) / (1 + e^{at+b}),
γ_0^2 : R → ]γ_0^escap, γ_0^fin[,  γ_0^2(t) = (γ_0^fin + γ_0^escap e^{−(ct+d)}) / (1 + e^{−(ct+d)}),

with γ_0^escap + 2ν ⩽ γ_0^init and γ_0^escap + 2ν ⩽ γ_0^fin, and where a, b, c and d are positive numbers given by the following constraints:

γ_0^1(t_0) = γ_0^init − ν,  γ_0^1(t_R) = γ_0^2(t_R) = γ_0^escap + ν  and  γ_0^2(t_1) = γ_0^fin − ν.

In our context, the initial time of the process is known: it is the beginning of the treatment. So we assume that the average initial time t_0 is equal to zero; in particular, t_0 is no longer a variable. Moreover, for each individual i ∈ ⟦1, n⟧, the time-warps write

ψ_i^1(t) = α_i^1 (t − t_0 − τ_i^1) + t_0  and  ψ_i^2(t) = α_i^2 (t − t_R − τ_i^2) + t_R,

where τ_i^2 = τ_i^1 + ((1 − α_i^1)/α_i^1)(t_R − t_0). From now on, we write τ_i for τ_i^1. In the same way as for the time-warps, the diffeomorphisms φ_i^1 and φ_i^2 are chosen to allow different amplitudes and rupture values: for each subject i ∈ ⟦1, n⟧, given two scaling factors r_i^1 and r_i^2 and a space-shift δ_i, we define

φ_i^ℓ(x) = r_i^ℓ (x − γ_0(t_R)) + γ_0(t_R) + δ_i,  ℓ ∈ {1, 2}.

Other choices are conceivable, but in the context of our target applications this one is appropriate.
Mathematically, any regular and injective function defined on ]γ_0^escap, γ_0^init[ (respectively ]γ_0^escap, γ_0^fin[) is suited.

Figure 1: Model description. (a) Diversity of individual trajectories. (b) From average to individual trajectory. Figure 1a represents a typical average trajectory and several individual ones, for different vectors P_i. The rupture times are represented by diamonds and the initial/final times by stars. Figure 1b illustrates the non-standard constraints for γ_0 and the transition from the average trajectory to an individual one: the trajectory γ_i is subject to a temporal and a spatial warp. In other "words", γ_i = φ_i^1 ∘ γ_0^1 ∘ ψ_i^1 1_{]−∞, t_R^i]} + φ_i^2 ∘ γ_0^2 ∘ ψ_i^2 1_{]t_R^i, +∞[}.

To sum up, each individual trajectory γ_i depends on the average curve γ_0 through the fixed effects z_pop = (γ_0^init, γ_0^escap, γ_0^fin, t_R, t_1) and the random effects z_i = (α_i^1, α_i^2, τ_i, r_i^1, r_i^2, δ_i). This leads to a non-linear mixed-effects model. More precisely, for all (i, j) ∈ ⟦1, n⟧ × ⟦1, k_i⟧,

y_{i,j} = [r_i^1 (γ_i^1(t_{i,j}) − γ_0(t_R)) + γ_0(t_R) + δ_i] 1_{]−∞, t_R^i]}(t_{i,j}) + [r_i^2 (γ_i^2(t_{i,j}) − γ_0(t_R)) + γ_0(t_R) + δ_i] 1_{]t_R^i, +∞[}(t_{i,j}) + ε_{i,j},

where γ_i^1 = γ_0^1 ∘ ψ_i^1, γ_i^2 = γ_0^2 ∘ ψ_i^2 and t_R^i = t_0 + τ_i^1 + (t_R − t_0)/α_i^1. Figure 1 provides an illustration of the model. On each subfigure, the bold black curve represents the average trajectory γ_0 and the colour curves several individual trajectories. The acceleration and scaling parameters have to be positive and equal to one on average, while the time and space shifts can be of any sign and must be zero on average. For these reasons, we set α_i^ℓ = e^{ξ_i^ℓ} and r_i^ℓ = e^{ρ_i^ℓ} for ℓ ∈ {1, 2}, leading to P_i = (ξ_i^1, ξ_i^2, τ_i, ρ_i^1, ρ_i^2, δ_i)^⊤. We assume that P_i ∼ N(0, Σ), where Σ ∈ S_p(R), p = 6. This assumption is important in view of the applications.
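For this one-dimensional model, the boundary conditions of section 2.2 determine a, b, c and d in closed form, so the average trajectory γ_0 can be evaluated directly. A minimal sketch (our own code and naming; the closed-form expressions follow from solving the three ν-proximity constraints):

```python
import numpy as np

def average_trajectory(g_init, g_escap, g_fin, t0, tR, t1, nu):
    """Build gamma_0, equal to the decreasing logistic on ]-inf, tR] and to
    the increasing one on ]tR, +inf[, nu-close to the asymptotes at t0, tR, t1."""
    # gamma_0^1(t0) = g_init - nu and gamma_0^1(tR) = g_escap + nu force
    # a*t0 + b = -L1 and a*tR + b = +L1 with L1 = log((g_init - g_escap - nu)/nu).
    L1 = np.log((g_init - g_escap - nu) / nu)
    a = 2.0 * L1 / (tR - t0)
    b = L1 - a * tR
    # Symmetric reasoning on the increasing branch between tR and t1.
    L2 = np.log((g_fin - g_escap - nu) / nu)
    c = 2.0 * L2 / (t1 - tR)
    d = -L2 - c * tR

    def gamma0(t):
        t = np.asarray(t, dtype=float)
        e1 = np.exp(a * t + b)
        e2 = np.exp(-(c * t + d))
        g1 = (g_init + g_escap * e1) / (1.0 + e1)   # decreasing component
        g2 = (g_fin + g_escap * e2) / (1.0 + e2)    # increasing component
        return np.where(t <= tR, g1, g2)

    return gamma0
```

By construction the two components agree at t_R (both equal γ_0^escap + ν), so γ_0 is continuous at the rupture time.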
Usually, the random effects are studied independently. Here, we are interested in correlations between the two phases of the patient's response to treatment (see section 4.2).

3 Parameter estimation with the MCMC-SAEM algorithm

In this section, we explain how to use a stochastic version of the EM algorithm to produce maximum a posteriori estimates of the parameters.

3.1 Statistical analysis of the piecewise-logistic curve model

We want to estimate (z_pop, Σ, σ). The theoretical convergence of the EM algorithm, and a fortiori of the SAEM algorithm (Delyon et al., 1999), is proved only if the model belongs to the curved exponential family. Moreover, this framework is important for numerical performance. Without further hypotheses, the piecewise-logistic model does not satisfy this constraint. We proceed as in Kuhn and Lavielle (2005): we assume that z_pop is the realization of independent Gaussian random variables with fixed small variances and estimate the means of those variables. So, the parameters we want to estimate are from now on θ = (γ_0^init, γ_0^escap, γ_0^fin, t_R, t_1, Σ, σ). The fixed and random effects z = (z_pop, (z_i)_{1⩽i⩽n}) are considered as latent variables. Our model writes in a hierarchical way as

y | z, θ ∼ ⊗_{i=1}^n ⊗_{j=1}^{k_i} N(γ_i(t_{i,j}), σ²),
z | θ ∼ N(γ_0^init, σ_init²) ⊗ N(γ_0^escap, σ_escap²) ⊗ N(γ_0^fin, σ_fin²) ⊗ N(t_R, σ_R²) ⊗ N(t_1, σ_1²) ⊗ (⊗_{i=1}^n N(0, Σ)),

where σ_init, σ_escap, σ_fin, σ_R and σ_1 are hyperparameters of the model. The product measures ⊗ mean that the corresponding entries are considered independent in our model. Of course, this is not the case for the observations, which are obtained by repeated measurements on the same individuals, but this assumption leads to a more computationally tractable algorithm. In this context, the EM algorithm is very efficient for computing the maximum likelihood estimate of θ.
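To give an idea of how the MCMC-SAEM loop mentioned in the introduction operates, here is a deliberately simplified toy version of one such iteration (entirely our own sketch on a one-parameter linear-Gaussian model, much simpler than the piecewise-logistic model above): latent variables are simulated by random-walk Metropolis, a stochastic approximation accumulates the sufficient statistic, and the maximization step is closed-form.

```python
import math
import random

def mcmc_saem_toy(y, n_iter=2000, step=1.0, seed=0):
    """Toy MCMC-SAEM for y_i = z_i + eps_i with z_i ~ N(mu, 1), eps_i ~ N(0, 1).
    Estimates mu; the sufficient statistic is S = mean(z)."""
    rng = random.Random(seed)
    n = len(y)
    z = list(y)        # initialize latent variables at the observations
    mu, S = 0.0, 0.0
    for k in range(1, n_iter + 1):
        # -- Simulation: one random-walk Metropolis sweep over the z_i.
        for i in range(n):
            prop = z[i] + step * rng.gauss(0.0, 1.0)

            def log_target(v):
                # log of N(v; mu, 1) * N(y_i; v, 1), up to a constant
                return -0.5 * (v - mu) ** 2 - 0.5 * (y[i] - v) ** 2

            if math.log(rng.random()) < log_target(prop) - log_target(z[i]):
                z[i] = prop
        # -- Stochastic approximation of the sufficient statistic.
        eps_k = 1.0 / k                      # decreasing step sizes
        S += eps_k * (sum(z) / n - S)
        # -- Maximization: closed-form parameter update.
        mu = S
    return mu
```

On this toy model the fixed point of the update is the empirical mean of the observations, which is also the maximum likelihood estimate of mu.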
Due to the non-linearity of our model, a stochastic version of the EM algorithm is adopted, namely the Stochastic Approximation Expectation-Maximization (SAEM) algorithm. As the conditional distribution q(z|y, θ) is unknown, the Expectation step is replaced by a Monte-Carlo Markov Chain (MCMC) sampling step, leading to the MCMC-SAEM algorithm introduced in Kuhn and Lavielle (2005) and Allassonnière et al. (2010). It alternates between a simulation step, a stochastic approximation step and a maximization step until convergence. The simulation step is achieved using a symmetric random-walk Metropolis-Hastings within Gibbs sampler (Robert and Casella, 1999). See the supplementary material for algorithmic details. To ensure the existence of the maximum a posteriori (theorem 1), we use a "partial" Bayesian formalism, i.e. we assume the prior

(Σ, σ) ∼ W⁻¹(V, m_Σ) ⊗ W⁻¹(v, m_σ),

where V ∈ S_p(R), v, m_Σ, m_σ ∈ R, and W⁻¹(V, m_Σ) denotes the inverse Wishart distribution with scale matrix V and m_Σ degrees of freedom. In order for the inverse Wishart to be non-degenerate, the degrees of freedom m_Σ and m_σ must satisfy m_Σ > 2p and m_σ > 2. In practice, we nevertheless use degenerate priors, which still lead to proper posteriors. To be consistent with the one-dimensional inverse Wishart distribution, we define the density of the higher-dimensional distribution as

f_{W⁻¹(V, m_Σ)}(Σ) = (1 / Γ_p(m_Σ/2)) · ( √|V| / (2^{p/2} √|Σ|) · exp(−(1/2) tr(V Σ⁻¹)) )^{m_Σ},

where Γ_p is the multivariate gamma function. The maximization step is straightforward given the sufficient statistics of our exponential model: we update the parameters by taking a barycenter between the corresponding sufficient statistic and the prior. See the supplementary material for explicit equations.

3.2 Existence of the Maximum a Posteriori

The next theorem ensures that the model is well-posed and that the maximum we are looking for through the MCMC-SAEM algorithm exists.
Let Θ be the space of admissible parameters:

Θ = { (γ_0^init, γ_0^escap, γ_0^fin, t_R, t_1, Σ, σ) ∈ R⁵ × S_p(R) × R₊ : Σ positive-definite }.

Theorem 1 (Existence of the MAP). Given the piecewise-logistic model and the choice of probability distributions for the parameters and latent variables of the model, for any dataset (t_{i,j}, y_{i,j})_{i∈⟦1,n⟧, j∈⟦1,k_i⟧}, there exists

θ̂_MAP ∈ argmax_{θ∈Θ} q(θ|y).

A detailed proof is postponed to the supplementary material.

4 Experimental results

The piecewise-logistic model has been designed for chemotherapy monitoring. More specifically, we met radiologists of the Hôpital Européen Georges-Pompidou (HEGP – Georges Pompidou European Hospital) to design our model. In practice, patients suffer from metastatic kidney cancer and take a drug each day. Regularly, they come to the HEGP to check the tumor evolution. The response to a given treatment generally has two distinct phases: first, the tumor size reduces; then, the tumor grows again. A practical question is to quantify the correlation between the two phases and to determine as accurately as possible the individual rupture times t_R^i, which are related to an escape of the patient's response to treatment.

4.1 Synthetic data

In order to validate our model and numerical scheme, we first run experiments on synthetic data. The covariance matrix Σ carries a lot of information about the health status of a patient: pace and amplitude of tumor progression, individual rupture times... Therefore, we pay special attention to the estimation of Σ in this paragraph. An important point was to allow many different individual behaviors. In our synthetic example, Figure 1a illustrates this variability.
From a single average trajectory (γ0, in bold plain line), we can generate individuals who are cured at the end (dot-dashed lines: γ3 and γ4), some whose response to the treatment is poor (dashed lines: γ5 and γ6), and some who only escape, i.e. show no positive response to the treatment (dotted lines: γ7). Likewise, we can generate "patients" with only positive responses or no response at all. The case of individual 4 is interesting in practice: the tumor still grows, but so slowly that the growth is negligible, at least in the short run. Figure 2 illustrates the qualitative performance of the estimation. We are notably able to capture various behaviors and to fit subjects that are far from the average path, such as the orange and green curves. We represent only five individuals, but 200 subjects were used to perform the estimation. To measure the influence of the sample size on our model and algorithm, we generate synthetic datasets of various sizes and perform the estimation 50 times for each dataset.

Figure 2: Initialisation (a) and "results" after 600 iterations (b); both panels plot the RECIST score (dimensionless) against time (in days). On both figures, the estimated trajectories are in plain lines and the target curves in dashed lines. The (noisy) observations are represented by crosses. The average path is the bold black line, the individuals are in color. Figure 2a: the population parameters z_pop and the latent variables are initialized at the empirical mean of the observations; individual trajectories are initialized on the average trajectory (P = 0, Σ = 0.1 I_p, σ = 1). Figure 2b: after 600 iterations, sometimes fewer, the estimated curves fit the observations very well. As the algorithm is stochastic, the estimated curves – and effectively the individuals – still oscillate around the target curves.

Means and standard deviations
of the relative errors for the real parameters, namely γ0^init, γ0^escap, γ0^fin, t_R, t_1 and σ, are compiled in Table 1. To ensure a fair comparison, we generated a dataset of size 200 and truncated it to the desired sizes. Moreover, to put the algorithm in a more realistic situation, the synthetic individual measurement times are non-periodically spaced, the number of observations per individual varies between 12 and 18, and the observed values are noisy (σ = 3).

Table 1: Mean (standard deviation) of the relative error (expressed as a percentage) for the population parameters z_pop and the residual standard deviation σ, over 50 runs, according to the sample size n.

Sample size n | γ0^init     | γ0^escap     | γ0^fin      | t_R          | t_1         | σ
50            | 1.63 (1.46) | 9.45 (5.40)  | 6.23 (2.25) | 11.58 (1.64) | 4.41 (0.75) | 25.24 (12.84)
100           | 2.42 (1.50) | 9.07 (5.19)  | 7.82 (2.43) | 13.62 (1.31) | 5.27 (0.60) | 10.35 (3.96)
150           | 2.14 (1.17) | 11.40 (5.72) | 5.82 (2.55) | 9.24 (1.63)  | 3.42 (0.71) | 2.83 (2.31)

We remark that our algorithm is stable and that the larger the sample size, the better we learn the residual standard deviation σ. The parameters t_R and γ0^escap are quite difficult to learn as they act on the flat section of the trajectory. However, the resulting error is not crippling, since what matters most to clinicians is the dynamics along the two phases. As the algorithm estimates both the mean trajectory and the individual dynamics, it succeeds in capturing the inter-individual variability. This results in a good estimate of the covariance matrix Σ (see Figure 4).

4.2 Chemotherapy monitoring: RECIST score of treated patients

We now run our estimation algorithm on real data from the HEGP. The RECIST (Response Evaluation Criteria In Solid Tumors) score (Therasse et al., 2000) measures tumoral growth and is a key indicator of patient survival. We performed the estimation over a cohort of 176 patients of the HEGP. There is an average of 7 visits per subject (min: 3, max: 22), with an average duration of 90 days between consecutive visits.
We ran the algorithm several times, with different proposal laws for the sampler (a symmetric random-walk Hastings-Metropolis-within-Gibbs one) and different priors. We present here a run with a low residual standard deviation with respect to the amplitude of the trajectories and the complexity of the dataset: σ = 14.50, versus max(γ0^init, γ0^fin) − γ0^escap = 452.4. Figure 3a illustrates the performance of the model on the first eight patients. Although we cannot explain all the progression paths, the algorithm succeeds in fitting various types of curves: from the yellow curve γ3, which is rather flat and only escapes, to the red γ7, which is spiky. From Figure 3b, it seems that the rupture times occur early in the progression on average. Nevertheless, this result is to be considered with some reserve: the rupture time generally occurs during a stable phase of the disease, and its estimation may be difficult.

Figure 3: RECIST score. Panel (a): estimated trajectories γ0–γ8 after 600 iterations, RECIST score (dimensionless) against time (in days); panel (b): histogram of the individual rupture times t_R^i (in days). We keep the conventions of the previous figures. Figure 3a is the result of a 600-iteration run; we represent only the first 8 patients among the 176. Figure 3b is the histogram of the rupture times t_R^i for this run. The estimated average rupture time t_R (bold black line) is a good estimate of the average of the individual rupture times, although there exists a large range of escapes.

[Figure 4, panel (a): the time warp – acceleration factors ξ1 (slow ↔ fast response) and ξ2 (slow ↔ fast progress) against the time shift τ (early ↔ late onset); panel (b): the space warp – amplitude factors ρ1 and ρ2 (low ↔ high step) against the space shift δ (low ↔ high score), colored by the individual rupture times t_R^i (in days).]
Figure 4: Individual random effects. Figure 4a: log-acceleration factors ξ1_i and ξ2_i against the time shifts τ_i. Figure 4b: log-amplitude factors ρ1_i and ρ2_i against the space shifts δ_i. In both figures, the color corresponds to the individual rupture time t_R^i. These estimates hold for the same run as Figure 3.

In Figure 4, we plot the individual estimates of the random effects (obtained from the last iteration) against the individual rupture times. Even though the parameters that govern the space warp, i.e. ρ1_i, ρ2_i and δ_i, are correlated with each other, their correlation with the rupture time is not clear. In other words, the volume of the tumors does not seem relevant to evaluate the escape of a patient. Conversely, and logically, the time warp strongly impacts the rupture time.

4.3 Discussion and perspective

We have proposed a generic spatiotemporal model to analyze longitudinal manifold-valued measurements. Contrary to Schiratti et al. (2015), the average trajectory is no longer assumed to be geodesic. This allows us to apply our model to more complex situations: in chemotherapy monitoring, for example, the patients are treated and tumors may respond, stabilize or progress during the treatment, with a different behavior in each phase. In the age of personalized medicine, providing physicians with decision-support systems is very important; learning the correlations between the two phases is therefore crucial, and it has been taken into account here. With a view to working with more complex data, medical images for instance, we first presented our model in a very generic form. We then made it explicit for RECIST score monitoring and performed experiments on data from the HEGP. However, we have studied that dataset as if all patients behaved similarly, which is not true in practice. We believe that a possible improvement of our model is to embed it in a mixture model. Lastly, the SAEM algorithm is very sensitive to initial conditions.
This phenomenon is emphasized by the non-independence of the individual variables: indeed, the average trajectory γ0 is not exactly the trajectory of the average parameters. Fortunately, as the sample size n increases, γ0 draws closer to the trajectory of the average parameters.

Acknowledgments

This work was supported by a public grant as part of the Investissement d'avenir, project reference ANR-11-LABX-0056-LMH. Work performed as part of a project funded by the Fondation de la Recherche Médicale, grant number DBI20131228564.

References

Stéphanie Allassonnière, Estelle Kuhn, and Alain Trouvé. Construction of Bayesian deformable models via a stochastic approximation algorithm: A convergence study. Bernoulli, 16(3):641–678, 2010.

Mauricio Burotto, Julia Wilkerson, Wilfred Stein, Robert Motzer, Susan Bates, and Tito Fojo. Continuing a cancer treatment despite tumor growth may be valuable: Sunitinib in renal cell carcinoma as example. PLoS ONE, 9(5):e96316, 2014.

Bernard Delyon, Marc Lavielle, and Eric Moulines. Convergence of a stochastic approximation version of the EM algorithm. The Annals of Statistics, 27(1):94–128, 1999.

Arthur Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1–38, 1977.

Bernard Escudier, Camillo Porta, Mélanie Schmidinger, Nathalie Rioux-Leclercq, Axel Bex, Vincent S. Khoo, Viktor Gruenvald, and Alan Horwich. Renal cell carcinoma: ESMO clinical practice guidelines for diagnosis, treatment and follow-up. Annals of Oncology, 27(suppl 5):v58–v68, 2016.

Sylvestre Gallot, Dominique Hulin, and Jacques Lafontaine. Riemannian Geometry. Universitext.
Springer-Verlag Berlin Heidelberg, 3rd edition, 2004.

Estelle Kuhn and Marc Lavielle. Maximum likelihood estimation in nonlinear mixed effects models. Computational Statistics & Data Analysis, 49(4):1020–1038, 2005.

Nan M. Laird and James H. Ware. Random-effects models for longitudinal data. Biometrics, 38(4):963–974, 1982.

Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer-Verlag New York, 1999.

Christian Rothermundt, Alexandra Bailey, Linda Cerbone, Tim Eisen, Bernard Escudier, Silke Gillessen, Viktor Grünwald, James Larkin, David McDermott, Jan Oldenburg, Camillo Porta, Brian Rini, Manuela Schmidinger, Cora N. Sternberg, and Paul M. Putora. Algorithms in the first-line treatment of metastatic clear cell renal cell carcinoma – analysis using diagnostic nodes. The Oncologist, 20(9):1028–1035, 2015.

Christian Rothermundt, Joscha Von Rappard, Tim Eisen, Bernard Escudier, Viktor Grünwald, James Larkin, David McDermott, Jan Oldenburg, Camillo Porta, Brian Rini, Manuela Schmidinger, Cora N. Sternberg, and Paul M. Putora. Second-line treatment for metastatic clear cell renal cell cancer: experts' consensus algorithms. World Journal of Urology, 35(4):641–648, 2017.

Jean-Baptiste Schiratti, Stéphanie Allassonnière, Olivier Colliot, and Stanley Durrleman. Learning spatiotemporal trajectories from manifold-valued longitudinal data. In Advances in Neural Information Processing Systems 28, 2015.

Wilfred D. Stein, William Doug Figg, William Dahut, Aryeh D. Stein, Moshe B. Hoshen, Doug Price, Susan E. Bates, and Tito Fojo. Tumor growth rates derived from data for patients in a clinical trial correlate strongly with patient survival: A novel strategy for evaluation of clinical trial data. The Oncologist, 13(10):1046–1054, 2008.

Patrick Therasse, Susan G. Arbuck, Elizabeth A. Eisenhauer, Jantien Wanders, Richard S.
Kaplan, Larry Rubinstein, Jaap Verweij, Martine Van Glabbeke, Allan T. van Oosterom, Michaele C. Christian, and Steve G. Gwyther. New guidelines to evaluate the response to treatment in solid tumors. Journal of the National Cancer Institute, 92(3):205–216, 2000.
SVD-Softmax: Fast Softmax Approximation on Large Vocabulary Neural Networks

Kyuhong Shim, Minjae Lee, Iksoo Choi, Yoonho Boo, Wonyong Sung
Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea
skhu20@snu.ac.kr, {mjlee, ischoi, yhboo}@dsp.snu.ac.kr, wysung@snu.ac.kr

Abstract

We propose a fast approximation method for the softmax function with a very large vocabulary using singular value decomposition (SVD). SVD-softmax targets fast and accurate probability estimation of the most probable words during inference of neural network language models. The proposed method transforms the weight matrix used in the calculation of the output vector by using SVD. The approximate probability of each word can be estimated with only a small part of the weight matrix by using a few large singular values and the corresponding elements for most of the words. We applied the technique to language modeling and neural machine translation and present guidelines for a good approximation. The algorithm requires only approximately 20% of the arithmetic operations for an 800K-vocabulary case and shows more than a three-fold speedup on a GPU.

1 Introduction

Neural networks have shown impressive results for language modeling [1–3]. Neural network-based language models (LMs) estimate the likelihood of a word sequence by predicting the next word w_{t+1} from the previous words w_{1:t}. Word probabilities at every step are obtained by a matrix multiplication and a softmax function. Likelihood evaluation by an LM is necessary for various tasks, such as speech recognition [4, 5], machine translation, or natural language parsing and tagging. However, executing an LM with a large vocabulary is computationally challenging because of the softmax normalization. Softmax computation needs to access every word to compute the normalization factor Z, where softmax(z_k) = exp(z_k) / Σ_{i=1}^V exp(z_i) = exp(z_k)/Z, and V denotes the vocabulary size of the dataset.
We refer to the conventional softmax algorithm as the "full-softmax." The computational requirement of the softmax function frequently dominates the complexity of neural network LMs. For example, a Long Short-Term Memory (LSTM) [6] RNN with four layers of 2K hidden units requires roughly 128M multiply-add operations for one inference. If the LM supports an 800K vocabulary, the evaluation of the output probability with softmax normalization alone demands approximately 1,600M multiply-add operations, far exceeding that of the RNN core itself. Although we must compute the output vector for all words to evaluate the denominator of the softmax function, few applications require the probability of every word. For example, if an LM is used for rescoring purposes, as in [7], only the probabilities of one or a few given words are needed. Further, applications employing beam search usually require only the most probable top-5 or top-10 values. In speech recognition, since many states need to be pruned for efficient implementation, it is not necessary to consider the probabilities of all the words. Thus, we formulate our goal: to obtain accurate top-K word probabilities with considerably less computation for LM evaluation, where the K considered ranges from 1 to 500.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper, we present a fast softmax approximation for LMs, which does not involve alternative neural network architectures or additional loss terms during training. Our method can be directly applied to a full-softmax layer, regardless of how it was trained. It differs from methods proposed in other papers in that it aims to reduce the evaluation complexity, not to minimize the training time or to improve the performance. The proposed technique is based on singular value decomposition (SVD) [8] of the softmax weight matrix.
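The operation counts quoted above can be reproduced with simple arithmetic. The sketch below assumes one multiply-add per weight and an input of size D for each of the four LSTM gates, which lands close to the paper's rough figures:

```python
D = 2048        # hidden units per LSTM layer ("2K")
L = 4           # number of LSTM layers
V = 800_000     # vocabulary size ("800K")

# One LSTM layer: 4 gates, each a (D_input + D_hidden) x D matrix-vector product.
lstm_ops = L * 4 * (D + D) * D      # 134,217,728 -- "roughly 128M" multiply-adds
softmax_ops = V * D                 # 1,638,400,000 -- "approximately 1,600M"
ratio = softmax_ops / lstm_ops      # the softmax dominates by over an order of magnitude
```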
Experimental results show that the proposed algorithm provides both fast and accurate evaluation of the most probable top-K word probabilities. The contributions of this paper are as follows.

• We propose SVD-softmax, a fast and accurate softmax approximation for calculating the top-K word probabilities.
• We provide a quantitative analysis of SVD-softmax on three different datasets and two different tasks.
• We show through experimental results that the normalization term of softmax can be approximated fairly accurately by computing only a fraction of the full weight matrix.

This paper is organized as follows. In Section 2, we review related studies and compare them with ours. We introduce SVD-softmax in Section 3. In Section 4, we provide experimental results. In Section 5, we discuss further details of the proposed algorithm. Section 6 concludes the paper.

2 Related work

Many methods have been developed to reduce the computational burden of the softmax function. The most successful approaches include sampling-based softmax approximations, hierarchical softmax architectures, and self-normalization techniques. Some of these support very efficient training. However, all the methods listed below must search the entire vocabulary to find the top-K words.

Sampling-based approximations choose a small subset of possible outputs and train only with those. Importance sampling (IS) [9], noise contrastive estimation (NCE) [10], negative sampling (NEG) [11], and BlackOut [12] belong to this category. These approximations train the network to increase the probabilities of positive samples, which are usually the labels, and to decrease the probabilities of negative samples, which are randomly sampled. These strategies are beneficial for increasing the training speed; however, they do not speed up evaluation.

Hierarchical softmax (HS) unifies the softmax function and output vector computation by constructing a tree structure over the words.
Binary HS [13, 14] uses a binary tree structure, which is log(V) in depth. However, the binary representation depends heavily on each word's position, and therefore two-layer [2] and three-layer [15] hierarchies have also been introduced. In particular, the study in [15] arranged several clustered words in a "short-list," where the outputs of the second-level hierarchy were the words themselves, not the classes of the third level. Adaptive softmax [16] extends this idea and allocates the short-list to the first layer of a two-layer hierarchy, achieving both a training-time speedup and a performance gain. HS approaches are advantageous for quickly obtaining the probability of a certain word or of predetermined words. However, HS must also visit every word to find the most likely ones, in which case the merit of the tree structure is lost.

Self-normalization approaches [17, 18] employ an additional training loss term that drives the normalization factor Z close to 1. The evaluation of selected words can then be significantly faster than with full-softmax, provided the denominator is trained well. However, the method cannot ensure that the denominator is always close to 1, and it must also consider every word for top-K estimation.

Differentiated softmax (D-softmax) [19] restricts the effective parameters, using a fraction of the full output matrix. The matrix allocates a higher-dimensional representation to frequent words and only a lower-dimensional vector to rare words. From this point of view, our method and D-softmax have in common that the vector length used in the output computation varies among words. However, in D-softmax the length of each portion is determined somewhat heuristically and requires specific training procedures.
The word representations learned by D-softmax are restricted from the start, and may therefore lack expressiveness. In contrast, our algorithm first trains words with full-length vectors and dynamically limits the dimension during evaluation. In SVD-softmax, the importance of each word is also determined dynamically during inference.

Figure 1: Illustration of the proposed SVD-softmax algorithm. (a) Base weight matrix A of size |V| × |D|; (b) after SVD, A is decomposed into UΣ and V^T; (c) only a preview window of |W| columns is used to compute the preview outputs; (d) |N| selected rows, chosen by sorting the preview outputs, are recomputed at full width. For simplicity, the bias vector is omitted.

3 SVD-softmax

The softmax function transforms a D-dimensional real-valued vector h into a V-dimensional probability distribution. The probability calculation consists of two stages. First, we obtain the output vector z of size V from h by matrix multiplication:

z = Ah + b    (1)

where A ∈ R^{V×D} is a weight matrix, h ∈ R^D is an input vector, b ∈ R^V is a bias vector, and z ∈ R^V is the computed output vector. Second, we normalize the output vector to compute the probability y_k of each word:

y_k = softmax(z_k) = exp(A_k h + b_k) / Σ_{i=1}^V exp(A_i h + b_i) = exp(z_k) / Σ_{i=1}^V exp(z_i) = exp(z_k) / Z    (2)

The computational complexity of calculating the probability distribution over all classes and over only one class is the same, because the normalization factor Z requires every element of the output vector to be computed.

3.1 Singular value decomposition

SVD is a factorization method that decomposes a matrix into two unitary matrices U, V with singular vectors in their columns, and one diagonal matrix Σ with non-negative real singular values in descending order. SVD is applied to the weight matrix A as

A = UΣV^T    (3)

where U ∈ R^{V×D}, Σ ∈ R^{D×D}, and V ∈ R^{D×D}.
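The decomposition and the column ordering of B = UΣ used below can be checked numerically; a random matrix stands in for a trained weight matrix here:

```python
import numpy as np

rng = np.random.default_rng(0)
V_size, D = 1000, 64
A = rng.standard_normal((V_size, D))       # stand-in for a trained softmax weight matrix

# Thin SVD: A = U @ diag(s) @ Vt, with singular values s in descending order
U, s, Vt = np.linalg.svd(A, full_matrices=False)
B = U * s                                  # B = U @ diag(s), kept as a single matrix

assert np.allclose(B @ Vt, A)              # exact reconstruction (up to float error)

# Each column of B has norm equal to its singular value, so the leftmost
# columns statistically carry the largest-magnitude entries.
col_norms = np.linalg.norm(B, axis=0)
```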
We multiply U and Σ to factorize the original matrix into two parts: UΣ and V^T. Note that the U × Σ multiplication adds negligible evaluation time because the result can be kept as a single matrix. Larger singular values of Σ multiply the leftmost columns of U. As a result, the elements of the matrix B (= UΣ) are statistically arranged in descending order of magnitude from the first column to the last; the leftmost columns of B are more influential than the rightmost ones.

Algorithm 1 The proposed SVD-softmax.
1: input: trained weight matrix A, input vector h, bias vector b
2: hyperparameters: width of preview window W, number of full-view vectors N
3: initialize: decompose A = UΣV^T, set B = UΣ
4: h̃ = V^T × h
5: z̃ = B[:, :W] × h̃[:W] + b          // compute preview outputs with only W dimensions
6: sort z̃ in descending order         // select the N words with the largest preview outputs
7: C_N = top-N word indices of z̃
8: for all id in C_N do
9:     z̃[id] = B[id, :] × h̃ + b[id]  // update selected words by full-view vector multiplication
10: end for
11: Z̃ = Σ_{i=1}^V exp(z̃_i)
12: ỹ = exp(z̃)/Z̃                     // compute the probability distribution using softmax
13: return ỹ

3.2 Softmax approximation

Algorithm 1 shows the softmax approximation procedure, which is also illustrated in Figure 1. Previous methods needed to compare every element of the output vector to find the top-K words. Instead of using the full-length vector, we consult every word with a window of restricted length W. We call this the "preview window" and the results the "preview outputs." Note that adding the bias b when computing the preview outputs is crucial for performance. Since the larger singular values multiply the few leftmost columns, it is reasonable to assume that the most important portion of the output vector is already computed within the preview window. However, we find that the preview outputs alone do not suffice to obtain accurate results.
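A NumPy sketch of Algorithm 1 (the SVD would normally be computed once offline and cached; variable and function names are ours):

```python
import numpy as np

def svd_softmax(A, h, b, W, N):
    """Approximate softmax probabilities following Algorithm 1.

    A: (V, D) weight matrix, h: (D,) input vector, b: (V,) bias,
    W: preview window width, N: number of full-view vectors.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # offline step in practice
    B = U * s                                          # B = U @ diag(s)
    h_tilde = Vt @ h
    # Preview outputs: only the first W columns of B
    z = B[:, :W] @ h_tilde[:W] + b
    # Recompute the N largest preview outputs with the full-length vectors
    top = np.argpartition(-z, N - 1)[:N]
    z[top] = B[top] @ h_tilde + b[top]
    z -= z.max()                                       # for numerical stability
    y = np.exp(z)
    return y / y.sum()

# Usage: compare against the exact softmax on random data
rng = np.random.default_rng(1)
Vsz, D = 500, 32
A = rng.standard_normal((Vsz, D)) / np.sqrt(D)
h = rng.standard_normal(D)
b = 0.1 * rng.standard_normal(Vsz)
y_approx = svd_softmax(A, h, b, W=8, N=50)
y_exact = np.exp(A @ h + b) / np.exp(A @ h + b).sum()
```

With W = D and N = V the procedure reduces exactly to full-softmax.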
To increase the accuracy, the N largest candidates C_N are selected by sorting the V preview outputs. The selected candidates are recomputed with the full-length window; we call these the "full-view" vectors. As a result, N outputs are computed exactly, while the remaining (V − N) outputs are only approximations based on the preview outputs. In other words, only the selected indices use the full window for the output vector computation. Finally, the softmax function is applied to the output vector to normalize the probability distribution. The modified output vector z̃_k is formulated as

z̃_k = B_k h̃ + b_k              if k ∈ C_N,
z̃_k = B_k[:W] h̃[:W] + b_k      otherwise    (4)

where B ∈ R^{V×D} and h̃ = V^T h ∈ R^D. Note that if k ∈ C_N, z̃_k is equal to z_k. The computational complexity is reduced from O(V × D) to O(V × W + N × D).

3.3 Metrics

To observe the accuracy of every word probability, we use the Kullback-Leibler divergence (KLD) as a metric. KLD shows the closeness of the approximated distribution to the actual one. Perplexity, or negative log-likelihood (NLL), is a useful measurement for likelihood estimation; the gap between the full-softmax and SVD-softmax NLL should be small. For the evaluation of a given word, the accuracy of its probability depends only on the normalization factor Z, so we also monitor the denominator of the softmax function. We further define "top-K coverage," which represents how many of the top-K words of full-softmax are included in the top-K words of SVD-softmax. For beam-search purposes, it is important to select the top-K words correctly, as beam paths might change if their order is mingled.

4 Experimental results

The experiments were performed on three datasets and two different applications: language modeling and machine translation. The WikiText-2 [20] and One Billion Word benchmark (OBW) [21]

Table 1: Effect of the number of hidden units on the WikiText-2 language model. The number of full-view vectors is fixed to 3,300 for the table, which is about 10% of the vocabulary size.
Top-K denotes the top-K coverage defined in Section 3.3. The values are averaged.

D    | W   | Z̃/Z    | KLD     | NLL (full/SVD) | Top-10 | Top-100 | Top-1000
256  | 16  | 0.9813 | 0.03843 | 4.408 / 4.518  | 9.97   | 99.47   | 952.71
256  | 32  | 0.9914 | 0.01134 | 4.408 / 4.441  | 10.00  | 99.97   | 986.94
512  | 32  | 0.9906 | 0.01453 | 3.831 / 3.907  | 10.00  | 99.89   | 974.87
512  | 64  | 0.9951 | 0.00638 | 3.831 / 3.852  | 10.00  | 99.99   | 993.35
1024 | 64  | 0.9951 | 0.00656 | 3.743 / 3.789  | 10.00  | 99.99   | 992.62
1024 | 128 | 0.9971 | 0.00353 | 3.743 / 3.761  | 10.00  | 100.00  | 998.28

datasets were used for language modeling. The neural machine translation (NMT) from German to English was trained with a dataset provided by the OpenNMT toolkit [22]. We first analyzed the extent to which the preview window size W and the number of full-view vectors N affect the overall performance and searched for the best working combination.

4.1 Effect of the number of hidden units on preview window size

To find the relationship between the preview window width and the approximation quality, three LMs trained on WikiText-2 were tested. WikiText is a recently introduced text dataset [20]. The WikiText-2 dataset contains a 33,278-word vocabulary and approximately 2M training tokens. An RNN with a single LSTM layer [6] was used for language modeling, with a traditional full-softmax output layer. The number of LSTM units was the same as the input embedding dimension. Three models were trained on WikiText-2 with the number of hidden units D set to 256, 512, and 1,024. The models were trained with stochastic gradient descent (SGD) with an initial learning rate of 1.0 and momentum of 0.95. The batch size was set to 20, and the network was unrolled for 35 timesteps. Dropout [23] was applied to the LSTM output with a drop ratio of 0.5. Gradient clipping [24] with a maximum norm of 5 was applied. The preview window widths W tested were 16, 32, 64, and 128, and the numbers of full-view candidates N were 5% and 10% of the full vocabulary size for all three models. One thousand sequential frames were used for the evaluation.
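The KLD and top-K coverage columns of Table 1 can be computed as follows (a sketch; the function names are ours):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KLD(p || q) between two probability vectors (smoothed by eps)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def top_k_coverage(p_full, p_approx, k):
    """Number of the top-k words under full-softmax that also appear
    among the top-k words under the approximation (at most k)."""
    top_full = set(np.argsort(-np.asarray(p_full))[:k])
    top_approx = set(np.argsort(-np.asarray(p_approx))[:k])
    return len(top_full & top_approx)
```

Identical distributions give zero KLD and a coverage of exactly k; Table 1 reports these values averaged over evaluation frames.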
Table 1 shows the results of selected experiments, which indicate that the sufficient preview window size is proportional to the hidden layer dimension D. In most cases, 1/8 of D is an adequate window width, which costs 12.5% of the multiplications. Over 99% of the denominator is covered, and the KLD and NLL show that the approximation produces almost the same results as the original. The top-K words are also computed precisely, and we checked that the order of the top-K words is preserved. The results also show that using too narrow a preview window degrades performance badly.

4.2 Effect of the vocabulary size on the number of full-view vectors

The OBW dataset was used to analyze the effect of vocabulary size on SVD-softmax. This benchmark is a huge dataset with a 793,472-word vocabulary. The model used a 256-dimension word embedding, an LSTM layer of 2,048 units, and a full-softmax output layer. The RNN LM was trained with SGD with an initial learning rate of 1.0. We explored multiple models employing vocabulary sizes of 8,004, 80,004, 401,951, and 793,472, abbreviated as 8K, 80K, 400K, and 800K below. The 800K model follows the preprocessing consensus of keeping words that appear more than three times. The 400K vocabulary follows the same process as the 800K one, but without case sensitivity. The 8K and 80K models were created by choosing the most frequent 8K and 80K words, respectively. Because of GPU memory limitations, the 800K model was trained with half-precision parameters. We used the full data for training.

Table 2: Effect of the number of full-view vectors N on the One Billion Word benchmark language model. The preview window width is fixed to 256 in this table. We omit the ratio of the approximated Z̃ to the real Z, because the ratio is over 0.997 for all cases in the table. The multiplication ratio is relative to full-softmax, including the overhead of V^T × h.

V    | N     | NLL (full/SVD) | Top-10 | Top-50 | Top-100 | Top-500 | Mult. ratio
8K   | 1024  | 2.685 / 2.698  | 9.98   | 49.81  | 99.36   | 469.48  | 0.493
8K   | 2048  | 2.685 / 2.687  | 9.99   | 49.99  | 99.89   | 496.05  | 0.605
80K  | 4096  | 3.589 / 3.605  | 10.00  | 49.94  | 99.85   | 497.73  | 0.195
80K  | 8192  | 3.589 / 3.591  | 10.00  | 49.99  | 99.97   | 499.56  | 0.240
400K | 16384 | 3.493 / 3.495  | 10.00  | 50.00  | 100.00  | 499.90  | 0.171
400K | 32768 | 3.493 / 3.495  | 10.00  | 50.00  | 100.00  | 499.98  | 0.201
800K | 32768 | 4.688 / 4.718  | 10.00  | 49.99  | 99.96   | 499.99  | 0.168
800K | 65536 | 4.688 / 4.690  | 10.00  | 49.99  | 99.96   | 499.89  | 0.200

Table 3: SVD-softmax on the machine translation task. The baseline perplexity and BLEU score are 10.57 and 21.98, respectively.

W   | N    | Perplexity | BLEU
200 | 5000 | 10.57      | 21.99
200 | 2500 | 10.57      | 21.99
200 | 1000 | 10.58      | 22.00
100 | 5000 | 10.58      | 22.00
100 | 2500 | 10.59      | 22.00
100 | 1000 | 10.65      | 22.01
50  | 5000 | 10.60      | 22.00
50  | 2500 | 10.68      | 21.99
50  | 1000 | 11.04      | 22.00

The preview window width and the number of full-view vectors were selected as powers of 2. The results were computed on 2,000 randomly selected consecutive frames. Table 2 shows the experimental results. With a fixed hidden dimension of 2,048, the required preview window width does not change significantly, which is consistent with the observations in Section 4.1. However, the number of full-view vectors N should increase as the vocabulary size grows. In our experiments, using 5% to 10% of the total vocabulary size as candidates sufficed to achieve a successful approximation. The results show that the proposed method is scalable and more efficient when applied to large-vocabulary softmax.

4.3 Result on machine translation

NMT is based on neural networks and contains an internal softmax function. We applied SVD-softmax to a German-to-English NMT task to evaluate the actual performance of the proposed algorithm. The baseline network, which employs the encoder-decoder model with an attention mechanism [25, 26], was trained using the OpenNMT toolkit.
The network was trained on concatenated data containing the WMT 2015 translation task [27], Europarl v7 [28], Common Crawl [29], and News Commentary v10 [30], and evaluated on newstest 2013. The training and evaluation data were tokenized and preprocessed following the procedures of previous studies [31, 32] to conduct case-sensitive translation with the 50,004 most frequent words. The baseline network employed 500-dimension word embeddings, encoder and decoder networks with two unidirectional LSTM layers of 500 units each, and a full-softmax output layer. The network was trained with SGD with an initial learning rate of 1.0, applying dropout [23] with ratio 0.3 between adjacent LSTM layers. The rest of the training settings followed the OpenNMT training recipe, which is based on previous studies [31, 33]. The performance of the network was evaluated according to perplexity and the case-sensitive BLEU score [34], computed with the Moses toolkit [35]. During translation, a beam search was conducted with beam width 5. To evaluate our algorithm, the preview window widths W selected were 25, 50, 100, and 200, and the numbers of full-view candidates N chosen were 1,000, 2,500, and 5,000.

Figure 2: Singular value plot of three WikiText-2 language models that differ in hidden vector dimension D ∈ {256, 512, 1024}. The left-hand figure shows the singular value at each index, while the right-hand figure plots the same values against the index as a proportion of D; the dashed line marks the 0.125 = 1/8 point. Both are from the same data.

Table 3 shows the experimental results for perplexity and the BLEU score with respect to the preview window dimension W and the number of full-view vectors N.
The full-softmax layer in the baseline model employed a hidden dimension D of 500 and computed the probability for V = 50,004 words. The experimental results show that a speedup can be achieved with preview width W = 100, which is 1/5 of D, and the number of full-view vectors N = 2,500 or 5,000, which is 1/20 or 1/10 of V. The chosen parameters did not noticeably affect the translation performance in terms of perplexity or BLEU. For a wider W, it is possible to use a smaller N. The experimental results show that SVD-softmax is also effective when applied to NMT tasks.

5 Discussion

In this section, we provide empirical evidence of the reasons why SVD-softmax operates efficiently. We also present the results of an implementation on a GPU.

5.1 Analysis of W, N, and D

We first explain why the required preview window width W is proportional to the hidden vector size D. Figure 2 shows the singular value distribution of the WikiText-2 LM softmax weights. We observed that the distributions are similar for all three cases when the singular value indices are scaled by D. Thus, it is important to preserve the ratio between W and D. The ratio of the sum of singular values in a D/8 window to the total sum of singular values for 256, 512, and 1,024 hidden vector dimensions is 0.42, 0.38, and 0.34, respectively.

Furthermore, we explore the manner in which W and N affect the normalization term, i.e., the denominator. Figure 3 shows how the denominator is approximated as W or N changes. Note that the leftmost column of Figure 3 corresponds to the case in which no full-view vectors are used.

5.2 Computational efficiency

The modeled number of multiplications in Table 2 shows that the computation required can be decreased to 20%. After factorization, the overhead of the matrix multiplication V^T h, which is O(D^2), is a fixed cost. In most cases, especially with a very large vocabulary, V is significantly larger than D, and the additional computation cost is negligible.
However, as V decreases, the portion of the overhead increases.

Figure 3: Heatmap of the approximated normalization factor ratio Z̃/Z. The x and y axes represent N and W, respectively. The WikiText-2 language model with D = 1,024 was used. Note that the maximum values of N and W are 33,278 and 1,024, respectively. The gray line separates the area using 0.99 as a threshold. Best viewed in color.

Table 4: Measured time (ms) of full-softmax and SVD-softmax on a GPU and CPU. The experiment was conducted on an NVIDIA GTX Titan-X (Pascal) GPU and an Intel i7-6850 CPU. The second column shows the full-softmax, while the remaining columns show each step of SVD-softmax. The cost of the sorting, exponential, and sum operations is omitted, as their time consumption is negligible.

| Device | Full-softmax: A × h, (262k, 2k) × 2k | V^T × h, (2k, 2k) × 2k | Preview window, (262k, 256) × 256 | Full-view vectors, (16k, 2k) × 2k | Sum (speedup) |
|--------|---------|-------|--------|-------|----------------|
| GPU | 14.12 | 0.33 | 2.98 | 1.12 | 4.43 (×3.19) |
| CPU | 1541.43 | 25.32 | 189.27 | 88.98 | 303.57 (×5.08) |

We provide an example of time consumption on a CPU and GPU. Assume the weight A is a 262K (V = 2^18) by 2K (D = 2^11) matrix and SVD-softmax is applied with a preview window width of 256 and 16K (N = 2^14) full-view vectors. This corresponds to W/D = 1/8 and N/V = 1/16. This setting closely simulates a real LM environment with the recommended SVD-softmax hyperparameters discussed above. We used our highly optimized custom CUDA kernel for the GPU evaluation. The matrix B was stored in row-major order for convenient full-view vector evaluation. As observed in Table 4, the time consumption is reduced by approximately 70% on the GPU and approximately 80% on the CPU. Note that the GPU kernel is fully parallelized while the CPU code employs sequential logic. We also tested various vocabulary sizes and hidden dimensions on the custom kernel, where a speedup is observed in most cases, although it is less effective for small vocabularies.
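The two-pass procedure discussed in this section can be sketched in NumPy. This is an illustrative sketch only, not the authors' CUDA implementation; the matrix sizes and the decaying-spectrum test data below are hypothetical.

```python
import numpy as np

def svd_softmax_topk(A, b, h, W, N, K):
    """Approximate top-K softmax over logits A @ h + b via SVD-softmax.

    Factorize A = U S Vt once offline; preview every word's logit with
    only the first W columns of B = U @ diag(S), then recompute exact
    logits for the N best preview candidates ("full-view" vectors).
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)  # offline, done once
    B = U * S                          # (V, D), singular values folded in
    h_tilde = Vt @ h                   # O(D^2) fixed cost per query
    # Preview pass over all V words, but only W of D dimensions: O(V*W).
    logits = B[:, :W] @ h_tilde[:W] + b
    # Refine the N most promising words with exact logits: O(N*D).
    cand = np.argpartition(-logits, N)[:N]
    logits[cand] = B[cand] @ h_tilde + b[cand]     # B @ (Vt @ h) == A @ h
    z = np.exp(logits - logits.max())
    probs = z / z.sum()                # normalized by the approximate Z
    return np.argsort(-probs)[:K], probs

# Synthetic check with a decaying spectrum (hypothetical test data; real
# softmax weights show a similar decay, which is what makes W << D work).
rng = np.random.default_rng(0)
V_size, D = 2000, 64
A = rng.normal(size=(V_size, D)) * (0.8 ** np.arange(D))
b, h = rng.normal(size=V_size), rng.normal(size=D)
exact_top = np.argsort(-(A @ h + b))[:10]
approx_top, approx_probs = svd_softmax_topk(A, b, h, W=D // 8, N=V_size // 16, K=10)
print(np.intersect1d(exact_top, approx_top).size, "of 10 top words agree")
```

When the singular values decay quickly, the preview ranking agrees closely with the exact top-K, and the multiplication count drops from V·D to roughly D² + V·W + N·D; for the setting of Table 4 this is about 20% of the full softmax, matching the ratio reported above.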
5.3 Compatibility with other methods

The proposed method is compatible with neural networks trained with sampling-based softmax approximations. SVD-softmax is also applicable to hierarchical softmax and adaptive softmax, especially when the vocabulary is large. Hierarchical methods require a large weight-matrix multiplication to gather every word probability, and SVD-softmax can reduce this computation. We tested SVD-softmax with various softmax approximations and observed that a significant fraction of the multiplications is removed while the performance remains close to that of the full softmax.

6 Conclusion

We present SVD-softmax, an efficient softmax approximation algorithm, which is effective for computing top-K word probabilities. The proposed method factorizes the weight matrix by SVD, and only part of the SVD-transformed matrix is previewed to determine which words are worth preserving. Empirical guidelines for hyperparameter selection were given. Language modeling and NMT experiments were conducted. Our method reduces the number of multiplication operations to only 20% of that of the full softmax with little performance degradation. The proposed SVD-softmax is a simple yet powerful computation reduction technique.

Acknowledgments

This work was supported in part by the Brain Korea 21 Plus Project and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1A2A1A10056051).

References

[1] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur, “Recurrent neural network based language model,” in Interspeech, 2010, vol. 2, p. 3.

[2] Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur, “Extensions of recurrent neural network language model,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5528–5531.
[3] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier, “Language modeling with gated convolutional networks,” arXiv preprint arXiv:1612.08083, 2016.

[4] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4960–4964.

[5] Kyuyeon Hwang and Wonyong Sung, “Character-level incremental speech recognition with recurrent neural networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5335–5339.

[6] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.

[7] Xunying Liu, Yongqiang Wang, Xie Chen, Mark JF Gales, and Philip C Woodland, “Efficient lattice rescoring using recurrent neural network language models,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 4908–4912.

[8] Gene H Golub and Christian Reinsch, “Singular value decomposition and least squares solutions,” Numerische Mathematik, vol. 14, no. 5, pp. 403–420, 1970.

[9] Yoshua Bengio, Jean-Sébastien Senécal, et al., “Quick training of probabilistic neural nets by importance sampling,” in AISTATS, 2003.

[10] Andriy Mnih and Yee Whye Teh, “A fast and simple algorithm for training neural probabilistic language models,” arXiv preprint arXiv:1206.6426, 2012.

[11] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.

[12] Shihao Ji, SVN Vishwanathan, Nadathur Satish, Michael J Anderson, and Pradeep Dubey, “Blackout: Speeding up recurrent neural network language models with very large vocabularies,” arXiv preprint arXiv:1511.06909, 2015.
[13] Frederic Morin and Yoshua Bengio, “Hierarchical probabilistic neural network language model,” in AISTATS. Citeseer, 2005, vol. 5, pp. 246–252.

[14] Andriy Mnih and Geoffrey E Hinton, “A scalable hierarchical distributed language model,” in Advances in Neural Information Processing Systems, 2009, pp. 1081–1088.

[15] Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon, “Structured output layer neural network language model,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5524–5527.

[16] Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou, “Efficient softmax approximation for GPUs,” arXiv preprint arXiv:1609.04309, 2016.

[17] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul, “Fast and robust neural network joint models for statistical machine translation,” in ACL (1). Citeseer, 2014, pp. 1370–1380.

[18] Jacob Andreas, Maxim Rabinovich, Michael I Jordan, and Dan Klein, “On the accuracy of self-normalized log-linear models,” in Advances in Neural Information Processing Systems, 2015, pp. 1783–1791.

[19] Welin Chen, David Grangier, and Michael Auli, “Strategies for training large vocabulary neural language models,” arXiv preprint arXiv:1512.04906, 2015.

[20] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher, “Pointer sentinel mixture models,” arXiv preprint arXiv:1609.07843, 2016.

[21] Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson, “One billion word benchmark for measuring progress in statistical language modeling,” arXiv preprint arXiv:1312.3005, 2013.

[22] Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush, “OpenNMT: Open-source toolkit for neural machine translation,” arXiv preprint arXiv:1701.02810, 2017.
[23] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[24] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio, “On the difficulty of training recurrent neural networks,” ICML (3), vol. 28, pp. 1310–1318, 2013.

[25] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.

[26] Ilya Sutskever, Oriol Vinyals, and Quoc V Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.

[27] Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi, “Findings of the 2015 workshop on statistical machine translation,” in Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015, pp. 1–46.

[28] Philipp Koehn, “Europarl: A parallel corpus for statistical machine translation,” in MT Summit, 2005, vol. 5, pp. 79–86.

[29] Common Crawl Foundation, “Common crawl,” http://commoncrawl.org, 2016, Accessed: 2017-04-11.

[30] Jorg Tiedemann, “Parallel data, tools and interfaces in OPUS,” in LREC, 2012, vol. 2012, pp. 2214–2218.

[31] Minh-Thang Luong, Hieu Pham, and Christopher D Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint arXiv:1508.04025, 2015.

[32] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio, “On using very large target vocabulary for neural machine translation,” arXiv preprint arXiv:1412.2007, 2014.
[33] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.

[34] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002, pp. 311–318.

[35] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al., “Moses: Open source toolkit for statistical machine translation,” in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, 2007, pp. 177–180.
Concentration of Multilinear Functions of the Ising Model with Applications to Network Data

Constantinos Daskalakis∗ EECS & CSAIL, MIT costis@csail.mit.edu
Nishanth Dikkala∗ EECS & CSAIL, MIT nishanthd@csail.mit.edu
Gautam Kamath∗ EECS & CSAIL, MIT g@csail.mit.edu

Abstract

We prove near-tight concentration of measure for polynomial functions of the Ising model under high temperature. For any degree d, we show that a degree-d polynomial of an n-spin Ising model exhibits exponential tails that scale as $\exp(-r^{2/d})$ at radius $r = \tilde{\Omega}_d(n^{d/2})$. Our concentration radius is optimal up to logarithmic factors for constant d, improving known results by polynomial factors in the number of spins. We demonstrate the efficacy of polynomial functions as statistics for testing the strength of interactions in social networks in both synthetic and real world data.

1 Introduction

The Ising model is a fundamental probability distribution defined in terms of a graph G = (V, E) whose nodes and edges are associated with scalar parameters $(\theta_v)_{v \in V}$ and $(\theta_{u,v})_{\{u,v\} \in E}$ respectively. The distribution samples a vector $x \in \{\pm 1\}^V$ with probability
$$p(x) = \exp\left(\sum_{v \in V} \theta_v x_v + \sum_{(u,v) \in E} \theta_{u,v} x_u x_v - \Phi(\vec{\theta})\right), \tag{1}$$
where $\Phi(\vec{\theta})$ serves to provide normalization. Roughly speaking, there is a random variable $X_v$ at every node of G, and this variable may be in one of two states, or spins: up (+1) or down (−1). The scalar parameter $\theta_v$ models a local field at node v. The sign of $\theta_v$ represents whether this local field favors $X_v$ taking the value +1, i.e. the up spin, when $\theta_v > 0$, or the value −1, i.e. the down spin, when $\theta_v < 0$, and its magnitude represents the strength of the local field. Similarly, $\theta_{u,v}$ represents the direct interaction between nodes u and v. Its sign represents whether it favors equal spins, when $\theta_{u,v} > 0$, or opposite spins, when $\theta_{u,v} < 0$, and its magnitude corresponds to the strength of the direct interaction.
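To make the definition concrete, the distribution in Eq. (1) can be evaluated exactly on a small graph by brute-force enumeration. The sketch below is for illustration only; the 4-cycle graph, parameter values, and function names are hypothetical choices, not from the paper.

```python
import itertools
import math

def ising_pmf(n, theta_node, theta_edge):
    """Exact pmf of Eq. (1) on n spins by enumerating all 2^n configurations.

    theta_node: dict v -> theta_v (local fields)
    theta_edge: dict (u, v) -> theta_{u,v} (pairwise interactions)
    """
    configs = list(itertools.product((-1, 1), repeat=n))
    weights = []
    for x in configs:
        s = sum(t * x[v] for v, t in theta_node.items())
        s += sum(t * x[u] * x[v] for (u, v), t in theta_edge.items())
        weights.append(math.exp(s))
    Z = sum(weights)                 # Z = exp(Phi(theta)), the normalizer
    return {x: w / Z for x, w in zip(configs, weights)}

# A 4-cycle with zero external field and uniform interaction 0.2
# (hypothetical numbers; 2 * tanh(0.2) < 1, so Dobrushin's condition holds).
p = ising_pmf(4, {v: 0.0 for v in range(4)},
              {(0, 1): 0.2, (1, 2): 0.2, (2, 3): 0.2, (3, 0): 0.2})
mean_spin = sum(prob * x[0] for x, prob in p.items())
print(abs(sum(p.values()) - 1.0) < 1e-12, abs(mean_spin) < 1e-9)  # True True
```

With zero external field the distribution is invariant under flipping all spins, so every marginal expectation $\mathbb{E}[X_v]$ vanishes, as the check confirms.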
Of course, depending on the structure of G and the node and edge parameters, there may be indirect interactions between nodes, which may overwhelm local fields or direct interactions. Many popular models, for example the usual ferromagnetic Ising model [Isi25, Ons44], the Sherrington-Kirkpatrick mean field model [SK75] of spin glasses, the Hopfield model [Hop82] of neural networks, and the Curie-Weiss model [DCG68], all belong to the above family of distributions, with various special structures on G, the $\theta_{u,v}$'s and the $\theta_v$'s.

∗Authors are listed in alphabetical order.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Since its introduction in Statistical Physics, the Ising model has found a myriad of applications in diverse research disciplines, including probability theory, Markov chain Monte Carlo, computer vision, theoretical computer science, social network analysis, game theory, computational biology, and neuroscience; see e.g. [LPW09, Cha05, Fel04, DMR11, GG86, Ell93, MS10] and their references. The ubiquity of these applications motivates the problem of inferring Ising models from samples, or inferring statistical properties of Ising models from samples. This type of problem has enjoyed much study in statistics, machine learning, and information theory; see, e.g., [CL68, AKN06, CT06, Cha07, RWL10, JJR11, SW12, BGS14, Bre15, VMLC16, BK16, Bha16, BM16, MdCCU16, KM17, HKM17, DDK18]. Despite the wealth of theoretical study and practical applications of this model, outlined above, there are still aspects of it that are poorly understood. In this work, we focus on the important topic of concentration of measure. We are interested in studying the concentration properties of polynomial functions f(X) of the Ising model. That is, for a random vector X sampled from p as above and a polynomial f, we are interested in the concentration of f(X) around its expectation $\mathbb{E}[f(X)]$.
Since the coordinates of X take values in {±1}, we can without loss of generality restrict our attention to multilinear functions f. While the theory of concentration inequalities for functions of independent random variables has reached a high level of sophistication, proving concentration of measure for functions of dependent random variables is significantly harder, the main tools being martingale methods, logarithmic Sobolev inequalities, and transportation cost inequalities. One shortcoming of the latter methods is that explicit constants are very hard or almost impossible to obtain. For the Ising model in particular, the log-Sobolev inequalities of Stroock and Zegarlinski [SZ92], known under high temperature,² do not give explicit constants, and it is also not clear whether they extend to systems beyond the lattice. The high temperature regime is an interesting regime of ‘weak’ dependence where many desirable properties related to Ising models hold. Perhaps the most important of them is that the canonical Markov chain used to sample from these models, namely the Glauber dynamics, is fast mixing. Although the high-temperature regime allows only ‘weak’ pairwise correlations, it is still rich enough to encode interesting dependencies. For instance, in neuroscience, it has been observed that weak pairwise correlations can coexist with strong correlations in the state of the population as a whole [SBSB06]. An alternative approach, proposed recently by Chatterjee [Cha05], is an adaptation to the Ising model of Stein’s method of exchangeable pairs. This powerful method is well-known in probability theory, and has been used to derive concentration inequalities with explicit constants for functions of dependent random variables (see [MJC+14] for a recent work). Chatterjee uses this technique to establish concentration inequalities for Lipschitz functions of the Ising model under high temperature.
While these inequalities are tight (and provide Gaussian tails) for linear functions of the Ising model, they are unfortunately not tight for higher degree polynomials, in that the concentration radius is off by factors that depend on the dimension n = |V|. For example, consider the function $f_c(X) = \sum_{i \ne j} c_{ij} X_i X_j$ of an Ising model without external fields, where the $c_{ij}$'s are signs. Chatterjee's results imply that this function concentrates at radius $\pm O(n^{1.5})$, but as we show this is suboptimal by a factor of $\tilde{\Omega}(\sqrt{n})$. In particular, our main technical contribution is to obtain near-tight concentration inequalities for polynomial functions of the Ising model, whose concentration radii are tight up to logarithmic factors. A corollary of our main result (Theorem 4) is as follows:

Theorem 1. Consider any degree-d multilinear function f with coefficients in [−1, 1], defined on an Ising model p without external field in the high-temperature regime. Then there exists a constant C = C(d) > 0 (depending only on d) such that for any $r = \tilde{\Omega}_d(n^{d/2})$, we have
$$\Pr_{X \sim p}\left[\,|f(X) - \mathbb{E}[f(X)]| > r\,\right] \le \exp\left(-C \cdot \frac{r^{2/d}}{n \log n}\right).$$
The concentration radius is tight up to logarithmic factors, and the tail bound is tight up to an $O_d(1/\log n)$ factor in the exponent of the tail bound.

²High temperature is a widely studied regime of the Ising model where it enjoys a number of useful properties such as decay of correlations and fast mixing of the Glauber dynamics. Throughout this paper we will take “high temperature” to mean that Dobrushin’s conditions of weak dependence are satisfied. See Definition 1.

Our formal theorem statements for bilinear and higher degree multilinear functions appear as Theorems 2 and 4 of Sections 3 and 4, respectively. Some further discussion of our results is in order:

• Under the existence of external fields, it is easy to see that the above concentration does not hold, even for bilinear functions.
Motivated by our applications in Section 5, we extend the above concentration of measure result to centered bilinear functions (where each variable $X_i$ appears as $X_i - \mathbb{E}[X_i]$ in the function), which also holds under arbitrary external fields; see Theorem 3. We leave extensions of this result to higher degree multilinear functions to the next version of this paper.

• Moreover, notice that the tails for degree-2 functions are exponential rather than Gaussian, and that as the degree grows the tails become heavier exponentials; both phenomena are unavoidable. In particular, the tightness of our bound is justified in the supplementary material.

• Lastly, like Chatterjee and Stroock and Zegarlinski, we prove our results under high temperature. On the other hand, it is easy to construct low temperature Ising models where no non-trivial concentration holds.³

With our theoretical understanding in hand, we proceed with an experimental evaluation of the efficacy of multilinear functions applied to hypothesis testing. Specifically, given a binary vector, we attempt to determine whether or not it was generated by an Ising model. Our focus is on testing whether choices in social networks can be approximated as an Ising model, a common and classical assumption in the social sciences [Ell93, MS10]. We apply our method to both synthetic and real-world data. On synthetic data, we investigate when our statistics are successful in detecting departures from the Ising model. For our real-world data study, we analyze the Last.fm dataset from HetRec’11 [CBK11]. Interestingly, when considering musical preferences on a social network, we find that the Ising model may be more or less appropriate depending on the genre of music.

1.1 Related Work

As mentioned before, Chatterjee previously used the method of exchangeable pairs to prove variance and concentration bounds for linear statistics of the Ising model [Cha05].
In [DDK18], the authors prove variance bounds for bilinear statistics. The present work improves upon this by proving concentration rather than bounding the variance, as well as considering general degrees d rather than just d = 2. In simultaneous work, Gheissari, Lubetzky, and Peres proved concentration bounds which are qualitatively similar to ours, though the techniques are somewhat different [GLP17].

2 Preliminaries

We state some preliminaries here; see the supplementary material for further preliminaries. We define the high-temperature regime, also known as Dobrushin's uniqueness condition – in this paper, we will use the terms interchangeably.

Definition 1. Consider an Ising model p defined on a graph G = (V, E) with |V| = n and parameter vector $\vec{\theta}$. Suppose $\max_{v \in V} \sum_{u \ne v} \tanh(|\theta_{uv}|) \le 1 - \eta$ for some η > 0. Then p is said to satisfy Dobrushin's uniqueness condition, or be in the η-high temperature regime. In some situations, we may use the parameter η implicitly and simply say the Ising model is in the high temperature regime.

Glauber dynamics refers to the canonical Markov chain for sampling from an Ising model; see the supplementary material for a formal definition. Glauber dynamics define a reversible, ergodic Markov chain whose stationary distribution is identical to the corresponding Ising model. In many relevant settings, including the high-temperature regime, the dynamics are rapidly mixing and hence offer an efficient way to sample from Ising models.

³Consider an Ising model with no external fields, comprising two disjoint cliques of half the vertices with infinitely strong bonds; i.e. $\theta_v = 0$ for all v, and $\theta_{u,v} = \infty$ if u and v belong to the same clique. Now consider the multilinear function $f(X) = \sum_{u \not\sim v} X_u X_v$, where $u \not\sim v$ denotes that u and v are not neighbors (i.e. belong to different cliques). It is easy to see that the maximum absolute value of f(X) is $\Omega(n^2)$ and that there is no concentration at radius better than some $\Omega(n^2)$.
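The Glauber dynamics just described admit a compact sketch: repeatedly pick a uniformly random node and resample its spin from its conditional distribution given the rest. This is an illustration only; the cycle graph, parameter values, and step count (chosen on the order of n log n) are hypothetical.

```python
import math
import random

def glauber_step(x, theta_node, theta_edge, rng):
    """One Glauber update: resample a uniformly random node's spin from
    its conditional distribution given all other spins."""
    v = rng.randrange(len(x))
    field = theta_node[v]
    for (a, b), t in theta_edge.items():
        if a == v:
            field += t * x[b]
        elif b == v:
            field += t * x[a]
    # Pr[X_v = +1 | rest] = e^field / (e^field + e^-field) = sigmoid(2*field)
    x[v] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * field)) else -1

def sample_ising(n, theta_node, theta_edge, steps, rng):
    """Approximate sample: run the chain from a uniformly random start;
    at high temperature, a number of steps of order n log n suffices."""
    x = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        glauber_step(x, theta_node, theta_edge, rng)
    return x

# 10-node cycle; each node has 2 neighbors and 2 * tanh(0.15) < 1, so
# Dobrushin's condition (Definition 1) holds.
rng = random.Random(0)
n = 10
x = sample_ising(n, {v: 0.0 for v in range(n)},
                 {(v, (v + 1) % n): 0.15 for v in range(n)}, steps=200, rng=rng)
print(len(x), set(x) <= {-1, 1})  # 10 True
```

The stationary distribution of this chain is exactly the pmf in Eq. (1), since each update resamples one coordinate from its exact conditional law.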
In particular, the mixing time in η-high-temperature is $t_{mix} = \frac{n \log n}{\eta}$. We may couple two executions of the Glauber dynamics using a greedy coupling (also known as a monotone coupling). Roughly, this couples the choices made by the runs to maximize the probability of agreement; see the supplementary material for a formal definition. One of the key properties of this coupling is that it satisfies the following contraction property:

Lemma 1. If p is an Ising model in η-high temperature, then the greedy coupling between two executions satisfies the following contraction in Hamming distance:
$$\mathbb{E}\left[ d_H\big(X^{(1)}_t, X^{(2)}_t\big) \,\Big|\, \big(X^{(1)}_0, X^{(2)}_0\big) \right] \le \left(1 - \frac{\eta}{n}\right)^{t} d_H\big(X^{(1)}_0, X^{(2)}_0\big).$$

The key technical tool we use is the following concentration inequality for martingales:

Lemma 2 (Freedman's Inequality (Proposition 2.1 in [Fre75])). Let $X_0, X_1, \ldots, X_t$ be a sequence of martingale increments, such that $S_i = \sum_{j=0}^{i} X_j$ forms a martingale sequence. Let τ be a stopping time and K ≥ 0 be such that $\Pr[|X_i| \le K \ \forall i \le \tau] = 1$. Let $v_i = \mathrm{Var}[X_i \mid X_{i-1}]$ and $V_t = \sum_{i=0}^{t} v_i$. Then
$$\Pr\left[|S_t| \ge r \text{ and } V_t \le b \text{ for some } t \le \tau\right] \le 2 \exp\left( -\frac{r^2}{2(rK + b)} \right).$$

3 Concentration of Measure for Bilinear Functions

In this section, we describe our main concentration result for bilinear functions of the Ising model. This is not as technically involved as the result for general-degree multilinear functions, but exposes many of the main conceptual ideas. The theorem statement is as follows:

Theorem 2. Consider any bilinear function $f_a(x) = \sum_{u,v} a_{uv} x_u x_v$ on an Ising model p (defined on a graph G = (V, E) such that |V| = n) in the η-high-temperature regime with no external field. Let $\|a\|_\infty = \max_{u,v} |a_{uv}|$. If X ∼ p, then for any $r \ge 300\|a\|_\infty n \log^2 n/\eta + 2$, we have
$$\Pr\left[|f_a(X) - \mathbb{E}[f_a(X)]| \ge r\right] \le 5 \exp\left( -\frac{\eta r}{1735 \|a\|_\infty n \log n} \right).$$

Remark 1. We note that η-high-temperature is not strictly needed for our results to hold – we only need Hamming contraction of the “greedy coupling” (see Lemma 1).
This condition implies rapid mixing of the Glauber dynamics (in O(n log n) steps) via path coupling (Theorem 15.1 of [LPW09]).

3.1 Overview of the Technique

A well known approach to proving concentration inequalities for functions of dependent random variables is via martingale tail bounds. For instance, Azuma's inequality gives useful tail bounds whenever one can bound the martingale increments (i.e., the differences between consecutive terms of the martingale sequence) in absolute value, without requiring any form of independence. Such an approach is fruitful in showing concentration of linear functions on the Ising model in high temperature. The Glauber dynamics associated with Ising models in high temperature are fast mixing and offer a natural way to define a martingale sequence. In particular, consider the Doob martingale corresponding to any linear function f for which we wish to show concentration, defined on the state of the dynamics at some time step $t^*$, i.e. $f(X_{t^*})$. If we choose $t^*$ larger than $O(n \log n)$ then $f(X_{t^*})$ will be very close to a sample from p irrespective of the starting state. We set the first term of the martingale sequence to $\mathbb{E}[f(X_{t^*})|X_0]$; the last term is simply $f(X_{t^*})$. By bounding the martingale increments we can show that $|f(X_{t^*}) - \mathbb{E}[f(X_{t^*})|X_0]|$ concentrates at the right radius with high probability. By making $t^*$ large enough we can argue that $\mathbb{E}[f(X_{t^*})|X_0] \approx \mathbb{E}[f(X)]$. Also, crucially, $t^*$ need not be too large since the dynamics are fast mixing. Hence we do not incur too large a penalty when applying Azuma's inequality, and one can argue that linear functions are concentrated within a radius of $\tilde{O}(\sqrt{n})$. Crucial to this argument is the fact that linear functions are O(1)-Lipschitz (when the entries of a are constant), bounding the Doob martingale differences by O(1).
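The $\tilde{O}(\sqrt{n})$ radius for linear functions can be made concrete with a short calculation (a sketch with constants suppressed, filling in the arithmetic behind the preceding paragraph):

```latex
% Doob martingale B_i = E[ f(X_{t^*}) | X_i ] over t^* steps, with
% increments bounded by c = O(1) for an O(1)-Lipschitz linear f.
% Azuma's inequality gives
\Pr\left[\, \big| f(X_{t^*}) - \mathbb{E}[f(X_{t^*}) \mid X_0] \big| \ge r \,\right]
   \;\le\; 2\exp\!\left( -\frac{r^2}{2\, t^* c^2} \right).
% Taking t^* = O(n \log n / \eta), past the mixing time, the bound is
% non-trivial exactly when r = \Omega(\sqrt{t^*}) = \tilde{O}(\sqrt{n}),
% which is the claimed concentration radius for linear functions.
```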
The challenge with bilinear functions is that they are O(n)-Lipschitz – a naive application of the same approach gives a radius of concentration of $\tilde{O}(n^{3/2})$, which, albeit better than the trivial radius of $O(n^2)$, is not optimal. To show stronger concentration for bilinear functions, at a high level, the idea is to bootstrap the known fact that linear functions of the Ising model concentrate well at high temperature. The key insight is that, when we have a d-linear function, its Lipschitz constants are bounds on the absolute values of certain (d−1)-linear functions. In particular, this implies that the Lipschitz constants of a bilinear function are bounds on the absolute values of certain associated linear functions. And although a worst case bound on the absolute value of linear functions with bounded coefficients would be O(n), the fact that linear functions are concentrated within a radius of $\tilde{O}(\sqrt{n})$ means that bilinear functions are $\tilde{O}(\sqrt{n})$-Lipschitz in spirit. In order to exploit this intuition, we turn to more sophisticated concentration inequalities, namely Freedman's inequality (Lemma 2). This is a generalization of Azuma's inequality which handles the case when the martingale differences are only bounded until some stopping time (very roughly, the first time we reach a state where the expectation of the linear function after mixing is large). To apply Freedman's inequality, we need to define a stopping time which has two properties:

1. The stopping time is larger than $t^*$ with high probability, so that with good probability the process does not stop too early. The harm if the process stops too early (at $t < t^*$) is that we will not be able to effectively decouple $\mathbb{E}[f_a(X_t)|X_0]$ from the choice of $X_0$. $t^*$ is chosen to be larger than the mixing time of the Glauber dynamics precisely because it allows us to argue that $\mathbb{E}[f_a(X_{t^*})|X_0] \approx \mathbb{E}[f_a(X_{t^*})] = \mathbb{E}[f_a(X)]$.

2. For all times i + 1 less than the stopping time, the martingale increments are bounded, i.e.
$|B_{i+1} - B_i| = O(\sqrt{n})$, where $\{B_i\}_{i \ge 0}$ is the martingale sequence. We observe that the martingale increments corresponding to a martingale defined on a bilinear function have the flavor of conditional expectations of certain linear functions, which can be shown to concentrate at a radius $\tilde{O}(\sqrt{n})$ when the process starts at its stationary distribution. This provides us with a natural way of defining the stopping time: the first time one of these conditional expectations deviates by more than $\Omega(\sqrt{n}\,\mathrm{poly}\log n)$ from the origin. More precisely, we define a set $G^a_K(t)$ of configurations $x_t$, parameterized by a function $f_a(X)$ and a parameter K (which we will take to be $\tilde{\Omega}(\sqrt{n})$). The objects of interest are linear functions $f^v_a(X_{t^*})$ conditioned on $X_t = x_t$, where the $f^v_a$ are linear functions which arise when examining the evolution of $f_a$ over steps of the Glauber dynamics. $G^a_K(t)$ is the set of configurations for which all such linear functions satisfy certain conditions, including bounded expectation and concentration around their mean. The stopping time $T_K$ for our process is defined as the first time we reach a configuration which leaves this set $G^a_K(t)$. We can show that the stopping time is large via the following lemma:

Lemma 3. For any t ≥ 0, for $t^* = 3 t_{mix}$,
$$\Pr\left[X_t \notin G^a_K(t)\right] \le 8n \exp\left(-\frac{K^2}{8 t^*}\right).$$

Next, we require a bound on the conditional variance of the martingale increments. This can be shown using the property that the martingale increments are bounded up until the stopping time:

Lemma 4. Consider the Doob martingale where $B_i = \mathbb{E}[f_a(X_{t^*})|X_i]$. Suppose $X_i \in G^a_K(i)$ and $X_{i+1} \in G^a_K(i+1)$. Then
$$|B_{i+1} - B_i| \le 16K + 16 n^2 \exp\left(-\frac{K^2}{16 t^*}\right).$$

With these two pieces in hand, we can apply Freedman's inequality to bound the desired quantity. It is worth noting that the martingale approach described above closely relates to the technique of exchangeable pairs exposited by Chatterjee [Cha05].
When we look at differences for the martingale sequence defined using the Glauber dynamics, we end up analyzing an exchangeable pair of the following form: sample X ∼ p from the Ising model, then take a step along the Glauber dynamics starting from X to reach X′. The pair (X, X′) forms an exchangeable pair. This is precisely how Chatterjee's application of exchangeable pairs is set up. Chatterjee then goes on to study a function of X and X′ which serves as a proxy for the variance of f(X), and obtains concentration results by bounding the absolute value of this function. The definition of the function involves considering two greedily coupled runs of the Glauber dynamics, just as we do in our martingale based approach.

To summarize, our proof of bilinear concentration involves showing various concentration properties for linear functions via Azuma's inequality, showing that the martingale has $\tilde{O}(\sqrt{n})$-bounded differences before our stopping time, proving that the stopping time is larger than the mixing time with high probability, and combining these ingredients using Freedman's inequality. Full details are provided in the supplementary material.

3.2 Concentration Under an External Field

Under an external field, not all bilinear functions concentrate nicely even in the high temperature regime – in particular, they may concentrate with a radius of $\Theta(n^{1.5})$ instead of $O(n)$. As such, we must instead consider “recentered” statistics to obtain the same radius of concentration. The following theorem is proved in the supplementary material:

Theorem 3. 1. Bilinear functions on the Ising model of the form $f_a(X) = \sum_{u,v} a_{uv}(X_u - \mathbb{E}[X_u])(X_v - \mathbb{E}[X_v])$ satisfy the following inequality at high temperature. There exist absolute constants c and c′ such that, for $r \ge c\, n \log^2 n/\eta$,
$$\Pr\left[|f_a(X) - \mathbb{E}[f_a(X)]| \ge r\right] \le 4 \exp\left(-\frac{r}{c' n \log n}\right).$$
2.
Bilinear functions on the Ising model of the form $f_a(X^{(1)}, X^{(2)}) = \sum_{u,v} a_{uv}(X^{(1)}_u - X^{(2)}_u)(X^{(1)}_v - X^{(2)}_v)$, where $X^{(1)}, X^{(2)}$ are two i.i.d. samples from the Ising model, satisfy the following inequality at high temperature. There exist absolute constants c and c′ such that, for $r \ge c n \log^2 n/\eta$, $\Pr\left[\left|f_a(X^{(1)}, X^{(2)}) - E[f_a(X^{(1)}, X^{(2)})]\right| \ge r\right] \le 4\exp\left(-\frac{r}{c' n \log n}\right)$.

4 Concentration of Measure for d-linear Functions

More generally, we can show concentration of measure for d-linear functions on an Ising model at high temperature, when d ≥ 3. Again, we will focus on the setting with no external field. Although we will follow a recipe similar to that used for bilinear functions, the proof is more involved and requires some new definitions and tools. The proof will proceed by induction on the degree d. Due to the proof being more involved, for ease of exposition, we present the proof of Theorem 4 without explicit values for constants. Our main theorem statement is the following:

Theorem 4. Consider any degree-d multilinear function $f_a(x) = \sum_{U \subseteq V: |U| = d} a_U \prod_{u \in U} x_u$ on an Ising model p (defined on a graph G = (V, E) such that |V| = n) in the η-high-temperature regime with no external field. Let $\|a\|_\infty = \max_{U \subseteq V: |U| = d} |a_U|$. There exist constants $C_1 = C_1(d) > 0$ and $C_2 = C_2(d) > 0$ depending only on d, such that if $X \sim p$, then for any $r \ge C_1 \|a\|_\infty (n \log^2 n/\eta)^{d/2}$, we have $\Pr\left[|f_a(X) - E[f_a(X)]| > r\right] \le 2\exp\left(-\frac{\eta r^{2/d}}{C_2 \|a\|_\infty^{2/d} n \log n}\right)$.

Similar to Remark 1, our theorem statement still holds under the weaker assumption of Hamming contraction. This bound is also tight up to polylogarithmic factors in the radius of concentration and the exponent of the tail bound; see Remark 1 in the supplementary material.

4.1 Overview of the Technique

Our approach uses induction and is similar to the one used for bilinear functions. To show concentration for d-linear functions we will use the concentration of (d−1)-linear functions together with Freedman's martingale inequality.
Consider the following process: sample $X_0 \sim p$ from the Ising model of interest, and starting at $X_0$, run the Glauber dynamics associated with p for $t^* = (d+1)t_{\mathrm{mix}}$ steps. We will study the target quantity, $\Pr\left[|f_a(X_{t^*}) - E[f_a(X_{t^*}) \mid X_0]| > K\right]$, by defining a martingale sequence similar to the one in the bilinear proof. However, to bound the increments of the martingale for d-linear functions we will require a more involved induction hypothesis. The reason is that with higher-degree multilinear functions (d > 2), the argument for bounding increments of the martingale sequence runs into multilinear terms which are a function not just of a single instance of the dynamics $X_t$, but also of the configuration obtained from the coupled run, $X'_t$. We call such multilinear terms hybrid terms, and multilinear functions involving hybrid terms hybrid multilinear functions henceforth. Since the two runs (of the Glauber dynamics) are coupled greedily to maximize the probability of agreement, and they start with a small Hamming distance from each other (≤ 1), these hybrid terms behave very similarly to the non-hybrid multilinear terms. Showing that their behavior is similar, however, requires some supplementary statements about them, which are presented in the supplementary material. In addition to the martingale technique of Section 3, an ingredient that is crucial to proving concentration for d ≥ 3 is a bound on the magnitude of the (d−1)-order marginals of the Ising model:

Lemma 5. Consider any Ising model p at high temperature. Let d be a positive integer. We have $\sum_{u_1,\ldots,u_d} E_p[X_{u_1} X_{u_2} \cdots X_{u_d}] \le 2\left(\frac{4 n d \log n}{\eta}\right)^{d/2}$.

This is because when studying degree d ≥ 3 functions we find ourselves having to bound expected values of degree-(d−1) multilinear functions on the Ising model. A naive bound of $O_d(n^{d-1})$ can be argued for these functions, but by exploiting the fact that we are at high temperature, we can show a bound of $O_d(n^{(d-1)/2})$ via a coupling with the Fortuin–Kasteleyn model.
When d = 2, (d−1)-linear functions are just linear functions, which are zero mean. However, for d ≥ 3, this is not the case. Hence, we first need to prove this desired bound on the marginals of an Ising model at high temperature. Further details are provided in the supplementary material.

5 Experiments

In this section, we apply our family of bilinear statistics on the Ising model to a problem of statistical hypothesis testing. Given a single sample from a multivariate distribution, we attempt to determine whether or not this sample was generated from an Ising model in the high-temperature regime. More specifically, the null hypothesis is that the sample is drawn from an Ising model with a known graph structure, a common edge parameter, and a uniform node parameter (which may potentially be known to be 0). In Section 5.1, we apply our statistics to synthetic data. In Section 5.2, we turn our attention to the Last.fm dataset from HetRec 2011 [CBK11]. The running theme of our experimental investigation is testing the classical and common assumption which models choices in social networks as an Ising model [Ell93, MS10]. To be more concrete, choices in a network could include whether to buy an iPhone or an Android phone, or whether to vote for a Republican or Democratic candidate. Such choices are naturally influenced by one's neighbors in the network – one may be more likely to buy an iPhone if he sees all his friends have one, corresponding to an Ising model with positive-weight edges.4 In our synthetic data study, we will leave these choices abstract, referring to them only as "values," but in our Last.fm data study, these choices will be whether or not one listens to a particular artist. Our general algorithmic approach is as follows. Given a single multivariate sample, we first run the maximum pseudo-likelihood estimator (MPLE) to obtain an estimate of the model's parameters.
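As a concrete illustration (our own sketch, not the authors' code), the MPLE for the grid model of Section 5.1 with a single edge parameter θ and no external field reduces to a one-dimensional concave maximization: the pseudo-likelihood is $\sum_u \left[X_u\,\theta m_u - \log(2\cosh(\theta m_u))\right]$, where $m_u$ is the sum of u's neighboring spins, so its stationarity condition is $\sum_u m_u(X_u - \tanh(\theta m_u)) = 0$, and the root can be found by bisection since the left-hand side is non-increasing in θ. The function name and search interval are our own choices.

```python
import numpy as np

def mple_edge_param(X, lo=-2.0, hi=2.0, tol=1e-6):
    """MPLE of a single edge parameter theta for a grid Ising model with no
    external field, given one sample X of +/-1 spins (free boundary).
    Solves the concave pseudo-likelihood's stationarity condition by bisection."""
    m = np.zeros(X.shape, dtype=float)   # m[u] = sum of u's neighboring spins
    m[1:, :] += X[:-1, :]
    m[:-1, :] += X[1:, :]
    m[:, 1:] += X[:, :-1]
    m[:, :-1] += X[:, 1:]

    def grad(theta):
        # Derivative of sum_u [X_u*theta*m_u - log(2*cosh(theta*m_u))] in theta.
        return np.sum(m * (X - np.tanh(theta * m)))

    # grad is non-increasing in theta, so bisect for its root in [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if grad(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

On a sample of independent uniform spins (true θ = 0), the estimate should be close to zero; a large estimate is the paper's first trigger for rejecting the null.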
The MPLE is a canonical estimator for the parameters of the Ising model, and it enjoys strong consistency guarantees in many settings of interest [Cha07, BM16]. If the MPLE gives a large estimate of the model's edge parameter, this is sufficient evidence to reject the null hypothesis. Otherwise, we use Markov chain Monte Carlo (MCMC) on a model with the MPLE parameters to determine a range of values for our statistic. We note that, to be precise, we would need to quantify the error incurred by the MPLE – in favor of simplicity in our exploratory investigation, we eschew this detail, and at this point attempt to reject the null hypothesis of the model learned by the MPLE. Our statistic is bilinear in the Ising model, and thus enjoys the strong concentration properties explained earlier in this paper. Note that since the Ising model will be in the high-temperature regime, the Glauber dynamics mix rapidly, and we can efficiently sample from the model using MCMC. Finally, given the range of values for the statistic determined by MCMC, we reject the null hypothesis if p ≤ 0.05. (Footnote 4: Note that one may also decide against buying an iPhone in this scenario, if one places high value on individuality and uniqueness – this corresponds to negative-weight edges.)

5.1 Synthetic Data

We proceed with our investigation on synthetic data. Our null hypothesis is that the sample is generated from an Ising model in the high-temperature regime on the grid, with no external field (i.e., θu = 0 for all u) and a common (unknown) edge parameter θ (i.e., θuv = θ if nodes u and v are adjacent in the grid, and θuv = 0 otherwise). For the Ising model on the grid, the critical edge parameter for high temperature is $\theta_c = \frac{\ln(1+\sqrt{2})}{2}$. In other words, we are at high temperature if and only if θ ≤ θc, and we can reject the null hypothesis if the MPLE estimate $\hat\theta > \theta_c$. To generate departures from the null hypothesis, we give a construction parameterized by τ ∈ [0, 1].
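The MCMC sampling step under the null can be sketched as follows. This is our own illustrative sketch for the grid model with free boundary conditions and no external field, not the authors' code; the default step count is a heuristic stand-in for the O(n log n) rapid-mixing bound mentioned above.

```python
import numpy as np

def glauber_sample(theta, L, n_steps=None, rng=None):
    """One approximate sample from the L-by-L grid Ising model with edge
    parameter theta and no external field, via Glauber dynamics."""
    rng = np.random.default_rng(rng)
    n = L * L
    # Start from a uniformly random configuration of +/-1 spins.
    X = rng.choice([-1, 1], size=(L, L))
    if n_steps is None:
        # Heuristic stand-in for the O(n log n) high-temperature mixing time.
        n_steps = int(10 * n * np.log(n))
    for _ in range(n_steps):
        i, j = rng.integers(L), rng.integers(L)
        # Sum of the neighboring spins of site (i, j) on the grid.
        s = 0
        if i > 0:
            s += X[i - 1, j]
        if i < L - 1:
            s += X[i + 1, j]
        if j > 0:
            s += X[i, j - 1]
        if j < L - 1:
            s += X[i, j + 1]
        # Resample X[i, j] from its conditional law given the rest:
        # P(+1 | rest) = exp(theta*s) / (2*cosh(theta*s)).
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
        X[i, j] = 1 if rng.random() < p_plus else -1
    return X
```

Repeating this 100 times with the MPLE parameter gives the null range of the statistic against which the observed sample is compared.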
We provide a rough description of the departures; for a precise description, see the supplementary material. Each node x selects a random node y at Manhattan distance at most 2, and sets y's value to x's value with probability τ. The intuition behind this construction is that each individual selects a friend or a friend-of-a-friend and tries to convince them to take his value – he is successful with probability τ. Selecting either a friend or a friend-of-a-friend is in line with the concept of strong triadic closure [EK10] from the social sciences, which suggests that two individuals with a mutual friend are likely to either already be friends (which the social network may not have knowledge of) or become friends in the future. An example of a sample generated from this distribution with τ = 0.04 is provided in Figure 1 of the supplementary material, alongside a sample from the Ising model generated with the corresponding MPLE parameters. We consider this distribution to pass the "eye test" – one cannot easily distinguish these two distributions by simply glancing at them. However, as we will see, our multilinear statistic is able to correctly reject the null a large fraction of the time. Our experimental process was as follows. We started with a 40 × 40 grid, corresponding to a distribution with n = 1600 dimensions. We generated values for this grid according to the departures from the null described above, with some parameter τ. We then ran the MPLE estimator to obtain an estimate for the edge parameter $\hat\theta$, immediately rejecting the null if $\hat\theta > \theta_c$. Otherwise, we ran the Glauber dynamics for O(n log n) steps to generate a sample from the grid Ising model with parameter $\hat\theta$. We repeated this process to generate 100 samples, and for each sample, computed the value of the statistic $Z_{\mathrm{local}} = \sum_{u=(i,j)} \sum_{v=(k,l): d(u,v) \le 2} X_u X_v$, where d(·, ·) is the Manhattan distance on the grid.
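The statistic just defined can be computed directly via shifted overlaps of the grid with itself. The sketch below (our own illustration) sums over ordered pairs at Manhattan distance 1 or 2 and excludes the u = v term; the paper's exact pair-counting convention may differ by a constant factor, which does not affect the test's rejection decision.

```python
import numpy as np

def z_local(X):
    """Z_local = sum over ordered pairs (u, v) of grid sites at Manhattan
    distance 1 or 2 of X_u * X_v, computed via shifted overlaps of X."""
    L = X.shape[0]
    # Offsets (di, dj) with 1 <= |di| + |dj| <= 2.
    offsets = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
               if 1 <= abs(di) + abs(dj) <= 2]
    Z = 0
    for di, dj in offsets:
        # Pair X[i, j] with X[i + di, j + dj] wherever both sites are on the grid.
        i0, i1 = max(0, -di), min(L, L - di)
        j0, j1 = max(0, -dj), min(L, L - dj)
        Z += np.sum(X[i0:i1, j0:j1] * X[i0 + di:i1 + di, j0 + dj:j1 + dj])
    return int(Z)
```

For example, on an all-ones 3 × 3 grid the statistic counts the ordered pairs themselves (24 at distance 1, 28 at distance 2), and on a checkerboard the distance-1 and distance-2 contributions have opposite signs.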
This statistic can be justified since we wish to account for the possibility of connections between friends-of-friends of which the social network may be lacking knowledge. We then compare with the value of the statistic $Z_{\mathrm{local}}$ on the provided sample, and reject the null hypothesis if this statistic corresponds to a p-value of ≤ 0.05. We repeat this for a wide range of values of τ ∈ [0, 1], with 500 repetitions for each τ. Our results are displayed in Figure 1. The x-axis marks the value of the parameter τ, and the y-axis indicates the fraction of repetitions in which we successfully rejected the null hypothesis. The performance of the MPLE alone is indicated by the orange line, while the performance of our statistic is indicated by the blue line. We find that our statistic is able to correctly reject the null at a much earlier point than the MPLE alone. In particular, our statistic manages to reject the null for τ ≥ 0.04, while the MPLE requires a parameter which is an order of magnitude larger, at 0.4. As mentioned before, in the former regime (when τ ≈ 0.04), it appears impossible to distinguish the distribution from a sample from the Ising model with the naked eye.

[Figure 1: Power of our statistic on synthetic data. The plot shows the probability of rejecting the null (y-axis) against the model parameter τ (x-axis, log scale) for the local correlation statistic and the MPLE.]

5.2 Last.fm Dataset

We now turn our focus to the Last.fm dataset from HetRec '11 [CBK11]. This dataset consists of data from n = 1892 users of the Last.fm online music system. On Last.fm, users can indicate (bi-directional) friend relationships, thus constructing a social network – our dataset has m = 12717 such edges. The dataset also contains users' listening habits – for each user we have a list of their fifty favorite artists, whose tracks they have listened to the most times.
We wish to test whether users' preference for a particular artist is distributed according to a high-temperature Ising model. Fixing some artist a of interest, we consider the vector $X^{(a)}$, where $X^{(a)}_u$ is +1 if user u has artist a among his favorite artists, and −1 otherwise. We wish to test the null hypothesis that $X^{(a)}$ is distributed according to an Ising model in the high-temperature regime on the known social network graph, with a common (unknown) external field h (i.e., θu = h for all u) and edge parameter θ (i.e., θuv = θ if u and v are neighbors in the graph, and 0 otherwise). Our overall experimental process was very similar to the synthetic data case. We gathered a list of the ten most common favorite artists, and repeated the following process for each artist a. We consider the vector $X^{(a)}$ (defined above) and run the MPLE estimator on it, obtaining estimates $\hat h$ and $\hat\theta$. We then run MCMC to generate 100 samples from the Ising model with these parameters, and for each sample, computed the value of the statistics $Z_k = \sum_u \sum_{v: d(u,v) \le k} (X_u - \tanh(\hat h))(X_v - \tanh(\hat h))$, where d(·, ·) is the distance on the graph, and k = 1 (the neighbor correlation statistic) or k = 2 (the local correlation statistic). Motivated by our theoretical results (Theorem 3), we consider a statistic where the variables are recentered by their marginal expectations, as this statistic experiences sharper concentration. We again consider k = 2 to account for the possibility of edges which are unknown to the social network. Strikingly, we found that the plausibility of the Ising modelling assumption varies significantly depending on the artist. We highlight some of our more interesting findings here; see the supplementary material for more details. The most popular artist in the dataset was Lady Gaga, who was a favorite artist of 611 users in the dataset. We found that $X^{(\text{Lady Gaga})}$ had statistics $Z_1 = 9017.3$ and $Z_2 = 106540$.
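A minimal sketch of the recentered statistic $Z_k$ on a general graph, using powers of the adjacency matrix to find vertices within graph distance k. This is our own illustration: it sums over ordered pairs with u ≠ v, and the paper's convention for the u = v diagonal term (an additive constant) may differ.

```python
import numpy as np

def z_k(X, adj, k, h_hat):
    """Recentered statistic Z_k = sum over ordered pairs (u, v), u != v, with
    graph distance d(u, v) <= k, of (X_u - tanh(h_hat)) * (X_v - tanh(h_hat)).
    adj is a 0/1 adjacency matrix."""
    n = len(X)
    xc = X - np.tanh(h_hat)
    # reach[u, v] = 1 iff d(u, v) <= (number of completed iterations).
    reach = np.eye(n, dtype=int)
    for _ in range(k):
        reach = ((reach + reach @ adj) > 0).astype(int)
    mask = reach * (1 - np.eye(n, dtype=int))   # drop the u == v diagonal
    return float(xc @ mask @ xc)
```

On the path graph 0 – 1 – 2 with spins (+1, +1, −1) and h = 0, the distance-1 contributions cancel while the distance-2 pair is negative.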
The range of these statistics computed by MCMC can be seen in Figure 2 of the supplementary material – clearly, the computed statistics fall far outside these ranges, and we can reject the null hypothesis with p ≪ 0.01. Similar results held for other popular pop musicians, including Britney Spears, Christina Aguilera, Rihanna, and Katy Perry. However, we observed qualitatively different results for The Beatles, the fourth most popular artist, a favorite of 480 users. We found that $X^{(\text{The Beatles})}$ had statistics $Z_1 = 2157.8$ and $Z_2 = 22196$. The range of these statistics computed by MCMC can be seen in Figure 3 of the supplementary material. This time, the computed statistics fall near the center of this range, and we cannot reject the null. Similar results held for the rock band Muse. Based on our investigation, our statistic seems to indicate that for the pop artists, the null fails to effectively model the distribution, while it performs much better for the rock artists. We conjecture that this may be due to the highly divisive popularity of pop artists like Lady Gaga and Britney Spears – while some users may love these artists (and may form dense cliques within the graph), others have little to no interest in their music. The null would have to be expanded to accommodate heterogeneity to model such effects. On the other hand, rock bands like The Beatles and Muse seem to be much more uniform in their appeal: users seem to be much more homogeneous when it comes to preference for these groups.

Acknowledgments

Research was supported by NSF CCF-1617730, CCF-1650733, and ONR N00014-12-1-0999. Part of this work was done while GK was an intern at Microsoft Research New England.

References

[AKN06] Pieter Abbeel, Daphne Koller, and Andrew Y. Ng. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7(Aug):1743–1788, 2006.
[BGS14] Guy Bresler, David Gamarnik, and Devavrat Shah.
Structure learning of antiferromagnetic Ising models. In Advances in Neural Information Processing Systems 27, NIPS ’14, pages 2852–2860. Curran Associates, Inc., 2014.
[Bha16] Bhaswar B. Bhattacharya. Power of graph-based two-sample tests. arXiv preprint arXiv:1508.07530, 2016.
[BK16] Guy Bresler and Mina Karzand. Learning a tree-structured Ising model in order to make predictions. arXiv preprint arXiv:1604.06749, 2016.
[BM16] Bhaswar B. Bhattacharya and Sumit Mukherjee. Inference in Ising models. Bernoulli, 2016.
[Bre15] Guy Bresler. Efficiently learning Ising models on arbitrary graphs. In Proceedings of the 47th Annual ACM Symposium on the Theory of Computing, STOC ’15, pages 771–782, New York, NY, USA, 2015. ACM.
[CBK11] Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. Second workshop on information heterogeneity and fusion in recommender systems (HetRec 2011). In Proceedings of the 5th ACM Conference on Recommender Systems, RecSys ’11, pages 387–388, New York, NY, USA, 2011. ACM.
[Cha05] Sourav Chatterjee. Concentration Inequalities with Exchangeable Pairs. PhD thesis, Stanford University, June 2005.
[Cha07] Sourav Chatterjee. Estimation in spin glasses: A first step. The Annals of Statistics, 35(5):1931–1946, October 2007.
[CL68] C.K. Chow and C.N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[CT06] Imre Csiszár and Zsolt Talata. Consistent estimation of the basic neighborhood of Markov random fields. The Annals of Statistics, 34(1):123–145, 2006.
[DCG68] Stanley Deser, Max Chrétien, and Eugene Gross. Statistical Physics, Phase Transitions, and Superfluidity. Gordon and Breach, 1968.
[DDK18] Constantinos Daskalakis, Nishanth Dikkala, and Gautam Kamath. Testing Ising models. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’18, Philadelphia, PA, USA, 2018. SIAM.
[DMR11] Constantinos Daskalakis, Elchanan Mossel, and Sébastien Roch.
Evolutionary trees and the Ising model on the Bethe lattice: A proof of Steel’s conjecture. Probability Theory and Related Fields, 149(1):149–189, 2011.
[EK10] David Easley and Jon Kleinberg. Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press, 2010.
[Ell93] Glenn Ellison. Learning, local interaction, and coordination. Econometrica, 61(5):1047–1071, 1993.
[Fel04] Joseph Felsenstein. Inferring Phylogenies. Sinauer Associates, Sunderland, 2004.
[Fre75] David A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3(1):100–118, 1975.
[GG86] Stuart Geman and Christine Graffigne. Markov random field image models and their applications to computer vision. In Proceedings of the International Congress of Mathematicians, pages 1496–1517. American Mathematical Society, 1986.
[GLP17] Reza Gheissari, Eyal Lubetzky, and Yuval Peres. Concentration inequalities for polynomials of contracting Ising models. arXiv preprint arXiv:1706.00121, 2017.
[HKM17] Linus Hamilton, Frederic Koehler, and Ankur Moitra. Information theoretic properties of Markov random fields, and their algorithmic applications. In Advances in Neural Information Processing Systems 30, NIPS ’17. Curran Associates, Inc., 2017.
[Hop82] John J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
[Isi25] Ernst Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik A Hadrons and Nuclei, 31(1):253–258, 1925.
[JJR11] Ali Jalali, Christopher C. Johnson, and Pradeep K. Ravikumar. On learning discrete graphical models using greedy methods. In Advances in Neural Information Processing Systems 24, NIPS ’11, pages 1935–1943. Curran Associates, Inc., 2011.
[KM17] Adam Klivans and Raghu Meka. Learning graphical models using multiplicative weights.
In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’17, Washington, DC, USA, 2017. IEEE Computer Society.
[LPW09] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2009.
[MdCCU16] Abraham Martín del Campo, Sarah Cepeda, and Caroline Uhler. Exact goodness-of-fit testing for the Ising model. Scandinavian Journal of Statistics, 2016.
[MJC+14] Lester Mackey, Michael I. Jordan, Richard Y. Chen, Brendan Farrell, and Joel A. Tropp. Matrix concentration inequalities via the method of exchangeable pairs. The Annals of Probability, 42(3):906–945, 2014.
[MS10] Andrea Montanari and Amin Saberi. The spread of innovations in social networks. Proceedings of the National Academy of Sciences, 107(47):20196–20201, 2010.
[Ons44] Lars Onsager. Crystal statistics. I. A two-dimensional model with an order-disorder transition. Physical Review, 65(3–4):117, 1944.
[RWL10] Pradeep Ravikumar, Martin J. Wainwright, and John D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[SBSB06] Elad Schneidman, Michael J. Berry, Ronen Segev, and William Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, 2006.
[SK75] David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1792, 1975.
[SW12] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
[SZ92] Daniel W. Stroock and Boguslaw Zegarlinski. The logarithmic Sobolev inequality for discrete spin systems on a lattice. Communications in Mathematical Physics, 149(1):175–193, 1992.
[VMLC16] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov.
Interaction screening: Efficient and sample-optimal learning of Ising models. In Advances in Neural Information Processing Systems 29, NIPS ’16, pages 2595–2603. Curran Associates, Inc., 2016.
Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems

Alyson K. Fletcher, Dept. Statistics, UC Los Angeles, akfletcher@ucla.edu; Mojtaba Sahraee-Ardakan, Dept. EE, UC Los Angeles, msahraee@ucla.edu; Sundeep Rangan, Dept. ECE, NYU, srangan@nyu.edu; Philip Schniter, Dept. ECE, The Ohio State Univ., schniter@ece.osu.edu

Abstract We consider the problem of estimating a random vector x from noisy linear measurements y = Ax + w in the setting where parameters θ on the distribution of x and w must be learned in addition to the vector x. This problem arises in a wide range of statistical learning and linear inverse problems. Our main contribution shows that a computationally simple iterative message passing algorithm can provably obtain asymptotically consistent estimates in a certain high-dimensional large system limit (LSL) under very general parametrizations. Importantly, this LSL applies to all right-rotationally random A – a much larger class of matrices than the i.i.d. sub-Gaussian matrices to which many past message passing approaches are restricted. In addition, a simple testable condition is provided under which the mean square error (MSE) on the vector x matches the Bayes optimal MSE predicted by the replica method. The proposed algorithm uses a combination of Expectation-Maximization (EM) with a recently-developed Vector Approximate Message Passing (VAMP) technique. We develop an analysis framework that shows that the parameter estimates in each iteration of the algorithm converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations. The SE equations, which extend those of VAMP without parameter adaptation, depend only on the initial parameter estimates and the statistical properties of the problem, and can be used to predict consistency and precisely characterize other performance measures of the method.
[31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.]

1 Introduction

Consider the problem of estimating a random vector $x^0$ from linear measurements $y$ of the form $y = Ax^0 + w$, $w \sim \mathcal{N}(0, \theta_2^{-1}I)$, $x^0 \sim p(x|\theta_1)$, (1) where $A \in \mathbb{R}^{M \times N}$ is a known matrix, $p(x|\theta_1)$ is a density on $x^0$ with parameters $\theta_1$, $w$ is additive white Gaussian noise (AWGN) independent of $x^0$, and $\theta_2 > 0$ is the noise precision (inverse variance). The goal is to estimate $x^0$ while simultaneously learning the unknown parameters $\theta := (\theta_1, \theta_2)$ from the data $y$ and $A$. This problem arises in Bayesian forms of linear inverse problems in signal processing, as well as in linear regression in statistics. Exact estimation of the parameters $\theta$ via maximum likelihood or other methods is generally intractable. One promising class of approximate methods combines approximate message passing (AMP) [1] with expectation-maximization (EM). AMP and its generalizations [2] are a powerful, relatively recent class of algorithms based on expectation propagation-type techniques. The AMP methodology has the benefit of being computationally fast and has been successfully applied to a wide range of problems. Most importantly, for large, i.i.d., sub-Gaussian random matrices $A$, the performance of AMP methods can be exactly predicted by a scalar state evolution (SE) [3, 4] that provides testable conditions for optimality, even for non-convex priors. When the parameters $\theta$ are unknown, AMP can be easily combined with EM for joint learning of the parameters $\theta$ and the vector $x$ [5–7]. A recent work [8] has combined EM with the so-called Vector AMP (VAMP) method of [9]. Similar to AMP, VAMP is based on expectation propagation (EP) approximations of belief propagation [10, 11] and can also be considered as a special case of expectation consistent (EC) approximate inference [12–14]. VAMP's key attraction is that it applies to a larger class of matrices $A$ than standard AMP methods. Aside from Gaussian i.i.d.
A, standard AMP techniques often diverge and require a variety of modifications for stability [15–18]. In contrast, VAMP has provable SE analyses and convergence guarantees that apply to all right-rotationally invariant matrices A [9, 19] – a significantly larger class of matrices than i.i.d. Gaussians. Under further conditions, the mean-squared error (MSE) of VAMP matches the replica predictions for optimality [20–23]. For the case when the distributions of x and w are unknown, the work [8] proposed to combine EM and VAMP using the approximate inference framework of [24]. The combination of AMP with EM methods has been particularly successful in neural modeling problems [25, 26]. While [8] provides numerical simulations demonstrating excellent performance of this EM-VAMP method on a range of synthetic data, there were no provable convergence guarantees.

Contributions of this work The SE analysis below provides a rigorous and exact characterization of the dynamics of EM-VAMP. In particular, the analysis can determine under which initial conditions and problem statistics EM-VAMP will yield asymptotically consistent parameter estimates. • Rigorous state evolution analysis: We provide a rigorous analysis of a generalization of EM-VAMP that we call Adaptive VAMP. Similar to the analysis of VAMP, we consider a certain large system limit (LSL) where the matrix A is random and right-rotationally invariant. Importantly, this class of matrices is much more general than the i.i.d. Gaussians used in the original LSL analysis of Bayati and Montanari [3]. It is shown (Theorem 1) that in the LSL, the parameter estimates at each iteration converge to deterministic limits $\overline{\theta}_k$ that can be computed from a set of SE equations that extend those of VAMP. The analysis also exactly characterizes the asymptotic joint distribution of the estimates $\hat{x}$ and the true vector $x^0$.
The SE equations depend only on the initial parameter estimate, the adaptation function, and statistics on the matrix $A$, the vector $x^0$, and the noise $w$. • Asymptotic consistency: It is also shown (Theorem 2) that under an additional identifiability condition and a simple auto-tuning procedure, Adaptive VAMP can yield provably consistent parameter estimates in the LSL. The technique uses an ML estimation approach from [7]. Remarkably, the result is true under very general problem formulations. • Bayes optimality: In the case when the parameter estimates converge to the true value, the behavior of Adaptive VAMP matches that of VAMP. In this case, it is shown in [9] that, when the SE equations have a unique fixed point, the MSE of VAMP matches the MSE of the Bayes optimal estimator predicted by the replica method [21–23]. In this way, we have developed a computationally efficient method for a large class of linear inverse problems with the properties that, in a certain high-dimensional limit: (1) the performance of the algorithm can be exactly characterized; (2) the parameter estimates $\hat\theta$ are asymptotically consistent; and (3) the algorithm has testable conditions for which the signal estimates $\hat{x}$ match replica predictions for Bayes optimality.

2 VAMP with Adaptation

Assume the prior on $x$ can be written as $p(x|\theta_1) = \frac{1}{Z_1(\theta_1)}\exp\left[-f_1(x|\theta_1)\right]$, $f_1(x|\theta_1) = \sum_{n=1}^N f_1(x_n|\theta_1)$, (2)

Algorithm 1 Adaptive VAMP
Require: Matrix $A \in \mathbb{R}^{M \times N}$, measurement vector $y$, denoiser function $g_1(\cdot)$, statistic function $\phi_1(\cdot)$, adaptation function $T_1(\cdot)$, and number of iterations $N_{it}$.
1: Select initial $r_{10}$, $\gamma_{10} \ge 0$, $\hat\theta_{10}$, $\hat\theta_{20}$.
2: for k = 0, 1, . . .
, $N_{it} - 1$ do
3: // Input denoising
4: $\hat{x}_{1k} = g_1(r_{1k}, \gamma_{1k}, \hat\theta_{1k})$, $\eta_{1k} = \gamma_{1k}/\langle g_1'(r_{1k}, \gamma_{1k}, \hat\theta_{1k})\rangle$
5: $\gamma_{2k} = \eta_{1k} - \gamma_{1k}$
6: $r_{2k} = (\eta_{1k}\hat{x}_{1k} - \gamma_{1k}r_{1k})/\gamma_{2k}$
7:
8: // Input parameter update
9: $\hat\theta_{1,k+1} = T_1(\mu_{1k})$, $\mu_{1k} = \langle\phi_1(r_{1k}, \gamma_{1k}, \hat\theta_{1k})\rangle$
10:
11: // Output estimation
12: $\hat{x}_{2k} = Q_k^{-1}(\hat\theta_{2k}A^{\mathsf T}y + \gamma_{2k}r_{2k})$, $Q_k = \hat\theta_{2k}A^{\mathsf T}A + \gamma_{2k}I$
13: $\eta_{2k}^{-1} = (1/N)\,\mathrm{tr}(Q_k^{-1})$
14: $\gamma_{1,k+1} = \eta_{2k} - \gamma_{2k}$
15: $r_{1,k+1} = (\eta_{2k}\hat{x}_{2k} - \gamma_{2k}r_{2k})/\gamma_{1,k+1}$
16:
17: // Output parameter update
18: $\hat\theta_{2,k+1}^{-1} = (1/N)\left\{\|y - A\hat{x}_{2k}\|^2 + \mathrm{tr}(AQ_k^{-1}A^{\mathsf T})\right\}$
19: end for

where $f_1(\cdot)$ is a separable penalty function, $\theta_1$ is a parameter vector, and $Z_1(\theta_1)$ is a normalization constant. With some abuse of notation, we have used $f_1(\cdot)$ both for the function on the vector $x$ and for its components $x_n$. Since $f_1(x|\theta_1)$ is separable, $x$ has i.i.d. components conditioned on $\theta_1$. The likelihood function under the Gaussian model (1) can be written as $p(y|x, \theta_2) := \frac{1}{Z_2(\theta_2)}\exp\left[-f_2(x, y|\theta_2)\right]$, $f_2(x, y|\theta_2) := \frac{\theta_2}{2}\|y - Ax\|^2$, (3) where $Z_2(\theta_2) = (2\pi/\theta_2)^{N/2}$. The joint density of $x, y$ given parameters $\theta = (\theta_1, \theta_2)$ is then $p(x, y|\theta) = p(x|\theta_1)p(y|x, \theta_2)$. (4) The problem is to estimate the parameters $\theta = (\theta_1, \theta_2)$ along with the vector $x^0$. The steps of the proposed Adaptive VAMP algorithm to perform this estimation are shown in Algorithm 1, which is a generalization of the EM-VAMP method in [8]. In each iteration, the algorithm produces, for $i = 1, 2$, estimates $\hat\theta_{ik}$ of the parameter $\theta_i$, along with estimates $\hat{x}_{ik}$ of the vector $x^0$. The algorithm is tuned by selecting three key functions: (i) a denoiser function $g_1(\cdot)$; (ii) an adaptation statistic $\phi_1(\cdot)$; and (iii) a parameter selection function $T_1(\cdot)$. The denoiser is used to produce the estimates $\hat{x}_{1k}$, while the adaptation statistic and parameter estimation functions produce the estimates $\hat\theta_{1k}$. Denoiser function The denoiser function $g_1(\cdot)$ is discussed in detail in [9] and is generally based on the prior $p(x|\theta_1)$.
In the original EM-VAMP algorithm [8], $g_1(\cdot)$ is selected as the so-called minimum mean-squared error (MMSE) denoiser. Specifically, in each iteration, the variables $r_i$, $\gamma_i$ and $\hat\theta_i$ were used to construct belief estimates, $b_i(x|r_i, \gamma_i, \hat\theta_i) \propto \exp\left[-f_i(x, y|\hat\theta_i) - \frac{\gamma_i}{2}\|x - r_i\|^2\right]$, (5) which represent estimates of the posterior density $p(x|y, \theta)$. To keep the notation symmetric, we have written $f_1(x, y|\hat\theta_1)$ for $f_1(x|\hat\theta_1)$ even though the first penalty function does not depend on $y$. The EM-VAMP method then selects $g_1(\cdot)$ to be the mean of the belief estimate, $g_1(r_1, \gamma_1, \theta_1) := E\left[x \mid r_1, \gamma_1, \theta_1\right]$. (6) For line 4 of Algorithm 1, we define $[g_1'(r_{1k}, \gamma_{1k}, \theta_1)]_n := \partial[g_1(r_{1k}, \gamma_{1k}, \theta_1)]_n/\partial r_{1n}$ and we use $\langle\cdot\rangle$ for the empirical mean of a vector, i.e., $\langle u\rangle = (1/N)\sum_{n=1}^N u_n$. Hence, $\eta_{1k}$ in line 4 is a scaled inverse divergence. It is shown in [9] that, for the MMSE denoiser (6), $\eta_{1k}$ is the inverse average posterior variance. Estimation for θ1 with finite statistics For the EM-VAMP algorithm [8], the parameter update for $\hat\theta_{1,k+1}$ is performed via a maximization $\hat\theta_{1,k+1} = \arg\max_{\theta_1} E\left[\ln p(x|\theta_1) \mid r_{1k}, \gamma_{1k}, \hat\theta_{1k}\right]$, (7) where the expectation is with respect to the belief estimate $b_i(\cdot)$ in (5). It is shown in [8] that using (7) is equivalent to an approximation of the M-step in the standard EM method. In the adaptive VAMP method in Algorithm 1, the M-step maximization (7) is replaced by line 9. Note that line 9 again uses $\langle\cdot\rangle$ to denote the empirical average, $\mu_{1k} = \langle\phi_1(r_{1k}, \gamma_{1k}, \hat\theta_{1k})\rangle := \frac{1}{N}\sum_{n=1}^N \phi_1(r_{1k,n}, \gamma_{1k}, \hat\theta_{1k}) \in \mathbb{R}^d$, (8) so $\mu_{1k}$ is the empirical average of some d-dimensional statistic $\phi_1(\cdot)$ over the components of $r_{1k}$. The parameter estimate update $\hat\theta_{1,k+1}$ is then computed from some function of this statistic, $T_1(\mu_{1k})$.
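To make Algorithm 1 concrete, here is a minimal end-to-end sketch for the toy case of a zero-mean Gaussian prior $x_n \sim \mathcal{N}(0, \theta_1)$, for which the MMSE denoiser (6) and both EM updates have closed forms. This is our own illustration: the function name, initializations, and the closed-form updates are our choices for this toy prior, whereas the paper's algorithm targets general separable priors where these quantities are not available in closed form.

```python
import numpy as np

def adaptive_vamp_gaussian(y, A, n_iter=50, gamma1=1e-3, theta1=1.0, theta2=1.0):
    """Toy instance of Algorithm 1 (Adaptive VAMP) for a zero-mean Gaussian
    prior x_n ~ N(0, theta1).  Here theta1 is the prior variance and theta2
    the noise precision."""
    M, N = A.shape
    r1 = np.zeros(N)
    AtA, Aty, I = A.T @ A, A.T @ y, np.eye(N)
    for _ in range(n_iter):
        # Input denoising (lines 4-6): MMSE denoiser for the Gaussian prior,
        # g1(r1) = alpha1 * r1 with alpha1 = <g1'> = gamma1 / (gamma1 + 1/theta1).
        alpha1 = gamma1 / (gamma1 + 1.0 / theta1)
        x1 = alpha1 * r1
        eta1 = gamma1 / alpha1            # inverse average posterior variance
        gamma2 = eta1 - gamma1
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2
        # Input parameter update (line 9): EM update of the prior variance,
        # i.e. the second moment of x under the belief estimate (5).
        theta1 = np.mean(x1**2) + 1.0 / eta1
        # Output (LMMSE) estimation (lines 12-15).
        Q = theta2 * AtA + gamma2 * I
        Qinv = np.linalg.inv(Q)
        x2 = Qinv @ (theta2 * Aty + gamma2 * r2)
        eta2 = N / np.trace(Qinv)
        gamma1 = eta2 - gamma2
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1
        # Output parameter update (line 18): EM update of the noise precision.
        theta2 = N / (np.linalg.norm(y - A @ x2)**2 + np.trace(A @ Qinv @ A.T))
    return x2, theta1, theta2
```

On synthetic data from the matched Gaussian model, both parameter estimates should land near their true values, consistent with the consistency results discussed in the paper.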
We show in the full paper [27] that there are two important cases where the EM update (7) can be computed from a finite-dimensional statistic as in line 9: (i) the prior $p(x|\theta_1)$ is given by an exponential family, $f_1(x|\theta_1) = \theta_1^{\mathsf T}\varphi(x)$ for some sufficient statistic $\varphi(x)$; and (ii) there are a finite number of values for the parameter $\theta_1$. For other cases, we can approximate more general parametrizations via discretization of the parameter values $\theta_1$. The updates in line 9 can also incorporate other types of updates, as we will see below. But we stress that it is preferable to compute the estimate for $\theta_1$ directly from the maximization (7) -- the use of a finite-dimensional statistic is for the sake of analysis.

Estimation for $\theta_2$ with finite statistics: It will be useful to also write the adaptation of $\theta_2$ in line 18 of Algorithm 1 in a form similar to line 9. First, take a singular value decomposition (SVD) of $A$ of the form
$A = U S V^{\mathsf T}$, $S = \mathrm{Diag}(s)$, (9)
and define the transformed error and transformed noise,
$q_k := V^{\mathsf T}(r_{2k} - x^0)$, $\xi := U^{\mathsf T} w$. (10)
Then, it is shown in the full paper [27] that $\hat\theta_{2,k+1}$ in line 18 can be written as
$\hat\theta_{2,k+1} = T_2(\mu_{2k}) := \frac{1}{\mu_{2k}}$, $\mu_{2k} = \langle \phi_2(q_k, \xi, s, \gamma_{2k}, \hat\theta_{2k}) \rangle$, (11)
where
$\phi_2(q, \xi, s, \gamma_2, \hat\theta_2) := \frac{\gamma_2^2}{(s^2\hat\theta_2 + \gamma_2)^2}(sq + \xi)^2 + \frac{s^2}{s^2\hat\theta_2 + \gamma_2}$. (12)
Of course, we cannot directly compute $q_k$ in (10) since we do not know the true $x^0$. Nevertheless, this form will be useful for the analysis.

3 State Evolution in the Large System Limit

3.1 Large System Limit

Similar to the analysis of VAMP in [9], we analyze Algorithm 1 in a certain large system limit (LSL). The LSL framework was developed by Bayati and Montanari in [3], and we review some of the key definitions in the full paper [27]. As in the analysis of VAMP, the LSL considers a sequence of problems indexed by the vector dimension N.
For each $N$, we assume that there is a "true" vector $x^0 \in \mathbb{R}^N$ that is observed through measurements of the form
$y = A x^0 + w \in \mathbb{R}^N$, $w \sim \mathcal{N}(0, \theta_2^{-1} I_N)$, (13)
where $A \in \mathbb{R}^{N\times N}$ is a known transform, $w$ is Gaussian noise and $\theta_2$ represents a "true" noise precision. The noise precision $\theta_2$ does not change with $N$. Identical to [9], the transform $A$ is modeled as a large, right-orthogonally invariant random matrix. Specifically, we assume that it has an SVD of the form (9), where $U$ and $V$ are $N \times N$ orthogonal matrices such that $U$ is deterministic and $V$ is Haar distributed (i.e., uniformly distributed on the set of orthogonal matrices). As described in [9], although we have assumed a square matrix $A$, we can consider general rectangular $A$ by adding zero singular values. Using the definitions in the full paper [27], we assume that the components of the singular-value vector $s \in \mathbb{R}^N$ in (9) converge empirically with second-order moments as
$\lim_{N\to\infty} \{s_n\} \overset{PL(2)}{=} S$, (14)
for some non-negative random variable $S$ with $\mathbb{E}[S] > 0$ and $S \in [0, S_{\max}]$ for some finite maximum value $S_{\max}$. Additionally, we assume that the components of the true vector $x^0$ and the initial input to the denoiser $r_{10}$ converge empirically as
$\lim_{N\to\infty} \{(r_{10,n}, x^0_n)\} \overset{PL(2)}{=} (R_{10}, X^0)$, $R_{10} = X^0 + P_0$, $P_0 \sim \mathcal{N}(0, \tau_{10})$, (15)
where $X^0$ is a random variable representing the true distribution of the components $x^0$; $P_0$ is an initial error and $\tau_{10}$ is an initial error variance. The variable $X^0$ may be distributed as $X^0 \sim p(\cdot|\theta_1)$ for some true parameter $\theta_1$. However, in order to incorporate under-modeling, the existence of such a true parameter is not required. We also assume that the initial second-order term and parameter estimates converge almost surely as
$\lim_{N\to\infty} (\gamma_{10}, \hat\theta_{10}, \hat\theta_{20}) = (\overline\gamma_{10}, \overline\theta_{10}, \overline\theta_{20})$ (16)
for some $\overline\gamma_{10} > 0$ and $(\overline\theta_{10}, \overline\theta_{20})$.

3.2 Error and Sensitivity Functions

We next need to introduce parametric forms of two key terms from [9]: error functions and sensitivity functions.
The error functions describe the MSE of the denoiser and output estimators under AWGN measurements. Specifically, for the denoiser $g_1(\cdot,\gamma_1,\hat\theta_1)$, we define the error function as
$\mathcal{E}_1(\gamma_1,\tau_1,\hat\theta_1) := \mathbb{E}\left[(g_1(R_1,\gamma_1,\hat\theta_1) - X^0)^2\right]$, $R_1 = X^0 + P$, $P \sim \mathcal{N}(0,\tau_1)$, (17)
where $X^0$ is distributed according to the true distribution of the components $x^0$ (see above). The function $\mathcal{E}_1(\gamma_1,\tau_1,\hat\theta_1)$ thus represents the MSE of the estimate $\hat{X} = g_1(R_1,\gamma_1,\hat\theta_1)$ from a measurement $R_1$ corrupted by Gaussian noise of variance $\tau_1$ under the parameter estimate $\hat\theta_1$. For the output estimator, we define the error function as
$\mathcal{E}_2(\gamma_2,\tau_2,\hat\theta_2) := \lim_{N\to\infty} \frac{1}{N}\,\mathbb{E}\|g_2(r_2,\gamma_2,\hat\theta_2) - x^0\|^2$, $x^0 = r_2 + q$, $q \sim \mathcal{N}(0,\tau_2 I)$, $y = A x^0 + w$, $w \sim \mathcal{N}(0,\theta_2^{-1} I)$, (18)
which is the average per-component error of the vector estimate under Gaussian noise. The dependence on the true noise precision $\theta_2$ is suppressed. The sensitivity functions describe the expected divergence of the estimators. For the denoiser, the sensitivity function is defined as
$A_1(\gamma_1,\tau_1,\hat\theta_1) := \mathbb{E}\left[g_1'(R_1,\gamma_1,\hat\theta_1)\right]$, $R_1 = X^0 + P$, $P \sim \mathcal{N}(0,\tau_1)$, (19)
which is the average derivative under a Gaussian noise input. For the output estimator, the sensitivity is defined as
$A_2(\gamma_2,\tau_2,\hat\theta_2) := \lim_{N\to\infty} \frac{1}{N}\,\mathrm{tr}\left[\frac{\partial g_2(r_2,\gamma_2,\hat\theta_2)}{\partial r_2}\right]$, (20)
where $r_2$ is distributed as in (18). The paper [9] discusses the error and sensitivity functions in detail and shows how these functions can be easily evaluated.

3.3 State Evolution Equations

We can now describe our main result, which is the set of SE equations for adaptive VAMP. The equations are an extension of those in the VAMP paper [9], with modifications for the parameter estimation. For a given iteration $k \ge 1$, consider the set of components $\{(\hat{x}_{1k,n}, r_{1k,n}, x^0_n),\ n = 1,\dots,N\}$. This set represents the components of the true vector $x^0$, its corresponding estimate $\hat{x}_{1k}$ and the denoiser input $r_{1k}$.
We will show that, under certain assumptions, these components converge empirically as
$\lim_{N\to\infty} \{(\hat{x}_{1k,n}, r_{1k,n}, x^0_n)\} \overset{PL(2)}{=} (\hat{X}_{1k}, R_{1k}, X^0)$, (21)
where the random variables $(\hat{X}_{1k}, R_{1k}, X^0)$ are given by
$R_{1k} = X^0 + P_k$, $P_k \sim \mathcal{N}(0, \tau_{1k})$, $\hat{X}_{1k} = g_1(R_{1k}, \overline\gamma_{1k}, \overline\theta_{1k})$, (22)
for constants $\overline\gamma_{1k}$, $\overline\theta_{1k}$ and $\tau_{1k}$ that will be defined below. We will also see that $\hat\theta_{1k} \to \overline\theta_{1k}$, so $\overline\theta_{1k}$ represents the asymptotic parameter estimate. The model (22) shows that each component $r_{1k,n}$ appears as the true component $x^0_n$ plus Gaussian noise. The corresponding estimate $\hat{x}_{1k,n}$ then appears as the denoiser output with $r_{1k,n}$ as the input and $\overline\theta_{1k}$ as the parameter estimate. Hence, the asymptotic behavior of any component $x^0_n$ and its corresponding $\hat{x}_{1k,n}$ is identical to a simple scalar system. We will refer to (21)-(22) as the denoiser's scalar equivalent model. We will also show that the transformed errors $q_k$ and noise $\xi$ in (10) and the singular values $s$ converge empirically to a set of independent random variables $(Q_k, \Xi, S)$ given by
$\lim_{N\to\infty} \{(q_{k,n}, \xi_n, s_n)\} \overset{PL(2)}{=} (Q_k, \Xi, S)$, $Q_k \sim \mathcal{N}(0, \tau_{2k})$, $\Xi \sim \mathcal{N}(0, \theta_2^{-1})$, (23)
where $S$ has the distribution of the singular values of $A$, $\tau_{2k}$ is a variance that will be defined below and $\theta_2$ is the true noise precision in the measurement model (13). All the variables in (23) are independent. Thus (23) is a scalar equivalent model for the output estimator. The variance terms are defined recursively through the state evolution equations
$\overline\alpha_{1k} = A_1(\overline\gamma_{1k}, \tau_{1k}, \overline\theta_{1k})$, $\overline\eta_{1k} = \overline\gamma_{1k}/\overline\alpha_{1k}$, $\overline\gamma_{2k} = \overline\eta_{1k} - \overline\gamma_{1k}$, (24a)
$\overline\theta_{1,k+1} = T_1(\overline\mu_{1k})$, $\overline\mu_{1k} = \mathbb{E}\,\phi_1(R_{1k}, \overline\gamma_{1k}, \overline\theta_{1k})$, (24b)
$\tau_{2k} = \frac{1}{(1-\overline\alpha_{1k})^2}\left[\mathcal{E}_1(\overline\gamma_{1k}, \tau_{1k}, \overline\theta_{1k}) - \overline\alpha_{1k}^2 \tau_{1k}\right]$, (24c)
$\overline\alpha_{2k} = A_2(\overline\gamma_{2k}, \tau_{2k}, \overline\theta_{2k})$, $\overline\eta_{2k} = \overline\gamma_{2k}/\overline\alpha_{2k}$, $\overline\gamma_{1,k+1} = \overline\eta_{2k} - \overline\gamma_{2k}$, (24d)
$\overline\theta_{2,k+1} = T_2(\overline\mu_{2k})$, $\overline\mu_{2k} = \mathbb{E}\,\phi_2(Q_k, \Xi, S, \overline\gamma_{2k}, \overline\theta_{2k})$, (24e)
$\tau_{1,k+1} = \frac{1}{(1-\overline\alpha_{2k})^2}\left[\mathcal{E}_2(\overline\gamma_{2k}, \tau_{2k}, \overline\theta_{2k}) - \overline\alpha_{2k}^2 \tau_{2k}\right]$, (24f)
which are initialized with $\tau_{10} = \mathbb{E}[(R_{10} - X^0)^2]$ and the $(\overline\gamma_{10}, \overline\theta_{10}, \overline\theta_{20})$ defined from the limit (16).
The expectation in (24b) is with respect to the random variables in (21), and the expectation in (24e) is with respect to the random variables in (23).

Theorem 1. Consider the outputs of Algorithm 1. Under the above assumptions and definitions, assume additionally that for all iterations $k$: (i) the solution $\overline\alpha_{1k}$ from the SE equations (24) satisfies $\overline\alpha_{1k} \in (0,1)$; (ii) the functions $A_i(\cdot)$, $\mathcal{E}_i(\cdot)$ and $T_i(\cdot)$ are continuous at $(\gamma_i, \tau_i, \hat\theta_i, \mu_i) = (\overline\gamma_{ik}, \tau_{ik}, \overline\theta_{ik}, \overline\mu_{ik})$; (iii) the denoiser function $g_1(r_1, \gamma_1, \hat\theta_1)$ and its derivative $g_1'(r_1, \gamma_1, \hat\theta_1)$ are uniformly Lipschitz in $r_1$ at $(\gamma_1, \hat\theta_1) = (\overline\gamma_{1k}, \overline\theta_{1k})$ (see the full paper [27] for a precise definition of uniform Lipschitz continuity); and (iv) the adaptation statistic $\phi_1(r_1, \gamma_1, \hat\theta_1)$ is uniformly pseudo-Lipschitz of order 2 in $r_1$ at $(\gamma_1, \hat\theta_1) = (\overline\gamma_{1k}, \overline\theta_{1k})$. Then, for any fixed iteration $k \ge 0$,
$\lim_{N\to\infty} (\alpha_{ik}, \eta_{ik}, \gamma_{ik}, \mu_{ik}, \hat\theta_{ik}) = (\overline\alpha_{ik}, \overline\eta_{ik}, \overline\gamma_{ik}, \overline\mu_{ik}, \overline\theta_{ik})$ (25)
almost surely. In addition, the empirical limit (21) holds almost surely for all $k > 0$, and (23) holds almost surely for all $k \ge 0$.

Theorem 1 shows that, in the LSL, the parameter estimates $\hat\theta_{ik}$ converge to deterministic limits $\overline\theta_{ik}$ that can be precisely predicted by the state-evolution equations. The SE equations incorporate the true distribution of the components of $x^0$, the true noise precision $\theta_2$, and the specific parameter estimation and denoiser functions used by the adaptive VAMP method. In addition, similar to the SE analysis of VAMP in [9], the SE equations also predict the asymptotic joint distribution of $x^0$ and its estimates $\hat{x}_{ik}$. This joint distribution can be used to measure various performance metrics such as MSE -- see [9]. In this way, we have provided a rigorous and precise characterization of a class of adaptive VAMP algorithms that includes EM-VAMP.
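As a concrete sanity check of the recursion (24), here is a minimal all-Gaussian instance: prior $x \sim \mathcal{N}(0, v)$, unit singular values, and matched parameter estimates (so the $T_i$ updates are trivial). The closed forms for the $A_i$ and $\mathcal{E}_i$ are standard Gaussian-MMSE identities; this instantiation is ours, not taken from the paper.

```python
# SE recursion (24a)-(24f) for an all-Gaussian model: x ~ N(0, v),
# y = x + w with noise precision theta2, all singular values equal to one.
# Closed forms: A1 = gamma1/(gamma1 + 1/v), E1 = (1-A1)^2 v + A1^2 tau1,
# A2 = gamma2/(gamma2 + theta2), E2 = (theta2 + gamma2^2 tau2)/(gamma2 + theta2)^2.

def se_gaussian(v, theta2, gamma1, tau1, n_iter=10):
    for _ in range(n_iter):
        # input (denoiser) side: (24a), (24c)
        alpha1 = gamma1 / (gamma1 + 1.0 / v)
        E1 = (1 - alpha1) ** 2 * v + alpha1 ** 2 * tau1
        gamma2 = gamma1 / alpha1 - gamma1                  # eta1 - gamma1
        tau2 = (E1 - alpha1 ** 2 * tau1) / (1 - alpha1) ** 2
        # output (LMMSE) side: (24d), (24f)
        alpha2 = gamma2 / (gamma2 + theta2)
        E2 = (theta2 + gamma2 ** 2 * tau2) / (gamma2 + theta2) ** 2
        gamma1 = gamma2 / alpha2 - gamma2                  # eta2 - gamma2
        tau1 = (E2 - alpha2 ** 2 * tau2) / (1 - alpha2) ** 2
    alpha1 = gamma1 / (gamma1 + 1.0 / v)
    mse = (1 - alpha1) ** 2 * v + alpha1 ** 2 * tau1       # asymptotic MSE, E1
    return mse, gamma1, tau1

mse, gamma1, tau1 = se_gaussian(v=2.0, theta2=4.0, gamma1=0.1, tau1=5.0)
```

For this model the recursion reaches its fixed point `gamma1 = theta2`, `tau1 = 1/theta2` after a single iteration, and `mse` equals the scalar MMSE `1/(theta2 + 1/v)`.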
4 Consistent Parameter Estimation with Variance Auto-Tuning

By comparing the deterministic limits $\overline\theta_{ik}$ with the true parameters $\theta_i$, one can determine under which problem conditions the parameter estimates of adaptive VAMP are asymptotically consistent. In this section, we show that, with a particular choice of parameter estimation functions, one can obtain provably asymptotically consistent parameter estimates under suitable identifiability conditions. We call the method variance auto-tuning; it generalizes the approach in [7].

Definition 1. Let $p(x|\theta_1)$ be a parametrized set of densities. Given a finite-dimensional statistic $\phi_1(r)$, consider the mapping
$(\tau_1, \theta_1) \mapsto \mathbb{E}[\phi_1(R) \,|\, \tau_1, \theta_1]$, $R = X + \mathcal{N}(0, \tau_1)$, $X \sim p(x|\theta_1)$. (26)
We say that $p(x|\theta_1)$ is identifiable in Gaussian noise if there exists a finite-dimensional statistic $\phi_1(r) \in \mathbb{R}^d$ such that (i) $\phi_1(r)$ is pseudo-Lipschitz continuous of order 2; and (ii) the mapping (26) has a continuous inverse.

Theorem 2. Under the assumptions of Theorem 1, suppose that $X^0$ follows $X^0 \sim p(x|\theta_1^0)$ for some true parameter $\theta_1^0$. If $p(x|\theta_1)$ is identifiable in Gaussian noise, there exists an adaptation rule such that, for any iteration $k$, the estimate $\hat\theta_{1k}$ and noise estimate $\hat\tau_{1k}$ are asymptotically consistent in that $\lim_{N\to\infty} \hat\theta_{1k} = \theta_1^0$ and $\lim_{N\to\infty} \hat\tau_{1k} = \tau_{1k}$ almost surely.

The theorem is proved in the full paper [27], which also provides details on how to perform the adaptation. A similar result for consistent estimation of the noise precision $\theta_2$ is also given. The result is remarkable, as it shows that a simple variant of EM-VAMP can provide provably consistent parameter estimates under extremely general distributions.

5 Numerical Simulations

Sparse signal recovery: The paper [8] presented several numerical experiments to assess the performance of EM-VAMP relative to other methods. Here, our goal is to confirm that EM-VAMP's performance matches the SE predictions.
As in [8], we consider a sparse linear regression problem of estimating a vector $x$ from measurements $y$ from (1) without knowing the signal parameters $\theta_1$ or the noise precision $\theta_2 > 0$. Details are given in the full paper [27]. Briefly, to model the sparsity, $x$ is drawn from an i.i.d. Bernoulli-Gaussian (i.e., spike and slab) prior with unknown sparsity level, mean and variance. The true sparsity is $\beta_x = 0.1$. Following [15, 16], we take $A \in \mathbb{R}^{M\times N}$ to be a random right-orthogonally invariant matrix with dimensions $M = 512$, $N = 1024$, and with the condition number set to $\kappa = 100$ (high condition number matrices are known to be problematic for conventional AMP methods). The left panel of Fig. 1 shows the normalized mean square error (NMSE) for various algorithms. The full paper [27] describes the algorithms in detail and also shows similar results for $\kappa = 10$.

Figure 1: Numerical simulations. Left panel: sparse signal recovery: NMSE versus iteration for a random matrix with condition number $\kappa = 100$. Right panel: NMSE for sparse image recovery as a function of the measurement ratio $M/N$.

We see several important features. First, for all variants of VAMP and EM-VAMP, the SE equations provide an excellent prediction of the per-iteration performance of the algorithm. Second, consistent with the simulations in [9], the oracle VAMP converges remarkably fast (∼10 iterations). Third, the performance of EM-VAMP with auto-tuning is virtually indistinguishable from that of oracle VAMP, suggesting that the parameter estimates are near perfect from the very first iteration. Fourth, the EM-VAMP method initially performs worse than oracle VAMP, but these errors are exactly predicted by the SE. Finally, all the VAMP and EM-VAMP algorithms exhibit much faster convergence than EM-BG-AMP. In fact, consistent with observations in [8], EM-BG-AMP begins to diverge at higher condition numbers. In contrast, the VAMP algorithms are stable.
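The signal model and metric above are easy to make concrete. Here is a short sketch of sampling a Bernoulli-Gaussian (spike and slab) vector and of the NMSE used in Fig. 1; the parameter values are illustrative.

```python
# Bernoulli-Gaussian (spike and slab) signal and the NMSE metric.
# Each entry is nonzero with probability beta and then Gaussian.
import random

def sample_bg(n, beta=0.1, mean=0.0, var=1.0, seed=1):
    rng = random.Random(seed)
    return [rng.gauss(mean, var ** 0.5) if rng.random() < beta else 0.0
            for _ in range(n)]

def nmse(x_hat, x):
    # normalized mean square error ||x_hat - x||^2 / ||x||^2
    num = sum((a - b) ** 2 for a, b in zip(x_hat, x))
    den = sum(b * b for b in x)
    return num / den

x = sample_bg(1024, beta=0.1)
sparsity = sum(1 for v in x if v != 0.0) / len(x)
```

By construction, the all-zero estimate has NMSE equal to one, which is the natural baseline for the plots.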
Compressed sensing image recovery: While the theory is developed for idealized signal priors, we demonstrate that the proposed EM-VAMP algorithm can be effective on natural images. Specifically, we repeat the experiments in [28] for recovery of a sparse image. Again, see the full paper [27] for details, including a picture of the image and the various reconstructions. An $N = 256 \times 256$ image of a satellite with $K = 6678$ pixels is transformed through an undersampled random transform $A = \mathrm{diag}(s)PH$, where $H$ is a fast Hadamard transform, $P$ is a random subselection to $M$ measurements, and $s$ is a scaling that adjusts the condition number. As in the previous example, the image vector $x$ is modeled with a sparse Bernoulli-Gaussian prior, and the EM-VAMP algorithm is used to estimate the sparsity ratio, signal variance and noise variance. The transform is set to have a condition number of $\kappa = 100$. We see from the right panel of Fig. 1 that the EM-VAMP algorithm is able to reconstruct the images with improved performance over the standard basis pursuit denoising method spgl1 [29] and the EM-BG-GAMP method from [16].

6 Conclusions

Due to its analytic tractability, computational simplicity, and potential for Bayes optimal inference, VAMP is a promising technique for statistical linear inverse problems. However, a key challenge in using VAMP and related methods is the need to precisely specify the distribution on the problem parameters. This work provides a rigorous foundation for analyzing VAMP in combination with various parameter adaptation techniques, including EM. The analysis reveals that VAMP, with appropriate tuning, can also provide consistent parameter estimates under very general settings, thus yielding a powerful approach for statistical linear inverse problems.

Acknowledgments

A. K. Fletcher and M. Sahraee-Ardakan were supported in part by the National Science Foundation under Grants 1254204 and 1738286 and the Office of Naval Research under Grant N00014-15-1-2677. S.
Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, and the industrial affiliates of NYU WIRELESS. The work of P. Schniter was supported in part by the National Science Foundation under Grant CCF-1527162.

References

[1] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18,914–18,919, Nov. 2009.
[2] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inform. Theory, Saint Petersburg, Russia, Jul.–Aug. 2011, pp. 2174–2178.
[3] M. Bayati and A. Montanari, “The dynamics of message passing on dense graphs, with applications to compressed sensing,” IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764–785, Feb. 2011.
[4] A. Javanmard and A. Montanari, “State evolution for general approximate message passing algorithms, with applications to spatial coupling,” Information and Inference, vol. 2, no. 2, pp. 115–144, 2013.
[5] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, “Statistical-physics-based reconstruction in compressed sensing,” Physical Review X, vol. 2, no. 2, p. 021005, 2012.
[6] J. P. Vila and P. Schniter, “Expectation-maximization Gaussian-mixture approximate message passing,” IEEE Trans. Signal Processing, vol. 61, no. 19, pp. 4658–4672, 2013.
[7] U. S. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser, “Approximate message passing with consistent parameter estimation and applications to sparse learning,” IEEE Trans. Info. Theory, vol. 60, no. 5, pp. 2969–2985, Apr. 2014.
[8] A. K. Fletcher and P. Schniter, “Learning and free energies for vector approximate message passing,” Proc. IEEE ICASSP, March 2017.
[9] S. Rangan, P. Schniter, and A. K. Fletcher, “Vector approximate message passing,” Proc. IEEE ISIT, June 2017.
[10] M. Seeger, “Bayesian inference and optimal design for the sparse linear model,” J. Machine Learning Research, vol.
9, pp. 759–813, Sep. 2008.
[11] M. W. Seeger and H. Nickisch, “Fast convergent algorithms for expectation propagation approximate Bayesian inference,” in International Conference on Artificial Intelligence and Statistics, 2011, pp. 652–660.
[12] M. Opper and O. Winther, “Expectation consistent free energies for approximate inference,” in Proc. NIPS, 2004, pp. 1001–1008.
[13] ——, “Expectation consistent approximate inference,” J. Mach. Learning Res., vol. 1, pp. 2177–2204, 2005.
[14] A. K. Fletcher, M. Sahraee-Ardakan, S. Rangan, and P. Schniter, “Expectation consistent approximate inference: Generalizations and convergence,” in Proc. IEEE ISIT, 2016, pp. 190–194.
[15] S. Rangan, P. Schniter, and A. Fletcher, “On the convergence of approximate message passing with arbitrary matrices,” in Proc. IEEE ISIT, Jul. 2014, pp. 236–240.
[16] J. Vila, P. Schniter, S. Rangan, F. Krzakala, and L. Zdeborová, “Adaptive damping and mean removal for the generalized approximate message passing algorithm,” in Proc. IEEE ICASSP, 2015, pp. 2021–2025.
[17] A. Manoel, F. Krzakala, E. W. Tramel, and L. Zdeborová, “Swept approximate message passing for sparse estimation,” in Proc. ICML, 2015, pp. 1123–1132.
[18] S. Rangan, A. K. Fletcher, P. Schniter, and U. S. Kamilov, “Inference for generalized linear models via alternating directions and Bethe free energy minimization,” IEEE Transactions on Information Theory, vol. 63, no. 1, pp. 676–697, 2017.
[19] K. Takeuchi, “Rigorous dynamics of expectation-propagation-based signal recovery from unitarily invariant measurements,” Proc. IEEE ISIT, June 2017.
[20] S. Rangan, A. Fletcher, and V. K. Goyal, “Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing,” IEEE Trans. Inform. Theory, vol. 58, no. 3, pp. 1902–1923, Mar. 2012.
[21] A. M. Tulino, G. Caire, S. Verdú, and S. Shamai, “Support recovery with sparsely sampled free random matrices,” IEEE Trans. Inform. Theory, vol. 59, no. 7, pp.
4243–4271, 2013.
[22] J. Barbier, M. Dia, N. Macris, and F. Krzakala, “The mutual information in random linear estimation,” arXiv:1607.02335, 2016.
[23] G. Reeves and H. D. Pfister, “The replica-symmetric prediction for compressed sensing with Gaussian matrices is exact,” in Proc. IEEE ISIT, 2016.
[24] T. Heskes, O. Zoeter, and W. Wiegerinck, “Approximate expectation maximization,” NIPS, vol. 16, pp. 353–360, 2004.
[25] A. K. Fletcher, S. Rangan, L. Varshney, and A. Bhargava, “Neural reconstruction with approximate message passing (NeuRAMP),” in Proc. Neural Information Process. Syst., Granada, Spain, Dec. 2011, pp. 2555–2563.
[26] A. K. Fletcher and S. Rangan, “Scalable inference for neuronal connectivity from calcium imaging,” in Proc. Neural Information Processing Systems, 2014, pp. 2843–2851.
[27] A. Fletcher, M. Sahraee-Ardakan, S. Rangan, and P. Schniter, “Rigorous dynamics and consistent estimation in arbitrarily conditioned linear systems,” arXiv preprint, 2017.
[28] J. P. Vila and P. Schniter, “An empirical-Bayes approach to recovering linearly constrained non-negative sparse signals,” IEEE Trans. Signal Process., vol. 62, no. 18, pp. 4689–4703, 2014.
[29] E. Van Den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
OnACID: Online Analysis of Calcium Imaging Data in Real Time

Andrea Giovannucci†1 Johannes Friedrich†∗1 Matthew Kaufman‡ Anne K. Churchland‡ Dmitri Chklovskii† Liam Paninski∗ Eftychios A. Pnevmatikakis†2

† Flatiron Institute, New York, NY 10010 ‡ Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724 ∗Columbia University, New York, NY 10027

{agiovannucci, jfriedrich, dchklovskii, epnevmatikakis}@flatironinstitute.org {mkaufman, churchland}@cshl.edu liam@stat.columbia.edu

Abstract

Optical imaging methods using calcium indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. Here we introduce OnACID, an Online framework for the Analysis of streaming Calcium Imaging Data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments. We apply our algorithm on two large scale experimental datasets, benchmark its performance on manually annotated data, and show that it outperforms a popular offline approach.

1 Introduction

Calcium imaging methods continue to gain traction among experimental neuroscientists due to their capability of monitoring large targeted neuronal populations across multiple days or weeks with decisecond temporal and single-neuron spatial resolution.
To infer the neural population activity from the raw imaging data, an analysis pipeline is employed which typically involves solving the following problems (all of which are still areas of active research): i) correcting for motion artifacts during the imaging experiment, ii) identifying/extracting the sources (neurons and axonal or dendritic processes) in the imaged field of view (FOV), and iii) denoising and deconvolving the neural activity from the dynamics of the expressed calcium indicator. 1These authors contributed equally to this work. 2To whom correspondence should be addressed. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. The fine spatiotemporal resolution of calcium imaging comes at a data rate cost; a typical two-photon (2p) experiment on a 512×512 pixel large FOV imaged at 30Hz, generates ∼50GB of data (in 16-bit integer format) per hour. These rates can be significantly higher for other planar and volumetric imaging techniques, e.g., light-sheet [1] or SCAPE imaging [4], where the data rates can exceed 1TB per hour. The resulting data deluge poses a significant challenge. Of the three basic pre-processing problems described above, the problem of source extraction faces the most severe scalability issues. Popular approaches reshape the data movies into a large array with dimensions (#pixels) × (#timesteps), that is then factorized (e.g., via independent component analysis [20] or constrained non-negative matrix factorization (CNMF) [26]) to produce the locations in the FOV and temporal activities of the imaged sources. While effective for small or medium datasets, direct factorization can be impractical, since a typical experiment can quickly produce datasets larger than the available RAM. Several strategies have been proposed to enhance scalability, including parallel processing [9], spatiotemporal decimation [10], dimensionality reduction [23], and out-of-core processing [13]. 
While these approaches enable efficient processing of larger datasets, they still require significant storage, power, time, and memory resources. Apart from recording large neural populations, optical methods can also be used for stimulation [5]. Combining optogenetic methods for recording and perturbing neural ensembles opens the door to exciting closed-loop experiments [24, 15, 8], where the pattern of the stimulation can be determined based on the recorded activity during behavior. In a typical closed-loop experiment, the monitored/perturbed regions of interest (ROIs) have been preselected by analyzing offline a previous dataset from the same FOV. Monitoring the activity of a ROI, which usually corresponds to a soma, typically entails averaging the fluorescence over the corresponding ROI, resulting in a signal that is only a proxy for the actual neural activity and which can be sensitive to motion artifacts and drifts, as well as spatially overlapping sources, background/neuropil contamination, and noise. Furthermore, by preselecting the ROIs, the experimenter is unable to detect and incorporate new sources that become active later during the experiment, which prevents the execution of truly closed-loop experiments. In this paper, we present an Online, single-pass, algorithmic framework for the Analysis of Calcium Imaging Data (OnACID). Our framework is highly scalable with minimal memory requirements, as it processes the data in a streaming fashion one frame at a time, while keeping in memory a set of low dimensional sufficient statistics and a small minibatch of the last data frames. Every frame is processed in four sequential steps: i) The frame is registered against the previous denoised (and registered) frame to correct for motion artifacts. ii) The fluorescence activity of the already detected sources is tracked. iii) Newly appearing neurons and processes are detected and incorporated to the set of existing sources. 
iv) The fluorescence trace of each source is denoised and deconvolved to provide an estimate of the underlying spiking activity. Our algorithm integrates and extends the online NMF algorithm of [19], the CNMF source extraction algorithm of [26], and the near-online deconvolution algorithm of [11], to provide a framework capable of real time identification and processing of hundreds of neurons in a typical 2p experiment (512×512 pixel wide FOV imaged at 30Hz), enabling novel designs of closed-loop experiments. We apply OnACID to two large-scale (50 and 65 minute long) mouse in vivo 2p datasets; our algorithm can find and track hundreds of neurons faster than real-time, and outperforms the CNMF algorithm of [26] benchmarked on multiple manual annotations using a precision-recall framework.

2 Methods

We illustrate OnACID in process in Fig. 1. At the beginning of the experiment (Fig. 1-left), only a few components are active, as shown in panel A by the max-correlation image3, and these are detected by the algorithm (Fig. 1B). As the experiment proceeds more neurons activate and are subsequently detected by OnACID (Fig. 1 middle, right), which also tracks their activity across time (Fig. 1C). See also Supplementary Movie 1 for an example in simulated data. Next, we present the steps of OnACID in more detail.

3 The correlation image (CI) at every pixel is equal to the average temporal correlation coefficient between that pixel and its neighbors [28] (8 neighbors were used for our analysis). The max-correlation image is obtained by computing the CI for each batch of 1000 frames, and then taking the maximum over all these images.

Figure 1: Illustration of the online data analysis process. Snapshots of the online analysis after processing 1000 frames (left), 6000 frames (middle), and 90000 frames (right).
A) "Max-correlation" image of registered data at each snapshot point (see text for definition). B) Spatial footprints (shapes) of the components (neurons and processes) found by OnACID up to each point. C) Examples of neuron activity traces (marked by contours in panel A and highlighted in red in panel B). As the experiment proceeds, OnACID detects newly active neurons and tracks their activity.

Motion correction: Our online approach allows us to employ a very simple yet effective motion correction scheme: each denoised dataframe can be used to register the next incoming noisy dataframe. To enhance robustness we use the denoised background/neuropil signal (defined in the next section) as a template to align the next dataframe. We use rigid, sub-pixel registration [16], although piecewise rigid registration can also be used at an additional computational cost. This simple alignment process is not suitable for offline algorithms, where the noise in the raw data has led to the development of various algorithms based on template matching [14, 23, 25] or Hidden Markov Models [7, 18].

Source extraction: A standard approach for source extraction is to model the fluorescence within a matrix factorization framework [20, 26]. Let $Y \in \mathbb{R}^{d\times T}$ denote the observed fluorescence across space and time in matrix format, where $d$ denotes the number of imaged pixels and $T$ the length of the experiment in timepoints. If the number of imaged sources is $K$, then let $A \in \mathbb{R}^{d\times K}$ denote the matrix whose column $i$ encodes the "spatial footprint" of source $i$. Similarly, let $C \in \mathbb{R}^{K\times T}$ denote the matrix whose rows encode the temporal activity of the corresponding sources. The observed data matrix can then be expressed as
$Y = AC + B + E$, (1)
where $B, E \in \mathbb{R}^{d\times T}$ denote matrices for background/neuropil activity and observation noise, respectively.
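The factorization model (1) can be sketched on a toy instance: one neuron, a rank-1 background, and three pixels. Given the spatial components, the activities at a single frame are recovered by nonnegative least squares, implemented here with projected gradient descent. All numbers and the solver choice are our own illustration.

```python
# Toy instance of model (1) with B = b f: K = 1 neuron, d = 3 pixels.
# Given [A, b], the per-frame activities solve a nonnegative least squares
# problem, min ||y - G z||^2 over z >= 0, solved by projected gradient descent.

a = [1.0, 0.5, 0.0]          # spatial footprint of the neuron (column of A)
b = [0.2, 0.2, 0.2]          # spatial background component
z_true = [2.0, 1.0]          # [c_t; f_t]: neural activity and background at frame t
G = [[a[i], b[i]] for i in range(3)]
y = [sum(G[i][j] * z_true[j] for j in range(2)) for i in range(3)]  # noiseless frame

z = [0.0, 0.0]
step = 0.5                   # < 2 / (largest eigenvalue of G^T G), so it converges
for _ in range(2000):        # z <- max(0, z - step * G^T (G z - y))
    resid = [sum(G[i][j] * z[j] for j in range(2)) - y[i] for i in range(3)]
    grad = [sum(G[i][j] * resid[i] for i in range(3)) for j in range(2)]
    z = [max(0.0, z[j] - step * grad[j]) for j in range(2)]
```

Since the toy system is noiseless and `G` has full column rank, the iterates recover `z_true`; in practice a dedicated NNLS solver would be used for this step.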
A common approach, introduced in [26], is to express the background matrix $B$ as a low rank matrix, i.e., $B = bf$, where $b \in \mathbb{R}^{d\times n_b}$ and $f \in \mathbb{R}^{n_b \times T}$ denote the spatial and temporal components of the low rank background signal, and $n_b$ is a small integer, e.g., $n_b = 1, 2$. The CNMF framework of [26] operates by alternating optimization of $[A, b]$ given the data $Y$ and estimates of $[C; f]$, and vice versa, where each column of $A$ is constrained to be zero outside of a neighborhood around its previous estimate. This strategy exploits the spatial locality of each neuron to reduce the computational complexity. This framework can be adapted to a data streaming setup using the online NMF algorithm of [19], where the observed fluorescence at time $t$ can be written as
$y_t = A c_t + b f_t + \varepsilon_t$. (2)
Proceeding in a similar alternating way, the activity of all neurons at time $t$, $c_t$, and the temporal background $f_t$, given $y_t$ and the spatial footprints and background $[A, b]$, can be found by solving a nonnegative least squares problem, whereas $[A, b]$ can be estimated efficiently as in [19] by only keeping in memory the sufficient statistics (where we define $\tilde{c}_t = [c_t; f_t]$)
$W_t = \frac{t-1}{t} W_{t-1} + \frac{1}{t} y_t \tilde{c}_t^{\top}$, $M_t = \frac{t-1}{t} M_{t-1} + \frac{1}{t} \tilde{c}_t \tilde{c}_t^{\top}$, (3)
while at the same time enforcing the same spatial locality constraints as in the CNMF framework.

Deconvolution: The online framework presented above estimates the demixed fluorescence traces $c_1, \dots, c_K$ of each neuronal source. The fluorescence is a filtered version of the underlying neural activity that we want to infer. To further denoise and deconvolve the neural activity from the dynamics of the indicator we use the OASIS algorithm [11], which implements the popular spike deconvolution algorithm of [30] in a nearly online fashion by adapting the highly efficient Pool Adjacent Violators Algorithm used in isotonic regression [3]. The calcium dynamics is modeled with a stable autoregressive process of order $p$, $c_t = \sum_{i=1}^p \gamma_i c_{t-i} + s_t$.
We use p = 1 here, but can extend to p = 2 to incorporate the indicator rise time [11]. OASIS solves a modified LASSO problem

    minimize_{ĉ, ŝ}  (1/2)‖ĉ − y‖² + λ‖ŝ‖₁
    subject to  ŝ_t = ĉ_t − γ ĉ_{t−1} ≥ s_min  or  ŝ_t = 0,    (4)

where the ℓ1 penalty on ŝ or the minimal spike size s_min can be used to enforce sparsity of the neural activity. The algorithm progresses through each time series sequentially from beginning to end and backtracks only to the most recent spike. We can further restrict the lag to a few frames, to obtain a good approximate solution applicable for real-time experiments.

Detecting new components: The approach explained above enables tracking the activity of a fixed number of sources, and will ignore neurons that become active later in the experiment. To account for a variable number of sources in an online NMF setting, [12] proposes to add a new random component when the correlation coefficient between each data frame and its representation in terms of the current factors is lower than a threshold. This approach is insufficient here since the footprint of a new neuron in the whole FOV is typically too small to modify the correlation coefficient significantly. We approach the problem by introducing a buffer that contains the last l_b instances of the residual signal r_t = y_t − A c_t − b f_t, where l_b is a reasonably small number, e.g., l_b = 100. On this buffer, similarly to [26], we perform spatial smoothing with a Gaussian kernel with radius similar to the expected neuron radius, and then search for the point in space that explains the maximum variance. New candidate components a_new and c_new are estimated by performing a local rank-1 NMF of the residual matrix restricted to a fixed neighborhood around the point of maximal variance. To limit false positives, the candidate component is screened for quality.
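The AR(1) model and the s_min constraint of Eq. (4) can be illustrated with a deliberately naive sketch (this is not the OASIS algorithm itself, which solves the full constrained problem): on a noiseless trace, the first differences ĉ_t − γ ĉ_{t−1} recover the spikes, and values below s_min are zeroed out. The spike times and parameter values below are arbitrary illustrations.

```python
import numpy as np

# Naive sketch (NOT OASIS) of the AR(1) calcium model c_t = γ c_{t-1} + s_t
# and the s_min constraint of Eq. (4): on a clean trace, thresholded first
# differences c_t - γ c_{t-1} recover the spike train.
gamma, s_min, T = 0.95, 0.5, 100
true_spikes = np.zeros(T)
true_spikes[[10, 40, 75]] = 1.0     # arbitrary spike times for illustration

c = np.zeros(T)                     # simulate the calcium trace forward
for t in range(T):
    c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + true_spikes[t]

s_hat = np.concatenate([[c[0]], c[1:] - gamma * c[:-1]])
s_hat[s_hat < s_min] = 0.0          # enforce s_hat_t >= s_min or s_hat_t = 0

print(np.flatnonzero(s_hat))        # spike times recovered from the clean trace
# → [10 40 75]
```

On noisy data this one-pass differencing is far too brittle, which is exactly why OASIS solves the regularized problem of Eq. (4) with backtracking instead.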
First, to prevent noise overfitting, the shape a_new must be significantly correlated (e.g., θ_s ∼ 0.8–0.9) to the residual buffer averaged over time and restricted to the spatial extent of a_new. Moreover, if a_new significantly overlaps with any of the existing components, then its temporal component c_new must not be highly correlated with the corresponding temporal components; otherwise we reject it as a possible duplicate of an existing component. Once a new component is accepted, [A, b] and [C; f] are augmented with a_new and c_new respectively, and the sufficient statistics are updated as follows:

    W_t = [ W_t, (1/t) Y_buf c_new^⊤ ],    M_t = (1/t) [ t M_t, C̃_buf c_new^⊤ ; c_new C̃_buf^⊤, ‖c_new‖² ],    (5)

where Y_buf, C̃_buf denote the matrices Y and [C; f], restricted to the last l_b frames that the buffer stores. This process is repeated until no new components are accepted, at which point the next frame is read and processed. The whole online procedure is described in Algorithm 1; the supplement includes pseudocode description of all the referenced routines.

Initialization: To initialize our algorithm we use the CNMF algorithm on a short initial batch of data of length T_b (e.g., T_b = 1000). The sufficient statistics are initialized from the components that the offline algorithm finds according to (3). To ensure that new components are also initialized in the darker parts of the FOV, each data frame is normalized with the (running) mean for every pixel, during both the offline and the online phases.

Algorithmic Speedups: Several algorithmic and computational schemes are employed to boost the speed of the algorithm and make it applicable to real-time large-scale experiments. In [19] block coordinate descent is used to update the factors A, warm started at the value from the previous iteration. The same trick is used here not only for A, but also for C, since the calcium traces are continuous and typically change slowly.
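The augmentation step of Eq. (5) is a one-column/one-row growth of W and M computed from the residual buffer. A sketch with toy dimensions (the buffer contents are random stand-ins here):

```python
import numpy as np

# Sketch of the sufficient-statistics augmentation of Eq. (5): when a new
# component c_new is accepted, W gains a column and M gains a row and column,
# both computed from the last l_b buffered frames. Toy dimensions.
rng = np.random.default_rng(2)
d, K1, lb, t = 16, 3, 8, 100        # pixels, current components, buffer, frames seen

W = rng.random((d, K1))
M = rng.random((K1, K1))
Ybuf = rng.random((d, lb))          # buffered raw frames
Cbuf = rng.random((K1, lb))         # buffered traces [C; f]
c_new = rng.random(lb)              # accepted new temporal component

W_aug = np.hstack([W, (Ybuf @ c_new / t)[:, None]])
M_aug = np.block([
    [M,                             (Cbuf @ c_new / t)[:, None]],
    [(c_new @ Cbuf.T / t)[None, :], np.array([[c_new @ c_new / t]])],
])

print(W_aug.shape, M_aug.shape)     # one extra column / one extra row-and-column
```

Note the existing block of M is left untouched (t·M/t = M in Eq. (5)); only the new cross-terms are computed, so accepting a component costs O(l_b(d + K)).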
Moreover, the temporal traces of components that do not spatially overlap with each other can be updated simultaneously in vector form; we use a simple greedy scheme to partition the components into spatially non-overlapping groups. Since neurons' shapes are not expected to change at a fast timescale, updating their values (i.e., recomputing A and b) is not required at every timepoint; in practice we update every l_b timesteps.

Algorithm 1 OnACID
Require: Data matrix Y, initial estimates A, b, C, f, S, current number of components K, current timestep t′, rest of parameters.
 1: W = Y[:, 1 : t′] C^⊤ / t′
 2: M = C C^⊤ / t′                                          ▷ Initialize sufficient statistics
 3: G = DetermineGroups([A, b], K)                           ▷ Alg. S1–S2
 4: R_buf = [Y − [A, b][C; f]][:, t′ − l_b + 1 : t′]         ▷ Initialize residual buffer
 5: t = t′
 6: while there is more data do
 7:     t ← t + 1
 8:     y_t ← AlignFrame(y_t, b f_{t−1})                     ▷ [16]
 9:     [c_t; f_t] ← UpdateTraces([A, b], [c_{t−1}; f_{t−1}], y_t, G)   ▷ Alg. S3
10:     C, S ← OASIS(C, γ, s_min, λ)                         ▷ [11]
11:     [A, b], [C, f], K, G, R_buf, W, M ←
12:         DetectNewComponents([A, b], [C, f], K, G, R_buf, y_t, W, M)  ▷ Alg. S4
13:     R_buf ← [R_buf[:, 2 : l_b], y_t − A c_t − b f_t]     ▷ Update residual buffer
14:     if mod(t − t′, l_b) = 0 then                         ▷ Update W, M, [A, b] every l_b timesteps
15:         W, M ← UpdateSuffStatistics(W, M, y_t, [c_t; f_t])   ▷ Equation (3)
16:         [A, b] ← UpdateShapes(W, M, [A, b])              ▷ Alg. S5
17: return A, b, C, f, S

Figure 2: Application to simulated data. A) Detected and missed components. B) Tukey boxplot of spike train correlations with ground truth. Online deconvolution recovers spike trains well and the accuracy increases with the allowed lag in spike assignment. C) Processing time is less than 33 ms for all the frames.

Additionally, the sufficient statistics W_t, M_t are only needed for updating the estimates of [A, b], so they can be updated only when required.
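The greedy partitioning into spatially non-overlapping groups can be sketched as below. The specific rule (assign each component to the first group whose accumulated footprint it does not touch) is an assumption of this sketch, since the paper defers the grouping details to the supplement.

```python
import numpy as np

# Sketch of a greedy scheme partitioning components into groups with mutually
# non-overlapping spatial footprints, so each group's traces can be updated
# jointly. The first-fit rule below is an assumption, not the paper's exact rule.
def greedy_groups(footprints):
    """footprints: list of boolean pixel masks, one per component."""
    groups, covers = [], []          # covers[g] = union of masks already in group g
    for i, a in enumerate(footprints):
        for g, cov in enumerate(covers):
            if not np.any(a & cov):  # no overlap with anything in group g
                groups[g].append(i)
                covers[g] = cov | a
                break
        else:                        # overlaps every group: open a new one
            groups.append([i])
            covers.append(a.copy())
    return groups

masks = [np.array(m, dtype=bool) for m in
         ([1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0])]
print(greedy_groups(masks))
# → [[0, 2], [1, 3]]
```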
Motion correction can be sped up by estimating the motion only on a small (active) contiguous part of the FOV. Finally, as shown in [10], spatial decimation can bring significant speed benefits without compromising the quality of the results.

Software: OnACID is implemented in Python and is available at https://github.com/simonsfoundation/caiman as part of the CaImAn package [13].

3 Results

Benchmarking on simulated data: To compare to ground truth spike trains, we simulated a 2000 frame dataset taken at 30Hz over a 256×256 pixel wide FOV containing 400 "donut" shaped neurons with Poisson spike trains (see supplement for details). OnACID was initialized on the first 500 frames. During initialization, 265 active sources were accurately detected (Fig. S2). After the full 2000 frames, the algorithm had detected and tracked all active sources, plus one false positive (Fig. 2A). After detecting a neuron, we need to extract its spikes with a short time-lag, to enable interesting closed loop experiments. To quantify performance we measured the correlation of the inferred spike train with the ground truth (Fig. 2B). We varied the lag in the online estimator, i.e. the number of future samples observed before assigning a spike at time zero. Lags of 2-5 already yield results similar to the solution with unrestricted lag. A further requirement for online closed-loop experiments is that the computational processing is fast enough.

Table 1: OnACID significantly outperforms the offline CNMF approach. Benchmarking is against two independent manual annotations within the precision/recall (and their harmonic mean F1 score) framework. For each row-column pair, the column dataset is regarded as the ground truth.

F1 (precision, recall) | Labeler 1         | Labeler 2         | CNMF
OnACID                 | 0.79 (0.87, 0.72) | 0.78 (0.86, 0.71) | 0.79 (0.83, 0.75)
CNMF                   | 0.71 (0.74, 0.69) | 0.71 (0.75, 0.68) |
Labeler 2              | 0.89 (0.89, 0.89) |                   |
To balance the computational load over frames, we distribute the shape update over the frames, while still updating each neuron every 30 frames on average. Because the shape update is the last step of the loop in Algorithm 1, we keep track of the time already spent in the iteration and increase or decrease the number of updated neurons accordingly. In this way the frame processing rate always remained higher than 30Hz (Fig. 2C).

Application to in vivo 2p mouse hippocampal data: Next we considered a larger scale (90K frames, 480×480 pixels) real 2p calcium imaging dataset taken at 30Hz (i.e., a 50 minute experiment). Motion artifacts were corrected prior to the analysis described below. The online algorithm was initialized on the first 1000 frames of the dataset using a Python implementation of the CNMF algorithm found in the CaImAn package [13]. During initialization 139 active sources were detected; by the end of all 90K frames, 727 active sources had been detected and tracked (5 of which were discarded due to their small size).

Benchmarking against offline processing and manual annotations: We collected manual annotations from two independent labelers who were instructed to find round or donut shaped neurons of similar size using the ImageJ Cell Magic Wand tool [31], given i) a movie obtained by removing a running 20th percentile (as a crude background approximation) and downsampling in time by a factor of 10, and ii) the max-correlation image. The goal of this pre-processing was to suppress silent cells and promote active ones. The labelers found 872 and 880 ROIs, respectively. We also compared with the CNMF algorithm applied to the whole dataset, which found 904 sources (805 after filtering for size). To quantify performance we used a precision/recall framework similar to [2].
As a distance metric between two cells we used the Jaccard distance, and the pairing between different annotations was computed using the Hungarian algorithm, where matches with distance > 0.7 were discarded⁴. Table 1 summarizes the results within the precision/recall framework. The online algorithm not only matches but outperforms the offline approach of CNMF, reaching high performance values (F1 = 0.79 and 0.78 against the two manual annotations, as opposed to 0.71 against both annotations for CNMF). The two annotations matched closely with each other (F1 = 0.89), indicating high reliability, whereas OnACID vs CNMF also produced a high score (F1 = 0.79), suggesting significant overlap in the mismatches between the two algorithms against manual annotations. Fig. 3 offers a more detailed view, where contour plots of the detected components are superimposed on the max-correlation image for the online (Fig. 3A) and offline (Fig. 3B) algorithms (white) and the annotations of Labeler 1 (red), restricted to a 200×200 pixel part of the FOV. Annotations of matches and mismatches between the online algorithm and the two labelers, as well as between the two labelers in the entire FOV, are shown in Figs. S3-S8. For the automated procedures, binary masks and contour plots were constructed by thresholding the spatial footprint of each component at a level equal to 0.2 times its maximum value. A close inspection of the matches between the online algorithm and the manual annotation (Fig. 3A-left) indicates that neurons with a strong footprint in the max-correlation image (indicating calcium transients with high amplitude compared to noise and background/neuropil activity) are reliably detected, despite the high neuron density and level of overlap. On the other hand, mismatches (Fig.
3B-left) can sometimes be attributed to shape mismatches, manually selected components with no signature in the max-correlation image (indicating faint or possibly unclear activity) that are not detected by the online algorithm (false negatives), or small partially visible processes detected by OnACID but ignored by the labelers ("false" positives).

⁴ Note that the Cell Magic Wand Tool, by construction, tends to select circular ROI shapes, whereas the results of the online algorithm do not pose restrictions on the shapes. As a result the computed Jaccard distances tend to be overestimated. This explains our choice of a seemingly high mismatch threshold.

Figure 3: Application to an in vivo 50min long hippocampal dataset and comparison against an offline approach and manual annotation. A-left) Matched inferred locations between the online algorithm (white) and the manual annotation of Labeler 1 (red), superimposed on the max-correlation image. A-right) False positive (white) and false negative (red) mismatches between the online algorithm and a manual annotation. B) Same for the offline CNMF algorithm (grey) against the same manual annotation (red). The online approach outperforms the CNMF algorithm in the precision/recall framework (F1 score 0.77 vs 0.71). The images are restricted to a 200×200 pixel part of the FOV. Matches and non-matches for the whole FOV are shown in the supplement.
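The matching protocol described above (Jaccard distance between binary masks, Hungarian pairing, matches with distance > 0.7 discarded) can be sketched as follows. The helper names and the convention precision = matched/detected, recall = matched/ground-truth are ours; the masks are toy examples.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of the precision/recall evaluation: Jaccard distance between masks,
# Hungarian (optimal) pairing, and matches with distance > 0.7 discarded.
def jaccard_dist(a, b):
    union = np.sum(a | b)
    return 1.0 - np.sum(a & b) / union if union else 1.0

def match_score(detected, ground_truth, thresh=0.7):
    D = np.array([[jaccard_dist(a, b) for b in ground_truth] for a in detected])
    rows, cols = linear_sum_assignment(D)          # Hungarian algorithm
    n_match = int(np.sum(D[rows, cols] <= thresh)) # discard distant pairs
    precision = n_match / len(detected)
    recall = n_match / len(ground_truth)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

online_masks = [np.array([1, 1, 0, 0], bool), np.array([0, 0, 1, 1], bool)]
manual_masks = [np.array([1, 1, 1, 0], bool), np.array([0, 0, 0, 1], bool),
                np.array([1, 0, 0, 0], bool)]
print(match_score(online_masks, manual_masks))
```

With the toy masks above, both detected components are matched (distances 1/3 and 1/2), so precision is 1.0 and recall 2/3.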
C) Examples of inferred sources and their traces from the two algorithms and corresponding annotation for three identified neurons (also shown with orange arrows in panels A,B left). The algorithm is capable of identifying new neurons once they become active, and of tracking their activity similarly to offline approaches. D) Empirical CDF of correlation coefficients of the matched traces between the online and the offline approaches over the entire 50 minute traces. The majority of the correlation coefficients have very high values, suggesting that the online algorithm accurately tracks the neural activity across time (see also correlation coefficients for the three examples shown in panel C). E) Timing of the online process. Top: Time required per frame when no shapes are updated and no neurons are added. The algorithm is always faster than real time in tracking neurons and scales mildly with the number of neurons. Time required to update shapes per neuron (middle), and to add new neurons (bottom), as a function of the number of neurons. Adding neurons is slower but occurs only sporadically, only mildly affecting the required processing time (see text for details).

Fig. 3C shows examples of the traces from three selected neurons. OnACID can detect and track neurons with very sparse spiking over the course of the entire 50 minute experiment (Fig. 3C-top), and produce traces that are highly correlated with their offline counterparts. To examine the quality of the inferred traces (where ground truth collection at such scale is both very strenuous and severely impeded by the presence of background signals and neuropil activity), we compared the traces between the online algorithm and the CNMF approach on matched pairs of components. Fig. 3D shows the empirical cumulative distribution function (CDF) of the correlation coefficients from this comparison.
The majority of the coefficients attain values close to 1, suggesting that the online algorithm can detect new neurons once they become active and then reliably track their activity.

OnACID is faster than real time on average: In addition to being more accurate, OnACID is also considerably faster as it required ∼27 minutes, i.e., ∼2× faster than real time on average, to analyze the full dataset (2 minutes for initialization and 25 for the online processing) as opposed to ∼1.5 hours for the offline approach and ∼10 hours for each of the annotators (who only select ROIs). Fig. 3E illustrates the time consumption of the various steps. In the majority of the frames, where no spatial shapes are being updated and no new neurons are being incorporated, OnACID's processing speed exceeds the data rate of 30Hz (Fig. 3E-top), and this processing time scales only mildly with the inclusion of new neurons. The cost of updating shapes and sufficient statistics per neuron is also very low (< 1ms), and only scales mildly with the number of existing neurons (Fig. 3E-middle). As argued before, this cost can be distributed among all the frames while maintaining faster than real time processing rates. The expensive step appears when detecting and including one or possibly more new neurons in the algorithm (Fig. 3E-bottom). Although this occurs only sporadically, several speedups can potentially be employed here to achieve faster than real time processing at every frame (see also Discussion section), which would facilitate zero-lag closed-loop experiments.

Application to in vivo 2p mouse parietal cortex data: As a second application to 2p data we used a 116,000 frame dataset, taken at 30Hz over a 512×512 FOV (64min long). The first 3000 frames were used for initialization, during which the CNMF algorithm found 442 neurons, before switching to OnACID, which by the end of the experiment had found a total of 752 neurons (734 after filtering for size).
Compared to two independent manual annotations of 928 and 875 ROIs respectively, OnACID achieved F1 = 0.76 and 0.79, significantly outperforming CNMF (F1 = 0.65 and 0.66, respectively). The matches and mismatches between OnACID and Labeler 1 on a 200×200 pixel part of the FOV are shown in Fig. 4A. Full FOV pairings as well as precision/recall metrics are given in Table 2.

Table 2: Comparison of performance of OnACID and the CNMF algorithm using the precision/recall framework for the parietal cortex 116,000 frame dataset. For each row-column pair, the column dataset is regarded as ground truth. The numbers in the parentheses are the precision and recall, respectively, preceded by their harmonic mean (F1 score). OnACID significantly outperforms the offline CNMF approach.

F1 (precision, recall) | Labeler 1         | Labeler 2         | CNMF
OnACID                 | 0.76 (0.86, 0.68) | 0.79 (0.86, 0.72) | 0.65 (0.55, 0.82)
CNMF                   | 0.65 (0.70, 0.60) | 0.66 (0.74, 0.59) |
Labeler 2              | 0.89 (0.86, 0.91) |                   |

For this dataset, rigid motion correction was also performed according to the simple method of aligning each frame to the denoised (and registered) background from the previous frame. Fig. 4B shows that this approach produced strikingly similar results to an offline, template based, rigid motion correction method [25]. The difference in the displacements produced by the two methods was less than 1 pixel for all 116,000 frames, with standard deviations of 0.11 and 0.12 pixels for the x and y directions, respectively. In terms of timing, OnACID processed the dataset in 48 minutes, again faster than real time on average. This also includes the time needed for motion correction, which on average took 5ms per frame (a bit less than 10 minutes in total).

4 Discussion - Future Work

Although at first striking, the superior performance of OnACID compared to offline CNMF, for the datasets presented in this work, can be attributed to several factors.
Calcium transient events are localized both in space (spatial footprint of a neuron) and in time (typically 0.3-1s for genetic indicators).

Figure 4: Application to an in vivo 64min long parietal cortex dataset. A-left) Matched inferred locations between the online algorithm (white) and the manual annotation of Labeler 1 (red). A-right) False positive (white) and false negative (red) mismatches between the online algorithm and a manual annotation. B) Displacement vectors estimated by OnACID during motion registration compared to a template based algorithm. OnACID estimates the same motion vectors at a sub-pixel resolution (see text for more details).

By looking at a short rolling buffer OnACID is able to more robustly detect activity compared to offline approaches that look at all the data simultaneously. Moreover, OnACID searches for new activity in the residuals buffer that excludes the activity of already detected neurons, making it easier to detect new overlapping components. Finally, offline CNMF requires the a priori specification of the number of components, making it more prone to either false positive or false negative components. For both the datasets presented above, the analysis was done using the same space correlation threshold θ_s = 0.9. This strict choice leads to results with high precision and lower recall (see Tables 1 and 2). Results can be moderately improved by allowing a second pass of the data that can identify neurons that were initially not selected. Moreover, by relaxing the threshold the discrepancy between the precision and recall scores can be reduced, with only marginal modifications to the F1 scores (data not shown). Our current implementation performs all processing serially.
In principle, significant speed gains can be obtained by performing computations that are not needed at each timestep (updating shapes and sufficient statistics), or that occur only sporadically (incorporating a new neuron), in a parallel thread with shared memory. Moreover, different online dictionary learning algorithms that do not require the solution of an inverse problem at each timestep can potentially further speed up our framework [17]. For detecting centroids of new sources OnACID examines a static image obtained by computing the variance across time of the spatially smoothed residual buffer. While this approach works very well in practice, it effectively favors shapes looking similar to a pre-defined Gaussian blob (when spatially smoothed). Different approaches for detecting neurons in static images can possibly be used here, e.g., [22], [2], [29], [27]. Apart from facilitating closed-loop behavioral experiments and rapid general calcium imaging data analysis, our online pipeline can potentially be employed in future, optical-based, brain computer interfaces [6, 21] where high quality real-time processing is critical to their performance. These directions will be pursued in future work.

Acknowledgments

We thank Sue Ann Koay, Jeff Gauthier and David Tank (Princeton University) for sharing their cortex and hippocampal data with us. We thank Lindsey Myers, Sonia Villani and Natalia Roumelioti for providing manual annotations. We thank Daniel Barabasi (Cold Spring Harbor Laboratory) for useful discussions. AG, DC, and EAP were internally funded by the Simons Foundation. Additional support was provided by SNSF P300P2_158428 (JF), and NIH BRAIN Initiative R01EB22913, DARPA N66001-15-C-4032, IARPA MICRONS D16PC00003 (LP).

References

[1] Misha B Ahrens, Michael B Orger, Drew N Robson, Jennifer M Li, and Philipp J Keller. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413–420, 2013.
[2] Noah Apthorpe, Alexander Riordan, Robert Aguilar, Jan Homann, Yi Gu, David Tank, and H Sebastian Seung. Automatic neuron detection in calcium imaging data using convolutional networks. In Advances in Neural Information Processing Systems, pages 3270–3278, 2016.
[3] Richard E Barlow, David J Bartholomew, JM Bremner, and H Daniel Brunk. Statistical inference under order restrictions: The theory and application of isotonic regression. Wiley, New York, 1972.
[4] Matthew B Bouchard, Venkatakaushik Voleti, César S Mendes, Clay Lacefield, Wesley B Grueber, Richard S Mann, Randy M Bruno, and Elizabeth MC Hillman. Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Nature Photonics, 9(2):113–119, 2015.
[5] Edward S Boyden, Feng Zhang, Ernst Bamberg, Georg Nagel, and Karl Deisseroth. Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9):1263–1268, 2005.
[6] Kelly B Clancy, Aaron C Koralek, Rui M Costa, Daniel E Feldman, and Jose M Carmena. Volitional modulation of optically recorded calcium signals during neuroprosthetic learning. Nature Neuroscience, 17(6):807–809, 2014.
[7] Daniel A Dombeck, Anton N Khabbaz, Forrest Collman, Thomas L Adelman, and David W Tank. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron, 56(1):43–57, 2007.
[8] Valentina Emiliani, Adam E Cohen, Karl Deisseroth, and Michael Häusser. All-optical interrogation of neural circuits. Journal of Neuroscience, 35(41):13917–13926, 2015.
[9] Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas J Sofroniew, Davis V Bennett, Joshua Rosen, Chao-Tsung Yang, Loren L Looger, and Misha B Ahrens. Mapping brain activity at scale with cluster computing. Nature Methods, 11(9):941–950, 2014.
[10] Johannes Friedrich, Weijian Yang, Daniel Soudry, Yu Mu, Misha B Ahrens, Rafael Yuste, Darcy S Peterka, and Liam Paninski.
Multi-scale approaches for high-speed imaging and analysis of large neural populations. bioRxiv, page 091132, 2016.
[11] Johannes Friedrich, Pengcheng Zhou, and Liam Paninski. Fast online deconvolution of calcium imaging data. PLOS Computational Biology, 13(3):e1005423, 2017.
[12] Sahil Garg, Irina Rish, Guillermo Cecchi, and Aurelie Lozano. Neurogenesis-inspired dictionary learning: Online model adaption in a changing world. arXiv preprint arXiv:1701.06106, 2017.
[13] A Giovannucci, J Friedrich, B Deverett, V Staneva, D Chklovskii, and E Pnevmatikakis. CaImAn: An open source toolbox for large scale calcium imaging data analysis on standalone machines. Cosyne Abstracts, 2017.
[14] David S Greenberg and Jason ND Kerr. Automated correction of fast motion artifacts for two-photon imaging of awake animals. Journal of Neuroscience Methods, 176(1):1–15, 2009.
[15] Logan Grosenick, James H Marshel, and Karl Deisseroth. Closed-loop and activity-guided optogenetic control. Neuron, 86(1):106–139, 2015.
[16] Manuel Guizar-Sicairos, Samuel T Thurman, and James R Fienup. Efficient subpixel image registration algorithms. Optics Letters, 33(2):156–158, 2008.
[17] Tao Hu, Cengiz Pehlevan, and Dmitri B Chklovskii. A Hebbian/anti-Hebbian network for online sparse dictionary learning derived from symmetric matrix factorization. In 48th Asilomar Conference on Signals, Systems and Computers, pages 613–619. IEEE, 2014.
[18] Patrick Kaifosh, Jeffrey D Zaremba, Nathan B Danielson, and Attila Losonczy. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in Neuroinformatics, 8:80, 2014.
[19] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11(Jan):19–60, 2010.
[20] Eran A Mukamel, Axel Nimmerjahn, and Mark J Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63(6):747–760, 2009.
[21] Daniel J O'Shea, Eric Trautmann, Chandramouli Chandrasekaran, Sergey Stavisky, Jonathan C Kao, Maneesh Sahani, Stephen Ryu, Karl Deisseroth, and Krishna V Shenoy. The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces. Experimental Neurology, 287:437–451, 2017.
[22] Marius Pachitariu, Adam M Packer, Noah Pettit, Henry Dalgleish, Michael Häusser, and Maneesh Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In Advances in Neural Information Processing Systems, pages 1745–1753, 2013.
[23] Marius Pachitariu, Carsen Stringer, Sylvia Schröder, Mario Dipoppa, L Federico Rossi, Matteo Carandini, and Kenneth D Harris. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. bioRxiv, page 061507, 2016.
[24] Adam M Packer, Lloyd E Russell, Henry WP Dalgleish, and Michael Häusser. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nature Methods, 12(2):140–146, 2015.
[25] Eftychios A Pnevmatikakis and Andrea Giovannucci. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. Journal of Neuroscience Methods, 291:83–94, 2017.
[26] Eftychios A Pnevmatikakis, Daniel Soudry, Yuanjun Gao, Timothy A Machado, Josh Merel, David Pfau, Thomas Reardon, Yu Mu, Clay Lacefield, Weijian Yang, et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285–299, 2016.
[27] Stephanie Reynolds, Therese Abrahamsson, Renaud Schuck, P Jesper Sjöström, Simon R Schultz, and Pier Luigi Dragotti. ABLE: An activity-based level set segmentation algorithm for two-photon calcium imaging data. eNeuro, pages ENEURO–0012, 2017.
[28] Spencer L Smith and Michael Häusser. Parallel processing of visual space by neighboring neurons in mouse visual cortex. Nature Neuroscience, 13(9):1144–1149, 2010.
[29] Quico Spaen, Dorit S Hochbaum, and Roberto Asín-Achá.
HNCcorr: A novel combinatorial approach for cell identification in calcium-imaging movies. arXiv preprint arXiv:1703.01999, 2017.
[30] Joshua T Vogelstein, Adam M Packer, Timothy A Machado, Tanya Sippy, Baktash Babadi, Rafael Yuste, and Liam Paninski. Fast nonnegative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology, 104(6):3691–3704, 2010.
[31] Theo Walker. Cell magic wand tool, 2014.
Action Centered Contextual Bandits

Kristjan Greenewald, Department of Statistics, Harvard University, kgreenewald@fas.harvard.edu
Ambuj Tewari, Department of Statistics, University of Michigan, tewaria@umich.edu
Predrag Klasnja, School of Information, University of Michigan, klasnja@umich.edu
Susan Murphy, Departments of Statistics and Computer Science, Harvard University, samurphy@fas.harvard.edu

Abstract

Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
1 Introduction

In the theory of sequential decision-making, contextual bandit problems (Tewari & Murphy, 2017) occupy a middle ground between multi-armed bandit problems (Bubeck & Cesa-Bianchi, 2012) and full-blown reinforcement learning (usually modeled using Markov decision processes along with discounted or average reward optimality criteria (Sutton & Barto, 1998; Puterman, 2005)). Unlike bandit algorithms, which cannot use any side-information or context, contextual bandit algorithms can learn to map the context into appropriate actions. However, contextual bandits do not consider the impact of actions on the evolution of future contexts. Nevertheless, in many practical domains where the impact of the learner's action on future contexts is limited, contextual bandit algorithms have shown great promise. Examples include web advertising (Abe & Nakamura, 1999) and news article selection on web portals (Li et al., 2010). An influential thread within the contextual bandit literature models the expected reward for any action in a given context using a linear mapping from a d-dimensional context vector to a real-valued reward. Algorithms using this assumption include LinUCB and Thompson Sampling, for both of which regret bounds have been derived. These analyses often allow the context sequence to be chosen adversarially, but require the linear model, which links rewards to contexts, to be time-invariant. There has been little effort to extend these algorithms and analyses when the data follow an unknown nonlinear or time-varying model.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper, we consider a particular type of non-stationarity and non-linearity that is motivated by problems arising in mobile health (mHealth). Mobile health is a fast developing field that uses mobile and wearable devices for health care delivery.
These devices provide us with a real-time stream of dynamically evolving contextual information about the user (location, calendar, weather, physical activity, internet activity, etc.). Contextual bandit algorithms can learn to map this contextual information to a set of available intervention options (e.g., whether or not to send a medication reminder). However, human behavior is hard to model using stationary, linear models. We make a fundamental assumption in this paper that is quite plausible in the mHealth setting. In these settings, there is almost always a “do nothing” action usually called action 0. The expected reward for this action is the baseline reward and it can change in a very non-stationary, non-linear fashion. However, the treatment effect of a non-zero action, i.e., the incremental change over the baseline reward due to the action, can often be plausibly modeled using standard stationary, linear models. We show, both theoretically and empirically, that the performance of an appropriately designed action-centered contextual bandit algorithm is agnostic to the high model complexity of the baseline reward. Instead, we get the same level of performance as expected in a stationary, linear model setting. Note that it might be tempting to make the entire model non-linear and non-stationary. However, the sample complexity of learning very general non-stationary, non-linear models is likely to be so high that they will not be useful in mHealth where data is often noisy, missing, or collected only over a few hundred decision points. We connect our algorithm design and theoretical analysis to the real world of mHealth by using data from a pilot study of HeartSteps, an Android-based walking intervention. HeartSteps encourages walking by sending individuals contextually-tailored suggestions to be active. 
Such suggestions can be sent up to five times a day (in the morning, at lunchtime, mid-afternoon, at the end of the workday, and in the evening), and each suggestion is tailored to the user’s current context: location, time of day, day of the week, and weather. HeartSteps contains two types of suggestions: suggestions to go for a walk, and suggestions to simply move around in order to disrupt prolonged sitting. While the initial pilot study of HeartSteps micro-randomized the delivery of activity suggestions (Klasnja et al., 2015; Liao et al., 2015), delivery of activity suggestions is an excellent candidate for the use of contextual bandits, as the effect of delivering (vs. not) a suggestion at any given time is likely to be strongly influenced by the user’s current context, including location, time of day, and weather. This paper’s main contributions can be summarized as follows. We introduce a variant of the standard linear contextual bandit model that allows the baseline reward model to be quite complex while keeping the treatment effect model simple. We then introduce the idea of using action centering in contextual bandits as a way to decouple the estimation of the above two parts of the model. We show that action centering is effective in dealing with time-varying and non-linear behavior in our model, leading to regret bounds that scale as nicely as previous bounds for linear contextual bandits. Finally, we use data gathered in the recently conducted HeartSteps study to validate our model and theory.

1.1 Related Work

Contextual bandits have been the focus of considerable interest in recent years. Chu et al. (2011) and Agrawal & Goyal (2013) have examined UCB and Thompson sampling methods, respectively, for linear contextual bandits. Works such as Seldin et al. (2011) and Dudik et al. (2011) considered contextual bandits with fixed policy classes. Methods for reducing the regret under complex reward functions include the nonparametric approach of May et al.
(2012), the “contextual zooming” approach of Slivkins (2014), the kernel-based method of Valko et al. (2013), and the sparse method of Bastani & Bayati (2015). Each of these approaches has regret that scales with the complexity of the overall reward model including the baseline, and requires the reward function to remain constant over time.

2 Model and Problem Setting

Consider a contextual bandit with a baseline (zero) action and $N$ non-baseline arms (actions or treatments). At each time $t = 1, 2, \ldots$, a context vector $\bar{s}_t \in \mathbb{R}^{d'}$ is observed, an action $a_t \in \{0, \ldots, N\}$ is chosen, and a reward $r_t(a_t)$ is observed. The bandit learns a mapping from a state vector $s_{t,a_t}$, depending on $\bar{s}_t$ and $a_t$, to the expected reward $r_t(s_{t,a_t})$. The state vector $s_{t,a_t} \in \mathbb{R}^d$ is a function of $a_t$ and $\bar{s}_t$. This form is used to achieve maximum generality, as it allows for infinitely many possible actions so long as the reward can be modeled using a $d$-dimensional $s_{t,a}$. In the most unstructured case with $N$ actions, we can simply encode the reward with a $d = Nd'$-dimensional $s_{t,a_t}^T = [\mathbb{I}(a_t = 1)\bar{s}_t^T, \ldots, \mathbb{I}(a_t = N)\bar{s}_t^T]$, where $\mathbb{I}(\cdot)$ is the indicator function. For maximum generality, we assume the context vectors are chosen by an adversary on the basis of the history $H_{t-1}$ of arms $a_\tau$ played, states $\bar{s}_\tau$, and rewards $r_\tau(\bar{s}_\tau, a_\tau)$ received up to time $t-1$, i.e., $H_{t-1} = \{a_\tau, \bar{s}_\tau, r_\tau(\bar{s}_\tau, a_\tau),\ \tau = 1, \ldots, t-1\}$. Consider the model $\mathbb{E}[r_t(\bar{s}_t, a_t) \mid \bar{s}_t, a_t] = \bar{f}_t(\bar{s}_t, a_t)$, where $\bar{f}_t$ can be decomposed into a fixed component dependent on action and a time-varying component that does not depend on action:
$$\mathbb{E}[r_t(\bar{s}_t, a_t) \mid \bar{s}_t, a_t] = \bar{f}_t(\bar{s}_t, a_t) = f(s_{t,a_t})\,\mathbb{I}(a_t > 0) + g_t(\bar{s}_t),$$
where $\bar{f}_t(\bar{s}_t, 0) = g_t(\bar{s}_t)$ due to the indicator function $\mathbb{I}(a_t > 0)$. Note that the optimal action depends in no way on $g_t$, which merely confounds the observation of regret. We hypothesize that the regret bounds for such a contextual bandit asymptotically depend only on the complexity of $f$, not of $g_t$.
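The unstructured encoding of the state vector described above (the context placed in the block belonging to the chosen nonzero action, zeros elsewhere) can be sketched as follows; the function name and array layout are illustrative, not taken from the paper.

```python
import numpy as np

def encode_state(s_bar, action, n_actions):
    """Build the d = N*d' state vector s_{t,a}: the context s_bar fills
    the block corresponding to the chosen nonzero action; action 0
    (the baseline) maps to the all-zeros vector."""
    d_prime = len(s_bar)
    s = np.zeros(n_actions * d_prime)
    if action > 0:
        s[(action - 1) * d_prime : action * d_prime] = s_bar
    return s
```

With `n_actions = 3` and a 2-dimensional context, choosing action 2 places the context in the middle block of a 6-dimensional state vector.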
We emphasize that we do not require any assumptions about, or bounds on, the complexity or smoothness of $g_t$, allowing $g_t$ to be arbitrarily nonlinear and to change abruptly in time. These conditions create a partially agnostic setting where we have a simple model for the interaction but the baseline cannot be modeled with a simple linear function. In what follows, for simplicity of notation we drop $\bar{s}_t$ from the argument of $r_t$, writing $r_t(a_t)$ with the dependence on $\bar{s}_t$ understood. In this paper, we consider the linear model for the reward difference at time $t$:
$$r_t(a_t) - r_t(0) = f(s_{t,a_t})\,\mathbb{I}(a_t > 0) + n_t = s_{t,a_t}^T \theta\, \mathbb{I}(a_t > 0) + n_t, \quad (1)$$
where $n_t$ is zero-mean sub-Gaussian noise with variance $\sigma^2$ and $\theta \in \mathbb{R}^d$ is a vector of coefficients. The goal of the contextual bandit is to estimate $\theta$ at every time $t$ and use the estimate to decide which actions to take under a series of observed contexts. As is common in the literature, we assume that both the baseline and interaction rewards are bounded by a constant for all $t$. The task of the action-centered contextual bandit is to choose the probabilities $\pi(a, t)$ of playing each arm $a_t$ at time $t$ so as to maximize the expected differential reward
$$\mathbb{E}[r_t(a_t) - r_t(0) \mid H_{t-1}, s_{t,a}] = \sum_{a=0}^{N} \pi(a, t)\,\mathbb{E}[r_t(a) - r_t(0) \mid H_{t-1}, s_{t,a}] = \sum_{a=0}^{N} \pi(a, t)\, s_{t,a}^T \theta\, \mathbb{I}(a > 0). \quad (2)$$
This task is closely related to obtaining a good estimate of the reward function coefficients $\theta$.

2.1 Probability-constrained optimal policy

In the mHealth setting, a contextual bandit must choose at each time point whether to deliver to the user a behavior-change intervention, and if so, what type of intervention to deliver. Whether or not an intervention, such as an activity suggestion or a medication reminder, is sent is a critical aspect of the user experience. If a bandit sends too few interventions to a user, it risks the user’s disengaging with the system, and if it sends too many, it risks the user’s becoming overwhelmed or desensitized to the system’s prompts.
Furthermore, standard contextual bandits will eventually converge to a policy that maps most states to a near-100% chance of sending or not sending an intervention. Such regularity not only could worsen the user’s experience, but also ignores the fact that users have changing routines and cannot be perfectly modeled. We are thus motivated to introduce a constraint on the size of the probabilities of delivering an intervention. We constrain $0 < \pi_{\min} \le 1 - P(a_t = 0 \mid \bar{s}_t) \le \pi_{\max} < 1$, where $1 - P(a_t = 0 \mid \bar{s}_t)$ is the conditional bandit-chosen probability of delivering an intervention at time $t$. The constants $\pi_{\min}$ and $\pi_{\max}$ are not learned by the algorithm, but are chosen using domain science, and might vary for different components of the same mHealth system. We constrain $P(a_t = 0 \mid \bar{s}_t)$, not each $P(a_t = i \mid \bar{s}_t)$, as which intervention is delivered is less critical to the user experience than being prompted with an intervention in the first place. User habituation can be mitigated by implementing the nonzero actions ($a = 1, \ldots, N$) to correspond to several types or categories of messages, with the exact message sent being randomized from a set of differently worded messages. Conceptually, we can view the bandit as pulling two arms at each time $t$: the probability of sending a message (constrained to lie in $[\pi_{\min}, \pi_{\max}]$) and which message to send if one is sent. While these probability constraints are motivated by domain science, they also enable our proposed action-centering algorithm to effectively orthogonalize the baseline and interaction term rewards, achieving sublinear regret in complex scenarios that often occur in mobile health and other applications and for which existing approaches have large regret. Under this probability constraint, we can now derive the optimal policy with which to compare the bandit. The policy that maximizes the expected reward (2) will play the optimal action $a_t^* = \arg\max_{i \in \{0, \ldots, N\}} s_{t,i}^T \theta\, \mathbb{I}(i > 0)$ with the highest allowed probability.
The remainder of the probability is assigned as follows. If the optimal action is nonzero, the optimal policy will then play the zero action with the remaining probability (which is the minimum allowed probability of playing the zero action). If the optimal action is zero, the optimal policy will play the nonzero action with the highest expected reward, $\bar{a}_t^* = \arg\max_{i \in \{1, \ldots, N\}} s_{t,i}^T \theta$, with the remaining probability, i.e., $\pi_{\min}$. To summarize, under the constraint $1 - \pi_t^*(0, t) \in [\pi_{\min}, \pi_{\max}]$, the expected-reward-maximizing policy plays arm $a$ with probability $\pi^*(a, t)$, where
$$\text{If } a_t^* \ne 0: \quad \pi^*(a_t^*, t) = \pi_{\max}, \quad \pi^*(0, t) = 1 - \pi_{\max}, \quad \pi^*(a, t) = 0 \ \ \forall a \ne 0, a_t^*; \quad (3)$$
$$\text{If } a_t^* = 0: \quad \pi^*(0, t) = 1 - \pi_{\min}, \quad \pi^*(\bar{a}_t^*, t) = \pi_{\min}, \quad \pi^*(a, t) = 0 \ \ \forall a \ne 0, \bar{a}_t^*.$$

3 Action-centered contextual bandit

Since the observed reward always contains the sum of the baseline reward and the differential reward we are estimating, and the baseline reward is arbitrarily complex, the main challenge is to isolate the differential reward at each time step. We do this via an action-centering trick, which randomizes the action at each time step, allowing us to construct an estimator whose expectation is proportional to the differential reward $r_t(\bar{a}_t) - r_t(0)$, where $\bar{a}_t$ is the nonzero action chosen by the bandit at time $t$ to be randomized against the zero action. For simplicity of notation, we set the probability of the bandit taking a nonzero action, $P(a_t > 0)$, equal to $1 - \pi(0, t) = \pi_t$.

3.1 Centering the actions: an unbiased $r_t(\bar{a}_t) - r_t(0)$ estimate

To determine a policy, the bandit must learn the coefficients $\theta$ of the model for the differential reward $r_t(\bar{a}_t) - r_t(0) = s_{t,\bar{a}_t}^T \theta$ as a function of $\bar{a}_t$. If the bandit had access at each time $t$ to the differential reward $r_t(\bar{a}_t) - r_t(0)$, we could estimate $\theta$ using a penalized least-squares approach by minimizing
$$\arg\min_{\theta} \sum_{t=1}^{T} \left( r_t(\bar{a}_t) - r_t(0) - \theta^T s_{t,\bar{a}_t} \right)^2 + \lambda \|\theta\|_2^2$$
over $\theta$, where $r_t(a)$ is the reward under action $a$ at time $t$ (Agrawal & Goyal, 2013).
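The probability-constrained optimal policy in (3) can be written as a short function; the name `optimal_policy` and the list-of-states representation are illustrative assumptions, not from the paper.

```python
import numpy as np

def optimal_policy(theta, states, pi_min, pi_max):
    """Arm probabilities of the probability-constrained optimal policy (3).
    states[a-1] is s_{t,a} for the nonzero arms a = 1..N; the returned
    vector pi has index 0 for the baseline action."""
    rewards = np.array([s @ theta for s in states])  # differential rewards of arms 1..N
    best = int(np.argmax(rewards)) + 1               # best nonzero arm
    pi = np.zeros(len(states) + 1)
    if rewards[best - 1] > 0:      # optimal action is nonzero: play it with pi_max
        pi[best] = pi_max
        pi[0] = 1.0 - pi_max
    else:                          # optimal action is zero: best nonzero arm gets pi_min
        pi[0] = 1.0 - pi_min
        pi[best] = pi_min
    return pi
```

For example, with differential rewards (2, -1) and the paper's constraint [0.2, 0.8], the policy plays arm 1 with probability 0.8 and the baseline with probability 0.2.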
This corresponds to the Bayesian estimator when the reward is Gaussian. Although we only have access to $r_t(a_t)$, not $r_t(\bar{a}_t) - r_t(0)$, observe that given $\bar{a}_t$, the bandit randomizes to $a_t = \bar{a}_t$ with probability $\pi_t$ and to $a_t = 0$ otherwise. Thus
$$\mathbb{E}[(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t) \mid H_{t-1}, \bar{a}_t, \bar{s}_t] = \pi_t(1 - \pi_t)\, r_t(\bar{a}_t) - (1 - \pi_t)\pi_t\, r_t(0) = \pi_t(1 - \pi_t)(r_t(\bar{a}_t) - r_t(0)). \quad (4)$$
Thus $(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$, which uses only the observed $r_t(a_t)$, is proportional to an unbiased estimator of $r_t(\bar{a}_t) - r_t(0)$. Recalling that $\bar{a}_t$ and $a_t$ are both known, since they are chosen by the bandit at time $t$, we create the estimate of the differential reward between $\bar{a}_t$ and action 0 at time $t$ as $\hat{r}_t(\bar{a}_t) = (\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$. The corresponding penalized weighted least-squares estimator for $\theta$ using $\hat{r}_t(\bar{a}_t)$ is the minimizer of
$$\sum_{t=1}^{T} \pi_t(1 - \pi_t)\left( \frac{\hat{r}_t(\bar{a}_t)}{\pi_t(1 - \pi_t)} - \theta^T s_{t,\bar{a}_t} \right)^2 + \|\theta\|_2^2 = \sum_{t=1}^{T} \left[ \frac{(\hat{r}_t(\bar{a}_t))^2}{\pi_t(1 - \pi_t)} - 2\hat{r}_t(\bar{a}_t)\,\theta^T s_{t,\bar{a}_t} + \pi_t(1 - \pi_t)(\theta^T s_{t,\bar{a}_t})^2 \right] + \|\theta\|_2^2 = c - 2\theta^T \hat{b} + \theta^T B \theta + \|\theta\|_2^2, \quad (5)$$
where for simplicity of presentation we have used unit penalization $\|\theta\|_2^2$, and
$$\hat{b} = \sum_{t=1}^{T} (\mathbb{I}(a_t > 0) - \pi_t)\, s_{t,\bar{a}_t}\, r_t(a_t), \qquad B = I + \sum_{t=1}^{T} \pi_t(1 - \pi_t)\, s_{t,\bar{a}_t} s_{t,\bar{a}_t}^T.$$
The weighted least-squares weights are $\pi_t(1 - \pi_t)$, since
$$\mathrm{var}\left[ \frac{\hat{r}_t(\bar{a}_t)}{\pi_t(1 - \pi_t)} \,\Big|\, H_{t-1}, \bar{a}_t, \bar{s}_t \right] = \frac{\mathrm{var}[\hat{r}_t(\bar{a}_t) \mid H_{t-1}, \bar{a}_t, \bar{s}_t]}{(\pi_t(1 - \pi_t))^2}$$
and the standard deviation of $\hat{r}_t(\bar{a}_t) = (\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$ given $H_{t-1}, \bar{a}_t, \bar{s}_t$ is of order $g_t(\bar{s}_t) = O(1)$. The minimizer of (5) is $\hat{\theta} = B^{-1}\hat{b}$.

3.2 Action-Centered Thompson Sampling

As the Thompson sampling approach generates probabilities of taking an action, rather than selecting an action, it is particularly suited to our regression approach. We follow the basic framework of the contextual Thompson sampling approach presented by Agrawal & Goyal (2013), extending and modifying it to incorporate our action-centered estimator and probability constraints. The critical step in Thompson sampling is randomizing the model coefficients according to the prior $\mathcal{N}(\hat{\theta}, v^2 B^{-1})$ for $\theta$ at time $t$.
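The closed-form estimator above admits a simple one-step online update of $B$, $\hat{b}$, and $\hat{\theta} = B^{-1}\hat{b}$; the sketch below uses illustrative names and assumes $B$ has already been initialized to the identity.

```python
import numpy as np

def update_estimator(B, b, s, played_nonzero, pi_t, reward):
    """One-step update of the action-centered ridge estimate theta_hat = B^{-1} b.
    s is s_{t, a_bar}; played_nonzero indicates whether a_t > 0 was played."""
    centered = (float(played_nonzero) - pi_t) * reward   # (I(a_t > 0) - pi_t) r_t(a_t)
    b = b + centered * s
    B = B + pi_t * (1.0 - pi_t) * np.outer(s, s)         # weighted Gram-matrix update
    theta_hat = np.linalg.solve(B, b)
    return B, b, theta_hat
```

Solving the linear system at each step (rather than inverting $B$) is the standard numerically stable choice; a rank-one inverse update via the Sherman-Morrison formula would also work.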
A $\theta' \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$ is generated, and the action $a_t$ chosen to maximize $s_{t,a}^T \theta'$. The probability that this procedure selects any action $a$ is determined by the distribution of $\theta'$; however, it may select action 0 with a probability not in the required range $[1 - \pi_{\max}, 1 - \pi_{\min}]$. We thus introduce a two-step hierarchical procedure. After generating the random $\theta'$, we instead choose the nonzero $\bar{a}_t$ maximizing the expected reward, $\bar{a}_t = \arg\max_{a \in \{1, \ldots, N\}} s_{t,a}^T \theta'$. Then we randomly determine whether to take the nonzero action, choosing $a_t = \bar{a}_t$ with probability
$$\pi_t = P(a_t > 0) = \max(\pi_{\min}, \min(\pi_{\max}, P(s_{t,\bar{a}}^T \tilde{\theta} > 0))), \quad (6)$$
and $a_t = 0$ otherwise, where $\tilde{\theta} \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$. Here $P(s_{t,\bar{a}}^T \tilde{\theta} > 0)$ is the probability that the expected relative reward $s_{t,\bar{a}}^T \tilde{\theta}$ of action $\bar{a}_t$ is higher than zero for $\tilde{\theta} \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$. This probability is easily computed using the normal CDF. Finally, the bandit updates $\hat{b}$ and $B$ and computes an updated $\hat{\theta} = B^{-1}\hat{b}$. Our action-centered Thompson sampling algorithm is summarized in Algorithm 1.

Algorithm 1: Action-Centered Thompson Sampling
1: Set $B = I$, $\hat{\theta} = 0$, $\hat{b} = 0$; choose $[\pi_{\min}, \pi_{\max}]$.
2: for $t = 1, 2, \ldots$ do
3:   Observe current context $\bar{s}_t$ and form $s_{t,a}$ for each $a \in \{1, \ldots, N\}$.
4:   Randomly generate $\theta' \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$.
5:   Let $\bar{a}_t = \arg\max_{a \in \{1, \ldots, N\}} s_{t,a}^T \theta'$.
6:   Compute the probability $\pi_t$ of taking a nonzero action according to (6).
7:   Play action $a_t = \bar{a}_t$ with probability $\pi_t$; else play $a_t = 0$.
8:   Observe reward $r_t(a_t)$ and update
     $B = B + \pi_t(1 - \pi_t)\, s_{t,\bar{a}_t} s_{t,\bar{a}_t}^T$, $\quad \hat{b} = \hat{b} + s_{t,\bar{a}_t}(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$, $\quad \hat{\theta} = B^{-1}\hat{b}$.
9: end for

4 Regret analysis

Classically, the regret of a bandit is defined as the difference between the reward achieved by taking the optimal action $a_t^*$ and the expected reward received by playing the arm $a_t$ chosen by the bandit:
$$\text{regret}_{\text{classical}}(t) = s_{t,a_t^*}^T \theta - s_{t,a_t}^T \theta, \quad (7)$$
where the expectation is taken conditionally on $a_t$, $s_{t,a_t}$, and $H_{t-1}$.
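One decision of Algorithm 1 (steps 3–7) can be sketched as follows. The function name and the list-of-states representation are illustrative; the probability in (6) is computed with the normal CDF, since $s^T\tilde{\theta}$ is scalar Gaussian with mean $s^T\hat{\theta}$ and variance $v^2 s^T B^{-1} s$.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def acts_step(B, b, states, v, pi_min, pi_max, rng):
    """One decision of Action-Centered Thompson Sampling.
    states[a-1] is s_{t,a}; returns (a_bar, a_t, pi_t)."""
    theta_hat = np.linalg.solve(B, b)
    theta_prime = rng.multivariate_normal(theta_hat, v**2 * np.linalg.inv(B))
    a_bar = int(np.argmax([s @ theta_prime for s in states])) + 1
    s = states[a_bar - 1]
    # P(s^T theta_tilde > 0) for theta_tilde ~ N(theta_hat, v^2 B^{-1})
    mean = s @ theta_hat
    std = sqrt(max(v**2 * (s @ np.linalg.solve(B, s)), 1e-12))
    pi_t = min(pi_max, max(pi_min, normal_cdf(mean / std)))
    a_t = a_bar if rng.random() < pi_t else 0   # randomize against the zero action
    return a_bar, a_t, pi_t
```

After observing the reward, the caller would update $B$, $\hat{b}$, and $\hat{\theta}$ as in step 8 of Algorithm 1.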
For simplicity, let $\pi_t^* = 1 - \pi_t^*(0, t)$ be the probability that the optimal policy takes a nonzero action, and recall that $\pi_t = 1 - \pi_t(0, t)$ is the probability that the bandit takes a nonzero action. The probability constraint implies that the optimal policy (3) plays the optimal arm with a probability bounded away from 0 and 1, hence definition (7) is no longer meaningful. We can instead define a regret that is the difference in expected rewards conditioned on $\bar{a}_t$, $\pi_t$, $s_{t,a_t}$, $H_{t-1}$, but not on the randomized action $a_t$:
$$\text{regret}(t) = \pi_t^*\, s_{t,\bar{a}_t^*}^T \theta - \pi_t\, s_{t,\bar{a}_t}^T \theta, \quad (8)$$
where we recall that given $\bar{a}_t$, the bandit plays action $a_t = \bar{a}_t$ with probability $\pi_t$ and otherwise plays $a_t = 0$, which has differential reward 0. The action-centered contextual bandit attempts to minimize the cumulative regret $R(T) = \sum_{t=1}^{T} \text{regret}(t)$ over the horizon $T$.

4.1 Regret bound for Action-Centered Thompson Sampling

In the following theorem we show that, with high probability, the probability-constrained Thompson sampler has low regret relative to the optimal probability-constrained policy.

Theorem 1. Consider the action-centered contextual bandit problem, where $\bar{f}_t$ is potentially time-varying and $\bar{s}_t$ at time $t$, given $H_{t-1}$, is chosen by an adversary. Under this regime, the total regret at time $T$ for the action-centered Thompson sampling contextual bandit (Algorithm 1) satisfies
$$R(T) \le C\, \frac{d^2}{\epsilon} \sqrt{T^{1+\epsilon}} \left( \log(Td)\, \log\frac{1}{\delta} \right)$$
with probability at least $1 - 3\delta/2$, for any $0 < \epsilon < 1$, $0 < \delta < 1$. The constant $C$ is given in the proof.

Observe that this regret bound does not depend on the number of actions $N$, is sublinear in $T$, and scales only with the complexity $d$ of the interaction term, not with the complexity of the baseline reward $g$. Furthermore, $\epsilon = 1/\log(T)$ can be chosen, giving a regret of order $O(d^2\sqrt{T})$. This bound is of the same order as that of the baseline Thompson sampling contextual bandit in the adversarial setting when the baseline is identically zero (Agrawal & Goyal, 2013).
When the baseline can be modeled with $d'$ features, where $d' > d$, our method achieves $O(d^2\sqrt{T})$ regret whereas the standard Thompson sampling approach has $O((d + d')^2\sqrt{T})$ regret. Furthermore, when the baseline reward is time-varying, the worst-case regret of the standard Thompson sampling approach is $O(T)$, while the regret of our method remains $O(d^2\sqrt{T})$.

4.2 Proof of Theorem 1: decomposition of the regret

We first bound the regret (8) at time $t$:
$$\text{regret}(t) = \pi_t^*\, s_{t,\bar{a}_t^*}^T \theta - \pi_t\, s_{t,\bar{a}_t}^T \theta = (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta) + \pi_t^* (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta) \quad (9)$$
$$\le (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta) + (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta), \quad (10)$$
where the inequality holds since $(s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta) \ge 0$ and $0 < \pi_t^* < 1$ by definition. Then
$$R(T) = \sum_{t=1}^{T} \text{regret}(t) \le \underbrace{\sum_{t=1}^{T} (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta)}_{I} + \underbrace{\sum_{t=1}^{T} (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta)}_{II}.$$
Observe that we have decomposed the regret into a term $I$ that depends on the choice of the randomization $\pi_t$ between the zero and nonzero action, and a term $II$ that depends only on the choice of the potential nonzero action $\bar{a}_t$ prior to the randomization. We bound $I$ using concentration inequalities, and bound $II$ using arguments paralleling those for standard Thompson sampling.

Lemma 1. Suppose that the conditions of Theorem 1 apply. Then with probability at least $1 - \frac{\delta}{2}$,
$$I \le C \sqrt{d^3 T \log(Td) \log(1/\delta)}$$
for some constant $C$ given in the proof.

Lemma 2. Suppose that the conditions of Theorem 1 apply. Then term $II$ can be bounded as
$$II = \sum_{t=1}^{T} (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta) \le C'\, \frac{d^2}{\epsilon} \sqrt{T^{1+\epsilon}}\, \log\frac{1}{\delta}\, \log(Td),$$
where the inequality holds with probability at least $1 - \delta$.

The proofs are contained in Sections 4 and 5 of the supplement, respectively. In the derivation, the “pseudo-actions” $\bar{a}_t$ that Algorithm 1 chooses prior to the $\pi_t$ baseline-vs-nonzero randomization correspond to the actions in the standard contextual bandit setting. Note that $I$ involves only $\bar{a}_t$, not $\bar{a}_t^*$, hence it is not surprising that its bound is smaller than that for $II$.
Combining Lemmas 1 and 2 via the union bound gives Theorem 1.

5 Results

5.1 Simulated data

We first conduct experiments with simulated data, using $N = 2$ possible nonzero actions. In each experiment, we choose a true reward generative model $r_t(s, a)$ inspired by data from the HeartSteps study (for details see Section 1.1 in the supplement), and generate two length-$T$ sequences of state vectors $s_{t,a} \in \mathbb{R}^{NK}$ and $\bar{s}_t \in \mathbb{R}^L$, where the $\bar{s}_t$ are i.i.d. Gaussian and $s_{t,a}$ is formed by stacking columns $\mathbb{I}(a = i)[1; \bar{s}_t]$ for $i = 1, \ldots, N$. We consider both nonlinear and nonstationary baselines, while keeping the treatment effect models the same. The bandit under evaluation iterates through the $T$ time points, at each choosing an action and receiving a reward generated according to the chosen model. We set $\pi_{\min} = 0.2$, $\pi_{\max} = 0.8$. At each time step, the reward under the optimal policy is calculated and compared to the reward received by the bandit to form the regret $\text{regret}(t)$. We can then plot the cumulative regret $\text{cumulative regret}(t) = \sum_{\tau=1}^{t} \text{regret}(\tau)$.

In the first experiment, the baseline reward is nonlinear. Specifically, we generate rewards using
$$r_t(s_{t,a_t}, \bar{s}_t, a_t) = \theta^T s_{t,a_t} + 2\,\mathbb{I}(|[\bar{s}_t]_1| < 0.8) + n_t,$$
where $n_t \sim \mathcal{N}(0, 1)$ and $\theta \in \mathbb{R}^8$ is a fixed vector listed in supplement Section 1.1.

[Figure 1: Nonlinear baseline reward $g$, in a scenario with 2 nonzero actions and a reward function based on collected HeartSteps data. Cumulative regret shown for the proposed Action-Centered approach, compared to the baseline contextual bandit; median computed over 100 random trials. (a) Median cumulative regret; (b) median with 1st and 3rd quartiles (dashed).]
This simulates the quite likely scenario that, for a given individual, the baseline reward is higher for small absolute deviations from the mean of the first context feature, i.e., rewards are higher when the feature at the decision point is “near average,” with reward decreasing for abnormally high or low values. We run the benchmark Thompson sampling algorithm (Agrawal & Goyal, 2013) and our proposed action-centered Thompson sampling algorithm, computing the cumulative regrets and taking the median over 500 random trials. The results are shown in Figure 1, demonstrating linear growth of the benchmark Thompson sampling algorithm and significantly lower, sublinear regret for our proposed method.

We then consider a scenario with the baseline reward function $g_t(\cdot)$ changing in time. We generate rewards as
$$r_t(s_{t,a_t}, \bar{s}_t, a_t) = \theta^T s_{t,a_t} + \eta_t^T \bar{s}_t + n_t,$$
where $n_t \sim \mathcal{N}(0, 1)$, $\theta$ is a fixed vector as above, and $\eta_t \in \mathbb{R}^7$, $\bar{s}_t$ are generated as smoothly varying Gaussian processes (supplement Section 1.1). The cumulative regret is shown in Figure 2, again demonstrating linear regret for the baseline approach and significantly lower sublinear regret for our proposed action-centering algorithm, as expected.

[Figure 2: Nonstationary baseline reward $g$, in a scenario with 2 nonzero actions and a reward function based on collected HeartSteps data. Cumulative regret shown for the proposed Action-Centered approach, compared to the baseline contextual bandit; median computed over 100 random trials. (a) Median cumulative regret; (b) median with 1st and 3rd quartiles (dashed).]

5.2 HeartSteps study data

The HeartSteps study collected the sensor- and weather-based features shown in Figure 1 at 5 decision points per day for each study participant.
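The nonlinear-baseline reward generator used in the first experiment can be sketched as below; the function name is illustrative, and the fixed $\theta$ from the supplement is replaced by a caller-supplied vector.

```python
import numpy as np

def simulate_reward(theta, s_ta, s_bar, rng):
    """Reward with a nonlinear baseline: linear treatment effect
    theta^T s_{t,a}, plus a +2 bump when the first context feature is
    near its (zero) mean, plus unit Gaussian noise."""
    baseline = 2.0 * (abs(s_bar[0]) < 0.8)
    return theta @ s_ta + baseline + rng.normal()
```

Since only the treatment effect $\theta^T s_{t,a}$ depends on the action, the baseline bump enters the observed reward but not the differential reward the algorithm must estimate.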
If the participant was available at a decision point, a message was sent with constant probability 0.6. The sent message could be one of several activity or anti-sedentary messages chosen by the system. The reward for that message was defined to be $\log(0.5 + x)$, where $x$ is the step count of the participant in the 30 minutes following the suggestion. As noted in the introduction, the baseline reward, i.e., the step count of a subject when no message is sent, not only depends on the state in a complex way but is also likely dependent on a large number of unobserved variables. Because of these unobserved variables, the mapping from the observed state to the reward is believed to be strongly time-varying. Both these characteristics (a complex, time-varying baseline reward function) suggest the use of the action-centering approach. We run our contextual bandit on the HeartSteps data, considering the binary action of whether or not to send a message at a given decision point based on the features listed in Figure 1 in the supplement. Each user is considered independently, for maximum personalization and independence of results. As above, we set $\pi_{\min} = 0.2$, $\pi_{\max} = 0.8$. We perform offline evaluation of the bandit using the method of Li et al. (2011), which uses the sequence of states, actions, and rewards in the data to form a near-unbiased estimate of the average expected reward achieved by each algorithm, averaging over all users. We used a total of 33,797 time points to create the reward estimates. The resulting estimates for the improvement in average reward over the baseline randomization, averaged over 100 random seeds of the bandit algorithm, are shown in Figure 2 of the supplement, with the proposed action-centering approach achieving the highest reward.
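Off-policy value estimation of this kind can be sketched with a simple inverse-propensity-weighted average; this is a simplified stand-in for the replay method of Li et al. (2011), which the paper actually uses, and all names are illustrative.

```python
import numpy as np

def ipw_value(policy_probs, logged_probs, rewards):
    """Inverse-propensity-weighted estimate of the evaluated policy's
    average reward from a logged stream. policy_probs[t] is the evaluated
    policy's probability of the logged action at time t; logged_probs[t]
    is the logging policy's probability of that same action."""
    w = np.asarray(policy_probs) / np.asarray(logged_probs)
    return float(np.mean(w * np.asarray(rewards)))
```

When the evaluated policy coincides with the logging policy, the weights are all 1 and the estimate reduces to the empirical mean reward, as expected.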
Since the reward is logarithmic in the number of steps, the results imply that the benchmark Thompson sampling approach achieves an average 1.6% increase in step counts over the non-adaptive baseline, while our proposed method achieves an increase of 3.9%.

6 Conclusion

Motivated by emerging challenges in adaptive decision making in mobile health, in this paper we proposed the action-centered Thompson sampling contextual bandit, exploiting the randomness of the Thompson sampler and an action-centering approach to orthogonalize out the baseline reward. We proved that our approach enjoys low regret bounds that scale only with the complexity of the interaction term, allowing the baseline reward to be arbitrarily complex and time-varying.

Acknowledgments

This work was supported in part by grants R01 AA023187, P50 DA039838, U54EB020404, R01 HL125440 NHLBI/NIA, NSF CAREER IIS-1452099, and a Sloan Research Fellowship.

References

Abe, Naoki and Nakamura, Atsuyoshi. Learning to optimally schedule internet banner advertisements. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 12–21. Morgan Kaufmann Publishers Inc., 1999.

Agrawal, Shipra and Goyal, Navin. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 127–135, 2013.

Bastani, Hamsa and Bayati, Mohsen. Online decision-making with high-dimensional covariates. Available at SSRN 2661896, 2015.

Bubeck, Sébastien and Cesa-Bianchi, Nicolo. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.

Chu, Wei, Li, Lihong, Reyzin, Lev, and Schapire, Robert E. Contextual bandits with linear payoff functions. In International Conference on Artificial Intelligence and Statistics, pp. 208–214, 2011.

Dudik, Miroslav, Hsu, Daniel, Kale, Satyen, Karampatziakis, Nikos, Langford, John, Reyzin, Lev, and Zhang, Tong.
Efficient optimal learning for contextual bandits. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence, pp. 169–178. AUAI Press, 2011.

Klasnja, Predrag, Hekler, Eric B., Shiffman, Saul, Boruvka, Audrey, Almirall, Daniel, Tewari, Ambuj, and Murphy, Susan A. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychology, 34(Suppl):1220–1228, Dec 2015.

Li, Lihong, Chu, Wei, Langford, John, and Schapire, Robert E. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pp. 661–670. ACM, 2010.

Li, Lihong, Chu, Wei, Langford, John, and Wang, Xuanhui. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pp. 297–306. ACM, 2011.

Liao, Peng, Klasnja, Predrag, Tewari, Ambuj, and Murphy, Susan A. Sample size calculations for micro-randomized trials in mHealth. Statistics in Medicine, 2015.

May, Benedict C., Korda, Nathan, Lee, Anthony, and Leslie, David S. Optimistic Bayesian sampling in contextual-bandit problems. The Journal of Machine Learning Research, 13(1):2069–2106, 2012.

Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2005.

Seldin, Yevgeny, Auer, Peter, Shawe-Taylor, John S., Ortner, Ronald, and Laviolette, François. PAC-Bayesian analysis of contextual bandits. In Advances in Neural Information Processing Systems, pp. 1683–1691, 2011.

Slivkins, Aleksandrs. Contextual bandits with similarity information. The Journal of Machine Learning Research, 15(1):2533–2568, 2014.

Sutton, Richard S and Barto, Andrew G. Reinforcement Learning: An Introduction. MIT Press, 1998.

Tewari, Ambuj and Murphy, Susan A. From ads to interventions: Contextual bandits in mobile health.
In Rehg, Jim, Murphy, Susan A., and Kumar, Santosh (eds.), Mobile Health: Sensors, Analytic Methods, and Applications. Springer, 2017.

Valko, Michal, Korda, Nathan, Munos, Rémi, Flaounas, Ilias, and Cristianini, Nello. Finite-time analysis of kernelised contextual bandits. In Uncertainty in Artificial Intelligence, pp. 654, 2013.
Cost efficient gradient boosting

Sven Peter, Heidelberg Collaboratory for Image Processing, Interdisciplinary Center for Scientific Computing, University of Heidelberg, 69115 Heidelberg, Germany, sven.peter@iwr.uni-heidelberg.de; Ferran Diego, Robert Bosch GmbH, Robert-Bosch-Straße 200, 31139 Hildesheim, Germany, ferran.diegoandilla@de.bosch.com; Fred A. Hamprecht, Heidelberg Collaboratory for Image Processing, Interdisciplinary Center for Scientific Computing, University of Heidelberg, 69115 Heidelberg, Germany, fred.hamprecht@iwr.uni-heidelberg.de; Boaz Nadler, Department of Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel, boaz.nadler@weizmann.ac.il

Abstract

Many applications require learning classifiers or regressors that are both accurate and cheap to evaluate. Prediction cost can be drastically reduced if the learned predictor is constructed such that on the majority of the inputs, it uses cheap features and fast evaluations. The main challenge is to do so with little loss in accuracy. In this work we propose a budget-aware strategy based on deep boosted regression trees. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute. We evaluate our method on a number of datasets and find that it outperforms the current state of the art by a large margin. Our algorithm is easy to implement and its learning time is comparable to that of the original gradient boosting. Source code is made available at http://github.com/svenpeter42/LightGBM-CEGB.

1 Introduction

Many applications need classifiers or regressors that are not only accurate, but also cheap to evaluate [33, 30]. Prediction cost usually consists of two different components: the acquisition or computation of the features used to predict the output, and the evaluation of the predictor itself.
A common approach to construct an accurate predictor with low evaluation cost is to modify the classical empirical risk minimization objective, such that it includes a prediction cost penalty, and optimize this modified functional [33, 30, 23, 24]. In this work we also follow this general approach, and develop a budget-aware strategy based on deep boosted regression trees. Despite the recent re-emergence and popularity of neural networks, our choice of boosted regression trees is motivated by three observations: (i) Given ample training data and computational resources, deep neural networks often give the most accurate results. However, standard feed-forward architectures route a single input component (for example, a single coefficient in the case of vectorial input) through most network units. While the computational cost can be mitigated by network compression or quantization [14], in the extreme case to binary activations only [16], the computational graph is fundamentally dense. In a standard decision tree, on the other hand, each sample is routed along a single path from the root to a leaf, thus visiting typically only a small subset of all split nodes, the "units" of a decision tree. In the extreme case of a balanced binary tree, each sample visits only log(N) out of a total of N nodes. (ii) Individual decision trees and their ensembles, such as Random Forest [4] and Gradient Boosting [12], are still among the most useful and highly competitive methods in machine learning, particularly in the regime of limited training data, little training time and little expertise for parameter tuning [11]. (iii) When features and/or decisions come at a premium, it is convenient but wasteful to assume that all instances in a data set are created equal (even when assumed i.i.d.).
Some instances may be easy to classify based on reading a single measurement / feature, while others may require a full battery of tests before a decision can be reached with confidence [35]. Decision trees naturally lend themselves to such a "sequential experimental design" setup: after first using cheap features to split all instances into subsets, the subsequent decisions can be based on more expensive features which are, however, only elicited if truly needed. Importantly, the set of more expensive features is requested conditionally on the values of features used earlier in the tree. In this work we address the challenge of constructing an ensemble of trees that is both accurate and yet cheap to evaluate. We first describe the problem setup in Section 2, and discuss related work in Section 3. Our key contribution appears in Section 4, where we propose an extension of gradient boosting [12] which takes prediction time penalties into account. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute. Our algorithm is easy to implement and its learning time is comparable to that of the original gradient boosting. As illustrated in Section 5, on a number of datasets our method outperforms the current state of the art by a large margin. 2 Problem setup Consider a regression problem where the response $Y \in \mathbb{R}$ and each instance X is represented by M features, $X \in \mathbb{R}^M$. Let $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a loss function, and $\mathcal{T}$ be a set of admissible functions. In supervised learning, given a training set of N pairs $(x_i, y_i)$ sampled i.i.d. from (X, Y), a classical approach to learn a predictor $T \in \mathcal{T}$ is to minimize the empirical loss L on the training set,
$$\min_{T \in \mathcal{T}} \sum_{i=1}^{N} L(y_i, T(x_i)). \quad (1)$$
In this paper we restrict ourselves to the set $\mathcal{T}$ that consists of ensembles of trees, namely predictors of the form $T(x) = \sum_{k=1}^{K} t_k(x)$.
Each single decision tree $t_k$ can be represented as a collection of $L_k$ leaf nodes with corresponding responses $\omega_k = (\omega_{k,1}, \ldots, \omega_{k,L_k}) \in \mathbb{R}^{L_k}$ and a function $q_k : \mathbb{R}^M \to \{1, \ldots, L_k\}$ that encodes the tree structure and maps an input to its corresponding terminal leaf index. The output of the tree is $t_k(x) = \omega_{k,q_k(x)}$. Learning even a single tree that exactly minimizes the functional in Eq. (1) is NP-hard under several aspects of optimality [15, 19, 25, 36]. Yet, single trees and ensembles of trees are some of the most successful predictors in machine learning, and there are multiple greedy methods to construct tree ensembles that approximately solve Eq. (1) [4, 12, 11]. In many practical applications, however, it is important that the predictor T is not only accurate but also fast to compute. Given a prediction cost function $\Psi : \mathcal{T} \times \mathbb{R}^M \to \mathbb{R}^+$, a standard approach is to add a penalty to the empirical risk minimization above [33, 30, 35, 23, 24]:
$$\min_{T \in \mathcal{T}} \sum_{i} L(y_i, T(x_i)) + \lambda \Psi(T, x_i). \quad (2)$$
The parameter λ controls the tradeoff between accuracy and prediction cost. Typically, the prediction cost function Ψ consists of two components. The first is the cost of acquiring or computing relevant input features. For example, think of a patient at the emergency room where taking his temperature and blood oxygen levels is cheap, but a CT scan is expensive. The second component is the cost of evaluating the function T, which in our case is the sum of the cost of evaluating the K individual trees $t_k$. In more detail, the first component of feature computation cost may also depend on the specific prediction problem. In some scenarios, test instances are independent of each other and the features can be computed for each input instance on demand. But there are also other scenarios. In image processing, for example, the input is an image which consists of many pixels and the task is to predict some function at all pixels.
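The tree notation above (a tree $t_k$ as leaf responses $\omega_k$ plus a structure function $q_k$, and the ensemble $T(x) = \sum_k t_k(x)$) can be sketched in a few lines of Python. This is an illustrative toy under our own naming, not the paper's implementation.

```python
# Minimal sketch of the notation in Section 2: a tree t_k is a pair
# (q_k, omega_k), where q_k maps an input to a leaf index and omega_k
# holds the leaf responses. Names are ours, for illustration only.

class Tree:
    def __init__(self, q, omega):
        self.q = q          # structure function q_k: R^M -> {0, ..., L_k - 1}
        self.omega = omega  # leaf responses omega_k in R^{L_k}

    def __call__(self, x):
        return self.omega[self.q(x)]  # t_k(x) = omega_{k, q_k(x)}

def ensemble_predict(trees, x):
    """Ensemble prediction T(x) = sum_k t_k(x)."""
    return sum(t(x) for t in trees)

# A depth-1 stump on feature 0 with threshold 0.5, as an example tree.
stump = Tree(q=lambda x: 0 if x[0] <= 0.5 else 1, omega=[-1.0, 1.0])
prediction = ensemble_predict([stump, stump], [0.7, 0.2])  # 2.0
```

The point of this representation is that evaluating an ensemble only touches one root-to-leaf path per tree, which is what makes deep-but-unbalanced trees cheap on average.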
In such cases, even though specific features can be computed for each pixel independently, it may be cheaper or more efficient to compute the same feature, such as a separable convolution filter, at all pixels at once [1, 13]. The cost function Ψ may be dominated in these cases by the second component - the time it takes to evaluate the trees. After discussing related work in Section 3, in Section 4 we present a general adaptation of gradient boosting [12] to minimize Eq. (2), that takes into account both prediction cost components. 3 Related work The problem of learning with prediction cost penalties has been extensively studied. One particular case is that of class imbalance, where one class is extremely rare and yet it is important to accurately annotate it. For example, the famous Viola-Jones cascades [31] use cheap features to discard examples belonging to the negative class. Later stages requiring expensive features are only used for the rare suspected positive class. While such an approach is very successful, due to its early exit strategy it cannot use expensive features for different inputs [20, 30, 9]. To overcome the limitations imposed by early exit strategies, various methods [34, 35, 18, 32] proposed single tree constructions but with more complicated decisions at the individual split nodes. The tree is first learned without taking prediction cost into account followed by an optimization step that includes this cost. Unfortunately, in practice these single-tree methods are inferior to current state-of-the-art algorithms that construct tree ensembles [23, 24]. BUDGETRF [23] is based on Random Forests and modifies the impurity function that decides which split to make, to take feature costs into account. 
BUDGETRF has several limitations: First, it assumes that tree evaluation cost is negligible compared to feature acquisition, and hence it is not suitable for problems where features are cheap to compute and the prediction cost is dominated by predictor evaluation, or where both components contribute equally. Second, during its training phase, each usage of a feature incurs its acquisition cost, so repeated feature usage is not modeled, and the probability of reaching a node is not taken into account. At test time, in contrast, they do allow "free" reuse of expensive features and do compute the precise cost of reaching various tree branches. BUDGETRF thus typically does not yield deep trees whose expensive branches are only seldom reached. BUDGETPRUNE [24] is a pruning scheme for ensembles of decision trees. It aims to mitigate the limitations of BUDGETRF by pruning expensive branches from the individual trees. An Integer Linear Program is formulated and efficiently solved to take repeated feature usage and the probabilities of reaching different branches into account. This method results in a better tradeoff, but it still cannot create deep and expensive branches which are only seldom reached if these were not present in the original ensemble. This method is considered to be state of the art when prediction cost is dominated by the feature acquisition cost [24]. We show in Section 5 that constructing deeper trees with our method results in significantly better performance. GREEDYMISER [33], which is most similar to our work, is a stage-wise gradient-boosting type algorithm that also aims to minimize Eq. (2) using an ensemble of regression trees. When both prediction cost components are assumed equally significant, GREEDYMISER is considered state of the art. Yet, GREEDYMISER also has a few limitations: First, all trees are assumed to have the same prediction cost for all inputs. Second, by design it constructs shallow trees all having the same depth.
We instead consider individual costs for each leaf and thus allow the construction of deeper trees. Our experiments in Section 5 suggest that constructing deeper trees with our proposed method significantly outperforms GREEDYMISER. 4 Gradient boosting with cost penalties We build on the gradient boosting framework [12] and adapt it to allow optimization with cost penalties. First we briefly review the original algorithm. We then present our cost penalty in Section 4.1, the step-wise optimization in Section 4.2, and finally our tree growing algorithm that builds trees with deep branches but low expected depth and feature cost in Section 4.3 (such a tree is shown in Figure 1b and compared to a shallow tree that is more expensive and less accurate in Figure 1a).
[Figure 1: Illustration of trees generated by the different methods. Split nodes are numbered in the order they have been created; leaves are represented with letters. The vertical position of nodes corresponds to the feature cost required for each sample, and the edge's thickness represents the number of samples moving along this edge. A tree constructed by GreedyMiser is shown in (a): the majority of samples travel along a path requiring a very expensive feature. BudgetPrune could only prune away leaves E, F, G and H, which does not correspond to a large reduction in costs. CEGB, shown in (b), only uses two very cheap splits for almost all samples (leaves A and B) and builds a complex subtree for the minority that is hard to classify. The constructed tree shown in (b) is deep but nevertheless cheap to evaluate on average.]
Gradient boosting tries to minimize the empirical risk of Eq. (1) by constructing a linear combination of K weak predictors $t_k : \mathbb{R}^M \to \mathbb{R}$ from a set F of admissible functions (not necessarily decision trees).
Starting with $T_0(x) = 0$, each iteration $k > 0$ constructs a new weak function $t_k$ aiming to reduce the current loss. These boosting updates can be interpreted as approximations of the gradient descent direction in function space. We follow the notation of [8], who use gradient boosting with weak predictors $t_k$ from the set of regression trees $\mathcal{T}$ to minimize the regularized empirical risk
$$\min_{t_1,\ldots,t_K \in \mathcal{T}} \sum_{i=1}^{N} L\Big(y_i, \sum_{k=1}^{K} t_k(x_i)\Big) + \sum_{k=1}^{K} \Omega(t_k). \quad (3)$$
The regularization term $\Omega(t_k)$ penalizes the complexity of the regression tree functions. They assume that $\Omega(t_k)$ only depends on the number of leaves $L_k$ and leaf responses $\omega_k$, and derive a simple algorithm to directly learn these. We instead use a more complicated prediction cost penalty Ψ and a different tree construction algorithm that allows optimization with cost penalties. 4.1 Prediction cost penalty Recall that for each individual tree the prediction cost penalty Ψ consists of two components: (i) the feature acquisition cost $\Psi_f$ and (ii) the tree evaluation cost $\Psi_{ev}$. However, this prediction cost for the k-th tree, which is fitted to the residual of all previous iterations, depends on the earlier trees. Specifically, for any input x, features used in the trees of the previous iterations do not contribute to the cost penalty again. We thus use the indicator function $C : \{0,\ldots,K\} \times \{1,\ldots,N\} \times \{1,\ldots,M\} \to \{0, 1\}$ with $C(k, i, m) = 1$ if and only if feature m was used to predict $x_i$ by any tree constructed prior to and including iteration k. Furthermore, $\beta_m \geq 0$ is the cost of computing or acquiring feature m for a single input x. Then the feature cost contribution $\Psi_f : \{0,\ldots,K\} \times \{1,\ldots,N\} \to \mathbb{R}^+$ of $x_i$ for the first k trees is calculated as
$$\Psi_f(k, i) = \sum_{m=1}^{M} \beta_m C(k, i, m). \quad (4)$$
Features computed for all inputs at once (e.g. separable convolution filters) contribute to the penalty independent of the instance x being evaluated.
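The bookkeeping behind Eq. (4) amounts to summing $\beta_m$ over the features an instance has already paid for. A minimal sketch of that bookkeeping (our own naming, not the paper's code), with per-feature costs $\beta_m$ in a dict:

```python
# Hedged sketch of the feature-acquisition cost Psi_f of Eq. (4).
# beta[m] is the acquisition cost of feature m; used_features is the set
# {m : C(k, i, m) = 1} for one instance i after the first k trees.

def feature_cost(used_features, beta):
    """Psi_f(k, i) = sum_m beta_m * C(k, i, m)."""
    return sum(beta[m] for m in used_features)

beta = {0: 1.0, 1: 5.0, 2: 20.0}
used = {0, 2}                      # instance i has already acquired features 0 and 2
psi_f = feature_cost(used, beta)   # 21.0

# Once acquired, a feature stays in the set, so later trees reuse it for free;
# a new feature adds its cost exactly once.
used.add(1)
psi_f_after = feature_cost(used, beta)   # 26.0
```

This set-based view is also what makes the "first use" indicators $(1 - C(k, i, m))$ in the split cost of Section 4.2 cheap to evaluate.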
For those we use $\gamma_m$ as their total computation cost and define the indicator function $D : \{0,\ldots,K\} \times \{1,\ldots,M\} \to \{0, 1\}$ with $D(k, m) = 1$ if and only if feature m was used for any input x in any tree constructed prior to and including iteration k. Then
$$\Psi_c(k) = \sum_{m=1}^{M} \gamma_m D(k, m). \quad (5)$$
The evaluation cost $\Psi_{ev,k} : \{1,\ldots,L_k\} \to \mathbb{R}^+$ for a single input x passing through a tree is the number of split nodes between the root node and the input's terminal leaf $q_k(x)$, multiplied by a suitable constant $\alpha \geq 0$ which captures the cost of evaluating a single split. The total cost $\Psi_{ev} : \{0,\ldots,K\} \times \{1,\ldots,N\} \to \mathbb{R}^+$ for the first k trees is the sum of the costs of each tree,
$$\Psi_{ev}(k, i) = \sum_{\tilde{k}=1}^{k} \Psi_{ev,\tilde{k}}(q_{\tilde{k}}(x_i)). \quad (6)$$
4.2 Tree Boosting with Prediction Costs We have now defined all components of Eq. (2). Simultaneous optimization of all trees $t_k$ is intractable. Instead, as in gradient boosting, we minimize the objective by starting with $T_0(x) = 0$ and iteratively adding a new tree at each iteration. At iteration k we construct the k-th regression tree $t_k$ by minimizing the following objective
$$O_k = \sum_{i=1}^{N} \left[ L(y_i, T_{k-1}(x_i) + t_k(x_i)) + \lambda \Psi(k, x_i) \right] + \lambda \Psi_c(k) \quad (7)$$
with $\Psi(k, x_i) = \Psi_{ev}(k, i) + \Psi_f(k, i)$. Note that the penalty for features which are computed for all inputs at once, $\Psi_c(k)$, does not depend on x but only on the structure of the current and previous trees. Directly optimizing the objective $O_k$ w.r.t. the tree $t_k$ is difficult since the argument $t_k$ appears inside the loss function. Following [8] we use a second order Taylor expansion of the loss around $T_{k-1}(x_i)$. Removing constant terms from earlier iterations, the objective function can be approximated by
$$O_k \approx \tilde{O}_k = \sum_{i=1}^{N} \left[ g_i t_k(x_i) + \tfrac{1}{2} h_i t_k^2(x_i) + \lambda \Delta\Psi(x_i) \right] + \lambda \Delta\Psi_c \quad (8)$$
where
$$g_i = \partial_{\hat{y}_i} L(y_i, \hat{y}_i) \big|_{\hat{y}_i = T_{k-1}(x_i)}, \quad (9a)$$
$$h_i = \partial^2_{\hat{y}_i} L(y_i, \hat{y}_i) \big|_{\hat{y}_i = T_{k-1}(x_i)}, \quad (9b)$$
$$\Delta\Psi(x_i) = \Psi(k, x_i) - \Psi(k-1, x_i), \quad (9c)$$
$$\Delta\Psi_c = \Psi_c(k) - \Psi_c(k-1). \quad (9d)$$
As in [8] we rewrite Eq. (8) for a decision tree $t_k(x) = \omega_{k,q_k(x)}$ with a fixed structure $q_k$,
$$\tilde{O}_k = \sum_{l=1}^{L_k} \left[ \Big(\sum_{i \in I_l} g_i\Big) \omega_{k,l} + \tfrac{1}{2} \Big(\sum_{i \in I_l} h_i\Big) \omega_{k,l}^2 + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \right] + \lambda \Delta\Psi_c \quad (10)$$
with the set $I_l = \{i \mid q_k(x_i) = l\}$ containing the inputs in leaf l. For this fixed structure, the optimal weights and the corresponding best objective reduction can be calculated explicitly:
$$\omega^*_{k,l} = -\frac{\sum_{i \in I_l} g_i}{\sum_{i \in I_l} h_i}, \quad (11a)$$
$$\tilde{O}^*_k = -\frac{1}{2} \sum_{l=1}^{L_k} \left[ \frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i} + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \right] + \lambda \Delta\Psi_c. \quad (11b)$$
As we shall see in the next section, our cost-aware impurity function depends on the difference of Eq. (10) which results from replacing a terminal leaf with a split node [8]. Let p be any leaf of the tree that can be converted to a split node with two new children r and l; then the difference of Eq. (10) evaluated for the original and the modified tree is
$$\Delta\tilde{O}^{split}_k = \frac{1}{2} \left[ \frac{(\sum_{i \in I_r} g_i)^2}{\sum_{i \in I_r} h_i} + \frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i} - \frac{(\sum_{i \in I_p} g_i)^2}{\sum_{i \in I_p} h_i} \right] - \lambda \Delta\Psi^{split}_k. \quad (12)$$
Let m be the feature used by the node we are considering to split. Then
$$\Delta\Psi^{split}_k = \underbrace{|I_p|\,\alpha}_{\Psi^{split}_{ev,k}} \;+\; \underbrace{\gamma_m \overbrace{(1 - D(k, m))}^{\text{is feature } m \text{ used for the first time?}} + \sum_{i \in I_p} \beta_m \overbrace{(1 - C(k, i, m))}^{\text{is feature } m \text{ of } x_i \text{ used for the first time?}}}_{\Psi^{split}_{f,k}}. \quad (13)$$
4.3 Learning a weak regressor with cost penalties With these preparations we can now construct the regression trees. As mentioned above, this is an NP-hard problem. We use a greedy algorithm to grow a tree that approximately minimizes Eq. (10). Standard algorithms that grow trees start from a single leaf containing all inputs. The tree is then iteratively expanded by replacing a single leaf with a split node and two new child leaves [4]. Typically this expansion happens in a predefined leaf order (breadth- or depth-first). Splits are only evaluated locally at a single leaf to select the best feature. The expansion is stopped once leaves are pure or once a maximum depth has been reached. Here, in contrast, we adopt the approach of [29] and grow the tree in a best-first order. Splits are evaluated for all current leaves and the one with the best objective reduction according to Eq.
(12) is chosen. The tree can thus grow at any location. This allows comparing splits across different leaves and features at the same time (Figure 1b shows an example of a best-first tree while Figure 1a shows a tree constructed in breadth-first order). Instead of limiting the depth, we limit the number of leaves in each tree to prevent overfitting. This procedure has an important advantage when optimizing with cost penalties: Growing in a predefined order usually leads to balanced trees - all branches are grown independently of the cost. Deep and expensive branches using only a tiny subset of inputs are not easily possible. In contrast, growing at the leaf that promises the best tradeoff as given by Eq. (12) encourages growth on branches that contain few instances or growth using cheap features. Growth on branches that contain many instances or growth that requires expensive features is penalized. This strategy results in deep trees that are nevertheless cheap to compute on average. Figure 1 compares an individual tree constructed by other methods to the deeper tree constructed by CEGB. We briefly compare our proposed strategy to GREEDYMISER: When we limit Eq. (8) to first order terms only, use breadth-first instead of best-first growth, assume that features always have to be computed for all instances at once, and limit the tree depth to four, we minimize Eq. (18) from [33]. GreedyMiser can therefore be represented as a special case of our proposed algorithm. 5 Experiments The Yahoo! Learning to Rank (Yahoo! LTR) challenge dataset [7] consists of 473134 training, 71083 validation and 165660 test document-query pairs with labels {0, 1, 2, 3, 4}, where 0 means the document is irrelevant and 4 that it is highly relevant to the query. Computation costs for the 519 features used in the dataset are provided [33] and take the values {1, 5, 10, 20, 50, 100, 150, 200}.
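The cost-aware scoring machinery of Section 4 — optimal leaf weight (Eq. 11a), split gain (Eq. 12), split cost (Eq. 13), and the best-first rule of Section 4.3 that expands the leaf with the largest gain — can be sketched as follows. This is a hedged illustration with invented names and toy numbers, not the LightGBM-CEGB source.

```python
# Sketch of Eqs. (11a)-(13): cost-penalized second-order split scoring.
# For squared loss, the statistics of Eq. (9) are g_i = yhat_i - y_i, h_i = 1.

def leaf_weight(g_sum, h_sum):
    """Optimal leaf response, Eq. (11a): omega* = -sum(g) / sum(h)."""
    return -g_sum / h_sum

def score(g_sum, h_sum):
    """Per-leaf term (sum g)^2 / (sum h) appearing in Eqs. (11b)-(12)."""
    return g_sum * g_sum / h_sum

def split_gain(g_left, h_left, g_right, h_right, lam, delta_psi_split):
    """Objective reduction of Eq. (12) for one candidate split."""
    g_parent, h_parent = g_left + g_right, h_left + h_right
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right)
                  - score(g_parent, h_parent)) - lam * delta_psi_split

def cost_of_split(n_parent, alpha, beta_m, n_new_feature_instances,
                  gamma_m, feature_is_new_globally):
    """Eq. (13): evaluation cost |I_p|*alpha plus first-time feature costs."""
    return (n_parent * alpha
            + gamma_m * (1 if feature_is_new_globally else 0)
            + beta_m * n_new_feature_instances)

# Best-first growth (Section 4.3): among the best candidate split of every
# current leaf, expand the one with the largest cost-penalized gain.
candidates = [
    ("leaf_A", split_gain(-4.0, 2.0, 3.0, 2.0, 0.1,
                          cost_of_split(4, 0.25, 1.0, 4, 0.0, False))),
    ("leaf_B", split_gain(-1.0, 1.0, 1.0, 1.0, 0.1,
                          cost_of_split(2, 0.25, 1.0, 0, 0.0, False))),
]
best_leaf, best_gain = max(candidates, key=lambda c: c[1])
```

Note how the penalty term scales with $|I_p|$: a split on a populous leaf pays the feature cost for every instance passing through it, which is exactly what steers growth toward sparsely populated or cheap-feature branches.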
Prediction performance is evaluated using the Average Precision@5 metric, which only considers the five most relevant documents returned for a query by the regressor [33, 23, 24]. We use the dataset provided by [7] and used in [23, 24]. We consider two different settings for our experiments: (i) feature acquisition and classifier evaluation time both contribute to prediction cost, and (ii) classifier evaluation time is negligible w.r.t. feature acquisition cost. The first setting is used by GREEDYMISER. Regression trees with depth four are constructed and assumed to approximately cost as much as features with feature cost $\beta_m = 1$. We therefore set the split cost $\alpha = 1/4$ to allow a fair comparison with our trees, which will contain deeper branches.
[Figure 2: Comparison against state of the art algorithms. The Yahoo! LTR dataset has been used for (2a) and (2b) in different settings. In (2a) both tree evaluation and feature acquisition cost are considered. In (2b) only feature acquisition cost is shown. (2c) shows results on the MiniBooNE dataset with uniform feature costs. GREEDYMISER and BUDGETPRUNE results for (2b), (2c) and (2d) are from [24]. BUDGETPRUNE did not finish training on the HEPMASS datasets due to their size and the associated CPU time and RAM requirements. CEGB is our proposed method.]
We also use our algorithm to construct trees similar to GREEDYMISER by limiting the trees to 16 leaves with a maximum branch depth of four. Figure 2a shows that even the shallow trees are already always strictly better than GREEDYMISER. This happens because our algorithm correctly accounts for the different probabilities of reaching different leaves (see also Figure 1). When we allow deep branches, the proposed method gives significantly better results than GREEDYMISER and learns a predictor with better accuracy at a much lower cost. The second setting is considered by BUDGETPRUNE.
It assumes that feature computation is much more expensive than classifier evaluation. We set α = 0 to adapt our algorithm to this setting.
[Figure 3: In (3a) we study the influence of the feature penalty on the learned classifier. (3b) shows how best-first training results in better precision given the same cost budget.]
The dataset is additionally binarized by setting all targets y > 0 to y = 1. GREEDYMISER has a disadvantage in this setting since it works on the assumption that the cost of each tree is independent of the input x. We still include it in our comparison as a baseline. Figure 3b shows that our proposed method again performs significantly better than the others. This confirms that we learn a classifier with a very cheap expected prediction cost in terms of both feature acquisition and classifier evaluation time. The MiniBooNE dataset [27, 21] consists of 45523 training, 19510 validation and 65031 test instances with labels {0, 1} and 50 features. The Forest Covertype dataset [3, 21] consists of 36603 training, 15688 validation and 58101 test instances with 54 features, restricted to two classes as done in [24]. Feature costs are not available for either dataset and are assumed to be uniform, i.e. $\beta_m = 1$. Since no relation between classifier evaluation and feature cost is known, we only compute the latter to allow a fair comparison, as in [24]. Figures 2c and 2d show that our proposed method again results in a significantly better predictor than both GREEDYMISER and BUDGETPRUNE. We additionally use the HEPMASS-1000 and HEPMASS-not1000 datasets [2, 21]. Similar to MiniBooNE, no feature costs are known and we again uniformly set them to one for all features, i.e. $\beta_m = 1$. Both datasets contain over ten million instances, which we split into 3.5 million training, 1.4 million validation and 5.6 million test instances. These datasets are much larger than the others and we did not manage to successfully run BUDGETPRUNE due to its RAM and CPU time requirements.
We only report results for GREEDYMISER and our algorithm in Figures 2e and 2f. CEGB again results in a classifier with a better tradeoff than GREEDYMISER. 5.1 Influence of feature cost and tradeoff parameters We use the Yahoo! LTR dataset to study the influence of the feature costs β and the tradeoff parameter λ on the learned regressor. Figure 3a shows that regressors learned with a large λ reach similar accuracy to those with smaller λ at a much cheaper cost. Only λ = 0.001 converges to a lower accuracy while the others approximately reach the same final accuracy: the tradeoff is shifted towards using cheap features too strongly. Such a regressor is nevertheless useful when the problem requires very cheap results and the final improvement in accuracy does not matter. Next, we set all $\beta_m = 1$ during training time only and use the original costs during test time. The learned regressor behaves similarly to one learned with λ = 0. This shows that the regressors save most of the cost by limiting usage of expensive features to a small subset of inputs. Finally, we compare breadth-first to best-first training in Figure 3b. We use the same number of leaves and trees and try to build a classifier that is as cheap as possible. Best-first training always reaches a higher accuracy for a given prediction cost budget. This supports our observation that deep trees which are cheap to evaluate on average are important for constructing cheap and accurate predictors. 5.2 Multi-scale classification / tree structure optimization In image processing, classification using multiple scales has been extensively studied and used to build fast or more accurate classifiers [6, 31, 10, 26].
[Figure 4: Multi-scale classification. (4a) shows a single frame from the dataset we used. (4b) shows how our proposed algorithm CEGB is able to build significantly cheaper trees than normal gradient boosting. (4c) zooms into the region showing the differences between the various patch sizes.]
The basic idea of these schemes is that a large image is downsampled to increasingly coarse resolutions. A multi-scale classifier first analyzes the coarsest resolution and decides whether a pixel on the coarse level represents a block of homogeneous pixels on the original resolution, or if analysis on a less coarse resolution is required. Efficiency results from the ability to label many pixels on the original resolution at once by labeling a single pixel on a coarser image. We use this setting as an example to show how our algorithm is also capable of optimizing problems where feature cost is negligible compared to predictor evaluation cost. Inspired by average pooling layers in neural networks [28] and image pyramids [5], we first compute the average pixel values across non-overlapping 2x2, 4x4 and 8x8 blocks of the original image. We compute several commonly used and very fast convolutional filters on each of those resolutions. We then replicate these feature values on the original resolution, e.g. the feature response of a single pixel on the 8x8-averaged image is used for all 64 pixels. We modify Eq. (12) and set $\Delta\Psi^{split}_k = |I_p|\,\alpha\,\epsilon_m$, where $\epsilon_m$ is the number of pixels that share this feature value, e.g. $\epsilon_m = 64$ when feature m was computed on the coarse 8x8-averaged image. We use forty frames with a resolution of 1024x1024 pixels taken from a video studying fly ethology. Our goal here is to detect flies as quickly as possible, as preprocessing for subsequent tracking. A single frame is shown in Figure 4a. We use twenty of those for training and twenty for evaluation. Accuracy is evaluated using the SEGMeasure score as defined in [22]. Comparison is done against regular gradient boosting by setting Ψ = 0. Figure 4b shows that our algorithm constructs an ensemble that is able to reach similar accuracy with a significantly smaller evaluation cost.
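The multi-scale construction just described (block-averaged images whose features are shared by $\epsilon_m$ pixels, and the modified split penalty $|I_p|\,\alpha\,\epsilon_m$) can be sketched as follows. This is a toy illustration under our own naming, not the paper's preprocessing pipeline.

```python
# Sketch of Section 5.2's multi-scale features: average pixel values over
# non-overlapping b x b blocks, so one coarse feature value is shared by
# epsilon_m = b*b original pixels, and the split penalty scales with it.

def block_average(image, b):
    """Average a 2-D list of pixels over non-overlapping b x b blocks."""
    h, w = len(image), len(image[0])
    return [
        [
            sum(image[r][c] for r in range(i, i + b)
                            for c in range(j, j + b)) / (b * b)
            for j in range(0, w, b)
        ]
        for i in range(0, h, b)
    ]

def split_penalty(n_parent, alpha, b):
    """Modified Eq. (12) penalty: |I_p| * alpha * epsilon_m, epsilon_m = b*b."""
    return n_parent * alpha * b * b

image = [[0, 2], [4, 6]]
coarse = block_average(image, 2)    # [[3.0]]
penalty = split_penalty(100, 0.25, 8)   # 1600.0
```

The penalty scaling is the whole trick: a split on an 8x8-scale feature is 64 times more expensive per parent instance than one on a full-resolution feature, so the learner uses coarse features for easy regions and descends to fine scales only where needed.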
Figure 4c shows more clearly how the different available resolutions influence the learned ensemble. Coarser resolutions allow very efficient prediction at the cost of accuracy. Overall, these experiments show that our algorithm is also capable of learning predictors that are cheap while maintaining accuracy even when their evaluation cost dominates w.r.t. the feature acquisition cost. 6 Conclusion We presented an adaptation of gradient boosting that includes prediction cost penalties, and devised fast methods to learn an ensemble of deep regression trees. A key feature of our approach is its ability to construct deep trees that are nevertheless cheap to evaluate on average. In the experimental part we demonstrated that this approach is capable of handling various different settings of prediction cost penalties consisting of feature cost and tree evaluation cost. Specifically, our method significantly outperformed the state of the art algorithms GREEDYMISER and BUDGETPRUNE when feature cost either dominates or contributes equally to the total cost. We additionally showed an example where we are able to optimize the decision structure of the trees itself when their evaluation is the limiting factor. Our algorithm can be easily implemented using any gradient boosting library and does not slow down training significantly. For these reasons we believe it will be highly valuable for many applications. Source code based on LightGBM [17] is available at http://github.com/svenpeter42/LightGBM-CEGB. References [1] Gholamreza Amayeh, Alireza Tavakkoli, and George Bebis. Accurate and efficient computation of Gabor features in real-time applications. Advances in Visual Computing, pages 243–252, 2009. [2] Pierre Baldi, Kyle Cranmer, Taylor Faucett, Peter Sadowski, and Daniel Whiteson. Parameterized machine learning for high-energy physics. arXiv preprint arXiv:1601.07913, 2016. [3] Jock A. Blackard and Denis J. Dean.
Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131 – 151, 1999. [4] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001. [5] Peter Burt and Edward Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on communications, 31(4):532–540, 1983. [6] Vittorio Castelli, Chung-Sheng Li, John Turek, and Ioannis Kontoyiannis. Progressive classification in the compressed domain for large EOS satellite databases. In Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings., 1996 IEEE International Conference on, volume 4, pages 2199–2202. IEEE, 1996. [7] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Yahoo! Learning to Rank Challenge, pages 1–24, 2011. [8] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM. [9] Giulia DeSalvo, Mehryar Mohri, and Umar Syed. Learning with deep cascades. In International Conference on Algorithmic Learning Theory, pages 254–269. Springer, 2015. [10] Piotr Dollár, Serge J Belongie, and Pietro Perona. The fastest pedestrian detector in the west. In BMVC, volume 2, page 7. Citeseer, 2010. [11] Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res, 15(1):3133–3181, 2014. [12] Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232, 2001. [13] Pascal Getreuer. A survey of Gaussian convolution algorithms. Image Processing On Line, 2013:286–310, 2013. [14] Song Han, Huizi Mao, and William J Dally. 
Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. International Conference on Learning Representations (ICLR), 2016. [15] Thomas Hancock, Tao Jiang, Ming Li, and John Tromp. Lower bounds on learning decision lists and trees. Information and Computation, 126(2):114–122, 1996. [16] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4107–4115. Curran Associates, Inc., 2016. [17] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3149–3157. Curran Associates, Inc., 2017. [18] Matt J Kusner, Wenlin Chen, Quan Zhou, Zhixiang Eddie Xu, Kilian Q Weinberger, and Yixin Chen. Feature-cost sensitive learning with submodular trees of classifiers. In AAAI, pages 1939–1945, 2014. [19] Hyafil Laurent and Ronald L Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, 1976. [20] Leonidas Lefakis and François Fleuret. Joint cascade optimization using a product of boosted classifiers. In Advances in neural information processing systems, pages 1315–1323, 2010. [21] M. Lichman. UCI machine learning repository, 2013. [22] Martin Maška, Vladimír Ulman, David Svoboda, Pavel Matula, Petr Matula, Cristina Ederra, Ainhoa Urbiola, Tomás España, Subramanian Venkatesan, Deepak MW Balak, et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics, 30(11):1609–1617, 2014. [23] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Feature-budgeted random forest.
In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1983–1991, Lille, France, 07–09 Jul 2015. PMLR. [24] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Pruning random forests for prediction on a budget. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2334–2342. Curran Associates, Inc., 2016. [25] GE Naumov. NP-completeness of problems of construction of optimal decision trees. In Soviet Physics Doklady, volume 36, page 270, 1991. [26] Marco Pedersoli, Andrea Vedaldi, Jordi Gonzalez, and Xavier Roca. A coarse-to-fine approach for fast deformable object detection. Pattern Recognition, 48(5):1844–1853, 2015. [27] Byron P. Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. Boosted decision trees, an alternative to artificial neural networks. Nucl. Instrum. Meth., A543(2-3):577– 584, 2005. [28] Dominik Scherer, Andreas Müller, and Sven Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. Artificial Neural Networks–ICANN 2010, pages 92–101, 2010. [29] Haijian Shi. Best-first decision tree learning. PhD thesis, The University of Waikato, 2007. [30] Kirill Trapeznikov and Venkatesh Saligrama. Supervised sequential classification under budget constraints. In AISTATS, pages 581–589, 2013. [31] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, 2001. [32] Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In Advances in Neural Information Processing Systems, pages 2152–2160, 2015. [33] Zhixiang Xu, Kilian Weinberger, and Olivier Chapelle. 
The greedy miser: Learning under testtime budgets. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), ICML ’12, pages 1175–1182, July 2012. [34] Zhixiang Eddie Xu, Matt J Kusner, Kilian Q Weinberger, and Minmin Chen. Cost-sensitive tree of classifiers. In ICML (1), pages 133–141, 2013. [35] Zhixiang Eddie Xu, Matt J Kusner, Kilian Q Weinberger, Minmin Chen, and Olivier Chapelle. Classifier cascades and trees for minimizing feature evaluation cost. Journal of Machine Learning Research, 15(1):2113–2144, 2014. [36] Hans Zantema and Hans L Bodlaender. Finding small equivalent decision trees is hard. International Journal of Foundations of Computer Science, 11(02):343–354, 2000. 11 | 2017 | 217 |
6,694 | Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks Surbhi Goel ∗ Department of Computer Science University of Texas at Austin surbhi@cs.utexas.edu Adam Klivans † Department of Computer Science University of Texas at Austin klivans@cs.utexas.edu Abstract We consider the problem of learning function classes computed by neural networks with various activations (e.g. ReLU or Sigmoid), a task believed to be computationally intractable in the worst-case. A major open problem is to understand the minimal assumptions under which these classes admit provably efficient algorithms. In this work we show that a natural distributional assumption corresponding to eigenvalue decay of the Gram matrix yields polynomial-time algorithms in the non-realizable setting for expressive classes of networks (e.g. feed-forward networks of ReLUs). We make no assumptions on the structure of the network or the labels. Given sufficiently-strong eigenvalue decay, we obtain fully-polynomial time algorithms in all the relevant parameters with respect to square-loss. This is the first purely distributional assumption that leads to polynomial-time algorithms for networks of ReLUs. Further, unlike prior distributional assumptions (e.g., the marginal distribution is Gaussian), eigenvalue decay has been observed in practice on common data sets. 1 Introduction Understanding the computational complexity of learning neural networks from random examples is a fundamental problem in machine learning. Several researchers have proved results showing computational hardness for the worst-case complexity of learning various networks– that is, when no assumptions are made on the underlying distribution or the structure of the network [10, 16, 21, 26, 43]. 
As such, it seems necessary to take some assumptions in order to develop efficient algorithms for learning deep networks (the most expressive class of networks known to be learnable in polynomial time without any assumptions is a sum of one hidden layer of sigmoids [16]). A major open question is to understand what the "correct" or minimal assumptions are to guarantee efficient learnability.³ An oft-taken assumption is that the marginal distribution is equal to some smooth distribution such as a multivariate Gaussian. Even under such a distributional assumption, however, there is evidence that fully polynomial-time algorithms are still hard to obtain for simple classes of networks [19, 36]. As such, several authors have made further assumptions on the underlying structure of the model (and/or work in the noiseless or realizable setting). In fact, in an interesting recent work, Shamir [34] has given evidence that both distributional assumptions and assumptions on the network structure are necessary for efficient learnability using gradient-based methods. Our main result is that under only an assumption on the marginal distribution, namely eigenvalue decay of the Gram matrix, there exist efficient algorithms for learning broad classes of neural networks even in the non-realizable (agnostic) setting with respect to square loss. Furthermore, eigenvalue decay has been observed often in real-world data sets, unlike distributional assumptions that take the marginal to be unimodal or Gaussian.
∗Work supported by a Microsoft Data Science Initiative Award. †Part of this work was done while visiting the Simons Institute for Theoretical Computer Science. ³For example, a very recent paper of Song, Vempala, Xie, and Williams [36] asks "What form would such an explanation take, in the face of existing complexity-theoretic lower bounds?"
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
As one would expect, stronger assumptions on the eigenvalue decay result in polynomial learnability for broader classes of networks, but even mild eigenvalue decay will result in savings in runtime and sample complexity. The relationship between our assumption on eigenvalue decay and prior assumptions on the marginal distribution being Gaussian is similar in spirit to the dichotomy between the complexity of certain algorithmic problems on power-law graphs versus Erdős-Rényi graphs. Several important graph problems such as clique-finding become much easier when the underlying model is a random graph with appropriate power-law decay (as opposed to assuming the graph is generated from the classical $G(n, p)$ model) [6, 22]. In this work we prove that neural network learning problems become tractable when the underlying distribution induces an empirical Gram matrix with sufficiently strong eigenvalue decay.

Our Contributions. Our main result is quite general and holds for any function class that can be suitably embedded in an RKHS (Reproducing Kernel Hilbert Space) with corresponding kernel function $k$ (we refer readers unfamiliar with kernel methods to [30]). Given $m$ draws $(x_1, \ldots, x_m)$ from a distribution and kernel $k$, recall that the Gram matrix $K$ is an $m \times m$ matrix whose $(i, j)$ entry equals $k(x_i, x_j)$. For ease of presentation, we begin with an informal statement of our main result that highlights the relationship between the eigenvalue decay assumption and the run-time and sample complexity of our final algorithm.

Theorem 1 (Informal). Fix function class $\mathcal{C}$ and kernel function $k$. Assume $\mathcal{C}$ is approximated in the corresponding RKHS with norm bound $B$. After drawing $m$ samples, let $K/m$ be the (normalized) $m \times m$ Gram matrix with eigenvalues $\{\lambda_1, \ldots, \lambda_m\}$. For error parameter $\epsilon > 0$,

1. If, for sufficiently large $i$, $\lambda_i \approx O(i^{-p})$, then $\mathcal{C}$ is efficiently learnable with $m = \tilde{O}(B^{1/p}/\epsilon^{2+3/p})$.

2.
If, for sufficiently large $i$, $\lambda_i \approx O(e^{-i})$, then $\mathcal{C}$ is efficiently learnable with $m = \tilde{O}(\log B/\epsilon^2)$.

We allow a failure probability for the event that the eigenvalues do not decay. In all prior work, the sample complexity $m$ depends linearly on $B$, and for many interesting concept classes (such as ReLUs), $B$ is exponential in one or more relevant parameters. Given Theorem 1, we can use known structural results for embedding neural networks into an RKHS to estimate $B$ and take a corresponding eigenvalue decay assumption to obtain polynomial-time learnability. Applying bounds recently obtained by Goel et al. [16], we have

Corollary 2. Let $\mathcal{C}$ be the class of all fully-connected networks of ReLUs with one hidden layer of $\ell$ hidden ReLU activations feeding into a single ReLU output activation (i.e., two hidden layers or depth 3). Then, assuming eigenvalue decay of $O(i^{-\ell/\epsilon})$, $\mathcal{C}$ is learnable in polynomial time with respect to square loss on $S^{n-1}$. If ReLU is replaced with sigmoid, then we require eigenvalue decay $O(i^{-\sqrt{\ell}\log(\sqrt{\ell}/\epsilon)})$.

For higher-depth networks, bounds on the required eigenvalue decay can be derived from structural results in [16]. Without taking an assumption, the fastest known algorithms for learning the above networks run in time exponential in the number of hidden units and accuracy parameter (but polynomial in the dimension) [16]. Our proof develops a novel approach for bounding the generalization error of kernel methods, namely we develop compression schemes tailor-made for classifiers induced by kernel-based regression, as opposed to current Rademacher-complexity based approaches. Roughly, a compression scheme is a mapping from a training set $S$ to a small subsample $S'$ and side-information $I$. Given this compressed version of $S$, the decompression algorithm should be able to generate a classifier $h$.
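The decay conditions in Theorem 1 and Corollary 2 are statements about the spectrum of the normalized Gram matrix $K/m$, and they are easy to probe numerically. The sketch below is illustrative rather than from the paper: the RBF kernel, its bandwidth, and the synthetic data on the unit sphere are our choices. It computes the spectrum and also the effective dimension $d_\eta(K) = \mathrm{tr}(K(K + \eta m I)^{-1})$, the quantity the paper later uses to control compression size:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 5
X = rng.standard_normal((m, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # points on the unit sphere S^{n-1}

# RBF Gram matrix K[i, j] = exp(-||x_i - x_j||^2); bandwidth 1 is illustrative.
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))

eigs = np.sort(np.linalg.eigvalsh(K / m))[::-1]    # spectrum of the normalized Gram matrix

def effective_dimension(eigs, eta):
    # d_eta(K) = tr(K (K + eta*m*I)^{-1}) = sum_i lam_i / (lam_i + eta),
    # written in terms of the eigenvalues lam_i of K/m.
    return float(np.sum(eigs / (eigs + eta)))

print(eigs[:5])
print(effective_dimension(eigs, eta=1e-3), effective_dimension(eigs, eta=1e-6))
```

On data like this the spectrum falls off sharply, so the effective dimension stays far below $m$ even for small $\eta$; that gap is exactly what the compression argument exploits.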
In recent work, David, Moran and Yehudayoff [13] have observed that if the size of the compression is much less than $m$ (the number of samples), then the empirical error of $h$ on $S$ is close to its true error with high probability. At the core of our compression scheme is a method for giving small description length (i.e., $o(m)$ bit complexity), approximate solutions to instances of kernel ridge regression. Even though we assume $K$ has decaying eigenvalues, $K$ is neither sparse nor low-rank, and even a single column or row of $K$ has bit complexity at least $m$, since $K$ is an $m \times m$ matrix! Nevertheless, we can prove that recent tools from Nyström sampling [28] imply a type of sparsification for solutions of certain regression problems involving $K$. Additionally, using preconditioning, we can bound the bit complexity of these solutions and obtain the desired compression scheme. At each stage we must ensure that our compressed solutions do not lose too much accuracy, and this involves carefully analyzing various matrix approximations. Our methods are the first compression-based generalization bounds for kernelized regression.

Related Work. Kernel methods [30] such as SVM, kernel ridge regression and kernel PCA have been extensively studied due to their excellent performance and strong theoretical properties. For large data sets, however, many kernel methods become computationally expensive. The literature on approximating the Gram matrix with the overarching goal of reducing the time and space complexity of kernel methods is now vast. Various techniques such as random sampling [39], subspace embedding [2], and matrix factorization [15] have been used to find a low-rank approximation that is efficient to compute and gives small approximation error. The most relevant set of tools for our paper is Nyström sampling [39, 14], which constructs an approximation of $K$ using a subset of the columns indicated by a selection matrix $S$ to generate a positive semi-definite approximation.
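The column-based construction just described can be sketched in a few lines. In the sketch below (our illustration, not the paper's implementation), uniform column sampling stands in for the leverage-score-based selection of [28], and the RBF kernel and random data are illustrative; the key identity is the Nyström formula $\bar{K} = KS(S^T K S)^\dagger S^T K$ for a column-selection matrix $S$:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 100
X = rng.standard_normal((m, 3))
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-d2)                                  # RBF Gram matrix (illustrative kernel)

s = 30                                           # number of sampled columns
cols = rng.choice(m, size=s, replace=False)      # uniform sampling; [28] uses leverage scores
C = K[:, cols]                                   # K S
W = K[np.ix_(cols, cols)]                        # S^T K S
K_bar = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T  # Nystrom approximation K S (S^T K S)^+ S^T K

rel_err = np.linalg.norm(K - K_bar, 2) / np.linalg.norm(K, 2)
print(rel_err)
```

Note that $\bar{K}$ is positive semi-definite and satisfies $\bar{K} \preceq K$ up to numerical roundoff, which matches the sandwich guarantee quoted later in the paper.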
Recent work on leverage scores has been used to improve the guarantees of Nyström sampling in order to obtain linear-time algorithms for generating these approximations [28]. The novelty of our approach is to use Nyström sampling in conjunction with compression schemes to give a new method for giving provable generalization error bounds for kernel methods. Compression schemes have typically been studied in the context of classification problems in PAC learning and for combinatorial problems related to VC dimension [23, 24]. Only recently have some authors considered compression schemes in a general, real-valued learning scenario [13]. Cotter, Shalev-Shwartz, and Srebro have studied compression for classification using SVMs to prove that for general distributions, compressing classifiers with low generalization error is not possible [9]. The general phenomenon of eigenvalue decay of the Gram matrix has been studied from both a theoretical and applied perspective. Some empirical studies of eigenvalue decay and related discussion can be found in [27, 35, 38]. There has also been prior work relating eigenvalue decay to generalization error in the context of SVMs or kernel PCA (e.g., [29, 35]). Closely related notions to eigenvalue decay are that of local Rademacher complexity due to Bartlett, Bousquet, and Mendelson [4] (see also [5]) and that of effective dimensionality due to Zhang [42]. The above works of Bartlett et al. and Zhang give improved generalization bounds via data-dependent estimates of eigenvalue decay of the kernel. At a high level, the goal of these works is to work under an assumption on the effective dimension and improve Rademacher-based generalization error bounds from $1/\sqrt{m}$ to $1/m$ ($m$ is the number of samples) for functions embedded in an RKHS of unit norm. These works do not address the main obstacle of this paper, however, namely overcoming the complexity of the norm of the approximating RKHS.
Their techniques are mostly incomparable even though the intent of using effective dimension as a measure of complexity is the same. Shamir has shown that for general linear prediction problems with respect to square loss and norm bound $B$, a sample complexity of $\Omega(B)$ is required for gradient-based methods [33]. Our work shows that eigenvalue decay can dramatically reduce this dependence, even in the context of kernel regression where we want to run in time polynomial in $n$, the dimension, rather than the (much larger) dimension of the RKHS.

Recent Work on Learning Neural Networks. Due in part to the recent exciting developments in deep learning, there have been several works giving provable results for learning neural networks with various activations (threshold, sigmoid, or ReLU). For the most part, these results take various assumptions on either 1) the distribution (e.g., Gaussian or log-concave) or 2) the structure of the network architecture (e.g., sparse, random, or non-overlapping weight vectors) or both, and often have a bad dependence on one or more of the relevant parameters (dimension, number of hidden units, depth, or accuracy). Another way to restrict the problem is to work only in the noiseless/realizable setting. Works that fall into one or more of these categories include [20, 44, 40, 17, 31, 41, 11]. Kernel methods have been applied previously to learning neural networks [43, 26, 16, 12]. The current broadest class of networks known to be learnable in fully polynomial time in all parameters with no assumptions is due to Goel et al. [16], who showed how to learn a sum of one hidden layer of sigmoids over the domain of $S^{n-1}$, the unit sphere in $n$ dimensions. We are not aware of other prior work that takes only a distributional assumption on the marginal and achieves fully polynomial-time algorithms for even simple networks (for example, one hidden layer of ReLUs).
Much work has also focused on the ability of gradient descent to succeed in parameter estimation for learning neural networks under various assumptions, with an intense focus on the structure of local versus global minima [8, 18, 7, 37]. Here we are interested in the traditional task of learning in the non-realizable or agnostic setting and allow ourselves to output a hypothesis outside the function class (i.e., we allow improper learning). It is well known that for even simple neural networks, for example for learning a sigmoid with respect to square loss, there may be many bad local minima [1]. Improper learning allows us to avoid these pitfalls.

2 Preliminaries

Notation. The input space is denoted by $\mathcal{X}$ and the output space is denoted by $\mathcal{Y}$. Vectors are represented with boldface letters such as $\mathbf{x}$. We denote a kernel function by $k_\psi(\mathbf{x}, \mathbf{x}') = \langle \psi(\mathbf{x}), \psi(\mathbf{x}') \rangle$, where $\psi$ is the associated feature map for the kernel and $\mathcal{K}_\psi$ is the corresponding reproducing kernel Hilbert space (RKHS). For necessary background material on kernel methods we refer the reader to [30].

Selection and Compression Schemes. It is well known that in the context of PAC learning Boolean function classes, a suitable type of compression of the training data implies learnability [25]. Perhaps surprisingly, the details regarding the relationship between compression and certain other real-valued learning tasks have not been worked out until very recently. A convenient framework for us will be the notion of compression and selection schemes due to David et al. [13]. A selection scheme is a pair of maps $(\kappa, \rho)$ where $\kappa$ is the selection map and $\rho$ is the reconstruction map. $\kappa$ takes as input a sample $S = ((\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m))$ and outputs a sub-sample $S'$ and a finite binary string $b$ as side information. $\rho$ takes this input and outputs a hypothesis $h$. The size of the selection scheme is defined to be $k(m) = |S'| + |b|$.
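Once a selection scheme's size $k(m)$ is pinned down, the generalization penalty it pays (the $\epsilon$ term of the theorem quoted below, $50 \cdot (k\log(m/k) + \log(1/\delta))/m$) can be evaluated directly. A small sketch with our own helper function and illustrative numbers:

```python
import math

def compression_epsilon(k, m, delta):
    """The epsilon term of the compression generalization bound:
    50 * (k*log(m/k) + log(1/delta)) / m, valid when k <= m/2."""
    assert 2 * k <= m, "the bound requires k <= m/2"
    return 50.0 * (k * math.log(m / k) + math.log(1.0 / delta)) / m

# The penalty vanishes only when k is much smaller than m:
eps_small = compression_epsilon(k=20, m=100_000, delta=0.05)
eps_large = compression_epsilon(k=2_000, m=100_000, delta=0.05)
print(eps_small, eps_large)
```

With $m = 100{,}000$ samples, a compression of size 20 gives a penalty under 0.1, while size 2,000 already makes the bound vacuous for losses bounded by 1; this is why the paper aims for compression size controlled by the effective dimension rather than by $m$.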
We present a slightly modified version of the definition of an approximate compression scheme due to [13]:

Definition 3 (($\epsilon, \delta$)-approximate agnostic compression scheme). A selection scheme $(\kappa, \rho)$ is an $(\epsilon, \delta)$-approximate agnostic compression scheme for hypothesis class $\mathcal{H}$ and sample satisfying property $P$ if for all samples $S$ that satisfy $P$, with probability $1 - \delta$, $f = \rho(\kappa(S))$ satisfies $\sum_{i=1}^m l(f(\mathbf{x}_i), y_i) \le \min_{h \in \mathcal{H}} \left( \sum_{i=1}^m l(h(\mathbf{x}_i), y_i) \right) + \epsilon$.

Compression has connections to learning in the general loss setting through the following theorem, which shows that as long as $k(m)$ is small, the selection scheme generalizes.

Theorem 4 (Theorem 30.2 [32], Theorem 3.2 [13]). Let $(\kappa, \rho)$ be a selection scheme of size $k = k(m)$, and let $A_S = \rho(\kappa(S))$. Given $m$ i.i.d. samples drawn from any distribution $D$ such that $k \le m/2$, for constant-bounded loss function $l : \mathcal{Y}' \times \mathcal{Y} \to \mathbb{R}^+$, with probability $1 - \delta$ we have
$$\mathbb{E}_{(\mathbf{x},y)\sim D}[l(A_S(\mathbf{x}), y)] - \frac{1}{m}\sum_{i=1}^m l(A_S(\mathbf{x}_i), y_i) \le \sqrt{\epsilon \cdot \left( \frac{1}{m}\sum_{i=1}^m l(A_S(\mathbf{x}_i), y_i) \right)} + \epsilon$$
where $\epsilon = 50 \cdot \frac{k \log(m/k) + \log(1/\delta)}{m}$.

3 Problem Overview

In this section we give a general outline for our main result. Let $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$ be a training set of samples drawn i.i.d. from some arbitrary distribution $D$ on $\mathcal{X} \times [0, 1]$ where $\mathcal{X} \subseteq \mathbb{R}^n$. Let us consider a concept class $\mathcal{C}$ such that for all $c \in \mathcal{C}$ and $\mathbf{x} \in \mathcal{X}$ we have $c(\mathbf{x}) \in [0, 1]$. We wish to learn the concept class $\mathcal{C}$ with respect to the square loss, that is, we wish to find $c \in \mathcal{C}$ that approximately minimizes $\mathbb{E}_{(\mathbf{x},y)\sim D}[(c(\mathbf{x}) - y)^2]$. A common way of solving this is by solving the empirical risk minimization (ERM) problem given below and subsequently proving that it generalizes.

Optimization Problem 1
$$\underset{c \in \mathcal{C}}{\text{minimize}} \quad \frac{1}{m}\sum_{i=1}^m (c(\mathbf{x}_i) - y_i)^2$$

Unfortunately, it may not be possible to efficiently solve the ERM in polynomial time due to issues such as non-convexity.
A way of tackling this is to show that the concept class can be approximately minimized by another hypothesis class of linear functions in a high-dimensional feature space (this in turn presents new obstacles for proving generalization-error bounds, which is the focus of this paper).

Definition 5 ($\epsilon$-approximation). Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be function classes mapping domain $\mathcal{X}$ to $\mathbb{R}$. $\mathcal{C}_1$ is $\epsilon$-approximated by $\mathcal{C}_2$ if for every $c \in \mathcal{C}_1$ there exists $c' \in \mathcal{C}_2$ such that for all $\mathbf{x} \in \mathcal{X}$, $|c(\mathbf{x}) - c'(\mathbf{x})| \le \epsilon$.

Suppose $\mathcal{C}$ can be $\epsilon$-approximated in the above sense by the hypothesis class $\mathcal{H}_\psi = \{\mathbf{x} \to \langle \mathbf{v}, \psi(\mathbf{x}) \rangle \mid \mathbf{v} \in \mathcal{K}_\psi, \langle \mathbf{v}, \mathbf{v} \rangle \le B\}$ for some $B$ and kernel function $k_\psi$. We further assume that the kernel is bounded, that is, $|k_\psi(\mathbf{x}, \mathbf{x}')| \le M$ for some $M > 0$ for all $\mathbf{x}, \mathbf{x}' \in \mathcal{X}$. Thus, the problem relaxes to the following:

Optimization Problem 2
$$\underset{\mathbf{v} \in \mathcal{K}_\psi}{\text{minimize}} \quad \frac{1}{m}\sum_{i=1}^m (\langle \mathbf{v}, \psi(\mathbf{x}_i) \rangle - y_i)^2 \quad \text{subject to} \quad \langle \mathbf{v}, \mathbf{v} \rangle \le B$$

Using the representer theorem, we have that the optimal solution for the above is of the form $\mathbf{v}^* = \sum_{i=1}^m \alpha_i \psi(\mathbf{x}_i)$ for some $\alpha \in \mathbb{R}^m$. Denoting by $K$ the sample kernel matrix with $K_{i,j} = k_\psi(\mathbf{x}_i, \mathbf{x}_j)$, the above optimization problem is equivalent to the following:

Optimization Problem 3
$$\underset{\alpha \in \mathbb{R}^m}{\text{minimize}} \quad \frac{1}{m}\|K\alpha - Y\|_2^2 \quad \text{subject to} \quad \alpha^T K \alpha \le B$$

where $Y$ is the vector corresponding to all $y_i$ and $\|Y\|_\infty \le 1$ since $\forall i \in [m],\ y_i \in [0, 1]$. Let $\alpha_B$ be the optimal solution of the above problem. This is known to be efficiently solvable in $\mathrm{poly}(m, n)$ time as long as the kernel function is efficiently computable. Applying Rademacher complexity bounds to $\mathcal{H}_\psi$ yields generalization error bounds that decrease, roughly, on the order of $B/\sqrt{m}$ (cf. Supplemental 1.1). If $B$ is exponential in $1/\epsilon$, the accuracy parameter, or in $n$, the dimension, as in the case of bounded-depth networks of ReLUs, then this dependence leads to exponential sample complexity.
As mentioned in Section 1, in the context of eigenvalue decay, various results [42, 4, 5] have been obtained to improve the dependence of $B/\sqrt{m}$ to $B/m$, but little is known about improving the dependence on $B$. Our goal is to show that eigenvalue decay of the empirical Gram matrix does yield generalization bounds with better dependence on $B$. The key is to develop a novel compression scheme for kernelized ridge regression. We give a step-by-step analysis for how to generate an approximate, compressed version of the solution to Optimization Problem 3. Then, we will carefully analyze the bit complexity of our approximate solution and realize our compression scheme. Finally, we can put everything together and show how quantitative bounds on eigenvalue decay directly translate into compression schemes with low generalization error.

4 Compressing the Kernel Solution

Through a sequence of steps, we will sparsify $\alpha$ to find a solution of much smaller bit complexity that is still an approximate solution (to within a small additive error). The quality and size of the approximation will depend on the eigenvalue decay.

Lagrangian Relaxation. We relax Optimization Problem 3 and consider the Lagrangian version of the problem to account for the norm-bound constraint. This version is convenient for us, as it has a nice closed-form solution.

Optimization Problem 4
$$\underset{\alpha \in \mathbb{R}^m}{\text{minimize}} \quad \frac{1}{m}\|K\alpha - Y\|_2^2 + \lambda \alpha^T K \alpha$$

We will later set $\lambda$ such that the error of considering this relaxation is small. It is easy to see that the optimal solution for the above Lagrangian version is $\alpha = (K + \lambda m I)^{-1} Y$.

Preconditioning. To avoid extremely small or zero eigenvalues, we consider a perturbed version of $K$, $K_\gamma = K + \gamma m I$. This ensures that the eigenvalues of $K_\gamma$ are always greater than or equal to $\gamma m$. This property is useful in our later analysis.
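The closed forms above are each a single linear solve. The sketch below (synthetic one-dimensional data and an RBF kernel, both illustrative choices of ours) computes the Lagrangian solution $\alpha = (K + \lambda m I)^{-1} Y$ and the preconditioned variant $(K + (\lambda + \gamma) m I)^{-1} Y$, and checks that a tiny $\gamma$ barely perturbs the solution:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 60
x = rng.uniform(-1, 1, size=(m, 1))
y = np.clip(0.5 + 0.4 * np.sin(3 * x[:, 0]), 0.0, 1.0)   # labels in [0, 1]

K = np.exp(-(x - x.T) ** 2)                               # RBF Gram matrix
lam, gamma = 1e-3, 1e-6

# Optimization Problem 4: alpha = (K + lam*m*I)^{-1} Y
alpha = np.linalg.solve(K + lam * m * np.eye(m), y)

# Preconditioned version: alpha_gamma = (K + (lam + gamma)*m*I)^{-1} Y
alpha_gamma = np.linalg.solve(K + (lam + gamma) * m * np.eye(m), y)

train_mse = np.mean((K @ alpha - y) ** 2)
print(train_mse, np.max(np.abs(alpha - alpha_gamma)))
```

The preconditioning term $\gamma m I$ only bounds the smallest eigenvalue away from zero (which matters for the bit-complexity analysis); for $\gamma \ll \lambda$ the two solutions are numerically close.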
Henceforth, we consider the following optimization problem on the perturbed version of $K$:

Optimization Problem 5
$$\underset{\alpha \in \mathbb{R}^m}{\text{minimize}} \quad \frac{1}{m}\|K_\gamma \alpha - Y\|_2^2 + \lambda \alpha^T K_\gamma \alpha$$

The optimal solution for the perturbed version is $\alpha_\gamma = (K_\gamma + \lambda m I)^{-1} Y = (K + (\lambda + \gamma) m I)^{-1} Y$.

Sparsifying the Solution via Nyström Sampling. We will now use tools from Nyström sampling to sparsify the solution obtained from Optimization Problem 5. To do so, we first recall the definition of effective dimension or degrees of freedom for the kernel [42]:

Definition 6 ($\eta$-effective dimension). For a positive semidefinite $m \times m$ matrix $K$ and parameter $\eta$, the $\eta$-effective dimension of $K$ is defined as $d_\eta(K) = \mathrm{tr}(K(K + \eta m I)^{-1})$.

Various kernel approximation results have relied on this quantity, and here we state a recent result due to [28], who gave the first application-independent result showing that there is an efficient way of computing a set of columns of $K$ such that $\bar{K}$, a matrix constructed from the columns, is close in 2-norm to the matrix $K$. More formally:

Theorem 7 ([28]). For kernel matrix $K$, there exists an algorithm that gives a set of $O(d_\eta(K) \log(d_\eta(K)/\delta))$ columns such that $\bar{K} = KS(S^T K S)^\dagger S^T K$, where $S$ is the matrix that selects the specific columns, satisfies with probability $1 - \delta$, $\bar{K} \preceq K \preceq \bar{K} + \eta m I$.

It can be shown that $\bar{K}$ is positive semi-definite. Also, the above implies $\|K - \bar{K}\|_2 \le \eta m$. We use the decay to approximate the kernel matrix with a low-rank matrix constructed using the columns of $K$. Let $\bar{K}_\gamma$ be the matrix obtained by applying Theorem 7 to $K_\gamma$ for $\eta > 0$, and consider the following optimization problem:

Optimization Problem 6
$$\underset{\alpha \in \mathbb{R}^m}{\text{minimize}} \quad \frac{1}{m}\|\bar{K}_\gamma \alpha - Y\|_2^2 + \lambda \alpha^T \bar{K}_\gamma \alpha$$

The optimal solution for the above is $\bar{\alpha}_\gamma = (\bar{K}_\gamma + \lambda m I)^{-1} Y$. Since $\bar{K}_\gamma = K_\gamma S(S^T K_\gamma S)^\dagger S^T K_\gamma$, solving the above enables us to get a solution $\alpha^* = S(S^T K_\gamma S)^\dagger S^T K_\gamma \bar{\alpha}_\gamma$, which is a $k$-sparse vector for $k = O(d_\eta(K_\gamma) \log(d_\eta(K_\gamma)/\delta))$.

Bounding the Error of the Sparse Solution.
We bound the additional error incurred by our sparse hypothesis $\alpha^*$ compared to $\alpha_B$. To do so, we bound the error for each of the approximations (sparsification, preconditioning, and Lagrangian relaxation) and then combine them to give the following theorem.

Theorem 8 (Total Error). For $\lambda = \frac{\epsilon^2}{81B}$, $\eta \le \frac{\epsilon^3}{729B}$ and $\gamma \le \frac{\epsilon^3}{729B}$, we have
$$\frac{1}{m}\|K_\gamma \alpha^* - Y\|_2^2 \le \frac{1}{m}\|K \alpha_B - Y\|_2^2 + \epsilon.$$

Computing the Sparsity of the Solution. To compute the sparsity of the solution, we need to bound $d_\eta(K_\gamma)$. We consider the following different eigenvalue decays.

Definition 9 (Eigenvalue Decay). Let the real eigenvalues of a symmetric $m \times m$ matrix $A$ be denoted by $\lambda_1 \ge \cdots \ge \lambda_m$.

1. $A$ is said to have $(C, p)$-polynomial eigenvalue decay if for all $i \in \{1, \ldots, m\}$, $\lambda_i \le C i^{-p}$.

2. $A$ is said to have $C$-exponential eigenvalue decay if for all $i \in \{1, \ldots, m\}$, $\lambda_i \le C e^{-i}$.

Note that in the above definitions $C$ and $p$ are not necessarily constants. We allow $C$ and $p$ to depend on other parameters (the choice of these parameters will be made explicit in subsequent theorem statements). We can now bound the effective dimension in terms of eigenvalue decay:

Theorem 10 (Bounding effective dimension). For $\gamma m \le \eta$,

1. If $K/m$ has $(C, p)$-polynomial eigenvalue decay for $p > 1$, then $d_\eta(K_\gamma) \le \left(\frac{C}{(p-1)\eta}\right)^{1/p} + 2$.

2. If $K/m$ has $C$-exponential eigenvalue decay, then $d_\eta(K_\gamma) \le \log\left(\frac{C}{(e-1)\eta}\right) + 2$.

5 Compression Scheme

The above analysis gives us a sparse solution for the problem and, in turn, an $\epsilon$-approximation for the error on the overall sample $S$ with probability $1 - \delta$. We can now fully define our compression scheme for the hypothesis class $\mathcal{H}_\psi$ with respect to samples satisfying the eigenvalue decay property.

Selection Scheme $\kappa$: Given input $S = (\mathbf{x}_i, y_i)_{i=1}^m$,

1. Use RLS-Nyström sampling [28] to compute $\bar{K}_\gamma = K_\gamma S(S^T K_\gamma S)^\dagger S^T K_\gamma$ for $\eta = \frac{\epsilon^3}{5832B}$ and $\gamma = \frac{\epsilon^3}{5832Bm}$. Let $I$ be the sub-sample corresponding to the columns selected using $S$.

2. Solve Optimization Problem 6 for $\lambda = \frac{\epsilon^2}{324B}$ to get $\bar{\alpha}_\gamma$.

3.
Compute the $|I|$-sparse vector $\alpha^* = S(S^T K_\gamma S)^\dagger S^T K_\gamma \bar{\alpha}_\gamma = K_\gamma^{-1} \bar{K}_\gamma \bar{\alpha}_\gamma$ ($K_\gamma$ is invertible as all its eigenvalues are non-zero).

4. Output subsample $I$ along with $\tilde{\alpha}^*$, which is $\alpha^*$ truncated to precision $\frac{\epsilon}{4M|I|}$ per non-zero index.

Reconstruction Scheme $\rho$: Given input subsample $I$ and $\tilde{\alpha}^*$, output hypothesis $h_S(\mathbf{x}) = \mathrm{clip}_{0,1}(\mathbf{w}^T \tilde{\alpha}^*)$, where $\mathbf{w}$ is a vector with entries $K(\mathbf{x}_i, \mathbf{x}) + \gamma m \mathbb{1}[\mathbf{x} = \mathbf{x}_i]$ for $i \in I$ and 0 otherwise, and $\gamma = \frac{\epsilon^3}{5832Bm}$. Note, $\mathrm{clip}_{a,b}(x) = \max(a, \min(b, x))$ for some $a < b$.

The following theorem shows that the above is a compression scheme for $\mathcal{H}_\psi$.

Theorem 11. $(\kappa, \rho)$ is an $(\epsilon, \delta)$-approximate agnostic compression scheme for the hypothesis class $\mathcal{H}_\psi$ for sample $S$, of size $k(m, \epsilon, \delta, B, M) = O\left(d \log\frac{d}{\delta} \log\frac{\sqrt{m}BMd\log(d/\delta)}{\epsilon^4}\right)$, where $d$ is the $\eta$-effective dimension of $K_\gamma$ for $\eta = \frac{\epsilon^3}{5832B}$ and $\gamma = \frac{\epsilon^3}{5832Bm}$.

6 Putting It All Together: From Compression to Learning

We now present our final algorithm: Compressed Kernel Regression (Algorithm 1). Note that the algorithm is efficient and takes at most $O(m^3)$ time. For our learnability result, we restrict distributions to those that satisfy eigenvalue decay.

Definition 12 (Distribution Satisfying Eigenvalue Decay). Consider distribution $D$ over $\mathcal{X}$ and kernel function $k_\psi$. Let $S$ be a sample drawn i.i.d. from the distribution $D$ and $K$ be the empirical Gram matrix corresponding to kernel function $k_\psi$ on $S$.

1. $D$ is said to satisfy $(C, p, N)$-polynomial eigenvalue decay if with probability $1 - \delta$ over the drawn sample of size $m \ge N$, $K/m$ satisfies $(C, p)$-polynomial eigenvalue decay.

Algorithm 1 Compressed Kernel Regression
Input: Samples $S = (\mathbf{x}_i, y_i)_{i=1}^m$, Gram matrix $K$ on $S$, constants $\epsilon, \delta > 0$, norm bound $B$ and maximum kernel function value $M$ on $\mathcal{X}$.
1: Using RLS-Nyström sampling [28] with input $(K_\gamma, \eta m)$ for $\gamma = \frac{\epsilon^3}{5832Bm}$ and $\eta = \frac{\epsilon^3}{5832B}$, compute $\bar{K}_\gamma = K_\gamma S(S^T K_\gamma S)^\dagger S^T K_\gamma$. Let $I$ be the subsample corresponding to the columns selected using $S$. Note that the number of columns selected depends on the $\eta$-effective dimension of $K_\gamma$.
2: Solve Optimization Problem 6 for $\lambda = \frac{\epsilon^2}{324B}$ to get $\bar{\alpha}_\gamma$ over $S$.
3: Compute $\alpha^* = S(S^T K_\gamma S)^\dagger S^T K_\gamma \bar{\alpha}_\gamma = K_\gamma^{-1} \bar{K}_\gamma \bar{\alpha}_\gamma$.
4: Compute $\tilde{\alpha}^*$ by truncating each entry of $\alpha^*$ up to precision $\frac{\epsilon}{4M|I|}$.
Output: $h_S$ such that for all $\mathbf{x} \in \mathcal{X}$, $h_S(\mathbf{x}) = \mathrm{clip}_{0,1}(\mathbf{w}^T \tilde{\alpha}^*)$, where $\mathbf{w}$ is a vector with entries $K(\mathbf{x}_i, \mathbf{x}) + \gamma m \mathbb{1}[\mathbf{x} = \mathbf{x}_i]$ for $i \in I$ and 0 otherwise.

2. $D$ is said to satisfy $(C, N)$-exponential eigenvalue decay if with probability $1 - \delta$ over the drawn sample of size $m \ge N$, $K/m$ satisfies $C$-exponential eigenvalue decay.

Our main theorem proves generalization of the hypothesis output by Algorithm 1 for distributions satisfying eigenvalue decay in the above sense.

Theorem 13 (Formal for Theorem 1). Fix function class $\mathcal{C}$ with output bounded in $[0, 1]$ and $M$-bounded kernel function $k_\psi$ such that $\mathcal{C}$ is $\epsilon_0$-approximated by $\mathcal{H}_\psi = \{\mathbf{x} \to \langle \mathbf{v}, \psi(\mathbf{x}) \rangle \mid \mathbf{v} \in \mathcal{K}_\psi, \langle \mathbf{v}, \mathbf{v} \rangle \le B\}$ for some $\psi, B$. Consider a sample $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^m$ drawn i.i.d. from $D$ on $\mathcal{X} \times [0, 1]$. There exists an algorithm $\mathcal{A}$ that outputs hypothesis $h_S = \mathcal{A}(S)$ such that:

1. If $D_\mathcal{X}$ satisfies $(C, p, m)$-polynomial eigenvalue decay with probability $1 - \delta/4$, then with probability $1 - \delta$, for $m = \tilde{O}((CB)^{1/p} \log(M) \log(1/\delta)/\epsilon^{2+3/p})$,
$$\mathbb{E}_{(\mathbf{x},y)\sim D}(h_S(\mathbf{x}) - y)^2 \le \min_{c \in \mathcal{C}} \mathbb{E}_{(\mathbf{x},y)\sim D}(c(\mathbf{x}) - y)^2 + 2\epsilon_0 + \epsilon$$

2. If $D_\mathcal{X}$ satisfies $(C, m)$-exponential eigenvalue decay with probability $1 - \delta/4$, then with probability $1 - \delta$, for $m = \tilde{O}(\log(CB) \log(M) \log(1/\delta)/\epsilon^2)$,
$$\mathbb{E}_{(\mathbf{x},y)\sim D}(h_S(\mathbf{x}) - y)^2 \le \min_{c \in \mathcal{C}} \mathbb{E}_{(\mathbf{x},y)\sim D}(c(\mathbf{x}) - y)^2 + 2\epsilon_0 + \epsilon$$

Algorithm $\mathcal{A}$ runs in time $\mathrm{poly}(m, n)$.

Remark: The above theorem can be extended to different rates of eigenvalue decay. For example, for finite rank $r$ the obtained bound is independent of $B$ but dependent instead on $r$. Also, as in the proof of Theorem 10, it suffices for the eigenvalue decay to hold only after sufficiently large $i$.

7 Learning Neural Networks

Here we apply our main theorem to the problem of learning neural networks. For technical definitions of neural networks, we refer the reader to [43].

Definition 14 (Class of Neural Networks [16]).
Let $N[\sigma, D, W, T]$ be the class of fully-connected, feed-forward networks with $D$ hidden layers, activation function $\sigma$, and quantities $W$ and $T$ described as follows:

1. Weight vectors in layer 0 have 2-norm bounded by $T$.

2. Weight vectors in layers $1, \ldots, D$ have 1-norm bounded by $W$.

3. For each hidden unit $\sigma(\mathbf{w} \cdot \mathbf{z})$ in the network, we have $|\mathbf{w} \cdot \mathbf{z}| \le T$ (by $\mathbf{z}$ we denote the input feeding into unit $\sigma$ from the previous layer).

We consider activation functions $\sigma_{relu}(x) = \max(0, x)$ and $\sigma_{sig}(x) = \frac{1}{1+e^{-x}}$, though other activation functions fit within our framework. Goel et al. [16] showed that the class of ReLUs/sigmoids along with their compositions can be approximated by linear functions in a high-dimensional Hilbert space (corresponding to a particular type of polynomial kernel). As mentioned earlier, the sample complexity of prior work depends linearly on $B$, which, for even a single ReLU, is exponential in $1/\epsilon$. Assuming sufficiently strong eigenvalue decay, we can show that we can obtain fully polynomial-time algorithms for the above classes.

Theorem 15. For $\epsilon, \delta > 0$, consider $D$ on $S^{n-1} \times [0, 1]$ such that:

1. For $\mathcal{C}_{relu} = N[\sigma_{relu}, 0, \cdot, 1]$, $D_\mathcal{X}$ satisfies $(C, p, m)$-polynomial eigenvalue decay for $p \ge \xi/\epsilon$,

2. For $\mathcal{C}_{relu-D} = N[\sigma_{relu}, D, W, T]$, $D_\mathcal{X}$ satisfies $(C, p, m)$-polynomial eigenvalue decay for $p \ge (\xi W^D D T/\epsilon)^D$,

3. For $\mathcal{C}_{sig-D} = N[\sigma_{sig}, D, W, T]$, $D_\mathcal{X}$ satisfies $(C, p, m)$-polynomial eigenvalue decay for $p \ge (\xi T \log(W^D D/\epsilon))^D$,

where $D_\mathcal{X}$ is the marginal distribution on $\mathcal{X} = S^{n-1}$, $\xi > 0$ is some sufficiently large constant, and $C \le (n \cdot 1/\epsilon \cdot \log(1/\delta))^{\zeta p}$ for some constant $\zeta > 0$. The value of $m$ is obtained from Theorem 13 as $m = \tilde{O}(C^{1/p}/\epsilon^{2+3/p})$. Each decay assumption above implies an algorithm for agnostically learning the corresponding class on $S^{n-1} \times [0, 1]$ with respect to the square loss in time $\mathrm{poly}(n, 1/\epsilon, \log(1/\delta))$.

Note that assuming an exponential eigenvalue decay (stronger than polynomial) will result in efficient learnability for much broader classes of networks.
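Stepping back from the specific network classes, the compression pipeline behind Algorithm 1 (precondition, Nyström-approximate, solve the ridge problem, truncate, clip) can be sketched end to end. This is a simplified illustration, not the paper's implementation: uniform column sampling stands in for RLS-Nyström sampling, and the constants $\lambda$, $\gamma$, and the truncation precision are illustrative rather than the paper's exact settings:

```python
import numpy as np

def compressed_kernel_regression(K, y, s, lam, gamma, precision, rng):
    """Simplified sketch of the Compressed Kernel Regression pipeline."""
    m = K.shape[0]
    Kg = K + gamma * m * np.eye(m)                    # preconditioning: K_gamma
    cols = rng.choice(m, size=s, replace=False)       # stand-in for RLS-Nystrom sampling
    C = Kg[:, cols]
    W = Kg[np.ix_(cols, cols)]
    K_bar = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T  # low-rank approximation of K_gamma
    a_bar = np.linalg.solve(K_bar + lam * m * np.eye(m), y)
    a_star = np.linalg.solve(Kg, K_bar @ a_bar)       # supported on cols in exact arithmetic
    return np.round(a_star / precision) * precision   # truncate entries to fixed precision

rng = np.random.default_rng(3)
m = 80
x = rng.uniform(-1, 1, size=(m, 1))
y = np.clip(0.5 + 0.4 * np.sin(2 * x[:, 0]), 0.0, 1.0)
K = np.exp(-(x - x.T) ** 2)

alpha = compressed_kernel_regression(K, y, s=25, lam=1e-3, gamma=1e-6,
                                     precision=1e-4, rng=rng)
pred = np.clip(K @ alpha, 0.0, 1.0)                   # clip_{0,1}, as in the reconstruction map
mse = np.mean((pred - y) ** 2)
print(mse)
```

The point of the construction is that only the selected columns and the truncated coefficients need to be stored, so the hypothesis admits a short description while still fitting the data well.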
Since it is not known how to agnostically learn even a single ReLU with respect to arbitrary distributions on $S^{n-1}$ in polynomial time⁴, much less a network of ReLUs, we state the following corollary highlighting the decay we require to obtain efficient learnability for simple networks:

Corollary 16 (Restating Corollary 2). Let $\mathcal{C}$ be the class of all fully-connected networks of ReLUs with one hidden layer of size $\ell$ feeding into a final output ReLU activation, where the 2-norms of all weight vectors are bounded by 1. Then (suppressing the parameter $m$ for simplicity), assuming $(C, \ell/\epsilon)$-polynomial eigenvalue decay for $C = \mathrm{poly}(n, 1/\epsilon, \ell)$, $\mathcal{C}$ is learnable in polynomial time with respect to square loss on $S^{n-1}$. If ReLU is replaced with sigmoid, then we require eigenvalue decay of $i^{-\sqrt{\ell}\log(\sqrt{\ell}/\epsilon)}$.

8 Conclusions and Future Work

We have proposed the first set of distributional assumptions that guarantee fully polynomial-time algorithms for learning expressive classes of neural networks (without restricting the structure of the network). The key abstraction was that of a compression scheme for kernel approximations, specifically Nyström sampling. We proved that eigenvalue decay of the Gram matrix reduces the dependence on the norm $B$ in the kernel regression problem. Prior distributional assumptions, such as the underlying marginal equaling a Gaussian, neither lead to fully polynomial-time algorithms nor are representative of real-world data sets⁵. Eigenvalue decay, on the other hand, has been observed in practice and does lead to provably efficient algorithms for learning neural networks. A natural criticism of our assumption is that the rate of eigenvalue decay we require is too strong. In some cases, especially for large-depth networks with many hidden units, this may be true⁶. Note, however, that our results show that even moderate eigenvalue decay will lead to improved algorithms. Further, it is quite possible our assumptions can be relaxed.
An obvious question for future work is: what is the minimal rate of eigenvalue decay needed for efficient learnability? Another direction would be to understand how these eigenvalue-decay assumptions relate to other distributional assumptions.
Footnote 4: Goel et al. [16] show that agnostically learning a single ReLU over {−1, 1}^n is as hard as learning sparse parities with noise. This reduction can be extended to the case of distributions over S^{n−1} [3].
Footnote 5: Despite these limitations, we still think uniform or Gaussian assumptions are worthwhile and have provided highly nontrivial learning results.
Footnote 6: It is useful to keep in mind that agnostically learning even a single ReLU with respect to all distributions seems computationally intractable, and that our required eigenvalue decay in this case is only a function of the accuracy parameter ϵ.
Acknowledgements. We would like to thank Misha Belkin and Nikhil Srivastava for very helpful conversations regarding kernel ridge regression and eigenvalue decay. We also thank Daniel Hsu, Karthik Sridharan, and Justin Thaler for useful feedback. The analogy between eigenvalue decay and power-law graphs is due to Raghu Meka.
References
[1] Peter Auer, Mark Herbster, and Manfred K. Warmuth. Exponentially many local minima for single neurons. In Advances in Neural Information Processing Systems, volume 8, pages 316–322. The MIT Press, 1996.
[2] Haim Avron, Huy Nguyen, and David Woodruff. Subspace embeddings for the polynomial kernel. In Advances in Neural Information Processing Systems, pages 2258–2266, 2014.
[3] Peter Bartlett, Daniel Kane, and Adam Klivans. Personal communication.
[4] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4), 2005.
[5] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[6] Pawel Brach, Marek Cygan, Jakub Lacki, and Piotr Sankowski.
Algorithmic complexity of power law networks. CoRR, abs/1507.02426, 2015.
[7] Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with Gaussian inputs. CoRR, abs/1702.07966, 2017.
[8] Anna Choromanska, Mikael Henaff, Michaël Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, volume 38 of JMLR Workshop and Conference Proceedings. JMLR.org, 2015.
[9] Andrew Cotter, Shai Shalev-Shwartz, and Nati Srebro. Learning optimally sparse support vector machines. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 266–274, 2013.
[10] Amit Daniely. Complexity theoretic limitations on learning halfspaces. In STOC, pages 105–117. ACM, 2016.
[11] Amit Daniely. SGD learns the conjugate kernel class of the network. CoRR, abs/1702.08503, 2017.
[12] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In NIPS, pages 2253–2261, 2016.
[13] Ofir David, Shay Moran, and Amir Yehudayoff. On statistical learning via the lens of compression. arXiv preprint arXiv:1610.03592, 2016.
[14] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[15] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008.
[16] Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the ReLU in polynomial time. arXiv preprint arXiv:1611.10258, 2016.
[17] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of nonconvexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
[18] Kenji Kawaguchi. Deep learning without poor local minima. In NIPS, pages 586–594, 2016.
[19] Adam R.
Klivans and Pravesh Kothari. Embedding hard learning problems into Gaussian space. In APPROX-RANDOM, volume 28 of LIPIcs, pages 793–809. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014.
[20] Adam R. Klivans and Raghu Meka. Moment-matching polynomials. Electronic Colloquium on Computational Complexity (ECCC), 20:8, 2013.
[21] Adam R. Klivans and Alexander A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. Journal of Computer and System Sciences, 75(1):2–12, 2009.
[22] Anton Krohmer. Finding Cliques in Scale-Free Networks. Master's thesis, Saarland University, Germany, 2012.
[23] Dima Kuzmin and Manfred K. Warmuth. Unlabeled compression schemes for maximum classes. Journal of Machine Learning Research, 8:2047–2081, 2007.
[24] Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Technical report, University of California, Santa Cruz, 1986.
[25] Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Technical report, 1986.
[26] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014.
[27] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. CoRR, abs/1703.10622, 2017.
[28] Cameron Musco and Christopher Musco. Recursive sampling for the Nyström method. arXiv preprint arXiv:1605.07583, 2016.
[29] B. Schölkopf, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Generalization bounds via eigenvalues of the Gram matrix. Technical Report 99-035, NeuroCOLT, 1999.
[30] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[31] Hanie Sedghi and Anima Anandkumar. Provable methods for training neural networks with sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
[32] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[33] Ohad Shamir. The sample complexity of learning linear predictors with the squared loss. Journal of Machine Learning Research, 16:3475–3486, 2015.
[34] Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037, 2016.
[35] John Shawe-Taylor, Christopher K. I. Williams, Nello Cristianini, and Jaz Kandola. On the eigenspectrum of the Gram matrix and the generalization error of kernel PCA. IEEE Transactions on Information Theory, 51(7):2510–2522, 2005.
[36] Le Song, Santosh Vempala, John Wilmes, and Bo Xie. On the complexity of learning neural networks. arXiv preprint arXiv:1707.04615, 2017.
[37] Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. CoRR, abs/1605.08361, 2016.
[38] Ameet Talwalkar and Afshin Rostamizadeh. Matrix coherence and the Nyström method. CoRR, abs/1408.2044, 2014.
[39] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 13th International Conference on Neural Information Processing Systems, pages 661–667. MIT Press, 2000.
[40] Bo Xie, Yingyu Liang, and Le Song. Diversity leads to generalization in neural networks. CoRR, abs/1611.03131, 2016.
[41] Qiuyi Zhang, Rina Panigrahy, and Sushant Sachdeva. Electron-proton dynamics in deep learning. CoRR, abs/1702.00458, 2017.
[42] Tong Zhang. Effective dimension and generalization of kernel learning. In Advances in Neural Information Processing Systems, pages 471–478, 2003.
[43] Yuchen Zhang, Jason D. Lee, and Michael I. Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In International Conference on Machine Learning, pages 993–1001, 2016.
[44] Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, and Michael I. Jordan.
Learning halfspaces and neural networks with random initialization. CoRR, abs/1511.07948, 2015.
On Separability of Loss Functions, and Revisiting Discriminative Vs Generative Models
Adarsh Prasad, Machine Learning Dept., CMU, adarshp@andrew.cmu.edu
Alexandru Niculescu-Mizil, NEC Laboratories America, Princeton, NJ, USA, alex@nec-labs.com
Pradeep Ravikumar, Machine Learning Dept., CMU, pradeepr@cs.cmu.edu
Abstract
We revisit the classical analysis of generative vs discriminative models for general exponential families and high-dimensional settings. Towards this, we develop novel technical machinery, including a notion of separability of general loss functions, which allows us to provide a general framework to obtain ℓ∞ convergence rates for general M-estimators. We use this machinery to analyze ℓ∞ and ℓ2 convergence rates of generative and discriminative models, and provide insights into their nuanced behaviors in high dimensions. Our results are also applicable to differential parameter estimation, where the quantity of interest is the difference between generative model parameters.
1 Introduction
Consider the classical conditional generative model setting, where we have a binary random response Y ∈ {0, 1} and a random covariate vector X ∈ R^p, such that X|(Y = i) ∼ P_{θ_i} for i ∈ {0, 1}. Assuming that we know P(Y) and {P_{θ_i}}_{i=0}^1, we can use Bayes rule to predict the response Y given covariates X. This is said to be the generative model approach to classification. Alternatively, consider the conditional distribution P(Y|X) as specified by Bayes rule, also called the discriminative model corresponding to the generative model specified above. Learning this conditional model directly is said to be the discriminative model approach to classification.
In a classical paper [8], the authors provided theoretical justification for the common wisdom regarding generative and discriminative models: when the generative model assumptions hold, the generative model estimators initially converge faster as a function of the number of samples, but have the same asymptotic error rate as discriminative models; and when the generative model assumptions do not hold, the discriminative model estimators eventually overtake the generative model estimators. Their analysis, however, was for the specific generative-discriminative model pair of Naive Bayes and logistic regression, and moreover was not under a high-dimensional sampling regime, where the number of samples could even be smaller than the number of parameters. In this paper, we aim to extend their analysis to these more general settings. Doing so, however, required some novel technical and conceptual developments. To motivate the machinery we develop, consider why the Naive Bayes model estimator might initially converge faster. The Naive Bayes model makes the conditional independence assumption that P(X|Y) = ∏_{s=1}^p P(X_s|Y), so that the parameters of each of the conditional distributions P(X_s|Y) for s ∈ {1, . . . , p} can be estimated independently. The corresponding log-likelihood loss function is thus fully "separable" into multiple components. The logistic regression log-likelihood, on the other hand, is seemingly much less "separable"; in particular, it does not split into multiple components each of which can be estimated independently. In general, we do not expect the loss functions underlying statistical estimators to be fully separable into multiple components, so we need a more flexible notion of separability, under which different losses can be shown to be separable to differing degrees.
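To make the fully "separable" end of the spectrum concrete, here is a toy numpy sketch, assuming a Gaussian Naive Bayes model with known unit variances (so the class-conditional means are the only parameters; dimensions and sample sizes are illustrative): each coordinate's parameter is estimated from that coordinate's samples alone, independently of all others.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 20, 500
mu0, mu1 = np.zeros(p), np.ones(p)             # class-conditional means

# Generative data: X | Y=i ~ N(mu_i, I_p), with P(Y=1) = 1/2
y = rng.integers(0, 2, size=n)
X = np.where(y[:, None] == 1, mu1, mu0) + rng.standard_normal((n, p))

# Fitting is fully separable: the MLE for each coordinate's class-conditional
# mean depends only on that coordinate's samples, so the p estimation
# problems decouple completely.
mu1_hat = np.array([X[y == 1, s].mean() for s in range(p)])
mu0_hat = np.array([X[y == 0, s].mean() for s in range(p)])

# The induced discriminative (logistic) parameter is the difference of means.
theta_diff_hat = mu1_hat - mu0_hat
```

A logistic regression fit to the same data would instead couple all p coordinates through a single optimization, which is the contrast the separability notion below is designed to quantify.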
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
On a very related note, though it might seem unrelated at first, the analysis of ℓ∞ convergence rates of statistical estimators considerably lags that of, say, ℓ2 rates (see for instance the unified framework of [7], which is suited to ℓ2 rates but is highly sub-optimal for ℓ∞ rates). In part, the analysis of ℓ∞ rates is harder because it implicitly requires analysis at the level of individual coordinates of the parameter vector. While this is thus harder than an ℓ2 error analysis, intuitively it would be much easier if the loss function were to split into independent components involving individual coordinates. While general loss functions might not be so "fully separable", they might satisfy the softer notion of separability motivated above. In a contribution that should be of independent interest, we develop precisely such a softer notion of separability for general loss functions. We then use this notion of separability to derive ℓ∞ convergence rates for general M-estimators. Given this machinery, we are then able to contrast generative and discriminative models. We focus on the case where the generative models are specified by exponential family distributions, so that the corresponding discriminative models are logistic regression models with the generative model sufficient statistics as feature functions. To compare the convergence rates of the two models, we focus on the difference of the two generative model parameters, since this difference is also precisely the model parameter for the discriminative model counterpart of the generative model, via an application of Bayes rule. Moreover, as Li et al. [3] and others show, the ℓ2 convergence rate of the difference of the two parameters is what drives the classification error rates of both generative and discriminative model classifiers.
Incidentally, such a difference of generative model parameters has also attracted interest outside the context of classification, where it is called differential parameter learning [1, 14, 6]. We thus analyze the ℓ∞ as well as ℓ2 rates for both the generative and discriminative models, focusing on this parameter difference. As we show, unlike the case of Naive Bayes and logistic regression in low dimensions as studied in [8], this general high-dimensional setting is more nuanced, and in particular depends on the separability of the generative models. As we show, under some conditions on the models, generative and discriminative models not only have potentially different ℓ∞ rates, but also differing "burn-in" periods in terms of the minimum number of samples required for the convergence rates to hold. The choice of a generative vs discriminative model, namely that with the better sample complexity, thus depends on their corresponding separabilities. As a minor note, we also show that generative model M-estimators are not directly suitable in high dimensions, and provide a simple methodological fix in order to obtain better ℓ2 rates. We instantiate our results with two running examples of isotropic and non-isotropic Gaussian generative models, and corroborate our theory with instructive simulations.
2 Background and Setup
We consider the problem of differential parameter estimation under the following generative model. Let Y ∈ {0, 1} denote a binary response variable, and let X = (X_1, . . . , X_p) ∈ R^p be the covariates. For simplicity, we assume P[Y = 1] = P[Y = 0] = 1/2. We assume that, conditioned on the response variable, the covariates belong to an exponential family, X|Y ∼ P_{θ*_Y}(·), where:
P_{θ*_Y}(X|Y) = h(X) exp(⟨θ*_Y, φ(X)⟩ − A(θ*_Y)). (1)
Here θ*_Y is the vector of the true canonical parameters, A(θ) is the log-partition function, and φ(X) is the sufficient statistic.
We assume access to two sets of samples X^n_0 = {x^(0)_i}_{i=1}^n ∼ P_{θ*_0} and X^n_1 = {x^(1)_i}_{i=1}^n ∼ P_{θ*_1}. Given these samples, as noted in the introduction, we are particularly interested in estimating the differential parameter θ*_diff := θ*_1 − θ*_0, since this is also the model parameter corresponding to the discriminative model, as we show below. In high-dimensional sampling settings, we additionally assume that θ*_diff is at most s-sparse, i.e. ||θ*_diff||_0 ≤ s. We will use the following two exponential family generative models as running examples: isotropic and non-isotropic multivariate Gaussian models.
Isotropic Gaussians (IG). Let X = (X_1, . . . , X_p) ∼ N(µ, I_p) be an isotropic Gaussian random variable; its density can be written as:
P_µ(x) = (1/√((2π)^p)) exp(−(1/2)(x − µ)^T(x − µ)). (2)
Gaussian MRF (GMRF). Let X = (X_1, . . . , X_p) denote a zero-mean Gaussian random vector; its density is fully parametrized by the inverse covariance or concentration matrix Θ = (Σ)^{−1} ≻ 0 and can be written as:
P_Θ(x) = (1/√((2π)^p det(Θ^{−1}))) exp(−(1/2) x^T Θ x). (3)
Let d_Θ = max_{j∈[p]} ||Θ_{(:,j)}||_0 be the maximum number of non-zeros in any row (column) of Θ. Let κ_{Σ*} = |||(Θ*)^{−1}|||_∞, where |||M|||_∞ is the ℓ∞/ℓ∞ operator norm given by |||M|||_∞ = max_{j=1,...,p} Σ_{k=1}^p |M_jk|.
Generative Model Estimation. Here we proceed by estimating the two parameters {θ*_i}_{i=0}^1 individually. Letting θ̂_1 and θ̂_0 be the corresponding estimators, we can then estimate the difference of the parameters as θ̂_diff = θ̂_1 − θ̂_0. The most popular class of estimators for the individual parameters is based on maximum likelihood estimation (MLE), where we maximize the likelihood of the given data. For isotropic Gaussians, the negative log-likelihood function can be written as:
L^n_IG(θ) = θ^T θ/2 − θ^T µ̂, (4)
where µ̂ = (1/n) Σ_{i=1}^n x_i.
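For the isotropic Gaussian running example, minimizing the negative log-likelihood in (4) gives the sample mean, so the generative differential estimate is just a difference of sample means. A minimal numpy sketch (the dimensions, sparsity level, and signal strength below are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, s = 50, 200, 5

theta0 = np.zeros(p)                            # canonical parameter = mean here
theta1 = np.zeros(p); theta1[:s] = 1.0          # theta*_diff is s-sparse

# n samples from each class of the isotropic Gaussian model in eq. (2)
X0 = theta0 + rng.standard_normal((n, p))
X1 = theta1 + rng.standard_normal((n, p))

# Minimizing the NLL of eq. (4) gives theta_hat = sample mean for each class,
# so the generative differential estimate is a difference of sample means.
theta_diff_hat = X1.mean(axis=0) - X0.mean(axis=0)
linf_err = np.abs(theta_diff_hat - (theta1 - theta0)).max()
```

Note that this estimate is dense even though θ*_diff is sparse, which is exactly the issue the soft-thresholding fix in Section 5 addresses.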
In the case of GMRFs, the negative log-likelihood function can be written as:
L^n_GGM(Θ) = ⟨⟨Θ, Σ̂⟩⟩ − log det(Θ), (5)
where Σ̂ = (1/n) Σ_{i=1}^n x_i x_i^T is the sample covariance matrix and ⟨⟨U, V⟩⟩ = Σ_{i,j} U_ij V_ij denotes the trace inner product on the space of symmetric matrices. In high-dimensional sampling regimes (n ≪ p), regularized MLEs, for instance with ℓ1-regularization under the assumption of sparse model parameters, have been widely used [11, 10, 2].
Discriminative Model Estimation. Using Bayes rule, we have that:
P[Y = 1|X] = P[X|Y = 1]P[Y = 1] / (P[X|Y = 0]P[Y = 0] + P[X|Y = 1]P[Y = 1]) = 1 / (1 + exp(−(⟨θ*_1 − θ*_0, φ(x)⟩ + c*))), (6)
where c* = A(θ*_0) − A(θ*_1). The conditional distribution is simply a logistic regression model, with the generative model sufficient statistics as the features, and with optimal parameter being precisely the difference θ*_diff := θ*_1 − θ*_0 of the generative model parameters. The corresponding negative log-likelihood function can be written as
L_logistic(θ, c) = (1/n) Σ_{i=1}^n (−y_i(⟨θ, φ(x_i)⟩ + c) + Φ(⟨θ, φ(x_i)⟩ + c)), (7)
where Φ(t) = log(1 + exp(t)). In high-dimensional sampling regimes, under the assumption that the model parameters are sparse, we would use the ℓ1-penalized version θ̂_diff of the MLE (7) to estimate θ*_diff.
Outline. We proceed by studying the more general problem of ℓ∞ error for parameter estimation for any loss function L_n(·). Specifically, consider the general M-estimation problem, where we are given n i.i.d. samples Z^n_1 = {z_1, z_2, . . . , z_n}, z_i ∈ Z, from some distribution P, and we are interested in estimating some parameter θ* of the distribution P. Let ℓ: R^p × Z → R be a twice-differentiable, convex function which assigns a loss ℓ(θ; z) to any parameter θ ∈ R^p for a given observation z. We also assume that the loss is Fisher consistent, so that θ* ∈ argmin_θ L̄(θ), where L̄(θ) := E_{z∼P}[ℓ(θ; z)] is the population loss. We are then interested in analyzing the M-estimators that minimize the empirical loss, i.e.
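For the discriminative route, the ℓ1-penalized logistic MLE can be computed with proximal gradient descent (ISTA). The sketch below is a hedged illustration, not the estimator configuration analyzed in the paper: the step size, penalty level, iteration count, and problem sizes are arbitrary choices, and φ(x) = x for simplicity.

```python
import numpy as np

def l1_logistic(X, y, lam, step=0.1, iters=2000):
    """Proximal gradient (ISTA) for the l1-penalized logistic NLL of eq. (7)."""
    n, p = X.shape
    theta, c = np.zeros(p), 0.0
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ theta + c)))
        grad = X.T @ (prob - y) / n                    # gradient of eq. (7) in theta
        theta = theta - step * grad
        # prox of step*lam*||.||_1 is elementwise soft-thresholding
        theta = np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)
        c -= step * np.mean(prob - y)                  # unpenalized intercept
    return theta, c

rng = np.random.default_rng(3)
p, n, s = 30, 800, 3
theta_star = np.zeros(p); theta_star[:s] = 2.0         # sparse differential parameter
X = rng.standard_normal((n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ theta_star)))).astype(float)

theta_hat, c_hat = l1_logistic(X, y, lam=0.02)
```

Unlike the generative difference-of-means estimate, this single coupled optimization targets θ*_diff directly, with sample complexity governed by the differential sparsity s (Corollary 3 below).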
θ̂ ∈ argmin_θ L_n(θ), or regularized versions thereof, where L_n(θ) = (1/n) Σ_{i=1}^n ℓ(θ; z_i). We introduce a notion of the separability of a loss function, and show how more separable losses require fewer samples to establish convergence of ||θ̂ − θ*||_∞. We then instantiate our separability results from this general setting for both generative and discriminative models. We calculate the number of samples required by the generative and discriminative approaches to estimate the differential parameter θ*_diff with consistent convergence rates in the ℓ∞ and ℓ2 norms. We also discuss the consequences of these results for high-dimensional classification with Gaussian generative models.
3 Separability
Let R(Δ; θ*) = ∇L_n(θ* + Δ) − ∇L_n(θ*) − ∇²L_n(θ*)Δ be the error in the first-order approximation of the gradient at θ*. Let B_∞(r) = {θ | ||θ||_∞ ≤ r} be an ℓ∞ ball of radius r. We begin by analyzing the low-dimensional case, and then extend it to high dimensions.
3.1 Low-Dimensional Sampling Regimes
In low-dimensional sampling regimes, we assume that the number of samples n ≫ p. In this setting, we make the standard assumption that the empirical loss function L_n(·) is strongly convex. Let θ̂ = argmin_θ L_n(θ) denote the unique minimizer of the empirical loss function. We begin by defining a notion of separability for any such empirical loss function L_n.
Definition 1. L_n is (α, β, γ) locally separable around θ* if the remainder term R(Δ; θ*) satisfies:
||R(Δ; θ*)||_∞ ≤ (1/β) ||Δ||_∞^α for all Δ ∈ B_∞(γ).
This definition might seem a bit abstract, but for some general intuition: γ indicates the region where the loss is separable, α indicates the conditioning of the loss, and it is β that quantifies the degree of separability — the larger it is, the more separable the loss function. Next, we provide some additional intuition on how a loss function's separability is connected to (α, β, γ). Using the mean-value theorem, we can write ||R(Δ; θ*)||_∞ = ||(∇²L_n(θ* + tΔ) − ∇²L_n(θ*))Δ||_∞ for some t ∈ (0, 1).
This can be further simplified as ||R(Δ; θ*)||_∞ ≤ |||∇²L_n(θ* + tΔ) − ∇²L_n(θ*)|||_∞ ||Δ||_∞. Hence α and 1/β measure the smoothness of the Hessian (w.r.t. the ℓ∞/ℓ∞ matrix norm) in the neighborhood of θ*, with α being the smoothness exponent and 1/β being the smoothness constant. Note that the Hessian of the loss function ∇²L_n(θ) is a random matrix, and can vary from being a diagonal matrix for a fully separable loss function to a dense matrix for a heavily coupled loss function. Moreover, from standard concentration arguments, the ℓ∞/ℓ∞ matrix norm of a diagonal ("separable") subgaussian random matrix has at most logarithmic dimension dependence¹, but for a dense ("non-separable") random matrix, the ℓ∞/ℓ∞ matrix norm could possibly scale linearly in the dimension. Thus, the scaling of the ℓ∞/ℓ∞ matrix norm gives us an indication of how "separable" the matrix is. This intuition is captured by (α, β, γ), which we elaborate in later sections by explicitly deriving (α, β, γ) for different loss functions and using them to derive ℓ2 and ℓ∞ convergence rates.
Theorem 1. Let L_n be a strongly convex loss function which is (α, β, γ) locally separable around θ*. Then, if ||∇L_n(θ*)||_∞ ≤ min{ γ/(2κ), (1/(2κ))^{α/(α−1)} β^{1/(α−1)} }, we have
||θ̂ − θ*||_∞ ≤ 2κ ||∇L_n(θ*)||_∞,
where κ = |||∇²L_n(θ*)^{−1}|||_∞.
Proof (Sketch). The proof begins by constructing a suitable continuous function F for which Δ̂ = θ̂ − θ* is the unique fixed point. Next, we show that F(B_∞(r)) ⊆ B_∞(r) for r = 2κ||∇L_n(θ*)||_∞. Since F is continuous and the ℓ∞ ball is convex and compact, this contraction property coupled with Brouwer's fixed point theorem [9] shows that there exists some fixed point Δ of F such that ||Δ||_∞ ≤ 2κ||∇L_n(θ*)||_∞. By uniqueness of the fixed point, we then establish our result.
See Figure 1 for a geometric description and Section A for more details.
3.2 High-Dimensional Sampling Regimes
In high-dimensional sampling regimes (n ≪ p), estimation of model parameters is typically an under-determined problem. It is thus necessary to impose additional assumptions on the true model parameter θ*. We focus on the popular assumption of sparsity, which entails that the number of non-zero coefficients of θ* is small, so that ||θ*||_0 ≤ s. For this setting, we focus in particular on ℓ1-regularized empirical loss minimization:
[Figure 1: Under the conditions of Theorem 1, F(Δ) = −∇²L_n(θ*)^{−1}(R(Δ; θ*) + ∇L_n(θ*)) is contractive over B_∞(2κ||∇L_n(θ*)||_∞) and has Δ̂ = θ̂ − θ* as its unique fixed point. Using these two observations, we can conclude that ||Δ̂||_∞ ≤ 2κ||∇L_n(θ*)||_∞.]
θ̂_λn = argmin_θ L_n(θ) + λ_n ||θ||_1. (8)
Let S = {i | θ*_i ≠ 0} be the support set of the true parameter, and M(S) = {v | v_{S^c} = 0} the corresponding subspace. Note that under a high-dimensional sampling regime, we can no longer assume that the empirical loss L_n(·) is strongly convex. Accordingly, we make the following set of assumptions:
• Assumption 1 (A1): Positive definite restricted Hessian. ∇²_{SS} L_n(θ*) ⪰ λ_min I.
• Assumption 2 (A2): Irrepresentability. There exists some τ ∈ (0, 1] such that |||∇²_{S^c S} L_n(θ*) (∇²_{SS} L_n(θ*))^{−1}|||_∞ ≤ 1 − τ.
• Assumption 3 (A3): Unique minimizer. When restricted to the true support, the solution to the ℓ1-penalized loss minimization problem is unique, which we denote by:
θ̃_λn = argmin_{θ∈M(S)} {L_n(θ) + λ_n ||θ||_1}. (9)
Assumptions 1 and 2 are common in high-dimensional analysis. We verify that Assumption 3 holds for different loss functions individually. We refer the reader to [13, 5, 11, 10] for further details on these assumptions.
Footnote 1: This follows from the concentration of subgaussian maxima [12].
For this high-dimensional sampling regime, we also modify our separability notion to a restricted separability, which requires that the remainder term be separable only over the model subspace M(S).
Definition 2. L_n is (α, β, γ) restricted locally separable around θ* over the subspace M(S) if the remainder term R(Δ; θ*) satisfies:
||R(Δ; θ*)||_∞ ≤ (1/β) ||Δ||_∞^α for all Δ ∈ B_∞(γ) ∩ M(S).
We now present our main deterministic result in high dimensions.
Theorem 2. Let L_n be an (α, β, γ) restricted locally separable function around θ*. If (λ_n, ∇L_n(θ*)) are such that:
• λ_n ≥ ||∇L_n(θ*)||_∞, and
• ||∇L_n(θ*)||_∞ + λ_n ≤ min{ γ/(2κ), (1/(2κ))^{α/(α−1)} β^{1/(α−1)} },
then we have that support(θ̂_λn) ⊆ support(θ*) and
||θ̂_λn − θ*||_∞ ≤ 2κ (||∇L_n(θ*)||_∞ + λ_n),
where κ = |||∇²_{SS} L_n(θ*)^{−1}|||_∞.
Proof (Sketch). The proof invokes the primal-dual witness argument [13], which, combined with Assumptions 1–3, gives θ̂_λn ∈ M(S) and that θ̂_λn is the unique solution of the restricted problem. The rest of the proof proceeds as in Theorem 1, by constructing a suitable function F: R^{|S|} → R^{|S|} for which Δ̂ = θ̂_λn − θ* is the unique fixed point, and showing that F is contractive over B_∞(r) for r = 2κ(||∇L_n(θ*)||_∞ + λ_n). See Section B for more details.
Discussion. Theorems 1 and 2 provide a general recipe for bounding the number of samples required by any loss ℓ(θ; z) to establish ℓ∞ convergence. The first step is to calculate the separability constants (α, β, γ) for the corresponding empirical loss function L_n. Next, since the loss ℓ is Fisher consistent, so that ∇L̄(θ*) = 0, the required upper bound on ||∇L_n(θ*)||_∞ can be shown to hold by analyzing the concentration of ∇L_n(θ*) around its mean. We emphasize that we do not impose any restrictions on the values of (α, β, γ). In particular, these can scale with the number of samples n; our results hold so long as the number of samples n satisfies the conditions of the theorem.
As a rule of thumb, the smaller either γ or β gets for a given loss ℓ, the larger the required number of samples.
4 ℓ∞-rates for Generative and Discriminative Model Estimation
In this section we study the ℓ∞ rates for differential parameter estimation under the discriminative and generative approaches. We do so by calculating the separability of the discriminative and generative loss functions, and then instantiating our previously derived results.
4.1 Discriminative Estimation
As discussed before, the discriminative approach uses ℓ1-regularized logistic regression, with the sufficient statistics as features, to estimate the differential parameter. In addition to A1–A3, we assume column normalization of the sufficient statistics, i.e. Σ_{i=1}^n ([φ(x_i)]_j)² ≤ n. Let γ_n = max_i ||φ(x_i)||_∞ and ν_n = max_i ||(φ(x_i))_S||_2. First, we characterize the separability of the logistic loss.
Lemma 1. The logistic regression negative log-likelihood L_logistic from (7) is (2, 1/(s γ_n ν²_n), 1) restricted locally separable around θ*.
Combining Lemma 1 with Theorem 2, we get the following corollary.
Corollary 3 (Logistic Regression). Consider the model in (1). Then there exist universal positive constants C1, C2, and C3 such that for n ≥ C1 κ² s² γ²_n ν⁴_n log p and λ_n = C2 √(log p / n), the discriminative differential estimate θ̂_diff satisfies support(θ̂_diff) ⊆ support(θ*_diff) and
||θ̂_diff − θ*_diff||_∞ ≤ C3 √(log p / n).
4.2 Generative Estimation
We now characterize the separability of generative exponential families. The negative log-likelihood function can be written as: L_n(θ) = A(θ) − ⟨θ, φ̂_n⟩, where φ̂_n = (1/n) Σ_{i=1}^n φ(x_i). In this setting, the remainder term is independent of the data and can be written as R(Δ) = ∇A(θ* + Δ) − ∇A(θ*) − ∇²A(θ*)Δ, and ∇L_n(θ*) = E[φ(x)] − (1/n) Σ_{i=1}^n φ(x_i). Hence ||∇L_n(θ*)||_∞ is a measure of how well the sufficient statistics concentrate around their mean. Next, we derive the separability of our running examples, isotropic Gaussians and Gaussian graphical models.
Lemma 2.
The isotropic Gaussian negative log-likelihood L^n_IG from (4) is (·, 1, 1) locally separable around θ*.
Lemma 3. The Gaussian MRF negative log-likelihood L^n_GGM from (5) is (2, 2/(3 d_{Θ*} κ³_{Σ*}), 1/(3 d_{Θ*} κ_{Σ*})) restricted locally separable around Θ*.
Comparing Lemmas 1, 2, and 3, we see that the separability of the discriminative model loss depends only weakly on the feature functions. On the other hand, the separability of the generative model loss depends critically on the underlying sufficient statistics. This has consequences for their differing sample complexities for differential parameter estimation, as we show next.
Corollary 4 (Isotropic Gaussians). Consider the model in (2). Then there exist universal constants C1, C2, C3 such that if the number of samples scales as n ≥ C1 log p, then with probability at least 1 − 1/p^{C2}, the generative estimate of the differential parameter θ̂_diff satisfies
||θ̂_diff − θ*_diff||_∞ ≤ C3 √(log p / n).
Comparing Corollary 3 and Corollary 4, we see that for isotropic Gaussians, both the discriminative and the generative approach achieve the same ℓ∞ convergence rate, but at different sample complexities. Specifically, the sample complexity of the generative method depends only logarithmically on the dimension p and is independent of the differential sparsity s, while the sample complexity of the discriminative method depends on the differential sparsity s. Therefore, in this case the generative method is strictly better than its discriminative counterpart, assuming that the generative model assumptions hold.
Corollary 5 (Gaussian MRF). Consider the model in (3), and suppose that the scaled covariates X_k/√(Σ*_kk) are subgaussian with parameter σ².
Then there exist universal positive constants C2, C3, C4 such that if the number of samples for the two generative models scales as n_i ≥ C2 σ² κ⁶_{Σ*_i} d²_{Θ*_i} log p for i ∈ {0, 1}, then with probability at least 1 − 1/p^{C3}, the generative estimate of the differential parameter, Θ̂_diff = Θ̂_1 − Θ̂_0, satisfies
||Θ̂_diff − Θ*_diff||_∞ ≤ C4 √(log p / n),
and support(Θ̂_i) ⊆ support(Θ*_i) for i ∈ {0, 1}.
Comparing Corollary 3 and Corollary 5, we see that for Gaussian graphical models, both the discriminative and the generative approach achieve the same ℓ∞ convergence rate, but at different sample complexities. Specifically, the sample complexity of the generative method depends only on the row-wise sparsity d_{Θ*_i} of the individual models, and is independent of the sparsity s of the differential parameter Θ*_diff. In contrast, the sample complexity of the discriminative method depends only on the sparsity of the differential parameter, and is independent of the structural complexities of the individual model parameters. This suggests that in high dimensions, even when the generative model assumptions hold, generative methods might perform poorly if the underlying model is highly non-separable (e.g. d = Ω(p)), which is in contrast to the conventional wisdom in low dimensions.
Related Work. Results similar to Corollaries 3 and 5 have been previously reported in [11, 5] separately. Under the same set of assumptions as ours, Li et al. [5] provide a unified analysis for support recovery and ℓ∞-bounds for ℓ1-regularized M-estimators. While they obtain the same rates as ours, their required sample complexities are much higher, since they do not exploit the separability of the underlying loss function. As one example, in the case of GMRFs, their results require the number of samples to scale as n ≳ k² log p, where k is the total number of edges in the graph; this is sub-optimal, and in particular does not match the GMRF-specific analysis of [11].
Our unified analysis, on the other hand, is tighter and in particular does match the results of [11].

5 $\ell_2$-rates for Generative and Discriminative Model Estimation

In this section we study the $\ell_2$ rates for differential parameter estimation under the discriminative and generative approaches.

5.1 Discriminative Approach

The bounds for the discriminative approach are relatively straightforward. Corollary 3 gives bounds on the $\ell_\infty$ error and establishes that $\mathrm{support}(\widehat\theta) \subseteq \mathrm{support}(\theta^*)$. Since the true model parameter is $s$-sparse, $\|\theta^*\|_0 \leq s$, the $\ell_2$ error can be simply bounded as $\sqrt{s}\, \|\widehat\theta - \theta^*\|_\infty$.

5.2 Generative Approach

In the previous section, we saw that the generative approach is able to exploit the inherent separability of the underlying model, and thus attains $\ell_\infty$ rates for differential parameter estimation at a much lower sample complexity. Unfortunately, it does not have support consistency. Hence a naive generative estimator would have an $\ell_2$ error scaling as $\sqrt{\tfrac{p \log p}{n}}$, which makes it unappealing in high dimensions. However, one can exploit the sparsity of $\theta^*_{\mathrm{diff}}$ and get better rates of convergence in $\ell_2$-norm by simply soft-thresholding the generative estimate. Moreover, soft-thresholding also leads to support consistency.

Definition 3. We denote the soft-thresholding operator $ST_{\lambda_n}(\cdot)$, defined as
$$ST_{\lambda_n}(\theta) = \arg\min_{w} \tfrac{1}{2}\,\|w - \theta\|_2^2 + \lambda_n\, \|w\|_1.$$

Lemma 4. Suppose $\theta = \theta^* + \epsilon$ for some $s$-sparse $\theta^*$. Then there exists a universal constant $C_1$ such that for $\lambda_n \geq 2\,\|\epsilon\|_\infty$,
$$\|ST_{\lambda_n}(\theta) - \theta^*\|_2 \leq C_1 \sqrt{s}\, \|\epsilon\|_\infty \quad \text{and} \quad \|ST_{\lambda_n}(\theta) - \theta^*\|_1 \leq C_1\, s\, \|\epsilon\|_\infty. \quad (10)$$

Note that this is a completely deterministic result with no sample complexity requirement. Motivated by it, we introduce a thresholded generative estimator with two stages: (a) compute $\widehat\theta_{\mathrm{diff}}$ using the generative model estimates, and (b) soft-threshold the generative estimate with $\lambda_n = c\,\|\widehat\theta_{\mathrm{diff}} - \theta^*_{\mathrm{diff}}\|_\infty$.
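In coordinates, the operator of Definition 3 has the well-known closed form $[ST_{\lambda_n}(\theta)]_j = \mathrm{sign}(\theta_j)\max(|\theta_j| - \lambda_n, 0)$; a minimal NumPy sketch (the example vector and threshold are illustrative):

```python
import numpy as np

def soft_threshold(theta, lam):
    """Soft-thresholding operator ST_lam(theta): the closed-form solution of
    argmin_w 0.5*||w - theta||_2^2 + lam*||w||_1, applied elementwise."""
    return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

# A dense noisy estimate of a sparse vector is shrunk back towards sparsity;
# entries with magnitude below lam are zeroed out, the rest shrink by lam.
theta_hat = np.array([2.0, -1.5, 0.05, -0.08, 0.02])
theta_st = soft_threshold(theta_hat, 0.1)  # close to [1.9, -1.4, 0, 0, 0]
```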
An elementary application of Lemma 4 then yields $\ell_2$ error bounds for $\widehat\theta_{\mathrm{diff}}$ from its $\ell_\infty$ error bounds and the $s$-sparsity of the true parameter $\theta^*_{\mathrm{diff}}$. We instantiate these $\ell_2$-bounds via corollaries for our running examples of isotropic Gaussians and Gaussian MRFs.

Lemma 5 (Isotropic Gaussians). Consider the model in (2). There exist universal constants $C_1, C_2, C_3$ such that if the number of samples scales as $n \geq C_1 \log p$, then with probability at least $1 - 1/p^{C_2}$, the soft-thresholded generative estimate of the differential parameter $ST_{\lambda_n}\big(\widehat\theta_{\mathrm{diff}}\big)$, with thresholding parameter $\lambda_n = c\sqrt{\tfrac{\log p}{n}}$ for some constant $c$, satisfies
$$\big\|ST_{\lambda_n}\big(\widehat\theta_{\mathrm{diff}}\big) - \theta^*_{\mathrm{diff}}\big\|_2 \leq C_3 \sqrt{\frac{s \log p}{n}}.$$

Lemma 6 (Gaussian MRF). Consider the model in Equation (3), and suppose that the covariates $X_k/\sqrt{\Sigma^*_{kk}}$ are sub-Gaussian with parameter $\sigma^2$. There exist universal positive constants $C_2, C_3, C_4$ such that if the number of samples for the two generative models scales as $n_i \geq C_2\, d^2_{\Theta^*_i} \log p$ for $i \in \{0,1\}$ (with $C_2$ depending on $\sigma$ and on the conditioning of $\Theta^*_i$), then with probability at least $1 - 1/p^{C_3}$, the soft-thresholded generative estimate of the differential parameter $ST_{\lambda_n}\big(\widehat\Theta_{\mathrm{diff}}\big)$, with thresholding parameter $\lambda_n = c\sqrt{\tfrac{\log p}{n}}$ for some constant $c$, satisfies
$$\big\|ST_{\lambda_n}\big(\widehat\Theta_{\mathrm{diff}}\big) - \Theta^*_{\mathrm{diff}}\big\|_2 \leq C_4 \sqrt{\frac{s \log p}{n}}.$$

Comparing Lemmas 5 and 6 to Section 5.1, we see that the additional soft-thresholding step allows the generative methods to achieve the same $\ell_2$ error rates as the discriminative methods, but at different sample complexities. The sample complexities of the generative estimates depend on the separabilities of the individual models and are independent of the differential sparsity $s$, whereas the sample complexity of the discriminative estimate depends only on the differential sparsity $s$.
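The two-stage estimator of this section can be sketched end to end for the isotropic Gaussian case of Lemma 5, where the generative estimates are empirical class means and the differential parameter is their soft-thresholded difference. The synthetic data, the seed, and the choice $c = 1$ below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, n = 512, 4, 400
theta_star = np.zeros(p)
theta_star[:s] = 1.0                      # s-sparse differential parameter
mu0 = np.zeros(p)
mu1 = mu0 + theta_star
X0 = rng.normal(mu0, 1.0, size=(n, p))    # samples from N(mu0, I_p)
X1 = rng.normal(mu1, 1.0, size=(n, p))    # samples from N(mu1, I_p)

def soft_threshold(theta, lam):
    return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

# Stage (a): generative estimate = difference of empirical means
theta_hat = X1.mean(axis=0) - X0.mean(axis=0)
# Stage (b): soft-threshold at lam = c * sqrt(log p / n), here c = 1
lam = np.sqrt(np.log(p) / n)
theta_st = soft_threshold(theta_hat, lam)

err_raw = np.linalg.norm(theta_hat - theta_star)
err_st = np.linalg.norm(theta_st - theta_star)
# Thresholding removes the ~sqrt(p/n) noise carried by the p - s null
# coordinates, so err_st is much smaller than err_raw.
```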
6 Experiments: High-Dimensional Classification

In this section, we corroborate our theoretical results on $\ell_2$-error rates for generative and discriminative model estimators via their consequences for high-dimensional classification. We focus on the case of isotropic Gaussian generative models $X|Y \sim \mathcal{N}(\mu_Y, I_p)$, where $\mu_0, \mu_1 \in \mathbb{R}^p$ are unknown and $\mu_1 - \mu_0$ is $s$-sparse.

[Figure 2: Effect of sparsity $s$ on excess 0-1 error, comparing Gen-Thresh and Logistic over $n$ for $p = 512$ and (a) $s = 4$, (b) $s = 16$, (c) $s = 64$.]

Here, we are interested in a classifier $C : \mathbb{R}^p \to \{0,1\}$ that achieves low classification error $\mathbb{E}_{X,Y}[\mathbb{1}\{C(X) \neq Y\}]$. Under this setting, it can be shown that the Bayes classifier, which achieves the lowest possible classification error, is the linear discriminant classifier $C^*(x) = \mathbb{1}\{x^\top w^* + b^* > 0\}$, where $w^* = \mu_1 - \mu_0$ and $b^* = \frac{\mu_0^\top \mu_0 - \mu_1^\top \mu_1}{2}$. Thus, the coefficient $w^*$ of the linear discriminant is precisely the differential parameter, which can be estimated via both generative and discriminative approaches as detailed in the previous section. Moreover, the classification error can also be related to the $\ell_2$ error of the estimates. Under some mild assumptions, Li et al. [3] showed that for any linear classifier $\widehat{C}(x) = \mathbb{1}\{x^\top \widehat{w} + \widehat{b} > 0\}$, the excess classification error can be bounded as
$$\mathcal{E}(\widehat{C}) \leq C_1 \Big( \|\widehat{w} - w^*\|_2^2 + \big(\widehat{b} - b^*\big)^2 \Big)$$
for some constant $C_1 > 0$, where $\mathcal{E}(C) = \mathbb{E}_{X,Y}[\mathbb{1}\{C(X) \neq Y\}] - \mathbb{E}_{X,Y}[\mathbb{1}\{C^*(X) \neq Y\}]$ is the excess 0-1 error. In other words, the excess classification error is bounded by a constant times the $\ell_2$ error of the differential parameter estimate.

Methods.
In this setting, as discussed in previous sections, the discriminative model is simply a logistic regression model with linear features (6), so the discriminative estimates of the differential parameter $\widehat{w}$ and the constant bias term $\widehat{b}$ are obtained via $\ell_1$-regularized logistic regression. For the generative estimate, we use our two-stage estimator from Section 5, which proceeds by estimating $\widehat\mu_0, \widehat\mu_1$ using the empirical means, and then estimating the differential parameter by soft-thresholding the difference of the generative model parameter estimates, $\widehat{w}_T = ST_{\lambda_n}(\widehat\mu_1 - \widehat\mu_0)$, where $\lambda_n = C_1 \sqrt{\tfrac{\log p}{n}}$ for some constant $C_1$. The corresponding estimate of $b^*$ is $\widehat{b}_T = -\tfrac{1}{2}\langle \widehat{w}_T,\, \widehat\mu_1 + \widehat\mu_0 \rangle$.

Experimental Setup. We consider isotropic Gaussian models with means
$$\mu_0 = \mathbf{1}_p - \tfrac{1}{\sqrt{s}} \begin{bmatrix} \mathbf{1}_s \\ \mathbf{0}_{p-s} \end{bmatrix}, \qquad \mu_1 = \mathbf{1}_p + \tfrac{1}{\sqrt{s}} \begin{bmatrix} \mathbf{1}_s \\ \mathbf{0}_{p-s} \end{bmatrix},$$
and vary the sparsity level $s$. For both methods, we set the regularization parameter² as $\lambda_n = \sqrt{\log(p)/n}$. We report the excess classification error for the two approaches, averaged over 20 trials, in Figure 2.

Results. As can be seen from Figure 2, our two-stage thresholded generative estimator is always better than the discriminative estimator, across different sparsity levels $s$. Moreover, the sample complexity or "burn-in" period of the discriminative classifier depends strongly on the sparsity level, which makes it unsuitable when the true parameter is not highly sparse. For our two-stage generative estimator, the sparsity $s$ has no effect on the "burn-in" period. These observations validate our theoretical results from Section 5.

²See Appendix J for cross-validated plots.

Acknowledgements

A.P. and P.R. acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1447574, DMS-1264033, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical Sciences.
References

[1] Alberto de la Fuente. From 'differential expression' to 'differential networking': identification of dysfunctional regulatory networks in diseases. Trends in Genetics, 26(7):326–333, 2010.
[2] Christophe Giraud. Introduction to High-Dimensional Statistics, volume 138. CRC Press, 2014.
[3] Tianyang Li, Adarsh Prasad, and Pradeep K. Ravikumar. Fast classification rates for high-dimensional Gaussian generative models. In Advances in Neural Information Processing Systems, pages 1054–1062, 2015.
[4] Tianyang Li, Xinyang Yi, Constantine Caramanis, and Pradeep Ravikumar. Minimax Gaussian classification & clustering. In Artificial Intelligence and Statistics, pages 1–9, 2017.
[5] Yen-Huan Li, Jonathan Scarlett, Pradeep Ravikumar, and Volkan Cevher. Sparsistency of $\ell_1$-regularized M-estimators. In AISTATS, 2015.
[6] Song Liu, John A. Quinn, Michael U. Gutmann, Taiji Suzuki, and Masashi Sugiyama. Direct learning of sparse changes in Markov networks by density ratio estimation. Neural Computation, 26(6):1169–1197, 2014.
[7] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348–1356, 2009.
[8] Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems, 2:841–848, 2002.
[9] James M. Ortega and Werner C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. SIAM, 2000.
[10] Pradeep Ravikumar, Martin J. Wainwright, and John D. Lafferty. High-dimensional Ising model selection using $\ell_1$-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[11] Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence.
Electronic Journal of Statistics, 5:935–980, 2011.
[12] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. In preparation, University of California, Berkeley, 2015.
[13] Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
[14] Sihai Dave Zhao, T. Tony Cai, and Hongzhe Li. Direct estimation of differential networks. Biometrika, page asu009, 2014.
Fisher GAN Youssef Mroueh⇤, Tom Sercu⇤ mroueh@us.ibm.com, tom.sercu1@ibm.com ⇤Equal Contribution AI Foundations, IBM Research AI IBM T.J. Watson Research Center Abstract Generative Adversarial Networks (GANs) are powerful models for learning complex distributions. Stable training of GANs has been addressed in many recent works which explore different metrics between distributions. In this paper we introduce Fisher GAN, which fits within the Integral Probability Metrics (IPM) framework for training GANs. Fisher GAN defines a critic with a data-dependent constraint on its second order moments. We show in this paper that Fisher GAN allows for stable and time-efficient training that does not compromise the capacity of the critic, and does not need data-independent constraints such as weight clipping. We analyze our Fisher IPM theoretically and provide an algorithm based on the Augmented Lagrangian for Fisher GAN. We validate our claims on both image sample generation and semi-supervised classification using Fisher GAN. 1 Introduction Generative Adversarial Networks (GANs) [1] have recently become a prominent method to learn high-dimensional probability distributions. The basic framework consists of a generator neural network which learns to generate samples which approximate the distribution, while the discriminator measures the distance between the real data distribution and this learned distribution, referred to as the fake distribution. The generator uses the gradients from the discriminator to minimize the distance with the real data distribution. The distance between these distributions was the object of study in [2], which highlighted the impact of the distance choice on the stability of the optimization. The original GAN formulation optimizes the Jensen-Shannon divergence, while later work generalized this to optimize f-divergences [3], the KL divergence [4], and the Least Squares objective [5].
Closely related to our work, Wasserstein GAN (WGAN) [6] uses the earth mover distance, for which the discriminator function class needs to be constrained to be Lipschitz. To impose this Lipschitz constraint, WGAN proposes weight clipping, i.e. a data-independent constraint, but this comes at the cost of reduced critic capacity and high sensitivity to the choice of the clipping hyper-parameter. A recent development, Improved Wasserstein GAN (WGAN-GP) [7], introduced a data-dependent constraint, namely a gradient penalty, to enforce the Lipschitz constraint on the critic; this does not compromise the capacity of the critic but comes at a high computational cost. We build in this work on the Integral Probability Metrics (IPM) framework for learning GANs of [8]. Intuitively, the IPM defines a critic function f that maximally discriminates between the real and fake distributions. We propose a theoretically sound and time-efficient data-dependent constraint on the critic of Wasserstein GAN that allows stable training of GANs and does not compromise the capacity of the critic. Where WGAN-GP uses a penalty on the gradients of the critic, Fisher GAN imposes a constraint on the second order moments of the critic. This extension to the IPM framework is inspired by the Fisher Discriminant Analysis method. The main contributions of our paper are:

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1. We introduce in Section 2 the Fisher IPM, a scaling-invariant distance between distributions. Fisher IPM introduces a data-dependent constraint on the second order moments of the critic that discriminates between the two distributions. Such a constraint ensures the boundedness of the metric and of the critic. We show in Section 2.2 that Fisher IPM, when approximated with neural networks, corresponds to a discrepancy between whitened mean feature embeddings of the distributions.
In other words, a mean feature discrepancy measured with a Mahalanobis distance in the space computed by the neural network.

2. We show in Section 3 that Fisher IPM corresponds to the Chi-squared distance ($\chi_2$) when the critic has unlimited capacity (i.e. the critic belongs to a universal hypothesis function class). Moreover, we prove in Theorem 2 that even when the critic is parametrized by a neural network, it approximates the $\chi_2$ distance up to a factor which is an inner product between the optimal critic and the neural network critic. We finally derive generalization bounds for the critic learned from samples of the two distributions, assessing the statistical error and its convergence to the Chi-squared distance with finite sample size.

3. We use Fisher IPM as a GAN objective¹ and formulate an algorithm that combines desirable properties (Table 1): a stable and meaningful loss between distributions, as in Wasserstein GAN [6]; a low computational cost similar to simple weight clipping; and a data-dependent constraint that does not compromise the capacity of the critic, at a much lower computational cost than [7]. Fisher GAN achieves strong semi-supervised learning results without needing batch normalization in the critic.

Table 1: Comparison between Fisher GAN and recent related approaches.

| Method | Stability | Unconstrained capacity | Efficient computation | Representation power (SSL) |
|---|---|---|---|---|
| Standard GAN [1, 9] | ✗ | ✓ | ✓ | ✓ |
| WGAN, McGan [6, 8] | ✓ | ✗ | ✓ | ✗ |
| WGAN-GP [7] | ✓ | ✓ | ✗ | ? |
| Fisher GAN (ours) | ✓ | ✓ | ✓ | ✓ |

2 Learning GANs with Fisher IPM

2.1 Fisher IPM in an arbitrary function space: general framework

Integral Probability Metric (IPM). Intuitively, an IPM defines a critic function f, belonging to a function class $\mathcal F$, that maximally discriminates between two distributions. The function class $\mathcal F$ defines how f is bounded, which is crucial to define the metric. More formally, consider a compact space $\mathcal X$ in $\mathbb R^d$, and let $\mathcal F$ be a set of measurable, symmetric and bounded real-valued functions on $\mathcal X$.
Let $\mathcal P(\mathcal X)$ be the set of measurable probability distributions on $\mathcal X$. Given two probability distributions $P, Q \in \mathcal P(\mathcal X)$, the IPM indexed by a symmetric function space $\mathcal F$ is defined as follows [10]:
$$d_{\mathcal F}(P, Q) = \sup_{f \in \mathcal F} \Big\{ \mathbb E_{x \sim P}[f(x)] - \mathbb E_{x \sim Q}[f(x)] \Big\}. \quad (1)$$
It is easy to see that $d_{\mathcal F}$ defines a pseudo-metric over $\mathcal P(\mathcal X)$. Note specifically that if $\mathcal F$ is not bounded, the supremum will scale f to be arbitrarily large. By choosing $\mathcal F$ appropriately [11], various distances between probability measures can be defined.

First formulation: Rayleigh quotient. In order to define an IPM in the GAN context, [6, 8] impose the boundedness of the function space via a data-independent constraint, achieved by restricting the norms of the weights parametrizing the function space to an $\ell_p$ ball. Imposing such a data-independent constraint makes the training highly dependent on the constraint hyper-parameters and restricts the capacity of the learned network, limiting the usability of the learned critic in a semi-supervised learning task. Here we take a different angle and design the IPM to be scaling-invariant, as a Rayleigh quotient. Instead of measuring the discrepancy between means as in Equation (1), we measure a standardized discrepancy, so that the distance is bounded by construction. Standardizing this discrepancy introduces, as we will see, a data-dependent constraint that controls the growth of the weights of the critic f and ensures the stability of the training while maintaining the capacity of the critic. Given two distributions $P, Q \in \mathcal P(\mathcal X)$, the Fisher IPM for a function space $\mathcal F$ is defined as follows:
$$d_{\mathcal F}(P, Q) = \sup_{f \in \mathcal F} \frac{\mathbb E_{x \sim P}[f(x)] - \mathbb E_{x \sim Q}[f(x)]}{\sqrt{\tfrac12 \mathbb E_{x \sim P} f^2(x) + \tfrac12 \mathbb E_{x \sim Q} f^2(x)}}. \quad (2)$$

¹Code is available at https://github.com/tomsercu/FisherGAN

[Figure 1: Illustration of Fisher IPM with neural networks. $\Phi_\omega$ is a convolutional neural network which defines the embedding space $\Phi_\omega(x) \in \mathbb R^m$.]
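For a fixed critic, the ratio in Equation (2) can be estimated directly from samples, and the scaling invariance is easy to check numerically. In the following sketch the two toy Gaussians and the linear critic are illustrative assumptions (the critic is fixed, not optimized):

```python
import numpy as np

def fisher_ipm_value(f, x_real, x_fake):
    """Empirical Fisher IPM objective (Eq. 2) for a fixed critic f:
    mean discrepancy divided by the square root of the pooled
    second-order moment of f under the two samples."""
    fr, ff = f(x_real), f(x_fake)
    num = fr.mean() - ff.mean()
    den = np.sqrt(0.5 * (fr ** 2).mean() + 0.5 * (ff ** 2).mean())
    return num / den

rng = np.random.default_rng(1)
x_real = rng.normal(2.0, 1.0, size=10_000)
x_fake = rng.normal(0.0, 1.0, size=10_000)

v1 = fisher_ipm_value(lambda x: x, x_real, x_fake)
v2 = fisher_ipm_value(lambda x: 10.0 * x, x_real, x_fake)
# Rescaling the critic leaves the standardized ratio unchanged (v1 == v2),
# which is exactly the scaling invariance that motivates the Rayleigh form.
```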
$v$ is the direction in this embedding space with maximal mean separation $\langle v, \mu_\omega(P) - \mu_\omega(Q)\rangle$, constrained by the hyperellipsoid $v^\top \Sigma_\omega(P;Q)\, v = 1$ (Figure 1).

While a standard IPM (Equation (1)) maximizes the discrepancy between the means of a function under two different distributions, Fisher IPM looks for a critic f that achieves a tradeoff between maximizing the discrepancy between the means under the two distributions (between-class variance) and reducing the pooled second order moment (an upper bound on the intra-class variance).

Standardized discrepancies have a long history in statistics and the so-called two-sample hypothesis testing. For example, the classic two-sample Student's t-test defines the Student statistic as the ratio between the means discrepancy and the sum of standard deviations. It is now well established that learning generative models has its roots in the two-sample hypothesis testing problem [12]. Nonparametric two-sample testing and model criticism from the kernel literature led to the so-called maximum mean discrepancy (MMD) [13]. The MMD cost function and the mean matching IPM for a general function space have recently been used for training GANs [14, 15, 8]. Interestingly, Harchaoui et al. [16] proposed Kernel Fisher Discriminant Analysis for the two-sample hypothesis testing problem, and showed its statistical consistency. The standard Fisher discrepancy used in Linear Discriminant Analysis (LDA) or Kernel Fisher Discriminant Analysis (KFDA) can be written as
$$\sup_{f \in \mathcal F} \frac{\big(\mathbb E_{x \sim P}[f(x)] - \mathbb E_{x \sim Q}[f(x)]\big)^2}{\mathrm{Var}_{x \sim P}(f(x)) + \mathrm{Var}_{x \sim Q}(f(x))},$$
where $\mathrm{Var}_{x \sim P}(f(x)) = \mathbb E_{x \sim P} f^2(x) - (\mathbb E_{x \sim P} f(x))^2$. Note that in LDA, $\mathcal F$ is restricted to linear functions; in KFDA, $\mathcal F$ is restricted to a Reproducing Kernel Hilbert Space (RKHS). Our Fisher IPM (Eq. (2)) deviates from the standard Fisher discrepancy since the numerator is not squared, and we use in the denominator the second order moments instead of the variances.
Moreover, in our definition of Fisher IPM, $\mathcal F$ can be any symmetric function class.

Second formulation: Constrained form. Since the distance is scaling invariant, $d_{\mathcal F}$ can be written equivalently in the following constrained form:
$$d_{\mathcal F}(P, Q) = \sup_{f \in \mathcal F,\ \frac12 \mathbb E_{x \sim P} f^2(x) + \frac12 \mathbb E_{x \sim Q} f^2(x) = 1} \mathscr E(f) := \mathbb E_{x \sim P}[f(x)] - \mathbb E_{x \sim Q}[f(x)]. \quad (3)$$

Specifying P, Q: Learning GAN with Fisher IPM. We turn now to the problem of learning GANs with Fisher IPM. Given a distribution $P_r \in \mathcal P(\mathcal X)$, we learn a function $g_\theta : \mathcal Z \subset \mathbb R^{n_z} \to \mathcal X$ such that for $z \sim p_z$, the distribution of $g_\theta(z)$ is close to the real data distribution $P_r$, where $p_z$ is a fixed distribution on $\mathcal Z$ (for instance $z \sim \mathcal N(0, I_{n_z})$). Let $P_\theta$ be the distribution of $g_\theta(z)$, $z \sim p_z$. Using the Fisher IPM (Equation (3)) indexed by a parametric function class $\mathcal F_p$, the generator minimizes the IPM: $\min_{g_\theta} d_{\mathcal F_p}(P_r, P_\theta)$. Given samples $\{x_i,\ i = 1 \dots N\}$ from $P_r$ and samples $\{z_j,\ j = 1 \dots M\}$ from $p_z$, we solve the following empirical problem:
$$\min_{g_\theta} \sup_{f_p \in \mathcal F_p} \hat{\mathscr E}(f_p, g_\theta) := \frac1N \sum_{i=1}^N f_p(x_i) - \frac1M \sum_{j=1}^M f_p(g_\theta(z_j)) \quad \text{subject to } \hat\Omega(f_p, g_\theta) = 1, \quad (4)$$
where $\hat\Omega(f_p, g_\theta) = \frac{1}{2N}\sum_{i=1}^N f_p^2(x_i) + \frac{1}{2M}\sum_{j=1}^M f_p^2(g_\theta(z_j))$. For simplicity we will take $M = N$.

2.2 Fisher IPM with Neural Networks

We will specifically study the case where $\mathcal F$ is a finite-dimensional Hilbert space induced by a neural network $\Phi_\omega$ (see Figure 1 for an illustration). In this case, an IPM with a data-independent constraint is equivalent to mean matching [8]. We now show that Fisher IPM gives rise to a whitened mean matching interpretation, or equivalently to mean matching with a Mahalanobis distance.

Rayleigh Quotient. Consider the function space $\mathcal F_{v,\omega} = \{f(x) = \langle v, \Phi_\omega(x)\rangle \mid v \in \mathbb R^m,\ \Phi_\omega : \mathcal X \to \mathbb R^m\}$, where $\Phi_\omega$ is typically parametrized by a multi-layer neural network. We define the mean and covariance (Gramian) feature embeddings of a distribution as in McGan [8]:
$$\mu_\omega(P) = \mathbb E_{x \sim P}[\Phi_\omega(x)], \qquad \Sigma_\omega(P) = \mathbb E_{x \sim P}\big[\Phi_\omega(x)\Phi_\omega(x)^\top\big].$$
The Fisher IPM as defined in Equation (2) on $\mathcal F_{v,\omega}$
can be written as follows:
$$d_{\mathcal F_{v,\omega}}(P, Q) = \max_{\omega} \max_{v} \frac{\langle v, \mu_\omega(P) - \mu_\omega(Q)\rangle}{\sqrt{v^\top \big(\tfrac12 \Sigma_\omega(P) + \tfrac12 \Sigma_\omega(Q) + \gamma I_m\big)\, v}}, \quad (5)$$
where we added a regularization term ($\gamma > 0$) to avoid singularity of the covariances. Note that if $\Phi_\omega$ is implemented with homogeneous nonlinearities such as ReLU, then swapping $(v, \omega)$ for $(cv, c'\omega)$ for any constants $c, c' > 0$ leaves the distance $d_{\mathcal F_{v,\omega}}$ unchanged; hence the scaling invariance.

Constrained Form. Since the Rayleigh quotient is not amenable to optimization, we consider Fisher IPM as a constrained optimization problem. By virtue of the scaling invariance and the constrained form of the Fisher IPM given in Equation (3), $d_{\mathcal F_{v,\omega}}$ can be written equivalently as:
$$d_{\mathcal F_{v,\omega}}(P, Q) = \max_{\omega,\ v:\ v^\top(\frac12 \Sigma_\omega(P) + \frac12 \Sigma_\omega(Q) + \gamma I_m)v = 1} \langle v, \mu_\omega(P) - \mu_\omega(Q)\rangle. \quad (6)$$
Define the pooled covariance $\Sigma_\omega(P; Q) = \tfrac12 \Sigma_\omega(P) + \tfrac12 \Sigma_\omega(Q) + \gamma I_m$. Doing a simple change of variable $u = \Sigma_\omega(P; Q)^{\frac12} v$, we see that:
$$d_{\mathcal F_{v,\omega}}(P, Q) = \max_\omega \max_{u,\ \|u\|=1} \Big\langle u,\ \Sigma_\omega(P; Q)^{-\frac12}\big(\mu_\omega(P) - \mu_\omega(Q)\big) \Big\rangle = \max_\omega \Big\|\Sigma_\omega(P; Q)^{-\frac12}\big(\mu_\omega(P) - \mu_\omega(Q)\big)\Big\|, \quad (7)$$
hence we see that Fisher IPM corresponds to the worst-case distance between whitened means. Since the means are whitened, we don't need to impose further constraints on $\omega$ as in [6, 8]. Another interpretation of the Fisher IPM stems from the fact that
$$d_{\mathcal F_{v,\omega}}(P, Q) = \max_\omega \sqrt{\big(\mu_\omega(P) - \mu_\omega(Q)\big)^\top \Sigma_\omega^{-1}(P; Q)\, \big(\mu_\omega(P) - \mu_\omega(Q)\big)},$$
from which we see that Fisher IPM is a Mahalanobis distance between the mean feature embeddings of the distributions, where the Mahalanobis distance is defined by the positive definite matrix $\Sigma_\omega(P; Q)$. We show in Appendix A that the gradient penalty in Improved Wasserstein [7] gives rise to a similar Mahalanobis mean matching interpretation.

Learning GAN with Fisher IPM. Hence we see that learning a GAN with Fisher IPM,
$$\min_{g_\theta} \max_\omega \max_{v:\ v^\top(\frac12 \Sigma_\omega(P_r) + \frac12 \Sigma_\omega(P_\theta) + \gamma I_m)v = 1} \langle v, \mu_\omega(P_r) - \mu_\omega(P_\theta)\rangle,$$
corresponds to a min-max game between a feature space and a generator.
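The equivalence between the constrained form (6) and the whitened-mean / Mahalanobis form (7) can be checked numerically. In this sketch the feature embeddings are stand-in Gaussian samples rather than the output of a trained $\Phi_\omega$, and the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
m, gamma = 8, 1e-3
phi_real = rng.normal(0.5, 1.0, size=(5000, m))   # stand-in for Phi_w(x), x ~ P
phi_fake = rng.normal(0.0, 1.0, size=(5000, m))   # stand-in for Phi_w(x), x ~ Q

mu_diff = phi_real.mean(axis=0) - phi_fake.mean(axis=0)
# pooled second-moment (Gramian) matrix plus gamma * I, as in Eq. (5)-(6)
Sigma = 0.5 * (phi_real.T @ phi_real / len(phi_real)
               + phi_fake.T @ phi_fake / len(phi_fake)) + gamma * np.eye(m)

# Mahalanobis form of the Fisher IPM (Eq. 7)
d_mahal = np.sqrt(mu_diff @ np.linalg.solve(Sigma, mu_diff))

# Constrained form (Eq. 6): the optimal v is Sigma^{-1} mu_diff rescaled so
# that v^T Sigma v = 1; its objective <v, mu_diff> recovers the same value.
v = np.linalg.solve(Sigma, mu_diff)
v = v / np.sqrt(v @ Sigma @ v)
objective = v @ mu_diff          # equals d_mahal up to floating-point error
```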
The feature space tries to maximize the Mahalanobis distance between the feature mean embeddings of the real and fake distributions. The generator tries to minimize the mean embedding distance.

3 Theory

We start by studying the Fisher IPM defined in Equation (2) when the function space has full capacity, i.e. when the critic belongs to $L_2(\mathcal X, \frac12(P+Q))$, meaning that $\int_{\mathcal X} f^2(x)\, \frac{P(x)+Q(x)}{2}\, dx < \infty$. Theorem 1 shows that under this condition, the Fisher IPM corresponds to the Chi-squared distance between distributions, and gives a closed-form expression for the optimal critic function $f_\chi$ (see Appendix B for its relation to the Pearson divergence). Proofs are given in Appendix D.

[Figure 2: Example on 2D synthetic data, where both P and Q are fixed normal distributions with the same covariance and means shifted along the x-axis: (a) contour plot; (b) exact $\chi_2$ distance and MLP estimate vs. mean shift, N = M = 10k; (c) MLP estimates vs. training sample size for shifts 0.5, 1, and 3.] The figure shows the exact $\chi_2$ distance from numerically integrating Eq. (8), together with the estimate obtained from training a 5-layer MLP with layer size 16 and LeakyReLU nonlinearity on different training sample sizes. The MLP is trained using Algorithm 1, where sampling from the generator is replaced by sampling from Q, and the $\chi_2$ MLP estimate is computed with Equation (2) on a large number of samples (i.e. an out-of-sample estimate). We see in (b) that for large enough sample size, the MLP estimate is extremely good. In (c) we see that for smaller sample sizes, the MLP approximation bounds the ground truth $\chi_2$ from below (see Theorem 2) and converges to the ground truth roughly as $O(1/\sqrt N)$ (Theorem 3).
We notice that when the distributions have small $\chi_2$ distance, a larger training size is needed to get a good estimate; again this is in line with Theorem 3.

Theorem 1 (Chi-squared distance at full capacity). Consider the Fisher IPM for $\mathcal F$ being the space of all measurable functions endowed with $\frac12(P+Q)$, i.e. $\mathcal F := L_2(\mathcal X, \frac{P+Q}{2})$. Define the Chi-squared distance between two distributions:
$$\chi_2(P, Q) = \sqrt{\int_{\mathcal X} \frac{(P(x) - Q(x))^2}{\frac{P(x)+Q(x)}{2}}\, dx}. \quad (8)$$
The following holds true for any $P, Q$, $P \neq Q$:
1) The Fisher IPM for $\mathcal F = L_2(\mathcal X, \frac{P+Q}{2})$ is equal to the Chi-squared distance defined above: $d_{\mathcal F}(P, Q) = \chi_2(P, Q)$.
2) The optimal critic of the Fisher IPM on $L_2(\mathcal X, \frac{P+Q}{2})$ is
$$f_\chi(x) = \frac{1}{\chi_2(P, Q)} \cdot \frac{P(x) - Q(x)}{\frac{P(x)+Q(x)}{2}}.$$

We note here that LSGAN [5] at full capacity also corresponds to a Chi-squared divergence, with the main difference that LSGAN has different objectives for the generator and the discriminator (bi-level optimization), and hence does not optimize a single objective that is a distance between distributions. The Chi-squared divergence can also be achieved in the f-GAN framework of [3]. We discuss the advantages of the Fisher formulation in Appendix C. Optimizing over $L_2(\mathcal X, \frac{P+Q}{2})$ is not tractable, hence we have to restrict our function class to a hypothesis class $\mathcal H$ that enables tractable computation. Typical choices of the space $\mathcal H$ are: linear functions in the input features, an RKHS, or a nonlinear multilayer neural network with a linear last layer ($\mathcal F_{v,\omega}$). In this section we don't make any assumptions about the function space, and show in Theorem 2 how the Chi-squared distance is approximated in $\mathcal H$, and how this depends on the approximation error of the optimal critic $f_\chi$ in $\mathcal H$.

Theorem 2 (Approximating the Chi-squared distance in an arbitrary function space $\mathcal H$). Let $\mathcal H$ be an arbitrary symmetric function space. We define the inner product $\langle f, f_\chi\rangle_{L_2(\mathcal X, \frac{P+Q}{2})} = \int_{\mathcal X} f(x) f_\chi(x)\, \frac{P(x)+Q(x)}{2}\, dx$, which induces the Lebesgue norm.
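Equation (8) can be integrated numerically for one-dimensional Gaussians, reproducing the qualitative behavior of Figure 2(b): the distance is zero for identical distributions and grows with the mean shift toward its saturation value of 2. A sketch (the grid and the unit-variance parameterization are arbitrary choices):

```python
import numpy as np

def chi2_distance(p, q, grid):
    """Numerically integrate Eq. (8): sqrt( int (p - q)^2 / ((p + q)/2) dx )
    with a simple Riemann sum on a uniform grid."""
    px, qx = p(grid), q(grid)
    dx = grid[1] - grid[0]
    integrand = (px - qx) ** 2 / (0.5 * (px + qx))
    return np.sqrt(integrand.sum() * dx)

def gauss(mu):
    # unit-variance Gaussian density with mean mu
    return lambda x: np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

grid = np.linspace(-10.0, 10.0, 20001)
d0 = chi2_distance(gauss(0.0), gauss(0.0), grid)   # identical: distance 0
d1 = chi2_distance(gauss(0.0), gauss(1.0), grid)   # moderate shift
d3 = chi2_distance(gauss(0.0), gauss(3.0), grid)   # large shift, near 2
# the distance grows monotonically with the shift, as in Figure 2(b)
```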
Let $S_{L_2(\mathcal X, \frac{P+Q}{2})}$ be the unit sphere in $L_2(\mathcal X, \frac{P+Q}{2})$: $S_{L_2(\mathcal X, \frac{P+Q}{2})} = \{f : \mathcal X \to \mathbb R,\ \|f\|_{L_2(\mathcal X, \frac{P+Q}{2})} = 1\}$. The Fisher IPM defined on an arbitrary function space $\mathcal H$, $d_{\mathcal H}(P, Q)$, approximates the Chi-squared distance. The approximation quality depends on the cosine of the angle between the optimal critic $f_\chi$ and its approximation in $\mathcal H$; since $\mathcal H$ is symmetric this cosine is always positive (otherwise the same equality holds with an absolute value):
$$d_{\mathcal H}(P, Q) = \chi_2(P, Q) \sup_{f \in \mathcal H \cap S_{L_2(\mathcal X, \frac{P+Q}{2})}} \langle f, f_\chi\rangle_{L_2(\mathcal X, \frac{P+Q}{2})}.$$
Equivalently, we have the following relative approximation error:
$$\frac{\chi_2(P, Q) - d_{\mathcal H}(P, Q)}{\chi_2(P, Q)} = \frac12 \inf_{f \in \mathcal H \cap S_{L_2(\mathcal X, \frac{P+Q}{2})}} \|f - f_\chi\|^2_{L_2(\mathcal X, \frac{P+Q}{2})}.$$
From Theorem 2, we always have $d_{\mathcal H}(P, Q) \leq \chi_2(P, Q)$. Moreover, if the space $\mathcal H$ is rich enough to provide a good approximation of the optimal critic $f_\chi$, then $d_{\mathcal H}$ is a good approximation of the Chi-squared distance $\chi_2$.

Generalization bounds for the sample quality of the Fisher IPM estimated from samples of P and Q can be derived akin to [11], with the main difficulty that for Fisher IPM we have to bound the excess risk of a cost function with data-dependent constraints on the function class. We give generalization bounds for learning the Fisher IPM in the supplementary material (Theorem 3, Appendix E). In a nutshell, the generalization error of the critic learned in a hypothesis class $\mathcal H$ from samples of P and Q decomposes into the approximation error from Theorem 2 and a statistical error that is bounded using data-dependent local Rademacher complexities [17] and scales like $O(\sqrt{1/n})$, $n = \frac{MN}{M+N}$. We illustrate our main theoretical claims on a toy problem in Figure 2.

4 Fisher GAN Algorithm using ALM

For any choice of the parametric function class $\mathcal F_p$ (for example $\mathcal F_{v,\omega}$), denote the constraint in Equation (4) by $\hat\Omega(f_p, g_\theta) = \frac{1}{2N}\sum_{i=1}^N f_p^2(x_i) + \frac{1}{2N}\sum_{j=1}^N f_p^2(g_\theta(z_j))$.
Define the Augmented Lagrangian [18] corresponding to the Fisher GAN objective and constraint given in Equation (4):
$$\mathcal L_F(p, \theta, \lambda) = \hat{\mathscr E}(f_p, g_\theta) + \lambda\big(1 - \hat\Omega(f_p, g_\theta)\big) - \frac{\rho}{2}\big(\hat\Omega(f_p, g_\theta) - 1\big)^2, \quad (9)$$
where $\lambda$ is the Lagrange multiplier and $\rho > 0$ is the quadratic penalty weight. We alternate between optimizing the critic and the generator. Similarly to [7], we impose the constraint when training the critic only. Given $\theta$, for training the critic we solve $\max_p \min_\lambda \mathcal L_F(p, \theta, \lambda)$. Then, given the critic parameters $p$, we optimize the generator weights $\theta$ to minimize the objective $\min_\theta \hat{\mathscr E}(f_p, g_\theta)$. We give in Algorithm 1 an algorithm for Fisher GAN; note that we use ADAM [19] for optimizing the parameters of the critic and the generator, and SGD for the Lagrange multiplier with learning rate $\rho$, following practices in Augmented Lagrangian methods [18].

Algorithm 1 Fisher GAN
Input: penalty weight $\rho$, learning rate $\eta$, number of critic iterations $n_c$, batch size $N$
Initialize $p$, $\theta$, $\lambda = 0$
repeat
  for $j = 1$ to $n_c$ do
    Sample a minibatch $x_i$, $i = 1 \dots N$, $x_i \sim P_r$
    Sample a minibatch $z_i$, $i = 1 \dots N$, $z_i \sim p_z$
    $(g_p, g_\lambda) \leftarrow (\nabla_p \mathcal L_F, \nabla_\lambda \mathcal L_F)(p, \theta, \lambda)$
    $p \leftarrow p + \eta\, \mathrm{ADAM}(p, g_p)$
    $\lambda \leftarrow \lambda - \rho\, g_\lambda$   {SGD rule on $\lambda$ with learning rate $\rho$}
  end for
  Sample $z_i$, $i = 1 \dots N$, $z_i \sim p_z$
  $d_\theta \leftarrow \nabla_\theta \hat{\mathscr E}(f_p, g_\theta) = -\nabla_\theta \frac1N \sum_{i=1}^N f_p(g_\theta(z_i))$
  $\theta \leftarrow \theta - \eta\, \mathrm{ADAM}(\theta, d_\theta)$
until $\theta$ converges

[Figure 3: Samples and plots of the loss $\hat{\mathscr E}(\cdot)$, Lagrange multiplier $\lambda$, and constraint $\hat\Omega(\cdot)$ on three benchmark datasets: (a) LSUN, (b) CelebA, (c) CIFAR-10. We see that during training, as $\lambda$ grows slowly, the constraint becomes tight.]

[Figure 4: No batch norm: training results from a critic f without batch normalization. Fisher GAN (left) produces decent samples, while WGAN with weight clipping (right) does not.]
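As a concrete illustration of the critic update in Algorithm 1, the following self-contained NumPy sketch runs the augmented Lagrangian updates for a toy linear critic $f(x) = \langle w, x\rangle$, with plain gradient ascent standing in for ADAM. The data, step sizes, and iteration count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
x_real = rng.normal(1.0, 1.0, size=(256, 4))   # stand-in for real samples
x_fake = rng.normal(0.0, 1.0, size=(256, 4))   # stand-in for g_theta(z)

w = rng.normal(0.0, 0.1, size=4)   # linear critic f(x) = <w, x>
lam, rho, eta = 0.0, 0.05, 0.02    # multiplier, penalty weight, learning rate

for _ in range(5000):
    fr, ff = x_real @ w, x_fake @ w
    omega = 0.5 * (fr ** 2).mean() + 0.5 * (ff ** 2).mean()   # Omega_hat
    grad_E = x_real.mean(axis=0) - x_fake.mean(axis=0)        # grad of E_hat
    grad_omega = x_real.T @ fr / len(fr) + x_fake.T @ ff / len(ff)
    # ascent on w for L_F = E_hat + lam*(1 - Omega) - (rho/2)*(Omega - 1)^2
    w += eta * (grad_E - (lam + rho * (omega - 1.0)) * grad_omega)
    lam -= rho * (1.0 - omega)     # SGD step on the Lagrange multiplier

fr, ff = x_real @ w, x_fake @ w
E_hat = fr.mean() - ff.mean()
omega_hat = 0.5 * (fr ** 2).mean() + 0.5 * (ff ** 2).mean()
# At convergence the constraint is (approximately) tight: omega_hat ~ 1,
# mirroring the lambda and Omega_hat traces in Figure 3.
```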
We hypothesize that this is due to the implicit whitening that Fisher GAN provides. (Note that WGAN-GP does also successfully converge without BN [7].) For both models the learning rate was appropriately reduced.

5 Experiments

We experimentally validate the proposed Fisher GAN. We claim three main results: (1) stable training, with a meaningful and stable loss going down as training progresses and correlating with sample quality, similar to [6, 7]; (2) very fast convergence to good sample quality as measured by inception score; (3) competitive semi-supervised learning performance, on par with literature baselines, without requiring normalization of the critic. We report results on three benchmark datasets: CIFAR-10 [20], LSUN [21] and CelebA [22]. We parametrize the generator $g_\theta$ and critic f with convolutional neural networks following the model design from DCGAN [23]. For 64×64 images (LSUN, CelebA) we use the model architecture in Appendix F.2; for CIFAR-10 we train at 32×32 resolution using the architecture in F.3 for experiments regarding sample quality (inception score), while for semi-supervised learning we use a better-regularized discriminator similar to the OpenAI [9] and ALI [24] architectures, as given in F.4. We used Adam [19] as the optimizer for all our experiments, with hyper-parameters given in Appendix F.

Qualitative: Loss stability and sample quality. Figure 3 shows samples and plots during training. For LSUN we use a higher number of D updates ($n_c = 5$), since we see, similarly to WGAN, that the loss shows large fluctuations with lower $n_c$ values. For CIFAR-10 and CelebA we use a reduced $n_c = 2$ with no negative impact on loss stability. CIFAR-10 here was trained without any label information. We show both train and validation loss on LSUN and CIFAR-10, showing, as can be expected, no overfitting on the large LSUN dataset and some overfitting on the small CIFAR-10 dataset.
To back up our claim that Fisher GAN provides stable training, we trained both a Fisher GAN and a WGAN where the batch normalization in the critic f was removed (Figure 4).

Quantitative analysis: Inception Score and Speed. It is generally agreed that evaluating generative models is hard [25]. We follow the literature in using the "inception score" [9] as a metric for the quality

Figure 5: CIFAR-10 inception scores under 3 training conditions: (a) Fisher GAN: CE, Conditional; (b) Fisher GAN: CE, G Not Cond.; (c) Fisher GAN: No Lab; with WGAN-GP, WGAN, and DCGAN baselines. Scores are plotted against both g_θ iterations and wallclock time (seconds). Corresponding samples are given in rows from top to bottom (a, b, c). The inception score plots mirror Figure 3 from [7]. Note: all inception scores are computed from the same TensorFlow codebase, using the architecture described in Appendix F.3, and with weight initialization from a normal distribution with stdev = 0.02. In Appendix F.1 we show that these choices also benefit our WGAN-GP baseline.

of CIFAR-10 samples. Figure 5 shows the inception score as a function of the number of g_θ updates and wallclock time. All timings are obtained by running on a single K40 GPU on the same cluster. We see from Figure 5 that Fisher GAN both produces better inception scores and has a clear speed advantage over WGAN-GP.

Quantitative analysis: SSL. One of the main premises of unsupervised learning is to learn features on a large corpus of unlabeled data in an unsupervised fashion, which are then transferable to other tasks. This provides a proper framework to measure the performance of our algorithm. This leads us to quantify the performance of Fisher GAN by semi-supervised learning (SSL) experiments on CIFAR-10. We do joint supervised and unsupervised training on CIFAR-10, by adding a cross-entropy term to the IPM objective, in conditional and unconditional generation.
Table 2: CIFAR-10 inception scores using the ResNet architecture and codebase from [7]. We used Layer Normalization [26], which outperformed unnormalized ResNets. Apart from this, no additional hyperparameter tuning was done to get stable training of the ResNets.

Unsupervised:
Method                          Score
ALI [24]                        5.34 ± .05
BEGAN [27]                      5.62
DCGAN [23] (in [28])            6.16 ± .07
Improved GAN (-L+HA) [9]        6.86 ± .06
EGAN-Ent-VI [29]                7.07 ± .10
DFM [30]                        7.72 ± .13
WGAN-GP ResNet [7]              7.86 ± .07
Fisher GAN ResNet (ours)        7.90 ± .05

Supervised:
Method                          Score
SteinGAN [31]                   6.35
DCGAN (with labels, in [31])    6.58
Improved GAN [9]                8.09 ± .07
Fisher GAN ResNet (ours)        8.16 ± .12
AC-GAN [32]                     8.25 ± .07
SGAN-no-joint [28]              8.37 ± .08
WGAN-GP ResNet [7]              8.42 ± .10
SGAN [28]                       8.59 ± .12

Unconditional Generation with CE Regularization. We parametrize the critic f as in F_{v,ω}. While training the critic using the Fisher GAN objective L_F given in Equation (9), we train a linear classifier on the feature space Φ_ω of the critic whenever labels are available (K labels). The linear classifier is trained with cross-entropy (CE) minimization. The critic loss then becomes

$$\mathcal{L}_D = \mathcal{L}_F - \lambda_D \sum_{(x,y) \in \text{lab}} CE(x, y; S, \Phi_\omega), \quad \text{where } CE(x, y; S, \Phi_\omega) = -\log\big[\text{Softmax}(\langle S, \Phi_\omega(x)\rangle)_y\big],$$

where S ∈ ℝ^{K×m} is the linear classifier and ⟨S, Φ_ω⟩ ∈ ℝ^K with slight abuse of notation. λ_D is the regularization hyper-parameter. We now sample three minibatches for each critic update: one labeled batch from the small labeled dataset for the CE term, and an unlabeled batch plus a generated batch for the IPM.

Conditional Generation with CE Regularization. We also trained conditional generator models, conditioning the generator on y by concatenating the input noise with a 1-of-K embedding of the label: we now have g_θ(z, y). We parametrize the critic in F_{v,ω} and modify the critic objective as above. We also add a cross-entropy term for the generator to minimize during its training step: $\mathcal{L}_G = \hat{\mathcal{E}} + \lambda_G \sum_{z \sim p(z),\, y \sim p(y)} CE(g_\theta(z, y), y; S, \Phi_\omega)$.
For generator updates we still need to sample only a single minibatch, since we use the minibatch of samples from g_θ(z, y) to compute both the IPM loss Ê and CE. The labels are sampled according to the prior y ∼ p(y), which defaults to the discrete uniform prior when there is no class imbalance. We found λ_D = λ_G = 0.1 to be optimal.

New Parametrization of the Critic: "K + 1 SSL". One specific successful formulation of SSL in the standard GAN framework was provided in [9], where the discriminator classifies samples into K + 1 categories: the K correct classes, and K + 1 for fake samples. Intuitively this puts the real classes in competition with the fake class. In order to implement this idea in the Fisher framework, we define a new function class of the critic that puts in competition the K class directions of the classifier S_y, and another "K + 1" direction v that indicates fake samples. Hence we propose the following parametrization for the critic:

$$f(x) = \sum_{y=1}^{K} p(y|x)\, \langle S_y, \Phi_\omega(x)\rangle - \langle v, \Phi_\omega(x)\rangle, \quad \text{where } p(y|x) = \text{Softmax}(\langle S, \Phi_\omega(x)\rangle)_y,$$

which is also optimized with cross-entropy. Note that this critic does not fall under the interpretation with whitened means from Section 2.2, but does fall under the general Fisher IPM framework from Section 2.1. We can use this critic with both conditional and unconditional generation in the same way as described above. In this setting we found λ_D = 1.5, λ_G = 0.1 to be optimal.

Layerwise normalization on the critic. For most GAN formulations following DCGAN design principles, batch normalization (BN) [33] in the critic is an essential ingredient. From our semi-supervised learning experiments, however, it appears that batch normalization gives substantially worse performance than layer normalization (LN) [26] or even no layerwise normalization. We attribute this to the implicit whitening Fisher GAN provides. Table 3 shows the SSL results on CIFAR-10.
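A minimal numpy sketch of this "K + 1" critic may make the parametrization concrete. All names here are illustrative (phi stands for one sample's features Φ_ω(x), S for the K class directions, v for the fake direction); this is a sketch of the scoring rule only, not of training:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(z - z.max())
    return e / e.sum()

def k_plus_one_critic(phi, S, v):
    """f(x) = sum_y p(y|x) <S_y, phi(x)> - <v, phi(x)>,
    with p(y|x) = Softmax(<S, phi(x)>)_y."""
    scores = S @ phi              # (K,) class-direction scores <S_y, phi>
    p = softmax(scores)           # p(y|x)
    return p @ scores - v @ phi   # real class directions vs. the "fake" one

rng = np.random.default_rng(0)
m, K = 8, 10
phi = rng.normal(size=m)          # features Phi_omega(x) for one sample
S = rng.normal(size=(K, m))       # K class directions S_y
v = rng.normal(size=m)            # the "K+1" fake direction
fx = k_plus_one_critic(phi, S, v)
```

A useful sanity property: if all K class directions coincide (S_y = s for all y) and v = 0, the softmax weights sum to one and f(x) reduces to ⟨s, Φ_ω(x)⟩.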
We show that Fisher GAN has competitive results, on par with state-of-the-art literature baselines. When comparing to WGAN with weight clipping, it becomes clear that we recover the lost SSL performance. Results with the K + 1 critic are better across the board, consistently demonstrating the advantage of our proposed K + 1 formulation. Conditional generation does not provide gains in the setting with layer normalization or without normalization.

Table 3: CIFAR-10 SSL results (misclassification rate), by number of labeled examples: 1000 / 2000 / 4000 / 8000.

CatGAN [34]:                        — / — / 19.58 / —
Improved GAN (FM) [9]:              21.83 ± 2.01 / 19.61 ± 2.09 / 18.63 ± 2.32 / 17.72 ± 1.82
ALI [24]:                           19.98 ± 0.89 / 19.09 ± 0.44 / 17.99 ± 1.62 / 17.05 ± 1.49
WGAN (weight clipping) Uncond:      69.01 / 56.48 / 40.85 / 30.56
WGAN (weight clipping) Cond:        68.11 / 58.59 / 42.00 / 30.91
Fisher GAN BN Cond:                 36.37 / 32.03 / 27.42 / 22.85
Fisher GAN BN Uncond:               36.42 / 33.49 / 27.36 / 22.82
Fisher GAN BN K+1 Cond:             34.94 / 28.04 / 23.85 / 20.75
Fisher GAN BN K+1 Uncond:           33.49 / 28.60 / 24.19 / 21.59
Fisher GAN LN Cond:                 26.78 ± 1.04 / 23.30 ± 0.39 / 20.56 ± 0.64 / 18.26 ± 0.25
Fisher GAN LN Uncond:               24.39 ± 1.22 / 22.69 ± 1.27 / 19.53 ± 0.34 / 17.84 ± 0.15
Fisher GAN LN K+1 Cond:             20.99 ± 0.66 / 19.01 ± 0.21 / 17.41 ± 0.38 / 15.50 ± 0.41
Fisher GAN LN K+1 Uncond:           19.74 ± 0.21 / 17.87 ± 0.38 / 16.13 ± 0.53 / 14.81 ± 0.16
Fisher GAN No Norm K+1 Uncond:      21.15 ± 0.54 / 18.21 ± 0.30 / 16.74 ± 0.19 / 14.80 ± 0.15

6 Conclusion

We have defined Fisher GAN, which provides a stable and fast way of training GANs. Fisher GAN is based on a scale-invariant IPM, obtained by constraining the second-order moments of the critic. We provide an interpretation as whitened (Mahalanobis) mean feature matching and χ² distance. We show graceful theoretical and empirical advantages of our proposed Fisher GAN.

Acknowledgments. The authors thank Steven J. Rennie for many helpful discussions and Martin Arjovsky for helpful clarifications and pointers.
References

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[2] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[3] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
[4] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. In ICLR, 2017.
[5] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. arXiv:1611.04076, 2016.
[6] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. In ICML, 2017.
[7] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv:1704.00028, 2017.
[8] Youssef Mroueh, Tom Sercu, and Vaibhava Goel. McGan: Mean and covariance feature matching GAN. In ICML, 2017. arXiv:1702.08398.
[9] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
[10] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 1997.
[11] Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 2012.
[12] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv:1610.03483, 2016.
[13] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. JMLR, 2012.
[14] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML, 2015.
[15] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015.
[16] Zaïd Harchaoui, Francis R. Bach, and Eric Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In NIPS, 2008.
[17] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities. Annals of Statistics, 2005.
[18] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[19] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, 2009.
[21] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365, 2015.
[22] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[23] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
[24] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In ICLR, 2017.
[25] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016.
[26] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
[27] David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.
[28] Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. Stacked generative adversarial networks. arXiv:1612.04357, 2016.
[29] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. arXiv:1702.01691, 2017.
[30] D. Warde-Farley and Y. Bengio. Improving generative adversarial networks with denoising feature matching. In ICLR, 2017.
[31] Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv:1611.01722, 2016.
[32] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv:1610.09585, 2016.
[33] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[34] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
[35] Alessandra Tosi, Søren Hauberg, Alfredo Vellido, and Neil D. Lawrence. Metrics for probabilistic geometries. 2014.
[36] Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On integral probability metrics, φ-divergences and binary classification. 2009.
[37] I. Ekeland and T. Turnbull. Infinite-dimensional Optimization and Convexity. The University of Chicago Press, 1983.
[38] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249–256, 2010.
[39] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852, 2015.
ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events

Evan Racah1,2, Christopher Beckham1,3, Tegan Maharaj1,3, Samira Ebrahimi Kahou4, Prabhat2, Christopher Pal1,3
1 MILA, Université de Montréal, evan.racah@umontreal.ca. 2 Lawrence Berkeley National Lab, Berkeley, CA, prabhat@lbl.gov. 3 École Polytechnique de Montréal, firstname.lastname@polymtl.ca. 4 Microsoft Maluuba, samira.ebrahimi@microsoft.com.

Abstract

The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest, including hurricanes, extra-tropical cyclones, weather fronts, and blocking events, among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change.
The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect.

1 Introduction

Climate change is one of the most important challenges facing humanity in the 21st century, and climate simulations are one of the only viable mechanisms for understanding the future impact of various carbon emission scenarios and intervention strategies. Large climate simulations produce massive datasets: a simulation of 27 years from a 25 square km, 3 hour resolution model produces on the order of 10TB of multi-variate data. This scale of data makes post-processing and quantitative assessment challenging, and as a result, climate analysts and policy makers typically take global and annual averages of temperature or sea-level rise. While these coarse measurements are useful for public and media consumption, they ignore spatially (and temporally) resolved extreme weather events such as extra-tropical cyclones and tropical cyclones (hurricanes). Because the general public and policy makers are concerned about the local impacts of climate change, it is critical that we be able to examine how localized weather patterns (such as tropical cyclones), which can have dramatic impacts on populations and economies, will change in frequency and intensity under global warming.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Deep neural networks, especially deep convolutional neural networks, have enjoyed breakthrough success in recent years, achieving state-of-the-art results on many benchmark datasets (Krizhevsky et al., 2012; He et al., 2015; Szegedy et al., 2015) and also compelling results on many practical tasks such as disease diagnosis (Hosseini-Asl et al., 2016), facial recognition (Parkhi et al., 2015), autonomous driving (Chen et al., 2015), and many others.
Furthermore, deep neural networks have also been very effective in the context of unsupervised and semi-supervised learning; some recent examples include variational autoencoders (Kingma & Welling, 2013), adversarial networks (Goodfellow et al., 2014; Makhzani et al., 2015; Salimans et al., 2016; Springenberg, 2015), ladder networks (Rasmus et al., 2015) and stacked what-where autoencoders (Zhao et al., 2015). There is a recent trend towards video datasets aimed at better understanding spatiotemporal relations and multimodal inputs (Kay et al., 2017; Gu et al., 2017; Goyal et al., 2017). The task of finding extreme weather events in climate data is similar to the task of detecting objects and activities in video - a popular application for deep learning techniques. An important difference is that in the case of climate data, the ’video’ has 16 or more ’channels’ of information (such as water vapour, pressure and temperature), while conventional video only has 3 (RGB). In addition, climate simulations do not share the same statistics as natural images. As a result, unlike many popular techniques for video, we hypothesize that we cannot build off successes from the computer vision community such as using pretrained weights from CNNs (Simonyan & Zisserman, 2014; Krizhevsky et al., 2012) pretrained on ImageNet (Russakovsky et al., 2015). Climate data thus poses a number of interesting machine learning problems: multi-class classification with unbalanced classes; partial annotation; anomaly detection; distributional shift and bias correction; spatial, temporal, and spatiotemporal relationships at widely varying scales; relationships between variables that are not fully understood; issues of data and computational efficiency; opportunities for semi-supervised and generative models; and more. Here, we address multi-class detection and localization of four extreme weather phenomena: tropical cyclones, extra-tropical cyclones, tropical depressions, and atmospheric rivers. 
We implement a 3D (height, width, time) convolutional encoder-decoder, with a novel single-pass bounding-box regression loss applied at the bottleneck. To our knowledge, this is the first use of a deep autoencoding architecture for bounding-box regression. This architectural choice allows us to do semi-supervised learning in a very natural way (simply training the autoencoder with reconstruction for unlabelled data), while providing relatively interpretable features at the bottleneck. This is appealing for use in the climate community, as current engineered heuristics do not perform as well as human experts for identifying extreme weather events. Our main contributions are (1) a baseline bounding-box loss formulation; (2) our architecture, a first step away from engineered heuristics for extreme weather events, towards semi-supervised learned features; and (3) the ExtremeWeather dataset, which we make available in three benchmarking splits: one small, for model exploration, one medium, and one comprising the full 27 years of climate simulation output.

2 Related work

2.1 Deep learning for climate and weather data

Climate scientists do use basic machine learning techniques, for example PCA analysis for dimensionality reduction (Monahan et al., 2009) and k-means analysis for clustering (Steinhaeuser et al., 2011). However, the climate science community primarily relies on expert engineered systems and ad-hoc rules for characterizing climate and weather patterns. Of particular relevance is TECA (Toolkit for Extreme Climate Analysis) (Prabhat et al., 2012, 2015), an application of large-scale pattern detection on climate data using heuristic methods. A more detailed explanation of how TECA works is given in Section 3.
Using the output of TECA analysis (centers of storms and bounding boxes around these centers) as ground truth, Liu et al. (2016) demonstrated for the first time that convolutional architectures could be successfully applied to predict the class label for two extreme weather event types. Their work considered the binary classification task on centered, cropped patches from 2D (single-timestep) multi-channel images. Like Liu et al. (2016), we use TECA's output (centers and bounding boxes) as ground truth, but we build on their work by: 1) using uncropped images, 2) considering the temporal axis of the data, 3) doing multi-class bounding box detection, and 4) taking a semi-supervised approach with a hybrid predictive and reconstructive model.

Some recent work has applied deep learning methods to weather forecasting. Xingjian et al. (2015) have explored a convolutional LSTM architecture (described in Section 2.2) for predicting future precipitation on a local scale (i.e. the size of a city) using radar echo data. In contrast, we focus on extreme event detection on planetary-scale data. Our aim is to capture patterns which are very local in time (e.g. a hurricane may be present in half a dozen sequential frames), compared to the scale of our underlying climate data, consisting of global simulations over many years. As such, 3D CNNs seemed to make more sense for our detection application, compared to LSTMs, whose strength is in capturing long-term dependencies.

2.2 Related methods and models

Following the dramatic success of CNNs in static 2D images, a wide variety of CNN architectures have been explored for video, e.g. (Karpathy et al., 2014; Yao et al., 2015; Tran et al., 2014). The details of how CNNs are extended to capture the temporal dimension are important. Karpathy et al. (2014) explore different strategies for fusing information from 2D CNN subcomponents; in contrast, Yao et al. (2015) create 3D volumes of statistics from low-level image features.
Convolutional networks have also been combined with RNNs (recurrent neural networks) for modeling video and other sequence data, and we briefly review some relevant video models here. The most common and straightforward approach to modeling sequential images is to feed single-frame representations from a CNN at each timestep to an RNN. This approach has been examined for a number of different types of video (Donahue et al., 2015; Ebrahimi Kahou et al., 2015), while (Srivastava et al., 2015) have explored an LSTM architecture for the unsupervised learning of video representations using a pretrained CNN representation as input. These architectures separate learning of spatial and temporal features, something which is not desirable for climate patterns. Another popular model, also used on 1D data, is a convolutional RNN, wherein the hidden-to-hidden transition layer is 1D convolutional (i.e. the state is convolved over time). (Ballas et al., 2016) combine these ideas, applying a convolutional RNN to frames processed by a (2D) CNN. The 3D CNNs we use here are based on 3-dimensional convolutional filters, taking the height, width, and time axes into account for each feature map, as opposed to aggregated 2D CNNs. This approach was studied in detail in (Tran et al., 2014). 3D convolutional neural networks have been used for various tasks ranging from human activity recognition (Ji et al., 2013), to large-scale YouTube video classification (Karpathy et al., 2014), and video description (Yao et al., 2015). Hosseini-Asl et al. (2016) use a 3D convolutional autoencoder for diagnosing Alzheimer’s disease through MRI - in their case, the 3 dimensions are height, width, and depth. (Whitney et al., 2016) use 3D (height, width, depth) filters to predict consecutive frames of a video game for continuation learning. Recent work has also examined ways to use CNNs to generate animated textures and sounds (Xie et al., 2016). 
This work is similar to ours in using a 3D convolutional encoder, but where their approach is stochastic and used for generation, ours is deterministic, used for multi-class detection and localization, and also comprises a 3D convolutional decoder for unsupervised learning. Stepping back, our approach is related conceptually to (Misra et al., 2015), who use semi-supervised learning for bounding-box detection, but their approach uses iterative heuristics with a support vector machine (SVM) classifier, an approach which would not allow learning of spatiotemporal features. Our setup is also similar to recent work from (Zhang et al., 2016) (and others) in using a hybrid prediction and autoencoder loss. This strategy has not, to our knowledge, been applied either to multidimensional data or bounding-box prediction, as we do here. Our bounding-box prediction loss is inspired by (Redmon et al., 2015), an approach extended in (Ren et al., 2015), as well as the single-shot multibox detector formulation used in (Liu et al., 2015) and the seminal bounding-box work in OverFeat (Sermanet et al., 2013). Details of this loss are described in Section 4.

3 The ExtremeWeather dataset

3.1 The Data

The climate science community uses three flavors of global datasets: observational products (satellite, gridded weather station); reanalysis products (obtained by assimilating disparate observational products into a climate model); and simulation products. In this study, we analyze output from the third category because we are interested in climate change projection studies. We would like to better understand how Earth's climate will change by the year 2100, and it is only possible to conduct such an analysis on simulation output. Although this dataset contains the past, the performance of deep learning methods on this dataset can still inform the effectiveness of these approaches on future simulations.
We consider the CAM5 (Community Atmospheric Model v5) simulation, which is a standardized three-dimensional, physical model of the atmosphere used by the climate community to simulate the global climate (Conley et al., 2012). When it is configured at 25-km spatial resolution (Wehner et al., 2015), each snapshot of the global atmospheric state in the CAM5 model output is a 768x1152 image, having 16 'channels', each corresponding to a different simulated variable (like surface temperature, surface pressure, precipitation, zonal wind, meridional wind, humidity, cloud fraction, water vapor, etc.). The global climate is simulated at a temporal resolution of 3 hours, giving 8 snapshots (images) per day. The data we provide is from a simulation of 27 years, from 1979 to 2005. In total, this gives 78,840 16-channel 768x1152 images.

3.2 The Labels

Ground-truth labels are created for four extreme weather events: Tropical Depressions (TD), Tropical Cyclones (TC), Extra-Tropical Cyclones (ETC) and Atmospheric Rivers (AR), using TECA (Prabhat et al., 2012). TECA generally works by suggesting candidate coordinates for storm centers, selecting only points that satisfy a certain combination of criteria, which usually requires that the values of various variables (such as pressure, temperature and wind speed) lie between certain thresholds. These candidates are then refined by breaking ties and matching the "same" storms across time (Prabhat et al., 2012). These storm centers are then used as the center coordinates for bounding boxes. The size of the boxes is determined using prior domain knowledge as to how big these storms usually are, as described in (Liu et al., 2016). Every other image (i.e. 4 per day) is labeled, due to certain design decisions made during the production run of the TECA code. This gives us 39,420 labeled images.
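The image counts quoted above follow directly from the simulation's temporal resolution; a quick arithmetic check (the variable names are our own):

```python
# Sanity check of the ExtremeWeather dataset sizes: 3-hourly snapshots
# over 27 simulated years, with every other snapshot labeled by TECA.
years, days_per_year, snapshots_per_day = 27, 365, 8
total_images = years * days_per_year * snapshots_per_day   # 78,840 images
labeled_images = total_images // 2                         # 4 labeled per day
image_shape = (16, 768, 1152)                              # channels, H, W
```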
3.2.1 Issues with the Labels

TECA, the ground-truth labeling framework, implements heuristics to assign 'ground truth' labels for the four types of extreme weather events. However, it is entirely possible there are errors in the labeling: for instance, there is little agreement in the climate community on a standard heuristic for capturing Extra-Tropical Cyclones (Neu et al., 2013); Atmospheric Rivers have been extensively studied in the northern hemisphere (Lavers et al., 2012; Dettinger et al., 2011), but not in the southern hemisphere; and the spatial extents of such events are not universally agreed upon. In addition, this labeling only includes ARs in the US and not in Europe. As such, there is potential for many false negatives, resulting in partially annotated images. Lastly, it is worth mentioning that because the ground truth generation is a simple automated method, a deep, supervised method can only do as well as emulating this class of simple functions. This, in addition to lower representation for some classes (AR and TD), is part of our motivation in exploring semi-supervised methods to better understand the features underlying extreme weather events, rather than trying to "beat" existing techniques.

3.3 Suggested Train/Test Splits

We provide suggested train/test splits for the varying sizes of datasets on which we run experiments. Table 1 shows the years used for train and test for each dataset size. We show "small" (2 years train, 1 year test), "medium" (8 years train, 2 years test) and "large" (22 years train, 5 years test) datasets. For reference, Table 2 shows the breakdown of the dataset splits for each class for "small", in order to illustrate the class imbalance present in the dataset. Our model was trained on "small", where we split the train set 50:50 for train and validation. Links for downloading train and test data, as well as further information on the different dataset sizes and splits, can be found here: extremeweatherdataset.github.io.
Table 1: Three benchmarking levels for the ExtremeWeather dataset

Level    Train                              Test
Small    1979, 1981                         1984
Medium   1979-1983, 1989-1991               1984-1985
Large    1979-1983, 1994-2005, 1989-1993    1984-1988

Table 2: Number of examples in ExtremeWeather benchmark splits, with class breakdown statistics for Tropical Cyclones (TC), Extra-Tropical Cyclones (ETC), Tropical Depressions (TD), and United States Atmospheric Rivers (US-AR)

Benchmark  Split  TC (%)         ETC (%)        TD (%)      US-AR (%)   Total
Small      Train  3190 (42.32)   3510 (46.57)   433 (5.74)  404 (5.36)  7537
Small      Test   2882 (39.04)   3430 (46.47)   697 (9.44)  372 (5.04)  7381

4 The model

We use a 3D convolutional encoder-decoder architecture, meaning that the filters of the convolutional encoder and decoder are 3-dimensional (height, width, time). The architecture is shown in Figure 1; the encoder uses convolution at each layer, while the decoder is the equivalent structure in reverse, using tied weights and deconvolutional layers, with leaky ReLUs (Maas et al., 2013) (slope 0.1) after each layer. As we take a semi-supervised approach, the code (bottleneck) layer of the autoencoder is used as the input to the loss layers, which make predictions for (1) bounding box location and size, (2) the class associated with the bounding box, and (3) the confidence (sometimes called 'objectness') of the bounding box. Further details (filter size, stride length, padding, output sizes, etc.) can be found in the supplementary materials.

Figure 1: Diagram of the 3D semi-supervised architecture. Parentheses denote the subset of the total dimension shown (for ease of visualization, only two feature maps per layer are shown for the encoder-decoder; all feature maps are shown for the bounding-box regression layers).
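As a small concrete reference for the activation used after each encoder and decoder layer, a numpy version of the leaky ReLU with slope 0.1 (a sketch; the actual implementation is framework-specific):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # identity for positive inputs, slope alpha for negative inputs
    return np.where(x > 0, x, alpha * x)
```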
The total loss for the network, L, is a weighted combination of the supervised bounding-box regression loss, L_sup, and the unsupervised reconstruction error, L_rec:

L = L_sup + λ L_rec,  (1)

where L_rec is the mean squared difference between the input X and the reconstruction X*:

L_rec = (1/M) ||X − X*||₂²,  (2)

where M is the total number of pixels in an image.

In order to regress bounding boxes, we split the original 768x1152 image into a 12x18 grid of 64x64 anchor boxes. We then predict a box at each grid point by transforming the representation to 12x18 = 216 scores (one per anchor box). Each score encodes three pieces of information: (1) how much the predicted box differs in size and location from the anchor box, (2) the confidence that an object of interest is in the predicted box ("objectness"), and (3) the class probability distribution for that object. Each component of the score is computed by several 3x3 convolutions applied to the 640 12x18 feature maps of the last encoder layer. Because the pixels of each feature map at a given x, y coordinate can be thought of as a learned representation of the climate data in a 64x64 patch of the input image, we can think of the 3x3 convolutions as having a local receptive field size of 192x192; they use a representation of a 192x192 neighborhood from the input image as context to determine the box and object centered in the given 64x64 patch. Our approach is similar to (Liu et al., 2015) and (Sermanet et al., 2013), which use convolutions with small local receptive field filters to regress boxes. This choice is motivated by the fact that extreme weather events occur in relatively small spatiotemporal volumes, with the 'background' context being highly consistent across event types and between events and non-events. This is in contrast to Redmon et al.
(2015), which uses a fully connected layer to consider the whole image as context, appropriate for the task of object identification in natural images, where there is often a strong relationship between background and object.

The bounding box regression loss, L_sup, is determined as follows:

L_sup = (1/N) (L_box + L_conf + L_cls),  (3)

where N is the number of time steps in the minibatch, and L_box is defined as:

L_box = α Σ_i 1_i^obj R(u_i − u_i*) + β Σ_i 1_i^obj R(v_i − v_i*),  (4)

where i ∈ [0, 216) is the index of the anchor box at the ith grid point; 1_i^obj = 1 if an object is present at the ith grid point and 0 if not; R(z) is the smooth L1 loss as used in (Ren et al., 2015); u_i = (t_x, t_y)_i and u_i* = (t_x*, t_y*)_i; v_i = (t_w, t_h)_i and v_i* = (t_w*, t_h*)_i; and t is the parametrization defined in (Ren et al., 2015) such that:

t_x = (x − x_a)/w_a,  t_y = (y − y_a)/h_a,  t_w = log(w/w_a),  t_h = log(h/h_a),
t_x* = (x* − x_a)/w_a,  t_y* = (y* − y_a)/h_a,  t_w* = log(w*/w_a),  t_h* = log(h*/h_a),

where (x_a, y_a, w_a, h_a) are the center coordinates, width, and height of the closest anchor box, (x, y, w, h) are the predicted coordinates, and (x*, y*, w*, h*) are the ground truth coordinates.

L_conf is the weighted cross-entropy on the probability of an object being present in a grid cell:

L_conf = Σ_i 1_i^obj [−log p(obj_i)] + γ Σ_i 1_i^noobj [−log(1 − p(obj_i))]  (5)

Finally, L_cls is the cross-entropy between the one-hot encoded class distribution and the softmax predicted class distribution, evaluated only for predicted boxes at the grid points containing a ground truth box:

L_cls = Σ_i 1_i^obj Σ_{c ∈ classes} −p*(c) log p(c)  (6)

The formulation of L_sup is similar in spirit to YOLO (Redmon et al., 2015), with a few important differences.
Firstly, the object confidence and class probability terms in YOLO are squared differences between ground truth and prediction, while we use cross-entropy, as in the region proposal network from Faster R-CNN (Ren et al., 2015) and the network from (Liu et al., 2015), for the object probability and class probability terms respectively. Secondly, we use a different parametrization for the coordinates and the size of the bounding box. In YOLO, the parametrizations for x and y are equivalent to Faster R-CNN's t_x and t_y for an anchor box the same size as the patch it represents (64x64). However, w and h in YOLO are equivalent to Faster R-CNN's t_w and t_h for a 64x64 anchor box only if (a) the anchor box had a height and width equal to the size of the whole image and (b) there were no log transform in Faster R-CNN's parametrization. We find both these differences to be important in practice. Without the log term, and using ReLU nonlinearities initialized (as is standard) centered around 0, most outputs (more than half) will give initial boxes with 0 height and width. This makes learning very slow, as the network must learn to resize essentially empty boxes. Adding the log term alone in effect makes the "default" box (an output of 0) equal to the height and width of the entire image; this equally slows down learning, because the network must now learn to drastically shrink boxes. Making the anchor size (w_a, h_a) equal to 64x64 is a pragmatic 'Goldilocks' value. This makes training much more efficient, as optimization can focus more on picking which box contains an object and less on what size the box should be. Finally, where YOLO uses the squared difference between predicted and ground truth coordinate parametrizations, we use smooth L1, due to its lower sensitivity to outlier predictions (Ren et al., 2015).
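The pieces above can be sketched in numpy: the reconstruction term of Eq. (2), the smooth L1 loss R(z), and the anchor-relative encode/decode transforms for center-format (x, y, w, h) boxes. The helper names are hypothetical, not from the paper's code. Note that decoding a default output of t = 0 recovers exactly the anchor box, which is why 64x64 anchors give sensibly sized initial boxes.

```python
import numpy as np

def reconstruction_loss(x, x_recon):
    # L_rec = (1/M) ||X - X*||^2, with M the number of pixels (Eq. 2).
    return np.mean((x - x_recon) ** 2)

def smooth_l1(z):
    # R(z): quadratic near zero, linear in the tails (Ren et al., 2015).
    a = np.abs(z)
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5)

def encode(box, anchor):
    # (x, y, w, h) -> (t_x, t_y, t_w, t_h) relative to an anchor box.
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])

def decode(t, anchor):
    # Invert the parametrization back to an (x, y, w, h) box.
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return np.array([tx * wa + xa, ty * ha + ya,
                     wa * np.exp(tw), ha * np.exp(th)])
```

With a 64x64 anchor, a predicted box shifted by half an anchor width has t_x = 0.5, and one twice as tall has t_h = log 2, so untrained outputs near 0 already correspond to plausible box sizes.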
5 Experiments and Discussion

5.1 Framewise Reconstruction

As a simple experiment, we first train a 2D convolutional autoencoder on the data, treating each timestep as an individual training example (everything else about the model is as described in Section 4), in order to visually assess reconstruction quality. Figure 2 shows the original and reconstructed feature maps for the 16 climate variables of one image in the training set. Reconstruction loss on the validation set was similar to that on the training set. As the reconstruction visualizations suggest, the convolutional autoencoder architecture does a good job of encoding spatial information from climate images.

Figure 2: Feature maps for the 16 channels in an 'image' from the training set (left) and their reconstructions from the 2D convolutional autoencoder (right).

5.2 Detection and localization

All experiments are on ExtremeWeather-small, as described in Section 3, where 1979 is train and 1981 is validation. The model is trained with Adam (Kingma & Ba, 2014), with a learning rate of 0.0001 and a weight decay coefficient of 0.0005. For comparison, and to evaluate how useful the time axis is for recognizing extreme weather events, we run experiments with both 2D (width, height) and 3D (width, height, time) versions of the architecture described in Section 4. Values for α, β, and γ (hyperparameters described in loss Equations 4 and 5) were selected through experimentation, with some inspiration from (Redmon et al., 2015), to be 5, 7 and 0.5 respectively. A lower value for γ pushes up the confidence of true positive examples, giving the model more examples to learn from; it is thus a way to deal with ground-truth false negatives. Although the selection of these parameters is somewhat ad hoc, we assert that our results still provide a good first-pass baseline approach for this dataset.
The code is available at https://github.com/eracah/hur-detect. During training, we input one day's simulation at a time (8 time steps; 16 variables). The semi-supervised experiments reconstruct all 8 time steps, predicting bounding boxes for the 4 labelled timesteps, while the supervised experiments reconstruct and predict bounding boxes only for the 4 labelled timesteps. Table 3 shows Mean Average Precision (mAP) for each experiment. Average Precision (AP) is calculated for each class in the manner of ImageNet (Russakovsky et al., 2015), by integrating the precision-recall curve, and mAP is the average over classes. Results are shown for various settings of λ (see Equation 1) and for two modes of evaluation: at IOU (intersection over union of the bounding box and ground-truth box) thresholds of 0.1 and 0.5. Because the 3D model has inherently higher capacity (in terms of number of parameters) than the 2D model, we also experiment with higher-capacity 2D models by doubling the number of filters in each layer. Figure 3 shows bounding box predictions for 2 consecutive (6 hours apart) simulation frames, comparing the 3D supervised and 3D semi-supervised model predictions. It is interesting to note that 3D models perform significantly better than their 2D counterparts for the ETC and TC (hurricane) classes. This implies that the time evolution of these weather events is an important criterion for discriminating them. In addition, the semi-supervised model significantly improves ETC and TC performance, which suggests unsupervised shaping of the spatio-temporal representation is important for these events. Similarly, the semi-supervised setting improves performance of the 3D model (at IOU=0.1), while this effect is not observed for 2D models, suggesting that 3D representations benefit more from unsupervised data.
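A sketch of the evaluation metrics, assuming detections carry confidence scores and a precomputed true/false-positive flag at the chosen threshold, and boxes in (x1, y1, x2, y2) corner format:

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(scores, is_tp, n_ground_truth):
    # Sort detections by descending confidence, then integrate
    # precision over each increment in recall (step integral).
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_ground_truth
    prev = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum(precision * (recall - prev)))
```

For intuition on the two thresholds: a 64x64 prediction fully containing a 32x32 ground-truth box has IOU = 1024/4096 = 0.25, a hit at the 0.1 threshold but a miss at 0.5.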
Note that hyperparameters were tuned in the supervised setting, and a more thorough hyperparameter search over λ and other parameters may yield better semi-supervised results.

Figure 3 shows qualitatively what the quantitative results in Table 3 confirm: semi-supervised approaches help with rough localization of weather events, but the model struggles to produce accurate boxes. As mentioned in Section 4, the network has a hard time adjusting the size of the boxes; as such, in this figure we see mostly boxes of size 64x64. For example, for TDs (usually much smaller than 64x64) and for ARs (always much bigger than 64x64), a 64x64 box roughly centered on the event is sufficient to count as a true positive at IOU=0.1, but not at the more stringent IOU=0.5. This leads to a large dropoff in performance for ARs and TDs, and a sizable dropoff for the (variably sized) TCs. Longer training time could potentially help address these issues.

Table 3: 2D and 3D supervised and semi-supervised results, showing Mean Average Precision (mAP) and Average Precision (AP) for each class, at IOU=0.1; IOU=0.5. M is the model; P is millions of parameters; and λ weights the amount that reconstruction contributes to the overall loss.
M  | Mode | P     | λ  | ETC (46.47%) | TC (39.04%)  | TD (9.44%)   | AR (5.04%)   | mAP
2D | Sup  | 66.53 | 0  | 21.92; 14.42 | 52.26; 9.23  | 95.91; 10.76 | 35.61; 33.51 | 51.42; 16.98
2D | Semi | 66.53 | 1  | 18.05; 5.00  | 52.37; 5.26  | 97.69; 14.60 | 36.33; 0.00  | 51.11; 6.21
2D | Semi | 66.53 | 10 | 15.57; 5.87  | 44.22; 2.53  | 98.99; 28.56 | 36.61; 0.00  | 48.85; 9.24
2D | Sup  | 16.68 | 0  | 13.90; 5.25  | 49.74; 15.33 | 97.58; 7.56  | 35.63; 33.84 | 49.21; 15.49
2D | Semi | 16.68 | 1  | 15.80; 9.62  | 39.49; 4.84  | 99.50; 3.26  | 21.26; 13.12 | 44.01; 7.71
3D | Sup  | 50.02 | 0  | 22.65; 15.53 | 50.01; 9.12  | 97.31; 3.81  | 34.05; 17.94 | 51.00; 11.60
3D | Semi | 50.02 | 1  | 24.74; 14.46 | 56.40; 9.00  | 96.57; 5.80  | 33.95; 0.00  | 52.92; 7.31

Figure 3: Bounding box predictions shown on 2 consecutive (6 hours in between) simulation frames, for the integrated water vapor column channel. Green = ground truth; red = high-confidence predictions (confidence above 0.8). 3D supervised model (left) and 3D semi-supervised model (right).

5.3 Feature exploration

In order to explore the learned representations, we use t-SNE (van der Maaten & Hinton, 2008) to visualize the autoencoder bottleneck (the last encoder layer). Figure 4 shows the projected feature maps for the first 7 days in the training set for both the 3D supervised (top) and semi-supervised (bottom) experiments. Comparing the two, it appears that more TCs (hurricanes) are clustered by the semi-supervised model, which would fit with the result that semi-supervised information is particularly valuable for this class. Viewing the feature maps, we can see that both models have learned spiral patterns for TCs and ETCs.

6 Conclusions and Future Work

We introduce to the community the ExtremeWeather dataset in hopes of encouraging new research into unique, difficult, and socially and scientifically important datasets. We also present a baseline method for comparison on this new dataset.
Figure 4: t-SNE visualisation of the first 7 days in the training set for the 3D supervised (top) and semi-supervised (bottom) experiments. Each frame (time step) in the 7 days has 12x18 = 216 vectors of length 640 (the number of feature maps in the code layer), where each pixel in the 12x18 patch corresponds to a 64x64 patch in the original frame. These vectors are projected by t-SNE to two dimensions. For both supervised and semi-supervised, we have zoomed into two dense clusters and sampled 64x64 patches to show what that feature map has learned. Grey = unlabelled, yellow = TD (not shown), green = TC (hurricane), blue = ETC, red = AR.

The baseline explores semi-supervised methods for object detection and bounding box prediction using 3D autoencoding CNNs. These architectures and approaches are motivated by finding extreme weather patterns, a meaningful and important problem for society. Thus far, the climate science community has used hand-engineered criteria to characterize patterns. Our results indicate that there is much promise in considering deep learning based approaches. Future work will investigate ways to improve bounding-box accuracy, although even rough localizations can be very useful as a data exploration tool or as an initial step in a larger decision-making system. Further interpretation and visualization of learned features could lead to better heuristics and an understanding of the way different variables contribute to extreme weather events. Insights in this paper come from only a fraction of the available data, and we have not explored such challenging topics as anomaly detection, partial annotation detection, and transfer learning (e.g. to satellite imagery). Moreover, learning to generate future frames using GANs (Goodfellow et al., 2014) or other deep generative models, while using performance on a detection model to measure the quality of the generated frames, could be another very interesting future direction.
We make the ExtremeWeather dataset available in hopes of enabling and encouraging the machine learning community to pursue these directions. The retirement of ImageNet this year (Russakovsky et al., 2017) marks the end of an era in deep learning and computer vision. We believe the era to come should be defined by data of social importance, pushing the boundaries of what we know how to model.

Acknowledgments

This research used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Code relies on the open-source deep learning frameworks Theano (Bergstra et al.; Team et al., 2016) and Lasagne (Team, 2016), whose developers we gratefully acknowledge. We thank Samsung and Google for support that helped make this research possible. We would also like to thank Yunjie Liu and Michael Wehner for providing access to the climate datasets, and Alex Lamb and Thorsten Kurth for helpful discussions.

References

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. ICML Workshop on Deep Learning for Audio, Speech, and Language Processing, 2013.

Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. In Proceedings of ICLR. arXiv preprint arXiv:1511.06432, 2016.

Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730, 2015.

Andrew J Conley, Rolando Garcia, Doug Kinnison, Jean-Francois Lamarque, Dan Marsh, Mike Mills, Anne K Smith, Simone Tilmes, Francis Vitt, Hugh Morrison, et al. Description of the NCAR Community Atmosphere Model (CAM 5.0). 2012.

Michael D. Dettinger, Fred Martin Ralph, Tapash Das, Paul J.
Neiman, and Daniel R. Cayan. Atmospheric rivers, floods and the water resources of california. Water, 3(2):445, 2011. ISSN 2073-4441. URL http://www.mdpi.com/2073-4441/3/2/445. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. Samira Ebrahimi Kahou, Vincent Michalski, Kishore Konda, Roland Memisevic, and Christopher Pal. Recurrent neural networks for emotion recognition in video. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 467–474. ACM, 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. Raghav Goyal, Samira Kahou, Vincent Michalski, Joanna Materzy´nska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. arXiv preprint arXiv:1706.04261, 2017. Chunhui Gu, Chen Sun, Sudheendra Vijayanarasimhan, Caroline Pantofaru, David A Ross, George Toderici, Yeqing Li, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015. Ehsan Hosseini-Asl, Georgy Gimel’farb, and Ayman El-Baz. Alzheimer’s disease diagnostics by a deeply supervised adaptable 3d convolutional network. 2016. 
Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014.

Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

David A. Lavers, Gabriele Villarini, Richard P. Allan, Eric F. Wood, and Andrew J. Wade. The detection of atmospheric rivers in atmospheric reanalyses and their links to British winter floods and the large-scale climatic circulation. Journal of Geophysical Research: Atmospheres, 117(D20):D20106, 2012. ISSN 2156-2202. doi: 10.1029/2012JD018027. URL http://dx.doi.org/10.1029/2012JD018027.

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, and Scott Reed. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.

Yunjie Liu, Evan Racah, Prabhat, Joaquin Correa, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, and William Collins. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. 2016.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow.
Adversarial autoencoders. CoRR, abs/1511.05644, 2015. URL http://arxiv.org/abs/1511.05644.

Ishan Misra, Abhinav Shrivastava, and Martial Hebert. Watch and learn: Semi-supervised learning of object detectors from videos. CoRR, abs/1505.05769, 2015. URL http://arxiv.org/abs/1505.05769.

Adam H Monahan, John C Fyfe, Maarten HP Ambaum, David B Stephenson, and Gerald R North. Empirical orthogonal functions: The medium is the message. Journal of Climate, 22(24):6501–6514, 2009.

Urs Neu, Mirseid G. Akperov, Nina Bellenbaum, Rasmus Benestad, Richard Blender, Rodrigo Caballero, Angela Cocozza, Helen F. Dacre, Yang Feng, Klaus Fraedrich, Jens Grieger, Sergey Gulev, John Hanley, Tim Hewson, Masaru Inatsu, Kevin Keay, Sarah F. Kew, Ina Kindem, Gregor C. Leckebusch, Margarida L. R. Liberato, Piero Lionello, Igor I. Mokhov, Joaquim G. Pinto, Christoph C. Raible, Marco Reale, Irina Rudeva, Mareike Schuster, Ian Simmonds, Mark Sinclair, Michael Sprenger, Natalia D. Tilinina, Isabel F. Trigo, Sven Ulbrich, Uwe Ulbrich, Xiaolan L. Wang, and Heini Wernli. IMILAST: A community effort to intercompare extratropical cyclone detection and tracking algorithms. Bulletin of the American Meteorological Society, 94(4):529–547, 2013. doi: 10.1175/BAMS-D-11-00154.1.

Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In British Machine Vision Conference, volume 1, pp. 6, 2015.

Prabhat, Oliver Rubel, Surendra Byna, Kesheng Wu, Fuyu Li, Michael Wehner, and Wes Bethel. TECA: A parallel toolkit for extreme climate analysis. ICCS, 2012.

Prabhat, Surendra Byna, Venkatram Vishwanath, Eli Dart, Michael Wehner, and William D. Collins. TECA: Petascale pattern recognition for climate science. CAIP, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.

Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi.
You only look once: Unified, real-time object detection. CoRR, abs/1506.02640, 2015. URL http://arxiv.org/abs/1506.02640.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Olga Russakovsky, Eunbyung Park, Wei Liu, Jia Deng, Fei-Fei Li, and Alex Berg. Beyond ImageNet large scale visual recognition challenge, 2017. URL http://image-net.org/challenges/beyond_ilsvrc.php.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. 2016.

Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. CoRR, abs/1502.04681, 2015.

Karsten Steinhaeuser, Nitesh Chawla, and Auroop Ganguly. Comparing predictive power in climate data: Clustering matters. Advances in Spatial and Temporal Databases, pp. 39–55, 2011.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. 2014.

L.J.P. van der Maaten and G.E. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, Nov 2008.

Michael Wehner, Prabhat, Kevin A. Reed, Dáithí Stone, William D. Collins, and Julio Bacmeister. Resolution dependence of future tropical cyclone projections of CAM5.1 in the U.S. CLIVAR Hurricane Working Group idealized configurations. Journal of Climate, 28(10):3905–3925, 2015. doi: 10.1175/JCLI-D-14-00311.1.

William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. 2016.

Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic textures and sounds by spatial-temporal generative convnet. 2016.

Shi Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pp. 802–810, 2015.

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4507–4515, 2015.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. arXiv preprint arXiv:1606.06582v1, 2016.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
A Meta-Learning Perspective on Cold-Start Recommendations for Items

Manasi Vartak∗ Massachusetts Institute of Technology mvartak@csail.mit.edu
Arvind Thiagarajan Twitter Inc. arvindt@twitter.com
Conrado Miranda Twitter Inc. cmiranda@twitter.com
Jeshua Bratman Twitter Inc. jbratman@twitter.com
Hugo Larochelle† Google Brain hugolarochelle@google.com

Abstract

Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history, while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.

1 Introduction

The problem of recommending items to users — whether in the form of products, Tweets, or ads — is a ubiquitous one. Recommendation algorithms in each of these settings seek to identify patterns of individual user interest and use these patterns to recommend items. Matrix factorization (MF) techniques [19] have been shown to work extremely well in settings where many users rate the same items. MF works by learning separate vector embeddings (in the form of lookup tables) for each user and each item. However, these techniques are well known for facing serious challenges when making cold-start recommendations, i.e.
when having to deal with a new user or item for which a vector embedding hasn't been learned. Cold-start problems related to items (as opposed to users) are particularly acute in settings where new items arrive continuously. A prime example of this scenario is Tweet recommendation in the Twitter Home Timeline [1]. Hundreds of millions of Tweets are sent on Twitter every day. To ensure freshness of content, the Twitter timeline must continually rank the latest Tweets and recommend relevant Tweets to each user. In the absence of user-item rating data for the millions of new Tweets, traditional matrix factorization approaches that depend on ratings cannot be used. Similar challenges related to item cold-start arise when making recommendations for news [20], other types of social media, and streaming data applications. In this paper, we consider the problem of item cold-start (ICS) recommendation, focusing specifically on providing recommendations when new items arrive continuously. Various techniques [3, 14, 27, 17] have been proposed in the literature to extend MF (traditional and probabilistic) to address cold-start problems. The majority of these extensions for item cold-start involve the incorporation of item-specific features based on item description, content, or intrinsic value. From these, a model is prescribed that can parametrically (as opposed to via a lookup table) infer a vector embedding for that item. Such item embeddings can then be compared with the embeddings from the user lookup table to perform recommendation of new items to these users. However, in a setting where new items arrive continuously, we posit that relying on user embeddings trained offline into a lookup table is sub-optimal.

∗Work done as an intern at Twitter
†Work done while at Twitter
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Indeed, this approach cannot capture substantial variations in user interests occurring on short timescales, a common phenomenon with continuously produced content. This problem is only partially addressed when user embeddings are retrained periodically or incrementally adjusted online. Alternatively, recommendations could be made by performing content-based filtering [21], where we compare each new item to other items the user has rated in the recent past. However, a pure content-based filtering approach does not let us share and transfer information between users. Instead, we would like a method that performs akin to content-based filtering using a user's item history, but shares information across users through some form of transfer learning between recommendation tasks. In other words, we would like to learn a common procedure that takes as input a set of items from any user's history and produces a scoring function that can be applied to new test items, indicating how likely this user is to prefer each item. In this formulation, we notice that the recommendation problem is equivalent to a meta-learning problem [28], where the objective is to learn a learning algorithm that can take as input a (small) set of labeled examples (a user's history) and output a model (scoring function) that can be applied to new examples (new items). In meta-learning, training takes place in an episodic manner, where a training set is presented along with a test example that must be correctly labeled. In our setting, an episode is equivalent to presenting a set of historical items (and ratings) for a particular user along with test items that must be correctly rated for that user. The meta-learning perspective is appealing for a few reasons. First, we are no longer tied to the MF model, where a rating is usually the inner product of the user and item embeddings; instead, we can explore a variety of ways to combine user and item information.
Second, it enables us to take advantage of deep neural networks to learn non-linear embeddings. And third, it specifies an effective way to perform transfer learning across users (by means of shared parameters), thus enabling us to cope with limited amount of data per user. A key part of designing a meta-learning algorithm is the specification of how a model is produced for different tasks. In this work, we propose and evaluate two strategies for conditioning the model based on task. The first, called linear weight adaptation, is a light-weight method that builds a linear classifier and adapts weights of the classifier based on the task information. The second, called non-linear bias adaptation, builds a neural network classifier that uses task information to adapt the biases of the neural network while sharing weights across all tasks. Thus our contributions are: (1) we introduce a new hybrid recommendation method for the item cold-start setting that is motivated by limitations in current MF extensions for ICS; (2) we introduce a meta-learning formulation of the recommendation problem and elaborate why a meta-learning perspective is justified in this setting; (3) we propose two key architectures for meta-learning in this recommendation context; and (4) we evaluate our techniques on a production Twitter dataset and demonstrate that they outperform an approach based on lookup table embeddings as well as state-of-the-art industrial models. 2 Problem Formulation Similar to other large-scale recommender systems that must address the ICS problem [6], we view recommendation as a binary classification problem as opposed to a regression problem. Specifically, for an item ti and user uj, the outcome eij ∈{0, 1} indicates whether the user engaged with the item. Engagement can correspond to different actions in different settings. 
For example, in video recommendation, engagement can be defined as a user viewing the video; in ad-click prediction, engagement can be defined as clicking on an ad; and on Twitter, engagement can be an action related to a Tweet such as liking, Retweeting or replying. Our goal in this context is to predict the probability that uj will engage with ti: Pr(eij=1|ti, uj). (1) Once the engagement probability has been computed, it can be combined with other signals to produce a ranked list of recommended items. As discussed in Section 1, in casting recommendations as a form of meta-learning, we view the problem of making predictions for one user as an individual task or episode. Let Tj be the set of items to which a user uj has been exposed (e.g. Tweets recommended to uj). We represent each user in terms of their item history, i.e., the set of items to which they have been exposed and their engagement with each of these items. Specifically, user uj is represented by their item history Vj = {(tm, emj) : tm ∈ Tj}. Note that we limit the item history to only those items that were seen before ti. We then model the probability of Eq. 1 as the output of a model f(ti; θ) whose parameters θ are produced from the item history Vj of user uj: Pr(eij=1|ti, uj) = f(ti; H(Vj)) (2) Thus meta-learning consists of learning the function H(Vj) that takes history Vj and produces the parameters of the model f(ti; θ). In this paper, we propose two neural network architectures for learning H(Vj), depicted in Fig. 1. The first approach, called Linear Weight Adaptation (LWA) and shown in Fig. 1a, assumes f(ti; θ) is a linear classifier on top of non-linear representations of items and uses the user history to adapt the classifier weights. The second approach, called Non-Linear Bias Adaptation (NLBA) and shown in Fig. 1b, assumes f(ti; θ) is a neural network classifier on top of item representations and uses the user history to adapt the biases of the units in the classifier.
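As a toy illustration of the formulation in Eq. 2 (not the paper's implementation; the functions F, H, f and all values here are hypothetical stand-ins), the history-to-parameters pipeline can be sketched in Python:

```python
import math

# Hypothetical sketch of Eq. 2: H maps a user's item history V_j to
# parameters theta = (bias, weights); f scores a new item with them.
# F is a toy item "embedding"; the paper learns a much richer F and H.

def F(item):
    # Stand-in for a learned item representation.
    return [float(x) for x in item]

def H(history):
    # history: list of (item, engagement) pairs. This crude H averages the
    # positive-class embeddings; the paper's H uses both classes.
    pos = [F(t) for t, e in history if e == 1]
    if not pos:
        return 0.0, [0.0, 0.0]
    d = len(pos[0])
    w = [sum(v[k] for v in pos) / len(pos) for k in range(d)]
    return 0.0, w

def f(item, theta):
    # Logistic score: Pr(e_ij = 1 | t_i, u_j) = f(t_i; H(V_j)).
    b, w = theta
    s = b + sum(wk * xk for wk, xk in zip(w, F(item)))
    return 1.0 / (1.0 + math.exp(-s))

history = [((1.0, 0.0), 1), ((0.0, 1.0), 0)]
p = f((1.0, 0.0), H(history))  # scores an item similar to a liked one
```

The point of the sketch is only the factoring of Eq. 2: all user-specific information enters through H(Vj), while f and F are shared across users.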
In the following subsections, we describe the two architectures and the differences in how they model the classification of a new item ti from its representation F(ti). 3 Proposed Architectures As shown in Fig. 1, both architectures we propose take as input (a) the items to which a user has been exposed along with the rating (i.e. class) assigned to each item by this user (positive, i.e., 1, or negative, i.e., 0), and (b) a test item for which we seek to predict a rating. Each of our architectures in turn consists of two sub-networks. The first sub-network learns a common representation of items (historical and new), which we denote F(t). In our implementation, item representations F(t) are learned by a deep feed-forward neural network. We then compute a single representative per class by applying an aggregating function G to the representations of items tm ∈ Tj from each class. A simple example of G is the unweighted mean, while more complicated functions may order items by recency and learn to weight individual embeddings differently. Thus, the class representative embedding for class c ∈ {0, 1} can be expressed as shown below. Rc j = G({F(tm) : tm ∈ Tj ∧ emj = c}) (3) Once we have obtained the class representatives, the second sub-network applies the LWA and NLBA approaches to adapt the learned model based on item histories. 3.1 Linear Classifier with Task-dependent Weights Our first approach to conditioning predictions on users’ item histories has parallels to latent factor models and is appealing due to its simplicity: we learn a linear classifier (for new items) whose weights are determined by the user’s history Vj. Given the two class-representative embeddings R0 j, R1 j described above, LWA provides the bias (first term) and weights (second term) of a linear logistic regression classifier as follows: [b, (w0 R0 j + w1 R1 j)] = H(Vj) (4) where the scalars b, w0, w1 are trainable parameters. More concretely, with f(ti; θ) being logistic regression, Eq.
2 becomes: Pr(eij=1|ti, uj) = σ(b + F(ti) · (w0 R0 j + w1 R1 j)) (5) [Figure 1: Proposed meta-learning architectures. (a) Linear Classifier with Weight Adaptation: changes in the shading of each connection with the output unit for two users illustrate that the weights of the classifier vary based on each user’s item history; the output bias, indicated by the shades of the circles, remains the same. (b) Non-linear Classifier with Bias Adaptation: changes in the shading of each unit between two users illustrate that the biases of these units vary based on each user’s item history; the weights remain the same.] While the bias b of the classifier is constant across users, its weight vector varies with user-specific item histories (i.e., with the representative embeddings of the two classes). This means that different dimensions of F(ti), corresponding to properties of item ti, get weighted differently depending on the user. While simple, the LWA method can be quite effective (see Section 5). Moreover, in some cases, it may be preferred over more complex methods because it allows a significant amount of computation to be performed offline. For example, in Eq. 5, the only quantities that are unknown at prediction time are F(ti). All the rest, including Rc j, can be pre-computed, reducing the cost of prediction to the computation of one dot product and one sigmoid. 3.2 Non-linear Classifier with Task-dependent Bias Our first meta-learning strategy is simple and is reminiscent of MF with non-linear embeddings. However, it limits the effect of the task information, specifically R0 j and R1 j, on the final prediction.
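For reference, the LWA computation of Eqs. 3–5 can be sketched as follows, with G taken to be the unweighted mean and with hypothetical values for the scalars b, w0, w1 and the embeddings (in the model these are all learned):

```python
import math

def class_representative(embeddings):
    # G in Eq. 3, instantiated here as the unweighted mean over embeddings.
    d = len(embeddings[0])
    return [sum(v[k] for v in embeddings) / len(embeddings) for k in range(d)]

def lwa_score(F_ti, R0, R1, b=0.0, w0=-1.0, w1=1.0):
    # Eq. 5: adapted weight vector w0*R0_j + w1*R1_j, then a logistic output.
    w = [w0 * r0 + w1 * r1 for r0, r1 in zip(R0, R1)]
    s = b + sum(wk * xk for wk, xk in zip(w, F_ti))
    return 1.0 / (1.0 + math.exp(-s))

R1 = class_representative([[1.0, 0.0], [1.0, 0.2]])  # "liked" items
R0 = class_representative([[0.0, 1.0]])              # "disliked" items
p_like = lwa_score([1.0, 0.0], R0, R1)     # test item resembles liked items
p_dislike = lwa_score([0.0, 1.0], R0, R1)  # test item resembles disliked items
```

With w1 > 0 and w0 < 0, items close to the positive-class representative score above 0.5 and items close to the negative-class representative score below it, which is the intended behavior of the adapted weight vector.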
Our second strategy, NLBA, learns a neural network classifier with H hidden layers where the bias (first term) and weights (second term) of the output, as well as the biases (third term) and weights (fourth term) of all hidden layers, are determined as follows: [v0 R0 j + v1 R1 j, w, {V0 l R0 j + V1 l R1 j} (l = 1, ..., H), {Wl} (l = 1, ..., H)] = H(Vj) (6) Here, the vectors v0, v1, w and the matrices V0 l, V1 l, Wl (l = 1, ..., H) are all trainable parameters. In contrast to LWA, all weights (output and hidden) in NLBA are constant across users, while the biases of the output and hidden units are adapted per user. One can think of this approach as learning a shared pool of hidden units whose activation can be modulated depending on the user (e.g. a unit could be entirely shut down for a user with a large negative bias). Compared to LWA, NLBA produces a non-linear classifier of the item representations F(ti) and can model complex interactions between the classes and also between the classes and the test item. For example, interactions allow NLBA to explore a part of the classifier function space that is not accessible to LWA (e.g., functions of the ratio of the kth dimensions of the class representatives). We find empirically that NLBA significantly improves model performance compared to LWA (Section 5). Selecting Historical Impressions. A natural question with our meta-learning formulation is the minimum item history size required to make accurate predictions. In general, item history size depends on the problem and the variability within items, and must be determined empirically. Often, due to the long tail of users, item histories can be very large (e.g., consider a bot which likes every item). Therefore, we recommend setting an upper limit on item history size. Further, for any user, the number of items with positive engagement (eij=1) can be quite different from the number with negative engagement (eij=0).
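A balanced per-class cap on history size, of the kind this imbalance motivates, can be sketched as follows (the function and item names are hypothetical, not the paper's code):

```python
import random

def sample_history(history, k, seed=0):
    # Independently subsample at most k positives and k negatives from the
    # full (item, engagement) history; a fixed seed keeps the sketch
    # deterministic.
    rng = random.Random(seed)
    pos = [(t, e) for t, e in history if e == 1]
    neg = [(t, e) for t, e in history if e == 0]
    return (rng.sample(pos, min(k, len(pos)))
            + rng.sample(neg, min(k, len(neg))))

# A heavily imbalanced toy history: 3 positives, 97 negatives.
full = [("t%d" % i, 1) for i in range(3)] + \
       [("t%d" % i, 0) for i in range(3, 100)]
sub = sample_history(full, k=5)  # keeps all 3 positives, 5 negatives
```

Note that because each class is capped independently, the positive/negative ratio in the subsample no longer reflects the user's true engagement rate, which is exactly the side effect discussed next.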
Therefore, in our experiments, we choose to independently sample histories for the two classes, up to a maximum size k for each class. Note that while this sampling strategy makes the problem more tractable, it throws off certain features (e.g. the user click-through rate) that would benefit from having the true proportions of positive and negative engagements. 4 Related Work Algorithms for recommendation broadly fall into two categories: content filtering and collaborative filtering. Content filtering [21] uses information about items (e.g. product categories, item content, reviews, price) and users (e.g. age, gender, location) to match users to items. In contrast, collaborative filtering [19, 23] uses past user-item ratings to predict future ratings. The most popular technique for performing collaborative filtering is via latent factor models, where items and users are represented in a common latent space and ratings are computed as the inner product between the user and item representations. Matrix factorization (MF) is the most popular instantiation of latent factor models and has been used for large-scale recommendation of products [19], movies [15], and news [7]. A significant problem with traditional MF approaches is that they suffer from cold-start problems, i.e., they cannot be applied to new items or users. In order to address the cold-start problem, work such as [3, 14, 27, 17] has extended the MF model so that user- and item-specific terms can be included in the respective representations. These methods are called hybrid methods. Given the power of deep neural networks to learn representations of images and text, many of the newer hybrid methods, such as [30] and [12], use deep neural networks to learn item representations. Deep learning models based on ID embeddings (as opposed to content embeddings) have also been used for performing large-scale video recommendation in [6].
The work in [5, 30] operates in a problem setting similar to ours where new scientific articles must be recommended to users based on other articles in their libraries. In these settings, users are represented in terms of the scientific papers in their “libraries”. Note that, unlike our setting, which contains positive as well as negative information, no negative examples are present in that setting. [9, 10] propose RNN architectures for a similar problem, namely that of recommending items during short-lived web sessions. [11] proposes co-factorization machines to jointly model topics in Tweets while making recommendations. In this paper, we propose to view recommendations from a meta-learning perspective [28, 18]. Recently, meta-learning has been explored as a popular strategy for learning from a small number of items (also called few-shot learning [16, 13]). Successful applications of meta-learning include MatchingNets [29], in which an episodic scheme is used to train a meta-learner to classify images given very few examples belonging to each class. In particular, MatchingNets use LSTMs to learn attention weights over all points in the support set and use a weighted sum to make predictions for the test item. Similarly, in [24], the authors propose an LSTM-based meta-learner that learns another learner to perform few-shot classification. [25] proposes a memory-augmented neural network for meta-learning. The key idea is that the network can slowly learn useful representations of data through weight updates while the external memory can cache new data for rapid learning. Most recently, [4] proposes to learn active learning algorithms via a technique based on MatchingNets. While the above state-of-the-art meta-learning techniques are powerful and potentially useful for recommendations, they do not scale to large datasets with hundreds of millions of examples.
Our approach of computing a mean representative per class is similar to [26] and [22] in terms of learning class-representative embeddings. Our work also has parallels to the recent work on DeepSets [31], where the authors propose a general strategy for performing machine learning tasks on sets. The authors propose to learn an embedding per item and then use a permutation-invariant operation, usually a sum-pool or max-pool, to learn a single representation that is then passed to another neural network for performing the final classification or regression. Our techniques differ in two ways: our input sets are not homogeneous, as assumed in DeepSets, so we need to learn multiple representatives; and our network must work for variable-size histories, for which a weighted sum is more effective than an unweighted sum. 5 Evaluation 5.1 Recommending Tweets on the Twitter Home Timeline We evaluated our proposed techniques on the real-world problem of Tweet recommendation. The Twitter Home timeline is the primary means by which users on Twitter consume Tweets from their networks [1]. The 300+ million monthly active users on Twitter send hundreds of millions of Tweets per day. Every time a user visits Twitter, the timeline ranking algorithm scores Tweets from the accounts they follow and identifies the most relevant Tweets for that user. We model the timeline ranking problem as one of engagement prediction as described in Section 2. Given a Tweet ti and a user uj, the task is to predict the probability of uj engaging with ti, i.e., Pr(eij=1|ti, uj; θ). Engagement can be any action related to the Tweet such as Retweeting, liking or replying. For the purpose of this paper, we limit our analysis to the prediction of one kind of engagement, namely liking a Tweet.
Because hundreds of millions of new Tweets arrive every day, as discussed in Section 1, traditional matrix factorization schemes suffer from acute cold-start problems and cannot be used for Tweet recommendation. In this work, we cast the problem in terms of meta-learning and adopt the formulation developed in Eq. 2. Dataset. We used production data regarding Tweet engagement to perform an offline evaluation of our techniques. Specifically, the training data was generated as follows: for a particular day d, we collect data for all Tweet impressions (i.e., Tweets shown to a user) generated during that day. Each data point consists of a Tweet ti, the user uj to whom it was shown, and the engagement outcome eij. We then join the impression data with item histories (per user) that are computed using impressions from the week prior to d. As discussed in Section 3, there are different strategies for selecting items to build the item history. For this problem, we independently sample impressions with positive engagement and negative engagement, up to a maximum of k engagements in each class. We experimented with different values of k and chose the smallest one that did not produce a significant drop in performance. After applying other typical filtering operations, our training dataset consists of hundreds of millions of examples (i.e., ti, uj pairs) for day d. The test and validation sets were similarly constructed, but for different days. For feature preprocessing, we scale and discretize continuous features and one-hot-encode categorical features. Baseline Models. We implemented different architectural variations of the two meta-learning approaches proposed in Section 3.
Along with comparisons within architectures, we compare our models against three external models: (a) an industrial baseline (PROD-BASE) not using meta-learning; (b) the industrial baseline augmented with a latent factor model for users (MF); and (c) the state-of-the-art production model for engagement prediction (PROD-BEST). PROD-BASE is a deep feed-forward neural network that uses information pertaining only to the current Tweet in order to predict engagement. This information includes features of the current Tweet ti (e.g. its recency, whether it contains a photo, number of likes), features about the user uj (e.g. how heavily the user uses Twitter, their network), and the Tweet’s author (e.g. strength of connection between the user and author, past interactions). Note that this network uses no historical information about the user. This baseline learns a combined item-user embedding (due to user features present in the input) and performs classification based on this embedding. While relatively simple, this model presents a very strong baseline due to the presence of high-quality features.

Table 1: Performance with LWA
Model         AUROC    AUROC (w/CTR)
MF (shallow)  +0.22%   +1.32%
MF (deep)     +0.55%   +1.87%
PROD-BEST     +2.54%   +2.54%
LWA           +1.76%   +2.43%
LWA∗          +1.98%   +2.31%

Table 2: Performance with NLBA
Model         AUROC    AUROC (w/CTR)
MF (shallow)  +0.22%   +1.32%
MF (deep)     +0.55%   +1.87%
PROD-BEST     +2.54%   +2.54%
NLBA          +2.65%   +2.76%

To mimic latent factor models in cold-start settings, in the second baseline, MF, we augmented PROD-BASE to learn latent-space representations of users based on ratings. MF uses an independently learned user representation and a current Tweet representation whose inner product is used to make the classification. We evaluate two versions of MF, one that uses a shallow network (1 layer) to learn the representations and another that uses a deep network (5 layers).
PROD-BEST is the production model for engagement prediction based on deep neural networks. PROD-BEST uses not only features for the current Tweet but also historical features as described in [8]. PROD-BEST is a highly tuned model and represents the state-of-the-art in Tweet engagement prediction. Experimental Setup. All models were implemented in the Twitter Deep Learning platform [2]. Models were trained to minimize the cross-entropy loss and SGD was used for optimization. We use AUROC as the evaluation metric in our experiments. All performance numbers denote percent AUROC improvement relative to the production baseline model, PROD-BASE. For every model, we performed hyperparameter tuning on a validation set using random search and report results for the best performing model. 5.2 Results Linear Classifier with Weight Adaptation. We evaluated two key instantiations of the LWA approach. First, we test the basic architecture described in Section 3.1, where we calculate one representative embedding for each of the positive and negative classes, and take a linear combination of the dot products of the new item with the respective embeddings. We denote this architecture LWA (see Fig. 1a). When learning class representatives, we use a deep feed-forward neural network (F in Fig. 1a) to first compute embeddings and then use a weighted average to learn class representatives (G in Fig. 1a). We also evaluate a model variation where we augment LWA with a network that uses only the new item embedding to produce a prediction that is then combined with the prediction of LWA to produce the final probability. The intuition behind this model, called LWA∗, is that the linear weight adaptation approach works well when both engagement classes contain items. In cases where one of the classes is empty, the model can fall back on the current item to predict engagement. We show the performance of all LWA variants in Table 1.
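The LWA∗ fallback idea can be sketched as follows; the blending scheme and its weights are hypothetical stand-ins for illustration (in the model the combination is learned):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def lwa_star(item_only_logit, lwa_logit, pos_empty, neg_empty, alpha=0.5):
    # If either engagement class is empty, the adapted LWA weights are
    # unreliable: fall back on the item-only prediction.
    if pos_empty or neg_empty:
        return sigmoid(item_only_logit)
    # Otherwise combine the two logits (a fixed 50/50 blend here; a
    # learned combination in practice).
    return sigmoid(alpha * item_only_logit + (1 - alpha) * lwa_logit)

p_fallback = lwa_star(2.0, 0.0, pos_empty=True, neg_empty=False)
p_blend = lwa_star(2.0, -2.0, pos_empty=False, neg_empty=False)
```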
Note that for all models, we also test a variant where we explicitly pass a user-specific click-through rate (CTR) to the model. The reason is that the CTR provides a strong prior on the probability p(eij = 1) but cannot be easily learned by our architectures because the ratio of positive to negative engagements in the item history is not proportional to the CTR. The user-specific CTR can be thought of as part of the bias term in Eq. 2. We find that the simplest classifier adaptation model, LWA, already improves upon the production baseline (PROD-BASE) by more than 1.5 percent. Adding the bias in the form of the CTR improves the AUROC even further. Because learning a good class representative embedding is key for our meta-learning approaches, we performed experiments varying only the architectures used to learn class representative embeddings (i.e., the architectures for F, G in Fig. 1a). The top-level classifier was kept constant at LWA but we varied the aggregation function and the depth of the feed-forward network used to learn F. Results of this experimentation are shown in Table 3. We find that deep networks work better than shallow networks, but a model with 9 hidden layers performs worse than a 5-layer network, possibly due to over-fitting. We also find that weighted combinations of embeddings (when items are sorted by time) perform significantly better than simple averages. A likely reason for the effectiveness of weighted combinations is that item histories can be variable sized; therefore, weighting non-zero entries higher produces better representatives. Non-Linear Classifier with Bias Adaptation. As with LWA, we evaluated different variations of the non-linear bias adaptation approach. Results of this evaluation are shown in Table 2.
Table 3: Learning a representative per class
Hidden Layers  AVG      Weighted AVG
1              +1.8%    +2.31%
5              +2.20%   +2.42%
9              +2.09%   +1.65%

Table 4: Effect of different engagements
Engagements Used  AUROC
POS/NEG           +2.54%
POS-ONLY          +1.76%
NEG-ONLY          +1.87%

We use a weighted mean for computing class representatives in NLBA. We see that this network immediately beats PROD-BASE by a large margin. Moreover, it also readily beats the state-of-the-art model, PROD-BEST. Augmenting NLBA with the user-specific CTR further allows the network to cleanly beat the highly optimized PROD-BEST. For the NLBA architectures, we also evaluated the impact on model AUROC when examples of only one class (eij=0 or 1) are present in the item history. These architectures replicate the strategy of using only one class of items to make predictions. These numbers approximate the gain that could be achieved by a DeepSets [31]-like approach. The results of this evaluation are shown in Table 4. As expected, we find that dropping either type of engagement reduces performance significantly. Summary of Results. We find that both our proposed approaches improve on the baseline production model (PROD-BASE) by up to 2.76%, and NLBA readily beats the state-of-the-art production model (PROD-BEST). As discussed in Sec. 3.2, we find that NLBA clearly outperforms LWA because of its non-linear classifier and access to a larger space of functions. A breakdown of NLBA performance by overall user engagement reveals that NLBA shows large gains for highly engaged users. For both techniques, although the improvements in AUROC may appear numerically small, they have large product impact because they translate to a significantly higher number of engagements. This gain is particularly noteworthy when compared to the highly optimized and tuned PROD-BEST model. 6 Discussion In this paper, we proposed to view recommendations from a meta-learning perspective and proposed two architectures for building meta-learning models for recommendation.
While our techniques show clear wins over state-of-the-art models, there are several avenues for improving the model and operationalizing it. First, our model does not explicitly capture the time-varying aspect of engagements. While weighting impressions differently is one way of modeling time dependencies (e.g., more recent impressions get more weight), scalable versions of sequence models such as [29, 10] could be used to explicitly model time dynamics. Second, while we chose a balanced sampling strategy for producing item histories, we believe that different strategies may be more appropriate in different recommendation settings, and thus merit further exploration. Third, producing and storing item histories for every user can be expensive in terms of both computation and storage. Therefore, we can explore computing representative embeddings in an online fashion such that at any given time, the system tracks the k most representative embeddings. Finally, we believe that there is room for future work exploring effective visualizations of learned embeddings when input items are not easy to interpret (i.e., beyond images and text). 7 Conclusions In this paper, we study the recommendation problem when new items arrive continuously. We propose to view item cold-start through the lens of meta-learning, where making recommendations for one user is considered to be one task and our goal is to learn across many such tasks. We formally define the meta-learning problem and propose two distinct approaches to condition the recommendation model on the task. The linear weight adaptation approach adapts the weights of a linear classifier depending on task information, while the non-linear bias adaptation approach learns task-specific item representations and adapts the biases of a neural network based on task information. We perform an empirical evaluation of our techniques on the Tweet recommendation problem.
On production Twitter data, we show that our meta-learning approaches comfortably beat the state-of-the-art production models for Tweet recommendation. We show that the non-linear bias adaptation approach outperforms the linear weight adaptation approach. We thus demonstrate that recommendation through meta-learning is effective for item cold-start recommendations and may be extended to other recommendation settings as well. 8 Addressing Reviewer Comments We thank the anonymous reviewers for their feedback on the paper. We have incorporated responses to reviewer comments in the paper text. 9 Acknowledgement We would like to thank all members of the Twitter Timelines Quality team as well as the Twitter Cortex team for their help and guidance with this work. References [1] "never miss important tweets from people you follow". https://blog.twitter.com/2016/ never-miss-important-tweets-from-people-you-follow. Accessed: 2017-05-03. [2] Using deep learning at scale in twitter’s timelines. https://blog.twitter.com/2017/ using-deep-learning-at-scale-in-twitter-s-timelines. Accessed: 2017-05-09. [3] D. Agarwal and B.-C. Chen. Regression-based latent factor models. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 19–28. ACM, 2009. [4] P. Bachman, A. Sordoni, and A. Trischler. Learning algorithms for active learning. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 301–310, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. [5] L. Charlin, R. S. Zemel, and H. Larochelle. Leveraging user libraries to bootstrap collaborative filtering. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 173–182. ACM, 2014. [6] P. Covington, J. Adams, and E. Sargin. Deep neural networks for youtube recommendations. 
In Proceedings of the 10th ACM Conference on Recommender Systems, pages 191–198. ACM, 2016. [7] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: scalable online collaborative filtering. In Proceedings of the 16th international conference on World Wide Web, pages 271–280. ACM, 2007. [8] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Herbrich, S. Bowers, et al. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, pages 1–9. ACM, 2014. [9] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939, 2015. [10] B. Hidasi, M. Quadrana, A. Karatzoglou, and D. Tikk. Parallel recurrent neural network architectures for feature-rich session-based recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 241–248. ACM, 2016. [11] L. Hong, A. S. Doumith, and B. D. Davison. Co-factorization machines: modeling user interests and predicting individual decisions in twitter. In Proceedings of the sixth ACM international conference on Web search and data mining, pages 557–566. ACM, 2013. [12] D. Kim, C. Park, J. Oh, S. Lee, and H. Yu. Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 233–240. ACM, 2016. [13] G. Koch. Siamese Neural Networks for One-Shot Image Recognition. PhD thesis, University of Toronto, 2015. [14] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 426–434. ACM, 2008. [15] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009. [16] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. 
Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. [17] X. N. Lam, T. Vu, T. D. Le, and A. D. Duong. Addressing cold-start problem in recommendation systems. In Proceedings of the 2nd international conference on Ubiquitous information management and communication, pages 208–211. ACM, 2008. [18] C. Lemke, M. Budka, and B. Gabrys. Metalearning: a survey of trends and technologies. Artificial intelligence review, 44(1):117–130, 2015. [19] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet computing, 7(1):76–80, 2003. [20] J. Liu, P. Dolan, and E. R. Pedersen. Personalized news recommendation based on click behavior. In Proceedings of the 15th international conference on Intelligent user interfaces, pages 31–40, 2010. [21] P. Lops, M. De Gemmis, and G. Semeraro. Content-based recommender systems: State of the art and trends. In Recommender systems handbook, pages 73–105. Springer, 2011. [22] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. Computer Vision–ECCV 2012, pages 488–501, 2012. [23] A. Mnih and R. R. Salakhutdinov. Probabilistic matrix factorization. In Advances in neural information processing systems, pages 1257–1264, 2008. [24] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. ICLR, 2017. [25] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International conference on machine learning, pages 1842–1850, 2016. [26] J. Snell, K. Swersky, and R. S. Zemel. Prototypical networks for few-shot learning. CoRR, abs/1703.05175, 2017. [27] D. H. Stern, R. Herbrich, and T. Graepel. Matchbox: large scale online bayesian recommendations. In Proceedings of the 18th international conference on World wide web, pages 111–120. ACM, 2009. [28] R.
Vilalta and Y. Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002. [29] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016. [30] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 448–456. ACM, 2011. [31] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola. Deep sets. CoRR, abs/1703.06114, 2017.
Learning Unknown Markov Decision Processes: A Thompson Sampling Approach Yi Ouyang University of California, Berkeley ouyangyi@berkeley.edu Mukul Gagrani University of Southern California mgagrani@usc.edu Ashutosh Nayyar University of Southern California ashutosn@usc.edu Rahul Jain University of Southern California rahul.jain@usc.edu Abstract We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion is triggered when the number of visits to any state-action pair is doubled. We establish Õ(HS√AT) bounds on the expected regret under a Bayesian setting, where S and A are the sizes of the state and action spaces, T is time, and H is the bound on the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs. 1 Introduction We consider the problem of reinforcement learning by an agent interacting with an environment while trying to minimize the total cost accumulated over time. The environment is modeled by an infinite horizon Markov Decision Process (MDP) with finite state and action spaces. When the environment is perfectly known, the agent can determine optimal actions by solving a dynamic program for the MDP [1]. In reinforcement learning, however, the agent is uncertain about the true dynamics of the MDP.
A naive approach to an unknown model is the certainty equivalence principle. The idea is to estimate the unknown MDP parameters from available information and then choose actions as if the estimates were the true parameters. But it is well known in adaptive control theory that the certainty equivalence principle may lead to suboptimal performance due to the lack of exploration [2]. This issue stems from the fundamental exploitation-exploration trade-off: the agent wants to exploit available information to minimize cost, but it also needs to explore the environment to learn the system dynamics. One common way to handle the exploitation-exploration trade-off is to use the optimism in the face of uncertainty (OFU) principle [3]. Under this principle, the agent constructs confidence sets for the system parameters at each time, finds the optimistic parameters associated with the minimum cost, and then selects an action based on the optimistic parameters. The optimism procedure encourages exploration of rarely visited states and actions. Several optimistic algorithms have been proved to possess strong theoretical performance guarantees [4-10].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

An alternative way to incentivize exploration is the Thompson Sampling (TS) or Posterior Sampling method. The idea of TS was first proposed by Thompson in [11] for stochastic bandit problems. It has been applied to MDP environments [12-17], where the agent computes the posterior distribution of the unknown parameters using observed information and a prior distribution. A TS algorithm generally proceeds in episodes: at the beginning of each episode a set of MDP parameters is randomly sampled from the posterior distribution, then actions are selected based on the sampled model during the episode. TS algorithms have the following advantages over optimistic algorithms.
First, TS algorithms can easily incorporate problem structure through the prior distribution. Second, they are more computationally efficient, since a TS algorithm only needs to solve the sampled MDP, while an optimistic algorithm requires solving all MDPs that lie within the confidence sets. Third, empirical studies suggest that TS algorithms outperform optimistic algorithms in bandit problems [18, 19] as well as in MDP environments [13, 16, 17]. Due to the above advantages, we focus on TS algorithms for the MDP learning problem. The main challenge in the design of a TS algorithm is the choice of episode lengths. For finite horizon MDPs under the episodic setting, the length of each episode can be set to the time horizon [13]. When there exists a recurrent state under every stationary policy, the TS algorithm of [15] starts a new episode whenever the system enters the recurrent state. However, these ways of ending an episode cannot be applied to MDPs without such special features. The work of [16] proposed a dynamic episode schedule based on the doubling trick used in [7], but a mistake in their proof of the regret bound was pointed out by [20]. In view of the mistake in [16], there is, to the best of our knowledge, no TS algorithm with strong performance guarantees for general MDPs. We consider the most general subclass of MDPs, weakly communicating MDPs, for which meaningful finite time regret guarantees can be analyzed. We propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm. In TSDE, there are two stopping criteria for an episode to end. The first stopping criterion controls the growth rate of episode length. The second stopping criterion is the doubling trick, similar to the one in [7-10, 16], that stops when the number of visits to any state-action pair is doubled. Under a Bayesian framework, we show that the expected regret of TSDE accumulated up to time $T$ is bounded by $\tilde{O}(HS\sqrt{AT})$, where $\tilde{O}$ hides logarithmic factors.
Here $S$ and $A$ are the sizes of the state and action spaces, $T$ is time, and $H$ is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs [7], and it matches the theoretical lower bound in its order in $T$ up to logarithmic factors. We present numerical results showing that TSDE actually outperforms current algorithms with known regret bounds of the same order in $T$, on a benchmark MDP problem as well as on randomly generated MDPs.

2 Problem Formulation

2.1 Preliminaries

An infinite horizon Markov Decision Process (MDP) is described by $(S, A, c, \theta)$. Here $S$ is the state space, $A$ is the action space, $c : S \times A \to [0, 1]$ is the cost function,¹ and $\theta : S \times S \times A \to [0, 1]$ represents the transition probabilities such that $\theta(s'|s, a) = \mathbb{P}(s_{t+1} = s' \,|\, s_t = s, a_t = a)$, where $s_t \in S$ and $a_t \in A$ are the state and the action at $t = 1, 2, 3, \dots$. We assume that $S$ and $A$ are finite spaces with sizes $S \ge 2$ and $A \ge 2$, and the initial state $s_1$ is a known and fixed state. A stationary policy is a deterministic map $\pi : S \to A$ that maps a state to an action. The average cost per stage of a stationary policy $\pi$ is defined as
$$J_\pi(\theta) = \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\Big[\sum_{t=1}^{T} c(s_t, a_t)\Big].$$
Here we write $J_\pi(\theta)$ to make the dependence of the average cost on $\theta$ explicit. To obtain meaningful finite time regret bounds, we consider the subclass of weakly communicating MDPs, defined as follows.

Definition 1. An MDP is weakly communicating (or weakly accessible) if its states can be partitioned into two subsets: in the first subset all states are transient under every stationary policy, and every two states in the second subset can be reached from each other under some stationary policy.

¹Since $S$ and $A$ are finite, we can normalize the cost function to $[0, 1]$ without loss of generality.
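For intuition, the average cost $J_\pi(\theta)$ of a fixed stationary policy can be computed from the stationary distribution of the Markov chain it induces. The sketch below assumes the induced chain has a unique stationary distribution reachable by power iteration; the function and array names are ours, not the paper's:

```python
import numpy as np

def average_cost(pi, c, theta, tol=1e-12):
    """Average cost per stage J_pi(theta) of a stationary policy pi.

    pi: (S,) int action per state; c: (S, A) costs; theta: (S, A, S) transitions.
    Assumes the induced chain has a unique stationary distribution (sketch only).
    """
    S = c.shape[0]
    P = theta[np.arange(S), pi]          # (S, S) chain induced by pi
    mu = np.full(S, 1.0 / S)             # start from the uniform distribution
    for _ in range(100_000):
        mu_next = mu @ P                 # one step of power iteration
        if np.max(np.abs(mu_next - mu)) < tol:
            break
        mu = mu_next
    # Average cost = stationary expectation of the per-stage cost.
    return float(mu_next @ c[np.arange(S), pi])
```

For an MDP whose induced chain mixes, this agrees with the Cesàro limit in the definition above.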
From MDP theory [1], we know that if the MDP is weakly communicating, the optimal average cost per stage $J(\theta) = \min_\pi J_\pi(\theta)$ satisfies the Bellman equation
$$J(\theta) + v(s, \theta) = \min_{a \in A} \Big\{ c(s, a) + \sum_{s' \in S} \theta(s'|s, a)\, v(s', \theta) \Big\} \quad (1)$$
for all $s \in S$. The corresponding optimal stationary policy $\pi^*$ is the minimizer of the above optimization:
$$a = \pi^*(s, \theta). \quad (2)$$
Since the cost function satisfies $c(s, a) \in [0, 1]$, we have $J(\theta) \in [0, 1]$ for all $\theta$. If $v$ satisfies the Bellman equation, $v$ plus any constant also satisfies the Bellman equation. Without loss of generality, let $\min_{s \in S} v(s, \theta) = 0$ and define the span of the MDP as $\mathrm{sp}(\theta) = \max_{s \in S} v(s, \theta)$.² We define $\Omega^*$ to be the set of all $\theta$ such that the MDP with transition probabilities $\theta$ is weakly communicating and there exists a number $H$ such that $\mathrm{sp}(\theta) \le H$. We will focus on MDPs with transition probabilities in the set $\Omega^*$.

2.2 Reinforcement Learning for Weakly Communicating MDPs

We consider the reinforcement learning problem of an agent interacting with a random weakly communicating MDP $(S, A, c, \theta^*)$. We assume that $S$, $A$ and the cost function $c$ are completely known to the agent. The actual transition probabilities $\theta^*$ are randomly generated at the beginning, before the MDP starts interacting with the agent. The value of $\theta^*$ is then fixed but unknown to the agent. Complete knowledge of the cost is a typical assumption, as in [7, 15]; algorithms can generally be extended to the case of unknown costs/rewards at the expense of a constant factor in the regret bound. At each time $t$, the agent selects an action according to $a_t = \phi_t(h_t)$, where $h_t = (s_1, s_2, \dots, s_t, a_1, a_2, \dots, a_{t-1})$ is the history of states and actions. The collection $\phi = (\phi_1, \phi_2, \dots)$ is called a learning algorithm. The functions $\phi_t$ allow for the possibility of randomization over actions at each time. We focus on a Bayesian framework for the unknown parameter $\theta^*$. Let $\mu_1$ be the prior distribution for $\theta^*$, i.e., for any set $\Theta$, $\mathbb{P}(\theta^* \in \Theta) = \mu_1(\Theta)$. We make the following assumption on $\mu_1$.

Assumption 1.
The support of the prior distribution $\mu_1$ is a subset of $\Omega^*$. That is, the MDP is weakly communicating and $\mathrm{sp}(\theta^*) \le H$.

In this Bayesian framework, we define the expected regret (also called Bayesian regret or Bayes risk) of a learning algorithm $\phi$ up to time $T$ as
$$R(T, \phi) = \mathbb{E}\Big[\sum_{t=1}^{T} \big[c(s_t, a_t) - J(\theta^*)\big]\Big] \quad (3)$$
where $s_t, a_t$, $t = 1, \dots, T$, are generated by $\phi$ and $J(\theta^*)$ is the optimal per stage cost of the MDP. The expectation is with respect to the prior distribution $\mu_1$ for $\theta^*$, the randomness in state transitions, and the randomization in the algorithm. The expected regret is an important metric for quantifying the performance of a learning algorithm.

3 Thompson Sampling with Dynamic Episodes

In this section, we propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm. The input of TSDE is the prior distribution $\mu_1$. At each time $t$, given the history $h_t$, the agent can compute the posterior distribution $\mu_t$ given by $\mu_t(\Theta) = \mathbb{P}(\theta^* \in \Theta \,|\, h_t)$ for any set $\Theta$. Upon applying the action $a_t$ and observing the new state $s_{t+1}$, the posterior distribution at $t + 1$ is updated according to Bayes' rule as
$$\mu_{t+1}(d\theta) = \frac{\theta(s_{t+1}|s_t, a_t)\, \mu_t(d\theta)}{\int \theta'(s_{t+1}|s_t, a_t)\, \mu_t(d\theta')}. \quad (4)$$
²See [7] for a discussion of the connection of the span with other parameters, such as the diameter, appearing in the lower bound on regret.

Let $N_t(s, a)$ be the number of visits to the state-action pair $(s, a)$ before time $t$. That is,
$$N_t(s, a) = |\{\tau < t : (s_\tau, a_\tau) = (s, a)\}|. \quad (5)$$
With these notations, TSDE is described as follows.

Algorithm 1 Thompson Sampling with Dynamic Episodes (TSDE)
  Input: $\mu_1$
  Initialization: $t \leftarrow 1$, $t_k \leftarrow 0$
  for episodes $k = 1, 2, \dots$ do
    $T_{k-1} \leftarrow t - t_k$
    $t_k \leftarrow t$
    Generate $\theta_k \sim \mu_{t_k}$ and compute $\pi_k(\cdot) = \pi^*(\cdot, \theta_k)$ from (1)-(2)
    while $t \le t_k + T_{k-1}$ and $N_t(s, a) \le 2N_{t_k}(s, a)$ for all $(s, a) \in S \times A$ do
      Apply action $a_t = \pi_k(s_t)$
      Observe new state $s_{t+1}$
      Update $\mu_{t+1}$ according to (4)
      $t \leftarrow t + 1$
    end while
  end for

The TSDE algorithm operates in episodes.
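As a concrete illustration, Algorithm 1 can be sketched in code. The sketch below assumes a per-$(s, a)$ Dirichlet posterior over next-state probabilities (a conjugate choice matching the Dirichlet priors used in Section 5) and takes the planner as a `solve_mdp` argument; the function and variable names are ours, not the paper's:

```python
import numpy as np

def tsde(c, true_theta, T, solve_mdp, seed=0):
    """Sketch of Algorithm 1 (TSDE) with a Dirichlet(1,...,1) prior per (s, a).

    c: (S, A) costs; true_theta: (S, A, S) true transitions (simulated env);
    solve_mdp(c, theta) must return a stationary policy as an (S,) int array.
    """
    rng = np.random.default_rng(seed)
    S, A = c.shape
    alpha = np.ones((S, A, S))         # Dirichlet posterior parameters
    N = np.zeros((S, A), dtype=int)    # visit counts N_t(s, a)
    s, t, T_prev = 0, 1, 1             # s1 = state 0; T_0 = 1 by convention
    states, actions = [], []
    while t <= T:
        t_k, N_k = t, N.copy()
        # Sample theta_k ~ mu_{t_k} and plan for the sampled model.
        theta_k = np.array([[rng.dirichlet(alpha[i, a]) for a in range(A)]
                            for i in range(S)])
        pi_k = solve_mdp(c, theta_k)
        # Run the episode until either stopping criterion fires.
        while t <= T and t <= t_k + T_prev and np.all(N <= 2 * N_k):
            a = int(pi_k[s])
            s_next = rng.choice(S, p=true_theta[s, a])
            alpha[s, a, s_next] += 1   # conjugate Bayes update, cf. (4)
            N[s, a] += 1
            states.append(s); actions.append(a)
            s, t = s_next, t + 1
        T_prev = t - t_k               # this episode's length T_k
    return states, actions
```

Note that both stopping criteria appear verbatim in the inner `while` condition, and `T_prev` carries $T_{k-1}$ into the next episode.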
Let $t_k$ be the start time of the $k$th episode and $T_k = t_{k+1} - t_k$ the length of the episode, with the convention $T_0 = 1$. From the description of the algorithm, $t_1 = 1$ and, for $k \ge 1$,
$$t_{k+1} = \min\{t > t_k : t > t_k + T_{k-1} \text{ or } N_t(s, a) > 2N_{t_k}(s, a) \text{ for some } (s, a)\}. \quad (6)$$
At the beginning of episode $k$, a parameter $\theta_k$ is sampled from the posterior distribution $\mu_{t_k}$. During episode $k$, actions are generated from the optimal stationary policy $\pi_k$ for the sampled parameter $\theta_k$. One important feature of TSDE is that its episode lengths are not fixed. The length $T_k$ of each episode is dynamically determined according to two stopping criteria: (i) $t > t_k + T_{k-1}$, and (ii) $N_t(s, a) > 2N_{t_k}(s, a)$ for some state-action pair $(s, a)$. The first stopping criterion ensures that the episode length grows at most at a linear rate when the second criterion is not triggered. The second stopping criterion ensures that the number of visits to any state-action pair $(s, a)$ during an episode does not exceed the number of visits to that pair before the episode.

Remark 1. Note that TSDE only requires knowledge of $S$, $A$, $c$, and the prior distribution $\mu_1$. TSDE can operate without knowledge of the time horizon $T$, without the bound $H$ on the span used in [7], and without any knowledge about the actual $\theta^*$, such as the recurrent state needed in [15].

3.1 Main Result

Theorem 1. Under Assumption 1,
$$R(T, \mathrm{TSDE}) \le (H + 1)\sqrt{2SAT \log T} + 49HS\sqrt{AT \log(AT)}.$$
The proof of Theorem 1 appears in Section 4.

Remark 2. Note that our regret bound has the same order in $H$, $S$, $A$ and $T$ as that of the optimistic algorithm in [7], which is the best available bound for weakly communicating MDPs. Moreover, the bound does not depend on the prior distribution or on other problem-dependent parameters, such as the recurrence time of the optimal policy used in the regret bound of [15].

3.2 Approximation Error

At the beginning of each episode, TSDE computes the optimal stationary policy $\pi_k$ for the parameter $\theta_k$. This step requires solving a fixed finite MDP.
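One standard way to solve the sampled average-cost MDP is relative value iteration on the Bellman equation (1). A minimal sketch, assuming the sampled MDP is unichain and aperiodic so the iteration converges (names and the stopping tolerance are ours):

```python
import numpy as np

def relative_value_iteration(c, theta, tol=1e-8, max_iter=10_000):
    """Solve J + v(s) = min_a { c(s,a) + sum_s' theta(s'|s,a) v(s') }.

    c: (S, A) costs in [0, 1]; theta: (S, A, S) transitions.
    Returns (J, v, policy) with min_s v(s) = 0, matching the paper's
    normalization of the relative value function.
    """
    S, A = c.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        Q = c + theta @ v           # (S, A): one-step Bellman backup
        v_new = Q.min(axis=1)
        v_new -= v_new.min()        # renormalize so min_s v(s) = 0
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    Q = c + theta @ v
    policy = Q.argmin(axis=1)
    J = float((Q.min(axis=1) - v).mean())  # at convergence Tv - v = J * 1
    return J, v, policy
```

At a fixed point, $c(s, \pi(s)) + \sum_{s'} \theta(s'|s, \pi(s)) v(s') - v(s) - J \approx 0$ for every state, which is exactly equation (1).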
Policy iteration or value iteration can be used to solve the sampled MDP, but the resulting stationary policy may be only approximately optimal in practice. We call $\pi$ an $\epsilon$-approximate policy if
$$c(s, \pi(s)) + \sum_{s' \in S} \theta(s'|s, \pi(s))\, v(s', \theta) \le \min_{a \in A} \Big\{ c(s, a) + \sum_{s' \in S} \theta(s'|s, a)\, v(s', \theta) \Big\} + \epsilon.$$
When the algorithm returns an $\epsilon_k$-approximate policy $\tilde{\pi}_k$ instead of the optimal stationary policy $\pi_k$ at episode $k$, we have the following regret bound in the presence of such approximation error.

Theorem 2. If TSDE computes an $\epsilon_k$-approximate policy $\tilde{\pi}_k$ instead of the optimal stationary policy $\pi_k$ at each episode $k$, the expected regret of TSDE satisfies
$$R(T, \mathrm{TSDE}) \le \tilde{O}(HS\sqrt{AT}) + \mathbb{E}\Big[\sum_{k: t_k \le T} T_k \epsilon_k\Big].$$
Furthermore, if $\epsilon_k \le \frac{1}{k+1}$, then $\mathbb{E}\big[\sum_{k: t_k \le T} T_k \epsilon_k\big] \le \sqrt{2SAT \log T}$.

Theorem 2 shows that the approximation error in the computation of the optimal stationary policy is only additive to the regret under TSDE. The regret bound remains $\tilde{O}(HS\sqrt{AT})$ if the approximation error satisfies $\epsilon_k \le \frac{1}{k+1}$. The proof of Theorem 2 is in the appendix due to lack of space.

4 Analysis

4.1 Number of Episodes

To analyze the performance of TSDE over $T$ time steps, define $K_T = \max\{k : t_k \le T\}$ to be the number of episodes of TSDE until time $T$. Note that $K_T$ is a random variable because the number of visits $N_t(s, a)$ depends on the random state trajectory. In the analysis we use the convention $t_{K_T + 1} = T + 1$. We provide an upper bound on $K_T$ as follows.

Lemma 1. $K_T \le \sqrt{2SAT \log T}$.

Proof. Define macro episodes with start times $t_{n_i}$, $i = 1, 2, \dots$, where $t_{n_1} = t_1$ and
$$t_{n_{i+1}} = \min\{t_k > t_{n_i} : N_{t_k}(s, a) > 2N_{t_{k-1}}(s, a) \text{ for some } (s, a)\}.$$
The idea is that each macro episode starts when the second stopping criterion is triggered. Let $M$ be the number of macro episodes until time $T$ and define $n_{M+1} = K_T + 1$. Let $\tilde{T}_i = \sum_{k=n_i}^{n_{i+1}-1} T_k$ be the length of the $i$th macro episode.
By the definition of macro episodes, every episode except the last one in a macro episode must be ended by the first stopping criterion. Therefore, within the $i$th macro episode, $T_k = T_{k-1} + 1$ for all $k = n_i, n_i + 1, \dots, n_{i+1} - 2$. Hence,
$$\tilde{T}_i = \sum_{k=n_i}^{n_{i+1}-1} T_k = \sum_{j=1}^{n_{i+1}-n_i-1} (T_{n_i - 1} + j) + T_{n_{i+1}-1} \ge \sum_{j=1}^{n_{i+1}-n_i-1} (j + 1) + 1 = 0.5\,(n_{i+1} - n_i)(n_{i+1} - n_i + 1).$$
Consequently, $n_{i+1} - n_i \le \sqrt{2\tilde{T}_i}$ for all $i = 1, \dots, M$. From this property we obtain
$$K_T = n_{M+1} - 1 = \sum_{i=1}^{M} (n_{i+1} - n_i) \le \sum_{i=1}^{M} \sqrt{2\tilde{T}_i}. \quad (7)$$
Using (7) and the fact that $\sum_{i=1}^{M} \tilde{T}_i = T$, we get
$$K_T \le \sum_{i=1}^{M} \sqrt{2\tilde{T}_i} \le \sqrt{M \sum_{i=1}^{M} 2\tilde{T}_i} = \sqrt{2MT} \quad (8)$$
where the second inequality is Cauchy-Schwarz. From Lemma 6 in the appendix, the number of macro episodes satisfies $M \le SA \log T$. Substituting this bound into (8) yields the result of the lemma.

Remark 3. TSDE computes the optimal stationary policy of a finite MDP at each episode. Lemma 1 ensures that such computations only need to be performed at a sublinear rate of $\sqrt{2SAT \log T}$.

4.2 Regret Bound

As discussed in [13, 20, 21], one key property of Thompson/Posterior Sampling algorithms is that $\mathbb{E}[f(\theta_t)] = \mathbb{E}[f(\theta^*)]$ for any function $f$, if $\theta_t$ is sampled from the posterior distribution at time $t$. This property yields regret bounds for algorithms with fixed sampling episodes, since the start time $t_k$ of each episode is then deterministic. However, our TSDE algorithm has dynamic episodes, which requires a stopping-time version of this property.

Lemma 2. Under TSDE, $t_k$ is a stopping time for any episode $k$. Then for any measurable function $f$ and any $\sigma(h_{t_k})$-measurable random variable $X$, we have
$$\mathbb{E}\big[f(\theta_k, X)\big] = \mathbb{E}\big[f(\theta^*, X)\big].$$
Proof. From the definition (6), the start time $t_k$ is a stopping time, i.e., $t_k$ is $\sigma(h_{t_k})$-measurable. Note that $\theta_k$ is randomly sampled from the posterior distribution $\mu_{t_k}$. Since $t_k$ is a stopping time, $t_k$ and $\mu_{t_k}$ are both measurable with respect to $\sigma(h_{t_k})$. By assumption, $X$ is also measurable with respect to $\sigma(h_{t_k})$.
Then, conditioned on $h_{t_k}$, the only randomness in $f(\theta_k, X)$ is the random sampling in the algorithm. This gives
$$\mathbb{E}\big[f(\theta_k, X) \,|\, h_{t_k}\big] = \mathbb{E}\big[f(\theta_k, X) \,|\, h_{t_k}, t_k, \mu_{t_k}\big] = \int f(\theta, X)\, \mu_{t_k}(d\theta) = \mathbb{E}\big[f(\theta^*, X) \,|\, h_{t_k}\big] \quad (9)$$
since $\mu_{t_k}$ is the posterior distribution of $\theta^*$ given $h_{t_k}$. The result follows by taking the expectation of both sides.

For $t_k \le t < t_{k+1}$ in episode $k$, the Bellman equation (1) holds by Assumption 1 for $s = s_t$, $\theta = \theta_k$ and action $a_t = \pi_k(s_t)$. We then obtain
$$c(s_t, a_t) = J(\theta_k) + v(s_t, \theta_k) - \sum_{s' \in S} \theta_k(s'|s_t, a_t)\, v(s', \theta_k). \quad (10)$$
Using (10), the expected regret of TSDE is equal to
$$\mathbb{E}\Big[\sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}-1} c(s_t, a_t)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big] = R_0 + R_1 + R_2, \quad (11)$$
where $R_0$, $R_1$ and $R_2$ are given by
$$R_0 = \mathbb{E}\Big[\sum_{k=1}^{K_T} T_k J(\theta_k)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big],$$
$$R_1 = \mathbb{E}\Big[\sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}-1} \big[v(s_t, \theta_k) - v(s_{t+1}, \theta_k)\big]\Big],$$
$$R_2 = \mathbb{E}\Big[\sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}-1} \Big[v(s_{t+1}, \theta_k) - \sum_{s' \in S} \theta_k(s'|s_t, a_t)\, v(s', \theta_k)\Big]\Big].$$
We proceed to derive bounds on $R_0$, $R_1$ and $R_2$. Based on the key property of Lemma 2, we first derive an upper bound on $R_0$.

Lemma 3. The first term $R_0$ is bounded as $R_0 \le \mathbb{E}[K_T]$.

Proof. By the monotone convergence theorem,
$$R_0 = \mathbb{E}\Big[\sum_{k=1}^{\infty} \mathbb{1}_{\{t_k \le T\}} T_k J(\theta_k)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big] = \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} T_k J(\theta_k)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big].$$
Note that the first stopping criterion of TSDE ensures that $T_k \le T_{k-1} + 1$ for all $k$. Because $J(\theta_k) \ge 0$, each term in the first summation satisfies
$$\mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} T_k J(\theta_k)\Big] \le \mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} (T_{k-1} + 1) J(\theta_k)\Big].$$
Since $\mathbb{1}_{\{t_k \le T\}}(T_{k-1} + 1)$ is measurable with respect to $\sigma(h_{t_k})$, Lemma 2 gives
$$\mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} (T_{k-1} + 1) J(\theta_k)\Big] = \mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} (T_{k-1} + 1) J(\theta^*)\Big].$$
Combining the above equations we get
$$R_0 \le \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbb{1}_{\{t_k \le T\}} (T_{k-1} + 1) J(\theta^*)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big] = \mathbb{E}\Big[\sum_{k=1}^{K_T} (T_{k-1} + 1) J(\theta^*)\Big] - T\, \mathbb{E}\big[J(\theta^*)\big]$$
$$= \mathbb{E}\big[K_T J(\theta^*)\big] + \mathbb{E}\Big[\Big(\sum_{k=1}^{K_T} T_{k-1} - T\Big) J(\theta^*)\Big] \le \mathbb{E}\big[K_T\big],$$
where the last inequality holds because $J(\theta^*) \le 1$ and $\sum_{k=1}^{K_T} T_{k-1} = T_0 + \sum_{k=1}^{K_T - 1} T_k \le T$.
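The posterior-sampling identity $\mathbb{E}[f(\theta_k)] = \mathbb{E}[f(\theta^*)]$ behind Lemma 2 can be checked numerically in a toy conjugate model. The Beta-Bernoulli setup below is our own illustration, not the paper's MDP setting:

```python
import numpy as np

def posterior_sampling_identity(runs=200_000, n=5, seed=0):
    """Monte Carlo check of E[f(theta_k)] = E[f(theta*)] with f(x) = x**2.

    theta* ~ Beta(1, 1); observe n Bernoulli(theta*) flips; then sample
    theta_k from the Beta posterior. Returns both Monte Carlo averages,
    which should agree since theta_k and theta* share the same joint law
    with the data. (Toy illustration only.)
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.beta(1.0, 1.0, size=runs)       # draw from the prior
    heads = rng.binomial(n, theta_star)              # data generated by theta*
    theta_k = rng.beta(1.0 + heads, 1.0 + n - heads)  # posterior sample
    return float((theta_k ** 2).mean()), float((theta_star ** 2).mean())
```

The two averages converge to the same value ($\mathbb{E}[\theta^{*2}] = 1/3$ for a uniform prior), illustrating why the posterior sample is statistically interchangeable with the true parameter at a stopping time.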
Note that the first stopping criterion of TSDE plays a crucial role in the proof of Lemma 3. It allows us to bound the length of an episode by the length of the previous episode, which is measurable with respect to the information available at the beginning of the episode. The other two terms $R_1$ and $R_2$ of the regret are bounded in the following lemmas. Their proofs follow steps similar to those in [13, 16] and are given in the appendix due to lack of space.

Lemma 4. The second term $R_1$ is bounded as $R_1 \le \mathbb{E}[H K_T]$.

Lemma 5. The third term $R_2$ is bounded as $R_2 \le 49HS\sqrt{AT \log(AT)}$.

We are now ready to prove Theorem 1.

Proof of Theorem 1. From (11),
$$R(T, \mathrm{TSDE}) = R_0 + R_1 + R_2 \le \mathbb{E}[K_T] + \mathbb{E}[H K_T] + R_2,$$
where the inequality comes from Lemmas 3 and 4. The claim of the theorem then follows directly from Lemmas 1 and 5.

5 Simulations

In this section, we compare through simulations the performance of TSDE with three learning algorithms with the same regret order: UCRL2 [8], TSMDP [15], and Lazy PSRL [16]. UCRL2 is an optimistic algorithm with similar regret bounds. TSMDP and Lazy PSRL are TS algorithms for infinite horizon MDPs. TSMDP has the same regret order in $T$ given a recurrent state for resampling. The original regret analysis for Lazy PSRL is incorrect, but the regret bounds are conjectured to be correct [20]. We chose $\delta = 0.05$ for the implementation of UCRL2 and assume an independent Dirichlet prior with parameters $[0.1, \dots, 0.1]$ over the transition probabilities for all TS algorithms. We consider two environments: randomly generated MDPs and the RiverSwim example [22]. For the randomly generated MDPs, we use the independent Dirichlet prior over 6 states and 2 actions, with a fixed cost. We select the resampling state $s_0 = 1$ for TSMDP here, since all states are recurrent under the Dirichlet prior. The RiverSwim example models an agent swimming in a river who can choose to swim either left or right.
The MDP consists of six states arranged in a chain, with the agent starting in the leftmost state ($s = 1$). If the agent decides to move left, i.e., with the river current, he always succeeds, but if he decides to move right he might fail with some probability. The cost function is given by: $c(s, a) = 0.8$ if $s = 1$, $a = \mathrm{left}$; $c(s, a) = 0$ if $s = 6$, $a = \mathrm{right}$; and $c(s, a) = 1$ otherwise. The optimal policy is to swim right to reach the rightmost state, which minimizes the cost. For TSMDP in RiverSwim, we consider two versions, with resampling state $s_0 = 1$ and with $s_0 = 3$. We simulate 500 Monte Carlo runs for both examples and run for $T = 10^5$.

[Figure 1: Simulation results. (a) Expected regret vs. time for random MDPs (UCRL2, TSMDP, Lazy PSRL, TSDE). (b) Expected regret vs. time for RiverSwim (UCRL2, TSMDP with $s_0 = 3$, TSMDP with $s_0 = 1$, Lazy PSRL, TSDE).]

From Figure 1(a) we can see that TSDE outperforms all three other algorithms on randomly generated MDPs. In particular, there is a significant gap between the regret of TSDE and that of UCRL2 and TSMDP. The poor performance of UCRL2 reinforces the motivation for considering TS algorithms. From the specification of TSMDP, its performance hinges heavily on the choice of an appropriate resampling state, a choice that is not possible for a general unknown MDP; this is reflected in the randomly generated MDPs experiment. In the RiverSwim example, Figure 1(b) shows that TSDE significantly outperforms UCRL2, Lazy PSRL, and TSMDP with $s_0 = 3$. Although TSMDP with $s_0 = 1$ performs slightly better than TSDE, there is no way to pick this specific $s_0$ when the MDP is unknown in practice. Since Lazy PSRL is also equipped with the doubling-trick criterion, the performance gap between TSDE and Lazy PSRL highlights the importance of the first stopping criterion on the growth rate of episode length.
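For concreteness, the RiverSwim instance described above can be encoded as cost and transition arrays. The paper specifies the costs exactly but only says the rightward swim "might fail with some probability"; the transition probabilities below are placeholders of ours, not the paper's parameters:

```python
import numpy as np

def river_swim(n=6, p_right=0.3, p_stay=0.6):
    """RiverSwim chain with the paper's cost function.

    States 0..n-1 (paper's s = 1..6), actions 0 = left, 1 = right.
    p_right/p_stay for the 'right' action are placeholder values; the
    excerpt does not give the failure probabilities.
    """
    LEFT, RIGHT = 0, 1
    p_back = 1.0 - p_right - p_stay      # drift back with the current
    theta = np.zeros((n, 2, n))
    c = np.ones((n, 2))                  # c(s, a) = 1 otherwise
    c[0, LEFT] = 0.8                     # s = 1, a = left in 1-based indexing
    c[n - 1, RIGHT] = 0.0                # s = 6, a = right
    for s in range(n):
        theta[s, LEFT, max(s - 1, 0)] = 1.0     # moving left always succeeds
        if s == n - 1:                           # rightmost state: bounce back
            theta[s, RIGHT, s] = p_right + p_stay
            theta[s, RIGHT, s - 1] = p_back
        elif s == 0:                             # leftmost: cannot drift left
            theta[s, RIGHT, s + 1] = p_right
            theta[s, RIGHT, s] = p_stay + p_back
        else:                                    # interior states
            theta[s, RIGHT, s + 1] = p_right
            theta[s, RIGHT, s] = p_stay
            theta[s, RIGHT, s - 1] = p_back
    return c, theta
```

Every transition row sums to one, and the cost structure makes the rightmost state the only rewarding destination, so the optimal policy swims right as the paper states.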
We would also like to point out that in this example the MDP is fixed and is not generated from the Dirichlet prior. We therefore conjecture that TSDE enjoys the same regret bounds under a non-Bayesian setting.

6 Conclusion

We propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm and establish $\tilde{O}(HS\sqrt{AT})$ bounds on expected regret for the general subclass of weakly communicating MDPs. Our result fills a gap in the theoretical analysis of Thompson Sampling for MDPs. Numerical results validate that the TSDE algorithm outperforms other learning algorithms for infinite horizon MDPs. The TSDE algorithm determines the end of an episode via two stopping criteria. The second criterion comes from the doubling trick used in many reinforcement learning algorithms, but the first criterion, on the linear growth rate of the episode length, appears to be a new idea for episodic learning algorithms. This stopping criterion is crucial in the proof of the regret bound (Lemma 3). The simulation results comparing TSDE and Lazy PSRL further show that this criterion is not merely a technical device for the proofs: it indeed helps balance exploitation and exploration.

Acknowledgments

Yi Ouyang would like to thank Yang Liu from Harvard University for helpful discussions. Rahul Jain and Ashutosh Nayyar were supported by NSF Grants 1611574 and 1446901.

References

[1] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 2. Athena Scientific, Belmont, MA, 2012.
[2] P. R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. SIAM, 2015.
[3] T. L. Lai and H. Robbins, “Asymptotically efficient adaptive allocation rules,” Advances in Applied Mathematics, vol. 6, no. 1, pp. 4-22, 1985.
[4] A. N. Burnetas and M. N. Katehakis, “Optimal adaptive policies for Markov decision processes,” Mathematics of Operations Research, vol. 22, no. 1, pp. 222-255, 1997.
[5] M. Kearns and S.
Singh, “Near-optimal reinforcement learning in polynomial time,” Machine Learning, vol. 49, no. 2-3, pp. 209-232, 2002.
[6] R. I. Brafman and M. Tennenholtz, “R-max: A general polynomial time algorithm for near-optimal reinforcement learning,” Journal of Machine Learning Research, vol. 3, pp. 213-231, 2002.
[7] P. L. Bartlett and A. Tewari, “REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs,” in UAI, 2009.
[8] T. Jaksch, R. Ortner, and P. Auer, “Near-optimal regret bounds for reinforcement learning,” Journal of Machine Learning Research, vol. 11, pp. 1563-1600, 2010.
[9] S. Filippi, O. Cappé, and A. Garivier, “Optimism in reinforcement learning and Kullback-Leibler divergence,” in Allerton, pp. 115-122, 2010.
[10] C. Dann and E. Brunskill, “Sample complexity of episodic fixed-horizon reinforcement learning,” in NIPS, 2015.
[11] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, vol. 25, no. 3/4, pp. 285-294, 1933.
[12] M. Strens, “A Bayesian framework for reinforcement learning,” in ICML, 2000.
[13] I. Osband, D. Russo, and B. Van Roy, “(More) efficient reinforcement learning via posterior sampling,” in NIPS, 2013.
[14] R. Fonteneau, N. Korda, and R. Munos, “An optimistic posterior sampling strategy for Bayesian reinforcement learning,” in BayesOpt, 2013.
[15] A. Gopalan and S. Mannor, “Thompson sampling for learning parameterized Markov decision processes,” in COLT, 2015.
[16] Y. Abbasi-Yadkori and C. Szepesvári, “Bayesian optimal control of smoothly parameterized systems,” in UAI, 2015.
[17] I. Osband and B. Van Roy, “Why is posterior sampling better than optimism for reinforcement learning?,” EWRL, 2016.
[18] S. L. Scott, “A modern Bayesian look at the multi-armed bandit,” Applied Stochastic Models in Business and Industry, vol. 26, no. 6, pp. 639-658, 2010.
[19] O. Chapelle and L.
Li, “An empirical evaluation of Thompson sampling,” in NIPS, 2011.
[20] I. Osband and B. Van Roy, “Posterior sampling for reinforcement learning without episodes,” arXiv preprint arXiv:1608.02731, 2016.
[21] D. Russo and B. Van Roy, “Learning to optimize via posterior sampling,” Mathematics of Operations Research, vol. 39, no. 4, pp. 1221-1243, 2014.
[22] A. L. Strehl and M. L. Littman, “An analysis of model-based interval estimation for Markov decision processes,” Journal of Computer and System Sciences, vol. 74, no. 8, pp. 1309-1331, 2008.